Mozilla Nederland: The Dutch Mozilla community

Mozilla Addons Blog: A new world of open extensions on Firefox for Android has arrived

Mozilla planet - Thu, 14/12/2023 - 19:20

Woo-hoo you did it! Hundreds of add-on developers heeded the call to make their desktop extensions compatible for today’s debut of a new open ecosystem of Firefox for Android extensions. More than 450 Firefox for Android extensions are now discoverable on the addons.mozilla.org (AMO) Android homepage. It’s a strong start to an exciting new frontier of mobile browser customization. Let’s see where this goes.

Are you a developer who hasn’t migrated your desktop extension to Firefox for Android yet? Here’s a good starting point for developing extensions for Firefox for Android.

If you’ve already embarked on the mobile extension journey and have questions/insights/feedback to offer as we continue to optimize the mobile development experience, we invite you to join the discussion about top APIs missing on Firefox for Android.

Have you found any Firefox for Android bugs? Do tell!

The post A new world of open extensions on Firefox for Android has arrived appeared first on Mozilla Add-ons Community Blog.

Categories: Mozilla-nl planet

Mozilla Performance Blog: New Sheriffing feature and significant updates to KPI reporting queries

Mozilla planet - Wed, 13/12/2023 - 11:01

A year ago I shared how a Mozilla Performance Sheriff catches performance regressions, the entire workflow they go through, and the incoming improvements. Since I joined the Performance Tools Team (formerly Performance Test) almost five years ago, a whole lot of improvements have been made and features have been added.

In this article, I want to focus on a special set of features that give the Performance Sheriffs more control over the Sheriffing Workflow (from when an alert is triggered and triaged to when the regression bug is filed and linked to the alert). We call them time-to-triage (from alert to triage) and time-to-bug (from alert to bug). They are actually the object of our Sheriffing Team’s KPIs, the KPIs that measure the performance of the Performance Sheriffs team (I like puns).

The time-to-triage KPI measures the time from when an alert is triggered by a performance change to when it is triaged (basically first-time analysis). It is at most 3 days, and at least 80% of the sheriffed alerts have to meet this deadline (i.e., 20% are allowed to miss it). However, our team does not work weekends, so weekends have to be excluded. For example, if an alert was created on a Friday, the three-day triage window would end on Monday instead of Wednesday, when the three business days actually expire. This would leave us basically only a single day to triage it. So every time something like this happened, we had to manually exclude those alerts from the old KPI report queries, which do not exclude weekends from those times. The new queries do this exclusion automatically.
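The business-day arithmetic behind the new queries can be sketched in a few lines of Python. This is an illustration of the calculation only, not the actual query code; the function name and exact semantics are my own:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Return the deadline `days` business days after `start`,
    skipping Saturdays and Sundays."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return current

# An alert triggered on Friday 2023-12-08 with a 3-business-day triage
# window is due Wednesday 2023-12-13, not Monday 2023-12-11.
print(add_business_days(date(2023, 12, 8), 3))  # → 2023-12-13
```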

 

Triage Response Times (time-to-triage), Year To Date

Triage Response Times (New Query), Year To Date

Alerts Exceeding Triage Target, Year To Date

The same is true for an alert created on a weekend, where part of the alert-to-triage time falls on the weekend. In fact, the only alerts whose triage window cannot include a weekend are the ones created on Monday or Tuesday.

The time-to-bug KPI measures the time from when an alert is triggered by a performance change to when a bug is linked to the alert. It is at most 5 days, and at least 80% of the valid regression alerts must meet this deadline (i.e., 20% are allowed to miss it). The only alerts whose window cannot include a weekend within this KPI are the ones created in the first hour of Monday morning, whose deadline expires in the last hour of Friday.

Regression Bug Response Times, Year To Date

Regression Bug Response Times (New Query), Year To Date

Regressions Exceeding Bug Target, Year To Date

In the images above, you can see a difference in the percentages of time-to-triage (86.9% vs. 97.9%, old query vs. new query) and time-to-bug (75.7% vs. 97%, old query vs. new query). This is not because the Sheriffing Team is suddenly doing a better job; they were doing this the whole time. It is because the feature we developed measures the percentages accurately by excluding the weekends from the calculated times. Going strictly by the percentages, the impact of this feature is significant, taking us from an average, maybe struggling, performance to a really good one. Of course, the inclusion of weekends in the KPI report was known for a while, but having the bigger picture and concrete metrics is more revealing.

The development of these time-to-triage/time-to-bug features is full-stack and involved:

  • Helping our manager’s Sheriffing report calculate the times more accurately (to whom I am grateful for supporting this initiative);
  • Modifying the performance_alert_summary database table to store due dates;
  • Implementing the accurate calculation in the backend as described above;
  • Showing in the UI a countdown until the alert goes overdue, which gives the Performance Sheriffs more control and the ability to better organize themselves throughout the Sheriffing Workflow.

I haven’t mentioned the countdown feature yet. It is shown in the image below, right next to the status dropdown of the alert summary (top-right corner). It displays:

  • The type of due date that is in effect (Triage in this case);
  • The amount of time. When the time goes under 24 hours, the timer will switch to showing the hours left.

The alert will become triaged and the counter will switch from triage to bug when the first-time analysis is performed on it (star, assign, add tag, add note).

Alert with Triage due date status

 

Below is an example of a time-to-bug timer (the time left before linking the alert to a bug goes overdue). By default the timer counter is green, but when it goes under 24 hours, it turns orange.

Alert with Bug due date status

When the timer goes overdue, as in the image below, the counter icon becomes red and the “Overdue” status is shown.

Alert with Overdue status (this is for demo purposes only; the alert wasn’t actually overdue)

Lastly, after the alert is finally linked to a bug, the counter will turn into a green checkmark and the countdown status will be “Ready for acknowledge”.

Alert with Ready for acknowledge status

Now, instead of manually excluding the times inflated by the weekends, we have an automated feature to closely control the alert lifecycle and report the KPI percentages more accurately.

The development of this feature was a personal initiative, encouraged by our manager and by the whole team (without their support I couldn’t have done this). This is part of a wider initiative I support: improving the Performance Sheriffing Workflow. It improves the developer experience of working with performance regressions and helps the Performance Sheriffs be more efficient by improving their tools and automating their workflow as much as possible.

Categories: Mozilla-nl planet

Tiger Oakes: Takeaways from React Day Berlin & TestJS Summit 2023

Mozilla planet - Wed, 13/12/2023 - 01:00
What I learned from a conference double feature.
Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Puppeteer Support for the Cross-Browser WebDriver BiDi Standard

Mozilla planet - Tue, 12/12/2023 - 17:14

We are pleased to share that Puppeteer now supports the next-generation, cross-browser WebDriver BiDi standard. This new protocol makes it easy for web developers to write automated tests that work across multiple browser engines.

How Do I Use Puppeteer With Firefox?

The WebDriver BiDi protocol is supported starting with Puppeteer v21.6.0. When calling puppeteer.launch, pass in "firefox" as the product option and "webDriverBiDi" as the protocol option:

    const browser = await puppeteer.launch({
      product: 'firefox',
      protocol: 'webDriverBiDi',
    })

You can also use the "webDriverBiDi" protocol when testing in Chrome, reflecting the fact that WebDriver BiDi offers a single standard for modern cross-browser automation.

In the future we expect "webDriverBiDi" to become the default protocol when using Firefox in Puppeteer.

Doesn’t Puppeteer Already Support Firefox?

Puppeteer has had experimental support for Firefox based on a partial re-implementation of the proprietary Chrome DevTools Protocol (CDP). This approach had the advantage that it worked without significant changes to the existing Puppeteer code. However, the CDP implementation in Firefox is incomplete and has significant technical limitations. In addition, the CDP protocol itself is not designed to be cross-browser and undergoes frequent breaking changes, making it unsuitable as a long-term solution for cross-browser automation.

To overcome these problems, we’ve worked with the WebDriver Working Group at the W3C to create a standard automation protocol that meets the needs of modern browser automation clients: this is WebDriver BiDi. For more details on the protocol design and how it compares to the classic HTTP-based WebDriver protocol, see our earlier posts.

As the standardization process has progressed, the Puppeteer team has added a WebDriver BiDi backend in Puppeteer, and provided feedback on the specification to ensure that it meets the needs of Puppeteer users, and that the protocol design enables existing CDP-based tooling to easily transition to WebDriver BiDi. The result is a single protocol based on open standards that can drive both Chrome and Firefox in Puppeteer.

Are All Puppeteer Features Supported?

Not yet; WebDriver BiDi is still a work in progress, and doesn’t yet cover the full feature set of Puppeteer.

Compared to the Chrome+CDP implementation, there are some feature gaps, including support for accessing the cookie store, network request interception, some emulation features, and permissions. These features are actively being standardized and will be integrated as soon as they become available. For Firefox, the only missing feature compared to the Firefox+CDP implementation is cookie access. In addition, WebDriver BiDi already offers improvements, including better support for multi-process Firefox, which is essential for testing some websites. More information on the complete set of supported APIs can be found in the Puppeteer documentation, and as new WebDriver-BiDi features are enabled in Gecko we’ll publish details on the Firefox Developer Experience blog.

Nevertheless, we believe that the WebDriver-based Firefox support in Puppeteer has reached a level of quality which makes it suitable for many real automation scenarios. For example at Mozilla we have successfully ported our Puppeteer tests for pdf.js from Firefox+CDP to Firefox+WebDriver BiDi.

Is Firefox’s CDP Support Going Away?

We currently don’t have a specific timeline for removing CDP support. However, maintaining multiple protocols is not a good use of our resources, and we expect WebDriver BiDi to be the future of remote automation in Firefox. If you are using the CDP support outside of the context of Puppeteer, we’d love to hear from you (see below), so that we can understand your use cases, and help transition to WebDriver BiDi.

Where Can I Provide Feedback?

For any issues you experience when porting Puppeteer tests to BiDi, please open issues in the Puppeteer issue tracker, unless you can verify the bug is in the Firefox implementation, in which case please file a bug on Bugzilla.

If you are currently using CDP with Firefox, please join the #webdriver matrix channel so that we can discuss your use case and requirements, and help you solve any problems you encounter porting your code to WebDriver BiDi.

Update: The Puppeteer team have published “Harness the Power of WebDriver BiDi: Chrome and Firefox Automation with Puppeteer“.

The post Puppeteer Support for the Cross-Browser WebDriver BiDi Standard appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

The Rust Programming Language Blog: Cargo cache cleaning

Mozilla planet - Mon, 11/12/2023 - 01:00

Cargo has recently gained an unstable feature on the nightly channel (starting with nightly-2023-11-17) to perform automatic cleaning of cache content within Cargo's home directory. This post describes what the feature is, what to watch out for, and how to provide feedback.

In short, we are asking people who use the nightly channel to enable this feature and report any issues you encounter on the Cargo issue tracker. To enable it, place the following in your Cargo config file (typically located in ~/.cargo/config.toml or %USERPROFILE%\.cargo\config.toml for Windows):

    [unstable]
    gc = true

Or set the CARGO_UNSTABLE_GC=true environment variable or use the -Zgc CLI flag to turn it on for individual commands.

We'd particularly like people who use unusual filesystems or environments to give it a try, since there are some parts of the implementation which are sensitive and need battle testing before we turn it on for everyone.

What is this feature?

Cargo keeps a variety of cached data within the Cargo home directory. This cache can grow unbounded and can get quite large (easily reaching many gigabytes). Community members have developed tools to manage this cache, such as cargo-cache, but cargo itself never exposed any ability to manage it.

This cache includes:

  • Registry index data, such as package dependency metadata from crates.io.
  • Compressed .crate files downloaded from a registry.
  • The uncompressed contents of those .crate files, which rustc uses to read the source and compile dependencies.
  • Clones of git repositories used by git dependencies.

The new garbage collection ("GC") feature adds tracking of this cache data so that cargo can automatically or manually remove unused files. It keeps an SQLite database which tracks the last time the various cache elements have been used. Every time you run a cargo command that reads or writes any of this cache data, it will update the database with a timestamp of when that data was last used.
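The idea of a last-use tracker can be illustrated with a small Python sqlite3 sketch. This is purely conceptual: cargo's real schema, table names, and timestamp handling differ, and the names below are invented for illustration:

```python
import sqlite3

def open_tracker(path: str) -> sqlite3.Connection:
    """Open (or create) a tiny last-use tracking database."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS last_use (name TEXT PRIMARY KEY, timestamp INTEGER)"
    )
    return conn

def touch(conn, name, now):
    """Record that a cache entry was used at time `now` (in seconds)."""
    conn.execute(
        "INSERT OR REPLACE INTO last_use (name, timestamp) VALUES (?, ?)",
        (name, now),
    )

def stale(conn, max_age_secs, now):
    """Return entries that have not been used within `max_age_secs` of `now`."""
    cutoff = now - max_age_secs
    rows = conn.execute(
        "SELECT name FROM last_use WHERE timestamp < ? ORDER BY name", (cutoff,)
    ).fetchall()
    return [name for (name,) in rows]

conn = open_tracker(":memory:")
touch(conn, "registry/src/serde-1.0.0", now=0)
touch(conn, "registry/cache/serde-1.0.0.crate", now=100)
month = 30 * 24 * 3600
# Only the entry last used at t=0 has aged past one month.
print(stale(conn, month, now=month + 50))  # → ['registry/src/serde-1.0.0']
```

Every command that reads or writes cache data would call something like `touch`, and the periodic cleaner would delete whatever `stale` returns.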

What isn't yet included is cleaning of target directories; see Plan for the future below.

Automatic cleaning

When you run cargo, once a day it will inspect the last-use cache tracker, and determine if any cache elements have not been used in a while. If they have not, then they will be automatically deleted. This happens with most commands that would normally perform significant work, like cargo build or cargo fetch.

The default is to delete data that can be locally recreated if it hasn't been used for 1 month, and to delete data that has to be re-downloaded after 3 months.

Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time.

The initial implementation has exposed a variety of configuration knobs to control how automatic cleaning works. However, it is unlikely we will expose too many low-level details when it is stabilized, so this may change in the future (see issue #13061). See the Automatic garbage collection section for more details on this configuration.

Manual cleaning

If you want to manually delete data from the cache, several options have been added under the cargo clean gc subcommand. This subcommand can be used to perform the normal automatic daily cleaning, or to specify different options on which data to remove. There are several options for specifying the age of data to delete (such as --max-download-age=3days) or specifying the maximum size of the cache (such as --max-download-size=1GiB). See the Manual garbage collection section or run cargo clean gc --help for more details on which options are supported.
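A size-cap option like --max-download-size can be thought of as least-recently-used eviction: delete the oldest entries until what remains fits the budget. A rough sketch of that policy (my own illustration, not cargo's implementation; the option names come from the post, everything else is made up):

```python
def plan_deletions(entries, max_total_size):
    """entries: list of (name, size_bytes, last_use_ts).
    Return the names to delete so the remaining entries fit within
    max_total_size, evicting the least recently used first."""
    total = sum(size for _, size, _ in entries)
    to_delete = []
    for name, size, _ in sorted(entries, key=lambda e: e[2]):  # oldest first
        if total <= max_total_size:
            break
        to_delete.append(name)
        total -= size
    return to_delete

entries = [
    ("a.crate", 600, 10),  # oldest
    ("b.crate", 300, 20),
    ("c.crate", 200, 30),  # newest
]
print(plan_deletions(entries, 500))  # → ['a.crate']
```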

This CLI design is only preliminary, and we are looking at determining what the final design will look like when it is stabilized, see issue #13060.

What to watch out for

After enabling the gc feature, just go about your normal business of using cargo. You should be able to observe the SQLite database stored in your cargo home directory at ~/.cargo/.global-cache.

After the first time you use cargo, it will populate the database tracking all the data that already exists in your cargo home directory. Then, after 1 month, cargo should start deleting old data, and after 3 months will delete even more data.

The end result is that after that period of time you should start to notice the home directory using less space overall.

You can also try out the cargo clean gc command and explore some of its options if you want to try to manually delete some data.

If you run into problems, you can disable the gc feature and cargo should return to its previous behavior. Please let us know on the issue tracker if this happens.

Request for feedback

We'd like to hear from you about your experience using this feature. Some of the things we are interested in are:

  • Have you run into any bugs, errors, issues, or confusing problems? Please file an issue over at https://github.com/rust-lang/cargo/issues/.
  • The first time that you use cargo with GC enabled, is there an unreasonably long delay? Cargo may need to scan your existing cache data once to detect what already exists from previous versions.
  • Do you notice unreasonable delays when it performs automatic cleaning once a day?
  • Do you have use cases where you need to do cleaning based on the size of the cache? If so, please share them at #13062.
  • If you think you would make use of manually deleting cache data, what are your use cases for doing that? Sharing them on #13060 about the CLI interface might help guide us on the overall design.
  • Does the default of deleting 3 month old data seem like a good balance for your use cases?

Or if you would prefer to share your experiences on Zulip, head over to the #t-cargo stream.

Design considerations and implementation details

(These sections are only for the intently curious among you.)

The implementation of this feature had to consider several constraints to try to ensure that it works in nearly all environments, and doesn't introduce a negative experience for users.

Performance

One big focus was to make sure that the performance of each invocation of cargo is not significantly impacted. Cargo needs to potentially save a large chunk of data every time it runs. The performance impact will heavily depend on the number of dependencies and your filesystem. Preliminary testing shows the impact can be anywhere from 0 to about 50ms.

In order to minimize the performance impact of actually deleting files, the automatic GC runs only once a day. This is intended to balance keeping the cache clean without impacting the performance of daily use.

Locking

Another big focus is dealing with cache locking. Previously, cargo had a single lock on the package cache, which cargo would hold while downloading registry data and performing dependency resolution. When cargo is actually running rustc, it previously did not hold a lock under the assumption that existing cache data will not be modified.

However, now that cargo can modify or delete existing cache data, it needs to be careful to coordinate with anything that might be reading from the cache, such as if multiple cargo commands are run simultaneously. To handle this, cargo now has two separate locks, which are used together to provide three separate locking states. There is a shared read lock, which allows multiple builds to run in parallel and read from the cache. There is a write lock held while downloading registry data, which is independent of the read lock which allows concurrent builds to still run while new packages are downloaded. The third state is a write lock that prevents either of the two previous locks from being held, and ensures exclusive access while cleaning the cache.
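The three states can be modeled with two locks plus a reader count. The sketch below uses in-process threading primitives purely for illustration; cargo actually uses cross-process file locks, and all names here are invented:

```python
import threading

class CacheLocks:
    """Sketch of the three locking states: shared reads, a download
    write lock compatible with reads, and an exclusive lock for cleaning."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._no_readers = threading.Condition(self._mutex)
        self._readers = 0
        self._download = threading.Lock()   # held while downloading registry data
        self._exclusive = threading.Lock()  # held while cleaning the cache

    def acquire_read(self):
        with self._exclusive:  # wait out any in-progress cleaning
            with self._mutex:
                self._readers += 1

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._no_readers.notify_all()

    def acquire_download(self):
        self._download.acquire()  # compatible with concurrent readers

    def release_download(self):
        self._download.release()

    def acquire_exclusive(self):
        self._exclusive.acquire()  # block new readers
        self._download.acquire()   # block downloads
        with self._mutex:
            while self._readers > 0:  # wait for in-flight readers to finish
                self._no_readers.wait()

    def release_exclusive(self):
        self._download.release()
        self._exclusive.release()

locks = CacheLocks()
locks.acquire_read()       # parallel builds hold the shared read lock
locks.acquire_download()   # downloading while builds read: allowed
locks.release_download()
locks.release_read()
locks.acquire_exclusive()  # cleaning: excludes both of the above
locks.release_exclusive()
```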

Versions of cargo before 1.75 don't know about the exclusive write lock. We are hoping that in practice it will be rare to concurrently run old and new cargo versions, and that it is unlikely that the automatic GC will need to delete data that is concurrently in use by an older version.

Error handling and filesystems

Because we do not want problems with GC from disrupting users, the implementation silently skips the GC if it is unable to acquire an exclusive lock on the package cache. Similarly, when cargo saves the timestamp data on every command, it will silently ignore errors if it is unable to open the database, such as if it is on a read-only filesystem, or it is unable to acquire a write lock. This may result in the last-use timestamps becoming stale, but hopefully this should not impact most usage scenarios. For locking, we are paying special attention to scenarios such as Docker container mounts and network filesystems with questionable locking support.

Backwards compatibility

Since the cache is used by any version of cargo, we have to pay close attention to forwards and backwards compatibility. We benefit from SQLite's on-disk data format, which has been stable since 2004. Cargo has support to do schema migrations within the database that stay backwards compatible.

Plan for the future

A major aspect of this endeavor is to gain experience with using SQLite in a wide variety of environments, with a plan to extend its usage in several other parts of cargo.

Registry index metadata

One place where we are looking to introduce SQLite is for the registry index cache. When cargo downloads registry index data, it stores it in a custom-designed binary file format to improve lookup performance. However, this index cache uses many small files, which may not perform well on some filesystems.

Additionally, the index cache grows without bound. Currently the automatic cache cleaning will only delete an entire index cache if the index itself hasn't been used, which is rarely the case for crates.io. We may also need to consider finer-grained timestamp tracking or some mechanism to periodically purge this data.

Target directory change tracking and cleaning

Another place we are looking to introduce SQLite is for managing the target directory. In cargo's target directory, cargo keeps track of information about each crate that has been built with what is called a fingerprint. These fingerprints help cargo know if it needs to recompile something. Each artifact is tracked with a set of 4 files, using a mixture of custom formats.

We are looking to replace this system with SQLite, which will hopefully bring about several improvements. A major focus will be to provide cleaning of stale data in the target directory, which tends to use a substantial amount of disk space. Additionally, we are looking to implement other improvements, such as more accurate fingerprint tracking, providing information about why cargo thinks something needed to be recompiled, and hopefully improving performance. This will be important for the script feature, which uses a global cache for build artifacts, and the future implementation of a globally-shared build cache.

Categories: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla and Allies Say No to Surveillance Blank Check in NDAA, Yes to Strong Surveillance Protections

Mozilla planet - Fri, 08/12/2023 - 16:04

Today Mozilla, along with a group of builders and supporters of innovation, sent a letter calling on the US House of Representatives to pass strong surveillance reform proposals such as the Government Surveillance Reform Act (GSRA) and the Protect Liberty and End Warrantless Surveillance Act (PLEWSA).

In line with our previous call for reform, our letter also highlighted the need for codification of the scope of surveillance proposed in the Administration’s own Executive Order on “Enhancing Safeguards for United States Signals Intelligence Activities” and opposed a months-long reauthorization of Section 702 that would effectively greenlight surveillance abuses.

Both GSRA and PLEWSA take critical steps forward in protecting Americans from overbroad surveillance, such as imposing warrant requirements for queries of US person data and banning warrantless purchases of sensitive information on Americans from data brokers. We do, however, encourage Congress to examine how it can further strengthen PLEWSA.

Unfortunately, House and Senate Intelligence Committees are also considering proposals of their own, proposals that would entrench the surveillance status quo.

Those wishing to get involved can add their names to our letter and do their part to engage Congress on this important issue.

You can find the letter HERE.

The post Mozilla and Allies Say No to Surveillance Blank Check in NDAA, Yes to Strong Surveillance Protections appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

Support.Mozilla.Org: What’s up with SUMO – Q4 2023

Mozilla planet - Fri, 08/12/2023 - 08:13

Hi everybody,

The last part of our quarterly update in 2023 comes early with this post. That means we won’t have the data from December just yet (but we’ll make sure to update the post later). Lots of updates after the last quarter, so let’s just dive in!

Welcome note and shout-outs from Q4

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news
  • Kiki came back from maternity leave and Sarto bade her farewell, both of which happened this quarter.
  • We have a new contributor policy around the use of generative AI tools. This was one of the things that Sarto initiated back then so I’d like to give the credit to her. Please take some time to read and familiarize yourself with the policy.
  • Spanish contributors are pushing really hard to help localize the in-product and top articles for the Firefox Desktop. I’m so proud that at the moment, 57.65% of Firefox Desktop in-product articles have been translated & updated to Spanish (compared to 11.8% when we started) and 80% of top 50 articles are localized and updated to Spanish. Huge props to those who I mentioned in the shout-outs section above.
  • We’ve got new locale leaders for Catalan and Indonesian (as I mentioned above). Please join me to congratulate Handi S & Carlos Tomás for their new role!
  • The Customer Experience team has officially moved out of the Marketing org into the Strategy and Operations org led by Suba Vasudevan (more about that in our community meeting in Dec).
  • We’ve migrated Pocket support platform (used to be under Help Scout) to SUMO. That means, Pocket help articles are now available on Mozilla Support, and people looking for Pocket premium support can also ask a question through SUMO.
  • Firefox accounts transitioned to Mozilla accounts in early November this year. Read this article to learn more about the background for this transition.
  • We did a SUMO sprint for the Review checker feature with the release of Firefox 119, even though we couldn’t find lots of chatter about it.
  • Please check out this thread to learn more about recent platform fixes and improvements (including the use of emoji! )
  • We’ve also updated and moved Kitsune documentation to GitHub page recently. Check out this thread to learn more.
Catch up
  • Watch the monthly community call if you haven’t. Learn more about what’s new in October, November, and December! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting. First time joining the call? Check out this article to get to know how to join. 
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to Firefox Daily Digest to get daily updates about Firefox from across different platforms.

Check out SUMO Engineering Board to see what the platform team is currently doing and submit a report through Bugzilla if you want to report a bug/request for improvement.

Community stats

KB

KB pageviews (*)

Month      Page views   Vs previous month
Oct 2023   7,061,331    9.36%
Nov 2023   6,502,248    -7.92%
Dec 2023   TBD          TBD

* KB pageviews number is a total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale   Oct 2023 pageviews (*)   Nov 2023 pageviews (*)   Dec 2023 pageviews (*)   Localization progress (per Dec 7) (**)
de       10.66%                   10.97%                   TBD                      93%
fr       7.10%                    7.23%                    TBD                      80%
zh-CN    6.84%                    6.81%                    TBD                      92%
es       5.59%                    5.49%                    TBD                      27%
ja       5.10%                    4.72%                    TBD                      33%
ru       3.67%                    3.8%                     TBD                      88%
pt-BR    3.30%                    3.11%                    TBD                      43%
it       2.52%                    2.48%                    TBD                      96%
zh-TW    2.42%                    2.61%                    TBD                      2%
pl       2.13%                    2.11%                    TBD                      83%

* Locale pageviews is the overall pageviews from the given locale (KB and other pages)
** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month      Total questions   Answer rate within 72 hrs   Solved rate within 72 hrs   Forum helpfulness
Oct 2023   3,897             66.33%                      10.01%                      59.68%
Nov 2023   2,660             64.77%                      9.81%                       65.74%
Dec 2023   TBD               TBD                         TBD                         TBD

Top 5 forum contributors in the last 90 days: 

Social Support

Channel    Total tweets   Total moderation by contributors   Total reply by contributors   Respond conversion rate
Oct 2023   311            209                                132                           63.16%
Nov 2023   245            137                                87                            63.50%
Dec 2023   TBD            TBD                                TBD                           TBD

Top 5 Social Support contributors in the past 3 months: 

  1. Tim Maks 
  2. Wim Benes
  3. Daniel B
  4. Philipp T
  5. Pierre Mozinet
Play Store Support

Firefox for Android only

Channel    Total reviews   Total conv interacted by contributors   Total conv replied by contributors
Oct 2023   6,334           45                                      18
Nov 2023   6,231           281                                     75
Dec 2023   TBD             TBD                                     TBD

Top 5 Play Store contributors in the past 3 months: 

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

 

Categories: Mozilla-nl planet

Niko Matsakis: Being Rusty: Discovering Rust's design axioms

Mozilla planet - Thu, 07/12/2023 - 14:46

To your average Joe, being “rusty” is not seen as a good thing.1 But readers of this blog know that being Rusty – with a capital R! – is, of course, something completely different! So what is it that makes Rust Rust? Our slogans articulate key parts of it, like fearless concurrency, stability without stagnation, or the epic Hack without fear. And there is of course Lindsey Kuper’s epic haiku: “A systems language / pursuing the trifecta: / fast, concurrent, safe”. But I feel like we’re still missing a unified set of axioms that we can refer back to over time and use to guide us as we make decisions. Some of you will remember the Rustacean Principles, which was my first attempt at this. I’ve been dissatisfied with them for a couple of reasons, so I decided to try again. The structure is really different, so I’m calling it Rust’s design axioms. This post documents the current state – I’m quite a bit happier with it! But it’s not quite there yet. So I’ve also got a link to a repository where I’m hoping people can help improve them by opening issues with examples, counter-examples, or other thoughts.

Axioms capture the principles you use in your decision-making process

What I’ve noticed is that when I am trying to make some decision – whether it’s a question of language design or something else – I am implicitly bringing assumptions, intuitions, and hypotheses to bear. Oftentimes, those intuitions fly by very quickly in my mind, and I barely even notice them. Ah yeah, we could do X, but if we did that, it would mean Y, and I don’t want that, scratch that idea. I’m slowly learning to be attentive to these moments – whatever Y is right there, it’s related to one of my design axioms — something I’m implicitly using to shape my thinking.

I’ve found that if I can capture those axioms and write them out, they can help me down the line when I’m facing future decisions. It can also help to bring alignment to a group of people by making those intuitions explicit (and giving people a chance to refute or sharpen them). Obviously I’m not the first to observe this. I’ve found Amazon’s practice of using tenets to be quite useful2, for example, and I’ve also been inspired by things I’ve read online about the importance of making your hypotheses explicit.3

In proof systems, your axioms are the things that you assert to be true and take on faith, and from which the rest of your argument follows. I choose to call these Rust’s design axioms because that seemed like exactly what I was going for. What are the starting assumptions that, followed to their conclusion, lead you to Rust? The more clearly we can articulate those assumptions, the better we’ll be able to ensure that we continue to follow them as we evolve Rust to meet future needs.

Axioms have a hypothesis and a consequence

I’ve structured the axioms in a particular way. They begin by stating the axiom itself – the core belief that we assert to be true. That is followed by a consequence, which is something that we do as a result of that core belief. To show you what I mean, here is one of the Rust design axioms I’ve drafted:

Rust users want to surface problems as early as possible, and so Rust is designed to be reliable. We make choices that help surface bugs earlier. We don’t make guesses about what our users meant to do, we let them tell us, and we endeavor to make the meaning of code transparent to its reader. And we always, always guarantee memory safety and data-race freedom in safe Rust code.

Axioms have an ordering and earlier things take priority

Each axiom is useful on its own, but where things become interesting is when they come into conflict. Consider reliability: that is a core axiom of Rust, no doubt, but is it the most important? I would argue it is not. If it were, we wouldn’t permit unsafe code, or at least not without a safety proof. I think our core axiom is actually that Rust is meant to be used, and used for building a particular kind of program. I articulated it like this:

Rust is meant to empower everyone to build reliable and efficient software, so above all else, Rust needs to be accessible to a broad audience. We avoid designs that will be too complex to be used in practice. We build supportive tooling that not only points out potential mistakes but helps users understand and fix them.

When it comes to safety, I think Rust’s approach is eminently practical. We’ve designed a safe type system that we believe covers 90-95% of what people need to do, and we are always working to expand that scope. To get that last 5-10%, we fall back to unsafe code. Is this as safe and reliable as it could be? No. That would require 100% proofs of correctness. There are systems that do that, but they are maintained by a small handful of experts, and that idea – that systems programming is just for “wizards” – is exactly what we are trying to get away from.

To express this in our axioms, we put accessible as the top-most axiom. It defines the mission overall. But we put reliability as the second in the list, since that takes precedence over everything else.

The design axioms I really like

Without further ado, here is my current list of design axioms. Well, part of it. These are the axioms that I feel pretty good about. The ordering also feels right to me.

We believe that…

  • Rust is meant to empower everyone to build reliable and efficient software, so above all else, Rust needs to be accessible to a broad audience. We avoid designs that will be too complex to be used in practice. We build supportive tooling that not only points out potential mistakes but helps users understand and fix them.
  • Rust users want to surface problems as early as possible, and so Rust is designed to be reliable. We make choices that help surface bugs earlier. We don’t make guesses about what our users meant to do, we let them tell us, and we endeavor to make the meaning of code transparent to its reader. And we always, always guarantee memory safety and data-race freedom in safe Rust code.
  • Rust users are just as obsessed with quality as we are, and so Rust is extensible. We empower our users to build their own abstractions. We prefer to let people build what they need than to try (and fail) to give them everything ourselves.
  • Systems programmers need to know what is happening and where, and so system details and especially performance costs in Rust are transparent and tunable. When building systems, it’s often important to know what’s going on underneath the abstractions. Abstractions should still leave the programmer feeling like they’re in control of the underlying system, such as by making it easy to notice (or avoid) certain types of operations.

…where earlier things take precedence.

The design axioms that are still a work-in-progress

These axioms are things I am less sure of. It’s not that I don’t think they are true. It’s that I don’t know yet if they’re worded correctly. Maybe they should be combined together? And where, exactly, do they fall in the ordering?

  • Rust users want to focus on solving their problem, not the fiddly details, so Rust is productive. We favor APIs where the most convenient and high-level option is also the most efficient one. We support portability across operating systems and execution environments by default. We aren’t explicit for the sake of being explicit, but rather to surface details we believe are needed.
  • N✕M is bigger than N+M, and so we design for composability and orthogonality. We are looking for features that tackle independent problems and build on one another, giving rise to N✕M possibilities.
  • It’s nicer to use one language than two, so Rust is versatile. Rust can’t be the best at everything, but we can make it decent for just about anything, whether that’s low-level C code or high-level scripting.

Of these, I like the first one best. Also, it follows the axiom structure better, because it starts with a hypothesis about Rust users and what they want. The other two are a bit older and I hadn’t adopted that convention yet.
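The composability axiom above leans on simple arithmetic: once you have at least two of each, N✕M combinations outnumber N+M individual features, and the gap widens quickly. A throwaway Python check (purely illustrative) makes that concrete:

```python
# For feature counts n, m >= 2, the number of combinations n * m is at least
# the number of individual features n + m, and it pulls ahead fast.
for n in range(2, 10):
    for m in range(2, 10):
        assert n * m >= n + m

# A couple of concrete data points:
print(3 * 3, 3 + 3)  # → 9 6
print(5 * 5, 5 + 5)  # → 25 10
```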

Help shape the axioms!

My ultimate goal is to author an RFC endorsing these axioms for Rust. But I need help to get there. Are these the right axioms? Am I missing things? Should we change the ordering?

I’d love to know what you think! To aid in collaboration, I’ve created a nikomatsakis/rust-design-axioms github repository. It hosts the current state of the axioms and also has suggested ways to contribute.

I’ve already opened issues for some of the things I am wondering about, such as:

  • nikomatsakis/rust-design-axioms#1: Maybe we need a “performant” axiom? Right now, the idea of “zero-cost abstractions” and “the default thing is also the most efficient one” feels a bit smeared across “transparent and tunable” and “productive”.
  • nikomatsakis/rust-design-axioms#2: Is “portability” sufficiently important to pull out from “productivity” into its own axiom?
  • nikomatsakis/rust-design-axioms#3: Are “versatility” and “orthogonality” really expressing something different from “productivity”?

Check it out!

  1. I have a Google alert for “Rust” and I cannot tell you how often it seems that some sports teams or another shakes off Rust. I’d never heard that expression before signing up for this Google alert. ↩︎

  2. I’m perhaps a bit unusual in my love for things like Amazon’s Leadership Principles. I can totally understand why, to many people, they seem like corporate nonsense. But if there’s one theme I’ve seen consistently over my time working on Rust, it’s that process and structure are essential. Take a look at the “People Systems” keynote that Aaron, Ashley, and I gave at RustConf 2018 and you will see that theme running throughout. So many of Rust’s greatest practices – things like the teams or RFCs or public, rfcbot-based decision making – are an attempt to take some kind of informal, unstructured process and give it shape. ↩︎

  3. I really like this Learning for Action page, which I admit I found just by googling for “strategy articulate a hypotheses”. I’m less into this super corporate-sounding LinkedIn post, but I have to admit I think it’s right on the money. ↩︎


The Rust Programming Language Blog: Announcing Rust 1.74.1

Mozilla planet - to, 07/12/2023 - 01:00

The Rust team has published a new point release of Rust, 1.74.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.74.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.74.1

1.74.1 resolves a few regressions introduced in 1.74.0:

Contributors to 1.74.1

Many people came together to create Rust 1.74.1. We couldn't have done it without all of you. Thanks!


Mozilla Security Blog: Mozilla VPN Security Audit 2023

Mozilla planet - wo, 06/12/2023 - 18:00

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Mozilla VPN that Cure53 conducted earlier this year.

The scope of this security audit included the following products:

  • Mozilla VPN Qt6 App for macOS
  • Mozilla VPN Qt6 App for Linux
  • Mozilla VPN Qt6 App for Windows
  • Mozilla VPN Qt6 App for iOS
  • Mozilla VPN Qt6 App for Android

Here’s a summary of the items discovered within this security audit that the auditors rated as medium or higher severity:

  • FVP-03-003: DoS via serialized intent 
      • Data received via intents within the affected activity should be validated to prevent the Android app from exposing certain activities to third-party apps.
      • There was a risk that a malicious application could leverage this weakness to crash the app at any time.
      • This risk was addressed by Mozilla and confirmed by Cure53.
  • FVP-03-008: Keychain access level leaks WG private key to iCloud 
      • Cure53 confirmed that this risk has been addressed due to an extra layer of encryption, which protects the Keychain specifically with a key from the device’s secure enclave.
  • FVP-03-009: Lack of access controls on daemon socket
      • Access controls to guarantee that the user sending commands to the daemon was permitted to initiate the intended action needs to be implemented.
      • This risk has been addressed by Mozilla and confirmed by Cure53.
  • FVP-03-010: VPN leak via captive portal detection 
      • Cure53 advised that the captive portal detection feature be turned off by default to prevent an opportunity for IP leakage when using maliciously set up WiFi hotspots.
      • Mozilla addressed the risk by no longer pinging for a captive portal outside of the VPN tunnel.
  • FVP-03-011: Lack of local TCP server access controls
      • The VPN client exposes a local TCP interface running on port 8754, which is bound to localhost. Users on localhost can issue a request to the port and disable the VPN.
      • Mozilla addressed this risk as recommended by Cure53.
  • FVP-03-012: Rogue extension can disable VPN using mozillavpnnp (High)
      • mozillavpnnp does not sufficiently restrict the application caller.
      • Mozilla addressed this risk as recommended by Cure53.

If you’d like to read the detailed report from Cure53, including all low and informational items, you can find it here.

 

The post Mozilla VPN Security Audit 2023 appeared first on Mozilla Security Blog.


Mozilla Privacy Blog: Mozilla Asks US Supreme Court to Support Responsible Content Moderation

Mozilla planet - wo, 06/12/2023 - 01:05

Today Mozilla Corporation joined an amicus brief in a pair of important Supreme Court cases. The cases consider Texas and Florida laws that prohibit social media platforms from removing hateful and abusive content. If upheld, these laws would make content moderation impossible and would make the internet a much less safe place for all of us. Mozilla urges the Supreme Court to find them unconstitutional.

The Texas law, known as H.B. 20, would prohibit large social media sites from blocking, removing, or demonetizing content based on the viewpoint. While it provides an exception for illegal speech, this still means that platforms would be forced to host a huge range of legal but harmful content, such as outright racism or Holocaust denial. It would mandate, for example, that a page devoted to South African history must tolerate pro-Apartheid comments, or that an online community devoted to religious practice allow comments mocking religion. It would condemn all social media to rampant trolling and abuse.

Mozilla has joined a brief filed by Internet Works and other companies including Tumblr and Pinterest. The brief sets out how content moderation works in practice, and how it can vary widely depending on the goals and community of each platform. It explains how content moderation can promote speech and free association by allowing people to choose and build online communities. In Mozilla’s own social media products, our goal is to moderate in favor of a healthy community. This goal is central to our mission, which underscores our commitment to “an internet that promotes civil discourse, human dignity, and individual expression” and “that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.”

The laws under consideration by the Court do not serve speech, but would instead destroy online communities that rely on healthy moderation. Mozilla is standing with the community and allies to call for a better future online.

The post Mozilla Asks US Supreme Court to Support Responsible Content Moderation appeared first on Open Policy & Advocacy.


IRL (podcast): The Art of AI

Mozilla planet - ti, 05/12/2023 - 06:00

From Hollywood to Hip Hop, artists are negotiating new boundaries of consent for use of AI in the creative industries. Bridget Todd speaks to artists who are pushing the boundaries.

It’s not the first time artists have been squeezed, but generative AI presents new dilemmas. In this episode: a member of the AI working group of the Hollywood writers union; a singer who licenses the use of her voice to others; an emcee and professor of Black music; and an AI music company charting a different path.

Van Robichaux is a comedy writer in Los Angeles who helped craft the Writers Guild of America’s proposals on managing AI in the entertainment industry. 

Holly Herndon is a Berlin-based artist and a computer scientist who has developed “Holly +”, a series of deep fake music tools for making music with Holly’s voice.

Enongo Lumumba-Kasongo creates video games and studies the intersection between AI and Hip Hop at Brown University. Her alias as a rapper is Sammus. 

Rory Kenny is co-founder and CEO of Loudly, an AI music generator platform that employs musicians to train their AI instead of scraping music from the internet.

*Thank you to Sammus for sharing her track ‘1080p.’ Visit Sammus’ Bandcamp page to hear the full track and check out more of her songs.*


Hacks.Mozilla.Org: Firefox Developer Edition and Beta: Try out Mozilla’s .deb package!

Mozilla planet - to, 30/11/2023 - 20:55

A month ago, we introduced our Nightly package for Debian-based Linux distributions. Today, we are proud to announce we made our .deb package available for Developer Edition and Beta!

We’ve set up a new APT repository for you to install Firefox as a .deb package. These packages are compatible with the same Debian and Ubuntu versions as our traditional binaries.

Your feedback is invaluable, so don’t hesitate to report any issues you encounter to help us improve the overall experience.

Adopting Mozilla’s Firefox .deb package offers multiple benefits:

  • you will get better performance thanks to our advanced compiler-based optimizations,
  • you will receive the latest updates as fast as possible because the .deb is integrated into Firefox’s release process,
  • you will get hardened binaries with all security flags enabled during compilation,
  • you can continue browsing after upgrading the package, meaning you can restart Firefox at your convenience to get the latest version.
To set up the APT repository and install the Firefox .deb package, simply follow these steps:

# Create a directory to store APT repository keys if it doesn't exist:
sudo install -d -m 0755 /etc/apt/keyrings

# Import the Mozilla APT repository signing key:
wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null

# The fingerprint should be 35BAA0B33E9EB396F59CA838C0BA5CE6DC6315A3
gpg -n -q --import --import-options import-show /etc/apt/keyrings/packages.mozilla.org.asc | awk '/pub/{getline; gsub(/^ +| +$/,""); print "\n"$0"\n"}'

# Next, add the Mozilla APT repository to your sources list:
echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null

# Update your package list and install the Firefox .deb package:
sudo apt-get update && sudo apt-get install firefox-beta
# Replace "beta" with "devedition" for Developer Edition

And that’s it! You have now installed the latest Firefox Beta/Developer Edition .deb package on your Linux system. Firefox supports more than a hundred different locales. The packages mentioned above are in American English, but we have also created .deb packages containing the Firefox language packs.
To install a specific language pack, replace fr in the example below with the desired language code:

sudo apt-get install firefox-beta-l10n-fr

To list all the available language packs, you can use this command after adding the Mozilla APT repository and running sudo apt-get update:

apt-cache search firefox-beta-l10n

The post Firefox Developer Edition and Beta: Try out Mozilla’s .deb package! appeared first on Mozilla Hacks - the Web developer blog.


The Rust Programming Language Blog: Announcing Rustup 1.26.0

Mozilla planet - ti, 25/04/2023 - 02:00

The rustup working group is happy to announce the release of rustup version 1.26.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.26.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.26.0

This version of Rustup involves a significant number of internal cleanups, both in terms of the Rustup code and its tests. In addition to a lot of work on the codebase itself, due to the length of time since the last release this one has a record number of contributors and we thank you all for your efforts and time.

The headlines for this release are:

  1. Add rust-analyzer as a proxy of rustup. Now you can call rust-analyzer and it will be proxied to the rust-analyzer component for the current toolchain.

  2. Bump the clap dependency from 2.x to 3.x. It's a major version bump, so there are some help text changes, but the command line interface is unchanged.

  3. Remove experimental GPG signature validation and the rustup show keys command. Due to its experimental status, validating the integrity of downloaded binaries did not rely on it, and there was no option to abort the installation if a signature mismatch happened. Multiple problems with its implementation were discovered in the recent months, which led to the decision to remove the experimental code. The team is working on the design of a new signature validation scheme, which will be implemented in the future.

Full details are available in the changelog!

Rustup's documentation is also available in the rustup book.

Thanks

Thanks again to all the contributors who made rustup 1.26.0 possible!

  • Daniel Silverstone (kinnison)
  • Sabrina Jewson (SabrinaJewson)
  • Robert Collins (rbtcollins)
  • chansuke (chansuke)
  • Shamil (shamilsan)
  • Oli Lalonde (olalonde)
  • 二手掉包工程师 (hi-rustin)
  • Eric Huss (ehuss)
  • J Balint BIRO (jbalintbiro)
  • Easton Pillay (jedieaston)
  • zhaixiaojuan (zhaixiaojuan)
  • Chris Denton (ChrisDenton)
  • Martin Geisler (mgeisler)
  • Lucio Franco (LucioFranco)
  • Nicholas Bishop (nicholasbishop)
  • SADIK KUZU (sadikkuzu)
  • darkyshiny (darkyshiny)
  • René Dudfield (illume)
  • Noritada Kobayashi (noritada)
  • Mohammad AlSaleh (MoSal)
  • Dustin Martin (dmartin)
  • Ville Skyttä (scop)
  • Tshepang Mbambo (tshepang)
  • Illia Bobyr (ilya-bobyr)
  • Vincent Rischmann (vrischmann)
  • Alexander (Alovchin91)
  • Daniel Brotsky (brotskydotcom)
  • zohnannor (zohnannor)
  • Joshua Nelson (jyn514)
  • Prikshit Gautam (gautamprikshit1)
  • Dylan Thacker-Smith (dylanahsmith)
  • Jan David (jdno)
  • Aurora (lilith13666)
  • Pietro Albini (pietroalbini)
  • Renovate Bot (renovate-bot)

Tiger Oakes: Alternatives to the resize event with better performance

Mozilla planet - snein, 23/04/2023 - 09:00
Exploring other APIs that integrate closely with the browser's styling engine.

Cameron Kaiser: April patch set for TenFourFox

Mozilla planet - fr, 21/04/2023 - 02:06
As promised, there are new changesets to pick up in the TenFourFox tree. (If you're new to rolling your own TenFourFox build, these instructions still generally apply.) I've tried to limit their scope so that people with a partial build can just pull the changes (git pull) and gmake -f client.mk build without having to "clobber" the tree (completely erase and start over). You'll have to do that for the new ESR when that comes out in a couple months, but I'll spare you that today. Most of these patches are security-related, including one that prevents naughty cookies which would affect us as well, though the rest are mostly crash-preventers and would require PowerPC-specific attacks to be exploitable. There is also an update to the ATSUI font blacklist. As always, if you find problematic fonts that need to be suppressed, post them to issue 566 or in the comments, but read this first.

However, there is one feature update in this patchset: a CSS grid whitelist. Firefox 45, which is the heavily patched underpinning of TenFourFox FPR, has a partially working implementation of CSS grid as explained in this MDN article. CSS grid layout is a more flexible and more generalized way of putting elements on a page than the earlier tables method. Go ahead and try to read that article with the current build before you pull the changes and you'll notice that the page has weirdly scrunched up elements (before a script runs and blanks the whole page with an error). After you build with the updates, you'll notice that while the page still doesn't lay out perfectly right, you can now actually read things. That's because there's a whitelist entry now in TenFourFox that allows grid automatically on developer.mozilla.org (a new layout.css.grid.host.developer.mozilla.org preference defaults to true which is checked for by new code in the CSS parser, and there is also an entry in the problematic scripts filter to block the script that ends up blanking the page when it bugs out). The other issues on that page are unrelated to CSS grid.

This will change things for people who set the global pref layout.css.grid.enabled to true, which we have never shipped in TenFourFox because of (at times significant) bugs in the implementation. This pref is now true, but unless the URL hostname is in the whitelist, CSS grid will still be disabled dynamically and is never enabled for chrome resources. If you set the global pref to false, however, then CSS grid is disabled everywhere. If you were using this for a particular site that lays out better with grid on, post the URL to issue 659 or in the comments and I'll consider adding it to the default set (or add it yourself in about:config).
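The gating described above can be modeled in a few lines. This Python sketch is illustrative only (TenFourFox's parser is C++, and the real pref store is not a dict), but it mirrors the stated behavior: the global pref acts as a kill switch, and grid is otherwise enabled only for whitelisted hosts:

```python
# Illustrative model of the pref checks described above (not TenFourFox code).
prefs = {
    "layout.css.grid.enabled": True,  # new default
    "layout.css.grid.host.developer.mozilla.org": True,
}

def grid_enabled_for(hostname: str) -> bool:
    # Global kill switch: false disables CSS grid everywhere.
    if not prefs.get("layout.css.grid.enabled", False):
        return False
    # Otherwise grid is enabled only for hosts with their own whitelist pref.
    return bool(prefs.get(f"layout.css.grid.host.{hostname}", False))

print(grid_enabled_for("developer.mozilla.org"))  # → True
print(grid_enabled_for("example.com"))            # → False
```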

The next ESR (Firefox 115) comes out end of June-early July, and we'll do the usual root updates then.


The Rust Programming Language Blog: Announcing Rust 1.69.0

Mozilla planet - to, 20/04/2023 - 02:00

The Rust team is happy to announce a nice version of Rust, 1.69.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.69.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.69.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.69.0 stable

Rust 1.69.0 introduces no major new features. However, it contains many small improvements, including over 3,000 commits from over 500 contributors.

Cargo now suggests to automatically fix some warnings

Rust 1.29.0 added the cargo fix subcommand to automatically fix some simple compiler warnings. Since then, the number of warnings that can be fixed automatically continues to steadily increase. In addition, support for automatically fixing some simple Clippy warnings has also been added.

In order to draw more attention to these increased capabilities, Cargo will now suggest running cargo fix or cargo clippy --fix when it detects warnings that are automatically fixable:

warning: unused import: `std::hash::Hash`
 --> src/main.rs:1:5
  |
1 | use std::hash::Hash;
  |     ^^^^^^^^^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: `foo` (bin "foo") generated 1 warning (run `cargo fix --bin "foo"` to apply 1 suggestion)

Note that the full Cargo invocation shown above is only necessary if you want to precisely apply fixes to a single crate. If you want to apply fixes to all the default members of a workspace, then a simple cargo fix (with no additional arguments) will suffice.

Debug information is not included in build scripts by default anymore

To improve compilation speed, Cargo now avoids emitting debug information in build scripts by default. There will be no visible effect when build scripts execute successfully, but backtraces in build scripts will contain less information.

If you want to debug a build script, you can add this snippet to your Cargo.toml to emit debug information again:

[profile.dev.build-override]
debug = true

[profile.release.build-override]
debug = true

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.69.0

Many people came together to create Rust 1.69.0. We couldn't have done it without all of you. Thanks!


Mozilla Thunderbird: Meet The Team: Wolf-Martell Montwe, Android Developer

Mozilla planet - wo, 19/04/2023 - 11:37


Welcome to a brand new feature called “Meet The Team!” In this ongoing series of conversations, I introduce you to the people behind the software you use every day. We kicked things off by talking to Thunderbird’s Product Design Manager Alex Castellani. Now let’s meet someone much newer to the team: Wolf-Martell Montwe.

Having recently joined us from Berlin as a full-time Android developer, Wolf brings his passion for building mobile applications to the Thunderbird team. He’ll be helping to develop new features and an updated interface for K-9 Mail as we transform it into Thunderbird for Android. I spoke with him about his first computer and early gaming memories, what he hopes to accomplish for the Thunderbird mobile app, and how our community of contributors can help.

Meet The Team: Alex Castellani, Product Design Manager

Catch up on the “Meet The Team” series by reading my conversation with Alex Castellani

Wolf’s Technology Origin Story

I love a great origin story, and many people working in technology seem to have one that’s directly tied to their first computer. Wolf is no exception.

“I think I started my computer journey with playing games — the first I remember is Sid Meier’s Pirates!” Wolf remembers. “Back then I had an IBM 386. Super slow, super loud! And I hacked around a lot to get games running too, to free up memory, to free up disk space because this was super limited. I think one partition was maximum 3MB! It was a big achievement if something just was running.”

Wolf’s fascination with games eventually led to some basic programming knowledge and web page development.

“I used to develop web pages, especially for my school to build up like a little forum,” he says. “I fell in love with PHP because it had one of the first editors with code completion, and that was awesome.”

What Attracted Wolf To The Thunderbird Project?

“I’m a longtime Thunderbird user, and I have used K-9 Mail from 2010 on,” Wolf says. “In my last position, my task was to build up open source software. (So we developed the software and then prepared it to be open source, because the code was readable, but people couldn’t contribute.) And over that time I fell in love with developing open source, so I was looking for opportunities to follow up on that direction.”

The Thunderbird Android Team Just Doubled In Size. Now What?

Believe it or not, for many years K-9 Mail had one full-time developer (in addition to a community of contributors). So, Wolf effectively doubles the size of the core team. The first questions that came to mind: what doors does this open to the future of Thunderbird for Android, and what can Wolf and cketti accomplish during the next few months?

“First, I want to strengthen the technology base and also open it up for using more modern tooling, especially because the whole Android ecosystem is right now under a really drastic change,” Wolf explains. “It could be pretty beneficial for the project since it’s being rebranded, and I think it’s good timing to then also adapt new technology and base everything on that.”

Why We’re Rebuilding The Thunderbird Interface From Scratch

(The desktop version of Thunderbird is undergoing a similar transformation, as we slowly rebuild it with more modern tooling while eliminating years of technical debt.)

Wolf continues: “I think that would also open the Android app to be a little bit easier to maintain from the UI side, because right now that is hard to achieve.”

It’s certainly easier for our developers — and our global team of community contributors — to improve an application and more easily add new features when the code isn’t fighting against them.

How Can The Community Help?

There’s so much we can do to contribute to open source software besides writing code. So I asked Wolf: what’s the most important thing the K-9 Mail and Thunderbird community can do to help development?

“Constructive feedback on what we’re doing,” Wolf says. “Whether it’s positive or negative, I think that’s important. But please be nice!”

We certainly encourage everyone on Android to try K-9 Mail as we continue its transformation to Thunderbird. When you’re ready to give feedback or suggest ideas, we invite you to join our Thunderbird Android Planning mailing list, which is open to the public.

Talk to Wolf on Mastodon, and follow him on GitHub.

Download K-9 Mail: F-Droid | Play Store | GitHub.

The post Meet The Team: Wolf-Martell Montwe, Android Developer appeared first on The Thunderbird Blog.

Categorieën: Mozilla-nl planet

IRL (podcast): Bonus Episode

Mozilla planet - wo, 19/04/2023 - 02:39

We have good news to share. IRL: Online Life is Real Life has been nominated for two Webby Awards: one for Public Service and Activism and another for Technology. We need your help. We’d love it if you could go to the links below and vote for us. It’s quick and easy! Voting ends on Thursday, April 20th at midnight PDT.

Vote for IRL in the Webby Awards: Technology and Public Service Activism 

It means so much to spotlight the voices and stories of folks who are making AI more trustworthy in real life, and we love to see them celebrated! 

Thanks for your vote and for listening to IRL!

Cameron Kaiser: Power Mac ransomware? Yes, but it's complicated

Mozilla planet - ti, 18/04/2023 - 18:33
Wired ran an article today (via Ars Technica) about apparent macOS-compatible builds of LockBit, a prominent encrypting ransomware suite, such as this one for Apple silicon. Other experimental ransomware samples have surfaced before, but this may be the first known example of a prominent operation specifically targeting Macs, and it is almost certainly not the last.

What caught my eye in the article was a report of PowerPC builds. I can't seem to get an alleged sample to analyse (feel free to contact me at ckaiser at floodgap dawt com if you can provide one) but the source for that assertion appears to be this tweet.

Can that file run on a Power Mac? It appears it's indeed a PowerPC binary, but the executable format is ELF and not Mach-O, so the file can only run natively on Linux or another ELF-based operating system, not PowerPC Mac OS X (or, for that matter, Mac OS 9 and earlier). Even if the raw machine code were sprayed into memory for an exploitable Mac application to be tricked into running, ELF implies the System V ABI, which is similar to but different from the PowerOpen ABI used by PowerPC-compatible versions of Mac OS, and we haven't even started talking about system calls. Rather than a specific build targeting Power Macs, most likely this is evidence that the LockBit builders simply ran every cross-compiler variation they could find on their source code: there are no natively little-endian 32-bit PowerPC CPUs, for example, yet a ppcle build is visible in the screenshot. Heck, there's even an s390x build. Parents, don't let your mainframes out unsupervised.
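If you want to check a suspicious sample yourself, the distinction is visible in the first few bytes of the file. Here's a minimal sketch in Python that reads the magic number to tell ELF from Mach-O, and for ELF also decodes the `e_machine` field to spot PowerPC targets (the magic values and field offsets come from the respective format specifications; the helper name and output strings are my own):

```python
import struct

# ELF files start with 0x7f 'E' 'L' 'F'; Mach-O files start with one of a
# few 32-bit magics (0xCAFEBABE is the "fat"/universal binary magic).
ELF_MAGIC = b"\x7fELF"
MACHO_MAGICS = {0xFEEDFACE, 0xFEEDFACF, 0xCAFEBABE}

def identify(path):
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] == ELF_MAGIC:
        # EI_DATA (byte 5): 1 = little-endian, 2 = big-endian
        endian = "<" if header[5] == 1 else ">"
        # e_machine is a 16-bit field at offset 18; EM_PPC = 20, EM_PPC64 = 21
        (machine,) = struct.unpack(endian + "H", header[18:20])
        arch = {20: "ppc", 21: "ppc64"}.get(machine, hex(machine))
        return "ELF (%s)" % arch
    if len(header) >= 4 and struct.unpack(">I", header[:4])[0] in MACHO_MAGICS:
        return "Mach-O"
    return "unknown"
```

A `ppc` ELF result, as with the sample above, means the binary is aimed at Linux-style systems rather than Mac OS X, magic-compatible with what the `file` utility would report.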

This is probably a good time to mention that I've been working on security patches for TenFourFox and a couple of minor feature adjustments, so stay tuned. It's been a while, but such are hobbies.

