Planet Mozilla
Updated: 1 day 21 hours ago

Will Kahn-Greene: RustConf 2020 thoughts

Fri, 28/08/2020 - 15:00

Last year, I went to RustConf 2019 in Portland. It was a lovely conference. Everyone I saw was so exuberantly happy to be there--it was just remarkable. It was my first RustConf. Plus, while I've been sort-of learning Rust for a while and work on things cursorily related to Rust (crash ingestion and debug symbols), I haven't really done any Rust work. Still, it was a remarkable and very exciting conference.

RustConf 2020 was entirely online. I'm in UTC-4, so it occurred during my afternoon and evening. I spent the entire time watching the RustConf 2020 stream and skimming the channels on Discord. Everyone I saw on the channels was so exuberantly happy to be there and supportive of one another--it was just remarkable. Again! Even virtually!

I missed the in-person aspect of a conference a bit. I've still got this thing about conferences that I'm getting over, so I liked that it was virtual because of that, and it also meant I didn't have to travel to attend.

I enjoyed all of the sessions--they're all top-notch! They were all pretty different in their topics and difficulty level. The organizers should get gold stars for the children's programming between sessions. I really enjoyed the "CAT!" sightings in the channels--that was worth the entrance fee.

This is a summary of the talks I wrote notes for.


Categories: Mozilla-nl planet

Mozilla Localization (L10N): L10n Report: August 2020 Edition

Fri, 28/08/2020 - 11:03

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

As you are probably aware, Mozilla just went through a massive round of layoffs. About 250 people were let go, reducing the overall size of the workforce by a quarter. The l10n-drivers team was heavily impacted, with Axel Hecht (aka Pike) leaving the company.

We are still in the process of understanding how the reorganization will affect our work and the products we localize. A first step was to remove some projects from Pontoon, and we’ll make sure to communicate any further changes in our communication channels.

Telegram channel and Matrix

The “bridge” between our Matrix and Telegram channels, i.e. the tool synchronizing content between the two, has been working in only one direction for a few weeks. For this reason, and given the unsupported status of this tool, we decided to remove it completely.

As of now:

  • Our Telegram and Matrix channels are completely independent from each other.
  • The l10n-community channel on Matrix is the primary channel for synchronous communications. The reason for this is that Matrix is supported as a whole by Mozilla, offering better moderation options among other things, and can be easily accessed from different platforms (browser, phone).

If you haven’t used Matrix yet, we encourage you to set it up following the instructions available in the Mozilla Wiki. You can also set an email address in your profile, to receive notifications (like pings) when you’re offline.

We plan to keep the Telegram channel around for now, but we might revisit this decision in the future.

New content and projects What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 81 is currently in beta and will be released on September 22nd. The deadline to update localization is on September 8.

In terms of content and new features, most of the changes are around the new modal print preview, which can currently be tested on Nightly.

What’s new or coming up in mobile

The new Firefox for Android has been rolled out at 100%! You should therefore either have been upgraded from the older version already (or will be shortly), or you can download it directly from the Play Store.

Congratulations to everyone who has made this possible!

For the next Firefox for Android release, we are expecting string freeze to start towards the end of the week, which will give localizers two weeks to complete localizing and testing.

Concerning Firefox for iOS: v29 strings have been exposed on Pontoon. We are still working out screenshots for testing with the iOS developers, but these should be available soon, as usual, from the Pontoon project interface.

On another note, and as mentioned at the beginning of this blog post, due to the recent lay-offs we have had to deactivate some projects in Pontoon. The deactivated mobile products are currently Scryer, Firefox Lite and Lockwise iOS. More may be added to this list soon, so stay tuned. Once more, thanks to all the localizers who have contributed their time and effort to these projects over the years. Your help has been invaluable for Mozilla.

What’s new or coming up in web projects Common Voice

The Common Voice team is greatly impacted by the changes in the recent announcement. The team has stopped the two-week sprint cycle and is working in maintenance mode right now. String updates and new language requests will take longer to process due to resource constraints.

Some other changes to the project before the reorg:

  • New site name: all traffic from the old domain will be forwarded to the new domain automatically.
  • New GitHub repo name mozilla/common-voice and new branch name main. All traffic to the previous repo voice-web will be forwarded directly to the new repo, but you may need to manually update your git remote if you have a local copy of the site running.

An updated firefox/welcome/page4.ftl with a new layout will be ready for localization in a few days. The turnaround time is rather short, so be on the lookout for it.

Along with this update is a temporary page called banners/firefox-daylight-launch.ftl that promotes Fenix. It will only be live for a few weeks, so please localize it as soon as possible. Once done, you will see the localized banner on production.

The star priority ratings in Pontoon have also been revised. The highest priority pages are firefox/all.ftl, firefox/new/*.ftl, firefox/whatsnew/*.ftl, and brands.ftl. The next priority level is the shared files. Unless a page has a hard deadline, the rest are normal priority with a 3-star rating, and you can take your time localizing them.

WebThings Gateway

The team was completely dissolved in the reorg. At the moment, the project will not take any new language requests or update the repo with changes from Pontoon. The project is actively working to move into a community-maintained state. We will update everyone as soon as that information becomes available.

What’s new or coming up in Foundation projects

The Foundation website homepage got a major revamp; strings have been exposed to the relevant locales in the Engagement and Foundation website projects. There’s no strict deadline, so you can complete this anytime. The content will be published live regularly, with a first push happening in a few days.

What’s new or coming up in Pontoon

Download Terminology as TBX

We’ve added the ability to download Terminology from Pontoon in the standardized TBX file format, which allows you to exchange it with other users and systems. To access the feature, click on your user icon in the top-right section of the translation workspace and select “Download Terminology”.
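TBX is an XML-based terminology exchange format, so a downloaded file can be processed with ordinary XML tooling. As a rough illustration (the sample entry below is hypothetical, and Pontoon's actual exports may use a different TBX dialect with extra metadata), here is how terms could be pulled out of a TBX document with Python's standard library:

```python
# Hedged sketch: parse a minimal, hypothetical TBX snippet with the
# standard library. Real Pontoon exports may differ in dialect/detail.
import xml.etree.ElementTree as ET

# ElementTree expands the reserved `xml:` prefix to this namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

sample_tbx = """<martif type="TBX" xml:lang="en">
  <text><body>
    <termEntry id="browser">
      <langSet xml:lang="en"><tig><term>browser</term></tig></langSet>
      <langSet xml:lang="nl"><tig><term>browser</term></tig></langSet>
    </termEntry>
  </body></text>
</martif>"""

root = ET.fromstring(sample_tbx)
terms = {}  # entry id -> {language code: term}
for entry in root.iter("termEntry"):
    by_lang = {}
    for lang_set in entry.iter("langSet"):
        term = lang_set.find("./tig/term")
        if term is not None:
            by_lang[lang_set.get(XML_LANG)] = term.text
    terms[entry.get("id")] = by_lang

print(terms)
```

This is the kind of processing that makes the TBX export useful for exchanging terminology with other tools.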

Improving Machinery with SYSTRAN

We have an update on the work we’ve been doing with SYSTRAN to provide you with better machine translation options in Pontoon.

SYSTRAN has published three NMT models (German, French, and Spanish) based on contributions of Mozilla localizers. They are available in the SYSTRAN Marketplace as free and open source and accessible to any existing SYSTRAN NMT customers. In the future, we hope to make those models available beyond the SYSTRAN system.

These models have been integrated with Pontoon and are available in the Machinery tab. Please report any feedback that you have for them, as we want to make sure these are a useful resource for your contributions.

We’ll be working with SYSTRAN to learn how to build the models for new language pairs in 2021, which should widely expand the language coverage.

Search with Enter

From now on you need to press Enter to trigger a search in the string list. This change unifies the behaviour of the string list search box and the Machinery search box, which despite their similar looks previously didn't work the same way: the former searched after the last keystroke (with a 500 ms delay), while the latter only searched after Enter was pressed.

Search on every keystroke is great when it’s fast, but string list search is not always fast. It becomes really tedious if more than 500 ms pass between the keystrokes and search gets triggered too early.
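As a rough sketch (illustrative only, not Pontoon's actual implementation), the old keystroke-triggered behaviour corresponds to a classic debounce: every keystroke restarts a timer, and the search only fires once typing pauses for the full delay:

```python
# Illustrative debounce sketch: each keystroke cancels the pending
# search and schedules a new one after the delay.
import threading
import time

class DebouncedSearch:
    def __init__(self, search_fn, delay=0.5):
        self.search_fn = search_fn
        self.delay = delay
        self._timer = None

    def keystroke(self, query):
        # Restart the countdown on every keystroke.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.search_fn, [query])
        self._timer.start()

searches = []
box = DebouncedSearch(searches.append, delay=0.05)
for query in ("f", "fi", "fir", "fire"):
    box.keystroke(query)  # each keystroke cancels the pending search
time.sleep(0.2)  # let the final timer fire
print(searches)  # only the last query triggers a search
```

When the search itself is slow, even this delay isn't enough, which is why an explicit Enter works better for the string list.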

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report).

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.


Daniel Stenberg: Enabling better curl bindings

Fri, 28/08/2020 - 10:47

I think it is fair to say that libcurl is a library that is very widely spread, widely used, and powers a sizable share of Internet transfers. Its age, its availability, its stability and its API all contribute to it having gotten to this position.

libcurl is in a position it could hold for a long time to come, as long as we don't do something wrong and stay focused on what we are and what we're here for. I believe curl and libcurl might still be very meaningful in ten years.

Bindings are key

One explanation for this is the large number of bindings to libcurl. A binding is a piece of code that allows libcurl to be used directly and conveniently from another programming language. Bindings are typically authored by enthusiasts of a particular language, to bring libcurl's powers to applications written in that language.

The list of known bindings we feature on the curl web site has around 70 bindings for some 62 different languages. You can access and use libcurl from (almost) any language you can dream of. I figure most mortals can't even name half that many programming languages! The list starts out with Ada95, Basic, C++, Ch, Cocoa, Clojure, D, Delphi, Dylan, Eiffel, Euphoria and it goes on for quite a while more.

Keeping bindings in sync is work

The bindings are typically written to handle transfers with libcurl as it was working at a certain point in time, knowing what libcurl supported at that moment. But as readers of this blog and followers of the curl project know, libcurl keeps advancing and we change and improve things regularly. We add functionality and new features in almost every new release.

This rather fast pace of development offers a challenge to binding authors, as they need to write the binding in a very clever way and keep up with libcurl developments in order to offer their users the latest libcurl features via their binding.

With libcurl being the foundational underlying engine for so many applications, and with the number of applications and services accessing libcurl via bindings being truly uncountable, this work of keeping bindings in sync is not insignificant.

If we can provide mechanisms in libcurl to ease that work and to reduce friction, it can literally affect the world.

“easy options” are knobs and levers

Users of libcurl know that one of the key functions in the API is the curl_easy_setopt function. Using this function call, the application sets specific options for a transfer, asking for certain behaviors: the URL to use, the user name, the authentication methods, where to send the output, how to provide the input, and so on.

At the time I write this, this key function features no less than 277 different and well-documented options. Of course we should work hard at not adding new options unless really necessary and we should keep the option growth as slow as possible, but at the same time the Internet isn’t stopping and as the whole world is developing we need to follow along.

Options generally come in one of a set of predefined kinds: a string, a numerical value, a list of strings, etc. But the names of the options, and knowledge of their existence, has always lived only in the curl source tree, requiring each binding to be synced with the latest curl in order to learn about the most recent knobs libcurl offers.

Until now…

Introducing an easy options info API

Starting in the coming version 7.73.0 (due to be released on October 14, 2020), libcurl offers API functions that allow applications and bindings to query it for information about all the options this libcurl instance knows about.

curl_easy_option_next lets the application iterate over options, to go through either all of them or a subset. For each option, there are details to extract that tell what kind of input data the option expects.

curl_easy_option_by_name allows the application to look up details about a specific option using its name. If the application instead has the internal “id” for the option, it can look it up using curl_easy_option_by_id.

With these new functions, bindings should be able to better adapt to the current run-time version of the library and become less dependent on syncing with the latest libcurl source code. We hope this will make it easier to make bindings stay in sync with libcurl developments.
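As an illustration of what a binding can do at run time with this API, here is a hedged sketch that enumerates the installed libcurl's options from Python via ctypes. It assumes libcurl 7.73.0 or later is present on the system; the struct layout mirrors the curl_easyoption struct from curl's options.h:

```python
# Sketch, assuming libcurl >= 7.73.0 is installed: enumerate easy
# options and look one up by name, the way a binding could at run time.
import ctypes
import ctypes.util

class CurlEasyOption(ctypes.Structure):
    # Mirrors struct curl_easyoption from curl/options.h.
    _fields_ = [
        ("name", ctypes.c_char_p),  # option name, e.g. b"URL"
        ("id", ctypes.c_int),       # CURLoption value
        ("type", ctypes.c_int),     # expected input kind (curl_easytype)
        ("flags", ctypes.c_uint),
    ]

OPT = ctypes.POINTER(CurlEasyOption)
libcurl = ctypes.CDLL(ctypes.util.find_library("curl"))
libcurl.curl_easy_option_next.restype = OPT
libcurl.curl_easy_option_next.argtypes = [OPT]
libcurl.curl_easy_option_by_name.restype = OPT
libcurl.curl_easy_option_by_name.argtypes = [ctypes.c_char_p]

# Iterate every option this libcurl instance knows about...
names = []
opt = libcurl.curl_easy_option_next(None)
while opt:  # a NULL pointer ends the iteration
    names.append(opt.contents.name.decode())
    opt = libcurl.curl_easy_option_next(opt)
print(len(names), "options, first few:", names[:3])

# ...or look a specific option up by its name.
url_opt = libcurl.curl_easy_option_by_name(b"URL")
print(url_opt.contents.name.decode())
```

The point is that none of this required compiling against curl's headers: the option list comes from the library actually loaded at run time.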

Legacy is still legacy

Lots of bindings have been around for a long time, and many of them of course still want to support libcurl versions much older than 7.73.0. Jumping onto this bandwagon of fancy new API will therefore not be an instant success, nor will it remove the code needed for that long tail of old versions everyone wants to keep supporting.

We can’t let the burden of legacy stand in the way of improvement and going forward. At least if you find that you are lucky enough to have 7.73.0 or later installed, you can dynamically figure out these things about options. Maybe down the line the number of legacy versions will shrink. Maybe if libcurl is still relevant in ten years, none of the pre-7.73.0 versions will need to be supported anymore!


Lots of the discussions and ideas for this API come from Jeroen Ooms, author of the R binding for libcurl.

Image by Rudy and Peter Skitterians from Pixabay


Mozilla VR Blog: Update on Mozilla Mixed Reality

Thu, 27/08/2020 - 21:29
Update on Mozilla Mixed Reality

The wider XR community has long supported the Mozilla Mixed Reality team, and we look forward to that continuing as the team restructures itself in the face of recent changes at Mozilla.

Charting the future with Hubs

Going forward we will be focusing much of our efforts on Hubs. Over the last few months we have been humbled and inspired by the thousands of community organizers, artists, event planners and educators who’ve joined the Hubs community. We are increasing our investment in this project, and the Hubs team is excited to welcome several new members from the Firefox Reality team. We are enthusiastic about the possibilities of remote collaboration, and look forward to making Hubs even better. If you are interested in sharing thoughts on new features or use-cases, we would love your input via our feedback form.

The state of Firefox Reality and WebXR

Having developed a solid initial Firefox Reality offering that brings the web to virtual reality, we are going to continue to invest in standards. We’ll also be supporting our partners, but in light of Covid-19 we have chosen to reduce our investment in broad new features at this time.

At the end of the month, we will release Firefox Reality v12 for standalone VR headsets, our last major release for a while. We’ll continue to support the browser (including security updates) and make updates to support Hubs and our partners. In addition, we’ll remain active in the Immersive Web standards group.

Two weeks ago, we released a new preview of Firefox Reality for PC, which we’ll continue to support. We’ll also continue to provide Firefox Reality for HoloLens, and it will remain accessible in the Microsoft store.

Finally, for iOS users, the WebXR Viewer will remain available, but it will no longer be maintained.

If anyone is interested in contributing to the work, we welcome open source contributions at:

We're looking forward to continuing our collaboration with the community and we'll continue to provide updates here on the blog, on the Mixed Reality Twitter, and the Hubs Twitter.


Mozilla Addons Blog: Disconnect’s road to success

Thu, 27/08/2020 - 17:10

Developers create extensions for a variety of reasons. Some are hobbyists who want to freely share their work with the world. Some find a way to turn their project into a small, independent business. Some companies build extensions as part of a business strategy. Earlier this year, we interviewed several add-on developers to learn more about the business models for their extensions. We learned a lot from those conversations, and have drawn on them to create upcoming experiments that we think will help developers succeed. We’ll be posting more information about participating in these experiments in the next few weeks.

In the meantime, we asked Disconnect CEO Casey Oppenheim to share his thoughts about what has made his company’s popular privacy-enhancing browser extension of the same name successful. Disconnect is an open-source extension that enables users to visualize and block third-party trackers. Together, Mozilla and Disconnect studied the performance benefits of blocking trackers and learned that tracking protection more than doubles page loading speeds. This work led us to build Enhanced Tracking Protection directly into Firefox in 2019 using Disconnect’s tracking protection list.

Today, Disconnect earns revenue by offering privacy apps at different price points and through partnerships with organizations like Mozilla. They have also experimented extensively with monetizing the Disconnect browser extension to support its development and maintenance. Following are some of the learnings that Casey shared.

Why did you decide to create this feature as an extension?

Extensions are a really powerful way to improve user privacy. Extensions have the ability to “see” and block network requests on any and all webpages, which gave us the ability to show users exactly what companies were collecting data about their browsing and to stop the tracking. Browser extensions also were a great fit for the protection we offer, because they allow developers to set different rules for different pages. So for example, we can block Facebook tracking on websites Facebook doesn’t own, but allow Facebook tracking on, so that we don’t break the user experience.
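A toy sketch of that kind of per-page rule may make it concrete (illustrative only; the domain table below is hypothetical and abbreviated, and Disconnect's real block list and matching logic are far more involved):

```python
# Toy sketch (not Disconnect's actual code or list format) of
# per-page blocking: block requests to a tracker's domains unless
# the first-party page is owned by the same company.
TRACKERS = {  # hypothetical owner -> domains table
    "Facebook": {"facebook.com", "facebook.net", "instagram.com"},
}

def owner_of(host):
    """Return the tracker owner for a hostname, if any."""
    for owner, domains in TRACKERS.items():
        if any(host == d or host.endswith("." + d) for d in domains):
            return owner
    return None

def should_block(page_host, request_host):
    tracker = owner_of(request_host)
    # Block only third-party tracking: same-owner requests pass.
    return tracker is not None and owner_of(page_host) != tracker

print(should_block("example.com", "connect.facebook.net"))      # third party
print(should_block("www.facebook.com", "connect.facebook.net"))  # first party
```

This is the sort of decision an extension can make per request because the browser hands it both the page's host and the request's host.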

What has contributed to Disconnect’s success?

Our whole team is sincerely passionate about creating great privacy products. We make the products we want to use ourselves and fortunately that approach has resonated with a lot of users. That said, user feedback is very important to us and some of our most popular features were based on user suggestions. In terms of user growth, we rely a lot on word of mouth and press coverage rather than paid marketing. Being featured on has given us great visibility and helped us reach a larger audience.

When did you decide to monetize your extension?

We began monetizing our extension in mid-2013, years before Firefox itself included tracker blocking. Since that time we have conducted several experiments, all based on voluntary payments; the extension has always been free to use.

Are there any tips you would want to share with developers about user acquisition or monetization?

We’ve learned a few lessons on this topic the hard way. Probably the most important is that it is very difficult to successfully monetize by interrupting the user flow. For example, we had the great idea of serving a notification inside the extension to try and get users to pay. The end result was terrible reviews and a bad user experience coupled with minimal increase in revenue. In our experience, trying to monetize in context (e.g., right after install) or passively (e.g., a button that is visible in the user interface) works better.

Is there anything else you would like to add?

Extensions are essential apps for billions of users. Developers should absolutely pursue monetization.

Thank you, Casey! 

The post Disconnect’s road to success appeared first on Mozilla Add-ons Blog.


The Firefox Frontier: 7 things to know (and love) about the new Firefox for Android

Thu, 27/08/2020 - 15:00

The newly redesigned Firefox browser for Android is here! The Firefox app has been overhauled and redesigned from the ground up for Android fans, with more speed, customization and privacy …

The post 7 things to know (and love) about the new Firefox for Android appeared first on The Firefox Frontier.


The Firefox Frontier: Get organized with Firefox Collections

Thu, 27/08/2020 - 15:00

The numbers aren’t in yet, but we’ll go out on a limb and say that we’ve been online more in 2020 than ever before. Of course we have! The internet …

The post Get organized with Firefox Collections appeared first on The Firefox Frontier.


Daniel Stenberg: tiny-curl 7.72.0 – Micrium

Thu, 27/08/2020 - 11:09

You remember my tiny-curl effort to port libcurl to more real-time operating systems? Back in May 2020 I announced it together with the port of tiny-curl to FreeRTOS.

Today I’m happy to bring you the news that tiny-curl 7.72.0 was just released. Now it also builds and runs fine on the Micrium OS.

Timed with this release, I changed the tiny-curl version number to match the curl release it is based on, and I’ve created a new dedicated section for tiny-curl on the curl web site:

Head over there to download.

Why tiny-curl

With tiny-curl you get an HTTPS-focused small library, that typically fits in 100Kb storage, needing less than 20Kb of dynamic memory to run (excluding TLS and regular libc needs).

You want to go with libcurl even in these tiny devices because your other options are all much, much worse. Lots of devices in this category (I call it “devices that are too small to run Linux”) basically go with some default example HTTP code from the OS vendor or similar. Sure, that can often be built into a much smaller footprint than libcurl, but you also get something that is very fragile and error-prone. With libcurl, and tiny-curl, you instead get:

  • the same API on all systems – porting your app over now or later becomes a smooth ride
  • a secure and safe library that’s been battle-proven, tested and checked a lot
  • the best documented HTTP library in existence
  • commercial support is readily available
tiny and upward

tiny-curl comes already customized to be as small as possible, but you always have the option to enable additional powers: by going up slightly in size you can add more features from the regular libcurl plethora of powerful offerings.


Nick Fitzgerald: WebAssembly Reference Types in Wasmtime

Thu, 27/08/2020 - 09:00

Note: I am cross-posting this article from the Bytecode Alliance blog to my personal blog.

A few weeks ago, I finished implementing support for the WebAssembly reference types proposal in Wasmtime. Wasmtime is a standalone, outside-the-Web WebAssembly runtime, and the reference types proposal is WebAssembly’s first foray beyond simple integers and floating point numbers, into the exciting world of garbage-collected references. This article will explain what the reference types proposal enables, what it leaves for future proposals, and how it is implemented in Wasmtime.

What are Reference Types?

Without the reference types proposal, WebAssembly can only manipulate simple integer and floating point values. It can’t take or return references to the host’s objects like, for example, a DOM node on a Web page or an open connection on a server. There are workarounds: for example, you can store the host objects in a side table and refer to them by index, but this adds an extra indirection, and implementing the side table requires cooperating glue code on the host side. That glue code, in particular, is annoying because it is outside of the Wasm sandbox, diluting Wasm’s safety guarantees, and it is host-specific. If you want to use your Wasm module on the Web, in a Rust host, and in a Python host, you’ll need three separate glue code implementations. This makes WebAssembly less attractive as a universal binary format.
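The side-table workaround might look something like this in host-side glue code (a minimal sketch; all names here are hypothetical):

```python
# Sketch of the pre-externref workaround: host-side glue keeps a side
# table of host objects and hands integer indices to the Wasm module.
import io

class HostSideTable:
    def __init__(self):
        self._objects = []

    def add(self, obj):
        # Hand out an integer index instead of the object itself.
        self._objects.append(obj)
        return len(self._objects) - 1

    def get(self, index):
        return self._objects[index]

table = HostSideTable()

# The glue wraps every host call: the Wasm module only ever sees ints.
def write_syscall_from_wasm(file_index, data):
    open_file = table.get(file_index)  # extra indirection on every call
    open_file.write(data)

f = io.BytesIO()
idx = table.add(f)                      # host passes an index into Wasm
write_syscall_from_wasm(idx, b"Hello!")  # Wasm calls back with the index
print(f.getvalue())
```

Every host (JavaScript, Rust, Python, …) has to reimplement exactly this kind of table, which is the glue code the proposal eliminates.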

With the reference types proposal, you don’t need glue code to interact with host references. The proposal has three main parts:

  1. A new externref type, representing an opaque, unforgeable reference to a host object.

  2. An extension to WebAssembly tables, allowing them to hold externrefs in addition to function references.

  3. New instructions for manipulating tables and their entries.

With these new capabilities, Wasm modules can talk about host references directly, rather than requiring external glue code running in the host.

externrefs play nice with WebAssembly’s sandboxing properties:

  • They are opaque: a Wasm module cannot observe an externref value’s bit pattern. Passing a reference to a host object into a Wasm module doesn’t reveal any information about the host’s address space and the layout of host objects within it.

  • They are unforgeable: a Wasm module can’t create a fake host reference out of thin air. It can only return either a reference you already gave it or the null reference. It cannot pretend that the integer value 0x1bad2bad is a valid host reference, return it to you, and trick you into dereferencing this invalid pointer.


Here’s a Wasm module that exports a hello function which takes, as an externref parameter, a reference to an open file and writes “Hello, Reference Types!” to the file. This module imports the write syscall from a hypothetical future version of WASI, the WebAssembly System Interface, that leverages reference types.

;; hello.wat
(module
  ;; Import the write syscall from a hypothetical (and
  ;; simplified) future version of WASI.
  ;;
  ;; It takes a host reference to an open file, an address
  ;; in memory, and a byte length and then writes
  ;; `memory[address..(address+length)]` to the file.
  (import "future-wasi" "write"
    (func $write (param externref i32 i32) (result i32)))

  ;; Define a memory that is one page in size (64KiB).
  (memory (export "memory") 1)

  ;; At offset 0x42 in the memory, define our data string.
  (data (i32.const 0x42) "Hello, Reference Types!\n")

  ;; Define a function that writes our hello string to a
  ;; given open file handle.
  (func (export "hello") (param externref)
    (call $write
      ;; The open file handle we were given.
      (local.get 0)
      ;; The address of our string in memory.
      (i32.const 0x42)
      ;; The length of our string in memory.
      (i32.const 24))
    ;; Ignore the return code.
    drop))

This Wasm module will run in any WebAssembly environment where WASI is available.0 We don’t need glue code that is maintaining a side table, mapping host references to indices.


Python

For example, we can run hello.wat from a Python host environment without any glue code:

from wasmtime import *
import future_wasi

# Initial configuration, enabling reference types.
config = Config()
config.wasm_reference_types = True
engine = Engine(config)
store = Store(engine)

# Compile our `hello.wat` module.
module = Module.from_file(engine, "./hello.wat")

# Create an instance of the module and give it access
# to our `future-wasi.write` polyfill.
instance = Instance(store, module, [future_wasi.write(store)])

# Finally, open a file for writing and pass it into
# our Wasm instance's `hello` function!
with open("hello.txt", "wb") as open_file:
    instance.exports["hello"](open_file)

Rust

And we can run the exact same hello.wat from a Rust host and without any Rust-specific bindings or glue:

use anyhow::Result;
use std::{cell::RefCell, fs};
use wasmtime::*;

mod future_wasi;

fn main() -> Result<()> {
    // Initial configuration, enabling reference types.
    let mut config = Config::new();
    config.wasm_reference_types(true);
    let engine = Engine::new(&config);
    let store = Store::new(&engine);

    // Compile our `hello.wat` module.
    let module = Module::from_file(
        &engine,
        "./hello.wat",
    )?;

    // Create an instance of the module and give it
    // access to our `future-wasi.write` polyfill.
    let instance = Instance::new(&store, &module, &[
        future_wasi::write(&store).into(),
    ])?;

    // Get the instance's `hello` function export.
    let hello = instance
        .get_func("hello")
        .ok_or(anyhow::format_err!(
            "failed to find `hello` function export"
        ))?
        .get1::<Option<ExternRef>, ()>()?;

    // Open a file for writing and pass it into our Wasm
    // instance's `hello` function.
    let file = ExternRef::new(RefCell::new(
        fs::File::create("hello.txt")?,
    ));
    hello(Some(file))?;

    Ok(())
}

Running hello.wat — once again, without module-specific glue code — is just as easy in .NET and Go host environments, and is left as an exercise for the reader ;)

On the Web

Unlike the other host environments we’ve considered, WASI isn’t natively implemented on the Web. There’s nothing stopping us, however, from polyfilling WASI APIs with a little bit of JavaScript and a couple DOM methods! This is still an improvement because there is overall less module-specific glue code. Once one person has written the polyfills, everyone’s Wasm modules can reuse them.

There are many different things an “open file” could be modeled by on the Web. For this demo, we’ll use a DOM node: writing to it will append text nodes. This works well because we know our module is only writing text data. If we were working with binary data, we would choose another polyfilling approach, like in-memory array buffers backing the file data.

Here is the JavaScript code to run our hello.wat module and polyfill our future-wasi.write function:

// The DOM node we will use as an "open file".
const output = document.getElementById("output");

async function main() {
  // Define the imports we are giving to the Wasm
  // module: our `future-wasi.write` polyfill.
  const imports = {
    "future-wasi": {
      write(domNode, address, length) {
        // Get `memory[address..address + length]`.
        const memory = new Uint8Array(
          instance.exports.memory.buffer
        );
        const data = memory.subarray(
          address,
          address + length
        );

        // Convert it into a string.
        const decoder = new TextDecoder("utf-8");
        const text = decoder.decode(data);

        // Finally, append it to the given DOM node that
        // is our "open file".
        const textNode = document.createTextNode(text);
        domNode.appendChild(textNode);
      }
    }
  };

  // Fetch and instantiate our Wasm module.
  const response = await fetch("./hello.wasm");
  const wasmBytes = await response.arrayBuffer();
  const { instance } = await WebAssembly.instantiate(
    wasmBytes,
    imports
  );

  // Call its exported `hello` function with a DOM node.
  instance.exports.hello(output);

  // Every time the button is clicked, call the exported
  // `hello` function again.
  const button = document.getElementById("call-hello");
  button.removeAttribute("disabled");
  button.addEventListener("click", () => {
    instance.exports.hello(output);
  });
}

main().catch(e => {
  output.textContent = `${e}\n\nStack:\n${e.stack}`;
});

You can view a live demo of this code in the iframe below. It should work in Firefox 79+, or any other browser that supports reference types (at the time of writing, Chrome has support for an older version of the reference types proposal behind the --experimental-wasm-anyref flag, and Safari has in-progress support behind the JSC_useWebAssemblyReferences flag.)

What Comes After the Reference Types Proposal?

The in-progress WebAssembly type imports proposal builds on reference types to let you distinguish between different kinds of host objects: database connections and open files would be separate types. Or, on the Web, Wasm would distinguish JavaScript Promises from DOM nodes. It also adds references that point to types defined by other Wasm modules, not just host objects.

WASI will want to adopt unforgeable reference types for file handles, as the examples above suggest, instead of using integers. It might make sense for WASI to wait for type imports, however, so it can use separate types for, say, an open file and a source of entropy. This is a decision for the WASI subgroup of the WebAssembly Community Group. Either way, when WASI does make this switch, it will continue to support integer file descriptors, but implemented inside the Wasm sandbox as indices into an externref Wasm table.
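
As a sketch of that last point, integer file descriptors layered over unforgeable references can be modeled as indices into a table of host handles. The FdTable type below is a toy for illustration only, not Wasmtime's or WASI's actual implementation:

```rust
// Toy sketch: integer file descriptors as indices into a table of
// opaque host handles. `FdTable` and its methods are hypothetical.
struct FdTable<T> {
    slots: Vec<Option<T>>,
}

impl<T> FdTable<T> {
    fn new() -> Self {
        FdTable { slots: Vec::new() }
    }

    // "Opening" a handle assigns it the next integer fd.
    fn insert(&mut self, handle: T) -> usize {
        self.slots.push(Some(handle));
        self.slots.len() - 1
    }

    // Wasm-facing calls resolve the integer back to the handle.
    fn get(&self, fd: usize) -> Option<&T> {
        self.slots.get(fd).and_then(|slot| slot.as_ref())
    }

    // Closing drops the reference so the host can reclaim the object.
    fn close(&mut self, fd: usize) {
        if let Some(slot) = self.slots.get_mut(fd) {
            *slot = None;
        }
    }
}
```

The sandboxed module only ever sees the integer index; the unforgeable reference itself never leaves the table.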

The interface types and module linking proposals promise to remove even more glue code and bindings. The interface types proposal lets Wasm modules exchange rich, structured types with each other or with the host. It translates between interface types and core Wasm types, while keeping modules encapsulated from each other. For example, a module can exchange in-memory strings and arrays without exporting the whole memory or exposing allocator methods, both of which are currently required without interface types. The module linking proposal makes modules and instances first class values, allowing them to be defined, imported, exported, and instantiated within other WebAssembly modules. This functionality previously relied on the host’s cooperation, requiring a snippet of JavaScript to fetch and instantiate modules, hooking up their imports and exports.

Even further in the future, full support for garbage-collected objects in WebAssembly will come eventually.

Implementation Details

Now we’ll dive into the details of Wasmtime’s reference types implementation.

With externrefs, the host gives Wasm limited access to host objects, but the host also needs to know when Wasm is finished with those objects, so it can clean them up. That clean up might involve closing a file handle or simply deallocating memory. Statically running clean up routines once Wasm returns to the host is attractive, but unfortunately flawed: Wasm can persist an externref into a table or global so that it outlives the function call. We do not, therefore, know ahead of time whether Wasm is still holding references to host objects or not. We require a dynamic method for determining which objects are still in use and which are not. This typically involves either reference counting or a tracing collector or a combination of the two.

This is all to say that implementing the reference types proposal introduced a garbage collector in Wasmtime.

Much of WebAssembly’s value rests on its predictable performance and minimal nondeterminism. Garbage collectors are infamous for their unpredictable pauses, and, if they support finalizers or weak references, nondeterministic behavior. Implementing a collector to support reference types required reconciling these contradictions as much as possible, while also balancing implementation complexity.

The design we settled upon is a deferred reference counting collector, with Cranelift, Wasmtime’s just-in-time (JIT) compiler, producing stack maps for precisely identifying roots inside of Wasm frames. This approach balances predictable latency with throughput. It is more complex than conservative stack scanning, but avoids the nondeterminism inherent in that approach. It also has the desirable property that if a Wasm module doesn’t use reference types, then it can’t trigger garbage collection and the runtime will not impose any other related overhead.

We chose reference counting over tracing garbage collection for a few reasons. First, it lends itself to short and predictable GC pauses. Second, adding reference counting to a runtime that wasn’t built from the ground up with a garbage collector in mind is easier than adding a tracing collector.

Excavating exactly what it is that makes reference counting easier to add post-facto is a worthwhile exercise, because it has implications for the whole runtime and for any application intending to embed it. Tracing collectors identify all live objects, starting from known-live roots; any object that was not found to be live is, by definition, no longer in use, and is therefore safe to reclaim. Reference counting collectors work the other way around: they start from known-dead roots, find all the dead objects reachable only from those roots, and then reclaim these known-dead objects. Consider what happens, with each type of collector, if an object A fails to reveal to the collector that it is referencing another object B. With a tracing collector, if no other object is referencing B, the collector will conclude that B is not live and that it is safe to reclaim B. But now A can use B after it has been reclaimed, leading to use-after-free memory unsafety. With a reference counting collector, on the other hand, if A does not reveal that it holds the only reference keeping B alive, then B’s reference count never reaches zero, the collector never reclaims B, and the object is permanently leaked. While leaking B is not ideal, a leak is much safer than a dangling pointer. Reference counting collectors are easier to add to an existing runtime, and make embedding that runtime in larger applications easier, because they have a better failure mode than tracing collectors do: by default, reference counting collectors safely leak, while tracing collectors unsafely allow use-after-free.

Deferred Reference Counting

In normal reference counting, each time a new reference to an object is created, the object’s reference count is incremented, and each time a reference to the object is dropped, its reference count is decremented. When the reference count reaches zero, the object is deallocated. In deferred reference counting, these increments and decrements are not performed immediately, but are deferred until a later time and then processed together in bulk. This assuages one of reference counting’s biggest weaknesses, the overhead of frequently modifying reference counts, by trading prompt deallocation for better throughput.

With naïve reference counting, every single local.get and local.set WebAssembly instruction operating on a reference type needs to manipulate reference counts. This leads to many increments and decrements, most of which are redundant. It also requires code (known as landing pads) for decrementing the reference counts of the externrefs inside each frame during stack unwinding — which can happen, for example, because of an out-of-bounds heap access trap.

By using deferred reference counting for externrefs inside of Wasm frames, we don’t increment or decrement the reference counts at all, unless a reference escapes a call’s stack frame by being stored into a table or global. Additionally, we don’t need to generate landing pads for unwinding because deferred reference counting can already tolerate slack between when a reference to an object is created or dropped and when the object’s reference count is adjusted to reflect that.

Wasmtime implements deferred reference counting by maintaining an over-approximation of the set of externrefs held alive by Wasm stack frames. We call this the VMExternRefActivationsTable. When we pass an externref into Wasm, we insert it into the table. We do not update the table as Wasm runs, so as execution drops references, the table becomes an over-approximation of the set of externrefs actually present on the stack.

Garbage collection is then composed of two phases. In the first phase, we walk the stack, building a set of externrefs currently held alive by Wasm stack frames. This is the precise set that the VMExternRefActivationsTable approximates. The second phase reconciles the difference between the over-approximation and the precise set, decrementing the reference count for each object that is in the over-approximation but not in the precise set. At the end of this phase, we reset the VMExternRefActivationsTable to the new precise set.
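
A toy model of that second, reconciliation phase may make it concrete. Here, plain integer ids and a hash map stand in for real externrefs and their reference counts; this is an illustration, not Wasmtime's code:

```rust
use std::collections::{HashMap, HashSet};

// Decrement the count of every object that is in the over-approximation
// but not in the precise stack-scan set; return the ids whose count
// reached zero (the objects that may now be reclaimed), then reset the
// over-approximation to the precise set.
fn gc_reconcile(
    counts: &mut HashMap<u32, u32>,
    over_approx: &mut HashSet<u32>,
    precise: &HashSet<u32>,
) -> Vec<u32> {
    let mut dead = Vec::new();
    for &id in over_approx.difference(precise) {
        let count = counts.get_mut(&id).expect("tracked object");
        *count -= 1;
        if *count == 0 {
            dead.push(id);
        }
    }
    for id in &dead {
        counts.remove(id);
    }
    // The table becomes the new precise set.
    *over_approx = precise.clone();
    dead.sort();
    dead
}
```

Objects that the stack scan still sees keep their counts untouched; only the stale entries in the over-approximation are decremented.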

If we are not careful with how we schedule the deferred processing of reference counts, we risk introducing nondeterminism. Using a timer-based GC schedule, for example, means that we are at the whims of the operating system’s thread scheduler and a vast variety of other factors that perturb how much we have executed in a given amount of time. Instead, we trigger GC whenever the VMExternRefActivationsTable reaches capacity, or whenever GC is explicitly requested by the embedder application. As long as the embedder triggers GC deterministically, we maintain deterministic GC scheduling, and execution remains deterministic even in the face of finalizers.

It is worth noting that outside of Wasm frames, in native VM code implemented in Rust, we use regular, non-deferred reference counting: cloning increments the count and dropping decrements it. However, Rust’s moves and borrows let us avoid some of the associated overhead. Moving an externref transfers ownership without affecting the reference count. Borrowing an externref lets us safely use the value without incrementing its reference count, or at least delays the increment until the borrowed reference is cloned.
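
Rust's standard Rc shows the same effect concretely; it is used here only as a stand-in for Wasmtime's internal reference-counted type:

```rust
use std::rc::Rc;

// Taking the Rc by value moves it: ownership transfers, no count change.
fn take_ownership(r: Rc<String>) -> Rc<String> {
    r
}

// Borrowing lets us use the value without touching the count.
fn peek(r: &Rc<String>) -> usize {
    r.len()
}

fn demo() -> (usize, usize, usize) {
    let a = Rc::new(String::from("open file"));
    let before = Rc::strong_count(&a);      // 1: just the owner
    let b = Rc::clone(&a);                  // clone increments
    let after_clone = Rc::strong_count(&a); // 2
    let a = take_ownership(a);              // move: no change
    let after_move = Rc::strong_count(&a);  // still 2
    let _ = peek(&a);                       // borrow: no change
    drop(b);
    (before, after_clone, after_move)
}
```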

Stack Maps

For the collector to find the GC roots within a stack frame, the compiler must emit stack maps. A stack map records which words in a stack frame contain live externrefs at each point in the function where GC may occur. The stack maps, taken together, logically record a table of the form:

Instruction Address | Offsets of Live GC References
------------------- | -----------------------------
0x12345678          | 2, 6, 12
0x1234abcd          | 2, 6
...                 | ...

The offsets denoting where live references are stored within a stack frame are relative to the frame’s stack pointer and are expressed in units of words. Since garbage collection can only occur at certain points within a function, the table is sparse and only has entries for instruction addresses where GC is possible.

Because offsets of live GC references are relative to the stack pointer, and because stack frames grow down from higher addresses to lower addresses, to get a pointer to a live reference at offset x within a stack frame, the collector adds x to the frame’s stack pointer. For example, to calculate the pointer to the live GC reference inside “frame 1” below, the collector would compute frame_1_sp + x:

            Stack
       +-------------------+
       |      Frame 0      |
       |                   |
       |                   |
       |                   |
       +-------------------+ <--- Frame 0's SP
   |   |      Frame 1      |
Grows  |                   |
   |   |                   |
 down  | Live GC reference | --+--
   |   |                   |   |
   |   |                   |   |
   |   |                   |   |  x = offset of live GC ref
   V   |                   |   |
       |                   |   |
       +-------------------+ --+-- <--- Frame 1's SP
       |      Frame 2      |
       |        ...        |

Each individual stack map is associated with just one instruction address within a compiled Wasm function, contains the size of the stack frame, and represents the stack frame as a bitmap. There is one bit per word in the stack frame; if the bit is set, then the word contains a live GC reference.
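
As a sketch (hypothetical helper names, not Wasmtime's API), decoding such a bitmap into live word offsets, together with the frame_sp + x address computation described above, looks like:

```rust
// Decode a stack map bitmap: bit `i` set means the word at offset `i`
// (relative to the frame's stack pointer) holds a live GC reference.
fn live_offsets(bitmap: u64, frame_size_in_words: u32) -> Vec<u32> {
    (0..frame_size_in_words)
        .filter(|&i| bitmap & (1u64 << i) != 0)
        .collect()
}

// The address of the live reference at word offset `x` in a frame.
fn gc_ref_addr(frame_sp: usize, x: usize) -> usize {
    frame_sp + x * std::mem::size_of::<usize>()
}
```

With bits 2, 6, and 12 set, this recovers exactly the offsets shown in the example table above.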

The actual stack walking functionality is provided by the backtrace crate, which wraps libunwind on unix-like platforms and DbgHelp on Windows. To assist in stack walking, Cranelift emits the platform’s unwind information: .eh_frame on unix-like platforms and structured exception handling for Windows.

Inline Fast Paths for GC Barriers

Despite our deferred reference counting scheme, compiled Wasm code must occasionally manipulate reference counts and run snippets of code called GC barriers. By running GC barriers, the program coordinates with the collector, helping it keep track of objects.

Because of our deferred reference counting, Wasmtime does not need barriers for references that stay within a Wasm function’s scope, but barriers are still needed whenever a reference enters or escapes the function’s scope. Recall that a reference can escape the scope when written into a global or table. It can, similarly, enter a Wasm call’s scope when read from a global or table. Therefore, reading and writing to globals and tables requires barriers.

The barriers for writing a reference into a table slot and a global are similar so, for simplicity, I’ll just refer to tables from now on. These write barriers are responsible for ensuring that:

  1. The new object’s reference count is incremented now that the table is holding a reference to it.

  2. The table element’s prior value, if any, has its reference count decremented.

  3. If the prior element’s reference count reaches zero, then its destructor is called and its memory block is deallocated.

There are two subtleties here. First, steps 1 and 2, although they may seem independent, must be performed in the given order. If their order were reversed, then if the new object assigned to the table slot and old object currently in the table slot are the same, we would:

  • Decrement the object’s reference count from one to zero.

  • Run the object’s destructor and deallocate it.

  • Re-assign a reference to the (now freed) object into the table slot.

  • Increment the (now freed) object’s reference count.

That’s a use-after-free bug! To avoid it, we must always increment the new object’s reference count before decrementing the old object’s reference count. This way if they are the same object then the reference count never reaches zero.
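
Here is that ordering in a toy model, with integer ids and explicit counts standing in for real objects (illustrative only, not Wasmtime's barrier code). The increment in step 1 happens before the decrement in step 2, so a self-assignment never drops the count to zero:

```rust
use std::collections::HashMap;

// Toy write barrier over a table slot of refcounted object ids.
// Returns the ids of objects whose count dropped to zero.
fn table_set(
    counts: &mut HashMap<u32, u32>,
    slot: &mut Option<u32>,
    new_value: Option<u32>,
) -> Vec<u32> {
    let mut destroyed = Vec::new();
    // 1. Increment the new value first...
    if let Some(new) = new_value {
        *counts.entry(new).or_insert(0) += 1;
    }
    // 2. ...then decrement the old value.
    if let Some(old) = slot.take() {
        let count = counts.get_mut(&old).expect("tracked object");
        *count -= 1;
        // 3. Destroy only if the count actually reached zero.
        if *count == 0 {
            counts.remove(&old);
            destroyed.push(old);
        }
    }
    *slot = new_value;
    destroyed
}
```

Swapping steps 1 and 2 in this model would destroy an object that is being re-assigned into its own slot, which is exactly the use-after-free described above.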

The second subtlety is that an object’s destructor can do pretty much anything, including touch the table slot we are currently running GC barriers for. If we encounter this kind of reentrancy, we want the destructor to see the new table element. We do not want it to see the half-deinitialized object and let it attempt to resurrect the object.

Most barrier executions operate on non-null references, and most executions don’t decrement a reference count to zero and destroy an object. Therefore, the JIT emits the reference counting operations inline, and only calls out from Wasm to VM code when destroying an object:

inline table_set_barrier(table, index, new_value):
    if new_value is not null:
        new_value->ref_count++

    let old_value = table[index]
    table[index] = new_value

    if old_value is not null:
        old_value->ref_count--
        if old_value->ref_count is zero:
            call out_of_line_destroy(old_value)

The other scenario for which Wasmtime requires GC barriers is when Wasm reads a reference from a table (or global), causing the reference to enter the scope of the Wasm call. The barrier’s responsibility is to ensure that these references are safely held alive by the VMExternRefActivationsTable. The VMExternRefActivationsTable has a simple bump allocation chunk to support fast insertions from inline JIT code. We maintain a next finger pointing within that bump chunk, and an end pointer pointing just after it. The next finger is where we will insert the next new entry into the bump chunk, unless next is equal to end, which means that the bump chunk is at full capacity. When that happens, we are forced to call an out-of-line slow path that will trigger GC to free up space.

inline table_get_barrier(table, index):
    let elem = table[index]
    if elem is not null:
        let (next, end) = bump region
        if next != end:
            elem->ref_count++
            *next = elem
            next++
        else:
            call out_of_line_insert_and_gc(elem)
    return elem

Conclusion

The reference types proposal is WebAssembly’s first expansion beyond simple integers and floating point numbers, requiring that Wasmtime grow a garbage collector. It also cuts down on the amount of module-specific and host-specific glue code. Future proposals, like interface types and module linking, should completely remove the need for such glue.

Thanks to Alex Crichton for reviewing the bulk of this work, and exposing reference types in the wasmtime-go API. Thanks to Dan Gohman for reviewing the code that implements the inline fast paths for GC barriers and the cranelift-wasm interface changes it required. Thanks to Peter Huene for exposing reference types in wasmtime-dotnet. Finally, thanks to Jim Blandy, Dan Gohman, and Alex Crichton for reading early drafts of this article and providing valuable feedback.

[0] However, since this is a hypothetical future version of WASI, we will need to temporarily define our own version of the write syscall. We will, furthermore, need to define this write polyfill once for each host.

Python's future-wasi.write polyfill:

from wasmtime import *

def _write_impl(caller, file, address, length):
    # Get the module's memory export.
    memory = caller.get("memory")
    if not isinstance(memory, Memory):
        return -1

    # Check that the range of data to write is in bounds.
    if address + length > memory.data_len:
        return -1

    # Read the data from memory and then write it to the
    # file.
    data = memory.data_ptr[address : address + length]
    try:
        file.write(bytes(data))
        return 0
    except IOError:
        return -1

def write(store):
    return Func(
        store,
        FuncType(
            [ValType.externref(), ValType.i32(), ValType.i32()],
            [ValType.i32()],
        ),
        _write_impl,
        access_caller = True,
    )

Rust's future-wasi.write polyfill:

use std::{cell::RefCell, fs, io::Write};
use wasmtime::*;

fn write_impl(
    caller: Caller,
    file: Option<ExternRef>,
    address: u32,
    len: u32,
) -> i32 {
    // Get the module's exported memory.
    let memory = match caller
        .get_export("memory")
        .and_then(|export| export.into_memory())
    {
        Some(m) => m,
        None => return -1,
    };
    let memory = unsafe { memory.data_unchecked() };

    // Get the slice of data that will be written to
    // the file.
    let start = address as usize;
    let end = start + len as usize;
    let data = match memory.get(start..end) {
        Some(d) => d,
        None => return -1,
    };

    // Downcast the `externref` into an open file.
    let file = match file
        .as_ref()
        .and_then(|file| file.data().downcast_ref::<RefCell<fs::File>>())
    {
        Some(f) => f,
        None => return -1,
    };

    // Finally, write the data to the file!
    let mut file = file.borrow_mut();
    match file.write_all(data) {
        Ok(_) => 0,
        Err(_) => -1,
    }
}

pub fn write(store: &Store) -> Func {
    Func::wrap(&store, write_impl)
}

These polyfills are only required until WASI is updated to leverage reference types.

Categories: Mozilla-nl planet

Mozilla Performance Blog: Four pillars of Android performance

Thu, 27/08/2020 - 08:08

This summer, I had the pleasure of interning at Mozilla with the Android Performance Team. Previously, I had some experience with Android, but not particularly with the performance aspect, apart from some basic performance optimizations. Throughout the internship, my perspective on the importance of Android performance changed. I learned that we could improve performance by looking at the codebase through the lens of the four pillars of Android performance. In this blog, I will describe those four pillars: parallelism, prefetching, batching, and improving XML layouts.


Parallelism

Parallelism is the idea of executing multiple tasks simultaneously so that the overall time to run a program is shorter. Many tasks have no particular reason to run on the main UI thread and can be performed on other threads. For example, disk reads on the main thread are almost always frowned upon, and rightfully so: they are generally very time-consuming and can block the main thread. It is often helpful to look through your codebase and ask: does this need to be on the main thread? If not, move it to another thread. The main thread’s only responsibilities should be to update the UI and handle user interactions.

We are used to parallelism through multi-threading in languages such as Java and C++. However, multi-threaded code has several disadvantages: it is more complex to write and understand, it can be harder to test, it is subject to deadlocks, and thread creation is costly. Enter coroutines! Kotlin’s coroutines are runnable tasks that we can execute concurrently. They are like lightweight threads, which can be suspended and resumed quickly. Structured concurrency, as presented in Kotlin, makes it easier to reason about concurrent applications. And when the code is easier to read, it’s easier to focus on the performance problems.

Kotlin’s coroutines are dispatched on specific threads. Here are the four dispatchers for coroutines.

  • Main
    • Consists of only the Main UI thread.
    • A good rule of thumb is to avoid putting any long-running jobs in this thread, so the jobs do not block the UI.
  • IO
    • Expected to be waiting on IO operations most of the time.
    • Useful for long-running tasks such as Network calls.
  • Default
    • Default when no dispatcher is provided.
    • Optimized for intensive CPU workloads.
  • Unconfined
    • Not restrained to any specific thread or thread-pool.
    • Coroutine dispatched through the Unconfined dispatcher is executed immediately.
    • Used when we do not care about what thread the code runs on.

Furthermore, the function withContext() is optimized for switching between thread-pools. Therefore, you can perform an IO operation on the IO thread and switch to the main thread for updating the UI. Since the system does thread management, all we need to do is tell the system which code to run on which thread pool through the dispatchers.

fun fetchAndDisplayUsers() {
    scope.launch(IO) {
        // fetch users inside the IO thread
        val users = fetchUsersFromDB()
        withContext(Main) {
            // update the UI inside the main thread
            displayUsers(users)
        }
    }
}


Prefetching

Prefetching is the idea of fetching resources early and storing them in memory for faster access when the data is eventually needed. It is a prevalent technique in computer processors, which move data from slow storage into fast-access storage before the data is required. A standard pattern is to do the prefetching while the application is in the background. One example of prefetching is making network calls in advance and storing the results locally until needed. Prefetching, of course, needs to be balanced. Suppose the application is trying to provide a smooth scrolling experience that relies on prefetched data: if you prefetch too little, it’s not going to be very useful, since the application will still spend a lot of time making network calls; prefetch too much, however, and you run the risk of making your users wait and potentially draining the battery.

An example of prefetching in the Fenix codebase is warming up the BrowsersCache inside FenixApplication (our main Application class).

Getting all the browser information in advance since it’s used all over the place.


Batching

Batching is the idea of grouping tasks together so they execute sequentially, without paying the overhead of setting up each execution separately. For example, in Room, the Android database library, you can insert a list of objects as a single transaction (batching), which is faster than inserting the items one by one. You can also batch network calls to save precious network resources, saving battery in the process. In Volley, Android’s HTTP library, you can buffer multiple web requests and add them to a single RequestQueue instance.

An example of batching in Fenix codebase is a visualCompletenessQueue, which is used to store up all the tasks that are needed to run after the first screen is visible to the user. Tasks include warming up the history storage, initializing the account manager, etc.

Attaching a VisualCompletenessQueue to the view to execute once the screen is visible.

XML Layouts

Let’s talk about the importance of improving XML layouts. Suppose the frame rate is 30 FPS: we have roughly 33 milliseconds to draw each frame. If the drawing is not complete in that time, we consider the frame dropped. Dropped frames are what make a UI laggy and unreliable; the more frames we drop, the more unstable the UI becomes. Poorly optimized XML layouts can lead to a choppy-looking UI. In general, these issues fall into two categories: heavily nested view hierarchies (a CPU problem) and overdrawing (a GPU problem).

Heavily nested view hierarchies can be reasonably simple to flatten. The trickier part is not overdrawing the UI. For example, if a UI component is fully hidden by other components, it is unnecessary to waste GPU power drawing it in the background. For instance, it is wasteful to draw a background for a layout that is entirely covered by a RecyclerView. Android has tools such as the Layout Inspector to help you improve the UI. Additionally, under the developer options on Android phones, there are many features for debugging the UI, such as showing GPU overdraw on the screen.


Conclusion

Paying attention to the application through the lens of parallelism, prefetching, batching, and improved XML layouts will help the application perform better. These fundamentals are often overlooked. Some developers rely entirely on garbage collection for memory cleanup and optimization, without realizing that the more often garbage collection runs, the worse the user experience becomes: the application’s main thread is stopped while GC is running, which can result in frames not being drawn in time, creating a laggy UI. Hence, using the four pillars of performance as a guide, we can avoid many performance issues before they appear.


The Talospace Project: Firefox 80 on POWER

Thu, 27/08/2020 - 06:27
Firefox 80 is available, and we're glad it's here considering Mozilla's recent layoffs. I've observed in this blog before that Firefox is particularly critical to free computing, not just because of Google's general hostility to non-mainstream platforms but also the general problem of Google moving the Web more towards Google.

I had no issues building Firefox 79 because I was still on rustc 1.44, but rustc 1.45 asserted while compiling Firefox, as reported by Dan Horák. This was fixed with an llvm update, and with Fedora 32 up to date as of Sunday and using the most current toolchain available, Firefox 80 built out of the box with the usual .mozconfigs.

Since there was a toolchain update, I figured I would try out link-time optimization again since a few releases had elapsed since my last failed attempt (export MOZ_LTO=1 in your .mozconfig). This added about 15 minutes of build-time on the dual-8 Talos II to an optimized build, and part of it was spent with the fans screaming since it seemed to ignore my -j24 to make and just took over all 64 threads. However, it not only builds successfully, I'm typing this post in it, so it's clearly working. A cursory benchmark with Speedometer 2.0 indicated LTO yielded about a 4% improvement over the standard optimized build, which is not dramatic but is certainly noticeable. If this continues to stick, I might try profile-guided optimization for the next release. The toolchain on this F32 system is rustc 1.45.2, LLVM 10.0.1-2, gcc 10.2.1 and GNU ld.bfd 2.34-4; your mileage may vary with other versions.

There's not a lot new in this release, but WebRender is still working great with the Raptor BTO WX7100, and a new feature available in Fx80 (since Wayland is a disaster area without a GPU) is Video Acceleration API (VA-API) support for X11. The setup is a little involved. First, make sure WebRender and GPU acceleration is up and working with these prefs (set or create):

gfx.webrender.enabled true
layers.acceleration.force-enabled true

Restart Firefox and check in about:support that the video card shows up and that the compositor is WebRender, and that the browser works as you expect.

VA-API support requires EGL to be enabled in Firefox. Shut down Firefox again and bring it up with the environment variable MOZ_X11_EGL set to 1 (e.g., for us tcsh dweebs, setenv MOZ_X11_EGL 1 ; firefox &, or for the rest of you plebs using bash and descendants, MOZ_X11_EGL=1 firefox &). Now set (or create):

media.ffmpeg.vaapi-drm-display.enabled true
media.ffmpeg.vaapi.enabled true
media.ffvpx.enabled false

The idea is that VA-API will direct video decoding through ffmpeg and theoretically obtain better performance; this is the case for H.264, and the third setting makes it true for WebM as well. This sounds really great, but there's kind of a problem.

Reversing the last three settings fixed this (the rest of the acceleration seems to work fine). It's not clear whose bug this is (ffmpeg, or something about VA-API on OpenPOWER, or both, though VA-API seems to work just fine with VLC), but either way this isn't quite ready for primetime yet on our platform. No worries since the normal decoder seemed more than adequate even on my no-GPU 4-core "stripper" Blackbird. There are known "endian" issues with ffmpeg, presumably because it isn't fully patched yet for little-endian PowerPC, and I suspect once these are fixed then this should "just work."

In the meantime, the LTO improvement with the updated toolchain is welcome, and WebRender continues to be a win. So let's keep evolving Firefox on our platform and supporting Mozilla in the process, because it's supported us and other less common platforms when the big 1000kg gorilla didn't, and we really ought to return that kindness.


The Rust Programming Language Blog: Announcing Rust 1.46.0

Thu, 27/08/2020 - 02:00

The Rust team is happy to announce a new version of Rust, 1.46.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.46.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.46.0 on GitHub.

What's in 1.46.0 stable

This release enables quite a lot of new things to appear in const fn, two new standard library APIs, and one feature useful for library authors. See the detailed release notes to learn about other changes not covered by this post.

const fn improvements

There are several core language features you can now use in a const fn:

  • if, if let, and match
  • while, while let, and loop
  • the && and || operators

You can also cast to a slice:

const fn foo() {
    let x = [1, 2, 3, 4, 5];
    // cast the array to a slice
    let y: &[_] = &x;
}

While these features may not feel new, given that you could use them all outside of const fn, they add a lot of compile-time computation power! As an example, the const-sha1 crate can let you compute SHA-1 hashes at compile time. This led to a 40x performance improvement in Microsoft's WinRT bindings for Rust.
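
For example, with while and if now permitted in const fn, a classic iterative algorithm can run entirely at compile time (a small sketch of ours, not from the release notes):

```rust
// Euclid's algorithm as a `const fn`: the `while` loop and mutation in
// a const context are enabled by the Rust 1.46 improvements.
const fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

// Evaluated by the compiler, not at runtime.
const G: u64 = gcd(2 * 3 * 5 * 7, 3 * 5 * 11);
```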


#[track_caller]

Back in March, the release of Rust 1.42 introduced better error messages when unwrap and related functions would panic. At the time, we mentioned that the way this was implemented was not yet stable. Rust 1.46 stabilizes this feature.

This attribute is called #[track_caller], which was originally proposed in RFC 2091 way back in July of 2017! If you're writing a function like unwrap that may panic, you can put this annotation on your functions, and the default panic formatter will use its caller as the location in its error message. For example, here is unwrap previously:

pub fn unwrap(self) -> T {
    match self {
        Some(val) => val,
        None => panic!("called `Option::unwrap()` on a `None` value"),
    }
}

It now looks like this:

#[track_caller]
pub fn unwrap(self) -> T {
    match self {
        Some(val) => val,
        None => panic!("called `Option::unwrap()` on a `None` value"),
    }
}

That's it!

If you are implementing a panic hook yourself, you can use the caller method on std::panic::Location to get access to this information.
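
A minimal sketch of both halves (the function names here are made up for illustration):

```rust
use std::panic::Location;

// Because of `#[track_caller]`, `Location::caller()` reports the
// location of the code that called `where_was_i`, not this body.
#[track_caller]
fn where_was_i() -> &'static Location<'static> {
    Location::caller()
}

fn report() -> String {
    let loc = where_was_i();
    format!("called from {}:{}", loc.file(), loc.line())
}
```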

Library changes

Keeping with the theme of const fn improvements, std::mem::forget is now a const fn. Additionally, two new APIs were stabilized this release:

  • Option::zip
  • vec::Drain::as_slice
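
One of the newly stabilized APIs, Option::zip, pairs two Options into an Option of a tuple, returning None if either side is None:

```rust
// `Option::zip` yields `Some((a, b))` only when both inputs are `Some`.
fn pair_up(a: Option<i32>, b: Option<&str>) -> Option<(i32, &str)> {
    a.zip(b)
}
```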

See the detailed release notes for more.

Other changes

There are other changes in the Rust 1.46.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.46.0

Many people came together to create Rust 1.46.0. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 353

wo, 26/08/2020 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Tooling
Newsletters
Observations/Thoughts
Learn Standard Rust
Learn More Rust
Project Updates
Miscellaneous

Crate of the Week

This week's crate is pdf, a crate for reading PDF files.

Thanks to S3bk for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

292 pull requests were merged in the last week

Rust Compiler Performance Triage

This week included a major speedup on optimized builds of real-world crates (up to 5%) as a result of the upgrade to LLVM 11.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online
North America
Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is a very different beast for me. It is a much bigger and much more capable language. However, I've found that it is, in many ways, a lot more restrictive in how you can approach problems. I frequently find myself being perplexed at how to eloquently solve a problem. When I discover the idiomatic way of doing it I'm usually both blown away by the brilliance of it and a bit disheartened by how difficult it would be to come up with that solution by myself :-).

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust


Mozilla Addons Blog: Extensions in Firefox 80

di, 25/08/2020 - 17:00

Firefox 80 includes some minor improvements for developers using the downloads API:

  • When using the saveAs option, the save dialog now shows a more specific file type filter appropriate for the file type being saved.
  • Firefox now exposes internal errors in the Browser Console to aid debugging.

Special thanks goes to Harsh Arora and Dave for their contributions to the downloads API. This release was also made possible by a number of other folks from within Mozilla for diligent behind-the-scenes work to improve and maintain WebExtensions in Firefox.

The post Extensions in Firefox 80 appeared first on Mozilla Add-ons Blog.


The Mozilla Blog: Fast, personalized and private by design on all platforms: introducing a new Firefox for Android experience

di, 25/08/2020 - 10:00

Big news for mobile: as of today, Firefox for Android users in Europe will find an entirely redesigned interface and a fast and secure mobile browser that was overhauled down to the core. Users in North America will receive the update on August 27. Like we did with our “Firefox Quantum” desktop browser revamp, we’re calling this release “Firefox Daylight” as it marks a new beginning for our Android browser. Included with this new mobile experience are lots of innovative features, an improved user experience with new customization options, and some massive changes under the hood. And we couldn’t be more excited to share it.

New Firefox features Android users will love

We have made some very significant changes that could revolutionize mobile browsing:

Privacy & security

  • Firefox for Android now offers Enhanced Tracking Protection, providing a better web experience. The revamped browsing app comes with our highest privacy protections ever – on by default. ETP keeps numerous ad trackers at bay and out of the users’ business, set to “Standard” mode right out of the box to put their needs first. Stricter protections are available to users who want to customize their privacy settings.
  • Additionally, we took the best parts of Firefox Focus, according to its users, and applied them to Private Mode: Now, Private Mode is easily accessible from the Firefox for Android homescreen and users have the option to create a private browsing shortcut on their Android homescreen, which will launch the browsing app automatically in the respective mode and allow users to browse privately on-the-go.

Enhanced Tracking Protection automatically blocks many known third-party trackers, by default, in order to improve user privacy online. Private Mode adds another layer for better privacy on device level.

Appearance & productivity

  • With regard to appearance, we redesigned the user interface of our Android browser completely so that it’s now even cleaner, easier to handle and to make it one’s own: users can set the URL bar at the bottom or top of the screen, improving the accessibility of the most important browser element especially for those with smartphones on the larger side.
  • Taking forever to enter a URL is therefore now a thing of the past, and so are chaotic bookmarks: Collections help to stay organized online, making it easy to return to frequent tasks, share across devices, personalize one’s browsing experience and get more done on mobile. As a working parent, for example, Collections may come in handy when organizing and curating one’s online searches based on type of activity such as kids, work, recipes, and many more. Multitaskers, who want to get more done while watching videos, will also enjoy the new Picture-in-Picture feature.

Productivity is key on mobile. That’s why the new Firefox for Android comes with an adjustable URL bar and a convenient solution to organize bookmarks: Collections.

  • Bright or dark, day or night: Firefox for Android simplifies toggling between Light and Dark Themes, depending on individual preferences, vision needs or environment. Those who prefer an automatic switch may also set Firefox to follow the Android setting, so that the browsing app will switch automatically to dark mode at a certain time of day.
  • Last but not least, we revamped the extensions experience. We know that add-ons play an important role for many Firefox users and we want to make sure to offer them the best possible experience when starting to use our newest Android browsing app. We’re kicking it off with the top 9 add-ons for enhanced privacy and user experience from our Recommended Extensions program. At the same time, we’re continuously working on offering more add-on choice in the future that will seamlessly fit into Firefox for Android.

Firefox users love add-ons! Our overhauled Android browser therefore comes with the top add-ons for enhanced privacy and user experience from our Recommended Extensions program.

What’s new under the hood

The improvements in Firefox for Android don’t just stop here: they even go way beyond the surface as Firefox for Android is now based on GeckoView, Mozilla’s own mobile browser engine. What does that mean for users?

  • It’s faster. The technology we used in the past limited our capability to further improve the browser as well as our options to implement new features. Now, we’re free to decide and our release cycle is flexible. Also, GeckoView makes browsing in Firefox for Android significantly speedier.
  • It’s built on our standards: private and secure. With our own engine we set the ground rules. We can decide independently which privacy and security features we want to make available for our mobile users and are entirely free to cater to our unique high standards.
  • It’s independent, just like our users. Unlike Edge, Brave and Chrome, Firefox for Android is not based on Blink (Google’s mobile engine). Instead Firefox for Android is based on GeckoView, Mozilla’s wholly-built engine. This allows us to have complete freedom of choice when it comes to implementation of standards and features. This independence lets us create a user interface that when combined with an overall faster browsing pace, enables unprecedented performance. Also, it protects our users if there are security issues with Blink as Firefox will not be affected.
User experience is key, in product and product development

Completely overhauling an existing product is a complex process that comes with a high potential for pitfalls. In order to avoid them and create a browsing experience users would truly appreciate, we looked closely at existing features and functionalities users love and we tested – a lot – to make sure we’d keep the promise to create a whole new browsing experience on Android.

  1. Bringing the best recent features from desktop to mobile. Over the last couple of years we’ve been very busy with continuously improving the Firefox desktop browsing experience: We did experiments, launched new tools like Firefox Monitor, Send and Lockwise and took existing features to a whole new level. This includes, amongst others, Dark Mode, the Picture-in-Picture feature, the support of extensions and add-ons as well as, last but not least, the core element of Firefox privacy technology: today, Enhanced Tracking Protection  protects Firefox users from up to 10 billion third-party tracking cookies, fingerprinters and cryptominers per day. Feedback from users showed that they like the direction Firefox is developing into, so we worked hard to bring the same level of protection and convenience to mobile, as well. As a result, users can now experience a better Firefox mobile experience on their Android devices than ever before.
  2. We tested extensively and emphasized direct feedback from mobile users. Over the course of several months, earlier versions of the new Firefox for Android were available as a separate app called Firefox Preview. This enabled us to adequately try out new features, examine the user experience, gather feedback and implement it in accordance with the users’ wishes and needs. And the result of this process is now available.
Try the new Firefox for Android!

We’re proud to say that we provided Firefox for Android with an entirely new shape and foundation and we’re equally happy to share the result with Android users now. Here’s how to get our overhauled browser:

  • Users who have the browser downloaded to their Android devices already will receive the revamp either as an automatic or manual update, depending on their device preferences. Their usage data, such as the browsing history or bookmarks, will be migrated automatically to the new app version, which might take a few moments. Users who have set a master password will need to disable the master password in order for their logins to automatically migrate over.
  • New users can download the update from the Google Play Store as of today. It’s easy to find it as ‘Firefox for Android’ through the search functionality and tapping ‘Install’ will get the process started. The new Firefox for Android supports a wide range of devices running Android 5.0 and above.

Make sure to let us know what you think about the overhauled browsing experience with Firefox for Android and stay tuned for more news in the upcoming months!

UPDATE: We’re all about moving forward. It’s why we overhauled Firefox for Android with a new engine, privacy protections, and a new look. But we know history and looking back is important. So, we’re bringing the “back button” back. We’ll continue to incorporate user feedback as we add new features. So if you haven’t already downloaded it, get the new Firefox for Android now. Added Sept. 2, 2020

The post Fast, personalized and private by design on all platforms: introducing a new Firefox for Android experience appeared first on The Mozilla Blog.


Mozilla Addons Blog: Introducing a scalable add-ons blocklist

ma, 24/08/2020 - 17:00

When we become aware of add-ons that go against user expectations or risk user privacy and security, we take steps to block them from running in Firefox using a mechanism called the add-ons blocklist. In Firefox 79, we revamped the blocklist to be more scalable in order to help keep users safe as the add-ons ecosystem continues to grow.


Cascading Bloom Filters

One of the constraints of the previous blocklist was that it required parsing of a large number of regular expressions. Each Firefox instance would need to check if any of its user’s installed add-ons matched any of the regular expressions in the blocklist. As the number of blocks increased, the overhead of loading and parsing the blocklist in Firefox became greater. In late 2019, we began looking into a more efficient way for Firefox to determine if an add-on is blocked.

After investigating potential solutions, we decided the new add-ons blocklist would be powered by a data structure created from cascading bloom filters, which provides an efficient way to store millions of records using minimal space.

Using a single bloom filter as a data structure would have carried a risk of false positives when checking if an add-on was blocked. To prevent this, the new add-on blocklist uses a set of filter layers to eliminate false positives, using the data from addons.mozilla.org (AMO) as a single source of truth.

The same underlying technology used here was first used for Certificate Revocation in CRLite. Adapting this approach for add-ons provided an important head-start for the blocklist project.
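To make the layering concrete, here is an illustrative sketch (not Mozilla's actual implementation, and with plain sets standing in for bloom filters): each layer records the false positives of the layer before it, so the depth at which a lookup first misses decides the answer.

```rust
use std::collections::HashSet;

// Illustrative only: layer 0 matches the blocked set (plus any
// false positives); layer 1 holds layer 0's false positives; layer 2
// holds layer 1's; and so on. The first layer that does *not* match
// decides the answer by its parity.
fn is_blocked(layers: &[HashSet<&str>], addon_id: &str) -> bool {
    for (depth, layer) in layers.iter().enumerate() {
        if !layer.contains(addon_id) {
            // Missed at an even depth => not blocked; odd => blocked.
            return depth % 2 == 1;
        }
    }
    // Matched every layer: decided by the final layer's parity.
    layers.len() % 2 == 1
}

// Hypothetical data: "bad" is blocked; "fp" collides with layer 0,
// so layer 1 records it to cancel the false positive.
fn demo_layers() -> Vec<HashSet<&'static str>> {
    vec![
        HashSet::from(["bad", "fp"]), // layer 0
        HashSet::from(["fp"]),        // layer 1: layer-0 false positives
    ]
}

fn main() {
    assert!(is_blocked(&demo_layers(), "bad"));
    assert!(!is_blocked(&demo_layers(), "fp"));
    assert!(!is_blocked(&demo_layers(), "good"));
}
```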

Further Optimizations: Stash lists

To reduce the need to ship entire blocklists each time blocks are added, an intermediate data structure is created to record new blocks. These “stash-lists” are queried first, before the main blocklist database is checked. Once the stash-lists grow to a certain size they are automatically folded into a newly generated cascading bloom filter database.

We are currently evaluating additional optimizations in order to further minimize the size of the blocklist for use on Fenix, the next major release of Firefox for Android.
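A simplified sketch of the stash-list idea (hypothetical types of my own, with a set standing in for the bloom-filter cascade):

```rust
use std::collections::HashSet;

// Hypothetical types: recent blocks ship as small "stash" deltas that
// are consulted before the (expensive to rebuild) main filter.
struct Blocklist {
    stashes: Vec<HashSet<String>>, // recent blocks, newest deltas
    main_filter: HashSet<String>,  // stand-in for the bloom cascade
}

impl Blocklist {
    // Stash lists are queried first, then the main database.
    fn is_blocked(&self, id: &str) -> bool {
        self.stashes.iter().any(|s| s.contains(id))
            || self.main_filter.contains(id)
    }

    // Once the stashes grow too large, fold them into a freshly
    // generated main filter.
    fn compact(&mut self) {
        for stash in self.stashes.drain(..) {
            self.main_filter.extend(stash);
        }
    }
}

fn demo_blocklist() -> Blocklist {
    Blocklist {
        stashes: vec![HashSet::from(["new-block".to_string()])],
        main_filter: HashSet::from(["old-block".to_string()]),
    }
}

fn main() {
    let mut bl = demo_blocklist();
    assert!(bl.is_blocked("new-block"));
    assert!(bl.is_blocked("old-block"));
    bl.compact();
    assert!(bl.stashes.is_empty());
    assert!(bl.is_blocked("new-block"));
}
```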

Shipping this in Firefox Extended Support Release (ESR)

Firefox Extended Support Release (ESR) is a Firefox distribution that is focused on feature stability. It gets a major feature update about once per year and only critical security updates in between. When we first identified the need to move the blocklist to a cascading bloom filter in late 2019, we knew we had to land the new blocklist for ESR 78 or we would risk having to maintain two different blocklists in parallel until the next ESR cycle.

In order to land this feature in time for Firefox 78, which was slated to hit the Nightly pre-release channel in May, we needed to coordinate efforts between our add-ons server, add-ons frontend and Firefox Add-ons engineering teams, as well as the teams in charge of hosting the blocklist and the still-in-development bloom filters library. We also needed to make sure this new solution cleared all security reviews and QA, as well as coordinate its rollout with Release Engineering, and make sure we had enough data to measure its success.

Our leadership encouraged us to land the new blocklist in Firefox 78 and ensured that we would get the cross-team support necessary to achieve it. Having all these hurdles cleared was very exciting and nerve-wracking at the same time, since now our main challenge was to deliver this huge project on time. While a late-breaking bug prevented us from landing the new blocklist in Firefox 78, we have been able to gradually roll it out with the Firefox 79 release and will enable it in an ESR 78 update.

This was an ambitious project, as it had many moving parts that required the support of many teams across Mozilla. During the project, the Crypto Engineering, Kinto, Security, Release Engineering, QA and data teams all made significant contributions to enable the Add-ons Engineering team to ship this feature in five months.


None of this would have been possible without the help and support of the following people: Simon Bennetts, Shane Caraveo, Alexandru Cornestean, Shell Escalante, Luca Greco, Mark Goodwin, Joe Hildebrand, JC Jones, Dana Keeler, Gijs Kruitbosch, Thyla van der Merwe, Alexandra Moga, Mathieu Pillard, Victor Porof, Amy Tsay, Ryan VanderMeulen, Dan Veditz, Julien Vehent, Eric Smyth, Jorge Villalobos, Andreas Wagner, Andrew Williamson and Rob Wu.

The post Introducing a scalable add-ons blocklist appeared first on Mozilla Add-ons Blog.


Nick Fitzgerald: Writing a Test Case Generator for a Programming Language

ma, 24/08/2020 - 09:00

Maxime Chevalier-Boisvert requested resources for learning about fuzzing programming language implementations on Twitter:

I’d like to learn about fuzzing, specifically fuzzing programming language implementations. Do you have reading materials you would recommend, blog posts, papers, books or even recorded talks?

@Love2Code · August 3, 2020

Maxime received many replies linking to informative papers, blog posts, and lectures. John Regehr suggested writing a simple generative fuzzer for the programming language.

A generative fuzzer combines a test case generator with the system under test (e.g. your compiler), generating new test cases and feeding them into the system:

fn generative_fuzzer() {
    loop {
        // Use the test case generator to create a new
        // input.
        let input = generate_test_case();

        // Feed that input into the system under test.
        let result = run_system_under_test(input);

        // Finally, if the system under test crashed,
        // failed an assertion, etc... then report
        // that!
        if result.is_interesting() {
            report(input);
        }
    }
}

I realized that many people might not know what it takes to write their own generative fuzzer, so this blog post shows one aspect of it: implementing a test case generator.

Our test case generator will generate WebAssembly programs. While WebAssembly has its own quirks — it’s a binary format and is generally a compilation target rather than a source language — it is a small and simple language. The techniques we use when generating WebAssembly should transfer to generating the programming language of your choice.

If you want to skip the exposition and jump head first into the code, here is the repository for our final test case generator.

What is a Test Case Generator?

Test case generators generate test cases. These test cases are always within the test domain: no cycles are wasted on invalid inputs, such as source text that fails to parse. Compare this to mutation-based fuzzing, where existing seed inputs are mutated to produce new inputs. In general, nothing guarantees that the new, mutated input is still within the test domain: the mutation may have introduced a syntax error. This property, that generated inputs are always within the test domain, is generative fuzzing’s main advantage and the test case generator’s main responsibility.

A test case generator should, additionally, support every feature of its target programming language. You won’t discover a bug in your compiler’s handling of switch statements if the test case generator doesn’t support generating switch statements. Pushing this idea even further, the test case generator should uniformly sample from the test domain. If the test case generator can technically generate switch statements but the probability of doing so is nearly zero, then you likely still won’t find that bug. However, uniformly sampling from the infinite set of all programs that can be written in a particular programming language is nontrivial and an area of active research.

A test case generator should, finally, be fast. The faster we can generate test cases, the faster we will discover bugs. If the generator is too slow, we can blow our time budget, failing to find those bugs at all.

Getting Set Up

First, we create a new crate with cargo. We’ll name this crate wasm-smith, giving a little nod to Csmith, the popular C program generator.

$ cargo new --lib wasm-smith

Second, we add the arbitrary crate as a dependency:

# wasm-smith/Cargo.toml

[dependencies]
arbitrary = { version = "0.4.6", features = ["derive"] }

The arbitrary crate helps us generate structured data from arbitrary bytes. It is typically used in combination with libFuzzer to translate the raw bytes given to us by libFuzzer into something that the system you're testing can process. For example, a color conversion library might use arbitrary to turn the raw fuzzer-provided bytes into Rgb or Hsl color types. We will use it in a similar way for this project, translating raw bytes given to us by libFuzzer into semantically valid WebAssembly modules.

The arbitrary crate’s main export is the Arbitrary trait:

pub trait Arbitrary: Sized + 'static {
    fn arbitrary(u: &mut Unstructured) -> Result<Self>;

    // Provided methods hidden...
}

It takes an Unstructured, which is a helpful wrapper around a byte slice, and returns an instance of the type for which it is implemented.
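To illustrate the idea without pulling in the crate, here is a minimal stand-in (my own, NOT the real arbitrary API) for a byte reader that maps raw fuzzer bytes into bounded integers, similar in spirit to Unstructured::int_in_range:

```rust
// Illustrative stand-in: a tiny reader over a byte slice that turns
// raw bytes into structured choices, deterministically.
struct Bytes<'a> {
    data: &'a [u8],
}

impl<'a> Bytes<'a> {
    // Take the next raw byte, defaulting to 0 once input runs out.
    fn byte(&mut self) -> u8 {
        let b = self.data.first().copied().unwrap_or(0);
        self.data = &self.data[self.data.len().min(1)..];
        b
    }

    // Map a raw byte into lo..=hi (assumes hi < 255 so the modulus
    // below cannot overflow).
    fn int_in_range(&mut self, lo: u8, hi: u8) -> u8 {
        lo + self.byte() % (hi - lo + 1)
    }
}

fn demo() -> (u8, u8) {
    let mut u = Bytes { data: &[7, 200] };
    // Same bytes in, same choices out: generation is deterministic.
    (u.int_in_range(0, 2), u.int_in_range(0, 9))
}

fn main() {
    assert_eq!(demo(), (1, 0)); // 7 % 3 = 1, then 200 % 10 = 0
}
```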

For our wasm-smith crate, we define a Module type that represents our pseudo-random WebAssembly modules, and then we implement the Arbitrary trait for it:

use arbitrary::{Arbitrary, Result, Unstructured};

/// A pseudo-random WebAssembly module.
pub struct Module {
    // ...
}

impl Arbitrary for Module {
    fn arbitrary(u: &mut Unstructured) -> Result<Self> {
        todo!()
    }
}

Before we fill in that todo!(), let's take a moment to settle on a design for what the implementation will look like.

Translating Grammars into Generators

Writing a generator is remarkably similar to hand-writing a recursive descent parser, so if you’ve done that before, then you should feel right at home. For example, given this grammar production (borrowed and lightly edited from the C++ name mangling grammar):

<class-enum-type> ::= Ts <name> | Tu <name> | Te <name>

A recursive descent parser will, almost mechanically, translate the production into something like this:

impl Parse for ClassEnumType {
    fn parse(p: &mut Parser) -> Result<Self> {
        // Ts <name>
        if p.peek("Ts") {
            p.consume("Ts")?;
            let name = Name::parse(p)?;
            return Ok(ClassEnumType::Ts(name));
        }

        // Tu <name>
        if p.peek("Tu") {
            p.consume("Tu")?;
            let name = Name::parse(p)?;
            return Ok(ClassEnumType::Tu(name));
        }

        // Te <name>
        p.consume("Te")?;
        let name = Name::parse(p)?;
        Ok(ClassEnumType::Te(name))
    }
}

Our generator will do something similar, except instead of peeking at the input string to decide which right-hand side of the production to parse, we will make a pseudo-random choice to generate one of those potential right hand sides.

We could use a random number generator directly to make these choices, but this has two problems:

  1. We give up determinism unless we are careful to control the RNG’s seed and reuse the same RNG everywhere, threading it through all of our functions as a parameter. Determinism is extremely important for reproducing test failures! It’s definitely possible to do these things, but can occasionally be a little annoying.

  2. More importantly, using an RNG precludes a mature fuzzing engine, like libFuzzer, from guiding our test case generation based on code coverage and other insights.

Instead, we use a raw input byte slice given to us by libFuzzer or AFL as a sequence of predetermined choices. This lets the fuzzer guide our test case generation, and gives us test case reduction “for free” since we can ask the fuzzer to reduce the raw input sequence, rather than write a domain-specific test case reducer. This comes as a relief because writing a reducer that understands WebAssembly is easily as much effort as writing the generator itself.

Here is the same C++ mangling example from above, but translated from a parser into a generator, using Unstructured:

fn arbitrary_class_enum_type(
    u: &mut Unstructured,
    output: &mut String,
) -> Result<()> {
    match u.int_in_range::<u8>(0..=2)? {
        // Ts <name>
        0 => {
            output.push_str("Ts");
            arbitrary_name(u, output)?;
            Ok(())
        }
        // Tu <name>
        1 => {
            output.push_str("Tu");
            arbitrary_name(u, output)?;
            Ok(())
        }
        // Te <name>
        2 => {
            output.push_str("Te");
            arbitrary_name(u, output)?;
            Ok(())
        }
        _ => unreachable!(),
    }
}

Once again, this is mostly mechanical.

This pattern will generate syntactically correct test cases that can be parsed successfully but which likely contain a plethora of type errors, calls to undefined functions, etc. However, we've set out to generate semantically correct test cases that pass type checking and will exercise more than just the language implementation's frontend.

Our final pattern maintains some extra information about the program we’ve generated thus far, so that we can consult that information when generating new forms. This extra information might include which names are in scope, the types of each variable, etc. We consult that information while dynamically building up thunks for every valid option we could generate. Once we have enumerated every option, we ask the Unstructured to choose one of them, and finally we call the chosen thunk to generate the form.

Here is an example of using this pattern for generating integer expressions, where an integer expression is either a constant integer, an arithmetic operation, a use of an integer variable, or a call of a function that returns an integer:

fn arbitrary_int_expr(
    u: &mut Unstructured,
    scope: &mut Scope,
) -> Result<Expr> {
    // We will dynamically build up all of the valid
    // options of what we can generate.
    let mut options: Vec<fn(
        &mut Unstructured,
        &mut Scope,
    ) -> Result<Expr>> = vec![];

    // It is always valid to generate a constant.
    options.push(|u, _| {
        Ok(Expr::Constant(u.arbitrary::<i32>()?))
    });

    // It is always valid to generate an addition.
    options.push(|u, scope| {
        let lhs = arbitrary_int_expr(u, scope)?;
        let rhs = arbitrary_int_expr(u, scope)?;
        Ok(Expr::Add(lhs, rhs))
    });

    // Subtraction, multiplication, division, etc look
    // similar to addition.

    // If there are integer variables in scope, we can
    // generate a use of one of them.
    if scope.has_int_variables() {
        options.push(|u, scope| {
            let var = u.choose(scope.int_variables())?;
            Ok(Expr::Var(var))
        });
    }

    // If there are any functions that return an integer
    // in scope, we can generate a call to one of them.
    if scope.has_int_funcs() {
        options.push(|u, scope| {
            let func = u.choose(scope.int_funcs())?;
            let args = arbitrary_args(u, &func.params)?;
            Ok(Expr::Call(func, args))
        });
    }

    // Choose one of our options and call the function
    // to generate the expression.
    let f = u.choose(&options)?;
    f(u, scope)
}

Finally, it is worth noting that, similar to how parser generators take a grammar and generate a parser, there are tools that will take a grammar and generate a test case generator (and even Rust implementations of those tools).

Generating the Type Section

Now we’re ready to continue generating WebAssembly!

The first section in a WebAssembly module is the type section. It declares the function type signatures used in the module and it has the following grammar:

<typesec>  ::= 0x01 <size:u32> <num_funcs:u32> <functype>*

<functype> ::= <num_params:u32> <valtype>*
               <num_results:u32> <valtype>*

<valtype>  ::= 0x7F    # i32
             | 0x7E    # i64
             | 0x7D    # f32
             | 0x7C    # f64

This is not context free: size is the size of the whole section in bytes, num_funcs is the number of following <functype>s, num_params defines how many parameter <valtype>s a function type has, and num_results defines how many result <valtype>s it has. But none of this comes into play until we actually serialize the module into bytes. Until then, a type section is a sequence of function type signatures, and these signatures don’t actually need access to any scope information, so we can just derive the Arbitrary trait directly for them:

#[derive(Arbitrary, Clone, Debug)]
struct FuncType {
    params: Vec<ValType>,
    results: Vec<ValType>,
}

#[derive(Arbitrary, Clone, Debug)]
enum ValType {
    I32,
    I64,
    F32,
    F64,
}
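As an aside, the size-prefixed layout in the grammar above can be sketched with a toy encoder (my own simplification; real WebAssembly encodes counts and the section size as LEB128 u32s, not single bytes):

```rust
// Toy encoder: a single byte stands in for each LEB128 count so the
// size-prefix structure is easy to see.
fn encode_functype(params: &[u8], results: &[u8]) -> Vec<u8> {
    let mut out = vec![params.len() as u8]; // num_params
    out.extend_from_slice(params);
    out.push(results.len() as u8); // num_results
    out.extend_from_slice(results);
    out
}

fn encode_type_section(types: &[(Vec<u8>, Vec<u8>)]) -> Vec<u8> {
    let mut body = vec![types.len() as u8]; // num_funcs
    for (params, results) in types {
        body.extend(encode_functype(params, results));
    }
    // The section size can only be written once the body is known.
    let mut out = vec![0x01, body.len() as u8]; // section id, size
    out.extend(body);
    out
}

fn main() {
    // One signature: (i32, i32) -> i32, where 0x7F encodes i32.
    let sec = encode_type_section(&[(vec![0x7F, 0x7F], vec![0x7F])]);
    assert_eq!(sec, [0x01, 0x06, 0x01, 0x02, 0x7F, 0x7F, 0x01, 0x7F]);
}
```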

And since a module has a type section, let’s hook this up into our Module definition and its Arbitrary implementation:

pub struct Module {
    types: Vec<FuncType>,
    // ...
}

impl Arbitrary for Module {
    fn arbitrary(u: &mut Unstructured) -> Result<Self> {
        let mut module = Module::default();
        module.types = u.arbitrary()?;
        // ...
        Ok(module)
    }
}

Generating the Import Section

The import section is the first section where we will need to look at what we’ve previously generated when generating new items. An import brings an external function, table, memory, or global into scope. When importing a function, we need to specify the function’s type via an index into the types section. This means we can only generate a function import if our types section is non-empty.

The structure definitions and Arbitrary implementations for table and global types are straightforward, and nothing different from what we saw with the type section, so I’ve elided them here. The one thing to note is that the largest memory that a Wasm module can declare is 4GiB, so the Arbitrary implementation for memory types must take that into account:

#[derive(Clone, Debug)]
struct Limits {
    min: u32,
    max: Option<u32>,
}

#[derive(Clone, Debug)]
struct MemoryType {
    limits: Limits,
}

impl Arbitrary for MemoryType {
    fn arbitrary(
        u: &mut Unstructured<'_>,
    ) -> Result<Self> {
        // Note: memory sizes are in units of pages,
        // which are 64KiB in Wasm.
        let min = u.int_in_range(0..=65536)?;
        let max = if u.arbitrary().unwrap_or(false) {
            Some(if min == 65536 {
                65536
            } else {
                u.int_in_range(min..=65536)?
            })
        } else {
            None
        };
        Ok(MemoryType {
            limits: Limits { min, max },
        })
    }
}

When generating an arbitrary import, we dynamically build up a list of functions, one for each semantically valid choice we can make. Then we will use the next bit of raw input from Unstructured to choose one of those functions and call it to create an import. We keep creating more imports while the raw input tells us to — we read a bool from the Unstructured and stop adding imports once we read false.

impl Module {
    fn arbitrary_imports(
        &mut self,
        u: &mut Unstructured,
    ) -> Result<()> {
        let mut options: Vec<fn(
            &mut Unstructured,
            &mut Module,
        ) -> Result<Import>> = Vec::with_capacity(4);

        // If the module's type section is not empty, we
        // can define function imports with one of those
        // types.
        if !self.types.is_empty() {
            options.push(|u, m| {
                let max = m.types.len() as u32 - 1;
                let idx = u.int_in_range(0..=max)?;
                Ok(Import::Func(idx))
            });
        }

        options.push(|u, _| {
            Ok(Import::Table(u.arbitrary()?))
        });

        options.push(|u, _| {
            Ok(Import::Memory(u.arbitrary()?))
        });

        options.push(|u, _| {
            Ok(Import::Global(u.arbitrary()?))
        });

        loop {
            let keep_going = u.arbitrary::<bool>()
                .unwrap_or(false);
            if !keep_going {
                return Ok(());
            }
            let f = u.choose(&options)?;
            let import = f(u, self)?;
            self.imports.push(import);
        }
    }
}

Hooking this up to Module and its Arbitrary implementation is, once again, straightforward:

#[derive(Default)]
pub struct Module {
    types: Vec<FuncType>,
    imports: Vec<Import>, // New!
    // ...
}

impl Arbitrary for Module {
    fn arbitrary(u: &mut Unstructured) -> Result<Self> {
        let mut module = Module::default();
        module.types = u.arbitrary()?;
        module.arbitrary_imports(u)?; // New!
        // ...
        Ok(module)
    }
}

Generating the Code Section

There are a few more sections before the code section, which contains function bodies, but their implementation is similar to what we’ve already seen with the type and import sections, so we’ll skip them.

WebAssembly is a typed stack-based language, and has structured control flow. There is the operand stack, where values are pushed and popped, and there is the control stack, which keeps track of active control frames (i.e. ifs, loops, and blocks) and their labels that can be jumped to. Whether an instruction is valid depends on the types on the operand stack and, if the instruction is a control instruction, the labels on the control stack. Therefore, we will explicitly model these stacks while generating instructions. Choosing which instruction to generate will consult the contents of these stacks, and, once a choice is made, generating a new instruction will update them.

Let’s begin with the basic definitions for value types and control frames:

/// The type of a value.
#[derive(Arbitrary, Clone, Copy, Debug, PartialEq, Eq)]
enum ValType {
    I32,
    I64,
    F32,
    F64,
}

/// A control frame.
#[derive(Debug)]
struct Control {
    kind: ControlKind,

    // Value types that must be on the stack when
    // entering this control frame.
    params: Vec<ValType>,

    // Value types that are left on the stack when
    // exiting this control frame.
    results: Vec<ValType>,

    // How far down the operand stack instructions
    // inside this control frame can reach.
    height: usize,
}

/// The kind of a control frame.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ControlKind {
    Block,
    If,
    Loop,
}

We model the control stack as a vector of Controls, and the operand stack as a vector of Option<ValType>. The Option is introduced because unreachable code produces values of any type, and these are modeled as None.

type OperandStack = Vec<Option<ValType>>;
type ControlStack = Vec<Control>;
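To see why the Option matters, here is a minimal, hypothetical version of a stack type check (illustrative names, not wasm-smith's actual implementation): a None entry, produced by unreachable code, matches any expected type.

```rust
#[allow(dead_code)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ValType {
    I32,
    I64,
    F32,
    F64,
}

// Check that the top of the stack matches `expected`.
// `None` models a value of unknown type produced by
// unreachable code; it matches any expected type.
fn types_on_stack(
    stack: &[Option<ValType>],
    expected: &[ValType],
) -> bool {
    stack.len() >= expected.len()
        && stack[stack.len() - expected.len()..]
            .iter()
            .zip(expected)
            .all(|(actual, want)| match actual {
                None => true,
                Some(ty) => ty == want,
            })
}
```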

In order to generate a function body, we’ll also need to know the function’s parameter and result types, and the local variables available. We’ll collect these things and our stacks in a CodeBuilder type. To avoid thrashing the memory allocator, we factor the operand and control stacks out into a separate structure that is reused across the creation of each function body. This structure also contains the vector of options that we build up every time we generate an instruction.

pub(crate) struct CodeBuilderAllocations {
    controls: ControlStack,
    operands: OperandStack,

    // Dynamic set of instructions that would be
    // valid right now.
    options: Vec<fn(
        &mut Unstructured,
        &Module,
        &mut CodeBuilder,
    ) -> Result<Instruction>>,
}

struct CodeBuilder<'a> {
    func_ty: &'a FuncType,
    locals: &'a Vec<ValType>,
    allocs: &'a mut CodeBuilderAllocations,
}

This is basically the same information that WebAssembly’s validation algorithm requires. Our generator is similar to a WebAssembly validator for the same reasons that a generator that emits syntactically correct test cases is similar to a recursive descent parser.

Because there are so many instructions, we won’t write the validation checks and the thunks that generate the instructions inline like we did for previous sections. Instead we will write pairs of functions: the first checks whether a given instruction would be valid to generate in the current context, and the second generates that instruction and updates the context.

static OPTIONS: &[(
    // Predicate for whether this instruction is valid
    // in the given context, if any. `None` means that
    // the instruction is always valid.
    Option<fn(&Module, &mut CodeBuilder) -> bool>,

    // The function to generate the instruction, given
    // that we've made this choice.
    fn(
        &mut Unstructured,
        &Module,
        &mut CodeBuilder,
    ) -> Result<Instruction>,
)] = &[
    // ...
];

Despite this small change in code organization, we will use these function pairs to accomplish the same task as before: dynamically building up a list of functions, one for each instruction that would be valid in the current context, and then choosing one of them and calling it to generate an instruction.


i32.add

Let’s start with an easy example: generating the i32.add instruction. An i32.add is valid when there are two i32s on the operand stack. There are many instructions that are valid to generate when there are two i32s on the operand stack, so rather than naming the predicate function something like i32_add_is_valid, we’ll name it i32_i32_on_stack.

// Predicate for whether two `i32`s are on top of
// the operand stack, and therefore we can generate
// an `i32.add` (and many others).
fn i32_i32_on_stack(
    _: &Module,
    builder: &mut CodeBuilder,
) -> bool {
    builder.types_on_stack(&[
        ValType::I32,
        ValType::I32,
    ])
}

i32.add pops the i32s and pushes their sum. The function that generates an i32.add needs to do the same to our model of the operand stack:
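As a standalone illustration (not wasm-smith's actual code), here is how that stack bookkeeping plays out for the sequence i32.const 1; i32.const 2; i32.add:

```rust
#[allow(dead_code)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum ValType {
    I32,
    I64,
    F32,
    F64,
}

// Simulate the stack effect of `i32.const x`:
// push one `i32`.
fn sim_i32_const(stack: &mut Vec<ValType>) {
    stack.push(ValType::I32);
}

// Simulate the stack effect of `i32.add`: pop two
// `i32`s, push the `i32` sum.
fn sim_i32_add(stack: &mut Vec<ValType>) {
    assert_eq!(stack.pop(), Some(ValType::I32));
    assert_eq!(stack.pop(), Some(ValType::I32));
    stack.push(ValType::I32);
}
```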

fn i32_add(
    _: &mut Unstructured,
    _: &Module,
    builder: &mut CodeBuilder,
) -> Result<Instruction> {
    builder.pop_operands(&[ValType::I32, ValType::I32]);
    builder.push_operands(&[ValType::I32]);
    Ok(Instruction::I32Add)
}

f64.const

Here is another simple example. An f64.const instruction is always valid. It has an f64 immediate and pushes that f64 onto the operand stack:

fn f64_const(
    u: &mut Unstructured,
    _: &Module,
    builder: &mut CodeBuilder,
) -> Result<Instruction> {
    let x = u.arbitrary::<f64>()?;
    builder.push_operands(&[ValType::F64]);
    Ok(Instruction::F64Const(x))
}

if

Things start getting more interesting when we consider instructions that introduce new control frames. An if instruction requires an i32 on top of the operand stack, which determines whether its truth-y or false-y code path is executed. It can also take additional operand stack parameters, making them available to further instructions within its control frame. But while it can take stack parameters, we only require an i32 on the stack, because we can always define ifs with empty stack parameters.

fn if_valid(
    _: &Module,
    builder: &mut CodeBuilder,
) -> bool {
    builder.type_on_stack(ValType::I32)
}

fn r#if(
    u: &mut Unstructured,
    module: &Module,
    builder: &mut CodeBuilder,
) -> Result<Instruction> {
    // Pop the `i32` that controls whether we take the
    // truth-y or the false-y code path.
    builder.pop_operands(&[ValType::I32]);

    // Generate an arbitrary block type that is
    // compatible with the current operand stack.
    let block_ty = builder.arbitrary_block_type(
        u,
        module,
    )?;

    // Create the control frame for this `if`.
    let (params, results) = block_ty
        .params_results(module);
    let height = builder.allocs.operands.len()
        - params.len();
    builder.allocs.controls.push(Control {
        kind: ControlKind::If,
        params,
        results,
        height,
    });

    Ok(Instruction::If(block_ty))
}

else

An else instruction is only valid when the top control frame was introduced by an if and the types on (the accessible portion of) the operand stack are exactly the top control frame’s result types.

fn else_valid(_: &Module, b: &mut CodeBuilder) -> bool {
    let control = b.top_control();
    control.kind == ControlKind::If
        && b.operands().len() == control.results.len()
        && b.types_on_stack(&control.results)
}

The else instruction resets the operand stack to its state at the moment that the if was introduced. It also changes the control kind to ControlKind::Block so that we don’t generate instruction sequences with two elses for the same if, like if ... else ... else ... end.

fn r#else(
    _: &mut Unstructured,
    _: &Module,
    builder: &mut CodeBuilder,
) -> Result<Instruction> {
    let control = builder.pop_control();
    builder.pop_operands(&control.results);
    builder.push_operands(&control.params);
    builder.allocs.controls.push(Control {
        kind: ControlKind::Block,
        ..control
    });
    Ok(Instruction::Else)
}

end

An end instruction finishes a control sequence that was introduced by an if, else, block, or loop instruction. Similar to an else, it is only valid when we have exactly the current control frame’s result types on the operand stack. The final thing to check is that if the top control frame was introduced by an if, and the if does not leave the stack as it was upon entering (i.e. the parameter and result types are not equal), then we can’t generate an end. This scenario requires that we generate an else first, since skipping the if’s body entirely when its condition is false would otherwise leave the stack in a type-mismatched state.

fn end_valid(
    _: &Module,
    builder: &mut CodeBuilder,
) -> bool {
    // Note: first control frame is the function
    // return's control frame, which never has an
    // associated `end`.
    if builder.allocs.controls.len() <= 1 {
        return false;
    }
    let control = builder.top_control();
    builder.operands().len() == control.results.len()
        && builder.types_on_stack(&control.results)
        && !(control.kind == ControlKind::If
            && control.params != control.results)
}

call

The final instruction we will examine is call. Whether a call is valid or not depends on the callee function’s type signature, and therefore whether we can generate a call or not depends on whether any of the module’s functions have a type signature compatible with the current operand stack.

fn call_valid(
    module: &Module,
    builder: &mut CodeBuilder,
) -> bool {
    module
        .funcs()
        .any(|(_, ty)| {
            builder.types_on_stack(&ty.params)
        })
}
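The generation function that follows picks one of those matching functions with a count-then-nth pattern: count the candidates, choose an index in that range, then take the nth candidate. In isolation, with a hypothetical helper, the pattern looks like this:

```rust
// Choose the `i`-th element of `items` that satisfies
// `pred`, where `i` was picked from `0..count` of the
// matching elements.
fn choose_filtered<T: Copy>(
    items: &[T],
    pred: impl Fn(&T) -> bool,
    i: usize,
) -> Option<T> {
    items.iter().copied().filter(|x| pred(x)).nth(i)
}
```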

To generate a call instruction, we choose which function we are calling and update the operand stack accordingly:

fn call(
    u: &mut Unstructured,
    module: &Module,
    builder: &mut CodeBuilder,
) -> Result<Instruction> {
    // Count how many of our module's functions could be
    // called with the current context.
    let n = module
        .funcs()
        .filter(|(_, ty)| {
            builder.types_on_stack(&ty.params)
        })
        .count();
    debug_assert!(n > 0);

    // Choose one of them.
    let i = u.int_in_range(0..=n - 1)?;
    let (func_idx, ty) = module
        .funcs()
        .filter(|(_, ty)| {
            builder.types_on_stack(&ty.params)
        })
        .nth(i)
        .unwrap();

    // Pop its parameters and push its results.
    builder.pop_operands(&ty.params);
    builder.push_operands(&ty.results);

    Ok(Instruction::Call(func_idx as u32))
}

Generating Instruction Sequences

Currently, the core WebAssembly specification defines 189 instructions, and I’ve implemented support for all of them in wasm-smith. That’s too many to reproduce here, but you can check them out if you’re curious. We’ll now turn our attention to using the pairs of functions we’ve defined for generating individual instructions to generate the sequences of instructions that make up a whole function body.

We keep generating new instructions until either there aren’t any control frames left (meaning that we’ve returned from the function) or the Unstructured tells us to stop. To generate one instruction, we filter OPTIONS down to just those options that are valid within the current context, ask the Unstructured to choose one of them, and call the chosen function to generate the instruction. This new instruction is added to our function body and we repeat the process.

impl CodeBuilder {
    pub(crate) fn arbitrary(
        mut self,
        u: &mut Unstructured,
        module: &Module,
    ) -> Result<Vec<Instruction>> {
        let mut instructions = vec![];

        while !self.allocs.controls.is_empty() {
            let keep_going = u.arbitrary().unwrap_or(false);
            if !keep_going {
                self.end_active_control_frames(
                    &mut instructions,
                );
                break;
            }

            // Filter `OPTIONS` down to just those that
            // are valid within the current context.
            self.allocs.options.clear();
            for (is_valid, option) in OPTIONS {
                if is_valid.map_or(true, |f| {
                    f(module, &mut self)
                }) {
                    self.allocs.options.push(*option);
                }
            }

            // Choose one of them, and call the chosen
            // function to generate the instruction.
            let f = u.choose(&self.allocs.options)?;
            let inst = f(u, module, &mut self)?;
            instructions.push(inst);
        }

        Ok(instructions)
    }
}

If the Unstructured tells us to stop generating instructions before we’ve exited all control frames, then the instruction sequence is most likely not at a valid stopping point. Either there are unfinished loops, ifs, and blocks, or we have the wrong types on the operand stack for a return. The end_active_control_frames method resolves these conflicts. When it can, it leaves the operands on the stack as a control frame’s results or a function’s return values. When the operand stack doesn’t align with the control frame’s results, it inserts an unreachable instruction. This instruction makes the sequence validate, but will trigger a trap when executed.

impl CodeBuilder {
    fn end_active_control_frames(
        &mut self,
        instructions: &mut Vec<Instruction>,
    ) {
        while !self.allocs.controls.is_empty() {
            let num_operands = self.operands().len();
            let control = self.pop_control();

            // If we don't have the right operands on
            // the stack for this control frame, add an
            // `unreachable` instruction.
            if control.results.len() != num_operands
                || !self.types_on_stack(&control.results)
            {
                self.allocs.operands.push(None);
                instructions.push(
                    Instruction::Unreachable,
                );
            }

            // If this is an `if` that is not stack
            // neutral, then it must have an `else`.
            if control.kind == ControlKind::If
                && control.params != control.results
            {
                instructions.push(Instruction::Else);
                instructions.push(
                    Instruction::Unreachable,
                );
            }

            // The last control frame for the function
            // return does not need an `end` instruction.
            if !self.allocs.controls.is_empty() {
                instructions.push(Instruction::End);
            }

            self.allocs.operands.truncate(
                control.height,
            );
            self.allocs.operands.extend(
                control.results.into_iter().map(Some),
            );
        }
    }
}

The CodeBuilderAllocations are created in the arbitrary_code method, which generates the module’s code section. The code section contains the bodies of the functions locally defined in the module. For each function, it calls arbitrary_func_body, which creates locals for the function and a CodeBuilder that reuses the allocations and generates the instruction sequence for the function.

impl Module {
    /// Generate the module's code section.
    fn arbitrary_code(
        &mut self,
        u: &mut Unstructured,
    ) -> Result<()> {
        // Reserve space for the function bodies we
        // are about to define.
        self.code.reserve(self.funcs.len());

        // Create the allocations that are reused
        // when generating each function body.
        let mut allocs = CodeBuilderAllocations::default();

        // For each function defined locally,
        // generate an arbitrary function body.
        for func_ty_idx in &self.funcs {
            let func_ty = &self.types[*func_ty_idx as usize];
            let body = self.arbitrary_func_body(
                u,
                func_ty,
                &mut allocs,
            )?;
            self.code.push(body);
        }

        Ok(())
    }

    /// Generate a function body.
    fn arbitrary_func_body(
        &self,
        u: &mut Unstructured,
        func_ty: &FuncType,
        allocs: &mut CodeBuilderAllocations,
    ) -> Result<Code> {
        // Generate arbitrary locals for this function.
        let locals = self.arbitrary_locals(u)?;

        // Create a `CodeBuilder` that reuses the given
        // allocations.
        let builder = allocs.builder(func_ty, &locals);

        // Have the `CodeBuilder` generate an arbitrary
        // instruction sequence.
        let instructions = builder.arbitrary(u, self)?;

        Ok(Code {
            locals,
            instructions,
        })
    }
}

Finally, here is the full Arbitrary implementation for Module, including calls to each of the methods to generate sections whose description we elided in this blog post:

impl Arbitrary for Module {
    fn arbitrary(u: &mut Unstructured) -> Result<Self> {
        let mut module = Module::default();
        module.types = u.arbitrary()?;
        module.arbitrary_imports(u)?;
        module.arbitrary_funcs(u)?;
        if module.table_imports() == 0 {
            module.table = u.arbitrary()?;
        }
        if module.memory_imports() == 0 {
            module.memory = u.arbitrary()?;
        }
        module.arbitrary_globals(u)?;
        module.arbitrary_exports(u)?;
        module.arbitrary_start(u)?;
        module.arbitrary_elems(u)?;
        module.arbitrary_codes(u)?;
        module.arbitrary_data(u)?;
        Ok(module)
    }
}

That’s everything we need to generate arbitrary, valid Wasm modules!

Using the Test Case Generator

Now we are ready to use our shiny, new Wasm test case generator with a fuzzer! We’ll use the libfuzzer-sys crate, and cargo fuzz. Our new fuzz target accepts an arbitrary Module as input, serializes it into bytes, and passes these bytes into the wasmparser crate’s validator. Finally, it asserts that the module is valid. If this assertion fails, then either the test case generator or the wasmparser crate has a bug.

// fuzz/fuzz_targets/
#![no_main]

use libfuzzer_sys::fuzz_target;
use wasm_smith::Module;

// Define a fuzz target that accepts arbitrary
// `Module`s as input.
fuzz_target!(|m: Module| {
    // Convert the module into Wasm bytes.
    let bytes = m.to_bytes();

    // Validate the module and assert that it passes
    // validation.
    let mut validator = wasmparser::Validator::new();
    validator.wasm_features(wasmparser::WasmFeatures {
        multi_value: true,
        ..wasmparser::WasmFeatures::default()
    });
    if let Err(e) = validator.validate_all(&bytes) {
        std::fs::write("test.wasm", bytes).unwrap();
        panic!("Invalid module: {}", e);
    }
});

We can run this fuzz target with cargo fuzz:

$ cargo fuzz run validate

Initially, I used this fuzz target to test my generator, making sure that it was actually generating valid Wasm modules. Then it stopped finding bugs in my generator, and soon after that it started finding bugs in wasmparser’s validation implementation. It has already found five unique validation bugs! And it isn’t like wasmparser has particularly low-hanging fruit: we already fuzz wasmparser in a variety of different ways 24/7 on OSS-Fuzz.


Writing a test case generator for a programming language isn’t magic, and it isn’t very difficult once you know the pattern to follow. And once you have the test case generator written, it can poke deep into your compiler, past the parser, helping you find a bunch of hidden bugs!

wasm-smith is already quite promising and I intend to keep developing and maintaining it. Next I want to define a bunch of fuzz targets that use wasm-smith for Wasmtime and its ecosystem of crates and tools. I also want to add support for swarm testing, toggling various Wasm proposals (such as multi-value and reference types) on and off, and add a mode that guarantees termination of its generated programs.

Thanks for reading!

0 It should be noted that this technique, while it enables using libFuzzer and AFL with our generator, does not require their use. We can still use a purely random approach by filling a buffer with a random number generator and then using that buffer as our input sequence of predetermined choices. This would, for example, let us fuzz with our test case generator on platforms that aren’t supported by AFL or libFuzzer.
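As a standalone sketch of that idea (illustrative, not the arbitrary crate's actual API): any byte buffer, whether fuzzer-provided or filled from an RNG, can serve as the sequence of predetermined choices.

```rust
// A minimal stand-in for `Unstructured`: a cursor over
// a predetermined sequence of bytes. The bytes can come
// from a fuzzer or from any random number generator.
struct Choices<'a> {
    data: &'a [u8],
    pos: usize,
}

impl<'a> Choices<'a> {
    fn new(data: &'a [u8]) -> Self {
        Choices { data, pos: 0 }
    }

    // Consume one byte as a boolean choice; once the
    // input is exhausted, always answer `false` (stop),
    // which guarantees generation terminates.
    fn arbitrary_bool(&mut self) -> bool {
        let b = self.data.get(self.pos).copied().unwrap_or(0);
        self.pos += 1;
        b & 1 == 1
    }
}
```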

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR26 available

ma, 24/08/2020 - 04:41
TenFourFox Feature Parity Release 26 final is now (finally) available for testing (downloads, hashes, release notes). The delay is due to the severe heat wave and rolling blackouts we had here in overly sunny Southern California; besides the fact that Quad G5s have never been considered particularly power-thrifty, I had the A/C reduced to save electricity further and running the G5 and the Talos II simultaneously would have made the rear office absolutely miserable. There are no additional changes other than outstanding security updates, though since we will be switching to ESR78 for FPR27 anyway, I pulled a few lower-priority security and stability fixes from ESR78 in advance that didn't make it to ESR68. Assuming all goes well, it will go live tomorrow (Monday) afternoon/evening Pacific time.

For FPR27 we will be switching over the EV and TLS roots that we usually do for an ESR switch, and I would like to make a first pass at "sticky Reader mode" as well. More soon.


Karl Dubost: Browser Wish List - Tabs Time Machine

za, 22/08/2020 - 00:30

Ask people around you how many tabs are open in their current desktop browser session, and most will have around the following per session (July 2020):

  • Release: 4 tabs (median) and 8 tabs (mean)
  • Nightly: 4 tabs (median) and 51 tabs (mean)

(Having a graph of the full distribution would be interesting here.)

It would be interesting to see the exact distribution, because there is a cohort with a very high number of tabs. I usually have between 300 and 500 tabs open, and sometimes I clean up everything. But after an internal discussion at Mozilla, I realized some people had even more, toward a couple of thousand tabs open at once.

While we are definitely not the majority, we are a group of people who work with browsers intensively and have specific needs that browsers currently do not address. We also have to be careful with these stats, which auto-select a group of people: if there is nothing to help manage a high number of tabs, it is unlikely that many people will be ready to painstakingly manage a high number of tabs.

The Why?

I use a lot of tabs.

But if I turn my head to my bookshelf, there are probably 2000+ books in there. My browser is a bookshelf or library of content and a desk, but one which is currently not very good at organizing my content. I keep tabs open

  • to access reference content (articles, guidebook, etc)
  • to talk about it later on with someone else or in a blog post
  • to have access to tasks (opening 30 bugs I need to go through this week)

I sometimes open the same tab twice. I close a tab by mistake without realizing it, and then when I search for the content again I can't find it. I can't do a full-text search on all open tabs. I can only manage tabs vertically with an add-on (right now I'm using Tabs Center Redux). And if by bad luck we are offline and the tabs had not been navigated beforehand, we lose the full content we needed.

So I’m often grumpy at my browser.

What I want: Content Management

Here I will be focusing on my own use case and needs.

What I would love is an “Apple Time Machine”-like for my browser, aka dated archives of my browsing session, with full text search.

  • Search through text keyword all tabs content, not only the title.
  • Possibility to filter search queries with time and uri. "Search this keyword only on wikipedia pages opened less than one week ago"
  • Tag tabs to create collections of content.
  • Archive the same exact uri at different times. Imagine the homepage of the NYTimes at different dates or times and keeping each version locally. (Webarchive is incomplete and online, I want it to work offline).
  • The storage format doesn't need to be the full stack of technologies of the current page. Opera Mini for example is using a format which is compressing the page as a more or less interactive image with limited capabilities.
  • You could add automation with an automatic backup of everything you are browsing, or have the possibility to manually select the pages you want to keep (like when you decide to pin a tab)
  • If the current computer doesn't have enough storage for your needs, an encrypted (paid) service could be provided where you would specify which page you want to be archived away and the ones that you want to keep locally.

Firefox becomes a portable bookshelf and the desk with the piles of papers you are working on.

Browser Innovation

Innovation in browsers doesn't have to be only about supported technologies; it can also be about features of the browser itself. I have the feeling that we have dropped the ball on many things as we race to be transparent with regard to websites and applications. Giving web developers technologies to create cool things is certainly very useful, but making the browser more useful for its immediate users is just as important. I don't want the browser to disappear into this mediating UI; I want it to give me more ways to manage and mitigate my interactions with the Web.

Slightly Related

Open tabs are cognitive spaces by Michail Rybakov.

It is time we stop treating websites as something solitary and alien to us. Web pages that we visit and leave open are artifacts of externalized cognition; keys to thinking and remembering.

The browser of today is a transitory space that brings us into a mental state, not just to a specific website destination. And we should design our browsers for this task.


Responses/Comments from the Web

Hacks.Mozilla.Org: An Update on MDN Web Docs

vr, 21/08/2020 - 20:03

Last week, Mozilla announced some general changes in our investments and we would like to outline how they will impact our MDN platform efforts moving forward. It hurts to make these cuts, and it’s important that we be candid on what’s changing and why.

First we want to be clear, MDN is not going away. The core engineering team will continue to run the MDN site and Mozilla will continue to develop the platform.

However, because of Mozilla’s restructuring, we have had to scale back our overall investment in developer outreach, including MDN. Our Co-Founder and CEO Mitchell Baker outlines the reasons why here. As a result, we will be pausing support for DevRel sponsorship, Hacks blog and Tech Speakers. The other areas we have had to scale back on staffing and programs include: Mozilla developer programs, developer events and advocacy, and our MDN tech writing.

We recognize that our tech writing staff drive a great deal of value to MDN users, as do partner contributions to the content. So we are working on a plan to keep the content up to date. We are continuing our planned platform improvements, including a GitHub-based submission system for contributors.

We believe in the value of MDN Web Docs as a premier web developer resource on the internet. We are currently planning how to move MDN forward long term, and will develop this new plan in close collaboration with our industry partners and community members.

Thank you all for your continued care and support for MDN,

— Rina Jensen, Director, Contributor Experience

The post An Update on MDN Web Docs appeared first on Mozilla Hacks - the Web developer blog.
