Daniel Stenberg: target-independent libcurl headers

Mozilla planet - Thu, 15/06/2017 - 08:46

We write libcurl to be very portable. It can be built and run on virtually every operating system with a CPU architecture that is at least 32 bits, from some of the most legacy Unixes of the early 90s to the most recent updates to all the popular systems, including the widespread mobile platforms.

Type sizes on different archs

In the early 2000s we added support to libcurl for “large files” (back in the days when that support wasn’t always present in your operating system) and large variable types (beyond 32 bits), to work for applications and libcurl alike, and to work the same way for libcurl-using applications independently of which platform you’d compile the code on.

We started out using compiler/system defines to figure out for example the size of the native “off_t” type to know if it was 32 bit or 64 bit. That turned out to be problematic as users accidentally ended up in situations where the library considered a type to be one size and the application considered it to be another, leading to unexpected behaviors at best or downright crashes and misery.

Determine at lib build-time

The fix to that run-time size-of-variables confusion was to generate a fixed “outcome” at build-time that would then be used by both the library and applications, so that they could never again disagree on this. The obvious downside was that we had to generate this target-specific information into a public header for the library (known as curl/curlbuild.h). We didn’t like doing it this way, but the approach was still an improvement, as it caused fewer headaches for users.

This instead created problems for system packagers who wanted to provide one set of curl headers and allow users to build, for example, either a 32 bit or a 64 bit version of their application – they had to generate two sets of curl headers. The same went for keeping the headers on a shared file system used by many different systems. Inconvenient. But as this solution didn’t hurt too many people, was a cumbersome problem to fix, and was still possible to work around, it remained in the curl project since August 7, 2008 (commit 14240e9e109fe6af1).

Determine at app build-time

In March 2017 (commit 9506d01ee5) we introduced a new take on this problem: a new header that checks system defines and determines all the necessary information at the time the application is compiled, instead of at the time libcurl is compiled. We call it curl/system.h.

The goal was to replace the generated curlbuild.h header, but since it would cause serious problems if this new header produced any results (like variable type sizes) different from the old header, it was a risky move. We needed extra seat-belts for this.

We therefore added the new header next to the old one, and introduced a test case in the curl test suite that verifies the output of the two systems and makes sure they agree, with both headers coexisting in the curl source tree. The curl/system.h file was of course not yet used for anything real, but it was tested by everyone who ran the test suite – to make sure it isn’t awful.

We think the new header file has now proven itself worthy. We have not gotten any recent reports on problems with test 1541. It is time to cut out the old header system and launch the new!

Starting in release curl 7.55.0, due to be released on August 9, 2017, the header files will finally again be truly platform agnostic. It took us nine years but we finally did it! The bulk of the change is made in this commit.

Just another detail in the machinery.

Daniel Pocock: Croissants, Qatar and a Food Computer Meetup in Zurich

Mozilla planet - Wed, 14/06/2017 - 21:53

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations: Project 21 at ETH, the Debian Project, and the Free Software Foundation Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter, and Qatar’s efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the embargo led by Saudi Arabia.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

Mozilla Addons Blog: WebExtensions in Firefox 55

Mozilla planet - Wed, 14/06/2017 - 21:41

Firefox 55 landed in Beta this week, so it’s time for another update on WebExtensions. Because the development period for this latest release was about twice as long as normal, we have many more updates. Documentation for the APIs discussed here can be found on MDN.

APIs

The webRequest API has seen many improvements. Empty types and URLs in webRequest filters are now rejected. Requests can be cancelled before cookie processing occurs. Websockets can be processed through webRequest using the ws:// and wss:// protocols. Requests from the top frame now have the correct frameId, and more error conditions on requests are picked up by the onErrorOccurred event.
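
For instance, cancelling a request before cookie processing is a matter of returning { cancel: true } from a blocking onBeforeRequest listener. A minimal sketch (the tracker host is hypothetical, and the manifest would need the webRequest and webRequestBlocking permissions):

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Cancelled here, before Firefox processes any cookies for the request.
    return { cancel: true };
  },
  { urls: ["wss://tracker.example.com/*", "ws://tracker.example.com/*"] },
  ["blocking"]
);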

The sidebar API now re-opens automatically if you reload the add-on using about:debugging or web-ext. If you are following along with project Photon, you’ll note that the sidebar works great with the new Photon designs. Shiny!

The runtime.onMessageExternal API has been implemented, which allows WebExtensions add-ons to communicate with other WebExtensions add-ons. The runtime.onInstalled API will now activate if an add-on is installed temporarily, and the event will now include the previousVersion of the extension.
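
A short sketch of listening for those install events (the temporary property on the details object is our assumption of how the temporary-install case is surfaced):

browser.runtime.onInstalled.addListener((details) => {
  if (details.reason === "update") {
    console.log(`Updated from version ${details.previousVersion}`);
  }
  if (details.temporary) {
    console.log("Loaded temporarily, e.g. via about:debugging");
  }
});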

In order to limit the amount of CSS that a developer has to write and provide some degree of uniformity, there is a browser_style option for the browserAction API. We’ve also provided this to options V2 and the sidebar APIs.

Context menus now work in browserAction popups. The onClickData event in the context menu also gets the frameID. Context menu clicks can now open browser actions, page actions and sidebars. To do this, specify _execute_browser_action, _execute_page_action or _execute_sidebar_action in the command field for creating a context menu.
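
For example, a menu item that opens your extension’s sidebar might be created like this (the id and title are made up):

browser.contextMenus.create({
  id: "open-notes-sidebar",
  title: "Open notes sidebar",
  contexts: ["all"],
  command: "_execute_sidebar_action"
});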

If you load a page from your extension, you get a long moz-extension://… URL in the URL bar. We’ve added a notification in the identity box to indicate which extension loaded the page.

Other changes include:

A new API is now available for the nsIProfiler. This allows the Gecko Profiler to be used without legacy add-on support. This was essential for the Quantum Flow work happening in Firefox. Because of the sensitive nature of the content and the limited appeal of this API, access to it is currently restricted.

Permissions

With Firefox 55, the user interface for required and optional permissions is now enabled for WebExtensions add-ons. Required permissions and hosts will trigger a prompt on installation for the user. Here’s an example:

When an extension is updated and the hosts or permissions have changed, the current extension remains enabled, but the user has to accept the updated permissions in order to continue.

There is also a new user interface for side loading add-ons that is more consistent with other installation methods. Side loading is when extensions are installed outside of Firefox, by other software. It now appears in the hamburger menu as a notification:

This permissions dialog is slightly different as well:

Once an extension has been installed, if it would like more permissions or hosts, it can ask for those as needed — these are called optional permissions. They are accessible using the browser.permissions.request API. An example of using optional permissions is available in the example repository on github.
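
A rough sketch of asking for an optional permission from a click handler in an extension page (the element id and permission names are placeholders; the permissions must also appear under optional_permissions in the manifest, and the request must come from a user input handler):

document.querySelector("#enable-bookmarks").addEventListener("click", async () => {
  const granted = await browser.permissions.request({
    permissions: ["bookmarks"],
    origins: ["https://example.org/*"]
  });
  if (granted) {
    console.log("We can use the bookmarks API and example.org now");
  }
});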

Developer tools

With the introduction of devtools.inspectedWindow.eval bindings, many more add-ons are now able to support WebExtensions APIs. The developer tools team at Mozilla has been reaching out to developers with add-ons that might be affected as you can see on Twitter. For example, the Redux DevTools extension is now a WebExtensions add-on using the same code base as other browsers.

An API for devtools.panels.themeName has been implemented. The devtools panel icon is no longer inverted if a light theme is chosen.

There have been some improvements to the about:debugging page:

These changes are aimed at improving the ease of development. Temporary extensions now appear at the top of the page, a remove button is present, help is shown if the extension has a temporary ID, the location of the extension on the file system is shown, and the internal UUID is shown.

Android

Firefox for Android has gained browserAction support. Currently a textual menu item is added to the bottom of the menu on Android. It supports browserAction.onClicked, as well as setTitle and getTitle. Tabs support was added to pageAction.

Theming

The beginnings of theme support, as detailed in this blog post, have landed in Firefox. In Firefox 55 you can use the browser.theme.update API. The theme API allows you to set some key values in Firefox, such as:

browser.theme.update({
  images: {
    headerURL: "header.png",
  },
  colors: {
    accentcolor: "#000",
    textcolor: "#fff",
  }
});

This WebExtensions API will apply the theme to Firefox by setting the header image and some CSS colors. At this point the theme API applies a very similar set of functionality as the existing lightweight theme. However, using this API you can do this dynamically in your extension.

Additionally, browser.management APIs have been implemented for themes. These allow you to enable and disable themes by only using the management API. For an example check out the example repository on github.
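
A one-line sketch of switching themes that way (the add-on ID is hypothetical, and we assume enabling one theme deactivates the currently active one):

// Enable the theme with the given add-on ID.
browser.management.setEnabled("forest-theme@example.org", true);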

Proxy

The proxy API allows extension authors to insert proxy configuration files into Firefox. This API implementation is quite different from the one in Chrome to take advantage of some of the improved support in Firefox for proxies. As a result, to prevent confusion, this API is not present in the chrome namespace.

The proxy configuration file will contain a function for dealing with the incoming request:

function FindProxyForURL(url, host) {
  // ...
}

And this will then be registered in the API:

browser.proxy.registerProxyScript(filename).then();

For an example of using the proxy API, please see the repository on github.
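
Putting the two pieces together, a minimal proxy script might look like the following sketch, assuming PAC-style return values; the host and port are made up:

// proxy.js – registered with browser.proxy.registerProxyScript("proxy.js")
function FindProxyForURL(url, host) {
  // Route the hypothetical internal host through a SOCKS proxy...
  if (host === "intranet.example.com") {
    return "SOCKS 127.0.0.1:9050";
  }
  // ...and let everything else connect directly.
  return "DIRECT";
}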

Performance

One focus of the Firefox 55 release was the performance of WebExtensions, particularly the scenario where there is at least one WebExtension on startup.

These performance improvements include speeding up host matching, limiting the cloning of add-on messages, and lazily loading APIs only when they are needed. We’ve also added some telemetry measurements into Firefox, such as background page load times and extension start-up times.

The next largest performance gain is the moving of WebExtensions add-ons to their own process. To enable this, we made the debugging tools work seamlessly with out-of-process add-ons. We are hoping to enable this feature for Windows users in Firefox 56 once the remaining graphics issues have been resolved.

You can see some of the results of performance improvements in the Quantum newsletter, which Ehsan posts to his blog. These improvements aren’t limited to WebExtensions add-ons. For example, the introduction of off-main-thread script decoding brought a large performance improvement to startup measurements for all Firefox users, including those with WebExtensions add-ons:

Community

As ever we need to thank the community who contributed to this release. This includes: Tushar Saini, Tomislav Jovanovic, Rob Wu, Martin Giger and Geoff Lankow. Thank you to you all.

The post WebExtensions in Firefox 55 appeared first on Mozilla Add-ons Blog.

Air Mozilla: The Joy of Coding - Episode 102

Mozilla planet - Wed, 14/06/2017 - 19:00

The Joy of Coding - Episode 102 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Addons Blog: Add-ons Update – 2017/06

Mozilla planet - Wed, 14/06/2017 - 18:19

Here’s the monthly update of the state of the add-ons world.

The Road to Firefox 57 explains what developers should look forward to with regard to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 2,209 listed add-on submissions:

  • 1,202 in fewer than 5 days (54%).
  • 173 between 5 and 10 days (8%).
  • 834 after more than 10 days (38%).

235 listed add-ons are awaiting review.

If you compare these numbers with last month’s, you’ll see a very clear difference, both in reviews done and in add-ons still awaiting review. The admin reviewers have been doing an excellent job clearing the queues of add-ons that use the WebExtensions API, which are generally safer and can be reviewed more easily. There’s still work to do to clear the review backlog, but we’re on track to be in a good place by the end of the month.

However, this doesn’t mean we won’t need volunteer reviewers in the future. If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the compatibility blog post for Firefox 55, and the bulk validation script will be run in a week or so. The compatibility post for Firefox 56 is still a few weeks away.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest to ensure they continue working in Firefox. And as always, we recommend that you test your add-ons on Beta.

You may also want to review the post about upcoming changes to the Developer Edition channel. Firefox 55 is the first version that will move directly from Nightly to Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Tushar Saini
  • harikishen
  • Geoff Lankow
  • Trishul Goel
  • Andrew Truong
  • raajitr
  • Christophe Villeneuve
  • zombie
  • Perry Jiang
  • vietngoc

You can read more about their work on our recognition page.

The post Add-ons Update – 2017/06 appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.Org: A crash course in memory management

Mozilla planet - Wed, 14/06/2017 - 17:45

This is the 1st article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

To understand why ArrayBuffer and SharedArrayBuffer were added to JavaScript, you need to understand a bit about memory management.

You can think of memory in a machine as a bunch of boxes. I think of these like the mailboxes that you have in offices, or the cubbies that pre-schoolers use to store their things.

If you need to leave something for one of the other kids, you can put it inside a box.

A column of boxes with a child putting something in one of the boxes

Next to each one of these boxes, you have a number, which is the memory address. That’s how you tell someone where to find the thing you’ve left for them.

Each one of these boxes is the same size and can hold a certain amount of info. The size of the box is specific to the machine. That size is called word size. It’s usually something like 32 bits or 64 bits. But to make it easier to show, I’m going to use a word size of 8 bits.

A box with 8 smaller boxes in it

If we wanted to put the number 2 in one of these boxes, we could do it easily. Numbers are easy to represent in binary.

The number two, converted to binary 00000010 and put inside the boxes

What if we want something that’s not a number though? Like the letter H?

We’d need to have a way to represent it as a number. To do that, we need an encoding, something like UTF-8. And we’d need something to turn it into that number… like an encoder ring. And then we can store it.

The letter H, put through an encoder ring to get 72, which is then converted to binary and put in the boxes

When we want to get it back out of the box, we’d have to put it through a decoder to translate it back to H.
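
In JavaScript you can watch this round trip with the standard TextEncoder and TextDecoder APIs, which convert between strings and UTF-8 bytes:

const encoder = new TextEncoder(); // encodes to UTF-8
const bytes = encoder.encode("H");
console.log(bytes); // Uint8Array [ 72 ]

const decoder = new TextDecoder();
console.log(decoder.decode(bytes)); // "H"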

Automatic memory management

When you’re working in JavaScript you don’t actually need to think about this memory. It’s abstracted away from you. This means you don’t touch the memory directly.

Instead, the JS engine acts as an intermediary. It manages the memory for you.

A column of boxes with a rope in front of it and the JS engine standing at that rope like a bouncer

So let’s say some JS code, like React, wants to create a variable.

Same as above, with React asking the JS engine to create a variable

What the JS engine does is run that value through an encoder to get the binary representation of the value.

The JS engine using an encoder ring to convert the string to binary

And it will find space in the memory that it can put that binary representation into. This process is called allocating memory.

The JS engine finding space for the binary in the column of boxes

Then, the engine will keep track of whether or not this variable is still accessible from anywhere in the program. If the variable can no longer be reached, the memory is going to be reclaimed so that the JS engine can put new values there.

The garbage collector clearing out the memory

This process of watching the variables—strings, objects, and other kinds of values that go in memory—and clearing them out when they can’t be reached anymore is called garbage collection.

Languages like JavaScript, where the code doesn’t deal with memory directly, are called memory-managed languages.

This automatic memory management can make things easier for developers. But it also adds some overhead. And that overhead can sometimes make performance unpredictable.

Manual memory management

Languages with manually managed memory are different. For example, let’s look at how React would work with memory if it were written in C (which would be possible now with WebAssembly).

C doesn’t have that layer of abstraction that JavaScript does on the memory. Instead, you’re operating directly on memory. You can load things from memory, and you can store things to memory.

A WebAssembly version of React working with memory directly

When you’re compiling C or other languages down to WebAssembly, the tool that you use will add in some helper code to your WebAssembly. For example, it would add code that does the encoding and decoding of bytes. This code is called a runtime environment. The runtime environment will help handle some of the stuff that the JS engine does for JS.

An encoder ring being shipped down as part of the .wasm file

But for a manually managed language, that runtime won’t include garbage collection.

This doesn’t mean that you’re totally on your own. Even in languages with manual memory management, you’ll usually get some help from the language runtime. For example, in C, the runtime will keep track of which memory addresses are open in something called a free list.

A free list next to the column of boxes, listing which boxes are free right now

You can use the function malloc (short for memory allocate) to ask the runtime to find some memory addresses that can fit your data. This will take those addresses off of the free list. When you’re done with that data, you have to call free to deallocate the memory. Then those addresses will be added back to the free list.

You have to figure out when to call those functions. That’s why it’s called manual memory management—you manage the memory yourself.
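
To make the free-list idea concrete, here is a toy allocator sketched in JavaScript (a language that, being garbage collected, wouldn’t need one in real life). It hands out fixed-size boxes from a buffer and tracks open addresses in a free list:

class ToyAllocator {
  constructor(boxCount) {
    this.memory = new ArrayBuffer(boxCount * 8); // 8 bytes per box
    // Every box starts out on the free list.
    this.freeList = Array.from({ length: boxCount }, (_, i) => i);
  }
  malloc() {
    if (this.freeList.length === 0) throw new Error("out of memory");
    return this.freeList.pop(); // this address is now in use
  }
  free(address) {
    this.freeList.push(address); // address can be handed out again
  }
}

const allocator = new ToyAllocator(4);
const address = allocator.malloc(); // you have to remember to call...
allocator.free(address);            // ...free when you're done with it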

As a developer, figuring out when to clear out different parts of memory can be hard. If you do it at the wrong time, it can cause bugs and even lead to security holes. If you don’t do it, you run out of memory.

This is why many modern languages use automatic memory management—to avoid human error. But that comes at the cost of performance. I’ll explain more about this in the next article.

Hacks.Mozilla.Org: A cartoon intro to ArrayBuffers and SharedArrayBuffers

Mozilla planet - Wed, 14/06/2017 - 17:45

This is the 2nd article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

In the last article, I explained how memory-managed languages like JavaScript work with memory. I also explained how manual memory management works in languages like C.

Why is this important when we’re talking about ArrayBuffers and SharedArrayBuffers?

It’s because ArrayBuffers give you a way to handle some of your data manually, even though you’re working in JavaScript, which has automatic memory management.

Why is this something that you would want to do?

As we talked about in the last article, there’s a trade-off with automatic memory management. It is easier for the developer, but it adds some overhead. In some cases, this overhead can lead to performance problems.

A balancing scale showing that automatic memory management is easier to understand, but harder to make fast

For example, when you create a variable in JS, the engine has to guess what kind of variable this is and how it should be represented in memory. Because it’s guessing, the JS engine will usually reserve more space than it really needs for a variable. Depending on the variable, the memory slot may be 2–8 times larger than it needs to be, which can lead to lots of wasted memory.

Additionally, certain patterns of creating and using JS objects can make it harder to collect garbage. If you’re doing manual memory management, you can choose an allocation and de-allocation strategy that’s right for the use case that you’re working on.

Most of the time, this isn’t worth the trouble. Most use cases aren’t so performance sensitive that you need to worry about manual memory management. And for common use cases, manual memory management may even be slower.

But for those times when you need to work at a low-level to make your code as fast as possible, ArrayBuffers and SharedArrayBuffers give you an option.

A balancing scale showing that manual memory management gives you more control for performance fine-tuning, but requires more thought and planning

So how does an ArrayBuffer work?

It’s basically like working with any other JavaScript array. Except, when using an ArrayBuffer, you can’t put any JavaScript types into it, like objects or strings. The only thing that you can put into it are bytes (which you can represent using numbers).

Two arrays, a normal array which can contain numbers, objects, strings, etc, and an ArrayBuffer, which can only contain bytes

One thing I should make clear here is that you aren’t actually adding this byte directly to the ArrayBuffer. By itself, this ArrayBuffer doesn’t know how big the byte should be, or how different kinds of numbers should be converted to bytes.

The ArrayBuffer itself is just a bunch of zeros and ones all in a line. The ArrayBuffer doesn’t know where the division should be between the first element and the second element in this array.

A bunch of ones and zeros in a line

To provide context, to actually break this up into boxes, we need to wrap it in what’s called a view. These views on the data can be added with typed arrays, and there are lots of different kinds of typed arrays you can work with.

For example, you could have an Int8 typed array which would break this up into 8-bit bytes.

Those ones and zeros broken up into boxes of 8

Or you could have an unsigned Int16 array (a Uint16 array), which would break it up into 16-bit chunks and handle each one as if it were an unsigned integer.

Those ones and zeros broken up into boxes of 16

You can even have multiple views on the same base buffer. Different views will give you different results for the same operations.

For example, if we get elements 0 & 1 from the Int8 view on this ArrayBuffer, it will give us different values than element 0 in the Uint16 view, even though they contain exactly the same bits.

Those ones and zeros broken up into boxes of 16
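
Here is that situation in code: two views share one 4-byte buffer, and writing through one changes what the other reads (the exact Uint16 value depends on the machine’s endianness):

const buffer = new ArrayBuffer(4);
const int8View = new Int8Array(buffer);
const uint16View = new Uint16Array(buffer);

int8View[0] = 1;
int8View[1] = 1;

// Same bits, different interpretation:
console.log(int8View[0], int8View[1]); // 1 1
console.log(uint16View[0]); // 257 on little-endian machines (0x0101)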

In this way, the ArrayBuffer basically acts like raw memory. It emulates the kind of direct memory access that you would have in a language like C.

You may be wondering why don’t we just give programmers direct access to memory instead of adding this layer of abstraction. Giving direct access to memory would open up some security holes. I will explain more about this in a future article.

So, what is a SharedArrayBuffer?

To explain SharedArrayBuffers, I need to explain a little bit about running code in parallel and JavaScript.

You would run code in parallel to make your code run faster, or to make it respond faster to user events. To do this, you need to split up the work.

In a typical app, the work is all taken care of by a single individual—the main thread. I’ve talked about this before… the main thread is like a full-stack developer. It’s in charge of JavaScript, the DOM, and layout.

Anything you can do to remove work from the main thread’s workload helps. And under certain circumstances, ArrayBuffers can reduce the amount of work that the main thread has to do.

The main thread standing at its desk with a pile of paperwork. The top part of that pile has been removed

But there are times when reducing the main thread’s workload isn’t enough. Sometimes you need to bring in reinforcements… you need to split up the work.

In most programming languages, the way you usually split up the work is by using something called a thread. This is basically like having multiple people working on a project. If you have tasks that are pretty independent of each other, you can give them to different threads. Then, both those threads can be working on their separate tasks at the same time.

In JavaScript, the way you do this is using something called a web worker. These web workers are slightly different than the threads you use in other languages. By default they don’t share memory.

Two threads at desks next to each other. Their piles of paperwork are half as tall as before. There is a chunk of memory below each, but not connected to the other's memory

This means if you want to share some data with the other thread, you have to copy it over. This is done with the function postMessage.

postMessage takes whatever object you put into it, serializes it, and sends it over to the other web worker, where it’s deserialized and put in memory.

Thread 1 shares memory with thread 2 by serializing it, sending it across, where it is copied into thread 2's memory

That’s a pretty slow process.

For some kinds of data, like ArrayBuffers, you can do what is called transferring memory. That means moving that specific block of memory over so that the other web worker has access to it.

But then the first web worker doesn’t have access to it anymore.

Thread 1 shares memory with thread 2 by transferring it. Thread 1 no longer has access to it
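
In code, the difference between copying and transferring is just the second argument to postMessage. A sketch, assuming a worker.js exists:

const worker = new Worker("worker.js");

// Copy: the worker gets a clone; we keep ours.
const copied = new ArrayBuffer(1024);
worker.postMessage({ buffer: copied });
console.log(copied.byteLength); // 1024

// Transfer: the worker takes ownership; ours is detached.
const transferred = new ArrayBuffer(1024);
worker.postMessage({ buffer: transferred }, [transferred]);
console.log(transferred.byteLength); // 0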

That works for some use cases, but for many use cases where you want to have this kind of high performance parallelism, what you really need is to have shared memory.

This is what SharedArrayBuffers give you.

The two threads get some shared memory which they can both access

With the SharedArrayBuffer, both web workers, both threads, can be writing data and reading data from the same chunk of memory.

This means they don’t have the communication overhead and delays that you would have with postMessage. Both web workers have immediate access to the data.
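
A minimal sketch of that sharing, again assuming a worker.js:

// main.js
const worker = new Worker("worker.js");
const sab = new SharedArrayBuffer(1024);
worker.postMessage(sab); // no copy, no transfer – same memory on both sides

// worker.js
onmessage = (event) => {
  const shared = new Uint8Array(event.data);
  shared[0] = 42; // immediately visible to the main thread
};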

There is some danger in having this immediate access from both threads at the same time though. It can cause what are called race conditions.

Drawing of two threads racing towards memory

I’ll explain more about those in the next article.

What’s the current status of SharedArrayBuffers?

SharedArrayBuffers will be in all of the major browsers soon.

Logos of the major browsers high-fiving

They’ve already shipped in Safari (in Safari 10.1). Both Firefox and Chrome will be shipping them in their July/August releases. And Edge plans to ship them in their fall Windows update.

Even once they are available in all major browsers, we don’t expect application developers to be using them directly. In fact, we recommend against it. You should be using the highest level of abstraction available to you.

What we do expect is that JavaScript library developers will create libraries that give you easier and safer ways to work with SharedArrayBuffers.

In addition, once SharedArrayBuffers are built into the platform, WebAssembly can use them to implement support for threads. Once that’s in place, you’d be able to use the concurrency abstractions of a language like Rust, which has fearless concurrency as one of its main goals.

In the next article, we’ll look at the tools (Atomics) that these library authors would use to build up these abstractions while avoiding race conditions.

Layer diagram showing SharedArrayBuffer + Atomics as the foundation, and JS libraries and WebAssembly threading building on top

Hacks.Mozilla.Org: Avoiding race conditions in SharedArrayBuffers with Atomics

Mozilla planet - Wed, 14/06/2017 - 17:44

This is the 3rd article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

In the last article, I talked about how using SharedArrayBuffers could result in race conditions. This makes working with SharedArrayBuffers hard. We don’t expect application developers to use SharedArrayBuffers directly.

But library developers who have experience with multithreaded programming in other languages can use these new low-level APIs to create higher-level tools. Then application developers can use these tools without touching SharedArrayBuffers or Atomics directly.

Layer diagram showing SharedArrayBuffer + Atomics as the foundation, and JS libraries and WebAssembly threading building on top

Even though you probably shouldn’t work with SharedArrayBuffers and Atomics directly, I think it’s still interesting to understand how they work. So in this article, I’ll explain what kinds of race conditions concurrency can bring, and how Atomics help libraries avoid them.

But first, what is a race condition?

Drawing of two threads racing towards memory

 

Race conditions: an example you may have seen before

A pretty straightforward example of a race condition can happen when you have a variable that is shared between two threads. Let’s say one thread wants to load a file and the other thread checks whether it exists. They share a variable, fileExists, to communicate.

Initially, fileExists is set to false.

Two threads working on some code. Thread 1 is loading a file if fileExists is true, and thread 2 is setting fileExists

As long as the code in thread 2 runs first, the file will be loaded.

Diagram showing thread 2 going first and file load succeeding

But if the code in thread 1 runs first, then it will log an error to the user, saying that the file does not exist.

Diagram showing thread 1 going first and file load failing
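
As a pseudocode-style sketch (real workers don’t share plain variables, which is exactly what SharedArrayBuffers will fix), whichever piece runs first decides the outcome:

// Shared between the two threads.
let fileExists = false;

// Thread 1: load the file if it's there.
if (fileExists) {
  loadFile();
} else {
  console.error("file does not exist");
}

// Thread 2: record that the file is now available.
fileExists = true;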

But that’s not the problem. It’s not that the file doesn’t exist. The real problem is the race condition.

Many JavaScript developers have run into this kind of race condition, even in single-threaded code. You don’t have to understand anything about multithreading to see why this is a race.

However, there are some kinds of race conditions which aren’t possible in single-threaded code, but that can happen when you’re programming with multiple threads and those threads share memory.

Different classes of race conditions and how Atomics help

Let’s explore some of the different kinds of race conditions you can have in multithreaded code and how Atomics help prevent them. This doesn’t cover all possible race conditions, but should give you some idea why the API provides the methods that it does.

Before we start, I want to say again: you shouldn’t use Atomics directly. Writing multithreaded code is a known hard problem. Instead, you should use reliable libraries to work with shared memory in your multithreaded code.

Caution sign

With that out of the way…

Race conditions in a single operation

Let’s say you had two threads that were incrementing the same variable. You might think that the end result would be the same regardless of which thread goes first.

Diagram showing two threads incrementing a variable in turn

But even though, in the source code, incrementing a variable looks like a single operation, when you look at the compiled code, it is not a single operation.

At the CPU level, incrementing a value takes three instructions. That’s because the computer has both long-term memory and short-term memory. (I talk more about how this all works in another article).

Drawing of a CPU and RAM

All of the threads share the long-term memory. But the short-term memory—the registers—are not shared between threads.

Each thread needs to pull the value from memory into its short-term memory. After that, it can run the calculation on that value in short-term memory. Then it writes that value back from its short-term memory to the long-term memory.

Diagram showing a variable being loaded from memory to a register, then being operated on, and then being stored back to memory

If all of the operations in thread 1 happen first, and then all the operations in thread 2 happen, we will end up with the result that we want.

Flow chart showing instructions happening sequentially on one thread, then the other

But if they are interleaved in time, the value that thread 2 has pulled into its register gets out of sync with the value in memory. This means that thread 2 doesn’t take thread 1’s calculation into consideration. Instead, it just clobbers the value that thread 1 wrote to memory with its own value.

Flow chart showing instructions interleaved between threads

One thing atomic operations do is take these operations that humans think of as being single operations, but which the computer sees as multiple operations, and make the computer see them as single operations, too.

This is why they’re called atomic operations. It’s because they take an operation that would normally have multiple instructions—where the instructions could be paused and resumed—and it makes it so that they all happen seemingly instantaneously, as if it were one instruction. It’s like an indivisible atom.

Instructions encased in an atom

Using atomic operations, the code for incrementing would look a little different.

Atomics.add(sabView, index, 1)

Now that we’re using Atomics.add, the different steps involved in incrementing the variable won’t be mixed up between threads. Instead, one thread will finish its atomic operation and prevent the other one from starting. Then the other will start its own atomic operation.

Flow chart showing atomic execution of the instructions

The Atomics methods that help avoid this kind of race are:

  • Atomics.add
  • Atomics.sub
  • Atomics.and
  • Atomics.or
  • Atomics.xor
  • Atomics.exchange

You’ll notice that this list is fairly limited. It doesn’t even include things like division and multiplication. A library developer could create atomic-like operations for other things, though.

To do that, the developer would use Atomics.compareExchange. With this, you get a value from the SharedArrayBuffer, perform an operation on it, and only write it back to the SharedArrayBuffer if no other thread has updated it since you first checked. If another thread has updated it, then you can get that new value and try again.
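
For example, an atomic-like multiply could be built on that retry loop. A sketch, where sabView is assumed to be an Int32Array over a SharedArrayBuffer:

function atomicMultiply(sabView, index, factor) {
  let oldValue, newValue;
  do {
    oldValue = Atomics.load(sabView, index);
    newValue = oldValue * factor;
    // compareExchange returns the value it actually found. If another
    // thread changed it since our load, the store didn't happen: retry.
  } while (Atomics.compareExchange(sabView, index, oldValue, newValue) !== oldValue);
  return newValue;
}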

Race conditions across multiple operations

So those Atomic operations help avoid race conditions during “single operations”. But sometimes you want to change multiple values on an object (using multiple operations) and make sure no one else is making changes to that object at the same time. Basically, this means that during every pass of changes to an object, that object is on lockdown and inaccessible to other threads.

The Atomics object doesn’t provide any tools to handle this directly. But it does provide tools that library authors can use to handle this. What library authors can create is a lock.

Diagram showing two threads and a lock

If code wants to use locked data, it has to acquire the lock for the data. Then it can use the lock to lock out the other threads. Only it will be able to access or update the data while the lock is active.

To build a lock, library authors would use Atomics.wait and Atomics.wake, plus other ones such as Atomics.compareExchange and Atomics.store. If you want to see how these would work, take a look at this basic lock implementation.
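
A stripped-down sketch of such a lock, where lockView is assumed to be an Int32Array over a SharedArrayBuffer, 0 means unlocked and 1 means locked (note that Atomics.wait isn’t allowed on the main thread):

const UNLOCKED = 0;
const LOCKED = 1;

function lock(lockView, index) {
  // Try to flip 0 -> 1; if someone else holds the lock, sleep until woken.
  while (Atomics.compareExchange(lockView, index, UNLOCKED, LOCKED) !== UNLOCKED) {
    Atomics.wait(lockView, index, LOCKED);
  }
}

function unlock(lockView, index) {
  Atomics.store(lockView, index, UNLOCKED);
  Atomics.wake(lockView, index, 1); // wake one waiting thread
}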

In this case, thread 2 would acquire the lock for the data and set the value of locked to true. This means thread 1 can’t access the data until thread 2 unlocks.

Thread 2 gets the lock and uses it to lock up shared memory

If thread 1 needs to access the data, it will try to acquire the lock. But since the lock is already in use, it can’t. The thread would then wait—so it would be blocked—until the lock is available.

Thread 1 waits until the lock is unlocked

Once thread 2 is done, it would call unlock. The lock would notify one or more of the waiting threads that it’s now available.

Thread 1 is notified that the lock is available

That thread could then scoop up the lock and lock up the data for its own use.

Thread 1 uses the lock

A lock library would use many of the different methods on the Atomics object, but the methods that are most important for this use case are:

  • Atomics.wait
  • Atomics.wake

Race conditions caused by instruction reordering

There’s a third synchronization problem that Atomics take care of. This one can be surprising.

You probably don’t realize it, but there’s a very good chance that the code you’re writing isn’t running in the order you expect it to. Both compilers and CPUs reorder code to make it run faster.

For example, let’s say you’ve written some code to calculate a total. You want to set a flag when the calculation is finished.

subTotal = price + fee;
total += subTotal;
isDone = true;

To compile this, we need to decide which register to use for each variable. Then we can translate the source code into instructions for the machine.

Diagram showing what that would equal in mock assembly

So far, everything is as expected.

What’s not obvious if you don’t understand how computers work at the chip level (and how the pipelines that they use for executing code work) is that line 2 in our code needs to wait a little bit before it can execute.

Most computers break down the process of running an instruction into multiple steps. This makes sure all of the different parts of the CPU are busy at all times, so it makes the best use of the CPU.

Here’s one example of the steps an instruction goes through:

  1. Fetch the next instruction from memory
  2. Figure out what the instruction is telling us to do (aka decode the instruction), and get the values from the registers
  3. Execute the instruction
  4. Write the result back to the register

So that’s how one instruction goes through the pipeline. Ideally, we want to have the second instruction following directly after it. As soon as it has moved into stage 2, we want to fetch the next instruction.

The problem is that there is a dependency between instruction #1 and instruction #2.

Diagram of a data hazard in the pipeline

We could just pause the CPU until instruction #1 has updated subTotal in the register. But that would slow things down.

To make things more efficient, what a lot of compilers and CPUs will do is reorder the code. They will look for other instructions which don’t use subTotal or total and move those in between those two lines.

Drawing of line 3 of the assembly code being moved between lines 1 and 2

This keeps a steady stream of instructions moving through the pipe.

Because line 3 didn’t depend on any values in line 1 or 2, the compiler or CPU figures it’s safe to reorder like this. When you’re running in a single thread, no other code will even see these values until the whole function is done, anyway.

But when you have another thread running at the same time on another processor, that’s not the case. The other thread doesn’t have to wait until the function is done to see these changes. It can see them almost as soon as they are written back to memory. So it can tell that isDone was set before total.

If you were using isDone as a flag that the total had been calculated and was ready to use in the other thread, then this kind of reordering would create race conditions.

Atomics attempt to solve some of these bugs. When you use an Atomic write, it’s like putting a fence between two parts of your code.

Atomic operations aren’t reordered relative to each other, and other operations aren’t moved around them. In particular, two operations that are often used to enforce ordering are:

  • Atomics.store
  • Atomics.load

All variable updates above Atomics.store in the function’s source code are guaranteed to be done before Atomics.store is done writing its value back to memory. Even if the non-Atomic instructions are reordered relative to each other, none of them will be moved below a call to Atomics.store which comes below in the source code.

And all variable loads after Atomics.load in a function are guaranteed to be done after Atomics.load fetches its value. Again, even if the non-atomic instructions are reordered, none of them will be moved above an Atomics.load that comes above them in the source code.

Diagram showing Atomics.store and Atomics.load maintaining order
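
As a sketch, with view assumed to be an Int32Array over a SharedArrayBuffer where slot 0 holds the total and slot 1 the isDone flag (priceInCents and feeInCents are placeholders):

// Worker A: write the data first, publish the flag last.
view[0] = priceInCents + feeInCents;
Atomics.store(view, 1, 1); // writes above can't sink below this store

// Worker B: spin until the flag is set; only then read the data.
while (Atomics.load(view, 1) === 0) {
  // busy-wait – this is the spinlock the note below warns about
}
console.log(view[0]); // guaranteed to see Worker A's total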

Note: The while loop I show here is called a spinlock and it’s very inefficient. And if it’s on the main thread, it can bring your application to a halt. You almost certainly don’t want to use that in real code.

Once again, these methods aren’t really meant for direct use in application code. Instead, libraries would use them to create locks.

Conclusion

Programming multiple threads that share memory is hard. There are many different kinds of race conditions just waiting to trip you up.

Drawing of shared memory with a dragon and "Here be dragons" above

This is why you don’t want to use SharedArrayBuffers and Atomics in your application code directly. Instead, you should depend on proven libraries by developers who are experienced with multithreading, and who have spent time studying the memory model.

It is still early days for SharedArrayBuffer and Atomics. Those libraries haven’t been created yet. But these new APIs provide the basic foundation to build on top of.

The Mozilla Blog: Mozilla Launches Campaign to Raise Awareness for Internet Health

Mozilla planet - Wed, 14/06/2017 - 17:00

Today, Mozilla unveils several initiatives including an event focused on Internet Health with special guests DeRay McKesson, Lauren Duca and more, a brand new podcast, new tech to help create a voice database, as well as some local SF pop-ups.

Mozilla is doing this to draw the public’s attention to mounting concern over the consolidation of power online, including the Federal Communications Commission’s proposed actions to kill net neutrality.

New Polling

60 percent of people in the U.S. are worried about online services being owned by a small number of companies, according to a new Mozilla/Ipsos poll released today.

“The Internet is a vital tool that touches every aspect of modern life,” said Mark Surman, Mozilla’s Executive Director. “If you care about freedom of speech, economic growth and a level playing field, then you care about guarding against those who would throttle, lock down or monopolize the web as if they owned it.”

According to another Mozilla/Ipsos poll, 76 percent of people in the U.S. support net neutrality.

“At Mozilla, we’re fueling a movement to ensure the web is something that belongs to all of us. Forever,” Surman added.

“A Night for Internet Health”

On Thursday, June 29, Mozilla will host “A Night for Internet Health” — a free live event featuring prominent thinkers, performers, and political voices discussing power, progress, and life on the Web.

Mozilla will be joined by musician Neko Case, Pod Save the People host DeRay McKesson, Teen Vogue columnist Lauren Duca, comedian Moshe Kasher, tech media personality Veronica Belmont, and Sens. Al Franken and Ron Wyden via video.

The event is from 7-10 p.m. (PDT), June 29 at the SFJazz Center in San Francisco. Tickets will be available through the Center’s Box Office starting on June 15.

Credentials are available for media.

IRL podcast

On June 26, Mozilla will debut the podcast IRL: Because Online Life is Real Life. Host Veronica Belmont will share stories from the wilds of the Web, and real talk about online issues that affect us all.

People can listen to the IRL trailer or pre-subscribe to IRL on Apple Podcasts, Stitcher, Pocket Casts, Overcast, or RadioPublic.

Project Common Voice: The World’s First Crowdsourced Voice Database

Voice-enabled devices represent the next major disruption, but access to databases is expensive and doesn’t include a diverse set of accents and languages. Mozilla’s Project Common Voice aims to solve the problem by inviting people to donate samples of their voices to a massive global project that will allow anyone to quickly and easily train voice-enabled applications. Mozilla will make this resource available to the public later this year.

The project will be featured at guerilla pop-ups in San Francisco, where people can also create custom tote bags or grab a T-shirt that expresses their support for a healthy Internet and net neutrality.

Locations:

Pop-ups:
  • Wednesday, June 28: From noon – 6 p.m. PDT at Justin Herman Plaza in San Francisco.
  • Thursday, June 29: From 7 – 10 p.m. PDT at SFJazz in San Francisco.
  • Friday, June 30 – Saturday, July 1: From noon – 6 p.m. PDT at Union Square in San Francisco.

SF Take-Over

Beginning on Monday, June 19, Mozilla will launch a provocative advertising campaign across San Francisco and online, highlighting what’s at stake with the attacks on net neutrality and power consolidation on the web.

The advertisements juxtapose opposing messages, highlighting the power dynamics of the Internet and offering steps people can take to create a healthier Internet. For example, one advertisement contrasts “Let’s Kill Innovation” with “Actually, let’s not. Raise your voice for net neutrality.”

San Franciscans and visitors will see the ads across the city; they will be placed along Market and Embarcadero Streets and at San Francisco Airport, projected on buildings, and will also run online, on radio, on social media and on prominent websites.

About Mozilla

Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets and mobile devices. For more information, visit www.mozilla.org.

The post Mozilla Launches Campaign to Raise Awareness for Internet Health appeared first on The Mozilla Blog.

Robert O'Callahan: New "rr pack" Command

Mozilla planet - Wed, 14/06/2017 - 15:04

I think there's huge potential to use rr for debugging cloud services. Apparently right now interactive debugging is mostly not used in the cloud, which makes sense — it's hard to identify the right process to debug, much less connect to it, and even if you could, stopping it for interactive analysis would likely interfere too much with your distributed system. However, with rr you could record any number of process executions without breaking your system, identify the failed runs after the fact, and debug them at your leisure.

Unfortunately there are a couple of problems making that difficult right now. One is that the largest cloud providers don't support the hardware performance counter rr needs. I'm excited to hear that Amazon has recently enabled some HW performance counters on dedicated hosts — hopefully they can be persuaded to add the retired-conditional-branch counter to their whitelist (and someone can fix the Xen PMU virtualization bug that breaks rr). Another problem is that rr's traces aren't easy to move from one machine to another. I've started addressing this problem by implementing a new rr command, rr pack.

There are two problems with rr traces. One is that on filesystems that do not support "reflink" file copies, to keep recording overhead low we sometimes hardlink files into the trace, or for system libraries we just assume they won't change even if we can't hardlink them. This means traces are not fully self-contained in the latter case, and in the former case the recording can be invalidated if the files change. The other problem is that every time an mmap occurs we clone/link a new file into the trace, even if a previous mmap mapped the same file, because we have no fast way of telling if the file has changed or not. This means traces appear to contain large numbers of large files but many of those files are duplicates.

rr pack fixes both of those problems. You run it on a trace directory in-place. It deduplicates trace files by computing a cryptographic hash (BLAKE2b, 256 bits) of each file and keeping only one file for any given hash. It identifies files the trace needs that live outside the trace directory, or are only hardlinked into it, and copies them into the trace directory. It rewrites trace records (the mmaps file) to refer to the new files, so the trace format hasn’t changed. You should be able to copy around the resulting trace, and modify any files outside the trace, without breaking it. I tried pretty hard to ensure that interrupted rr pack commands leave the trace intact (using fsync and atomic rename); of course, an interrupted rr pack may not fully pack the trace, so the operation should be repeated. Successful rr pack commands are idempotent.
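
Usage is minimal. A sketch, assuming rr’s usual convention that subcommands default to the most recent trace:

# Pack the most recent trace in-place; repeating it is safe (idempotent).
rr pack

# Or pack a specific trace directory before copying it to another machine.
rr pack /path/to/trace-dir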

We haven't really experimented with trace portability yet so I can't say how easy it will be to just zip up a trace directory and replay it on a different computer. We know that currently replaying on a machine with different CPUID values is likely to fail, but we have a solution in the works for that — Kyle's patches to add ARCH_SET_CPUID to control "CPUID faulting" are in Linux kernel 4.12 and will let rr record and replay CPUID values.

Air Mozilla: Rust Bay Area Meetup June 2017

Mozilla planet - Wed, 14/06/2017 - 04:00

Rust Bay Area Meetup June 2017 https://www.meetup.com/Rust-Bay-Area/ Tentative agenda:

  • Andrew Stone from VMWare talking about Haret
  • William Morgan from Buoyant talking about linkerd-tcp

Sean McArthur: hyper v0.11

Mozilla planet - Tue, 13/06/2017 - 22:27

The async release of hyper is here, version 0.11.0. There’s an updated website, and new guides to try to help you get up to speed with all the changes.

hyper is an HTTP library built in Rust, providing fast and safe client and server implementations.

v0.11

This release marks a form of stability for async hyper. This isn’t saying hyper’s API won’t continue to evolve (and break), but that when such a break happens, it will happen in a v0.12, and the changes will be concentrated. It should be possible to start building frameworks and tools using v0.11.

Even before v0.11 was tagged, many were so excited by the prospects of async hyper that they started using it already. Some examples:

  • sccache has been using hyper’s Client to manage resources in S3.
  • npm uses hyper for their Registry change stream
Async

The biggest deal here, of course, is the switch to non-blocking (or “async”) IO. This has been the push for this release for a long time, and the landscape in the Rust community changed a lot while we were working on this. Last year, a framework for building asynchronous network protocols was released: Tokio. There are a lot of great things to say about it, and hyper has embraced it fully.

This means a big change in API.

For instance, Request and Response bodies are no longer used via the std::io::{Read, Write} traits. Instead, bodies are Streams of bytes. Streams are essentially a Future that can resolve multiple times, which matches how an async connection works: bunches of bytes are received at different times.

By integrating with Tokio, hyper and the community gain a lot. Adding in Transport Layer Security is just combining hyper::server::Http with something like tokio_tls::TlsServer. That same TlsServer can be plugged into any protocol, and Http can be wrapped in any other community piece implementing the right trait. The same can be done with other concepts, like generic timeouts.

Hop over to the guides if you’d like to see how to get working examples.

Headers

Being a large breaking change release, an opportunity was taken to refine the headers system in hyper. Some standout changes:

  • A Raw type was added, and the set_raw, get_raw, etc. methods now use it. It allows for a more ergonomic way of adding raw header values, and it’s also faster in most cases.
  • The HeaderFormat trait has been merged into the Header trait. They were previously separate due to trait object safety rules, but now that trait methods can have a where Self: Sized added, there is no need to separate them.
  • The semantics of Header::fmt_header were clarified. Most of the time, headers can be written on one line. There is the rare exception (technically only Set-Cookie is specified) where each “value” must be on a separate line. Now, fmt_header receives a hyper::header::Formatter, with only a fmt_line method. Pretty much every header can just implement std::fmt::Display, and call f.fmt_line(self), but now Set-Cookie doesn’t need to use a hack to format itself.
Performance

hyper v0.10 was no slouch. It can churn through requests and pump out responses. However, as it uses blocking IO, it hits a problem when you have tons of connections to your server at the same time. Blocking IO means it needs to use multiple threads, only being able to deal with 1 connection per thread. Threads, when you have a lot, get to be expensive.1 So, switching to non-blocking IO means that we keep going fast, but each additional connection isn’t nearly as expensive.

hyper v0.11 is fast2, and handles thousands of connections like a champ.

Changelog

The changes are big. There is a changelog if you want to see all of them. The changelog tries to only contain changes from v0.10, but it’s not exhaustive.

Thanks

There are a lot of people to thank for getting this release out the door. This really is a fantastic community.

Next

hyper is now tracking the Futures and Tokio crates. Work is happening in there as well, as we find patterns and problems that aren’t unique to hyper, and should be available for any async protocol.

There has been community desire (and on the hyper team too!) to stabilize some sort of http crate. This would contain types for handling statuses, methods, versions, and headers, but without client or server or protocol version implementations. We’re trying to find a good design that supports all the possible use cases, and HTTP1 and HTTP2, without sacrificing any performance. Once such a thing exists, hyper would likely replace the types it uses with those.

In doing the above, that may mean that hyper’s current headers system won’t fit. It might make sense to break that out into its own crate, so that people who want typed headers can have them, while a bare-bones server could live without them. This would also help reqwest on its road to 1.0, since it publicly exports hyper::header, but hyper likely won’t reach v1.0 before it.

And of course, we always want to go faster. That will never stop!

v0.11.0

Again, go get it! Read the new guides. Tell us what you think!

  1. hyper uses a set number of threads, not growing as more connections are made. It’s a different trade-off, but not too relevant for explaining why non-blocking IO is better.

  2. hyper doesn’t lead the pack in benchmarks (yet), but it’s not at the back either. The last benchmark put it at 58% of the requests per second of the fastest entry. Since that benchmark was published, some significant low-hanging performance improvements have been made. A new preview should be available soon. And we’ll keep going!

Categorieën: Mozilla-nl planet

Air Mozilla: Rust Libs Meeting 2017-06-13

Mozilla planet - di, 13/06/2017 - 22:00

Rust Libs Meeting 2017-06-13 walkdir crate evaluation

Categorieën: Mozilla-nl planet

The Mozilla Blog: The Best Firefox Ever

Mozilla planet - di, 13/06/2017 - 21:00

With E10s, our new version of Firefox nails the “just right” balance between memory and speed


On the Firefox team, one thing we always hear from our users is that they rely on the web for complex tasks like trip planning and shopping comparisons. That often means having many tabs open. And the sites and web apps running in those tabs often have lots of things going on: animations, videos, big pictures and more. Complex sites are more and more common. The average website today is nearly 2.5 megabytes, the same size as the original version of the game Doom, according to Wired. Up until now, a complex site in one Firefox tab could slow down all the others. That often meant a less-than-perfect browsing experience.

To make Firefox run even complex sites faster, we’ve been changing it to run using multiple operating system processes. Translation? The old Firefox used a single process to run all the tabs in a browser. Modern browsers split the load into several independent processes. We named our project to split Firefox into multiple processes ‘Electrolysis’ (or E10s) after the chemical process that divides water into its core elements. E10s is the largest change to Firefox code in our history. And today we’re launching our next big phase of the E10s initiative.

A Faster Firefox With Four Content Processes

With today’s release, Firefox uses up to four processes to run web page content across all open tabs. This means that a heavy, complex web page in one tab has a much lower impact on the responsiveness and speed in other tabs. By separating the tabs into separate processes, we make better use of the hardware on your computer, so Firefox can deliver you more of the web you love, with less waiting.

I’ve been living with this turned on by default in the pre-release version of Firefox (Nightly). The performance improvements are remarkable. Besides running faster and crashing less, E10s makes websites feel smoother. Even busy pages, like Facebook newsfeeds, spool out smoothly and cleanly. After making the switch to Firefox with E10s, now I can’t live without it.

Firefox 54 with E10s makes sites run much better on all computers, especially on computers with less memory. Firefox aims to strike the “just right” balance between speed and memory usage. To learn more about Firefox’s multi-process architecture, and how it’s different from Chrome’s, check out Ryan Pollock’s post about the search for the Goldilocks browser.

Multi-Process Without Memory Bloat

[Chart: Firefox wins the memory usage comparison]

In our tests comparing memory usage for various browsers, we found that Firefox used significantly less RAM than other browsers on Windows 10, macOS, and Linux. (RAM stands for Random Access Memory, the type of memory that stores the apps you’re actively running.) This means that with Firefox you can browse freely, but still have enough memory left to run the other apps you want to use on your computer.

The Best Firefox Ever

This is the best release of Firefox ever, with improvements that will be very noticeable to even casual users of our beloved browser. Several other enhancements are shipping in Firefox today, and you can visit our release notes to see the full list. If you’re a web developer, or if you’ve built a browser extension, check out the Hacks Blog to read about all the new Web Platform and WebExtension APIs shipping today.

As we continue to make progress on Project Quantum, we are pushing forward in building a completely revamped browser made for modern computing. It’s our goal to make Firefox the fastest and smoothest browser for PCs and mobile devices. Through the end of 2017, you’ll see some big jumps in capability and performance from Team Firefox. If you stopped using Firefox, try it again. We think you’ll be impressed. Thank you and let us know what you think.

The post The Best Firefox Ever appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Air Mozilla: Selling Your Attention: The Web and Advertising with Tim Wu

Mozilla planet - di, 13/06/2017 - 21:00

The Web and Advertising with Tim Wu: You don't need cash to search Google or to use Facebook, but they're not free. We pay for these services with our attention and with...

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 54: E10S-Multi, WebExtension APIs, CSS clip-path

Mozilla planet - di, 13/06/2017 - 20:57

“E10S-Multi:” A new multi-process model for Firefox

Today’s release completes Firefox’s transformation into a fully multi-process browser, running many simultaneous content processes in addition to a UI process and, on Windows, a special GPU process. This design makes it easier to utilize all of the cores available on modern processors and, in the future, to securely sandbox web content. It also improves stability, ensuring that a single content process crashing won’t take out all of your other tabs, nor the rest of the browser.

[Illustration: Firefox’s new multi-process architecture, with one UI process talking to four content processes, each running several tabs.]

An initial version of multi-process Firefox (codenamed “Electrolysis”, or “e10s” for short) debuted with Firefox 48 last August. This first version moved Firefox’s UI into its own process so that the browser interface remains snappy even under load. Firefox 54 takes this further by running many content processes in parallel: each one with its own RAM and CPU resources managed by the host operating system.

Additional processes do come with a small degree of memory overhead, no matter how well optimized, but we’ve worked wonders to reduce this to the bare minimum. Even with those optimizations, we wanted to do more to ensure that Firefox is respectful of your RAM. That’s why, instead of spawning a new process with every tab, Firefox sets an upper limit: four by default, but configurable by users (dom.ipc.processCount in about:config). This keeps you in control, while still letting Firefox take full advantage of multi-core CPUs.

To learn more about Firefox’s multi-process architecture, check out this Medium post about the search for the “Goldilocks” browser.

New WebExtension APIs

Firefox continues its rapid implementation of new WebExtension APIs. These APIs are designed to work cross-browser, and will be the only APIs available to add-ons when Firefox 57 launches this November.

Most notably, it’s now possible to create custom DevTools panels using WebExtensions. For example, the screenshot below shows the Chrome version of the Vue.js DevTools running in Firefox without any modifications. This dramatically reduces the maintenance burden for authors of devtools add-ons, ensuring that no matter which framework you prefer, its tools will work in Firefox.

[Screenshot: the Chrome version of the Vue.js DevTools extension running unmodified in Firefox.]
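
Under the hood, creating such a panel comes down to a single call from the extension's devtools page. A minimal sketch (the file names are hypothetical placeholders):

```js
// devtools.js, loaded via the manifest's devtools_page entry.
browser.devtools.panels
  .create("My Panel", "icons/panel.png", "panel.html")
  .then((panel) => {
    panel.onShown.addListener(() => {
      console.log("devtools panel shown");
    });
  });
```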

Additionally, you can read about the full set of new and changed APIs on the Add-ons Blog, or check out the complete WebExtensions documentation on MDN.

CSS shapes in clip-path

The CSS clip-path property allows authors to define which parts of an element are visible. Previously, Firefox only supported clipping paths defined as SVG files. With Firefox 54, authors can also use CSS shape functions for circles, ellipses, rectangles or arbitrary polygons (Demo).
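
For example, clipping an element to a hexagon no longer requires an external SVG file (a sketch; the class name is illustrative):

```css
/* Clip the element to a hexagon using polygon() directly in CSS. */
.hexagon {
  clip-path: polygon(25% 0%, 75% 0%, 100% 50%, 75% 100%, 25% 100%, 0% 50%);
}
```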

Like many CSS values, clipping shapes can be animated. There are some rules that control how the interpolation between values is performed, but long story short: as long as you are interpolating between the same shapes, or polygons with the same number of vertices, you should be fine. Here’s how to animate a circular clipping:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.
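
The core of such an animation boils down to something like this (a sketch, not the Pen’s exact code):

```css
/* Interpolating between two circle() values works because both
   keyframes use the same shape function. */
.avatar {
  clip-path: circle(30% at 50% 50%);
  animation: grow 1.5s ease-in-out infinite alternate;
}

@keyframes grow {
  to {
    clip-path: circle(75% at 50% 50%);
  }
}
```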

You can also dynamically change clipping according to user input, like in this example that features a “periscope” effect controlled by the mouse:

See the Pen clip-path (periscope) by ladybenko (@ladybenko) on CodePen.

To learn more, check out our article on clip-path from last week.

Project Dawn

Lastly, the release of Firefox 54 marks the completion of the Project Dawn transition, eliminating Firefox’s pre-beta release channel, codenamed “Aurora.” Firefox releases now move directly from Nightly into Beta every six weeks. Firefox Developer Edition, which was based on Aurora, is now based on Beta.

For early adopters, we’ve also made Firefox Nightly for Android available on Google Play.

Categorieën: Mozilla-nl planet
