Mozilla Nederland
The Dutch Mozilla community

Mozilla CEO Mitchell Baker urges European Commission to seize ‘once-in-a-generation’ opportunity

Mozilla Blog - Mon, 07/09/2020 - 11:00

Today, Mozilla CEO Mitchell Baker published an open letter to European Commission President Ursula von der Leyen, urging her to seize the ‘once-in-a-generation’ opportunity to build a better internet presented by the upcoming Digital Services Act (“DSA”).

Mitchell’s letter coincides with the European Commission’s public consultation on the DSA, and sets out high-level recommendations to support President von der Leyen’s DSA policy agenda for emerging tech issues (more on that agenda and what we think of it here).

The letter sets out Mozilla’s recommendations to ensure:

  • Meaningful transparency with respect to disinformation;
  • More effective content accountability on the part of online platforms;
  • A healthier online advertising ecosystem; and
  • Contestable digital markets.

As Mitchell notes:

“The kind of change required to realise these recommendations is not only possible, but proven. Mozilla, like many of our innovative small and medium independent peers, is steeped in a history of challenging the status quo and embracing openness, whether it is through pioneering security standards, or developing industry-leading privacy tools.”

Mitchell’s full letter to Commission President von der Leyen can be read here.

The post Mozilla CEO Mitchell Baker urges European Commission to seize ‘once-in-a-generation’ opportunity appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

A look at password security, Part V: Disk Encryption

Mozilla Blog - Sun, 06/09/2020 - 00:56

The previous posts (I, II, III, IV) focused primarily on remote login, either to multiuser systems or Web sites (though the same principles also apply to other networked services like e-mail). However, another common case where users encounter passwords is for login to devices such as laptops, tablets, and phones. This post addresses that topic.

Threat Model

We need to start by talking about the threat model. As a general matter, the assumption here is that the attacker has some physical access to your device. While some devices do have password-controlled remote access, that’s not the focus here.

Generally, we can think of two kinds of attacker access.

Non-invasive: The attacker isn’t willing to take the device apart, perhaps because they only have the device temporarily and don’t want to leave traces of tampering that would alert you.

Invasive: The attacker is willing to take the device apart. Within invasive, there’s a broad range of how invasive the attacker is willing to be, starting with “open the device and take out the hard drive” and ending with “strip the packaging off all the chips and examine them with an electron microscope”.

How concerned you should be depends on who you are, the value of your data, and the kinds of attackers you face. If you’re an ordinary person and your laptop gets stolen out of your car, then attacks are probably going to be fairly primitive, maybe removing the hard disk but probably not using an electron microscope. On the other hand, if you have high value data and the attacker targets you specifically, then you should assume a fairly high degree of capability. And of course people in the computer security field routinely worry about attackers with nation state capabilities.

It’s the data that matters

It’s natural to think of passwords as a measure that protects access to the computer, but in most cases it’s really a matter of access to the data on your computer. If you make a copy of someone’s disk and put it in another computer, that copy will be a pretty close clone of the original (that’s what a backup is, after all), and the attacker will be able to read all your sensitive data off the disk, and quite possibly impersonate you to cloud services.

This implies two very easy attacks:

  • Bypass the operating system on the computer and access the disk directly. For instance, on a Mac you can boot into recovery mode and just examine the disk. Many UNIX machines have something called single-user mode which boots up with administrative access.
  • Remove the disk and mount it in another computer as an external disk. This is trivial on most desktop computers, requiring only a screwdriver (if that), and on many laptops as well; if you have a Mac or a mobile device, the disk may be a soldered-in flash drive, which makes things harder but still doable.

The key thing to realize is that nearly all of the access controls on the computer are just implemented by the operating system software. If you can bypass that software by booting into an administrative mode or by using another computer, then you can get past all of them and just access the data directly.1

If you’re thinking that this is bad, you’re right. And the solution to this is to encrypt your disk. If you don’t do that, then basically your data will not be secure against any kind of dedicated attacker who has physical access to your device.

Password-Based Key Derivation

The good news is that basically all operating systems support disk encryption. The bad news is that the details of how it’s implemented vary dramatically in some security-critical ways. I’m not talking here about the specific details of cryptographic algorithms and how each individual disk block is encrypted. That’s a fascinating topic (see here), but most operating systems do something mostly adequate. The most interesting question for users is how the disk encryption keys are handled and how the password is used to gate access to those keys.

The obvious way to do this — and the way things used to work pretty much everywhere — is to generate the encryption key directly from the password. [Technical Note: You probably really want to generate a random key and encrypt it with a key derived from the password. This way you can change your password without re-encrypting the whole disk. But from a security perspective these are fairly equivalent.] The technical term for this is a password-based key derivation function, which just means that it takes a password and outputs a key. For our purposes, this is the same as a password hashing function and it has the same problem: given an encrypted disk, I can attempt to brute force the password by trying a large number of candidate passwords. The result is that you need to have a super-long password (or often a passphrase) in order to prevent this kind of attack. While it’s possible to memorize a long enough password, it’s no fun, as well as being a real pain to type in whenever you want to log in to your computer, let alone on your smartphone or tablet. As a result, most people use much shorter passwords, which of course weakens the security of disk encryption.
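To make this concrete, here is a minimal sketch of password-based key derivation in Rust. It is a toy built on the sha2 crate purely to illustrate the “repeatedly hash the password plus a salt” idea; real systems use a vetted construction such as PBKDF2, scrypt, or Argon2, and the password, salt, and iteration count below are made up for the example.

// Toy key derivation, for illustration only; not a production KDF.
use sha2::{Digest, Sha256};

fn derive_key(password: &str, salt: &[u8], iterations: u32) -> Vec<u8> {
    // Start from password || salt, then hash repeatedly so that every
    // candidate password costs the attacker many hash computations.
    let mut state = Sha256::digest([password.as_bytes(), salt].concat()).to_vec();
    for _ in 0..iterations {
        state = Sha256::digest(&state).to_vec();
    }
    state // 32 bytes, used as (or to wrap) the disk encryption key
}

fn main() {
    // A short password and a per-disk random salt, hard-coded for the sketch.
    let key = derive_key("hunter2", b"per-disk-random-salt", 200_000);
    println!("derived {} key bytes: {:02x?}", key.len(), &key[..8]);
}

The important property is the brute-force cost: with these parameters every guess costs the attacker roughly 200,000 hash computations, which helps, but not enough to save a short password.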

Hardware Security Modules

As we’ve seen before, the problem here is that the attacker gets to try candidate passwords very fast, and the only real fix is to limit the rate at which they can try. This is what many modern devices do. Instead of just deriving the encryption key from the password, they generate a random encryption key inside a hardware security module (HSM).2 What “secure” means varies, but ideally it’s something like:

  1. It can do encryption and decryption internally without ever exposing the keys.3
  2. It resists physical attacks to recover the keys. For instance it might erase them if you try to remove the casing from the HSM.

In order to actually encrypt or decrypt, you first unlock the HSM with the password. That doesn’t give you the keys; it just lets you use the HSM to do encryption and decryption. Until you enter the password, it won’t do anything.

The main function of the HSM is to limit the rate at which you can try passwords. This might happen by simply having a flat limit of X tries per second, or maybe it exponentially backs off the more passwords you try, or maybe it will only allow some small number of failures (10 is common) before it erases itself. If you’ve ever pulled your iPhone out of your pocket only to see “iPhone is disabled, try again in 5 minutes”, that’s the rate limiting mechanism in action. Whatever the technique, the idea is the same: prevent the attacker from quickly trying a large number of candidate passwords. With a properly designed rate limiting mechanism, you can get away with a much, much shorter password. For instance, if you can only have 10 tries before the phone erases itself, then the attacker only has a 1/1000 chance of breaking a 4-digit PIN, let alone a 16-character password. Some HSMs can also do biometric authentication to unlock the encryption key, which is how features like TouchID and FaceID work.
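As a rough sketch of that logic (illustrative only: a real secure element enforces this in tamper-resistant hardware, and the PIN and limits below are invented for the example), the bookkeeping amounts to a failure counter that wipes the key once the limit is reached:

// Sketch of HSM-style rate limiting: 10 wrong guesses and the key is gone.
struct RateLimitedVault {
    key: [u8; 32],     // never leaves the device in a real HSM
    pin: String,
    failures: u32,
    max_failures: u32, // e.g. 10 tries, then the key is erased
}

impl RateLimitedVault {
    fn unlock(&mut self, attempt: &str) -> Option<&[u8; 32]> {
        if self.failures >= self.max_failures {
            return None; // key has already been wiped
        }
        if attempt == self.pin {
            self.failures = 0;
            Some(&self.key)
        } else {
            self.failures += 1;
            if self.failures >= self.max_failures {
                self.key = [0; 32]; // erase the key after too many failures
            }
            None
        }
    }
}

fn main() {
    let mut vault = RateLimitedVault {
        key: [7; 32],
        pin: "4821".into(),
        failures: 0,
        max_failures: 10,
    };
    // An attacker gets at most 10 guesses out of 10,000 four-digit PINs: a 0.1% chance.
    for guess in 0..10u32 {
        let _ = vault.unlock(&format!("{:04}", guess));
    }
    assert!(vault.unlock("4821").is_none()); // even the right PIN no longer works
}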

So, having the encryption keys in an HSM is a big improvement to security, and it doesn’t require any change in the user interface — you just type in your password — which is great. What’s not so great is that it’s not always clear whether your device has an HSM or not. As a practical matter, new Apple devices do, as does the Google Pixel. The situation on Windows 10 is less clear-cut, but many modern devices will have one.

It needs to be said that an HSM isn’t magic: iPhones store their keys in HSMs and that certainly makes them much harder to decrypt, but there are also companies that sell technology for breaking into HSM-protected devices like iPhones (Cellebrite probably being the best known). Still, you’re far better off with a device like this than without one. And of course all bets are off if someone takes your device when it’s unlocked. This is why it’s a good idea to have your screen set to lock automatically after a fairly short time; obviously that’s a lot more convenient if you have fingerprint or face ID.4

Summary

OK, so this has been a pretty long series, but I hope it’s given you an appreciation for all the different settings in which passwords are used and where they are safe(r) versus unsafe.

As always, I can be reached at ekr-blog@mozilla.com if you have questions or comments.

  1. Some computers allow you to install a firmware password which will stop the computer from booting unless you enter the right password. This isn’t totally useless but it’s not a defense if the attacker is willing to remove the disk. 
  2. Also called a Secure Encryption Processor (SEP) or a Trusted Platform Module (TPM). 
  3. It’s not technically necessary to keep the keys in the HSM in order to secure the device against password guessing. For instance, once the HSM is unlocked it could just output the key and let decryption happen on the main CPU. The problem is that this then exposes you to attacks on the non-tamper-resistant hardware that makes up the rest of the computer. For this reason, it’s better to keep the key inside the HSM. Note that this only applies to the keys in the HSM, not the data in your computer’s memory, which generally isn’t encrypted, and there are ways to read that memory. If you are worried your computer might be seized and searched, as in a border crossing, do what the pros do and turn it off.
  4. Unfortunately, biometric ID also makes it a lot easier to be compelled to unlock your phone–whatever the legal situation in your jurisdiction, someone can just press your finger against the reader, but it’s a lot harder to make you punch in your PIN–so it’s a bit of a tradeoff. 

Update: 2020-09-07: Changed TPM to HSM once in the main text for consistency.

The post A look at password security, Part V: Disk Encryption appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Wladimir Palant: A grim outlook on the future of browser add-ons

Mozilla planet - Mon, 31/08/2020 - 13:04

A few days ago Mozilla announced the release of their new Android browser. This release, dubbed “Firefox Daylight,” is supposed to achieve nothing less than to “revolutionize mobile browsing.” And that also goes for browser extensions of course:

Last but not least, we revamped the extensions experience. We know that add-ons play an important role for many Firefox users and we want to make sure to offer them the best possible experience when starting to use our newest Android browsing app. We’re kicking it off with the top 9 add-ons for enhanced privacy and user experience from our Recommended Extensions program.

What this text carefully avoids stating directly: those are the only nine (as in: single-digit 9) add-ons which you will be able to install on Firefox for Android now. After being able to use thousands of add-ons before, this feels like a significant downgrade. Particularly given that there appears to be no technical reason why the other add-ons are no longer allowed; it is merely a policy decision. I already verified that my add-ons can still run on Firefox for Android but aren’t allowed to, and the same should be true for the majority of other add-ons.

[Figure: Evolution of browser extensions. Image credits: Mozilla, jean_victor_balin]

Why would Mozilla kill mobile add-ons?

Before this release, Firefox was the only mobile browser to allow arbitrary add-ons. Chrome experimented with add-ons on mobile but never actually released this functionality. Safari implemented a halfhearted ad blocking interface, received much applause for it, but never made this feature truly useful or flexible. So it would seem that Firefox had a significant competitive advantage here. Why throw it away?

Unfortunately, supporting add-ons comes at a considerable cost. It isn’t merely the cost of developing and maintaining the necessary functionality; there is also the performance and security impact of browser extensions. Mozilla has been struggling with this for a while. The initial solution was reviewing all extensions before publication. It was a costly process which also introduced delays, so by now all add-ons are published immediately but are still supposed to be reviewed manually eventually.

Mozilla is currently facing challenges both in terms of market share and financially, the latter being linked to the former. This once again became obvious when Mozilla laid off a quarter of its workforce a few weeks ago. In the past, add-ons have done little to help Mozilla achieve a breakthrough on mobile, so costs being cut here isn’t much of a surprise. And properly reviewing nine extensions is certainly cheaper than keeping tabs on a thousand.

But won’t Mozilla add more add-ons later?

Yes, they also say that more add-ons will be made available later. But if you look closely, all of Mozilla’s communication around that matter has been focused on containing damage. I’ve looked through a bunch of blog posts, and nowhere did it simply say: “When this is released, only a handful of add-ons will be allowed, and adding more will require our explicit approval.” A number of Firefox users rely on add-ons, so I suspect that the strategy is to prevent an outcry from those.

This might also be the reason why extension developers haven’t been warned about this “minor” change. Personally, I learned about it from a user’s issue report. While there has been some communication around the Recommended Extensions program, it was never mentioned that participating in this program was a prerequisite for extensions to stay usable.

I definitely expect Mozilla to add more add-ons later. But it will be the ones that users are most vocal about. Niche add-ons with only a few users? Bad luck for you…

What this also means: the current state of the add-on ecosystem is going to be preserved forever. If only popular add-ons are allowed, other add-ons won’t get a chance to become popular. And since every add-on has to start small, developing anything new is a wasted effort.

Update (2020-09-01): There are some objections from the Mozilla community stating that I’m overinterpreting this. Yes, maybe I am. Maybe add-ons are still a priority to Mozilla. So much that for this release they:

  • declared gatekeeping add-ons a virtue rather than a known limitation (“revamped the extensions experience”).
  • didn’t warn add-on developers about the user complaints to be expected, leaving it to them to figure out what’s going on.
  • didn’t bother setting a timeline for when the gatekeeping is supposed to end and in fact didn’t even state unambiguously that ending it is the plan.
  • didn’t document the current progress anywhere, so nobody knows what works and what doesn’t in terms of extension APIs (still work in progress at the time of writing).

I totally get it that the development team has more important issues to tackle now that their work has been made available to a wider audience. I’m merely not very confident that once they have all these issues sorted out they will still go back to the add-on support and fix it. Despite all the best intentions, there is nothing as permanent as a temporary stopgap solution.

Isn’t the state of affairs much better on the desktop?

Add-on support in desktop browsers looks much better of course, with all major browsers supporting add-ons. Gatekeeping also isn’t the norm here, with Apple being the only vendor so far to discourage newcomers. However, a steady degradation has been visible here as well, sadly an ongoing trend.

Browser extensions were pioneered by Mozilla and originally had the same level of access as the browser’s own code. This allowed amazingly powerful extensions; for example, the vimperator extension implemented completely different user interface paradigms, inspired by the vim editor. Whether you are a fan of vim or not (few people are), being able to do something like this was very empowering.

So it’s not surprising that Mozilla attracted a very active community of extension builders. There has been lots of innovation, with extensions showcasing the full potential of the browser. Some of that functionality has eventually been adopted by the browsers. Remember Firebug for example? The similarity to the Developer Tools available in any modern browser is striking.

[Figure: Firebug screenshot. Image credits: Wikipedia]

Once Google Chrome came along, this extension system was doomed. It simply had too many downsides to survive the fierce competition in the browser market. David Teller explains in his blog post why Mozilla had no choice but to remove it, and he is absolutely correct of course.

As to the decision about what to replace it with, I’m still not convinced that Mozilla made a good choice when they decided to copy Chrome’s extension APIs. While this made development of cross-browser extensions easier, it also limited Firefox extensions to the functionality supported by Chrome. Starting out as a clear leader in terms of customization, Firefox was suddenly chasing Chrome and struggling to keep full compatibility. And of course Google refused to cooperate on standardization of its underdocumented extension APIs (surprise!).

Where is add-on support on desktop going?

Originally, Mozilla promised that they wouldn’t limit themselves to the capabilities provided by Chrome. They intended to add more functionality soon, so that more powerful extensions would be possible. They also intended to give extension developers a way to write new extension APIs themselves, so that innovation could go beyond what browser developers anticipated. None of this really materialized, other than a few trivial improvements to Chrome’s APIs.

And so Google with its Chrome browser is now determining what extensions should be able to do – in any browser. After all, Mozilla’s is the only remaining independent extension implementation, and it is no real competition any more. Now that they have this definition power, Google unsurprisingly decided to cut the costs incurred by extensions. Among other things, this change will remove the webRequest API, which is the most powerful tool currently available to extensions. I expect Mozilla to follow suit sooner or later. And this is unlikely to be the last functionality cut.

Conclusions

The recent browser wars set a very high bar on what a modern browser should be. We got our lean and fast browsers, supporting vast amounts of web standards and extremely powerful web applications. The cost was high however: users’ choice was reduced significantly; it’s essentially Firefox vs. Chrome in its numerous varieties now, and other browser engines didn’t survive. The negative impacts of Google’s near-monopoly on web development aren’t too visible yet, but in the browser customization space they already show very clearly.

Google Chrome is now the baseline for browser customization. On mobile devices this means that anything beyond “no add-on support whatsoever” will be considered a revolutionary step. Mozilla isn’t the first mobile browser vendor to celebrate itself for providing a few selected add-ons. Open add-on ecosystems for mobile browsers are just not going to happen any more.

And on desktop Google has little incentive to keep the bar high for add-on support. There will be further functionality losses here, all in the name of performance and security. And despite these noble goals it means that users are going to lose out: the innovative impact of add-ons is going away. In the future, all innovation will have to originate from browser vendors themselves; there will be no space for experiments or niche solutions.

Categories: Mozilla-nl planet

Anne van Kesteren: Farewell Emil

Mozilla planet - Mon, 31/08/2020 - 11:38

When I first moved to Zürich I had the good fortune to have dinner with Emil. I had never met someone before with such a passion for food. (That day I met two.) Except for the food we had a good time. I found it particularly enjoyable that he was so upset — though in a very upbeat manner — with the quality of the food that having dessert there was no longer on the table.

The last time I remember running into Emil was in Lisbon, enjoying hamburgers and fries of all things. (Rest assured, they were very good.)

Long before all that, I used to frequent EAE.net, to learn how to make browsers do marvelous things and improve user-computer interaction.

Categories: Mozilla-nl planet

Mike Hommey: [Linux] Disabling CPU turbo, cores and threads without rebooting

Mozilla planet - Mon, 31/08/2020 - 00:00

[Disclaimer: this has been sitting as a draft for close to three months; I forgot to publish it, and this is now finally done.]

In my previous blog post, I built Firefox in a number of different configurations where I’d disable the CPU turbo, some of its cores, or some of its threads. That is something that was traditionally done via the BIOS, but rebooting between each attempt is not really a great experience.

Fortunately, the Linux kernel provides a large number of knobs that allow this at runtime.

Turbo

This is the most straightforward:

$ echo 0 > /sys/devices/system/cpu/cpufreq/boost

Re-enable with

$ echo 1 > /sys/devices/system/cpu/cpufreq/boost

CPU frequency throttling

Even though I haven’t mentioned it, I might as well add this briefly. There are many knobs to tweak frequency throttling, but assuming your goal is to disable throttling and set the CPU frequency to its fastest non-Turbo frequency, this is how you do it:

$ echo performance > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

where $n is the id of the core you want to do that for, so if you want to do that for all the cores, you need to do that for cpu0, cpu1, etc.

Re-enable with:

$ echo ondemand > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

(assuming this was the value before you changed it; ondemand is usually the default)

Cores and Threads

This one requires some attention, because you cannot assume anything about the CPU numbers. The first thing you want to do is to check those CPU numbers. You can do so by looking at the physical id and core id fields in /proc/cpuinfo, but the output from lscpu --extended is more convenient, and looks like the following:

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    3700.0000 2200.0000
1   0    0      1    1:1:1:0       yes    3700.0000 2200.0000
2   0    0      2    2:2:2:0       yes    3700.0000 2200.0000
3   0    0      3    3:3:3:0       yes    3700.0000 2200.0000
4   0    0      4    4:4:4:1       yes    3700.0000 2200.0000
5   0    0      5    5:5:5:1       yes    3700.0000 2200.0000
6   0    0      6    6:6:6:1       yes    3700.0000 2200.0000
7   0    0      7    7:7:7:1       yes    3700.0000 2200.0000
(...)
32  0    0      0    0:0:0:0       yes    3700.0000 2200.0000
33  0    0      1    1:1:1:0       yes    3700.0000 2200.0000
34  0    0      2    2:2:2:0       yes    3700.0000 2200.0000
35  0    0      3    3:3:3:0       yes    3700.0000 2200.0000
36  0    0      4    4:4:4:1       yes    3700.0000 2200.0000
37  0    0      5    5:5:5:1       yes    3700.0000 2200.0000
38  0    0      6    6:6:6:1       yes    3700.0000 2200.0000
39  0    0      7    7:7:7:1       yes    3700.0000 2200.0000
(...)

Now, this output is actually the ideal case, where pairs of CPUs (virtual cores) on the same physical core are always n, n+32, but I’ve had them be pseudo-randomly spread in the past, so be careful.

To turn off a core, you want to turn off all the CPUs with the same CORE identifier. To turn off a thread (virtual core), you want to turn off one CPU. On machines with multiple sockets, you can also look at the SOCKET column.
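If you’d rather script this than eyeball the lscpu output, here is a small Rust sketch that groups logical CPUs by physical core using the sysfs topology files (it assumes /sys/devices/system/cpu/cpuN/topology/core_id and physical_package_id are present, which is the case on typical modern kernels). It only prints which CPUs belong together; the /online knob used to actually turn them off is shown right below.

// Group logical CPUs by (socket, core) so you know which ones to offline together.
use std::collections::BTreeMap;
use std::fs;

fn read_id(cpu: &str, file: &str) -> Option<u32> {
    fs::read_to_string(format!("/sys/devices/system/cpu/{}/topology/{}", cpu, file))
        .ok()?
        .trim()
        .parse()
        .ok()
}

fn main() -> std::io::Result<()> {
    let mut cores: BTreeMap<(u32, u32), Vec<String>> = BTreeMap::new();
    for entry in fs::read_dir("/sys/devices/system/cpu")? {
        let name = entry?.file_name().into_string().unwrap_or_default();
        // Only the cpu0, cpu1, ... directories, not cpufreq, cpuidle, etc.
        if name.len() > 3 && name.starts_with("cpu") && name[3..].chars().all(|c| c.is_ascii_digit()) {
            if let (Some(pkg), Some(core)) =
                (read_id(&name, "physical_package_id"), read_id(&name, "core_id"))
            {
                cores.entry((pkg, core)).or_default().push(name);
            }
        }
    }
    for ((pkg, core), cpus) in &cores {
        // To disable this physical core, write 0 to .../online for every CPU listed.
        println!("socket {} core {}: {:?}", pkg, core, cpus);
    }
    Ok(())
}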

Turning off one CPU is done with:

$ echo 0 > /sys/devices/system/cpu/cpu$n/online

Re-enable with:

$ echo 1 > /sys/devices/system/cpu/cpu$n/online

Extra: CPU sets

CPU sets are a feature of Linux’s cgroups. They allow you to restrict groups of processes to a set of cores. The first step is to create a group like so:

$ mkdir /sys/fs/cgroup/cpuset/mygroup

Please note you may already have existing groups, and you may want to create subgroups. You can do so by creating subdirectories.

Then you can configure which CPUs/cores/threads you want processes in this group to run on:

$ echo 0-7,16-23 > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus

The value you write in this file is a comma-separated list of CPU/core/thread numbers or ranges. 0-3 is the range for CPU/core/thread 0 to 3 and is thus equivalent to 0,1,2,3. The numbers correspond to /proc/cpuinfo or the output from lscpu as mentioned above.

There are also memory aspects to CPU sets that I won’t detail here (because I don’t have a machine with multiple memory nodes), but you can start with:

$ cat /sys/fs/cgroup/cpuset/cpuset.mems > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

Now you’re ready to assign processes to this group:

$ echo $pid >> /sys/fs/cgroup/cpuset/mygroup/tasks

There are a number of tweaks you can do to this setup; I invite you to check out the cpuset(7) manual page.
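For completeness, here is the same setup sequence scripted in Rust rather than shell. This is a sketch that assumes the legacy cgroup-v1 cpuset hierarchy mounted at /sys/fs/cgroup/cpuset as in the commands above; on a cgroup-v2 system the paths and file names differ.

// Create a cpuset group, give it CPUs, inherit memory nodes, and join it.
// Needs root, just like the shell version.
use std::fs;

fn main() -> std::io::Result<()> {
    let group = "/sys/fs/cgroup/cpuset/mygroup";
    fs::create_dir_all(group)?; // mkdir /sys/fs/cgroup/cpuset/mygroup
    fs::write(format!("{}/cpuset.cpus", group), "0-7,16-23")?;
    // Copy the memory nodes from the root cpuset, as done above with cat.
    let mems = fs::read_to_string("/sys/fs/cgroup/cpuset/cpuset.mems")?;
    fs::write(format!("{}/cpuset.mems", group), mems.trim())?;
    // Assign a process (here: ourselves) to the group.
    fs::write(format!("{}/tasks", group), std::process::id().to_string())?;
    Ok(())
}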

Disabling a group is a little involved. First you need to move the processes to a different group:

$ while read pid; do echo $pid > /sys/fs/cgroup/cpuset/tasks; done < /sys/fs/cgroup/cpuset/mygroup/tasks

Then disassociate the CPU and memory nodes:

$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus
$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

And finally remove the group:

$ rmdir /sys/fs/cgroup/cpuset/mygroup
Categories: Mozilla-nl planet

The Servo Blog: GSoC wrap-up - Implementing WebGPU in Servo

Mozilla planet - Sun, 30/08/2020 - 02:30
Introduction

Hello everyone! I am Kunal (@kunalmohan), an undergrad student at Indian Institute of Technology Roorkee, India. As a part of Google Summer of Code (GSoC) 2020, I worked on implementing WebGPU in Servo under the mentorship of Mr. Dzmitry Malyshau (@kvark). I devoted the past 3 months to working on ways to bring the API to fruition in Servo, so that Servo is able to run the existing examples and pass the Conformance Test Suite (CTS). This is going to be a brief account of how I started with the project, what challenges I faced, and how I overcame them.

What is WebGPU?

WebGPU is a future web standard and cross-platform graphics API aimed at making GPU capabilities more accessible on the web. WebGPU is designed from the ground up to efficiently map to the Vulkan, Direct3D 12, and Metal native GPU APIs. A native implementation of the API in Rust is developed in the wgpu project; Servo’s implementation of the API uses this crate.

The Project

At the start of the project the implementation was in a pretty raw state: Servo was only able to accept shaders as SPIR-V binaries and ran just the compute example. I had the following tasks in front of me:

  • Implement the various DOM interfaces that build up the API.
  • Set up a proper Id rotation for the GPU resources.
  • Integrate WebGPU with WebRender for presenting the rendering to HTML canvas.
  • Set up a proper model for async error recording.

The final goal was to be able to run the live examples at https://austineng.github.io/webgpu-samples/ and pass a fair amount of the CTS.

Implementation

Since Servo is a multi-process browser, the GPU is accessed from a different process (the server side) than the one running the page content and scripts (the content process). For better performance and asynchronous behaviour, we have a separate wgpu thread for each content process.

Setting up a proper Id rotation for the GPU resources was our first priority. I had to ensure that each Id generated was unique. This meant sharing the Identity Hub among all threads via Arc and Mutex. For recycling the Ids, wgpu exposes an IdentityHandler trait that must be implemented on the server-side interface of the browser and wgpu. This facilitates the following: when wgpu detects that an object has been dropped by the user (which is some time after the actual drop/garbage collection), wgpu calls the trait methods that are responsible for releasing the Id. In our case they send a message to the content process to free the Id and make it available for reuse.
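As a rough sketch of the idea (not Servo’s or wgpu’s actual types; the pool below is invented for illustration), the id management boils down to a shared allocator with a free list, handed out under a lock and refilled when the server side reports that an object was dropped:

// A shared pool that mints unique ids and recycles them on release.
use std::sync::{Arc, Mutex};

#[derive(Default)]
struct IdPool {
    next: u64,
    free: Vec<u64>,
}

impl IdPool {
    fn allocate(&mut self) -> u64 {
        if let Some(id) = self.free.pop() {
            return id; // reuse a previously released id
        }
        let id = self.next;
        self.next += 1;
        id
    }

    fn release(&mut self, id: u64) {
        self.free.push(id);
    }
}

fn main() {
    // Shared between the content-process threads that mint ids...
    let pool = Arc::new(Mutex::new(IdPool::default()));
    let buffer_id = pool.lock().unwrap().allocate();

    // ...and released again when the "free this id" message arrives from the
    // wgpu server side (simulated here by a direct call).
    pool.lock().unwrap().release(buffer_id);
    assert_eq!(pool.lock().unwrap().allocate(), buffer_id); // the id gets reused
}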

Implementing the DOM interfaces was pretty straightforward. A DOM object is just an opaque handle to an actual GPU resource. Whenever a method that performs an operation is called on a DOM object, there are two things to be done: convert the IDL types to wgpu types, and send a message to the server to perform the operation. Most of the validation is done within wgpu.

Presentation

WebGPU textures can be rendered to an HTML canvas via GPUCanvasContext, which can be obtained from canvas.getContext('gpupresent'). All rendered images are served to WebRender as ExternalImages for rendering purposes. This is done via an async software presentation path. Each new GPUCanvasContext object is assigned a new ExternalImageId and a new swap chain is assigned a new ImageKey. Since WebGPU threads are spawned on demand, an image handler for WebGPU is initialized at startup, stored in Constellation, and supplied to threads at the time of spawn. Each time GPUSwapChain.getCurrentTexture() is called, the canvas is marked as dirty, which is then flushed at the time of reflow. At the time of flush, a message is sent to the wgpu server to update the image data provided to WebRender. The following happens after this:

  • The contents of the rendered texture are copied to a buffer.
  • Buffer is mapped asynchronously for read.
  • The data read from the buffer is copied to a staging area in PresentationData. PresentationData stores the data and all the required machinery for this async presentation belt.
  • When WebRender wants to read the data, it locks on the data to prevent it from being altered during read. Data is served in the form of raw bytes.

The above process is not the best one, but the only option available to us for now. This also causes a few empty frames to be rendered at the start. A good thing, though, is that this works on all platforms and is a great fallback path while hardware-accelerated presentation is added in the future.

Buffer Mapping

When the user issues an async buffer map operation, the operation is queued on the server side and all devices are polled at a regular interval of 100ms for it. As soon as the map operation is complete, the data is read and sent to the content process, where it is stored in the Heap. The user can read and edit this data by accessing its subranges via GPUBuffer.getMappedRange(), which returns an ExternalArrayBuffer pointing to the data in the Heap. On unmap, all the ExternalArrayBuffers are detached, and if the buffer was mapped for write, the data is sent back to the server to be written to the actual resource.
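A schematic sketch of that queue-and-poll flow is shown below. All names and types here are invented for the illustration; Servo and wgpu use their own machinery, and a real implementation would only resolve a request once the device actually reports the mapping as complete.

// Server side: collect pending map requests and poll them every 100ms.
use std::sync::mpsc::{channel, Receiver, Sender, TryRecvError};
use std::thread;
use std::time::Duration;

struct MapRequest {
    buffer_id: u64,
    reply: Sender<Vec<u8>>, // the mapped data goes back to the content process
}

fn gpu_server(requests: Receiver<MapRequest>) {
    let mut pending: Vec<MapRequest> = Vec::new();
    loop {
        // Collect newly issued mapAsync operations without blocking.
        loop {
            match requests.try_recv() {
                Ok(req) => pending.push(req),
                Err(TryRecvError::Empty) => break,
                Err(TryRecvError::Disconnected) => return,
            }
        }
        // "Poll the device": in this sketch every pending map is ready at once.
        for req in pending.drain(..) {
            let data = vec![0u8; 256]; // stand-in for the mapped buffer contents
            println!("buffer {} mapped", req.buffer_id);
            let _ = req.reply.send(data);
        }
        thread::sleep(Duration::from_millis(100));
    }
}

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || gpu_server(rx));

    // Content-process side: issue a map request and wait for the data.
    let (reply_tx, reply_rx) = channel();
    tx.send(MapRequest { buffer_id: 1, reply: reply_tx }).unwrap();
    let data = reply_rx.recv().unwrap();
    println!("got {} mapped bytes", data.len());
}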

Error Reporting

To achieve maximum efficiency, WebGPU supports an asynchronous error model. The implementation keeps a stack of ErrorScopes that are responsible for capturing the errors that occur during operations performed in their scope. The user is responsible for pushing and popping an ErrorScope on the stack. Popping an ErrorScope returns a promise that is resolved to null if all the operations were successful; otherwise it resolves to the first error that occurred.

When an operation is issued, the scope_id of the ErrorScope on the top of the stack is sent to the server with it and the operation-count of the scope is incremented. The result of the operation can be described by the following enum:

pub enum WebGPUOpResult {
    ValidationError(String),
    OutOfMemoryError,
    Success,
}

On receiving the result, we decrement the operation-count of the ErrorScope with the given scope_id. We further have 3 cases:

  • The result is Success. Do nothing.
  • The result is an error and the ErrorFilter matches the error. We record this error in the ErrorScopeInfo, and if the ErrorScope has been popped by the user, resolve the promise with it.
  • The result is an error but the ErrorFilter does not match the error. In this case, we find the nearest parent ErrorScope with the matching filter and record the error in it.

After the result is processed, we try to remove the ErrorScope from the stack: the user should have called popErrorScope() on the scope, and the operation-count of the scope should be 0.

In case there are no error scopes on the stack, or if the ErrorFilter of none of the ErrorScopes matches the error, the error is fired as a GPUUncapturedErrorEvent.
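Here is a condensed sketch of that bookkeeping (illustrative only, not the actual Servo code; the types are simplified, the WebGPUOpResult enum from above is repeated so the sketch is self-contained, and promise resolution is reduced to printing the recorded error):

#[derive(Clone, Copy, PartialEq)]
enum ErrorFilter {
    Validation,
    OutOfMemory,
}

pub enum WebGPUOpResult {
    ValidationError(String),
    OutOfMemoryError,
    Success,
}

struct ErrorScope {
    filter: ErrorFilter,
    op_count: u32,               // operations still in flight for this scope
    popped: bool,                // true once the user called popErrorScope()
    first_error: Option<String>,
}

struct ErrorScopeStack {
    scopes: Vec<ErrorScope>,
}

impl ErrorScopeStack {
    fn handle_result(&mut self, scope_index: usize, result: WebGPUOpResult) {
        self.scopes[scope_index].op_count -= 1;
        let (error, filter) = match result {
            WebGPUOpResult::Success => return self.try_remove(scope_index),
            WebGPUOpResult::ValidationError(msg) => (msg, ErrorFilter::Validation),
            WebGPUOpResult::OutOfMemoryError => ("out of memory".to_string(), ErrorFilter::OutOfMemory),
        };
        // Record the error in the nearest scope (at or below scope_index) with a
        // matching filter, otherwise fire it as an uncaptured error.
        match self.scopes[..=scope_index].iter_mut().rev().find(|s| s.filter == filter) {
            Some(scope) => {
                if scope.first_error.is_none() {
                    scope.first_error = Some(error);
                }
            }
            None => println!("GPUUncapturedErrorEvent: {}", error),
        }
        self.try_remove(scope_index);
    }

    // A scope can go away once it has been popped and has no operations in flight.
    // (Only the top of the stack is handled here, to keep the sketch short.)
    fn try_remove(&mut self, scope_index: usize) {
        let s = &self.scopes[scope_index];
        if s.popped && s.op_count == 0 && scope_index + 1 == self.scopes.len() {
            let scope = self.scopes.pop().unwrap();
            // This is where the popErrorScope() promise would be resolved:
            // with the recorded error, or with null if there is none.
            println!("scope resolved with: {:?}", scope.first_error);
        }
    }
}

fn main() {
    let mut stack = ErrorScopeStack {
        scopes: vec![ErrorScope {
            filter: ErrorFilter::Validation,
            op_count: 1,
            popped: true,
            first_error: None,
        }],
    };
    // A validation failure comes back from the server for the scope at index 0.
    stack.handle_result(0, WebGPUOpResult::ValidationError("bad bind group".into()));
    assert!(stack.scopes.is_empty()); // popped and zero ops in flight, so removed
}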

Conformance Test Suite

The Conformance Test Suite is required for checking the accuracy of the implementation of the API and can be found here. Servo vendors its own copy of the CTS which, currently, needs to be updated manually for the latest changes. Here are a few statistics of the tests:

  • 14/36 pass completely
  • 5/36 have a majority of subtests passing
  • 17/36 fail/crash/timeout

The wgpu team is actively working on improving the validation.

Unfinished business

A major portion of the project that was proposed has been completed, but there’s still work left to do. These are a few things that I was unable to cover under the proposed timeline:

  • Profiling and benchmarking the implementation against the WebGL implementation of Servo.
  • Handle canvas resize event smoothly.
  • Support Error recording on Workers.
  • Support WGSL shaders.
  • Pass the remaining tests in the CTS.
Important Links

The WebGPU specification can be found here. The PRs that I made as a part of the project can be accessed via the following links:

The progress of the project can be tracked in the GitHub project.

Conclusion

The WebGPU implementation in Servo supports all of Austin’s samples. Thanks to CYBAI and Josh, Servo now supports dynamic import of modules and can thus accept GLSL shaders. Here are a few samples of what Servo is capable of rendering at 60fps:

Fractal Cube

Instanced Cube

Compute Boids

I would like to thank Dzmitry and Josh for guiding me throughout the project and a big shoutout to the WebGPU and Servo community for doing such awesome work! I had a great experience contributing to Servo and WebGPU. I started as a complete beginner to Rust, graphics and browser internals, but learned a lot during the course of this project. I urge all WebGPU users and graphics enthusiasts out there to test their projects on Servo and help us improve the implementation and the API as well :)

Categories: Mozilla-nl planet
