Mozilla Nederland
The Dutch Mozilla community

J. Ryan Stinnett: Building Firefox for Linux 32-bit

Mozilla planet - za, 26/08/2017 - 02:33
Background

As part of my work on the Stylo / Quantum CSS team at Mozilla, I needed to be able to test changes to Firefox that only affect Linux 32-bit builds. These days, I believe you essentially have to use a 64-bit host to build Firefox to avoid OOM issues during linking and potentially other steps, so this means some form of cross-compiling from a Linux 64-bit host to a Linux 32-bit target.

I already had a Linux 64-bit machine running Ubuntu 16.04 LTS, so I set about attempting to make it build Firefox targeting Linux 32-bit.

I should note that I only use Linux occasionally at the moment, so there could certainly be a better solution than the one I describe. Also, I recreated these steps after the fact, so I might have missed something. Please let me know in the comments.

This article assumes you are already set up to build Firefox when targeting 64-bit.

Multiarch Packages (Or: How It's Supposed to Work)

Recent versions of Debian and Ubuntu support the concept of "multiarch packages" which are intended to allow installing multiple architectures together to support use cases including... cross-compiling! Great, sounds like just the thing we need.

We should be able to install[1] the core Gecko development dependencies with an extra :i386 suffix to get the 32-bit version on our 64-bit host:

```
(host) $ sudo apt install libasound2-dev:i386 libcurl4-openssl-dev:i386 libdbus-1-dev:i386 libdbus-glib-1-dev:i386 libgconf2-dev:i386 libgtk-3-dev:i386 libgtk2.0-dev:i386 libiw-dev:i386 libnotify-dev:i386 libpulse-dev:i386 libx11-xcb-dev:i386 libxt-dev:i386 mesa-common-dev:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libgtk-3-dev:i386 : Depends: gir1.2-gtk-3.0:i386 (= 3.18.9-1ubuntu3.3) but it is not going to be installed
                     Depends: libatk1.0-dev:i386 (>= 2.15.1) but it is not going to be installed
                     Depends: libatk-bridge2.0-dev:i386 but it is not going to be installed
                     Depends: libegl1-mesa-dev:i386 but it is not going to be installed
                     Depends: libxkbcommon-dev:i386 but it is not going to be installed
                     Depends: libmirclient-dev:i386 (>= 0.13.3) but it is not going to be installed
 libgtk2.0-dev:i386 : Depends: gir1.2-gtk-2.0:i386 (= 2.24.30-1ubuntu1.16.04.2) but it is not going to be installed
                      Depends: libatk1.0-dev:i386 (>= 1.29.2) but it is not going to be installed
                      Recommends: python:i386 (>= 2.4) but it is not going to be installed
 libnotify-dev:i386 : Depends: gir1.2-notify-0.7:i386 (= 0.7.6-2svn1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
```

Well, that doesn't look good. It appears some of the Gecko libraries we need aren't happy about being installed for multiple architectures.

Switch Approaches to chroot

Since multiarch packages don't appear to be working here, I looked around for other approaches. Ideally, I would have something fairly self-contained so that it would be easy to remove when I no longer need 32-bit support.

One approach to multiple architectures that has been around for a while is to create a chroot environment: effectively, a separate installation of Linux for a different architecture. A utility like schroot can then be used to issue the chroot(2) system call which makes the current session believe this sub-installation is the root filesystem.

Let's grab schroot so we'll be able to enter the chroot once it's set up:

(host) $ sudo apt install schroot

There are several different types of chroots you can use with schroot. We'll use the directory type, as it's the simplest to understand (just another directory on the existing filesystem), and it will make it simpler to expose a few things to the host later on.

You can place the directory wherever, but some existing filesystems are mapped into the chroot for convenience, so avoiding /home is probably a good idea. I went with /var/chroot/linux32:

(host) $ sudo mkdir -p /var/chroot/linux32

We need to update schroot.conf to configure the new chroot:

(host) $ sudo cat << EOF >> /etc/schroot/schroot.conf
[linux32]
description=Linux32 build environment
aliases=default
type=directory
directory=/var/chroot/linux32
personality=linux32
profile=desktop
users=jryans
root-users=jryans
EOF

In particular, personality is important to set for this multi-arch use case. (Make sure to replace the user names with your own!)

Firefox will want access to shared memory as well, so we'll need to add that to the set of mapped filesystems in the chroot:

(host) $ sudo cat << EOF >> /etc/schroot/desktop/fstab
/dev/shm /dev/shm none rw,bind 0 0
EOF

Now we need to install the 32-bit system inside the chroot. We can do that with a utility called debootstrap:

(host) $ sudo apt install debootstrap
(host) $ sudo debootstrap --variant=buildd --arch=i386 --foreign xenial /var/chroot/linux32 http://archive.ubuntu.com/ubuntu

This will fetch all the packages for a 32-bit installation and place them in the chroot. For a cross-arch bootstrap, we need to add --foreign to skip the unpacking step, which we will do momentarily from inside the chroot. --variant=buildd will help us out a bit by including common build tools.

To finish installation, we have to enter the chroot. You can enter the chroot with schroot and it remains active until you exit. Any snippets that say (chroot) instead of (host) are meant to be run inside the chroot.

So, inside the chroot, run the second stage of debootstrap to actually unpack everything:

(chroot) $ sudo /debootstrap/debootstrap --second-stage

Let's double-check that things are working like we expect:

(chroot) $ arch
i686

Great, we're getting closer!

Install packages

Now that we have a basic 32-bit installation, let's install the packages we need for development. The apt source list inside the chroot is pretty bare bones, so we'll want to expand it a bit to reach everything we need:

(chroot) $ sudo cat << EOF > /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu xenial main universe
deb http://archive.ubuntu.com/ubuntu xenial-updates main universe
EOF
(chroot) $ sudo apt update

Let's grab the same packages from before (without :i386 since that's the default inside the chroot):

(chroot) $ sudo apt install libasound2-dev libcurl4-openssl-dev libdbus-1-dev libdbus-glib-1-dev libgconf2-dev libgtk-3-dev libgtk2.0-dev libiw-dev libnotify-dev libpulse-dev libx11-xcb-dev libxt-dev mesa-common-dev python-dbus xvfb yasm

You may need to install the 32-bit version of your graphics card's GL library to get reasonable graphics output when running in the 32-bit environment.

(chroot) $ sudo apt install nvidia-384

We'll also want to have access to the X display inside the chroot. The simple way to achieve this is to disable X security in the host and expose the same display in the chroot:

(host) $ xhost +
(chroot) $ export DISPLAY=:0

We can verify that we have accelerated graphics:

(chroot) $ sudo apt install mesa-utils
(chroot) $ glxinfo | grep renderer
OpenGL renderer string: GeForce GTX 1080/PCIe/SSE2

Building Firefox

In order for the host to build Firefox for the 32-bit target, it needs to access various 32-bit libraries and include files. We already have these installed in the chroot, so let's cheat and expose them to the host via symlinks into the chroot's file structure:

(host) $ sudo ln -s /var/chroot/linux32/lib/i386-linux-gnu /lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/lib/i386-linux-gnu /usr/lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/include/i386-linux-gnu /usr/include/

We also need Rust to be able to target 32-bit from the host, so let's install support for that:

(host) $ rustup target add i686-unknown-linux-gnu

We'll need a specialized .mozconfig for Firefox to target 32-bit. Something like the following:

(host) $ cat << EOF > ~/projects/gecko/.mozconfig
export PKG_CONFIG_PATH="/var/chroot/linux32/usr/lib/i386-linux-gnu/pkgconfig:/var/chroot/linux32/usr/share/pkgconfig"
export MOZ_LINUX_32_SSE2_STARTUP_ERROR=1
CFLAGS="$CFLAGS -msse -msse2 -mfpmath=sse"
CXXFLAGS="$CXXFLAGS -msse -msse2 -mfpmath=sse"
if test `uname -m` = "x86_64"; then
  CFLAGS="$CFLAGS -m32 -march=pentium-m"
  CXXFLAGS="$CXXFLAGS -m32 -march=pentium-m"
  ac_add_options --target=i686-pc-linux
  ac_add_options --host=i686-pc-linux
  ac_add_options --x-libraries=/usr/lib
fi
EOF

This was adapted from the mozconfig.linux32 used for official 32-bit builds. I modified the PKG_CONFIG_PATH to point at more 32-bit files installed inside the chroot, similar to the library and include changes above.

Now, we should be able to build successfully:

(host) $ ./mach build

Then, from the chroot, you can run Firefox and other tests:

(chroot) $ ./mach run

Firefox running on Linux 32-bit

Footnotes

1. It's commonly suggested that people should use ./mach bootstrap to install the Firefox build dependencies, so feel free to try that if you wish. I dislike scripts that install system packages, so I've done it manually here. The bootstrap script would likely need various adjustments to support this use case.


The Servo Blog: Custom Elements in Servo

Mozilla planet - do, 24/08/2017 - 22:00

This summer I had the pleasure of implementing Custom Elements in Servo under the mentorship of jdm.

Introduction

Custom Elements are an exciting development for the Web Platform. They are a part of the Web Components APIs. The goal is to allow web developers to create reusable web components with first-class support from the browser. The Custom Element portion of Web Components allows for elements with custom names and behaviors to be defined and used via HTML tags.

For example, a developer could create a custom element called fancy-button which has special behavior (for example, ripples from material design). This element is reusable and can be used directly in HTML:

<fancy-button>My Cool Button</fancy-button>
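For the tag above to do anything, a script first has to register a definition for that name. A minimal sketch using the standard Custom Elements API (the class body here is illustrative, not taken from the post):

```js
// Define the behavior for <fancy-button>.
class FancyButton extends HTMLElement {
  constructor() {
    super(); // must be called before `this` is used
    this.addEventListener("click", () => this.classList.add("rippling"));
  }
}

// Register the definition so the browser knows what <fancy-button> means.
customElements.define("fancy-button", FancyButton);
```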

For examples of cool web components check out webcomponents.org.

While using these APIs directly is very powerful, new web frameworks are emerging that harness the power of Web Component APIs and give developers even more power. One major contender with frontend web frameworks is Polymer. The Polymer framework builds on top of Web Components and removes boilerplate and makes using web components easier.

Another exciting framework using Custom Elements is A-Frame (supported by Mozilla). A-Frame is a WebVR framework that allows developers to create entire Virtual Reality experiences using HTML elements and javascript. There has been some recent work in getting WebVR and A-Frame functional in Servo. Implementing Custom Elements removes the need for Servo to rely on a polyfill.

For more information on what Custom Elements are and how to use them, I would suggest reading Custom Elements v1: Reusable Web Components.

Implementation

Before I began the implementation of Custom Elements, I broke down the spec into a few major pieces.

  • The CustomElementRegistry
  • Custom element creation
  • Custom element reactions

The CustomElementRegistry keeps track of all the defined custom elements for a single window. The registry is where you go to define new custom elements, and later Servo will use the registry to look up definitions given a possible custom element name. The bulk of the work in this section of the implementation was validating custom element definitions.
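To make that concrete, here is a small sketch of the author-facing registry API (standard JavaScript, not Servo internals) and the kind of validation it performs:

```js
// Names must contain a hyphen, and each name may only be defined once.
customElements.define("fancy-button", class extends HTMLElement {});

// These would both throw (a SyntaxError and a NotSupportedError, respectively):
// customElements.define("fancybutton", class extends HTMLElement {});
// customElements.define("fancy-button", class extends HTMLElement {});

// Definitions can be looked up by name once they exist.
const FancyButtonCtor = customElements.get("fancy-button"); // constructor or undefined
customElements.whenDefined("fancy-button").then(() => {
  console.log("fancy-button is defined in this window's registry");
});
```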

Custom element creation is the process of taking a custom element definition and running the defined constructor on an HTMLElement or the element it extends. This can happen either when a new element is created, or after an element has been created via an upgrade reaction.

The final portion is triggering custom element reactions. There are two types of reactions:

  1. Callback reactions
  2. Upgrade reactions

Callback reactions fire when custom elements:

  • are connected to the DOM tree
  • are disconnected from the DOM tree
  • are adopted into a new document
  • have an attribute that is modified

When the reactions are triggered, the corresponding lifecycle method of the Custom Element is called. This allows the developer to implement custom behavior when any of these lifecycle events occur.
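In author code, these reactions surface as the standard lifecycle callbacks. A hedged sketch (the method names are the spec's; the element name and bodies are placeholders):

```js
class StatusBadge extends HTMLElement {
  // Only attributes listed here trigger attributeChangedCallback.
  static get observedAttributes() { return ["state"]; }

  connectedCallback()    { console.log("connected to a document"); }
  disconnectedCallback() { console.log("disconnected from the document"); }
  adoptedCallback()      { console.log("adopted into a new document"); }
  attributeChangedCallback(name, oldValue, newValue) {
    console.log(`${name}: ${oldValue} -> ${newValue}`);
  }
}

customElements.define("status-badge", StatusBadge);
```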

Upgrade reactions are used to take a non-customized element and make it customized by running the defined constructor. There is quite a bit of trickery going on behind the scenes to make all of this work. I wrote a post about custom element upgrades explaining how they work and why they are needed.
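From the page's point of view, an upgrade looks roughly like this (illustrative sketch):

```js
// The element can exist in the document before its definition does...
const button = document.createElement("fancy-button"); // just an HTMLElement for now
document.body.appendChild(button);

// ...and registering the definition later queues an upgrade reaction, which
// runs the constructor on the element that was already created.
customElements.define("fancy-button", class extends HTMLElement {
  constructor() {
    super();
    console.log("upgraded!");
  }
});
```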

I used Gecko’s partial implementation of Custom Elements as a reference for a few parts of my implementation. This became extremely useful whenever I had to use the SpiderMonkey API.

Roadblocks

As with any project, it is difficult to foresee big issues until you actually start writing the implementation. Most parts of the spec were straightforward and did not yield any trouble while I was writing the implementation; however, there were a few difficulties and unexpected problems that presented themselves.

One major pain-point was working with the SpiderMonkey API. This was more due to my lack of experience with the SpiderMonkey API. I had to learn how compartments work and how to debug panics coming from SpiderMonkey. bzbarsky was extremely helpful during this process; they helped me step through each issue and understand what I was doing wrong.

While I was in the midst of writing the implementation, I found out about the HTMLConstructor attribute. I had missed this part of the spec during the planning phase. The HTMLConstructor WebIDL attribute marks certain HTML elements that can be extended and generates a custom constructor for each that allows custom element constructors to work (read more about this in custom element upgrades).
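In practice, that generated constructor is what lets super() work when a custom element class extends a specific HTML element interface. A sketch of the author-facing side (the element name here is hypothetical):

```js
class CountdownButton extends HTMLButtonElement {
  constructor() {
    super(); // works because HTMLButtonElement's constructor is marked [HTMLConstructor]
    this.addEventListener("click", () => console.log("clicked"));
  }
}

// Customized built-ins keep the native tag and opt in with the `is` attribute:
customElements.define("countdown-button", CountdownButton, { extends: "button" });
// <button is="countdown-button">Launch</button>
```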

Notable Pull Requests

Conclusions

I enjoyed working on this project this summer and hope to continue my involvement with the Servo project. I have a gsoc repository that contains a list of all my GSoC issues, PRs, and blog posts. I want to extend a huge thanks to my mentor jdm and to bzbarsky for helping me work through issues when using SpiderMonkey.


Mozilla Open Policy & Advocacy Blog: Mozilla applauds India Supreme Court’s decision upholding privacy as a fundamental right

Mozilla planet - do, 24/08/2017 - 19:09

Mozilla is thrilled to see the Supreme Court of India’s decision declaring that the Right to Privacy is guaranteed by the Indian Constitution. Mozilla fights for privacy around the world as part of our mission, and so we’re pleased to see the Supreme Court unequivocally end the debate on whether this right even exists in India. Attention must move now to Aadhaar, which the government is increasingly making mandatory without meaningful privacy protections. To realize the right to privacy in practice, swift action is needed to enact a strong data protection law.

The post Mozilla applauds India Supreme Court’s decision upholding privacy as a fundamental right appeared first on Open Policy & Advocacy.


Mike Conley: Photon Engineering Newsletter #14

Mozilla planet - do, 24/08/2017 - 19:04

Just like jaws did last week, I’m taking over for dolske this week to talk about stuff going on with Photon Engineering. So sit back, strap in, and absorb Photon Engineering Newsletter #14!

If you’ve got the release calendar at hand, you’ll note that Nightly 57 merges to Beta on September 20th. Given that there’s usually a soft-freeze before the merge, this means that there are less than 4 weeks remaining for Photon development. That’s right – in less than a month’s time, folks on the Beta channel who might not be following Nightly development are going to get their first Photon experience. That’ll be pretty exciting!

So with the clock winding down, the Photon team has started to shift more towards polish and bug-fixing. At this point, all of the major changes should have landed, and now we need to buff the code to a sparkling sheen.

The first thing you may have noticed is that, after a solid run of dogefox, the icon has shifted again:

The new Nightly icon

We now return you to your regularly scheduled programming

The second big change is our new 60fps[1] loading throbbers in the tabs, coming straight to you from the Photon Animations team!

The new loading throbber in Nightly

I think it’s fair to say that Photon Animations are giving Firefox a turbo boost!

Other recent changes

Menus and structure

Animations
  • Did we mention the new tab loading throbber?
Preferences
  • All MVP work is completed! The team is now fixing polish bugs. Outstanding!
Visual redesign

Onboarding

Performance
  1. The screen capturing software I used here is only capturing at 30fps, so it’s really not doing it justice. This tweet might capture it better. 


Air Mozilla: Reps Weekly Meeting Aug. 24, 2017

Mozilla planet - do, 24/08/2017 - 18:00

Reps Weekly Meeting Aug. 24, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Hacks.Mozilla.Org: Introducing the Extension Compatibility Tester

Mozilla planet - do, 24/08/2017 - 16:29

With Firefox’s move to a modern web-style browser extension API, it’s now possible to maintain one codebase and ship an extension in multiple browsers. However, since different browsers can have different capabilities, some extensions may require modification to be truly portable. With this in mind, we’ve built the Extension Compatibility Tester to give developers a better sense of whether their existing extensions will work in Firefox.

The tool currently supports Chrome extension bundle (.crx) files, but we’re working on expanding the types of extensions you can check. The tool generates a report showing any potential uses of APIs or permissions incompatible with Firefox, along with next steps on how to distribute a compatible extension to Firefox users.

We will continue to participate in the Browser Extensions Community Group and support its goal of finding a common subset of extensible points in browsers and APIs that developers can use. We hope you give the tool a spin and let us know what you think!

Try it out! >>

“The tool says my extension may not be compatible”

Not to worry! Our analysis only shows API and permission usage, and doesn’t have the full context. If the incompatible functionality is non-essential to your extension you can use capability testing to only use the API when available:

// Causes an Error
browser.unavailableAPI(...);

// Capability Testing FTW!
if ('unavailableAPI' in browser) {
  browser.unavailableAPI(...);
}
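Another portability wrinkle (not covered by the tool's report, but worth knowing when porting) is that Chrome exposes a callback-based chrome.* namespace while Firefox also provides the promise-based browser.* one. A small hedged sketch of one pattern for handling both:

```js
// Query the active tab in a way that works in both browsers.
function queryActiveTab() {
  if (typeof browser !== "undefined") {
    // Firefox: browser.* APIs return promises.
    return browser.tabs.query({ active: true, currentWindow: true });
  }
  // Chrome: chrome.* APIs take callbacks, so wrap one in a promise.
  return new Promise((resolve) => {
    chrome.tabs.query({ active: true, currentWindow: true }, resolve);
  });
}

queryActiveTab().then((tabs) => console.log(tabs[0] && tabs[0].url));
```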

Additionally, we’re constantly expanding the available extension APIs, so your missing functionality may be only a few weeks away!

“The tool says my extension is compatible!”

Hooray! That said, definitely try your extension out in Firefox before submitting to make sure things work as you expect. Common APIs may still have different effects in different browsers.

“I don’t want to upload my code to a 3rd party website.”

Understood! The compatibility testing is available as part of our extension development command-line tool or as a standalone module.

If you have any issues using the tool, please file an issue or leave a comment here. The hope is that this tool is a useful first step in helping developers port their extensions, and we get a healthier, more interoperable extension ecosystem.

Happy porting!


Ryan Harter: Documentation Style Guide

Mozilla planet - do, 24/08/2017 - 09:00

I just wrote up a style guide for our team's documentation. The documentation is rendered using Gitbook and hosted on Github Pages. You can find the PR here but I figured it's worth sharing here as well.

Style Guide

Articles should be written in Markdown (not AsciiDoc). Markdown is usually powerful enough and is a more common technology than AsciiDoc.

Limit lines to 100 characters where possible. Try to split lines at the end of sentences. This makes it easier to reorganize your thoughts later.

This documentation is meant to be read digitally. Keep in mind that people read digital content much differently than other media. Specifically, readers are going to skim your writing, so make it easy to identify important information.

Use visual markup like bold text, code blocks, and section headers. Avoid long paragraphs. Short paragraphs that describe one concept each make finding important information easier.

Please squash your changes into meaningful commits and follow these commit message guidelines.


Emma Humphries: Firefox Triage Report 2017-08-21

Mozilla planet - do, 24/08/2017 - 01:23

Correction: several incorrect buglist links have been fixed

It's the weekly report on the state of triage in Firefox-related components. I apologize for missing last week’s report. I was travelling and did not have a chance to sit down and focus on this.

Hotspots

The components with the most untriaged bugs remain the JavaScript Engine and Build Config.

I discussed the JavaScript bugs with Naveed. What will happen is that the JavaScript bugs which have not been marked as a priority for Quantum Flow (the ‘[qf:p[1:3]]’ whiteboard tags) or existing work (the ‘[js:p[1:3]]’ whiteboard tags) will be moved to the backlog (P3) for review after the Firefox 57 release. See https://bugzilla.mozilla.org/show_bug.cgi?id=1392436.

| Rank | Component                    | 2017-08-07 | This Week |
|------|------------------------------|------------|-----------|
| 1    | Core: JavaScript Engine      | 449        | 471       |
| 2    | Core: Build Config           | 429        | 450       |
| 3    | Firefox for Android: General | 411        | 406       |
| 4    | Firefox: General             | 242        | 246       |
| 5    | Core: General                | 234        | 235       |
| 6    | Core: XPCOM                  | 176        | 178       |
| 7    | Core: JavaScript: GC         | —          | 168       |
| 8    | Core: Networking             | —          | 161       |
|      | All Components               | 8,373      | 8,703     |

Please make sure you’ve made it clear what, if anything will happen with these bugs.

Not sure how to triage? Read https://wiki.mozilla.org/Bugmasters/Process/Triage.

Next Release

| Version                                 | 56    | 56    | 56    | 56    | 57   | 57   | 57    |
|-----------------------------------------|-------|-------|-------|-------|------|------|-------|
| Date                                    | 7/10  | 7/17  | 7/24  | 7/31  | 8/7  | 8/14 | 8/14  |
| Untriaged this Cycle                    | 4,525 | 4,451 | 4,317 | 4,479 | 479  | 835  | 1,196 |
| Unassigned Untriaged this Cycle         | 3,742 | 3,682 | 3,517 | 3,674 | 356  | 634  | 968   |
| Affected this Upcoming Release (56)     | 111   | 126   | 139   | 125   | 123  | 119  |       |
| Enhancements                            | 102   | 107   | 91    | 103   | 3    | 5    | 11    |
| Orphaned P1s                            | 199   | 193   | 183   | 192   | 196  | 191  | 183   |
| Stalled P1s                             | 195   | 173   | 159   | 179   | 157  | 152  | 155   |

What should we do with these bugs? Bulk close them? Make them into P3s? Bugs without decisions add noise to our system, cause despair in those trying to triage bugs, and leaves the community wondering if we listen to them.

Methods and Definitions

In this report I talk about bugs in Core, Firefox, Firefox for Android, Firefox for IOs, and Toolkit which are unresolved, not filed from treeherder using the intermittent-bug-filer account*, and have no pending needinfos.

By triaged, I mean a bug has been marked as P1 (work on now), P2 (work on next), P3 (backlog), or P5 (will not work on but will accept a patch).

A triage decision is not the same as a release decision (status and tracking flags.)

https://mozilla.github.io/triage-report/#report

Age of Untriaged Bugs

The average age of a bug filed since June 1st of 2016 which has gone without triage.

https://mozilla.github.io/triage-report/#date-report

Untriaged Bugs in Current Cycle

Bugs filed since the start of the Firefox 57 release cycle which do not have a triage decision.

https://mzl.la/2wzJxLP

Recommendation: review bugs you are responsible for (https://bugzilla.mozilla.org/page.cgi?id=triage_owners.html) and make triage decision, or RESOLVE.

Untriaged Bugs in Current Cycle (57) Affecting Next Release (56)

Bugs marked status_firefox56 = affected and untriaged.

https://mzl.la/2wzjHaH

Enhancements in Release Cycle

Bugs filed in the release cycle which are enhancement requests, severity = enhancement, and untriaged.

https://mzl.la/2wzCBy8

Recommendation: product managers should review and mark as P3, P5, or RESOLVE as WONTFIX.

High Priority Bugs without Owners

Bugs with a priority of P1, which do not have an assignee, have not been modified in the past two weeks, and do not have pending needinfos.

https://mzl.la/2u1VLem

Recommendation: review priorities and assign bugs, re-prioritize to P2, P3, P5, or RESOLVE.

Stalled High Priority Bugs

There are 159 bugs with a priority of P1, which have an assignee, but have not been modified in the past two weeks.

https://mzl.la/2u2poMJ

Recommendation: review assignments, determine if the priority should be changed to P2, P3, P5 or RESOLVE.

* New intermittents are filed as P5s, and we are still cleaning up bugs after this change. See https://bugzilla.mozilla.org/show_bug.cgi?id=1381587, https://bugzilla.mozilla.org/show_bug.cgi?id=1381960, and https://bugzilla.mozilla.org/show_bug.cgi?id=1383923

If you have questions or enhancements you want to see in this report, please reply to me here, on IRC, or Slack and thank you for reading.




The Servo Blog: Off main thread HTML parsing in Servo

Mozilla planet - wo, 23/08/2017 - 20:41

Originally published on Nikhil’s blog.

Introduction

Traditionally, browsers have been written as single-threaded applications, and the HTML spec certainly seems to validate this statement. This makes it difficult to parallelize any task which a browser carries out, and we generally have to come up with innovative ways to do so.

One such task is HTML parsing, and I have been working on parallelizing it this summer as part of my GSoC project. Since Servo is written in Rust, I’m assuming the reader has some basic knowledge about Rust. If not, check out this awesome Rust book. Done? Let’s dive straight into the details:

HTML Parser

Servo’s HTML (and XML) parsing code lives in html5ever. Since this project concerns HTML parsing, I will only be talking about that. The first component we need to know about is the Tokenizer. This component is responsible for taking in raw input from a buffer and creating tokens, eventually sending them to its Sink, which we will call TokenSink. This could be any type which implements the TokenSink trait.

html5ever has a type called TreeBuilder, which implements this trait. The TreeBuilder’s job is to create tree operations based on the tokens it receives. TreeBuilder contains its own Sink, called TreeSink, which details the methods corresponding to these tree ops. The TreeBuilder calls these TreeSink methods under appropriate conditions, and these ‘action methods’ are responsible for constructing the DOM tree.

With me so far? Good. The key to parallelizing HTML parsing is realizing that the task of creating tree ops is independent from the task of actually executing them to construct the DOM tree. Therefore, tokenization and tree op creation can happen on a separate thread, while the tree construction can be done on the main thread itself.

Example image

The Process

The first step I took was to decouple tree op creation from tree construction. Previously, tree ops were executed as soon as they were created. This involved the creation of a new TreeSink, which instead of executing them directly, created a representation of a tree op, containing all relevant data. For the time being, I sent the tree op to a process_op function as soon as it was created, whereupon it was executed.

Now that these two processes were independent, my next task consisted of creating a new thread, where the Tokenizer+TreeBuilder pair would live, to generate these tree ops. Now, when a tree op was created, it would be sent to the main thread, and control would return back to the TreeBuilder. The TreeBuilder does not have to wait for the execution of the tree op anymore, thus speeding up the entire process.

So far so good. The final task in this project was to implement speculative parsing, by building on top of these recent changes.

Speculative Parsing

The HTML spec dictates that at any point during parsing, if we encounter a script tag, then the script must be executed immediately (if it is an inline script), or must be fetched and then executed (note that this rule does not apply to async or defer scripts). Why, you might ask, must it be done this way? Why can’t we mark these scripts and execute them all at the end, after the parsing is done? This is because of an old, ill-thought-out Document API function called document.write(). This function is a pain point for many developers who work on browsers, as it is a real headache implementing it well enough, while working around the many idiosyncrasies which surround it. I won’t dive into the details here, as they are not relevant. All we need to know is what document.write() does: it takes a string argument, which is generally markup, and inserts this string as part of the document’s HTML content. Suffice it to say that using this function might break your page, and it should not be used.
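A tiny illustration (mine, not from the post) of why this is such a problem for a parser that wants to run ahead:

```js
// Inside an inline <script> that the parser has just reached:
if (new Date().getHours() < 12) {
  // Whatever is written here is fed back into the parser at this exact
  // position, as if it had appeared right after the script tag...
  document.write("<p>Good morning!</p>");
}
// ...so any markup parsed speculatively past this point may now be wrong
// and has to be thrown away.
```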

Returning to the parsing task, we can’t commit any DOM manipulations until the script finishes executing, because document.write() could make them redundant. What speculative parsing aims to do is to continue parsing the content after the script tag in the parser thread, while the script is being executed in the main thread. Note that we are only speculatively creating tree ops here, not the actual tree construction. After the script finishes executing, we analyze the actions of the document.write() calls (if any) to determine whether to use the tree ops, or to throw them away.

Roadblock!

Remember when I said the process of creating tree ops is independent from tree construction? Well, I lied a little. Until a week ago, we needed access to some DOM nodes for the creation of a couple of tree actions (one method needed to know if a node had a parent, and the other needed to know whether two nodes existed in the same tree). When I moved the task of creating tree ops to a separate thread, I could no longer access the DOM tree, which lived on the main thread. So I used a Sender on the TreeSink to create and send queries to the main thread, which would access the DOM and send the results back. Only then would the TreeSink method return, with the data it received from the main thread. Additionally, this meant that these couple of methods were synchronous in nature. No biggie.

I realized the problem when I sat down to think about how I would implement speculative parsing. Since the main thread is busy executing scripts, it won’t be listening to the queries these synchronous methods will be sending, and therefore the task of creating tree ops cannot progress further!

This turned out to be a bigger problem than I’d imagined, and I also had to sift through the equivalent Gecko code to understand how this situation was handled. I eventually came up with a good solution, but I won’t bore you with the details. If you want to know more, here’s a gist explaining the solution.

With these changes landed in html5ever, I can finally implement speculative parsing. Unfortunately, there’s not much time to implement it as a part of the GSoC project, so I will be landing this feature in Servo some time later. I hope to publish another blog post describing it thoroughly, along with details on the performance improvements this feature would bring.

Links to important PRs:

Added Async HTML Tokenizer: https://github.com/servo/servo/pull/17037

Run the async HTML Tokenizer on a new thread: https://github.com/servo/servo/pull/17914

TreeBuilder no longer relies on same_tree and has_parent_node: https://github.com/servo/html5ever/pull/300

End TreeBuilder’s reliance on DOM: https://github.com/servo/servo/pull/18056

Conclusion

This was a really fun project; I got to solve lots of cool problems, and also learnt a lot more about how a modern, spec-compliant rendering engine works.

I would like to thank my mentor Anthony Ramine, who was absolutely amazing to work with, and Josh Matthews, who helped me a lot when I was still a rookie looking to contribute to the project.


Air Mozilla: The Joy of Coding - Episode 110

Mozilla planet - wo, 23/08/2017 - 19:00

The Joy of Coding - Episode 110 mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Weekly SUMO Community Meeting August 23, 2017

Mozilla planet - wo, 23/08/2017 - 18:00

Weekly SUMO Community Meeting August 23, 2017 This is the SUMO weekly call


Ryan Harter: Beer and Probes

Mozilla planet - wo, 23/08/2017 - 09:00

Quick post to clear up some terminology. But first, an analogy to clear up my thinking:

Analogy

Temperature control is a big part of brewing beer. Throughout the brewing process I use a thermometer to measure the temperature of the soon-to-be beer. Because I take several temperature readings throughout the brewing process, one brew will result in a list of a half dozen temperature readings. For example, I take a mash temperature, then a sparge temperature, then a fermentation temperature. The units on these measurements are always in Fahrenheit, but their interpretation is different.

The Rub

In this example, I would call the thermometer a "probe". The set of all temperature readings shares a "data type". Each temperature reading is a "measurement" which is stored in a given "field".

At the SFO workweek I uncovered some terminology I found confusing. Specifically, we use the word "probe" to refer to data we collect. I haven't encountered this usage outside of Mozilla.

Instead, I'd suggest we call histograms and scalars "data types". A "probe" is a unit of client-side code that collects a measurement for us. A single "field" could be a column in one of our datasets (like normalized_channel). A measurement would be a value from a single field from a single ping (like the string "release").


Mozilla Testing New Default Opt-Out Setting for Firefox Telemetry Collection - BleepingComputer

News gathered via Google - wo, 23/08/2017 - 00:33

BleepingComputer

Mozilla Testing New Default Opt-Out Setting for Firefox Telemetry Collection
BleepingComputer
Mozilla engineers are discussing plans to change the way Firefox collects usage data (telemetry), and the organization is currently preparing to test an opt-out clause so it can collect more data relevant to the browser's usage.
Mozilla causes stir with opt-out data collection plans - Neowin
Mozilla wants to keep the internet away from fake news - Free Newsman: Market Research News By Market.Biz
[H]ardOCP: Firefox Plans to Anonymously Collect Browsing Data - HardOCP


Mozilla Open Innovation Team: Join Mozilla and Stanford’s open design sprint for an accessible web

Mozilla planet - di, 22/08/2017 - 21:06
Join Mozilla’s and Stanford’s open design sprint for an accessible web

CC photo by Complete Streets via Flickr

Millions of people have disabilities, ranging from hearing impairments from birth to visual impairments from old age. As much of our lives increasingly takes place online, the absence of accessibility contributes to the exclusion or partial exclusion of many people from society. Mozilla’s mission is to keep the web open, for everyone.

Working to include everyone has led to innovations that benefit others too. Take curb cuts, the sloping curb sections that connect sidewalks to the street. Curb cuts were originally introduced by disability activists for people in wheelchairs, but they were soon eagerly welcomed by people using bicycles, delivery carts and strollers. We believe innovations for accessibility tend to produce a corresponding electronic curb-cut effect.

We are looking for volunteers with first hand experience with accessibility needs, creative thinkers, designers, and engineers to work together to re-imagine accessibility for everyone while surfing the web.

Fill out this short form to join the design drive Monday to Friday, Aug 28 to Sep 1. Participation will involve working with a small team for about an hour/day.

The decentralized design process

The Open Innovation Team at Mozilla and Stanford have partnered to explore how a decentralized design process (a design process where people are not in the same physical location) can provide a way to innovate and include more diverse perspectives in the design process. The “hive” approach, pioneered by Stanford, will be used in this experiment to test how a decentralized design process can help inspire and create a better, more accessible web for all!

How it works

You will work online in small teams with other participants across the globe for about an hour each day from Monday to Friday, Aug 28-Sep 1. You will be grouped based on your background, timezone and availability. We will go through the Stanford d.school’s design process together, spending a day on each of the phases: inspire, define, ideate, prototype, and test. We will gradually change team membership to give you a chance to interact with a diverse group of people over the course of the design sprint. We will provide instructions and deliverables for each phase.

This will be a highly collaborative process, where you will work with interesting people and disability experts while practicing the different stages of the design thinking process for a real world product used by millions of people. Each team will have a team-lead who will facilitate conversations. You can apply to be a team-lead on the signup form. Priority will be given to people who have a disability.

The final submissions to the design drive will be evaluated by the Firefox Test Pilot and the Accessibility teams, and go through a round of user testing. The Test Pilot team will evaluate the best contributions and determine if they are ready to be tested by hundreds of thousands of users. Test Pilot is a platform that allows Mozilla to launch experimental features for Firefox to general release users and enables Mozilla to learn in detail how these features are used. Learnings from Test Pilot help Mozilla make decisions about Firefox and other products. So this could be the first step towards getting your contribution into the official Firefox browser!

Get involved!

The design sprint drive will take place over Slack, a text-based instant messaging service, and participation will take 5 sessions of roughly one hour each from Monday to Friday (Aug 28-Sep 1).

To participate you are required to respect other participants and follow the Mozilla community participation guidelines. If you have any questions you can ask us on Twitter at @MZOpenSprint or email firefoxaccessibility@cs.stanford.edu.

Join us making the web accessible for everyone!

Join Mozilla and Stanford’s open design sprint for an accessible web was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Software-update: Pale Moon 27.4.2 - Tweakers

News gathered via Google - di, 22/08/2017 - 19:53

Tweakers

Software-update: Pale Moon 27.4.2
Tweakers
Pale Moon logo (75 pix) A second update to version 27.4 of Pale Moon has been released. This web browser uses the Mozilla Firefox source code, but is optimized for modern hardware. The Windows version of Mozilla Firefox ...


The Mozilla Blog: The Battle to Save Net Neutrality: A Panel with Tom Wheeler, Ro Khanna, Mozilla, Leading TV Producers and Others

Mozilla planet - di, 22/08/2017 - 19:18
On September 18, net neutrality experts will gather at the Internet Archive to discuss dire threats to the open web. You’re invited

 

Net neutrality — and the future of a healthy internet — is at stake.

In May, the FCC voted to move forward with plans to gut net neutrality. It was a decision met with furor: Since then, many millions of Americans have written, phoned and petitioned the FCC, demanding an internet that belongs to individual users, not broadband ISP gatekeepers. And scores of nonprofits and technology companies have organized to amplify Americans’ voices.

The first net neutrality public comment period ends on August 30, and the FCC is moving closer to a vote.

So on Monday, September 18, Mozilla is gathering leaders at the forefront of protecting net neutrality. We’ll discuss why it matters, what lies ahead, and what can be done to protect it.

RSVP: The Battle to Save Net Neutrality

Leaders like former FCC Chairman Tom Wheeler and Congressman Ro Khanna will discuss net neutrality’s importance to free speech, innovation, competition and social justice.

This free public event, titled “Battle to Save Net Neutrality,” will feature a panel discussion, reception and audience Q&A. It will be held at the Internet Archive (300 Funston Avenue, San Francisco) from 6 p.m. to 9 p.m. Participants include:

  • Panelist Tom Wheeler, former FCC Chairman who served under President Obama and architect of the 2015 net neutrality rules

 

  • Panelist and Congressman Ro Khanna (D-California), who represents California’s 17th congressional district in the heart of Silicon Valley. Khanna is a vocal supporter of net neutrality

 

  • Panelist Amy Aniobi, TV writer and producer for “Insecure” (HBO) and “Silicon Valley” (HBO), and member of the Writers Guild of America, West

 

  • Panelist Luisa Leschin, TV writer and producer for “From Dusk til Dawn” (Netflix) and “Just Add Magic” (Amazon), and a member of the Writers Guild of America, West

 

  • Panelist Denelle Dixon, Mozilla Chief Legal and Business Officer. Dixon spearheads Mozilla’s business, policy and legal activities in defense of a healthy internet. She is a vocal advocate for net neutrality, encryption and greater user choice and control online

 

  • Panelist Malkia Cyril, Executive Director of the Center for Media Justice. Cyril has spent the past 20 years building the capacity of racial and economic justice movements to win media rights, access and power in the digital age

 

  • Moderator Gigi Sohn, Mozilla Tech Policy Fellow and former counselor to FCC Chairman Tom Wheeler (2013-2016). One of the nation’s leading advocates for an open, fair, and fast internet, Sohn was named “one of the heroes that saved the Internet” by The Daily Dot for her leadership in the passage of the FCC’s strong net neutrality rules in 2015

Join us as we discuss the future of net neutrality, and what it means for the health of the internet. Register for this free event here.

The post The Battle to Save Net Neutrality: A Panel with Tom Wheeler, Ro Khanna, Mozilla, Leading TV Producers and Others appeared first on The Mozilla Blog.


Hacks.Mozilla.Org: Inside a super fast CSS engine: Quantum CSS (aka Stylo)

Mozilla planet - di, 22/08/2017 - 17:30

You may have heard of Project Quantum… it’s a major rewrite of Firefox’s internals to make Firefox fast. We’re swapping in parts from our experimental browser, Servo, and making massive improvements to other parts of the engine.

The project has been compared to replacing a jet engine while the jet is still in flight. We’re making the changes in place, component by component, so that you can see the effects in Firefox as soon as each component is ready.

And the first major component from Servo—a new CSS engine called Quantum CSS (previously known as Stylo)—is now available for testing in our Nightly version. You can make sure that it’s turned on for you by going to about:config and setting layout.css.servo.enabled to true.

This new engine brings together state-of-the-art innovations from four different browsers to create a new super CSS engine.

4 browser engines feeding in to Quantum CSS

It takes advantage of modern hardware, parallelizing the work across all of the cores in your machine. This means it can run up to 2 or 4 or even 18 times faster.

On top of that, it combines existing state-of-the-art optimizations from other browsers. So even if it weren’t running in parallel, it would still be one fast CSS engine.

Racing jets

But what does the CSS engine do? First let’s look at the CSS engine and how it fits into the rest of the browser. Then we can look at how Quantum CSS makes it all faster.

What does the CSS engine do?

The CSS engine is part of the browser’s rendering engine. The rendering engine takes the website’s HTML and CSS files and turns them into pixels on the screen.

Files to pixels

Each browser has a rendering engine. In Chrome, it’s called Blink. In Edge, it’s called EdgeHTML. In Safari, it’s called WebKit. And in Firefox, it’s called Gecko.

To get from files to pixels, all of these rendering engines basically do the same things:

  1. Parse the files into objects the browser can understand, including the DOM. At this point, the DOM knows about the structure of the page. It knows about parent/child relationships between elements. It doesn’t know what those elements should look like, though.
     Parsing the HTML into a DOM tree
  2. Figure out what the elements should look like. For each DOM node, the CSS engine figures out which CSS rules apply. Then it figures out values for each CSS property for that DOM node.
     Styling each DOM node in the tree by attaching computed styles
  3. Figure out dimensions for each node and where it goes on the screen. Boxes are created for each thing that will show up on the screen. The boxes don’t just represent DOM nodes… you will also have boxes for things inside the DOM nodes, like lines of text.
     Measuring all of the boxes to create a frame tree
  4. Paint the different boxes. This can happen on multiple layers. I think of this like old-time hand drawn animation, with onionskin layers of paper. That makes it possible to just change one layer without having to repaint things on other layers.
     Painting layers
  5. Take those different painted layers, apply any compositor-only properties like transforms, and turn them into one image. This is basically like taking a picture of the layers stacked together. This image will then be rendered on the screen.
     Assembling the layers together and taking a picture

This means when it starts calculating the styles, the CSS engine has two things:

  • a DOM tree
  • a list of style rules

It goes through each DOM node, one by one, and figures out the styles for that DOM node. As part of this, it gives the DOM node a value for each and every CSS property, even if the stylesheets don’t declare a value for that property.

I think of it kind of like somebody going through and filling out a form. They need to fill out one of these forms for each DOM node. And for each form field, they need to have an answer.

Blank form with CSS properties

To do this, the CSS engine needs to do two things:

  • figure out which rules apply to the node — aka selector matching
  • fill in any missing values with values from the parent or a default value—aka the cascade
Selector matching

For this step, we’ll add any rule that matches the DOM node to a list. Because multiple rules can match, there may be multiple declarations for the same property.

Person putting check marks next to matching CSS rules

Plus, the browser itself adds some default CSS (called user agent style sheets). How does the CSS engine know which value to pick?

This is where specificity rules come in. The CSS engine basically creates a spreadsheet. Then it sorts the declarations based on different columns.

Declarations in a spreadsheet

The rule that has the highest specificity wins. So based on this spreadsheet, the CSS engine fills out the values that it can.

Form with some CSS properties filled in

For the rest, we’ll use the cascade.

The cascade

The cascade makes CSS easier to write and maintain. Because of the cascade, you can set the color property on the body and know that text in p, and span, and li elements will all use that color (unless you have a more specific override).

To do this, the CSS engine looks at the blank boxes on its form. If the property inherits by default, then the CSS engine walks up the tree to see if one of the ancestors has a value. If none of the ancestors have a value, or if the property does not inherit, it will get a default value.

Form with all CSS properties filled in

So now all of the styles have been computed for this DOM node.
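You can see the result of the cascade from script. A small illustration (assuming a plain page containing a <p> that declares no color of its own):

```js
// Give the body an explicit color; the paragraph declares none of its own.
document.body.style.color = "rgb(200, 0, 0)";

const paragraph = document.querySelector("p");
// `color` inherits by default, so the engine walked up to <body> for a value:
console.log(getComputedStyle(paragraph).color); // "rgb(200, 0, 0)"

// A non-inherited property like border-top-width just gets its default:
console.log(getComputedStyle(paragraph).borderTopWidth); // "0px"
```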

A sidenote: style struct sharing

The form that I’ve been showing you is a little misrepresentative. CSS has hundreds of properties. If the CSS engine held on to a value for each property for each DOM node, it would soon run out of memory.

Instead, engines usually do something called style struct sharing. They store data that usually goes together (like font properties) in a different object called a style struct. Then, instead of having all of the properties in the same object, the computed styles object just has pointers. For each category, there’s a pointer to the style struct that has the right values for this DOM node.

Chunks of the form pulled out to separate objects

This ends up saving both memory and time. Nodes that have similar properties (like siblings) can just point to the same structs for the properties they share. And because many properties are inherited, an ancestor can share a struct with any descendants that don’t specify their own overrides.

Now, how do we make that fast?

So that is what style computation looks like when you haven’t optimized it.

 selector matching, sorting by specificity, and computing property values

There’s a lot of work happening here. And it doesn’t just need to happen on the first page load. It happens over and over again as users interact with the page, hovering over elements or making changes to the DOM, triggering a restyle.

Initial styling plus restyling for hover, DOM nodes added, etc
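For example, perfectly ordinary DOM manipulation from script is enough to invalidate previously computed styles (illustrative; it assumes an element with id="warning" like the one used later in the rule tree example):

```js
const warning = document.getElementById("warning");

// Each of these can change which rules match, so styles must be recomputed:
warning.classList.add("highlighted");                 // class changes
warning.setAttribute("data-state", "open");           // attribute changes
warning.appendChild(document.createElement("span"));  // a brand new node to style
```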

This means that CSS style computation is a great candidate for optimization… and browsers have been testing out different strategies to optimize it for the past 20 years. What Quantum CSS does is take the best of these strategies from different engines and combine them to create a superfast new engine.

So let’s look at the details of how these all work together.

Run it all in parallel

The Servo project (which Quantum CSS comes from) is an experimental browser that’s trying to parallelize all of the different parts of rendering a web page. What does that mean?

A computer is like a brain. There’s a part that does the thinking (the ALU). Near that, there’s some short term memory (the registers). These are grouped together on the CPU. Then there’s longer term memory, which is RAM.

CPU with ALU (the part that does the thinking) and registers (short term memory)

Early computers could only think one thing at a time using this CPU. But over the last decade, CPUs have shifted to having multiple ALUs and registers, grouped together in cores. This means that the CPU can think multiple things at once — in parallel.

CPU chip with multiple cores containing ALUs and registers

Quantum CSS makes use of this recent feature of computers by splitting up style computation for the different DOM nodes across the different cores.

This might seem like an easy thing to do… just split up the branches of the tree and do them on different cores. It’s actually much harder than that for a few reasons. One reason is that DOM trees are often uneven. That means that one core will have a lot more work to do than others.

Imbalanced DOM tree being split between multiple cores so one does all the work

To balance the work more evenly, Quantum CSS uses a technique called work stealing. When a DOM node is being processed, the code takes its direct children and splits them up into 1 or more “work units”. These work units get put into a queue.

Cores segmenting their work into work units

When one core is done with the work in its queue, it can look in the other queues to find more work to do. This means we can evenly divide the work without taking time up front to walk the tree and figure out how to balance it ahead of time.

Cores that have finished their work stealing from the core with more work

In most browsers, it would be hard to get this right. Parallelism is a known hard problem, and the CSS engine is very complex. It’s also sitting between the two other most complex parts of the rendering engine — the DOM and layout. So it would be easy to introduce a bug, and parallelism can result in bugs that are very hard to track down, called data races. I explain more about these kinds of bugs in another article.

If you’re accepting contributions from hundreds or thousands of engineers, how can you program in parallel without fear? That’s what we have Rust for.

Rust logo

With Rust, you can statically verify that you don’t have data races. This means you avoid tricky-to-debug bugs by just not letting them into your code in the first place. The compiler won’t let you do it. I’ll be writing more about this in a future article. In the meantime, you can watch this intro video about parallelism in Rust or this more in-depth talk about work stealing.

With this, CSS style computation becomes what’s called an embarrassingly parallel problem — there’s very little keeping you from running it efficiently in parallel. This means that we can get close to linear speed ups. If you have 4 cores on your machine, then it will run close to 4 times faster.

Speed up restyles with the Rule Tree

For each DOM node, the CSS engine needs to go through all of the rules to do selector matching. For most nodes, this matching likely won’t change very often. For example, if the user hovers over a parent, the rules that match it may change. We still need to recompute style for its descendants to handle property inheritance, but the rules that match those descendants probably won’t change.

It would be nice if we could just make a note of which rules match those descendants so we don’t have to do selector matching for them again… and that’s what the rule tree—borrowed from Firefox’s previous CSS engine—does.

The CSS engine will go through the process of figuring out the selectors that match, and then sorting them by specificity. From this, it creates a linked list of rules.

This list is going to be added to the tree.

A linked list of rules being added to the rule tree

The CSS engine tries to keep the number of branches in the tree to a minimum. To do this, it will try to reuse a branch wherever it can.

If most of the selectors in the list are the same as an existing branch, then it will follow the same path. But it might reach a point where the next rule in the list isn’t in this branch of the tree. Only at that point will it add a new branch.

The last item in the linked list being added to the tree

The DOM node will get a pointer to the rule that was inserted last (in this example, the div#warning rule). This is the most specific one.

On restyle, the engine does a quick check to see whether the change to the parent could potentially change the rules that match children. If not, then for any descendants, the engine can just follow the pointer on the descendant node to get to that rule. From there, it can follow the tree back up to the root to get the full list of matching rules, from most specific to least specific. This means it can skip selector matching and sorting completely.

Skipping selector matching and sorting by specificity

So this helps reduce the work needed during restyle. But it’s still a lot of work during initial styling. If you have 10,000 nodes, you still need to do selector matching 10,000 times. But there’s another way to speed that up.

Speed up initial render (and the cascade) with the style sharing cache

Think about a page with thousands of nodes. Many of those nodes will match the same rules. For example, think of a long Wikipedia page… the paragraphs in the main content area should all end up matching the exact same rules, and have the exact same computed styles.

If there’s no optimization, then the CSS engine has to match selectors and compute styles for each paragraph individually. But if there was a way to prove that the styles will be the same from paragraph to paragraph, then the engine could just do that work once and point each paragraph node to the same computed style.

That’s what the style sharing cache—inspired by Safari and Chrome—does. After it’s done processing a node, it puts the computed style into the cache. Then, before it starts computing styles on the next node, it runs a few checks to see whether it can use something from the cache.

Those checks are:

  • Do the 2 nodes have the same ids, classes, etc? If so, then they would match the same rules.
  • For anything that isn’t selector based—inline styles, for example—do the nodes have the same values? If so, then the rules from above either won’t be overridden, or will be overridden in the same way.
  • Do both parents point to the same computed style object? If so, then the inherited values will also be the same.

 yes

Those checks have been in earlier style sharing caches since the beginning. But there are a lot of other little cases where styles might not match. For example, if a CSS rule uses the :first-child selector, then two paragraphs might not match, even though the checks above suggest that they should.

In WebKit and Blink, the style sharing cache would give up in these cases and not use the cache. As more sites use these modern selectors, the optimization was becoming less and less useful, so the Blink team recently removed it. But it turns out there is a way for the style sharing cache to keep up with these changes.

In Quantum CSS, we gather up all of those weird selectors and check whether they apply to the DOM node. Then we store the answers as ones and zeros. If the two elements have the same ones and zeros, we know they definitely match.

first-child

If a DOM node can share styles that have already been computed, you can skip pretty much all of the work. Because pages often have many DOM nodes with the same styles, this style sharing cache can save on memory and also really speed things up.

Skipping all of the work

Conclusion

This is the first big technology transfer of Servo tech to Firefox. Along the way, we’ve learned a lot about how to bring modern, high-performance code written in Rust into the core of Firefox.

We’re very excited to have this big chunk of Project Quantum ready for users to experience first-hand. We’d be happy to have you try it out, and let us know if you find any issues.

