
William Lachance: Functional is the future

Mozilla planet - Mon, 28/08/2017 - 23:02

Just spent well over an hour tracking down a silly bug in my code. For the mission control project, I wrote this very simple API method that returns a cached data structure to our front end:

```python
def measure(request):
    channel_name = request.GET.get('channel')
    platform_name = request.GET.get('platform')
    measure_name = request.GET.get('measure')
    interval = request.GET.get('interval')
    if not all([channel_name, platform_name, measure_name]):
        return HttpResponseBadRequest("All of channel, platform, measure required")

    data = cache.get(get_measure_cache_key(platform_name, channel_name, measure_name))
    if not data:
        return HttpResponseNotFound("Data not available for this measure combination")

    if interval:
        try:
            min_time = datetime.datetime.now() - datetime.timedelta(seconds=int(interval))
        except ValueError:
            return HttpResponseBadRequest("Interval must be specified in seconds (as an integer)")

        # Return any build data in the interval
        empty_buildids = set()
        for (build_id, build_data) in data.items():
            build_data['data'] = [d for d in build_data['data'] if d[0] > min_time]
            if not build_data['data']:
                empty_buildids.add(build_id)

        # don't bother returning empty indexed data
        for empty_buildid in empty_buildids:
            del data[empty_buildid]

    return JsonResponse(data={'measure_data': data})
```

As you can see, it takes 3 required parameters (channel, platform, and measure) and one optional one (interval), picks out the required data structure, filters it a bit, and returns it. This is almost what we wanted for the frontend; unfortunately, the time zone information isn't quite right: the strings that are returned don't tell the frontend that they're in UTC, so they need a 'Z' appended to them.
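As a quick illustration of why that trailing 'Z' matters on the frontend (a minimal sketch; the date string below is made up):

```javascript
// Without a trailing 'Z', most JS engines treat an ISO 8601 date-time
// string as local time; with the 'Z', it is unambiguously UTC.
const local = new Date('2017-08-28T21:02:00');   // local-time interpretation
const utc   = new Date('2017-08-28T21:02:00Z');  // UTC interpretation

// On any machine whose timezone isn't UTC, the two differ by the offset.
console.log(utc.getTime() - local.getTime());
```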

After a bit of digging, I found out that Django’s json serializer will only add the Z if the tzinfo structure is specified. So I figured out a simple pattern for adding that (using the dateutil library, which we are fortunately already using):

```python
from dateutil.tz import tzutc

datetime.datetime.fromtimestamp(mydatestamp.timestamp(), tz=tzutc())
```

I tested this quickly on the python console and it seemed to work great. But when I added the code to my function, the unit tests mysteriously failed. Can you see why?

```python
for (build_id, build_data) in data.items():
    # add utc timezone info to each date, so django will serialize a
    # 'Z' to the end of the string (and so javascript's date constructor
    # will know it's utc)
    build_data['data'] = [
        [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:]
        for d in build_data['data'] if d[0] > min_time
    ]
```

Trick question: there’s actually nothing wrong with this code. But if you look at the block in context (see the top of the post), you see that it’s only executed if interval is specified, which it isn’t necessarily. The first case that my unit tests executed didn’t specify interval, so fail they did. It wasn’t immediately obvious to me why this was happening, so I went on a wild-goose chase of trying to figure out how the Django context might have been responsible for the unexpected output, before realizing my basic logic error.

This was fairly easily corrected (my updated code applies the datetime-mapping unconditionally to the set of optionally-filtered results), but it perfectly illustrates my issue with idiomatic Python: while the language itself has constructs like map and reduce that support the functional programming model, it strongly steers you towards writing things in an imperative style that makes costly and annoying mistakes like this much easier to make. Yes, list and dictionary comprehensions are nice and compact, but they start to break down in the more complex cases.

As an experiment, I wrote up what this function might look like in a pure functional style with immutable data structures:

```python
def transform_and_filter_data(build_data):
    new_build_data = copy.copy(build_data)
    new_build_data['data'] = [
        [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:]
        for d in build_data['data'] if d[0] > min_time
    ]
    return new_build_data

# note: iterating over the dict's items, not the dict itself
transformed_build_data = {k: v for k, v in
                          {k: transform_and_filter_data(v)
                           for k, v in data.items()}.items()
                          if len(v['data']) > 0}
```

A work of art it isn’t — and definitely not “pythonic”. Compare this to a similar piece of code written in Javascript (ES6) with lodash (using a hypothetical tzified function):

```javascript
let transformedBuildData = _.filter(_.mapValues(data, (buildData) => ({
  ...buildData,
  data: buildData['data']
    .filter(datum => datum[0] > minTimestamp)
    .map(datum => [tzified(datum[0])].concat(datum.slice(1)))
})), (data, buildId) => data['data'].length > 0);
```

A little bit easier to understand, but more importantly it comes across as idiomatic and natural in a way that the python version just doesn’t. I’ve been happily programming Python for the last 10 years, but it’s increasingly feeling time to move on to greener pastures.


Air Mozilla: Mozilla Weekly Project Meeting, 28 Aug 2017

Mozilla planet - Mon, 28/08/2017 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting


Mozilla GFX: WebRender newsletter #3

Mozilla planet - Mon, 28/08/2017 - 19:57

WebRender work is coming along nicely. I haven’t managed to properly track what landed this week so the summary below is somewhat short. This does little justice to the great stuff that is happening on the side. For example I won’t list the many bugs that Sotaro finds and fixes on a daily basis, or the continuous efforts Kats puts into keeping Gecko’s repository in sync with WebRender’s, or Ryan’s work on cbindgen (the tool we made to auto-generate C bindings for WebRender), or the unglamorous refactoring I got myself into in order to get some parts of Gecko to integrate with WebRender without breaking the web. Lee has been working on the dirty and gory details of fonts for a while but that won’t make it to the newsletter until it lands. Morris’s work on display items conversion hasn’t yet received due credit here, nor Jerry’s work on handling the many (way too many) texture formats that have to be supported by WebRender for video playback. Meanwhile Gankro is working on changes to the rust language itself that will make our life easier when dealing with fallible allocation and Kvark, after reviewing most of what lands in the WebRender repo and triaging all of the issues, manages to find the time to add tools to measure pixel coverage of render passes, and plenty of other things I don’t even know about because following everything closely would be a full-time job. You get the idea. I just wanted to give a little shout out to the people working on very important parts of the project that may not always appear in the highlights below, either because the work hasn’t landed yet, because I missed it, or because it was hidden behind Glenn’s usual round of epic optimization.

Notable WebRender changes
  • Glenn optimized the allocation of clip masks. Improvements with this fix on a test case generated from running cnn.com in Gecko: GPU time 10ms -> 1.7ms. Clip target allocations 54 -> 1. CPU compositor time 2.8ms -> 1.8ms. CPU backend time 1.8ms -> 1.6ms.

Notable Gecko changes
  • Jeff landed tiling support for blob images. Tiling is currently only used for very large images, but when used we get parallel rasterization across tiles for free.
  • Fallback blob images are no longer manually clipped. This means that we don't have to redraw them while scrolling anymore. This gives a large performance improvement when scrolling mozilla.org.


Shruti Jasoria: Replicate Distribution on Perfherder

Mozilla planet - Sun, 27/08/2017 - 20:30

I would have put up this post a tad early had the Game of Thrones season finale not consumed my entire afternoon today. What an episode! It's saddening that the final season of the show requires a two-year-long wait.

All the code which I have written for GSoC has been merged!

After squashing Bug 1273513 and Bug 1164891, I started working on Bug 1350384 which constituted a major portion of my GSoC project.

Up till now, Perfherder provided aggregated results and graphical visualization of various test suites in its comparison view, but not for the individual replicates that are used to generate them. For tests where there is a large natural variation in these individual numbers, it can be difficult to determine if there is a regression when the summarised values change, because there could be many different underlying reasons for that. For these cases, a detailed view of the individual test results could be extremely helpful.

These individual test results can now be analysed using the new replicate view. This can be accessed from the subtest comparison:

Link to the replicate distribution view from subtest comparison.

In this view, bar charts are used to compare the replicate results. The values for the base and original projects run side-by-side in the order in which they were obtained.

The new replicate view.

This feature is currently available for the Talos framework when two specific revisions are compared.

With this bug comes an end to Google Summer of Code. It was a truly amazing experience. Over this summer, I have developed a better understanding of how Perfherder works and my front-end development skills have improved a lot. I got a chance to fly to San Francisco and attend the All Hands meeting. The best part about GSoC was the fact that the code which I have written would make an impact on how performance testing is done in Mozilla.

In other news, my seventh semester in the college has begun. There’s so much to explore. This semester, I’ll try my hand on Artificial Intelligence and Cryptography. And probably a new Mozilla project too.

I hope you find the features which I have added to Perfherder useful. :)


J. Ryan Stinnett: Building Firefox for Linux 32-bit

Mozilla planet - Sat, 26/08/2017 - 02:33
Background

As part of my work on the Stylo / Quantum CSS team at Mozilla, I needed to be able to test changes to Firefox that only affect Linux 32-bit builds. These days, I believe you essentially have to use a 64-bit host to build Firefox to avoid OOM issues during linking and potentially other steps, so this means some form of cross-compiling from a Linux 64-bit host to a Linux 32-bit target.

I already had a Linux 64-bit machine running Ubuntu 16.04 LTS, so I set about attempting to make it build Firefox targeting Linux 32-bit.

I should note that I only use Linux occasionally at the moment, so there could certainly be a better solution than the one I describe. Also, I recreated these steps after the fact, so I might have missed something. Please let me know in the comments.

This article assumes you are already set up to build Firefox when targeting 64-bit.

Multiarch Packages (Or: How It's Supposed to Work)

Recent versions of Debian and Ubuntu support the concept of "multiarch packages" which are intended to allow installing multiple architectures together to support use cases including... cross-compiling! Great, sounds like just the thing we need.

We should be able to install¹ the core Gecko development dependencies with an extra :i386 suffix to get the 32-bit version on our 64-bit host:

```
(host) $ sudo apt install libasound2-dev:i386 libcurl4-openssl-dev:i386 libdbus-1-dev:i386 libdbus-glib-1-dev:i386 libgconf2-dev:i386 libgtk-3-dev:i386 libgtk2.0-dev:i386 libiw-dev:i386 libnotify-dev:i386 libpulse-dev:i386 libx11-xcb-dev:i386 libxt-dev:i386 mesa-common-dev:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming. The following information may help to
resolve the situation:

The following packages have unmet dependencies:
 libgtk-3-dev:i386 : Depends: gir1.2-gtk-3.0:i386 (= 3.18.9-1ubuntu3.3) but it is not going to be installed
                     Depends: libatk1.0-dev:i386 (>= 2.15.1) but it is not going to be installed
                     Depends: libatk-bridge2.0-dev:i386 but it is not going to be installed
                     Depends: libegl1-mesa-dev:i386 but it is not going to be installed
                     Depends: libxkbcommon-dev:i386 but it is not going to be installed
                     Depends: libmirclient-dev:i386 (>= 0.13.3) but it is not going to be installed
 libgtk2.0-dev:i386 : Depends: gir1.2-gtk-2.0:i386 (= 2.24.30-1ubuntu1.16.04.2) but it is not going to be installed
                      Depends: libatk1.0-dev:i386 (>= 1.29.2) but it is not going to be installed
                      Recommends: python:i386 (>= 2.4) but it is not going to be installed
 libnotify-dev:i386 : Depends: gir1.2-notify-0.7:i386 (= 0.7.6-2svn1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
```

Well, that doesn't look good. It appears some of the Gecko libraries we need aren't happy about being installed for multiple architectures.

Switch Approaches to chroot

Since multiarch packages don't appear to be working here, I looked around for other approaches. Ideally, I would have something fairly self-contained so that it would be easy to remove when I no longer need 32-bit support.

One approach to multiple architectures that has been around for a while is to create a chroot environment: effectively, a separate installation of Linux for a different architecture. A utility like schroot can then be used to issue the chroot(2) system call which makes the current session believe this sub-installation is the root filesystem.

Let's grab schroot so we'll be able to enter the chroot once it's set up:

(host) $ sudo apt install schroot

There are several different types of chroots you can use with schroot. We'll use the directory type, as it's the simplest to understand (just another directory on the existing filesystem), and it will make it simpler to expose a few things to the host later on.

You can place the directory wherever, but some existing filesystems are mapped into the chroot for convenience, so avoiding /home is probably a good idea. I went with /var/chroot/linux32:

(host) $ sudo mkdir -p /var/chroot/linux32

We need to update schroot.conf to configure the new chroot:

```
(host) $ sudo cat << EOF >> /etc/schroot/schroot.conf
[linux32]
description=Linux32 build environment
aliases=default
type=directory
directory=/var/chroot/linux32
personality=linux32
profile=desktop
users=jryans
root-users=jryans
EOF
```

In particular, personality is important to set for this multi-arch use case. (Make sure to replace the user names with your own!)

Firefox will want access to shared memory as well, so we'll need to add that to the set of mapped filesystems in the chroot:

```
(host) $ sudo cat << EOF >> /etc/schroot/desktop/fstab
/dev/shm /dev/shm none rw,bind 0 0
EOF
```

Now we need to install the 32-bit system inside the chroot. We can do that with a utility called debootstrap:

```
(host) $ sudo apt install debootstrap
(host) $ sudo debootstrap --variant=buildd --arch=i386 --foreign xenial /var/chroot/linux32 http://archive.ubuntu.com/ubuntu
```

This will fetch all the packages for a 32-bit installation and place them in the chroot. For a cross-arch bootstrap, we need to add --foreign to skip the unpacking step, which we will do momentarily from inside the chroot. --variant=buildd will help us out a bit by including common build tools.

To finish installation, we have to enter the chroot. You can enter the chroot with schroot and it remains active until you exit. Any snippets that say (chroot) instead of (host) are meant to be run inside the chroot.

So, inside the chroot, run the second stage of debootstrap to actually unpack everything:

(chroot) $ sudo /debootstrap/debootstrap --second-stage

Let's double-check that things are working like we expect:

```
(chroot) $ arch
i686
```

Great, we're getting closer!

Install packages

Now that we have a basic 32-bit installation, let's install the packages we need for development. The apt source list inside the chroot is pretty bare bones, so we'll want to expand it a bit to reach everything we need:

```
(chroot) $ sudo cat << EOF > /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu xenial main universe
deb http://archive.ubuntu.com/ubuntu xenial-updates main universe
EOF
(chroot) $ sudo apt update
```

Let's grab the same packages from before (without :i386 since that's the default inside the chroot):

(chroot) $ sudo apt install libasound2-dev libcurl4-openssl-dev libdbus-1-dev libdbus-glib-1-dev libgconf2-dev libgtk-3-dev libgtk2.0-dev libiw-dev libnotify-dev libpulse-dev libx11-xcb-dev libxt-dev mesa-common-dev python-dbus xvfb yasm

You may need to install the 32-bit version of your graphics card's GL library to get reasonable graphics output when running in the 32-bit environment.

(chroot) $ sudo apt install nvidia-384

We'll also want to have access to the X display inside the chroot. The simple way to achieve this is to disable X security in the host and expose the same display in the chroot:

```
(host) $ xhost +
(chroot) $ export DISPLAY=:0
```

We can verify that we have accelerated graphics:

```
(chroot) $ sudo apt install mesa-utils
(chroot) $ glxinfo | grep renderer
OpenGL renderer string: GeForce GTX 1080/PCIe/SSE2
```

Building Firefox

In order for the host to build Firefox for the 32-bit target, it needs to access various 32-bit libraries and include files. We already have these installed in the chroot, so let's cheat and expose them to the host via symlinks into the chroot's file structure:

```
(host) $ sudo ln -s /var/chroot/linux32/lib/i386-linux-gnu /lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/lib/i386-linux-gnu /usr/lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/include/i386-linux-gnu /usr/include/
```

We also need Rust to be able to target 32-bit from the host, so let's install support for that:

(host) $ rustup target add i686-unknown-linux-gnu

We'll need a specialized .mozconfig for Firefox to target 32-bit. Something like the following:

```
(host) $ cat << EOF > ~/projects/gecko/.mozconfig
export PKG_CONFIG_PATH="/var/chroot/linux32/usr/lib/i386-linux-gnu/pkgconfig:/var/chroot/linux32/usr/share/pkgconfig"

export MOZ_LINUX_32_SSE2_STARTUP_ERROR=1
CFLAGS="$CFLAGS -msse -msse2 -mfpmath=sse"
CXXFLAGS="$CXXFLAGS -msse -msse2 -mfpmath=sse"

if test `uname -m` = "x86_64"; then
  CFLAGS="$CFLAGS -m32 -march=pentium-m"
  CXXFLAGS="$CXXFLAGS -m32 -march=pentium-m"
  ac_add_options --target=i686-pc-linux
  ac_add_options --host=i686-pc-linux
  ac_add_options --x-libraries=/usr/lib
fi
EOF
```

This was adapted from the mozconfig.linux32 used for official 32-bit builds. I modified the PKG_CONFIG_PATH to point at more 32-bit files installed inside the chroot, similar to the library and include changes above.

Now, we should be able to build successfully:

(host) $ ./mach build

Then, from the chroot, you can run Firefox and other tests:

(chroot) $ ./mach run

Firefox running on Linux 32-bit

Footnotes

1. It's commonly suggested that people should use ./mach bootstrap to install the Firefox build dependencies, so feel free to try that if you wish. I dislike scripts that install system packages, so I've done it manually here. The bootstrap script would likely need various adjustments to support this use case.


The Servo Blog: Custom Elements in Servo

Mozilla planet - Thu, 24/08/2017 - 22:00

This summer I had the pleasure of implementing Custom Elements in Servo under the mentorship of jdm.

Introduction

Custom Elements are an exciting development for the Web Platform. They are a part of the Web Components APIs. The goal is to allow web developers to create reusable web components with first-class support from the browser. The Custom Element portion of Web Components allows for elements with custom names and behaviors to be defined and used via HTML tags.

For example, a developer could create a custom element called fancy-button which has special behavior (for example, ripples from material design). This element is reusable and can be used directly in HTML:

<fancy-button>My Cool Button</fancy-button>
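For illustration, a minimal sketch of how such an element might be defined (the class body and its "ripple" behavior here are hypothetical, not from an actual library):

```javascript
// Hypothetical definition of <fancy-button>.
class FancyButton extends HTMLElement {
  connectedCallback() {
    // Runs when the element is inserted into a document; a real
    // implementation would render the material-design ripple here.
    this.addEventListener('click', () => this.classList.add('rippling'));
  }
}

// Register the definition so the browser upgrades <fancy-button> tags.
customElements.define('fancy-button', FancyButton);
```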

For examples of cool web components check out webcomponents.org.

While using these APIs directly is very powerful, new web frameworks are emerging that harness Web Component APIs and give developers even more leverage. One major contender among frontend web frameworks is Polymer. The Polymer framework builds on top of Web Components, removing boilerplate and making web components easier to use.

Another exciting framework using Custom Elements is A-Frame (supported by Mozilla). A-Frame is a WebVR framework that allows developers to create entire Virtual Reality experiences using HTML elements and JavaScript. There has been some recent work in getting WebVR and A-Frame functional in Servo. Implementing Custom Elements removes the need for Servo to rely on a polyfill.

For more information on what Custom Elements are and how to use them, I would suggest reading Custom Elements v1: Reusable Web Components.

Implementation

Before I began the implementation of Custom Elements, I broke down the spec into a few major pieces.

  • The CustomElementRegistry
  • Custom element creation
  • Custom element reactions

The CustomElementRegistry keeps track of all the defined custom elements for a single window. The registry is where you go to define new custom elements, and later Servo will use the registry to look up definitions given a possible custom element name. The bulk of the work in this section of the implementation was validating custom element definitions.
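For instance, here are a couple of the rules that definition validation enforces, shown through the standard customElements API that the registry backs (a sketch; each failing call should be run separately):

```javascript
// Valid: custom element names must contain a hyphen.
customElements.define('my-widget', class extends HTMLElement {});

// Throws a SyntaxError: 'widget' contains no hyphen, so it is not a
// valid custom element name.
customElements.define('widget', class extends HTMLElement {});

// Throws a NotSupportedError: a name may only be defined once.
customElements.define('my-widget', class extends HTMLElement {});
```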

Custom element creation is the process of taking a custom element definition and running the defined constructor on an HTMLElement (or the element it extends). This can happen either when a new element is created, or after an element has been created, via an upgrade reaction.

The final portion is triggering custom element reactions. There are two types of reactions:

  1. Callback reactions
  2. Upgrade reactions

Callback reactions fire when custom elements:

  • are connected to the DOM tree
  • are disconnected from the DOM tree
  • are adopted into a new document
  • have an attribute that is modified

When the reactions are triggered, the corresponding lifecycle method of the Custom Element is called. This allows the developer to implement custom behavior when any of these lifecycle events occur.
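Concretely, the lifecycle methods correspond one-to-one with those reactions (a minimal sketch; the logging-element name is invented):

```javascript
class LoggingElement extends HTMLElement {
  connectedCallback()    { console.log('connected to a DOM tree'); }
  disconnectedCallback() { console.log('disconnected from the DOM tree'); }
  adoptedCallback()      { console.log('adopted into a new document'); }
  attributeChangedCallback(name, oldValue, newValue) {
    console.log(`${name} changed: ${oldValue} -> ${newValue}`);
  }
  // attributeChangedCallback only fires for attributes listed here.
  static get observedAttributes() { return ['label']; }
}

customElements.define('logging-element', LoggingElement);
```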

Upgrade reactions are used to take a non-customized element and make it customized by running the defined constructor. There is quite a bit of trickery going on behind the scenes to make all of this work. I wrote a post about custom element upgrades explaining how they work and why they are needed.

I used Gecko’s partial implementation of Custom Elements as a reference for a few parts of my implementation. This became extremely useful whenever I had to use the SpiderMonkey API.

Roadblocks

As with any project, it is difficult to foresee big issues until you actually start writing the implementation. Most parts of the spec were straightforward and did not yield any trouble while I was writing the implementation; however, there were a few difficulties and unexpected problems that presented themselves.

One major pain-point was working with the SpiderMonkey API. This was more due to my lack of experience with the SpiderMonkey API. I had to learn how compartments work and how to debug panics coming from SpiderMonkey. bzbarsky was extremely helpful during this process; they helped me step through each issue and understand what I was doing wrong.

While I was in the midst of writing the implementation, I found out about the HTMLConstructor attribute. I had missed this part of the spec during the planning phase. The HTMLConstructor WebIDL attribute marks certain HTML elements that can be extended and generates a custom constructor for each that allows custom element constructors to work (read more about this in custom element upgrades).

Notable Pull Requests

Conclusions

I enjoyed working on this project this summer and hope to continue my involvement with the Servo project. I have a gsoc repository that contains a list of all my GSoC issues, PRs, and blog posts. I want to extend a huge thanks to my mentor jdm and to bzbarsky for helping me work through issues when using SpiderMonkey.


Mozilla Open Policy & Advocacy Blog: Mozilla applauds India Supreme Court’s decision upholding privacy as a fundamental right

Mozilla planet - Thu, 24/08/2017 - 19:09

Mozilla is thrilled to see the Supreme Court of India’s decision declaring that the Right to Privacy is guaranteed by the Indian Constitution. Mozilla fights for privacy around the world as part of our mission, and so we’re pleased to see the Supreme Court unequivocally end the debate on whether this right even exists in India. Attention must now move to Aadhaar, which the government is increasingly making mandatory without meaningful privacy protections. To realize the right to privacy in practice, swift action is needed to enact a strong data protection law.

The post Mozilla applauds India Supreme Court’s decision upholding privacy as a fundamental right appeared first on Open Policy & Advocacy.


Mike Conley: Photon Engineering Newsletter #14

Mozilla planet - Thu, 24/08/2017 - 19:04

Just like jaws did last week, I’m taking over for dolske this week to talk about stuff going on with Photon Engineering. So sit back, strap in, and absorb Photon Engineering Newsletter #14!

If you’ve got the release calendar at hand, you’ll note that Nightly 57 merges to Beta on September 20th. Given that there’s usually a soft-freeze before the merge, this means that there are less than 4 weeks remaining for Photon development. That’s right – in less than a month’s time, folks on the Beta channel who might not be following Nightly development are going to get their first Photon experience. That’ll be pretty exciting!

So with the clock winding down, the Photon team has started to shift more towards polish and bug-fixing. At this point, all of the major changes should have landed, and now we need to buff the code to a sparkling sheen.

The first thing you may have noticed is that, after a solid run of dogefox, the icon has shifted again:

The new Nightly icon

We now return you to your regularly scheduled programming

The second big change is our new 60fps¹ loading throbbers in the tabs, coming straight to you from the Photon Animations team!

The new loading throbber in Nightly

I think it’s fair to say that Photon Animations are giving Firefox a turbo boost!

Other recent changes

Menus and structure

Animations
  • Did we mention the new tab loading throbber?

Preferences
  • All MVP work is completed! The team is now fixing polish bugs. Outstanding!

Visual redesign

Onboarding

Performance
  1. The screen capturing software I used here is only capturing at 30fps, so it’s really not doing it justice. This tweet might capture it better. 


Air Mozilla: Reps Weekly Meeting Aug. 24, 2017

Mozilla planet - Thu, 24/08/2017 - 18:00

Reps Weekly Meeting Aug. 24, 2017: This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Hacks.Mozilla.Org: Introducing the Extension Compatibility Tester

Mozilla planet - Thu, 24/08/2017 - 16:29

With Firefox’s move to a modern web-style browser extension API, it’s now possible to maintain one codebase and ship an extension in multiple browsers. However, since different browsers can have different capabilities, some extensions may require modification to be truly portable. With this in mind, we’ve built the Extension Compatibility Tester to give developers a better sense of whether their existing extensions will work in Firefox.

The tool currently supports Chrome extension bundle (.crx) files, but we’re working on expanding the types of extensions you can check. The tool generates a report showing any potential uses of APIs or permissions incompatible with Firefox, along with next steps on how to distribute a compatible extension to Firefox users.

We will continue to participate in the Browser Extensions Community Group and support its goal of finding a common subset of extensible points in browsers and APIs that developers can use. We hope you give the tool a spin and let us know what you think!

Try it out! >>

“The tool says my extension may not be compatible”

Not to worry! Our analysis only shows API and permission usage, and doesn’t have the full context. If the incompatible functionality is non-essential to your extension you can use capability testing to only use the API when available:

```javascript
// Causes an Error
browser.unavailableAPI(...);

// Capability Testing FTW!
if ('unavailableAPI' in browser) {
  browser.unavailableAPI(...);
}
```

Additionally, we’re constantly expanding the available extension APIs, so your missing functionality may be only a few weeks away!

“The tool says my extension is compatible!”

Hooray! That said, definitely try your extension out in Firefox before submitting to make sure things work as you expect. Common APIs may still have different effects in different browsers.

“I don’t want to upload my code to a 3rd party website.”

Understood! The compatibility testing is available as part of our extension development command-line tool or as a standalone module.

If you have any issues using the tool, please file an issue or leave a comment here. The hope is that this tool is a useful first step in helping developers port their extensions, and we get a healthier, more interoperable extension ecosystem.

Happy porting!


Ryan Harter: Documentation Style Guide

Mozilla planet - Thu, 24/08/2017 - 09:00

I just wrote up a style guide for our team's documentation. The documentation is rendered using GitBook and hosted on GitHub Pages. You can find the PR here, but I figured it's worth sharing in this post as well.

Style Guide

Articles should be written in Markdown (not AsciiDoc). Markdown is usually powerful enough and is a more common technology than AsciiDoc.

Limit lines to 100 characters where possible. Try to split lines at the end of sentences. This makes it easier to reorganize your thoughts later.

This documentation is meant to be read digitally. Keep in mind that people read digital content much differently than other media. Specifically, readers are going to skim your writing, so make it easy to identify important information.

Use visual markup like bold text, code blocks, and section headers. Avoid long paragraphs. Short paragraphs that each describe one concept make it easier to find important information.

Please squash your changes into meaningful commits and follow these commit message guidelines.


Emma Humphries: Firefox Triage Report 2017-08-21

Mozilla planet - Thu, 24/08/2017 - 01:23

Correction: several incorrect buglist links have been fixed

It's the weekly report on the state of triage in Firefox-related components. I apologize for missing last week’s report. I was travelling and did not have a chance to sit down and focus on this.

Hotspots

The components with the most untriaged bugs remain the JavaScript Engine and Build Config.

I discussed the JavaScript bugs with Naveed. What will happen is that the JavaScript bugs which have not been marked as a priority for Quantum Flow (the [qf:p1]-[qf:p3] whiteboard tags) or existing work (the [js:p1]-[js:p3] whiteboard tags) will be moved to the backlog (P3) for review after the Firefox 57 release. See https://bugzilla.mozilla.org/show_bug.cgi?id=1392436.

| Rank | Component | 2017-08-07 | This Week |
| --- | --- | --- | --- |
| 1 | Core: JavaScript Engine | 449 | 471 |
| 2 | Core: Build Config | 429 | 450 |
| 3 | Firefox for Android: General | 411 | 406 |
| 4 | Firefox: General | 242 | 246 |
| 5 | Core: General | 234 | 235 |
| 6 | Core: XPCOM | 176 | 178 |
| 7 | Core: JavaScript: GC | — | 168 |
| 8 | Core: Networking | — | 161 |
| | All Components | 8,373 | 8,703 |

Please make sure you’ve made it clear what, if anything, will happen with these bugs.

Not sure how to triage? Read https://wiki.mozilla.org/Bugmasters/Process/Triage.

Next Release

| Version | 56 | 56 | 56 | 56 | 57 | 57 | 57 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Date | 7/10 | 7/17 | 7/24 | 7/31 | 8/7 | 8/14 | 8/21 |
| Untriaged this Cycle | 4,525 | 4,451 | 4,317 | 4,479 | 479 | 835 | 1,196 |
| Unassigned Untriaged this Cycle | 3,742 | 3,682 | 3,517 | 3,674 | 356 | 634 | 968 |
| Affected this Upcoming Release (56) | | 111 | 126 | 139 | 125 | 123 | 119 |
| Enhancements | 102 | 107 | 91 | 103 | 3 | 5 | 11 |
| Orphaned P1s | 199 | 193 | 183 | 192 | 196 | 191 | 183 |
| Stalled P1s | 195 | 173 | 159 | 179 | 157 | 152 | 155 |

What should we do with these bugs? Bulk close them? Make them into P3s? Bugs without decisions add noise to our system, cause despair in those trying to triage bugs, and leave the community wondering if we listen to them.

Methods and Definitions

In this report I talk about bugs in Core, Firefox, Firefox for Android, Firefox for iOS, and Toolkit which are unresolved, not filed from treeherder using the intermittent-bug-filer account*, and have no pending needinfos.

By triaged, I mean a bug has been marked as P1 (work on now), P2 (work on next), P3 (backlog), or P5 (will not work on but will accept a patch).

A triage decision is not the same as a release decision (status and tracking flags.)

https://mozilla.github.io/triage-report/#report

Age of Untriaged Bugs

The average age of a bug filed since June 1st of 2016 which has gone without triage.

https://mozilla.github.io/triage-report/#date-report

Untriaged Bugs in Current Cycle

Bugs filed since the start of the Firefox 57 release cycle which do not have a triage decision.

https://mzl.la/2wzJxLP

Recommendation: review bugs you are responsible for (https://bugzilla.mozilla.org/page.cgi?id=triage_owners.html) and make triage decision, or RESOLVE.

Untriaged Bugs in Current Cycle (57) Affecting Next Release (56)

Bugs marked status_firefox56 = affected and untriaged.

https://mzl.la/2wzjHaH

Enhancements in Release Cycle

Bugs filed in the release cycle which are enhancement requests, severity = enhancement, and untriaged.

https://mzl.la/2wzCBy8

Recommendation: product managers should review and mark as P3, P5, or RESOLVE as WONTFIX.

High Priority Bugs without Owners

Bugs with a priority of P1, which do not have an assignee, have not been modified in the past two weeks, and do not have pending needinfos.

https://mzl.la/2u1VLem

Recommendation: review priorities and assign bugs, re-prioritize to P2, P3, P5, or RESOLVE.

Stalled High Priority Bugs

There are 159 bugs with a priority of P1, which have an assignee, but have not been modified in the past two weeks.

https://mzl.la/2u2poMJ

Recommendation: review assignments, determine if the priority should be changed to P2, P3, P5 or RESOLVE.

* New intermittents are filed as P5s, and we are still cleaning up bugs after this change, See https://bugzilla.mozilla.org/show_bug.cgi?id=1381587, https://bugzilla.mozilla.org/show_bug.cgi?id=1381960, and https://bugzilla.mozilla.org/show_bug.cgi?id=1383923

If you have questions or enhancements you want to see in this report, please reply to me here, on IRC, or Slack and thank you for reading.




The Servo Blog: Off main thread HTML parsing in Servo

Mozilla planet - Wed, 23/08/2017 - 20:41

Originally published on Nikhil’s blog.

Introduction

Traditionally, browsers have been written as single-threaded applications, and the HTML spec certainly seems to validate this statement. This makes it difficult to parallelize any task which a browser carries out, and we generally have to come up with innovative ways to do so.

One such task is HTML parsing, and I have been working on parallelizing it this summer as part of my GSoC project. Since Servo is written in Rust, I’m assuming the reader has some basic knowledge about Rust. If not, check out this awesome Rust book. Done? Let’s dive straight into the details:

HTML Parser

Servo’s HTML (and XML) parsing code lives in html5ever. Since this project concerns HTML parsing, I will only be talking about that. The first component we need to know about is the Tokenizer. This component is responsible for taking in raw input from a buffer and creating tokens, eventually sending them to its Sink, which we will call TokenSink. This could be any type which implements the TokenSink trait.

html5ever has a type called TreeBuilder, which implements this trait. The TreeBuilder’s job is to create tree operations based on the tokens it receives. TreeBuilder contains its own Sink, called TreeSink, which details the methods corresponding to these tree ops. The TreeBuilder calls these TreeSink methods under appropriate conditions, and these ‘action methods’ are responsible for constructing the DOM tree.

With me so far? Good. The key to parallelizing HTML parsing is realizing that the task of creating tree ops is independent from the task of actually executing them to construct the DOM tree. Therefore, tokenization and tree op creation can happen on a separate thread, while the tree construction can be done on the main thread itself.

Example image
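Conceptually, the split looks something like this (a browser-JavaScript sketch of the idea only; Servo's actual implementation is in Rust, and every name below is invented):

```javascript
// Two halves of the parser connected by a message channel.
const { port1: toMainThread, port2: fromParser } = new MessageChannel();

// "Main thread": consumes tree ops and applies them to the real DOM.
fromParser.onmessage = ({ data: op }) => {
  console.log('executing tree op:', op);  // stand-in for a DOM mutation
};

// "Parser thread": turns tokens into tree ops and sends them off
// without waiting for them to be executed.
function treeBuilderStep(token) {
  return { action: 'insert', tagName: token };  // stand-in tree op
}
for (const token of ['html', 'body', 'p']) {
  toMainThread.postMessage(treeBuilderStep(token));
}
```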

The Process

The first step I took was to decouple tree op creation from tree construction. Previously, tree ops were executed as soon as they were created. This involved the creation of a new TreeSink which, instead of executing tree ops directly, created a representation of each tree op containing all relevant data. For the time being, I sent the tree op to a process_op function as soon as it was created, whereupon it was executed.

Now that these two processes were independent, my next task consisted of creating a new thread, where the Tokenizer+TreeBuilder pair would live, to generate these tree ops. Now, when a tree op was created, it would be sent to the main thread, and control would return back to the TreeBuilder. The TreeBuilder does not have to wait for the execution of the tree op anymore, thus speeding up the entire process.

So far so good. The final task in this project was to implement speculative parsing, by building on top of these recent changes.

Speculative Parsing

The HTML spec dictates that at any point during parsing, if we encounter a script tag, then the script must be executed immediately (if it is an inline script), or must be fetched and then executed (note that this rule does not apply to async or defer scripts). Why, you might ask, must this be done so? Why can’t we mark these scripts and execute them all at the end, after the parsing is done? This is because of an old, ill-thought-out Document API function called document.write(). This function is a pain point for many developers who work on browsers, as it is a real headache implementing it well enough, while working around the many idiosyncrasies which surround it. I won’t dive into the details here, as they are not relevant. All we need to know is what document.write() does: it takes a string argument, which is generally markup, and inserts this string as part of the document’s HTML content. Suffice it to say that using this function might break your page, and it should not be used.

Returning to the parsing task, we can’t commit any DOM manipulations until the script finishes executing, because document.write() could make them redundant. What speculative parsing aims to do is to continue parsing the content after the script tag in the parser thread, while the script is being executed in the main thread. Note that we are only speculatively creating tree ops here, not the actual tree construction. After the script finishes executing, we analyze the actions of the document.write() calls (if any) to determine whether to use the tree ops, or to throw them away.
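To see why the parser cannot blindly run ahead, consider what a script like this does (a contrived example):

```javascript
// Executed while the parser is paused at this script tag. The string is
// inserted into the input stream at the parser's current position, so
// everything after the script now parses inside an unclosed <div>; any
// speculatively created tree ops for that content would be wrong.
document.write('<div class="surprise">');
```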

Roadblock!

Remember when I said the process of creating tree ops is independent from tree construction? Well, I lied a little. Until a week ago, we needed access to some DOM nodes for the creation of a couple of tree actions (one method needed to know if a node had a parent, and the other needed to know whether two nodes existed in the same tree). When I moved the task of creating tree ops to a separate thread, I could no longer access the DOM tree, which lived on the main thread. So I used a Sender on the TreeSink to create and send queries to the main thread, which would access the DOM and send the results back. Only then would the TreeSink method return, with the data it received from the main thread. Additionally, this meant that these couple of methods were synchronous in nature. No biggie.

I realized the problem when I sat down to think about how I would implement speculative parsing. Since the main thread is busy executing scripts, it won’t be listening to the queries these synchronous methods will be sending, and therefore the task of creating tree ops cannot progress further!

This turned out to be a bigger problem than I’d imagined, and I also had to sift through the equivalent Gecko code to understand how this situation was handled. I eventually came up with a good solution, but I won’t bore you with the details. If you want to know more, here’s a gist explaining the solution.

With these changes landed in html5ever, I can finally implement speculative parsing. Unfortunately, there’s not much time to implement it as a part of the GSoC project, so I will be landing this feature in Servo some time later. I hope to publish another blog post describing it thoroughly, along with details on the performance improvements this feature would bring.

Links to important PRs:

Added Async HTML Tokenizer: https://github.com/servo/servo/pull/17037

Run the async HTML Tokenizer on a new thread: https://github.com/servo/servo/pull/17914

TreeBuilder no longer relies on same_tree and has_parent_node: https://github.com/servo/html5ever/pull/300

End TreeBuilder’s reliance on DOM: https://github.com/servo/servo/pull/18056

Conclusion

This was a really fun project; I got to solve lots of cool problems, and I also learnt a lot more about how a modern, spec-compliant rendering engine works.

I would like to thank my mentor Anthony Ramine, who was absolutely amazing to work with, and Josh Matthews, who helped me a lot when I was still a rookie looking to contribute to the project.


Air Mozilla: The Joy of Coding - Episode 110

Mozilla planet - Wed, 23/08/2017 - 19:00

The Joy of Coding - Episode 110: mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Weekly SUMO Community Meeting August 23, 2017

Mozilla planet - Wed, 23/08/2017 - 18:00

Weekly SUMO Community Meeting August 23, 2017: This is the SUMO weekly call.


Ryan Harter: Beer and Probes

Mozilla planet - Wed, 23/08/2017 - 09:00

Quick post to clear up some terminology. But first, an analogy to clear up my thinking:

Analogy

Temperature control is a big part of brewing beer. Throughout the brewing process I use a thermometer to measure the temperature of the soon-to-be beer. Because I take several temperature readings throughout the brewing process, one brew will result in a list of a half dozen temperature readings. For example, I take a mash temperature, then a sparge temperature, then a fermentation temperature. The units on these measurements are always in Fahrenheit, but their interpretation is different.

The Rub

In this example, I would call the thermometer a "probe". The set of all temperature readings share a "data type". Each temperature reading is a "measurement" which is stored in a given "field".

At the SFO workweek I uncovered some terminology I found confusing. Specifically, we use the word "probe" to refer to data we collect. I haven't encountered this usage outside of Mozilla.

Instead, I'd suggest we call histograms and scalars "data types". A "probe" is a unit of client-side code that collects a measurement for us. A single "field" could be a column in one of our datasets (like normalized_channel). A measurement would be a value from a single field from a single ping (like the string "release").


Mozilla Testing New Default Opt-Out Setting for Firefox Telemetry Collection - BleepingComputer

News collected via Google - Wed, 23/08/2017 - 00:33

Mozilla engineers are discussing plans to change the way Firefox collects usage data (telemetry), and the organization is currently preparing to test an opt-out clause so it can collect more data relevant to the browser's usage.

Related coverage:
  • Mozilla causes stir with opt-out data collection plans - Neowin
  • Mozilla wants to keep the internet away from fake news - Free Newsman: Market Research News By Market.Biz
  • Firefox Plans to Anonymously Collect Browsing Data - HardOCP

Mozilla Open Innovation Team: Join Mozilla and Stanford’s open design sprint for an accessible web

Mozilla planet - Tue, 22/08/2017 - 21:06
CC photo by Complete Streets via Flickr

Millions of people have disabilities, ranging from hearing impairments from birth to visual impairments from old age. As much of our lives increasingly takes place online, the absence of accessibility contributes to the exclusion or partial exclusion of many people from society. Mozilla’s mission is to keep the web open, for everyone.

Working to include everyone has led to innovations that benefit others too. Take curb cuts, the sloping curb sections that connect sidewalks to the street. Curb cuts were originally introduced by disability activists for people in wheelchairs, but they were soon eagerly welcomed by people using bicycles, delivery carts and strollers. We believe innovations for accessibility tend to produce a corresponding electronic curb-cut effect.

We are looking for volunteers with first hand experience with accessibility needs, creative thinkers, designers, and engineers to work together to re-imagine accessibility for everyone while surfing the web.

Fill out this short form to join the design drive Monday to Friday, Aug 28 to Sep 1. Participation will involve working with a small team for about an hour/day.

The decentralized design process

The Open Innovation Team at Mozilla and Stanford have partnered to explore how a decentralized design process (a design process where people are not in the same physical location) can provide a way to innovate and include more diverse perspectives in the design process. The “hive” approach, pioneered by Stanford, will be used in this experiment to test how a decentralized design process can help inspire and create a better, accessible web for all!

How it works

You will work online in small teams with other participants across the globe for about an hour each day from Monday to Friday, Aug 28-Sep 1. You will be grouped based on your background, timezone and availability. We will go through the Stanford d.school’s design process together, spending a day on each of the phases: inspire, define, ideate, prototype, and test. We will gradually change team membership to give you a chance to interact with a diverse group of people over the course of the design sprint. We will provide instructions and deliverables for each phase.

This will be a highly collaborative process, where you will work with interesting people and disability experts while practicing the different stages of the design thinking process for a real world product used by millions of people. Each team will have a team-lead who will facilitate conversations. You can apply to be a team-lead on the signup form. Priority will be given to people who have a disability.

The final submissions to the design drive will be evaluated by the Firefox Test Pilot and Accessibility teams, and will go through a round of user testing. The Test Pilot team will evaluate the best contributions and determine if they are ready to be tested by hundreds of thousands of users. Test Pilot is a platform that allows Mozilla to launch experimental features for Firefox to general release users and enables Mozilla to learn in detail how these features are used. Learnings from Test Pilot help Mozilla make decisions about Firefox and other products. So this could be the first step towards getting your contribution into the official Firefox browser!

Get involved!

The design sprint drive will take place over Slack, a text-based instant messaging service, and participation will take 5 sessions of roughly one hour each from Monday to Friday (Aug 28-Sep 1).

To participate you are required to respect other participants and follow the Mozilla community participation guidelines. If you have any questions you can ask us on Twitter at @MZOpenSprint or email firefoxaccessibility@cs.stanford.edu.

Join us making the web accessible for everyone!

Join Mozilla and Stanford’s open design sprint for an accessible web was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

