
Mozilla Privacy Blog: Mozilla statement on the adoption of the EU Copyright directive

Mozilla planet - Tue, 26/03/2019 - 12:58

Today, EU lawmakers voted to adopt new copyright rules, on which we had been engaged for over three years.

Here’s a statement from Raegan MacDonald, Mozilla’s Head of EU Public Policy, reacting to the outcome:

There is nothing to celebrate today. With a chance to bring copyright rules into the 21st century, the EU institutions have squandered the progress made by innovators and creators to imagine new content and share it with people across the world, and have instead handed power back to large, US-owned record labels, film studios and big tech.

People online everywhere will feel the impact of this disastrous vote and we fully expect copyright to return to the political stage. Until then we will do our best to minimise the negative impact of this law on Europeans’ internet experience and the ability of European companies to compete in the digital marketplace.

The post Mozilla statement on the adoption of the EU Copyright directive appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 279

Mozilla planet - Tue, 26/03/2019 - 05:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is safety-guard, a crate providing a #[safety] attribute that generates both a doc entry and debug assertion. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

169 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Africa · Asia Pacific · Europe · North America · South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

all the ergonomic improvements in rust 2018 are really messing up my book that consists entirely of running face-first into compiler errors so i can explain concepts.

– Alexis Beingessner, author of “Learning Rust With Entirely Too Many Linked Lists”

Thanks to icefoxen for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla Privacy Blog: EU copyright reform: a missed opportunity

Mozilla planet - Mon, 25/03/2019 - 11:22

After almost three years of debate and activism, EU lawmakers are expected to give their final approval to new EU copyright rules this week. Ahead of that vote, it’s timely to take a look back at how we got here, why we think this law is not the answer to EU lawmakers’ legitimate concerns, and what happens next if, as expected, Parliament votes through the new rules.

How did we get here?

We’ve been engaged in the discussions around the EU Copyright directive since the very beginning. During that time, we deployed various tools, campaigns, and policy assessments to highlight to European lawmakers the importance of an ambitious copyright reform that puts the interests of European internet users and creators at the centre of the process. Sadly, despite our best efforts – as well as the efforts of academics, creator and digital rights organisations, internet luminaries, and over five million citizens – our chances of reversing the EU’s march towards a bad legislative outcome diminished dramatically last September, after the draft law passed a crucial procedural milestone in the European Parliament.

Over the last several months, we have worked hard to minimise the damage that these proposals would do to the internet in Europe and to Europeans’ rights. Although the draft law is still deeply flawed, we are grateful to those progressive lawmakers who worked with us to improve the text.

Why this law won’t solve lawmakers’ legitimate concerns

The new rules that MEPs are set to adopt will compel online services to implement blanket upload filters, with an overly complex and limited SME carve-out that will be unworkable in practice. At the same time, lawmakers have forced through a new ancillary copyright for press publishers, a regressive and disproven measure that will undermine access to knowledge and the sharing of information online.

The legal uncertainty and potential variance in implementations across the EU that these complex rules will generate mean that only the largest, most established platforms will be able to fully comply and thrive in such a restricted online environment. Moreover, despite our best efforts, the interests of European internet users have been largely ignored in this debate, and the law’s restrictions on user-generated content and link sharing will hit users hardest. Worse, the controversial new rules will not address the core problems they were designed to tackle, namely the fair remuneration of European creators and the sustainability of the press sector.

A missed opportunity

Like many others, we had originally hoped that the EU Copyright directive would provide an opportunity to bring European copyright law in line with the realities of the 21st century. Sadly, the suggestions we made for real and positive reforms of EU copyright law, such as an ambitious new exemption for user-generated content, were swept aside. We are glad to see that the final text includes a new copyright exemption for text & data mining (TDM), and that lawmakers pushed back on attempts to make Europe’s TDM environment even more legally restrictive.

In that sense, the adoption of this law by the European Parliament this week would be a Pyrrhic victory for its supporters. We do not see how it will bring any of the positive impacts its champions claim; rather, it will simply serve to entrench the power of the incumbents. We expect copyright to return to the political agenda in the years to come, as the real underlying issues facing European creators and press publishers will remain.

Next steps

Many citizens, creators, digital rights groups, and tech companies continue to highlight how problematic the proposed Copyright directive is, and we stand in solidarity with them. We’ll be following the upcoming vote closely, in particular the vote on last-minute amendments that would see the controversial article 13 removed from the final law. Should the European Parliament ultimately decide to wave the new law into being, we’ll be stepping up to ensure its implementation in the 28 EU Member States causes as little harm to the internet and European citizens’ rights as possible.

The post EU copyright reform: a missed opportunity appeared first on Open Policy & Advocacy.

Software-update: Mozilla Thunderbird 60.6.1 - Computer - Downloads - Tweakers

News collected via Google - Mon, 25/03/2019 - 08:00

The Mozilla Foundation has released version 60.6.1 of Thunderbird. Mozilla Thunderbird is an open-source client for e-mail and newsgroups, with ...

Mozilla patches serious Firefox vulnerabilities from Pwn2Own within 24 hours

News collected via Google - Mon, 25/03/2019 - 08:00

Two serious security vulnerabilities in Firefox, through which attackers, in combination with a Windows vulnerability, could gain full control over the underlying system ...

Cameron Kaiser: TenFourFox FPR13 SPR1 available

Mozilla planet - Mon, 25/03/2019 - 02:28
TenFourFox Feature Parity Release 13 Security Parity Release 1 ("FPR13.1") is now available and live (downloads, hashes, release notes). The Pwn2Own vulnerabilities do not work on TenFourFox in their present form (not only because we're PowerPC but also because of our hybrid-endian typed arrays and other differences), but I have determined that TenFourFox-specific variant attacks could be exploitable, so we are patched as well. This should also reduce the risk of crashes from attempts to exploit mainline x86 Firefox.

Meanwhile, H.264 support for TenFourFox FPR14 appears to be sticking. Yes, folks: for the first time you can now play Vimeo and other H.264-only videos from within TenFourFox using sidecar ffmpeg libraries, and it actually works pretty well! Kudos to Olga for the integration code! That said, however, it comes with a couple significant caveats. The first is that while WebM video tends not to occur in large numbers on a given page, H.264 videos nowadays are studded everywhere (Vimeo's front page, Twitter threads, Imgur galleries, etc.) and sometimes try to autoplay simultaneously. In its first iteration this would cause the browser to run out of memory if a large number of higher resolution videos tried to play at once, and sometimes crash when an infallible memory allocation fallibled. Right now there is a lockout in the browser to immediately halt all H.264 decoding if any instance runs out of memory so that the browser can save itself, but this needs a lot more testing to make sure it's solid, and is clearly a suboptimal solution. Remember that we are under unusual memory constraints because of the large amount of stack required for our JIT.
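
The lockout described above amounts to a shared trip-switch across all decoder instances. Here is a minimal Python sketch of the pattern; the real implementation is C++ inside TenFourFox, and every name below is invented for illustration:

```python
# Sketch of an "out-of-memory lockout" shared across decoder instances.
# Hypothetical names; the real TenFourFox implementation is C++.

class DecoderLockout:
    """Shared flag: once any decoder hits an out-of-memory error,
    every decoder instance stops starting new work."""
    tripped = False

class H264Decoder:
    def decode_frame(self, data):
        if DecoderLockout.tripped:
            return None  # lockout active: drop the frame rather than allocate
        try:
            return self._allocate_and_decode(data)
        except MemoryError:
            DecoderLockout.tripped = True  # trip the lockout for ALL instances
            return None

    def _allocate_and_decode(self, data):
        # Stand-in for the allocation-heavy decode step.
        return bytes(data)
```

The point of the class-level flag is that one failing instance immediately halts every other instance too, letting the browser save itself at the cost of dropping video.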

The second caveat with H.264 support is that while the additional AltiVec support in ffmpeg (TenFourFox is compatible with 2.8 and 3.4) makes H.264 decoding faster than WebM, it is not dramatically so, and you should not expect a major jump in video performance. (In fact, quite the opposite on pages like the above.) Because of that, and because I have to build and support ffmpeg library installers now, I am only officially supporting H.264 on G4/7450 and G5 based on the existing 1.25GHz minimum CPU requirement for web video (and you should really have 2GB or more of memory). There will not be an official TenFourFox ffmpeg build for G4/7400 and G3 (or, for that matter, Intel); while you can build it yourself mostly out of the box with Xcode 2.5 and I won't have any block in TenFourFox for user-created libraries, I will provide neither support nor ffmpeg builds for these architectures. Olga's current FFmpeg Enabler does work on 10.4 now and does support 7400 and my future 7450 version will run on a 7400, so early G4 users have a couple options, but either way you would be on your own. Sorry, there are enough complaints about TenFourFox performance already without me making promises of additional functionality I know those systems can't meet.

Back on the good news side, the AppleScript-to-JavaScript bridge is also complete and working. As an example, consider this script, which actually works in the internal test build:

tell application "TenFourFoxG5"
  tell front browser window
    set URL of current tab to ""
    repeat while (current tab is busy)
      delay 1
    end repeat
    tell current tab
      run JavaScript "let f = document.getElementById('tsf');f.q.value='tenfourfox';f.submit();"
    end tell
    repeat while (current tab is busy)
      delay 1
    end repeat
    tell current tab
      run JavaScript "return document.getElementsByTagName('h3')[0].innerText + ' ' + document.getElementsByTagName('cite')[0].innerText"
    end tell
  end tell
end tell

I'll let you ponder what it does until the FPR14 beta comes out, but it should be obvious that this would be great for automating certain tasks in the browser now that you don't have to rely on figuring out how to send the exact UI event anymore: you can just manipulate the DOM of any web page directly from AppleScript. Firefox still can't do that! (Mozilla can port over my code; I'd be flattered.)

The last things to do are a couple security and performance tweaks, and then one more desperate attempt to get Github working. I'm still not sure how feasible the necessary JavaScript hacks will be yet but come hell or high water we're on track for FPR14 beta 1 in early April.

Tech 23 Mar 2019 Firefox now works a lot better on your iPad - Want

News collected via Google - Sat, 23/03/2019 - 08:00

Because your iPad is not simply a larger version of your iPhone, Mozilla has announced a new version of Firefox made specifically for your tablet.

Software-update: Mozilla Firefox 66.0.1 - Computer - Downloads - Tweakers

News collected via Google - Sat, 23/03/2019 - 08:00

Mozilla has released an update for version 66 of its Firefox web browser, the tenth version based on Firefox Quantum. The browser has ...

Mike Conley: Firefox Front-End Performance Update #15

Mozilla planet - Sat, 23/03/2019 - 01:31

Firefox 66 has been released, Firefox 67 is out on the beta channel, and Firefox 68 is cooking for the folks on the Nightly channel! These trains don’t stop!

With that, let’s take a quick peek at what the Firefox Front-end Performance team has been doing these past few weeks…

Volunteer Contributor Highlight: Nikki!

I first wanted to call out some great work from Nikki, who’s a volunteer contributor. Nikki fixed a bug where we’d stall the parent process horribly whenever you hovered a link with a really, really long URL (like a base64-encoded data URL). Stalling the parent process is the worst, because it makes everything else seem slow as a result.

Thank you for your work, Nikki!

Document Splitting Foundations for WebRender (In-Progress by Doug Thayer)

An impressive set of patches were recently queued to land, which should bring document splitting to WebRender, but in a disabled state. The gfx.webrender.split-render-roots pref is what controls it, but I don’t think we can reap the full benefits of document splitting until we get retained display lists enabled in the parent process for the UI. I believe, at that point, we can start enabling document splitting, which means that updating the browser UI area will not involve sending updates to the content area for WebRender.

In other WebRender news, it looks like it should be enabled by default for some of our users on the release channel in Firefox 67, due to be released in mid-May!

Warm-up Service (In-Progress by Doug Thayer)

Doug has written the bits of code that tie a Firefox preference to an HKLM registry key, which can be read by the warm-up service at start-up. The next step is to add a mode to the Firefox executable that loads its core DLLs and then exits, and then have the warm-up service call into that mode if enabled.

Once this is done, we should be in a state where we can user test this feature.

Startup Cache Telemetry (In-Progress by Doug Thayer)

Two things of note here:

  1. With the probes having now been uplifted to Beta, data will slowly trickle in over the next few days that will show us how the Firefox startup cache is behaving in the wild for users that aren’t receiving two updates a day (like our Nightly users). This is important because, oftentimes, those updates cause some or all of the startup cache to be invalidated. We’re eager to see how the startup caches are behaving in the wild on Beta.
  2. One of the tests that was landed for the startup cache Telemetry appears to have caught an issue with how the QuantumBar code works with it – this is useful, because up until now, we’ve had very little automated testing to ensure that the startup cache is working as expected.
Smoother Tab Animations (Paused by Felipe Gomes)

UX, Product and Engineering have been discussing how the new tab animations should work, and one thing has been decided: we want our User Research team to run some studies on how tab animations are perceived before we fully commit to changing one of the fundamental interactions in the browser. So, for now, Felipe is pausing his efforts here until User Research comes back with some guidance.

Browser Adjustment Project (Concluded by Gijs Kruitbosch)

We originally set out to see whether or not we could do something for users running weaker hardware to improve their browsing experience. Our initial hypothesis was that by lowering the frame rate of the browser on weaker hardware, we could improve the overall page load time.

This hypothesis was bolstered by measurements done in late 2018, where it appeared that by manually lowering the frame rate on a weaker reference laptop, we could improve our internal page load benchmarks by a significant degree. This measurement was reproduced by Denis Palmeiro on Vicky Chin’s team, and so Gijs started implementing a runtime detection mechanism to lower the frame rate on machines with 2 or fewer cores where each core’s clock speed was 1.8GHz or slower [1].
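
The detection heuristic can be sketched as follows. This is only an illustration: the thresholds (2 cores, 1.8GHz) come from the post, but the function names and the lowered 30fps rate are assumptions, since the real detection code is C++ inside Firefox:

```python
def is_low_end_hardware(core_count, max_clock_ghz):
    """Heuristic from the post: 2 or fewer cores, each 1.8GHz or slower."""
    return core_count <= 2 and max_clock_ghz <= 1.8

def target_frame_rate(core_count, max_clock_ghz, normal=60, lowered=30):
    # 30fps is an assumed lowered rate; the post does not name the value.
    if is_low_end_hardware(core_count, max_clock_ghz):
        return lowered
    return normal
```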

However, since then, we’ve been unable to reproduce the same positive effect on page load time. Neither has Denis. We suspect that recent work on the RefreshDriver, which changes how often the RefreshDriver runs during the page load window, is effectively getting the same kind of win [2].

We did one final experiment to see whether or not lowering the frame rate would improve battery life, and it appeared to, but not to a very high degree. We might revisit that route were we tasked with trying to improve power usage in Firefox.

So, to reduce code complexity, Gijs landed patches today to remove the low-end hardware switches and frame-rate-lowering code. This experiment and project are now concluded. It’s not a satisfying end with a slam dunk perf win, but you can’t win them all.

Better about:newtab Preloading (Completed by Gijs Kruitbosch)

The patch to preload about:newtab in an idle callback has landed and stuck! This means that we don’t preload about:newtab immediately after opening a new tab (which is good for responsiveness right around the time when you’re likely to want to do something), and also means that we have the possibility of preloading the first new tab in new windows! Great job, Gijs!

Experiments with the Process Priority Manager (In-Progress by Mike Conley)

I had a meeting today with Saptarshi, one of our illustrious Data Scientists, to talk about the upcoming experiment. One of the things he led me to conclude was that this experiment is going to have a lot of confounds, and it will be difficult to draw conclusions from it.

Part of the reason for that is because there are often times when a background tab won’t actually have its content process priority lowered. The potential reasons for this are:

  1. The tab is running in a content process which is also hosting a tab that is running in the foreground of either the same or some other browser window.
  2. The tab is playing audio or video.

Because of this, we can’t actually do things like measure how page load is being impacted by this feature because we don’t have a great sense of how many tabs have their content process priorities lowered. That’s just not a thing we collect with Telemetry. It’s theoretically possible, either due to how many windows or videos or tabs our Beta users have open, that very few of them will ever actually have their content process priorities lowered, and then the data we’d draw from Telemetry would be useless.

I’m working with Saptarshi now to try to find ways of either altering the process priority manager or adding new probes to reduce the number of potential confounds.

Grab bag of other performance improvements
  [1] These criteria for what makes “weak hardware” were mostly plucked from the air, but we had to start somewhere.

  [2] But for all users, not just users on weaker hardware.

Cameron Kaiser: Stand by for urgent security update

Mozilla planet - Fri, 22/03/2019 - 23:33
Pwn2Own came and went and Firefox fell with it. The __proto__ vulnerability seems exploitable in TenFourFox, though it would require a PowerPC-specific attack to be fully weaponized, and I'm currently evaluating the other bug. Builds ("FPR13 SPR1") including fixes for either or both depending on my conclusions will be issued within the next couple days.
William Lachance: New ideas, old buildings

Mozilla planet - Fri, 22/03/2019 - 20:08

Last week, Brendan Colloran announced Iodide, a new take on scientific collaboration and reporting that I’ve been really happy to contribute to over the past year-and-a-bit. I’ve been describing it to people I meet as kind of “Glitch meets Jupyter”, but that doesn’t quite do it justice. I’d recommend reading Brendan’s blog post (and taking a look at our demonstration site) to get the full picture.

One question that I’ve heard asked (including on Brendan’s post) is why we chose a rather conventional and old technology (Django) for the server backend. Certainly, Iodide has not been shy about building with relatively new or experimental technologies for other parts (e.g. Python on WebAssembly for the notebooks, React/Redux for the frontend). Why not complete the cycle by using a new-fangled JavaScript web server like, I don’t know, NestJS? And while we’re at it, what’s with iodide’s ridiculous REST API? Don’t you know GraphQL is the only legitimate way to expose your backend to the world in 2019?

Jane Jacobs, the great urban theorist of the twentieth century, has a quote I love:

“Old ideas can sometimes use new buildings. New ideas must use old buildings.”

Laura Thompson (an engineering director at Mozilla) has restated this wisdom in a software development context as “Build exciting things with boring technologies”.

It so happened that the server was not an area Iodide was focusing on for innovation (at least initially), so it made much, much more sense to use something proven and battle-tested for the server side deployment. I’d used Django for a number of projects at Mozilla before this one (Treeherder/Perfherder and Mission Control) and have been wildly impressed by the project’s excellent documentation, database access layer, and support for building a standardized API via the Django REST Framework add-on. Not to mention the fact that so much of Mozilla’s in-house ops and web development expertise is based around this framework (I could name off probably 5 or 6 internal business systems based around the Django stack, in addition to Treeherder), so deploying Iodide and getting help building it would be something of a known quantity.

Only slightly more than half a year since I began work on the iodide server, we now have both a publicly accessible site for others to experiment with and an internal one for Mozilla’s business needs. It’s hard to say what would have happened had I chosen something more experimental to build Iodide’s server piece, but at the very least there would have been a substantial learning curve involved — in addition to engineering effort to fill in the gaps where the new technology is not yet complete — which would have meant less time to innovate where it really mattered. Django’s database migration system, for example, took years to come to fruition and I’m not aware of anything comparable in the world of JavaScript web frameworks.

As we move ahead, we may find places where applying new backend server technologies makes sense. Heck, maybe we’ll choose to rewrite the whole thing at some point. But to get to launch, choosing a bunch of boring, tested software for this portion of Iodide was (in my view) absolutely the right decision, and I make no apologies for it.

The Firefox Frontier: Get the tablet experience you deserve with Firefox for iPad

Mozilla planet - Fri, 22/03/2019 - 17:05

We know that iPads aren’t just bigger versions of iPhones. You use them differently, you need them for different things. So rather than just make a bigger version of our … Read more

The post Get the tablet experience you deserve with Firefox for iPad appeared first on The Firefox Frontier.

Support.Mozilla.Org: SUMO A/B Experiments

Mozilla planet - Fri, 22/03/2019 - 16:59

This year the SUMO team is focused on learning what to improve on our site. As part of that, we spent January setting up for A/B testing and last week we ran our first test!

The goal of the test was to run a series of experiments on individual Knowledge Base articles to:

  • Improve navigation from KB article to KB article (in-article suggestions)
  • Improve design of KB articles to ensure users better understand content and can engage with content faster

The two tests we are running are trying a bunch of different things, such as screengrabs, video clips, highlights, better feedback options on articles, and better navigation.

Version A: Breadcrumbs

  • Screengrabs
  • Ratings at different parts of the page
  • Highlights
  • On both experiments we have a section of related articles at the bottom.

A breadcrumb menu should make it clearer to users where they are.

Feedback points through up/down icons, with a follow-up question to allow for more detailed feedback.

Highlights in the text help the user see the important areas.

Version B: Hamburger menu – Categories

  • One rating at the end of the page
  • No highlights in text
  • On both experiments we have a section of related articles at the bottom.

A hamburger menu allows users to focus on the content, not the menu.

A drop-down reveals the wider menu.

The test will run for the next 2-3 weeks, and we will report back here and at our weekly SUMO meeting on the results and next steps.

The test is currently being served to 50% of visitors, and you can ‘maybe’ see the tests by going here or here.
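
Serving a test to a fixed percentage of visitors is usually done with deterministic bucketing, so that the same visitor always sees the same variant. The sketch below illustrates that general technique; it is not SUMO's actual implementation, and all names are invented:

```python
import hashlib

def ab_variant(visitor_id, experiment="kb-article-redesign", split=50):
    """Deterministically assign a visitor to variant A or B.
    `split` is the percentage of visitors who get variant A."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return "A" if bucket < split else "B"
```

Because the bucket is derived from a hash of the visitor and experiment IDs, assignment is stable across visits without storing any state.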

SUMO staff team

Mozilla Localization (L10N): L10n report: March edition

Mozilla planet - Thu, 21/03/2019 - 15:47
New content and projects

What’s new or coming up in Firefox desktop

Schedule and important dates

Firefox 66 was released on March 19, which means:

  • Firefox 68 is currently in Nightly.
  • Firefox 67 is in Beta.

The deadline to ship updates for Beta will be on April 30. Also don’t forget that Firefox 68 is going to be the next ESR version: ideally you should be localizing it early in Nightly, in order to have a good amount of time for testing before it reaches the release channel.

Removing unmaintained locales

This is not an action that we take lightly, because it’s demoralizing for the Community and potentially confusing for users, but in some cases we have to remove locales from Firefox builds. As outlined in the document, we try our best to revive the localization effort, and only act when it’s clear that we can’t solve the problem in other ways.

In Firefox 68 we’re going to remove the following locales: Assamese (as), South-African English (en-ZA), Maithili (mai), Malayalam (ml), Odia (or).

We’re also working with the Bengali community to unify two locales – Bengali India (bn-IN) and Bengali Bangladesh (bn-BD) – under a single locale (bn), to optimize the community resources we have.

Firefox Monitor

The add-on for Firefox Monitor is now localized as part of the main Firefox projects. If you want to test it:

  • Open about:config and create a new boolean setting, extensions.fxmonitor.enabled, and set it to true.
  • Navigate to a breached site. You can pick a website from this list, just make sure that the “AddedDate” is within the last 12 months.
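
The "AddedDate within the last 12 months" check can be expressed like this. The helper and the ISO 8601 field format are assumptions for illustration (Firefox Monitor is built on Have I Been Pwned data, where dates use that format):

```python
from datetime import datetime, timedelta

def breached_recently(added_date_str, now=None, window_days=365):
    """True if a breach's 'AddedDate' falls within the last 12 months."""
    now = now or datetime.utcnow()
    added = datetime.fromisoformat(added_date_str.replace("Z", "+00:00"))
    return now - added.replace(tzinfo=None) <= timedelta(days=window_days)
```
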
What’s new or coming up in mobile

Just like for Firefox Desktop, the deadline to ship updates for Fennec Beta will be on April 30. Read the previous section of this report for more details surrounding that.

A notable Android update this month (which we’ve just announced on the dev-l10n mailing list – please consider following it if you don’t already) is that we’ve exposed the new Android-Components strings on Pontoon, as part of the new Android-l10n project, to a small subset of locales.

Fenix browser strings are right around the corner as well, and will be exposed very soon in that same project, so stay tuned. Read up here for more details on all this.

On the Firefox iOS side, we’re still working hard on shipping the upcoming version, which will be v16. The deadline for localization was today (March 21st), and with this new version we are adding one new locale: Vietnamese! Congrats to the team for shipping their first localized version of Firefox iOS!

What’s new or coming up in web projects

AMO and Facebook Container extension

Mozilla is partnering with the European Union to promote its Facebook Container extension ahead of the upcoming EU elections. We have translated the listing for the extension into 24 languages primarily used within the EU, and we could use your help localizing the user interface so people can have a more complete experience when downloading the extension. AMO frontend and server are two huge projects. If your locale has a lot to catch up on, you can focus on the top priority strings the team has identified (note there are two tabs). You can search for them in the AMO Frontend project in Pontoon.

In order to promote the extension in 24 languages, we need to enable AMO server and AMO Frontend in all of them, including Maltese, for which we don’t have a community. We also added a few languages out of product requirements without the communities’ agreement. These languages are on the “read-only” locale list: Croatian, Estonian, Latvian, and Lithuanian for AMO Frontend, and Estonian and Latvian for AMO Server. If any of these communities are interested in localizing at least the high-priority strings, please email the l10n-drivers so we can change the language setting.

There are a few updates coming soon. To prioritize, make sure to focus on the shared files first (main.lang, download_button.lang), then the rest by star priority rating, or by deadline if applicable. You may see a few of the same strings appearing in several files. We usually leverage existing translations into a brand new file, but less so for updated files. In the latter case, please rely on Pontoon’s `Machinery` feature to leverage your previous work.

Common Voice

Many new contributors joined the Mozilla localization communities through this project and are only interested in this project. Though there is an existing community that has been localizing other projects, the new contributors are new to localization, to Pontoon, and to the Mozilla localization process. They need your help with onboarding. Many have contributed to the project and are waiting for constructive feedback. Locale managers, please check the Common Voice project to see if there are strings waiting to be reviewed in your locale, and try to arrange resources to provide feedback in a timely manner. Based on the quality of the new contributors’ work and their interest, you can grant them broader permissions at the project level.

What’s new or coming up in Foundation projects

A very quick update on the misinformation campaign — the scorecard mentioned last month won’t be released, due to external changes. The good news is that a lot of the work done by the team is being reused and multiple campaigns will be launched instead. Details are evolving quickly, so there’s not much to share yet. We will keep you posted!

What’s new or coming up in Pontoon

Translate.Next. Soon we’ll begin testing of the rewritten translation interface of Pontoon. The look & feel will largely remain the same, but the codebase will be completely different, allowing us to fulfill user requests in a more timely manner. Stay tuned for more updates in the usual l10n channels!

Improving the experience for third-party installations. While used internally at Mozilla, Pontoon is a general-purpose TMS with good support for popular localization file formats, ready to localize a variety of open-source projects, apps, or websites. Vishal started improving the experience for third-party deployments by making the Pontoon homepage customizable instead of hardcoded to the Mozilla-specific content. The path to setting up a first project for localization is now also more obvious.


Under the hood changes. Thanks to Jotes and Aniruddha, our Python test coverage has improved. On top of that, Jotes has taken the first steps towards migrating our codebase to Python 3.

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.


Firefox UX: Look over here! Results from a Firefox user research study about interruptions.

Mozilla planet - wo, 20/03/2019 - 21:52

The Attention War. There have been many headlines related to it in the past decade. This is the idea that apps and companies are stealing attention. It’s the idea that technologists throw up ads on websites in a feeble attempt to get the attention of the people who visit the website.

In tech, or any industry really, people often say something to the effect of, “well if the person using this product or service only read the instructions, or clicked on the message, or read our email, they’d understand and wouldn’t have any problems”. We need people’s attention to provide a product experience or service. We’re all in the “attention war”, product designers and users alike.

And what’s a sure-fire way to grab someone’s attention? Interruptions, regardless of whether they’re good, bad, or neutral. Interruptions are not necessarily a “bad” thing; they can also lead to good behavior, actions, or knowledge.

<figcaption>Even on a webpage that has a playlist of talks about “The race for your attention”, there’s a giant banner in an attempt to get your attention to sign up for a recommendation service. The image shows a banner with a red “Get Started” button above the content about “The race for your attention”. Image Source:</figcaption>

Here are a couple questions the Firefox Team had about interruptions:

  1. “How do participants feel about interruptions?”
  2. “Can participants distinguish between the sources of interruptions? (i.e. Can they tell if the interruption is from Firefox, a website, or the operating system?)”

To answer the questions, I ran a user research study about the interruptions people receive while using Firefox. Eight participants were in a week-long study. Each participant used Firefox as their main browser on their laptop. Four participants agreed to record their browsing sessions over the course of the week, and six participants agreed to share their browsing analytics with us. I logged interruptions that came from the operating system, desktop software, Firefox, and websites. All participants were interviewed on the first day and last day. On the last day, I asked each participant to complete five tasks that would trigger interruptions to gauge understanding, behavior, and attitudes towards interruptions.

Before I answer the two questions from above, I’ll describe how I categorized interruptions.

Stopping Power

To analyze the data, I coded each interruption in terms of its stopping power. Mehrotra et al. coded interruptions as “low priority” and “high priority” depending on whether the interruption stopped a person from completing their task [1]. Similarly, I coded each interruption as “low stopping power”, “medium stopping power”, or “high stopping power” (examples in the figures below). I defined stopping power as the degree to which the design and implementation of an interruption force the user to interact with it in order to continue using the system. From the recorded interviews and browsing sessions (excluding the five tasks that triggered interruptions), I logged 83 low stopping power interruptions, 37 medium, and 15 high.

<figcaption>An example of a “low stopping power” interruption: Screenshot that shows badged icons on the browser toolbar.</figcaption><figcaption>An example of a “medium stopping power” interruption: Screenshot from a participant’s screen. Location permission doorhanger.</figcaption><figcaption>An example of a “high stopping power” interruption: Screenshot of a website modal asking for you to sign up.</figcaption>
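The coding-and-tallying step above can be sketched as follows. This is purely illustrative Python, not the study's actual analysis tooling: the log entries are stand-ins, though the counts match the ones reported above.

```python
from collections import Counter

# Illustrative stand-in for the coded interruption log: each logged
# interruption gets exactly one stopping-power code.
coded_log = ["low"] * 83 + ["medium"] * 37 + ["high"] * 15

# Tally the interruptions per stopping-power category.
tally = Counter(coded_log)
print(tally["low"], tally["medium"], tally["high"])  # 83 37 15
```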

Now I’ll move on to answer our two research questions.

1. How did participants feel about interruptions?

Participants care about their safety and saving time

When I asked participants how they felt about the five interruptions they experienced during the post-study interview, a clear theme was safety. One task was for participants to visit a “Bad SSL*” page, which appears when a website has a malformed or outdated security certificate. An error will appear and you’ll see a message like the screenshot below: “Your connection is not secure”.

<figcaption>Screenshot of an error page that appears when the website’s security certificate is out of date.</figcaption>

A couple of participants were frustrated at first with the idea of encountering this page, because they would not be able to get to the website they wanted, but they then expressed an appreciation for it, one saying “now that I’ve read it, I suppose it’s trying to help” and another “I like that it’s protecting me”.

*SSL stands for Secure Sockets Layer, a security protocol in which websites share a certificate to verify their identity. Without a certificate to verify its identity, Firefox can’t be sure that the website is who it says it is.
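The expiry check behind such error pages can be sketched as follows. This is a simplified illustration in Python, not Firefox's actual implementation (real certificate validation also checks the trust chain, hostname, signatures, and more); `cert_is_current` is a hypothetical helper, and the date format is the one commonly printed for X.509 certificates.

```python
import datetime

# Hypothetical helper: is "now" inside the certificate's validity window?
# A real browser also verifies the trust chain, hostname, signatures, etc.
def cert_is_current(not_before: str, not_after: str,
                    now: datetime.datetime) -> bool:
    fmt = "%b %d %H:%M:%S %Y GMT"  # e.g. "Jan  1 00:00:00 2019 GMT"
    start = datetime.datetime.strptime(not_before, fmt)
    end = datetime.datetime.strptime(not_after, fmt)
    return start <= now <= end

now = datetime.datetime(2019, 3, 20)
print(cert_is_current("Jan  1 00:00:00 2019 GMT",
                      "Jan  1 00:00:00 2020 GMT", now))  # True: still valid
print(cert_is_current("Jan  1 00:00:00 2017 GMT",
                      "Jan  1 00:00:00 2018 GMT", now))  # False: expired
```

A certificate that fails this check produces the “Your connection is not secure” page instead of the requested site.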

Some participants were annoyed by interruptions; others couldn’t care less.

<figcaption>A visual scale of: “How annoyed were participants about interruptions they received while using Firefox?” Each emoji is one participant. The subscript is the browser they use the most outside of the research study. Two participants were on the left-hand of the scale (very annoyed), three participants were on the right-hand of the scale (not annoyed at all), and three participants were somewhere around the middle of the scale. I did not ask participants to measure their own annoyance, but rather came up with this scale during analysis of interview responses.</figcaption>

Participants reacted to interruptions differently, as shown by the scale above. This study was not able to determine the factors that impacted their level of annoyance — we would need to gather more data to determine that. However, it’s important to note that not everyone will react the same way to interruptions. Some will, like one of the participants, comment that it “feels like a gun shooting range” (where interruptions appear all over the place), and others will barely notice interruptions at all.

2. “Can participants distinguish between the sources of interruptions? Can they tell if the interruption is from Firefox, a website, or the operating system?”

Most participants could not tell you the source of the interruption.

Only two of the eight participants could differentiate between all sources. They confidently knew if an interruption was from the operating system, website, or browser.

Four of the eight participants could differentiate between an operating system notification and a browser/website notification, but NOT between a browser and a website notification. For example, a participant could tell that the operating system was asking to perform an update, but could not tell whether the website or the browser was asking to save their credit card information.

One of the eight participants could not tell the difference between any of them. They thought that a notification from desktop email software was coming from Firefox, and wondered why other browsers did not do that.

Why is this important?

Do people really need to know the source of an interruption, if it does not hinder them from completing their task? Yes. An understanding of the source of the interruption is important for safety (i.e. people know where their data is and who it’s being shared with), and for mitigating potential annoyance with Firefox for things that Firefox is not responsible for.

Are you a designer or product manager?

Every time your product or service is considering interrupting someone, ask:

  • Is it worth stopping someone for?
  • Are you helping the person using your product or service to be more safe, or save time?

As I saw from the hours of browser recordings, we live in a vast sea of interruptions. Let’s be careful how we add to the ever-growing pile.

Acknowledgements (alphabetical by first name)

Thank you to my colleagues at Mozilla for helping with this study! Thanks to Aaron Benson, Alice Rhee, Amy Lee, Amy Tsay, Betsy Mikel, Brian Jones, Bryan Bell, Chris More, Chuck Harmston, Cindy Hsiang, Emanuela Damiani, Frank Bertsch, Gemma Petrie, Grace Xu, Heather McGaw, Javaun Moradi, Kamyar Ardekani, Kev Needham, Maria Popova, Meridel Walkington, Michelle Heubusch, Peter Dolanjski, Philip Walmsley, Romain Testard, Sharon Bautista, Stephen Horlander, Tim Spurway


[1] Mehrotra, A., Pejovic, V., Vermeulen, J., Hendley, R., & Musolesi, M. (2016). My Phone and Me (pp. 1021–1032). Presented at the 2016 CHI Conference, New York, New York, USA: ACM Press.

Look over here! Results from a Firefox user research study about interruptions. was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.


Botond Ballo: Trip Report: C++ Standards Meeting in Kona, February 2019

Mozilla planet - wo, 20/03/2019 - 15:00
Summary / TL;DR (new developments since last meeting in bold)

| Project | What’s in it? | Status |
| --- | --- | --- |
| C++20 | See below | On track |
| Library Fundamentals TS v3 | See below | Under active development |
| Concepts TS | Constrained templates | Merged into C++20, including abbreviated function templates! |
| Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Published! |
| Executors | Abstraction for where/how code runs in a concurrent context | Not headed for C++20 |
| Concurrency TS v2 | See below | Under active development |
| Networking TS | Sockets library based on Boost.ASIO | Published! Not headed for C++20. |
| Ranges TS | Range-based algorithms and views | Merged into C++20! |
| Coroutines TS | Resumable functions, based on Microsoft’s await design | Merged into C++20! |
| Modules v1 | A component system to supersede the textual header file inclusion model | Published as a TS |
| Modules v2 | Improvements to Modules v1, including a better transition path | Merged into C++20! |
| Numerics TS | Various numerical facilities | Under active development |
| Reflection TS | Static code reflection mechanisms | Approved for publication! |
| C++ Ecosystem TR | Guidance for build systems and other tools for dealing with Modules | Early development |
| Pattern matching | A match-like facility for C++ | Under active development, targeting C++23 |

Introduction

A few weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Kona, Hawaii. This was the first committee meeting in 2019; you can find my reports on 2018’s meetings here (November 2018, San Diego), here (June 2018, Rapperswil), and here (March 2018, Jacksonville). These reports, particularly the San Diego one, provide useful context for this post.

This week marked the feature-complete deadline of C++20, so there was a heavy focus on figuring out whether certain large features that hadn’t yet merged into the working draft would make it in. Modules and Coroutines made it; Executors and Networking did not.

Attendance at this meeting wasn’t quite at last meeting’s record-breaking level, but it was still quite substantial. We continued the experiment started at the last meeting of running Evolution Incubator (“EWGI”) and Library Evolution Incubator (“LEWGI”) subgroups to pre-filter / provide high-level directional guidance for proposals targeting the Evolution and Library Evolution groups (EWG and LEWG), respectively.

Another notable procedural development is that the committee started tracking the proposals in front of it on GitHub. If you’re interested in the status of a proposal, you can find its issue on GitHub by searching for its title or paper number, and see its status — such as which subgroups have reviewed it and what the outcomes of those reviews were — there.


Here are the new changes voted into the C++20 Working Draft at this meeting. For a list of changes voted in at previous meetings, see my San Diego report. (As a quick refresher, major features voted in at previous meetings include default comparisons (<=>), concepts, contracts, and ranges.)

Technical Specifications

In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS), which can be thought of as experimental “feature branches” where provisional specifications for new language or library features are published, and the C++ community is invited to try them out and provide feedback before final standardization.

At this meeting, the committee iterated on a number of TSes under development.

Reflection TS

The Reflection TS was sent out for its PDTS ballot two meetings ago. As described in previous reports, this is a process where a draft specification is circulated to national standards bodies, who have an opportunity to provide feedback on it. The committee can then make revisions based on the feedback, prior to final publication.

The ballot results (often referred to as “NB comments”, as they are comments from national bodies in response to the ballot) were published between the last meeting and this one, and the TS authors prepared proposed resolutions, which various subgroups reviewed this week. I am pleased to report that the committee addressed all the comments this week, and subsequently voted to publish the TS as amended by the comment resolutions. The final draft is not prepared yet, but I expect it will be in the committee’s next mailing, and will then be transmitted to ISO for official publication.

(I mentioned previously that a procedural snafu necessitated rebasing the TS onto {C++17 + Concepts TS} as it could not reference the not-yet-published C++20 working draft which contains Concepts in their current form. I was slightly mistaken: as the Concepts TS, which was published in 2015, is based on C++14, the Reflection TS actually had to be rebased onto {C++14 + Concepts TS}. Geneva: 1, common sense: 0.)

I wish I could tell you that there is an implementation of the Reflection TS available for experimentation and encourage you to try it out. Unfortunately, to my knowledge there is no such implementation, nor is one imminent. (There is a WIP implementation in a clang branch, but I didn’t get the impression that it’s actively being worked on. I would be delighted to be mistaken on that point.) This state of affairs has led me to reflect (pun intended) on the TS process a bit.

Library Fundamentals TS v3

This third iteration (v3) of the Library Fundamentals TS is under active development, and gained its first new feature at this meeting, a generic scope guard and RAII wrapper. (The remaining contents of the TS working draft are features from v2 which haven’t been merged into the C++ IS yet.)

Merging Technical Specifications into C++20

This meeting was the deadline for merging published TSes into C++20, so naturally a large amount of attention was focused on the outstanding ones such as Modules and Coroutines.

Modules TS

As mentioned in my previous report, Modules gained design approval at the end of the last meeting, in San Diego. This was the culmination of a multi-year effort to reconcile and merge two different approaches to Modules — the design from the Modules TS, which has its roots in Microsoft’s early implementation work, and the Atom proposal which was inspired by Clang Modules — into a unified and cohesive language feature.

I found it interesting to see how the conversation around Modules shifted as the two approaches achieved convergence. For much of the past few years, the discussions and controversies focused on the differences between the two proposals, such as macro support, incremental transition mechanisms, and module organization (preambles and partitions and such).

Now that the compiler implementers have achieved consensus on the language feature, the focus has shifted to parts of the C++ ecosystem outside of the compilers themselves that are affected by Modules — notably, build systems and other tools. The tooling community has a variety of outstanding concerns about Modules, and these concerns dominated the conversation around Modules at this meeting. I talk about this in more detail in the SG15 (Tooling) section below, but my point here is that consensus among compiler implementers does not necessarily imply consensus among the entire C++ community.

All the same, the Core Working Group proceeded with wording review of Modules at full speed, and it was completed in time to hold a plenary vote to merge the feature into C++20. As mentioned, this vote passed, in spite of concerns from the tooling community. That is to say, Modules are now officially in the C++20 working draft!

It’s important to note that this does not mean the committee doesn’t care about the tooling-related concerns, just that it has confidence that the concerns can be addressed between now and the publication of C++20 (or, in the case of issues whose resolution does not require a breaking change to the core language feature, post-C++20).

Coroutines TS

The proponents of the Coroutines TS have been trying to merge it into C++20 for quite some time. Each of the three previous meetings saw an attempt to merge it, with the latter two making it to a plenary vote, only to fail there. The reason it had failed to achieve consensus so far was that there were some concerns about its design, and a couple of alternative proposals that attempted to address those concerns (namely, Core Coroutines, which had been under development for a few meetings now, and a new one at this meeting, Symmetric Coroutines).

We are sufficiently late in the cycle that the alternative proposals had no chance of getting into C++20, so the decision the Committee needed to make was whether the improvements these alternatives purport to bring are worth delaying the feature until C++23 or later. Thus far, the Committee had been delaying this decision, in the hopes that further development on the alternative proposals would lead to a more informed choice. With this meeting being the deadline for merging a TS into C++20, the opportunities for delay were over, and the decision needed to be made this week.

Knowing that we’re down to the wire, the EWG chair instructed the authors of the various proposals to collaborate on papers exploring the design space, putting the respective proposals into context, and comparing their approaches in detail.

The authors delivered on this request, with commendably thorough analysis papers. I talk about the technical issues a bit below, but the high level takeaways were as follows:

  • Both alternative proposals share an implementation challenge inherent to their attempt to expose the state of a coroutine as a first-class object, which would have significant language impact. While compiler implementers agreed the proposals are implementable, they estimated the magnitude of the language impact to be sufficiently great that the ability to work out the specification issues and deliver an implementation in the C++23 timeframe was uncertain (that is, going with the alternatives would risk Coroutines being delayed until C++26).
  • The relative novelty of the alternative proposals, as compared to the Coroutines TS (which has multiple implementations and deployment experience), meant there was much less certainty about their eventual success, as there may be issues with them yet to be discovered. (As an example, the implementation challenge mentioned above only really came to be understood at this meeting.)
  • At least some of the advantages the alternative proposals would bring to the Coroutines TS could be accomplished via incremental, non-breaking changes post-C++20 (though this would also come with costs, such as greater complexity).

Importantly, all of the authors were more or less in agreement on these points; their differences remained only in the conclusions they drew from them.

This allowed the Committee to make what I believe was a well-informed final decision, which was that merging the Coroutines TS into C++20 gained consensus both in EWG and subsequently in plenary. Notably, it wasn’t “just barely consensus,” either — the ratio of the final vote in plenary was on the order of 10 in favour to 1 against.

Networking TS

The Networking TS did not make C++20, in part due to concerns about its design based on usage experience, and in part because it depends on Executors which also didn’t make it (not even a subset, as was hoped at the last meeting).

Musings on the TS Process

Disclaimer: This section reflects my personal opinions on potentially controversial topics. Caveat lector / feel free to skip / etc.

Recall that using Technical Specifications as a vehicle to allow large proposals to mature before final standardization is a procedural experiment that the Committee embarked on after C++11, and which is still ongoing. I’ve mentioned that opinions on how successful this experiment has been vary widely within the Committee.

I’ve previously characterized Concepts and Modules as examples of success stories for the TS process, as both features improved significantly between their TS and IS forms.

However, one realization has been on my mind of late: we don’t seem to have a great track record for motivating compiler vendors to implement language features in their TS form. Let’s survey a few examples:

  • Concepts was only implemented in its TS form by GCC. As far as I’m aware, Clang implementation efforts have specifically targeted their C++20 form only.
  • To my knowledge, the Modules TS does not have a complete implementation either; it was partially implemented in MSVC and Clang, but both efforts have since moved on to target the newer, C++20-track formulation.
  • As mentioned above, the Reflection TS does not have a complete implementation, nor is one being actively worked on. Implementation efforts again seem to be focused on the newer, constexpr-based reflection facilities that are targeting C++23.

(If I’m mistaken on any of these points, I apologize in advance; please do point it out in a comment, and I will amend the above list accordingly.)

The Coroutines TS, which has multiple shipping implementations, is a notable exception to the above pattern. Library TS’es such as Networking, Filesystem, Library Fundamentals, and Parallelism also have a good track record of implementation. The fact remains, though, that the majority of our core language TS’es have not managed to inspire complete implementations.

This somewhat calls into question the value of language TS’s as vehicles for gathering use experience: you can’t collect use experience if users don’t have an implementation to use. (By contrast, implementation experience can be gathered from partial implementation efforts, and certainly has been for both Concepts and Modules.)

It also calls into question claims along the lines of “choosing to standardize [Feature X] as a TS first doesn’t mean you [users] have to wait longer to get it; you can just use the TS!” — a claim that I admit to have made myself, multiple times, on this blog.

What are the takeaways from this? Are language TS’es still a good idea? I’m still trying to work that out myself, but I will suggest a couple of takeaways for now:

  • Implementations can move faster than standards. Language TS’es are often snapshots of a rapidly evolving design. By the time a TS is published, its design is often known to have important flaws, and often it’s already been iterated on. Compilers don’t have much of a motivation to polish an implementation of a known-to-be-broken thing, nor users a motivation to use it.
  • Large features take a long time to get right. To take Modules as an example: while the Modules TS didn’t end up being something people can really use in practice, it seems to me that pushing Modules into the C++17 IS would have been a mistake as well; given the extent of the feature’s evolution between then and now, locking the design as it stood in ~2016 (the C++17 feature-complete date) into the IS would have resulted in a significantly less baked feature. That suggests to me that perhaps the choice we gave ourselves back then (“Modules: TS, or C++17?”) was a false choice. Perhaps a better choice would have been to continue iterating on Modules until they were ready, even if that meant not publishing any spec-like document about Modules in the 2017 timeframe. (Update: see below for a counter-argument.)

Perhaps the actionable suggestion here is to downplay the role of a TS as a way to get a core language feature in front of users early. They do play other roles as well, such as providing a stabilized draft of a feature’s specification to write proposed changes against, and arguably they remain quite useful in that role.

Update: since publishing this, I’ve received private feedback that included suggestions of other advantages of core language TS’es, which I’ve found compelling, and wanted to share:

  • They allow Core wording review of a feature to proceed even while there are outstanding design questions (which can be deferred to post-TS consideration), which can in turn result in important issues being discovered and resolved sooner.
  • They prod implementers by putting them on notice that the feature may be standardized in this form in the absence of feedback. While this may not lead to complete implementations of the TS, it often does lead to partial implementation efforts that generate very valuable feedback.

Continuing with the example of Modules, both of the above considerations were in play, and contributed to the high quality of the feature now headed for C++20.

Evolution Working Group

I spent most of the week in EWG, as usual, although I did elope to some Study Group meetings, and to EWGI for a day.

Here I will list the papers that EWG reviewed, categorized by topic, and also indicate whether each proposal was approved, had further work on it encouraged, or rejected. Approved proposals are targeting C++20 unless otherwise mentioned; “further work” proposals are not, as this meeting was the deadline for EWG approval for C++20.


Contracts — which were added to the C++20 working draft at the last meeting — have been the subject of very extensive mailing list discussions, and what I understand to be fairly heated in-person debates in EWG. I wasn’t in the room for them (I was in EWGI that day), but my understanding is that issues related to (1) undefined behaviour caused by compilers assuming the truth of contract predicates, and (2) incremental rollout of contracts in a codebase where they may not initially be obeyed at the time of introduction, have led a plurality of stakeholders to believe that the Contracts feature as currently specified is broken.

To remedy this, three different solutions were proposed. The first two — “Avoiding undefined behaviour in contracts” and “Contracts that work” — attempted to fix the feature in the C++20 timeframe, with different approaches.

The third proposal was to just remove Contracts from C++20.

However, none of these proposals gained EWG consensus, so for now the status quo — a feature believed to be broken in the working draft — remains.

I expect that updated proposals to resolve this impasse will be forthcoming at the next meeting, though I cannot predict their direction.

EWG did manage to agree on one thing: to rename the context-sensitive keywords that introduce pre- and post-conditions from expects and ensures (respectively) to pre and post. Another proposed tweak, to allow contract predicates on non-first declarations, failed to gain consensus.


EWG reviewed a handful of Modules-related proposals:

  • (Approved) Constrained internal linkage for modules. This is a design fix that supersedes two other proposals related to linkage, “Modules: ADL & internal linkage” and “Module partitions are not a panacea”, by making code that would run into the underlying issues ill-formed. Notably, the proposal requires a diagnostic when the rules it introduces are violated; I’m very glad of this, as I find the “ill-formed, no diagnostic required” semantics that pervade the language (particularly those where there is actual implementation divergence on whether or not an error diagnostic is issued) a significant pitfall.
  • (Sent to SG15) Implicit module partition lookup. This aims to address part of the tooling-related concerns around Modules by introducing a standard mechanism by which module names are resolved to file names. EWG felt that it would be more appropriate for this proposal to target the C++ Ecosystem TR rather than the IS, and accordingly forwarded the paper to SG 15 (Tooling).
  • (Further work) Module resource dependency propagation. This is a proposal to allow annotating modular source files with the filenames of resource files (think e.g. an image that the code in the source file needs to use, that is shipped with an application) that they depend on, with the annotation appearing on the module declaration; the idea is that these annotations could inform a build system which could extract them and make them part of its dependency graph. EWG was sympathetic to the objectives but recognized that a proposal like this has specification challenges, as the current standard says very little about aspects of the host environment in which translation (building the program) takes place.

A couple of informational papers were also looked at:

  • Are Modules fast? attempts to characterize the performance impact of Modules via a microbenchmark. The gist of the results is that Modules tend to increase throughput while potentially also increasing latency, depending on the shape of your program’s dependency graph, an observation which is also corroborated by real-world deployment experience.
  • Make me a module describes an experimental implementation of build system support for Modules in GNU make.

Coroutines was probably the most talked-about subject at this meeting. I summarized the procedural developments that led to the Coroutines TS ultimately being merged into C++20, above.

Preceding that consequential plenary vote was an entire day of design discussion in EWG. The papers that informed the high-level directional discussion included:

  • The alternative proposals: Core Coroutines and Symmetric Coroutines.
  • Two papers outlining how something that accomplishes many of the goals of the alternative proposals can be built on top of the Coroutines TS in a backwards compatible fashion (the first of these is the “unified” proposal by Facebook that I mentioned in my last report).
  • Two analysis papers comparing the various approaches in detail, one focused on use cases and the second on language and implementation impact.
  • An experience report about implementing a coroutine TS frontend to an existing tasking library.

I’d say the paper that had the biggest impact on the outcome was the analysis paper about the language and implementation impact. This is the paper that discussed, in detail, what I described above as an “implementation challenge” shared by Core Coroutines and Symmetric Coroutines. The issue here is that both of these proposals aim to expose the coroutine frame — the data structure that stores the coroutine state, including local variables that persist across suspension points — as a first-class object in C++. The reason this is challenging is that first-class C++ objects have certain properties, such as their sizeof being known at constant expression evaluation time, which happens in the compiler front-end; however, the size of a coroutine frame is not known with any reasonable amount of accuracy until after optimizations and other tasks more typically done by the middle- or back-end stages of a compiler. Implementer consensus was that introducing this kind of dependency of the front-end on optimization passes is prohibitive in terms of implementation cost. The paper explores alternatives that involve language changes, such as introducing the notion of “late-sized types” whose size is not available during constant expression evaluation; some of these were deemed to be implementable, but the required language changes would have been extensive and still required a multi-year implementation effort. (The problem space here also has considerable overlap with variable-length arrays, which the committee has not been able to agree on to date.)

This, I believe, was the key conclusion that convinced EWG members that if we go with the alternatives, we’re not likely to have Coroutines until the C++26 timeframe, and in light of that choice, to choose having the Coroutines TS now.

EWG also looked at a couple of specific proposed changes to the Coroutines TS, both of which were rejected:

  • (Rejected) The trouble with coroutine_traits. This would have enhanced the ability of a programmer to customize the behaviour of a third-party coroutine type. I think the main reason for rejection was that the proposal involved new syntax, but the specific syntax had not been decided on, and there wasn’t time to hash it out in the C++20 timeframe. The proposal may come back as an enhancement in C++23.
  • (Rejected) Coroutines TS simplifications. There weren’t strong objections to this, but ultimately proceeding with the TS unmodified had the greater consensus as it has implementation experience.

The march to make ever more things possible in constexpr continued this week:

  • (Approved) Permitting trivial default initialization in constexpr contexts. “Trivial default initialization” refers to things like int x; at local scope, which leaves x uninitialized. This is currently ill-formed in a constexpr context; this proposal relaxes it so that it’s only ill-formed if you actually try to read the uninitialized value. The interesting use cases here involve arrays, such as the one used to implement a small-vector optimization.
  • (Approved) Adding the constinit keyword. This is a new keyword that can be used on a variable declaration to indicate that the initial value must be computed at compile time, without making the variable const (so that the value can be modified at runtime).
  • (Further work) constexpr structured bindings. This is targeting C++23; EWG didn’t request any specific design changes, but did request implementation experience.
  • (Rejected) An update on “More constexpr containers”. This proposal was previously approved by EWG, and had two parts: first, allowing dynamic allocation during constant evaluation; and second, allowing the results of the dynamic allocation to survive to runtime, at which time they are considered static storage. Recent work on this proposal unearthed an issue with the second part, related to what is mutable and what is constant during the constant evaluation. The authors proposed a solution, but EWG found the solution problematic for various reasons. After lengthy discussion, people agreed that a better solution is desired, but we don’t have time to find one for C++20, and the “promotion to static storage” ability can’t go forward without a solution, so this part of the proposal was yanked and will be looked at again for C++23. (The first part, dynamic allocations without promotion to static storage, remains on track for C++20.)
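To make the array use case concrete, here is a minimal constexpr small-vector sketch of my own (not code from any of the papers). In C++17 the backing array must be given an initializer (the `= {}` below); under the approved change, a plain `T storage[N];` would also be allowed in a constexpr context, as long as no element is read before it is written:

```cpp
#include <cstddef>

// Illustrative sketch: a tiny constexpr "small vector".
// The `= {}` on the backing array is required pre-C++20; the approved
// proposal makes it unnecessary, as long as no element is read
// before being written.
template <typename T, std::size_t N>
struct SmallVec {
    T storage[N] = {};      // C++20: `T storage[N];` becomes legal here
    std::size_t count = 0;
    constexpr void push(T v) { storage[count++] = v; }
    constexpr T operator[](std::size_t i) const { return storage[i]; }
};

constexpr int sum_first_three() {
    SmallVec<int, 8> v;
    v.push(1); v.push(2); v.push(3);
    return v[0] + v[1] + v[2];
}

static_assert(sum_first_three() == 6, "evaluated at compile time");
```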
Comparisons

Pattern matching
  • (Further work) Pattern matching. This is one of the most exciting proposals to look forward to in C++23; it will bring a pattern matching facility comparable to that in Rust and other modern languages, to C++. EWG spent most of an afternoon on it and gave the authors a lot of guidance, including on syntax choices, parseability, readability, and the proposed customization point design.
  • (Rejected) Disallow _ usage in C++20 for pattern matching in C++23. By the same authors as the pattern matching proposal, this paper tried to land-grab the _ identifier in C++20 for future use as a wildcard pattern in C++23 pattern matching. EWG wasn’t on board, due to concerns over existing uses of _ in various libraries, and the availability of other potential symbols. This does mean that the wildcard pattern in C++23 pattern matching will (very likely) have to be spelt some other way than _.
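For a sense of what pattern matching would improve on, here is the kind of C++17 code it aims to make more direct (my own sketch, using the well-known `overloaded` visitor helper; the proposed `inspect` syntax itself is still in flux, so it is not shown):

```cpp
#include <string>
#include <variant>

// Today, dispatching on the alternatives of a sum type takes
// std::visit plus an overload set; pattern matching aims to express
// this (and much more, like nested destructuring) directly.
template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

std::string describe(const std::variant<int, std::string>& v) {
    return std::visit(overloaded{
        [](int)                { return std::string("int"); },
        [](const std::string&) { return std::string("string"); },
    }, v);
}
```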
Other new features
  • (Approved) Expansion statements. This is a form of compile-time for loop that can iterate over tuple-like objects, constexpr ranges, and parameter packs. The feature has been revised to address previous EWG feedback and use a single syntax, for...
  • (Approved) using enum. This allows bringing all the enumerators of an enumeration, or just a specific enumerator, into scope such that they can be referenced without typing the enumeration name or enclosing type name. Approved with the modification that it acts like a series of using-declarations.
Bug / Consistency Fixes

(Disclaimer: don’t read too much into the categorization here. One person’s bug fix is another’s feature.)

  • (Approved) char8_t backwards compatibility remediation. This contains a couple of minor, library-based mitigations for the backwards compatibility breakage caused by u8 literals changing from type char to char8_t. Additional library and language-based mitigations were mentioned but not proposed.
  • (Approved) Reference capture of structured bindings. Value capture was already approved at the previous meeting.
  • (Approved) Implicit creation of objects for low-level object manipulation. This is largely standard wording changes to make certain widely-used code patterns, such as using malloc() to allocate a POD object, defined. It also introduces a new “barrier operation” std::bless, a kind of counterpart to std::launder, which facilitates writing custom operations that implicitly create objects the way malloc() does. One point that came up during discussion is that this proposal makes things like implementing a small vector optimization in constexpr possible (recall that things which trigger undefined behaviour at runtime are ill-formed during constant evaluation).
  • (Approved) Deprecating volatile. Despite the provocative title, which is par for the course from our esteemed JF Bastien, this only deprecates uses of volatile which have little to no practical use, such as volatile-qualified member functions.
  • (Approved) Layout-compatibility and pointer-interconvertibility traits. This allows checking at compile time whether certain operations like converting between two unrelated pointer types would be safe.
  • (Approved) [[nodiscard("should have a reason")]]. This extends the ability to annotate an attribute with a reason string, which [[deprecated]] already has, to [[nodiscard]].
  • (Approved) More implicit moves. This extends the compiler’s ability to implicitly move rather than copy an object in some situations where it knows the original is about to go out of scope anyways. (The suggested future extension regarding assignment operators was not encouraged.)
  • (Further work) Ultimate copy elision. This is an ambitious proposal to give compilers license to elide copies in cases not covered by the as-if rule (i.e. cases where the compiler can’t prove the elision isn’t observable; this is typically the case when the copy constructor being invoked isn’t entirely inline). The benefits are clear, but there are concerns that the proposed changes are not sound; more analysis and implementation experience is needed.
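As a baseline illustration of the existing implicit-move rule that “More implicit moves” extends (a sketch of mine, not from the paper): returning a move-only local by value already moves it without an explicit std::move, and the paper applies the same treatment to more situations, such as returning an rvalue reference parameter.

```cpp
#include <memory>

// Returning a named move-only local by value implicitly moves it;
// no std::move needed. unique_ptr has no copy constructor, so this
// would not compile if a copy were attempted.
std::unique_ptr<int> make_boxed(int v) {
    auto p = std::make_unique<int>(v);
    return p;  // implicitly moved
}
```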
Proposals Not Discussed

Notable among proposals that didn’t come up this week is Herb’s static exceptions proposal. As this is a C++23-track proposal, it was deliberately kept out of EWG so far to avoid distracting from C++20, but it is expected to come up at the next meeting in Cologne.

Evolution Working Group Incubator

The EWG Incubator group (EWGI), meeting for the second time since its inception at the last meeting, continued to do a preliminary round of review on EWG-bound proposals.

I wasn’t there for most of the week, but here are the papers the group forwarded to EWG:

Numerous other proposals were asked to return to EWGI with revisions. I’ll call out a few particularly interesting ones:

  • Overload sets as function parameters. Being able to pass around overload sets has been proposed and shot down numerous times before. The novelty in this approach is that it’s opt-in at the callee side, not the caller side.
  • Parametric expressions. This is an ambitious proposal that aims to bring a sort of a hygienic macro system to C++.
  • Object relocations in terms of move plus destroy. This aims to solve some common performance issues in the implementation of container types, where for some types of objects, relocating them to newly allocated storage can safely be done via a memcpy rather than invoking a move constructor and destructor, but the infrastructure for identifying such types is not present.
  • Language variants. This would add a core-language sum type, similar to Rust’s enums, to C++, as an alternative to the library-based std::variant.
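The idea behind the relocation proposal can be sketched as follows (illustrative only; the proposal’s actual interface for identifying relocatable types is still under discussion). For trivially copyable types, “relocate” (move-construct into new storage, then destroy the source) collapses into a single memcpy, which is what containers want to exploit when growing their backing storage:

```cpp
#include <cstring>
#include <type_traits>

struct Point { int x, y; };
static_assert(std::is_trivially_copyable<Point>::value,
              "safe to relocate via memcpy");

// Relocate n Points from src into fresh storage at dst.
// Because Point is trivially copyable with a trivial destructor,
// one memcpy replaces n move-constructions plus n destructor calls.
void relocate(Point* dst, const Point* src, std::size_t n) {
    std::memcpy(dst, src, n * sizeof(Point));
}
```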
Other Working Groups

Library Groups

Having sat in the Evolution groups, I haven’t been able to follow the Library groups in any amount of detail, but I’ll call out some of the more notable library proposals that have gained design approval at this meeting:

Study Groups

SG 1 (Concurrency)

C++20-track work reviewed this week included revisions to joining thread, and deprecating volatile.

v1 of the Concurrency TS will be withdrawn; v2 continues to be under active development, with asymmetric fences approved for it this week.

Executors continue to be a hot topic. SG 1 forwarded two papers related to them onward to the Library Evolution Working Group, while three others remain under review. An earlier plan to ship a subset of executors in C++20 had to be scrapped, because LEWG requested that the “property” mechanism it relies on be generalized, but it was too late in the cycle to progress that for C++20. As a result, Executors are now targeting C++23.

Other proposals under active review in SG 1 concern fibers, concurrent associative data structures, memory model issues, volatile_load and volatile_store, customization points for atomic_ref, and thread-local storage.

SG 7 (Compile-Time Programming)

The main topic in SG 7 continues to be deciding on the high-level direction for constexpr-based reflection in the (hopefully) C++23 timeframe. The two proposals on the table are scalable reflection in C++ and constexpr reflexpr; their main point of divergence is whether they use a single type (meta::info) to represent all compile-time reflection metadata objects (also known as reflections), or whether there should be a family / hierarchy (not necessarily inheritance-based) of such types (meta::variable, meta::function, etc.).

SG 7 expressed a preference for the “family of types” approach at the last meeting, however the point continues to be debated as the proposal authors gather more experience. The “single type” approach has been motivated by implementation experience in the EDG and Clang compilers, which has suggested this can achieve better compile-time performance. The “family of types” approach is motivated more by API design considerations, as expressed in the position paper constexpr C++ is not constexpr C.

While a consensus on this point is yet to emerge, a possible (and potentially promising) direction might be to build the “family of types” approach as a layer on top of the “single type” approach, which would be the one implemented using compiler primitives.

SG 7 also reviewed a proposal for a modern version of offsetof, which was forwarded to LEWG.

SG 15 (Tooling)

The Tooling Study Group (SG 15) met for an evening session, primarily to discuss tooling-related concerns around Modules.

As mentioned above, now that Modules has achieved consensus among compiler implementers, tooling concerns (nicely summarized in this paper) are the remaining significant point of contention.

The concerns fall into two main areas: (1) how build systems interact with Modules, and (2) how non-compiler tools that consume source code (such as static analyzers) can continue to do so in a Modular world. The heart of the issue is that components of the C++ ecosystem that previously needed to rely only on behaviour specified in the C++ standard, and some well-established conventions (e.g. that compilers find included headers using a search path that can be disclosed to tools as well), now in a Modular world need to rely on behaviours that are out of scope of the C++ standard and for which established conventions are yet to emerge (such as how module names are mapped to module interface files, or how translation of imported modules is invoked / performed).
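To make the module-name-to-file-name question concrete, a build system might adopt a trivial convention like the following (entirely hypothetical; neither this mapping nor the `.cppm` extension is standardized, and fostering such conventions is exactly the sort of thing the Ecosystem TR is for):

```cpp
#include <string>

// Hypothetical convention: resolve a module name to an interface
// file path by turning '.' separators into directory separators
// and appending an extension.
std::string module_to_path(std::string name) {
    for (char& c : name)
        if (c == '.') c = '/';
    return name + ".cppm";
}
```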

To address these concerns, SG 15 has announced what I view as probably the most exciting development since the group’s inception: that it will aim to produce and publish a C++ Ecosystem Technical Report containing guidance regarding the above-mentioned areas.

A Technical Report (TR) is a type of published ISO document which is not a specification per se, but contains guidance or discussion pertaining to topics covered by other specifications. The committee has previously published a TR in 2006, on C++ performance.

A TR seems like an appropriate vehicle for addressing the tooling-related concerns around Modules. While the committee can’t mandate e.g. how module names should map to file names, by providing guidance about it in the C++ Ecosystem TR, hopefully we can foster the emergence of widely followed conventions and best practices, which can in turn help maintain a high level of interoperability for tools.

The announcement of plans for a C++ Ecosystem TR did not completely assuage tool authors’ concerns; some felt that, while it was a good direction, Modules should be delayed and standardized in tandem with the TR’s publication in the C++23 timeframe. However, this was a minority view, and as mentioned Modules went on to successfully merge into the C++20 working draft at the end of the meeting.

Other Study Groups

Other Study Groups that met at this meeting include:

  • SG 6 (Numerics), which met for about two days and reviewed a dozen or so proposals. Topics discussed included utility functions, floating point types, number representations, and linear algebra (the latter being a hot topic for the committee these days, with a lot of interest from the new SG 19 (Machine Learning) as well).
  • SG 12 (Undefined and Unspecified Behaviour), which met to discuss an informational paper on pointer provenance and a paper about signed integer overflow; the latter was referred to SG 20 as a matter of educating C++ programmers. There was also the now-usual joint session with WG23 – Software Vulnerabilities, where additional sections of the C++ vulnerabilities document were reviewed; there will also be upcoming work on MISRA.
  • SG 13 (Human/Machine Interface), which met for half a day to review a proposal for a standard audio API, which generated a lot of interest. There were no developments related to 2D graphics at this meeting.
  • SG 16 (Unicode). Papers reviewed include compile-time regular expressions, source-code information capture, and charset transcoding, transformation, and transliteration.
  • SG 19 (Machine Learning) had its first meeting this week. This paper provides a good overview of how SG 19 envisions structuring its work. The initial work is understandably focused on fundamentals such as linear algebra primitives. Like many other study groups, SG 19 will hold monthly teleconferences to make progress in between in-person meetings.
  • SG 20 (Education) also had its first in-person meeting. They plan to produce a “standing paper” of educational guidelines (see this proposed early draft). They will also hold monthly telecons.
Next Meetings

The next meeting of the Committee will be in Cologne, Germany, the week of July 15th, 2019.


This was an eventful and productive meeting, and it seems like the progress made at this meeting has been well-received by the user community as well! With Modules and Coroutines joining the ranks of Concepts, Ranges, contracts, default comparisons and much else in the C++20 working draft, C++20 is promising to be the most significant language update since C++11.

Due to the sheer number of proposals, there is a lot I didn’t cover in this post; if you’re curious about a specific proposal that I didn’t mention, please feel free to ask about it in the comments.

Categorieën: Mozilla-nl planet

Mozilla GFX: WebRender newsletter #42

Mozilla planet - wo, 20/03/2019 - 11:21

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Mozilla’s research web browser Servo and on its way to becoming Firefox‘s rendering engine.

What’s everyone working on?
  • Glenn has been investigating WebRender’s performance on Android ARM Mali devices. There seems to be some reasonable overlap between optimizations affecting Intel integrated GPUs and mobile GPUs, which is good news. This investigation has led to a few performance optimizations, such as a fast path for generating clip masks in the common cases, reducing the number of fragment shader instructions, and some changes to the way data is provided to clip render tasks so they use pixel local storage instead of the render task data texture.

  • Kvark made a few refactorings improving the interning code as well as framebuffer coordinates and document origin semantics. Kvark also improved plane-splitting accuracy and the way GL driver errors are reported.
    Kvark also extracted a useful bit of WebRender into the copyless crate. This crate makes it possible to push large structures into standard vectors and hash maps in a way that LLVM is better able to optimize than when using, say, Vec::push. This lets large values get initialized directly in the container’s allocated memory without emitting extra memcpys.

  • Kats has fixed a number of scrolling related bugs, and is improving the automatic synchronization of WebRender’s code between the mozilla-central and github repositories.

  • Nical is investigating optimizations of the render task tree (with the idea of turning it into a graph rather than a tree strictly speaking). Currently WebRender does not provide a way for the output of a render task to be read by several other render tasks. In other words, if an element has a dozen shadows, we currently re-compute the blur for that element a dozen times. There are ongoing experiments with various render graph scheduling strategies in a separate repository and some of the findings from these experiments are being ported to WebRender.

  • Sotaro has landed a series of improvements around the way we handle cross-process texture sharing and how GL contexts are managed.

  • Timothy is working on a GPU implementation of the component transfer SVG filter. Avoiding the CPU fallback for this particular filter is important because of its use in Google docs.

  • Jeff has been doing a lot of WebRender bug triage and profiling. He also fixed a very bad interaction between tiled blob images and filters, which was causing the whole filtered area to be re-rendered from scratch for each tile.

  • Doug continues his work on document splitting. A large part of it has already been reviewed.


Jessie got some glitchy gfx team stickers printed. It’s not WebRender news per se, but I didn’t want to pass up an occasion to put a fun picture on the blog.


I originally made this (absolutely unofficial) logo to decorate the blog by simply flipping random bits in a png image of the Firefox Nightly logo. I recently re-did the logo in SVG using Inkscape to get a high enough resolution for the stickers.

Enabling WebRender in Firefox Nightly

In about:config, enable the pref gfx.webrender.all and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Using WebRender in a Rust project

WebRender is available as a standalone crate on crates.io (documentation)


Daniel Stenberg: Happy 21st, curl!

Mozilla planet - wo, 20/03/2019 - 07:55

Another year has passed. The curl project is now 21 years old.

I think we can now say that it is a grown-up in most aspects. What have we accomplished in the project in these 21 years?

We’ve done 179 releases. Number 180 is just a week away.

We estimate that there are now roughly 6 billion curl installations world-wide. In phones, computers, TVs, cars, video games etc. With 4 billion internet users, that’s like 1.5 curl installations per Internet-connected human on earth.

669 persons have authored patches that were merged.

The curl source code now consists of 160,000 lines of code made in over 24,000 commits.

1,927 persons have helped out so far. With code, bug reports, advice, help and more.

The curl repository also hosts 429 man pages with a total of 36,900 lines of documentation. That count doesn’t even include the separate project Everything curl which is a dedicated book on curl with an additional 10,165 lines.

In this time we have logged more than 4,900 bug-fixes, out of which 87 were security related problems.

We keep doing more and more CI builds, auto-builds, fuzzing and static code analyzing on our code, day-to-day and non-stop. Each commit is now built and tested in over 50 different builds and environments and checked by at least four different static code analyzers, spending upwards of 20-25 CPU hours per commit.

We have had 2 curl developer conferences, with the third curl up about to happen this coming weekend in Prague, Czech Republic.

The curl project was created by me and I’m still the lead developer. Up until today, almost 60% of the commits in the project have my name on them. I have done most commits per month in the project every single month since August 2015, and in 186 months out of the 232 months for which we have logged data.


Hacks.Mozilla.Org: Firefox 66: The Sound of Silence

Mozilla planet - di, 19/03/2019 - 16:56

Firefox 66 is out, and brings with it a host of great new features like screen sharing, scroll anchoring, autoplay blocking for audible media, and initial support for the Touch Bar on macOS.

These are just highlights. For complete information, see:

Audible Autoplay Blocking

Starting with version 66, Firefox will block audible autoplaying video and audio. This means media (audio and video) have to wait for user interaction before playing, unless the muted property is set on the associated HTMLMediaElement. Blocking can be disabled on a case-by-case basis in the site information overlay:

Screenshot of the Site Information panel showing the 'Autoplay sound' permission

Now you get to decide when to disturb the sound of silence.

Note: We’re rolling out blocking gradually to ensure that it doesn’t break legitimate use cases. All Firefox users should have blocking enabled within a few days.

Usability Improvements

Scroll Anchoring

Firefox now implements scroll anchoring, which prevents slow-loading content from suddenly appearing and pushing visible content off the page.

Touch Bar

The Touch Bar on macOS is now supported, offering quick access to common browser features without having to learn keyboard shortcuts.

Photo of Firefox's buttons on a MacBook Pro Touch Bar

Tab Search

Too many tabs? The overflow menu sports a new option to search through your open tabs and switch to the right one.

Screenshot of Firefox's tab overflow menu showing a new 'Search Tabs' option

Astute users will note that clicking on “Search Tabs” focuses the Awesomebar and types a % sign in front of your query. Thus, while the menu entry makes tab search much more discoverable, you can actually achieve the same effect by focusing the Awesomebar and manually typing a % sign or other modifier.

Extension Shortcuts

Speaking of shortcuts, you can now manage and change all of the shortcuts set by extensions by visiting about:addons and clicking “Manage Extension Shortcuts” under the gear icon on the Extensions overview page.

Screenshot of Firefox's new settings page to manage keyboard shortcuts added by extensions

Better Security Warnings

We’ve completely redesigned Firefox’s security warnings to better encourage safe browsing practices (i.e., don’t ignore the warnings!)

Expanded CSS Features

Firefox is the first browser to support animating the CSS Grid grid-template-rows and grid-template-columns properties, as seen in the video below.

We’re also the first browser to support the overflow-inline and overflow-block media queries, which make it possible to apply styles based on whether (and how) overflowing content is available to the user. For example, a digital billboard might report overflow-block: none, while an e-reader would match overflow-block: paged.

Furthermore, Firefox now supports:


The new getDisplayMedia API enables screen sharing on the Web similarly to how getUserMedia provides access to webcams. The resulting stream can be processed locally or shared over the network with WebRTC. See Using the Screen Capture API on MDN for more information.

Screenshot of Firefox's screen sharing dialog

Mozilla is using getDisplayMedia in Bugzilla to allow people to take and attach screenshots to their bug reports, directly from inside the browser.

Also, starting with Firefox 66, InputEvent now has a read-only property, inputType. This distinguishes between many different types of edits that can happen inside an input field, for example insertText versus insertFromPaste. To learn more, check out the documentation (and live demo) on MDN.

Browser Internals

Lastly, we’ve made a few changes to how Firefox works under the hood:

From all of us at Mozilla, thank you for choosing Firefox!

The post Firefox 66: The Sound of Silence appeared first on Mozilla Hacks - the Web developer blog.


The Mozilla Blog: Today’s Firefox Aims to Reduce Your Online Annoyances

Mozilla planet - di, 19/03/2019 - 14:01

Almost a hundred years ago, John Maynard Keynes suggested that the industrial revolution would effectively end work for humans within a couple of generations, and our biggest challenge would be figuring what to do with that time. That definitely hasn’t happened, and we always seem to have lots to do, much of it online. When you’re on the web, you’re trying to get stuff done, and therefore online annoyances are just annoyances. Whether it’s autoplaying videos, page jumps or finding a topic within all your multiple tabs, Firefox can help. Today’s Firefox release minimizes those online inconveniences, and puts you back in control.

Block autoplaying content by default

Ever open a new page and all of a sudden get bombarded with noise? Well, worry no more. Starting next week, we will be rolling out the peace that silence brings with our latest feature, block autoplay. Here’s how to use block autoplay:

  • Scenario #1 – For anyone who wants peace and quiet on the web: Go to a site that plays videos or audio, such as a news site or a site known for hosting movies and television shows; the Block Autoplay feature will stop the audio and video from automatically playing. If you want to view the video, simply click on the play button to watch it.

There will be instances where there are some sites, like social media, that automatically mute the sound but will continue to play the video. In this case, the new Block Autoplay Feature will not stop the video from playing.

  • Scenario #2 – For the binge-watcher: If your weekend plans involve catching up on your favorite TV series, you’ll want to make it interruption-free. To play the videos continuously, hit play and all subsequent videos will play automatically, just as the site intended. This will apply to all streaming sites including Netflix, Hulu and YouTube. To continue to autoplay from the first video, you should add those sites to your permissions list.

To enable autoplay on your favorite websites, add them to your permissions list by visiting the control center — which can be found by clicking the lowercase “i” with a circle in the address bar. From there go to Permissions and select “allow” in the drop down to automatically play media with sound.

From Permissions, you can choose to allow or block autoplay.

No more annoying page jumps with smoother scrolling

Do you ever find yourself immersed in an online article, when all of a sudden an image or ad loads at the top of the page and you lose your place? Images and ads load more slowly than the written content on a page, and without scroll anchoring in place, you’re left bouncing around the page. Today’s release features scroll anchoring: the page now remembers where you are, so that you aren’t interrupted by slow-loading images or ads.

Search made easier and faster

Search is one of the most common activities that people do whenever they go online, so we are always looking for ways to streamline that experience. Today, we’re improving the search experience to make it faster, easier and more convenient by enabling:

  • Searching within Multiple Tabs – Did you know that if you enter a ‘%’ in your Awesome Bar, you can search the tabs on your computer? If you have more than one device on Firefox Sync, you can search the tabs on your other devices as well. Now you can search from the tab overflow menu, which appears when you have a large number of tabs open in a window. When this happens, you’ll see on the right side of the plus sign (where you typically open a new tab) a down arrow. This is called the tab overflow menu. Simply click on it to find the new box for searching your tabs.
  • Searching in Private Browsing – Sometimes you’d prefer your search history to not be saved, like those times when you’re planning a surprise party or gift. Now, when you open a new tab in Private Browsing, you’ll see a search bar with your default search engine – Google, Bing, DuckDuckGo, eBay, Twitter or Wikipedia. You can set your default search engine when you go to Preferences, Search, then Default Search Engine.


Additional features in today’s Firefox release include:
  • Keeping you safe with easy-to-understand security warnings – Whenever you visit a site, it’s our job to make sure the site is safe. We review a security certificate, a proof of the site’s identity, before letting you visit it. If something isn’t right, you’ll get a security warning. We’ve updated these warnings to be simple and straightforward about why the site might not be safe. To read more about how we created these warnings, visit here.
  • Web Authentication support for Windows Hello –  For the security-minded early adopters, we’re providing biometric support for Web Authentication using Windows Hello on Windows 10. With the upcoming release for Windows 10, users will be able to sign in to compatible websites using fingerprint or facial recognition, a PIN, or a security key. To learn more, visit our Security blog.
  • Improved experience for extension users – Previously, extensions stored their settings in individual files (commonly referred to as JSON files), which added time to page loads. We made changes so that extensions now store their settings in a Firefox database. This makes it faster to get you to the sites you want to visit.

For the complete list of what’s new or what we’ve changed, you can review today’s full release notes.

Check out and download the latest version of Firefox Quantum, available here.

Get the latest Firefox

Stop audio and video content from automatically playing, and say goodbye to jumpy pages, interrupted by ad and image loading, with smoother scrolling.

Firefox is made by Mozilla, the not-for-profit champions of a healthy internet.

Download Firefox

The post Today’s Firefox Aims to Reduce Your Online Annoyances appeared first on The Mozilla Blog.
