Planet Mozilla
Updated: 1 month 2 hours ago

The Mozilla Blog: Fast. For good. Launching the new Firefox into the World

Tue, 14/11/2017 - 14:59

Thirteen years ago, we marked the launch of Firefox 1.0 with a crowdfunded New York Times ad. It listed the names of every single person who contributed — hundreds of people. And it opened a lot of eyes. Why? It showed what committed individuals can make happen when they're willing to put their actions and dollars behind a cause they believe in. In this case, it was launching Firefox, a web browser brought to market by Mozilla, the not-for-profit organization committed to making the internet open and accessible to everyone. And Firefox represented more than just a new and improved browser. It stood for an independent alternative to the corporately controlled Internet Explorer from Microsoft, and a way for people to take back control of their online experience.

Firefox Ad in New York Times on November 14, 2017

Fast forward to today and the launch of the new Firefox browser (check out our ad in today’s New York Times). No doubt, we are bringing a new, faster and more powerful browser to market, but the reason we’re in this business remains the same. Now, more than ever, people need tech options that are not only built to work well for the individual user but that also improve the overall tech landscape. That’s exactly what the new Firefox does. Twice as fast, and still committed to putting people over profit. We are fighting for a healthy internet, one that is accessible and open to all. We are a community of committed individuals standing up for what we believe is right.

The new Firefox browser is the best we’ve put in market since Firefox first launched. The world of marketing has changed since that initial launch, so we’ve put forward our best marketing to date as well.

Behind-The-Scenes of our New Firefox Campaign Launch

Our research tells us that Firefox and its parent organization Mozilla are both well-known brands. Yet not enough people see a distinction between Firefox and our biggest competitor, Chrome. And even fewer people understand that Mozilla is a not-for-profit responsible for pro-Internet technologies, policies, and programs beyond Firefox.

However, those who do understand the depth and breadth of Mozilla’s work view Firefox as a more iconic browser and are happier and more loyal users. So we’ve been hard at work to create a deeper understanding of Mozilla in order to differentiate Firefox and, at the same time, be much clearer about what makes Firefox unique.

Audience first

Part of that work is defining our key audience: the people for whom we can provide the most value with our products and who, in return, can help us spread the word fastest and work with us to keep the Internet a healthy, open and accessible experience. We’ve identified a group of consumers, representing 23% of all Internet users, whom we call Conscious Choosers. This segment takes time to research and understand products and companies in order to make a deliberate choice about who and what they support. They share a worldview that is against monopolies and centralized power hubs, and for democratic access to information, knowledge, and resources. They try hard to reconcile these values with their behaviors, and while willing to take the extra effort to do what is right, they are in a constant balancing act between choosing what is “easy” and doing what is “right.”

Understanding what makes Conscious Choosers tick has helped us make some important marketing decisions and guide the new Firefox launch.

The right promise

First, it led to our tagline for the new Firefox: Fast for good. This promise reinforces that with the new Firefox, there is no trade-off between performance (the “easy” choice) and purpose (the “right” choice). You get a browser that is 2x faster and uses 30% less memory than Chrome. A browser that’s already known for its powerful privacy options. And a browser that allows you to support a mission-driven not-for-profit too.

This positioning is spread across all of our marketing materials from the website to our advertising campaigns.

We think the new Firefox has to be felt to be believed. So two of our creative executions, which I’m previewing here for you first, focus on what it feels like — and even sounds like — to use our blazingly fast new browser. We’re bringing these concepts to life in television spots and promoted videos.

Browsing at the Speed of Right

Wait Face. As you use the internet you’ve probably felt frustrated by the slowness of a page loading or a video buffering. Annoyance, boredom, and tiredness have become universal human expressions of waiting for stuff to happen online, and these expressions act as a foil for the new Firefox experience. We loved working with our teams of actors and directors to capture the essence of the wait face and we still smile when we watch the spots.

The antidote to wait face is the new Firefox. When our actors fire it up, the waiting resolves into joy and excitement as people experience the sensation of speed. The internet is theirs to enjoy. And the energy that all the teams put into this spot can be felt.

Take a look.

What Does Speed Sound Like?

The Wait Face videos show what the new Firefox experience looks and feels like. That creative exploration made us wonder how else we might demonstrate the sensation of the new product. Watching the actors interact with the music and play on their Firefox faces, we asked ourselves whether we could show what slow and fast sounded like.

For that, we turned to the music impresario Reggie Watts — bandleader for The Late Late Show, intellectual improv artist, and beat-box musician with a large and growing following. We asked Reggie to improvise through sound the feeling of slowness and to contrast that with the joyful speed of Firefox. Our collaboration resulted in a funny and memorable performance in which fast has a magnetic attraction that even Reggie can’t resist.

Reggie Watts in the new Firefox TV ads


In addition to a spot for TV and digital video placement, Reggie improvised a slew of video snippets that we’ll share through a combination of tactics, from social to video bumpers and more.

In Real Life

Our Conscious Chooser insights also led to another important component of our marketing strategy. While this segment is very confident in their ability to “vote with their wallet” offline to show support for the products and services that align with their values, they can sometimes be overwhelmed, and even feel defeated, by how to demonstrate their values and exercise their power online. So we’ve created a series of experiences that make the intangible more tangible. Our Firefox Fast Ferry literally gets New Yorkers from Brooklyn to Manhattan and back faster by offering an alternative to the slower-than-ever subway (which happens to be under repair to boot). Our Firefox Fast Pass has sped up fans’ experience — and their time for fun — at events like TwitchCon, ComplexCon and Playlist Live.

In addition to experiences like these that reinforce what’s special about Firefox, we’ve also invested in programs that help people understand the broader work of Mozilla, in order to add even more differentiation to Firefox.  Our podcast IRL shines a light on how our online and offline behaviors impact each other. And our Glass Room pop-up has taken the somewhat boring and often intimidating idea of data privacy and brought it to life through interactive exhibits, onsite experts and easy-to-use tools that make it real and easy to make smart choices about how, where and when to share your online identity.


Firefox, fast for good

This is why Firefox exists. Our CEO, Chris Beard, summed it up nicely in an email to the community of Firefox users:

“When you use Firefox, you’re also contributing to a movement to ensure the Internet remains a global public resource, open and accessible to all. As an independent, not-for-profit organization, we’ve been committed since 2003 to building products that put you in control of your online life and advancing open technology and public policy that promote a healthier Internet. We put you at the center of everything we do.

On behalf of Mozilla’s global community, we’re proud to introduce you to the new Firefox. Fast for good.”


If you haven’t done so already, we invite you to check out the new Firefox browser and tell us what you think through Twitter, Facebook or Instagram.

We hope you enjoy it as much as we do.

The post Fast. For good. Launching the new Firefox into the World appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Giorgio Maone: Double NoScript

Tue, 14/11/2017 - 13:11

NoScript Work
Later today (edit: in a couple of days, and definitely by the end of this week), if everything goes fine, NoScript 10, the first "pure" WebExtension version of NoScript, will finally be released for Firefox 57 and above, after years of work and months of NoScript 5.x living as a hybrid to allow for smooth user data migration.

NoScript 10 is very different from 5.x: some things are simpler, some things are improved, some are still missing and need to wait for WebExtensions APIs not available yet in Firefox 57. Anyway, whenever you decide to migrate, your old settings are kept safe, ready to be used as soon as the feature they apply to gets deployed.

If you're not bothered by change, you're ready to report bugs*, and you're not super-paranoid about the most arcane features of the whole "NoScript Security Suite", NoScript 10 is worth the migration: active content blocking (now more configurable than ever) and XSS protection (now with a huge performance boost) are already there. And yes, Firefox 57 is truly the most awesome browser around.

If, otherwise, you really need the well-rounded, solid, old NoScript experience you're used to, and you can't bear anything different, even if just for a few weeks, don't worry: NoScript 5.x is going to be maintained and receive security updates until at least June 2018, when the Tor Browser switches to a Firefox 59 ESR base and the "new" NoScript will be as powerful as the old one. Of course, in order to keep using NoScript 5.x outside the Tor Browser (which has it built in), you have to stay on Firefox 52 ESR, SeaMonkey, Pale Moon, or another pre-Quantum browser.
Or you can even install Firefox 58 Developer Edition, which allows you to keep NoScript 5 running on "Quantum" with the extensions.legacy.enabled trick. Just please don't stay on Firefox 56 with updates blocked; it would be bad for your security.

Let me repeat that: your safest option for the next few days is Firefox 52 ESR, which will receive security updates until June 2018.

So, for another half-year there will be two NoScripts: just sort your priorities and choose yours.

Update 2017-11-15

As you probably noticed, yesterday's "today" has gone away in most time zones and we're not ready yet (Murphy's law and all) :(
But we're definitely on track for the end of this week, and in the meantime your awesome patience deserves a couple of preview screenshots...
NoScript 10 menu

Update 2017-11-18

The week is not over yet.

* In the next few weeks we will move NoScript 10.x source code and bug tracking to GitHub; in the meantime, please keep using the forum.


This Week In Rust: This Week in Rust 208

Tue, 14/11/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is failure, a crate to deal with... you guessed it, failure. Thanks to Vikrant for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

137 pull requests were merged in the last week

New Contributors
  • Alec Theriault
  • Alkis Evlogimenos
  • John Ford
  • John-John Tedro
  • Sebastian Dröge
  • Sébastien Santoro
  • Shotaro Yamada
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.


Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - November 14, 2017

Tue, 14/11/2017 - 01:00

Here’s what happened on the MozMEAO SRE team from November 7th - November 14th.

Current work

Firefox Quantum release

The team actively monitored our bedrock Kubernetes deployments during the release of Firefox Quantum. No manual intervention was required during the release.

SRE General

SUMO
  • an Elasticsearch development instance has been provisioned and is usable by the SUMO development team.
  • Redis and RDS provisioning automation has been merged, but resources have not been provisioned in AWS.
  • The team worked on a SUMO infra estimate for AWS.
    • Assumes existing K8s cluster, possible shared RDS/Elasticache.
MDN

Static site hosting

The team is in the process of evaluating the following static hosting solutions:


The Rust Programming Language Blog: Fearless Concurrency in Firefox Quantum

Tue, 14/11/2017 - 01:00

These days, Rust is used for all kinds of things. But its founding application was Servo, an experimental browser engine.

Now, after years of effort, a major part of Servo is shipping in production: Mozilla is releasing Firefox Quantum!

Rust code began shipping in Firefox last year, starting with relatively small pilot projects like an MP4 metadata parser to replace some uses of libstagefright. These components performed well and caused effectively no crashes, but browser development had yet to see large benefits from the full power Rust could offer. This changes today.

Stylo: a parallel CSS engine

Firefox Quantum includes Stylo, a pure-Rust CSS engine that makes full use of Rust’s “Fearless Concurrency” to speed up page styling. It’s the first major component of Servo to be integrated with Firefox, and is a major milestone for Servo, Firefox, and Rust. It replaces approximately 160,000 lines of C++ with 85,000 lines of Rust.

When a browser is loading a web page, it looks at the CSS and parses the rules. It then determines which rules apply to which elements and their precedence, and “cascades” these down the DOM tree, computing the final style for each element. Styling is a top-down process: you need to know the style of a parent to calculate the styles of its children, but once a parent’s style is computed, each child’s style can be calculated independently.

This top-down structure is ripe for parallelism; however, since styling is a complex process, it’s hard to get right. Mozilla made two previous attempts to parallelize its style system in C++, and both of them failed. But Rust’s fearless concurrency has made parallelism practical! We use rayon —one of the hundreds of crates Servo uses from Rust’s ecosystem — to drive a work-stealing cascade algorithm. You can read more about that in Lin Clark’s post. Parallelism leads to a lot of performance improvements, including a 30% page load speedup for Amazon’s homepage.
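The shape of that parallel cascade can be sketched in a few lines of Rust. This is only an illustration, not Stylo’s code: the `Node` type and its `style_delta` field are made up, and it uses the standard library’s scoped threads where Stylo uses rayon’s work-stealing thread pool.

```rust
use std::thread;

// Hypothetical node: a style "delta" plus children. Real Stylo works on the
// DOM tree with full computed styles; only the traversal shape is shown here.
struct Node {
    style_delta: i64,
    children: Vec<Node>,
}

// Top-down cascade: a child's style depends on its parent's, but once the
// parent's style is known, each child can be processed on its own thread.
fn cascade(node: &Node, parent_style: i64) -> i64 {
    let my_style = parent_style + node.style_delta;
    let child_sum: i64 = thread::scope(|s| {
        let handles: Vec<_> = node
            .children
            .iter()
            .map(|child| s.spawn(move || cascade(child, my_style)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });
    my_style + child_sum
}

fn main() {
    let tree = Node {
        style_delta: 1,
        children: vec![
            Node { style_delta: 2, children: vec![] },
            Node { style_delta: 3, children: vec![] },
        ],
    };
    // Root: 1; children: 1+2 = 3 and 1+3 = 4; total 8.
    assert_eq!(cascade(&tree, 0), 8);
}
```

The key property is the same as in Stylo: a child’s computation waits only on its parent, never on its siblings.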

Fearless concurrency

An example of Rust preventing thread safety bugs is how style information is shared in Stylo. Computed styles are grouped into “style structs” of related properties, e.g. there’s one for all the font properties, one for all the background properties, and so on. Now, most of these are shared; for example, the font of a child element is usually the same as its parent, and often sibling elements share styles even if they don’t have the same style as the parent. Stylo uses Rust’s atomically reference counted Arc<T> to share style structs between elements. Arc<T> makes its contents immutable, so it’s thread safe — you can’t accidentally modify a style struct when there’s a chance it is being used by other elements.
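A minimal sketch of that sharing pattern, with a made-up `FontStruct` standing in for a real style struct:

```rust
use std::sync::Arc;

// Hypothetical style struct; the real ones group related CSS properties.
struct FontStruct {
    family: String,
    size_px: u32,
}

fn main() {
    // Parent and child elements share one immutable font struct. Arc<T> is
    // atomically reference counted and hands out only shared access, so no
    // element can accidentally mutate a struct another element is using.
    let parent_font = Arc::new(FontStruct { family: "serif".to_string(), size_px: 16 });
    let child_font = Arc::clone(&parent_font); // refcount bump, no deep copy

    assert!(Arc::ptr_eq(&parent_font, &child_font)); // same allocation
    assert_eq!(Arc::strong_count(&parent_font), 2);
    assert_eq!(child_font.family, "serif");
    assert_eq!(child_font.size_px, 16);
}
```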

We supplement this immutable access with Arc::make_mut(); for example, this line calls .mutate_font() (a thin wrapper around Arc::make_mut() for the font style struct) to set the font size. If the given element is the only element that has a reference to this specific font struct, it will just mutate it in place. But if it is not, make_mut() will copy the entire style struct into a new, unique reference, which will then be mutated in place and eventually stored on the element.
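The copy-on-write behavior of `Arc::make_mut()` is easy to see with a toy value (a bare `u32` standing in for a font style struct); this illustrates the standard-library API, not Stylo’s actual `.mutate_font()` wrapper:

```rust
use std::sync::Arc;

fn main() {
    let parent_size = Arc::new(16u32);            // shared "font size"
    let mut child_size = Arc::clone(&parent_size); // child initially shares it

    // Two owners exist, so make_mut clones the value first (copy-on-write):
    // the child gets its own unique copy and the parent is untouched.
    *Arc::make_mut(&mut child_size) = 24;
    assert_eq!(*parent_size, 16);
    assert_eq!(*child_size, 24);

    // child_size is now the sole owner, so make_mut mutates in place.
    *Arc::make_mut(&mut child_size) += 1;
    assert_eq!(*child_size, 25);
}
```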


On the other hand, Rust guarantees that it is impossible to mutate the style of the parent element, because it is kept behind an immutable reference. Rayon’s scoped threading functionality makes sure that there is no way for a worker thread to even obtain or store a mutable reference to it. The parent style is something one thread was allowed to write to in order to create it (when the parent element was being processed); after that, everyone is only allowed to read from it. You’ll notice that the reference is a zero-overhead “borrowed pointer”, not a reference-counted pointer, because Rust and Rayon let you share data across threads without reference counting when they can guarantee that the data will be alive at least as long as the thread.
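The borrow-across-threads idea can be shown with the standard library’s scoped threads (`std::thread::scope`, a later stdlib analogue of the rayon scoping described here). A hypothetical `ParentStyle` is shared by plain `&` reference, with no reference counting, because the scope guarantees the threads finish before the data is dropped:

```rust
use std::thread;

// Hypothetical parent style, owned by the spawning code.
struct ParentStyle {
    font_size: u32,
}

fn main() {
    let parent = ParentStyle { font_size: 16 };

    // Scoped threads must finish before `parent` is dropped, so a plain
    // borrowed &ParentStyle can cross thread boundaries with no refcount;
    // the shared reference also makes mutating it from a worker impossible.
    let results: Vec<u32> = thread::scope(|s| {
        let handles: Vec<_> = (0u32..3)
            .map(|i| {
                let style: &ParentStyle = &parent; // zero-overhead borrow
                s.spawn(move || style.font_size + i)
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });

    assert_eq!(results, vec![16, 17, 18]);
}
```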

Personally, my “aha, I now fully understand the power of Rust” moment was when thread safety issues cropped up on the C++ side. Browsers are complex beings, and despite Stylo being Rust code, it needs to call back into Firefox’s C++ code a lot. Firefox has a single “main thread” per process, and while it does use other threads they are relatively limited in what they do. Stylo, being quite parallel, occasionally calls into C++ code off the main thread. That was usually fine, but would regularly surface thread safety bugs in the C++ code when there was a cache or global mutable state involved, things which basically never were a problem on the Rust side.

These bugs were not easy to notice, and were often very tricky to debug. And that was with only the occasional call into C++ code off the main thread; it feels like if we had tried this project in pure C++, we’d have been dealing with this far too much to get anything useful done. And indeed, bugs like these have thwarted multiple attempts to parallelize styling in the past, both in Firefox and other browsers.

Rust’s productivity

Firefox developers had a great time learning and using Rust. People really enjoyed being able to aggressively write code without having to worry about safety, and many mentioned that Rust’s ownership model was close to how they implicitly reason about memory within Firefox’s large C++ codebase. It was refreshing to have fuzzers catch mostly explicit panics in Rust code, which are much easier to debug and fix than segfaults and other memory safety issues on the C++ side.

A conversation amongst Firefox developers that stuck with me — one that was included in Josh Matthews’ talk at Rust Belt Rust — was:

<heycam> one of the best parts about stylo has been how much easier it has been to implement these style system optimizations that we need, because Rust

<heycam> can you imagine if we needed to implement this all in C++ in the timeframe we have

<heycam> yeah srsly

<bholley> heycam: it’s so rare that we get fuzz bugs in rust code

<bholley> heycam: considering all the complex stuff we’re doing

*heycam remembers getting a bunch of fuzzer bugs from all kinds of style system stuff in gecko

<bholley> heycam: think about how much time we could save if each one of those annoying compiler errors today was swapped for a fuzz bug tomorrow :-)

<heycam> heh

<njn> you guys sound like an ad for Rust

Wrapping up

Overall, Firefox Quantum benefits significantly from Stylo, and thus from Rust. Not only does it speed up page load, but it also speeds up interaction times since styling information can be recalculated much faster, making the entire experience smoother.

But Stylo is only the beginning. There are two major Rust integrations getting close to the end of the pipeline. One is integrating Webrender into Firefox; Webrender heavily uses the GPU to speed up rendering. Another is Pathfinder, a project that offloads font rendering to the GPU. And beyond those, there remain Servo’s parallel layout and DOM work, which are continuing to grow and improve. Firefox has a very bright future ahead.

As a Rust team member, I’m really happy to see Rust being successfully used in production to such great effect! As a Servo and Stylo developer, I’m grateful to the tools Rust gave us to be able to pull this off, and I’m happy to see a large component of Servo finally make its way to users!

Experience the benefits of Rust yourself — try out Firefox Quantum!


Emma Humphries: Enabling Google Analytics on

Mon, 13/11/2017 - 23:27

Hello, we intend to enable Google Analytics on

As specified in our privacy policy for websites, if you have Do Not Track enabled we will not load Google Analytics.

We want to be better able to respond to bugs filed on Bugzilla. Google Analytics gives us a way to learn which issues are getting the most views and activity so that we can respond to them.

We also want to improve the user experience by better understanding how the community uses Bugzilla. HTTP logs alone don’t tell the story of how easy or difficult it is to file a bug, or which parts of the bug detail page are confusing people using our site.

At the same time, we share our community's concerns about the importance of privacy and security, and are particularly sensitive to participation in, as well as the content of, security and legal bugs.

We will not load Google Analytics on detail and attachment pages for bugs in groups with restricted access. Furthermore, none of the URL parameters will be logged, and your IP addresses will be anonymized.

If you allow us to collect information about your use of Bugzilla using Google Analytics, that data will be used in accordance with our privacy policy.


Air Mozilla: Mozilla Weekly Project Meeting, 13 Nov 2017

Mon, 13/11/2017 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting


Eric Shepherd: Avoiding “thin” pages when writing on MDN

ma, 13/11/2017 - 18:50

Last week, I wrote about the results of our “thin pages” (that is, pages too short to be properly cataloged by search engines) SEO experiment, in which we found that while there appear to be gains in some cases when improving the pages considered to be too short, there was too much uncertainty and too few cases in which gains seemed to occur at all, to justify making a full-fledged effort to fix every thin page on MDN.

However, we do want to try to avoid thin pages going forward! Having content that people can actually find is obviously important. In addition, we encourage contributors working on articles for other reasons who find that they’re too short to go ahead and update them.

I’ve already updated our meta-documentation (that is, our documentation about writing documentation) to incorporate most of the recommendations for avoiding thin content. These changes are primarily in the writing style guide. I’ve also written the initial portions of a separate guide to writing for SEO on MDN.

For fun, let’s review the basics here today!

What’s a thin page?

A thin page is a page that’s too short for search engines to properly catalog and differentiate from other pages. Pages shorter than 250-300 words of content text don’t provide enough context for search algorithms to reliably determine what the article is about, so the page ends up in the wrong place in search results.

For the purposes of computing the length of an article, the article’s length is the number of words of body content—that is, content that isn’t in headers, footers, sidebars, or similar constructs—plus the number of words located in alt text on <img> elements.
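As a sketch, the rule above amounts to something like the following; the function names and the 250-word cutoff (the low end of the 250-300 range) are illustrative, not MDN’s actual tooling:

```rust
// Count words in the body text plus words in image alt text; anything in
// headers, footers, or sidebars is assumed to be stripped before this runs.
fn article_word_count(body_text: &str, alt_texts: &[&str]) -> usize {
    let body_words = body_text.split_whitespace().count();
    let alt_words: usize = alt_texts
        .iter()
        .map(|alt| alt.split_whitespace().count())
        .sum();
    body_words + alt_words
}

// "Thin" here uses the low end of the 250-300 word range as the cutoff.
fn is_thin(body_text: &str, alt_texts: &[&str]) -> bool {
    article_word_count(body_text, alt_texts) < 250
}

fn main() {
    let body = "A very short stub page.";
    assert_eq!(article_word_count(body, &["diagram of the API"]), 9);
    assert!(is_thin(body, &["diagram of the API"]));
}
```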

How to avoid thin pages

These tips are taken straight from the guidelines on MDN:

  • Keep an eye on the convenient word counter located in the top-right corner of the editor’s toolbar on MDN.
  • Obviously, if the article is a stub, or is missing material, add it. We try to avoid outright “stub” pages on MDN, although they do exist, but there are plenty of pages that are missing large portions of their content while not technically being a “stub.”
  • Generally review the page to ensure that it’s structured properly for the type of page it is. Be sure every section that it should have is present and has appropriate content.
  • Make sure every section is complete and up-to-date, with no information missing. Are all parameters listed and explained?
  • Be sure everything is fully fleshed-out. It’s easy to give a quick explanation of something, but make sure that all the nuances are covered. Are there special cases? Known restrictions that the reader might need to know about?
  • There should be examples covering all parameters or at least common sets of parameters. Each example should be preceded with an overview of what the example will do, what additional knowledge might be needed to understand it, and so forth. After the example (or interspersed among pieces of the example) should be text explaining how the code works. Don’t skimp on the details and the handling of errors in examples; readers will copy and paste your example to use in their own projects, and your code will wind up used on production sites! See Code examples and our Code sample guidelines for more useful information.
  • If there are particularly common use cases for the feature being described, talk about them! Instead of assuming the reader will figure out that the method being documented can be used to solve a common development problem, actually add a section with an example and text explaining how the example works.
  • Include proper alt text on all images and diagrams; this text counts, as do captions on tables and other figures.
  • Do not descend into adding repetitive, unhelpful material or blobs of keywords, in an attempt to improve the page’s size and search ranking. This does more harm than good, both to content readability and to our search results.

Reviewing the above guidelines and suggestions (some of which are admittedly pretty obvious) when confronted with a page that’s just too short may help kick-start your creativity, so you can write the material needed to ensure that MDN’s content rises to the top of the turbid sea of web documentation and other content found on the Internet.


Martin Giger: Browser Extensions should Work Together

ma, 13/11/2017 - 18:25
Live Stream Notifier + Notification Sound + Streamlink Helper = ❤

Most browser extensions do a thing. And they do that thing in their isolated little world. Many of them do their thing pretty well. Many of them are built to do many things. Many of them are built to do just one little thing. But only a few of them talk to other extensions to do …

This article first appeared on Humanoids beLog


The Mozilla Blog: WebAssembly support now shipping in all major browsers

Mon, 13/11/2017 - 17:09

While Mozilla has been preparing to launch Firefox Quantum, its fastest browser yet, some notable developments have happened with WebAssembly, the binary file format (“wasm”) that works with JavaScript to run web applications at near-native speeds.

In the past weeks, both Apple and Microsoft have shipped new versions of Safari and Edge, respectively, that include support for WebAssembly. Since Mozilla Firefox and Google Chrome already support WebAssembly, that makes all four major browsers capable of running code compiled to the wasm format on the web.

“Google, Apple, and Microsoft had all committed to supporting WebAssembly in their browsers. To have that support in market today is a really exciting development,” said Luke Wagner, the Mozilla engineer who created WebAssembly’s precursor, asm.js, and spearheaded work on the WebAssembly specification.

For developers, broad client support means they can experiment with WebAssembly with assurance that most end users will be able to run super-fast wasm modules by default. Ubiquitous client support fueled the early success of asm.js. Since it is a pure subset of JavaScript, asm.js can run in any browser without modification. You can find asm.js on Facebook, where it powers popular games like Candy Crush Saga, Top Eleven, and Cloud Raiders.

A Growing Standard

What’s the big deal about WebAssembly? First, it’s on its way to becoming an industry standard. It’s a proven way to run large, complex applications on the web. And it gives web developers a number of options they’ve never had before. For instance, now you can:

  • Take advantage of the compact wasm format to transmit files quickly over the wire and load them as JavaScript modules
  • Get near-native performance without using a plug-in
  • Write code that is both performant and safe, because it executes within the browser’s security sandbox
  • Have a choice of languages besides JavaScript, using WebAssembly as a compiler target for C and C++, with additional language support to come.
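As a sketch of the compiler-target idea: the function below is ordinary Rust (one of the languages gaining wasm support around this time) that builds and runs natively, and the same source can also be compiled for the `wasm32-unknown-unknown` target and called from JavaScript. The exported name and the JS call shown in comments are illustrative, not from any particular project:

```rust
// `#[no_mangle]` + `extern "C"` keep the symbol name stable so JavaScript
// can look the function up on the compiled wasm module's exports. C and C++
// typically reach wasm through Emscripten instead.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Running natively here; in a browser you would call something like
    // `instance.exports.add(2, 3)` after WebAssembly.instantiate(...).
    assert_eq!(add(2, 3), 5);
}
```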
How It’s Used Today

WebAssembly has caught the interest of a wide swath of technical folks, because it brings predictable performance to the web platform – something that’s been exceedingly difficult to achieve with JavaScript alone. Gaming companies were early adopters of WebAssembly and asm.js. Epic and Unity were first to put their industrial-strength game engines on the web without rewriting the C++ code bases in JavaScript.

Today, the use cases for WebAssembly and asm.js have grown beyond online gaming. As people experiment with the process of using the WebAssembly format and its cohort, the Emscripten compiler, they’re finding ways to move increasingly sophisticated applications to the web. Things like:

“Asm.js and WebAssembly were really no-brainers for the gaming industry, because they had all this investment in massive C++ programs that they didn’t want to rewrite for the web,” Wagner said. “Now we’re seeing people using WebAssembly for all kinds of new projects. So there’s this real promise that we will someday be able to run most any application on the web and have it perform just as it would if it were running locally on your PC.”

Want to learn more about WebAssembly? Developers can find resources on MDN Web Docs and on the project site.

Interactive Tools

You can also try out WebAssembly Explorer, an online tool that lets you play around with a C/C++ compiler and see how WebAssembly code is produced, delivered, and ultimately consumed by the browser. Another online tool, WebAssembly Fiddle, lets you write, share, and run WebAssembly code snippets in the browser. For an even deeper dive, you can inspect WebAssembly binaries to understand how WebAssembly code is encoded at the binary level.


The post WebAssembly support now shipping in all major browsers appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Anne van Kesteren: Testing standards

ma, 13/11/2017 - 16:14

At a high level, standards organizations operate in similar ways. A standard is produced and implementations follow. Taking a cue from software engineering, WHATWG added active maintenance to the mix by producing Living Standards. The idea being that just like unmaintained software, unmaintained standards lead to security issues and shaky foundations.

The W3C worked on test suites, but never drove it to the point of test-driven development or ensuring the test suites fully covered the standards. The WHATWG community produced some tests, e.g., for the HTML parser and the canvas API, but there was never a concerted effort. The idea being that as long as you have a detailed enough standard, interoperable implementations will follow.

Those with a background in quality assurance, and those who might have read Mark Pilgrim’s Why specs matter, probably know this to be false, yet it has taken a long time for tests to be considered an essential part of the standardization process. We’re getting there in terms of acceptance, which is great as crucial parts of the web platform, such as CSS, HTML, HTTP, and smaller things like MIME types and URLs, all have the same kind of long-standing interoperability issues.

These interoperability issues are detrimental to all constituencies:

  • Users pay as these issues limit what kind of product they can use.
  • Developers pay as they have to deal with these issues rather than being able to focus on making a great library, framework, site, or application.
  • Implementers pay as they keep having to tweak code written years ago and end up with extremely fragile and hard to refactor code.
  • Editors of standards pay as they keep having to update their standard to align with reality rather than work on something new. Or worse, they don’t and build new things atop shaky foundations, leading to yet more problems down the road.

Therefore I’d like everyone to take this far more seriously than they have been. Always ask about the testing story for a standard. If it doesn’t have one, consider that a red flag. If you’re working on a standard, figure out how you can test it (hint: web-platform-tests). If you work on a standard that can be implemented by lots of different software, ensure the test suite is generic enough to accommodate that (shared JSON resources with software-specific wrappers have been a favorite of mine).
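As a toy illustration of that shared-JSON pattern (the cases and the `normalizeMimeType` subject below are invented for this sketch, not actual web-platform-tests data), the test data stays language-neutral while each implementation supplies its own thin wrapper:

```javascript
// Shared, language-neutral test data: plain JSON that a C++ engine,
// a Python library, and this JavaScript wrapper can all consume.
const cases = [
  { input: "  text/html ", expected: "text/html" },
  { input: "TEXT/HTML", expected: "text/html" },
  { input: "\ttext/plain", expected: "text/plain" },
];

// Software-specific wrapper: adapts the shared cases to the local API
// under test (here, a toy normalizer standing in for a real parser).
function normalizeMimeType(s) {
  return s.trim().toLowerCase();
}

let passed = 0;
for (const c of cases) {
  if (normalizeMimeType(c.input) === c.expected) passed++;
}
console.log(`${passed}/${cases.length} cases passed`); // 3/3 cases passed
```

Only the wrapper needs rewriting per implementation; the cases themselves are written once and shared.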

Effectively, this is another cue standards need to take from modern software development practices. Serious software requires tests to accompany changes; standards should too. Ensuring standards, tests, and implementations are developed in tandem results in a virtuous cycle of interoperability goodness.

(It would be wrong not to acknowledge Ecma’s TC39 here, who produced a standard for JavaScript that is industry-leading with everything derived from first principles, and also produced a corresponding comprehensive test suite shared among all implementations. It’s a complex standard to read, but the resulting robust implementations are hard to argue with.)


Soledad Penades: Tell me more about this intriguing future

ma, 13/11/2017 - 16:02

Firefox 1.0 was released on the 9th of November of 2004, and I still remember the buzz. We were all excitedly downloading it because our browser had finally reached v1.0.

Using Firefox at that time, with all the developer extensions, gave you such an advantage over other web developers. Add in the tabs, the predictable, standards-compliant CSS, and the fact that it didn’t get infected with stuff as often as Internet Explorer, and it was such a joyous experience.

Now, if you had told me back then that I’d be contributing code to Firefox, I’d be laughing in your face. But then I’d stop and ask: Wait… what? Tell me more about this intriguing future!

Fast forward thirteen years. I am working at Mozilla, and tomorrow we release Firefox Quantum to the general public. It’s, as the name says, a “quantum leap” between this and previous Firefox versions.

I’m personally excited that I’ve contributed code to this release. I worked on removing dependencies on the (now defunct) Add-on SDK from the code base of Developer Tools. This means that the SDK code could be finally removed from Firefox, as the new WebExtensions format that Firefox uses now does not make use of that SDK. Results? Safer and leaner Firefox (the old SDK exposed way too many internals). Oh, and that warm and fuzzy feeling after deleting code…

So I didn’t contribute to a big initiative such as a new rendering engine or whatnot, but it’s often the little non-glamorous things that need to be done. I’m proud of this work (which was also done on time). My team was great!

Another aspect I’m very thrilled about is how this work has set us up for more successes already, as we’ve developed new tools and systems to find out ‘bad stuff’ in our code, and now we’re using these outside of the Firefox “core” team to identify more things we’ll want to improve in the upcoming months. There’s a momentum here!

Who knows what else the future will bring? Maybe in 10 years time I’ll be telling you I shipped code for the new rendering engine in Firefox indeed! One has to be open to the possibilities…

Update: my colleague Lin has explained how Firefox Quantum is a browser for the future, using modern technology.



Hacks.Mozilla.Org: Entering the Quantum Era—How Firefox got fast again and where it’s going to get faster

ma, 13/11/2017 - 15:00

People have noticed that Firefox is fast again.

Tweet from Sara Soueidan about Firefox Nightly being fast

Over the past seven months, we’ve been rapidly replacing major parts of the engine, introducing Rust and parts of Servo to Firefox. Plus, we’ve had a browser performance strike force scouring the codebase for performance issues, both obvious and non-obvious.

We call this Project Quantum, and the first general release of the reborn Firefox Quantum comes out tomorrow.

orthographic drawing of jet engine

But this doesn’t mean that our work is done. It doesn’t mean that today’s Firefox is as fast and responsive as it’s going to be.

So, let’s look at how Firefox got fast again and where it’s going to get faster.

Laying the foundation with coarse-grained parallelism

To get faster, we needed to take advantage of the way hardware has changed over the past 10 years.

We aren’t the first to do this. Chrome was faster and more responsive than Firefox when it was first introduced. One of the reasons was that the Chrome engineers saw that a change was happening in hardware and they started making better use of that new hardware.

Chrome looking to the future of coarse-grained parallelism

A new style of CPU was becoming popular. These CPUs had multiple cores which meant that they could do tasks independently of each other, but at the same time—in parallel.

This can be tricky though. With parallelism, you can introduce subtle bugs that are hard to see and hard to debug. For example, if two cores need to add 1 to the same number in memory, one is likely to overwrite the other if you don’t take special care.

diagram showing data race between two cores
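To make that race concrete: in Rust (the language introduced later in this post) the compiler rejects unsynchronized shared mutation outright, and an atomic counter gives a race-free version of the two-cores-adding-1 scenario. This is a generic sketch, not Firefox code:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Several threads each add 1 to the same counter. With a plain shared
// integer this would be exactly the data race described above (and Rust
// would refuse to compile it); fetch_add makes each increment one
// indivisible step, so no update is lost.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    // Two "cores", 100,000 increments each: always 200,000, never less.
    println!("{}", parallel_count(2, 100_000));
}
```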

A pretty straightforward way to avoid these kinds of bugs is just to make sure that the two things you’re working on don’t have to share memory — to split up your program into pretty large tasks that don’t have to cooperate much. This is what coarse-grained parallelism is.

In the browser, it’s pretty easy to find these coarse grains. Have each tab as its own separate bit of work. There’s also the stuff around that webpage—the browser chrome—and that can be handled separately.

This way, the pages can work at their own speed, simultaneously, without blocking each other. If you have a long-running script in a background tab, it doesn’t block work in the foreground tab.

This is the opportunity that the Chrome engineers foresaw. We saw it too, but we had a bumpier path to get there. Since we had an existing code base we needed to plan for how to split up that code base to take advantage of multiple cores.

Firefox looking to coarse-parallelism future

It took a while, but we got there. With the Electrolysis project, we finally made multiprocess the default for all users. And Quantum has been making our use of coarse-grained parallelism even better with a few other projects.

timeline for coarse grained parallelism, with Electrolysis and Quantum Compositor before initial Quantum release and Quantum DOM after


Electrolysis

Electrolysis laid the groundwork for Project Quantum. It introduced a kind of multi-process architecture similar to the one that Chrome introduced. Because it was such a big change, we introduced it slowly, testing it with small groups of users starting in 2016 before rolling it out to all Firefox users in mid-2017.

Quantum Compositor

GPU process

Quantum Compositor moved the compositor to its own process. The biggest win here was that it made Firefox more stable. Having a separate process means that if the graphics driver crashes, it won’t crash all of Firefox. But having this separate process also makes Firefox more responsive.

Quantum DOM

Even when you split up the content windows between cores and have a separate main thread for each one, there are still a lot of tasks that main thread needs to do. And some of them are more important than others. For example, responding to a keypress is more important than running garbage collection. Quantum DOM gives us a way to prioritize these tasks. This makes Firefox more responsive. Most of this work has landed, but we still plan to take this further with something called pre-emptive scheduling.
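The core idea can be sketched with a simple priority queue (the task names and priority numbers are invented for illustration; Quantum DOM’s real scheduler is far more sophisticated):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Drain tasks in priority order; a lower number means more urgent.
fn run_in_priority_order(tasks: Vec<(u8, &str)>) -> Vec<&str> {
    // Reverse turns Rust's max-heap into a min-heap on priority.
    let mut queue: BinaryHeap<Reverse<(u8, &str)>> =
        tasks.into_iter().map(Reverse).collect();
    let mut order = Vec::new();
    while let Some(Reverse((_, name))) = queue.pop() {
        order.push(name);
    }
    order
}

fn main() {
    let tasks = vec![
        (2, "garbage collection"),
        (0, "respond to keypress"),
        (1, "timer callback"),
    ];
    // The keypress runs first even though it was queued after the GC task.
    println!("{:?}", run_in_priority_order(tasks));
}
```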

Making best use of the hardware with fine-grained parallelism

When we looked out to the future, though, we knew we needed to go further than coarse-grained parallelism.

Firefox looking towards the future of fine-grained parallelism

Coarse-grained parallelism makes better use of the hardware… but it doesn’t make the best use of it. When you split up these web pages across different cores, some of them don’t have work to do. So those cores will sit idle. At the same time, a new page being fired up on a new core takes just as long as it would if the CPU were single core.

Splitting content windows across different cores

It would be great to be able to use all of those cores to process the new page as it’s loading. Then you could get that work done faster.

But with coarse-grained parallelism, you can’t split off any of the work from one core to the other cores. There are no boundaries between the work.

With fine-grained parallelism, you break up this larger task into smaller units that can then be sent to different cores. For example, if you have something like the Pinterest website, you can split up the different pinned items and send those to be processed by different cores.

Splitting work across cores fine-grained

This doesn’t just help with latency the way coarse-grained parallelism did. It also helps with pure speed. The page loads faster because the work is split up across all the cores, and it keeps getting faster as you add more cores.

So we saw that this was the future, but it wasn’t entirely clear how to get there. Because to make this fine-grained parallelism fast, you usually need to share memory between the cores. But that gives you those data races that I talked about before.

But we knew that the browser had to make this shift, so we started investing in research. We created a language that was free of these data races — Rust. Then we created a browser engine— Servo — that made full use of this fine-grained parallelism. Through that, we proved that this could work and that you could actually have fewer bugs while going faster.

timeline of fine grained parallelism, with Quantum CSS before initial Quantum release, and Quantum Render and possibly more after

Quantum CSS (aka Stylo)

Cores that have finished their work stealing from the core with more work

With Stylo, the work of CSS style computation is fully parallelized across all of the CPU cores. Stylo uses a technique called work stealing to efficiently split up the work between the cores so that they all stay busy. With this, you get a linear speed-up. You divide the time it takes to do CSS style computation by however many cores you have.
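A rough sketch of why this keeps every core busy, assuming a shared job queue rather than Stylo’s actual per-thread stealable deques (this is illustrative, not Stylo code):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Idle workers pull the next job from a shared queue, so all of them
// stay busy until the work runs out. (Real work stealing gives each
// thread its own deque that idle threads steal from; a single locked
// queue is the simplest approximation of the same load balancing.)
fn parallel_sum_of_squares(n: u64, workers: usize) -> u64 {
    let queue = Arc::new(Mutex::new((1..=n).collect::<Vec<u64>>()));
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let queue = Arc::clone(&queue);
            thread::spawn(move || {
                let mut local = 0u64;
                loop {
                    let job = queue.lock().unwrap().pop();
                    match job {
                        Some(x) => local += x * x, // stand-in for styling one element
                        None => break,
                    }
                }
                local
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // Same answer regardless of worker count, just computed in parallel.
    println!("{}", parallel_sum_of_squares(100, 4));
}
```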

Quantum Render (featuring WebRender)

Diagram of the 4 different threads, with a RenderBackend thread between the main thread and compositor thread. The RenderBackend thread translates the display list into batched draw calls

Another part of the hardware that is highly parallelized is the GPU. It has hundreds or thousands of cores. You have to do a lot of planning to make sure these cores stay as busy as they can, though. That’s what WebRender does.

WebRender will land in 2018, and will take advantage of modern GPUs. In the meantime, we’ve also attacked this problem from another angle. The Advanced Layers project modifies Firefox’s existing layer system to support batch rendering. It gives us immediate wins by optimizing Firefox’s current GPU usage patterns.


We think other parts of the rendering pipeline can benefit from this kind of fine-grained parallelism, too. Over the coming months, we’ll be taking a closer look to see where else we can use these techniques.

Making sure we keep getting faster and never get slow again

Beyond these major architectural changes that we knew we were going to have to make, a number of performance bugs also just slipped into the code base when we weren’t looking.

So we created another part of Quantum to fix this… basically a browser performance strike force that would find these problems and mobilize teams to fix them.

timeline of Quantum Flow, with an upward sloping arc

Quantum Flow

The Quantum Flow team was this strike force. Rather than focusing on overall performance of a particular subsystem, they zeroed in on some very specific, important use cases — for example, loading your social media feed — and worked across teams to figure out why it was less responsive in Firefox than in other browsers.

Quantum Flow brought us lots of big performance wins. Along the way, we also developed tools and processes to make it easier to find and track these types of issues.

So what happens to Quantum Flow now?

We’re taking this process that was so successful—identifying and focusing on one key use case at a time — and turning it into a regular part of our workflow. To do this, we’re improving our tools so we don’t need a strike force of experts to search for the issues, but instead can empower more engineers across the organization to find them.

But there’s one problem with this approach. When we optimize one use case, we could deoptimize another. To prevent this, we’re adding lots of new tracking, including improvements to CI automation running performance tests, telemetry to track what users experience, and regression management inside of bugs. With this, we expect Firefox Quantum to keep getting better.

Tomorrow is just the beginning

Tomorrow is a big day for us at Mozilla. We’ve been driving hard over the past year to make Firefox fast. But it’s also just the beginning.

We’ll be continuously delivering new performance improvements throughout the next year. We look forward to sharing them with you!

Try Firefox Quantum in Release or in Developer Edition to make sure you get the latest updates as they come out.


Don Marti: Time-saving tip for Firefox 57

ma, 13/11/2017 - 09:00

Last time I recommended the Tracking Protection feature in Firefox 57, coming tomorrow. The fast browser is even faster when you block creepy trackers, which are basically untested combinations of third-party JavaScript.

But what about sites that mistakenly detect Tracking Protection as "an ad blocker" and give you grief about it? Do you have to turn Tracking Protection off?

So far I have found that the answer is usually no. I can usually use NJS to turn off JavaScript for that site instead. (After all, if a web developer can't tell an ad blocker from a tracking protection tool, I don't trust their JavaScript anyway.)

NJS will also deal with a lot of "growth hacking" tricks such as newsletter signup forms that appear in front of the main article. And it defaults to on, so that sites with JavaScript will work normally until I decide that they're better off without it.

Bonus links

Entering the Quantum Era—How Firefox got fast again and where it’s going to get faster by Lin Clark

How to turn Tracking Protection on


Cameron Kaiser: The security blanket blues revisited, or: keeping your Power Mac safe in an Intel world

ma, 13/11/2017 - 05:58
Way back in 2012 I wrote a fairly basic piece on Power Mac security, and ever since then I've promised repeatedly to do an update for what's happened in between. So here it is.

The usual advice well-meaning but annoying people will give us Power Mac users is, "there are many security holes in your machine, so you shouldn't ever use it on the Internet." The first part is true. The second part is, at least right now, not. You just have to understand where the vulnerabilities lie, patch the holes you can, and mitigate the vulnerabilities that you can't. However, doing so is absolutely imperative and absolutely your responsibility. If some easily remotely exploitable bug surfaces that cannot be mitigated or blocked, I'll change my tune here, but that's not presently the case.

The most important thing to keep in mind is that, as virtually all the regular readers of this blog know, Power Macs use a completely different architecture than the majority of what's out there today, and this has important security ramifications. The vast majority of presently extant low-level exploits like buffer overflows and use-after-frees broadly depend on being able to deposit Intel or ARM machine code in memory and have it executed by the victim application, but our instruction set and (often) memory layout are completely different, so any such exploit would have to be specific to PowerPC to successfully execute. At worst, a non-PowerPC exploit of this type would just crash the application or, in extreme cases, the machine. While the more security conscious amongst you will (correctly) point out this is a special example of "security by obscurity," that doesn't mean this heterogeneity isn't an advantage. Attackers go where the money is, and it's not our machines. Nor can an attacker generate code on a current Intel Mac that will run on a Power Mac without a lot of work.

But our systems definitely do not sail above the fray. Where we are most practically vulnerable falls under two major categories: information leakage, and cross-platform attacks. In the first case, unsecured networking, weak encryption or other flaws could leak private data such as passwords, credentials or personal data from our computers to an attacker; in the worst case they could allow an attacker to masquerade as you to other services or sites. In the second case, applications on your computer could be duped into performing tasks on behalf of an attacker using a payload that is not specific to a particular machine type, but can run anywhere the cross-platform environment they utilize exists (such as Java, Flash, Microsoft Word macros, scripting languages like shell scripts, JavaScript, etc.) and is able to exploit flaws in that environment to take over any machine that can run the code. In the worst case, an attacker could gain administrator access and complete control of the system, and because the exploit is not architecture-dependent, we could potentially run the poisoned code too.

So as promised, here's an updated practical guide to keeping your beloved Power Mac safe, or at least safer, today, 11 years and nine operating system releases after the last Power Mac rolled off the assembly line. This post is hardly comprehensive and you should not assume it covers all relevant deficiencies, but, for the record, these are the recommendations I myself use on my own systems. I reference prior blog posts here that you can read for more details, but this guide will cover the basic notions and try to give you an idea of priority. Please note: this document primarily applies to systems running 10.4 and later. The classic Mac OS through OS 9.2.2 has an extremely small attack surface because of its radically different architecture, and while browsers on OS 9 (including, though this is improving, Classilla) are subject to information leakage attacks and should not run Flash or Java, other kinds of attacks are almost impossible upon it. There are a few exceptions noted below. For 10.0 through 10.3, however, there are sadly far fewer good options for securing these systems, and I would simply advise putting them behind a good firewall and assuming everything you do on them is not secured.

Obviously, I also assume for the below that you're running the current version of TenFourFox and can securely download additional tools if necessary!

General recommendations

Make sure your clock is set correctly: certificate verification will fail if your clock is off by more than a few minutes in either direction. Particularly on 10.4 systems (but also observed on 10.5), systems with long periods of uptime without sleeping or being shut down may drift out of synchronization with any time server you use. The first and easiest way to reestablish the connection to your time server is either rebooting, or unchecking and then re-checking the time server checkbox in System Preferences. If your system is powered on and off regularly you may not even need to do this much. However, if this is insufficient and you are comfortable with the command line, you could try the more definitive solution in our article.

Consider using a non-admin account for basic activities: this will ensure that, if an old PowerPC-compatible exploit or Trojan horse is around and does get through, the damage is limited. At least one well-known OS X Trojan horse circulated in a PowerPC-compatible version as late as 2012!

Why do I have to enter my password? Consider this every time you're asked for it; a little paranoia is just good common sense. Ask yourself, does this application actually need administrative access? Or is this program doing something other than it claims?

Security issues with connectivity and networking

Built-in networking: On OS X, enable the built-in firewall in System Preferences (Sharing, Firewall) and enable stealth mode, and if you can, also Block UDP Traffic from the Advanced menu within that preference pane. This substantially reduces the surface for incoming network threats. Using a hardware firewall is even better, especially in combination, as well as disabling UPnP on your router if your applications don't require it; in fact, my personal daily drivers live on a specially secured wired network that cannot directly route to the Internet. There are a number of possible exploits in the network-accessible components of 10.4 and 10.5 and simply preventing access to them in this fashion is probably the best approach. Note that UDP is still necessary for some kinds of protocols such as local Windows file and printer sharing (in that case, blocking it at the router level rather than on individual Macs would be more appropriate), and disabling UPnP may be problematic for some applications.

WiFi: All Power Macs are subject to the KRACK attack and there is no known client-side fix (more info). The problem can be mitigated by going into your router settings and selecting WPA2 (not just WPA!) AES-CCMP as your only means of Wi-Fi security, which some routers may abbreviate to just "AES." Do not use TKIP. Routers may also be vulnerable, particularly if your router is itself a client to another WiFi network such as being in repeater mode; you should check to see if a firmware update is available, and consider another router if necessary.

Although AES-CCMP is much more resistant to attacks than TKIP and an attacker cannot actually join a network secured with it, they could clone your access point to a second access point with the same SSID and MAC/BSSID on a different channel and entice you to transparently connect to that. This is not very likely in a controlled home environment, but it could be an issue for public Wi-Fi or close quarters like dorms or apartments. Immediately disable Wi-Fi if you see two copies of the same network; it could be an attempt to snare you. See our article for a more in-depth way of detecting such an attack.

If you are on a public Wi-Fi connection you can't control, you should assume your connection is completely insecure (the same applies for WEP, such as on Mac OS 9, which does not support WPA2 natively, or WPA). Use a VPN if you have it available, and/or only connect to secure hosts, such as over HTTPS and SSH, to layer your connection with a secondary level of encryption. A better browser can help ... like, I dunno, TenFourFox. Just a suggestion.

Bluetooth: All Power Macs are potentially vulnerable to BlueBorne-based attacks, though the practical likelihood of being exploited is low (more info). These attacks are generally low-level and would need to be specific to PowerPC to function, but could be a source of system instability if a malicious Bluetooth device is broadcasting poison packets with Intel or ARM code embedded in them. Keep Bluetooth off if you don't need it except in controlled environments; when tethering, if a malicious device is likely to be in range, Wi-Fi is probably safer even with the caveats above.

Hardening OS X

These are well-known vulnerabilities in OS X which can be, in some cases, exploited remotely.

sudo at the wrong time: Because a password is not required to change the system date and time (either with System Preferences or using systemsetup from the command line), an attacker can set the clock wrong and then dupe vulnerable versions of the sudo utility, which allows you to run commands with administrator permissions, to acquire that same administrative access without authentication. This is due to a convenience in sudo where repeated use within a certain interval does not require a password; thus, the simplest and most secure solution is to always require a password. Start a Terminal window (or start /Applications/Utilities/Terminal) and enter the following commands:

  • sudo visudo (enter your password)
  • Using the vi editor which then appears, add the line Defaults timestamp_timeout=0 at the end. If you don't know how to use vi, type these key strokes:

    • 0G (the number zero, and a capital G)
    • o (lower case "o")
    • Defaults timestamp_timeout=0
    • Press the ESCape key and then type :wq! (colon, lower case "w", lower case "q", exclamation point) and press ENTER.

If you get an error, you did it wrong; start over. See the original article for more information.

RootPipe/systemsetupusthebomb: This is an actual flaw in another privileged system component called writeconfig that can be exploited to write arbitrary files with root permissions, also giving an attacker administrative access. The simplest fix is to go to System Preferences, and under Security, check "Require password to unlock each secure system preference" (and make sure the lock at the lower left is locked). Now any known use of the vulnerable tool will either fail or at least prompt you for a password. This covers all known exploits for this component, but for a more comprehensive approach (that may have side effects), see the original article.

Shellshock: The version of the Bourne again shell (bash) that comes with all PowerPC versions of OS X is susceptible to Shellshock, a collection of methods of causing the shell to execute arbitrary commands passed to it through environment variables. Although of particular concern to anyone using their machine as a server, it is possible to use this exploit even on single-user systems in more limited circumstances. All versions prior to 4.3.30 are vulnerable. If you have never fixed this on your system, then download the patched version of bash 4.3.30 that we provide as a community service and follow these directions exactly:

  1. Put the file in your home directory and double-click to decompress it. You should be left with a file named bash-4.3.30-10.4u. Do not change the name.
  2. Close all terminal windows and programs if they are open, just to make sure you won't stomp on bash while a program is trying to call it. Start /Applications/Utilities/Terminal and have exactly one window open.
  3. In that Terminal window, type these commands exactly as shown. If you get any errors, STOP and ask for help.

    • exec tcsh
    • chmod +x bash-4.3.30-10.4u

      (IMPORTANT! If you replaced /bin/bash (and/or /bin/sh) with any earlier version using these commands, DO NOT ENTER THE NEXT TWO COMMANDS. If you have never replaced them, then do go ahead; these will put the old ones in a safe place just in case.)

    • sudo mv /bin/bash /bin/bash_old (enter your password)
    • sudo mv /bin/sh /bin/sh_old (enter your password; if you don't get prompted again, you need to fix sudo with the steps above!)

      Everybody does these:

    • sudo cp bash-4.3.30-10.4u /bin/bash (enter your password)
    • sudo cp bash-4.3.30-10.4u /bin/sh (enter your password)

  4. Restart your Mac as a paranoia to make sure everything is using the new copy of bash.

If you're not sure, bash --version will display what you're running (mine says GNU bash, version 4.3.30(5)-release (powerpc-apple-darwin8.11.0). The version we provide is universal and will work on PowerPC and Intel from 10.4 through at least 10.9. If you want to check if your version is correctly behaving, see the original article for a test battery.
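If you want a quick before-and-after check, the classic probe for the original Shellshock bug (CVE-2014-6271) is widely documented:

```shell
# A vulnerable bash executes the command smuggled in after the exported
# "function definition" and prints "vulnerable" before "safe";
# a patched bash prints only "safe".
env x='() { :;}; echo vulnerable' bash -c 'echo safe'
```

Keep in mind this probes only one of the Shellshock CVEs; the test battery in the original article covers the rest.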

Other vulnerabilities in OS X built-in software

Although there aren't updates for most of these, you should at least be aware of the actual risk, and how to reduce it.

Some of the entries in this and the following sections reference plugins. These are usually stored in /Library/Internet Plug-Ins, but there may be per-user plugins installed in Library/Internet Plug-Ins in your home folder. You can disable them as recommended below by simply moving them to another folder, or deleting them outright if appropriate.

Java: Java is not safe on Power Macs; all versions of Java provided on any PowerPC-compatible version of Mac OS or OS X have serious well-known vulnerabilities. In particular, exploits such as Flashback can obtain system access in a cross-platform fashion. If the Java plugin is on your computer, it should be removed or disabled (or use TenFourFox, natch, which won't even run it), and you should only run signed Java applets from trusted sources if you must run them at all.

QuickTime: There are historical PowerPC-based exploits for certain codecs in QuickTime, though none of these are known to be circulating now, and no specific PowerPC-based exploit is known for QT 7+ generally. (While QT 6.0.3 in OS 9/Classic is technically vulnerable, the limitations of OS 9 make the exploit difficult and it would have to be specific to both OS 9 and PowerPC.) It is possible for QuickTime playlists and certain other kinds of scriptable content to be used to load data over the network, but they can only be interacted with in limited ways, and to actually use them for executable data would require a PowerPC-compatible attack. While such an attack is feasible and possible, it isn't very likely to occur or succeed on a Power Mac. This mode of attack can be minimized further by removing or disabling the QuickTime Plugin (or use TenFourFox, natch, which won't even run it); removing the Plugin won't affect using the QuickTime Player.

Preview: The built-in image and PDF viewer libraries also have known holes, but no known specific PowerPC-based attacks, which would be required to exploit them. The built-in PDF toolkit doesn't understand JavaScript in PDF files or embedded Flash, and as a result is much safer than using the real Adobe Acrobat Reader (which you should really only use for protected documents). If you don't mind the speed, you can also use the built-in PDF viewer in TenFourFox by going to Preferences, TenFourFox and checking the preference to enable it, though our internal viewer currently supports even fewer features than Preview. TenFourFox can also view many images by simply dragging them to any open browser window. Again, while an attack through a malicious image or PDF file is feasible and possible, it isn't very likely to occur or succeed on a Power Mac. This mode of attack can be minimized further by removing any Internet plugins that furnish PDF access in the browser, including and especially the Adobe Acrobat plugin (or use TenFourFox, natch, which doesn't even run them and implements its own sandboxed PDF viewer).

WebKit and Safari: Safari, like many other software packages, uses the version of WebKit on the system to render web pages and other network, HTML and image assets; it is, essentially, the built-in "WebKit shell." With the exception of OmniWeb, every PowerPC-compatible WebKit-based browser (Safari, iCab, Roccat, Stainless, Demeter, Shiira, etc.) relies on the version of WebKit the operating system provides, which means they inherit all the bugs and security issues of the built-in WebKit framework as well as any bugs in the shell they provide. (Gecko-based browsers bring their own libraries with them, but we're the only Gecko-based browser still updated for PowerPC OS X.)

I'm sure all of you are enthusiastic daily drivers of TenFourFox, but WebKit should also be updated because of how many other apps depend on it. For 10.5, of course, the best solution is Tobias' Leopard WebKit. Leopard WebKit not only includes a very current WebKit framework, but also includes an updated OS Security framework, and can relink WebKit shells and other programs using a provided utility.

Unfortunately, no similar supported option is available for 10.4. TenFourKit, also written by Tobias, does update the system framework somewhat but does not include security or encryption updates, and has not received any updates since 2012; it's basically the same version as the framework built into the OmniWeb browser. For this reason, you should avoid Safari and other WebKit shell applications like iCab on 10.4, as they will not be sufficiently protected, and you should be cautious of apps that attempt to display web content over the network, since the vast majority will also use the built-in WebKit. Because the OS's security framework is also not current, many secure sites will either not connect properly or throw inexplicable errors.

Currently all WebKit shells support, and will instantiate, plugins (TenFourFox doesn't). I still advise disabling them or removing them where appropriate, but if you can't do this, ClickToPlugin will at least reduce drive-by risk in Safari. Mail uses the built-in system WebKit (as above), and may have other deficiencies which are not patched. These deficiencies likely require PowerPC-specific exploits, though Apple Mail's general lack of updates implies other vulnerabilities likely lurk such as information leaks and inadequate connection security. Although Tenfourbird (an unaffiliated project) was once a solid and secure alternative, it has not been maintained since version 38.9 as of this writing, so unfortunately I am no longer able to generally recommend it. The simplest and safest approach is simply to use a webmail service instead in TenFourFox or Leopard WebKit unless you absolutely must have a local mail client; in that case, I would use Tenfourbird over Apple Mail, since it is at least more up to date.

Major third-party and optional software vulnerabilities

Your web browser: Currently TenFourFox (10.4+), Leopard WebKit (10.5 only) and Roccat (10.5 only) are known to be updated on a semi-regular basis (we issue TenFourFox releases with security updates, updated certificates and pinned keys every six weeks simultaneously with Firefox ESR updates). No other browser is current, though at least a re-linked WebKit shell will have fewer vulnerabilities. Note that Roccat also needs to be relinked with Leopard WebKit for maximum security.

Flash: Flash is not safe on Power Macs; all PowerPC-compatible versions of Adobe Flash Player have serious well-known vulnerabilities. The cross-platform Rosetta Flash exploit is able to steal credentials and cookies with 10.1 and earlier versions of Flash, and the recommended server mitigation does not fix the problem in these versions (only Flash 10.2+). Furthermore, Flash applets have been previously demonstrated to attack network settings in a cross-platform fashion, and there are other sandbox escape vulnerabilities that have been reported. Although unofficial "later" versions of the Flash plugin have circulated for Power Macs, these are still Flash 10.1 internally with a bumped version number and do not actually have any fixes. Unless you have content that absolutely cannot be viewed without Flash, you should remove or disable the Flash plugin (or use TenFourFox, natch, which won't even run it); a tool like SandboxSafari or the experimental PopOut Player can help reduce the risk for legacy content that still requires it.

Microsoft Office and OpenOffice/NeoOffice/LibreOffice: None of these office applications is currently updated for Power Macs, and all of them have potential vulnerabilities to Word and Excel macro viruses, though the OpenOffice derivatives are much less likely to be successfully exploited. For Word it is unlikely you will want macros enabled (and you should definitely turn them off in the preferences except for those rare situations in which they are needed), but this could be a real issue for Excel power users. Office v.X, and Office 98 in Classic/Mac OS 9, are probably too old to be effectively pwned, but many macro attacks against Office 2004 and 2008 will run on Power Macs, and the Open XML Converter can be attacked in some of the same ways. Microsoft, damn their Redmond hides, does not offer any of the updaters prior to Office 2008 for download anymore, but I've archived some of them on the Gopher server. For Office 2008, start here (note that you may need to download earlier service packs, which are currently still available as of this writing). Note that Office 2008 cannot run Visual Basic for Applications (VBA), and neither can the OpenOffice alternatives; that's a drop in functionality but also a reduction in attack surface. NeoOffice has not been updated for PowerPC in some time; 5.2.0alpha0 is the last version of LibreOffice for Power Macs and is generally my recommendation, but you can also download OpenOffice 4.0.0. All will run on 10.4+.

Note that while iWork/Numbers does support some Excel macros, it does not support VBA and seems to have some issues interpreting macros in general, so it is less likely to be exploited. The venerable AppleWorks née ClarisWorks is also not known to have any serious vulnerabilities.

Adobe Acrobat and Adobe Acrobat Reader: Acrobat allows embedded Flash and JavaScript, which also makes it a scriptable cross-platform target, and Adobe Acrobat is no longer updated on PowerPC OS X. (The classic Mac OS version is less vulnerable because it implements less functionality, but it may have compatibility issues with more recent documents.) The only thing you should use Acrobat for is creating PDFs, and viewing protected documents. Otherwise, make sure your PDFs open by default in Preview using the Get Info box in the Finder. Do not use the Acrobat plugin. It should be disabled or removed (or use TenFourFox, natch, which won't even run it).

Microsoft Virtual PC (and other PC emulators): I won't belabour this point except to say this depends greatly on what you run inside the emulator. Remember that a virtual machine installation of Windows can be just as hosed as a real installation, and can be an even greater malware risk if it has network access. Some Linuces will still run in VPC (I used to use Knoppix). Otherwise, stick to Windows and patch patch patch, and/or completely disable networking or use shared (NAT) networking, which uses your Mac as a firewall for the emulated PC, as appropriate.

* * *

Watch this blog as other security-related posts appear. Yes, your Power Mac has holes, but until such time as they can't be plugged or the system is no longer fit for your purpose, nothing says the only choices are a forced upgrade or sit unprotected. So far we've made our systems last over a decade. I think we can still safely keep them viable a while longer.

Categorieën: Mozilla-nl planet

Wladimir Palant: On Web Extensions shortcomings and their impact on add-on security

za, 11/11/2017 - 21:21

Recently, I reported a security issue in the new Firefox Screenshots feature (fixed in Firefox 56). This issue is remarkable for a number of reasons. First of all, the vulnerable code was running within the Web Extensions sandbox, meaning that it didn’t have full privileges like regular Firefox code. This code was also well-designed, with security aspects taken into consideration. In fact, what I found were multiple minor flaws, each of them pretty harmless. And yet, in combination these flaws were sufficient for Mozilla to assign security impact “high” to my bug report (only barely, but still). Finally, I think that these flaws only existed due to shortcomings of the Web Extensions platform, something that should be a concern given that most extensions based on it are not well-designed.

The Firefox Screenshots feature was introduced in Firefox 55 and allows users to easily take a screenshot of a web page or some part of it and upload it to a web service. All uploaded screenshots are public but you have to know the URL. Technically, this feature is really a browser extension that is integrated into Firefox. And when I looked at this extension, I immediately noticed a potential weakness: when you click its toolbar button, the extension needs to show you some user interface to select a website part and actually take the screenshot. And it will inject that user interface into the webpage. So a malicious webpage could in theory manipulate that user interface.

For years, I have been arguing that injecting trusted content (such as an extension’s user interface) into untrusted websites is a bad idea and should be avoided at any cost. However, Google Chrome won the extensions war and Firefox extensions are now just as limited as Chrome extensions always were. If a toolbar button with pop-up or developer tools panel won’t do for your extension, then there is no way around injecting extension’s user interface into the webpage. Sucks being an extension developer these days.

Of course, there are measures which can be taken to limit the potential damage, and Firefox Screenshots takes them. The extension injects only <iframe> elements into the page with an extension page loaded into the frame. The browser’s same-origin policy prevents the website from accessing anything within the frame. All it can do is manipulate the <iframe> element itself. For example, it could remove this element and effectively prevent Firefox users from taking screenshots on this website. Not great but not a big deal either.

Firefox Screenshots uses a slightly untypical approach however. The page loaded into the frame is always blank and the content is being determined by the content script that creates it:

let element = document.createElement("iframe");
element.src = browser.extension.getURL("blank.html");
element.onload = () => {
  element.contentDocument.documentElement.innerHTML = ...;
  // event handlers attached here
};

I’m moderately certain that this approach wouldn’t work in Chrome, the content script wouldn’t have the privileges to access frame contents there. But the real issue is a different one: the load event handler doesn’t verify what frame it injects the user interface into. The website can load about:blank into this frame and it will also trigger the extension’s load event handler. This way the extension can be tricked into injecting its user interface into a frame that the website can access and manipulate.

What would a malicious website do with this? It could generate fake events and select an area for the screenshot without any further user interaction. Interestingly, actually taking the screenshot wasn’t possible because the corresponding event handler checked event.isTrusted and wouldn’t react to fake events. But taking the screenshot merely requires the user to click a particular button. By making that button very large and transparent one can make sure that the user will trigger that screenshot no matter where they click (clickjacking).
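The isTrusted guard that saved the screenshot handler — and that the selection handlers lacked — looks roughly like this; the names are illustrative, not taken from the extension's source:

```javascript
// Illustrative sketch: wrap an action so it ignores synthetic events,
// which page script can fabricate via dispatchEvent. Such events arrive
// with event.isTrusted === false; only genuine user input is trusted.
function makeGuardedHandler(action) {
  return function (event) {
    if (!event.isTrusted) {
      return; // faked by page script: do nothing
    }
    action(event);
  };
}

let screenshotsTaken = 0;
const onShoot = makeGuardedHandler(() => { screenshotsTaken += 1; });
onShoot({ isTrusted: false }); // synthetic event from the page: ignored
onShoot({ isTrusted: true });  // real user click: handled
console.log(screenshotsTaken); // 1
```

Applying this consistently to every handler, not just the final "take screenshot" one, would have blocked the fake-event selection described above.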

At this point I decided to file a Firefox bug, well aware that it was unlikely to reach even “moderate” as security impact. After all, what’s the worst thing that can happen? A website that tricks users into screenshotting some obscenity? Paul Theriault’s reply in my newly filed bug went into that exact direction but this sentence made me think further:

worst case would be the ability to leak the user's screenshot URLs but I don’t think that is possible through this bug.

Right, the website cannot read out the location of user’s other screenshots. But can’t it figure out the location of the screenshot it just took? The extension tells you that the screenshot location is copied to the clipboard. Yet Web Extension APIs don’t allow “proper” clipboard access, you have to use ugly tricks involving document.execCommand. And these ugly tricks won’t work in the extension’s background page right now, meaning that you are forced to use some untrusted context for them. And sure enough: Firefox Screenshots was running the “copy to clipboard” code in the context of the website, meaning that the website could intercept the value copied or manipulate it.
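To see why running the copy trick in page context is dangerous, here is a toy simulation — plain Node with a fake document object standing in for the real DOM, so none of this is real WebExtension code — of a page that has monkey-patched execCommand to observe what the extension copies:

```javascript
// Toy simulation: a malicious page replaces document.execCommand, so an
// extension performing the execCommand("copy") trick in the page's
// context leaks the copied value to the page (which could also rewrite it).
function makeMaliciousDocument() {
  const doc = {
    intercepted: null,
    selectionText: null, // stand-in for text selected in a hidden textarea
    execCommand(command) {
      if (command === "copy") {
        doc.intercepted = doc.selectionText; // page reads the "clipboard"
      }
      return true;
    },
  };
  return doc;
}

// Extension-side helper executing in the untrusted page context.
function copyToClipboard(doc, text) {
  doc.selectionText = text; // "select" the text to copy
  doc.execCommand("copy");  // the page-controlled implementation runs
}

const pageDoc = makeMaliciousDocument();
copyToClipboard(pageDoc, "https://screenshots.example/secret-shot");
console.log(pageDoc.intercepted); // the page learned the screenshot URL
```

The same interception works in a real page via a copy-event listener; the point is that any code the untrusted context can shadow or observe must not handle secrets.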

So what we have now: a malicious website can detect that a user tries to make a screenshot of it and hijack that process so that an arbitrary screenshot is taken with merely one more click anywhere on the page. And it can read out the location of this screenshot, which lets it access it given that all screenshots are public. Users are easy to trick into performing both required actions (click on the "Screenshots" toolbar icon and then click on the website) via social engineering. And the potential damage?

Websites are not supposed to know how exactly they are rendered, APIs like drawWindow() are reserved for privileged code. This is required to avoid reopening the CSS History Leak for example. If websites could reliably distinguish visited and unvisited links, they could tell which websites the user visited in the past. But there is a far worse issue here: Firefox Screenshots can create screenshots of third-party frames within the webpage as well. So a website could for example load Gmail into a frame (not really, Gmail forbids framing), trick the user into starting screenshot creation, then screenshot that frame and read out the resulting screenshot. Oops, that’s your Google user name and all your emails in the screenshot leaked to a malicious website!

That’s three distinct issues: not ensuring that the frame to receive extension’s user interface is really trusted, not rejecting fake events in all event handlers consistently and performing copying to clipboard within an untrusted context. Nobody would treat any of these issues with priority when looking at them in isolation. I bet that similar issues will pop up in numerous extensions. If Mozilla is serious about enabling extensions and preventing security issues, adding integration points for extension’s user interface that don’t force them into untrusted contexts should be a priority. Also, the current state of clipboard manipulation is a huge footgun. At the very least, copying to clipboard should work on the background page. Proper APIs for clipboard manipulation would be better however.

Categorieën: Mozilla-nl planet

Don Marti: my Firefox 57 add-ons

za, 11/11/2017 - 09:00

Firefox 57 is coming on Tuesday, and as you may have heard, add-ons must use the WebExtensions API. I have been running Firefox Nightly for a while, so add-on switching came for me early. Here is what I have come up with.

The basic set

Privacy Badger is not on here just because I'm using Firefox Tracking Protection. I like both.

Blogging, development and testing
  • blind-reviews. This is an experiment to help break your own habits of bias when reviewing code contributions. It hides the contributor name and email when you first see the code, and you can reveal it later. Right now it just does Bugzilla, but watch this space for an upcoming GitHub version. (more info)

  • Copy as Markdown. Not quite as full-featured as the old "Copy as HTML Link" but still a time-saver for blogging. Copy both the page title and URL, formatted as Markdown, for pasting into a blog.

  • Firefox Pioneer. Participate in Firefox user research. Studies have extremely strict and detailed privacy policies.

  • Test Pilot. Try new Firefox features. Tracking Protection was on Test Pilot for a while. Right now there is a new speech recognition one, an in-browser notepad, and more.

Advanced (for now) nerdery
  • Cookie AutoDelete. Similar to the old "Self-Destructing Cookies". Cleans up cookies after leaving a site. Useful but requires me to whitelist the sites where I want to stay logged in. More time-consuming than other privacy tools.

  • PrivacyPass. This is new. Privacy Pass interacts with supporting websites to introduce an anonymous user-authentication mechanism. In particular, Privacy Pass is suitable for cases where a user is required to complete some proof-of-work (e.g. solving an internet challenge) to authenticate to a service. Right now I don't use any sites that have it, but it could be a great way to distribute "tickets" for reading articles or leaving comments.

Note on ad blocking

If you run an ad blocker, the pre-57 add-ons check is a good time to make sure that you're not compromising your privacy by participating in a paid whitelisting scheme. As long as you have to go through your add-ons anyway, it's a great time to ditch AdBlock Plus or Adblock. They're taking advantage of users to shake down web sites.

What to use instead? For most people, either the built-in Firefox Tracking Protection or EFF's Privacy Badger will provide good protection. I would try one or both of those before a conventional ad blocker. If sites have a broken ad blocker detector that falsely identifies a tracking protection tool as an ad blocker, you can usually get around it by turning off JavaScript for that site with NJS.

If you still want to get rid of more ads and join the blocker vs. anti-blocker game (I don't), there's always uBlock Origin, which does not do paid whitelisting; the project site has more info. But try either the built-in tracking protection or Privacy Badger first.

Bonus links

New Firefox Quantum arrives November 14, 2017

Firefox Quantum 57 for developers

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR4 available

za, 11/11/2017 - 03:47
TenFourFox Feature Parity Release 4 final is now available (downloads, hashes, release notes). It will become live on Monday "evening" as usual. There is no debug version for final since the only reason I was doing that for the last FPR or two was to smoke out issue 72, for which the fix now appears to be sticking (but, as usual, there will be one for FPR5b1).

For FPR5 the big goals are expanded AltiVec (enable strchr() everywhere else, finish as much as possible of the AltiVec VP9 intra predictors), some DOM and Web compatibility improvements, and some additional performance improvements primarily in the session store module and the refresh driver. More on those soon.

Categorieën: Mozilla-nl planet