
Hacks.Mozilla.Org: An Update on MDN Web Docs

Mozilla planet - vr, 21/08/2020 - 20:03

Last week, Mozilla announced some general changes in our investments and we would like to outline how they will impact our MDN platform efforts moving forward. It hurts to make these cuts, and it’s important that we be candid on what’s changing and why.

First we want to be clear, MDN is not going away. The core engineering team will continue to run the MDN site and Mozilla will continue to develop the platform.

However, because of Mozilla’s restructuring, we have had to scale back our overall investment in developer outreach, including MDN. Our Co-Founder and CEO Mitchell Baker outlines the reasons why here. As a result, we will be pausing support for DevRel sponsorship, Hacks blog and Tech Speakers. The other areas we have had to scale back on staffing and programs include: Mozilla developer programs, developer events and advocacy, and our MDN tech writing.

We recognize that our tech writing staff provide a great deal of value to MDN users, as do partner contributions to the content. So we are working on a plan to keep the content up to date. We are continuing our planned platform improvements, including a GitHub-based submission system for contributors.

We believe in the value of MDN Web Docs as a premier web developer resource on the internet. We are currently planning how to move MDN forward long term, and will develop this new plan in close collaboration with our industry partners and community members.

Thank you all for your continued care and support for MDN,

— Rina Jensen, Director, Contributor Experience

The post An Update on MDN Web Docs appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Karl Dubost: Khmer Line Breaking

Mozilla planet - vr, 21/08/2020 - 07:08

I'm not an expert in Khmer language, it's just me stumbling on a webcompat issue and trying to make sense of it.

Khmer Language

The Khmer language, apart from being the official language of Cambodia (South-East Asia), is spoken by some people in Thailand and Vietnam.

Webcompat Issue - 56316

We received a webcompat issue recently where a long Khmer line on a mobile device was not wrapping, which broke the layout of the site. Jonathan Kew helped me figure out whether the issue was with the fonts or with the browser.

I don't think this is about fonts, it's that we don't have Khmer line-breaking support on Android. Line-breaking for SEAsian languages that are written without word spaces (e.g. Thai, Lao, Khmer) is based on calling an operating system API to find potential word-break positions. Hence the results are platform-dependent. Unfortunately on Android we don't have any such API to call, and so we don't find break positions within long runs of text. We have an internal line-breaker for Thai (and recently implemented some basic support for Tibetan), but nothing for Khmer.

That phrase, "find potential word-break positions", is what intrigued me.

Khmer Language Line Breaking

In Western languages like French and English, there are breaking opportunities, usually the spaces between words. So for example,

a sentence can break like this

because there are spaces between words, but in Khmer there are no spaces between words inside a phrase.

Thai, Lao, and Khmer are languages that are written with no spaces between words. Spaces do occur, but they serve as phrase delimiters, rather than word delimiters. However, when Thai, Lao, or Khmer text reaches the end of a line, the expectation is that text is wrapped a word at a time.

So how do you discover the word boundaries?

Most applications do this by using dictionary lookup. It’s not 100% perfect, and authors may need to adjust things from time to time.

It means that if you want better rendering in a browser, you need to either include a dictionary of words inside the browser or call a dictionary loaded on the system. And there are subtleties for compound words.
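To make this concrete, here is a small sketch (in TypeScript, not Gecko code) of what asking an ICU-backed dictionary segmenter for break opportunities can look like from script, using the Intl.Segmenter API where the engine supports it. Support varies by browser and version, and the function name is just illustrative.

// Sketch: ask an ICU-backed segmenter for word boundaries in Khmer text.
// Requires an engine that implements Intl.Segmenter (and, for TypeScript,
// the es2022 lib definitions). The function name is hypothetical.
function khmerBreakOpportunities(text: string): number[] {
  const segmenter = new Intl.Segmenter("km", { granularity: "word" });
  const positions: number[] = [];
  for (const { index } of segmenter.segment(text)) {
    if (index > 0) {
      positions.push(index); // candidate positions where a line may wrap
    }
  }
  return positions;
}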

The article How is Khmer line-breaking handled on the Web? tries to work out what the current status is.

But let's go back to Gecko on mobile.

Gecko Source Code For Line Breaking

I found a reference to line breaking for these specific languages in the Gecko source code, in LineBreaker.h.

I opened an issue on bugzilla so we can try to implement line breaking for Khmer language. I was wondering if it would be a simple modification, but Makoto Kato jumped in and commented

Old Android doesn't have native line break API, but Android 24+ can use ICU from Java (android.icu.text.BreakIterator). Since we still support Android 5+ on Fenix, so not easy.

Chromium, in Issue 136148: Add Khmer and Lao Line-Breaking layout tests, has some tests that might help if Mozilla decides to solve this issue.

Firefox Usage In Khmer Language Areas

I don't know if Firefox has a lot of usage in Khmer-speaking areas, but on mobile that kind of bug would definitely have a strong impact on the usability of the browser. It is important to report bugs; it helps to improve the platform. It also shows how challenging it can be to implement a browser given all the diversity and variability of contexts.

A small report might benefit a lot of people.

Otsukare!

Categorieën: Mozilla-nl planet

Daniel Stenberg: curl ping pong

Mozilla planet - do, 20/08/2020 - 23:51

Pretend that a ping pong ball represents a single curl installation somewhere in the world. Here’s a picture of one to help you get an image in your head.

Moving on with this game, you get one ball for every curl installation out there and your task is to put all those balls on top of each other. Okay, that’s hard to balance but for this game we can also pretend you have glue enough to make sure they stay like this. A tower of ping pong balls.

You soon realize that this is quite a lot of work. The balls keep pouring in.

How fast can you build?

If you manage to do this construction work non-stop at the rate of one ball per second (which seems like it maybe would be hard after a while but let’s not make that ruin the fun), it will keep you occupied for no less than a little bit over 317 years. (That also assumes the number of curl installations doesn’t grow significantly in the mean time.)

That’s a lot of ping pong balls. Ten billion of them, give or take.

Assuming you have friends to help you build this tower you can probably build it faster. If you can instead sustain a rate of 1000 balls per second, you’d be done in less than four months.

One official ping pong ball weighs 2.7 grams. It makes a total of 27,000 tonnes of balls. That’s quite some pressure on such a small surface. You better make sure to build the tower on something solid. The heaviest statue in the world is the Statue of Liberty in New York, clocking in at 24,500 tonnes.

That takes a lot of balls

But wait, the biggest ping pong ball manufacturer in the world (Double Happiness, in China – yes it’s really called that) “only” produces 200 million balls per year. It would take them 50 years to make balls for this tower. You clearly need to engage many factories.

You can get 100 balls for roughly 10 USD on Amazon right now. Maybe not the best balls to play with, but I think they might still suit this game. That’s a billion US dollars for the balls. Maybe you’d get a discount, but you’d also drastically increase demand, so…

How tall is that?

A tower of ten billion ping pong balls, how tall is that? It would reach the moon.

The diameter of a ping pong ball is 40 mm (it was officially increased from 38 mm back in 2000). This makes 25 balls per meter of tower. Conveniently aligned for our game here.

10,000,000,000 balls / 25 balls per meter = 400,000,000 meters. 400,000 km.

Distance from earth to moon? 384,400 km. The fully built tower is actually a little taller than the average distance to the moon! Here’s another picture to help you get an image in your head. (Although this image is not drawn to scale!)
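If you want to redo the arithmetic yourself, here is a throwaway sketch using the post's own estimates (ten billion installations, 40 mm and 2.7 grams per official ball); nothing in it is new data.

// Back-of-the-envelope numbers from the post, nothing more.
const balls = 10_000_000_000; // estimated curl installations
const diameterM = 0.04;       // 40 mm per official ball
const weightKg = 0.0027;      // 2.7 g per official ball

const towerHeightKm = (balls * diameterM) / 1000;            // 400,000 km
const totalTonnes = (balls * weightKg) / 1000;               // 27,000 tonnes
const yearsAtOnePerSecond = balls / (60 * 60 * 24 * 365.25); // roughly 317 years

console.log({ towerHeightKm, totalTonnes, yearsAtOnePerSecond });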

A challenge will of course be to keep this very thin tower steady when that tall. Winds and low temperatures should be challenging. And there’s the additional risk of airplanes or satellites colliding with it. Or even just birds interfering in the lower altitudes. I suspect there are also laws prohibiting such a construction.

Never mind

Come to think of it. This was just a mind game. Forget about it now. Let’s move on with our lives instead. We have better things to do.

Categorieën: Mozilla-nl planet

Mozilla Accessibility: Early Mac Firefox VoiceOver Support

Mozilla planet - do, 20/08/2020 - 22:04

We’ve made some great early progress with Firefox VoiceOver support on macOS and we’d love it if web developers could give it a test run and provide feedback on any issues you run into while evaluating web page accessibility. Please grab Firefox Dev Edition 80 and try it out. Thanks.

The post Early Mac Firefox VoiceOver Support appeared first on Mozilla Accessibility.

Categorieën: Mozilla-nl planet

The Mozilla Blog: A look at password security, Part IV: WebAuthn

Mozilla planet - do, 20/08/2020 - 20:35

As discussed in part III, public key authentication is great in principle but in practice has been hard to integrate into the Web environment. However, we’re now seeing deployment of a new technology called WebAuthn (short for Web Authentication) that hopefully changes that.1

Previous approaches to public key authentication required the browser to provide the user interface. For a variety of reasons (the interfaces were bad, the sites wanted to control the experience) this didn’t work well for sites, and public key authentication didn’t get much adoption. WebAuthn takes a different approach, which is to provide a JavaScript API that the site can use to do public key authentication via the browser.

The key difference here is that previous systems tended to operate at a lower layer (typically HTTP or TLS), which made it hard for the site to control how and when authentication happened.2 By contrast, a JS API puts the site in control so it can ask for authentication when it wants to (e.g., after showing the home page and prompting for the username).

Some Technical Details

WebAuthn offers two new API points that are used by the server’s JavaScript [Technical note: These are buried in the credential management API.]:

  1. makeCredential: Creates a new public key pair and returns the public key.
  2. getAssertion: Signs a challenge provided by the server using an existing credential.

The way this is used in practice is that when the user first registers with the server — or as is more likely now, when the server first adds WebAuthn support or detects that a client has it — the server uses makeCredential() to create a new public key pair and stores the public key, possibly along with an attestation. An attestation is a provable statement such as, “this public key was minted by a YubiKey.” Note that unlike some public key authentication systems, each server gets its own public key so WebAuthn is harder to use for cross-site tracking (more on this later). Then when the user returns, the site uses getAssertion(), causing the browser to sign the server’s challenge using the private key associated with the public key. The server can then verify the assertion, allowing it to determine that the client is the same endpoint as originally registered (for some value of “the same”. More on this later too).
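In today's browsers those two API points are reached through the Credential Management API as navigator.credentials.create() and navigator.credentials.get(). The following is a minimal sketch of the two calls, not any particular site's implementation; the relying-party name, user details, and the locally generated challenges are placeholder values (a real server would supply the challenges and verify the results).

// Registration: roughly the makeCredential() step described above.
async function register(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder; normally issued by the server
      rp: { name: "Example Site" },                          // hypothetical relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),      // placeholder; normally a stable server-side id
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    },
  });
}

// Login: roughly the getAssertion() step, signing a server challenge with an existing key.
async function login(credentialId: BufferSource): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder; normally issued by the server
      allowCredentials: [{ type: "public-key", id: credentialId }],
    },
  });
}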

The clever bit here is that because this is all hidden behind a JS API, the site can authenticate the client at any part of its login experience it wants without disrupting the user experience. In particular, WebAuthn can be used as a second factor in addition to a password or as a primary authenticator without a password.

Hardware Authenticators

The WebAuthn specification doesn’t require any particular mechanism for handling the key pair, so it’s technically possible to implement WebAuthn entirely in the browser, storing the key on the user’s disk. However, the designers of WebAuthn and its predecessor FIDO U2F were very concerned about the user’s machine being compromised and the private key being stolen, which would allow the attacker to impersonate the user indefinitely (just like if your password was compromised).

Accordingly, WebAuthn was explicitly designed around having the key pair in a hardware token. These tokens are designed to do all the cryptography internally and never expose the key, so if your computer is compromised, the attacker may be able to impersonate you temporarily, but they won’t be able to steal the key. This also has the advantage that the token is portable, so you can pull it out of your computer and carry it with you — thus minimizing the risk of it being stolen — or plug it into a second computer; it’s the token that matters, not the computer it’s plugged into. We’re also starting to see hardware-backed designs that don’t depend on a token. For instance, modern Macs have trusted hardware built in to power TouchID and FaceID, and Apple is using this to implement WebAuthn. We have been looking at similar designs for Firefox.

While hardware key storage isn’t mandatory, WebAuthn was designed to allow sites to require it. Obviously you can’t just trust the browser when it says that it’s storing the key in hardware and so WebAuthn includes an attestation scheme that is designed to let the site determine the type of token/device being used for WebAuthn. However, there are privacy concerns about the attestation scheme 3 and many sites don’t actually insist on it. Firefox shows a separate prompt (shown below) when the site requests attestation.

Privacy Properties and User Interactivity

While as a technical matter a browser or token could just do all the WebAuthn computations automatically with no user interaction, that’s not really what you want for two reasons:

  1. It allows sites to track users without their consent (this already happens with user login fields, which is why Firefox requires that the user interact with the page before it fills in their username or password).
  2. It would allow an attacker who had compromised your computer to invisibly log in as you.

In order to prevent this, FIDO-compliant tokens require the user to do something (typically touch the token) before signing an assertion. This prevents invisible tracking or use of the key to log in. Apple’s use of FaceID/TouchID takes this one step further, requiring a specific user to authorize a login, thus protecting you in case your laptop is stolen.

Alternative Designs

If you’re familiar with Web technologies, you might be wondering why we need something new here. In particular, many of the properties of WebAuthn could be replicated with cookies or WebCrypto. However, WebAuthn offers a number of advantages over these alternatives.

First, because WebAuthn requires user interaction prior to authentication, it is much harder to use for tracking. This means that the browser doesn’t need to clear WebAuthn state when it clears cookie or WebCrypto state, which it clears precisely because they can be used for invisible tracking. It would be possible to add some kind of explicit user action step before accessing cookies or WebCrypto, but then you would have something new.

Second, when used with keys in hardware, WebAuthn is more resistant to machine compromise. By contrast, cookies and WebCrypto state are generally stored in storage which is available directly to the browser, so if it’s compromised they can be stolen. While this is a real issue, it’s unclear how important it is: many sites use cookies for authentication over fairly long periods (when was the last time Facebook made you actually log in?) and so an attacker who steals your cookies will still be able to impersonate you for a long period. And of course the cost of this is that you have to buy a token.

Adoption Status

Technically, WebAuthn is a pretty big improvement over pre-existing systems. However, authentication systems tend to rely pretty heavily on network effects: it’s not worth users enabling it unless a lot of sites use it and it’s not worth sites enabling it unless a lot of users are willing to sign up. So far, indications are pretty promising: a number of important sites such as GSuite and Github already support WebAuthn as do SSO vendors like Okta and Duo. All four major browsers support it as well. With any luck we’ll be seeing a lot more WebAuthn deployment over the next few years — a big step forward for user security.

Up Next: Login and Device Encryption

This about wraps it up for remote authentication, but what about logging into your computer or phone? I’ll be covering that next.

Acknowledgement

Thanks to JC Jones and Chris Wood for help with this post.

  1. The WebAuthn spec is pretty hard to read. MDN’s article does a better job. 
  2. For instance, with TLS the easiest thing to do is to authenticate the user as soon as they connect, but this means you don’t get to show any UI, which is awkward for users who don’t yet have accounts. You can also do “TLS renegotiation” later in the connection but for a variety of technical reasons that has proven hard to integrate with servers. In addition, any TLS-level authentication is an awkward fit for CDNs because the TLS is terminated at the CDN, not at the origin. 
  3. The idea behind the attestation mechanism is that the device manufacturer issues a certificate to the device, and the device uses the corresponding private key to sign the newly generated authentication key. However, if that certificate is unique to the device and used for every site then it becomes a tracking vector. The specification suggests two (somewhat clunky) mechanisms for reducing the risk here, but neither is mandatory.

The post A look at password security, Part IV: WebAuthn appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

David Teller: Why Did Mozilla Remove XUL Add-ons?

Mozilla planet - do, 20/08/2020 - 18:09

TL;DR: Firefox used to have a great extension mechanism based on XUL and XPCOM. This mechanism served us well for a long time. However, it came at an ever-growing cost in terms of maintenance for both Firefox developers and add-on developers. On one side, this growing cost progressively killed any effort to make Firefox secure or fast, or to try new things. On the other side, this growing cost progressively killed the community of add-on developers. Eventually, after spending years trying to protect this old add-on mechanism, Mozilla made the hard choice of removing it and replacing it with the less powerful but much more maintainable WebExtensions API. Thanks to this choice, Firefox developers can once again make the necessary changes to improve security, stability and speed.

During the past few days, I’ve been chatting with Firefox users, trying to separate fact from rumor regarding the consequences of the August 2020 Mozilla layoffs. One of the topics that came back a few times was the removal of XUL-based add-ons during the move to Firefox Quantum. I was very surprised to see that, years after it happened, some community members still felt hurt by this choice.

And then, as someone pointed out on reddit, I realized that we still haven’t taken the time to explain in-depth why we had no choice but to remove XUL-based add-ons.

So, if you’re ready for a dive into some of the internals of add-ons and Gecko, I’d like to take this opportunity to try and give you a bit more detail.

Categorieën: Mozilla-nl planet

Mozilla Privacy Blog: Practicing Lean Data and Defending “Lean Data”

Mozilla planet - do, 20/08/2020 - 15:00

At Mozilla, we put privacy first. We do this in our own products with features like tracking protection. We also promote privacy in our public advocacy. A key feature of our privacy work is a commitment to reducing the amount of user data that is collected in the first place. Focusing on the data you really need lowers risk and promotes trust. Our Lean Data Practices page describes this framework and includes tools and tips for staying lean. For years, our legal and policy teams have held workshops around the world, advising businesses on how they can use lean data practices to reduce their data footprint and improve the privacy of their products and services.

Mozilla is not the only advocate for lean data. Many, many, many, many, many, many, many, many, many others use the term “lean data” to refer to the principle of minimizing data collection. Given this, we were very surprised to receive a demand letter from lawyers representing LeanData, Inc. claiming that Mozilla’s Lean Data Practices page infringes the company’s supposed trademark rights. We have responded to this letter to stand up for everyone’s right to use the words “lean data” in digital advocacy.

Our response to LeanData explains that it cannot claim ownership of a descriptive term such as “lean data.” In fact, when we investigated its trademark filings we discovered that the US Patent and Trademark Office (USPTO) had repeatedly rejected the company’s attempts to register a wordmark that covered the term. The USPTO has cited numerous sources, including the very Mozilla page LeanData accused of infringing, as evidence that “lean data” is descriptive. Also, the registration for LeanData’s logo cited in the company’s letter to Mozilla was recently cancelled (and it wouldn’t cover the words “lean data” in any event). LeanData’s demand is without merit.

In a follow-up letter, LeanData, Inc. acknowledged that it does not have any currently registered marks on “lean data.” LeanData’s lawyer suggested, however, the company will continue to pursue its application for a “LeanData” wordmark. We believe the USPTO should, and will, continue to reject this application. Important public policy discussions must be free from intellectual property overreach. Scholars, engineers, and commentators should be allowed to use a descriptive term like “lean data” to describe a key digital privacy principle.

The post Practicing Lean Data and Defending “Lean Data” appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Daniel Stenberg: curl 7.72.0 – more compression

Mozilla planet - wo, 19/08/2020 - 09:55

Welcome to another release, seven weeks since we did the patch release 7.71.1. This time we add a few new subtle features so the minor number is bumped yet again. Details below.

Release presentation video

Numbers

the 194th release
3 changes
49 days (total: 8,188)

100 bug fixes (total: 6,327)
134 commits (total: 26,077)
0 new public libcurl function (total: 82)
0 new curl_easy_setopt() option (total: 277)

0 new curl command line option (total: 232)
52 contributors, 29 new (total: 2,239)
30 authors, 14 new (total: 819)
1 security fix (total: 95)
500 USD paid in Bug Bounties (total: 2,800 USD)

Security

CVE-2020-8231: “libcurl: wrong connect-only connection”. This is a rather obscure issue that we’ve graded severity Low. There’s a risk that an application that’s using libcurl to do connect-only connections (ie not doing the full transfer with libcurl, just using it to set up the connection) accidentally sends or reads data over the wrong connection, as libcurl could mix them up internally in rare circumstances.

We rewarded 500 USD to the reporter of this security flaw.

Features

This is the first curl release that supports zstd compression. zstd is yet another way to compress content data over HTTP, and if curl supports it, it can then automatically decompress it on the fly. zstd is designed to compress better and faster than gzip and, if I understand the numbers shown, it is less CPU intensive than brotli. In purely practical terms, curl will ask for this compression in addition to the other supported algorithms if you tell curl you want compressed content. zstd is still not widely supported by browsers.

For clients that supports HTTP/2 and server push, libcurl now allows the controlling callback (“should this server push be accepted?”) to return an error code that will tear down the entire connection.

There’s a new option for curl_easy_getinfo called CURLINFO_EFFECTIVE_METHOD that lets the application ask libcurl what the most recent request method used was. This is relevant in case you’ve allowed libcurl to follow redirects for a POST, where it might have changed the method as a result of the particular HTTP response the server responded with.

Bug-fixes

Here are a collection of bug-fixes I think stood out a little extra in this cycle.

cmake: fix windows xp build

I just love the fact that someone actually tried to build curl for Windows XP, noticed it failed in doing so and provided the fix to make it work again…

curl: improve the existing file check with -J

There were some minor mistakes in the code that checks if the file you get when you use -J already existed. That logic has now been tightened. Presumably not a single person ever actually had an actual problem with that before either, but…

ftp: don’t do ssl_shutdown instead of ssl_close

We landed an FTPS regression in 7.71.1 where we accidentally did the wrong function call when closing down the data connection. It could make consecutive FTPS transfers terribly slow.

http2: repair trailer handling

We had another regression reported where HTTP trailers when using HTTP/2 really didn’t work. Obviously not a terribly well-used feature…

http2: close the http2 connection when no more requests may be sent

Another little HTTP/2 polish: make sure that connections that have received a GOAWAY are marked for closure so that they get closed sooner rather than later, as no new streams can be created on them anyway!

multi_remove_handle: close unused connect-only connections

“connect-only connections” are those where the application asks libcurl to just connect to the site and not actually perform any request or transfer. Previously when that was done, the connection would remain in the multi handle until it was closed and it couldn’t be reused. Starting now, when the easy handle that “owns” the connection is removed from the multi handle the associated connect-only connection will be closed and removed. This is just sensible.

ngtcp2: adapt to changes

ngtcp2 is a QUIC library and is used in one of the backends curl supports for HTTP/3. HTTP/3 in curl is still marked experimental and we aim at keeping the latest curl code work with the latest QUIC libraries – since they’re both still “pre-beta” versions and don’t do releases yet. So, if you find that the HTTP/3 build fails, make sure you use the latest git commits of all the h3 components!

quiche: handle calling disconnect twice

If curl would call the QUIC disconnect function twice, using the quiche backend, it would crash hard. Would happen if you tried to connect to a host that didn’t listen to the UDP port at all for example…

setopt: unset NOBODY switches to GET if still HEAD

We recently fixed a bug for storing the HTTP method internally and due to refactored code, the behavior of unsetting the CURLOPT_NOBODY option changed slightly. There was never any promise as to what exactly that would do – but apparently several users had already drawn conclusions and written applications based on that. We’ve now adapted somewhat to that presumption on undocumented behavior by documenting better what it should do and by putting back some code to back it up…

http2: move retrycount from connect struct to easy handle

Yet another HTTP/2 fix. In a recent release we fixed a problem that materialized when libcurl received a GOAWAY on a stream for a HTTP/2 connection, and it would then instead try a new connection to issue the request over and that too would get a GOAWAY. libcurl will do these retry attempts up to 5 times but due to a mistake, the counter was stored wrongly and was cleared when each new connection was made…

url: fix CURLU and location following

libcurl supports two ways of setting the URL to work with. The good old string to the entire URL and the option CURLOPT_CURLU where you provide the handle to an already parsed URL. The latter is of course a much newer option and it turns out that libcurl didn’t properly handle redirects when the URL was set with this latter option!

Coming up

There are already several Pull Requests waiting in line to get merged that add new features and functionality. We expect the next release to become 7.73.0 and ship on October 14, 2020. Fingers crossed.

Categorieën: Mozilla-nl planet

Eric Shepherd: Moz-eying along…

Mozilla planet - di, 18/08/2020 - 19:10

By now, most folks have heard about Mozilla’s recent layoff of about 250 of its employees. It’s also fairly well known that the entire MDN Web Docs content team was let go, aside from our direct manager, the eminently-qualified and truly excellent Chris Mills. That, sadly, includes myself.

Yes, after nearly 14½ years writing web developer documentation for MDN, I am moving on to new things. I don’t know yet what those new things are, but the options are plentiful and I’m certain I’ll land somewhere great soon.

Winding down

But it’s weird. I’ve spent over half my career as a technical writer at Mozilla. When I started, we were near the end of documenting Firefox 1.5, whose feature features (sorry) were the exciting new <canvas> element and CSS improvements including CSS columns. A couple of weeks ago, I finished writing my portions of the documentation for Firefox 80, for which I wrote about changes to WebRTC and Web Audio, as well as the Media Source API.

Indeed, in my winding-down days, when I’m no longer assigned specific work to do, I find myself furiously writing as much new material as I can for the WebRTC documentation, because I think it’s important, and there are just enough holes in the docs as they stand to make life frustrating for newcomers to the technology. I won’t be able to fix them all before I’m gone, but I’ll do what I can.

Because that’s how I roll. I love writing developer documentation, especially for technologies for which no documentation yet exists. It’s what I do. Digging into a technology and coding and debugging and re-coding (and cursing and swearing a bit, perhaps) until I have working code that ensures I understand what I’m going to write about is a blast! Then I use that code, and what I learned while creating it, to create documentation that helps developers avoid at least some of the debugging (and cursing and swearing a bit, perhaps) that I had to go through.

The thrill of creation is only outweighed by the deep-down satisfaction that comes from knowing that what you’ve produced will help others do what they need to do faster, more efficiently, and—possibly most importantly—better.

That’s the dream.

Wrapping up

Anyway, I will miss Mozilla terribly; it was a truly wonderful place to work. I’ll miss working on MDN content every day; it was my life from the day I joined as the sole full-time writer, through the hiring and departure of several other writers, until the end.

First, let me thank the volunteer community of writers, editors, and translators who’ve worked on MDN in the past—and who I hope will continue to do so going forward. We need you more than ever now!

(Image: Jean-Luc Picard demonstrating the facepalm. Caption: me, if I’ve forgotten to mention anyone.)

Then there are our staff writers, both past and present. Jean-Yves Perrier left the team a long while ago, but he was a fantastic colleague and a great guy. Jérémie Pattonier was a blast to work with and a great asset to our team. Paul Rouget, too, was a great contributor and a great person to work with (until he moved on to engineering roles; then he became a great person to get key information from, because he was so easy to engage with).

Chris Mills, our amazing documentation team manager and fabulous writer in his own right, will be remaining at Mozilla, and hopefully will find ways to make MDN stay on top of the heap. I’m rooting for you, Chris!

Florian Scholz, our content lead and the youngest member of our team (a combination that tells you how amazing he is) was a fantastic contributor from his school days onward, and I was thrilled to have him join our staff. I’m exceptionally proud of his success at MDN.

Janet Swisher, who managed our community as well as writing documentation, may have been the rock of our team. She’s been a steadfast and reliable colleague and a fantastic source of advice and ideas. She kept us on track a lot of times when we could have veered sharply off the rails and over a cliff.

Will Bamberg has never been afraid to take on a big project. From developer tools to browser extensions to designing our new documentation platform, I’ve always been amazed by his focus and his ability to do big things well.

Thank you all for the hard work, the brilliant ideas, and the devotion to making the web better by teaching developers to create better sites. We made the world a little better for everyone in it, and I’m very, very proud of all of us.

Farewell, my friends.

Categorieën: Mozilla-nl planet

Mozilla Attack & Defense: Bug Bounty Program Updates: Adding (another) New Class of Bounties

Mozilla planet - di, 18/08/2020 - 16:15

Recently we increased bounty payouts and also included a Static Analysis component in our bounty program, and we are now expanding the program even further with a new Exploit Mitigation Bounty. Within Firefox, we have introduced vital security features, exploit mitigations, and defense-in-depth measures. If you are able to bypass one of these measures, even if you are operating from privileged access within the browser, you are now eligible for a bounty, even if it would not have qualified before.

Previously, bypassing a mitigation in a testing scenario – such as directly testing the HTML Sanitizer – would be classified as sec-low or sec-moderate; it will now be eligible for a bounty equivalent to a sec-high. Additionally, if the vulnerability is triggerable without privileged access, it will count as both a regular security vulnerability eligible for a bounty and a mitigation bypass, earning a bonus payout. We have an established list of the mitigations we consider in scope for this bounty; they, and more details, are available on the Client Bug Bounty page.

Finally, based on our experience with our Nightly channel, we are making a change to how we handle recent regressions. Occasionally we will introduce a new issue that is immediately noticed. These breaking changes are frequently caught by multiple systems including Mozilla’s internal fuzzing efforts, crash reports, internal Nightly dogfooding, and telemetry – and also sometimes by external bounty participants performing fuzzing on Nightly.

We still want to encourage bounty hunting on Nightly – even if other bounty programs don’t – but issuing bounties for obvious transient issues we find ourselves is not improving the state of Firefox security or encouraging novel fuzzer improvements. While some bounty programs won’t issue a bounty if an issue is also found internally at all, we will continue to do so. However, we are implementing a four-day grace period beginning when a code change that exposes a vulnerability is checked-in to the primary repository for that application. If the issue is identified internally within this grace period it will be ineligible for a bounty. After four days, if no one working on the project has reported the issue it is eligible.

We’re excited to expand our program by providing more specific targets of Firefox internals we would appreciate attention to – keep watch here and on Twitter for more tips, tricks, and targets for Firefox bounty hunting!

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 352

Mozilla planet - di, 18/08/2020 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Check out this week's This Week in Rust Podcast

Updates from Rust Community (Official, Tooling, Newsletters, Observations/Thoughts, Learn Standard Rust, Learn More Rust, Project Updates, Miscellaneous)

Crate of the Week

This week's crate is cargo-c, a cargo subcommand to build and install C-ABI compatible dynamic and static libraries.

Thanks to Zicklag for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

345 pull requests were merged in the last week

Rust Compiler Performance Triage
  • 2020-08-17. 4 regressions, 3 improvements, 4 mixed bags.
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online, North America, Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

As Dave Herman always told me, “macros are for when you run out of language”. If you still have language left—and Rust gives you a lot of language—use the language first.

Thanks to Nixon Enraght-Moony for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Laying the foundation for Rust's future

Mozilla planet - di, 18/08/2020 - 02:00

The Rust project was originally conceived in 2010 (depending on how you count, you might even say 2006!) as a Mozilla Research project, but the long term goal has always been to establish Rust as a self-sustaining project. In 2015, with the launch of Rust 1.0, Rust established its project direction and governance independent of the Mozilla organization. Since then, Rust has been operating as an autonomous organization, with Mozilla being a prominent and consistent financial and legal sponsor.

Mozilla was, and continues to be, excited by the opportunity for the Rust language to be widely used, and supported, by many companies throughout the industry. Today, many companies, both large and small, are using Rust in more diverse and more significant ways, from Amazon’s Firecracker, to Fastly’s Lucet, to critical services that power Discord, Cloudflare, Figma, 1Password, and many, many more.

On Tuesday, August 11th 2020, Mozilla announced their decision to restructure the company and to lay off around 250 people, including folks who are active members of the Rust project and the Rust community. Understandably, these layoffs have generated a lot of uncertainty and confusion about the impact on the Rust project itself. Our goal in this post is to address those concerns. We’ve also got a big announcement to make, so read on!

Community impact

There’s no denying the impact these layoffs have had on all members of the Rust community, particularly the folks who have lost their jobs in the middle of a global pandemic. Sudden, unexpected layoffs can be a difficult experience, and they are made no less difficult when it feels like the world is watching. Impacted employees who are looking for job assistance can be found on Mozilla’s talent directory.

Notwithstanding the deep personal impact, the Rust project as a whole is very resilient to such events. We have leaders and contributors from a diverse set of different backgrounds and employers, and that diversity is a critical strength. Further, it is a common misconception that all of the Mozilla employees who participated in Rust leadership did so as a part of their employment. In fact, many Mozilla employees in Rust leadership contributed to Rust in their personal time, not as a part of their job.

Finally, we would like to emphasize that membership in Rust teams is given to individuals and is not connected to one’s employer. Mozilla employees who are also members of the Rust teams continue to be members today, even if they were affected by the layoffs. Of course, some may choose to scale back their involvement. We understand not everyone might be able to continue contributing, and we would fully support their decision. We're grateful for everything they have done for the project so far.

Starting a foundation

As the project has grown in size, adoption, and maturity, we’ve begun to feel the pains of our success. We’ve developed legal and financial needs that our current organization lacks the capacity to fulfill. While we were able to be successful with Mozilla’s assistance for quite a while, we’ve reached a point where it’s difficult to operate without a legal name, address, and bank account. “How does the Rust project sign a contract?” has become a question we can no longer put off.

Last year, we began investigating the idea of creating an independent Rust foundation. Members of the Rust Team with prior experience in open source foundations got together to look at the current landscape, identifying the things we’d need from a foundation, evaluating our options, and interviewing key members and directors from other foundations.

Building on that work, the Rust Core Team and Mozilla are happy to announce plans to create a Rust foundation. The Rust Core Team's goal is to have the first iteration of the foundation up and running by the end of the year.

This foundation’s first task will be something Rust is already great at: taking ownership. This time, the resource is legal, rather than something in a program. The various trademarks and domain names associated with Rust, Cargo, and crates.io will move into the foundation, which will also take financial responsibility for the costs they incur. We see this first iteration of the foundation as just the beginning. There’s a lot of possibilities for growing the role of the foundation, and we’re excited to explore those in the future.

For now though, we remain laser-focused on these initial narrow goals for the foundation. As an immediate step the Core Team has selected members to form a project group driving the efforts to form the foundation. Expect to see follow-up blog posts from the group with more details about the process and opportunities to give feedback. In the meantime, you can email the group at foundation@rust-lang.org.

Leading with infrastructure

While we have only begun the process of setting up the foundation, over the past two years the Infrastructure Team has been leading the charge to reduce the reliance on any single company sponsoring the project, as well as growing the number of companies that support Rust.

These efforts have been quite successful, and — as you can see on our sponsorship page — Rust’s infrastructure is already supported by a number of different companies throughout the ecosystem. As we legally transition into a fully independent entity, the Infrastructure Team plans to continue their efforts to ensure that we are not overly reliant on any single sponsor.

Thank you

We’re excited to start the next chapter of the project by forming a foundation. We would like to thank everyone we shared this journey with so far: Mozilla for incubating the project and for their support in creating a foundation, our team of leaders and contributors for constantly improving the community and the language, and everyone using Rust for creating the powerful ecosystem that drives so many people to the project. We can’t wait to see what our vibrant community does next.

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Adjusting to changes at Mozilla

Mozilla planet - di, 18/08/2020 - 01:38

Earlier last week, Mozilla announced a number of changes and these changes include aspects of SUMO as well.

For a high level overview of these changes, we encourage you to read Mitchell’s address to the community. For Support, the most immediate change is that we will be creating a more focused team that combines Pocket Support and Mozilla Support into a single team.

We want to take a moment to stress that Mozilla remains fully committed to our Support team and community, and the team changes are in no way a reflection on Mozilla’s focus on Support moving forward. The entire organization is grateful for all the hard work the community does everyday to support the products we all love. Community is the heart of Mozilla, and that can be said for our support functions as well. As we make plans as a combined Support team, we’d love to hear from you as well, so please feel free to reach out to us.

We very much appreciate your patience while we adjust to these changes.

On behalf of the Support team – Rina

Categorieën: Mozilla-nl planet

William Lachance: mozregression and older builds

Mozilla planet - ma, 17/08/2020 - 21:01

Periodically the discussion comes up about pruning away old stored Firefox build artifacts in S3. Each build is tens of megabytes, multiply that by the number of platforms we support and the set of revisions we churn through on a daily basis, and pretty soon you’re talking about real money.

This came up recently in a discussion about removing the legacy taskcluster deployment — what do we actually lose by cutting back our archive of integration builds? The main reason to keep them around is to facilitate bisection testing with mozregression, to find out when a bug was introduced. Up to now, discussions about this have been a bit hand-wavey: we do keep logs about who’s accessing old builds, but it’s never been clear whether it was mozregression accessing them or something else.

Happily, now that mozregression has some telemetry, it’s a little easier to get some answers on what people are actually doing. This query gets the distribution of build ages (launched or bisected) over the past 6 months, at a month long granularity.1 Ages are relative to the date mozregression was launched: for example, if someone asked for a build from May 2019 in June 2020, the number would be “13”.

SELECT
  metrics.string.usage_app AS app,
  metrics.string.usage_build_type AS build_type,
  DATE_DIFF(
    DATE(submission_timestamp),
    IF(
      LENGTH(metrics.datetime.usage_bad_date) > 0,
      PARSE_DATE('%Y-%m-%d', substr(metrics.datetime.usage_bad_date, 1, 10)),
      PARSE_DATE('%Y-%m-%d', substr(metrics.datetime.usage_launch_date, 1, 10))
    ),
    MONTH
  ) + 1 AS build_age
FROM `moz-fx-data-shared-prod`.org_mozilla_mozregression.usage
WHERE
  DATE(submission_timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 6 MONTH)
  AND client_info.app_display_version NOT LIKE '%dev%'
  AND LENGTH(metrics.string.usage_build_type) > 0
  AND (
    LENGTH(metrics.datetime.usage_bad_date) > 0
    OR LENGTH(metrics.datetime.usage_launch_date) > 0
  )

I ran this query on sql.telemetry.mozilla.org and generated a box plot, broken down by product and build type:

link (requires Mozilla LDAP)

Unsurprisingly, Firefox shippable builds are the number one thing people try to bisect. Let’s take a little bit of a closer look at what’s going on there:

The median value is 1, which indicates that most people are bisecting builds within one month of the day in which mozregression was run. And the upper fence result is 6, suggesting that most of the time people are looking at a regression range that is within a 6 month range. However, looking more closely at the data points themselves (the little points in the chart above), there are a considerable number of outliers where a range greater than 20 months was asked for.

… which brings us to the question that we want to answer. Given that getting old builds isn’t that common (which we sort of knew already, based on the access patterns in the S3 logs), what is the impact of the times that we do? And it’s here where I have to throw up my hands and say “I don’t know” and suggest that we go back to empirical observation and user research.

You can go back to the thread I linked above, and see that core Firefox/Gecko developers find the ability to get a precise regression range for older revisions valuable. One thing that’s worth mentioning is that mozregression isn’t run that often, compared to a product that we ship: on the order of 50 to 100 times per day. But when it comes to internal tooling, a small amount of use might have a big impact: if a mozregression invocation saves a developer a few hours (or more), that’s a real benefit to Firefox and Mozilla. The same argument might apply here, where a small number of bisections on older builds might have a disproportionate impact on the quality of the product.

  1. I only added the telemetry to capture this information relatively recently, so we’re actually only looking at about a month of data in this post. We’ll have more complete results later this year. 

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR26b1 available (after all, Mozilla's not dead yet)

Mozilla planet - zo, 16/08/2020 - 07:02
TenFourFox Feature Parity Release 26 beta 1 is now available (downloads, hashes, release notes). There isn't a great deal in this release due to continued heavy workload at my regular job and summer heat here in excessively sunny Southern California making running the G5 and the Talos II at the same time pretty miserable, and I also had the better part of a week laid up ill to boot (note: not COVID-19). Still, this hopefully completes the work on DOM workers and the usual security updates, which will switch to 78ESR starting with FPR27. All going well, it will be released on August 25.

With much of the low-hanging fruit gone that a solo developer can reasonably do on their own, for FPR27 I would like to resurrect an old idea I had about a "permanent Reader mode" where once you enter Reader mode, clicking links keeps you in it until you explicitly exit. I think we should be leveraging Reader mode more as Readability improves because it substantially lowers the horsepower needed to usefully render a page, and we track current releases of Readability fairly closely. I'm also looking at the possibility of implementing a built-in interface to automatically run modifier scripts on particular domains or URLs, similar to Classilla's stelae idea but operating at the DOM level a la Greasemonkey like TenFourFox's AppleScript-JavaScript bridge does. The browser would then ship with a default set of modifier scripts and users could add their own. This might have some performance impact, however, so I have to think about how to do these checks quickly.

A few people have asked what the Mozilla layoffs mean for TenFourFox. Not much, frankly, because even though the layoffs affect the Mozilla security team there will still be security updates, and we'll continue to benefit as usual from backporting those to TenFourFox's modified Firefox 45 base (as well as downstream builders that use our backports for their own updates to Fx45). In particular I haven't heard the layoffs have changed anything for the Extended Support Releases of Firefox, from which our continued security patches derive, and we don't otherwise rely on Mozilla infrastructure for anything else; the rest is all local Floodgap resources for building and hosting, plus Tenderapp for user support, SourceForge for binaries and mirrors and Github for source code, wiki and issues.

But it could be a bigger deal for OpenPOWER systems like the Talos II next to the G5 if Mozilla starts to fade. I wrote on Talospace a good year and a half ago how critical Firefox is to unusual platforms, not least because of Google's general hostility to patches for systems they don't consider market relevant; I speak from personal experience on how accepting Mozilla is of Tier 3 patches as long as they don't screw up Tiers 1 and 2. Although the requirement of a Rust compiler is an issue for 32-bit PowerPC (and Tiger and Leopard specifically, since we don't have thread-local storage either), much of the browser still generally "just builds" even in the absence of architecture-specific features. Besides, there's the larger concern of dealing with a rapidly changing codebase controlled by a single entity more interested in the promulgation of its own properties and designing their browser to be those services' preferred client, which is true whether you're using mainline Chrome or any of the Chromium-based third-party browsers. That may make perfect business sense for them and for certain values of "good" it may even yield a good product, but it's in service of the wrong goal, and it's already harming the greater community by continuing to raise the barrier to entry for useful browser competition. We damned IE when Microsoft engaged in embrace, extend and extinguish; we should make the same judgment call when Google engages in the same behaviour. We have no spine for meaningful anti-trust actions in the United States anymore and this would be a good place to start.

Categorieën: Mozilla-nl planet

Daniel Stenberg: Video: Landing code in curl

Mozilla planet - do, 13/08/2020 - 23:13

A few hours ago I ended my webinar on how to get your code contribution merged into curl. Here’s the video of it:

Here are the slides.

Categorieën: Mozilla-nl planet

Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - do, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that I would read aloud while the audience read them silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox Gecko engine on an Android Linux kernel, where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

(Slide images: screenshots of the Firefox OS home screen, the clock app, and the email app.)

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing: we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough JavaScript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, for example if our application is being triggered to display a compose UI, or a new mail notification wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available and then falls back to WebSQL and local storage, in that order.
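For the general case, a minimal sketch of that approach (assuming the localForage script is already loaded; the key name and data are made up) looks like this:

// A minimal localForage sketch: same get/set idea as localStorage, but
// asynchronous, so nothing blocks the event loop while data is read from disk.
localforage.setItem('draft', { subject: 'Hello', body: 'Some text' }).then(function () {
  return localforage.getItem('draft');
}).then(function (draft) {
  console.log('restored draft subject:', draft.subject);
}).catch(function (err) {
  console.error('storage failed:', err);
});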

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.
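As a small illustrative sketch (the log messages are mine, not the email app’s), those two hooks are used like this:

// Neither event knows when *your* data is actually on screen; they are simply
// the two milestones the platform gives you.
document.addEventListener('DOMContentLoaded', function () {
  console.log('document parsed; non-async scripts have run');
});
window.addEventListener('load', function () {
  console.log('stylesheets and images are in; Firefox OS stops showing the app icon here');
});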

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.
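A minimal, hypothetical sketch of that startup path (the element id, cache key, and entry-point check are invented; the real logic is more involved):

// Hypothetical sketch: decide synchronously whether the cached HTML applies,
// inject it, and refresh the cache later once the real UI has rendered.
(function bootFromCache() {
  var wantsDefaultInbox = !window.location.hash;        // e.g. not a compose/notification entry
  var cachedHtml = localStorage.getItem('htmlCache');   // synchronous read
  if (wantsDefaultInbox && cachedHtml) {
    document.getElementById('cards').innerHTML = cachedHtml;
  }
})();

// Later, after the real message list has been rendered from the database:
function updateHtmlCache(cardsNode) {
  try {
    localStorage.setItem('htmlCache', cardsNode.innerHTML);
  } catch (e) {
    // A quota error shouldn't break the app; just skip caching this round.
  }
}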

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard CommonJS loader semantics used by node.js and io.js are what you see in the first line.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  Loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.
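For flavor, a hypothetical r.js build profile along those lines might look roughly like this (the module ids and paths are invented; Gaia’s real build is more elaborate):

// build.js: feed this to r.js ("r.js -o build.js") to produce one optimized file
// containing the core app, the message list card, and its inlined templates.
({
  baseUrl: 'js',
  name: 'mail_app',                   // hypothetical entry module
  include: ['cards/message_list'],    // pull the inbox card into the same bundle
  out: 'mail_app-built.js'
})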

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
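A hedged sketch of the idea (the module ids and the hook point are made up):

// Once the message list card is visible, warm up the cards it can navigate to
// by asking the AMD loader for them ahead of time.
var preloadTargets = ['cards/compose', 'cards/message_reader'];
function preloadCards() {
  require(preloadTargets, function () {
    // Nothing to do here; the point is that the modules (and their templates,
    // via loader plugins) are now sitting in the loader's cache.
  });
}
// e.g. call preloadCards() after the message list finishes its first paint.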

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
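A rough, hypothetical sketch of that split (the file name, message shapes, and handlers are invented for illustration):

// Main thread: start the back-end in a worker and talk to it with postMessage.
var backend = new Worker('js/mail-backend.js');   // hypothetical file name
backend.onmessage = function (event) {
  if (event.data.type === 'headersLoaded') {
    console.log('got', event.data.headers.length, 'headers to render');
  }
};
backend.postMessage({ type: 'loadHeaders', folderId: 'inbox', count: 30 });

// js/mail-backend.js (worker thread): no DOM here, just databases, sockets,
// and message passing back to the page.
self.onmessage = function (event) {
  if (event.data.type === 'loadHeaders') {
    // ...query IndexedDB, hit the network, etc., then reply:
    self.postMessage({ type: 'headersLoaded', headers: [] });
  }
};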

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
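To make that concrete, here is a hedged sketch of the recycling step (the selectors, field names, and sizing are made up; the real widget handles many more cases):

var scrollRegion = document.querySelector('.message-list');  // hypothetical container

function sizeScrollRegion(messageCount, itemHeight) {
  // Size the container for the whole folder so the scrollbar reflects reality.
  scrollRegion.style.height = (messageCount * itemHeight) + 'px';
}

function recycleNode(node, message, index, itemHeight) {
  node.remove();                                   // detach before mutating
  node.querySelector('.subject').textContent = message.subject;
  node.querySelector('.author').textContent = message.author;
  node.style.top = (index * itemHeight) + 'px';    // absolute position in the scroll area
  scrollRegion.appendChild(node);                  // re-attach so APZ sees a "new", static node
}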

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome that let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

Categorieën: Mozilla-nl planet

Joshua Cranmer: Breaking news

Thunderbird - wo, 01/04/2015 - 09:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

Categorieën: Mozilla-nl planet

Joshua Cranmer: Why email is hard, part 8: why email security failed

Thunderbird - di, 13/01/2015 - 05:38
This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others failed?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].

The failure of S/MIME and PGP to see large deployment is certainly a frequent topic of discussion on myriad cryptography enthusiast mailing lists, which often like to entertain proposals for new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I think it unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped on many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to do other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely a matter of picking one protocol to publish keys and another to find them. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: lost private keys and key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser-known one is server-side search and filtering. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 messages in a single folder—would still prefer to not have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

Categorieën: Mozilla-nl planet

Joshua Cranmer: A unified history for comm-central

Thunderbird - za, 10/01/2015 - 18:55
Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step and probably the hardest is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy on git.mozilla.org after I wrote this post. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash
set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204
# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a
# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets that are useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the filemap entries are simple directory-wide includes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with the @ that hg users are used to seeing. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
  hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
    | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.

Categorieën: Mozilla-nl planet
