
The Mozilla Blog: What we do when things go wrong

Mozilla planet - Thu, 09/05/2019 - 22:53

We strive to make Firefox a great experience. Last weekend we failed, and we’re sorry.

An error on our part prevented new add-ons from being installed, and stopped existing add-ons from working. Now that we’ve been able to restore this functionality for the majority of Firefox users, we want to explain a bit about what happened and tell you what comes next.

Add-ons are an important feature of Firefox. They enable you to customize your browser and add valuable functionality to your online experience. We know how important this is, which is why we’ve spent a great deal of time over the past few years coming up with ways to make add-ons safer and more secure. However, because add-ons are so powerful, we’ve also worked hard to build and deploy systems to protect you from malicious add-ons. The problem here was an implementation error in one such system, with the failure mode being that add-ons were disabled. Although we believe that the basic design of our add-ons system is sound, we will be working to refine these systems so similar problems do not occur in the future.

In order to address this issue as quickly as possible, we used our “Studies” system to deploy the initial fix, which requires users to be opted in to Telemetry. Some users who had opted out of Telemetry opted back in, in order to get the initial fix as soon as possible. As we announced in the Firefox Add-ons blog at 2019-05-08T23:28:00Z, there is no longer a need to have Studies on to receive updates; please check that your settings match your personal preferences before we re-enable Studies, which will happen sometime after 2019-05-13T16:00:00Z. To respect our users’ intentions as much as possible, given our current setup, we will be deleting all of the source Telemetry and Studies data collected from our entire user population between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z.

Our CTO, Eric Rescorla, shares more about what happened technically in this post.

We would like to extend our thanks to the people who worked hard to address this issue, including the hundred or so community members and employees localizing content and answering questions on https://support.mozilla.org/, Twitter, and Reddit.

There’s a lot more detail we will be sharing as part of a longer post-mortem which we will make public — including details on how we went about fixing this problem and why we chose this approach. You deserve a full accounting, but we didn’t want to wait until that process was complete to tell you what we knew so far. We let you down and what happened might have shaken your confidence in us a bit, but we hope that you’ll give us a chance to earn it back.

The post What we do when things go wrong appeared first on The Mozilla Blog.


Hacks.Mozilla.Org: Technical Details on the Recent Firefox Add-on Outage

Mozilla planet - Thu, 09/05/2019 - 22:06

Editor’s Note: May 9, 8:22 PT – Updated as follows: (1) fixed verb tense; (2) clarified the situation with downstream distros. For more detail, see Bug 1549886.

Recently, Firefox had an incident in which most add-ons stopped working. This was due to an error on our end: we let one of the certificates used to sign add-ons expire, which had the effect of disabling the vast majority of add-ons. Now that we’ve fixed the problem for most users and most people’s add-ons are restored, I wanted to walk through the details of what happened, why, and how we repaired it.

Background: Add-Ons and Add-On Signing

Although many people use Firefox out of the box, Firefox also supports a powerful extension mechanism called “add-ons”. Add-ons allow users to add third party features to Firefox that extend the capabilities we offer by default. Currently there are over 15,000 Firefox add-ons with capabilities ranging from blocking ads to managing hundreds of tabs.

Firefox requires that all installed add-ons be digitally signed. This requirement is intended to protect users from malicious add-ons by requiring some minimal standard of review by Mozilla staff. Before we introduced this requirement in 2015, we had serious problems with malicious add-ons.

The way that add-on signing works is that Firefox is configured with a preinstalled “root certificate”. That root is stored offline in a hardware security module (HSM). Every few years it is used to sign a new “intermediate certificate” which is kept online and used as part of the signing process. When an add-on is presented for signature, we generate a new temporary “end-entity certificate” and sign that using the intermediate certificate. The end-entity certificate is then used to sign the add-on itself. Shown visually, it looks like this:

Diagram: the digital signature workflow from root certificate to add-on

Note that each certificate has a “subject” (to whom the certificate belongs) and an “issuer” (the signer). In the case of the root, these are the same entity, but for other certificates, the issuer of a certificate is the subject of the certificate that signed it.

An important point here is that each add-on is signed by its own end-entity certificate, but nearly all add-ons share the same intermediate certificate [1]. It is this certificate that encountered a problem: each certificate has a fixed period during which it is valid. Before or after this window, the certificate won’t be accepted, and an add-on signed with that certificate can’t be loaded into Firefox. Unfortunately, the intermediate certificate we were using expired just after 1 AM UTC on May 4, and immediately every add-on that was signed with that certificate became unverifiable and could not be loaded into Firefox.
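To make that validity window concrete, here is a minimal sketch of the kind of check that failed, written with the third-party Python “cryptography” library (an assumption for illustration; Firefox’s real implementation lives in its NSS/mozilla::pkix code, not Python):

    # Minimal sketch: the validity-window check that the expired
    # intermediate began to fail. Uses the third-party "cryptography"
    # library; this is the concept, not Firefox's actual code.
    from datetime import datetime
    from cryptography import x509

    def in_validity_window(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        now = datetime.utcnow()
        # A certificate is acceptable only between these two instants.
        return cert.not_valid_before <= now <= cert.not_valid_after

    # Just after 2019-05-04T01:00:00Z this started returning False for
    # the intermediate, so every chain through it stopped verifying.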

Although the signing certificate expired around midnight, the impact of the outage wasn’t felt immediately. The reason for this is that Firefox doesn’t continuously check add-ons for validity. Rather, all add-ons are checked about every 24 hours, with the time of the check being different for each user. The result is that some people experienced problems right away, and some didn’t experience them until much later. We at Mozilla first became aware of the problem around 6 PM Pacific time on Friday May 3 and immediately assembled a team to try to solve the issue.
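To illustrate why the breakage was staggered, here is a small sketch; the scheduling details below are my assumption for illustration, not Firefox’s actual code:

    # Illustration only: if every profile re-validates roughly 24 hours
    # after its own last check, a single expiry event surfaces at
    # different times for different users over the following day.
    import random
    from datetime import datetime, timedelta

    EXPIRY = datetime(2019, 5, 4, 1, 0)    # just after 1 AM UTC, May 4
    CHECK_INTERVAL = timedelta(hours=24)

    for user in range(5):
        # Each profile last checked at some point in the preceding day...
        last_check = EXPIRY - timedelta(hours=random.uniform(0, 24))
        # ...so its first failing check lands somewhere in the next 24h.
        print(f"user {user}: add-ons break around {last_check + CHECK_INTERVAL}")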

Damage Limitation

Once we realized what we were up against, we took several steps to try to avoid things getting any worse.

First, we disabled signing of new add-ons. This was sensible at the time because we were signing with a certificate that we knew was expired. In retrospect, it might have been OK to leave it up, but it also turned out to interfere with the “hardwiring a date” mitigation we discuss below (though we eventually didn’t use it), so it’s good we preserved the option. Signing is now back up.

Second, we immediately pushed a hotfix which suppressed re-validating the signatures on add-ons. The idea here was to avoid breaking users who hadn’t re-validated yet. We did this before we had any other fix, and have removed it now that fixes are available.

Working in Parallel

In theory, fixing a problem like this looks simple: make a new, valid certificate and republish every add-on with that certificate. Unfortunately, we quickly determined that this wouldn’t work for a number of reasons:

  1. There are a very large number of add-ons (over 15,000) and the signing service isn’t optimized for bulk signing, so just re-signing every add-on would take longer than we wanted.
  2. Once add-ons were re-signed, users would need to get the new versions. Some add-ons are hosted on Mozilla’s servers and Firefox would update those add-ons within 24 hours, but users would have to manually update any add-ons that they had installed from other sources, which would be very inconvenient.

Instead, we focused on trying to develop a fix which we could provide to all our users with little or no manual intervention.

After examining a number of approaches, we quickly converged on two major strategies which we pursued in parallel:

  1. Patching Firefox to change the date which is used to validate the certificate. This would make existing add-ons magically work again, but required shipping a new build of Firefox (a “dot release”).
  2. Generate a replacement certificate that was still valid and somehow convince Firefox to accept it instead of the existing, expired certificate.

We weren’t sure that either of these would work, so we decided to pursue them in parallel and deploy whichever looked like it was going to work first. In the end, we deployed the second fix, the new certificate, which I’ll describe in more detail below.

A Replacement Certificate

As suggested above, there are two main steps we had to follow here:

  1. Generate a new, valid, certificate.
  2. Install it remotely in Firefox.

In order to understand why this works, you need to know a little more about how Firefox validates add-ons. The add-on itself comes as a bundle of files that includes the certificate chain used to sign it. The result is that the add-on is independently verifiable as long as you know the root certificate, which is configured into Firefox at build time. However, as I said, the intermediate certificate had expired, so the add-on wasn’t actually verifiable.
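If you want to see that bundled chain for yourself, a signed add-on is just a ZIP file (.xpi). The sketch below assumes the PKCS#7 signature sits at META-INF/mozilla.rsa — the customary location for Mozilla-signed add-ons — and uses the Python “cryptography” library to list the certificates:

    # Sketch: dump the certificate chain bundled inside a signed add-on.
    # Assumes the PKCS#7 blob is at META-INF/mozilla.rsa, the usual path
    # for Mozilla-signed XPIs; uses the "cryptography" library.
    import zipfile
    from cryptography.hazmat.primitives.serialization import pkcs7

    def list_bundled_chain(xpi_path: str) -> None:
        with zipfile.ZipFile(xpi_path) as xpi:
            blob = xpi.read("META-INF/mozilla.rsa")
        for cert in pkcs7.load_der_pkcs7_certificates(blob):
            print("subject:", cert.subject.rfc4514_string())
            print("issuer: ", cert.issuer.rfc4514_string())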

However, it turns out that when Firefox tries to validate the add-on, it’s not limited to just using the certificates in the add-on itself. Instead, it tries to build a valid chain of certificates starting at the end-entity certificate and continuing until it gets to the root. The algorithm is complicated, but at a high level, you start with the end-entity certificate and then find a certificate whose subject is equal to the issuer of the end-entity certificate (i.e., the intermediate certificate). In the simple case, that’s just the intermediate that shipped with the add-on, but it could be any certificate that the browser happens to know about. If we can remotely add a new, valid certificate, then Firefox will try that as well. The figure below shows the situation before and after we install the new certificate.
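At a very high level, that path-building loop looks something like the sketch below — a simplification; the real mozilla::pkix logic also handles branching across candidates, trust decisions, and algorithm checks:

    # Simplified sketch of certificate path building: starting from the
    # end-entity certificate, keep looking for a known certificate whose
    # subject equals the current certificate's issuer, until reaching a
    # self-issued, trusted root. Real path building does much more.
    def build_chain(end_entity, known_certs, trusted_roots):
        chain = [end_entity]
        current = end_entity
        while current.subject != current.issuer:   # roots are self-issued
            candidates = [c for c in known_certs if c.subject == current.issuer]
            if not candidates:
                return None                        # no path found
            # A fuller sketch would branch and try every candidate; this
            # is where a remotely added intermediate becomes eligible.
            current = candidates[0]
            chain.append(current)
        return chain if current in trusted_roots else None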

Diagram: the two validation paths, before and after we installed the new valid certificate

Once the new certificate is installed, Firefox has two choices for how to validate the certificate chain: use the old, invalid certificate (which won’t work) or use the new, valid certificate (which will work). An important feature here is that the new certificate has the same subject name and public key as the old certificate, so its signature on the end-entity certificate is valid. Fortunately, Firefox is smart enough to try both until it finds a path that works, so the add-on becomes valid again. Note that this is the same logic we use for validating TLS certificates, so it’s relatively well-understood code that we were able to leverage.[2]
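That key property can be shown in code: verifying the end-entity certificate’s signature uses only the issuer’s public key, so a re-issued intermediate carrying the same subject name and key pair validates the very same signature. A sketch for the RSA/PKCS#1 v1.5 case (the algorithm choice is an assumption for illustration), again with the Python “cryptography” library:

    # Sketch: the end-entity signature verifies against any certificate
    # holding the issuer's public key -- the expired intermediate and
    # the re-issued one both pass, because they share the same key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import padding

    def signed_by(end_entity, intermediate) -> bool:
        try:
            intermediate.public_key().verify(
                end_entity.signature,
                end_entity.tbs_certificate_bytes,
                padding.PKCS1v15(),              # assuming RSA PKCS#1 v1.5
                end_entity.signature_hash_algorithm,
            )
            return True
        except InvalidSignature:
            return False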

The great thing about this fix is that it doesn’t require us to change any existing add-on. As long as we get the new certificate into Firefox, then even add-ons which are carrying the old certificate will just automatically verify. The tricky bit then becomes getting the new certificate into Firefox, which we need to do automatically and remotely, and then getting Firefox to recheck all the add-ons that may have been disabled.

Normandy and the Studies System

Ironically, the solution to this problem is a special type of add-on called a system add-on (SAO). In order to let us do research studies, we have developed a system called Normandy which lets us serve SAOs to Firefox users. Those SAOs automatically execute on the user’s browser and, while they are usually used for running experiments, they also have extensive access to Firefox internal APIs. Importantly for this case, they can add new certificates to the certificate database that Firefox uses to verify add-ons.[3]

So the fix here is to build a SAO which does two things:

  1. Install the new certificate we have made.
  2. Force the browser to re-verify every add-on so that the ones which were disabled become active.

But wait, you say: add-ons don’t work, so how do we get it to run? Well, we sign it with the new certificate!
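Conceptually the SAO is tiny. Here is a Python-flavored sketch of its two steps, where every name (cert_db, addon_manager, and their methods) is a hypothetical stand-in — the real system add-on is JavaScript written against Firefox’s internal APIs:

    # Conceptual sketch only. cert_db and addon_manager are hypothetical
    # stand-ins for Firefox internals; the real SAO is JavaScript.
    def apply_hotfix(cert_db, addon_manager, new_intermediate):
        # Step 1: add the re-issued intermediate, so path building can
        # route around the expired certificate.
        cert_db.add_certificate(new_intermediate)

        # Step 2: re-verify every add-on, so the ones disabled by the
        # failed signature checks become active again.
        for addon in addon_manager.all_addons():
            addon_manager.verify_signature(addon)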

Putting it all together… and what took so long?

OK, so now we’ve got a plan: issue a new certificate to replace the old one, build a system add-on to install it on Firefox, and deploy it via Normandy. Starting from about 6 PM Pacific on Friday May 3, we had the fix shipping in Normandy by 2:44 AM — less than 9 hours later — and then it took another 6-12 hours before most of our users had it. This is actually quite good from a standing start, but I’ve seen a number of questions on Twitter about why we couldn’t get it done faster. There were a number of steps that were time-consuming.

First, it took a while to issue the new intermediate certificate. As I mentioned above, the Root certificate is in a hardware security module which is stored offline. This is good security practice, as you use the Root very rarely and so you want it to be secure, but it’s obviously somewhat inconvenient if you want to issue a new certificate on an emergency basis. At any rate, one of our engineers had to drive to the secure location where the HSM is stored. Then there were a few false starts where we didn’t issue exactly the right certificate, and each attempt cost an hour or two of testing before we knew exactly what to do.

Second, developing the system add-on took some time. It’s conceptually very simple, but even simple programs require taking some care, and we really wanted to make sure we didn’t make things worse. And before we shipped the SAO, we had to test it, and that takes time, especially because it has to be signed. But the signing system was disabled, so we had to find some workarounds for that.

Finally, once we had the SAO ready to ship, it still took time to deploy. Firefox clients check for Normandy updates every 6 hours, and of course many clients are offline, so it takes some time for the fix to propagate through the Firefox population. However, at this point we expect that most people have received the update and/or the dot release we did later.

Final Steps

While the SAO that was deployed with Studies should fix most users, it didn’t get to everyone. In particular, there are a number of types of affected users who will need another approach:

  • Users who have disabled either Telemetry or Studies.
  • Users on Firefox for Android (Fennec), where we don’t have Studies.
  • Users of downstream builds of Firefox ESR that don’t opt in to telemetry reporting.
  • Users who are behind HTTPS Man-in-the-middle proxies, because our add-on installation systems enforce key pinning for these connections, which proxies interfere with.
  • Users of very old builds of Firefox which the Studies system can’t reach.

We can’t really do anything about the last group — they should update to a new version of Firefox anyway because older versions typically have quite serious unfixed security vulnerabilities. We know that some people have stayed on older versions of Firefox because they want to run old-style add-ons, but many of these now work with newer versions of Firefox. For the other groups we have developed a patch to Firefox that will install the new certificate once people update. This was released as a “dot release” so people will get it — and probably have already — through the ordinary update channel. If you have a downstream build, you’ll need to wait for your build maintainer to update.

We recognize that none of this is perfect. In particular, in some cases, users lost data associated with their add-ons (an example here is the “multi-account containers” add-on).

We were unable to develop a fix that would avoid this side effect, but we believe this is the best approach for the most users in the short term. Long term, we will be looking at better architectural approaches for dealing with this kind of issue.

Lessons

First, I want to say that the team here did amazing work: they built and shipped a fix in less than 12 hours from the initial report. As someone who sat in the meeting where it happened, I can say that people were working incredibly hard in a tough situation and that very little time was wasted.

With that said, obviously this isn’t an ideal situation and it shouldn’t have happened in the first place. We clearly need to adjust our processes both to make this kind of incident less likely to happen and to make it easier to fix when it does.

We’ll be running a formal post-mortem next week and will publish the list of changes we intend to make, but in the meantime here are my initial thoughts about what we need to do. First, we should have a much better way of tracking the status of everything in Firefox that is a potential time bomb and making sure that we don’t find ourselves in a situation where one goes off unexpectedly. We’re still working out the details here, but at minimum we need to inventory everything of this nature.

Second, we need a mechanism to be able to quickly push updates to our users even when — especially when — everything else is down. It was great that we were able to use the Studies system, but it was also an imperfect tool that we pressed into service, and one that had some undesirable side effects. In particular, we know that many users have auto-updates enabled but would prefer not to participate in Studies, and that’s a reasonable preference (true story: I had it off as well!). At the same time, we need to be able to push updates to our users; whatever the internal technical mechanisms, users should be able to opt in to updates (including hotfixes) but opt out of everything else. Additionally, the update channel should be more responsive than what we have today. Even on Monday, we still had some users who hadn’t picked up either the hotfix or the dot release, which clearly isn’t ideal. There’s been some work on this problem already, but this incident shows just how important it is.

Finally, we’ll be looking more generally at our add-on security architecture to make sure that it’s enforcing the right security properties at the least risk of breakage.

We’ll be following up next week with the results of a more thorough post-mortem, but in the meantime, I’ll be happy to answer questions by email at ekr-blog@mozilla.com.

 

[1] A few very old add-ons were signed with a different intermediate.

[2] Readers who are familiar with the WebPKI will recognize that this is also the way that cross-certification works.
[3] Technical note: we aren’t adding the certificate with any special privileges; it gets its authority by being signed by the root. We’re just adding it to the pool of certificates which can be used by Firefox. So, it’s not like we are adding a new privileged certificate to Firefox.

The post Technical Details on the Recent Firefox Add-on Outage appeared first on Mozilla Hacks - the Web developer blog.


Chris H-C: Google I/O Extended 2019 – Report

Mozilla planet - Thu, 09/05/2019 - 20:03

I attended a Google I/O Extended event on Tuesday at Google’s Kitchener office. It’s a get-together where there are demos, talks, workshops, and networking opportunities centred around watching the keynote live on the screen.

I treat it as an opportunity to keep an eye on what they’re up to this time, and a reminder that I know absolutely no one in the tech scene around here.

The first part of the day was a workshop about how to build Actions for the Google Assistant. I found the exercise to be very interesting.

The writing of the Action itself wasn’t interesting, that was a bunch of whatever. But it was interesting that it refused to work unless you connected it to a Google Account that had Web & Search Activity tracking turned on. Also I found it interesting that, though they said it required Chrome, it worked just fine on Firefox. It was interesting listening to laptops (including mine) across the room belt out welcome phrases because the simulator defaults to a hot mic and a loud speaker. It was interesting to notice that the presenter spent thirty seconds talking about how to name your project, and zero seconds talking about the Terms of Use of the application we were being invited to use. It was interesting to see that the settings defaulted to allowing you to test on all devices registered to the Google Account, without asking.

After the workshop the tech head of the Google Home App stood up and delivered a talk about trying to get manufacturers to agree on how to talk to Google Home and the Google Assistant.

I asked whether these efforts in trying to normalize APIs and protocols were leading them to publish a standard with a standards body. “No idea, sorry.”

Then I noticed the questions from the crowd were following a theme: “Can we get finer privacy controls?” (The answer seemed to be that Google believes the controls are already fine enough) “How do you educate users about the duration the data is retained?” (It’s in the Terms of Service, but it isn’t read aloud. But Google logs every “consent moment” and keeps track of settings) “For the GDPR was there a challenge operating in multiple countries?” (Yes. They admitted that some of the “fine enough” privacy controls are finer in certain jurisdictions due to regs.) And, after the keynote, someone in the crowd asked what features Android might adopt (self-destruct buttons, maybe) to protect against Border Security-style threats.

It was very heartening to hear a room full of tech nerds from Toronto and Waterloo Region ask questions about Privacy and Security of a tech giant. It was incredibly validating to hear from the keynote that Chrome is considering privacy protections Firefox introduced last year.

Maybe we at Mozilla aren’t crazy to think that privacy is important, that users care about it, that it is at risk and big tech companies have the power and the responsibility to protect it.

Maybe. Maybe not.

Just keep those questions coming.

:chutten


Daniel Stenberg: Sometimes I speak

Mozilla planet - Thu, 09/05/2019 - 09:35

I view myself as primarily a software developer. Perhaps secondarily as someone who’s somewhat knowledgeable in networking and is participating in protocol development and discussions. I do not regularly proclaim myself to be a “speaker” or someone who’s even very good at talking in front of people.

Time to wake up and face reality? I’m slowly starting to realize that I’m actually doing more presentations than ever before in my life and I’m enjoying it.

Since October 2015 I’ve done 53 talks and presentations in front of audiences – in ten countries. That’s one presentation done every 25 days on average. (The start date of this count is a little random but it just happens that I started to keep a proper log then.) I’ve talked to huge audiences and to small ones. I’ve done presentations that were appreciated and I’ve done some that were less successful.

The room for the JAX keynote, May 2019, as seen from the stage, some 20 minutes before 700 persons sat down in the audience to hear my talk on HTTP/3.

My increased frequency in speaking engagements coincides with me starting to work full-time from home back in 2014. Going places to speak is one way to get out of the house, see the “real world” a little bit, and see what real people are doing. And a chance to hang out with humans for a change. Besides, I only ever talk on topics that are dear to me and that I know intimately well, so I rarely feel pressure when delivering them. 2014–2015 was also the time frame when HTTP/2 was being finalized, and the general curiosity about that new protocol version helped me find opportunities back then.

Public speaking is like most other things: surprisingly enough, practice actually makes you better at it! I still have a lot to learn and improve, but speaking many times has, for example, made me better at figuring out roughly how much time I need to deliver a particular talk. It has taught me to “find myself” better when presenting and to be more relaxed and the real me – no need to put up a facade of some kind or pretend. People like seeing that there’s a real person there.

I talked HTTP/2 at Techday by Init, in November 2016.

I’m not even getting that terribly nervous before my talks anymore. I used to really get a raised pulse for the first 45 talks or so, but by doing it over and over and over I think the practice has made me more secure and more relaxed in my attitude to the audience and the topics. I think it has made me a slightly better presenter and it certainly makes me enjoy it more.

I’m not “a good presenter”. I can deliver a talk and I can do it with dignity and I think the audience is satisfied with me in most cases, but by watching actual good presenters talk I realize that I still have a long journey ahead of me. Of course, part of the explanation is that, to connect with the beginning of this post, I’m a developer. I don’t talk for a living and I rarely practice my presentations much, because I don’t feel I can spend that time.

The JAX keynote in May 2019 as seen from the audience. Photo by Bernd Ruecker.

Some of the things that are still difficult include:

The money issue. I actually am a developer and that’s what I do for a living. Taking time off development to prepare a presentation, travel to a distant place, sacrifice my spare time for one or more days, and communicate something interesting to an audience that demands and expects it to be both good and reasonably entertaining takes time away from that development. Getting travel and accommodation compensated is awesome but unfortunately not enough. I need to insist on getting paid for this. I frequently turn down speaking opportunities when they can’t pay me for my time.

Saying no. Oh my god do I have a hard time doing this. This year, I’ve been invited to so many different conferences and the invitations keep flying in. For every single received invitation, I get this warm and comfy feeling and I feel honored and humbled by the fact that someone actually wants me to come to their conference or gathering to talk. There’s the calendar problem: I can’t be in two places at once. Then I also can’t plan events too close to each other in time, to avoid them holding up “real work” too much or becoming too much of a nuisance to my family. Sometimes there’s also the financial dilemma: if I can’t get compensation, it gets tricky for me to do it, no matter how good the conference seems to be and the noble cause they’re working for.

At SUE 2016 in the Netherlands.

Feedback. To determine which parts of the presentation should be improved for the next time I speak on the same or a similar topic, which parts should be removed, and whether something should be expanded, figuring out what works and what doesn’t is vital. For most talks I’ve done, there’s been no formal way to provide or receive this feedback, and for the small percentage that had a formal feedback form or a scoring system or similar, taking care of a bunch of distributed grades (for example “your talk was graded 4.2 on a scale between 1 and 5”) and random comments – either positive or negative – is really hard… I get the best feedback from close friends who dare to tell me the truth as it is.

Conforming to silly formats. Slightly different, but some places want me to send my slides in, either a long time before the event (I’ve had people ask me to provide them way over a week(!) before), or they dictate that the slides should be sent to them using Microsoft Powerpoint, PDF or some other silly format. I want to use my own preferred tools when designing presentations, as I need to be able to reuse the material for more and future presentations. Sure, I can convert to other formats, but that usually ruins formatting and design. Then a lot of the time and sweat I put into making a fine and good-looking presentation is more or less discarded! Fortunately, most places let me plug in my laptop and everything is fine!

Upcoming talks?

As a little service to potential audience members and conference organizers, I’m listing all my upcoming speaking engagements on a dedicated page on my web site:

https://daniel.haxx.se/talks.html

I try to keep that page updated to reflect current reality. It also shows that some organizers are forward-planning waaaay in advance…

Here’s me talking about DNS-over-HTTPS at FOSDEM 2019. Photo by Steve Holme.

Invite someone like me to talk?

Here’s some advice on how to invite a speaker (like me) with style:

  1. Ask well in advance (more than 2-3 months preferably, probably not more than 9). When I agree to a talk, others who ask for talks in close proximity to that date will get declined. I get a surprisingly large number of invitations for events just a month or so into the future, and it rarely works for me to get those into my calendar in that time frame.
  2. Do not assume for-free delivery. I think it is good tone of you to address the price/charge situation, if not in the first contact email at least in the following discussion. If you cannot pay, that’s also useful information to provide early.
  3. If the time or duration of the talk you’d like is “unusual” (ie not 30-60 minutes) do spell that out early on.
  4. Surprisingly often I get invited to talk without a specified topic or title. The inviter then expects me to provide that. Since you contacted me, you clearly had some kind of vision of what a talk by me would entail; it would make my life easier if that vision were conveyed, as it could certainly help me produce a talk subject that will work!
Presenting HTTP/2 at the Velocity conference in New York, October 2015, together with Ragnar Lönn.

What I bring

To every presentation I do, I bring my laptop. It has HDMI and USB-C ports. I also carry an HDMI-to-VGA adapter for the few installations that still use the old “projector port”. Places that need something other than those ports tend to have their own converters already, since they are used with equipment not fitted for their requirements.

I always bring my own clicker (the “remote” with which I can advance to next slide). I never use the laser-pointer feature, but I like being able to move around on the stage and not have to stand close to the keyboard when I present.

Presentations

I never create my presentations with video or sound in them, and I don’t do presentations that need Internet access. All this to simplify and to reduce the risk of problems.

I work hard on limiting the amount of text on each slide, but I also acknowledge that if a slide set should have value after-the-fact there needs to be a certain amount. I’m a fan of revealing the text or graphics step-by-step on the slides to avoid having half the audience reading ahead on the slide and not listening.

I’ve settled on 16:9 ratio for all presentations. Luckily, the remaining 4:3 projectors are now scarce.

I always make and bring a backup of my presentations in PDF format so that basically “any” computer could display that in case of emergency. Like if my laptop dies. As mentioned above, PDF is not an ideal format, but as a backup it works.

I talked “web transport” in the Mozilla devroom at FOSDEM, February 2017, in front of this audience. Not a single empty seat…


Mike Hommey: Announcing git-cinnabar 0.5.1

Mozilla planet - Wed, 08/05/2019 - 23:57

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to remote mercurial repositories, using git.
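For example, typical usage looks like this (a sketch; the repository URL is just an illustrative example):

    # Clone a mercurial repository with git, via git-cinnabar's hg:: prefix
    git clone hg::https://hg.mozilla.org/mozilla-unified
    cd mozilla-unified

    # Fetch new mercurial changesets as git commits
    git pull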

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0?
  • Updated git to 2.21.0 for the helper.
  • Experimental native mercurial support (used when mercurial libraries are not available) now has feature parity.
  • Try to read the git system config from the same place as git does. This fixes native HTTPS support with Git on Windows.
  • Avoid pushing more commits than necessary in some corner cases (see e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1529360).
  • Added an --abbrev argument for git cinnabar {git2hg,hg2git} to display shortened sha1s.
  • Can now pass multiple revisions to git cinnabar fetch.
  • Don’t require the requests python module for git cinnabar download.
  • Fixed git cinnabar fsck file checks to actually report errors.
  • Properly return an error code from git cinnabar rollback.
  • Track last fsck’ed metadata and allow git cinnabar rollback --fsck to go back to last known good metadata directly.
  • git cinnabar reclone can now be rolled back.
  • Added support for git bundles as a cinnabarclone source.
  • Added alternate styles of remote refs.
  • More resilient to interruptions when HTTP Range requests are supported.
  • Fixed off-by-one when storing mercurial heads.
  • Better handling of mercurial branchmap tips.
  • Better support for end of parts in bundle v2.
  • Improved handling of URLs to local mercurial repositories.
  • Fixed compatibility with (very) old mercurial servers when using mercurial 5.0 libraries.
  • Converted Continuous Integration scripts to Python 3.

Mozilla Thunderbird: WeTransfer File Transfer Now Available in Thunderbird

Mozilla planet - Tue, 07/05/2019 - 20:53

WeTransfer’s file-sharing service is now available within Thunderbird for sending large files (up to 2GB) for free, without signing up for an account.

Even better, sharing large files can be done without leaving the composer. While writing an email, just attach a large file and you will be prompted to choose whether you want to use Filelink, which allows you to share the large file via a link to download it. Via this prompt you can select WeTransfer.

Filelink prompt in Thunderbird

You can also enable Filelink through the Preferences menu, under the Attachments tab, on the Outgoing page. Click “Add…” and choose “WeTransfer” from the drop-down menu.

WeTransfer in Preferences

Once WeTransfer is set up in Thunderbird, it will be the default method of linking for files over the size that you have specified (you can see that it is set to 5MB in the screenshot above).

WeTransfer and Thunderbird are both excited to be able to work together on this great feature for our users. The Thunderbird team thinks that this will really improve the experience of collaboration and sharing for our users.

WeTransfer is also proud of this feature. Travis Brown, WeTransfer VP of Business Development says about the collaboration:

“Mozilla and WeTransfer share similar values. We’re focused on the user and on maintaining our user’s privacy and an open internet. We’ll continue to work with their team across multiple areas and put privacy at the front of those initiatives.”

We hope that all our users will give this feature a try and enjoy being able to share the files they want with co-workers, friends, and family – easily.


This Week In Rust: This Week in Rust 285

Mozilla planet - Tue, 07/05/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is select-rustc, a crate for conditional compilation according to rustc version. Thanks to ehsanmok for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

235 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

A compile_fail test that fails to fail to compile is also a failure.

David Tolnay in the try-build README

Llogiq is pretty self-congratulatory for picking this awesome quote.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.


Daniel Stenberg: live-streamed curl development

Mozilla planet - Mon, 06/05/2019 - 23:16

As some of you already found out, I’ve tried live-streaming curl development recently. If you want to catch previous and upcoming episodes, subscribe on my twitch page.

Why stream

For the fun of it. I work alone from home most of the time and this is a way for me to interact with others.

To show what’s going on in curl right now. By streaming some of my development I also show what kind of work is being done, demonstrating that a lot of development effort is being put into curl, and I can share my thoughts and plans with a wider community. Perhaps this will help get more people to help out, or tickle their imagination.

A screenshot from live stream #11, when parallel transfers with curl were shown off for the first time ever!

For the feedback and interaction. It is immediately notable that one of the biggest reasons I enjoy live-streaming is the chat with the audience and the instant feedback on mistakes I do or thoughts and plans I express. It becomes a back-and-forth and it is not at all just a one-way broadcast. The more my audience interact with me, the more fun I have! That’s also the reason I show the chat within the stream most of the time since parts of what I say and do are reactions and follow-ups to what happens there.

I can only hope I get even more feedback and comments as I get better at this and that people find out about what I’m doing here.

And really, by now I also think of it as a really concentrated and devoted hacking time. I can get a lot of things done during these streaming sessions! I’ll try to keep them going a while.

Twitch

I decided to go with twitch simply because it is an established and known live-streaming platform. I didn’t do any deeper analyses or comparisons, but it seems to work fine for my purposes. I get a stream out with video and sound and people seem to be able to enjoy it.

As of this writing, there are 1645 people following me on twitch. Typical recent live-streams of mine have been watched by over a hundred simultaneous viewers. I also archive all past streams on Youtube, so you can get almost the same experience by watching back issues there.

I announce my upcoming streaming sessions as “events” on Twitch, and I announce them on twitter (@bagder you know). I try to stick to streaming during European daytime hours, basically because then I’m all alone at home and risk fewer interruptions or distractions from family members or similar.

Challenges

It’s not as easy as it may look trying to write code or debug an issue while at the same time explaining what I do. I learnt that the sessions get better if I have real and meaty issues to deal with or features to add, rather than to just have a few light-weight things to polish.

I also quickly learned that it is better not to show an actual screen of mine in the stream; instead I show a crafted set of windows placed on the output to look like a screen. This way there’s a much smaller risk that I actually show off private stuff or other content that wasn’t meant for the audience to see. It also makes it easier to show a tidy, consistent and clear “desktop”.

Streaming forces me to stay focused on the development and prevents me from drifting off and watching cats or reading amusing tweets for a while.

Trolls

So far we’ve been spared from the worst kind of behavior and people. We’ve only had some mild weirdos showing up in the chat and nothing that we couldn’t handle.

Equipment and software

I do all development on Linux so things have to work fine on Linux. Luckily, OBS Studio is a fine streaming app. With it, I can set up different “scenes” and change between them easily. Some of the scenes I have created are “emacs + term”, “browser” and “coffee break”.

When I want to show off me fiddling with the issues on github, I switch to the “browser” scene that primarily shows a big browser window (and the chat and the webcam in smaller windows).

When I want to show code, I switch to “emacs + term” that instead shows a terminal and an emacs window (and again the chat and the webcam in smaller windows), and so on.

OBS has built-in support for some of the major streaming services, including twitch, so it’s just a matter of pasting in a key in an input field, press ‘start streaming’ and go!

The rest of the software is the stuff I normally use anyway for developing. I don’t fake anything and I don’t make anything up. I use emacs, make, terminals, gdb etc. Everything runs on my primary desktop Debian Linux machine, which has 32GB of RAM and an older i7-3770K CPU at 3.50GHz with a dual-screen setup. The video of me is captured with a basic Logitech C270 webcam and the sound of my voice and the keyboard is picked up with my Sennheiser PC8 headset.

Some viewers have asked me about my keyboard, which you can hear. It is a FUNC-460 that is now approaching 5 years of use, and I know for a fact that I press nearly 7 million keys per year.

Coffee

In a reddit post about my live-streaming, user ‘digitalsin’ suggested “Maybe don’t slurp RIGHT INTO THE FUCKING MIC”.

How else am I supposed to have my coffee while developing?

This is my home office standard setup. On the left is my video conference laptop and on the right is my regular work laptop. The two screens in the middle are connected to the desktop computer.

Matthew Noorenberghe: Password Manager Improvements in Firefox 67

Mozilla planet - Mon, 06/05/2019 - 08:57

There have been many improvements to the password manager in Firefox, and some of them may take a while to be noticed, so I thought I would highlight some of the user-facing ones in version 67:

Credit for the fixes goes to Jared Wein, Sam Foster, Prathiksha Guruprasad, and myself. The full list of password manager improvements in Firefox 67 can be found on Bugzilla, and there are many more to come in Firefox 68, so stay tuned…


  1. Due to interactions with the Master Password dialog, this change doesn't apply if a Master Password is enabled

The Mozilla Blog: The Firefox EU Elections Toolkit helps you to prevent pre-vote online manipulation

Mozilla planet - Mon, 06/05/2019 - 07:06

What comes to your mind when you hear the term ‘online manipulation’? In the run-up to the EU parliamentary elections at the end of May, you probably think first and foremost of disinformation. But what about technical ways to manipulate voters on the internet? Although they are becoming more and more popular because they are so difficult to recognize and therefore particularly successful, they probably don’t come to mind first. Quite simply because they have not received much public attention so far. Firefox tackles this issue today: the ‘Firefox EU Elections Toolkit’ not only provides important background knowledge and tips – designed to be easily understood by non-techies – but also tools to enable independent online research and decision-making.

Manipulation on the web: ‘fake news’ isn’t the main issue (anymore)

Few other topics have been so present in public perception in recent years, so comprehensively discussed in everyday life, news and science, and yet demystified as little as disinformation. Also commonly referred to as ‘fake news’, it’s defined as “deliberate disinformation or hoaxes spread via traditional print and broadcast news media or online social media.” Right now, shortly before the next big elections at the end of May, the topic seems to be bubbling up once more: according to the European Commission’s Eurobarometer, 73 percent of Internet users in Europe are concerned about disinformation in the run-up to the EU parliamentary elections.

However, research also shows that the public debate about disinformation takes place in great detail, which significantly increases awareness of the ‘threat’. The fact that more and more initiatives against disinformation and fact-checking actors have been sprouting up for some time now – and that governments are getting involved, too – may be related to the zeitgeist, or connected to individuals’ impression that they are constantly confronted with ‘fake news’ and cannot protect themselves on their own.

It’s important to take action against disinformation. Also, users who research the elections and potential candidates on the Internet, for example, should definitely stay critical and cautious. After all, clumsy disinformation campaigns are still taking place, revealing some of the downsides of a global, always-available Internet; and they even come with a wide reach and rapid dissemination. Countless actors, including journalists, scientists and other experts, now agree that the impact of disinformation is extremely limited and traditional news is still the primary and reliable source of information. This does not, however, mean that the risk of manipulation has gone away; in fact, we must make sure to stay alert and not close our eyes to new, equally problematic forms of manipulation, which have just been less present in the media and science so far. At Firefox we understand that this may require some support – and we’re happy to provide it today.

A toolkit for well-informed voters

Tracking has recently been a topic of discussion in the context of intrusive advertising, big data and GDPR. To refresh your memory: when browsing from site to site, users’ personal information may be collected through scripts or widgets on the websites. These are called trackers. Many people don’t like that user information collected through trackers is used for advertising, oftentimes without people’s knowledge (find more info here). But there’s another issue a lot fewer people are aware of and which hasn’t been widely discussed so far: user data can also be used for manipulation attempts, micro-targeted at specific groups or individuals. We believe that this needs to change – and in order to make that happen, more people need to hear about it.

Firefox is committed to an open and free Internet that provides access to independent information to everyone. That’s why we’ve created the ‘Firefox EU Elections Toolkit’: a website where users can find out how tracking and opaque online advertising influence their voting behavior and how they can easily protect themselves – through browser add-ons and other tools. Additionally, disinformation and the voting process are well represented on the site. The toolkit is now available online in English, German and French. No previous technical or policy-related knowledge is required. Among other things, the toolkit contains:

  • background information on how tracking, opaque election advertising and other questionable online activities affect people on the web, including a short, easy-to-digest video.
  • selected information about the EU elections as well as the EU as an institution – only using trustworthy sources.
  • browser extensions, checked on and recommended by Firefox, that support independent research and opinion making.

Make an independent choice when it matters the most

Of course, manipulation on the web is not only relevant in times of major political votes. With the forthcoming parliamentary elections, however, we find ourselves in an exceptional situation that calls for practical measures – also because there might be greater interest in the election, the programmes, parties and candidates than in recent years: More and more EU citizens are realizing how important the five-yearly parliamentary election is; the demands on parliamentarians are rising; and last but not least, there are numerous new voters again this May for whom Internet issues play an important role, but who need to find out about the election, its background and consequences.

Firefox wants to make sure that everyone has the chance to make informed choices. That detailed technical knowledge is not mandatory for getting independent information. And that the internet with all of its many advantages and (almost) unlimited possibilities is open and available to everyone, independent from demographics. Firefox fights for you.

The post The Firefox EU Elections Toolkit helps you to prevent pre-vote online manipulation appeared first on The Mozilla Blog.


Mozilla Addons Blog: Add-ons disabled or failing to install in Firefox

Mozilla planet - Sat, 04/05/2019 - 16:01

Incident summary

Updates – Last updated 14:35 PST May 14, 2019. We expect this to be our final update.

  • If you are running Firefox versions 61 – 65 and 1) did not receive the deployed fix and 2) do not want to update to the current version (which includes the permanent fix): Install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • If you are running Firefox versions 57 – 60: Install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • If you are running Firefox versions 47 – 56: install this extension to resolve the expired security certificate issue and re-enable extensions and themes.
  • A less technical blog post about the outage is also available. If you enabled telemetry to get the initial fix, we’re deleting all data collected since May 4. (May 9, 17:04 EDT)
  • Mozilla CTO Eric Rescorla posted a blog on the technical details of what went wrong last weekend. (May 9, 16:20 EDT)
  • We’ve released Firefox 66.0.5 for Desktop and Android, and Firefox ESR 60.6.3, which include the permanent fix for re-enabling add-ons that were disabled starting on May 3rd. The initial, temporary fix that was deployed May 4th through the Studies system is replaced by these updates, and we recommend updating as soon as possible. Users who enabled Studies to receive the temporary fix, and have updated to the permanent fix, can now disable Studies if they desire. For users who cannot update to the latest version of Firefox or Firefox ESR, we plan to distribute an update that automatically applies the fix to versions 52 through 60. This fix will also be available as a user-installable extension. For anyone still experiencing issues in versions 61 through 65, we plan to distribute a fix through a user-installable extension. These extensions will not require users to enable Studies, and we’ll provide an update when they are available. (May 8, 19:28 EDT)
  • Firefox 66.0.5 has been released, and we recommend that people update to that version if they continue to experience problems with extensions being disabled. You’ll get an update notification within 24 hours, or you can initiate an update manually. An update to ESR 60.6.3 is also available as of 16:00 UTC May 8th. We’re continuing to work on a fix for older versions of Firefox, and will update this post and on social media as we have more information. (May 8, 11:51 EDT)
  • A Firefox release has been pushed — version 66.0.4 on Desktop and Android, and version 60.6.2 for ESR. This release repairs the certificate chain to re-enable web extensions, themes, search engines, and language packs that had been disabled (Bug 1549061). There are remaining issues that we are actively working to resolve, but we wanted to get this fix out before Monday to lessen the impact of disabled add-ons before the start of the week. More information about the remaining issues can be found by clicking on the links to the release notes above. (May 5, 16:25 EDT)
  • Some users are reporting that they do not have the “hotfix-update-xpi-signing-intermediate-bug-1548973” study active in “about:studies”. Rather than using work-arounds, which can lead to issues later on, we strongly recommend that you continue to wait. If it’s possible for you to receive the hotfix, you should get it by 6am EDT, 24 hours after it was first released. For everyone else, we are working to ship a more permanent solution. (May 5, 00:54 EDT)
  • There are a number of work-arounds being discussed in the community. These are not recommended as they may conflict with fixes we are deploying. We’ll let you know when further updates are available that we recommend, and appreciate your patience. (May 4, 15:01 EDT)
  • Temporarily disabled commenting on this post given volume and duplication. They’ll be re-enabled as more updates become available. (May 4, 13:02 EDT)
  • Updated the post to clarify that deleting extensions can result in data loss, and should not be used to attempt a fix. (May 4, 12:58 EDT)
  • Clarified that the study may appear in either the Active studies or Completed studies of “about:studies” (May 4, 12:10 EDT)
  • We’re aware that some users are reporting that their extensions remain disabled with both studies active. We’re tracking this issue on Bugzilla in bug 1549078. (May 4, 12:03 EDT)
  • Clarified that the Studies fix applies only to Desktop users of Firefox distributed by Mozilla. Firefox ESR, Firefox for Android, and some versions of Firefox included with Linux distributions will require separate updates. (May 4, 12:03 EDT)


Late on Friday May 3rd, we became aware of an issue with Firefox that prevented existing and new add-ons from running or being installed. We are very sorry for the inconvenience caused to people who use Firefox.

Our team identified and rolled out a temporary fix for all Firefox Desktop users on Release, Beta, and Nightly. The fix will be applied automatically in the background within 24 hours; no active steps need to be taken to make add-ons work again. In particular, please do not delete and/or re-install any add-ons in an attempt to fix the issue. Deleting an add-on removes any data associated with it, whereas disabling and re-enabling does not.

Please note: The fix does not apply to Firefox ESR or Firefox for Android. We’re working on releasing a fix for both, and will provide updates here and on social media.

To provide this fix on short notice, we are using the Studies system. This system is enabled by default, and no action is needed unless Studies have been disabled. Firefox users can check if they have Studies enabled by going to:

  • Firefox Options/Preferences -> Privacy & Security -> Allow Firefox to install and run studies (scroll down to find the setting)

  • Studies can be disabled again after the add-ons have been re-enabled

It may take up to six hours for the Study to be applied to Firefox. To check if the fix has been applied, you can enter “about:studies” in the location bar. If the fix is active, you’ll see “hotfix-update-xpi-signing-intermediate-bug-1548973” in either the Active studies or Completed studies section.

You may also see “hotfix-reset-xpi-verification-timestamp-1548973” listed, which is part of the fix and may be in the Active studies or Completed studies section(s).

We are working on a general fix that doesn’t use the Studies system and will keep this blog post updated accordingly. We will share a more substantial update in the coming days.


The post Add-ons disabled or failing to install in Firefox appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox not affected by the addon apocalypse

Mozilla planet - za, 04/05/2019 - 07:04
Tonight's Firefox add-on apocalypse, traced to a mistakenly expired intermediate signing certificate, is currently roiling Firefox users worldwide. It bit me on my Talos II, which really cheesed me off because it tanked all my carefully constructed site containers. (And that's an official Mozilla addon!)

This brief post is just to reassure you that TenFourFox is unaffected -- I disagreed with signature enforcement on add-ons from the beginning and explicitly disabled it.

Categorieën: Mozilla-nl planet

Mike Hoye: Goals And Constraints

Mozilla planet - vr, 03/05/2019 - 19:30

This way to art.

I keep coming back to this:

“Open” in this context inextricably ties source control to individual agency. The checks and balances of openness in this context are about standards, data formats, and the ability to export or migrate your data away from sites or services that threaten to go bad or go dark. This view has very little to say about – and is often hostile to the idea of – granular access restrictions and the ability to impose them, those being the tools of this worldview’s bad actors.

The blind spots of this worldview are the products of a time where someone on the inside could comfortably pretend that all the other systems that had granted them the freedom to modify this software simply didn’t exist. Those access controls were handled, invisibly, elsewhere; university admission, corporate hiring practices or geography being just a few examples of the many, many barriers between the network and the average person.

And when we’re talking about blind spots and invisible social access controls, of course, what we’re really talking about is privilege.

How many people get to have this, I wonder: the sense that they can sit down in front of a computer and be empowered by it. The feeling of being able, the certainty that you are able to look at a hard problem, think about it, test and iterate; that easy rapid prototyping with familiar tools is right there in your hands, that a toolbox the size of the world is within reach. That this isn’t some child’s wind up toy I turn a crank on until the powerpoint clown pops up.

It’s not a universal or uniform experience, to be sure; they’re machines made of other people’s choices, and computers are gonna computer. But the only reason I get to have that feeling at all is that I got my start when the unix command line was the only decent option around, and I got to put the better part of a decade grooving in that muscle memory on machines and forums where it was safe – for me at least – to be there, fully present, make mistakes and learn from them.

(Big shoutout to everyone out there who found out how bash wildcards work by inadvertently typing mv * in a directory with only two files in it.)

That world doesn’t exist anymore; the internet that birthed it isn’t coming back. But I want everyone to have this feeling, that the machine is more than a glossy appliance. That it’s not a constraint. That with patience and tenacity it can work with you and for you, not just a tool for a task but an extension and expression of ourselves and our intent. That a computer can be a tool for expressing ourselves, for helping us be ourselves better.

Last week I laid out the broad strokes of Mozilla’s requirements for our next synchronous-text platform. They were pretty straightforward, but I want to thank a number of people from different projects who’ve gotten in touch on IRC or email to ask questions and offer their feedback.

Right now I’d like to lay out those requirements in more detail, and talk about some of the reasons behind them. Later I’m going to lay out the process and the options we’re looking at, and how we’re going to gather information, test those options and evaluate what we learn.

While the Rust community is making their own choices now about the best fit for their needs, the Rust community’s processes are going to strongly inform the steps for Mozilla. They’ve learned a lot the hard way about consensus-building and community decision-making, and it’s work that I have both a great deal of respect for and no intention of re-learning the hard way myself. I’ll have more about that shortly as well.

I mentioned our list of requirements last week but I want to drill into some of them here; in particular:

  • It needs to be accessible to the greater Mozilla community.

This one implies a lot more than it states, and it would be pretty easy to lay out something trite like “we think holistically about accessibility” the way some organizations say “a diversity of ideas”, as though that means anything at all. But that’s just not good enough.

Diversity, accessibility and community are all tightly interwoven ideas we prize, and how we approach, evaluate and deploy the technologies that connect us speaks deeply to our intentions and values as an organization. Mozilla values all the participants in the project, whether they rely on a screen reader, a slow network or older hardware; we won’t – we can’t – pick a stack that treats anyone like a second-class citizen.

  • While we’re investigating options for semi-anonymous or pseudonymous connections, we will require authentication, because:
  • The Mozilla Community Participation Guidelines will apply, and they’ll be enforced.

Last week Dave Humphrey wrote up a reminiscence about his time on IRC soon after I made the announcement. Read the whole thing, for sure. Dave is wiser and kinder than I am, and has been for as long as we’ve known each other; his post spoke deeply to many of us who’ve been in and around Mozilla for a while, and two sentences near the end are particularly important:

“Having a way to get deeply engaged with a community is important, especially one as large as Mozilla. Whatever product or tool gets chosen, it needs to allow people to join without being invited.”

We’ve got a more detailed list of functional and organizational requirements for this project, and this is an important part of it: “New users must be able to join the service without manual intervention from a Mozilla employee.”

We’ve understood this as an accessibility issue for a long time as well, though I don’t think we’ve ever given it a name. “Involvement friction”, maybe – everything about becoming part of a project and community that’s hard not because it’s inherently difficult, but because nobody’s taken the time to make it easy.

I spend a lot of time thinking about something Sid Wolinsky said about the first elevators installed in the New York subway system: “This elevator is a gift from the disability community and the ADA to the nondisabled people of New York”. If you watch who’s using the elevators, ramps or automatic doors in any public building long enough, anything with a wheelchair logo on it, you’ll notice a trend: it’s never somebody in a wheelchair. It’s somebody pushing a stroller or nursing a limp. It’s somebody carrying an awkward parcel, or a bag of groceries. Sometimes it’s somebody with a coffee in one hand and a phone in the other. Sometimes it’s somebody with no reason at all, at least not one you can see. It’s people who want whatever thing they’re doing, however difficult, to be a little bit easier. It’s everybody.

If you cost out accessible technology for the people who rely on it, it looks really expensive; if you cost it out for everyone who benefits from it, though, it’s basically free. And none of us in the “benefit” camp are ever further than a sprained ankle away from “rely”.

We’re getting better at this at Mozilla in hundreds of different ways, at recognizing how important it is that the experience of getting from “I want to help” to “I’m set up to help” to “I’m helping” be as simple and painless as possible. As one example, our bootstrap scripts and mach-build have reduced our once-brittle, failure-prone developer setup process down to “answer these questions and wait for the downloads to finish”, and in the process have done more to make the Firefox codebase accessible than I ever will. And everyone relies on them now, first-touch contributors and veteran devs alike.

Getting involved in the community, though, is still harder than it needs to be; try watching somebody new to open source development try to join an IRC channel sometime. Watch them go from “what’s IRC” to finding a client to learning how to use the client to joining the right server, then the right channel, only to find that the reward for all that effort is no backscroll, no context, and no idea who you’re talking to or if you’re in the right place or if you’re shouting into the void because the people you’re looking for aren’t logged in at the same time. It’s like asking somebody to learn to operate an airlock on their own so they can toss themselves out of it.

It’s more than obvious that you don’t build products like that anymore, but I think it’s underappreciated that it’s just as true of communities. I think it’s critical that we bring that same discipline of caring about the details of the experience to our communications channels and community forums, and the CPG is the cornerstone of that effort.

It was easy not to care about this when somebody who wanted to contribute to an open source project with global impact had maybe four choices: the Linux kernel, the Mozilla suite, the GNU tools, and maybe Apache. But that world was pre-Github, pre-NPM. If you want to work on hard problems with global impact now you have a hundred thousand options, and that means the experience of joining and becoming a part of the Mozilla community matters.

In short, the amount of effort a project puts into making the path from “I want to help” to “I’m helping” easier is a reliable indicator of the value that project puts on community involvement. So if we say we value our community, we need to treat community involvement and contribution like a product, with all the usability and accessibility concerns that implies. To drive involvement friction as close to zero as possible.

One tool we’ll be relying on – and this one, we did build in-house – is called Mozilla-IAM, Mozilla’s Identity and Access Management tool. I’ll have more to say about this soon, but at its core it lets us proxy authentication from various sources and methods we trust, Github, Firefox Accounts, a link in your email, a few others. We think IAM will let us support pseudonymous participation and a low-cost first-contact experience, but also let us keep our house in order and uphold the CPG in the process.

Anyway, here’s a few more bullet points; what requirements doc isn’t full of them?

A synchronous messaging system that meets our needs:

  • Must work correctly in unmodified, release-channel Firefox.
  • Must offer a solid mobile experience.
  • Must support thousands of simultaneous users across the service.
  • Must support easy sharing of hyperlinks and graphics as well as text.
  • Must have persistent scrollback. Users reconnecting to a channel or joining the channel for the first time must be able to read up to acquire context of the current conversation in the backscroll.
  • Programmatic access is a hard requirement. The service must support a mature, reasonably stable and feature-rich API.
  • As mentioned, people participating via accessible technologies including screen readers or high-contrast display modes must be able to participate as first-class citizens of the service and the project.
  • New users must be able to join the service without manual intervention from a Mozilla employee.
  • Whether or not we are self-hosting, the service must allow Mozilla to specify a data retention and security policy that meets our institutional standards.
  • The service must have a customizable first-contact experience to inform new participants about Mozilla’s CPG and privacy notice.
  • The service must have effective administrative tooling including user and channel management, alerting and banning.
  • The service must support delegated authentication.
  • The service must pass an evaluation by our legal, trust and security teams. This is obviously also non-negotiable.

I doubt any of that will surprise anyone, but they might, and I’m keeping an eye out for questions. We’re still talking this out in #synchronicity on irc.m.o, and you’re welcome to jump in.

I suppose I should tip my hand at this point, and say that as much as I value the source part of open source, I also believe that people participating in open source communities deserve to be free not only to change the code and build the future, but to be free from the brand of arbitrary, mechanized harassment that thrives on unaccountable infrastructure, federated or not. We’d be deluding ourselves if we called systems that are just too dangerous for some people to participate in at all “open” just because you can clone the source and stand up your own copy. And I am absolutely certain that if this free software revolution of ours ends up in a place where asking somebody to participate in open development is indistinguishable from asking them to walk home at night alone, then we’re done. People cannot be equal participants in environments where they are subject to wildly unequal risk.

I think we can get there; I think we can meet our obligations to the Mission and the Manifesto as well as the needs of our community, and help the community grow and thrive in a way that grows and strengthens the web we want, and empowers everyone using and building it to be who we’re aspiring to be, better.

The next steps are going to be to lay out the evaluation process in more detail; then we can start pulling in information, stand up instances of the candidate stacks we’re looking at and trying them out.

Categorieën: Mozilla-nl planet

The Firefox Frontier: How to research smarter, not harder with 10 tools on Firefox

Mozilla planet - vr, 03/05/2019 - 02:20

Whether you’re in school or working on a project, knowing how to research is an essential skill. However, understanding how to do something and doing it smarter are two different things. … Read more

The post How to research smarter, not harder with 10 tools on Firefox appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Add-on Policy and Process Updates

Mozilla planet - do, 02/05/2019 - 17:50

As part of our ongoing work to make add-ons safer for Firefox users, we are updating our Add-on Policy to help us respond faster to reports of malicious extensions. The following is a summary of the changes, which will go into effect on June 10, 2019.

  • We will no longer accept extensions that contain obfuscated code. We will continue to allow minified, concatenated, or otherwise machine-generated code as long as the source code is included. If your extension is using obfuscated code, it is essential to submit a new version by June 10th that removes it to avoid having it rejected or blocked.

We will also be clarifying our blocking process. Add-on or extension blocking (sometimes referred to as “blocklisting”) is a method for disabling extensions or other third-party software that has already been installed by Firefox users.

  • We will be blocking extensions more proactively if they are found to be in violation of our policies. We will be casting a wider net, and will err on the side of user security when determining whether or not to block.
  • We will continue to block extensions for intentionally violating our policies, critical security vulnerabilities, and will also act on extensions compromising user privacy or circumventing user consent or control.

You can preview the policy and blocking process documents and ensure your extensions abide by them to avoid any disruption. If you have questions about these updated policies or would like to provide feedback, please post to this forum thread.

 

May 4, 2019 9:09 AM PST update: A certificate expired yesterday and has caused add-ons to stop working or fail to install. This is unrelated to the policy changes. We will be providing updates about the certificate issue in other posts on this blog.

9:55 am PST: Because a lot of comments on this post are related to the certificate issue, we are temporarily turning off comments for this post. 

The post Add-on Policy and Process Updates appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Socorro: April 2019 happenings

Mozilla planet - do, 02/05/2019 - 12:00
Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

This blog post summarizes Socorro activities in April.

Read more… (6 min remaining to read)

Categorieën: Mozilla-nl planet

Axel Hecht: Migrate to Fluent

Mozilla planet - do, 02/05/2019 - 10:24
Introduction

A couple of weeks ago the Localization Team at Mozilla released the Fluent Syntax specification. As mentioned in our announcement, we already have over 3000 Fluent strings in Firefox. You might wonder how we introduced Fluent to a running project. In this post I’ll detail how the design of Fluent plays into that effort, and how we pulled it off.

Fluent’s Design for Simplicity

Fluent abstracts away the complexities of human languages from programmers. At the same time, Fluent makes easy things easy for localizers, while making complex things possible.

When you migrate a project to Fluent, you build on both of those design principles. You will simplify your code, and move the string choices from your program into the Fluent files. Only then do you expose Fluent to localizers, to actually take advantage of the capabilities of Fluent and to perfect the localizations of your project.

Fluent’s Layered Design

When building runtime implementations, we created several layers to tightly own particular tasks.

  1. Fluent source files are parsed into Resources.
  2. Multiple resources are aggregated in Bundles, which expose APIs to resolve single strings. Message and Term references resolve inside Bundles, but not necessarily inside Resources. A Bundle is associated with a single language, as well as fallback languages for i18n libraries.
  3. Language negotiation and language fallback happen in the Localization level. Here you’d implement that someone looking for Frisian would get a Frisian string. If that’s missing or has a runtime problem, you might want to try Dutch, and then English.
  4. Bindings use the Localization API, and integrate it into the development stack. They marshal data models from the programming language into Fluent data models like strings, numbers, and dates. Declarative bindings also apply the localizations to the rendered UI.
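
To make the layering concrete, here is a minimal sketch using the python-fluent runtime (the fluent.runtime package). The message ID, locale, and argument are illustrative, not taken from Firefox:

# Layer 1: parse Fluent source into a Resource.
from fluent.runtime import FluentBundle, FluentResource

resource = FluentResource("hello = Hello, { $user }!")

# Layer 2: aggregate Resources in a Bundle tied to a single language,
# then resolve a single string through the Bundle API.
bundle = FluentBundle(["en-US"], use_isolating=False)
bundle.add_resource(resource)

message = bundle.get_message("hello")
text, errors = bundle.format_pattern(message.value, {"user": "World"})
print(text)    # Hello, World!
print(errors)  # an empty list when everything resolved cleanly

Layer 3 corresponds to classes like FluentLocalization in the same package, which take an ordered locale list such as ["fy-NL", "nl", "en"] plus resource IDs and a loader, and fall back through that list as described above.
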
Invest in Bindings

Bindings integrate Fluent into your development workflow. For Firefox, we focused on bindings to generate localized DOM. We also have bindings for React. These bindings determine how fluent Fluent feels to developers, but also how much Fluent can help with handling the localized return values. To give an example, integrating Fluent into Android app development would probably focus on a LayoutInflater. In the bindings we use at Mozilla, we decided to localize as close to the actual display of the strings as possible.

If you have declarative UI generation, you want to look into a declarative binding for Fluent. If your UI is generated programmatically, you want a programmatic binding.

The Localization classes also integrate IO into your application runtime, and making the right choices here has a strong impact on performance characteristics: not just speed, but also whether untranslated strings are briefly shown.

Migrate your Code

Migrating your code will often be a trivial change from one API to another. Most of your code will get a string and show it, after all. You might convert several different APIs into just one in Fluent, in particular dedicated plural APIs will go away.

You will also move platform-specific terminology into the localization side, removing conditional code. You should also be able to stop stitching several localized strings together in your application logic.
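
To make the plural point concrete, here is a hedged sketch of what “dedicated plural APIs will go away” means in practice: the selection logic moves into the Fluent string, and the calling code just passes a number. The message ID and wording are hypothetical.

# The plural decision lives in the Fluent file, not in application code.
from fluent.runtime import FluentBundle, FluentResource

bundle = FluentBundle(["en-US"], use_isolating=False)
bundle.add_resource(FluentResource("""
downloads-count = { $num ->
    [one] One download
   *[other] { $num } downloads
}
"""))

message = bundle.get_message("downloads-count")
for num in (1, 5):
    text, _errors = bundle.format_pattern(message.value, {"num": num})
    print(text)  # "One download", then "5 downloads"

Localizers can later add or remove variants for their language without any code change, which is exactly the point.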

As we’ll go through the process here, I’ll show an example of a sentence with a link. The project wants to be really sure the link isn’t broken, so it’s not exposed to localizers at all. This is shortened from an actual example in Firefox, where we link to our privacy policy. We’ll convert to DOM overlays, to separate localizable and non-localizable aspects of the DOM in Fluent. Let’s just look at the HTML code snippet now, and look at the localizations later.

Before:

<li>&msg-start;<a href="https://example.com">&msg-middle;</a>&msg-end;</li>

After:

<li data-l10n-id="msg"><a href="https://example.com" data-l10n-name="msg-link"></a></li>

Migrate your Localizations

Last but not least, we’ll want to migrate the localizations. While migrating code is work, losing all your existing localizations is just outright a bad idea.

For our work on Firefox, we use a Python package named fluent.migrations. It’s building on top of the fluent.syntax package, and programmatically creates Fluent files from existing localizations.

It allows you to copy and paste existing localizations into a Fluent string for the simplest cases. It also concatenates several strings into a single result, as you used to do in your code. For these very simple cases, it even uses Fluent syntax, with specialized global functions to copy strings.

Example:

msg = {COPY(from_path,"msg-start")}<a data-l10n-name="msg-link">{COPY(from_path,"msg-middle")}</a>{COPY(from_path,"msg-end")}

Then there are a bit more complicated tasks, notably involving variable references. Fluent only supports its built-in variable placement, so you need to migrate away from printf and friends. That involves first normalizing the various ways that a printf parameter can be formatted and placed; then the code can do a simple replacement of text like %2$S with a Fluent variable reference like { $user-name }.

We also have logic to read our Mozilla-specific plural logic from legacy files, and to write them out as select-expressions in Fluent, with a variant for each plural form.

These transforms are implemented as pseudo nodes in a template AST, which is then evaluated against the legacy translations and creates an actual AST, which can then be serialized.
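
For a sense of what these recipes look like, here is a hedged sketch in the style of the Firefox migration recipes. The file paths, message IDs, and legacy keys are made up; REPLACE, VARIABLE_REFERENCE, and transforms_from are the actual building blocks the package provides, and COPY is recognized inside the FTL snippet passed to transforms_from.

# A sketch of a fluent.migrate recipe; all paths and string IDs are invented.
import fluent.syntax.ast as FTL
from fluent.migrate import REPLACE
from fluent.migrate.helpers import VARIABLE_REFERENCE, transforms_from

def migrate(ctx):
    """Migrate an example DTD entity and a printf-style .properties string."""
    # Simple case: copy a legacy entity verbatim into a Fluent message.
    ctx.add_transforms(
        "browser/example.ftl",
        "browser/example.ftl",
        transforms_from(
            'msg-title = { COPY(from_path, "msgTitle") }',
            from_path="browser/example.dtd",
        ),
    )

    # printf case: %1$S in the legacy string becomes { $user-name }.
    ctx.add_transforms(
        "browser/example.ftl",
        "browser/example.ftl",
        [
            FTL.Message(
                id=FTL.Identifier("msg-greeting"),
                value=REPLACE(
                    "browser/example.properties",
                    "msg.greeting",
                    {"%1$S": VARIABLE_REFERENCE("user-name")},
                ),
            ),
        ],
    )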

Concluding our example, before:

<!ENTITY msg-start "This is a link to an ">
<!ENTITY msg-middle "example">
<!ENTITY msg-end " site.">

After:

msg = This is a link to an <a data-l10n-name="msg-link">example</a> site.

Find out more about this package and its capabilities in the documentation.

Given that we’re open source, we also want to carry over attribution. Thus our code not only migrates all the data, but also splits the migration into individual commits, one for each author of the migrated translations.

Once the baseline is migrated, localizers can dive in and improve. They can then start using parameterized Terms to adjust grammar, for example. Or add a plural form where English didn’t need one. Or introduce a platform-specific terminology that only exists in their language.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: May’s featured extensions

Mozilla planet - do, 02/05/2019 - 02:04


Pick of the Month: Google Translator for Firefox

by nobzol
Sleek translation tool. Just highlight text, hit the toolbar icon and your translation appears right there on the web page itself. You can translate selected text (up to 1100 characters) or the entire page.

Bonus feature: the context menu presents an option to search your highlighted word or phrase on Wikipedia.

“Very easy to use, correct translation of all texts.”

Featured: Google Container

by Perflyst
Isolate your Google identity into a container. Make it difficult for Google to track your moves around the web.

(NOTE: Though similarly titled to Mozilla’s Facebook Container and Multi-Account Containers, this extension is not affiliated with Mozilla.)

“Thanks a lot for making this. Works great! I’m only sorry I did not find this extension sooner.”

The post May’s featured extensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Owning it: browser compatibility data and open source governance

Mozilla planet - wo, 01/05/2019 - 16:54

What does it mean to “own” an open-source project? With the browser-compat-data project (“BCD”), the MDN (Mozilla Developer Network) community and I recently had the opportunity to find out.

In 2017, the MDN Web Docs team invited me to work on what was described to me as a small, but growing project (previously on Hacks). The little project had a big goal: to provide detailed and reliable structured data about what Web platform features are supported by different browsers. It sounded ambitious, but my part was narrow: convert hand-written HTML compatibility tables on MDN into structured JSON data.

As a technical writer and consultant, it was an unusual project to get to work on. Ordinarily, I look at data and code and use them to write words for people. For BCD, I worked in the opposite direction: reading what people wrote and turning it into structured data for machines. But I think I was most excited at the prospect of working on an open source project with a lot of reach, something I’d never done before.

Plus the project appealed to my sense of order and tidiness. Back then, most of the compatibility tables looked something like this:

A screenshot of a cluttered, inconsistent table of browser support for the CSS linear-gradient feature

In their inconsistent state, they couldn’t be updated in bulk and couldn’t be redesigned without modifying thousands upon thousands of pages on MDN. Instead, we worked to liberate the data in the tables to a structured, validated JSON format that we could publish in an npm package. With this change, new tables could be generated and other projects could use the data too.

A screenshot of a tidy, organized table of browser support for the CSS linear-gradient feature
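
For a rough idea of the structure behind those tidy tables, a single entry in the published data set looks something like the sketch below, rendered here as a Python literal. The keys follow the BCD schema ("__compat", "support", "version_added", "status"), but the version numbers are illustrative; consult the npm package for real values.

# One BCD-style feature entry; version numbers are illustrative only.
import json

bcd_entry = {
    "css": {
        "types": {
            "linear-gradient": {
                "__compat": {
                    "support": {
                        "chrome": {"version_added": "26"},
                        "firefox": {"version_added": "16"},
                        "safari": {"version_added": "7"},
                    },
                    "status": {
                        "experimental": False,
                        "standard_track": True,
                        "deprecated": False,
                    },
                }
            }
        }
    }
}

# The generated tables are built from entries like this one.
support = bcd_entry["css"]["types"]["linear-gradient"]["__compat"]["support"]
print(json.dumps(support, indent=2))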

Since then, the project has grown considerably. If there was a single inflection point, it was the Hack on MDN event in Paris, where we met in early 2018 to migrate more tables, build new tools, and play with the data. In the last year and a half, we’ve accomplished so many things, including replacing the last of the legacy tables on MDN with shiny, new BCD-derived tables, and seeing our data used in Visual Studio Code.

Building a project to last

We couldn’t have built BCD into what it is now without the help of the hundreds of new contributors that have joined the project. But some challenges have come along with that growth. My duties shifted from copying data into the repository to reviewing others’ contributions, learning about the design of the schema, and hacking on supporting tools. I had to learn so much about being a thoughtful, helpful guide for new and established contributors alike. But the increased size of the project also put new demands on the project as a whole.

Florian Scholz, the project leader, took on answering a question key to the long-term sustainability of the project: how do we make sure that contributors can be more than mere inputs, and can really be part of the project? To answer that question, Florian wrote and helped us adopt a governance document that defines how any contributor—not just MDN staff—can become an owner of the project.

Inspired by the JS Foundation’s Technical Advisory Committee, the ESLint project, and others, BCD’s governance document lays out how contributors can become committers (known as peers), how important decisions are made by the project leaders (known as owners), and how to become an owner. It’s not some stuffy rule book about votes and points of order; it speaks to the project’s ambition of being a community-led project.

Since adopting the governance document, BCD has added new peers from outside Mozilla, reflecting how the project has grown into a cross-browser community. For example, Joe Medley, a technical writer at Google, has joined us to help add and confirm data about Google Chrome. We’ve also added one new owner: me.

If I’m being honest, not much has changed: peers and owners still review pull requests, still research and add new data, and still answer a lot of questions about BCD, just as before. But with the governance document, we know what’s expected and what we can do to guide others on the journey to project ownership, like I experienced. It’s reassuring to know that as the project grows so too will its leadership.

More to come

We accomplished a lot in the past year, but our best work is ahead. In 2019, we have an ambitious goal: get 100% real data for Firefox, Internet Explorer, Edge, Chrome, Safari, mobile Safari, and mobile Chrome for all Web platform features. That means data about whether or not any feature in our data set is supported by each browser and, if it is, in what version it first appeared. If we achieve our goal, BCD will be an unparalleled resource for Web developers.

But we can’t achieve this goal on our own. We need to fill in the blanks, by testing and researching features, updating data, verifying pull requests, and more. We’d love for you to join us.

The post Owning it: browser compatibility data and open source governance appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet
