
Mozilla Privacy Blog: Governments should work to strengthen online security, not undermine it

Mozilla planet - Mon, 16/09/2019 - 23:24

On Friday, Mozilla filed comments in a case brought by Privacy International in the European Court of Human Rights involving government “computer network exploitation” (“CNE”)—or, as it is more colloquially known, government hacking.

While the case focuses on the direct privacy and freedom of expression implications of UK government hacking, Mozilla intervened in order to showcase the further, downstream risks to users and internet security inherent in state CNE. Our submission highlights the security and related privacy threats from government stockpiling and use of technology vulnerabilities and exploits.

Government CNE relies on the secret discovery or introduction of vulnerabilities—i.e., bugs in software, computers, networks, or other systems that create security weaknesses. “Exploits” are then built on top of the vulnerabilities. These exploits are essentially tools that take advantage of vulnerabilities in order to overcome the security of the software, hardware, or system for purposes of information gathering or disruption.

When such vulnerabilities are kept secret, they can’t be patched by companies, and the products containing the vulnerabilities continue to be distributed, leaving people at risk. The problem arises because no one—including government—can perfectly secure information about a vulnerability. Vulnerabilities can be and are independently discovered by third parties and inadvertently leaked or stolen from government. In these cases where companies haven’t had an opportunity to patch them before they get loose, vulnerabilities are ripe for exploitation by cybercriminals, other bad actors, and even other governments,1 putting users at immediate risk.

This isn’t a theoretical concern. For example, the findings of one study suggest that within a year, vulnerabilities undisclosed by a state intelligence agency may be rediscovered up to 15% of the time.2 Also, one of the worst cyber attacks in history was caused by a vulnerability and exploit stolen from the NSA in 2017 that affected computers running Microsoft Windows.3 The devastation wreaked through use of that tool continues apace today.4

This example also shows how damaging it can be when vulnerabilities impact products that are in use by tens or hundreds of millions of people, even if the actual government exploit was only intended for use against one or a handful of targets.

As more and more of our lives are connected, governments and companies alike must commit to ensuring strong security. Yet state CNE significantly contributes to the prevalence of vulnerabilities that are ripe for exploitation by cybercriminals and other bad actors and can result in serious privacy and security risks and damage to citizens, enterprises, public services, and governments. Mozilla believes that governments can and should contribute to greater security and privacy for their citizens by minimizing their use of CNE and disclosing vulnerabilities to vendors as they find them.

2 Rediscovery (belfer-revision).pdf

The post Governments should work to strengthen online security, not undermine it appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

William Lachance: mozregression update: python 3 edition

Mozilla planet - Mon, 16/09/2019 - 17:29

For those who are still wondering, yup, I am still maintaining mozregression, though increasingly reluctantly. Given how important this project is to the development of Firefox (getting a regression window using mozregression is standard operating procedure whenever a new bug is reported in Firefox), it feels like this project is pretty vital, so I continue out of some sense of obligation — but really, someone more interested in Mozilla’s build, automation and testing systems would be better suited to this task: over the past few years, my interests/focus have shifted away from this area to building up Mozilla’s data storage and visualization platform.

This post will describe some of the things that have happened in the last year and where I see the project going. My hope is to attract some new blood to add some needed features to the project and maybe take on some of the maintainership duties.

python 3

The most important update is that, as of today, the command-line version of mozregression (v3.0.1) should work with python 3.5+. modernize did most of the work for us, though there were some unit tests that needed updating: special thanks to @gloomy-ghost for helping with that.

For now, we will continue to support python 2.7 in parallel, mainly because the GUI has not yet been ported to python 3 (more on that later) and we have CI to make sure it doesn’t break.

other updates

The last year has mostly been one of maintenance. Thanks in particular to Ian Moody (:kwan) for his work throughout the year — including patches to adapt mozregression support to our new updates policy and shippable builds (bug 1532412), and Kartikaya Gupta (:kats) for adding support for bisecting the GeckoView example app (bug 1507225).

future work

There are a bunch of things I see us wanting to add or change with mozregression over the next year or so. I might get to some of these if I have some spare cycles, but probably best not to count on it:

  • Port the mozregression GUI to Python 3 (bug 1581633). As mentioned above, the command-line client works with python 3, but we have yet to port the GUI. We should do that. This probably also entails porting the GUI to use PyQt5 (which is pip-installable and thus much easier to integrate into a CI process), see bug 1426766.
  • Make self-contained GUI builds available for MacOS X (bug 1425105) and Linux (bug 1581643).
  • Improve our mechanism for producing a standalone version of the GUI in general. We’ve used cx_Freeze which mostly works ok, but has a number of problems (e.g. it pulls in a bunch of unnecessary dependencies, which bloats the size of the installer). Upgrading the GUI to use python 3 may alleviate some of these issues, but it might be worth considering other options in this space, like Gregory Szorc’s pyoxidizer.
  • Add some kind of telemetry to mozregression to measure usage of this tool (bug 1581647). My anecdotal experience is that this tool is pretty invaluable for Firefox development and QA, but this is not immediately apparent to Mozilla’s leadership and it’s thus very difficult to convince people to spend their cycles on maintaining and improving this tool. Field data may help change that story.
  • Supporting new Mozilla products which aren’t built (entirely) out of mozilla-central, most especially Fenix (bug 1556042) and Firefox Reality (bug 1568488). This is probably rather involved (mozregression has a big pile of assumptions about how the builds it pulls down are stored and organized) but that doesn’t mean that this work isn’t necessary.

If you’re interested in working on any of the above, please feel free to dive in on one of the above bugs. I can’t offer formal mentorship, but am happy to help out where I can.


William Lachance: Time for some project updates

Mozilla planet - Mon, 16/09/2019 - 16:41

I’ve been a bit bad about updating this blog over the past year or so, though this hasn’t meant there haven’t been things to talk about. For the next couple weeks, I’m going to try to give some updates on the projects I have been spending time on in the past year, both old and new. I’m going to begin with some of the less-loved things I’ve been working on, partially in an attempt to motivate some forward-motion on things that I believe are rather important to Mozilla.

More to come.


Armen Zambrano: A web performance issue

Mozilla planet - Fri, 13/09/2019 - 19:03

Back in July and August, I was looking into a performance issue in Treeherder. Treeherder is a Django app running on Heroku with a MySQL database via RDS. This post will cover some knowledge gained while investigating the performance issue and the solutions for it.

NOTE: Some details have been skipped to help the readability of this post. It’s a long read as it is!


Treeherder is a public site mainly used by Mozilla staff. It’s used to determine if engineers have introduced code regressions on Firefox and other products. The performance issue that I investigated would make the site unusable for a long period of time (a few minutes to 20 minutes) multiple times per week. An outage like this would require blocking engineers from pushing new code since it would be practically impossible to determine the health of the code tree during an outage. In other words, the outages would keep “the trees” closed for business. You can see the tracking bug for this work here.

The smoking gun

On June 18th during Mozilla’s All Hands conference, I received a performance alert and decided to investigate it. I decided to use New Relic; it was my first time using it, and also my first time investigating a performance issue in a complex website. New Relic made it easy and intuitive to get to what I wanted to see.

<figcaption>JobsViewSet API affected by MySQL job selection’s slow down</figcaption>

The UI slowdowns came from API slowdowns (and timeouts) due to database slowdowns. The API that was most affected was the JobsViewSet API, which is heavily used by the front-end developers. The spike shown on the graph above was rather anomalous. After some investigation I found that a developer had unintentionally pushed code with a command that would trigger an absurd number of performance jobs. A developer would normally request one performance job per code push rather than ten. As these jobs finished (very close together in time), their performance data would be inserted into the database and make the DB crawl.

<figcaption>Normally you would see 1 letter per performance job instead of 10</figcaption>

Since I was new to the team and the code-base, I tried to get input from the rest of my coworkers. We discussed using Django’s bulk_create to reduce the impact on the DB. I was not completely satisfied with the solution because we did not yet understand the root issue. From my Release Engineering years I remembered that you need to find the root issue or you’re just putting a band-aid on that will fall off sooner or later. Treeherder’s infrastructure had a limitation somewhere and a code change might only solve the problem temporarily. We would hit a different performance issue down the road. A fix at the root of the problem was required.

Gaining insight

I knew I needed proper insight as to what was happening plus an understanding of how each part of the data ingestion pipeline worked together. In order to know these things I needed metrics, and New Relic helped me to create a custom dashboard.

<figcaption>A few graphs from the custom New Relic dashboard</figcaption>

Similar set-up to test fixes

I made sure that the Heroku and RDS set-up between production and stage were as similar as possible. This is important if you want to try changes on stage first, measure it, and compare it with production.

For instance, I requested changes to the EC2 instance types, upgrading to the current M5 instance types. I can’t find the exact Heroku changes that I made, but I made the various ingestion workers similar in type and in number.

Consult others

I had a very primitive knowledge of MySQL at scale, and I knew that I would have to lean on others to understand the potential solutions. I want to thank dividehex, coop and ckolos for all their time spent listening and all the knowledge they shared with me.

The cap you didn’t know you have

After reading a lot of documentation about Amazon’s RDS set-up, I determined that the slowdowns in the database were related to IOPS spikes. Amazon gives you 3 IOPS per GB, and with 1 terabyte of storage we had 3,000 IOPS as our baseline. The graph below shows that at times we would get above that max baseline.

<figcaption>CloudWatch graph showing Treeherder’s SUM of read & write IOPS operations</figcaption>
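The baseline arithmetic described above is easy to check in a couple of lines (the 3 IOPS/GB figure is the one quoted for General Purpose SSD storage; the function name is just for illustration):

```javascript
// General Purpose (gp2) SSD storage: baseline of 3 IOPS per GB allocated.
const IOPS_PER_GB = 3;

function baselineIops(storageGb) {
  return storageGb * IOPS_PER_GB;
}

console.log(baselineIops(1000)); // 1 TB of storage -> 3000 IOPS baseline
console.log(baselineIops(2000)); // doubling the storage doubles the baseline to 6000
```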

To increase the IOPS baseline we could either increase the storage size or switch from General SSD to Provisioned IOPS storage. The cost of the different storage type was much higher, so we decided to double our storage, thus doubling our IOPS baseline. You can see in the graph below that we’re constantly above our previous baseline. This change helped Treeherder’s performance a lot.

<figcaption>Graph after doubling Treeherder’s IOPS limit</figcaption>

In order to prevent getting into such a state in the future, I also created a CloudWatch alert. We would get alerted if the combined IOPS is greater than 5,700 IOPS for 6 datapoints within 10 minutes.
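As a rough illustration, an alarm along those lines could be defined with the AWS SDK for JavaScript (v3), using metric math to sum the ReadIOPS and WriteIOPS metrics. This is a hedged sketch rather than Treeherder's actual configuration; the instance identifier, helper name, and the use of one-minute Average datapoints are assumptions:

```javascript
// Build PutMetricAlarm parameters for a combined read+write IOPS alarm:
// alarm when the sum exceeds the threshold for 6 datapoints within 10 minutes.
function combinedIopsAlarmParams(dbInstanceId, threshold = 5700) {
  const metric = (id, metricName) => ({
    Id: id,
    MetricStat: {
      Metric: {
        Namespace: 'AWS/RDS',
        MetricName: metricName,
        Dimensions: [{ Name: 'DBInstanceIdentifier', Value: dbInstanceId }],
      },
      Period: 60,      // one-minute datapoints
      Stat: 'Average',
    },
    ReturnData: false, // only the combined expression feeds the alarm
  });
  return {
    AlarmName: `${dbInstanceId}-combined-iops`,
    ComparisonOperator: 'GreaterThanThreshold',
    Threshold: threshold,
    EvaluationPeriods: 10, // look at 10 datapoints (10 minutes)...
    DatapointsToAlarm: 6,  // ...and alarm when 6 of them breach
    Metrics: [
      metric('reads', 'ReadIOPS'),
      metric('writes', 'WriteIOPS'),
      { Id: 'total', Expression: 'reads + writes', Label: 'Combined IOPS', ReturnData: true },
    ],
  };
}

// Usage sketch (assumes @aws-sdk/client-cloudwatch is installed):
// const { CloudWatchClient, PutMetricAlarmCommand } = require('@aws-sdk/client-cloudwatch');
// await new CloudWatchClient({}).send(
//   new PutMetricAlarmCommand(combinedIopsAlarmParams('treeherder-prod')));
```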

Auto Scaling

One of the problems with Treeherder’s UI is that it hits the backend quite heavily. The load on the backend depends on the number of users using the site, the number of pushes that are in view, and the number of jobs that each push has.

Fortunately, Heroku allows auto scaling for web nodes. This required upgrading from the Standard 2x nodes to the Performance nodes. Configuring the auto scaling is very simple as you can see in the screenshot below. All you have to do is define the minimum and maximum number of nodes, plus the threshold after which you want new nodes to be spun up.

<figcaption>Heroku’s auto scaling feature in display</figcaption>

Final words

Troubleshooting this problem was quite a learning experience. I learned a lot about the project, the monitoring tools available, the RDS set up, Treeherder’s data pipeline, the value of collaboration and the importance of measuring.

I don’t want to end this post without mentioning that this investigation was far less painful than it could have been because of the great New Relic set up. This is something that Ed Morley accomplished while at Mozilla, and we should be very grateful that he did.


Mozilla VR Blog: Creating privacy-centric virtual spaces

Mozilla planet - Thu, 12/09/2019 - 23:02

We now live in a world with instantaneous communication unrestrained by geography. While a generation ago, we would be limited by the speed of the post, now we’re limited by the speed of information on the Internet. This has changed how we connect with other people.

As immersive devices become more affordable, social spaces in virtual reality (VR) will become more integrated into our daily lives and interactions with friends, family, and strangers. Social media has enabled rapid pseudonymous communication, which can be directed at both a single person and large groups. If social VR is the next evolution of this, what approaches will result in spaces that respect user identities, autonomy, and safety?

We need spaces that reflect how we interact with others on a daily basis.

Social spaces: IRL and IVR

Often, when people think about social VR, what tends to come to mind are visions from the worlds of science fiction stories: Snow Crash, Ready Player One, The Matrix - huge worlds that involve thousands of strangers interacting virtually on a day to day basis. In today’s social VR ecosystem, many applications take a similarly public approach: new users are often encouraged (or forced) by the system to interact with new people in the name of developing relationships with strangers who are also participating in the shared world. This can result in more dynamic and populated spaces, but in a way that isn’t inherently understood from our regular interactions.

This approach doesn’t mirror our usual day-to-day experiences—instead of spending time with strangers, we mostly interact with people we know. Whether we’re in a private, semi-public, or public space, we tend to stick to familiarity. We can define the privacy of space by thinking about who has access to a location, and the degree to which there is established trust among other people you encounter there.

  • Private: a controlled space where all individuals are known to each other. In the physical world, your home is an example of a private space—you know anyone invited into your home, whether they’re a close associate, or a passing acquaintance (like a plumber)
  • Semi-public: a semi-controlled space where all individuals are associated with each other. For example, you might not know everyone in your workplace, but you’re all connected via your employer
  • Public: a public space made up of a lot of different, separate groups of people who might not have established relationships or connections. In a restaurant, while you know the group you’re dining with, you likely don’t know anyone else


While we might encounter strangers in public or semi-public spaces, most of our interactions are still with people we know. This should extend to the virtual world. However, VR devices haven’t been widely available until recently, so most companies building virtual worlds have designed their spaces in a way that prioritizes getting people in the same space, regardless of whether or not those users already know each other.

For many social VR systems, the platform hosting spaces often networks different environments and worlds together and provides a centralized directory of user-created content to go explore. While this type of discovery has benefits and values, in the physical world, we largely spend time with the same people from day to day. Why don’t we design a social platform around this?

Mozilla Hubs is a social VR platform created to provide spaces that more accurately emulate our IRL interactions. Instead of hosting a connected, open ecosystem, users create their own independent, private-by-default rooms. This creates a world where instead of wandering into others’ spaces, you intentionally invite people you know into your space.

Private by default

Communities and societies often establish their own cultural norms, signals, inside jokes, and unspoken (or written) rules — these carry over to online spaces. It can be difficult for people to be thrown into brand-new groups of users without this understanding, and there are often no guarantees that the people you’ll be interacting with in these public spaces will be receptive to other users who are joining. In contrast to these public-first platforms, we’ve designed our social VR platform, Hubs, to be private by default. This means that instead of being in an environment with strangers from the outset, Hubs rooms are designed to be private to the room owner, who can then choose who they invite into the space with the room access link.

Protecting public spaces

When we’re in public spaces, we have different sets of implied rules than the social norms that we might abide by when we’re in private. In virtual spaces, these rules aren’t always as clear and different people will behave differently in the absence of established rules or expectations. Hubs allows communities to set up their own public spaces, so that they can bring their own social norms into the spaces. When people are meeting virtually, it’s important to consider the types of interactions that you’re encouraging.

Because access to a Hubs room is predicated on having the invitation URL, the degree to which that link is shared by the room owner or visitors to the room will dictate how public or private a space is. If you know that the only people in a Hubs room are you and two of your closest friends, you probably have a pretty good sense of how the three of you interact together. If you’re hosting a meetup and expecting a crowd, those behaviors can be less clear. Without intentional community management practices, virtual spaces can turn ugly. Here are some things that you could consider to keep semi-public or public Hubs rooms safe and welcoming:

  • Keep the distribution of the Hubs link limited to known groups of trusted individuals and consider using a form or registration to handle larger events.
  • Use an integration like the Hubs Discord Bot to tie users to a known identity. Removing users from a linked Discord channel also removes their ability to enter an authenticated Hubs room.
  • Lock down room permissions to limit who can create new objects in the room or draw with the pen tool.
  • Create a code of conduct or set of community guidelines that are posted in the Hubs room near the room entry point.
  • Assign trusted users to act as moderators who are available in the space to help welcome visitors and enforce positive conduct.

We need social spaces that respect and empower participants. Here at Mozilla, we’re creating a platform that more closely reflects how we interact with others IRL. Our spaces are private by default, and Hubs allows users to control who enters their space and how visitors can behave.

Mozilla Hubs is an open source social VR platform—come try it out or contribute here.

Read more about safety and identity in Hubs here.


Mozilla VR Blog: Multiview on WebXR

Mozilla planet - Thu, 12/09/2019 - 20:36

The WebGL multiview extension is already available in several browsers and 3D web engines, and it can easily help improve the performance of your WebXR application.

What is multiview?

When VR first arrived, many engines supported stereo rendering by running all the render stages twice, once for each camera/eye. While this works, it is highly inefficient.

for (eye in eyes) renderScene(eye)

where renderScene sets up the viewport, shaders, and state every time it is called. This doubles the cost of rendering every frame.
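Concretely, such a naive two-pass loop might look like the following WebGL sketch (the scene and object fields, and the side-by-side viewport layout, are illustrative assumptions rather than any particular engine's API):

```javascript
// Compute the side-by-side viewport for one eye (left half / right half).
function eyeViewport(eye, width, height) {
  return eye === 'left'
    ? { x: 0, y: 0, w: width / 2, h: height }
    : { x: width / 2, y: 0, w: width / 2, h: height };
}

// Re-runs every render stage for each eye: viewport, program, uniforms, draw.
function renderScene(gl, scene, eye) {
  const vp = eyeViewport(eye, gl.drawingBufferWidth, gl.drawingBufferHeight);
  gl.viewport(vp.x, vp.y, vp.w, vp.h);
  for (const object of scene.objects) {
    gl.useProgram(object.program); // state change per object, per eye
    gl.uniformMatrix4fv(object.viewMatrixLoc, false,
      eye === 'left' ? scene.leftViewMatrix : scene.rightViewMatrix);
    gl.bindVertexArray(object.vao);
    gl.drawArrays(gl.TRIANGLES, 0, object.vertexCount); // 2 × objects draw calls per frame
  }
}

function drawFrame(gl, scene) {
  for (const eye of ['left', 'right']) renderScene(gl, scene, eye);
}
```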

Later on, some optimizations started to appear in order to improve the performance and minimize the state changes.

for (object in scene)
  for (eye in eyes)
    renderObject(object, eye)

Even if we reduce the number of state changes, by switching programs and grouping objects, the number of draw calls remains the same: two times the number of objects.

In order to minimize this bottleneck, the multiview extension was created. The TL;DR of this extension is: using just one draw call you can draw to multiple targets, reducing the overhead per view.


This is done by modifying your shader uniforms with the information for each view and accessing them with the gl_ViewID_OVR, similar to how the Instancing API works.

#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;

in vec4 inPos;
uniform mat4 u_viewMatrices[2];

void main() {
  gl_Position = u_viewMatrices[gl_ViewID_OVR] * inPos;
}

The resulting render loop with the multiview extension will look like:

for (object in scene)
  setUniformsForBothEyes() // Left/Right camera matrices
  renderObject(object)
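On the JavaScript side, a setUniformsForBothEyes step could upload both eyes' matrices into the u_viewMatrices[2] uniform array in a single call. This is a hedged sketch; the helper and field names are illustrative:

```javascript
// Upload both eyes' view matrices into the shader's u_viewMatrices[2] array.
// leftView/rightView are assumed to be 16-element Float32Arrays (column-major).
function setUniformsForBothEyes(gl, program, leftView, rightView) {
  const loc = gl.getUniformLocation(program, 'u_viewMatrices');
  const both = new Float32Array(32);
  both.set(leftView, 0);   // fills u_viewMatrices[0]
  both.set(rightView, 16); // fills u_viewMatrices[1]
  gl.uniformMatrix4fv(loc, false, both); // one upload for both views
}
```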

This extension can be used to improve multiple tasks, such as cascaded shadow maps, rendering cubemaps, or rendering multiple viewports as in CAD software, although the most common use case is stereo rendering.

Stereo rendering is also our main target, as this will improve the VR rendering path performance with just a few modifications in a 3D engine. Currently, most headsets have two views, but there are prototypes of headsets with ultra-wide FOV using 4 views, which is currently the maximum number of views supported by multiview.

Multiview in WebGL

Once the OpenGL OVR_multiview2 specification was created, the WebGL working group started to make a WebGL version of this API.

It’s been a while since our first experiment supporting multiview on Servo and three.js. Back then it was quite a challenge to support WEBGL_multiview: it was based on opaque framebuffers and it was possible to use it with WebGL1, but the shaders needed to be compiled with GLSL 3.00 support, which was only available in WebGL2, so some hacks on the Servo side were needed in order to get it running.
At that time the WebVR spec had a proposal to support multiview but it was not approved.

Thanks to the work of the WebGL WG, the multiview situation has improved a lot in the last few months. The specification is already in the Community Approved status, which means that browsers can ship it enabled by default (as we do in Firefox Desktop 70 and Firefox Reality 1.4).

Some important restrictions of the final specification to notice:

  • It only supports WebGL2 contexts, as it needs GLSL 3.00 and texture arrays.
  • Currently there is no way to use multiview to render to a multisampled backbuffer, so you should create contexts with antialias: false. (The WebGL WG is working on a solution for this)
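Within those restrictions, a minimal setup sketch might look like this (framebufferTextureMultiviewOVR is part of the OVR_multiview2 extension; the texture format, sizes, and helper name are assumptions for illustration):

```javascript
// Sketch: set up an OVR_multiview2 framebuffer for two views.
// Assumes a WebGL2 context created with { antialias: false }, per the
// restrictions above.
function createMultiviewFramebuffer(gl, width, height, numViews = 2) {
  const ext = gl.getExtension('OVR_multiview2');
  if (!ext) return null; // extension unavailable: fall back to two-pass rendering

  // Color attachment: a 2D texture array with one layer per view.
  const colorTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, width, height, numViews);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fbo);
  // Attach layers [0, numViews); one draw call now renders into all views.
  ext.framebufferTextureMultiviewOVR(
    gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorTex, 0, 0, numViews);

  return { fbo, colorTex, ext };
}
```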
Web engines with multiview support

We have been working for a while on adding multiview support to three.js (PR). Currently it is possible to get the benefits of multiview automatically as long as the extension is available and you define a WebGL2 context without antialias:

var context = canvas.getContext( 'webgl2', { antialias: false } );
renderer = new THREE.WebGLRenderer( { canvas: canvas, context: context } );

You can see a three.js example using multiview here (source code).

A-Frame is based on three.js so they should get multiview support as soon as they update to the latest release.

Babylon.js has had support for OVR_multiview2 already for a while (more info).

For details on how to use multiview directly without using any third party engine, you could take a look at the three.js implementation, see the specification’s sample code, or read this tutorial by Oculus.

Browsers with multiview support

The extension was only recently approved by the Community, so we expect to see all the major browsers adding support for it by default soon.

  • Firefox Desktop: Firefox 71 will support multiview enabled by default. In the meantime, you can test it on Firefox Nightly by enabling draft extensions.
  • Firefox Reality: It’s already enabled by default since version 1.3.
  • Oculus Browser: It’s implemented but disabled by default, you must enable Draft WebGL Extension preference in order to use it.
  • Chrome: You can use it on Chrome Canary for Windows by running it with the following command line parameters: --use-cmd-decoder=passthrough --enable-webgl-draft-extensions
Performance improvements

Most WebGL or WebXR applications are CPU bound: the more objects you have in the scene, the more draw calls you will submit to the GPU. In our benchmarks for stereo rendering with two views, we got a consistent improvement of ~40% compared to traditional rendering.
As you can see on the following chart, the more cubes (draw calls) you have to render, the bigger the performance gain.

What’s next?

The main drawback when using the current multiview extension is that there is no way to render to a multisampled backbuffer. In order to use it with WebXR you should set antialias: false when creating the context. However, this is something the WebGL WG is working on.

As soon as they come up with a proposal and it is implemented by the browsers, 3D engines should support it automatically. Hopefully, we will see new extensions arriving in the WebGL and WebXR ecosystem to improve the performance and quality of the rendering, such as the ones exposed by Nvidia VRWorks (e.g. Variable Rate Shading and Lens Matched Shading).


* Header image by Nvidia VRWorks


Mozilla VR Blog: Firefox Reality 1.4

Mozilla planet - Thu, 12/09/2019 - 00:45

Firefox Reality 1.4 is now available for users in the Viveport and Oculus stores.

With this release, we’re excited to announce that users can enjoy browsing in multiple windows side-by-side. Each window can be set to the size and position of your choice, for a super customizable experience.


And, by popular demand, we’ve enabled local browsing history, so you can get back to sites you've visited before without typing. Sites in your history will also appear as you type in the search bar, so you can complete the address quickly and easily. You can clear your history or turn it off anytime from within Settings.

The Content Feed also has a new and improved menu of hand-curated “Best of WebVR” content for you to explore. You can look forward to monthly updates featuring a selection of new content across different categories including Animation, Extreme (sports/adrenaline/adventure), Music, Art & Experimental and our personal favorite way to wind down a day, 360 Chill.

Additional highlights

  • Movable keyboard, so you can place it where it’s most comfortable to type.
  • Tooltips on buttons and actions throughout the app.
  • Updated look and feel for the Bookmarks and History views so you can see and interact better at all window sizes.
  • An easy way to request the desktop version of a site that doesn’t display well in VR, right from the search bar.
  • Updated and reorganized settings to be easier to find and understand.
  • Added the ability to set a preferred website language order.

Full release notes can be found in our GitHub repo here.

Stay tuned as we keep improving Firefox Reality! We’re currently working on integrating your Firefox Account so you’ll be able to easily send tabs to and from VR from other devices. New languages and copy/paste are also coming soon, in addition to continued improvements in performance and stability.

Firefox Reality is available right now. Go and get it!
Download for Oculus Go
Download for Oculus Quest
Download for Viveport (Search for Firefox Reality in Viveport store)


Mike Hoye: Duty Of Care

Mozilla planet - Wed, 11/09/2019 - 17:47

A colleague asked me what I thought of this Medium article by Owen Bennett on the application of the UK’s Duty of Care laws to software. I’d had… quite a bit of coffee at that point, and this (lightly edited) was my answer:

I think the point Bennett makes early on about the shortcomings of analogy is an important one: that however critical analogy is as a conceptual bridge, it is not a valuable endpoint. To some extent analogies are all we have when something is new; this has been true ever since the first person who saw fire had to explain it to somebody else: it warms like the sun but it is not the sun, it stings like a spear but it is not a spear, it eats like an animal but it is not an animal. But after we have seen fire, once we know fire, we can say: we can cage it like an animal, like so; we can warm ourselves by it like the sun, like so. “Analogy” moves from conceptual, where it is only temporarily useful, to functional and structural, where the utility endures.

I keep coming back to something Bryan Cantrill said at the beginning of an old DTrace talk (even before he gets into the DTrace implementation details, the first 10 minutes or so of this talk are amazing): that analogies between software and literally everything else eventually break down. Is software an idea, or is it a machine? It’s both. Unlike almost everything else.

(Great line from that talk – “Does it bother you that none of this actually exists?”)

But: The UK has some concepts that really do have critical roles as functional- and structural-analogy endpoints for this transition. What is your duty of care here as a developer, and an organization? Is this software fit for purpose?

Given the enormous potential reach of software, those concepts absolutely do need to survive as analogies that are meaningful and enforceable in software-afflicted outcomes, even if the actual text of (the inevitable) regulation of software needs to recognize software as being its own, separate thing, that in the wrong context can be more dangerous than unconstrained fire.

With that in mind, and particularly bearing in mind that the other places the broad “duty of care” analogy extends go well beyond immediate action, and cover stuff like industrial standards, food safety, water quality and the million other things that make modern society work at all, I think Bennett’s argument that “Unlike the situation for ‘offline’ spaces subject to a duty of care, it is rarely the case that the operator’s act or omission is the direct cause of harm accruing to a user — harm is almost always grounded in another user’s actions” is incorrectly omitting an enormous swath of industrial standards and societal norms that have already made the functional analogy leap so effectively as to be presently invisible.

Put differently, when Toyota recalls hundreds of thousands of cars for potential defects in which exactly zero people were harmed, we consider that responsible stewardship of their product. And when the people working at Uber straight up murder a person with an autonomous vehicle, they’re allowed to say “but software”. Because much of software as an industry, I think, has been pushing relentlessly against the notion that the industry and people in it can or should be held accountable for the consequences of their actions, which is another way of saying that we don’t have and desperately need a clear sense of what a “duty of care” means in the software context.

I think that the concluding paragraph – “To do so would twist the law of negligence in a wholly new direction; an extremely risky endeavour given the context and precedent-dependent nature of negligence and the fact that the ‘harms’ under consideration are so qualitatively different than those subject to ‘traditional’ duties.” – reflects a deep genuflection to present day conceptual structures, and their specific manifestations as text-on-the-page-today, that is (I suppose inevitably, in the presence of this Very New Thing) profoundly at odds with the larger – and far more noble than this article admits – social and societal goals of those structures.

But maybe that’s just a superficial reading; I’ll read it over a few times and give it some more thought.

Categorieën: Mozilla-nl planet

Mozilla Reps Community: Rep of the Month – July 2019

Mozilla planet - wo, 11/09/2019 - 13:34

Please join us in congratulating Bhuvana Meenakshi Koteeswaran, Rep of the Month for July 2019!

Bhuvana is from Salem, India. She joined the Reps program at the end of 2017 and since then she has been involved with Virtual and Augmented Reality projects.


Bhuvana has recently held talks about WebXR at FOSSCon India and BangPypers. In October she will be a Space Wrangler at the Mozilla Festival in London.

Congratulations and keep rocking the open web! 🎉


Daniel Stenberg: curl 7.66.0 – the parallel HTTP/3 future is here

Mozilla planet - wo, 11/09/2019 - 07:57

I personally have not done this many commits to curl in a single month (August 2019) for over three years. This increased activity is of course primarily due to the merge of and work with the HTTP/3 code. And yet, that is still only in its infancy…

Download curl here.


the 185th release
6 changes
54 days (total: 7,845)

81 bug fixes (total: 5,347)
214 commits (total: 24,719)
1 new public libcurl function (total: 81)
1 new curl_easy_setopt() option (total: 269)

4 new curl command line options (total: 225)
46 contributors, 23 new (total: 2,014)
29 authors, 14 new (total: 718)
2 security fixes (total: 92)
450 USD paid in Bug Bounties

Two security advisories

TFTP small blocksize heap buffer overflow

(CVE-2019-5482) If you told curl to do TFTP transfers using a smaller than default “blocksize” (default being 512), curl could overflow a heap buffer used for the protocol exchange. Rewarded 250 USD from the curl bug bounty.

FTP-KRB double-free

(CVE-2019-5481) If you used FTP with Kerberos and the server maliciously or mistakenly responded with an overly large encrypted block, curl could end up doing a double-free in that exit path. This would happen in applications where allocating a buffer of the full 32-bit max value (up to 4GB) is a problem. Rewarded 200 USD from the curl bug bounty.


The new features in 7.66.0 are…


HTTP/3

This experimental feature is disabled by default but can be enabled and works (by some definition of “works”). Daniel went through “HTTP/3 in curl” in this video from a few weeks ago:

Parallel transfers

You can now do parallel transfers with the curl tool’s new -Z / --parallel option. This is a huge feature that might change a lot of use cases going forward!


Retry-After

There’s a standard HTTP header that some servers return when they can’t or won’t respond right now, which indicates after how many seconds or at what point in the future the request might be fulfilled. libcurl can now return that number easily and curl’s --retry option makes use of it (if present).


curl_multi_poll

curl_multi_poll is a new function offered that is very similar to curl_multi_wait, but with one major benefit: it solves the problem for applications of what to do for the occasions when libcurl has no file descriptor at all to wait for. That has been a long-standing and perhaps far too little-known issue.

SASL authzid

When using SASL authentication, curl and libcurl now can provide the authzid field as well!


Some interesting bug fixes included in this release…

.netrc and .curlrc on Windows

Starting now, curl and libcurl will check for and use the dot-prefixed versions of these files even on Windows and only fall back and check for and use the underscore-prefixed versions for compatibility if the dotted one doesn’t exist. This unifies curl’s behavior across platforms.

asyn-thread: create a socketpair to wait on

With this perhaps innocuous-sounding change, libcurl on Linux and other Unix systems will now provide a file descriptor for the application to wait on while name resolving in a background thread. This lets applications know better when to call libcurl again and avoids having to just blindly wait and retry. A performance gain.

Credentials in URL when using HTTP proxy

We found and fixed a regression that made curl not use credentials properly from the URL when doing multi stage authentication (like HTTP Digest) with a proxy.

Move code into vssh for SSH backends

A mostly janitor-style fix that also abstracted away more SSH-using code so that it no longer needs to know which particular SSH backend is being used, while at the same time making it easier to write and provide new SSH backends in the future. I’m personally working a little slowly on one, to be talked about at a later point.

Disable HTTP/0.9 by default

If you want libcurl to accept and deliver HTTP/0.9 responses to your application, you need to tell it to do that. Starting in this version, curl will consider those invalid HTTP responses by default.

alt-svc improvements

We introduced alt-svc support a while ago but as it is marked experimental and nobody felt a strong need to use it, it clearly hasn’t been used or tested much in real life. When we’ve worked on using alt-svc to bootstrap into HTTP/3 we found and fixed a whole range of little issues with the alt-svc support and it is now in much better shape. However, it is still marked experimental.

IPv6 addresses in URLs

It was reported that the URL parser would accept malformatted IPv6 addresses that subsequently and counter-intuitively would get resolved as a host name internally! An example URL would be “https://[]/” – where all the letters and symbols within the brackets are individually allowed components of an IPv6 numerical address, but the whole still isn’t valid IPv6 syntax and instead gets treated as a legitimate and valid host name.

Going forward!

We recently ran a poll among users about what they feel are the more important things to work on, and with that the rough roadmap has been updated. Those are things I want to work on next but of course I won’t guarantee anything and I will greatly appreciate all help and assistance that I can get. And sure, we can and will work on other things too!


Niko Matsakis: AiC: Shepherds 3.0

Mozilla planet - wo, 11/09/2019 - 06:00

I would like to describe an idea that’s been kicking around in my head. I’m calling this idea “shepherds 3.0” – the 3.0 is to distinguish it from the other places we’ve used the term in the past. This proposal actually supplants both of the previous uses of the term, replacing them with what I believe to be a preferred alternative (more on that later).


This is an idea that has been kicking around in my head for a while. It is not a polished plan and certainly not an accepted one. I’ve not talked it over with the rest of the lang team, for example. However, I wanted to put it out there for discussion, and I do think we should be taking some step in this direction soon-ish.


What I’m proposing, at its heart, is very simple. I want to better document the “agenda” of the lang-team. Specifically, if we are going to be moving a feature forward1, then it should have a shepherd (or multiple) who is in charge of doing that.

In order to avoid unbounded queues, the number of things that any individual can shepherd should be limited. Ideally, each person should only shepherd one thing at a time, though I don’t think we need to make a firm rule about it.

Becoming a shepherd is a commitment on the part of the shepherd. The first part of the lang team meeting should be to review the items that are being actively shepherded and get any updates. If we haven’t seen any movement in a while, we should consider changing the shepherd, or officially acknowledging that something is stalled and removing the shepherd altogether.

Assigning a shepherd is a commitment on the part of the rest of the lang-team as well. Before assigning a shepherd, we should discuss if this agenda item is a priority. In particular, if someone is shepherding something, that means we all agree to help that item move towards some kind of completion. This means giving feedback, when feedback is requested. It means doing the work to resolve concerns and conflicts. And, sometimes, it will mean giving way. I’ll talk more about this in a bit.

What was shepherds 1.0 and how is this different?

The initial use of the term shepherd, as I remember it, was actually quite close to the way I am using it here. The idea was that we would assign RFCs to a shepherd that should either drive to be accepted or to be closed. This policy was, by and large, a failure – RFCs got assigned, but people didn’t put in the time. (To be clear, sometimes they did, and in those cases the system worked reasonably well.)

My proposal here differs in a few key respects that I hope will make it more successful:

  • We limit how many things you can shepherd at once.
  • Assigning a shepherd is also a commitment from the lang team as a whole to review progress, resolve conflicts, and devote some time to the issue.
  • We don’t try to shepherd everything – in contrast, shepherding marks the things we are moving forward.
  • The shepherd is not something specific to an RFC, it refers to all kinds of “larger decisions”. For example, stabilization would be a shepherd activity as well.
What was shepherds 2.0 and how is this different?

We’ve also used the term shepherd to refer to a role that is moving towards full lang team membership. That’s different from this proposal in that it is not tied to a specific topic area. But there is also some interaction – for example, it’s not clear that shepherds need to be active lang team members.

I think it’d be great to allow shepherds to be any person who is sufficiently committed to help see something through. The main requirement for a shepherd should be that they are able to give us regular updates on the progress. Ideally, this would be done by attending the lang team meeting. But that doesn’t work for everyone – whether because of time zones, scheduling, or language barriers – and so I think that any form of regular, asynchronous report would work just fine.

I think I would prefer for this proposal – and this kind of “role-specific shepherding” – to entirely replace the “provisional member” role on the lang team. It seems strictly better to me. Among other things, it’s naturally time-limited. Once the work item completes, that gives us a chance to decide whether it makes sense for someone to become a full member of the lang team, or perhaps try shepherding another idea, or perhaps just part ways. I expect there are a lot of people who have interest in working through a specific feature but for whom there is little desire to be long-term members of the lang team.

How do I get a shepherd assigned to my work item?

Ultimately, I think this question is ill-posed: there is no way to “get” a shepherd assigned to your work. Having the expectation that a shepherd will be assigned runs smack into the problems of unbounded queues and was, I think, a crucial flaw in the Shepherds 1.0 system.

Basically, the way a shepherd gets assigned in this scheme is roughly the same as the way things “get done” today. You convince someone in the lang team that the item is a priority, and they become the shepherd. That convincing takes place through the existing channels: nominated issues, discord or zulip, etc. It’s not that I don’t think this is something else we should be thinking about, it’s just that it’s something of an orthogonal problem.

My model is that shepherds are how we quantify and manage the things we are doing. The question of “what happens to all the existing things” is more a question of how we select which things to do – and that’s ultimately a priority call.

OK, so, what happens to all the existing things?

That’s a very good question. And one I don’t intend to answer here, at least not in full. That said, I do think this is an important problem that we should think about. I would like to be exposing more “insight” into our overall priorities.

In my ideal world, we’d have a list of projects that we are not working on, grouped somewhat by how likely we are to work on them in the future. This might then indicate ideas that we do not want to pursue; ideas that we have mild interest in but which have a lot of unknowns; ideas that we started working on but got blocked at some point (hopefully with a report of what’s blocking them); and so forth. But that’s all a topic for another post.

One other idea that I like is documenting on the website the “areas of interest” for each of the lang team members (and possibly other folks) who might be willing shepherds. This would help people figure out who to reach out to.

Isn’t there anything I can do to help move Topic X along?

This proposal does offer one additional option that hadn’t formally existed before. If you want to see something happen, you can offer to shepherd it yourself – or in conjunction with a member of the lang team. You could do this by pinging folks on discord, attending a lang team meeting, or nominating an issue to bring it to the lang team’s attention.

How many active shepherds can we have then?

It is important to emphasize that having a willing shepherd is not necessarily enough to unblock a project. This is because, as I noted above, assigning a shepherd is also a commitment on the part of the lang-team – a commitment to review progress, resolve conflicts, and keep up with things. That puts a kind of informal cap on how many active things can be occurring, even if there are shepherds to spare. This is particularly true for subtle things. This cap is probably somewhat fundamental – even increasing the size of the lang team wouldn’t necessarily change it that much.

I don’t know how many shepherds we should have at a time; I think we’ll have to work that out by experience. But I do think we should be starting small, with a handful of items at a time. I’d much rather we are consistently making progress on a few things than spreading ourselves too thin.

Expectations for a shepherd

I think the expectations for a shepherd are as follows.

First, they should prepare updates for the lang team meeting on a weekly basis. This doesn’t have to be a long, detailed write-up – even a “no update” suffices.

Second, when a design concern or conflict arises, they should help to see it resolved. This means a few things. First and foremost, they have to work to understand and document the considerations at play, and be prepared to summarize those. (Note: they don’t necessarily have to do all this work themselves! I would like to see us making more use of collaborative summary documents, which allow us to share the work of documenting concerns.)

They should also work to help resolve the conflict, possibly by scheduling one-off meetings or through other means. I won’t go into too much detail here because I think looking into how best to resolve design conflicts is worthy of a separate post.

Finally, while this is not a firm expectation, it is expected that shepherds will become experts in their area, and would thus be able to give useful advice about similar topics in the future (even if they are not actively shepherding that area anymore).

Expectations from the lang team

I want to emphasize this part of things. I think the lang team suffers from the problem of doing too many things at once. Part of agreeing that someone should shepherd topic X, I think, is agreeing that we should be making progress on topic X.

This implies that the team agrees to follow along with the status updates and give moderate amounts of feedback when requested.

Of course, as the design progresses, it is natural that lang team members will have concerns about various aspects. Just as today, we operate on a consensus basis, so resolving those concerns is needed to make progress. When an item has an active shepherd, though, that means it is a priority, and this implies then that lang team members with blocking concerns should make time to work with the shepherd and get them resolved. (And, as is always the case, this may mean accepting an outcome that you don’t personally agree with, if the rest of the team is leaning the other way.)


So, that’s it! In the end, the specifics of what I propose are the following:

  • We’ll post on the lang team repository the list of active shepherds and their assigned areas.
  • In order for a formal decision to be made (e.g., stabilization proposal accepted, RFC accepted, etc), a shepherd must be assigned.
    • This happens at the lang team meeting. We should prepare a list of factors to take into account when making this decision, but one of the key ones is whether we agree as a team that this is something that is high enough priority that we can devote the required energy to seeing it progress.
  • Shepherds will keep the lang-team updated on major developments and help to resolve conflicts that arise, with the cooperation of the lang-team, as described above.
    • If a shepherd seems inactive for a long time, we’ll discuss if that’s a problem.
  1. I could not find any single page that documents Rust’s feature process from beginning to end. Seems like something we should fix. But what I mean by moving a feature forward is basically things like “accepting an RFC” or “stabilizing an unstable feature” – basically the formal decisions governed by the lang team. 


Mozilla VR Blog: WebXR emulator extension

Mozilla planet - di, 10/09/2019 - 19:09

We are happy to announce the release of our WebXR emulator browser extension which helps WebXR content creation.

We understand that developing and debugging WebXR experiences is hard for many reasons:

  • You must own a physical XR device
  • Lack of support for XR devices on some platforms, such as macOS
  • Putting on and taking off the headset all the time is an uncomfortable task
  • In order to make your app responsive across form factors, you must own tons of devices: mobile, tethered, 3dof, 6dof, and so on

With this extension, we aim to soften most of these issues.

WebXR emulator extension emulates XR devices so that you can directly enter immersive (VR) mode from your desktop browser and test your WebXR application without the need for any XR devices. It emulates multiple XR devices, so you can select which one you want to test.

The extension is built on top of the WebExtensions API, so it works on Firefox, Chrome, and other browsers supporting the API.


How can I use it?
  1. Install the extension from the extension stores (Firefox, Chrome)
  2. Launch a WebXR application, for example the Three.js examples. You will notice that the application detects that you have a VR device (emulated) and it will let you enter the immersive (VR) mode.
  3. Open the “WebXR” tab in the browser’s developer tool (Firefox, Chrome) to control the emulated device. You can move the headset and controllers and trigger the controller buttons. You will see their transforms reflected in the WebXR application.
What’s next?

The development of this extension is still at an early stage. We have many awesome features planned, including:

  • Recording and replaying of actions and movements of your XR devices so you don’t have to replicate them every time you want to test your app and can share them with others.
  • Incorporate new XR devices
  • Control the headset and controllers using a standard gamepad like the Xbox or PS4 controllers, or use your mobile as a 3dof device
  • Something else?

We would love your feedback! What new features do you want next? Any problems with the extension on your WebXR application? Please join us on GitHub to discuss them.

Lastly, we would like to give a shout out to the WebVR API emulation Extension by Jaume Sanchez as it was a true inspiration for us when building this one.


The Firefox Frontier: Understand how hackers work

Mozilla planet - di, 10/09/2019 - 18:00

Forget about those hackers in movies trying to crack the code on someone’s computer to get their top secret files. The hackers responsible for data breaches usually start by targeting … Read more

The post Understand how hackers work appeared first on The Firefox Frontier.


Mozilla Open Policy & Advocacy Blog: CASE Act Threatens User Rights in the United States

Mozilla planet - di, 10/09/2019 - 17:12

This week, the House Judiciary Committee is expected to mark up the Copyright Alternative in Small Claims Enforcement (CASE) Act of 2019 (H.R. 2426). While the bill is designed to streamline the litigation process, it will impose severe costs upon users and the broader internet ecosystem. More specifically, the legislation would create a new administrative tribunal for claims with limited legal recourse for users, incentivizing copyright trolling and violating constitutional principles. Mozilla has always worked for copyright reform that supports businesses and internet users, and we believe that the CASE Act will stunt innovation and chill free expression online. With this in mind, we urge members to oppose passage of H.R. 2426.

First, the tribunal created by the legislation conflicts with well-established separation of powers principles and limits due process for potential defendants. Under the CASE Act, a new administrative board would be created within the Copyright Office to review claims of infringement. However, as Professor Pamela Samuelson and Kathryn Hashimoto of Berkeley Law point out, it is not clear that Congress has the authority under Article I of the Constitution to create this tribunal. Although Congress can create tribunals that adjudicate “public rights” matters between the government and others, the creation of a board to decide infringement disputes between two private parties would represent an overextension of its authority into an area traditionally governed by independent Article III courts.

Moreover, defendants subject to claims under the CASE Act will be funneled into this process with strictly limited avenues for appeal. The legislation establishes the tribunal as a default legal process for infringement claims–defendants will be forced into the process unless they explicitly opt-out. This implicitly places the burden on the user, and creates a more coercive model that will disadvantage defendants who are unfamiliar with the nuances of this new legal system. And if users have objections to the decision issued by the tribunal, the legislation severely restricts access to justice by limiting substantive court appeals to cases in which the board exceeded its authority; failed to render a final determination; or issued a determination as a result of fraud, corruption, or other misconduct.

While the board is supposed to be reserved for small claims, the tribunal is authorized to award damages of up to $30,000 per proceeding. For many people, this supposedly “small” amount would be enough to completely wipe out their household savings. Since the forum allows for statutory damages to be imposed, the plaintiff does not even have to show any actual harm before imposing potentially ruinous costs on the defendant.

These damages awards are completely out of place in what is being touted as a small claims tribunal. As Stan Adams of the Center for Democracy and Technology notes, awards as high as $30,000 exceed the maximum awards for small claims courts in 49 out of 50 states. In some cases, they would be ten times higher than the damages available in small claims court.

The bill also authorizes the Register of Copyrights to unilaterally establish a forum for claims of up to $5,000 to be decided by a singular Copyright Claims Officer, without any pre-established explicit due process protections for users. These amounts may seem negligible in the context of a copyright suit, where damages can reach up to $150,000, but nearly 40 percent of Americans cannot cover a $400 emergency today.

Finally, the CASE Act will give copyright trolls a favorable forum. In recent years, some unscrupulous actors made a business of threatening thousands of Internet users with copyright infringement suits. These suits are often based on flimsy, but potentially embarrassing, allegations of infringement of pornographic works. Courts have helped limit the worst impact of these campaigns by making sure the copyright owner presented evidence of a viable case before issuing subpoenas to identify Internet users. But the CASE Act will allow the Copyright Office to issue subpoenas with little to no process, potentially creating a cheap and easy way for copyright trolls to identify targets.

Ultimately, the CASE Act will create new problems for internet users and exacerbate existing challenges in the legal system. For these reasons, we ask members to oppose H.R. 2426.

The post CASE Act Threatens User Rights in the United States appeared first on Open Policy & Advocacy.


The Mozilla Blog: Firefox’s Test Pilot Program Returns with Firefox Private Network Beta

Mozilla planet - di, 10/09/2019 - 15:00

Like a cat, the Test Pilot program has had many lives. It originally started as an Add-on before we relaunched it three years ago. Then in January, we announced that we were evolving our culture of experimentation, and as a result we closed the Test Pilot program to give us time to further explore what was next.

We learned a lot from the Test Pilot program. First, we had a loyal group of users who provided us feedback on projects that weren’t polished or ready for general consumption. Based on that input we refined and revamped various features and services, and in some cases shelved projects altogether because they didn’t meet the needs of our users. The feedback we received helped us evaluate a variety of potential Firefox features, some of which are in the Firefox browser today.

If you haven’t heard, third time’s the charm. We’re turning to our loyal and faithful users – specifically the ones who signed up for a Firefox account and opted in to hear about new product testing – and giving them first crack at test-driving new, privacy-centric products as part of the relaunched Test Pilot program. The difference with the newly relaunched Test Pilot program is that these products and services may be outside the Firefox browser, and will be far more polished, and just one step shy of general public release.

We’ve already earmarked a couple of new products that we plan to fine-tune before their official release as part of the relaunched Test Pilot program. Because of how much we learned from our users through the Test Pilot program, and our ongoing commitment to build our products and services to meet people’s online needs, we’re kicking off our relaunch of the Test Pilot program by beta testing our project code named Firefox Private Network.

Try our first beta – Firefox Private Network

One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal information anywhere and everywhere you use your Firefox browser.

There are many ways that your personal information and data are exposed: online threats are everywhere, whether it’s through phishing emails or data breaches. You may often find yourself taking advantage of the free WiFi at the doctor’s office, airport or a cafe. There can be dozens of people using the same network — casually checking the web and getting social media updates. This leaves your personal information vulnerable to those who may be lurking, waiting to take advantage of this situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections.

Start testing the Firefox Private Network today, it’s currently available in the US on the Firefox desktop browser. A Firefox account allows you to be one of the first to test potential new products and services, you can sign up directly from the extension.


Key features of Firefox Private Network are:
  • Protection when in public WiFi access points – Whether you are waiting at your doctor’s office, the airport or working from your favorite coffee shop, your connection to the internet is protected when you use the Firefox browser thanks to a secure tunnel to the web, protecting all your sensitive information, like the web addresses you visit and your personal and financial information.
  • Internet Protocol (IP) addresses are hidden so it’s harder to track you – Your IP address is like a home address for your computer. One of the reasons why you may want to keep it hidden is to keep advertising networks from tracking your browsing history. Firefox Private Network will mask your IP address providing protection from third party trackers around the web.
  • Toggle the switch on at any time. By clicking in the browser extension, you will find an on/off toggle that shows you whether you are currently protected, which you can turn on at any time if you’d like additional privacy protection, or off if not needed at that moment.
Your feedback on Firefox Private Network beta is important

Over the next several months you will see a number of variations on our testing of the Firefox Private Network. This iterative process will give us much-needed feedback to explore technical and possible pricing options for the different online needs that the Firefox Private Network meets.

Your feedback will be essential in making sure that we offer a full complement of services that address the problems you face online with the right-priced service solutions. We depend on your feedback and we will send a survey to follow up. We hope you can spend a few minutes to complete it and let us know what you think. Please note this first Firefox Private Network Beta test will only be available to start in the United States for Firefox account holders using desktop devices. We’ll keep you updated on our eventual beta test roll-outs in other locales and platforms.

Sign up now for a Firefox account and join the fight to keep the internet open and accessible to all.

The post Firefox’s Test Pilot Program Returns with Firefox Private Network Beta appeared first on The Mozilla Blog.


This Week In Rust: This Week in Rust 303

Mozilla planet - di, 10/09/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is viu, a terminal image viewer.

Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

303 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The Rust compiler is basically 30 years of trying to figure out how to teach a computer how to see the things we worry about as C developers.

James Munns (@bitshiftmask) on Twitter

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Caniuse and MDN compatibility data collaboration

Mozilla planet - ma, 09/09/2019 - 17:59

Web developers spend a good amount of time making web compatibility decisions. Deciding whether or not to use a web platform feature often depends on its availability in web browsers.

A brief history of compatibility data

More than 10 years ago, @fyrd created the caniuse project, to help developers check feature availability across browsers. Over time, caniuse has evolved into the go-to resource to answer the question that comes up day to day: “Can I use this?”

About 2 years ago, the MDN team started re-doing its browser compatibility tables. The team was on a mission to take the guesswork out of web compatibility. Since then, the browser-compat-data (BCD) project has become a large dataset with more than 10,000 data points. It stays up to date with the help of over 500 contributors on GitHub.

MDN compatibility data is available as open data on npm and has been integrated into a variety of projects, including VS Code and auditing tools.
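To give a feel for what that open data looks like, here is a minimal, self-contained sketch of the nested JSON shape BCD uses. The `overscroll-behavior` entry and its version numbers are illustrative stand-ins rather than a verbatim excerpt from the npm package, and the `versionAdded` helper is our own, not part of the published API:

```javascript
// Illustrative sketch of the BCD data shape: features nest under
// category paths (css.properties.*, api.*, …), and each feature's
// __compat.support object maps browser names to support records.
const bcd = {
  css: {
    properties: {
      "overscroll-behavior": {
        __compat: {
          support: {
            chrome: { version_added: "63" },
            firefox: { version_added: "59" },
            safari: { version_added: false }, // false = not supported
          },
          status: { experimental: false, standard_track: true },
        },
      },
    },
  },
};

// Hypothetical helper: read the version in which a browser added a feature.
function versionAdded(feature, browser) {
  return feature.__compat.support[browser].version_added;
}

const feature = bcd.css.properties["overscroll-behavior"];
console.log(versionAdded(feature, "firefox")); // prints: 59
console.log(versionAdded(feature, "safari")); // prints: false
```

The same lookup pattern works against the real dataset once it is installed from npm; only the top-level object comes from the package instead of being defined inline.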

Two great data sources come together

Today we’re announcing the integration of MDN’s compat data into the caniuse website. Together, we’re bringing even more web compatibility information into the hands of web developers.

Caniuse table for Intl.RelativeTimeFormat. Data imported from mdn-compat-data.

Before we began our collaboration, the caniuse website only displayed results for features available in the caniuse database. Now all search results can include support tables from MDN compat data. This includes data types already found on caniuse, specifically the HTML, CSS, JavaScript, Web API, SVG, and HTTP categories. By adding MDN data, the caniuse support table count expands from roughly 500 to 10,500 tables! Developers’ caniuse queries on what’s supported where will now have significantly more results.

The new feature tables will look a little different. Because the MDN compat data project and caniuse have compatible yet somewhat different goals, the implementation is a little different too. While the new MDN-based tables don’t have matching fields for all the available metadata (such as links to resources and a full feature description), support notes and details such as bug information, prefixes, feature flags, etc. will be included.

The MDN compatibility data itself is converted under the hood to the same format used in caniuse compat tables. Thus, users can filter and arrange MDN-based data tables in the same way as any other caniuse table. This includes access to browser usage information, either by region or imported through Google Analytics to help you decide when a feature has enough support for your users. And the different view modes available via both datasets help visualize support information.
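The on-the-fly conversion described above can be sketched as flattening BCD’s “version added” records into caniuse-style per-version stats (`"y"` for supported, `"n"` for not). This is a hedged illustration of the idea only: the field names mirror the two public data formats, but the `toCaniuseStats` function and the version lists are assumptions, not caniuse’s actual implementation:

```javascript
// BCD-style support records: the version in which support arrived,
// or false when the browser never shipped the feature.
const bcdSupport = {
  firefox: { version_added: "59" },
  safari: { version_added: false },
};

// Sketch of a converter: expand "added in version X" into a
// caniuse-style map of { browser: { version: "y" | "n" } }.
function toCaniuseStats(support, versionsByBrowser) {
  const stats = {};
  for (const [browser, versions] of Object.entries(versionsByBrowser)) {
    const added = support[browser] && support[browser].version_added;
    stats[browser] = {};
    for (const v of versions) {
      const supported =
        typeof added === "string" && parseFloat(v) >= parseFloat(added);
      stats[browser][v] = supported ? "y" : "n";
    }
  }
  return stats;
}

const stats = toCaniuseStats(bcdSupport, {
  firefox: ["58", "59", "60"],
  safari: ["12", "13"],
});
// stats.firefox is { "58": "n", "59": "y", "60": "y" };
// every safari version maps to "n" because version_added is false.
```

A real converter would also have to carry over notes, prefixes, and flag information, which is why the MDN-based tables on caniuse can show those details too.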

Differences in the datasets

We’ve been asked why the datasets are treated differently. Why didn’t we merge them in the first place? We discussed and considered this option. However, due to the intrinsic differences between our two projects, we decided not to. Here’s why:

MDN’s support data is very broad and covers feature support at a very granular level. This allows MDN to provide as much detailed information as possible across all web technologies, supplementing the reference information provided by MDN Web Docs.

Caniuse, on the other hand, often looks at larger features as a whole (e.g. CSS Grid, WebGL, specific file format support). The caniuse approach gives developers higher-level, at-a-glance information on whether a feature is supported, sometimes at the expense of detail. Each individual feature is added manually to caniuse, with a primary focus on browser support coverage rather than on feature coverage overall.

Because of these and other differences in implementation, we don’t plan on merging the source data repositories or matching the data schema at this time. Instead, the integration works by matching the search query to the feature’s description. Caniuse then generates an appropriate feature table and converts MDN support data to the caniuse format on the fly.

What’s next

We encourage community members of both repos, caniuse and mdn-compat-data, to work together to improve the underlying data. By sharing information and collaborating wherever possible, we can help web developers find answers to compatibility questions.

The post Caniuse and MDN compatibility data collaboration appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Wladimir Palant: State of the art protection in Chrome Web Store

Mozilla planet - ma, 09/09/2019 - 16:04

All of you certainly know already that Google is guarding its Chrome Web Store vigilantly and making sure that no bad apples get in. So when you hit “Report abuse”, your report will certainly be read carefully by another human being and acted upon ASAP. Well, eventually… maybe… when it hits the news. If it doesn’t, then it probably wasn’t important anyway, and these extensions might stay up despite having been taken down by Mozilla three months ago.

Cannons protecting an old fort
Image by Sheba_Also

As for your legitimate extensions, these will occasionally be taken down as collateral damage in this fierce fight. Like my extension, which was taken down for missing a screenshot because it has no user interface whatsoever. It’s not possible to give an advance warning either, like asking the developer to upload a screenshot within a week. Such lax policies would only encourage the bad guys to upload more malicious extensions without screenshots, of course.

And the short downtime of a few weeks and a few hours of developer time spent trying to find anybody capable of fixing the problem are surely a small price to pay for a legitimate extension in order to defend the privilege of staying in the exclusive club of Chrome extension developers. So I am actually proud that this time my other browser extension, PfP: Pain-free Passwords, was taken down by Google in its relentless fight against the bad actors.

Here is the email I got:

Garbled text of Google's mail

Hard to read? That might be due to the fact that this plain text email was sent as text/html. A completely understandable mistake given how busy all Google employees are. We only need to copy the link to the policy here and we’ll get this in a nicely formatted document.

Policy requiring privacy policy to be added to the designated field

So there we go. All I need to do is to write a privacy policy document for the extension which isn’t collecting any data whatsoever, then link it from the appropriate field. Could it be so easy? Of course not, the bad guys would be able to figure it out as well otherwise. Very clever of Google not to specify which one the “designated field” is. I mean, if you publish extensions on Mozilla Add-ons, there is literally a field saying “Privacy Policy” there. But in Chrome Web Store you only get Title, Summary, Detailed Description, Category, Official Url, Homepage Url, Support Url, Google Analytics ID.

See what Google is doing here? There is really only one place where the bad guys could add their privacy policy, namely that crappy unformatted “Detailed Description” field. Since it’s so unreadable, users ignore it anyway, so they will just assume that the extension has no privacy policy and won’t trust it with any data. And as an additional bonus, “Detailed Description” isn’t the designated field for privacy policy, which gives Google a good reason to take bad guys’ extensions down at any time. Brilliant, isn’t it?

In the meantime, PfP takes a vacation from Chrome Web Store. I’ll let you know how this situation develops.

Update (2019-09-10): As commenter drawdrove points out, the field for the privacy policy actually exists. Instead of placing it under extension settings, Google put it in the overall developer settings. So all of the developer’s extensions share the same privacy policy, no matter how different. Genius!

PfP is now back in Chrome Web Store. But will the bad guys also manage to figure it out?

Categorieën: Mozilla-nl planet

IRL (podcast): Privacy or Profit - Why Not Both?

Mozilla planet - ma, 09/09/2019 - 09:05

Every day, our data hits the market when we sign online. It’s for sale, and we’re left to wonder if tech companies will ever choose to protect our privacy rather than reap large profits with our information. But, is the choice — profit or privacy — a false dilemma? Meet the people who have built profitable tech businesses while also respecting your privacy. Fact check if Facebook and Google have really found religion in privacy. And, imagine a world where you could actually get paid to share your data.

In this episode, Oli Frost recalls what happened when he auctioned his personal data on eBay. Jeremy Tillman from Ghostery reveals the scope of how much ad tracking is really taking place online. Patrick Jackson breaks down Big Tech’s privacy pivot. DuckDuckGo’s Gabriel Weinberg explains why his private search engine has been profitable. And Dana Budzyn walks us through how her company, UBDI, hopes to give consumers the ability to sell their data for cash.

IRL is an original podcast from Firefox. You can find more on the series at the podcast’s website.

Read about Patrick Jackson and Geoffrey Fowler's privacy experiment.

Learn more about DuckDuckGo, an alternative to Google search, on its website.

And, we’re pleased to add a little more about Firefox’s business here as well — one that puts user privacy first and is also profitable. Mozilla was founded as a community open source project in 1998 and currently consists of two organizations: the 501(c)(3) Mozilla Foundation, which backs emerging leaders and mobilizes citizens to create a global movement for the health of the internet; and its wholly owned subsidiary, the Mozilla Corporation, which creates Firefox products, advances public policy in support of internet user rights, and explores new technologies that give people more control and privacy in their lives online. Firefox products have never bought or sold user data, and never will. Because of its unique structure, Mozilla stands apart from its peers in the technology field as one of the most impactful and successful social enterprises in the world. Learn more about Mozilla and Firefox on their websites.

Categorieën: Mozilla-nl planet

Mike Hoye: Forward Motion

Mozilla planet - vr, 06/09/2019 - 21:56


This has been a while coming; thank you for your patience. I’m very happy to be able to share the final four candidates for Mozilla’s new community-facing synchronous messaging system.

These candidates were assessed on a variety of axes, most importantly Community Participation Guideline enforcement and accessibility, but also including team requirements from engineering, organizational-values alignment, usability, utility and cost. To close out, I’ll talk about the options we haven’t chosen and why, but for the moment let’s lead with the punchline.

Our candidates are:

We’ve been spoiled for choice here – there were a bunch of good-looking options that didn’t make it to the final four – but these are the choices that generally seem to meet our current institutional needs and organizational goals.

We haven’t stood up a test instance for Slack, on the theory that Mozilla already has a surprising number of volunteer-focused Slack instances running already – Common Voice, Devtools and A-Frame, for example, among many others – but we’re standing up official test instances of each of the other candidates shortly, and they’ll be available for open testing soon.

The trial period for these will last about a month. Once they’re spun up, we’ll be taking feedback in dedicated channels on each of those servers, as well as in #synchronicity, and we’ll be creating a forum on Mozilla’s community Discourse instance as well. We’ll have the specifics for you at the same time as those servers are opened up and, of course, you can always email me.

I hope that if you’re interested in this stuff you can find some time to try out each of these options and see how well they fit your needs. Our timeline for this transition is:

  1. From September 12th through October 9th, we’ll be running the proof of concept trials and taking feedback.
  2. From October 9th through the 30th, we’re going to discuss that feedback, draft a proposed post-IRC plan and muster stakeholder approval.
  3. On December 1st, assuming we can gather that support, we will stand up the new service.
  4. And finally – allowing transition time for support tooling and developers – no later than March 1st 2020, IRC.m.o will be shut down.

In implementation terms, there are a few practical things I’d like to mention:

  • At the end of the trial period, all of these instances will be turned off and all the information in them will be deleted. The only way to win the temporary-permanent game is not to play; they’re all getting decommed and our eventual selection will get stood up properly afterwards.
  • The first-touch experiences here can be a bit rough; we’re learning how these things work at the same time as you’re trying to use them, so the experience might not be seamless. We definitely want to hear about it when setup or connection problems happen to you, but don’t be too surprised if they do.
  • Some of these instances have EULAs you’ll need to click through to get started. Those are there for the test instances, and you shouldn’t expect that in the final products.
  • We’ll be testing out administration and moderation tools during this process, so you can expect to see the occasional bot, or somebody getting bounced arbitrarily. The CPG will be in effect on these test instances, and as always if you see something, say something.
  • You’re welcome to connect with mobile or alternative clients where those are available; we expect results there to be uneven, and we’d be glad for feedback there as well. There will be links in the feedback document we’ll be sending out when the servers are opened up to collections of those clients.
  • Regardless of our choice of public-facing synchronous communications platform, our internal Slack instance will continue to be the “you are inside a Mozilla office” confidential forum. Internal Slack is not going away; that has never been on the table. Whatever the outcome of this process, if you work at Mozilla your manager will still need to be able to find you on Slack, and that is where internal discussions and critical incident management will take place.

… and a few words on some options we didn’t pick and why:

  • Zulip, Gitter.IM and Spectrum.Chat all look like strong candidates, but getting them working behind IAM turned out to be either very difficult or impossible given our resources.
  • Discord’s terms of service, particularly with respect to the rights they assert over participants’ data, are expansive and very grabby, effectively giving them unlimited rights to do anything they want with anything we put into their service. Coupling that with their active hostility towards interoperability and alternative clients has disqualified them as a community platform.
  • Telegram (and a few other mobile-first / chat-first products in that space) looked great for conversations, but not great for work.
  • IRCv3 is just not there yet as a protocol, much less in terms of standardization or having extensive, mature client support.

So here we are. It’s such a relief to be able to finally click send on this post. I’d like to thank everyone on Mozilla’s IT and Open Innovation teams for all the work they’ve done to get us this far, and everyone who’s expressed their support (and sympathy, we got lots of that too) for this process. We’re getting closer.

Categorieën: Mozilla-nl planet