
About:Community: Firefox 72 new contributors

Mozilla planet - Thu, 02/01/2020 - 23:59

With the release of Firefox 72, we are pleased to welcome the 36 developers who contributed their first code change to Firefox in this release, 28 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categories: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Bringing California’s privacy law to all Firefox users in 2020

Mozilla planet - Tue, 31/12/2019 - 16:47

2019 saw a spike of activity to protect online privacy as governments around the globe grappled with new revelations of data breaches and privacy violations. While much of the privacy action came from outside the U.S., such as the passage of Kenya’s data protection law and Europe’s enforcement of its GDPR privacy regulation, California represented a bright spot for American privacy.

Amidst gridlock in Congress over federal privacy rules, California marched forward with its landmark privacy law, the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020. Mozilla has long been a supporter of data privacy laws that empower people — including CCPA. In fact, we were one of the few companies to endorse CCPA back in 2018 when it was before the California legislature.

The California Consumer Privacy Act (CCPA) expands the rights of Californians over their data – and provides avenues for the Attorney General to investigate and enforce those rights, as well as allowing Californians to sue. Californians now have the right to know what personal information is being collected, to access it, to update and correct it, to delete it, to know who their data is being shared with, and to opt out of the sale of their data.

Much of what the CCPA requires companies to do moving forward is in line with how Firefox already operates and handles data. We’ve long believed that your data is not our data, and that privacy online is fundamental. Nonetheless, we are taking steps to go above and beyond what’s expected in CCPA.

Here’s how we are bringing CCPA to life for Firefox users.

CCPA rights for everyone.

When Europe passed its GDPR privacy law, we made sure that all users, whether located in the EU or not, were afforded the same rights under the law. As a company that believes privacy is fundamental to the online experience, we felt that everyone should benefit from the rights laid out in GDPR. That is why our new settings and privacy notice applied to all of our users.

With the passage and implementation of CCPA, we will do the same. Changes we are making in the browser will apply to every Firefox user, not just those in California.

Deleting your data.

One of the CCPA’s key new provisions is its expanded definition of “personal data.” This expanded definition allows users to request that companies delete their user-specific data.

As a rule, Firefox already collects very little of your data. In fact, most of what we receive is to help us improve the performance and security of Firefox. We call this telemetry data. This telemetry doesn’t tell us about the websites you visit or the searches you do; we just know general information, like that a Firefox user had a certain number of tabs open and how long their session was. We don’t collect telemetry about private browsing mode, and we’ve always given people easy options to disable telemetry in Firefox. And because we’ve long believed that data should not be stored forever, we have strict limits on how long we keep telemetry data.

We’ve decided to go the extra mile and expand user deletion rights to include deleting this telemetry data stored in our systems. To date, the industry has not typically considered telemetry data “personal data” because it isn’t identifiable to a specific person, but we feel strongly that taking this step is the right one for people and the ecosystem.

In line with the work we’ve done this year to make privacy easier and more accessible to our users, the deletion control will be built into Firefox and will begin rolling out in the next version of the browser on January 7. This setting will provide users a way to request deletion for desktop telemetry directly from Firefox – and a way for us, at Mozilla, to perform that deletion.

For Firefox, privacy is not optional. We don’t think people should have to choose between the technology they love and their privacy. We think you should have both. That’s why we are taking these steps to bring additional protection to all our users under CCPA. And why we will continue to press in 2020 – through the products we build and the policies we advocate – for an Internet that gives people the privacy and security they deserve.

The post Bringing California’s privacy law to all Firefox users in 2020 appeared first on Open Policy & Advocacy.


Mozilla VR Blog: Happy New Year from Hubs!

Mozilla planet - Tue, 31/12/2019 - 15:00
Happy New Year from Hubs!

As we wrap up 2019, the Hubs team says thank you to the Mozilla Mixed Reality Community for an incredible year! Looking back, we’re excited about the key milestones we’ve hit in our mission to make private social VR readily available to the general public. At the core of what we’re doing, our team is exploring the ways that spatial computing and shared environments can improve how we connect and collaborate, and thanks to the feedback and participation of our users and community as a whole, we got to spend a lot of time this year working on new features and experiments.

Early in the year, we wanted to dive into our hypothesis that social 3D spaces could integrate into our existing platforms and tools that the team was regularly using. We launched the Hubs Discord Bot back in April, which bridged chat between the two platforms and added an optional authentication layer to restrict access to rooms created with the bot to users in a given server. Since launching the Discord bot, we’ve learned more about the behaviors and frameworks that enable healthy community development and management, and we released a series of new features that supported multiple moderators, configurable room permissions, closing rooms, and more.

One of our goals for this year was to empower users to more easily personalize their Hubs experiences by making it easy to create custom content. This work kicked off with making Spoke available as a hosted web application, so creators no longer had to download a separate application to build scenes for Hubs. We followed with new features that improved how avatars could be created, shared, remixed, and discovered, and we wrapped up the year by releasing several pre-configured asset kits for building unique environments, starting with the Spoke Architecture Kit release that also included a number of ease-of-use feature updates.

We’ve also just had a lot of fun connecting with users and growing our team and community, and we’ve learned a lot about what we’re working on and how to improve Hubs for different use cases. When we joined Twitter, we got to start interacting with a lot more of you on a regular basis and we’ve loved seeing how you’ve been using Hubs when you share your own content with us! The number of new scenes, avatars, and even public events that have been shared within our community gets us even more excited for what we think 2020 can bring.

As we look ahead into the next year, we’ll be sharing a big update in January and go in-depth with work we’ve been doing to make Hubs a more versatile platform. If you want to follow along with our roadmap, you can keep an eye on the work we have planned on GitHub and follow us on Twitter @ByHubs. Happy 2020!


The Firefox Frontier: New Year, New Rights: What to know about California’s new privacy law

Mozilla planet - Tue, 31/12/2019 - 08:59

The California Consumer Privacy Act (CCPA) expands the rights of Californians over their data. Starting in 2020, Californians have the right to know what personal information is being collected, access … Read more

The post New Year, New Rights: What to know about California’s new privacy law appeared first on The Firefox Frontier.


This Week In Rust: This Week in Rust 319

Mozilla planet - Tue, 31/12/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is attohttpc, a tiny synchronous HTTP client library.

Thanks to Matěj Laitl for the suggestions!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

184 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

No Tracking Issues or PRs are currently in final comment period.

New RFCs

Upcoming Events

Asia Pacific, Europe, North America, South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust has multiple unique paradigms that don't even exist in other languages, such as lifetimes and compile-time-tracked "exclusive access". But instead of endorsing them from the beginning, as @mbrubeck's Rust: a unique perspective does, the Rust book tries to show a language that is "like other languages, but with (magical) compile-time checks". When the truth is that Rust's strength lies in non-unsafe Rust being less expressive than languages like C or C++.

I think that Rust should start with the statement: "Welcome to a language that by being less expressive forces you to use constructs that are guaranteed at compile-time to be sound. But don't worry; after some time you will get used to the coding patterns that are allowed, and will then almost not notice the hindered expressiveness, only the enhanced zero-cost safety that will let you hack without fear."

  • It doesn't sound bad imho, and is at least honest w.r.t. the struggles that someone refusing to shift their way of coding / mental coding patterns may encounter.

Daniel H-M on rust-users

Thanks to Tom Phinney for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.


Emily Dunham: Toy hypercube construction

Mozilla planet - Mon, 30/12/2019 - 09:00
Toy hypercube construction

I think hypercubes are neat, so I tried to make one out of string to play with. In the process, I discovered that there are surprisingly many ways to fail to trace every edge of a drawing of a hypercube exactly once with a single continuous line.

This puzzle felt like the sort of problem that some nerd had probably solved before, so I searched the web and discovered that the shape I was trying to configure the string into is called an Eulerian cycle.

I learned that any graph in which every vertex attaches to an even number of edges has such a cycle, which is useful for my craft project because the euler cycle is literally the path that the string needs to take to make a model of the object represented by the graph.
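That even-degree condition is easy to check mechanically. This is not from the post, but a small sketch using the standard bit-string labeling of the 4-cube (vertices are the 16 four-bit labels; two vertices are adjacent when their labels differ in exactly one bit):

```rust
// The 4-cube: vertices are the 16 four-bit labels, with an edge wherever
// two labels differ in exactly one bit.
fn hypercube_edges(dim: u32) -> Vec<(u32, u32)> {
    let n = 1u32 << dim;
    let mut edges = Vec::new();
    for v in 0..n {
        for bit in 0..dim {
            let w = v ^ (1 << bit);
            if v < w {
                edges.push((v, w));
            }
        }
    }
    edges
}

fn main() {
    let dim = 4;
    let edges = hypercube_edges(dim);
    // Each vertex touches one edge per dimension, so every degree is 4:
    // even everywhere, which is exactly the condition for an Euler cycle.
    let mut degree = vec![0u32; 1 << dim];
    for &(a, b) in &edges {
        degree[a as usize] += 1;
        degree[b as usize] += 1;
    }
    assert!(degree.iter().all(|&d| d == dim));
    println!("{} edges, all degrees even", edges.len()); // prints "32 edges, all degrees even"
}
```

Every degree comes out to 4, so a single unbroken piece of string can indeed trace the whole shape.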

Mathematical materials

To construct a toy hypercube or any other graph, you need the graph. To make it from a single piece of string, every vertex should have an even number of edges.

Knowing the number of edges in the graph will be useful later, when marking the string.

Physical materials

For the edges of the toy, I wanted something that’s a bit flexible but can sort of stand up on its own. I found that cotton clothesline rope worked well: it’s easy to mark, easy to pin vertex numbers onto, and sturdy but still flexible. I realized after completing the construction that it would have been clever to string items like beads onto the edges to make the toy prettier and identify which edge is which.

For the vertices, I pierced jump rings through the rope, then soldered them shut, to create flexible attachment points. This worked better than a previous prototype in which I used flimsier string and made the vertices from beads.

Vertices could be knotted, glued, sewn, or safety pinned. A bookbinding awl came in handy for making holes in the rope for the rings to go through.

Mathematical construction

First, I drew the graph of the shape I was trying to make – in this case, a hypercube. I counted its edges per vertex, 4. I made sure to draw each vertex with spots to write numbers in, half as many numbers as there are edges, because each time the string passes through the vertex it makes 2 edges. So in this case, every vertex needs room to write 2 numbers on it.

Here’s the graph I started with. I drew the edges in a lighter color so I could see which had already been visited when drawing in the euler cycle.


Then I started from an arbitrary vertex and drew in the line. Any algorithm for finding euler paths will suffice to draw the line. The important part of tracing the line on the graph is to mark each vertex it encounters, sequentially. So the vertex I start at is 1, the first vertex I visit is 2, and so forth.

Since the euler path visits every vertex of my particular hypercube twice, every vertex will have 2 numbers (the one I started at will have 3) when I finish the math puzzle. These pairs of numbers are what tell me which part of the string to attach to which other part.
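Any Euler-path algorithm will do, as noted above; as an illustrative sketch (not necessarily how the author traced it by hand), here is Hierholzer's algorithm walking the 4-cube. The 33-entry vertex sequence it produces plays the same role as the sequential numbers written on the drawing:

```rust
// Hierholzer's algorithm: walk unused edges from the top of a stack,
// and whenever a vertex has no unused edges left, pop it onto the cycle.
fn euler_cycle(n: usize, edges: &[(usize, usize)]) -> Vec<usize> {
    // Adjacency lists of (neighbor, edge index) so each physical edge
    // can be marked used exactly once, no matter which end we leave from.
    let mut adj = vec![Vec::new(); n];
    for (i, &(a, b)) in edges.iter().enumerate() {
        adj[a].push((b, i));
        adj[b].push((a, i));
    }
    let mut used = vec![false; edges.len()];
    let mut stack = vec![0usize];
    let mut cycle = Vec::new();
    while !stack.is_empty() {
        let v = *stack.last().unwrap();
        if let Some(&(w, e)) = adj[v].iter().find(|&&(_, e)| !used[e]) {
            used[e] = true;
            stack.push(w);
        } else {
            cycle.push(stack.pop().unwrap());
        }
    }
    cycle
}

fn main() {
    // The 4-cube again: an edge wherever two vertex labels differ in one bit.
    let mut edges = Vec::new();
    for v in 0usize..16 {
        for bit in 0..4 {
            let w = v ^ (1 << bit);
            if v < w {
                edges.push((v, w));
            }
        }
    }
    let cycle = euler_cycle(16, &edges);
    // 32 edges means 33 visits: the start vertex appears at both ends,
    // matching the 33 marks on the string.
    assert_eq!(cycle.len(), edges.len() + 1);
    assert_eq!(cycle.first(), cycle.last());
    println!("{:?}", cycle);
}
```

Reading off the printed sequence gives, for each vertex, the set of mark numbers that must be ringed together.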

Here’s what my graph looked like once I found an euler cycle in it and numbered the vertices that the cycle visited:

Physical construction

Since my graph has 32 edges, I made 33 evenly spaced marks on the string. I used an index card to measure them because that seemed like an ok size, but in retrospect it would have been fine if I’d made it smaller.


I then numbered each mark in sequence, from 1 to 33. I numbered them by writing the numbers on slips of paper and pinning the papers to the rope, but if I was using a ribbon or larger rope, the numbers could have been written directly on it. If you’re doing this at home, you could mark the numbers on masking tape on the rope just as well.


The really tedious step is applying the vertices. I just went through the graph, one vertex at a time, and attached the right points on the string together for it.

The first vertex had numbers 1, 25, and 33 on it for the euler cycle I drew and numbered on the graph, so I attached the string’s points 1, 25, and 33 together with a jump ring. The next vertex on the drawing had the numbers 2 and 18 on it, so I pierced together the points on the string that were labeled 2 and 18.

I don’t think it matters what order the vertices are assembled in, as long as the process ultimately results in all the vertices on the graph being represented by rings affixing the corresponding points on the string together.

I also soldered the rings shut, because after all that work I don’t want them falling out.


That’s all there is to it!


I’m going to have to find a faster way to apply the vertices before attempting a 6D hypercube. An ideal vertex would allow all edges to rotate and reposition themselves freely, but failing that, a lighter weight string and crimp fasteners large enough to hold 6 pieces of that string might do the trick.

The finished toy is not much to look at, but quite amusing to try to flatten into 3-space.


Cameron Kaiser: And now for something completely different: The dawning of the Age of Apple Aquarius

Mozilla planet - Mon, 30/12/2019 - 08:59
An interesting document has turned up at the Internet Archive: the specification to the Scorpius CPU, the originally intended RISC successor to the 68K Macintosh.

In 1986 the 68K processor line was still going strong but showing its age, and a contingent of Apple management (famously led by then-Mac division head Jean-Louis Gassée and engineer Sam Holland) successfully persuaded then-CEO John Sculley that Apple should be master of its own fate with its own CPU. RISC was just emerging at that time, with the original MIPS R2000 CPU appearing around 1985, and was clearly where the market was going (arguably it still is, since virtually all major desktop and mobile processors are load-store at the hardware level today, even Intel); thus was the Aquarius project born. Indeed, Sculley's faith in the initiative was so great that he allocated a staff of fifty and even authorized a $15 million Cray supercomputer, which was smoothed over with investors by claiming it was for modeling Apple hardware (which, in a roundabout and overly optimistic way, it was).

Holland was placed in charge of the project and set about designing the CPU for Aquarius. The processor's proposed feature set was highly ambitious, including four cores and SIMD (vector) support with inter-processor communication features. Holland's specification was called Scorpius; the initial implementation of the Scorpius design was to be christened Antares. This initial specification is what was posted at the Internet Archive, dated around 1988.

Despite Sculley and Gassée's support, Aquarius was controversial at Apple from the very beginning: it required a substantial R&D investment, cash which Apple could ill afford to fritter away at the time, and even if the cash were there, many within the company did not believe Apple had sufficient technical chops to get the CPU to silicon. Holland's complex specification worried senior management further, as it required solving various technical problems that even large, highly experienced chip design companies of the time would have found difficult.

With only a proposal and no actual hardware by 1988, Sculley became impatient, and Holland was replaced by Al Alcorn. Alcorn was a legend in the industry by this time, best known for his work at Atari, where he designed Pong and was involved in the development of the Atari 400 and the ill-fated "holographic" Atari Cosmos. After leaving Atari in 1981, he consulted for various companies and was brought in by Apple as outside expertise to try to rescue Aquarius. Alcorn pitched the question to microprocessor expert Hugh Martin, who studied the specification and promptly pronounced it "ridiculous" to both Alcorn and Sculley. On this advice Sculley scuttled Aquarius in 1989 and hired Martin to design a computer instead using an existing CPU. Martin's assignment became the similarly ill-fated Jaguar project, which competed poorly with another simultaneous project, led by veteran engineer Jack McHenry, called Cognac. Cognac, unlike Jaguar and Aquarius, actually produced working hardware. The "RISC LC" that the Cognac team built, originally a heavily modified Macintosh LC with a Motorola 88100 CPU running Mac OS, became the direct ancestor of the Power Macintosh. The Cray supercomputer, now idle, eventually went to the industrial design group for case modeling until it was dismantled.

Now that we have an actual specification to read, how might this have compared to the PowerPC 601? Scorpius defined a big-endian 32-bit RISC chip addressing up to 4GB of RAM with four cores, which the technical specification refers to as processing units, or PUs. Each core shares instruction and data caches with the others and communicates over a 5x4 crossbar network, and because all cores on a CPU must execute within the same address space, are probably best considered most similar to modern hardware threads (such as the 32 threads on the SMT-4 eight core POWER9 I'm typing this on). An individual core has 16 32-bit general purpose registers (GPRs) and seven special purpose registers (SPRs), plus eight global SPRs common to the entire CPU, though there is no floating-point unit in the specification we see here. Like ARM, and unlike PowerPC and modern Power ISA, the link register (which saves return addresses) is a regular GPR and code can jump directly to an address in any register. However, despite having a 32-bit addressing space and 32-bit registers, Scorpius uses a fixed-size 16-bit instruction word. Typical of early RISC designs and still maintained in modern MIPS CPUs, it also has a branch delay slot, where the instruction following a branch (even if the branch is taken) is always executed. Besides the standard cache control instructions, there are also special instructions for a core to broadcast to other cores, and the four PUs could be directed to work on data in tandem to yield SIMD vector-like operations (such as what you would see with AltiVec and SSE). Holland's design even envisioned an "inter-processor bus" (IPB) connecting up to 16 CPUs, each with their own local memory, something not unlike what we would call a non-uniform memory access (NUMA) design today.

The 16-bit instruction size greatly limits the breadth of available instructions compared to PowerPC's 32-bit instructions, but that would certainly be within the "letter" spirit of RISC. It also makes the code possibly more dense than PowerPC, though the limited amount of bits available for displacements and immediate values requires the use of a second prefix register and potentially multiple instructions which dampens this advantage somewhat. The use of multiple PUs in tandem for SIMD-like operations is analogous to AltiVec and rather more flexible, though the use of bespoke hardware support in later SIMD designs like the G4 is probably higher performance. The lack of a floating-point unit was probably not a major issue in 1986 but wasn't very forward-looking as every 601 shipped with an FPU standard from the factory; on the other hand, the NUMA IPB was very adventurous and certainly more advanced than multiprocessor PowerPC designs, something that wasn't even really possible until the 604 (or not without a lot of hacks, as in the case of the 603-based BeBox).

It's ultimately an academic exercise, of course, because this specification was effectively just a wish list whereas the 601 actually existed, though not for several more years. Plus, the first Power Macs, being descendants of the compatibility-oriented RISC LC, could still run 68K Mac software; while the specification doesn't say, Aquarius' radical differences from its ancestor suggests a completely isolated architecture intended for a totally new computer. Were Antares-based systems to actually emerge, it is quite possible that they would have eclipsed the Mac as a new and different machine, and in that alternate future I'd probably be writing a droll and informative article about the lost RISC Mac prototype instead.


Cameron Kaiser: TenFourFox FPR18b1 available

Mozilla planet - Tue, 24/12/2019 - 07:37
TenFourFox Feature Parity Release 18 beta 1 is now available (downloads, hashes, release notes). As promised, the biggest change in this release is to TenFourFox's Reader mode. Reader mode uses Mozilla Readability to display a stripped-down version of the page with (hopefully) the salient content, just the salient content, and no crap or cruft. This has obvious advantages for old systems like ours because Reader mode pages are smaller and substantially simpler, don't run JavaScript, and help to wallpaper over various DOM and layout deficiencies our older patched-up Firefox 45 underpinnings are starting to show a bit more.

In FPR18, Reader mode has two main changes: first, it is updated to the same release used in current versions of Firefox (I rewrote the glue module in TenFourFox so that current releases could be used unmodified, which helps maintainability), and second, Reader mode is now allowed on most web pages instead of only on ones Readability thinks it can render. By avoiding a page scan this makes the browser a teensy bit faster, but it also means that edge-case web pages that could still usefully display in Reader mode now can do so. When Reader mode can be enabled, a little "open book" icon appears in the address bar. Click that and it will turn orange and the page will switch to Reader mode. Click it again to return to the prior version of the page. Certain sites don't work well with this approach and are automatically filtered; we use the same list as Firefox. If you want the old method where the browser would scan the page first before offering reader mode, switch tenfourfox.reader.force-enable to false and reload the tab, and please mention what it was doing inappropriately so it can be investigated.

Reader mode isn't seamless, and in fairness wasn't designed to be. The most noticeable discontinuity is if you click a link within a Reader mode page, it renders that link in the regular browser (requiring you to re-enter Reader mode if you want to stay there), which kind of sucks for multipage documents. I'm considering a tweak to it such that you stay in Reader mode in a tab until you exit it but I don't know how well this would work and it would certainly alter the functionality of many pages. Post your thoughts in the comments. I might consider something like this for FPR19.

Besides the usual security updates, FPR18 also makes a minor compatibility fix to the browser and improves the comprehensiveness of removing browser data for privacy reasons. More work needs to be done on this because of currently missing APIs, but this first pass in FPR18 is a safe and easy improvement. As this is the first "fast four week" release, it will become live January 6.

I also wrote up another quickie script for those of you exploring TenFourFox's AppleScript support. Although Old Reddit appears to work just dandy with TenFourFox, the current React-based New Reddit is a basket case: it's slow, it uses newer JavaScript support that TenFourFox only allows incompletely, and its DOM is hard for extensions to navigate. If you're stuck on New Reddit and you can't read the comments because the "VIEW ENTIRE CONVERSATION" button doesn't work because React and if you work on React I hate you, you can now download Reddit Moar Comments. When the script is run, if a Reddit comments thread is in the current tab, it acts as if the View Entire Conversation button had been clicked and expands the thread. If you're like me, put the Scripts menu in the menu bar (using the AppleScript Utility), have a TenFourFox folder in your Scripts, and put this script in it so it's just a menu drop-down or two away. Don't forget to try the other possibly useful scripts in that folder, or see if you can write your own.

Merry Christmas to those of you who celebrate it, and a happy holiday to all.


Armen Zambrano: Reducing Treeherder’s time-to-deploy

Mozilla planet - Mon, 23/12/2019 - 19:34
Reducing Treeherder’s time-to-deploy

Up until September we had been using code merges from the master branch to the production one to cause production deployments.

A merge to production would trigger a few automatic steps:

  1. The code would get tested in Travis CI (10 minutes or more)
  2. Upon success the code would be built by Heroku (a few minutes)
  3. Upon success a Heroku release would happen (less than a minute)
Figure: the steps that happen before new code is deployed.

If a regression was to be found on production we would either `git revert` a change out of all merged changes OR use Heroku’s rollback feature to the last known working state (without using Git).

Using `git revert` to get us back into a good state would be very slow since it would take 15–20 minutes to run through Travis, a Heroku build and a Heroku release.

On the other hand, Heroku’s rollback feature would be an immediate step as it would skip steps 1 and 2. Rolling back is possible because a previous build of a commit would still be available and only the release step would be needed.

The procedural change I proposed was to use Heroku’s promotion feature (similar to Heroku’s rollback feature). This would reuse a build from the staging app for the production app. The promotion process is a one-click event that only executes the release step, since steps 1 & 2 had already run on the staging app. Promotions would take less than a minute to be live.

Figure: how a Heroku build on stage is reused for production.

This change made day-to-day deployments a less involved process, since all deployments now take less than a minute. I’ve been quite satisfied with the change, since a deployment requires much less waiting around to validate.


Marco Zehe: Happy Chanukka

Mozilla planet - Mon, 23/12/2019 - 13:00

Wishing all of my readers who celebrate it, a very happy Chanukka!

This year, Chanukka started at sundown on December 22 and runs through December 30. It coincides with Christmas. And as it so happens, the Muslim mayor of London kicked off the Chanukka celebrations from a Christmas tree last night. In a world where so many divisive thoughts and actions are becoming prominent again, endangering the free and open societies of some western countries, these connecting events are more important than ever.

Welcome to london…where the Muslim mayor of London kicks off the Jewish festival of Chanukah metre from a giant Christmas tree on one of the most famous sites in the world @JLC_uk @JewishLondon @ChabadUK @JewishNewsUK @sadiqkhan

— Justin Cohen (@CohenJust) December 22, 2019

Let me close by sharing with you a musical wish for a happy Chanukka by one of my favorite bands, the Canadians Walk Off The Earth, featuring Scott Helman.


Niko Matsakis: Async Interview #3: Carl Lerche

Mozilla planet - Mon, 23/12/2019 - 06:00

Hello! For the latest async interview, I spoke with Carl Lerche (carllerche). Among many other crates, Carl is perhaps best known as one of the key authors behind tokio and mio. These two crates are quite widely used throughout the async ecosystem. Carl and I spoke on December 3rd.


You can watch the video on YouTube. I’ve also embedded a copy here for your convenience:

Background: the mio crate

One of the first things we talked about was a kind of overview of the layers of the “tokio-based async stack”.

We started with the mio crate. mio is meant to be the “lightest possible” non-blocking I/O layer for Rust. It basically exposes the “epoll” interface that is widely used on linux. Windows uses a fundamentally different model, so in that case there is a kind of compatibility layer, and hence the performance isn’t quite as good, but it’s still pretty decent. mio “does the best it can”, as Carl put it.

The tokio crate builds on mio. It wraps the epoll interface and exposes it via the Future abstraction from std. It also offers other things that people commonly need, such as timers.

Finally, building atop tokio you find tower, which exposes a “request-response” abstraction called Service. tower is similar to things like finagle or rack. This is then used by libraries like hyper and tonic, which implement protocol servers (http for hyper, gRPC for tonic). These protocol servers internally use the tower abstractions as well, so you can tell hyper to execute any Service.

One challenge is that it is not yet clear how to adapt tower’s Service trait to std::Future. It would really benefit from support for async functions in traits, in particular, which is difficult for a lot of reasons. The current plan is to adopt Pin and to require boxed dyn Future values if you wish to use the async fn sugar. (Which seems like a good starting place, -ed.)
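To make the boxing approach concrete, here is a heavily simplified, hypothetical sketch of a request-response trait in the spirit of tower’s Service; it is not tower’s actual API. The trait name, the EchoService type, and the minimal block_on executor are all invented for illustration:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical, heavily simplified request-response trait. `call` returns a
// boxed `dyn Future`, sidestepping the lack of async fn in traits.
trait SimpleService {
    type Response;
    fn call(&mut self, request: String) -> Pin<Box<dyn Future<Output = Self::Response>>>;
}

struct EchoService;

impl SimpleService for EchoService {
    type Response = String;
    fn call(&mut self, request: String) -> Pin<Box<dyn Future<Output = String>>> {
        // An async block is allowed here even though `async fn` in the trait is not.
        Box::pin(async move { format!("echo: {}", request) })
    }
}

// Minimal single-future executor with a no-op waker, just to run the sketch.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is never moved again after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let mut svc = EchoService;
    let reply = block_on(svc.call("hello".to_string()));
    println!("{}", reply); // prints "echo: hello"
}
```

Returning Pin&lt;Box&lt;dyn Future&gt;&gt; costs an allocation per call, which is the trade-off the boxing plan accepts in exchange for working on stable Rust today.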

Returning to the overall async stack, atop protocol servers like hyper, you find web frameworks, such as warp – and (finally) within those you have middleware and the actual applications.

How independent are these various layers?

I was curious to understand how “interconnected” these various crates were. After all, while tokio is widely used, there are a number of different executors out there, both targeting different platforms (e.g., Fuchsia) as well as different trade-offs (e.g., async-std). I’m really interested to get a better understanding of what we can do to help the various layers described above operate independently, so that people can mix-and-match.

To that end, I asked Carl what it would take to use (say) Warp on Fuchsia. The answer was that “in principle” the point of Tower is to create just such a decoupling, but in practice it might not be so easy.

One of the big changes in the upcoming tokio 0.2 crate, in fact, has been to combine and merge a lot of tokio into one crate. Previously, the components were more decoupled, but people rarely took advantage of that. Therefore, tokio 0.2 combined a lot of components and made the experience of using them together more streamlined, although it is still possible to use components in a more “standalone” fashion.

In general, to make tokio work, you need some form of “driver thread”. Typically this is done by spawning a background thread, though you can skip that and run the driver yourself.

The original tokio design had a static global that contained this driver information, but this had a number of issues in practice: the driver sometimes started unexpectedly, it could be hard to configure, and it didn’t work great for embedded environments. Therefore, the new system has switched to an explicit launch, though there are procedural macros (#[tokio::main] and #[tokio::test]) that provide sugar if you prefer.

What should we do next? Stabilize stream.

Next we discussed which concrete actions made sense next. Carl felt that an obvious next step would be to stabilize the Stream trait. As you may recall, cramertj and I discussed the Stream trait in quite a lot of detail – in short, the existing design for Stream is “detached”, meaning that it must yield up ownership of each item it produces, much like an Iterator. It would be nice to figure out the story for “attached” streams that can re-use internal buffers, which are a very common use case, especially before we create syntactic sugar.

Carl’s motivation for a stable Stream is in part that he would like to issue a stable tokio release, ideally in Q3 of 2020, and Stream would be a part of that. If there is no Stream trait in the standard library, that complicates things.

One thing we didn’t discuss, but which I personally would like to understand better, is what sort of libraries and infrastructure might benefit from a stabilized Stream. For example, “data libraries” like hyper mostly want a trait like AsyncRead to be stabilized.

About async read

Next we discussed the AsyncRead trait a little, though not in great depth. If you’ve been following the latest discussion, you’ll have seen that there is a tokio proposal to modify the AsyncRead traits used within tokio. There are two main goals here:

  • to make it safe to pass an uninitialized memory buffer to read
  • to better support vectorizing writes

However, there isn’t a clear consensus on the thread (at least not the last time I checked) on the best alternative design. The PR itself proposes changing from a &mut [u8] buffer (for writing the output into) to a dyn trait value, but there are other options. Carl for example proposed using a concrete wrapper struct instead, and adding methods to test for vectorization support (since outer layers may wish to adopt different strategies based on whether vectorization works).

One of the arguments in favor of the current design from the futures crate is that it maps very cleanly to the Read trait from the stdlib (cramertj advanced this argument, for example). Carl felt that the trait is already quite different (e.g., notably, it uses Pin) and that these more “analogous” interfaces could be provided as defaulted helper methods instead. Further, he felt that async applications tend to prize performance more highly than synchronous ones, so the overhead of requiring initialized memory matters more.
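The “concrete wrapper struct” idea could look roughly like the following sketch. The ReadBuf name and its methods are hypothetical here; the point is that a struct, unlike a bare &amp;mut [u8], can track how much of the buffer is initialized and can grow new capability queries (such as vectorization support) without changing the trait signature:

```rust
// Hypothetical wrapper type in the spirit of Carl's proposal; the name
// `ReadBuf` and its methods are invented for this sketch.
struct ReadBuf<'a> {
    data: &'a mut [u8],
    filled: usize, // bytes 0..filled are guaranteed initialized
}

impl<'a> ReadBuf<'a> {
    fn new(data: &'a mut [u8]) -> Self {
        ReadBuf { data, filled: 0 }
    }

    /// Append bytes after the filled region, advancing the fill cursor.
    /// A reader writes through this instead of touching the raw slice,
    /// so it can never read the buffer's uninitialized tail.
    fn put_slice(&mut self, src: &[u8]) {
        let end = self.filled + src.len();
        assert!(end <= self.data.len(), "buffer too small");
        self.data[self.filled..end].copy_from_slice(src);
        self.filled = end;
    }

    /// The initialized, filled prefix of the buffer.
    fn filled(&self) -> &[u8] {
        &self.data[..self.filled]
    }

    /// Where a real design could surface a vectored-I/O capability query.
    fn supports_vectored(&self) -> bool {
        false // this sketch only models a single contiguous buffer
    }
}

fn main() {
    let mut storage = [0u8; 16];
    let mut buf = ReadBuf::new(&mut storage);
    buf.put_slice(b"hello");
    assert_eq!(buf.filled(), &b"hello"[..]);
    println!("filled {} bytes, vectored: {}", buf.filled().len(), buf.supports_vectored());
}
```

A production version would hold MaybeUninit bytes rather than a zeroed slice; this sketch keeps a plain slice so the safety story stays trivial.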

About async destructors and other utilities

We discussed async destructors. Carl felt that they would be a valuable thing to add for sure. He felt that the “general design” proposed by boats would be reasonable, although he thought there might be a bit of a duplication issue if you have both an async drop and a sync drop. A possible solution would be to have a prepare_to_drop async method that gives the object time to do async preparations, and then to always run the sync drop afterwards.
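A hypothetical sketch of that prepare_to_drop shape, using invented names and a boxed future in place of real async-trait support: the async hook performs the asynchronous cleanup, and the ordinary synchronous Drop still always runs afterwards, so the cleanup logic is not duplicated:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical trait: an async hook that runs before the ordinary Drop.
trait PrepareToDrop {
    fn prepare_to_drop(&mut self) -> Pin<Box<dyn Future<Output = ()> + '_>>;
}

struct Connection {
    log: Vec<&'static str>,
}

impl PrepareToDrop for Connection {
    fn prepare_to_drop(&mut self) -> Pin<Box<dyn Future<Output = ()> + '_>> {
        Box::pin(async move {
            // Imagine an async flush or a protocol "goodbye" message here.
            self.log.push("async prepare_to_drop ran");
        })
    }
}

impl Drop for Connection {
    fn drop(&mut self) {
        // The sync drop always runs afterwards, so cleanup isn't duplicated.
        println!("sync drop ran; log = {:?}", self.log);
    }
}

// Minimal single-future executor with a no-op waker, just for the sketch.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let mut conn = Connection { log: Vec::new() };
    block_on(conn.prepare_to_drop()); // async half of the teardown
    assert_eq!(conn.log, vec!["async prepare_to_drop ran"]);
    // `conn` falls out of scope here and Drop::drop runs synchronously.
}
```

The open design question is who guarantees the async hook actually gets awaited before the value is dropped; this sketch simply relies on the caller doing so.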

We also discussed a few utility methods like select!, and Carl mentioned that a lot of the ecosystem is currently using things like proc-macro-hack to support these, so perhaps a good thing to focus on would be improving procedural macro support so that it can handle expression level macros more cleanly.

  1. I think loom looks particularly cool. 


Marco Zehe: WordPress accessibility team member, Gutenberg contributor

Mozilla planet - zo, 22/12/2019 - 13:00

My recent frequent blogging about Gutenberg has led to some really productive changes.

One change is that my profile now shows that I am also contributing to the accessibility effort. The accessibility team mostly consists of volunteers. And now, I am one of them as well.

I also started contributing more than just issues to Gutenberg: I can also review and label issues and pull requests now. There are some exciting changes ahead that I helped test and review in the past few days, and I promise I’ll blog about them once they are in an official plugin release.

It is my hope that my contributions will help move accessibility forward in a good direction for all. I’d like to thank both the other members of the WordPress accessibility team and the maintainers of Gutenberg for welcoming me to the community.


Cameron Kaiser: RIP, Chuck Peddle

Mozilla planet - za, 21/12/2019 - 21:29
I never had the pleasure of meeting him in person, but virtually any desktop computer owes a debt to him. Not only do the computers using the 6502 microprocessor he designed owe him, but the 6502 was so inexpensive (especially compared with the Intel and Motorola chips it competed with) that it made a computer in everybody's home actually feasible. Here just in the very room I'm typing this, there is a Commodore 128D, several Commodore SX-64s (with the 8502 and 6510 respectively, variants of the 6502 with on-chip I/O ports), a Commodore KIM-1, a blue-label PET 2001, an Apple IIgs (technically with a 65816, the later WDC 16-bit variant), an Atari 2600 (6507, with a reduced address bus), an Atari Lynx (with the CMOS WDC WD65SC02), and an NEC TurboExpress (Hudson HuC6280, another modified WDC 65C02, with a primitive MMU). The 6502 appeared in fact in the Nintendo Famicom/NES (Ricoh 2A03 variant) and Super Nintendo (65816) and the vast majority of Commodore home computers before the Amiga, plus the Atari 8-bit and Apple II lines. For that matter, the Commodore 1541s and 1571s, separate from and built into the 128D and SX-64s, have 6502 CPUs too. Most impactful was probably its appearance in the BBC Micro series, which was one of the influences on the now-ubiquitous ARM architecture.

I will not recapitulate his life or biography except to say that when I saw him a number of years ago in a Skype appearance at Vintage Computer Festival East (in a big cowboy hat) he was a humble, knowledgeable and brilliant man. Computing has lost one of its most enduring pioneers, and I think it can be said without exaggeration that the personal computing era probably would not have happened without him.


Marco Zehe: Recap: The web accessibility basics

Mozilla planet - za, 21/12/2019 - 13:00

Today, I am just quickly going to recommend an old, but all-time reader favorite post of mine, published 4 years ago. It is as current today as it was then, and most of it was already true in the year 2000. Yes, I’m talking about the basics of web accessibility.


Daniel Stenberg: Summing up My 2019

Mozilla planet - za, 21/12/2019 - 00:41

2019 is special in my heart. It was different from many other years for me in several ways. It was a great year! This is what 2019 was to me.

curl and wolfSSL

I quit Mozilla last year, and at the beginning of this year I could announce that I had joined wolfSSL. For the first time in my life I could actually work on curl as my day job. As the project turned 21, I had spent somewhere in the neighborhood of 15,000 unpaid spare-time hours on it, and now I could finally do it “for real”. It’s huge.

Still working from home of course. My commute is still decent.


In November 2018 the name HTTP/3 was settled on, and this year has been all about getting it ready. I was proud to land and promote HTTP/3 in curl just before the first browser (Chrome) announced their support. The standard is still in progress and we hope to see it ship not too long into next year.


Focusing on curl full time allows a different kind of focus. I’ve landed more commits in curl during 2019 than in any other year going back all the way to 2005. We also reached 25,000 commits and 3,000 forks on GitHub.

We’ve added HTTP/3, alt-svc, parallel transfers in the curl tool, tiny-curl, fixed hundreds of bugs and much, much more. Ten days before the end of the year, I’ve authored 57% (over 700) of all the commits done in curl during 2019.

We ran our curl up conference in Prague and it was awesome.

We also (re)started our own curl Bug Bounty in 2019 together with HackerOne and paid over 1000 USD in rewards throughout the year. It was so successful we’re determined to raise the amounts significantly going into 2020.

Public speaking

I’ve done 28 talks in six countries. A crazy amount in front of a lot of people.

In media

Dagens Nyheter published this awesome article on me. I’m now shown on the internetmuseum. I was interviewed and highlighted in Bloomberg Businessweek’s “Open Source Code Will Survive the Apocalypse in an Arctic Cave” and Owen Williams’ Medium post The Internet Relies on People Working for Free.

When GitHub held their GitHub Universe event in November and talked about their new sponsors program on stage (which I am part of; you can sponsor me), this huge quote of mine was shown on the big screen.

Maybe not media, but in no less than two Mr Robot episodes we could see curl commands in a TV show!


I’ve participated in three podcast episodes this year, all in Swedish. Kompilator episode 5 and episode 8, and Kodsnack episode 331.


I’ve toyed with live-streamed programming and debugging sessions. That’s been a lot of fun and I hope to continue doing them on and off going forward as well. They also made me consider and get started on my libcurl video tutorial series. We’ll see where that will end…


I figure it can become another fun year too!


Steve Fink: Running taskcluster tasks locally

Mozilla planet - za, 21/12/2019 - 00:04
Work right from your own home! It can be difficult to debug failures in Taskcluster that don’t happen locally. Interactive tasks are very useful for this, but they broke during the last migration; a relevant bug is bug 1596632, which is duped to a just-fixed bug, so maybe it works now? I recently […]

Firefox UX: People who listen to a lot of podcasts really are different

Mozilla planet - vr, 20/12/2019 - 23:17
Podcast Enthusiasts and Podcast Newbies

Podcasts are quickly becoming a cultural staple. Between 2013 and 2018, the percent of Americans over age 12 who had ever listened to a podcast jumped from 27% to 44%, according to the Pew Research Center. Yet just 17% of Americans have listened to a podcast in the past week. So we wanted to know: What distinguishes people who listen to podcasts weekly, or even daily, from people who only listen occasionally? Do frequent and infrequent podcast listeners have different values, needs and preferences? To put it another way, are there different kinds of podcast listeners?

To explore this question, Mozilla did a series of surveys and interviews to understand how people listen to podcasts — how often they listen, how many shows they listen to, what devices they use, how they discover content, and what features of the listening experience matter most to them. Here’s what we found.

There is a subset of dedicated, frequent podcast listeners…and they listen a lot

We released a short survey on podcast listening habits to a representative sample of Americans (recruited through Survey Monkey) and a targeted group of audio enthusiasts (distributed via subreddits such as r/podcast and r/audiodrama, and Mozilla’s social media accounts). In this survey, we asked people how often they listen to podcasts:

How often do you listen to podcasts (across all devices)?

Chart: responses cluster at the extremes; people tend to listen either never or every day.

We found that 38% of our survey respondents listen to podcasts daily. Note that we asked this question for each device (i.e., How often do you listen on your phone? On a smart speaker? etc.) The graph above shows the highest listening frequency for each person. For example, someone who listens on Alexa a few times a month and on a phone daily would be classified as a daily listener. This could result in an underestimate of each respondent’s overall listening frequency.

A bimodal pattern is emerging: People tend to either listen very infrequently (a few times a month) or very frequently (every day). At first, we found it surprising that podcast listenership in our survey was much more common than in Pew’s results. However, when we separated out the results by the Survey Monkey panel (which is roughly comparable to the general U.S. population) and our Reddit and social media channels, here’s what we found:

How often do you listen to podcasts (across all devices)?

We saw our Reddit users were much heavier podcast listeners than the general population

In the Survey Monkey panel, 56% of people at least occasionally listen to podcasts, which is still higher than Pew’s findings, but more comparable. In contrast, fully 91% of the people who accessed the survey via Reddit and Mozilla’s social media channels listen to podcasts at least occasionally, and 62% say they listen daily.

The listening distribution of these two populations are inverted. People who follow podcasting-related social media tend to listen a lot. This may seem like an obvious connection, but it suggests that we may find some interesting results if we look at the daily listeners and other podcast listeners separately.

Frequent and infrequent podcast listeners use different technologies

Smartphones are by far the dominant devices for podcast listening. But when we split apart listeners by frequency, we see that smartphone listening is more dominant among daily listeners, whereas laptop and desktop listening is more dominant among monthly listeners: 38% of podcast listeners use smartphones to listen daily; conversely, 27% of podcast listeners use laptops or desktops to listen a few times a month. We also found that frequent podcast listeners are more likely to use multiple types of devices to listen to podcasts.

How often do you listen to podcasts on these different devices?

Smartphones are dominant devices

This chart shows how often people listen to podcasts on particular types of devices (smartphones, laptops or desktops, smart speakers) for survey respondents who listen to podcasts at least a few times a month (n = 575).

This distinction in technology use also plays out when we look at the apps/software people use to listen. Apple Podcasts/Apple iTunes is the most popular listening app across all listeners. However, daily listeners use a broader distribution of apps. This could indicate that frequent listeners are experimenting to find the listening experience that best fits their needs. Monthly listeners, on the other hand, are much more likely to listen in a web browser (and may not even have a podcasting app installed on their phone at all). YouTube is popular across all listeners, but proportionately more common with infrequent listeners.

Which podcasting apps do you use?

Apple podcasts continues to have a dominant position in the market

This chart displays podcast listeners, segmented by listening frequency, and the apps that they use. (Note that we didn’t explicitly ask how often people use each app. But we do know that, for example, of the 310 survey respondents who listen to podcasts daily, 85 use Apple Podcasts/Apple iTunes). For all listeners, Apple Podcasts/iTunes is the most popular platform. For weekly and monthly users, YouTube and web browsers are the next most popular platforms.

Why might infrequent listeners be more likely to listen in web browsers and on platforms like YouTube? Perhaps newer and infrequent podcast listeners haven’t developed listening routines, or haven’t committed to a particular device or app for listening. If they are accessing audio content ad hoc, the web may be easier and more convenient than using an app.

In addition to this broad scale survey data, we can learn more from in-depth interviews with podcast listeners. Podcasting newbies and podcast enthusiasts have different behaviors — but what about their values? To dig into this question, we interviewed seven people who self-define as podcast enthusiasts, and drew from fieldwork over the summer in Seattle and three European cities to understand listening behaviors. We learned a few key things from those studies, particularly around how people think about subscriptions, and how they learn about new podcasts.

“Subscriptions” don’t fully capture how people actually listen

While avid podcast listeners may subscribe to a long list of shows (up to 72 among the people we interviewed), they tend to be devoted to a smaller subset of shows, typically between 2 and 10, that they listen to on a regular basis. With these “regular rotation” shows, listeners catch new episodes soon after they are released and might even go back and re-listen to episodes multiple times. For listeners who have a core set of shows in their regular rotation, diving into a completely new podcast requires a significant amount of mental effort and time.

Several people we interviewed use subscriptions as a “save for later” feature, storing shows that they might want to listen to some day. But having a long list of aspirational podcasts can be overwhelming. One listener, for example, only wants shows “to be in front of me when I’m in the mood…So I’m trying be meticulous about subscribing and unsubscribing. They should have a different action that you can do, like your list of ‘when I’m ready for something new.’”

Relationships with podcasts come and go. As one listener described it, every day, “I’m going to eat breakfast. But I definitely have gone through phases in my life. Every morning I eat oatmeal….And then suddenly I hate that…I kind of feel like my podcast listening comes and goes and waves like that.”

One listener we interviewed is more of a grazer, roaming from show to show based on topics she is currently interested in: “I’ll just jump around, and I’ll try different things…I usually don’t subscribe.” For her, the concept of subscription doesn’t fit her listening patterns at all.

These themes indicate that perhaps the notion of “subscription” isn’t nuanced enough to capture the complex and dynamic ways people develop and break relationships with podcast content.

Word of mouth and podcast cross-promotion are powerful ways to discover content

Podcast enthusiasts use many strategies to figure out what to listen to, but one strategy dominates: When we asked podcast enthusiasts how they discover new content, every single person brought up word of mouth. All of the interviewees also found cross-promotion — when podcast hosts mention another show they enjoy — to be effective because it’s a recommendation that comes from a trusted voice.

The podcast enthusiasts we spoke with described additional ways they discover content,  including browsing top charts, looking to trusted brands, finding recommendations on social media, reading “best of” lists, and following a content producer from another medium (like an author or a TV star) onto a podcast. However, none of these strategies were as common, or as salient, as word of mouth or cross-promotion. Methods of content discovery can reinforce each other, producing a snowball effect. One listener noted, “I might hear it from like the radio. Sort of an anonymous source first, and then I hear it from a friend, ‘Like oh I heard about that. You just told me about it. I should definitely go check it out now.’” If listeners hear about a show from multiple avenues, they are more likely to invest time in listening to it.

Word of mouth goes both ways and podcast listeners’ enthusiasm for talking about podcasts isn’t limited to other fanatics. They often recommend podcasts to non-listeners, both entire shows and specific episodes that are contextually relevant. For example, one interviewee noted that, “Whenever I have a conversation about something interesting with someone I’ll say, ‘Oh I heard a Planet Money about that’ and I will refer them to it.” For frequent podcast listeners, podcast content serves as a kind of conversational currency.

What does this all mean?

Podcast listeners are not a homogeneous group. Product designers should consider people who listen a little and people who listen a lot; people who are new to podcasts and people who are immersed in podcast culture; people who are still figuring out how to listen and people who have built strong listening habits and routines. These distinct groups each bring their own values and preferences to the listening experience. By considering and addressing them, we can design listening products that better fit diverse listening needs.

We also asked about listening behaviors beyond just podcasts. To learn more about that, check out our companion post, Listening: It’s not just for audio.

A sketch of two podcast presenters arguing

Sketch by Jordan Wirfs-Brock



Firefox UX: Listening: It’s not just for audio

Mozilla planet - vr, 20/12/2019 - 18:53
Understanding how people listen

When we first set out to study listening behaviors, we focused on audio content. After all, audio is what people listen to, right? It quickly became apparent, however, that people also often listen to videos and multimedia content. Listening isn’t just for audio — it’s for any situation where we don’t (or can’t) use our eyes and thus our ears dominate.

Why do we care that people are listening to videos as a primary mode of accessing content? Because in the past, technologists and content creators have often treated video, audio and text as distinct content types — after all, they are different types of file formats. But the people consuming content care less about the media or file type and more about the experience of accessing content. With advances in web, mobile, and ubiquitous technology, we’re seeing a convergence in media experience. We anticipate this convergence will continue with the emergence of voice-based platforms.

How do we know people are “listening” to video?

In our survey on podcast listening behaviors (find out more in our companion blog post), we asked what apps people use to listen. YouTube was the second most popular app, used by 24% of podcast listeners. Only Apple Podcasts had more listeners:

Which of these do you use to listen to podcasts?

YouTube is the second most popular channel for podcasts, after Apple Podcasts.


Our survey also showed that YouTube and web browsers are more popular with infrequent podcast listeners and are often used as a secondary app. (More here!)

We found the prevalence of YouTube as a listening platform surprising, so we conducted a follow-up survey to get more information on the range of things people listen to in addition to podcasts. In this survey, deployed via the Firefox web browser, we asked which listening related activities people do at least once a month. Here’s what we found:

60% of people surveyed listen to podcasts at least once a month.


We found that 60% of survey respondents said they “listen” to streaming videos at least once a month (note that we explicitly used the word listen, not watch). Of the range of listening activities we asked about, “listening” to streaming videos was more popular than listening to podcasts or listening to radio. In fact, it was more popular than every activity except listening to streaming music.

How and why are people listening to video?

We were also curious about how often people listen to video content, what platforms they use to listen to video content, and why they listen to video content.

We asked people how often they do various listening activities (listening to streaming music, listening to podcasts, listening to content on a smart speaker, listening to streaming videos, etc.) and then sorted them based on frequency:

People listen to music a lot; audio books are pretty rare.

On the left are activities people tend to do rarely (50% of audiobook listeners say they do this a few times a month or less). On the right are activities that people tend to do daily (more than 60% of streaming video listeners say they do this daily). Note that “listening” to videos, either on the TV or on the web, falls in the middle: people are split pretty evenly between doing it a few times a week and doing it daily.

We also asked open-ended questions about the type of content people listen to and why they listen. People use streaming video as a listening platform for three main reasons: (1) access to content, (2) adaptability to environmental contexts, and (3) integration of features that aren’t common in podcasting apps.

Content: Access to content you can’t get anywhere else, and it’s all in one place

Our survey respondents noted that a lot of audio-focused content is available only on YouTube or on the web. People pointed to video and audio podcasts (“A lot of podcasts are only uploaded to YouTube nowadays”) as well as lectures, debates, old radio programs, movies and TV. People valued both the availability of this content and the convenience of being able to listen to multiple types of content (audio or otherwise) in one place. As one person commented, “I can seamlessly switch from audio content (podcasts) to video content.”

Context: In situations where you simply can’t watch, you listen to video

One survey respondent listens to news from YouTube videos while driving. Another says a “web browser allows me to listen at work in another tab.” In both of these situations, the person is listening in order to multitask and because they can’t use their eyes to watch the video. We also got a lot of comments about transitioning between watching and listening, or between devices as people move from contexts where they can use their eyes to contexts where they can’t. One person wrote, “My dream scenario: start watching a video on my computer then pick up my phone and continue listening to the audio part of this video, then come back to my computer and continue with video.”

Features: Platforms like YouTube have features that aren’t common in podcasting apps

Many survey respondents also noted features that they valued from YouTube that aren’t available in some popular podcasting apps, like recommendations of what to listen to next, being able to comment on episodes, being able to pick up where they left off, and being able to manage playlists. One YouTube listener highlighted, “The fact that I get to comment on the content, rather than something like Apple’s Podcast app which doesn’t allow for discussion or feedback either to other listeners or to the creators.” Another pointed out, “Ability to bookmark and share at specific times.” Many of these features exist in some form in podcasting apps, but aren’t standard or aren’t as integrated into the listening experience.

What are the implications of listening to video?

As product designers and content producers, we tend to think about content in terms of media types — is this video, audio or text? But people experience media in a much more fluid manner. There is a flexibility inherent in a multimedia or multi-modal experience that allows people to listen, or watch, or read, or do any combination of the three when it best suits them. For example, one person uses YouTube as a listening platform because of the “auto-captions which I can export for future reading and citation.” Another listener treats video elements as supplementary to audio, noting: “I also like the added visual stimulation when I want it.” Instead of deciding “I need to watch a video now” or “I need to listen to audio content now,” people make media decisions based on what information is in content and how they can fit it into their lives.

Listening to video sketch

Sketch by Jordan Wirfs-Brock


Mozilla VR Blog: How much is that new VR headset really sharing about you?

Mozilla planet - vr, 20/12/2019 - 15:06

VR was big this holiday season: the Oculus Go hit #1 on Amazon’s electronics best-seller list on Black Friday, and the Oculus Quest continues to sell. But in the spirit of Mozilla’s Privacy Not Included guidelines, you might be wondering: what personal information is Oculus collecting while you use your device?

Reading the Oculus privacy policy, they say that they process and collect information like

  • information about your environment, physical movements, and dimensions
  • location-related information
  • information about people, games, content, apps, features, and experiences you interact with
  • identifiers that may be unique to you
  • and much much more!

That’s…a lot of data. Most of this data, like information about your physical movements, is required for the basic functionality of most MR experiences. For example, to track whether you avoid an obstacle in Beat Saber, your device needs to know the position of your head in space.

There’s a difference between processing and collecting. As we mentioned, you can’t do much without processing certain data. Processing can happen either on the device itself or on remote servers. Collecting data implies that it is stored remotely for a period beyond what’s necessary to simply process it.
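To make the distinction concrete, here is a minimal sketch (hypothetical names, not actual Oculus code): pose data is processed on-device and discarded every frame, while only coarse, aggregated telemetry is prepared for remote collection.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    z: float


def process_pose_locally(pose: Pose, obstacle_y: float) -> bool:
    """Processing: the pose is used immediately on-device, then discarded."""
    return pose.y < obstacle_y  # e.g. did the player duck under the wall?


def make_telemetry(frame_times_ms: list) -> dict:
    """Collection: only coarse, aggregated stats ever leave the device."""
    return {
        "frames": len(frame_times_ms),
        "avg_frame_ms": round(sum(frame_times_ms) / len(frame_times_ms), 2),
    }


ducked = process_pose_locally(Pose(0.0, 1.2, 0.0), obstacle_y=1.5)
report = make_telemetry([11.0, 12.0, 13.0])
```

The raw poses never appear in the telemetry dictionary; that separation is what distinguishes on-device processing from remote collection.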

Mozilla’s brand promise to our users is focused on security and privacy. So, while testing the Oculus Quest for Mozilla Mixed Reality products, we needed to know what kind of data was being sent to and from the device during a browsing session. The device has a developer mode that gives you access to advanced features when you connect it to your computer and use Android Debug Bridge (`adb`). Using developer mode and `adb`, we installed a custom trusted root certificate, which allowed us to inspect the connections in depth.

So, what is Facebook transmitting from your device back to Facebook servers during a routine browsing session? From the data we saw, they’re reporting configuration and telemetry data, such as information about how long it took to fetch resources. For example, here’s a graph of the amount of data sent over time from the Oculus VR headset back to Facebook.

Figure: Bytes sent to Facebook IPs over time
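As a sketch of how such a graph can be built from a proxy capture, assuming each log entry has already been reduced to a (timestamp, bytes) pair (the input format here is illustrative, not the actual capture format):

```python
from collections import defaultdict


def bytes_per_interval(samples, interval_s=60):
    """Sum bytes sent, bucketed by time interval (seconds since capture start)."""
    buckets = defaultdict(int)
    for ts, nbytes in samples:
        # Map each sample to the start of its interval and accumulate.
        buckets[int(ts // interval_s) * interval_s] += nbytes
    return dict(sorted(buckets.items()))


samples = [(5, 300), (42, 120), (70, 500), (130, 80)]
print(bytes_per_interval(samples))  # one bucket per 60-second window
```

Plotting the resulting bucket totals against their start times gives a bytes-over-time chart like the one above.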

The data is identified by both an id, which is consistent across browsing sessions, and a session_id. The id appears to be linked to the device hardware, because linking a Facebook account didn’t change the identifier (or any other information as far as we detected).

In addition to general timing information, Facebook also receives reports on more granular, URL level timing information that uses a unique URL ID.

"time_to_fetch": "1", "url_uid": "d8657582", "firstbyte_time": "0",

Like computers, mixed reality (MR) devices can collect data on the sites you visit and applications you interact with. They also have the ability to collect and transmit large amounts of other data, including biometrically-derived data (BDD). BDD includes any information that may be inferred from biometrics, like gaze, gait, and other nonverbal communication methods. 6DOF devices like the Oculus Quest track both head and body movement. Other devices, like the MagicLeap One and HoloLens 2, also track gaze. This type of data can reveal intrinsic characteristics about users, such as their height. Information about where they look can reveal details about a user’s sexual preferences and powerful insights into their psychology. Innocuous data like facial movements during a task have been used in research to predict high or low performers.
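As an illustration of how revealing this data can be, here is a toy sketch of inferring standing height from 6DOF head-position samples. Real inference would be far more elaborate; the 90th-percentile cutoff and the 12 cm eye-to-crown offset are assumptions made for the example.

```python
def estimate_height_m(head_y_samples):
    """Estimate standing height (metres) from headset y-positions.

    Takes a high percentile to ignore moments of crouching, then adds
    a rough, assumed-constant offset from eye level to top of head.
    """
    ordered = sorted(head_y_samples)
    eye_level = ordered[int(0.9 * (len(ordered) - 1))]
    return round(eye_level + 0.12, 2)  # ~12 cm from eyes to crown


# Mostly-standing samples with one crouch (1.20 m) mixed in.
samples = [1.58, 1.60, 1.61, 1.20, 1.59, 1.62, 1.61, 1.60, 1.58, 1.61]
print(estimate_height_m(samples))  # → 1.73
```

A few seconds of head tracking is enough for even this crude estimator to land near a user's true height, which is exactly why BDD deserves stronger protection.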

Fortunately, even though its privacy policy would allow it, today Facebook does not appear to collect any of this MR-specific information from your Oculus VR headset. Instead, it collects data about timing, application version, and other configuration and telemetry. Under its privacy policy, though, nothing prevents it from starting to do so in the future.

In fact, Facebook just announced that Oculus VR data will now be used for ads if users are logged into Facebook. Horizon, Facebook's social VR experience, requires a linked Facebook account.

In addition to the difference between processing and collecting explained above, there’s a difference between committing to not collecting and simply not collecting. It’s not enough for Facebook to just not collect sensitive data now. They should commit not to collect it in the future. Otherwise, they could change the data they collect at any time without informing users of the change. Until BDD is protected and regulated, we need to be constantly vigilant.


Currently, BDD (and other data that MR devices can track) lacks protections beyond whatever is stipulated in the privacy policy (which is regulated by contract law), so companies often reserve the right to collect and disseminate all the information they might possibly want to, knowing that consumers rarely read (let alone comprehend) the legalese they agree to. It’s time for regulators and legislators to take action and protect sensitive health, biometric, and derived data from misuse by tech companies.


Daniel Stenberg: My 28 talks of 2019

Mozilla planet - vr, 20/12/2019 - 12:49
Photo: CS3 Sthlm 2019

In 2019 I did more public speaking than ever before in a single year: 28 public appearances. More than 4,500 people have seen my presentations live, at events both huge (like the 1,200-strong audience at FOSDEM 2019) and very small and up-close. Many thousands more have seen video recordings of some of the talks – my most viewed YouTube talk of 2019 has been seen over 58,000 times. Do I need to say that it was about HTTP/3, the topic I talked about most throughout the year? I suspect the desire to listen and learn more about that protocol version is far from saturated out there…

Cities

Photo: Nordic APIs Summit 2019

During the year I’ve done presentations in

Barcelona, Brussels, Copenhagen, Gothenburg, Mainz, Prague, Stockholm and Umeå.

I did many in Stockholm and two in Copenhagen.

Countries

Photo: Castor Software Days 2019

During the year I’ve done presentations in

Belgium, Czechia, Denmark, Germany, Spain and Sweden.

Most of my talks were held in Sweden. I did one streamed from my home office!

Topics

Photo: JAX 2019

14 of these talks had a title that included “HTTP/3” (example)

9 talks had “curl” in the title (one of them also had HTTP/3 in it) (example)

4 talks involved DNS-over-HTTPS (example)

2 talks were about writing secure code (example)

Talks in 2020

Photo: FOSDEM 2019

There will be talks by me in 2020 as well; planning is already underway. Probably just a little bit fewer of them!

Invite me?

Sure, please invite me and I will consider it. I’ve written down some suggestions on how to do this the best way.

Photo: At GOTO10 early 2019

(The top image is from Fullstackfest 2019)
