
The Mozilla Blog: Notes on Implementing Vaccine Passports

Mozilla planet - Thu, 22/04/2021 - 17:45

Now that we’re starting to get widespread COVID vaccination “vaccine passports” have started to become more relevant. The idea behind a vaccine passport is that you would have some kind of credential that you could use to prove that you had been vaccinated against COVID; various entities (airlines, clubs, employers, etc.) might require such a passport as proof of vaccination. Right now deployment of this kind of mechanism is fairly limited: Israel has one called the green pass and the State of New York is using something called the Excelsior Pass based on some IBM tech.

Like just about everything surrounding COVID, there has been a huge amount of controversy around vaccine passports (see, for instance, this EFF post, ACLU post, or this NYT article).

There seem to be four major sets of complaints:

  1. Requiring vaccination is inherently a threat to people’s freedom
  2. Because vaccine distribution has been unfair, with a number of communities having trouble getting vaccines, a requirement to get vaccinated increases inequity and vaccine passports enable that.
  3. Vaccine passports might be implemented in a way that is inaccessible for people without access to technology (especially to smartphones).
  4. Vaccine passports might be implemented in a way that is a threat to user privacy and security.

I don’t have anything particularly new to say about the first two questions, which aren’t really about technology but rather about ethics and political science, so I don’t think it’s that helpful to weigh in on them, except to observe that vaccination requirements are nothing new: it’s routine to require children to be vaccinated to go to school, people to be vaccinated to enter certain countries, etc. That isn’t to say that this practice is without problems, merely that it’s already quite widespread, so we have a bunch of prior art here. On the other hand, the questions of how to design a vaccine passport system are squarely technical; the rest of this post will be about that.

What are we trying to accomplish?

As usual, we want to start by asking what we’re trying to accomplish. At a high level, we have a system in which a vaccinated person (VP) needs to demonstrate to some entity (the Relying Party (RP)) that they have been vaccinated within some relevant time period. This brings with it some security requirements:

  1. Unforgeability: It should not be possible for an unvaccinated person to persuade the RP that they have been vaccinated.
  2. Information minimization: The RP should learn as little as possible about the VP, consistent with unforgeability.
  3. Untraceability: Nobody but the VP and RP should know which RPs the VP has proven their status to.

I want to note at this point that there has been a huge amount of emphasis on the unforgeability property, but it’s fairly unclear — at least to me — how important it really is. We’ve had trivially forgeable paper-based vaccination records for years and I’m not aware of any evidence of widespread fraud. However, this seems to be something people are really concerned about — perhaps due to how polarized the questions of vaccination and masks have become — and we have already heard some reports of sales of fake vaccine cards, so perhaps we really do need to worry about cheating. It’s certainly true that people are talking about requiring proof of COVID vaccination in many more settings than, for instance, proof of measles vaccination, so there is somewhat more incentive to cheat. In any case, the privacy requirements are a real concern.

In addition, we have some functional requirements/desiderata:

  1. The system should be cheap to bring up and operate.
  2. It should be easy for VPs to get whatever credential they need and to replace it if it is lost or destroyed.
  3. VPs should not be required to have some sort of device (e.g., a smartphone).
The Current State

In the US, most people who are getting vaccinated are getting paper vaccination cards that look like this:

COVID Vaccination Card

This card is a useful record that you’ve been vaccinated, with which vaccine, and when you have to come back, but it’s also trivially forgeable. Given that they’re made of paper with effectively no anti-counterfeiting measures (not even the ones that are in currency), it would be easy to make one yourself, and there are already people selling them online. As I said above, it’s not entirely clear how much we ought to worry about fraud, but if we do, these cards aren’t up to the task. In any case, they also have suboptimal information minimization properties: it’s not necessary to know how old you are or which vaccine you got in order to know whether you were vaccinated.

The cards are pretty good on the traceability front: nobody but you and the RP learns anything, and they’re cheap to make and use, without requiring any kind of device on the user’s side. They’re not that convenient if you lose them, but given how cheap they are to make, it’s not the worst thing in the world if the place you got vaccinated has to mail you a new one.

Improving The Situation

A good place to start is to ask how to improve the paper design to address the concerns above.

The data minimization issue is actually fairly easy to address: just don’t put unnecessary information on the card: as I said, there’s no reason to have your DOB or the vaccine type on the piece of paper you use for proof.

However, it’s actually not straightforward to remove your name. The reason for this is that the RP needs to be able to determine that the credential actually applies to you rather than to someone else. Even if we assume that the credential is tamper-resistant (see below), that doesn’t mean it belongs to you. There are really two main ways to address this:

  1. Have the VP’s name (or some ID number) on the credential and require them to provide a biometric credential (i.e., a photo ID) that proves they are the right person.
  2. Embed a biometric directly into the credential.

This should all be fairly familiar because it’s exactly the same as other situations where you prove your identity. For instance, when you get on a plane, TSA or the airline reads your boarding pass, which has your name, and then uses your photo ID to compare that to your face and decide if it’s really you (this is option 1). By contrast, when you want to prove you are licensed to drive, you present a credential that has your biometrics directly embedded (i.e., a drivers license).

This leaves us with the question of how to make the credential tamper-resistant. There are two major approaches here:

  1. Make the credential physically tamper-resistant
  2. Make the credential digitally tamper-resistant
Physically Tamper-Resistant Credentials

A physically tamper-resistant credential is just one which is hard to change or for unauthorized people to manufacture. This usually includes features like holograms, tamper-evident sealing (so that you can’t disassemble it without leaving traces), etc. Most of us have a lot of experience with physically tamper-resistant credentials such as passports, drivers licenses, etc. These generally aren’t completely impossible to forge, but they’re designed to be somewhat difficult. From a threat model perspective, this is probably fine; after all, we’re not trying to make it impossible to pretend to be vaccinated, just difficult enough that most people won’t try.

In principle, this kind of credential has excellent privacy because it’s read by a human RP rather than some machine. Of course, one could take a photo of it, but there’s no need to. As an analogy, if you go to a bar and show your driver’s license to prove you are over 21, that doesn’t necessarily create a digital record. Unfortunately for privacy, increasingly those kinds of previously analog admissions processes are actually done by scanning the credential (which usually has some machine readable data), thus significantly reducing the privacy benefit.

The main problem with a physically tamper-resistant credential is that it’s expensive to make and that by necessity you need to limit the number of people who can make it: if it’s cheap to buy the equipment to make the credential then it will also be cheap to forge. This is inconsistent with rapidly issuing credentials concurrently with vaccinating people: when I got vaccinated there were probably 25 staff checking people in and each one had a stack of cards. It’s hard to see how you would scale the production of tamper-resistant plastic cards to an operation like this, let alone to one that happens at doctors’ offices and pharmacies all over the country. It’s potentially possible that they could report people’s names to some central authority which then makes the cards, but even then we have scaling issues, especially if you want the cards to be available 2 weeks after vaccination. A related problem is that if you lose the card, it’s hard to replace because you have the same issuing problem.[1]

Digitally Tamper-Resistant Credentials

The major alternative here is to design a digitally tamper-resistant system. Effectively what this means is that the issuing authority digitally signs a credential. This provides cryptographically strong authentication of the data in the credential in such a way that anyone can verify it as long as they have the right software. The credential just needs to contain the same information as would be on the paper credential: the fact that you were vaccinated (and potentially a validity date) plus either your name (so you can show your photo id) or your identity (so the RP can directly match it against you).

This design has a number of nice properties. First, it’s cheap to manufacture: you can do the signing on a smartphone app.[2] It doesn’t need any special machinery from the RP: you can encode the credential as a 2-D bar code which the VP can show on their phone or print out. And they can make as many copies as they want, just like your airline boarding pass.
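To make the signing-and-verifying flow concrete, here is a minimal sketch in Python. All names and fields are hypothetical, and it uses an HMAC purely as a standard-library stand-in for tamper evidence; a real deployment would use a public-key signature scheme (e.g. Ed25519) so that verifiers never hold a secret key:

```python
import base64
import hashlib
import hmac
import json

def issue_credential(payload: dict, key: bytes) -> bytes:
    """Issuer side: serialize the minimal payload and append a tamper-evidence tag.

    The returned token is the sort of thing that would be encoded into the 2-D bar code.
    """
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    tag = base64.urlsafe_b64encode(hmac.new(key, body, hashlib.sha256).digest())
    return body + b"." + tag

def verify_credential(token: bytes, key: bytes):
    """RP side: recompute the tag; return the payload if intact, None otherwise."""
    body, tag = token.split(b".")
    expected = base64.urlsafe_b64encode(hmac.new(key, body, hashlib.sha256).digest())
    if not hmac.compare_digest(tag, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

key = b"issuer-secret"
# Information-minimized payload: vaccinated yes/no, a validity date, and a name
# that the RP matches against a photo ID.
token = issue_credential(
    {"vaccinated": True, "valid_until": "2021-12-31", "name": "A. Person"}, key
)
assert verify_credential(token, key)["vaccinated"] is True
assert verify_credential(token[:-2] + b"xx", key) is None  # tampering is detected
```

Note that with a real signature in place of the HMAC, anyone with the issuer’s public key can run the verification step, which is what lets the RP’s app check credentials locally.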

The major drawback of this design is that it requires special software on the RP side to read the 2D bar code, verify the digital signature, and verify the result. However, this software is relatively straightforward to write and can run on any smartphone, using the camera to read the bar code.[3] So, while this is somewhat of a pain, it’s not that big a deal.

This design also has generally good privacy properties: the information encoded in the credential is (or at least can be) the minimal set needed to validate that you are you and that you are vaccinated, and because the credential can be locally verified, there’s no central authority which learns where you go. Or, at least, it’s not necessary for there to be a central authority: nothing stops the RP from reporting that you were present back to some central location, but that’s just inherent in them getting your name and picture. As far as I know, there’s no way to prevent that, though if the credential just contains your picture rather than an identifier, it’s somewhat better (though the code itself is still unique, so you can be tracked), especially because the RP can always capture your picture anyway.[4]

By this point you should be getting the impression that signed credentials are a pretty good design, and it’s no surprise that this seems to be the design that WHO has in mind for their smart vaccination certificate. They seem to envision encoding quite a bit more information than is strictly required for a “yes/no” decision and then having a “selective disclosure” feature that would expose just the minimal information, encoded in a bar code.

What about Green Pass, Excelsior Pass, etc?

So what are people actually rolling out in the field? The Israeli Green Pass seems to be basically this: a signed credential. It’s got a QR code which you read with an app, and the app then displays the ID number and an expiration date. You then compare the ID number to the user’s ID to verify that they are the right person.

I’ve had a lot of trouble figuring out what the Excelsior Pass does. Based on the NY Excelsior Pass FAQ, which says that “you can print a paper Pass, take a screen shot of your Pass, or save it to the Excelsior Pass Wallet mobile app”, it sounds like it’s the same kind of thing as Green Pass, but that’s hardly definitive. I’ve been trying to get a copy of the specification for this technology and will report back if I manage to learn more.

What About the Blockchain?

Something that keeps coming up here is the use of blockchain for vaccine passports. You’ll notice that my description above doesn’t have anything about the blockchain but, for instance, the Excelsior Pass says it is built on IBM’s digital health pass which is apparently “built on IBM blockchain technology” and says “Protects user data so that it remains private when generating credentials. Blockchain and cryptography provide credentials that are tamper-proof and trusted.” As another example, in this webinar on the Linux Foundation’s COVID-19 Credentials Initiative, Kaliya Young answers a question on blockchain by saying that the root keys for the signers would be stored in the blockchain.

To be honest, I find this all kind of puzzling; as far as I can tell there’s no useful role for the blockchain here. To oversimplify, the major purpose of a blockchain is to arrange for global consensus about some set of facts (for instance, the set of financial transactions that has happened), but that’s not necessary in this case: the structure of a vaccine credential is that some health authority asserts that a given person has been vaccinated. We do need relying parties to know the set of health authorities, but we have existing solutions for that (at a high level, you just build the root keys into the verifying apps).[5] If anyone has more details on why a blockchain[6] is useful for this application I’d be interested in hearing them.

Is this stuff any good?

It’s hard to tell. As discussed above, some of these designs seem superficially sensible, but even if the overall design is sensible, there are lots of ways to implement it incorrectly. It’s quite concerning not to have published specifications for the exact structure of the credentials: without a detailed specification, it’s not possible to determine that a system has the claimed security and privacy properties. The protocols that run the Web and the Internet are open, which not only allows anyone to implement them but also to verify their security and privacy properties. If we’re going to have vaccine passports, they should be open as well.

Updated: 2021-04-02 10:10 AM to point to Mozilla’s previous work on blockchain and identity.

  1. Of course, you could be issued multiple cards, as they’re not transferable. ↩︎
  2. There are some logistical issues around exactly who can sign: you probably don’t want everyone at the clinic to have a signing key, but you can have some central signer. ↩︎
  3. Indeed, in Santa Clara County, where I got vaccinated, your appointment confirmation is a 2D bar code which you print out and they scan onsite. ↩︎
  4. If you’re familiar with TLS, this is going to sound a lot like a digital certificate, and you might wonder whether revocation is a privacy issue the way that it is with WebPKI and OCSP. The answer is more or less “no”. There’s no real reason to revoke individual credentials and so the only real problem is revoking signing certificates. That’s likely to happen quite infrequently, so we can either ignore it, disseminate a certificate revocation list, or have central status checking just for them. ↩︎
  5. Obviously, you won’t be signing every credential with the root keys, but you use those to sign some other keys, building a chain of trust down to keys which you can use to sign the user credentials. ↩︎
  6. Because of the large amount of interest in blockchain technologies, there’s a tendency to try to sprinkle it in places it doesn’t help, especially in the identity space. For that reason, it’s really important to ask what benefits it’s bringing. ↩︎

The post Notes on Implementing Vaccine Passports appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Pyodide Spin Out and 0.17 Release

Mozilla planet - Thu, 22/04/2021 - 17:17

We are happy to announce that Pyodide has become an independent and community-driven project. We are also pleased to announce the 0.17 release for Pyodide with many new features and improvements.

Pyodide consists of the CPython 3.8 interpreter compiled to WebAssembly, which allows Python to run in the browser. Many popular scientific Python packages have also been compiled and made available. In addition, Pyodide can install any Python package with a pure Python wheel from the Python Package Index (PyPI). Pyodide also includes a comprehensive foreign function interface which exposes the ecosystem of Python packages to Javascript and the browser user interface, including the DOM, to Python.

You can try out the latest version of Pyodide in a REPL directly in your browser.

Pyodide is now an independent project

We are happy to announce that Pyodide now has a new home in a separate GitHub organisation and is maintained by a volunteer team of contributors. The project documentation is available online.

Pyodide was originally developed inside Mozilla to allow the use of Python in Iodide, an experimental effort to build an interactive scientific computing environment for the web.  Since its initial release and announcement, Pyodide has attracted a large amount of interest from the community, remains actively developed, and is used in many projects outside of Mozilla.

The core team has approved a transparent governance document  and has a roadmap for future developments. Pyodide also has a Code of Conduct which we expect all contributors and core members to adhere to.

New contributors are welcome to participate in the project development on GitHub. There are many ways to contribute, including code contributions, documentation improvements, adding packages, and using Pyodide for your applications and providing feedback.

The Pyodide 0.17 release

Pyodide 0.17.0 is a major step forward from previous versions. It includes:

  • major maintenance improvements,
  • a thorough redesign of the central APIs, and
  • careful elimination of error leaks and memory leaks
Type translation improvements

The type translations module was significantly reworked in v0.17 with the goal that round trip translations of objects between Python and Javascript produce an identical object.

In other words, Python -> JS -> Python translation and JS -> Python -> JS translation now produce objects that are  equal to the original object. (A couple of exceptions to this remain due to unavoidable design tradeoffs.)

One of Pyodide’s strengths is the foreign function interface between Python and Javascript, which at its best can practically erase the mental overhead of working with two different languages. All I/O must pass through the usual web APIs, so in order for Python code to take advantage of the browser’s strengths, we need to be able to support use cases like generating image data in Python and rendering the data to an HTML5 Canvas, or implementing event handlers in Python.

In the past we found that one of the major pain points in using Pyodide occurs when an object makes a round trip from Python to Javascript and back to Python and comes back different. This violated the expectations of the user and forced inelegant workarounds.

The issues with round trip translations were primarily caused by implicit conversion of Python types to Javascript. The implicit conversions were intended to be convenient, but the system was inflexible and surprising to users. We still implicitly convert strings, numbers, booleans, and None. Most other objects are shared between languages using proxies that allow methods and some operations to be called on the object from the other language. The proxies can be converted to native types with new explicit converter methods called .toJs and to_py.

For instance, given an Array in JavaScript,

window.x = ["a", "b", "c"];

We can access it in Python as,

>>> from js import x  # import x from global Javascript scope
>>> type(x)
<class 'JsProxy'>
>>> x[0]       # can index x directly
'a'
>>> x[1] = 'c' # modify x
>>> x.to_py()  # convert x to a Python list
['a', 'c', 'c']

Several other conversion methods have been added for more complicated use cases. This gives the user much finer control over type conversions than was previously possible.
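The share-don’t-copy semantics behind these proxies can be sketched in plain Python. This is a toy model, not Pyodide’s actual JsProxy implementation: the wrapper shares the underlying object, mutations go straight through, and copying only happens on an explicit request, so wrapping the same object again yields an identical proxy:

```python
class ToyProxy:
    """Toy model of a JsProxy: shares the wrapped object instead of copying it."""
    _cache = {}

    def __new__(cls, obj):
        # One proxy per wrapped object, so round trips produce an identical proxy.
        key = id(obj)
        if key not in cls._cache:
            inst = super().__new__(cls)
            inst._obj = obj
            cls._cache[key] = inst
        return cls._cache[key]

    def __getitem__(self, i):
        return self._obj[i]           # reads go through to the shared object

    def __setitem__(self, i, value):
        self._obj[i] = value          # mutations are visible on the other "side"

    def to_py(self):
        return list(self._obj)        # explicit conversion, like JsProxy.to_py()

shared = ["a", "b", "c"]
p = ToyProxy(shared)
p[1] = "z"                            # mutate through the proxy
assert shared[1] == "z"               # the original object sees the change
assert ToyProxy(shared) is p          # wrapping again yields the same proxy
assert p.to_py() == ["a", "z", "c"]   # a copy is made only on request
```

The design point this illustrates is that implicit copying loses object identity, while sharing through a proxy preserves it and leaves conversion under the user’s control.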

For example, suppose we have a Python list and want to use it as an argument to a Javascript function that expects an Array.  Either the caller or the callee needs to take care of the conversion. This allows us to directly call functions that are unaware of Pyodide.

Here is an example of calling a Javascript function from Python with argument conversion on the Python side:

function jsfunc(array) {
  array.push(2);
  return array.length;
}

pyodide.runPython(`
from js import jsfunc
from pyodide import to_js

def pyfunc():
    mylist = [1, 2, 3]
    jslist = to_js(mylist)
    return jsfunc(jslist) # returns 4
`)

This would work well in the case that jsfunc is a Javascript built-in and pyfunc is part of our codebase. If pyfunc is part of a Python package, we can handle the conversion in Javascript instead:

function jsfunc(pylist) {
  let array = pylist.toJs();
  array.push(2);
  return array.length;
}

See the type translation documentation for more information.

Asyncio support

Another major new feature is the implementation of a Python event loop that schedules coroutines to run on the browser event loop. This makes it possible to use asyncio in Pyodide.

Additionally, it is now possible to await Javascript Promises in Python and to await Python awaitables in Javascript. This allows for seamless interoperability between asyncio in Python and Javascript (though memory management issues may arise in complex use cases).
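The underlying bridging idea, turning a callback-style promise into something you can await, can be sketched in plain Python with asyncio. This is a conceptual model with hypothetical names, not Pyodide’s actual machinery:

```python
import asyncio

def promise_like(loop, value):
    """Toy stand-in for a Javascript Promise: a future resolved by a callback."""
    fut = loop.create_future()
    loop.call_soon(fut.set_result, value)  # "resolve" on the next event-loop tick
    return fut

async def fetch_packages():
    loop = asyncio.get_running_loop()
    # Awaiting the foreign "promise" suspends this coroutine until it resolves,
    # analogous to Python code in Pyodide awaiting a Javascript Promise.
    return await promise_like(loop, ["asciitree", "parso"])

print(asyncio.run(fetch_packages()))  # prints ['asciitree', 'parso']
```

In Pyodide the same shape applies in both directions: a Promise is wrapped so Python can await it, and a Python awaitable is wrapped so Javascript can await it.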

Here is an example where we define a Python async function that awaits the Javascript async function “fetch” and then we await the Python async function from Javascript.

pyodide.runPython(`
async def test():
    from js import fetch
    # Fetch the Pyodide packages list
    r = await fetch("packages.json")
    data = await r.json()
    # return all available packages
    return data.dependencies.object_keys()
`);

let test = pyodide.globals.get("test");
// we can await the test() coroutine from Javascript
result = await test();
console.log(result);
// logs ["asciitree", "parso", "scikit-learn", ...]

Error Handling

Errors can now be thrown in Python and caught in Javascript or thrown in Javascript and caught in Python. Support for this is integrated at the lowest level, so calls between Javascript and C functions behave as expected. The error translation code is generated by C macros which makes implementing and debugging new logic dramatically simpler.
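The translation idea can be sketched in plain Python (hypothetical names, not Pyodide’s implementation): a shim at the language boundary catches errors thrown on one side and re-raises them as the other side’s exception type, preserving the message:

```python
class ToyJsException(Exception):
    """Stand-in for pyodide.JsException: a Javascript error surfaced in Python."""

def call_across_boundary(fn):
    # Boundary shim: anything the "Javascript" side raises is re-raised
    # as the Python-side exception type, keeping the original as the cause.
    try:
        return fn()
    except RuntimeError as err:  # pretend RuntimeError is the foreign error type
        raise ToyJsException(str(err)) from err

def jserror():
    raise RuntimeError("ooops!")  # the "Javascript" function that throws

try:
    call_across_boundary(jserror)
except ToyJsException as e:
    print("caught:", e)           # prints "caught: ooops!"
```

Pyodide does this at the C level in both directions, which is why calls that cross the boundary several times still unwind cleanly.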

For example:

function jserror() {
  throw new Error("ooops!");
}

pyodide.runPython(`
from js import jserror
from pyodide import JsException

try:
    jserror()
except JsException as e:
    print(str(e)) # prints "Error: ooops!"
`);

Emscripten update

Pyodide uses the Emscripten compiler toolchain to compile the CPython 3.8 interpreter and Python packages with C extensions to WebAssembly. In this release we finally completed the migration to the latest version of Emscripten that uses the upstream LLVM backend. This allows us to take advantage of recent improvements to the toolchain, including significant reductions in package size and execution time.

For instance, the SciPy package shrank dramatically from 92 MB to 15 MB, so SciPy is now cached by browsers. This greatly improves the usability of scientific Python packages that depend on scipy, such as scikit-image and scikit-learn. The size of the base Pyodide environment with only the CPython standard library shrank from 8.1 MB to 6.4 MB.

On the performance side, the latest toolchain comes with a 25% to 30% run time improvement. Performance ranges from near-native to 3 to 5 times slower, depending on the benchmark. The benchmarks were created with Firefox 87.

Other changes

Other notable features include:

  • Fixed package loading for Safari v14+ and other Webkit-based browsers
  • Added support for relative URLs in micropip and loadPackage, and improved interaction between micropip and loadPackage
  • Support for implementing Python modules in Javascript

We also did a large amount of maintenance work and code quality improvements:

  • Lots of bug fixes
  • Upstreamed a number of patches to the emscripten compiler toolchain
  • Added systematic error handling to the C code, including automatic adaptors between Javascript errors and CPython errors
  • Added internal consistency checks to detect memory leaks, detect fatal errors, and improve ease of debugging

See the changelog for more details.

Winding down Iodide

Mozilla has made the difficult decision to wind down the Iodide project. While the Iodide site will continue to be available for now (in part to provide a demonstration of Pyodide’s capabilities), we do not recommend using it for important work as it may shut down in the future. Since Iodide’s release, there have been many efforts at creating interactive notebook environments based on Pyodide, which are in active development and offer a similar environment for creating interactive visualizations in the browser using Python.

Next steps for Pyodide

While many issues were addressed in this release, a number of other major steps remain on the roadmap, including:

  • Reducing download sizes and initialization times
  • Improving the performance of Python code in Pyodide
  • Simplifying the package loading system
  • Updating SciPy to a more recent version
  • Improving project sustainability, for instance by seeking synergies with the conda-forge project and its tooling
  • Better support for web workers
  • Better support for synchronous IO (popular for programming education)

For additional information see the project roadmap.


Lots of thanks to:

  • Dexter Chua and Joe Marshall for improving the build setup and making Emscripten migration possible.
  • Hood Chatham for in-depth improvement of the type translation module and adding asyncio support
  • and Romain Casati for improving the Pyodide REPL console.

We are also grateful to all Pyodide contributors.

The post Pyodide Spin Out and 0.17 Release appeared first on Mozilla Hacks - the Web developer blog.


Daniel Stenberg: “So what exactly is curl?”

Mozilla planet - Thu, 22/04/2021 - 16:09

You know that question you can get asked casually by a person you’ve never met before or even by someone you’ve known for a long time but haven’t really talked to about this before. Perhaps at a social event. Perhaps at a family dinner.

– So what do you do?

The implication is of course what you work with. Or as. Perhaps a title.

Software Engineer

In my case I typically start out by saying I’m a software engineer. (And no, I don’t use a title.)

If the person who asked the question is a non-techie, this can then take off in basically any direction: from questions about the Internet, to how their printer acts up sometimes, to finicky details about Wifi installations or their parents’ problems installing anti-virus. In other words: into areas that have virtually nothing to do with software engineering but are related to computers.

If the person is somewhat knowledgeable or interested in technology or computers they know both what software and engineering are. Then the question can get deepened.

What kind of software?

Alternatively they ask for what company I work for, but it usually ends up on the same point anyway, just via this extra step.

I work on curl. (Saying I work for wolfSSL rarely helps.)

(Image: business cards of mine)

So what is curl?

curl is a command line tool used by a small set of people (possibly several thousands or even millions), and the library libcurl that is installed in billions of places.

I often try to compare libcurl with how companies build for example cars out of many components from different manufacturers and companies. They use different pieces from many separate sources put together into a single machine to produce the end product.

libcurl is like one of those little components that a car manufacturer needs. It isn’t the only choice, but it is a well known, well tested and familiar one. It’s a safe choice.

Internet what?

I’ve realized that lots of people, even many with experience, knowledge or even jobs in the IT industry, don’t know what an Internet transfer is. Describing curl as doing such transfers doesn’t really help in those cases.

An internet transfer is the bridge between “the cloud” and your devices or applications. curl is such a bridge.

Everything wants Internet these days

In general, anything today that has power goes towards becoming networked. Everything that can, will connect to the Internet sooner or later. Maybe not always because it’s a good idea, but because it gives your thing a (perceived) advantage over your competitors.

Things that a while ago you wouldn’t dream would do that, now do Internet transfers. Toothbrushes, ovens, washing machines, etc.

If you want to build a new device or application today and you want it to be successful and more popular than your competitors, you will probably have to make it Internet-connected.

You need a “bridge”.

Making things today is like doing a puzzle

Everyone who makes devices or applications today has a wide variety of different components and pieces of the big “puzzle” to select from.

You can opt to write many pieces yourself, but virtually nobody today creates anything digital entirely on their own. We lean on others. We stand on others’ shoulders. In particular, open source software has grown up to provide a vast ocean of puzzle pieces to use and leverage.

One of the little pieces in your device puzzle is probably Internet transfers, because you want your thing to get updates, upload telemetry and who knows what else.

The picture then needs a piece inserted in the right spot to get complete. The Internet transfers piece. That piece can be curl. We’ve made curl to be a good such piece.

(Image: this perfect picture is just missing one little piece…)

Relying on pieces provided by others

A lot has been said about the fact that companies, organizations and entire ecosystems rely on pieces and components written, maintained and provided by someone else. Some of them are open source components written by developers in their spare time, yet used by thousands of companies shipping commercial products.

curl is one such component. It’s not “just” a spare time project anymore of course, but the point remains. We estimate that curl runs in some ten billion installations these days, so quite a lot of current Internet infrastructure uses our little puzzle piece in their pictures.

(Image: modified version of the original xkcd 2347 comic)

So you’re rich

I rarely get to this point in any conversation because I would have already bored my company into a coma by now.

The concept of giving away a component like this as open source under a liberal license is a very strange one to most people. Maybe also because I say that I work on this and that I created it. But I’m not at all the only contributor, and we wouldn’t have gotten to this point without the help of several hundred other developers.

“- No, I give it away for free. Yes really, entirely and totally free for anyone and everyone to use. Correct, even the largest and richest mega-corporations of the world.”

The ten billion installations work as marketing: they help companies understand that curl is a solid puzzle piece, so more of them will use it, and some of those will discover they need help or assistance and purchase support for curl from me!

I’m not rich, but I do perfectly fine. I consider myself very lucky and fortunate to get to work on curl for a living.

A curl world

There are about 5 billion Internet using humans in the world. There are about 10 billion curl installations.

The puzzle piece curl is there in the middle.

This is how they’re connected. This is the curl world map 2021.

Or put briefly

libcurl is a library for doing transfers specified with a URL, using one of the supported protocols. It is fast, reliable, very portable, well documented and feature rich. A de-facto standard API available for everyone.


The original island image is by Julius Silver from Pixabay. xkcd strip edits were done by @tsjost.

Categorieën: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla reacts to publication of EU’s draft regulation on AI

Mozilla planet - do, 22/04/2021 - 09:40

Today, the European Commission published its draft for a regulatory framework for artificial intelligence (AI). The proposal lays out comprehensive new rules for AI systems deployed in the EU. Mozilla welcomes the initiative to rein in the potential harms caused by AI, but much remains to be clarified.

Reacting to the European Commission’s proposal, Raegan MacDonald, Mozilla’s Director of Global Public Policy, said: 

“AI is a transformational technology that has the potential to create value and enable progress in so many ways, but we cannot lose sight of the real harms that can come if we fail to protect the rights and safety of people living in the EU. Mozilla is committed to ensuring that AI is trustworthy, that it helps people instead of harming them. The European Commission’s push to set ground rules is a step in the right direction and it is good to see that several of our recommendations to the Commission are reflected in the proposal – but there is more work to be done to ensure these principles can be meaningfully implemented, as some of the safeguards and red lines envisioned in the text leave a lot to be desired.

Systemic transparency is a critical enabler of accountability, which is crucial to advancing more trustworthy AI. We are therefore encouraged by the introduction of user-facing transparency obligations – for example for chatbots or so-called deepfakes – as well as a public register for high-risk AI systems in the European Commission’s proposal. But as always, details matter, and it will be important what information exactly this database will encompass. We look forward to contributing to this important debate.”


The post Mozilla reacts to publication of EU’s draft regulation on AI appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Cameron Kaiser: Coloured iMacs? We got your coloured iMacs right here

Mozilla planet - wo, 21/04/2021 - 19:27
And you don't even need to wait until May. Besides being the best colour Apple ever offered (a tray-loading Strawberry, which is nicer than the current M1 iMac Pink), this iMac G3 also has a 600MHz Sonnet HARMONi in it, so it has a faster CPU and FireWire too. Take that, non-upgradable Apple Silicon. It runs Jaguar with OmniWeb and Crypto Ancienne for web browsing.

Plus, these coloured iMacs can build and run TenFourFox: Chris T proved it on his 400MHz G3. It took 34 hours to compile from source. I always did like slow-cooked meals better.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 387

Mozilla planet - wo, 21/04/2021 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No papers/research projects this week.

Official

Newsletters

Project/Tooling Updates

Observations/Thoughts

Rust Walkthroughs

Miscellaneous

Crate of the Week

This week's crate is deltoid, another crate for delta-compressing Rust data structures.

Thanks to Joey Ezechiëls for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No calls for participation this week

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

292 pull requests were merged in the last week

Rust Compiler Performance Triage

Another quiet week with very small changes to compiler performance.

Triage done by @rylev. Revision range: 5258a74..6df26f

1 Regressions, 0 Improvements, 1 Mixed

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online

Europe

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Grover GmbH

Massa Labs


Subspace Labs



Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

We feel that Rust is now ready to join C as a practical language for implementing the [Linux] kernel. It can help us reduce the number of potential bugs and security vulnerabilities in privileged code while playing nicely with the core kernel and preserving its performance characteristics.

Wedson Almeida Filho on the Google Security Blog

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Categorieën: Mozilla-nl planet

The Talospace Project: Firefox 88 on POWER

Mozilla planet - wo, 21/04/2021 - 05:01
Firefox 88 is out. In addition to a bunch of new CSS properties, JavaScript is now supported in PDF files even within Firefox's own viewer, meaning there is no escape, and FTP is disabled, meaning you will need to use 78ESR (though you get two more weeks of ESR as a reprieve, since Firefox 89 has been delayed to allow UI code to further settle). I've long pondered doing a generic "cURL extension" that would reenable all sorts of protocols through a shim to either curl or libcurl; maybe it's time for it.

Fortunately Fx88 builds uneventfully as usual on OpenPOWER, though our PGO-LTO patches (apply to the tree with patch -p1) required a slight tweak to nsTerminator.cpp. Debug and optimized .mozconfigs are unchanged.

Also, an early milestone in the Firefox JavaScript JIT for OpenPOWER: Justin Hibbits merged my earlier jitpower work to a later tree (right now based on Firefox 86) and filled in the gaps with code from TenFourFox, and after some polishing up I did over the weekend, a JIT-enabled JavaScript shell now compiles on Fedora ppc64le. However, it immediately asserts, probably due to some missing definitions for register sets, and I'm sure there are many other assertions and lurking bugs to be fixed, but this is much further along than before. The fork is on Github for others who wish to contribute; I will probably decommission the old jitpower project soon since it is now superfluous. More to come.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Mark Surman joins the Mozilla Foundation Board of Directors

Mozilla planet - di, 20/04/2021 - 16:01

In early 2020, I outlined our efforts to expand Mozilla’s boards. Over the past year, we’ve added three new external Mozilla board members: Navrina Singh and Wambui Kinya to the Mozilla Foundation board and Laura Chambers to the Mozilla Corporation board.

Today, I’m excited to welcome Mark Surman, Executive Director of the Mozilla Foundation, to the Foundation board.

As I said to staff prior to his appointment, when I think about who should hold the keys to Mozilla, Mark is high on that list. Mark has unique qualifications in terms of the overall direction of Mozilla, how our organizations interoperate, and if and how we create programs, structures or organizations. Mark is joining the Mozilla Foundation board as an individual based on these qualifications; we have not made the decision that the Executive Director is automatically a member of the Board.

Mark has demonstrated his commitment to Mozilla as a whole, over and over. The whole of Mozilla figures into his strategic thinking. He’s got a good sense of how Mozilla Foundation and Mozilla Corporation can magnify or reduce the effectiveness of Mozilla overall. Mark has a hunger for Mozilla to grow in impact. He has demonstrated an ability to think big, and to dive into the work that is in front of us today.

For those of you who don’t know Mark already, he brings over two decades of experience leading projects and organizations focused on the public interest side of the internet. In the 12 years since Mark joined Mozilla, he has built the Foundation into a leading philanthropic and advocacy voice championing the health of the internet. Prior to Mozilla, Mark spent 15 years working on everything from a non-profit internet provider to an early open source content management system to a global network of community-run cybercafes. Currently, Mark spends most of his time on Mozilla’s efforts to promote trustworthy AI in the tech industry, a major focus of the Foundation’s current efforts.

Please join me in welcoming Mark Surman to the Mozilla Foundation Board of Directors.

You can read Mark’s message about why he’s joining Mozilla here.

PS. As always, we continue to look for new members for both boards, with the goal of adding the skills, networks and diversity Mozilla will need to succeed in the future.


The post Mark Surman joins the Mozilla Foundation Board of Directors appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Wearing more (Mozilla) hats

Mozilla planet - di, 20/04/2021 - 16:00

Mark Surman

For many years now — and well before I sought out the job I have today — I thought: the world needs more organizations like Mozilla. Given the state of the internet, it needs them now. And, it will likely need them for a very long time to come.

Why? In part because the internet was founded with public benefit in mind. And, as the Mozilla Manifesto declared back in 2007, “… (m)agnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.”

Today, this sort of ‘time and attention’ is more important — and urgent — than ever. We live in an era where the biggest companies in the world are internet companies. Much of what they have created is good, even delightful. Yet, as the last few years have shown, leaving things to commercial actors alone can leave the internet — and society — in a bit of a mess. We need organizations like Mozilla — and many more like it — if we are to find our way out of this mess. And we need these organizations to think big!

It’s for this reason that I’m excited to add another ‘hat’ to my work: I am joining the Mozilla Foundation board today. This is something I will take on in addition to my role as executive director.

Why am I assuming this additional role? I believe Mozilla can play a bigger role in the world than it does today. And, I also believe we can inspire and support the growth of more organizations that share Mozilla’s commitment to the public benefit side of the internet. Wearing a board member hat — and working with other Foundation and Corporation board members — I will be in a better position to turn more of my attention to Mozilla’s long term impact and sustainability.

What does this mean in practice? It means spending some of my time on big picture ‘Pan Mozilla’ questions. How can Mozilla connect to more startups, developers, designers and activists who are trying to build a better, more humane internet? What might Mozilla develop or do to support these people? How can we work with policy makers who are trying to write regulations to ensure the internet benefits the public interest? And, how do we shift our attention and resources outside of the US and Europe, where we have traditionally focused? While I don’t have answers to all these questions, I do know we urgently need to ask them — and that we need to do so in an expansive way that goes beyond the current scope of our operating organizations. That’s something I’ll be well positioned to do wearing my new board member hat.

Of course, I still have much to do wearing my executive director hat. We set out a few years ago to evolve the Foundation into a ‘movement building arm’ for Mozilla. Concretely, this has meant building up teams with skills in philanthropy and advocacy who can rally more people around the cause of a healthy internet. And, it has meant picking a topic to focus on: trustworthy AI. Our movement building approach — and our trustworthy AI agenda — is getting traction. Yet, there is still a way to go to unlock the kind of sustained action and impact that we want. Leading the day to day side of this work remains my main focus at Mozilla.

As I said at the start of this post: I think the world will need organizations like Mozilla for a long time to come. As all corners of our lives become digital, we will increasingly need to stand firm for public interest principles like keeping the internet open and accessible to all. While we can all do this as individuals, we also need strong, long lasting organizations that can take this stand in many places and over many decades. Whatever hat I’m wearing, I continue to be deeply committed to building Mozilla into a vast, diverse and sustainable institution to do exactly this.

The post Wearing more (Mozilla) hats appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Karl Dubost: Get Ready For Three Digits User Agent Strings

Mozilla planet - di, 20/04/2021 - 14:32

In 2022, Firefox and Chrome will reach a version number with three digits: 100. It's time to get ready and extensively test your code, so it doesn't return null or, worse, 10 instead of 100.


Some contexts

The browser user agent string is used in many circumstances: on the server side via the User-Agent HTTP header, and on the client side via navigator.userAgent. Browsers lie about it. Web app and website detection logic does not cover all cases. So browsers have to modify the user agent string on a site-by-site basis.

Browsers Release Calendar

According to the Firefox release calendar, during the first quarter of 2022 (probably February), Firefox will reach version 100.

And the Chrome release calendar currently lists March 29, 2022 for Chrome 100.

What is the Mozilla Webcompat Team doing?

Dennis Schubert started testing JavaScript libraries, but these tests only cover the libraries that are up to date. And as we know, the Web is a legacy machine full of history.

The webcompat team will probably automatically test the top 1000 websites. But this is very rudimentary. It will not cover everything. Sites always break in strange ways.

What Can You Do To Help?

Browse the Web with a 100 UA string
  1. Change the user agent string of your favorite browser. For example, if the string is Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0, change it to be Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
  2. If you notice something that is breaking because of the UA string, file a report on webcompat. Do not forget to check that it is working with the normal UA string.
Automatic tests for your code

If your web app has a JavaScript Test suite, add a profile with a browser having 100 for its version number and check if it breaks. Test both Firefox and Chrome (mobile and desktop) because the libraries have different code paths depending on the user agent. Watch out for code like:

const ua_string = "Firefox/100.0";
ua_string.match(/Firefox\/(\d\d)/);   // ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d{2})/);  // ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d\d)\./); // null

Compare version numbers as integer not string

Compare integers, not strings, when you have decided on a minimum browser version to support, because

"80" < "99" // true "80" < "100" // false parseInt("80", 10) < parseInt("99", 10) // true parseInt("80", 10) < parseInt("100", 10) // true Comments

If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.


Categorieën: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla Mornings on the DSA: Setting the standard for third-party platform auditing

Mozilla planet - di, 20/04/2021 - 09:48

On 11 May, Mozilla will host the next instalment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

This instalment will focus on the DSA’s provisions on third-party platform auditing, one of the stand-out features of its next-generation regulatory approach. We’re bringing together a panel of experts to unpack the provisions’ strengths and shortcomings; and to provide recommendations for how the DSA can build a standard-setting auditing regime for Very Large Online Platforms.


Alexandra Geese MEP
IMCO DSA shadow rapporteur
Group of the Greens/European Free Alliance

Deborah Raji
Fellow | Research Collaborator
Mozilla Foundation | Algorithmic Justice League

Dr Ben Wagner
Assistant Professor, Faculty of Technology, Policy and Management
TU Delft  

With opening remarks by Owen Bennett
Senior Policy Manager
Mozilla Corporation

Moderated by Jennifer Baker
EU technology journalist


Logistical details

Tuesday 11 May, 14:00 – 15:00 CEST

Zoom Webinar

Register *here*

Webinar login details to be shared on day of event

The post Mozilla Mornings on the DSA: Setting the standard for third-party platform auditing appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Never too late for Firefox 88

Mozilla planet - ma, 19/04/2021 - 17:18

April is upon us, and we have a most timely release for you — Firefox 88. In this release you will find a bunch of nice CSS additions including :user-valid and :user-invalid support and image-set() support, support for regular expression match indices, removal of FTP protocol support for enhanced security, and more!

This blog post provides merely a set of highlights; for all the details, check out the following:

:user-valid and :user-invalid

There are a large number of HTML form-related pseudo-classes that allow us to specify styles for various data validity states, as you’ll see in our UI pseudo-classes tutorial. Firefox 88 introduces two more — :user-valid and :user-invalid.

You might be thinking “we already have :valid and :invalid for styling forms containing valid or invalid data — what’s the difference here?”

:user-valid and :user-invalid are similar, but have been designed with better user experience in mind. They effectively do the same thing — matching a form input that contains valid or invalid data — but :user-valid and :user-invalid only start matching after the user has stopped focusing on the element (e.g. by tabbing to the next input). This is a subtle but useful change, which we will now demonstrate.

Take our valid-invalid.html example. This uses the following CSS to provide clear indicators as to which fields contain valid and invalid data:

input:invalid {
  border: 2px solid red;
}

input:invalid + span::before {
  content: '✖';
  color: red;
}

input:valid + span::before {
  content: '✓';
  color: green;
}

The problem with this is shown when you try to enter data into the “E-mail address” field — as soon as you start typing an email address into the field the invalid styling kicks in, and remains right up until the point where the entered text constitutes a valid e-mail address. This experience can be a bit jarring, making the user think they are doing something wrong when they aren’t.

Now consider our user-valid-invalid.html example. This includes nearly the same CSS, except that it uses the newer :user-valid and :user-invalid pseudo-classes:

input:user-invalid {
  border: 2px solid red;
}

input:user-invalid + span::before {
  content: '✖';
  color: red;
}

input:user-valid + span::before {
  content: '✓';
  color: green;
}

In this example the valid/invalid styling only kicks in when the user has entered their value and removed focus from the input, giving them a chance to enter their complete value before receiving feedback. Much better!

Note: Prior to Firefox 88, the same effect could be achieved using the proprietary :-moz-ui-invalid and :-moz-ui-valid pseudo-classes.

image-set() support

The image-set() function provides a mechanism in CSS to allow the browser to pick the most suitable image for the device’s resolution from a list of options, in a similar manner to the HTML srcset attribute. For example, the following can be used to provide multiple background-images to choose from:

div {
  background-image: image-set(
    url("small-balloons.jpg") 1x,
    url("large-balloons.jpg") 2x);
}

You can also use image-set() as a value for the content and cursor properties. So for example, you could provide multiple resolutions for generated content:

h2::before {
  content: image-set(
    url("small-icon.jpg") 1x,
    url("large-icon.jpg") 2x);
}

or custom cursors:

div {
  cursor: image-set(
    url("custom-cursor-small.png") 1x,
    url("custom-cursor-large.png") 2x),
    auto;
}

outline now follows border-radius shape

The outline CSS property has been updated so that it now follows the outline shape created by border-radius. It is really nice to see a fix included in Firefox for this long standing problem. As part of this work the non-standard -moz-outline-radius property has been removed.

RegExp match indices

Firefox 88 supports the match indices feature of regular expressions, which makes an indices property available containing an array that stores the start and end positions of each matched capture group. This functionality is enabled using the d flag.

There is also a corresponding hasIndices boolean property that allows you to check whether a regex has this mode enabled.

So for example:

const regex1 = new RegExp('foo', 'd');
regex1.hasIndices // true
const test = regex1.exec('foo bar');
test         // [ "foo" ]
test.indices // [ [ 0, 3 ] ]
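As a further illustration (my own, not from the post), the index pair of a capture group can be used to point at the exact source span of a match, for example to underline it in an error message:

```javascript
// indices holds [start, end] pairs: entry 0 for the whole match,
// entry 1 for the first capture group, and so on.
const re = /version=(\d+)/d;
const input = "config: version=88;";
const m = re.exec(input);

const [start, end] = m.indices[1]; // span of the (\d+) group
input.slice(start, end);           // "88"
```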

For more useful information, see our RegExp.prototype.exec() page, and RegExp match indices on the V8 dev blog.

FTP support disabled

FTP support has been disabled from Firefox 88 onwards, and its full removal is (currently) planned for Firefox version 90. Addressing this security risk reduces the likelihood of an attack while also removing support for a non-encrypted protocol.

Complementing this change, the extension setting browserSettings.ftpProtocolEnabled has been made read-only, and web extensions can now register themselves as protocol handlers for FTP.

The post Never too late for Firefox 88 appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Changes to themeable areas of Firefox in version 89

Mozilla planet - ma, 19/04/2021 - 17:08

Firefox’s visual appearance will be updated in version 89 to provide a cleaner, modernized interface. Since some of the changes will affect themeable areas of the browser, we wanted to give theme artists a preview of what to expect as the appearance of their themes may change when applied to version 89.

Tabs appearance
  • The property tab_background_separator, which controls the appearance of the vertical lines that separate tabs, will no longer be supported.
  • Currently, the tab_line property can set the color of an active tab’s thick top border. In Firefox 89, this property will set a color for all borders of an active tab, and the borders will be thinner.

URL and toolbar
  • The property toolbar_field_separator, which controls the color of the vertical line that separates the URL bar from the three-dot “meatball menu,” will no longer be supported.

  • The property toolbar_vertical_separator, which controls the vertical lines near the three-line “hamburger menu” and the line separating items in the bookmarks toolbar, will no longer appear next to the hamburger menu. You can still use this property to control the separators in the bookmarks toolbar.  (Note: users will need to enable the separator by right clicking on the bookmarks toolbar and selecting “Add Separator.”)

You can use the Nightly pre-release channel to start testing how your themes will look with Firefox 89. If you’d like to get more involved testing other changes planned for this release, please check out our foxfooding campaign, which runs until May 3, 2021.

Firefox 89 is currently set to be available on the Beta pre-release channel by April 23, 2021, and to be released on June 1, 2021.

As always, please post on our community forum if there are any questions.

The post Changes to themeable areas of Firefox in version 89 appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Daniel Stenberg: Mars 2020 Helicopter Contributor

Mozilla planet - ma, 19/04/2021 - 16:40

Friends of mine know that I’ve tried for a long time to get confirmation that curl is used in space. We’ve believed it to be likely but I’ve wanted to get a clear confirmation that this is indeed the fact.

Today GitHub posted their article about open source in the Mars mission, and they now provide a badge on their site for contributors of projects that are used in that mission.

I have one of those badges now. Only a few others of the current 879 recorded curl authors got it, which seems to be due to the mission using a very old curl release (curl 7.19, released in September 2008), and because not all contributors could be matched with emails, or the authors didn’t have their emails verified on GitHub, etc.

According to that GitHub blog post, we are “almost 12,000” developers who got it.

While this strictly speaking doesn’t say that curl is actually used in space, I think it can probably be assumed to be.

Here’s the interplanetary curl development displayed in a single graph:

See also: screenshotted curl credits and curl supports NASA.


Image by Aynur Zakirov from Pixabay

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Firefox 88 combats window.name privacy abuses

Mozilla planet - ma, 19/04/2021 - 14:55

We are pleased to announce that Firefox 88 is introducing a new protection against privacy leaks on the web. Under new limitations imposed by Firefox, trackers are no longer able to abuse the window.name property to track users across websites.

Since the late 1990s, web browsers have made the window.name property available to web pages as a place to store data. Unfortunately, data stored in window.name has been allowed by standard browser rules to leak between websites, enabling trackers to identify users or snoop on their browsing history. To close this leak, Firefox now confines the window.name property to the website that created it.

Leaking data through window.name

The name of a window allows it to be targeted by hyperlinks or forms that want to navigate that window. The window.name property, available to any website you visit, is a “bucket” for storing any data the website may choose to place there. Historically, the data stored in window.name has been exempt from the same-origin policy enforced by browsers that prohibits some forms of data sharing between websites. Unfortunately, this meant that data stored in the window.name property was allowed by all major browsers to persist across page visits in the same tab, allowing different websites you visit to share data about you.

For example, suppose a page on one website set window.name to a value identifying you. Traditionally, this information would persist even after you clicked on a link and navigated to a different website, so a page on that second website would be able to read the information without your knowledge or consent:

window.name persists across the cross-origin navigation.

Tracking companies have been abusing this property to leak information, and have effectively turned it into a communication channel for transporting data between websites. Worse, malicious sites have been able to observe the content of window.name to gather private user data that was inadvertently leaked by another website.

Clearing window.name to prevent leakage

To prevent the potential privacy leakage of window.name, Firefox will now clear the window.name property when you navigate between websites. Here’s how it looks:

Firefox 88 clearing window.name after cross-origin navigation.

Firefox will attempt to identify likely non-harmful usage of window.name and avoid clearing the property in such cases. Specifically, Firefox only clears window.name if the link being clicked does not open a pop-up window.

To avoid unnecessary breakage, if a user navigates back to a previous website, Firefox now restores the window.name property to its previous value for that website. Together, these dual rules for clearing and restoring window.name data effectively confine that data to the website where it was originally created, similar to how Firefox’s Total Cookie Protection confines cookies to the website where they were created. This confinement is essential for preventing malicious sites from abusing window.name to gather users’ personal data.
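The clearing-and-restoring policy described above can be modelled roughly as follows. This is only an illustrative sketch of the rules as described, not Firefox's actual implementation, and all names are made up:

```javascript
// Decide what window.name should become on a navigation: keep it for
// same-site navigations and pop-up flows; otherwise save it for the old
// site and restore (or clear) it for the new one.
function nextWindowName(current, fromOrigin, toOrigin, opensPopup, saved) {
  if (fromOrigin === toOrigin || opensPopup) return current;
  saved.set(fromOrigin, current);   // remember for back-navigation
  return saved.get(toOrigin) ?? ""; // restore previous value, or clear
}

const saved = new Map();
// A tracker's value is cleared when leaving the site...
let name = nextWindowName("track-123", "https://a.example", "https://b.example", false, saved);
name; // ""
// ...but restored when the user navigates back.
name = nextWindowName(name, "https://b.example", "https://a.example", false, saved);
name; // "track-123"
```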

Firefox isn’t alone in making this change: web developers relying on window.name should note that Safari is also clearing the property, and Chromium-based browsers are planning to do so. Going forward, developers should expect clearing to be the new standard way that browsers handle window.name.

If you are a Firefox user, you don’t have to do anything to benefit from this new privacy protection. As soon as your Firefox auto-updates to version 88, the new window.name data confinement will be in effect for every website you visit. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect your privacy.

The post Firefox 88 combats privacy abuses appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Daniel Stenberg: curl those funny IPv4 addresses

Mozilla planet - ma, 19/04/2021 - 08:39

Everyone knows that on most systems you can specify IPv4 addresses as just 4 decimal numbers separated with periods (dots). For example:

192.168.0.1

Useful when you, for example, want to ping your local wifi router and similar: “ping 192.168.0.1”

Other bases

The IPv4 string is usually parsed by the inet_addr() function or at times it is passed straight into the name resolver function like getaddrinfo().

This address parser supports more ways to specify the address. You can for example specify each number using either octal or hexadecimal.

Write the numbers with zero-prefixes to have them interpreted as octal numbers:

0300.0250.0.01


Write them with 0x-prefixes to specify them in hexadecimal:

0xc0.0xa8.0x0.0x1


You will find that ping can deal with all of these.

As a 32 bit number

An IPv4 address is a 32 bit number that when written as 4 separate numbers are split in 4 parts with 8 bits represented in each number. Each separate number in “a.b.c.d” is 8 bits that combined make up the whole 32 bits. Sometimes the four parts are called quads.
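The quad-to-32-bit relationship can be checked with a few lines of JavaScript (the address here is just an illustrative one):

```javascript
// Each quad occupies one 8-bit field of the 32-bit address.
const [a, b, c, d] = [192, 168, 0, 1];
const addr = a * 2 ** 24 + b * 2 ** 16 + c * 2 ** 8 + d;

addr;              // 3232235521
addr.toString(16); // "c0a80001"
```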

The typical IPv4 address parser however handles more ways than just the 4-way split. It can also deal with the address when specified as one, two or three numbers (separated with dots unless it's just one).

If given as a single number, it treats it as a single unsigned 32 bit number. The top-most eight bits store what we “normally” write as the first number and so on. The address shown above, if we keep it in hexadecimal, would then become:

0xc0a80001


And you can of course write it in octal as well:


and plain old decimal:


As two numbers

If you instead write the IP address as two numbers with a dot in between, the first number is assumed to be 8 bits and the next one a 24 bit one. And you can keep on mixing the bases as you like. The same address again, now in a hexadecimal + octal combo: 0x7f.01

This allows for some fun shortcuts when the 24 bit number contains a lot of zeroes. For example, you can shorten “127.0.0.1” to just “127.1” and it still works and is perfectly legal.

As three numbers

Now the parts are split up in bits like this: 8.8.16. Here’s the example address again in octal, hex and decimal: 0177.0.01, 0x7f.0x0.0x1 and 127.0.1


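A sketch of the one-, two- and three-number forms, again through Python's socket.inet_aton() wrapper around the system parser (illustrative values, all naming 127.0.0.1):

```python
import socket

# inet_aton() also accepts fewer than four numbers:
#   one number:    the whole 32 bits
#   two numbers:   8 bits . 24 bits
#   three numbers: 8 bits . 8 bits . 16 bits
for addr in ("2130706433", "0x7f000001", "127.1", "127.0.1"):
    print(addr, "->", socket.inet_ntoa(socket.inet_aton(addr)))
```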
Bypassing filters

All of these versions shown above work with most tools that accept IPv4 addresses and sometimes you can bypass filters and protection systems by switching to another format so that you don’t match the filters. It has previously caused problems in node and perl packages and I’m guessing numerous others. It’s a feature that is often forgotten, ignored or just not known.
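
As a toy illustration of the problem (the blocklist and function names here are invented for this sketch, not taken from any real package):

```python
import socket

BLOCKED = {"127.0.0.1"}  # hypothetical "no loopback" filter

def naive_is_blocked(host: str) -> bool:
    # Plain string comparison: trivially bypassed by alternate notations.
    return host in BLOCKED

def robust_is_blocked(host: str) -> bool:
    # Normalize through the system parser before comparing.
    try:
        canonical = socket.inet_ntoa(socket.inet_aton(host))
    except OSError:
        return False
    return canonical in BLOCKED

print(naive_is_blocked("0x7f.1"))   # False: the filter is bypassed
print(robust_is_blocked("0x7f.1"))  # True
```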

This raises the question of why such very liberal parsing was once added and allowed, but I’ve not been able to figure that out – maybe it has to do with how it matches class A/B/C networks. The support for this syntax seems to have been introduced with the inet_aton() function in the 4.2BSD release in 1983.

IPv4 in URLs

URLs have a host name in them and it can be specified as an IPv4 address.

RFC 3986

The RFC 3986 URL specification’s section 3.2.2 says an IPv4 address must be specified as:

dec-octet "." dec-octet "." dec-octet "." dec-octet

… but in reality very few clients that accept such URLs actually restrict the addresses to that format. I believe this is mostly because many programs pass the host name on to a name resolving function that itself handles the other formats.


The Host Parsing section of the WHATWG URL spec allows the many variations of IPv4 addresses. (If you’re anything like me, you might need to read that spec section about three times or so before that’s clear.)

Since the browsers all obey this spec, it’s no surprise that they all allow these kinds of IP numbers in the URLs they handle.

curl before

curl has traditionally been in the camp that mostly accidentally, somewhat, supported the “flexible” IPv4 address formats. It did this because, if you built curl to use the system resolver functions (which it does by default), those system functions would handle these formats for curl. If curl was built to use c-ares (one of curl’s optional name resolver backends) instead, using such address formats just made the transfer fail.

The drawback with letting the system resolver functions deal with these formats is that curl itself then works with the originally formatted host name, so things like HTTPS server certificate verification and the Host: header sent in HTTP don’t really work the way you’d want.

curl now

Starting in curl 7.77.0 (since this commit), curl will “natively” understand these IPv4 formats and normalize them itself.

There are several benefits of doing this ourselves:

  1. Applications using the URL API will get the normalized host name out.
  2. curl will work the same independently of the selected name resolver backend.
  3. HTTPS works fine even when the address is using other formats.
  4. HTTP virtual host headers get the “correct” formatted host name.

Fun example command line to see if it works:

curl -L 16843009

16843009 gets normalized to 1.1.1.1, which then gets used as http://1.1.1.1/ (because curl will assume HTTP for this URL when no scheme is used), which returns a 301 redirect over to https://1.1.1.1/, which -L makes curl follow…
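
You can double-check that normalization yourself with the system parser (a one-liner sketch via Python's socket module):

```python
import socket

# 16843009 == 0x01010101, i.e. the four bytes 1.1.1.1
print(socket.inet_ntoa(socket.inet_aton("16843009")))  # 1.1.1.1
```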


Image from Pixabay

Categorieën: Mozilla-nl planet

Niko Matsakis: Async Vision Doc Writing Sessions VI

Mozilla planet - ma, 19/04/2021 - 06:00

Ryan Levick and I are going to be hosting more Async Vision Doc Writing Sessions this week. We’re not organized enough to have assigned topics yet, so I’m just going to post the dates/times and we’ll be tweeting about the particular topics as we go.

When            Who
Wed at 07:00 ET Ryan
Wed at 15:00 ET Niko
Fri at 07:00 ET Ryan
Fri at 14:00 ET Niko

If you’ve joined before, we’ll be re-using the same Zoom link. If you haven’t joined, then send a private message to one of us and we’ll share the link. Hope to see you there!

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR32 available, plus a two-week reprieve

Mozilla planet - za, 17/04/2021 - 04:19
TenFourFox Feature Parity Release 32 final is now available for testing (downloads, hashes, release notes). This adds an additional entry to the ATSUI font blocklist and completes the outstanding security patches. Assuming no issues, it will go live as the final FPR on or about April 19.

Mozilla is delaying Firefox 89 by two weeks to allow additional time to polish up the UI changes in that version. This will thus push all future release dates back by two weeks as well; the next ESR release, and the first Security Parity Release parallel with it, will instead be scheduled for June 1. Aligning with this, the testing version of FPR32 SPR1 will come out the weekend before June 1, and the final official build of TenFourFox will also move back two weeks, from September 7 to September 21. After that you'll have to DIY, but fortunately it already looks like people are rising to the challenge of building the browser themselves: I have been pointed to an installer which neatly wraps up all the necessary build prerequisites, provides a guided Automator workflow and won't interfere with any existing installation of MacPorts. I don't have anything to do with this effort and can't attest to or advise on its use, but it's nice to see it exists, so download it from Macintosh Garden if you want to try it out. Remember, compilation speed on G4 (and, shudder, G3) systems can be substantially slower than on a G5, especially without multiple CPUs. Given that this Quad G5 running full tilt (three cores dedicated to compiling) with a full 16GB of RAM takes about three and a half hours to kick out a single-architecture build, you should plan accordingly for longer times on lesser systems.

I have already started clearing issues from Github I don't intend to address. The remaining issues may not necessarily be addressed either, and definitely won't be during the security parity period, but they are considerations for things I might need later. Don't add to this list: I will mark new issues without patches or PRs as invalid. I will also be working on revised documentation for Tenderapp and the main site so people are aware of the forthcoming plan; those changes will be posted sometime this coming week.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: QUIC and HTTP/3 Support now in Firefox Nightly and Beta

Mozilla planet - vr, 16/04/2021 - 22:04

tl;dr: Support for QUIC and HTTP/3 is now enabled by default in Firefox Nightly and Firefox Beta. We are planning to start the rollout with Firefox Stable Release 88. HTTP/3 will be available by default by the end of May.

What is HTTP/3?

HTTP/3 is a new version of HTTP (the protocol that powers the Web) that is based on QUIC. HTTP/3 has three main performance improvements over HTTP/2:

  • Because it is based on UDP it takes less time to connect;
  • It does not have head of line blocking, where delays in delivering packets cause an entire connection to be delayed; and
  • It is better able to detect and repair packet loss.

QUIC also provides connection migration and other features that should improve performance and reliability. For more on QUIC, see this excellent blog post from Cloudflare.

How to use it?

Firefox Nightly and Firefox Beta will automatically try to use HTTP/3 if it is offered by the web server (for instance, Google or Facebook). Web servers can indicate support by using the Alt-Svc response header or by advertising HTTP/3 support with an HTTPS DNS record. Both the client and server must support the same QUIC and HTTP/3 draft version to connect with each other. For example, Firefox currently supports drafts 27 to 32 of the specification, so the server must report support for one of these versions (e.g., “h3-32”) in the Alt-Svc header or HTTPS record for Firefox to try QUIC and HTTP/3 with that server. When visiting such a website, the network request information in Dev Tools should show the Alt-Svc header and indicate that HTTP/3 was used.
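
As a rough sketch of how a client might scan an Alt-Svc value for an advertised HTTP/3 draft (the header value below is invented for illustration, not from a real server):

```python
# Example Alt-Svc value advertising two HTTP/3 draft versions.
alt_svc = 'h3-32=":443"; ma=3600, h3-29=":443"; ma=3600'

# Each comma-separated entry starts with the protocol identifier.
protocols = [entry.split("=", 1)[0].strip() for entry in alt_svc.split(",")]
print(protocols)                                    # ['h3-32', 'h3-29']
print(any(p.startswith("h3-") for p in protocols))  # True
```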

If you encounter issues with these or other sites, please file a bug in Bugzilla.

The post QUIC and HTTP/3 Support now in Firefox Nightly and Beta appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

About:Community: New Contributors To Firefox

Mozilla planet - vr, 16/04/2021 - 20:06

With Firefox 88 in flight, we are pleased to welcome the long list of developers who’ve contributed their first code change to Firefox in this release, 24 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categorieën: Mozilla-nl planet