Planet Mozilla - https://planet.mozilla.org/
The Dutch Mozilla community (Mozilla Nederland)
Updated: 1 month, 1 day ago

Wladimir Palant: PSA: jQuery is bad for the security of your project

Mon, 02/03/2020 - 15:19

For some time I thought that jQuery was a thing of the past, only being used in old projects for legacy reasons. I mean, there are now so many better frameworks, why would anyone stick with jQuery and its numerous shortcomings? Then some colleagues told me that they weren’t aware of jQuery’s security downsides. And I recently discovered two big vulnerabilities in antivirus software 1 2 which existed partly due to excessive use of jQuery. So here is your official public service announcement: jQuery is bad for the security of your project.

By that I don’t mean that jQuery is inherently insecure. You can build a secure project on top of jQuery, if you are sufficiently aware of the potential issues and take care. However, the framework doesn’t make it easy. It isn’t secure by default; rather, it invites programming practices which are insecure. You have to constantly keep that in mind and correct for it. And if you don’t pay attention just once you will end up with a security vulnerability.

Why is jQuery like that?

You have to remember that jQuery was conceived in 2006. Security of web applications was a far less understood topic, e.g. autoescaping in Django templates to prevent server-side XSS vulnerabilities only came up in 2007. Security of client-side JavaScript code was barely considered at all.

So it’s not surprising that security wasn’t a consideration for jQuery. Its initial goal was something entirely different: make writing JavaScript code easy despite different browsers behaving wildly differently. Internet Explorer 6 with its non-standard APIs was still prevalent, and Internet Explorer 7 with its very modest improvements wasn’t even released yet. jQuery provided a level of abstraction on top of browser-specific APIs with the promise that it would work anywhere.

Today you can write document.getElementById("my-element") and call it done. Back then you might have to fall back to document.all["my-element"]. So jQuery provided you with jQuery("#my-element"), a standardized way to access elements by selector. Creating new elements was also different depending on the browser, so jQuery gave you jQuery("<div>Some text</div>"), a standardized way to turn HTML code into an element. And so on, lots of simplifications.

The danger of simple

You might have noticed a pattern above which affects many jQuery functions: the same function will perform different operations depending on the parameters it receives. You give it something and the function will figure out what you meant it to do. The jQuery() function will accept among other things a selector of the element to be located and HTML code of an element to be created. How does it decide which one of these fundamentally different operations to perform, with the parameter being a string both times? The initial logic was: if there is something looking like an HTML tag in the contents it must be HTML code, otherwise it’s a selector.

And there you have the issue: websites often want to find an element by selector but use untrusted data for parts of that selector. So attackers can inject HTML code into the selector and trick jQuery into substituting the safe “find element” operation with the dangerous “create a new element” one. A side-effect of the latter would be execution of malicious JavaScript code, a typical client-side XSS vulnerability.

It took until jQuery 1.9 (released in 2013) for this issue to be addressed. In order to be interpreted as HTML code, a string now has to start with <. Because the change was incompatible, it took websites years to migrate to safer jQuery versions. In particular, the Addons.Mozilla.Org website still had some vulnerabilities in 2015 going back to this 1 2.
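The heuristic change can be sketched as a plain string check. The function names here are hypothetical, not jQuery’s actual code:

```javascript
// Sketch of the dispatch heuristic, before and after jQuery 1.9.
// Hypothetical helper names; jQuery's real code differs.

// Pre-1.9: anything tag-like anywhere in the string counted as HTML.
function looksLikeHtmlPre19(input) {
  return /<[a-z][^>]*>/i.test(input);
}

// Since 1.9: only strings that start with "<" are treated as HTML.
function looksLikeHtmlPost19(input) {
  return input.charAt(0) === "<";
}

// Untrusted data spliced into the middle of a selector:
var userInput = "x'] <img src=x onerror=alert(1)>";
var selector = "a[title='" + userInput + "']";

console.log(looksLikeHtmlPre19(selector));  // true: parsed as HTML, XSS
console.log(looksLikeHtmlPost19(selector)); // false: stays a selector
```

With the newer rule, an attacker would need to control the very start of the string to flip the interpretation, which is why the remaining risk is smaller but not gone.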

The root issue, that the same function performs both safe and dangerous operations, remains part of jQuery however, likely due to backwards compatibility constraints. It can still cause issues even now. Attackers would have to manipulate the start of a selector, which is less likely, but it is still something that application developers have to keep in mind (and they almost never do). This danger prompted me to advise disabling jQuery.parseHTML some years ago.

Downloads, insecure by default

You may be familiar with secure by default as a concept. For an application framework this would mean that all framework functionality is safe by default and dangerous functionality has to be enabled in obvious ways. For example, React will not process HTML code at execution time unless you use a property named dangerouslySetInnerHTML. As we’ve already seen, jQuery does not make it obvious that you might be handling HTML code in dangerous ways.

It doesn’t stop there however. For example, jQuery provides the function jQuery.ajax() which is supposed to simplify sending web requests. At the time of writing, neither the documentation for this function, nor documentation for the shorthands jQuery.get() and jQuery.post() showed any warning about the functionality being dangerous. And yet it is, as the developers of Avast Secure Browser had to learn the hard way.

The issue is how jQuery.ajax() deals with server responses. If you don’t set a value for dataType, the default behavior is to guess how the response should be treated. And jQuery goes beyond the usual safe options of parsing XML or JSON data: it also offers parsing the response as HTML code (potentially running JavaScript code as a side-effect) or executing it as JavaScript code directly. The choice is made based on the MIME type provided by the server.

So if you use jQuery.ajax() or any of the functions building on it, the default behavior requires you to trust the download source 100%. Downloading from an untrusted source is only safe if you set the dataType parameter explicitly to xml or json. Are developers of jQuery-based applications aware of this? Hardly, given how the issue isn’t even highlighted in the documentation.
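A defensive pattern, sketched under the assumption that the response source isn’t fully trusted (the URL here is made up): always pin dataType so jQuery never guesses from the server’s MIME type.

```javascript
// Hedged sketch: pin dataType explicitly instead of letting
// jQuery.ajax() guess from the server-provided MIME type.
var request = {
  url: "https://updates.example.com/feed", // hypothetical endpoint
  dataType: "json", // only parse as JSON; never "html" or "script"
  success: function (data) {
    console.log(data);
  }
};

// With jQuery loaded you would then call:
//   jQuery.ajax(request);
console.log(request.dataType); // "json"
```

The same applies to the jQuery.get() and jQuery.post() shorthands, which accept a dataType argument as well.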

Harmful coding patterns

So maybe jQuery developers could change the APIs a bit, improve the documentation and all will be good again? Unfortunately not, the problem goes deeper than that. Remember how jQuery makes it really simple to turn HTML code into elements? Quite unsurprisingly, jQuery-based code tends to make heavy use of that functionality. Modern frameworks, on the other hand, recognized that people messing with HTML code directly is bad for security. React hides this functionality under a name that discourages usage; Vue.js at least discourages usage in its documentation.

Why is this core jQuery functionality problematic? Consider the following code which is quite typical for jQuery-based applications:

var html = "<ul>";
for (var i = 0; i < options.length; i++)
  html += "<li>" + options[i] + "</li>";
html += "</ul>";
$(document.body).append(html);

Is this code safe? Depends on whether the contents of the options array are trusted. If they are not, attackers could inject HTML code into it which will result in malicious JavaScript code being executed as a side-effect. So this is a potential XSS vulnerability.
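To make the risk concrete, here is what the string handed to .append() looks like once a single attacker-controlled entry sneaks into options (the payload is illustrative):

```javascript
// One attacker-controlled entry turns the generated markup into
// something that executes script when parsed.
var options = ["tea", "<img src=x onerror=alert(1)>"];
var html = "<ul>";
for (var i = 0; i < options.length; i++)
  html += "<li>" + options[i] + "</li>";
html += "</ul>";

console.log(html);
// <ul><li>tea</li><li><img src=x onerror=alert(1)></li></ul>
// Appending this creates the <img> element and fires its onerror handler.
```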

What’s the correct way of doing this? I don’t think that the jQuery documentation ever mentions that. You would need an HTML escaping function, an essential piece of functionality that jQuery doesn’t provide for some reason. And you should probably apply it to all dynamic parts of your HTML code, just in case:

function escape(str)
{
  return str.replace(/&/g, "&amp;")
            .replace(/</g, "&lt;")
            .replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;");
}

var html = "<ul>";
for (var i = 0; i < options.length; i++)
  html += "<li>" + escape(options[i]) + "</li>";
html += "</ul>";
$(document.body).append(html);

(Note that & has to be escaped first; otherwise the & in the entities produced by the other replacements would be escaped again.)

And you really have to do this consistently. If you forget a single escape() call you might introduce a vulnerability. If you omit an escape() call because the data is presumably safe you create a maintenance burden – now anybody changing this code (including yourself) has to remember that this data is supposed to be safe. So they have to keep it safe, and if they can’t they have to remember to change this piece of code. And that’s already assuming that your initial evaluation of the data being safe was correct.

Any system which relies on imperfect human beings to always make perfect choices to stay secure is bound to fail at some point. McAfee Webadvisor is merely the latest example where jQuery encouraged this problematic coding approach which failed spectacularly. That’s not how it should work.

What are the alternatives?

Luckily, we no longer need jQuery to paint over browser differences. We don’t need jQuery to do downloads; regular XMLHttpRequest works just as well (actually better in some respects) and is secure. And if that’s too verbose, browser support for the Fetch API is looking great.

So the question is mostly: what’s an easy way to create dynamic user interfaces? We could use a templating engine like Handlebars for example. While the result will be HTML code again, untrusted data will be escaped automatically. So things are inherently safe as long as you stay away from the “triple-stash” syntax. In fact, a vulnerability I once found was due to the developer using triple-stash consistently and unnecessarily – don’t do that.

But you could take one more step and switch to a fully-featured framework like React or Vue.js. While these run on HTML templates as well, the template parsing happens when the application is compiled. No HTML code is being parsed at execution time, so it’s impossible to create HTML elements which weren’t supposed to be there.

There is an added security bonus here: event handlers are specified within the template and will always connect to the right elements. Even if attackers manage to create similar elements (like I did to exploit the McAfee Webadvisor vulnerability), it wouldn’t trick the application into adding functionality to them.

Categories: Mozilla-nl planet

Jan-Erik Rediger: Two-year Moziversary

Mon, 02/03/2020 - 15:00

Woops, looks like I missed my Two Year Moziversary! 2 years ago, in March 2018, I joined Mozilla as a Firefox Telemetry Engineer. Last year I blogged about what happened in my first year.

One year later I am still in the same team, but other things changed. Our team grew (hello Bea & Mike!) and shrank (bye Georg!), and lots of other changes happened at Mozilla as well.

However one thing stayed during the whole year: Glean.
Culminating in the very first Glean-written-in-Rust release, but not slowing down after, the Glean project is changing how we do telemetry in Mozilla products.

Glean is now being used in multiple products across mobile and desktop operating systems (Firefox Preview, Lockwise for Android, Lockwise for iOS, Project FOG). We've seen over 2000 commits on that project, published 16 releases and posted 14 This Week in Glean blogposts.

This year will be focused on maintaining Glean, building new capabilities and bringing more of Glean back into Firefox on Desktop. I am also looking forward to speaking more about Rust and how we use it to build and maintain cross-platform libraries.

To the next year and beyond!

Thank you

Thanks to my team: :bea, :chutten, :Dexter, :mdboom & :travis!
Lots of my work involves collaborating with people outside my direct team, gathering feedback, taking in feature requests and triaging bug reports. So thanks also to all the other people at Mozilla I work with.


The Firefox Frontier: Are you registered to vote?

Mon, 02/03/2020 - 14:30

Left, right, or center. Blue, red, or green. Whatever your flavor, it’s your right to decide the direction of your community, state and country. The first step is to check … Read more

The post Are you registered to vote? appeared first on The Firefox Frontier.


Daniel Stenberg: curl ootw: –next

Mon, 02/03/2020 - 11:30

(previously posted options of the week)

--next has the short option alternative -:. (Right, that’s dash colon as the short version!)

Working with multiple URLs

You can tell curl to send a string as a POST to a URL. And you can easily tell it to send that same data to three different URLs. Like this:

curl -d data URL1 URL2 URL3

… and you can easily ask curl to issue a HEAD request to two separate URLs, like this:

curl -I URL1 URL2

… and of course the easy, just GET four different URLs and send them all to stdout:

curl URL1 URL2 URL3 URL4

Doing many at once is beneficial

Of course you could also just invoke curl two, three or four times serially to accomplish the same thing. But if you do, you’d lose curl’s ability to use its DNS cache, its connection cache and TLS resumed sessions – as they’re all cached in memory. And if you wanted to use cookies from one transfer to the next, you’d have to store them on disk since curl can’t magically keep them in memory between separate command line invocations.

Sometimes you want to use several URLs to make curl perform better.

But what if you want to send different data in each of those three POSTs? Or what if you want to do both GET and POST requests in the same command line? --next is here for you!

--next is a separator!

Basically, --next resets the command line parser but lets curl keep all transfer related states and caches. I think it is easiest to explain with a set of examples.

Send a POST followed by a GET:

curl -d data URL1 --next URL2

Send one string to URL1 and another string to URL2, both as POST:

curl -d one URL1 --next -d another URL2

Send a GET, activate cookies and do a HEAD that then sends back matching cookies:

curl -b empty URL1 --next -I URL1

First upload a file with PUT, then download it again:

curl -T file URL1 --next URL2

If you’re doing parallel transfers (with -Z), curl will do URLs in parallel only until the --next separator.

There’s no limit

As with most things curl, there’s no limit to the number of URLs you can specify on the command line and there’s no limit to the number of --next separators.

If your command line isn’t long enough to hold them all, write them in a file and point them out to curl with -K, --config.
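A sketch of that: the earlier two-POST example expressed as a config file. Config files use the long option names without leading dashes, and next works as a separator there too (file name and URLs are made up):

```shell
# Equivalent of:
#   curl -d one https://example.com/first --next -d another https://example.com/second
cat > urls.txt <<'EOF'
data = "one"
url = "https://example.com/first"
next
data = "another"
url = "https://example.com/second"
EOF

# Then run all the transfers with:
#   curl -K urls.txt
cat urls.txt
```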


William Lachance: This week in Glean (special guest post): mozregression telemetry (part 1)

Fri, 28/02/2020 - 16:50

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This is a special guest post by non-Glean-team member William Lachance!

As I mentioned last time I talked about mozregression, I have been thinking about adding some telemetry to the system to better understand the usage of this tool, to justify some part of Mozilla spending some cycles maintaining and improving it (assuming my intuition that this tool is heavily used is confirmed).

Coincidentally, the Telemetry client team has been working on a new library for measuring these types of things in a principled way called Glean, which even has Python bindings! Using this has the potential of saving a lot of work: not only does Glean provide a framework for submitting data, our backend systems are also automatically set up to process data submitted via Glean into BigQuery tables, which can then easily be queried using tools like sql.telemetry.mozilla.org.

I thought it might be useful to go through some of what I’ve been exploring, in case others at Mozilla are interested in instrumenting their pet internal tools or projects. If this effort is successful, I’ll distill these notes into a tutorial in the Glean documentation.

Initial steps: defining pings and metrics

The initial step in setting up a Glean project of any type is to define explicitly the types of pings and metrics. You can look at a “ping” as being a small bucket of data submitted by a piece of software in the field. A “metric” is something we’re measuring and including in a ping.

Most of the Glean documentation focuses on browser-based use-cases where we might want to sample lots of different things on an ongoing basis, but for mozregression our needs are considerably simpler: we just want to know when someone has used it along with a small number of non-personally identifiable characteristics of their usage, e.g. the mozregression version number and the name of the application they are bisecting.

Glean has the concept of event pings, but it seems like those are there more for a fine-grained view of what’s going on during an application’s use. So let’s define a new ping just for ourselves, giving it the unimaginative name “usage”. This goes in a file called pings.yaml:

---
$schema: moz://mozilla.org/schemas/glean/pings/1-0-0

usage:
  description: >
    A ping to record usage of mozregression
  include_client_id: true
  notification_emails:
    - wlachance@mozilla.com
  bugs:
    - http://bugzilla.mozilla.org/123456789/
  data_reviews:
    - http://example.com/path/to/data-review

We also need to define a list of things we want to measure. To start with, let’s just test with one piece of sample information: the app we’re bisecting (e.g. “Firefox” or “Gecko View Example”). This goes in a file called metrics.yaml:

---
$schema: moz://mozilla.org/schemas/glean/metrics/1-0-0

usage:
  app:
    type: string
    description: >
      The name of the app being bisected
    notification_emails:
      - wlachance@mozilla.com
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1581647
    data_reviews:
      - http://example.com/path/to/data-review
    expires: never
    send_in_pings:
      - usage

The data_reviews sections in both of the above are obviously bogus; we will need to actually get data review before landing and using this code, to make sure that we’re in conformance with Mozilla’s data collection policies.

Testing it out

But in the meantime, we can test our setup with the Glean debug pings viewer by setting a special tag (mozregression-test-tag) on our output. Here’s a small Python script which does just that:

from pathlib import Path

from glean import Glean, Configuration
from glean import load_metrics, load_pings

mozregression_path = Path.home() / '.mozilla2' / 'mozregression'

Glean.initialize(
    application_id="mozregression",
    application_version="0.1.1",
    upload_enabled=True,
    configuration=Configuration(
        ping_tag="mozregression-test-tag"
    ),
    data_dir=mozregression_path / "data"
)
Glean.set_upload_enabled(True)

pings = load_pings("pings.yaml")
metrics = load_metrics("metrics.yaml")

metrics.usage.app.set("reality")
pings.usage.submit()

Running this script on my laptop, I see that a respectable JSON payload was delivered to and processed by our servers:

As you can see, we’re successfully processing both the “version” number of mozregression, some characteristics of the machine sending the information (my MacBook in this case), as well as our single measure. We also have a client id, which should tell us roughly how many distinct installations of mozregression are sending pings. This should be more than sufficient for an initial “mozregression usage dashboard”.

Next steps

There are a bunch of things I still need to work through before landing this inside mozregression itself. Notably, the Glean Python bindings are Python 3-only, so we’ll need to port the mozregression GUI to Python 3 before we can start measuring usage there. But I’m excited at how quickly this work is coming together: stay tuned for part 2 in a few weeks.


Karl Dubost: Week notes - 2020 w09 - worklog - The machine will eat us

Fri, 28/02/2020 - 09:10

note: it is hard to keep notes with all the mental space used for work, family and… the new member of the world family: coronavirus. We need to adjust habits everywhere in the world, and it will definitely have a long-term impact. I do not believe there will be a resolution in the next couple of months. My mind is in a marathon state. We are in this for the long run.

The machine will eat us.

I'm inheriting a project which was maintained by the Open Innovation team for experimenting with webcompat team data. The learning curve will be steep. The gist of it is described in the Webcompat Machine Learning docs.

Some of the new things to learn:

  • machine learning
  • the specific deployment strategy which has been used in this case
  • the logic which has been used to target certain bugs on webcompat

The good news is that it is in flask/python for the most part so that's something less to digest.

Interesting challenge, even if everything seems so futile compared to the looming world crisis.

Some random bugs

Otsukare


Daniel Stenberg: Expect: tweaks in curl

Thu, 27/02/2020 - 09:43

One of the persistent myths about HTTP is that it is “a simple protocol”.

Expect: – not always expected

One of the dusty spec corners of HTTP/1.1 (Section 5.1.1 of RFC 7231) explains how the Expect: header works. This is still today in 2020 one of the HTTP request headers that is very commonly ignored by servers and intermediaries.

Background

HTTP/1.1 is designed for being sent over TCP (and possibly also TLS) in a serial manner. Setting up a new connection is costly, both in terms of CPU and especially in time, as it requires a number of round-trips. (I’ll detail further down how HTTP/2 fixes all these issues in a much better way.)

HTTP/1.1 provides a number of ways to allow it to perform all its duties without having to shut down the connection. One such example is the ability to tell a client early on that it needs to provide authentication credentials before the client sends off a large payload. In order to maintain the TCP connection, a client can’t stop sending an HTTP payload prematurely! Once the request body has started to get transmitted, the only way to stop it before the end of the data is to cut off the connection and create a new one – wasting time and CPU…

“We want a 100 to continue”

A client can include a header in its outgoing request to ask the server to first acknowledge that everything is fine and that it can continue to send the “payload” – or the server can return status codes that inform the client that there are other things it needs to fulfill in order to have the request succeed. Most such cases typically involve authentication.

This “first tell me it’s OK to send it before I send it” request header looks like this:

Expect: 100-continue

Servers

Since this mandatory header is widely not understood or simply ignored by HTTP/1.1 servers, clients that issue it use a rather short timeout, and if no response has been received within that period they proceed and send the data even without a 100.

The timeout thing happens so often that removing the Expect: header from curl requests is a very common answer to questions on how to improve POST or PUT requests with curl when they run against such non-compliant servers.

Popular browsers

Browsers are widely popular HTTP clients but none of the popular ones ever use this. In fact, very few clients do. This is of course a chicken and egg problem, because servers don’t support it very well because clients don’t use it, and clients don’t use it because servers don’t support it very well…

curl sends Expect:

When we implemented support for HTTP/1.1 in curl back in 2001, we wanted it done properly. We made it have a short timeout, 1000 milliseconds, waiting for the 100 response. We also made curl automatically include the Expect: header in outgoing requests if we know that the body is larger than NNN bytes or we don’t know the size at all beforehand (and obviously, we don’t do it if we send the request chunked-encoded).

The logic being that if the data amount is smaller than NNN, the waste is not very big and we might just as well send it (even if we risk sending it twice), as waiting for a response etc is just going to be more time consuming.

That NNN margin value (known in the curl sources as the EXPECT_100_THRESHOLD) in curl was set to 1024 bytes already then back in the early 2000s.

Bumping EXPECT_100_THRESHOLD

Starting in curl 7.69.0 (due to ship on March 4, 2020) we catch up a little with the world around us and now libcurl will only automatically set the Expect: header if the amount of data to send in the body is larger than 1 megabyte. Yes, we raise the limit by 1024 times.

The reasoning is of course that for most Internet users these days, transfers below a certain size aren’t even noticeable, and so we adapt. We thereby also reduce the amount of problems for the very small data amounts, where waiting for the 100 continue response is a waste of time anyway.

Credits: landed in this commit. (sorry but my WordPress stupidly can’t show the proper Asian name of the author!)

417 Expectation Failed

One of the ways that an HTTP/1.1 server can deal with an Expect: 100-continue header in a request is to respond with a 417 code, which should tell the client to retry the same request again, only without the Expect: header.

While this response is fairly uncommon among servers, curl users who ran into 417 responses have previously had to resort to removing the Expect: header “manually” from such requests. This was of course often far from intuitive or easy to figure out for users. A source of grief and agony.

Until this commit, also targeted for inclusion in curl 7.69.0 (March 4, 2020). Starting now, curl will automatically act on a 417 response if it arrives as a consequence of using Expect:, and then retry the request again without the header!

Credits: I wrote the patch.

HTTP/2 avoids this altogether

With HTTP/2 (and HTTP/3) this entire thing is a non-issue because with these modern protocol versions we can abort a request (stop a stream) prematurely without having to sacrifice the connection. There’s therefore no need for this crazy dance anymore!


The Rust Programming Language Blog: Announcing Rust 1.41.1

Thu, 27/02/2020 - 01:00

The Rust team has published a new point release of Rust, 1.41.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.41.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.41.1 stable

Rust 1.41.1 addresses two critical regressions introduced in Rust 1.41.0: a soundness hole related to static lifetimes, and a miscompilation causing segfaults. These regressions do not affect earlier releases of Rust, and we recommend users of Rust 1.41.0 to upgrade as soon as possible. Another issue related to interactions between 'static and Copy implementations, dating back to Rust 1.0, was also addressed by this release.

A soundness hole in checking static items

In Rust 1.41.0, due to some changes in the internal representation of static values, the borrow checker accidentally allowed some unsound programs. Specifically, the borrow checker would not check that static items had the correct type. This in turn would allow the assignment of a temporary, with a lifetime less than 'static, to a static variable:

static mut MY_STATIC: &'static u8 = &0;

fn main() {
    let my_temporary = 42;
    unsafe {
        // Erroneously allowed in 1.41.0:
        MY_STATIC = &my_temporary;
    }
}

This was addressed in 1.41.1, with the program failing to compile:

error[E0597]: `my_temporary` does not live long enough
 --> src/main.rs:6:21
  |
6 |         MY_STATIC = &my_temporary;
  |         ------------^^^^^^^^^^^^^
  |         |           |
  |         |           borrowed value does not live long enough
  |         assignment requires that `my_temporary` is borrowed for `'static`
7 |     }
8 | }
  | - `my_temporary` dropped here while still borrowed

You can learn more about this bug in issue #69114 and the PR that fixed it.

Respecting a 'static lifetime in a Copy implementation

Ever since Rust 1.0, the following erroneous program has been compiling:

#[derive(Clone)]
struct Foo<'a>(&'a u32);

impl Copy for Foo<'static> {}

fn main() {
    let temporary = 2;
    let foo = (Foo(&temporary),);
    drop(foo.0); // Accessing a part of `foo` is necessary.
    drop(foo.0); // Indexing an array would also work.
}

In Rust 1.41.1, this issue was fixed by the same PR as the one above. Compiling the program now produces the following error:

error[E0597]: `temporary` does not live long enough
  --> src/main.rs:7:20
   |
7  |     let foo = (Foo(&temporary),);
   |                    ^^^^^^^^^^ borrowed value does not live long enough
8  |     drop(foo.0);
   |     ----- copying this value requires that
   |           `temporary` is borrowed for `'static`
9  |     drop(foo.0);
10 | }
   | - `temporary` dropped here while still borrowed

This error occurs because Foo<'a>, for some 'a, only implements Copy when 'a: 'static. However, the temporary variable, with some lifetime '0 does not outlive 'static and hence Foo<'0> is not Copy, so using drop the second time around should be an error.
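For contrast, a sketch of the same shape that is accepted, because the borrowed value really does live for 'static:

```rust
// Same structure as above, but borrowing a genuinely 'static value,
// so Foo<'static> really is Copy and the double use compiles.
#[derive(Clone)]
struct Foo<'a>(&'a u32);

impl Copy for Foo<'static> {}

static VALUE: u32 = 2;

fn main() {
    let foo = (Foo(&VALUE),); // &VALUE is a &'static u32
    drop(foo.0); // copies instead of moving
    drop(foo.0); // so using it again is fine
    println!("{}", *(foo.0).0);
}
```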

Miscompiled bound checks leading to segfaults

In a few cases, programs compiled with Rust 1.41.0 were omitting bound checks in the memory allocation code. This caused segfaults if out-of-bound values were provided. The root cause of the miscompilation was a change in an LLVM optimization pass, introduced in LLVM 9 and reverted in LLVM 10.

Rust 1.41.0 uses a snapshot of LLVM 9, so we cherry-picked the revert into Rust 1.41.1, addressing the miscompilation. You can learn more about this bug in issue #69225.

Contributors to 1.41.1

Many people came together to create Rust 1.41.1. We couldn't have done it without all of you. Thanks!


Daniel Stenberg: curl ootw: –ftp-pasv

Wed, 26/02/2020 - 22:12

(previous options of the week)

--ftp-pasv has no short version of the option.

This option was added in curl 7.11.0, January 2004.

FTP uses dual connections

FTP is a really old protocol with traces back to the early 1970s, actually from before we even had TCP. It truly comes from a different Internet era.

When a client speaks FTP, it first sets up a control connection to the server, issues commands to it that instructs it what to do and then when it wants to transfer data, in either direction, a separate data connection is required; a second TCP connection in addition to the initial one.

This second connection can be set up in two different ways. The client instructs the server which way to use:

  1. the client connects again to the server (passive)
  2. the server connects back to the client (active)

Passive please

--ftp-pasv is the default behavior of curl and asks to do the FTP transfer in passive mode. As this is the default, you will rarely see this option used. You could for example use it in cases where you set active mode in your .curlrc file but want to override that decision for a single specific invocation.

The opposite behavior, active connection, is called for with the --ftp-port option. An active connection asks the server to do a TCP connection back to the client, an action that these days is rarely functional due to firewall and network setups etc.

FTP passive behavior detailed

FTP was created decades before IPv6 was even thought of, so the original FTP spec assumes and uses IPv4 addresses in several important places, including how to ask the server for setting up a passive data transfer.

EPSV is the “new” FTP command that was introduced to enable FTP over IPv6. I call it new because it wasn’t present in the original RFC 959 that otherwise dictates most of how FTP works. EPSV is an extension.

EPSV was for a long time only supported by a small fraction of FTP servers, and such servers probably still exist, so in case invoking this command is a problem, curl needs to fall back to the original command for this purpose: PASV.

curl first tries EPSV and if that fails, it sends PASV. The first of those commands that is accepted by the server will make the server wait for the client to connect to it (again) on a fresh TCP port number.

Problems with the dual connections

Your firewall needs to allow your client to connect to the new port number the server opened up for the data connection. This is particularly complicated if you enable FTPS (encrypted FTP) as then the new port number is invisible to middle-boxes such as firewalls.

Dual connections also often cause problems because the control connection will be idle during the transfer, and that idle connection sometimes gets killed by NATs or other middle-boxes due to the inactivity.

Enabling active mode is usually even worse than passive mode in modern network setups, as then the firewall needs to allow an outside TCP connection to come in and connect to the client on a given port number.

Example

Ask for a file using passive FTP mode:

curl --ftp-pasv ftp://ftp.example.com/cake-recipe.txt

Related options

--disable-epsv will prevent curl from using the EPSV command – which also means the transfer will not work over IPv6. --ftp-port switches curl to an active connection.

--ftp-skip-pasv-ip tells curl that the IP address in the server’s PASV response should be ignored and the IP address of the control connection should be used instead when the second connection is set up.
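To make the PASV mechanics concrete, here is a hypothetical Python sketch (not curl’s actual implementation) of how a client decodes the server’s 227 reply to PASV. The reply encodes the data-connection address as six comma-separated numbers: four for the IPv4 address and two for the port, where the port is p1 * 256 + p2. The optional control_ip parameter models what --ftp-skip-pasv-ip does: ignore the advertised address and reuse the control connection’s IP.

```python
import re

def parse_pasv(reply, control_ip=None):
    """Decode a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply.

    If control_ip is given (mimicking curl's --ftp-skip-pasv-ip), the
    advertised address is ignored in favor of the control connection's IP.
    """
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a valid PASV reply: %r" % reply)
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    ip = control_ip or ".".join(map(str, (h1, h2, h3, h4)))
    port = p1 * 256 + p2  # the port the client should connect to for data
    return ip, port
```

For example, a reply of "227 Entering Passive Mode (192,0,2,1,195,80)" decodes to the address 192.0.2.1 and port 50000 (195 * 256 + 80).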

Categories: Mozilla-nl planet

The Firefox Frontier: The Facebook Container for Firefox

Wed, 26/02/2020 - 02:00

Even with the ongoing #deletefacebook movement, not everyone is willing to completely walk away from the connections they’ve made on the social platform. After all, Facebook — and its subsidiary … Read more

The post The Facebook Container for Firefox appeared first on The Firefox Frontier.


Nicholas Nethercote: Ad Hoc Profiling

Wed, 26/02/2020 - 00:48

I have used a variety of profiling tools over the years, including several I wrote myself.

But there is one profiling tool I have used more than any other. It is capable of providing invaluable, domain-specific profiling data of a kind not obtainable by any general-purpose profiler.

It’s a simple text processor implemented in a few dozen lines of code. I use it in combination with logging print statements in the programs I am profiling. No joke.

Post-processing

The tool is called counts, and it tallies line frequencies within text files, like an improved version of the Unix command chain sort | uniq -c. For example, given the following input.

a 1
b 2
b 2
c 3
c 3
c 3
d 4
d 4
d 4
d 4

counts produces the following output.

10 counts:
(  1)        4 (40.0%, 40.0%): d 4
(  2)        3 (30.0%, 70.0%): c 3
(  3)        2 (20.0%, 90.0%): b 2
(  4)        1 (10.0%,100.0%): a 1

It gives a total line count, and shows all the unique lines, ordered by frequency, with individual and cumulative percentages.

Alternatively, when invoked with the -w flag, it assigns each line a weight, determined by the last integer that appears on the line (or 1 if there is no such integer).  On the same input, counts -w produces the following output.

30 counts:
(  1)       16 (53.3%, 53.3%): d 4
(  2)        9 (30.0%, 83.3%): c 3
(  3)        4 (13.3%, 96.7%): b 2
(  4)        1 ( 3.3%,100.0%): a 1

The total and per-line counts are now weighted; the output incorporates both frequency and a measure of magnitude.

That’s it. That’s all counts does. I originally implemented it in 48 lines of Perl, then later rewrote it as 48 lines of Python, and then later again rewrote it as 71 lines of Rust.
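The algorithm is small enough to sketch in full. The following is a rough Python approximation (not the actual Rust implementation) that reproduces both the plain and the weighted modes described above:

```python
import re
from collections import Counter

def counts(lines, weighted=False):
    """Tally unique lines; with weighted=True, scale each by its last integer."""
    tally = Counter()
    for line in lines:
        line = line.rstrip("\n")
        weight = 1
        if weighted:
            ints = re.findall(r"\d+", line)
            if ints:
                weight = int(ints[-1])  # last integer on the line is the weight
        tally[line] += weight
    total = sum(tally.values())
    out = ["%d counts:" % total]
    cumulative = 0
    # Unique lines ordered by (weighted) frequency, with individual and
    # cumulative percentages.
    for rank, (line, n) in enumerate(tally.most_common(), 1):
        cumulative += n
        out.append("(%3d) %8d (%4.1f%%,%5.1f%%): %s"
                   % (rank, n, 100.0 * n / total, 100.0 * cumulative / total, line))
    return "\n".join(out)
```

Running this on the ten-line sample input above produces "10 counts:" with d 4 at the top, and with weighted=True it produces "30 counts:", matching the outputs shown earlier.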

In terms of benefit-to-effort ratio, it is by far the best code I have ever written.

counts in action

As an example, I added print statements to Firefox’s heap allocator so it prints a line for every allocation that shows its category, requested size, and actual size. A short run of Firefox with this instrumentation produced a 77 MB file containing 5.27 million lines. counts produced the following output for this file.

5270459 counts:
(  1)   576937 (10.9%, 10.9%): small 32 (32)
(  2)   546618 (10.4%, 21.3%): small 24 (32)
(  3)   492358 ( 9.3%, 30.7%): small 64 (64)
(  4)   321517 ( 6.1%, 36.8%): small 16 (16)
(  5)   288327 ( 5.5%, 42.2%): small 128 (128)
(  6)   251023 ( 4.8%, 47.0%): small 512 (512)
(  7)   191818 ( 3.6%, 50.6%): small 48 (48)
(  8)   164846 ( 3.1%, 53.8%): small 256 (256)
(  9)   162634 ( 3.1%, 56.8%): small 8 (8)
( 10)   146220 ( 2.8%, 59.6%): small 40 (48)
( 11)   111528 ( 2.1%, 61.7%): small 72 (80)
( 12)    94332 ( 1.8%, 63.5%): small 4 (8)
( 13)    91727 ( 1.7%, 65.3%): small 56 (64)
( 14)    78092 ( 1.5%, 66.7%): small 168 (176)
( 15)    64829 ( 1.2%, 68.0%): small 96 (96)
( 16)    60394 ( 1.1%, 69.1%): small 88 (96)
( 17)    58414 ( 1.1%, 70.2%): small 80 (80)
( 18)    53193 ( 1.0%, 71.2%): large 4096 (4096)
( 19)    51623 ( 1.0%, 72.2%): small 1024 (1024)
( 20)    45979 ( 0.9%, 73.1%): small 2048 (2048)

Unsurprisingly, small allocations dominate. But what happens if we weight each entry by its size? counts -w produced the following output.

2554515775 counts:
(  1) 501481472 (19.6%, 19.6%): large 32768 (32768)
(  2) 217878528 ( 8.5%, 28.2%): large 4096 (4096)
(  3) 156762112 ( 6.1%, 34.3%): large 65536 (65536)
(  4) 133554176 ( 5.2%, 39.5%): large 8192 (8192)
(  5) 128523776 ( 5.0%, 44.6%): small 512 (512)
(  6)  96550912 ( 3.8%, 48.3%): large 3072 (4096)
(  7)  94164992 ( 3.7%, 52.0%): small 2048 (2048)
(  8)  52861952 ( 2.1%, 54.1%): small 1024 (1024)
(  9)  44564480 ( 1.7%, 55.8%): large 262144 (262144)
( 10)  42200576 ( 1.7%, 57.5%): small 256 (256)
( 11)  41926656 ( 1.6%, 59.1%): large 16384 (16384)
( 12)  39976960 ( 1.6%, 60.7%): large 131072 (131072)
( 13)  38928384 ( 1.5%, 62.2%): huge 4864000 (4866048)
( 14)  37748736 ( 1.5%, 63.7%): huge 2097152 (2097152)
( 15)  36905856 ( 1.4%, 65.1%): small 128 (128)
( 16)  31510912 ( 1.2%, 66.4%): small 64 (64)
( 17)  24805376 ( 1.0%, 67.3%): huge 3097600 (3100672)
( 18)  23068672 ( 0.9%, 68.2%): huge 1048576 (1048576)
( 19)  22020096 ( 0.9%, 69.1%): large 524288 (524288)
( 20)  18980864 ( 0.7%, 69.9%): large 5432 (8192)

This shows that the cumulative count of allocated bytes (2.55GB) is dominated by a mixture of larger allocation sizes.

This example gives just a taste of what counts can do.

(An aside: in both cases it’s good to see there isn’t much slop, i.e. the differences between the requested sizes and actual sizes are mostly 0. That 5432 entry at the bottom of the second table is curious, though.)

Other Uses

This technique is often useful when you already know something — e.g. a general-purpose profiler showed that a particular function is hot — but you want to know more.

  • Exactly how many times are paths X, Y and Z executed? For example, how often do lookups succeed or fail in data structure D? Print an identifying string each time a path is hit.
  • How many times does loop L iterate? What does the loop count distribution look like? Is it executed frequently with a low loop count, or infrequently with a high loop count, or a mix? Print the iteration count before or after the loop.
  • How many elements are typically in hash table H at this code location? Few? Many? A mixture? Print the element count.
  • What are the contents of vector V at this code location? Print the contents.
  • How many bytes of memory are used by data structure D at this code location? Print the byte size.
  • Which call sites of function F are the hot ones? Print an identifying string at the call site.

Then use counts to aggregate the data. Often this domain-specific data is critical to fully optimize hot code.

Worse is better

Print statements are an admittedly crude way to get this kind of information, profligate with I/O and disk space. In many cases you could do it in a way that uses machine resources much more efficiently, e.g. by creating a small table data structure in the code to track frequencies, and then printing that table at program termination.

But that would require:

  • writing the custom table (collection and printing);
  • deciding where to define the table;
  • possibly exposing the table to multiple modules;
  • deciding where to initialize the table; and
  • deciding where to print the contents of the table.

That is a pain, especially in a large program you don’t fully understand.

Alternatively, sometimes you want information that a general-purpose profiler could give you, but running that profiler on your program is a hassle because the program you want to profile is actually layered under something else, and setting things up properly takes effort.

In contrast, inserting print statements is trivial. Any measurement can be set up in no time at all. (Recompiling is often the slowest part of the process.) This encourages experimentation. You can also kill a running program at any point with no loss of profiling data.

Don’t feel guilty about wasting machine resources; this is temporary code. You might sometimes end up with output files that are gigabytes in size. But counts is fast because it’s so simple… and the Rust version is 3–4x faster than the Python version, which is nice. Let the machine do the work for you. (It does help if you have a machine with an SSD.)

Ad Hoc Profiling

For a long time I have, in my own mind, used the term ad hoc profiling to describe this combination of logging print statements and frequency-based post-processing. Wikipedia defines “ad hoc” as follows.

In English, it generally signifies a solution designed for a specific problem or task, non-generalizable, and not intended to be able to be adapted to other purposes

The process of writing custom code to collect this kind of profiling data — in the manner I disparaged in the previous section — truly matches this definition of “ad hoc”.

But counts is valuable specifically because it makes this type of custom profiling less ad hoc and more repeatable. I should arguably call it “generalized ad hoc profiling” or “not so ad hoc profiling”… but those names don’t have quite the same ring to them.

Tips

Use unbuffered output for the print statements. In C and C++ code, use fprintf(stderr, ...). In Rust code use eprintln!. (Update: Rust 1.32 added the dbg! macro, which also works well.)

Pipe the stderr output to a file, e.g. firefox 2> log.

Sometimes programs print other lines of output to stderr that should be ignored by counts. (Especially if they include integer IDs that counts -w would interpret as weights!) Prepend all logging lines with a short identifier, and then use grep $ID log | counts to ignore the other lines. If you use more than one prefix, you can grep for each prefix individually or all together.

Occasionally output lines get munged together when multiple print statements are present. Because there are typically many lines of output, having a few garbage ones almost never matters.

It’s often useful to use both counts and counts -w on the same log file; each one gives different insights into the data.

To find which call sites of a function are hot, you can instrument the call sites directly. But it’s easy to miss one, and the same print statements need to be repeated multiple times. An alternative is to add an extra string or integer argument to the function, pass in a unique value from each call site, and then print that value within the function.

It’s occasionally useful to look at the raw logs as well as the output of counts, because the sequence of output lines can be informative. For example, I recently diagnosed an occurrence of quadratic behaviour in the Rust compiler by seeing that a loop iterated 1, 2, 3, …, 9000+ times.

The Code

counts is available here.

Conclusion

I use counts to do ad hoc profiling all the time. It’s the first tool I reach for any time I have a question about code execution patterns. I have used it extensively for every bout of major performance work I have done in the past few years, as well as in plenty of other circumstances. I even built direct support for it into rustc-perf, the Rust compiler’s benchmark suite, via the profile eprintln subcommand. Give it a try!


Chris H-C: Jira, Bugzilla, and Tales of Issue Trackers Past

Tue, 25/02/2020 - 16:51

It seems as though Mozilla is never not in a period of transition. The distributed nature of the organization and community means that teams and offices and any informal or formal group is its own tiny experimental plot tended by gardeners with radically different tastes.

And if there’s one thing that unites gardeners and tech workers, it’s that both have Feelings about their tools.

Tools are personal things: they’re the only thing that allows us to express ourselves in our craft. I can’t code without an editor. I can’t prune without shears. They’re the part of our work that we actually touch. The code lives Out There, the garden is Outside… but the tools are in our hands.

But tools can also be group things. A shed is a tool for everyone’s tools. A workshop is a tool that others share. An Issue Tracker is a tool that helps us all coordinate work.

And group things require cooperation, agreement, and compromise.

While I was on the Browser team at BlackBerry I used a variety of different Issue Trackers. We started with an outdated version of FogBugz, then we had a Bugzilla fork for the WebKit porting work and MKS Integrity for everything else across the entire company, and then we all standardized on Jira.

With minimal customization, Jira and MKS Integrity both seemed to be perfectly adequate Issue Tracking Software. They had id numbers, relationships, state, attachments, comments… all the things you need in an Issue Tracker. But they couldn’t just be “perfectly adequate”, they had to be better enough than what they were replacing to warrant the switch.

In other words, to make the switch the new thing needs to do something that the previous one couldn’t, wouldn’t, or just didn’t do (or you’ve forgotten that it did). And every time Jira or MKS is customized it seems to stop being Issue Tracking Software and start being Workflow Management Software.

Perhaps because the people in charge of the customization are interested more in workflows than in Issue Tracking?

Regardless of why, once they become Workflow Management Software they become incomparable with Issue Trackers. Apples and Oranges. You end up optimizing for similar but distinct use cases as it might become more important to report about issues than it is to file and fix and close them.

And that’s the state Mozilla might be finding itself in right now as a few teams here and there try to find the best tools for their garden and settle upon Jira. Maybe they tried building workflows in Bugzilla and didn’t make it work. Maybe they were using Github Issues for a while and found it lacking. We already had multiple places to file issues, but now some of the places are Workflow Management Software.

And the rumbling has begun. And it’s no wonder, as even tools that are group things are still personal. They’re still what we touch when we craft.

The GNU-minded part of me thinks that workflow management should be built above and separate from issue tracking by the skillful use of open and performant interfaces. Bugzilla lets you query whatever you want, however you want, so why not build reporting Over There and leave me my issue tracking Here where I Like It.

The practical-minded part of me thinks that it doesn’t matter what we choose, so long as we do it deliberately and consistently.

The schedule-minded part of me notices that I should probably be filing and fixing issues rather than writing on them. And I think now’s the time to let that part win.

:chutten


Hacks.Mozilla.Org: Securing Firefox with WebAssembly

Tue, 25/02/2020 - 15:04

Protecting the security and privacy of individuals is a central tenet of Mozilla’s mission, and so we constantly endeavor to make our users safer online. With a complex and highly-optimized system like Firefox, memory safety is one of the biggest security challenges. Firefox is mostly written in C and C++. These languages are notoriously difficult to use safely, since any mistake can lead to complete compromise of the program. We work hard to find and eliminate memory hazards, but we’re also evolving the Firefox codebase to address these attack vectors at a deeper level. Thus far, we’ve focused primarily on two techniques: process-level sandboxing and incrementally rewriting components in the memory-safe Rust language.

A new approach

While we continue to make extensive use of both sandboxing and Rust in Firefox, each has its limitations. Process-level sandboxing works well for large, pre-existing components, but consumes substantial system resources and thus must be used sparingly. Rust is lightweight, but rewriting millions of lines of existing C++ code is a labor-intensive process.

Consider the Graphite font shaping library, which Firefox uses to correctly render certain complex fonts. It’s too small to put in its own process.  And yet, if a memory hazard were uncovered, even a site-isolated process architecture wouldn’t prevent a malicious font from compromising the page that loaded it. At the same time, rewriting and maintaining this kind of domain-specialized code is not an ideal use of our limited engineering resources.

So today, we’re adding a third approach to our arsenal. RLBox, a new sandboxing technology developed by researchers at the University of California, San Diego, the University of Texas, Austin, and Stanford University, allows us to quickly and efficiently convert existing Firefox components to run inside a WebAssembly sandbox. Thanks to the tireless efforts of Shravan Narayan, Deian Stefan, Tal Garfinkel, and Hovav Shacham, we’ve successfully integrated this technology into our codebase and used it to sandbox Graphite.

This isolation will ship to Linux users in Firefox 74 and to Mac users in Firefox 75, with Windows support following soon after. You can read more about this work in the press releases from UCSD and UT Austin along with the joint research paper.  Read on for a technical overview of how we integrated it into Firefox.

Building a wasm sandbox

The core implementation idea behind wasm sandboxing is that you can compile C/C++ into wasm code, and then you can compile that wasm code into native code for the machine your program actually runs on.  These steps are similar to what you’d do to run C/C++ applications in the browser, but we’re performing the wasm-to-native-code translation ahead of time, when Firefox itself is built.  Each of these two steps relies on significant pieces of software in its own right, and we add a third step to make the sandboxing conversion more straightforward and less error-prone.

First, you need to be able to compile C/C++ into wasm code.  As part of the WebAssembly effort, a wasm backend was added to Clang and LLVM.  Having a compiler is not enough, though; you also need a standard library for C/C++.  This component is provided via wasi-sdk.  With those pieces, we have enough to translate C/C++ into wasm code.

Second, you need to be able to convert the wasm code into native object files.  When we first started implementing wasm sandboxing, we were often asked, “why do you even need this step?  You could distribute the wasm code and compile it on-the-fly on the user’s machine when Firefox starts.” We could have done that, but that method requires the wasm code to be freshly compiled for every sandbox instance.  Per-sandbox compiled code is unnecessary duplication in a world where every origin resides in a separate process. Our chosen approach enables sharing compiled native code between multiple processes, resulting in significant memory savings.  This approach also improves the startup speed of the sandbox, which is important for fine-grained sandboxing, e.g. sandboxing the code associated with every font accessed or image loaded.

Ahead-of-time compilation with Cranelift and friends

This approach does not imply that we have to write our own wasm to native code compiler!  We implemented this ahead-of-time compilation using the same compiler backend that will eventually power the wasm component of Firefox’s JavaScript engine: Cranelift, via the Bytecode Alliance’s Lucet compiler and runtime.  This code sharing ensures that improvements benefit both our JavaScript engine and our wasm sandboxing compiler.  These two pieces of code currently use different versions of Cranelift for engineering reasons. As our sandboxing technology matures, however, we expect to modify them to use the exact same codebase.

Now that we’ve translated the wasm code into native object code, we need to be able to call into that sandboxed code from C++.  If the sandboxed code was running in a separate virtual machine, this step would involve looking up function names at runtime and managing state associated with the virtual machine.  With the setup above, however, sandboxed code is native compiled code that respects the wasm security model. Therefore, sandboxed functions can be called using the same mechanisms as calling regular native code.  We have to take some care to respect the different machine models involved: wasm code uses 32-bit pointers, whereas our initial target platform, x86-64 Linux, uses 64-bit pointers. But there are other hurdles to overcome, which leads us to the final step of the conversion process.

Getting sandboxing correct

Calling sandboxed code with the same mechanisms as regular native code is convenient, but it hides an important detail.  We cannot trust anything coming out of the sandbox, as an adversary may have compromised the sandbox.

For instance, for a sandboxed function:

/* Returns values between zero and sixteen. */
int return_the_value();

We cannot guarantee that this sandboxed function follows its contract.  Therefore, we need to ensure that the returned value falls in the range that we expect.

Similarly, for a sandboxed function returning a pointer:

extern const char* do_the_thing();

We cannot guarantee that the returned pointer actually points to memory controlled by the sandbox.  An adversary may have forced the returned pointer to point somewhere in the application outside of the sandbox.  Therefore, we validate the pointer before using it.

There are additional runtime constraints that are not obvious from reading the source.  For instance, the pointer returned above may point to dynamically allocated memory from the sandbox.  In that case, the pointer should be freed by the sandbox, not by the host application. We could rely on developers to always remember which values are application values and which values are sandbox values.  Experience has shown that approach is not feasible.

Tainted data

The above two examples point to a general principle: data returned from the sandbox should be specifically identified as such.  With this identification in hand, we can ensure the data is handled in appropriate ways.

We label data associated with the sandbox as “tainted”.  Tainted data can be freely manipulated (e.g. pointer arithmetic, accessing fields) to produce more tainted data.  But when we convert tainted data to non-tainted data, we want those operations to be as explicit as possible. Taintedness is valuable not just for managing memory returned from the sandbox.  It’s also valuable for identifying data returned from the sandbox that may need additional verification, e.g. indices pointing into some external array.

We therefore model all exposed functions from the sandbox as returning tainted data.  Such functions also take tainted data as arguments, because anything they manipulate must belong to the sandbox in some way.  Once function calls have this interface, the compiler becomes a taintedness checker. Compiler errors will occur when tainted data is used in contexts that want untainted data, or vice versa.  These contexts are precisely the places where tainted data needs to be propagated and/or data needs to be validated. RLBox handles all the details of tainted data and provides features that make incremental conversion of a library’s interface to a sandboxed interface straightforward.
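RLBox itself is a C++ library and its real API differs, but the tainted-data discipline it enforces can be modeled in a few lines of Python (all names here are illustrative, not RLBox’s):

```python
class Tainted:
    """A value from the sandbox: opaque until it is explicitly verified."""

    def __init__(self, value):
        self._value = value  # never handed out directly

    def copy_and_verify(self, verifier):
        """Release the value only if the caller-supplied check accepts it."""
        result = verifier(self._value)
        if result is None:
            raise ValueError("sandbox returned data that failed verification")
        return result

# Model the sandboxed function from the example above: its contract says it
# returns a value between zero and sixteen, but we cannot trust the sandbox
# to honor that contract.
def return_the_value():
    return Tainted(12)  # in reality, this value comes from the wasm sandbox

value = return_the_value().copy_and_verify(
    lambda v: v if 0 <= v <= 16 else None)
```

Because every sandbox function returns a Tainted value, forgetting to verify becomes a type error rather than a silent bug: there is simply no way to read the raw value without going through the verifier.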

Next Steps

With the core infrastructure for wasm sandboxing in place, we can focus on increasing its impact across the Firefox codebase – both by bringing it to all of our supported platforms, and by applying it to more components. Since this technique is lightweight and easy to use, we expect to make rapid progress sandboxing more parts of Firefox in the coming months. We’re focusing our initial efforts on third-party libraries bundled with Firefox.  Such libraries generally have well-defined entry points and don’t pervasively share memory with the rest of the system. In the future, however, we also plan to apply this technology to first-party code.

Acknowledgements

We are deeply grateful for the work of our research partners at UCSD, UT Austin, and Stanford, who were the driving force behind this effort. We’d also like to extend a special thanks to our partners at the Bytecode Alliance – particularly the engineering team at Fastly, who developed Lucet and helped us extend its capabilities to make this project possible.

The post Securing Firefox with WebAssembly appeared first on Mozilla Hacks - the Web developer blog.


Mozilla Open Policy & Advocacy Blog: The Facts: Mozilla’s DNS over HTTPs (DoH)

Tue, 25/02/2020 - 11:53

The current insecure DNS system leaves billions of people around the world vulnerable because the data about where they go on the internet is unencrypted. We’ve set out to change that. In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol to close this privacy gap within the web’s infrastructure. Today, Firefox is enabling encrypted DNS over HTTPS by default in the US, giving our users more privacy protection wherever and whenever they’re online.

DoH will encrypt DNS traffic from clients (browsers) to resolvers through HTTPS so that users’ web browsing can’t be intercepted or tampered with by someone spying on the network. The resolvers we’ve chosen to work with so far – Cloudflare and NextDNS – have agreed to be part of our Trusted Recursive Resolver program. The program places strong policy requirements on the resolvers and how they handle data. This includes placing strict limits on data retention so providers – including internet service providers – can no longer tap into an unprotected stream of a user’s browsing history to build a profile that can be sold, or otherwise used in ways that people have not meaningfully consented to. We hope to bring more partners into the TRR program.

Like most new technologies on the web, there has been a healthy conversation about this new protocol. We’ve seen non-partisan groups, privacy experts, and other organizations all weigh in. And because we value transparency and open dialogue, this conversation has helped inform our plans for DoH. We are confident that the research and testing we’ve done over the last two years has ensured our roll-out of DoH respects user privacy and makes the web safer for everyone. Through DoH and our trusted recursive resolver program we can begin to close the data leaks that have been part of the domain name system since it was created 35 years ago.

Here are a few things we think are important to know about our deployment of DNS over HTTPS.

What are the privacy threats to DNS information that motivate DoH?
DNS requests and responses reveal important information about your activity on the Internet, and that information can be collected and sold by Internet Service Providers (ISPs), Wi-Fi providers, and others without your consent.
How will DoH keep my data from being collected and sold?
Mozilla requires all DNS providers that can be selected in Firefox to comply with our resolver policy through a legally-binding contract. These requirements place strict limits on the type of data that may be retained, what the provider can do with that data, and how long they may retain it. This strict policy is intended to protect users from providers being able to collect and monetize their data.
Why does Mozilla believe switching DoH on by default respects user choice?
Few users understand the role of DNS in their use of the Internet, or the potential for widespread abuse of their DNS information. If users don’t understand, then they can’t make meaningful choices.

Rather than putting the onus on users, Mozilla is taking steps to ensure that personal privacy is the default for all users, and to give users the ability to select non-default options if they so choose.

Why encrypt DNS queries? Aren’t there other mechanisms besides DNS that ISPs can use to collect data about user behavior?
There are many threats to user privacy, and a single technology cannot address them all. The fact that so many privacy risks exist today is a reason to tackle each problem, not a reason to refuse to solve any of them.

That is why Mozilla and others are working to define appropriate methods to prevent leakage of personally identifying information in protocols other than DNS. One example is the proposal for Encrypted Server Name Indication (ESNI) for TLS connections.

Will DoH lead to greater centralization of DNS, which will be bad for the Internet as a whole?
We agree that centralization is bad for the Internet. Today in practice, DNS is centralized because consumer devices are locked to the DNS service of ISPs. And just five companies control over 80% of the US broadband Internet market. The immediate impact of Mozilla enabling DoH in Firefox will be less centralization, not more, because it shifts traffic away from large ISPs, and provides users with more choice, while respecting enterprise DNS configurations.
Does DoH cause harm by preventing parental controls from working?
Numerous ISPs provide opt-in parental control services. Firefox’s deployment of DoH is designed to respect those controls where users have opted into them. Mozilla is working with the industry to define appropriate standards that will enable the smooth functioning of opt-in parental controls.
How does DoH work with enterprise DNS solutions such as “split-horizon” DNS?
We have made it easy for enterprises to disable DoH. In addition, Firefox will detect whether enterprise policies have been set on the device and will disable DoH in those circumstances. System administrators interested in how to configure enterprise policies can find relevant documentation here.
Why deploy DoH when it makes it more difficult for organizations to protect themselves against security threats?
The same argument was made against encrypting HTTP connections and other Internet traffic, yet organizations have adjusted to those advances in user security while protecting organizational assets. Furthermore, organizations can run their own internal DoH services or disable DoH for their users as described above.
Are you turning DoH on by default worldwide for all Firefox users?
As part of our continuing efforts to carefully test and measure the benefits and impact of DoH, we are currently focused on releasing this feature in the United States only. We do not have plans to roll out the feature in Europe or other regions at this time. However, we strongly believe that DNS over HTTPS is good for the privacy of people everywhere.

The post The Facts: Mozilla’s DNS over HTTPs (DoH) appeared first on Open Policy & Advocacy.


The Mozilla Blog: Firefox continues push to bring DNS over HTTPS by default for US users

Tue, 25/02/2020 - 11:53

Today, Firefox began the rollout of encrypted DNS over HTTPS (DoH) by default for US-based users. The rollout will continue over the next few weeks to confirm no major issues are discovered as this new protocol is enabled for Firefox’s US-based users.

A little over two years ago, we began work to help update and secure one of the oldest parts of the internet, the Domain Name System (DNS). To put this change into context, we need to briefly describe how the system worked before DoH. DNS is a database that links a human-friendly name, such as www.mozilla.org, to a computer-friendly series of numbers, called an IP address (e.g. 192.0.2.1). By performing a “lookup” in this database, your web browser is able to find websites on your behalf. Because of how DNS was originally designed decades ago, browsers doing DNS lookups for websites — even encrypted https:// sites — had to perform these lookups without encryption. We described the impact of insecure DNS on our privacy:

Because there is no encryption, other devices along the way might collect (or even block or change) this data too. DNS lookups are sent to servers that can spy on your website browsing history without either informing you or publishing a policy about what they do with that information.

At the creation of the internet, these kinds of threats to people’s privacy and security were known, but not being exploited yet. Today, we know that unencrypted DNS is not only vulnerable to spying but is being exploited, and so we are helping the internet to make the shift to more secure alternatives. We do this by performing DNS lookups in an encrypted HTTPS connection. This helps hide your browsing history from attackers on the network, and helps prevent data collection by third parties on the network that tie your computer to the websites you visit.
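To make this concrete, a DoH lookup is just an HTTPS request. The sketch below builds a query URL for Cloudflare's public DNS-over-HTTPS JSON API; the endpoint and parameter names follow Cloudflare's documented dns-query interface, while dohQueryUrl is an illustrative helper name, not Firefox code.

```javascript
// Sketch: a DNS-over-HTTPS lookup is an ordinary HTTPS request.
// Sending this URL with an "accept: application/dns-json" header
// returns the DNS records as JSON.
function dohQueryUrl(name, type = "A") {
  const params = new URLSearchParams({ name, type });
  return `https://cloudflare-dns.com/dns-query?${params}`;
}

console.log(dohQueryUrl("www.mozilla.org"));
// https://cloudflare-dns.com/dns-query?name=www.mozilla.org&type=A
```

Because the request itself travels over TLS, an on-path observer sees only that you contacted the resolver, not which hostnames you looked up.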

Since our work on DoH began, many browsers have joined in announcing their plans to support DoH, and we’ve even seen major websites like Facebook move to support a more secure DNS.

If you’re interested in exactly how DoH protects your browsing history, here’s an in-depth explainer by Lin Clark.

We’re enabling DoH by default only in the US. If you’re outside of the US and would like to enable DoH, you’re welcome to do so by going to Settings, then General, then scrolling down to Network Settings and clicking the Settings button on the right. Here you can enable DNS over HTTPS by selecting its checkbox. By default, this change will send your encrypted DNS requests to Cloudflare.

Users have the option to choose between two providers — Cloudflare and NextDNS — both of which are trusted resolvers. Go to Settings, then General, then scroll down to Network Settings and click the Settings button on the right. From there, go to Enable DNS over HTTPS, then use the pull down menu to select the provider as your resolver.

Users can choose between two providers

We continue to explore enabling DoH in other regions, and are working to add more providers as trusted resolvers to our program. DoH is just one of the many privacy protections you can expect to see from us in 2020.

You can download the release here.

The post Firefox continues push to bring DNS over HTTPS by default for US users appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Wladimir Palant: McAfee WebAdvisor: From XSS in a sandboxed browser extension to administrator privileges

di, 25/02/2020 - 10:33

A while back I wrote about a bunch of vulnerabilities in McAfee WebAdvisor, a component of McAfee antivirus products which is also available as a stand-alone application. Part of the fix was adding a bunch of pages to the extension which were previously hosted on siteadvisor.com, generally a good move. However, when I looked closely I noticed a Cross-Site Scripting (XSS) vulnerability in one of these pages (CVE-2019-3670).

Now an XSS vulnerability in a browser extension is usually very hard to exploit thanks to security mechanisms like Content Security Policy and sandboxing. These mechanisms were intact for McAfee WebAdvisor and I didn’t manage to circumvent them. Yet I still ended up with a proof of concept that demonstrated how attackers could gain local administrator privileges through this vulnerability, something that came as a huge surprise to me as well.

McAfee WebAdvisor shattered like glass

Summary of the findings

Both the McAfee WebAdvisor browser extension and the HTML-based user interface of its UIHost.exe application use the jQuery library. This choice of technology proved quite fatal: not only did it contribute to both components being vulnerable to XSS, it also made the vulnerability exploitable in the browser extension where existing security mechanisms would normally make exploitation very difficult to say the least.

In the end, a potential attacker could go from a reflective XSS vulnerability in the extension to a persistent XSS vulnerability in the application to writing arbitrary Windows registry values. The latter can then be used to run any commands with privileges of the local system’s administrator. The attack could be performed by any website and the required user interaction would be two clicks anywhere on the page.

At the time of writing, McAfee has closed the XSS vulnerability in the WebAdvisor browser extensions, and users should update to version 8.0.0.37123 (Chrome) or 8.0.0.37627 (Firefox) ASAP. From the look of it, the XSS vulnerability in the WebAdvisor application remains unaddressed. The browser extensions no longer seem to have the capability to add whitelist entries, which gives the application some protection for now. It should still be possible for malware running with the user’s privileges to gain administrator access through it, however.

The initial vulnerability

When McAfee WebAdvisor prevents you from accessing a malicious page, a placeholder page is displayed instead. It contains a “View site report” link which brings you to the detailed information about the site. Here is what this site report looks like for the test website malware.wicar.org:

McAfee WebAdvisor site report for malware.wicar.org

This page happens to be listed in the extension’s web-accessible resources, meaning that any website can open it with any parameters. And the way it handles the url query parameter is clearly problematic. The following code has been paraphrased to make it more readable:

const url = new URLSearchParams(window.location.search).get("url");
const uri = getURI(url);
const text = localeData(`site_report_main_url_${state}`,
                        [`<span class="main__url">${uri}</span>`]);
$("#site_report_main_url").append(text);

This takes a query parameter, massages it slightly (function getURI() merely removes the query/anchor part from a URL) and then inserts it into a localization string. The end result is added to the page by means of jQuery.append(), a function that will parse HTML code and insert the result into the DOM. At no point does this ensure that HTML code in the parameter won’t cause harm.
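For contrast, the missing step would look something like this minimal sketch. The escapeHtml helper is hypothetical, not part of WebAdvisor's code; it shows how untrusted input can be neutralized before anything that parses HTML ever sees it.

```javascript
// Hypothetical escaping step: map every HTML metacharacter in untrusted
// input to its entity, so an HTML parser treats it as inert text.
function escapeHtml(s) {
  const map = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" };
  return s.replace(/[&<>"']/g, c => map[c]);
}

const url = "http://malware.wicar.org/<marquee>horrible</marquee>";
console.log(escapeHtml(url));
// http://malware.wicar.org/&lt;marquee&gt;horrible&lt;/marquee&gt;
```

Had the page run its url parameter through a step like this before calling jQuery.append(), the payload below would have rendered as harmless text.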

So a malicious website could open site_report.html?url=http://malware.wicar.org/<marquee>horrible</marquee> and subject the user to abhorrent animations. But what else could it do?

Exploiting XSS without running code

Any attempts to run JavaScript code through this vulnerability are doomed. That’s because the extension’s Content Security Policy looks like this:

"content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'",

Why did the developers relax the default policy by adding 'unsafe-eval' here? Beats me. But it doesn’t make the attacker’s job any easier, as there are no eval() or new Function() calls in this extension. Not even jQuery will call eval() for inline scripts, as newer versions of this library create actual inline script elements instead – something that browsers will disallow without an 'unsafe-inline' keyword in the policy.

So attackers have to use JavaScript functionality which is already part of the extension. The site report page has no interesting functionality, but the placeholder page does. This one has an “Accept the Risk” button which will add a blocked website to the extension’s whitelist. So my first proof of concept page used the following XSS payload:

<script type="module"
        src="chrome-extension://fheoggkfdfchfphceeifdbepaooicaho/block_page.js">
</script>
<button id="block_page_accept_risk"
        style="position: absolute; top: 0; left: 0; bottom: 0; width: 100%;">
  Dare to click!
</button>

This loads the extension’s block_page.js script so that it becomes active on the site report page. And it adds a button that this script will consider to be the “Accept the Risk” button – only that the button’s text and appearance are freely determined by the attacker. Here, the button is made to cover all of the page’s content, so that clicking anywhere will trigger it and produce a whitelist entry.

All the non-obvious quirks here are related to jQuery. Why type="module" on the script tag? Because jQuery will normally download script contents and turn them into an inline script which will then be blocked by the Content Security Policy. jQuery leaves ECMAScript modules alone however and these can execute normally.

Surprised that the script executes? That’s because jQuery doesn’t simply assign to innerHTML; it parses HTML code into a document fragment before inserting the results into the document.

Why don’t we have to generate any event to make the block_page.js script attach its event listeners? That’s because the script relies on jQuery.ready(), which will fire immediately if called in a document that has already finished loading.
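That initialization difference can be sketched without a browser at all. Everything below is an illustrative stand-in (makeDoc, onContentLoaded and onReady are not real DOM or jQuery APIs), modeling why a listener added after the event has fired never runs, while a ready()-style check still does:

```javascript
// A toy "document" with a readyState and pending listeners.
function makeDoc() {
  return { readyState: "loading", listeners: [] };
}
function finishLoading(doc) {
  doc.readyState = "complete";
  doc.listeners.forEach(fn => fn()); // the load event fires exactly once
  doc.listeners = [];
}
// DOMContentLoaded-style: only callbacks registered before the event run.
function onContentLoaded(doc, fn) {
  if (doc.readyState === "loading") doc.listeners.push(fn);
  // else: the event already fired; fn never runs
}
// jQuery.ready()-style: run immediately if loading already finished.
function onReady(doc, fn) {
  if (doc.readyState === "loading") doc.listeners.push(fn);
  else fn(); // guaranteed to run, even for a script injected later
}

const doc = makeDoc();
finishLoading(doc); // the page finished loading before our script arrived
let lateListener = false, lateReady = false;
onContentLoaded(doc, () => { lateListener = true; });
onReady(doc, () => { lateReady = true; });
console.log(lateListener, lateReady); // false true
```

This is exactly what made the injected block_page.js initialize reliably: its jQuery.ready() handler runs no matter how late the script loads.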

Getting out of the sandbox…

Now we’ve seen the general approach: exploiting existing extension functionality in order to execute actions without any code of our own. But are there actions that would produce more damage than merely adding whitelist entries? When looking through the extension’s functionality, I noticed an unused options page – extension configuration would normally be done by an external application. That page and the corresponding script could add and remove whitelist entries, for example. And I noticed that this page was vulnerable to XSS via malicious whitelist entries.

So there is another XSS vulnerability which wouldn’t let attackers execute any code, now what? But that options page is eerily similar to the one displayed by the external application. Could it be that the application is also displaying an HTML-based page? And maybe, just maybe, that page is also vulnerable to XSS? I mean, we could use the following payload to add a malicious whitelist entry if the user clicks somewhere on the page:

<script type="module"
        src="chrome-extension://fheoggkfdfchfphceeifdbepaooicaho/settings.js">
</script>
<button id="add-btn"
        style="position: absolute; top: 0; left: 0; bottom: 0; width: 100%;">
  Dare to click!
</button>
<input id="settings_whitelist_input"
       value="example.com<script>alert(location.href)</script>">

And if we then open WebAdvisor options…

McAfee WebAdvisor options running injected JavaScript code

This is no longer the browser extension, it’s the external UIHost.exe application. The application has an HTML-based user interface and it is also using jQuery to manipulate it, not bothering with XSS prevention of any kind. No Content Security Policy is stopping us here any more, anything goes.

Of course, attackers don’t have to rely on the user opening the options themselves. They can instead instrument further extension functionality to open the options programmatically. That’s what my proof of concept page did: the first click added a malicious whitelist entry, and the second click then opened the options.

… and into the system

It’s still regular JavaScript code that is executing in UIHost.exe, but it has access to various functions exposed via window.external. In particular, window.external.SetRegistryKey() is very powerful: it allows writing arbitrary registry values. And despite UIHost.exe running with user privileges, this call succeeds on HKEY_LOCAL_MACHINE keys as well – presumably asking the McAfee WebAdvisor service to do the work with administrator privileges.

So now attackers only need to choose a registry key that allows them to run arbitrary code. My proof of concept used the following payload in the end:

window.external.SetRegistryKey(
    'HKLM',
    'SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run',
    'command',
    'c:\\windows\\system32\\cmd.exe /k "echo Hi there, %USERNAME%!"',
    'STR', 'WOW64', false),
window.external.Close()

This calls window.external.Close() so that the options window closes immediately and the user most likely won’t even see it. But it also adds a command to the registry that will be run on startup. So when any user logs on, they will get this window greeting them:

Command line prompt greeting the user

Yes, I cheated. I promised administrator privileges, yet this command executes with user’s privileges. In fact, with privileges of any user of the affected system who dares to log on. Maybe you’ll just believe me that attackers who can write arbitrary values to the registry can achieve admin privileges? No? Ok, there is this blog post which explains how writing to \SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\wsqmcons.exe and adding a “debugger” will get you there.

Conclusions

The remarkable finding here: under some conditions, not even a restrictive Content Security Policy value will prevent an XSS vulnerability from being exploited. And “some conditions” in this particular case means: using jQuery. jQuery doesn’t merely encourage a coding style that’s prone to introducing XSS vulnerabilities – here, in both the browser extension and the standalone application. Its unusual handling of HTML code also means that <script> tags will be allowed to execute, unlike when a usual assignment to innerHTML is used. Without that, this vulnerability likely wouldn’t be exploitable.

Interestingly, the use of jQuery.ready() also contributed to this issue becoming exploitable. If the code used the DOMContentLoaded event or a similar initialization mechanism, there would be no way to make it initialize here. Yet a jQuery.ready() handler is guaranteed to run, and so an additional script would initialize regardless of when it loaded.

So the first recommendation to avoid such issues is definitely: use a modern framework that is inherently safe. For example, it’s far harder to introduce XSS vulnerabilities with React or Vue – normally no innerHTML or equivalents are used there. These frameworks will also only attach event listeners to elements that they created themselves; any elements injected into the DOM externally will stay without functionality.

It’s also important to have defense in depth, however. HTML-based user interfaces for applications seem to be rather popular with antivirus vendors. Even so, it should be possible to ensure that an XSS vulnerability there doesn’t result in catastrophic failure. With Internet Explorer being used under the hood, Content Security Policy doesn’t seem to be an option. But at least the functionality exposed via window.external can be limited in what it can do. For example, there is no reason to provide access to the entire registry when the function is only needed to install browser extensions.

Timeline
  • 2019-11-27: Reported vulnerability to McAfee via email, subject to 90 days deadline.
  • 2019-12-02: Got confirmation that the report was received, after asking about it.
  • 2020-02-03: Reminded McAfee of the approaching deadline, seeing that the issue is unresolved.
  • 2020-02-05: McAfee notified me that the Chrome extension has been fixed and remaining issues will be resolved shortly.
  • 2020-02-19: Again reminded McAfee of the deadline, seeing that issue is unresolved in Firefox extension and application.
  • 2020-02-21: McAfee published their writeup on CVE-2019-3670.
  • 2020-02-24: Received confirmation from McAfee that a fix for the Firefox extension has been released.
Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 327

di, 25/02/2020 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crates are wundergraph, a GraphQL interface library, and kibi, a text editor in a thousand lines of Rust.

Thanks to Georg Semmler and Vikrant for the suggestions!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

307 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Yoda must have hit his head, though. if let 42 = x {} "if let forty-two equals x"

Hutch on rust-internals

Thanks to Kornel for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Goals for USA FREEDOM reauthorization: reforms, access, and transparency

ma, 24/02/2020 - 21:09

At Mozilla, we believe that privacy is a fundamental digital right. We’ve built these values into the Firefox browser itself, and we’ve pushed Congress to pass strong legal protections for consumer privacy in the US. This week, Congress will have another opportunity to consider meaningful reforms to protect user privacy when it debates the reauthorization of the USA FREEDOM Act. We believe that Congress should amend this surveillance law to remove ineffective programs, bolster resources for civil liberties advocates, and provide more transparency for the public. More specifically, Mozilla supports the following reforms:

  • Elimination of the Call Detail Record program
  • Greater access to information for amicus curiae
  • Public access to all significant opinions from the Foreign Intelligence Surveillance Court

By taking these steps, we believe that Congress can ensure the continued preservation of civil liberties while allowing for appropriate government access to information when necessary.

Eliminate the Call Detail Record Program

First, Congress should revoke the authority for the Call Detail Record (CDR) program. The CDR program allows the federal government to obtain the metadata of phone calls and texts from telecom companies in the course of an international terrorism investigation, subject to specific statutory requirements. This program is simply no longer worthwhile. Over the last four years, the National Security Agency (NSA) scaled back its use of the program, and the agency has not used this specific authority since late 2018.

The decision to suspend the program is a result of challenges the agency encountered in effectively operating the program on three separate fronts. First, the NSA has had issues with the mass overcollection of phone records that it was not authorized to receive, intruding into the private lives of millions of Americans. This overcollection represents a mass breach of civil liberties–a systemic violation of our privacy. Without technical measures to maintain these protections, the program was no longer legally sustainable.

Second, the program may not provide sufficiently valuable insights in the current threat environment. In a recent Senate Judiciary Committee hearing, the government acknowledged that the intelligence value of the program was outweighed by the costs and technical challenges associated with its continued operation. This conclusion was supported by an independent analysis from the Privacy and Civil Liberties Oversight Board (PCLOB), which hopes to publicly release an unclassified version of its report in the near future. Additionally, the shift to other forms of communications may make it even less likely that law enforcement will obtain useful information through this specific authority in the future.

And finally, some technological shifts may have made the CDR program too complex to implement today. Citing “technical irregularities” in some of the data obtained from telecom providers under the program, the NSA last June deleted three years’ worth of CDRs that it was not authorized to receive. While the agency has not released a specific explanation, Susan Landau and Asaf Lubin of Tufts University have posited that the problem stems from challenges associated with measures in place to facilitate interoperability between landlines and mobile phone networks.

The NSA has recommended shutting down the program, and Congress should follow suit by allowing it to expire.

Bolster Access for Advocates

The past four years have illustrated the need to expand the resources of civil liberties advocates working within the Foreign Intelligence Surveillance Court (FISC) itself. Under statute, these internal advocates are designated by the court as a part of the amicus curiae program to provide assistance and insight. Advocates on the panel have expertise in privacy, civil liberties, or other relevant legal or technical issues, and the law mandates access to all materials that the court determines to be relevant to the duties of the amicus curiae.

Unfortunately, the law does not guarantee that the panel will have access to the data held by the government, limiting the insight of the panel into the actual merits of the case and impairing its ability to protect civil liberties. To correct this inequity, Congress should amend the statute to ensure that the amicus curiae has access to any information held by the government that the amicus believes is relevant to the purposes of its appointment. By doing so, amici will be better equipped to fulfill their duties with a more complete picture of the underlying case.

More Transparency for the Public

Finally, the American public also deserves more information about how surveillance authorities are being used. The USA FREEDOM Act had provisions intended to spur greater public disclosure, particularly regarding decisions or orders issued by the Foreign Intelligence Surveillance Court (FISC) that contained significant interpretations of the Constitution or other laws. While the Court was originally designed in 1978 to address a relatively narrow universe of foreign surveillance cases, its mission has since expanded and it now routinely rules on fundamental issues of Constitutional law that may affect every citizen. Given the importance and scope of this growing body of jurisprudence, Congress should protect public access to the law and ensure that all cases with significant opinions are released.

Historically, the FISC has been reluctant to publicly release its decisions, despite the best efforts of civil society advocates. In recent litigation, the most prominent point of contention has been whether the FISC is obligated to release opinions issued before the enactment of the USA FREEDOM Act in May 2015. The legislative history and the text of the statute itself do not include any reference to such a limitation – senior members of the House Judiciary Committee explicitly stated during legislative debate that the provision was intended to require the government to “declassify and publish all novel significant opinions” of the FISC. Congress should take this opportunity to re-emphasize their intent by amending the statute with explicit language to compel the release of all such decisions.

Ultimately, these reforms represent a necessary step forward to better protect the civil liberties of US users. By removing ineffective programs, expanding access to information for internal advocates, and providing greater transparency for the public, Congress can build upon critical privacy safeguards and support checks on the future use of surveillance authorities. Our hope is that Congress will adopt these reforms and give people the protections that they deserve.

 

The post Goals for USA FREEDOM reauthorization: reforms, access, and transparency appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Firefox UX: Critiquing Design

ma, 24/02/2020 - 17:04

A man in a white leotard holding a large yoga ball on a small stage.

This is me about 25 years ago, dancing with a yoga ball. I was part of a theater company where I first learned Liz Lerman’s Critical Response Process. We used this extensively—it was an integral part of our company dynamic. We used it to develop company work, we used it in our education programs and we even used it to redesign our company structure. It was a formative part of my development as an artist, a teacher, and later, as a user-centered designer.

What I love about this process is that it works by embedding all the things we strive for in a critique into a deceptively simple, step-by-step process. You don’t have to try to remember everything the next time you’re knee-deep in a critique session. It’s knowledge in the world for critique sessions.

So a while back I started thinking about how I could adapt this to critiquing design and I introduced the process to the Firefox UX team. We’ve been practicing and refining it since last summer and we’ve really found it helpful. Recently I’ve been getting requests to share our adapted version with other design teams so I’ve collected my notes and the cheat sheet we use to keep us on track over at critiquing.design. Please feel free to try it out. Let me know if you have questions or ideas to make it better. I’ve also started a GitHub repo for it so you can file an issue or make pull request if you want.

The other great part about this is that it changes critique from something like:

The four steps of design critique: a photo of a sign that reads, "See them swarm. Hear them buzz. Feel the cringe. Conquer the dread."

To a time that I look forward to and that leaves me inspired by the smart, creative people I work with.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR20b1 available

za, 22/02/2020 - 07:09

TenFourFox Feature Parity Release 20 beta 1 is now available (downloads, hashes, release notes). Now here's an acid test: I watched the entire Democratic presidential debate live-streamed on the G5 with TenFourFox FPR20b1 and the MP4 Enabler, and my G5 did not burst into flames either from my code or their rhetoric. (*Note: Any politician of any party can set your G5 on fire. It's what they do.)

When using FPR20 you should notice ... absolutely nothing. Sites should just appear as they do; the only way you'd know anything changed in this version is if you pressed Command-I and looked at the Security tab to see that you're connected over TLS 1.3, the latest TLS security standard. In fact, the entirety of the debate was streamed over it, and to the best of my knowledge TenFourFox is the only browser that implements TLS 1.3 on Power Macs running Mac OS X. On regular Firefox your clue would be seeing occasional status messages about handshakes, but I've even disabled that for TenFourFox to avoid wholesale invalidating our langpacks, which entirely lack those strings. Other than a couple of trivial DOM updates I wrote up because they were easy, there are essentially no changes in this FPR besides the TLS enablement, to limit the regression range. If you find a site that does not work, verify first that it works in FPR19 or FPR18, because sites change more than we do, and see if setting security.tls.version.max to 3 (instead of 4) fixes it. You may need to restart the browser to make sure. If this does seem to reliably fix the problem, report it in the comments. A good test site is Google or Mozilla itself. The code we are using is largely the same as current Firefox's.

I have also chosen to enable Zero Round Trip Time (0-RTT) to gain further speed. The idea with 0RTT is that the browser, having connected to a particular server before and knowing all the details, will not wait for a handshake and immediately starts sending encrypted data using a previous key modified in a predictable fashion. This skips a handshake round trip and establishes the connection faster. If many connections or requests are made, the potential savings could be considerable, such as to a site like Google where many connections may be made in a short period of time. 0RTT only happens if both the client and server allow it, and there are a couple of technical limitations in our implementation that restrict it further (I'm planning to address these assuming the base implementation sticks).

0RTT comes with a couple caveats which earlier made it controversial, and some browsers have chosen to unconditionally disable it in response. A minor one is a theoretical risk to forward secrecy, which would require a malicious server caching keys (but a malicious server could do other more useful things to compromise you) or an attack on TenFourFox to leak those keys. Currently neither is a realistic threat at present, but the potential exists. A slightly less minor concern is that users requiring anonymity may be identified by 0RTT pre-shared keys being reused, even if they change VPNs or exit nodes. However, the biggest is that, if handled improperly, it may facilitate a replay attack where encrypted data could be replayed to trigger a particular action on the server (such as sending Mallory a billion dollars twice). Firefox/TenFourFox avoid this problem by never using 0RTT when a non-idempotent action is taken (i.e., any HTTP method that semantically changes state on the server, as opposed to something that merely fetches data), even if the server allows it. Furthermore, if the server doesn't allow it, then 0RTT is not used (and if the server inappropriately allows it, consider that their lack of security awareness means they're probably leaking your data in other, more concerning ways too).
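The method restriction described above can be sketched roughly as follows. This is a simplification with an illustrative function name (mayUseEarlyData), not the actual Necko logic: only HTTP methods that merely fetch data, without changing state on the server, are treated as eligible for 0-RTT early data.

```javascript
// Sketch of the replay safeguard: a replayed GET just fetches the same
// data twice, while a replayed POST could repeat a state-changing action,
// so only read-only methods may ride in 0-RTT early data.
const SAFE_METHODS = new Set(["GET", "HEAD", "OPTIONS", "TRACE"]);

function mayUseEarlyData(method) {
  return SAFE_METHODS.has(method.toUpperCase());
}

console.log(mayUseEarlyData("GET"), mayUseEarlyData("POST")); // true false
```

Under this rule, a request that "sends Mallory a billion dollars" always waits for the full handshake, so replaying the early data cannot repeat it.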

All that said, even considering these deficiencies, TLS 1.3 with 0-RTT is still more secure than TLS 1.2, and the performance benefits are advantageous. Thus, the default in current Firefox is to enable it, and I think this is the right call for TenFourFox as well. If you disagree, set security.tls.enable_0rtt_data to false.

FPR20 goes live on or about March 10.

Categorieën: Mozilla-nl planet
