Mozilla Joins New Partners to Fund Open Source Digital Infrastructure Research

Mozilla Blog - Thu, 23/07/2020 - 17:58

Today, Mozilla is pleased to announce that we’re joining the Ford Foundation, the Sloan Foundation, and the Open Society Foundations to launch a request for proposals (RFP) for research on open source digital infrastructure. To kick off this RFP, we’re joining with our philanthropic partners to host a webinar today at 9:30 AM Pacific. The Mozilla Open Source Support Program (MOSS) is contributing $25,000 to this effort.

Nearly everything in our modern society, from hospitals and banks to universities and social media platforms, runs on “digital infrastructure” – a foundation of open source code that is designed to solve common challenges. The benefits of digital infrastructure are numerous: it can reduce the cost of setting up new businesses, support data-driven discovery across research disciplines, enable complex technologies such as smartphones to talk to each other, and allow everyone to have access to important innovations like encryption that would otherwise be too expensive.

In joining with these partners for this funding effort, Mozilla hopes to propel further investigation into the sustainability of open source digital infrastructure. Selected researchers will help determine the role companies and other private institutions should play in maintaining a stable ecosystem of open source technology, the policy and regulatory considerations for the long-term sustainability of digital infrastructure, and much more. These aims align with Mozilla’s pledge for a healthy internet, and we’re confident that these projects will go a long way towards deepening a crucial collective understanding of the industrial maintenance of digital infrastructure.

We’re pleased to invite interested researchers to apply to the RFP, using the application found here. The application opened on July 20, 2020, and will close on September 4, 2020. Finalists will be notified in October, at which point full proposals will be requested. Final proposals will be selected in November.

More information about the RFP is available here.

The post Mozilla Joins New Partners to Fund Open Source Digital Infrastructure Research appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

A look at password security, Part III: More secure login protocols

Mozilla Blog - Tue, 21/07/2020 - 02:06

In part II, we looked at the problem of Web authentication and covered the twin problems of phishing and password database compromise. In this post, I’ll be covering some of the technologies that have been developed to address these issues.

This is mostly a story of failure, though with a sort of hopeful note at the end. The ironic thing here is that we’ve known for decades how to build authentication technologies which are much more secure than the kind of passwords we use on the Web. In fact, we use one of these technologies — public key authentication via digital certificates — to authenticate the server side of every HTTPS transaction before you send your password over. HTTPS supports certificate-based client authentication as well, and while public key client authentication is commonly used in other settings, such as SSH, it’s rarely used on the Web. Even if we restrict ourselves to passwords, we have long had technologies for password authentication which completely resist phishing, but they are not integrated into the Web technology stack at all. The problem, unfortunately, is less about cryptography than about deployability, as we’ll see below.

Two Factor Authentication and One-Time Passwords

The most widely deployed technology for improving password security goes by the name one-time passwords (OTP) or (more recently) two-factor authentication (2FA). OTP actually goes back to well before the widespread use of encrypted communications or even the Web, to the days when people would log in to servers in the clear using Telnet. It was of course well known that Telnet was insecure and that anyone who shared the network with you could just sniff your password off the wire[1] and then log in with it. [Technical note: this is called a replay attack.] One partial fix for this attack was to supplement the user password with another secret which wasn’t static but rather changed every time you logged in (hence a “one-time” password).

OTP systems came in a variety of forms, but the most common was a token about the size of a car key fob with an LCD display.

The token would produce a new pseudorandom numeric code every 30 seconds or so, and when you went to log in to the server you would provide both your password and the current code. That way, even if the attacker got the code, they still couldn’t log in as you for more than a brief period[2] unless they also stole your token. If all of this looks familiar, it’s because this is more or less the same as modern OTP systems such as Google Authenticator, except that instead of a hardware token, these systems tend to use an app on your phone and have you log into some Web form rather than over Telnet. The reason this is called “two-factor authentication” is that authenticating requires both a value you know (the password) and something you have (the device). Some other systems use a code that is sent over SMS but the basic idea is the same.
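
For the curious, here is a minimal sketch of the RFC 6238 scheme that most of these code generators use (see footnote 3 below). It is JavaScript using Node’s crypto module; the function and parameter names are mine, and real authenticator apps add base32 key handling, clock-drift windows, and so on:

const crypto = require('crypto');

// secret: the raw shared key as a Buffer (authenticator apps usually store it base32-encoded)
function totpCode(secret, stepSeconds = 30, digits = 6) {
  // Counter = number of completed time steps since the Unix epoch, packed big-endian into 8 bytes
  const counter = Math.floor(Date.now() / 1000 / stepSeconds);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));

  const mac = crypto.createHmac('sha1', secret).update(msg).digest();

  // Dynamic truncation (RFC 4226): take 4 bytes at an offset given by the low nibble of the last byte
  const offset = mac[mac.length - 1] & 0x0f;
  const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return String(code).padStart(digits, '0');
}

Both the phone and the server run the same computation, which is why, as footnote 3 below explains, the server also has to hold the secret.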

OTP systems don’t provide perfect security, but they do significantly improve the security of a password-only system in two respects:

  1. They guarantee a strong, non-reused secret. Even if you reuse passwords and your password on site A is compromised, the attacker still won’t have the right code for site B.[3]
  2. They mitigate the effect of phishing. If you are successfully phished, the attacker will get the current code for the site and can log in as you, but they won’t be able to log in in the future because knowing the current code doesn’t let them predict a future code. This isn’t great, but it’s better than nothing.

The nice thing about a 2FA system is that it’s comparatively easy to deploy: it’s a phone app you download plus another code that the site prompts you for. As a result, phone-based 2FA systems are very popular (and if that’s all you have, I advise you to use it, but really you want WebAuthn, which I’ll be describing in my next post).

Password Authenticated Key Agreement

One of the nice properties of 2FA systems is that they do not require modifying the client at all, which is obviously convenient for deployment. That way you don’t care if users are running Firefox or Safari or Chrome, you just tell them to get the second factor app and you’re good to go. However, if you can modify the client you can protect your password rather than just limiting the impact of having it stolen. The technology to do this is called a Password Authenticated Key Agreement (PAKE) protocol.

The way a PAKE would work on the Web is that it would be integrated into the TLS connection that already secures your data on its way to the Web server. On the client side when you enter your password the browser feeds it into TLS and on the other side, the server feeds in a verifier (effectively a password hash). If the password matches the verifier, then the connection succeeds, otherwise it fails. PAKEs aren’t easy to design — the tricky part is ensuring that the attacker has to reconnect to the server for each guess at the password — but it’s a reasonably well understood problem at this point and there are several PAKEs which can be integrated with TLS.

What a PAKE gets you is security against phishing: even if you connect to the wrong server, it doesn’t learn anything about your password that it doesn’t already know because you just get a cryptographic failure. PAKEs don’t help against password file compromise because the server still has to store the verifier, so the attacker can perform a password cracking attack on the verifier just as they would on the password hash. But phishing is a big deal, so why doesn’t everyone use PAKEs? The answer here seems to be surprisingly mundane but also critically important: user interface.

The way that most Web sites authenticate is by showing you a Web page with a field where you can enter your password, as shown below:

Firefox accounts login box

When you click the “Sign In” button, your password gets sent to the server which checks it against the hash as described in part I. The browser doesn’t have to do anything special here (though often the password field will be specially labelled so that the browser can automatically mask out your password when you type); it just sends the contents of the field to the server.
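
As a rough sketch of what that server-side check amounts to (assuming the salted, slow password hash setup described in part I; scrypt is used here purely as an example, and the names are mine):

const crypto = require('crypto');

// salt and storedHash are Buffers pulled from the account record for this user.
function passwordMatches(submittedPassword, salt, storedHash) {
  const derived = crypto.scryptSync(submittedPassword, salt, storedHash.length); // slow, salted hash
  return crypto.timingSafeEqual(derived, storedHash);                            // constant-time compare
}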

In order to use a PAKE, you would need to replace this with a mechanism where you gave the browser your password directly. Browsers actually have something for this, dating back to the earliest days of the Web. On Firefox it looks like this:

Basic auth login

Hideous, right? And I haven’t even mentioned the part where it’s a modal dialog that takes over your experience. In principle, of course, this might be fixable, but it would take a lot of work and would still leave the site with a lot less control over their login experience than they have now; understandably they’re not that excited about that. Additionally, while a PAKE is secure from phishing if you use it, it’s not secure if you don’t, and nothing stops the phishing site from skipping the PAKE step and just giving you an ordinary login page, hoping you’ll type in your password as usual.

None of this is to say that PAKEs aren’t cool tech, and they make a lot of sense in systems that have less flexible authentication experiences; for instance, your email client probably already requires you to enter your authentication credentials into a dialog box, and so that could use a PAKE. They’re also useful for things like device pairing or account access where you want to start with a small secret and bootstrap into a secure connection. Apple is known to use SRP, a particular PAKE, for exactly this reason. But because the Web already offers a flexible experience, it’s hard to ask sites to take a step backwards and PAKEs have never really taken off for the Web.

Public Key Authentication

From a security perspective, the strongest thing would be to have the user authenticate with a public/private key pair, just like the Web server does. As I said above, this is a feature of TLS that browsers actually have supported (sort of) for a really long time, but the user experience is even more appalling than for built-in passwords.[4] In principle, some of these technical issues could have been fixed, but even if the interface had been better, many sites would probably still have wanted to control the experience themselves. In any case, public key authentication saw very little usage.

It’s worth mentioning that public key authentication actually is reasonably common in dedicated applications, especially in software development settings. For instance, the popular SSH remote login tool (replacing the unencrypted Telnet) is commonly used with public key authentication. In the consumer setting, Apple AirDrop uses iCloud-issued certificates with TLS to authenticate your contacts.

Up Next: FIDO/WebAuthn

This was the situation for about 20 years: in theory public key authentication was great, but in practice it was nearly unusable on the Web. Everyone used passwords, some with 2FA and some without, and nobody was really happy. There had been a few attempts to try to fix things but nothing really stuck. However, in the past few years a new technology called WebAuthn has been developed. At heart, WebAuthn is just public key authentication but it’s integrated into the Web in a novel way which seems to be a lot more deployable than what has come before. I’ll be covering WebAuthn in the next post.

  1. And by “wire” I mean a literal wire, though such sniffing attacks are also prevalent in wireless networks such as those protected by WPA2.
  2. Note that to really make this work well, you also need to require a new code in order to change your password; otherwise the attacker can change your password for you in that window.
  3. Interestingly, OTP systems are still subject to server-side compromise attacks. The way that most of the common systems work is to have a per-user secret which is then used to generate a series of codes, e.g., truncated HMAC(Secret, time) (see RFC 6238). If an attacker compromises the secret, then they can generate the codes themselves. One might ask whether it’s possible to design a system which didn’t store a secret on the server but rather some public verifier (e.g., a public key), but this does not appear to be secure if you also want to have short (e.g., six-digit) codes. The reason is that if the information that is used to verify is public, the attacker can just iterate through every possible six-digit code and try to verify it themselves. This is easily possible during the roughly 30-second lifetime of the codes. Thanks to Dan Boneh for this insight.
  4. The details are kind of complicated here, but just some of the problems: (1) TLS client authentication is mostly tied to certificates, and the process of getting a certificate into the browser was just terrible; (2) the certificate selection interface is clunky; (3) until TLS 1.3, the certificate was actually sent in the clear unless you did TLS renegotiation, which had its own problems, particularly around privacy.

Update: 2020-07-21: Fixed up a sentence.

The post A look at password security, Part III: More secure login protocols appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Mozilla Puts Its Trusted Stamp on VPN

Mozilla Blog - Wed, 15/07/2020 - 16:16

Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows and Android devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.

See for yourself how the Mozilla VPN works.

The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is based on modern, lean technology: the WireGuard protocol’s roughly 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.

You will also see a simple, easy-to-use interface, whether you are new to VPNs or just want to set it up and get onto the web.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.

In a market crowded by companies making promises about privacy and security, it can be hard to know who to trust. Mozilla has a reputation for building products that help you keep your information safe. We follow our easy-to-read, no-nonsense Data Privacy Principles, which allow us to focus only on the information we need to provide a service. We don’t keep user data logs.

We don’t partner with third-party analytics platforms that want to build a profile of what you do online. And since the makers of this VPN are backed by a mission-driven company, you can trust that the dollars you spend on this product will not only get you a top-notch VPN but also help make the internet better for everyone.

Simple and easy-to-use switch

Last year, we beta tested our VPN service which provided encryption and device-level protection of your connection and information on the Web. Many users shared their thoughts on why they needed this service.

Some of the top reasons users cited for using a VPN:

  • Security for all your devices – Users are flocking to VPNs for added protection online. With Mozilla VPN you can be sure your activity is encrypted across all applications and websites, whatever device you are on.
  • Added protection for your private information – Over 50 percent of VPN users in the US and UK said that seeking protection when using public wi-fi was a top reason for choosing a VPN service.
  • Browse more anonymously – Users care immensely about being anonymous when they choose to. A VPN is a key component as it encrypts all your traffic and protects your IP address and location.
  • Communicate more securely – Using a VPN can give an added layer of protection, ensuring every conversation you have is encrypted over the network.

In a world where unpredictability has become the “new normal,” we know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business.

Check out the Mozilla VPN and download it from our website or Google Play store.

The post Mozilla Puts Its Trusted Stamp on VPN appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

A look at password security, Part II: Web Sites

Mozilla Blog - Mon, 13/07/2020 - 18:36

In part I, we took a look at the design of password authentication systems for old-school multiuser systems. While timesharing is mostly gone, most of us continue to use multiuser systems; we just call them Web sites. In this post, I’ll be covering some of the problems of Web authentication using passwords.

As I discussed previously, the strength of passwords depends to a great extent on how fast the attacker can try candidate passwords. The nature of a Web application inherently limits how fast you can try passwords. Even ignoring limits on the rate at which you can transmit stuff over the network, real systems — at least well managed ones — have all kinds of monitoring software which is designed to detect large numbers of login attempts, so just trying millions of candidate passwords is not very effective. This doesn’t mean that remote attacks aren’t possible: you can of course try to log in with some of the obvious passwords and hope you get lucky, and if you have a good idea of a candidate password, you can try that (see below), but this kind of attack is inherently somewhat limited.

Remote compromise and password cracking

Of course, this kind of limitation in the number of login attempts you could make also applied to the old multiuser systems, and the way you attack Web sites is the same: get a copy of the password file and remotely crack it.

The way this plays out is that somehow the attacker exploits a vulnerability in the server’s system to compromise the password database.[1] They can then crack it offline and try to recover people’s passwords. Once they’ve done that, they can then use those passwords to log into the site themselves. If a site’s password database is stolen, its strongest defense is to reset everyone’s password, which is obviously really inconvenient, harms the site’s brand, and runs the risk of user attrition, and so doesn’t always happen.
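
Conceptually the offline step is simple. Here is a sketch, assuming the stolen records hold a per-user salt and an scrypt hash (real cracking rigs use GPUs and far smarter candidate generation than a plain wordlist):

const crypto = require('crypto');

// salt and storedHash (Buffers) come from one stolen record; guesses is a wordlist plus mangling rules.
function crackOne(salt, storedHash, guesses) {
  for (const guess of guesses) {
    const derived = crypto.scryptSync(guess, salt, storedHash.length);
    if (crypto.timingSafeEqual(derived, storedHash)) {
      return guess; // password recovered
    }
  }
  return null; // not in the wordlist; try harder rules or give up
}

The only thing standing between the attacker and the passwords is how many guesses per second the hash function allows, which is why slow hashing algorithms and strong, unique passwords matter so much.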

To make matters worse, many users use the same password on multiple sites, so once you have broken someone’s password on one site, you can then try to log in as them on other sites with the same password, even if the user’s password was reset on the site which was originally compromised. Even though this is an online attack, it’s still very effective, because password reuse is so common (this is one reason why it’s a bad idea to reuse passwords).

Password database disclosure is unfortunately quite a common occurrence, so much so that there are services such as Firefox Monitor and Have I been pwned? devoted to letting users know when some service they have an account on has been compromised.

Assuming a site is already following best practices (long passwords, slow password hashing algorithms, salting, etc.), then the next step is to either make it harder to steal the password hash or to make the password hash less useful. A good example here is the Facebook system described in this talk by Alec Muffett (famous for, among other things, the Crack password cracker). The system uses multiple layers of hashing, one of which is a keyed hash [technically, HMAC-SHA256] performed on a separate, hardened machine. Even if you compromise the password hash database, it’s not useful without the key, which means you would also have to compromise that machine as well.[2]
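
The keyed-hash idea is easy to illustrate. A minimal sketch of that layer (not Facebook’s actual code; key handling is reduced to a single in-memory value for brevity):

const crypto = require('crypto');

// Runs on the separate, hardened machine; the key never leaves it.
// In a real deployment the key would live in an HSM or a dedicated key store, not a process variable.
const HMAC_KEY = crypto.randomBytes(32);

// Input is the already-hashed password, not the plaintext; the output is what the main database stores.
function keyedLayer(passwordHashHex) {
  return crypto.createHmac('sha256', HMAC_KEY).update(passwordHashHex).digest('hex');
}

Without the key, the values in a stolen database cannot even be tested against candidate passwords.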

Another defense is to use one-time password systems (often also called two-factor authentication systems). I’ll cover those in a future post.

Phishing

Leaked passwords aren’t the only threat to password authentication on Web sites. The other big issue is what’s called phishing. In the basic phishing attack, the attacker sends you an e-mail inviting you to log into your account. Often this will be phrased in some scary way like telling you your account will be deleted if you don’t log in immediately. The e-mail will helpfully contain a link to use to log in, but of course this link will go not to the real site but to the attacker’s site, which will usually look just like the real site and may even have a similar domain name (e.g., mozi11a.com instead of mozilla.com). When the user clicks on the link and logs in, the attacker captures their username and password and can then log into the real site. Note that having users use good passwords totally doesn’t help here because the user gives the site their whole password.

Preventing phishing has proven to be a really stubborn challenge because, well, people are not as suspicious as they should be and it’s actually fairly hard on casual examination to determine whether you are on the right site. Most modern browsers try to warn users if they are going to known phishing sites (Firefox uses the Google Safe Browsing service for this). In addition, if you use a password manager, then it shouldn’t automatically fill in your password on a phishing site because password managers key off of the domain name and just looking similar isn’t good enough. Of course, both of these defenses are imperfect: the lists of phishing sites can be incomplete, and if users don’t use password managers or are willing to manually cut and paste their passwords, then phishing attacks are still possible.[3]
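
The domain-matching point is worth making concrete. A simplified sketch of the kind of rule a password manager applies before offering a saved credential (real implementations have more nuance around subdomains and schemes):

// A saved credential is only offered when the page's origin matches the stored origin exactly;
// "looks similar" is not a match.
function shouldAutofill(storedOrigin, currentPageUrl) {
  return new URL(currentPageUrl).origin === storedOrigin;
}

shouldAutofill('https://accounts.example.com', 'https://accounts.example.com/login'); // true
shouldAutofill('https://accounts.example.com', 'https://accounts.examp1e.com/login'); // false: lookalike domain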

Beyond Passwords

The good news is that we now have standards and technologies which are better than simple passwords and are more resistant to these kinds of attacks. I’ll be talking about them in the next post.

  1. An even more serious security issue occurs when application developers mistakenly write plaintext user passwords to debug logs. This allows the attacker to target the logging system and get immediate access to passwords without having to do any sort of computational work.
  2. The Facebook system is actually pretty ornate. At least as of 2014 they had five separate layers: MD5, HMAC-SHA1 (with a public salt), HMAC-SHA256 (with a secret key), Scrypt, and then HMAC-SHA256 (with a public salt) again. Muffett’s talk and this post do a good job of providing the detail, but this design is due to a combination of technical requirements. In particular, the reason for the MD5 stage is that an older system just had MD5-hashed passwords, and because Facebook doesn’t know the original passwords it can’t convert them to some other algorithm; it’s easiest to just layer another hash on.
  3. This is an example of a situation in which the difficulty of implementing a good password manager makes the problem much worse. Sites vary a lot in how they present their password dialogs and so password managers have trouble finding the right place to fill in the password. This means that users sometimes have to type the password in themselves even if there is actually a stored password, teaching them bad habits which phishers can then exploit. 

The post A look at password security, Part II: Web Sites appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Frederik Braun: Hardening Firefox against Injection Attacks – The Technical Details

Mozilla planet - Tue, 07/07/2020 - 00:00

This blog post has first appeared on the Mozilla Attack & Defense blog and was co-authored with Christoph Kerschbaumer and Tom Ritter

In a recent academic publication titled Hardening Firefox against Injection Attacks (to appear at SecWeb – Designing Security for the Web) we describe techniques which we have incorporated into Firefox …

Categories: Mozilla-nl planet

Mozilla Accessibility: Broadening Our Impact

Mozilla planet - Mon, 06/07/2020 - 20:35

Last year, the accessibility team worked to identify and fix gaps in our screen reader support, as well as on some new areas of focus, like improving Firefox for users with low vision. As a result, we shipped some great features. In addition, we’ve begun building awareness across Mozilla and putting in place processes to help ensure delightful accessibility going forward, including a Firefox wide triage process.

With a solid foundation for delightful accessibility well underway, we’re looking at the next step in broadening our impact: expanding our engagement with our passionate, global community. It’s our hope that we can get to a place where a broad community of interested people become active participants in the planning, design, development and testing of Firefox accessibility. To get there, the first step is open communication about what we’re doing and where we’re headed.

To that end, we’ve created this blog to keep you all informed about what’s going on with Firefox accessibility. As a second step, we’ve published the Firefox Accessibility Roadmap. This document is intended to communicate our ongoing work, connecting the dots from our aspirations, as codified in our Mission and Manifesto, through our near term strategy, right down to the individual work items we’re tackling today. The roadmap will be updated regularly to cover at least the next six months of work and ideally the next year or so.

Another significant area of new documentation, pushed by Eitan and Morgan, is around our ongoing work to bring VoiceOver support to Firefox on macOS. In addition to the overview wiki page, which covers our high level plan and specific lists of bugs we’re targeting, there’s also a work in progress architectural overview and a technical guide to contributing to the Mac work.

We’ve also transitioned most of our team technical discussions from a closed Mozilla Slack to the open and participatory Matrix instance. Some exciting conversations are already happening and we hope that you’ll join us.

And that’s just the beginning. We’re always improving our documentation and onboarding materials so stay tuned to this channel for updates. We hope you find access to the team and the documents useful and that if something in our docs calls out to you that you’ll find us on Matrix and help out, whether that’s contributing ideas for better solutions to problems we’re tackling, writing code for features and fixes we need, or testing the results of development work.

We look forward to working with you all to make the Firefox family of products and services the best they can be, a delight to use for everyone, especially people with disabilities.

The post Broadening Our Impact appeared first on Mozilla Accessibility.

Categories: Mozilla-nl planet

Mozilla Privacy Blog: Next Steps for Net Neutrality

Mozilla planet - Mon, 06/07/2020 - 18:01

Two years ago we first brought Mozilla v. FCC in federal court, in an effort to save the net neutrality rules protecting American consumers. Mozilla has long fought for net neutrality because we believe that the internet works best when people control their own online experiences.

Today is the deadline to petition the Supreme Court for review of the D.C. Circuit decision in Mozilla v. FCC. After careful consideration, Mozilla—as well as its partners in this litigation—is not seeking Supreme Court review of the D.C. Circuit decision. Even though we did not achieve all that we hoped for in the lower court, the court recognized the flaws of the FCC’s action and sent parts of it back to the agency for reconsideration. And the court cleared a path for net neutrality to move forward at the state level. We believe the fight is best pursued there, as well as on other fronts including Congress or a future FCC.

Net neutrality is more than a legal construct. It is a reflection of the fundamental belief that ISPs have tremendous power over our online experiences and that power should not be further concentrated in actors that have often demonstrated a disregard for consumers and their digital rights. The global pandemic has moved even more of our daily lives—our work, school, conversations with friends and family—online. Internet videos and social media debates are fueling an essential conversation about systemic racism in America. At this moment, net neutrality protections ensuring equal treatment of online traffic are critical. Recent moves by ISPs to favor their own content channels or impose data caps and usage-based pricing make concerns about the need for protections all the more real.

The fight for net neutrality will continue on. The D.C. Circuit decision positions the net neutrality movement to continue on many fronts, starting with a defense of California’s strong new law to protect consumers online—a law that was on hold pending resolution of this case.

Other states have followed suit and we expect more to take up the mantle. We will look to a future Congress or future FCC to take up the issue in the coming months and years. Mozilla is committed to continuing our work, with our broad community of allies, in this movement to defend the web and consumers and ensure the internet remains open and accessible to all.

The post Next Steps for Net Neutrality appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

Mozilla Security Blog: Performance Improvements via Formally-Verified Cryptography in Firefox

Mozilla planet - Mon, 06/07/2020 - 16:00

Cryptographic primitives, while extremely complex and difficult to implement, audit, and validate, are critical for security on the web. To ensure that NSS (Network Security Services, the cryptography library behind Firefox) abides by Mozilla’s principle of user security being fundamental, we’ve been working with Project Everest and the HACL* team to bring formally-verified cryptography into Firefox.

In Firefox 57, we introduced formally-verified Curve25519, which is a mechanism used for key establishment in TLS and other protocols. In Firefox 60, we added ChaCha20 and Poly1305, providing high-assurance authenticated encryption. Firefox 69, 77, and 79 improve and expand these implementations, providing increased performance while retaining the assurance granted by formal verification.

Performance & Specifics

For key establishment, we recently replaced the 32-bit implementation of Curve25519 with one from the Fiat-Crypto project. The arbitrary-precision arithmetic functions of this implementation are proven to be functionally correct, and it improves performance by nearly 10x over the previous code. Firefox 77 updates the 64-bit implementation with new HACL* code, benefitting from a ~27% speedup. Most recently, Firefox 79 also brings this update to Windows. These improvements are significant: Telemetry shows Curve25519 to be the most widely used elliptic curve for ECDH(E) key establishment in Firefox, and increased throughput reduces energy consumption, which is particularly important for mobile devices.

64-bit Curve25519 with HACL*

32-bit Curve25519 with Fiat-Crypto
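
As a reminder of what this primitive does, here is an illustration of an X25519 (Curve25519) key agreement using Node’s crypto module rather than NSS: each side combines its own private key with the peer’s public key and arrives at the same shared secret.

const crypto = require('crypto');

const alice = crypto.generateKeyPairSync('x25519');
const bob = crypto.generateKeyPairSync('x25519');

// Each side derives the shared secret from its private key and the other side's public key.
const secretA = crypto.diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
const secretB = crypto.diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });

console.log(secretA.equals(secretB)); // true: both ends now hold the same ECDH(E) secret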

For encryption and decryption, we improved the performance of ChaCha20-Poly1305 in Firefox 77. Throughput is doubled by taking advantage of vectorization with 128-bit and 256-bit integer arithmetic (via the AVX2 instruction set on x86-64 CPUs). When these features are unavailable, NSS will fall back to an AVX or scalar implementation, both of which have been further optimized.

ChaCha20-Poly1305 with HACL* and AVX2

The HACL* project has introduced new techniques and libraries to improve efficiency in writing verified primitives for both scalar and vectorized variants. This allows aggressive code sharing and reduces the verification effort across many different platforms.

What’s Next?

For Firefox 81, we intend to incorporate a formally-verified implementation of the P256 elliptic curve for ECDSA and ECDH. Middle-term targets for verified implementations include GCM, the P384 and P521 elliptic curves, and the ECDSA signature scheme itself. While there remains work to be done, these updates provide an improved user experience and ease the implementation burden for future inclusion of platform-optimized primitives.

The post Performance Improvements via Formally-Verified Cryptography in Firefox appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Wladimir Palant: Dismantling BullGuard Antivirus' online protection

Mozilla planet - Mon, 06/07/2020 - 15:02

Just like so many other antivirus applications, BullGuard antivirus promises to protect you online. This protection consists of the three classic components: protection against malicious websites, marking of malicious search results and BullGuard Secure Browser for your special web surfing needs. As so often, this functionality comes with issues of its own, some being unusually obvious.

Chihuahua looking into a mirror and seeing a bulldog (BullGuard logo). Image credits: BullGuard, kasiagrafik, GDJ, rygle

Summary of the findings

The first and very obvious issue was found in the protection against malicious websites. While this functionality often cannot be relied upon, circumventing it typically requires some effort. Not so with BullGuard Antivirus: merely adding a hardcoded character sequence to the address would make BullGuard ignore a malicious domain.

Further issues affected BullGuard Secure Browser: multiple Cross-Site Scripting (XSS) vulnerabilities in its user interface potentially allowed websites to spy on the user or crash the browser. The crash might be exploitable for Remote Code Execution (RCE). Proper defense in depth prevented worse here.

Online protection approach

BullGuard Antivirus listens in on all connections made by your computer. For some of these connections it will get between the server and the browser in order to manipulate server responses. That’s especially the case for malicious websites of course, but the server response for search pages will also be manipulated in order to indicate which search results are supposed to be trustworthy.

'Link safe' message showing next to Yahoo! link on Google Search

To implement this pop-up, the developers used an interesting trick: connections to port 3220 will always be redirected to the antivirus application, no matter which domain. So navigating to http://www.yahoo.com:3220/html?eWFob28uY29t will yield the following response:

This approach is quite dangerous: a vulnerability in the content served up here would be exploitable in the context of any domain on the web. I’ve seen such Universal XSS vulnerabilities before. In the case of BullGuard, none of the content served appears to be vulnerable, however. It would still be a good idea to avoid the unnecessary risk: the server could respond to http://localhost:3220/ only and use CORS appropriately.
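
To make that suggestion concrete, here is a minimal sketch of the safer pattern for a local helper server. This is purely illustrative; the allowed origin is a placeholder and BullGuard’s actual implementation is not public.

const http = require('http');

const ALLOWED_ORIGIN = 'https://example-vendor-ui.invalid'; // placeholder for the one origin allowed to read responses

const server = http.createServer((req, res) => {
  // Only answer requests actually addressed to the local machine, not to arbitrary intercepted hosts.
  const host = (req.headers.host || '').split(':')[0];
  if (host !== 'localhost' && host !== '127.0.0.1') {
    res.statusCode = 403;
    return res.end();
  }
  // CORS: name the single web origin that may read the response; other origins are blocked by the browser.
  if (req.headers.origin === ALLOWED_ORIGIN) {
    res.setHeader('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
  }
  res.end('{"status":"ok"}');
});

server.listen(3220, '127.0.0.1'); // bind to the loopback interface only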

Unblocking malicious websites

To be honest, I couldn’t find a malicious website that BullGuard Antivirus would block. So I cheated and added malware.wicar.org to the block list in the application’s settings. Navigating to the site now resulted in a redirect to bullguard.com:

Warning page originating at safebrowsing.bullguard.com indicating that malware.wicar.org is blocked

If you click “More details” you will see a link labeled “Continue to website. NOT RECOMMENDED” at the bottom. So far so good: making overriding the warning complicated and discouraged is sane user interface design. But how does that link work?

Each antivirus application came up with its own individual answer to this question. Some answers turned out more secure than others, but BullGuard’s is still rather unique. The warning page will send you to https://malware.wicar.org/?gid=13fd117f-bb07-436e-85bb-f8a3abbd6ad6 and this gid parameter will tell BullGuard Antivirus to unblock malware.wicar.org for the current session. Where does its value come from? It’s hardcoded in BullGuardFiltering.exe. Yes, it’s exactly the same value for any website and any BullGuard install.

So if someone were to run a malicious email campaign and they were concerned about BullGuard blocking the link to https://malicious.example.com/ – no problem, changing the link into https://malicious.example.com/?gid=13fd117f-bb07-436e-85bb-f8a3abbd6ad6 would disable antivirus protection.

The current BullGuard Antivirus release uses a hid parameter with a value that depends on the website and the current session. So neither predicting its value nor reusing a value from one website to unblock another should work any more.

XSSing the secure browser

Unlike some other antivirus solutions, BullGuard doesn’t market their BullGuard Secure Browser as an online banking browser. Instead, they seem to suggest that it is good enough for everyday use – if you can live with horrible user experience that is. It’s a Chromium-based browser which “protects you from vulnerable and malicious browser extensions” by not supporting any browser extensions.

Unlike Chromium’s, this browser’s user interface is a collection of various web pages. For example, secure://address_bar.html is the location bar and secure://find_in_page.html the find bar. All pages rely heavily on HTML manipulation via jQuery, so you are probably not surprised that there are cross-site scripting vulnerabilities here.

Vulnerability in the location bar

The code displaying search suggestions when you type something into the location bar went like this:

else if (vecFields[i].type == 3) {
    var string_search = BG_TRANSLATE_String('secure_addressbar_search'); // type is search with Google
    strHtml += '><img src="' + vecFields[i].icon + '" alt="favicon" class="address_drop_favicon"/> ' +
        '<p class="address_drop_text"><span class="address_search_terms">' + vecFields[i].title +
        '</span><span class="address_search_provider"> - ' + string_search + '</span></p>';
}

No escaping performed here, so if you type something like <img src=x onerror=alert(document.location)> into the location bar, you will get JavaScript code executing in the context of secure://address_bar.html. Not only will the code stay there for the duration of the current browser session, it will be able to spy on location bar changes. Ironically, BullGuard’s announcement claims that their browser protects against man-in-the browser attacks which is exactly what this is.

You think that no user would be stupid enough to copy untrusted code into the location bar, right? But regular users have no reason to expect the world to explode simply because they typed in something. That’s why no modern browser allows typing in javascript: addresses any more. And javascript: addresses are far less problematic than the attack depicted here.
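
The textbook fix is to treat the typed string strictly as data. A sketch using the same variable names as the snippet above (illustrative only, not BullGuard’s actual patch):

// Build the suggestion entry with .text(), which escapes the value instead of parsing it as HTML.
var $entry = $('<p class="address_drop_text"></p>');
$('<span class="address_search_terms"></span>')
    .text(vecFields[i].title)            // whatever the user typed is rendered literally
    .appendTo($entry);
$('<span class="address_search_provider"></span>')
    .text(' - ' + string_search)
    .appendTo($entry);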

Vulnerability in the display of blocked popups

The address bar issue isn’t the only or even the most problematic XSS vulnerability. The page secure://blocked-popups.html runs the following code:

var li = $('<li/>')
    .addClass('bg_blocked_popups-item')
    .attr('onclick', 'jsOpenPopup(\'' + list[i] + '\', ' + i + ');')
    .appendTo(cList);

No escaping performed here either, so a malicious website only needs to use single quotation marks in the pop-up address:

window.open(`/document/download.pdf#'+alert(location.href)+'`, "_blank");

That’s it, the browser will now indicate a blocked pop-up.

'Pop-ups blocked' message displayed in location bar

And if the user clicks this message, the injected JavaScript code will run in the context of secure://blocked-popups.html. That code can for example call window.bgOpenPopup() which allows it to open arbitrary URLs, even data: URLs that normally cannot be opened by websites. It could even open an ad page every few minutes, and the only way to get rid of it would be restarting the browser.

But the purpose of the window.bgOpenPopup() function isn’t merely to allow the pop-up; it also removes a blocked pop-up from the list. By position, without any checks to ensure that the index is valid. So calling window.bgOpenPopup("...", 100000) will crash the browser – this is an access violation. Use a smaller index and the operation will “succeed,” corrupting memory and probably allowing remote code execution.

And finally, this vulnerability allows calling the window.bgOnElementRectMeasured() function, setting the size of this pop-up to an arbitrary value. This allows displaying the pop-up on top of the legitimate browser user interface and displaying a fake user interface there. In theory, malicious code could conjure up a fake location bar and content area, messing with any website loaded and exfiltrating data. Normally, browsers have visual cues to clearly separate their user interface from the content area, but these would be useless here. Also, it would be comparatively easy to make the fake user interface look convincing, as this page can access the CSS styles used for the real interface.
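
For completeness, here is a sketch of how the pop-up list could be built without any string-built markup or inline handlers, reusing the names from the snippet above (again illustrative, not the vendor’s fix):

$.each(list, function (index, url) {
    $('<li/>')
        .addClass('bg_blocked_popups-item')
        .text(url)                        // the blocked URL is rendered as text, never parsed as HTML
        .on('click', function () {
            jsOpenPopup(url, index);      // values are passed directly; there is no attribute string to break out of
        })
        .appendTo(cList);
});
// Separately, the native window.bgOpenPopup() would still need to validate that the index
// it receives is within the bounds of its own list before removing an entry.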

What about all the other API functions?

While the above is quite bad, BullGuard Secure Browser exposes a number of other functions to its user interface. However, it seems that these can only be used by the pages which actually need them. So the blocked pop-ups page cannot use anything other than window.bgOpenPopup() and window.bgOnElementRectMeasured(). That’s proper defense in depth and it prevented worse here.

Conclusions

BullGuard simply hardcoding security tokens isn’t an issue I’ve seen before. It should be obvious that websites cannot be allowed to disable security features, and yet it seems that the developers somehow didn’t consider this attack vector at all.

The other conclusion isn’t new but worth repeating: if you build a “secure” product, you should use a foundation that eliminates the most common classes of vulnerabilities. There is no reason why a modern day application should have XSS vulnerabilities in its user interface, and yet it happens all the time.

Timeline
  • 2020-04-09: Started looking for a vulnerability disclosure process on BullGuard website, found none. Asked on Twitter, without success.
  • 2020-04-10: Sent an email to security@ address, bounced. Contacted mail@ support address asking how vulnerabilities are supposed to be reported.
  • 2020-04-10: Response asks for “clarification.”
  • 2020-04-11: Sent a slightly reformulated question.
  • 2020-04-11: Response tells me to contact feedback@ address with “application-related feedback.”
  • 2020-04-12: Sent the same inquiry to feedback@ support address.
  • 2020-04-15: Received apology for the “unsatisfactory response” and was told to report the issues via this support ticket.
  • 2020-04-15: Sent report about circumventing protection against malicious websites.
  • 2020-04-17: Sent report about XSS vulnerabilities in BullGuard Secure Browser (delay due to miscommunication).
  • 2020-04-17: Protection circumvention vulnerability confirmed and considered critical.
  • 2020-04-23: Told that the HTML attachment (proof of concept for the XSS vulnerability) was rejected by the email system, resent the email with a 7zip-packed attachment.
  • 2020-05-04: XSS vulnerabilities confirmed and considered critical.
  • 2020-05-18: XSS vulnerabilities fixed in 20.0.378 Hotfix 2.
  • 2020-06-29: Protection circumvention vulnerability fixed in 20.0.380 Hotfix 2 (release announcement wrongly talks about “XSS vulnerabilities in Secure Browser” again).
  • 2020-06-07: Got reply from the vendor that publishing the details one week before deadline is ok.
Categories: Mozilla-nl planet

Firefox UX: UX Book Club Recap: Writing is Designing, in Conversation with the Authors

Mozilla planet - Mon, 06/07/2020 - 13:30

The Firefox UX book club comes together a few times a year to discuss books related to the user experience practice. We recently welcomed authors Michael Metts and Andy Welfle to discuss their book Writing is Designing: Words and the User Experience (Rosenfeld Media, Jan. 2020).

Photo of Writing is Designing with notebook, coffee cup, and computer mouse on a table.

To make the most of our time, we collected questions from the group beforehand, organized them into themes, and asked people to upvote the ones they were most interested in.

An overview of Writing is Designing

“In many product teams, the words are an afterthought, and come after the “design,” or the visual and experiential system. It shouldn’t be like that: the writer should be creating words as the rest of the experience is developed. They should be iterative, validated with research, and highly collaborative. Writing is part of the design process, and writers are designers.” — Writing is Designing

Andy and Michael kicked things off with a brief overview of Writing is Designing. They highlighted how writing is about fitting words together and design is about solving problems. Writing as design brings the two together. These activities — writing and designing — need to be done together to create a cohesive user experience.

They reiterated that effective product content must be:

  • Usable: It makes it easier to do something. Writing should be clear, simple, and easy.
  • Useful: It supports user goals. Writers need to understand a product’s purpose and their audience’s needs to create useful experiences.
  • Responsible: What we write can be misused by people or even algorithms. We must take care in the language we use.

We then moved onto Q&A which covered these themes and ideas.

On writing a book that’s not just for UX writers

“Even if you only do this type of writing occasionally, you’ll learn from this book. If you’re a designer, product manager, developer, or anyone else who writes for your users, you’ll benefit from it. This book will also help people who manage or collaborate with writers, since you’ll get to see what goes into this type of writing, and how it fits into the product design and development process.” — Writing is Designing

You don’t have to be a UX writer or content strategist to benefit from Writing Is Designing. The book includes guidance for anyone involved in creating content for a user experience, including designers, researchers, engineers, and product managers. Writing is just as much of a design tool as Sketch or Figma—it’s just that the material is words, not pixels.

When language perpetuates racism

“The more you learn and the more you are able to engage in discussions about racial justice, the more you are able to see how it impacts everything we do. Not questioning systems can lead to perpetuating injustice. It starts with our workplaces. People are having important conversations and questioning things that already should have been questioned.” — Michael Metts

Given the global focus on racial justice issues, it wasn’t surprising that we spent a good part of our time discussing how the conversation intersects with our day-to-day work.

Andy talked about the effort at Adobe, where he is the UX Content Strategy Manager, to expose racist terminology in its products, such as ‘master-slave’ and ‘whitelist-blacklist’ pairings. It’s not just about finding a neutral replacement term that appears to users in the interface, but rethinking how we’ve defined these terms and underlying structures entirely in our code.

Moving beyond anti-racist language

“We need to focus on who we are doing this for. We worry what we look like and that we’re doing the right thing. And that’s not the priority. The goal is to dismantle harmful systems. It’s important for white people to get away from your own feelings of wanting to look good. And focus on who you are doing it for and making it a better world for those people.” — Michael Metts

Beyond the language that appears in our products, Michael encouraged the group to educate themselves, follow Black writers and designers, and be open and willing to change. Any effective UX practitioner needs to approach their work with a sense of humility and openness to being wrong.

Supporting racial justice and the Black Lives Matter movement must also include raising long-needed conversations in the workplace, asking tough questions, and sitting with discomfort. Michael recommended reading How To Be An Antiracist by Ibram X. Kendi and So You Want to Talk About Race by Ijeoma Oluo.

Re-examining and revisiting norms in design systems

“In design systems, those who document and write are the ones who are codifying information for long term. It’s how terms like whitelist and blacklist, and master/slave keep showing up, decade after decade, in our stuff. We have a responsibility not to be complicit in codifying and continuing racist systems.” — Andy Welfle

Part of our jobs as UX practitioners is to codify and frame decisions. Design systems, for example, document content standards and design patterns. Andy reminded us that our own biases and assumptions can be built-in to these systems. Not questioning the systems we build and contribute to can perpetuate injustice.

It’s important to keep revisiting our own systems and asking questions about them. Why did we frame it this way? Could we frame it in another way?

Driving towards clarity early on in the design process

“It’s hard to write about something without understanding it. While you need clarity if you want to do your job well, your team and your users will benefit from it, too.” — Writing is Designing

Helping teams align and get clear on goals and user problems is a big part of a product writer’s job. While writers are often the ones to ask these clarifying questions, every member of the design team can and should participate in this clarification work—it’s the deep strategy work we must do before we can write and visualize the surface manifestation in products.

Before you open your favorite design tool (be it Sketch, Figma, or Adobe XD) Andy and Michael recommend writers and visual designers start with the simplest tool of all: a text editor. There you can do the foundational design work of figuring out what you’re trying to accomplish.

The longevity of good content

A book club member asked, “How long does good content last?” Andy’s response: “As long as it needs to.”

Software work is never ‘done.’ Products and the technology that supports them continue to evolve. With that in mind, there are key touch points to revisit copy. For example, when a piece of desktop software becomes available on a different platform like tablet or mobile, it’s a good time to revisit your content (and the entire experience, in fact) to see if it still works.

Final thoughts—an ‘everything first’ approach

In the grand scheme of tech things, UX writing is still a relatively new discipline. Books like Writing for Designing are helping to define and shape the practice.

When asked (at another meet-up, not our own) if he’s advocating for a ‘content-first approach,’ Michael’s response was that we need an ‘everything first approach’ — meaning, all parties involved in the design and development of a product should come to the planning table together, early on in the process. By making the case for writing as a strategic design practice, this book helps solidify a spot at that table for UX writers.

Prior texts read by Mozilla’s UX book club
Categories: Mozilla-nl planet

Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - Thu, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that both I would be reading aloud as the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox Gecko engine on an Android Linux kernel where all the apps, including the system UI, are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.
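
In code, that startup decision boils down to something like the following sketch (the cache key, container element, and “skip the cache” condition are made-up placeholders; the real app has more of them):

(function maybeRestoreCachedInbox() {
  var cachedHtml = localStorage.getItem('htmlCache');              // synchronous read, no event-loop round trip
  var wantsCompose = location.hash.indexOf('compose') !== -1;      // one example of a launch that must skip the cache
  if (cachedHtml && !wantsCompose) {
    document.getElementById('cards').innerHTML = cachedHtml;       // paint the cached inbox immediately
  }
})();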

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available and fall back to WebSQL and local storage, in that order.

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There's no waiting around for an IndexedDB read result to get its turn on the event loop.
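
To make that concrete, here's a minimal sketch of the startup check; the key name, element id, and hash test are made-up placeholders for illustration, not our actual code.

// Minimal sketch; 'html_cache' and 'cardContainer' are illustrative names.
function applyHtmlCache() {
  // Synchronous read: no callback, no extra trip through the event loop.
  var cachedHtml = window.localStorage.getItem('html_cache');

  // Skip the cache when we're launched for something other than the inbox.
  var launchedForCompose = window.location.hash === '#compose';
  if (!cachedHtml || launchedForCompose) {
    return false;
  }

  // Inject the cached markup so a real-looking UI appears immediately.
  document.getElementById('cardContainer').innerHTML = cachedHtml;
  return true;
}

// Once the inbox has rendered for real, refresh the cache for the next launch.
function updateHtmlCache(containerNode) {
  window.localStorage.setItem('html_cache', containerNode.innerHTML);
}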

This matters because although the concept of loading is simple from a user-experience perspective, there's no standard to back it up right now.  Firefox OS's UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app's icon in the center of the screen.  Unfortunately the standards still assume that the content is right there in the HTML.  That assumption works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  It works less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there are stylesheets referenced before the script tags, the scripts will wait for those stylesheets to load.

“load” fires when the document has been fully loaded: stylesheets, images, everything.

But neither of these events has anything to do with the content in the page saying it's actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it's not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At that point the layout engine has enough information to perform a style recalculation and determine which CSS-referenced image resources need to be loaded for buttons and icons; those load, and then the “load” event can fire and we're good to be displayed.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.

Being Fast: Lazy Loading and Optimized Layers

But now we need to try to get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is loading only what you need when you need it, and getting it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard CommonJS loader semantics used by node.js and io.js are what you see on the first line: load the module and return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there's more!  Loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the 'text' plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.
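
For a feel of what that optimization step consumes, here's a rough sketch of an r.js build profile; the module names and output path are invented for illustration, and the real Gaia build setup is more involved than this.

// Sketch of an r.js build profile; module names are illustrative only.
({
  baseUrl: 'js',
  name: 'main',                       // the core startup module
  include: ['cards/message_list'],    // pull the inbox card into the bundle
  stubModules: ['text', 'tmpl'],      // plugin loader code not needed at runtime
                                      // once the templates have been inlined
  out: 'main-built.js',
  optimize: 'uglify'
})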

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
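
A minimal sketch of what such a preload can look like with an AMD loader; the card module names are stand-ins, not the app's real ones.

// Warm up the cards the user is likely to open next; names are illustrative.
function preloadLikelyCards() {
  // alameda/RequireJS fetches and evaluates these modules (and the templates
  // inlined into them) in the background.  We don't use them right away; the
  // point is just to get them into the module cache before they're needed.
  require(['cards/compose', 'cards/message_reader'], function () {
    // Nothing to do here.
  });
}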

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
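
Here's a bare-bones sketch of that page/worker split; the script name and message shapes are assumptions for illustration only.

// Page side: run the mail back-end off the main thread.
var backend = new Worker('js/backend-built.js');   // illustrative path

backend.onmessage = function (event) {
  console.log('back-end says:', event.data);
};

backend.postMessage({ type: 'viewFolder', folderId: 'INBOX' });

// Worker side (inside js/backend-built.js), sketched as comments:
// self.onmessage = function (event) {
//   if (event.data.type === 'viewFolder') {
//     // ...load headers from IndexedDB or the network, then reply:
//     self.postMessage({ type: 'folderContents', folderId: event.data.folderId });
//   }
// };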

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.
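
Put together, the bookkeeping looks roughly like this; the item height, buffer size, element ids, and message-count variable are all placeholders rather than the app's real values.

// Virtual list sketch: one tall scroll area, only a window of real nodes.
var ITEM_HEIGHT = 60;     // px per message summary row (placeholder value)
var BUFFER_SCREENS = 3;   // extra screens retained above and below the viewport

var container = document.getElementById('message-list');   // scrolling element
var scrollArea = document.getElementById('scroll-area');    // sized inner div
var currentMessageCount = 0;                                // set on folder load

function sizeScrollArea(messageCount) {
  currentMessageCount = messageCount;
  // One big, mostly empty region so the scrollbar reflects the whole folder.
  scrollArea.style.height = (messageCount * ITEM_HEIGHT) + 'px';
}

function visibleRange() {
  var perScreen = Math.ceil(container.clientHeight / ITEM_HEIGHT);
  var firstOnScreen = Math.floor(container.scrollTop / ITEM_HEIGHT);
  return {
    first: Math.max(0, firstOnScreen - BUFFER_SCREENS * perScreen),
    last: Math.min(currentMessageCount - 1,
                   firstOnScreen + (1 + BUFFER_SCREENS) * perScreen)
  };
}

container.addEventListener('scroll', function () {
  var range = visibleRange();
  // Recycle summary nodes to cover [range.first, range.last], position each
  // one absolutely at (index * ITEM_HEIGHT), and prefetch a bit more from
  // the database than we render.
});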

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
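
As a sketch of that detach-update-reattach dance (reusing ITEM_HEIGHT from the sketch above; the content update is simplified to just the subject line):

function repositionNode(scrollArea, node, index, message) {
  // Detach first, so the layerization logic later sees a "new" statically
  // positioned element instead of one that moves around independently.
  scrollArea.removeChild(node);
  node.style.top = (index * ITEM_HEIGHT) + 'px';
  node.textContent = message.subject;   // real code updates sender, snippet, etc.
  scrollArea.appendChild(node);
}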

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1. Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it's a very good idea to use a library rather than rolling your own.

2. Use developer tools.  APZ is tricky to reason about, and even the developers who write the async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what's going on, or at least help tell you whether you're making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something's wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There’s also fancier and more complicated tools in Firefox and other browsers like Google Chrome to let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

Categorieën: Mozilla-nl planet

Joshua Cranmer: Breaking news

Thunderbird - wo, 01/04/2015 - 09:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

Categorieën: Mozilla-nl planet

Joshua Cranmer: Why email is hard, part 8: why email security failed

Thunderbird - di, 13/01/2015 - 05:38
This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others fail?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriads of cryptography enthusiast mailing lists, which often like to partake in propositions of new end-to-end encryption of email paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I see it unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped about many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to do other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser known is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 in a single folder—would still prefer to not have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

Categorieën: Mozilla-nl planet

Joshua Cranmer: A unified history for comm-central

Thunderbird - za, 10/01/2015 - 18:55
Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step and probably the hardest is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy on git.mozilla.org after I wrote this post. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash
set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204
# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a
# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets that are useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are simple directory-wide changes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with the @ they usually appear with in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
  hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
    | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.

Categorieën: Mozilla-nl planet

Kent James: Thunderbird Summit in Toronto to Plan a Viable Future

Thunderbird - wo, 15/10/2014 - 06:17

On Wednesday, October 15 through Saturday, October 19, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 18 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time)  using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

Categorieën: Mozilla-nl planet

Philipp Kewisch: Monitor all http(s) network requests using the Mozilla Platform

Thunderbird - do, 02/10/2014 - 16:38

In an xpcshell test, I recently needed a way to monitor all network requests and access both request and response data so I can save them for later use. This required a little bit of digging in Mozilla’s devtools code so I thought I’d write a short blog post about it.

This code will be used in a testcase that ensures that calendar providers in Lightning function properly. In the case of the CalDAV provider, we would need to access a real server for testing. We can't just set up a few servers and use them for testing; it would end in an unreasonable amount of server maintenance. Given that non-local connections are not allowed when running the tests on the Mozilla build infrastructure, it wouldn't work anyway. The solution is to create a fakeserver that is able to replay the requests in the same way. Instead of manually making the requests and figuring out how the server replies, we can use this code to quickly collect all the requests we need.

Without further delay, here is the code you have been waiting for:

Categorieën: Mozilla-nl planet
