
The Rust Programming Language Blog: Announcing Rust 1.54.0

Mozilla planet - Thu, 29/07/2021 - 02:00

The Rust team is happy to announce a new version of Rust, 1.54.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.54.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.54.0 on GitHub.

What's in 1.54.0 stable

Attributes can invoke function-like macros

Rust 1.54 supports invoking function-like macros inside attributes. Function-like macros can be either macro_rules! based or procedural macros, and are invoked like macro!(...). One notable use case for this is including documentation from other files into Rust doc comments. For example, if your project's README represents a good documentation comment, you can use include_str! to directly incorporate the contents. Previously, various workarounds allowed similar functionality, but from 1.54 this is much more ergonomic.

#![doc = include_str!("README.md")]

Macros can be nested inside the attribute as well. For example, the concat! macro can be used to construct a doc comment from within a macro that uses stringify! to include substitutions:

macro_rules! make_function {
    ($name:ident, $value:expr) => {
        #[doc = concat!("The `", stringify!($name), "` example.")]
        ///
        /// # Example
        ///
        /// ```
        #[doc = concat!(
            "assert_eq!(",
            module_path!(), "::", stringify!($name), "(), ",
            stringify!($value), ");"
        )]
        /// ```
        pub fn $name() -> i32 {
            $value
        }
    };
}

make_function! {func_name, 123}

Read here for more details.

wasm32 intrinsics stabilized

A number of intrinsics for the wasm32 platform have been stabilized, which gives access to the SIMD instructions in WebAssembly.

Notably, unlike the previously stabilized x86 and x86_64 intrinsics, these do not have a safety requirement to only be called when the appropriate target feature is enabled. This is because WebAssembly was designed from the start to validate code safely before executing it, so instructions are guaranteed to be decoded correctly (or not at all).

This means that we can expose some of the intrinsics as entirely safe functions, for example v128_bitselect. However, there are still some intrinsics which are unsafe because they use raw pointers, such as v128_load.
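
As a hedged sketch of that split (this compiles only when targeting wasm32, e.g. with --target wasm32-unknown-unknown; u8x16_splat is assumed to be part of the same stabilized set and is used here just to build example vectors):

use core::arch::wasm32::{u8x16_splat, v128, v128_bitselect, v128_load};

// Safe: WebAssembly validates instructions before executing them, so
// no target-feature check is needed to call this intrinsic.
fn blend() -> v128 {
    let mask = u8x16_splat(0b1111_0000);
    v128_bitselect(u8x16_splat(1), u8x16_splat(2), mask)
}

// Still unsafe: not because of the instruction itself, but because it
// reads through a raw pointer the caller must guarantee is valid.
fn load(bytes: &[u8; 16]) -> v128 {
    unsafe { v128_load(bytes.as_ptr() as *const v128) }
}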

Incremental Compilation is re-enabled by default

Incremental compilation has been re-enabled by default in this release, after being disabled by default in 1.52.1.

In Rust 1.52, additional validation was added when loading incremental compilation data from the on-disk cache. This resulted in a number of pre-existing potential soundness issues being uncovered as the validation changed these silent bugs into internal compiler errors (ICEs). In response, the Compiler Team decided to disable incremental compilation in the 1.52.1 patch, allowing users to avoid encountering the ICEs and the underlying unsoundness, at the expense of longer compile times. 1

Since then, we've conducted a series of retrospectives and contributors have been hard at work resolving the reported issues, with some fixes landing in 1.53 and the majority landing in this release. 2

There are currently still two known issues which can result in an ICE. Due to the lack of automated crash reporting, we can't be certain of the full extent of impact of the outstanding issues. However, based on the feedback we received from users affected by the 1.52 release, we believe the remaining issues to be rare in practice.
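
If you do hit one of them, incremental compilation can be disabled for a single build with Cargo's standard environment variable, restoring the 1.52.1 behavior:

CARGO_INCREMENTAL=0 cargo build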

Therefore, incremental compilation has been re-enabled in this release!

Stabilized APIs

A number of methods and trait implementations were stabilized; the detailed release notes linked above carry the full list.
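
As one hedged illustration, assuming (per those notes) that the into_keys and into_values methods on the standard map types are among the stabilized items:

use std::collections::BTreeMap;

fn main() {
    let mut map = BTreeMap::new();
    map.insert("a", 1);
    map.insert("b", 2);

    // Consuming iterators over only the keys or only the values.
    assert_eq!(map.clone().into_keys().collect::<Vec<_>>(), vec!["a", "b"]);
    assert_eq!(map.into_values().collect::<Vec<_>>(), vec![1, 2]);
}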

Other changes

There are other changes in the Rust 1.54.0 release: check out what changed in Rust, Cargo, and Clippy.

rustfmt has also been fixed in the 1.54.0 release to properly format nested out-of-line modules. This may cause changes in formatting to files that were being ignored by the 1.53.0 rustfmt. See details here.
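
For context, the affected construct is a nested out-of-line module: a module declared inside another module, with its body in a separate file (a minimal sketch):

// src/lib.rs
mod outer {
    // The body of `inner` lives in src/outer/inner.rs. rustfmt 1.54
    // formats that file as well, where 1.53 silently skipped it.
    mod inner;
}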

Contributors to 1.54.0

Many people came together to create Rust 1.54.0. We couldn't have done it without all of you. Thanks!

  1. The 1.52.1 release notes contain a more detailed description of these events.

  2. The tracking issue for the issues is #84970.

Categories: Mozilla-nl planet

Mozilla Security Blog: Making Client Certificates Available By Default in Firefox 90

Mozilla planet - Wed, 28/07/2021 - 21:18

 

Starting with version 90, Firefox will automatically find and offer to use client authentication certificates provided by the operating system on macOS and Windows. This security and usability improvement has been available in Firefox since version 75, but previously end users had to manually enable it.

When a web browser negotiates a secure connection with a website, the web server sends a certificate to the browser to prove its identity. Some websites (most commonly corporate authentication systems) request that the browser sends a certificate back to it as well, so that the website visitor can prove their identity to the website (similar to logging in with a username and password). This is sometimes called “mutual authentication”.

Starting with Firefox version 90, when you connect to a website that requests a client authentication certificate, Firefox will automatically query the operating system for such certificates and give you the option to use one of them. This feature will be particularly beneficial when relying on a client certificate stored on a hardware token, since you do not have to import the certificate into Firefox or load a third-party module to communicate with the token on behalf of Firefox. No manual task or preconfiguration will be necessary when communicating with your corporate authentication system.

If you are a Firefox user, you don’t have to do anything to benefit from this usability and security improvement to load client certificates. As soon as your Firefox auto-updates to version 90, you can simply select your client certificate when prompted by a website. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the web.

The post Making Client Certificates Available By Default in Firefox 90 appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Firefox Add-on Reviews: Tweak Twitch—BetterTTV and other extensions for Twitch customization

Mozilla planet - Mon, 26/07/2021 - 19:11

Customize chat, optimize your video player, auto-collect channel points, and much much more. Explore some of the ways you can radically transform your Twitch experience with a browser extension… 

BetterTTV

One of the most feature rich and popular Twitch extensions out there, BetterTTV has everything from fun new emoticons to advanced content filtering. 

Key features:

  • Auto-collect channel points
  • Easier-to-read chat interface
  • Select usernames, words, or specific phrases you want highlighted throughout Twitch; or blacklist any of those elements you want filtered out
  • New emoticons to use globally or custom per channel
  • See deleted messages
  • Anonymous Chat—join a channel without notice

Alternate Player for Twitch.tv

While this extension’s focus is on video player customization, Alternate Player for Twitch.tv packs a bunch of other great features unrelated to video streaming. 

Let’s start with the video player. Some of its best tweaks include:

  • Ad blocking! Wipe away all of those suuuuper looooong pre-rolls
  • Choose a new color for the player 
  • Instant Replay is a wow feature—go back and watch up to a minute of material that just streamed (includes ability to speed up/slow down replay) 

Alternate Player for Twitch.tv also appears to run live streams at even smoother rates than Twitch’s default player. You can further optimize your stream by adjusting the extension’s bandwidth settings to better suit your internet speed. Audio Only mode is really great for saving bandwidth if you’re just tuning in for music or discussion. 

Our favorite feature is the ability to customize the size and location of the chat interface while in full-screen mode. Make the chat small and tuck it away in a corner or expand it to consume most of the screen; or remove chat altogether if the side conversation is a mood killer.

Twitch Previews

This is the best way to channel surf. Just hover over a stream icon in the sidebar and Twitch Previews will display its live video in a tiny player. 

No more clicking away from the thing you’re watching just to check out other streams. Additional features we love include the ability to customize the video size and volume of the previews, a sidebar auto-extender (to more easily see all live streamers), and full-screen mode with chat. 

Mouse over a stream in the sidebar to get a live look with Twitch Previews.

Unwanted Twitch

Do you keep seeing the same channels over and over again that you’re not interested in? Unwanted Twitch wipes them from your experience. 

Not only block specific channels you don’t want, but you can even hide entire categories (I’m done with dub step!) or specific tags (my #Minecraft days are behind me). Other niche “hide” features include the ability to block reruns and streams with certain words appearing in their title. 

Twitch Chat Pronouns

What a neat idea. Twitch Chat Pronouns lets you add gender pronouns to usernames. 

The pronouns will display next to Twitch usernames. You’ll need to enter a pronoun for yourself if you want one to appear to other extension users. 

We hope your Twitch experience has been improved with a browser extension! Find more media enhancing extensions on addons.mozilla.org.

Categories: Mozilla-nl planet

Jan-Erik Rediger: This Week in Glean: Shipping Glean with GeckoView

Mozilla planet - Mon, 26/07/2021 - 12:00

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.

Glean SDK

The Glean SDK is Mozilla's telemetry library, used in most mobile products and now for Firefox Desktop as well. By now it has grown to a sizable code base with a lot of functionality beyond just storing some metric data. Since its first release as a Rust crate in 2019 we have managed to move more and more logic from the language SDKs (previously also known as "language bindings") into the core Rust crate. This allows us to maintain the business logic only once and easily share it across different implementations and platforms. The Rust core is shipped precompiled for multiple target platforms, with each language SDK distributed through the respective package manager.

I talked about how this all works in more detail last year, this year and blogged about it in a previous TWiG.

GeckoView

GeckoView is Mozilla's alternative implementation for WebViews on Android, based on Gecko, the web engine that also powers Firefox Desktop. It is used as the engine behind Firefox for Android (also called Fenix). The visible parts of what makes up Firefox for Android are written in Kotlin, but it all delegates to the underlying Gecko engine, written in a combination of C++, Rust & JavaScript.

The GeckoView code resides in the mozilla-central repository, next to all the other Gecko code. From there releases are pushed to Mozilla's own Maven repository.

One Glean too many

Initially Firefox for Android was the only user of the Glean SDK. Up until today it consumes Glean through its release as part of Android Components, a collection of libraries to build browser-like applications.

But the Glean SDK is also available outside of Android Components, as its own package. And additionally it's available for other languages and platforms too, including a Rust crate. Over the past year we've been busy getting Gecko to use Glean through the Rust crate to build its own telemetry on top.

With the Glean SDK used in all these applications we're in a difficult position: There's a Glean in Firefox for Android that's reporting data. Firefox for Android is using Gecko to render the web. And Gecko is starting to use Glean to report data.

That's one Glean too many if we want coherent data from the full application.

Shipping it all together, take one

Of course we knew about this scenario for a long time. It's been one of the goals of Project FOG to transparently collect data from Gecko and the embedding application!

We set out to find a solution so that we can connect both sides and have only one Glean be responsible for the data collection & sending.

We started with more detailed planning all the way back in August of last year and agreed on a design in October. Due to changed priorities & availability of people we didn't get into the implementation phase until earlier this year.

By February I had a first rough prototype in place. When Gecko was shipped as part of GeckoView it would automatically look up the Glean library that is shipped as a dynamic library with the Android application. All function calls to record data from within Gecko would thus ultimately land in the Glean instance that is controlled by Fenix. Glean and the abstraction layer within Gecko would do the heavy work, but users of the Glean API would notice no difference, except their data would now show up in pings sent from Fenix.

This integration was brittle. It required finding the right dynamic library, looking up symbols at runtime as well as reimplementing all metric types to switch to the FFI API in a GeckoView build. We abandoned this approach and started looking for a better one.

Shipping it all together, take two

After the first failed approach the issue was acknowledged by other teams, including the GeckoView and Android teams.

Glean is not the only Rust project shipped for mobile; the application-services team is also shipping components written in Rust. They bundle all components into a single library, dubbed the megazord. This reduces its size (dependencies & the Rust standard library are only linked once) and simplifies shipping, because there's only one library to ship. We had long talked about pulling Glean into such a megazord as well, but ultimately didn't do it (except for iOS builds).

With that in mind we decided it was now time to design a solution, so that eventually we can bundle multiple Rust components in a single build. We came up with the following plan:

  • The Glean Kotlin SDK will be split into 2 packages: a glean-native package that only exists to ship the compiled Rust library, and a glean package that contains the Kotlin code and depends on glean-native.
  • The GeckoView-provided libxul library (that's "Gecko") will bundle the Glean Rust library and export the C-compatible FFI symbols that the Glean Kotlin SDK uses to call into Glean core.
  • The GeckoView Kotlin package will then use Gradle capabilities to replace the glean-native package with itself (this is actually handled by the Glean Gradle plugin).

Consumers such as Fenix will depend on both GeckoView and Glean. At build time the Glean Gradle plugin will detect this and will ensure the glean-native package, and thus the Glean library, is not part of the build. Instead it assumes libxul from GeckoView will take that role.
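
In Gradle terms this boils down to dependency substitution. A rough Kotlin DSL sketch of the idea (the module coordinates are illustrative, not the plugin's actual configuration):

// build.gradle.kts (sketch): swap the artifact carrying the precompiled
// Glean library for the GeckoView artifact whose libxul bundles Glean.
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(module("org.mozilla.telemetry:glean-native"))
            .using(module("org.mozilla.geckoview:geckoview"))
    }
}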

This has some advantages. First off, everything is compiled together into one big library. Rust code gets linked together, and even Rust consumers within Gecko can directly use the Glean Rust API. Next, we can ensure that the version of the Glean core library matches the Glean Kotlin package used by the final application. It is important that the code matches; otherwise calling native functions could lead to memory or safety issues.

Glean is running ahead here, paving the way for more components to be shipped the same way. Eventually the experimentation SDK, called Nimbus, and other application-services components will start using Glean's Rust API. That will require compiling Glean alongside them, which is exactly the case mozilla-central already handles for GeckoView.

Now the unfortunate truth is: these changes have not landed yet. They have been implemented for both the Glean SDK and mozilla-central, but they also require changes to mozilla-central's build system. Initially that looked like a simple change to adopt the new bundling, but it turned into bigger changes across the board. Some of the infrastructure used to build and test Android code from mozilla-central had been untouched for years, and is therefore very outdated and not easy to change. With everything else going on for Firefox, it's been a slow process to update the infrastructure, prepare the remaining changes and finally get this landed.

But we're close now!

Big thanks to Agi for connecting the right people, driving the initial design and helping me with the GeckoView changes. He also took on the challenge of changing the build system. And also thanks to chutten for his reviews and input. He's driving the FOG work forward and thus really really needs us to ship GeckoView support.

Categories: Mozilla-nl planet

Firefox Add-on Reviews: Too many open tabs? Extensions to the rescue!

Mozilla planet - Thu, 22/07/2021 - 22:30

The first step in getting help with your tab hoarding problem is to admit you have a tab hoarding problem. Whatever the reason may be—your job requires you to have dozens of open tabs or the rows of tabs represent your neverending “read later” list—you can regain control of this spiraling situation with the right browser extension. 

Tree Style Tab

Organize your tabs into a clean, cascading “tree” format. Tree Style Tab opens new tabs as “branches” of the parent tab, so all of your open tabs are automatically organized in an easy-to-glance tree branch layout. 

If you’re someone who likes to visually organize information, Tree Style Tab can be a real game changer. It’s very simple to use—just drag n drop different branches to reorganize your clusters of open tabs. 

Tree Style Tab keeps your tabs tucked away in a tidy sidebar.

OneTab

For the times you suddenly find yourself overwhelmed with a bazillion open tabs, OneTab is your page overload panic button. 

Just hit OneTab’s toolbar button and all open tabs get tucked away into a single scrollable page. Save major CPU and memory with all pages now dormant. Reactivate them one by one or all at once. 

With the click of a mouse, OneTab turns all your open tabs into a single list on a page.

Tab Stash

Click the Tab Stash toolbar button and bam!—all those open tabs get stored as bookmarks, which presents intriguing possibilities. 

With tabs temporarily saved as bookmarks listed in a foldaway sidebar menu, you’re free to treat them as either easily navigable links to your previously open tabs, or save them permanently as individual or grouped bookmarks. Firefox Sync users will automatically have their Tab Stash bookmarks synced to other devices. 

Tab Stash elegantly organizes tab overload.

Simple Tab Groups

Great for dealing with lots (and lots) of tab groupings, Simple Tab Groups gives you an easy way to navigate a bunch of tab clusters. 

Click the extension’s toolbar button to pull up a menu that lets you easily navigate your groups of open tabs, or specific pages. If you deal with a mass volume of open tabs—like say hundreds of tabs organized across a couple dozen groups—Simple Tab Groups is the extension for you. 

Simple Tab Groups is great for dealing with a huge volume of tabs.

Tab Session Manager

Save and restore the full state of batches of open tabs with Tab Session Manager.

If you find yourself opening a lot of new windows and filling them up with open tabs, Tab Session Manager lets you easily save the state of the entire window and its tabs so you’re free to close it down altogether until future recall. The extension also supports auto-save features, cloud sync, session import/export, and more. 

Tab Reloader

Do you have a need for frequent page refreshes across numerous tabs? Maybe you’re in a shopping queue waiting for limited availability items? Perhaps you want a news feed refreshed consistently? Whatever your reason, Tab Reloader gives you the ability to set your own custom time intervals for page refreshes. 

The extension gives you great individual page control. Additional features include:

  • Set different reload time intervals per page, or per a group of tabs within the same window
  • Set reloading to occur if pages are active or not
  • Create custom reload rules for tabs within designated hostnames
  • Manage everything conveniently from a toolbar menu
  • Choose to automatically start your view at the bottom of a freshly reloaded page, should new content appear there

Auto Tab Discard

Laser focused on a singularly important task, Auto Tab Discard simply suspends all activity for any background tabs, saving you CPU and memory load. 

A streamlined toolbar menu allows for a few other handy actions as well, like discarding specific tabs you don’t need anymore, whitelisting domains so you never accidentally discard them, retrieving accidentally discarded tabs, and more. 

Best of luck retaking control of all those tabs! Explore more tab extensions on addons.mozilla.org.

Categories: Mozilla-nl planet

Take control over your data with Rally, a novel privacy-first data sharing platform

Mozilla Blog - Fri, 25/06/2021 - 12:00

Mozilla teams up with Princeton University researchers to enable crowdsourced science for public good; collaborates with research groups at Princeton, Stanford on upcoming studies.

Your data is valuable. But for too long, online services have pilfered, swapped, and exploited your data without your awareness. Privacy violations and filter bubbles are consequences of a surveillance data economy. But what if, instead of companies taking your data without giving you a say, you could select who gets access to your data and put it to work for public good?

Today, we’re announcing the Mozilla Rally platform. Built for the browser with privacy and transparency at its core, Rally puts users in control of their data and empowers them to contribute their browsing data to crowdfund projects for a better internet and a better society. At Mozilla, we’re working on building a better internet, one that puts people first, respects their privacy and gives them power over their online experience. We’ve been a leader in privacy features that help you control your data by blocking trackers. But being “data-empowered” also requires the ability to choose who can access your data.

“Cutting people out of decisions about their data is an inequity that harms individuals, society and the internet. We believe that you should determine who benefits from your data. We are data optimists and want to change the way the data economy works for both people and day-to-day business. We are excited to see how Rally can help understand some of the biggest problems of the internet and make it better.”

Rebecca Weiss, Rally Project Lead

As a first step on this journey, we’re launching the new Rally research initiative, a crowdsourced scientific effort we developed in collaboration with professor Jonathan Mayer’s research group at Princeton University. Computer scientists, social scientists and other researchers will be able to launch groundbreaking studies about the web and invite you to participate. A core focus of the initiative is enabling unprecedented studies that hold major online services accountable.

“Online services constantly experiment on users, to maximize engagement and profit. But for too long, academic researchers have been stymied when trying to experiment on online services. Rally flips the script and enables a new ecosystem of technology policy research.”

Jonathan Mayer, Princeton’s Center for Information Technology Policy

We’re kickstarting the Mozilla Rally research initiative with our first two research collaborator studies. Our first study is “Political and COVID-19 News” and comes from the Princeton team that helped us develop the Rally research initiative. This study examines how people engage with news and misinformation about politics and COVID-19 across online services.  

Soon, we’ll also be launching our second academic study, “Beyond the Paywall”, in partnership with Shoshana Vasserman and Greg Martin of the Stanford University Graduate School of Business. It aims to better understand news consumption, what people value in news and the economics that could build a more sustainable ecosystem for newspapers in the online marketplace.

“We need research to get answers to the hard questions that we face as a society in the information age. But for that research to be credible and reliable, it needs to be transparent, considered and treat every participant with respect. It sounds simple but this takes a lot of work. It needs a standard bearer to make it the expectation in social science. In working with Rally, we hope to be part of that transformation.”

Shoshana Vasserman, Assistant Professor of Economics at the Stanford Graduate School of Business

We are also launching a new toolkit today, WebScience, that enables researchers to build standardized browser-based studies on Rally. WebScience also encourages data minimization, which is central to how Rally will respect people who choose to participate in studies. WebScience was developed and open sourced by Jonathan Mayer’s team at Princeton and is now co-maintained with Mozilla. 

With Rally, we’ve built an innovative, consent-driven data sharing platform that puts power back into the hands of people. By leveraging the scale of web browsers – a piece of software used by billions of people around the world – Rally has the potential to help address societal problems we could not solve before. Our goal is to demonstrate that there is a case for an equitable market for data, one where every party is treated fairly, and we welcome mission-aligned organizations that want to join us on this journey. 

Rally is currently available for Firefox desktop users over age 19 in the United States. We plan to launch Rally for other web browsers and in other countries in the future. 

To participate in Rally, join us at rally.mozilla.org

————————————————————————————

Interested in joining Rally and want to know how it works?

When you join Rally, you have the opportunity to participate in data crowdsourcing projects — we call them “studies” — focused on understanding and finding solutions for social problems caused by the data economy. You will always see a simple explanation of a study’s purpose, the data it collects, how the data will be used, and who will have access to your data. All your data is stored on Mozilla’s restricted servers, and access to the analysis environment is tightly controlled. For those who really want to dig deep, you can read our detailed disclosures and even inspect our code.

The post Take control over your data with Rally, a novel privacy-first data sharing platform appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Mozilla Racial Justice Commitments: One Year In

Mozilla Blog - Thu, 24/06/2021 - 02:25

One year ago, we made a set of commitments to make diversity and inclusion more than a catchphrase or hot button topic. We decided to roll up our sleeves and get busy establishing significant goals, putting resources behind them and making sure that everyone, including our company leadership, was taking action to create a more diverse and equitable place at Mozilla and in society.

We have taken steps to address the issue of anti-Black racism and the lack of diversity and inclusion in our company, and hopefully in society, through programming and people initiatives. We have seen a significant increase in participation in diversity and inclusion initiatives and, perhaps equally important, in our engagement survey results, in particular the increased scores on diversity and inclusion questions from people of color and women. While we have made strides on many of the goals established on June 18, 2020, we recognize this progress is the “First Step Toward Lasting Change.” We continue to be committed, through our actions and resources, to improving Mozilla as a place to work for people of color and the internet for all.

1. Who we are: Our employee base and our communities

In our upcoming diversity and inclusion disclosure, you will find that we have greatly invested in improving diversity and enhancing a culture of inclusion at Mozilla. Through a balance of fun, education, celebration and conversations, we created safe spaces for people of color to share the totality of their human experience, honoring the beauty and joy of their lives and holding space to contend with the more sobering and harsh realities of race in society. 

We hosted three  panel discussions that each covered pertinent and insightful topics as designed by our Mozilla Resource Groups. There were the ones that gave us belly laughs – “What does it mean to lose your black card?” – and there were the ones that challenged us – “What is the impact of the model minority myth?” 

We held facilitated discussions designed to provide employees with an opportunity to engage in deep listening and sharing following an onslaught of racial violence across the U.S. These sessions, aptly named Gather @Mozilla, gave us an opportunity to collectively process some of the traumatic and triggering events happening around us. 

Our goal was to provide various options for employees to connect and learn. Recognizing that learning is a personal experience, we offered paths for individual learning and collective learning. We published resource libraries (and shared them publicly: Black History Month, Asian Pacific Islander Heritage Month), hosted virtual cooking lessons, convened talks with renowned authors, curated music playlists (Latin and Hispanic Heritage Month, Black History Month, Women’s History Month, API Heritage Month Playlist) and much more. By providing a breadth of opportunities to celebrate (Latin and Hispanic Heritage Month celebration, Black History Month celebration, Women’s History Month celebration, and Asian Pacific Islander Heritage Month celebration), we increased participation and invited our organization on a journey of co-creating an inclusive culture.  

As we round out the second half of 2021, we will be rolling out an Inclusion Champion program, working with DEI councils within each business group to promote organization-specific D&I programming, deploying a D&I skill development platform, and diversifying our talent acquisition pipeline.

2. What we build: Our outreach with our products

At Mozilla, we work to build a better internet and our products can help elevate the best of the internet. Through Pocket Collections, selections of stories curated by Pocket editors and guest editors, we introduced Collections that elevated diverse voices and gave insight into issues impacting BIPOC communities and the context around which they emerged (Racial Justice Collections, Essential Reading: Celebrating Juneteenth). We hoped to provide readers with content and perspectives they may not otherwise encounter.

We understood that when you point a finger at someone else, there is a finger pointing back at you. Thus, we launched a project that examined when and where biases creep into user research and design and initiated work efforts to reduce the amount of racist language in code (Remove all references to blacklist/whitelist within Gecko, Remove references to slave).

In the second half of 2021, our product teams will continue to identify opportunities to elevate diverse voices and combat unconscious biases in our products.

3. What we do beyond products: Our broader engagement with the world

We leveraged Dialogues & Debates, a speaker series, to address issues of A.I. and race and ethnicity and the challenges presented to communities of color because of this problem in tech and media. We had robust discussions about the use of technology to surveil historically vulnerable populations and used our network to call on fellow technology companies to be mindful of how they use technology in service to the criminal justice system instead of the communities of color we serve. The Mozilla Foundation launched campaigns calling on Nextdoor and Amazon Ring to pause their relationships with police departments and assess the impact of the platforms on users and communities of color. More than 28,000 people signed the petitions and several organizations partnered with Mozilla on escalation actions. 

We were able to have these critical community discussions and collaborations by being more thoughtful and intentional with the ways in which we used available funding from Mozilla Foundation in our commitment to social justice in tech and society. We granted 33% of Mozilla Foundation funds to black-led tech and social justice initiatives. Unfortunately, we fell short of our 40% target. While some may see this as a failure, we see it as an opportunity to acknowledge an area where we still need improvement and to commit to continuing to fund and elevate voices of color.

We also partnered with three Historically Black Colleges and Universities Engineering and Computer Science programs to promote the role of African Americans in tech, to engage in ethical computing discussions, and to cultivate relationships with aspiring scientists, designers and tech leaders so they understand there is a place for them in this industry.

Over the coming year, we are looking forward to deepening our relationships with institutions that serve and support communities of color and communities that have been historically marginalized. The First Step Toward Lasting Change squarely moves us along the journey of diversity, equity, inclusion and belonging but is not the middle nor final step. There remains significant room for improvement and we are committed to continue the course in closing the gaps that exist in tech and society.  

The post Mozilla Racial Justice Commitments: One Year In appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Gary Linden, legendary surfer & Firefox fan

Mozilla Blog - Fri, 18/06/2021 - 18:00

On the internet you are never alone, and because of that at Mozilla we know that we can’t work to build a better internet alone. We believe in and rely on our community — from our volunteers, to our staff, to our users and even the parents of our staff (who also happen to be some of our power users). For Father’s Day, Mozilla’s Natalie Linden sat down with her father, big wave surf legend and surfboard maker, Gary Linden to talk the ocean, the internet and where humanity goes from here.

We should probably start by telling people who we are. I am Natalie Linden, the Director of the Creative Studio in Mozilla marketing.

And I’m Gary Linden. I’m your father. That’s probably my best accomplishment.

Awww Dad.

I make surfboards, run surfing events and surf. I’m semi-retired. Sort of.

Gary Linden

I don’t think you’re giving yourself enough credit. When I tell people I’m Gary Linden’s daughter, they always say “Gary Linden?! He’s a legend!”

You know, if you’re involved in something for all your life, and you do a reasonably good job, you’ll get old and then you’ll be the oldest one around. So of course you’ll be the legend! 

One of the things you’re the oldest guy doing is paddling into really, really big waves. 

Yeah I’m a big wave surfer, that’s been my passion. I wasn’t afraid of the ocean or of big waves, and that set me apart from most other surfers. So I got admission to a club that was pretty exclusive. And that was pretty cool. Then I started the Big Wave World Tour so younger surfers could have a career path to becoming a big wave rider. Big wave surfing takes more time and resources: you have to have the means to travel, the boards are more expensive. We weren’t seeing the younger people really be able to surf the big waves so we weren’t seeing what could be done in the peak athletic performance years. I’m pretty proud of that tour.

One of the questions I was going to ask you is why you do what you do, and I think you’re starting to answer that. The way you’ve always described it to me is that from the first time you rode a wave on a surfboard, you knew that’s what you wanted to do, and you’ve oriented your whole life around being able to surf as much as possible.

Yes. Even before I rode a surfboard, my father took me to the ocean and taught me to play in the waves, and about the currents, and body surfing. The freedom of it was like nothing else. I had asthma and hay fever, and when I was in the ocean I didn’t feel any of that. Whereas on land the pollens and the dryness just made being on the land kind of miserable. Like a fish out of water in a lot of ways. It was always rewarding for me to go into the ocean. It goes beyond just feeling good. It’s a state of mind as well. 

So you started making surfboards, too. 

I started making surfboards because surfing went into a transitional period — we all had longboards and then in the 70s, some of the Australians started experimenting with boards that were a foot shorter. There was nobody in San Diego making them, so I got a blank and shaped a board. And then I started making them for my friends, and it just set me on a path. But I’ve always made surfboards so that I could have the boards I needed to surf. If somebody else wanted one, that was fine, but I wasn’t making it for them. I was making it for me. Because surfing — not surfboard making — was my primary focus. 

How has the internet changed what you do?

Well first, the internet has made it incredibly easy to find out where the best waves of the day are. There are cameras all over the world now and you get surf reports. You don’t have to drive to the beach — you can live inland and plan ahead. And this year with the pandemic, live surfing competition was pretty much shut down. So a friend and I created a virtual surfing world tour called Surf Web Series, where we could take video clips of surfers who had gone out the prior year, take a little video of their best waves and then we’d put those in heats just like a regular event and judge them and take it all the way to a final like a surfing competition. That was a lot of fun because it filled the gaps for a lot of the kids who are surfing professionally but they couldn’t give anything back to their sponsors during the pandemic because they weren’t competing, they didn’t have a way to get exposure, they didn’t have a way to further their career. This gave them an opportunity to keep going in their career, and keep the world interested in the sport of surfing. It’s opened up another avenue for the sport. 


One of the things I really admire about you, dad, is that you never stop having ideas. You set this intention of surfing for your life, and you keep finding new ways at it. You’re 71 and you’re still growing, you’re still changing, you’re still figuring out how to use the latest tools and culture to do the thing you set out to do. It inspires me every single day. It also helps that I see it up close, because we share an office!

Well you inspire me too, because of your energy and motivation. I don’t think you’re ever going to stop either, because you are inspired, you are motivated. That’s what surfing was for me: it gave me something to focus on 100%. I love it so much, and it’s so good for me, that I don’t go snowboarding, I don’t skateboard, I don’t play football, I don’t ride bicycles. I don’t want to get hurt doing anything else. I don’t drink, or stay out at night. I just focus on being in the best shape I possibly can, so I can surf. And I’m going to do it as long as I can. And when I can’t stand up anymore, I’ll be on a belly board. And when I can’t do that, I’ll jump in the waves. I don’t really care. I just like that original feeling of going in the ocean with my dad and feeling clean and involved with the earth. My connection with the earth is the ocean. 

Speaking of staying in shape, how has your relationship with the internet changed since the pandemic?

You know the answer to this one, since we share a yoga room at our office. I started yoga about three years ago now because I wanted to be in better shape for surfing. I didn’t want to go at first because everyone was so young and as a beginner, it was intimidating. I couldn’t really keep up and felt awkward. But I found a geriatric yoga class, and it was really fun. I was getting better. Then with the pandemic, they started a zoom class. And now it’s actually even better. Because for older people, it’s still really intimidating. Now we can focus on the teacher and not feel self-conscious. It’s pretty awesome. Now you can do your work on the internet too — all the meetings and stuff. I mean, sometimes I wish the internet wasn’t there because you have to focus a lot more to stay grounded on the earth. Otherwise you’re just in that cloud. And that’s a really all-consuming place to be. I don’t think we were really made for that. So it’s another area where you have to find your balance. But you gotta find your balance in everything anyway. Even pigeon pose! 

Is there something about technology that blows your mind?

Yeah, it doesn’t ever stop. It’s like watching science fiction happen in real life. It goes so fast. I’m fortunate enough that I was born before television, so I’ve seen a lot of stuff change. It’s so rapid. If you go back in history and think about evolution, it took so long for us not to be covered in hair, and to be able to talk, and now we’re talking about having chips in our bodies to help us heal, and artificial intelligence. If you dwell on it, it’s really overwhelming for someone my age. It can be scary. But there’s a lot of positive to it, so you’ve gotta stay on that side. 

Speaking of positivity, what’s your favorite fun stuff to do online?

I like Facebook and Instagram because I get to have some kind of contact with people all over the world. I’ve got friends everywhere from my life of traveling, and when I post something, the person who comments could be someone I haven’t talked to for 50 years! I like going on Surfline to see the surf report. And I like to write and receive emails. Because it used to be such a lag! I used to write letters to my friends, and it would be a month between receipt. And you’d change in that month. But with email, you can keep the conversation going without interruption.


What’s your hope for how technology can change or improve the future? What do you want [your grandson] Nimo to be able to do?

I would like my grandson to be able to use the internet to feed and shelter the world. I don’t know how it’ll work, but you can already see… GoFundMe has helped the lives of a lot of my friends who got to my age or older, and just didn’t put anything away. Everybody throws a hundred bucks in, and all the sudden the guy’s at least got a chance to make it to hospice. That’s the kind of thing I hope we can do, that the communication will help us realize that it’s not just one person or one country versus another. It’s our world, and we have to all live together. I hope we get to the point that we see it’s a global economy, a global outcome. That we have to live as humans and not Americans or Chinese or Russians. I just think being able to communicate and see that we’re all the same, we all have the same needs. Food, shelter, companionship. If you get all that, you don’t really need anything else. 

That’s exactly why I work at Mozilla: I believe our collective future will be decided on the internet. So we need to make sure it’s a place that can breed a positive outcome. 

That’s right. And that’s the scary part we saw in the last election. All the junk that was online! So many lies! We didn’t know what was true, and what wasn’t true, and we had to decide for ourselves. We had to create our own filters. We had to choose the truth we wanted, the one that reflected the future of the world we want. 

You’re my hero, Dad. Happy Father’s Day.

Natalie Linden and her dad, Gary Linden

The post Gary Linden, legendary surfer & Firefox fan appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

What is the difference between the internet, browsers, search engines and websites?

Mozilla Blog - Thu, 17/06/2021 - 20:32

Real talk: this web stuff can get confusing. And it’s really important that we all understand how it works, so we can be as informed and empowered as possible. Let’s start by breaking down the differences between the internet, browsers, search engines and websites. Lots of us get these four things confused with each other and use them interchangeably, though they are different. In this case, the old “information superhighway” analogy comes in handy.

The internet

The internet is the superhighway’s system of roads, bridges and tunnels. It is the technical network and infrastructure that connect all the computers and devices that are online together across the world. Being connected to the internet means devices, and whoever is using them, can communicate with each other and share information.

Browsers

The browser is the car that gets you everywhere. You type a destination into the address bar and zoooom: your browser takes you anywhere on the internet. Firefox is a browser — one built specifically for people, not profit.

Search engines

Search engines like Yahoo, Google, Bing and DuckDuckGo are the compass and the map. You tell a search engine an idea of where you want to go by typing your search terms, and it gives you some possible destinations. Search engines are websites, and they can also be apps. More on apps later.

Websites and the web

Effectively, you drive along the internet highway, stopping at whatever towns, stores and roadside attractions catch your fancy, aka websites. Websites are the specific destinations you visit throughout the internet. This is the content — the webpages, websites, documents, social media, news, videos, images and so on that you view and experience via the internet. The “web” (which is short for “world wide web”, hence “www”) is the collection of all these websites.

Apps

Any program that you download and install on your device is an app. Browsers are apps. Some websites — like Facebook, YouTube, Spotify and The New York Times, for example — double up as apps, so you get the same or similar content on the app as you would on the corresponding website. 

The key thing to remember about apps, especially social media apps, is that while they are accessed via a connection to the internet (the infrastructure), content on them does not represent the full web. It’s just a slice. In addition, not everything published in an app is necessarily publicly accessible on the web. 

The web is the largest software platform ever, a great equalizer that works on any device, anywhere. By design, the web is open for anyone to participate in. Read more about Mozilla’s mission to keep the internet open and accessible to all.

Know someone who gets these things mixed up? It’s easy to do!
Pass this article along to share the knowledge.

The post What is the difference between the internet, browsers, search engines and websites? appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Celebrating our community: 10 years of the Reps Program

Mozilla Blog - Wed, 16/06/2021 - 16:33

Mozilla has always been about community and understanding that the internet is a better place when we work together. Ten years ago, Mozilla created the Reps program to add structure to our regional programs, further building off of our open source foundation. Over the last decade, the program has helped activate local communities in over 50 countries, tested Mozilla products and launches before they were released to the public, and collaborated on some of our biggest projects. 

The last decade also has seen big shifts in technology, and it has only made us at Mozilla more thankful for our volunteers and more secure in our belief that community and collaboration are key to making a better internet.  

“As the threats to a healthy internet persist, our network of collaborative communities and contributors continues to provide an essential role in helping us make it better,” said Mitchell Baker, CEO and Chairwoman of Mozilla. “These passionate Mozillians give up their time to educate, empower and mobilize others to support Mozilla’s mission and expand the impact of the open source ecosystem – a critical part of making the internet more accessible and better than how they found it.”

Ahead of our 10-year anniversary virtual celebration for the Mozilla Reps program, or ReMo for short, we connected with six of the 205 current Reps to talk about their favorite parts of the internet, why community is so important, and where the Reps program can go from here.

Please introduce yourself! What community do you represent and how long have you been in the Mozilla Reps program?

Ioana Chiorean: I am the Reps Module Owner at this time. I am part of Mozilla Romania, but have always been involved in technical communities directly, like QA, Firefox OS and support. My latest roles have been more on the advocacy side as Tech Speaker and building the Reps community. I’ve been in the Reps program since 2011.

Irvin Chen: I’m a Mozilla Rep from Taipei, Taiwan. I’m representing the Mozilla Taiwan Community, one of the oldest Mozilla communities.

Lidya Christina: I’m a Mozilla Rep from Jakarta, Indonesia. I’ve been involved in the Reps program for more than two years now. I am also part of the review and resources team, provide operational support for the Mozilla community space in Jakarta, and translate for the Mozilla localization project.

Michael Kohler: I have been part of the Reps program since 2012, and I am currently a Reps Peer helping out with strategy-related topics within the Reps program. After organizing events and building the community in Switzerland, I moved to Berlin in 2018 and started to help there. In the past 13 years I have worked on different Mozilla products such as Firefox, Firefox OS and Common Voice. 

Pranshu Khanna: I’m Pranshu Khanna, a Reps Council Member for the current term and a Rep from Mozilla Gujarat. I started my journey as a Firefox Student Ambassador at an event in January 2016, where my first contribution was to introduce the world of open source to over 150 college students. Since then, I’ve spoken to thousands of people across the world about privacy, the open web and open source, and have been a part of hundreds of events, programs and initiatives.

Robert Sayles: Currently, I reside in Dallas, Texas, and I represent the North American community. I first joined the Mozilla Reps program in 2012, focusing mainly on volunteering as the Mozilla Festival Volunteer Coordinator in 2013.

What part of the internet do you get the most joy from?

Irvin: For me, the most exciting thing about the internet is that no matter who you are or where you are located, you can always find and make some friends on the internet. For example, even while far apart from each other, we could still collaborate online and successfully host the Firefox release party in the early 2000s. Mozilla gives us, the local community contributors, the opportunity to participate, contribute and learn from each other on a global scale.

Michael: Nyan Cat is probably the part of the internet that I get most joy from. Kidding aside, for me the best part of the internet is probably the possibility to learn new astonishing facts about things I otherwise would never have looked up. All the knowledge is a few clicks away.

Pranshu: The most joyful moments on the internet have always come from being connected to people. Back in 2006, that meant chat boards over a 256Kbps dial-up modem, connecting with people about anything, and scrapping people on Orkut (remember that?). It’s been a ride, and now I speak every day through FaceTime to my mother, who is thousands of miles away, and to my colleagues across the world. Without the internet, I would have remained a kid in a small town in India who could not have imagined a world this big. It helped me embrace the idea of open knowledge and learn so much.

Why did you join the Mozilla family?

Lidya: I started in 2016, when I attended an offline localization event at the Mozilla community space in Jakarta for the first time. I have continued to be involved in localization (L10N) events since then, and I also joined the Mozilla Indonesia community to help manage events and the community space in Jakarta.

What keeps me engaged with the community is that it is a supportive environment where the opportunities to learn, both locally and globally, are plentiful.

Michael: When I was in high school, one of my teachers was a Firefox contributor. At some point he showed us what he was working on, and that got me hooked on Mozilla. Even back then I had a big interest in open source, but it hadn’t occurred to me to contribute until that moment. I was mostly impressed by the kindness and the willingness to help volunteers contribute to Mozilla’s mission and products. I didn’t have much in-person contact with the community for the first three years, but the more Mozillians I got to know around the world, the more I felt like I belonged in this community. I have found friends from all over the world through my involvement with Mozilla!

Pranshu: Roots. Mozilla has had its roots in activism since the time the internet was born, and my connection with the Mozilla manifesto was instant. I realized that it wasn’t just marketing fluff: this is a community built with the same passion as the company, growing from a small group of developers working to build not just a browser, but users’ freedom of choice. Mozilla’s community is central to how it started and where it’s headed, and, if you’re committed to being a part of the journey, to shaping the future of the internet. I have been a part of protesting Aadhaar for user privacy, building India’s national privacy law, mentoring open source leaders, and much, much more. I’m so grateful to be part of this family that genuinely wants to help people fall in love with what they are doing.

What is your favorite Mozilla product or Firefox project, and why?

Lidya: Besides the browser, my top favorite projects are Pontoon (the localization tool) and Firefox Monitor, which notifies me if my account was part of a data breach.

Michael: My favorite Mozilla product has got to be Firefox. I’ve been a Firefox user for a long time, and since 2008 I’ve been using Firefox Nightly (appropriately called “Minefield” back then). Since then I have been an avid advocate for Firefox and have suggested it to everyone who wasn’t already using it. Thanks to Firefox, my software engineering knowledge grew over time and helps me in my career to this day. And all that, of course, apart from it being my window to the online world!

Pranshu: I love Common Voice! If I could use emojis, this would be filled with hearts. Common Voice is such a noble project, helping give people around the world a voice. The beauty of the project is how it democratizes locales and gives people across all demographics a voice in the binary technological world.

Robert: I enjoyed working with Firefox Flicks many moons ago. As a Mozilla Rep, I had the privilege of interacting with the many talented creators and exploring how they expressed themselves; I thought it was fantastic.

Mozilla uses the term “community” quite a bit, and it means different things to different people – what does the Mozilla community mean to you?

Ioana: For me, it literally means the people. Especially those who dedicate their free time to help others, to volunteer. It is the place where I grew up as a professional and learned so much about different cultures worldwide.

Pranshu: The Mozilla community is my family. I’ve met so many people across the world who passionately believe in the open web. This is a very different ecosystem from what the world usually considers a community; we are really close to each other. After all, doing good is a part of all of our code.

Robert: The Mozilla community means everyone brings something different to the table; I have witnessed a powerful movement over the years. When everyone gets together and shares their knowledge, we can make a difference in the world.

How has the ReMo program evolved over the past decade, and where do you think the program is headed?

Irvin: The Reps program has played an important role in connecting isolated local communities. With regular meetups and events, we can meet each other, receive regular updates from various projects, and collaborate on different efforts. As a community with years of history, we can extend our help beyond local users to Mozillians abroad by sharing our experience with things like community building, planning events, and setting up local websites.

Michael: In past years, Reps have continued to provide important knowledge about their regions, for example by organizing bug hunting events to test local websites and make sure they work in Firefox Quantum. Quite a few bugs would have slipped through without volunteers testing local websites that Mozilla employees couldn’t have tested themselves. Additionally, Reps have always been great at coordinating communities and helping out with conflicts in the community.

I see a bright future for the Reps program. Mozilla can do so much more with the help of volunteers, and Mozilla Reps is the perfect program to help coordinate, find and grow communities to advance Mozilla’s vision and mission in the years to come.

Pranshu: Over the last decade, the ReMo program has evolved from helping people read, write and build on the internet to making the whole ecosystem better by creating leaders and helping users focus on their privacy. The program is headed toward creating pillars in society that are committed to catalysing collaboration among diverse communities for the common good, destroying the silos that divide people. ReMo has Reps across the world, and I can imagine the community building great things together.

The post Celebrating our community: 10 years of the Reps Program appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Ludovic Hirlimann: My geeking plans for this summer

Thunderbird - to, 07/05/2015 - 10:39

During July I’ll be visiting family in Mongolia, but I also have a few very geeky things I want to do there.

The first thing I want to do is plug in the RIPE Atlas probes I have. They are little devices that look like this:

(Photo: “Hello @ripe #Atlas !”)

They enable anybody with a RIPE Atlas or RIPE account to run measurements such as DNS queries and more. This helps make the internet better globally. I have three of these probes I’d like to install, which is good because, last time I checked, Mongolia didn’t have a single active probe. These probes will also help the internet get better in Mongolia. I’ll need to buy some network cables before leaving, because finding those in Mongolia is going to be challenging. More on Atlas at https://atlas.ripe.net/.

The second thing I intend to do is map Mongolia a bit better, through two projects. The first is related to Mozilla and maps GPS coordinates to wifi access points. Only a small part of the capital, Ulaanbaatar, is covered, as you can see at https://location.services.mozilla.com/map#11/47.8740/106.9485. I want this to grow, because having an open data source for this will be important in the future. As mapping is my new thing, I’ll probably also edit OpenStreetMap to make the urban parts of Mongolia that I visit far more usable on all the services that use OSM as a source of truth. There is already a project to map the capital city at http://hotosm.org/projects/mongolia_mapping_ulaanbaatar, but I believe OSM can serve more than just 50% of Mongolia’s population.

I got inspired to write this post by my son this morning; look what he is doing at 17 months:

Geeking on a Sun keyboard at 17 months
Categorieën: Mozilla-nl planet

Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - to, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that both I would be reading aloud as the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox gecko engine on an android linux kernel where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.


Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
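As a rough sketch of that advice, here is what a cache read/write might look like with localForage's promise-based getItem/setItem API; the cache key is made up for this example, and localforage is assumed to be loaded as a global:

// localForage keeps the localStorage key/value model, but is
// asynchronous and defaults to IndexedDB under the hood.
localforage.setItem("html-cache", "<section>cached UI</section>").then(function() {
  return localforage.getItem("html-cache");
}).then(function(cachedHtml) {
  // Unlike raw localStorage, nothing here blocks the event loop.
  console.log("got cached HTML:", cachedHtml);
});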

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.
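To make the distinction concrete, here is a tiny sketch that just logs when each event fires, using only standard DOM APIs:

// Fires once the document is parsed and non-async scripts have run.
document.addEventListener("DOMContentLoaded", function() {
  console.log("DOMContentLoaded at", Date.now());
});
// Fires once stylesheets, images, and other subresources are in too.
window.addEventListener("load", function() {
  console.log("load at", Date.now());
});
// Neither event says anything about app-level loading,
// such as pending IndexedDB reads.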

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.
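Roughly, the synchronous startup path looks like the sketch below.  The cache key, the element id, and the validity check are hypothetical stand-ins for illustration, not the email app's actual code:

// Runs synchronously, after the stylesheet has loaded.
var cachedHtml = localStorage.getItem("html-cache"); // hypothetical key
// Hypothetical check: e.g. not a compose or notification launch.
if (cachedHtml && startupWantsCachedInbox()) {
  // Synchronous insertion; no async read has to wait for its
  // turn on the event loop.
  document.getElementById("cards").innerHTML = cachedHtml;
}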

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard CommonJS loader semantics used by node.js and io.js are what you see on the first line.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  Loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
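In sketch form, with an AMD-style require and illustrative module ids (not the app's real ones), the preload amounts to something like this:

// After the message list card is shown, warm up the cards it can open.
function preloadNextCards() {
  require(["cards/compose", "cards/message_reader"], function() {
    // Nothing to do in the callback: requesting the modules is the
    // point, so the optimized bundles are fetched, parsed, and
    // cached before the user ever taps.
  });
}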

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
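A minimal sketch of that page/worker split follows; the file name and message shapes are invented for illustration:

// Page side: spin the back-end up off the main thread.
var backend = new Worker("js/mail-backend.js"); // illustrative name
backend.onmessage = function(evt) {
  console.log("back-end says:", evt.data.type);
};
backend.postMessage({ type: "syncFolder", folder: "INBOX" });

// Worker side (js/mail-backend.js): no DOM access, just messages.
// onmessage = function(evt) {
//   // ...do the heavy emaily email work here...
//   postMessage({ type: "syncFolderComplete" });
// };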

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.
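In sketch form (the element id and row height here are made up, not the app's real constants):

// One cheap container sized so the scrollbar represents the whole
// folder, even though only a few screens of real nodes ever exist.
var ITEM_HEIGHT = 80; // px per message summary row (illustrative)
function sizeScrollRegion(messageCount) {
  document.getElementById("scroll-region").style.height =
    (messageCount * ITEM_HEIGHT) + "px";
}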

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
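The remove/update/re-append dance boils down to something like this sketch, where bindMessage is a hypothetical helper that fills in the summary contents:

// Detach first so APZ sees the node as "new" with a static position
// and keeps the whole list in one layer instead of one per item.
function recycleNode(node, newTop, message) {
  var parent = node.parentNode;
  parent.removeChild(node);        // leave the DOM...
  node.style.top = newTop + "px";  // ...move while detached...
  bindMessage(node, message);      // ...refresh contents (hypothetical)
  parent.appendChild(node);        // ...and come back as "new"
}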

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome to let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

Categorieën: Mozilla-nl planet

Joshua Cranmer: Breaking news

Thunderbird - wo, 01/04/2015 - 09:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

Categorieën: Mozilla-nl planet

Joshua Cranmer: Why email is hard, part 8: why email security failed

Thunderbird - ti, 13/01/2015 - 05:38
This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others failed?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].
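For a sense of what that looks like on the wire, here is a schematic (not real) DKIM-Signature header; the domain, selector, and base64 blobs are placeholders:

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com;
 s=sel2014; h=from:to:subject:date;
 bh=<base64 hash of the body>; b=<base64 signature>

The d= domain and s= selector tell the receiving server where in DNS to find the public key (at sel2014._domainkey.example.com in this example).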

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriad cryptography-enthusiast mailing lists, which often like to partake in propositions of new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I see it as unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped on many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to take other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser known is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 in a single folder—would still prefer to not have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

Categorieën: Mozilla-nl planet

Joshua Cranmer: A unified history for comm-central

Thunderbird - sn, 10/01/2015 - 18:55
Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step, and probably the hardest, is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy after I wrote this post on git.mozilla.org. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash
set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204
# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a
# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are simple directory-wide changes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I replaced the % in the usernames with the @ that they usually appear with in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
  hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
    | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that in this case there's no need to (we're cloning the only head). At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.

Categorieën: Mozilla-nl planet

Philipp Kewisch: Monitor all http(s) network requests using the Mozilla Platform

Thunderbird - to, 02/10/2014 - 16:38

In an xpcshell test, I recently needed a way to monitor all network requests and access both request and response data so I can save them for later use. This required a little bit of digging in Mozilla’s devtools code so I thought I’d write a short blog post about it.

This code will be used in a testcase that ensures that calendar providers in Lightning function properly. In the case of the CalDAV provider, we would need to access a real server for testing. We can’t just set up a few servers and use them for testing; that would end in an unreasonable amount of server maintenance. Given that non-local connections are not allowed when running the tests on the Mozilla build infrastructure, it wouldn’t work anyway. The solution is to create a fakeserver that is able to replay the requests in the same way. Instead of manually making the requests and figuring out how the server replies, we can use this code to quickly collect all the requests we need.

Without further delay, here is the code you have been waiting for:

/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

var allRequests = [];

/**
 * Add the following function as a request observer:
 *   Services.obs.addObserver(httpObserver, "http-on-examine-response", false);
 *
 * When done listening on requests:
 *   dump(allRequests.join("\n===\n")); // print them
 *   dump(JSON.stringify(allRequests, null, " ")); // jsonify them
 */
function httpObserver(aSubject, aTopic, aData) {
  if (aSubject instanceof Components.interfaces.nsITraceableChannel) {
    let request = new TracedRequest(aSubject);
    request._next = aSubject.setNewListener(request);
    allRequests.push(request);
  }
}

/**
 * This is the object that represents a request/response and also collects the data for it
 *
 * @param aSubject The channel from the response observer.
 */
function TracedRequest(aSubject) {
  let httpchannel = aSubject.QueryInterface(Components.interfaces.nsIHttpChannel);
  let self = this;

  // Copy request and response headers into plain objects.
  this.requestHeaders = Object.create(null);
  httpchannel.visitRequestHeaders({
    visitHeader: function(k, v) { self.requestHeaders[k] = v; }
  });
  this.responseHeaders = Object.create(null);
  httpchannel.visitResponseHeaders({
    visitHeader: function(k, v) { self.responseHeaders[k] = v; }
  });

  this.uri = aSubject.URI.spec;
  this.method = httpchannel.requestMethod;
  this.requestBody = readRequestBody(aSubject);
  this.responseStatus = httpchannel.responseStatus;
  this.responseStatusText = httpchannel.responseStatusText;
  this._chunks = [];
}

TracedRequest.prototype = {
  uri: null,
  method: null,
  requestBody: null,
  requestHeaders: null,
  responseStatus: null,
  responseStatusText: null,
  responseHeaders: null,
  responseBody: null,

  toJSON: function() {
    let j = Object.create(null);
    for (let m of Object.keys(this)) {
      if (typeof this[m] != "function" && m[0] != "_") {
        j[m] = this[m];
      }
    }
    return j;
  },

  onStartRequest: function(aRequest, aContext) this._next.onStartRequest(aRequest, aContext),

  onStopRequest: function(aRequest, aContext, aStatusCode) {
    this.responseBody = this._chunks.join("");
    this._chunks = null;
    this._next.onStopRequest(aRequest, aContext, aStatusCode);
    this._next = null;
  },

  onDataAvailable: function(aRequest, aContext, aStream, aOffset, aCount) {
    let binaryInputStream = Components.classes["@mozilla.org/binaryinputstream;1"]
                                      .createInstance(Components.interfaces.nsIBinaryInputStream);
    let storageStream = Components.classes["@mozilla.org/storagestream;1"]
                                  .createInstance(Components.interfaces.nsIStorageStream);
    let outStream = Components.classes["@mozilla.org/binaryoutputstream;1"]
                              .createInstance(Components.interfaces.nsIBinaryOutputStream);

    binaryInputStream.setInputStream(aStream);
    storageStream.init(8192, aCount, null);
    outStream.setOutputStream(storageStream.getOutputStream(0));

    // Copy the chunk for ourselves, then pass an equivalent stream on
    // to the next listener so the channel still works normally.
    let data = binaryInputStream.readBytes(aCount);
    this._chunks.push(data);
    outStream.writeBytes(data, aCount);
    this._next.onDataAvailable(aRequest, aContext,
                               storageStream.newInputStream(0),
                               aOffset, aCount);
  },

  toString: function() {
    let str = this.method + " " + this.uri;
    for (let hdr of Object.keys(this.requestHeaders)) {
      str += hdr + ": " + this.requestHeaders[hdr] + "\n";
    }
    if (this.requestBody) {
      str += "\r\n" + this.requestBody + "\n";
    }
    str += "\n" + this.responseStatus + " " + this.responseStatusText;
    if (this.responseBody) {
      str += "\r\n" + this.responseBody + "\n";
    }
    return str;
  }
};

// Taken from:
// http://hg.mozilla.org/mozilla-central/file/2399d1ae89e9/toolkit/devtools/webconsole/network-helper.js#l120
function readRequestBody(aRequest, aCharset="UTF-8") {
  let text = null;
  if (aRequest instanceof Ci.nsIUploadChannel) {
    let iStream = aRequest.uploadStream;

    let isSeekableStream = false;
    if (iStream instanceof Ci.nsISeekableStream) {
      isSeekableStream = true;
    }

    let prevOffset;
    if (isSeekableStream) {
      prevOffset = iStream.tell();
      iStream.seek(Ci.nsISeekableStream.NS_SEEK_SET, 0);
    }

    // Read data from the stream.
    try {
      let rawtext = NetUtil.readInputStreamToString(iStream, iStream.available());
      let conv = Components.classes["@mozilla.org/intl/scriptableunicodeconverter"]
                           .createInstance(Components.interfaces.nsIScriptableUnicodeConverter);
      conv.charset = aCharset;
      text = conv.ConvertToUnicode(rawtext);
    } catch (err) {
    }

    // Seek locks the file, so seek to the beginning only if necko hasn't
    // read it yet, since necko doesn't seek to 0 before reading (at least
    // not till bug 459384 is fixed).
    if (isSeekableStream && prevOffset == 0) {
      iStream.seek(Components.interfaces.nsISeekableStream.NS_SEEK_SET, 0);
    }
  }
  return text;
}


Categorieën: Mozilla-nl planet

Ludovic Hirlimann: Tips on organizing a pgp key signing party

Thunderbird - mo, 29/09/2014 - 13:03

Over the years I’ve organized, or tried to organize, PGP key signing parties every time I go somewhere. In the last year I’ve organized three that were successful (i.e. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel I was staying at; that doesn’t work. Having catering at the venue is even better, as it will encourage people to come from far away (or make a long commute). Try to mark the path through the venue with signs (paper with “PGP key signing party” and arrows helps).

2. Date and time

Meeting in the evening after work works best (starting after 18:00 or 18:30).

Let people know how long it will take (count 1 hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer/cola etc. you’ll need to provide if you cater food.

I’ve been using Eventbrite to manage attendance at my last three meetings; it lets me:

  • know who is coming
  • mass-mail participants
  • give them a calendar reminder
4. Reach out

For such a party you need people to attend so you need to reach out.

I always start with a search on biglumber.com to find the GPG users registered on that site in the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send them an announcement with:

  • date
  • venue
  • link to the Eventbrite page and why I use it
  • ask them to forward (they know the area better than you)
  • I also use Lanyrd and Twitter, but I’m not convinced that they work.

For my last announcement, it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014

Hello, my name is Ludovic. I'm a sysadmin at Mozilla, working remote from Europe. I've been involved with Thunderbird a lot (and still am).

I'm organizing a PGP key signing party in the Mozilla San Francisco office on September the 26th 2014, from 6PM to 8PM.

For security and assurance reasons I need to count how many people will attend. I've set up an Eventbrite page for that at https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165 (please take one ticket if you think about attending - if you change your mind, cancel so more people can come).

I will use the Eventbrite tool to send reminders, and I will try to make a list with keys and fingerprints before the event to make things more manageable (but I don't promise).

For those using Lanyrd, you will be able to use http://lanyrd.com/ccckzw.

Ludovic

ps: sent to buug.org, nblug.org and penlug.org - please feel free to post where appropriate (the more the merrier, the stronger the web of trust).
ps2: I have contacted people listed on Biglumber to have more GPG-related people show up.

--
[:Usul] MOC Team at Mozilla
QA Lead for Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own scripts to make one); a minimal sketch of such a list generator follows below. It makes things easier for you and for the attendees. Tell people what they need to bring (IDs, a pen, and printed fingerprints if you don’t provide a list).
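To illustrate (this sketch is mine, not from the post, and the names and fingerprints in it are invented), a tiny script that turns attendee data into a printable verification sheet:

// checklist.js - format attendee fingerprints into a printable sheet.
// The attendee data below is made up for the example.
var attendees = [
  { name: "Alice Example", fpr: "85E3 8F69 046B 44C1 EC9F  B07B 76D7 8F05 00D0 26C4" },
  { name: "Bob Example",   fpr: "D869 2C56 7C92 6FA7 C7B1  2C1C 4CC6 AF8F 8C76 A2DB" }
];

attendees.forEach(function(a, i) {
  // One entry per attendee, with boxes to tick after checking the photo ID
  // and comparing the fingerprint read aloud against the printed one.
  console.log((i + 1) + ". " + a.name);
  console.log("   " + a.fpr);
  console.log("   [ ] ID checked   [ ] fingerprint verified");
  console.log("");
});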

6. Send reminders

Send people reminders and let them know how many people intend to show up; it boosts attendance.


Ludovic Hirlimann: Gnupg / PGP key signing party in mozilla's San francisco space

Thunderbird - Wed, 17/09/2014 - 02:35

I’m organizing a PGP key signing party in the Mozilla San Francisco office on September the 26th 2014, from 6PM to 8PM.

For security and assurance reasons I need to count how many people will attend. I’ve set up an Eventbrite page for that at https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165 (please take one ticket if you think about attending - if you change your mind, cancel so more people can come).

I will use the Eventbrite tool to send reminders, and I will try to make a list with keys and fingerprints before the event to make things more manageable (but I don’t promise).

For those using Lanyrd, you will be able to use http://lanyrd.com/ccckzw. (Please tweet the event to get more people in.)


Joshua Cranmer: Why email is hard, part 7: email security and trust

Thunderbird - Wed, 06/08/2014 - 05:39
This post is part 7 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. This part discusses the problem of trust.

At a technical level, S/MIME and PGP (or at least PGP/MIME) use cryptography essentially identically. Yet the two are treated as radically different models of email security because they diverge on the most important question of public key cryptography: how do you trust the identity of a public key? Trust is critical, as it is the only way to stop an active, man-in-the-middle (MITM) attack. MITM attacks are actually easier to pull off in email, since all email messages effectively have to pass through both the sender's and the recipients' email servers [1], allowing attackers to mount permanent, long-lasting MITM attacks [2].

S/MIME uses the same trust model that SSL uses, based on X.509 certificates and certificate authorities. X.509 certificates effectively work by providing a certificate that says who you are, signed by another authority. In the original concept (as you might guess from the name "X.509"), the trusted authority was your telecom provider, and the certificates were furthermore intended to be a part of the global X.500 directory—a natural extension of the OSI internet model. The OSI model of the internet never gained traction, and the trusted telecom providers were replaced with trusted root CAs.
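To make the delegation concrete, here is a minimal sketch (my illustration, not the author's) of the chain-walking idea behind X.509 validation; checkSignature is a hypothetical stand-in for the real cryptographic check, and real validation also involves expiry, revocation, and name constraints:

// Hypothetical sketch: a certificate chain is trusted if each certificate
// is signed by the next one up, and the chain ends at a pre-trusted root.
function chainIsTrusted(chain, trustedRoots) {
  for (var i = 0; i < chain.length - 1; i++) {
    if (!checkSignature(chain[i], chain[i + 1])) { // hypothetical helper
      return false;
    }
  }
  var root = chain[chain.length - 1];
  return trustedRoots.some(function(r) {
    return r.fingerprint === root.fingerprint;
  });
}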

PGP, by contrast, uses a trust model that's generally known as the Web of Trust. Every user has a PGP key (containing their identity and their public key), and users can sign others' public keys. Trust generally flows from these signatures: if you trust a user, you know the keys that they sign are correct. The name "Web of Trust" comes from the vision that trust flows along the paths of signatures, building a tight web of trust.

And now for the controversial part of the post, the comparisons and critiques of these trust models. A disclaimer: I am not a security expert, although I am a programmer who revels in dreaming up arcane edge cases. I also don't use PGP at all, and use S/MIME to a very limited extent for some Mozilla work [3], although I made a few abortive attempts to dogfood it in the past. I've attempted to replace personal experience with comprehensive research [4], but most existing critiques and comparisons of these two trust models are about 10-15 years old and predate several changes to CA certificate practices.

A basic tenet of development that I have found is that the average user is fairly ignorant. At the same time, a lot of the defense of trust models, both CAs and Web of Trust, tends to hinge on configurability. How many people, for example, know how to add or remove a CA root from Firefox, Windows, or Android? Even among the subgroup of Mozilla developers, I suspect the number of people who know how to do so is rather small. Or in the case of PGP, how many people know how to change the maximum path length? Or even understand the security implications of doing so?

Seen in the light of ignorant users, the Web of Trust is a UX disaster. Its entire security model is predicated on having users precisely specify how much they trust other people to trust others (ultimate, full, marginal, none, unknown) and also on having them continually do out-of-band verification procedures and publicly reporting those steps. In 1998, a seminal paper on the usability of a GUI for PGP encryption came to the conclusion that the UI was effectively unusable for users, to the point that only a third of the users were able to send an encrypted email (and even then, only with significant help from the test administrators), and a quarter managed to publicly announce their private keys at some point, which is pretty much the worst thing you can do. They also noted that the complex trust UI was never used by participants, although the failure of many users to get that far makes generalization dangerous [5]. While newer versions of security UI have undoubtedly fixed many of the original issues found (in no small part due to the paper, one of the first to argue that usability is integral, not orthogonal, to security), I have yet to find an actual study on the usability of the trust model itself.

The Web of Trust has other faults. The notion of "marginal" trust, it turns out, is rather broken: if you marginally trust a user who has two keys that both signed another person's key, that's the same as fully trusting a user with one key who signed that key. There are several proposals for different trust formulas [6], but none of them have caught on in practice to my knowledge.
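For concreteness, here is a sketch (mine, not the author's) of the classic per-key counting rule behind marginal trust; GnuPG's defaults are 1 full or 3 marginal signatures, and the thresholds are parameters here:

// Classic key-validity rule: a key is valid if signed by enough fully or
// marginally trusted keys. The flaw described above: signatures are
// counted per signing *key*, not per *person*.
function keyIsValid(signerTrusts, fullsNeeded, marginalsNeeded) {
  var full = 0, marginal = 0;
  signerTrusts.forEach(function(t) {
    if (t === "full") full++;
    else if (t === "marginal") marginal++;
  });
  return full >= fullsNeeded || marginal >= marginalsNeeded;
}

// One marginally trusted person holding two keys crosses a two-marginal
// threshold alone, as if they were a single fully trusted signer:
console.log(keyIsValid(["marginal", "marginal"], 1, 2)); // true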

A hidden fault is associated with its manner of presentation: in sharp contrast to CAs, the Web of Trust appears not to delegate trust, but any practical widespread deployment needs to solve the problem of contacting people who have had no prior contact. Combined with the need to bootstrap new users, this implies that there need to be some keys that have signed a lot of other keys and that are essentially default-trusted—in other words, a CA, a fact sometimes lost on advocates of the Web of Trust.

That said, a valid point in favor of the Web of Trust is that it more easily allows people to distrust CAs if they wish to. While I'm skeptical of its utility to a broader audience, the ability to do so is crucial for a not-insignificant portion of the population, and it's important enough to be explicitly called out.

X.509 certificates are most commonly discussed in the context of SSL/TLS connections, so I'll discuss them in that context as well, as the implications for S/MIME are mostly the same. Almost all criticism of this trust model essentially boils down to a single complaint: certificate authorities aren't trustworthy. A historical criticism is that the addition of CAs to the main root trust stores was ad hoc. Since then, however, the main oligopoly of these root stores (Microsoft, Apple, Google, and Mozilla) have made their policies public and clear [7]. The introduction of the CA/Browser Forum in 2005, with a collection of major CAs and the major browsers as members [8], helps in articulating common policies. These policies, simplified immensely, boil down to:

  1. You must verify information (depending on certificate type). This information must be relatively recent.
  2. You must not use weak algorithms in your certificates (e.g., no MD5).
  3. You must not make certificates that are valid for too long.
  4. You must maintain revocation checking services.
  5. You must have fairly stringent physical and digital security practices and intrusion detection mechanisms.
  6. You must be [externally] audited every year to show that you follow the above rules.
  7. If you screw up, we can kick you out.

I'm not going to claim that this is necessarily the best policy or even that any policy can feasibly stop intrusions from happening. But it's a policy, so CAs must abide by some set of rules.

Another CA criticism is the fear that they may be suborned by national government spy agencies. I find this claim underwhelming, considering that the number of certificates acquired by intrusions that were used in the wild is larger than the number of certificates acquired by national governments that were used in the wild: 1 and 0, respectively. Yet no one complains about the untrustworthiness of CAs due to their ability to be hacked by outsiders. Another criticism is that CAs are controlled by profit-seeking corporations; this misses the point, because the business of CAs is not selling certificates but selling their access to the root databases. As we will see shortly, jeopardizing that access is a great way for a CA to go out of business.

To understand issues involving CAs in greater detail, there are two CAs that are particularly useful to look at. The first is CACert. CACert is favored by many for its attempt to handle X.509 certificates in a Web of Trust model, so invariably every public discussion about CACert ends up devolving into an attack on other CAs for their perceived capture by national governments or corporate interests. Yet what many of the proponents of CACert's inclusion miss (or dismiss) is the fact that CACert actually failed the required audit, and it is unlikely ever to pass one. This shows a central failure of both CAs and the Web of Trust: different people have different definitions of "trust," and in the case of CACert, some people are favoring a subjective definition (I trust their owners because they're not evil) when an objective definition fails (in this case, that the root signing key is securely kept).

The other CA of note here is DigiNotar. In July 2011, some hackers managed to acquire a few fraudulent certificates by hacking into DigiNotar's systems. By late August, people had become aware of these certificates being used in practice [9] to intercept communications, mostly in Iran. The use appears to have been caught after Chromium updates failed due to invalid certificate fingerprints. After it became clear that the fraudulent certificates were not limited to a single fake Google certificate, and that DigiNotar had failed to notify potentially affected companies of its breach, DigiNotar was swiftly removed from all of the trust databases. It ended up declaring bankruptcy within two weeks.

DigiNotar indicates several things. One, SSL MITM attacks are not theoretical (I have seen at least two or three security experts advising pre-DigiNotar that SSL MITM attacks are "theoretical" and therefore the wrong target for security mechanisms). Two, keeping the trust of browsers is necessary for the commercial operation of CAs. Three, the notion that a CA is "too big to fail" is false: DigiNotar played an important role in the Dutch community as a major CA and the operator of the Staat der Nederlanden CA. Yet when DigiNotar screwed up and lost its trust, it was swiftly kicked out despite this role. I suspect that even Verisign could be kicked out if it managed to screw up badly enough.

This isn't to say that the CA model isn't problematic. But the source of its problems is that delegating trust isn't a feasible model in the first place, a problem that it shares with the Web of Trust as well. Different notions of what "trust" actually means and the uncertainty that gets introduced as chains of trust get longer both make delegating trust weak to both social engineering and technical engineering attacks. There appears to be an increasing consensus that the best way forward is some variant of key pinning, much akin to how SSH works: once you know someone's public key, you complain if that public key appears to change, even if it appears to be "trusted." This does leave people open to attacks on first use, and the question of what to do when you need to legitimately re-key is not easy to solve.
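As an illustration of the pinning idea (my sketch, not from the post; the storage and names are invented):

// Trust-on-first-use (TOFU) pinning, as in SSH's known_hosts file.
var pins = {}; // peer -> first-seen key fingerprint (would be persisted)

function checkPin(peer, fingerprint) {
  if (!(peer in pins)) {
    pins[peer] = fingerprint; // first use: trust and remember
    return "pinned";
  }
  if (pins[peer] === fingerprint) {
    return "ok"; // matches what we saw before
  }
  // Key changed: either a legitimate re-key or a MITM - the hard case
  // that pinning alone cannot distinguish.
  return "mismatch";
}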

In short, both CAs and the Web of Trust have issues. Whether or not you should prefer S/MIME or PGP ultimately comes down to the very conscious question of how you want to deal with trust—a question without a clear, obvious answer. If I appear to be painting CAs and S/MIME in a positive light and the Web of Trust and PGP in a negative one in this post, it is more because I am trying to focus on the positions less commonly taken to balance perspective on the internet. In my next post, I'll round out the discussion on email security by explaining why email security has seen poor uptake and answering the question as to which email security protocol is most popular. The answer may surprise you!

[1] Strictly speaking, you can bypass the sender's SMTP server. In practice, this is considered a hole in the SMTP system that email providers are trying to plug.
[2] I've had 13 different connections to the internet in the same time as I've had my main email address, not counting all the public wifis that I have used. Whereas an attacker would find it extraordinarily difficult to intercept all of my SSH sessions for a MITM attack, intercepting all of my email sessions is clearly far easier if the attacker were my email provider.
[3] Before you read too much into this personal choice of S/MIME over PGP, it's entirely motivated by a simple concern: S/MIME is built into Thunderbird; PGP is not. As someone who does a lot of Thunderbird development work that could easily break the Enigmail extension locally, needing to use an extension would be disruptive to workflow.
[4] This is not to say that I don't heavily research many of my other posts, but I did go so far for this one as to actually start going through a lot of published journals in an attempt to find information.
[5] It's questionable how well the usability of a trust model UI can be measured in a lab setting, since the observer effect is particularly strong for all metrics of trust.
[6] The web of trust makes a nice graph, and graphs invite lots of interesting mathematical metrics. I've always been partial to eigenvectors of the graph, myself.
[7] Mozilla's policy for addition to NSS is basically the standard policy adopted by all open-source Linux or BSD distributions, seeing as OpenSSL never attempted to produce a root database.
[8] It looks to me that it's the browsers who are more in charge in this forum than the CAs.
[9] To my knowledge, this is the first—and so far only—attempt to actively MITM an SSL connection.

