
Our Year in Review: How we’ve kept Firefox working for you in 2020

Mozilla Blog - Tue, 15/12/2020 - 14:59

This year began like any other year, with our best intentions and resolutions to carry out. Then by March, the world changed and everyone’s lives — personally and professionally — turned upside down. Despite that, we kept to our schedule to release a new Firefox every month and we were determined to keep Firefox working for you during challenging times.

We shifted our focus to work on features aimed at helping people adjust to the new way of life, and we made Firefox faster so that you could get more things done. It’s all part of fulfilling our promise to build a better internet for people. So, as we eagerly look to the end of 2020, we look back at this unprecedented year and present you with our list of top features that made 2020 a little easier.

Keeping Calm and Carrying on

How do you cope with this new way of life spent online? Here are the Firefox features we added this year, aimed at bringing some zen into your life.

  • Picture-in-Picture: An employee favorite, we rolled out Picture-in-Picture to Mac and Linux, making it available on all platforms, where previously it was only available on Windows. We continued to improve Picture-in-Picture throughout the year — adding features like keyboard controls for fast forward and rewind — so that you could multitask like never before. We, too, were seeking calming videos; eyeing election results; and entertaining the little ones while trying to juggle home and work demands.
  • No more annoying notifications: We all started browsing more as the web became our window into the outside world, so we replaced annoying notification request pop-ups to stop interrupting your browsing, and added a speech bubble in the address bar when you interacted with the site.
  • Pocket article recommendations: We brought our delightful Pocket article recommendations to Firefox users beyond the US, to Austria, Belgium, Germany, India, Ireland, Switzerland, and the United Kingdom. For anyone wanting to take a pause on doom scrolling, simply open up a new tab in Firefox and check out the positivity in the Pocket article recommendations.
  • Ease eye strain with larger screen view: We all have been staring at the screen for longer than we ever thought we should. So, we’ve improved the global level zoom setting so you can set it and forget it. Then, every website can appear larger, should you wish, to ease eye strain. We also made improvements to our high contrast mode which made text more readable for users with low vision.

 


Getting you faster to the places you want to visit

We also looked under the hood of Firefox to improve the speed and search experiences so you could get things done no matter what 2020 handed you.

  • Speed: We made Firefox faster than ever with improved performance on both page loads and start-up time. For those who want the technical details:
      • Websites that use flexbox-based layouts load 20% faster than before;
      • Restoring a session is 17% quicker, meaning you can more quickly pick up where you left off;
      • For Windows users, opening new windows got quicker by 10%;
      • Our JavaScript engine got a revamp, improving page load performance by up to 15%, improving page responsiveness by up to 12%, and reducing memory usage by up to 8%, all the while making it more secure.
  • Search made faster: We were searching constantly this year — what is coronavirus; do masks work; and what is the electoral college? The team spent countless hours improving the search experience in Firefox so that you could search smarter and faster. You can type less and find more with the revamped address bar, where our search suggestions got a redesign. An updated shortcut suggests search engines, tabs, and bookmarks, getting you where you want to go right from the address bar.
  • Additional under-the-hood improvements: We made noticeable improvements to Firefox’s printing experience, which included support for fillable PDF forms. We also improved your shopping experience with updates to our password management and credit card autofill.

Our promise to build a better internet

This has been an unprecedented year for the world, and as you became more connected online, we stayed focused on pushing for more privacy. It’s just one less thing for you to worry about.

  • HTTPS-Only mode: If you visit a website that asks for your email address or payment info, look for that lock in the address bar, which indicates your connection to it is secure. A site that doesn’t have the lock signals that it’s insecure. It could be as simple as an expired Secure Sockets Layer (SSL) certificate. Either way, Firefox’s new HTTPS-Only mode will attempt to establish fully secure connections to every website you visit and will also ask for your permission before connecting to a website if it doesn’t support secure connections.
  • Added privacy protections: We kicked off the year by expanding our Enhanced Tracking Protection, preventing known fingerprinters from profiling our users based on their hardware, and introduced protection against redirect tracking — always on while you browse more than ever.
  • Facebook Container updates: Given the circumstances of 2020, it makes sense that people turned to Facebook to stay connected to friends and family when we couldn’t visit in person. Facebook Container — which helps prevent Facebook from tracking you around the web — added improvements that allowed you to create exceptions to how and when it blocks Facebook logins, likes, and comments, giving you more control over your relationship with Facebook.

Even if you didn’t have Firefox to help with some of life’s challenges online over the past year, don’t start 2021 without it. Download the latest version of Firefox and try these privacy-protecting, easy-to-use features for yourself.

The post Our Year in Review: How we’ve kept Firefox working for you in 2020 appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Mozilla’s Vision for Trustworthy AI

Mozilla Blog - Tue, 15/12/2020 - 12:25
Mozilla is publishing its white paper, “Creating Trustworthy AI.”

A little over two years ago, Mozilla started an ambitious project: deciding where we should focus our efforts to grow the movement of people committed to building a healthier digital world. We landed on the idea of trustworthy AI.

When Mozilla started in 1998, the growth of the web was defining where computing was going. So Mozilla focused on web standards and building a browser. Today, computing — and the digital society that we all live in — is defined by vast troves of data, sophisticated algorithms and omnipresent sensors and devices. This is the era of AI. Asking questions today such as ‘Does the way this technology works promote human agency?’ or ‘Am I in control of what happens with my data?’ is like asking ‘How do we keep the web open and free?’ 20 years ago.

This current era of computing — and the way it shapes the consumer internet technology that more than 4 billion of us use every day — has high stakes. AI increasingly powers smartphones, social networks, online stores, cars, home assistants and almost every other type of electronic device. Given the power and pervasiveness of these technologies, the question of whether AI helps and empowers or exploits and excludes will have a huge impact on the direction that our societies head over the coming decades.

It would be very easy for us to head in the wrong direction. As we have rushed to build data collection and automation into nearly everything, we have already seen the potential of AI to reinforce long-standing biases or to point us toward dangerous content. And there’s little transparency or accountability when an AI system spreads misinformation or misidentifies a face. Also, as people, we rarely have agency over what happens with our data or the automated decisions that it drives. If these trends continue, we’re likely to end up in a dystopian AI-driven world that deepens the gap between those with vast power and those without.

On the other hand, a significant number of people are calling attention to these dangerous trends — and saying ‘there is another way to do this!’ Much like the early days of open source, a growing movement of technologists, researchers, policy makers, lawyers and activists are working on ways to bend the future of computing towards agency and empowerment. They are developing software to detect AI bias. They are writing new data protection laws. They are inventing legal tools to put people in control of their own data. They are starting orgs that advocate for ethical and just AI. If these people — and Mozilla counts itself amongst them — are successful, we have the potential to create a world where AI broadly helps rather than harms humanity.

It was inspiring conversations with people like these that led Mozilla to focus the $20M+ that it spends each year on movement building on the topic of trustworthy AI. Over the course of 2020, we’ve been writing a paper titled “Creating Trustworthy AI” to document the challenges and ideas for action that have come up in these conversations. Today, we release the final version of this paper.

This ‘paper’ isn’t a traditional piece of research. It’s more like an action plan, laying out steps that Mozilla and other like-minded people could take to make trustworthy AI a reality. It is possible to make this kind of shift, just as we have been able to make the shift to clean water and safer automobiles in response to risks to people and society. The paper suggests the code we need to write, the projects we need to fund, the issues we need to champion, and the laws we need to pass. It’s a toolkit for technologists, for philanthropists, for activists, for lawmakers.

At the heart of the paper are eight big challenges the world is facing when it comes to the use of AI in the consumer internet technologies we all use every day. These are things like: bias; privacy; transparency; security; and the centralization of AI power in the hands of a few big tech companies. The paper also outlines four opportunities to meet these challenges. These opportunities centre around the idea that there are developers, investors, policy makers and a broad public that want to make sure AI works differently — and to our benefit. Together, we have a chance to write code, process data, create laws and choose technologies that send us in a good direction.

Like any major Mozilla project, this paper was built using an open source approach. The draft we published in May came from 18 months of conversations, research and experimentation. We invited people to comment on that draft, and they did. People and organizations from around the world weighed in: from digital rights groups in Poland to civil rights activists in the U.S., from machine learning experts in North America to policy makers at the highest levels in Europe, from activists, writers and creators to Ivy League professors. We have revised the paper based on this input to make it that much stronger. The feedback helped us hone our definitions of “AI” and “consumer technology.” It pushed us to make racial justice a more prominent lens throughout this work. And it led us to incorporate more geographic, racial, and gender diversity viewpoints in the paper.

In the months and years ahead, this document will serve as a blueprint for Mozilla Foundation’s movement building work, with a focus on research, advocacy and grantmaking. We’re already starting to manifest this work: Mozilla’s advocacy around YouTube recommendations has illuminated how problematic AI curation can be. The Data Futures Lab and European AI Fund that we are developing with partner foundations support projects and initiatives that reimagine how trustworthy AI is designed and built across multiple continents. And Mozilla Fellows and Awardees like Sylvie Delacroix, Deborah Raj, and Neema Iyer are studying how AI intersects with data governance, equality, and systemic bias. Past and present work like this also fed back into the white paper, helping us learn by doing.

We also hope that this work will open up new opportunities for the people who build the technology we use every day. For so long, building technology that valued people was synonymous with collecting little or no data about them. While privacy remains a core focus of Mozilla and others, we need to find ways to protect and empower users that also include the collection and use of data to give people experiences they want. As the paper outlines, there are more and more developers — including many of our colleagues in the Mozilla Corporation — who are carving new paths that head in this direction.

Thank you for reading — and I look forward to putting this into action together.

The post Mozilla’s Vision for Trustworthy AI appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Why getting voting right is hard, Part II: Hand-Counted Paper Ballots

Mozilla Blog - Mon, 14/12/2020 - 18:42

In Part I we looked at desirable properties for voting systems. In this post, I want to look at the details of a specific system: hand-counted paper ballots.

Sample Ballot

Hand-counted paper ballots are probably the simplest voting system in common use (though mostly outside the US). In practice, the process usually looks something like the following:

  1. Election officials pre-print paper ballots and distribute them to polling places. Each paper ballot has a list of contests and the choices for each contest, and a box or some other location where the voter can indicate their choice, as shown above.
  2. Voters arrive at the polling place, identify themselves to election workers, and are issued a ballot. They mark the section of the ballot corresponding to their choice. They cast their ballots by putting them into a ballot box, which can be as simple as a cardboard box with a hole in the top for the ballots.
  3. Once the polls close, the election workers collect all the ballots. If they are to be locally counted, then the process is as below; if they are to be centrally counted, they are transported back to election headquarters for counting.

The counting process varies between jurisdictions, but at a high level the process is simple. The vote counters go through each ballot one at a time and determine which choice it is for. Joseph Lorenzo Hall provides a good description of the procedure for California’s statutory 1% tally here:

In practice, the hand-counting method used by counties in California seems very similar. The typical tally team uses four people consisting of two talliers, one caller and one witness:

  • The caller speaks aloud the choice on the ballot for the race being tallied (e.g., “Yes…Yes…Yes…” or “Lincoln…Lincoln…Lincoln…”).
  • The witness observes each ballot to ensure that the spoken vote corresponded to what was on the ballot and also collates ballots in cross-stacks of ten ballots.
  • Each tallier records the tally by crossing out numbers on a tally sheet to keep track of the vote tally.

Talliers announce the tally at each multiple of ten (“10”, “20”, etc.) so that they can roll-back the tally if the two talliers get out of sync.

Obviously other techniques are possible, but as long as people are able to observe, differences in technique are mostly about efficiency rather than accuracy or transparency. The key requirement here is that any observer can look at the ballots and see that they are being recorded as they are cast. Jurisdictions will usually have some mechanism for challenging the tally of a specific ballot.

Security and Verifiability

The major virtue of hand-counted paper ballots is that they are simple, with security and privacy properties that are easy for voters to understand and reason about, and for observers to verify for themselves.

It’s easiest to break the election into two phases:

  • Voting and collecting the ballots
  • Counting the collected ballots

If each of these is done correctly, then we can have high confidence that the election was correctly decided.

Voting

The security properties of the voting process mostly come down to ballot handling, namely that:

  • Only authorized voters get ballots, and each voter gets only one ballot. Note that it’s necessary to ensure this because otherwise it’s very hard to prevent multiple voting, where an authorized voter puts in two ballots.
  • Only the ballots of authorized voters make it into the ballot box.
  • All the ballots in the ballot box and only the ballots from the ballot box make it to election headquarters.

The first two of these properties are readily observed by observers — whether independent or partisan. The last property typically relies on technical controls. For instance, in Santa Clara county ballots are taken from the ballot box and put into clear tamper-evident bags for transport to election central, which limits the ability for poll workers to replace the ballots. When put together all three properties provide a high degree of confidence that the right ballots are available to be counted. This isn’t to say that there’s no opportunity for fraud via sleight-of-hand or voter impersonation (more on this later) but it’s largely one-at-a-time fraud, affecting a few ballots at a time, and is hard to perpetrate at scale.

Counting

The counting process is even easier to verify: it’s conducted in the open and so observers have their own chance to see each ballot and be confident that it has been counted correctly. Obviously, you need a lot of observers because you need at least one for each counting team, but given that the number of voters far exceeds the number of counting teams, it’s not that impractical for a campaign to come up with enough observers.

Probably the biggest source of problems with hand-counted paper ballots is disputes about the meaning of ambiguous ballots. Ideally voters would mark their ballots according to the instructions, but it’s quite common for voters to make stray marks, mark more than one box, fill in the boxes with dots instead of Xs, or even some more exotic variations, as shown in the examples below. In each case, it needs to be determined how to handle the ballot. It’s common to apply an “Intent of the voter” standard, but this still requires judgement. One extra difficulty here is that at the point where you are interpreting each ballot, you already know what it looks like, so naturally this can lead to a fair amount of partisan bickering about whether to accept each individual ballot, as each side tries to accept ballots that seem like they are for their preferred candidate and disqualify ballots that seem like they are for their opponent.

[Example ballot images: “double mark” and “lizard people”]

A related issue is whether a given ballot is valid. This isn’t so much an issue with ballots cast at a polling place, but for vote-by-mail ballots there can be questions about signatures on the envelopes, the number of envelopes, etc. I’ll get to this later when I cover vote by mail in a later post.

Privacy/Secrecy of the Ballot

The level of privacy provided by paper ballots depends a fair bit on the precise details of how they are used and handled. In typical elections, voters will be given some level of privacy to fill out their ballot, so they don’t have to worry too much about that stage (though presumably in theory someone could set up cameras in the polling place). Aside from that, we primarily need to worry about two classes of attack:

  1. Tracking a given voter’s ballot from checkin to counting.
  2. Determining how a voter voted from the ballot itself.

Ideally — at least from the perspective of privacy — the ballots are all identical and the ballot box is big enough that you get some level of shuffling (how much is an open question); in that case it’s quite hard to correlate the ballot a voter was given with the ballot being counted, though you might be able to narrow it down some by looking at which polling place/box the ballot came in and where it was in the box. In some jurisdictions, ballots have serial numbers, which might make this kind of tracking easier, though only if records of which voter gets which ballot are kept and available. Apparently the UK has this kind of system but tightly controls the records.

It’s generally not possible to tell from a ballot itself which voter it belongs to unless the voter cooperates by making the ballot distinctive in some way. This might happen because the voter is being paid (or threatened) to cast their vote a certain way. While some election jurisdictions prohibit distinguishing marks, as a practical matter it’s not really possible to prevent voters from making such marks if they really want to. This is especially true when the ballots need not be machine readable and so the voter has the ability to fill in the box somewhat distinctively (there are a lot of ways to write an X!). In elections with a lot of contests, as in many places in the US, it is also possible to use what’s called a “pattern voting” attack in which you vote one contest the way you are told and then vote the downballot contests in a way that uniquely identifies you. This sort of attack is very hard to prevent, but actually checking that people voted the way they were told is of course a lot of work. There are also more exotic attacks such as fingerprinting paper stock, but none of these are easy to mount in bulk.

Accessibility

One big drawback of hand-marked ballots is that they are not very accessible, either to people with disabilities or to non-native speakers. For obvious reasons, if you’re blind or have limited dexterity it can be hard to fill in the boxes (this is even harder with optical scan type ballots). Many jurisdictions that use paper ballots will also have some accommodation for people with disabilities. Paper ballots work fine in most languages, but each language must be separately translated and then printed, and then you need to have extras of each ballot type in case more people come than you expect, so at the end of the day the logistics can get quite complicated. By contrast, electronic voting machines (which I’ll get to later) scale much better to multiple languages.

Scalability

Although hand-counting does a good job of producing accurate and verifiable counts, it does not scale very well[1]. Estimates of how expensive it is to count ballots vary quite a bit, but a 2010 Pew study of hand recounts in Washington and Minnesota (the 2004 Washington gubernatorial and 2008 Minnesota US Senate races) put the cost of recounting a single contest at between $0.15 and $0.60 per ballot. Of course, as noted above, some of the cost here is that of disputing ambiguous ballots. If the race is not particularly competitive then these ballots can be set aside and only need to be carefully adjudicated if they have a chance of changing the result.

Importantly, the cost of hand-counting goes up with the number of ballots times the number of contests on the ballot. In the United States it’s not uncommon to have 20 or more contests per election. For example, here is a sample ballot from the 2020 general election in Santa Clara County, CA. This ballot has the following contests:

Type                           Count
President                      1
US House of Representatives    1
State Assembly                 1
Superior Court Judge           1
County Board of Education      1
County Board of Supervisors    1
Community College District     1
City Mayor                     1
City Council (vote for two)    1
State Propositions             12
Local ballot measures          6
Total                          32

In an election like this, the cost to count could be several dollars per ballot. Of course, California has an exceptionally large number of contests, but in general hand-counting represents a significant cost.
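As a rough back-of-the-envelope illustration (treating the Pew per-contest recount figures above as if they applied to every contest on the ballot, which is only an approximation), the arithmetic looks like this:

// Hypothetical cost model: hand-count cost grows with ballots times contests.
// The $0.15-$0.60 figures are the per-ballot, single-contest recount costs
// from the Pew study cited above, used here purely for illustration.
fn cost_per_ballot(contests: u32, cost_per_contest: f64) -> f64 {
    contests as f64 * cost_per_contest
}

fn main() {
    let contests = 32; // the Santa Clara County sample ballot above
    println!("low:  ${:.2}", cost_per_ballot(contests, 0.15)); // $4.80
    println!("high: ${:.2}", cost_per_ballot(contests, 0.60)); // $19.20
}

Even the low-end figure lands well into the “several dollars per ballot” range.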

Aside from the financial impact of hand counting ballots, it just takes a long time. Pew notes that both the Washington and Minnesota recounts took around seven months to resolve, though again this is partly due to the small margin of victory. As another example, California law requires a “1% post-election manual tally” in which 1% of precincts are randomly selected for hand-counting. Even with such a restricted count, the tally can take weeks in a large county such as Los Angeles, suggesting that hand counting all the ballots would be prohibitive in this setting. This isn’t to say that hand counting can never work, obviously, merely that it’s not a good match for the US electoral system, which tends to have a lot more contests than in other countries.

Up Next: Optical Scanning

The bottom line here is that while hand counting works well in many jurisdictions it’s not a great fit for a lot of elections in the United States. So if we can’t count ballots by hand, then what can we do? The good news is that there are ballot counting mechanisms which can provide similar assurance and privacy properties to hand counting but do so much more efficiently, namely optical scan ballots. I’ll be covering that in my next post.

  1. By contrast, the marking process is very scalable: if you have a long line, you can put out more tables, pens, privacy screens, etc. 

The post Why getting voting right is hard, Part II: Hand-Counted Paper Ballots appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Cameron Kaiser: Unexpected FPR30 changes because 2020

Mozilla planet - Fri, 11/12/2020 - 06:36
Well, there were more casualties from the Great Floodgap Power Supply Kablooey of 2020, and one notable additional casualty, because 2020, natch, was my trusty former daily driver Quad G5. Yes, this is the machine that, among other tasks, builds TenFourFox. The issue is recoverable and I think I can get everything back in order, but due to work and the extent of what appears to have gone wrong it won't happen before the planned FPR30 release date on December 15 (taking into account it takes about 30 hours to run a complete build cycle).

If you've read this blog for any length of time, you know how much I like to be punctual with releases to parallel mainstream Firefox. However, there have been no reported problems from the beta and there are no major security issues that must be patched immediately, so there's a simple workaround: on Monday night Pacific time the beta will simply become the release. If you're already using the beta, then just keep on using it. Since I was already intending to do a security-only release after FPR30 and I wasn't planning to issue a beta for it anyway, anything left over from FPR30 will get rolled into FPR30 SPR1 and this will give me enough cushion to get the G5 back in working order (or at least dust off the spare) for that release on or about January 26. I'm sure all of you will get over it by then. :)

Categories: Mozilla-nl planet

Niko Matsakis: Rotating the compiler team leads

Mozilla planet - Fri, 11/12/2020 - 06:00

Since we created the Rust teams, I have been serving as lead of two teams: the compiler team and the language design team (I’ve also been a member of the core team, which has no lead). For those less familiar with Rust’s governance, the compiler team is focused on the maintenance and implementation of the compiler itself (and, more recently, the standard library). The language design team is focused on the design aspects. Over that time, all the Rust teams have grown and evolved, with the compiler team in particular being home to a number of really strong members.

Last October, I announced that pnkfelix was joining me as compiler team co-lead. Today, I am stepping back from my role as compiler team co-lead altogether. After taking nominations from the compiler team, pnkfelix and I are proud to announce that wesleywiser will replace me as compiler team co-lead. If you don’t know Wesley, there’ll be an announcement on Inside Rust where you can learn a bit more about what he has done, but let me just say I am pleased as punch that he agreed to serve as co-lead. He’s going to do a great job.

You’re not getting rid of me this easily

Stepping back as compiler team co-lead does not mean I plan to step away from the compiler. In fact, quite the opposite. I’m still quite enthusiastic about pushing forward on ongoing implementation efforts like the work to implement RFC 2229, or the development on chalk and polonius. In fact, I am hopeful that stepping back as co-lead will create more time for these efforts, as well as time to focus on leadership of the language design team.

Rotation is key

I see these changes to compiler team co-leads as fitting into a larger trend, one that I believe is going to be increasingly important in Rust: rotation of leadership. To me, the “corest of the core” value of the Rust project is the importance of “learning from others” – or as I put it in my rust-latam talk from 2019[1], “a commitment to a CoC and a culture that emphasizes curiosity and deep research”. Part of learning from others has to be actively seeking out fresh leadership and promoting them into positions of authority.

But rotation has a cost too

Another core value of Rust is recognizing the inevitability of tradeoffs[2]. Rotating leadership is no exception: there is a lot of value in having the same people lead for a long time, as they accumulate all kinds of context and skills. But it also means that you are missing out on the fresh energy and ideas that other people can bring to the problem. I feel confident that Felix and Wesley will help to shape the compiler team in ways that I never would’ve thought to do.

Rotation with intention

The tradeoff between experience and enthusiasm makes it all the more important, in my opinion, to rotate leadership intentionally. I am reminded of Emily Dunham’s classic post on leaving a team[3], and how it was aimed at normalizing the idea of “retirement” from a team as something you could actively choose to do, rather than just waiting until you are too burned out to continue.

Wesley, Felix, and I have discussed the idea of “staggered terms” as co-leads. The idea is that you serve as co-lead for two years, but we select one new co-lead per year, with the oldest co-lead stepping back. This way, at every point you have a mix of a new co-lead and someone who has already done it for one year and has some experience.

Lang and compiler need separate leadership

Beyond rotation, another reason I would like to step back from being co-lead of the compiler team is that I don’t really think it makes sense to have one person lead two teams. It’s too much work to do both jobs well, for one thing, but I also think it works to the detriment of the teams. I think the compiler and lang team will work better if they each have their own, separate “advocates”.

I’m actually very curious to work with pnkfelix and Wesley to talk about how the teams ought to coordinate, since I’ve always felt we could do a better job. I would like us to be actively coordinating how we are going to manage the implementation work at the same time as we do the design, to help avoid unbounded queues. I would also like us to be doing a better job getting feedback from the implementation and experimentation stage into the lang team.

You might think having me be the lead of both teams would enable coordination, but I think it can have the opposite effect. Having separate leads for compiler and lang means that those leads must actively communicate and avoids the problem of one person just holding things in their head without realizing other people don’t share that context.

Idea: Deliberate team structures that enable rotation

In terms of the compiler team structure, I think there is room for us to introduce “rotation” as a concept in other ways as well. Recently, I’ve been kicking around an idea for “compiler team officers”[4], which would introduce a number of defined roles, each of which is set up with staggered terms to allow for structured handoff. I don’t think the current proposal is quite right, but I think it’s going in an intriguing direction.

This proposal is trying to address the fact that a successful open source organization needs more than coders, but all too often we fail to recognize and honor that work. Having fixed terms is important because when someone is willing to do that work, they can easily wind up getting stuck being the only one doing it, and they do that until they burn out. The proposal also aims to enable more “part-time” leadership within the compiler team, by making “finer grained” duties that don’t require as much time to complete.

  1. Oh-so-subtle plug: I really quite liked that talk. 

  2. Though not always the tradeoffs you expect. Read the post. 

  3. If you haven’t read it, stop reading now and go do so. Then come back. Or don’t. Just read it already. 

  4. I am not sure that ‘officer’ is the right word here, but I’m not sure what the best replacement is. I want something that conveys respect and responsibility. 

Categories: Mozilla-nl planet

Martin Thompson: Next Level Version Negotiation

Mozilla planet - Fri, 11/12/2020 - 01:00

The IAB EDM Program[1] met this morning. Whatever the overall goal of the meeting, we ended up talking a lot about a document I wrote a while back and about how to design version negotiation in protocols.

This post provides a bit of background and shares some of what we learned today after what was quite a productive discussion.

Protocol Ossification

The subject of protocol ossification has been something of a live discussion in the past several years. The community has come to the realization that it is effectively impossible to extend many Internet protocols without causing a distressing number of problems with existing deployments. It seems like no protocol is unaffected[2]. IP, TCP, TLS, and HTTP all have various issues that prevent extensions from working correctly.

A number of approaches have been tried. HTTP/2, which was developed early in this process, was deployed only for HTTPS. Even though a cleartext variant was defined, many implementations explicitly decided not to implement that, partly motivated by these concerns. QUIC doubles down on this by encrypting as much as possible.

TLS 1.3, which was delayed by about a year by related problems, doesn't have that option so it ultimately used trickery to avoid notice by problematic middleboxes: TLS 1.3 looks a lot like TLS 1.2 unless you are paying close attention.

One experiment that turned out to be quite successful in revealing ossification in TLS was GREASE. David Benjamin and Adam Langley, who maintain the TLS stack used by Google[3], found that inserting random values into different extension points had something of a cleansing effect on the TLS ecosystem. Several TLS implementations were found to be intolerant of new extensions.
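For a concrete sense of what those random values look like: the GREASE codepoints that were later standardized in RFC 8701 are sixteen reserved values of the form 0xNaNa, which a client can offer as if they were real cipher suites or extensions and which a well-behaved peer must simply ignore. A minimal sketch:

// GREASE (RFC 8701) reserves 0x0a0a, 0x1a1a, ..., 0xfafa for cipher suites,
// extension types and similar registries. A client picks one at random and
// advertises it; a correct peer treats it like any other unknown value.
fn grease_value(n: u8) -> u16 {
    let n = (n % 16) as u16;
    (n << 12) | 0x0a00 | (n << 4) | 0x000a
}

fn main() {
    assert_eq!(grease_value(0), 0x0a0a);
    assert_eq!(grease_value(15), 0xfafa);
}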

One observation out of the experiments with TLS was that protocol elements that routinely saw new values, like cipher suites, were less prone to failing when previously unknown values were encountered. Those that hadn't seen new values as often, like server name types or signature schemes, were more likely to show problems. This caused Adam Langley to advise that protocols "have one joint and keep it well oiled."

draft-iab-use-it-or-lose-it explores the problem space a little more thoroughly. The draft looks at a bunch of different protocols and finds that in general the observations hold. The central thesis is that for an extension point to be usable, it needs to be actively used.

Version Negotiation

The subject of the discussion today was version negotiation. Of all the extension points available in protocols, the one that often sees the least use is version negotiation. A version negotiation mechanism has to exist in the first version of a protocol, but it is never really tested until the second version is deployed.

No matter how carefully the scheme is designed[4], the experience with TLS shows that even a well-designed scheme can fail.

The insight for today, thanks largely to Tommy Pauly, was that the observation about extension points could be harnessed to make version negotiation work. Tommy observed that some protocols don't design in-protocol version negotiation schemes, but instead rely on the protocol at the next layer down. And these protocols have been more successful at avoiding some of the pitfalls inherent to version negotiation.

At the next layer down the stack, the codepoints for the higher-layer protocol are just extension codepoints. They aren't exceptional for the lower layer and they probably get more use. Therefore, these extension points are less likely to end up being ossified when the time comes to rely on them.

Supporting Examples

Tommy offered a few examples and we discussed several others.

IPv6 was originally intended to use the IP EtherType (0x0800) in 802.1, with routers looking at the IP version number to determine how to handle packets. That didn't work out[5]. What did work was assigning IPv6 its own EtherType (0x86dd). This supports the idea that a function that was already in use for other reasons[6] was better able to support the upgrade than the in-protocol mechanisms that were originally designed for that purpose.

HTTP/2 was floated as another potential example of this effect. Though the original reason for adding ALPN was performance - we wanted to ensure that we wouldn't have to do another round trip after the TLS handshake to do Upgrade exchange - the effect is that negotiation of HTTP relied on a mechanism that was well-tested and proven at the TLS layer[7].
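The selection step itself is also tiny. As a minimal sketch (not tied to any particular TLS library), a server doing ALPN just takes the first entry in its own preference list that the client also offered:

// ALPN (RFC 7301) sketch: protocol identifiers are opaque byte strings such
// as b"h2" or b"http/1.1". The server picks the first entry in its own
// preference order that the client also offered; no match means no protocol.
fn select_alpn<'a>(server_prefs: &[&'a [u8]], client_offers: &[&'a [u8]]) -> Option<&'a [u8]> {
    server_prefs.iter().copied().find(|p| client_offers.contains(p))
}

fn main() {
    let server_prefs: &[&[u8]] = &[b"h2", b"http/1.1"];
    let client_offers: &[&[u8]] = &[b"http/1.1", b"h2"];
    assert_eq!(select_alpn(server_prefs, client_offers), Some(&b"h2"[..]));
}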

We observed that ALPN doesn't work for the HTTP/2 to HTTP/3 upgrade, as the two don't share a common lower layer (HTTP/2 runs over TLS and TCP, HTTP/3 over QUIC) where that choice could be made. Here, we observed that we would likely end up relying on SVCB and the HTTPS DNS record.

Carsten Bormann also pointed at SenML, which deliberately provides no inherent version negotiation. I suggest that this is an excellent example of relying on lower-layer negotiation, in this case the content negotiation functions provided by underlying protocols like CoAP or HTTP.

It didn't come up at the time, but one of my favourite examples comes from the people building web services at Mozilla. They do not include version numbers in URLs or hostnames for their APIs and they don't put version numbers in request or response formats. The reasoning being that, should they need to roll a new version that is incompatible with the current one, they can always deploy to a new domain name. I always appreciated the pragmatism of that approach, though I still see lots of /v1/ in public HTTP API documentation.

These all seem to provide good support for the basic idea.

Counterexamples

Any rule like this isn't worth anything without counterexamples. Understanding counterexamples helps us understand what conditions are necessary for the theory to hold.

SNMP, which was already mentioned in the draft as having successfully managed a version transition using an in-band mechanism, was a particularly interesting case study. Several observations were made, suggesting several inter-connected reasons for success. It was observed that there was no especially strong reason to prefer SNMPv3 over SNMPv2 (or SNMPv2c), a factor which resulted in both SNMP versions coexisting for years.

There was an interesting sidebar at this point. It was observed that SNMP doesn't have any strong need to avoid version downgrade attacks in the way that a protocol like TLS might. Other protocols might not tolerate such phlegmatic coexistence.

SNMP clients do include probing code to determine what protocols are supported. However, as network management systems include provisioning information for devices, it is usually the case that protocol support for managed devices is stored alongside other configuration. Thus we concluded that SNMP - to the extent that it even needs version upgrades - was closest to the "shove it in the DNS" approach used for the upgrade to HTTP/3.

In Practice

The lesson here is that planning for the next version doesn't mean designing a version negotiation mechanism. It's possible that a perfectly good mechanism already exists. If it does, it's almost certainly better than anything you might cook up.

This is particularly gratifying to me as I had already begun following the practice of SenML with other work. For instance, RFC 8188 provides no in-band negotiation of version or even cryptographic agility. Instead, it relies on the existing content-coding negotiation mechanisms as a means of enabling its own eventual replacement. This was somewhat controversial at the time, especially the cryptographic agility part, but in retrospect it seems to be a good choice.

It's also good to have a strong basis for rejecting profligate addition of extension points in protocols[8], but now it seems like we have firm reasons to avoid designing version negotiation mechanisms into every protocol.

Maybe version negotiation can now be put better into context. Version negotiation might only belong in protocols at the lowest levels of the stack[9]. For most protocols, which probably need to run over TLS for other reasons, ALPN and maybe SVCB can stand in for version negotiation, with the bonus that these are specifically designed to avoid adding latency. HTTP APIs can move to a different URL.

As this seems solid, I now have the task of writing a brief summary of this conclusion for the next revision of the "use it or lose it" draft. That might take some time as there are a few open issues that need some attention.

  1. Not electronic dance music sadly, it's about Evolvability, Deployability, & Maintainability of Internet protocols ↩︎

  2. UDP maybe. UDP is simple enough that it doesn't have features/bugs. Not to say that it is squeaky clean, it has plenty of baggage, with checksum issues, a reputation for being used for DoS, and issues with flow termination in NATs. ↩︎

  3. BoringSSL, which is now used by a few others, including Cloudflare and Apple. ↩︎

  4. Section 4.1 of RFC 6709 contains some great advice on how to design a version negotiation scheme, so that you can learn from experience. Though pay attention to the disclaimer in the last paragraph. ↩︎

  5. No one on the call was paying sufficient attention at the time, so we don't know precisely why. We intend to find out, of course. ↩︎

  6. At the time, there was still reasonable cause to think that IP wouldn't be the only network layer protocol, so other values were being used routinely. ↩︎

  7. You might rightly observe here that ALPN was brand new for HTTP/2, so the mechanism itself wasn't exactly proven. This is true, but there are mitigating factors. The negotiation method is exactly the same as many other TLS extensions. And we tested the mechanism thoroughly during HTTP/2 deployment as each new revision from the -04 draft onwards was deployed widely with a different ALPN string. By the time HTTP/2 shipped, ALPN was definitely solid. ↩︎

  8. There is probably enough material for a long post on why this is not a problem in JSON, but I'll just assert for now - without support - that there really is only one viable extension point in any JSON usage. ↩︎

  9. It doesn't seem like TLS or QUIC can avoid having version negotiation. ↩︎

Categories: Mozilla-nl planet

The Rust Programming Language Blog: Launching the Lock Poisoning Survey

Mozilla planet - Fri, 11/12/2020 - 01:00

The Libs team is looking at how we can improve the std::sync module, by potentially splitting it up into new modules and making some changes to APIs along the way. One of those API changes we're looking at is non-poisoning implementations of Mutex and RwLock. To find the best path forward we're conducting a survey to get a clearer picture of how the standard locks are used out in the wild.

The survey is a Google Form. You can fill it out here.

What is this survey for?

The survey is intended to answer the following questions:

  • When is poisoning on Mutex and RwLock being used deliberately.
  • Whether Mutex and RwLock (and their guard types) appear in the public API of libraries.
  • How much friction there is switching from the poisoning Mutex and RwLock locks to non-poisoning ones (such as from antidote or parking_lot).

This information will then inform an RFC that will set out a path to non-poisoning locks in the standard library. It may also give us a starting point for looking at the tangentially related UnwindSafe and RefUnwindSafe traits for panic safety.

Who is this survey for?

If you write code that uses locks then this survey is for you. That includes the standard library's Mutex and RwLock as well as locks from crates.io, such as antidote, parking_lot, and tokio::sync.

So what is poisoning anyway?

Let's say you have an Account that can update its balance:

impl Account {
    pub fn update_balance(&mut self, change: i32) {
        self.balance += change;
        self.changes.push(change);
    }
}

Let's also say we have the invariant that balance == changes.sum(). We'll call this the balance invariant. So at any point when interacting with an Account you can always depend on its balance being the sum of its changes, thanks to the balance invariant.

There's a point in our update_balance method where the balance invariant isn't maintained though:

impl Account {
    pub fn update_balance(&mut self, change: i32) {
        self.balance += change;
        // self.balance != self.changes.sum()
        self.changes.push(change);
    }
}

That seems ok, because we're in the middle of a method with exclusive access to our Account and everything is back to good when we return. There isn't a Result or ? to be seen so we know there's no chance of an early return before the balance invariant is restored. Or so we think.

What if self.changes.push didn't return normally? What if it panicked instead without actually doing anything? Then we'd return from update_balance early without restoring the balance invariant. That seems ok too, because a panic will start unwinding the thread it was called from, leaving no trace of any data it owned behind. Ignoring the Drop trait, no data means no broken invariants. Problem solved, right?

What if our Account wasn't owned by that thread that panicked? What if it was shared with other threads as a Arc<Mutex<Account>>? Unwinding one thread isn't going to protect other threads that could still access the Account, and they're not going to know that it's now invalid.

This is where poisoning comes in. The Mutex and RwLock types in the standard library use a strategy that makes panics (and by extension the possibility for broken invariants) observable. The next consumer of the lock, such as another thread that didn't unwind, can decide at that point what to do about it. This is done by storing a switch in the lock itself that's flipped when a panic causes a thread to unwind through its guard. Once that switch is flipped the lock is considered poisoned, and the next attempt to acquire it will receive an error instead of a guard.

The standard approach for dealing with a poisoned lock is to propagate the panic to the current thread by unwrapping the error it returns:

let mut guard = shared.lock().unwrap();

That way nobody can ever observe the possibly violated balance invariant on our shared Account.
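To see the whole mechanism end to end, here is a minimal self-contained sketch (using a plain counter rather than the Account type above): one thread panics while holding the lock, and the next consumer observes the poisoned state and decides what to do with it:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let shared = Arc::new(Mutex::new(0_i32));

    // This thread panics while holding the guard, so unwinding through the
    // guard flips the lock's poison flag on the way out. (Running this will
    // print the usual panic message from the spawned thread.)
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            let mut guard = shared.lock().unwrap();
            *guard += 1;
            panic!("something went wrong mid-update");
        })
    };
    assert!(handle.join().is_err()); // the spawned thread panicked

    // The next consumer gets an Err instead of a guard and can decide what
    // to do about the possibly broken invariant.
    assert!(shared.is_poisoned());
    match shared.lock() {
        Ok(_) => unreachable!("the lock is poisoned"),
        Err(poisoned) => {
            // The escape hatch: we can still reach the data if we choose to.
            let guard = poisoned.into_inner();
            assert_eq!(*guard, 1);
        }
    }
}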

That sounds great! So why would we want to remove it?

What's wrong with lock poisoning?

There's nothing wrong with poisoning itself. It's an excellent pattern for dealing with failures that can leave behind unworkable state. The question we're really asking is whether it should be used by the standard locks, which are std::sync::Mutex and std::sync::RwLock. We're asking whether it's a standard lock's job to implement poisoning. Just to avoid any confusion, we'll distinguish the poisoning pattern from the API of the standard locks by calling the former poisoning and the latter lock poisoning. We're just talking about lock poisoning.

In the previous section we motivated poisoning as a way to protect us from possibly broken invariants. Lock poisoning isn't actually a tool for doing this in the way you might think. In general, a poisoned lock can't tell whether or not any invariants are actually broken. It assumes that a lock is shared, so is likely going to outlive any individual thread that can access it. It also assumes that if a panic leaves any data behind then it's more likely to be left in an unexpected state, because panics aren't part of normal control flow in Rust. Everything could be fine after a panic, but the standard lock can't guarantee it. Since there's no guarantee there's an escape hatch. We can always still get access to the state guarded by a poisoned lock:

let mut guard = shared.lock().unwrap_or_else(|err| err.into_inner());

All Rust code needs to remain free from any possible undefined behavior in the presence of panics, so ignoring panics is always safe. Rust doesn't try to guarantee all safe code is free from logic bugs, so broken invariants that don't potentially lead to undefined behavior aren't strictly considered unsafe. Since ignoring lock poisoning is also always safe, it doesn't really give you a dependable tool to protect state from panics. You can always ignore it.

So lock poisoning doesn't give you a tool for guaranteeing safety in the presence of panics. What it does give you is a way to propagate those panics to other threads. The machinery needed to do this adds costs to using the standard locks. There's an ergonomic cost in having to call .lock().unwrap(), and a runtime cost in having to actually track state for panics.
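To make the ergonomic cost concrete, here is a small sketch of the two call sites side by side (parking_lot is shown purely as one example of a non-poisoning lock; antidote looks much the same):

use std::sync::Mutex;

fn with_std_lock(counter: &Mutex<i32>) {
    // The standard lock returns a Result because of poisoning, so every call
    // site has to decide how to handle the poisoned case, usually by unwrapping.
    let mut guard = counter.lock().unwrap();
    *guard += 1;
}

// With a non-poisoning lock such as parking_lot::Mutex, lock() returns the
// guard directly and there is no per-call decision to make:
//
//     fn with_parking_lot(counter: &parking_lot::Mutex<i32>) {
//         let mut guard = counter.lock();
//         *guard += 1;
//     }

fn main() {
    let counter = Mutex::new(0);
    with_std_lock(&counter);
    assert_eq!(*counter.lock().unwrap(), 1);
}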

With the standard locks you pay those costs whether you need to or not. That's not typically how APIs in the standard library work. Instead, you compose costs together so you only pay for what you need. Should it be a standard lock's job to synchronize access and propagate panics? We're not so sure it is. If it's not then what should we do about it? That's where the survey comes in. We'd like to get a better idea of how you use locks and poisoning in your projects to help decide what to do about lock poisoning. You can fill it out here.

Categories: Mozilla-nl planet

Wladimir Palant: How anti-fingerprinting extensions tend to make fingerprinting easier

Mozilla planet - Thu, 10/12/2020 - 14:57

Do you have a privacy protection extension installed in your browser? There are so many around, and every security vendor is promoting their own. Typically, these will provide a feature called “anti-fingerprinting” or “fingerprint protection” which is supposed to make you less identifiable on the web. What you won’t notice: this feature is almost universally flawed, potentially allowing even better fingerprinting.

[Image: a pig disguised as a bird, but still clearly recognizable. Image credits: OpenClipart]

I’ve seen a number of extensions misimplement this functionality, yet I rarely bother to write a report. The effort to fully explain the problem is considerable. On the other hand, it is obvious that for most vendors privacy protection is merely a check that they can put on their feature list. Quality does not matter because no user will be able to tell whether their solution actually worked. With minimal resources available, my issue report is unlikely to cause a meaningful action.

That’s why I decided to explain the issues in a blog post; a typical extension will have at least three of the four. Next time I run across a browser extension suffering from all the same flaws I can send them a link to this post. And maybe some vendors will resolve the issues then. Or, even better, not even make these mistakes in the first place.

How fingerprinting works

When you browse the web, you aren’t merely interacting with the website you are visiting but also with numerous third parties. Many of these have a huge interest in recognizing you reliably across different websites, advertisers for example want to “personalize” your ads. The traditional approach is storing a cookie in your browser which contains your unique identifier. However, modern browsers have a highly recommendable setting to clear cookies at the end of the browsing session. There is private browsing mode where no cookies are stored permanently. Further technical restrictions for third-party cookies are expected to be implemented soon, and EU data protection rules also make storing cookies complicated to say the least.

So cookies are becoming increasingly unreliable. Fingerprinting is supposed to solve this issue by recognizing individual users without storing any data on their end. The idea is to look at data about user’s system that browsers make available anyway, for example display resolution. It doesn’t matter what the data is, it should be:

  • sufficiently stable, ideally stay unchanged for months
  • unique to a sufficiently small group of people

Note that no data point needs to identify a single person by itself. If each of them refer to a different group of people, with enough data points the intersection of all these groups will always be a single person.

How anti-fingerprinting is supposed to work

The goal of anti-fingerprinting is reducing the amount and quality of data that can be used for fingerprinting. For example, CSS used to allow recognizing websites that the user visited before – a design flaw that could be used for fingerprinting among other things. It took quite some time and effort, but eventually the browsers found a fix that wouldn’t break the web. Today this data point is no longer available to websites.

Other data points remain but have been defused considerably. For example, browsers provide websites with a user agent string so that these know e.g. which browser brand and version they are dealing with. Applications installed by the users used to extend this user agent string with their own identifiers. Eventually, browser vendors recognized how this could be misused for fingerprinting and decided to remove any third-party additions. Much of the other information originally available here has been removed as well, so that today any user agent string is usually common to a large group of people.

Barking up the wrong tree

Browser vendors have already invested a considerable amount of work into anti-fingerprinting. However, they usually limited themselves to measures which wouldn’t break existing websites. And while things like display resolution (unlike window size) aren’t considered by too many websites, these were apparently numerous enough that browsers still give websites the user’s display resolution and the available space (typically display resolution without the taskbar).

Privacy protection extensions on the other hand aren’t showing as much concern. So they will typically do something like:

screen.width = 1280; screen.height = 1024;

There you go, the website will now see the same display resolution for everybody, right? Well, that’s unless the website does this:

delete screen.width; delete screen.height;

And suddenly screen.width and screen.height are restored to their original values. Fingerprinting can now use two data points instead of one: not merely the real display resolution but also the fake one. Even if that fake display resolution were extremely common, it would still make the fingerprint slightly more precise.

Is this magic? No, just how JavaScript prototypes work. See, these properties are not defined on the screen object itself, they are part of the object’s prototype. So that privacy extension added an override for prototype’s properties. With the override removed the original properties became visible again.

So is this the correct way to do it?

Object.defineProperty(Screen.prototype, "width", {value: 1280});
Object.defineProperty(Screen.prototype, "height", {value: 1024});

Much better. The website can no longer retrieve the original value easily. However, it can detect that the value has been manipulated by calling Object.getOwnPropertyDescriptor(Screen.prototype, "width"). Normally the resulting property descriptor would contain a getter; this one, however, has a static value. And the fact that a privacy extension is messing with the values is again a usable data point.

Let’s try it without changing the property descriptor:

Object.defineProperty(Screen.prototype, "width", {get: () => 1280});
Object.defineProperty(Screen.prototype, "height", {get: () => 1024});

Almost there. But now the website can call Object.getOwnPropertyDescriptor(Screen.prototype, "width").get.toString() to see the source code of our getter. Again a data point which could be used for fingerprinting. The source code needs to be hidden:

Object.defineProperty(Screen.prototype, "width", {get: (() => 1280).bind(null)});
Object.defineProperty(Screen.prototype, "height", {get: (() => 1024).bind(null)});

This bind() call makes sure the getter looks like a native function. Exactly what we needed.

Update (2020-12-14): Firefox allows content scripts to call exportFunction() which is a better way to do this. In particular, it doesn’t require injecting any code into web page context. Unfortunately, this functionality isn’t available in Chromium-based browsers. Thanks to kkapsner for pointing me towards this functionality.

Catching all those pesky frames

There is a complication here: a website doesn’t merely have one JavaScript execution context; it has one for each frame. So you have to make sure your content script runs in all of these frames, which is why browser extensions will typically specify "all_frames": true in their manifest. And that’s correct. But then the website does something like this:

var frame = document.createElement("iframe");
document.body.appendChild(frame);
console.log(screen.width, frame.contentWindow.screen.width);

Why is this newly created frame still reporting the original display width? We are back at square one: the website again has two data points to work with instead of one.

The problem here: if a frame’s location isn’t set, it defaults to loading the special page about:blank. When Chrome developers originally created their extension APIs, they didn’t give extensions any way to run content scripts there. Luckily, this loophole has been closed by now, but the extension manifest has to set "match_about_blank": true as well.
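Putting the pieces together, the relevant part of the manifest might then look like this (a minimal sketch; the script file name and match pattern are placeholders):

"content_scripts": [{
  "matches": ["<all_urls>"],
  "js": ["content.js"],
  "all_frames": true,
  "match_about_blank": true
}]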

Timing woes

As anti-fingerprinting functionality in browser extensions is rather invasive, it is prone to breaking websites. So it is important to let users disable this functionality on specific websites. This is why you will often see code like this in extension content scripts:

chrome.runtime.sendMessage("AmIEnabled", function(enabled) {
  if (enabled)
    init();
});

So rather than initializing all the anti-fingerprinting measures immediately, this content script first waits for the extension’s background page to tell it whether it is actually supposed to do anything. This gives the website the necessary time to store all the relevant values before they are changed. It could even come back later and check out the modified values as well – once again, two data points are better than one.
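A sketch of how a page could exploit that window (the helper name and the timeout are purely illustrative):

// Read the value immediately, before the extension's asynchronous reply can arrive...
const realWidth = screen.width;

// ...and again once the overrides have presumably been installed.
setTimeout(() => {
  const fakeWidth = screen.width;
  reportFingerprint({ realWidth, fakeWidth });   // hypothetical helper, two data points
}, 1000);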

This is an important limitation of Chrome’s extension architecture, one that is sadly shared by all browsers today. It is possible to run a content script before any webpage scripts can run ("run_at": "document_start"). However, that will only be a static script that doesn’t know any of the extension’s state, and requesting extension state takes time.

This might eventually get solved by dynamic content script support, a request originally created ten years ago. In the meantime however, it seems that the only viable solution is to initialize anti-fingerprinting immediately. If the extension later says “no, you are disabled” – well, then the content script will just have to undo all manipulations. But this approach makes sure that in the common scenario (functionality is enabled) websites won’t see two variants of the same data.
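In code, this “apply first, ask later” approach might look roughly like this (applyOverrides() and its return value are illustrative placeholders, not taken from any particular extension):

// Install the fake values right away at document_start...
const undoOverrides = applyOverrides();   // returns a function that reverts every manipulation

// ...and only then ask the background page whether we should be active on this site.
chrome.runtime.sendMessage("AmIEnabled", enabled => {
  if (!enabled)
    undoOverrides();   // the user exempted this site, roll everything back
});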

The art of faking

Let’s say that all the technical issues are solved. The mechanism for installing fake values works flawlessly. This still leaves a question: how does one choose the “right” fake value?

How about choosing a random value? My display resolution is 1661×3351, now fingerprint that! As funny as this is, fingerprinting doesn’t rely on data that makes sense. All it needs is data that is stable and sufficiently unique, and a display resolution like that is certainly unique. Now one could come up with schemes to change this value regularly, but the fact is: making users stand out isn’t the right way.

What you’d rather want is finding the largest group out there and joining it. My display resolution is 1920×1080 – just the common Full HD, nothing to see here! Want to know my available display space? I have my Windows taskbar at the bottom, just like everyone else. No, I didn’t resize it either. I’m just your average Joe.

The only trouble with this approach: the values have to be re-evaluated regularly. Two decades ago, 1024×768 was the most common display resolution and a good choice for anti-fingerprinting. Today, someone claiming to have this screen size would certainly stick out. Similarly, in my website logs visitors claiming to use Firefox 48 are noticeable: it might have been a common browser version some years ago, but today it’s usually bots merely pretending to be website visitors.


Mike Taylor: Differences in cookie length (size?) restrictions

Mozilla planet - do, 10/12/2020 - 07:00

I was digging through some of the old http-state tests (which got ported into web-platform-tests, and which I’m rewriting to be more modern and, mostly, work) and noticed an interesting difference between Chrome and Firefox in disabled-chromium0020-test (no idea why it’s called disabled when it’s not, in fact, disabled).

That test looks something like:

Set-Cookie: aaaaaaaaaaaaa....(repeating a's for seemingly forever)

But first, a little background on expected behavior so you can begin to care.

rfc6265 talks about cookie size limits like so:

At least 4096 bytes per cookie (as measured by the sum of the length of the cookie’s name, value, and attributes).

(It’s actually trying to say at most, which confuses me, but a lot of things confuse me on the daily.)

So in my re-written version of disabled-chromium0020-test I’ve got (just assume a function that consumes this object and does something useful):

{
  // 7 + 4089 = 4096
  cookie: `test=11${"a".repeat(4089)}`,
  expected: `test=11${"a".repeat(4089)}`,
  name: "Set cookie with large value ( = 4kb)",
},

Firefox and Chrome are happy to set that cookie. Fantastic. So naturally we want to test a cookie with 4097 bytes and make sure the cookie gets ignored:

// 7 + 4091 = 4098
{
  cookie: `test=12${"a".repeat(4091)}`,
  expected: "",
  name: "Ignore cookie with large value ( > 4kb)",
},

If you’re paying attention, and good at like, reading and math, you’ll notice that 4096 + 1 is not 4098. A+ work.

What I discovered, much in the same way that Columbus discovered Texas, is that a “cookie string” that is 4097 bytes long currently has different behaviors in Firefox and Chrome (and probably most browsers, TBQH). Firefox (sort of correctly, according to the current spec language, if you ignore attributes) will only consider the name length + the value length, while Chrome will consider the entire cookie string including name, =, value, and all the attributes when enforcing the limit.
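A minimal illustration of that divergence, based on the implementations quoted below (assuming a plain ASCII value and no attributes):

// name (4) + "=" (1) + value (4092) = 4097 bytes for the whole cookie string,
// but name + value alone is exactly 4096 bytes.
document.cookie = `test=${"a".repeat(4092)}`;

// Firefox: name + value <= 4096, so the cookie is set.
// Chromium: the full cookie line is 4097 bytes > 4096, so it is ignored.
console.log(document.cookie.includes("test="));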

I’m going to include the current implementations here, because it makes me look smart (and I’m trying to juice SEO):

Gecko (which sets kMaxBytesPerCookie to 4096):

bool CookieCommons::CheckNameAndValueSize(const CookieStruct& aCookieData) {
  // reject cookie if it's over the size limit, per RFC2109
  return (aCookieData.name().Length() + aCookieData.value().Length()) <=
         kMaxBytesPerCookie;
}

Chromium (which sets kMaxCookieSize to 4096):

ParsedCookie::ParsedCookie(const std::string& cookie_line) {
  if (cookie_line.size() > kMaxCookieSize) {
    DVLOG(1) << "Not parsing cookie, too large: " << cookie_line.size();
    return;
  }

  ParseTokenValuePairs(cookie_line);
  if (!pairs_.empty())
    SetupAttributes();
}

Neither is really following what the spec says, so until the spec tightens that down (it’s currently only a SHOULD-level requirement, which is how you say pretty-please in a game of telephone from browser engineers to computers), I bumped the test up to 4098 bytes so both browsers consistently ignore the cookie.

(I spent about 30 seconds testing in Safari, and it seems to match Firefox at least for the 4097 “cookie string” limit, but who knows what it does with attributes.)


Cameron Kaiser: Floodgap downtime fixed

Mozilla planet - wo, 09/12/2020 - 23:32
I assume some of you will have noticed that Floodgap was down for a couple of days -- though I wouldn't know, since it wasn't receiving E-mail during the downtime. This being 2020, the problem turned out to be a cavalcade of simultaneous major failures, including the complete loss of the main network backbone's power supply. Such is the occasional "joy" of running a home server room. It is now on a backup rerouted supply while new parts are ordered, and all services, including TenFourFox and gopher.floodgap.com, should be back up and running. Note that there will be some reduced redundancy until I can effect definitive repairs, but most users shouldn't be affected.

Mozilla Privacy Blog: Mozilla teams up with Twitter, Automattic, and Vimeo to provide recommendations on EU content responsibility

Mozilla planet - wo, 09/12/2020 - 09:30

The European Commission will soon unveil its landmark Digital Services Act draft law, which will set out a vision for the future of online content responsibility in the EU. We’ve joined up with Twitter, Automattic, and Vimeo to provide recommendations on how the EU’s novel proposals can ensure a more thoughtful approach to addressing illegal and harmful content in the EU, in a way that tackles online harms while safeguarding smaller companies’ ability to compete.

As we note in our letter,

“The present conversation is too often framed through the prism of content removal alone, where success is judged solely in terms of ever-more content removal in ever-shorter periods of time.

Without question, illegal content – including terrorist content and child sexual abuse material – must be removed expeditiously. Yet by limiting policy options to a solely stay up-come down binary, we forgo promising alternatives that could better address the spread and impact of problematic content while safeguarding rights and the potential for smaller companies to compete.

Indeed, removing content cannot be the sole paradigm of Internet policy, particularly when concerned with the phenomenon of ‘legal-but-harmful’ content. Such an approach would benefit only the very largest companies in our industry.

We therefore encourage a content moderation discussion that emphasises the difference between illegal and harmful content and highlights the potential of interventions that address how content is surfaced and discovered. Included in this is how consumers are offered real choice in the curation of their online environment.”

We look forward to working with lawmakers in the EU to help bring this vision for a healthier internet to fruition in the upcoming Digital Services Act deliberations.

You can read the full letter to EU lawmakers here and more background on our engagement with the EU DSA here.



Ryan Harter: Leading with Data - Cascading Metrics

Mozilla planet - wo, 09/12/2020 - 09:00

It's surprisingly hard to lead a company with data. There's a lot written about how to set good goals and how to avoid common pitfalls (like Surrogation) but I haven't seen much written about the practicalities of taking action on these metrics.

I spent most of this year working with …

