Mozilla Nederland: The Dutch Mozilla community

Air Mozilla: Bugzilla Project Meeting, 05 Jul 2017

Mozilla planet - Wed, 05/07/2017 - 22:00

Bugzilla Project Meeting: the Bugzilla Project developers meeting.

Categories: Mozilla-nl planet

Kat Braybrooke: two new articles: making, hacking, technomyths + machine ghosts!

Mozilla planet - Tue, 04/07/2017 - 17:41

I’m really excited to be able to share two new articles with you today about some of my recent research and practice exploring unexpected encounters between machines, hackers and makers that have just been published for the Digital Culture & Society Journal and Furtherfield.

shanzai hacking in china

The first is the first article I have ever published in a peer-reviewed academic journal (and a great one at that!) with the ever-inspiring Tim Jordan. It really felt like an honour to get a copy of the journal in the mail with my name inside it. Called “Genealogy, Culture and Technomyth: Decolonizing Western Information Technologies, from Open Source to the Maker Movement”, it explores a few key technomyths surrounding the hype attached to Western movements in information technologies, from maker cultures and hackspaces to Open Source to Web 2.0.

Using a materialist genealogical framework and applying it to non-Western case studies, from One Laptop Per Child in Peru to jugaad making in India to shanzai copyleft practices in China, we suggest that a heterogeneous set of essentially global cultural practices has been homogenized by the West. We identify three key aspects as constitutive of all three technomyths: the technological determinism of information technologies, neoliberal capitalism and its “ideal future” subjectivities, and the absence and/or invisibility of the non-Western.

An excerpt from the conclusion: “By looking closely at the maker movement as a technomyth through comparisons to other practices, and then comparing this analysis to the Open Source and Web 2.0 myths that came before it, we argue that not only do enthusiastic, zeitgeist-like proclamations about internet and information-based technologies exist (something often noted), but that these hypes take particular forms, framing societal possibilities through culturally-unique perspectives to innovation itself. Here it is important to underline that we are not arguing that nothing is new. Instead, in each of the technomyths we have explored, we have noted that there are specific technological origins, practices, economics and social interactions that are recognized – and many that are not. Our point here is that new technosocial practices are continually being channeled by influential technomyths that frame, direct and disseminate practices in their own mythical images.

We have also argued that jugaad, shanzai and OLPC-appropriation demonstrate what is neglected by the current dominant notion of the maker movement. The lack of communal, re-appropriated, necessity-based and non-Western uses of technology that we found were obstructed by maker movement progenitors has suggested three core constituents embedded within its claims: technological determinism, neoliberal capitalism and Western-centrism. Our analysis of two further technomyths, Open Source and Web 2.0, has confirmed these constituents as key.

A continued difficulty here is that our description has had to rely on the very broad categories of Western, neo-liberal and technologically determinist… but we do identify questions for future analysis. First, we have identified the ‘West’ as being formed not by a concrete conception… but by a relation to absence… the non-Western simply erased. This is something that juggad and other materialist practices fundamentally challenge, and it also suggests that attention should be paid to… future techno-subalterns. Second, we have seen a particular economic subjectivity presumed, one that prioritises the use of information technologies to act outside of state boundaries in the general pursuit of profit. This was most clear in Web 2.0, but is also present in other myths, and is again clearly challenged by more community-centred efforts such as OLPC reappropriation. Finally, we have seen how certain technologies are privileged as the drivers of technological determination. In all three Western technomyths, we find a fascination… with internet-technologies as determining social and cultural structures.”


To read the full article, for those with a general academic login, a PDF is now available [here]; for those with a Sussex login it’s available [here]; or please feel free to [email me] if you’d like a pre-publication copy. I’d love to hear your thoughts, rebuttals and opinions on it.

machine ghosts tour

The second article, which I wrote for Furtherfield with the talented Emma O'Sullivan, is called “Hunting the Machine Ghosts of Brighton”, and outlines our experiences in organising my first-ever psychogeography tour as part of the excellent Haunted Random Forest Festival. On this tour, we unveiled machine entities hidden within seemingly idyllic urban landscapes across the city of Brighton, from peregrine falcon webcams to always-listening WiFi hotspots. We were joined by an eclectic group of inspiring people from across the UK, who also helped us facilitate nodes and build activities for the tour. It was a very inspiring (and radical!) way to explore a city through its machines, its algorithms and its forgotten ghosts - and it was certainly an experience I won’t forget.

I’d like to give a big thanks to the editors and peer reviewers at Digital Culture & Society and Furtherfield for their guidance, kindness and support in getting these writings out into the world. I look forward to the continued conversations yet to come from them, human-based, bot-based and otherwise! ;)


Doug Belshaw: The Essential Elements of Digital Literacies (WCCE, July 2017)

Mozilla planet - Tue, 04/07/2017 - 12:21

Dave Quinn got in touch with me to bemoan the fact that my recent presentations haven’t been recorded. As a result, I’ve pre-recorded the talk I’m giving at the World Conference on Computers in Education at Dublin Castle today.

Slides: Google / Slideshare
Audio: SoundCloud

Depending on your privacy settings, you should see the slides and audio embedded above. They’re also archived at archive.org.


Smokey Ardisson: Welcome, “The Month in WordPress”!

Mozilla planet - Tue, 04/07/2017 - 07:59

I stumbled into the افكار و احلام dashboard today to make a new post, and I noticed a new item in the “WordPress News” feed: a monthly roundup of what’s going on in the WordPress project. The WordPress Blog has, for as long as I can recall, limited itself to posting about releases (new versions, betas, etc.) and the occasional other high-profile news item, so if the blog was your main ongoing point-of-contact with WordPress (as I suspect it is for most users, more-often-than-not including me), you didn’t learn much about what was happening or where the software was headed until a release featuring those changes landed in your lap. So this is a welcome change, a quick overview of big items and pointers to other things that may be of interest, but on a monthly basis to still keep the WordPress Blog low-volume (and thus low-annoyance).

It reminds me of the weekly-ish Camino updates begun (I think) in 2005 by Samuel Sidler (with assistance from Wevah), first on Camino Update and then later on his own blog, and later taken over by me when Sam got busy with other things (and it would surprise me if Sam’s fingerprints weren’t on this new WordPress monthly roundup in some way). Over the years, those updates filled an important communication need in the Camino Project. It’s important to make it easy for people interested in your software to see what you’re doing (or that you are still doing something!), especially when those tentpole events like releases have a relatively long duration between them, but to do so without either requiring those interested people to dig in to the daily activity of the project or overwhelming them with such details or project jargon. I feel like “The Month in WordPress: June 2017” strikes the right balance and hits the mark for WordPress, and I’m excited to keep reading the feature in the months to come.

So welcome to the web, “The Month in WordPress”! :)


This Week In Rust: This Week in Rust 189

Mozilla planet - Tue, 04/07/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community News & Blog Posts Crate of the Week

This week's crate is cargo-make, a crate that helps you automate your build workflow beyond what cargo already offers. Thanks to Sagie Gur Ari for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

109 pull requests were merged in the last week

New Contributors
  • Lee Bousfield
  • Milton Mazzarri
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

Issues in final comment period:

An interesting issue:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I have rewritten the code that was formerly in c

And which you probably had written very well

Forgive me it was unsafe

@horse_rust on Twitter.

Thanks to @balrogboogie for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.


Selena Deckelmann: Runtime All-Hands June 2017 Summary

Mozilla planet - Mon, 03/07/2017 - 22:24

All of Mozilla met in San Francisco last week for a work week. Unlike the last few All-Hands, we spent the week mostly informally and not in meetings — hacking in rooms together on near-term work.

The Runtime engineering team was focused on landing patches for Quantum Flow, Quantum DOM and Quantum Networking efforts. We had exciting changes related to Speedometer v2, both in improving how we measure and landing key patches. The Security Engineering team invited the Tor Project to join and deep dive into the Android version of the browser (based on Fennec, and called OrFox). The rest of the Runtime team was landing patches, reconnecting with colleagues across the org, and making exciting, measurable progress toward a great launch of Firefox 57.

I asked several team leads to send me their highlights from the week. I’ve summarized this below. If I missed something that was important to you, please get in touch.

Project Quantum highlights

“Watching my laptop race HTTP network queries against the disk cache and seeing that it was choosing the right transactions to have the network actually be faster.” -Patrick McManus

  • QF team fixed 26 Quantum Flow bugs since last Friday, June 23
  • Landed budget-based background tab throttling, preffed off; a pref experiment will gate the rollout (meta bug)
  • Joel Maher and his “army of automation” have helped correct Speedometer reporting.
  • Got a bunch of people from different teams in a room and figured out the easiest/best architecture for supporting the moz-page-thumbs protocol in e10s (i.e. the protocol that supports everything you see when you open a new tab). Same for nsITraceableListener support (which is a must for 57: needed to support the NoScript addon).
  • Incremental table sweeping bug fixes landed that should reduce GC pause times.
  • Byte code cache landed and is on for 5% of Nightly population — this project was in progress for more than a year.
  • We now have a name for almost every runnable in Firefox.

Security/Privacy Highlights

“At Mozilla all hands this week. They are excited to work with us.” –Mike Perry, Tor Project

  • Tor Browser for Android was updated during the workweek to be based on Firefox 52 (from 45). The update is in QA now.
  • Patch written (and being rewritten) for constant blinding in the JIT.
  • A patch for integrating Tor into Focus was hacked up for discussion.
  • Got the TLS Canary (tool for testing changes to our crypto stack on Alexa-top-100 websites) running in TaskCluster.
  • Had first successful use of OneCRL administrative workflow

Other Runtime Highlights

“The culture of focusing on performance is in effect! Performance was a big part of every discussion and review.” -Andrew Overholt

  • “Making my first interoperable handshake and encrypted data for Mozilla’s IETF QUIC.” -Patrick McManus
  • JavaScript classes are done and fully optimized.
  • GeckoView example now being tested in automation.
  • Added security certificate information to GeckoView for use in PWA and Custom Tabs.
  • Taught a bunch of people how to profile at the two Quantum Flow profiler office hours sessions.

Thanks everyone for a productive week!


Carsten Book: Sheriff Statistics for June 2017

Mozilla planet - Mon, 03/07/2017 - 12:24

Hi,

Welcome to the Sheriff Statistics for June 2017 !

Also, I would like to thank everyone for taking part in the Sheriff Survey; you can see the results here: https://blog.mozilla.org/tomcat/2017/06/23/sheriff-survey-results/ Now to the actual data for June 2017:

Autoland Tree:

Total Servo Sync Pushes: 254
Total Pushes: 1799
Total number of commits: 3711
Total number of commits without Servo: 3445
Total Backouts: 167
Total multi-bug pushes: 12
Total number of bugs changed: 1702
Percentage of backouts against bugs: 9.81%
Percentage of backouts: 9.28%
Percentage of backouts without Servo: 10.81% (that’s ~0.8% higher than in May)

Mozilla-inbound Tree:

Total Servo Sync Pushes: 0
Total Pushes: 1117
Total number of commits: 3611
Total number of commits without Servo: 3611
Total Backouts: 130
Total multi-bug pushes: 159
Total number of bugs changed: 1591
Percentage of backouts against bugs: 8.17%
Percentage of backouts: 11.64%
Percentage of backouts without Servo: 11.64% (~0.7% higher than in May)

So the Sheriffs managed and monitored ~2,900 pushes and 297 backouts on the integration trees in June 2017.

Let us know if you have any questions or feedback about sheriffing.

Cheers and have a great July!
-Tomcat


Mozilla Addons Blog: July’s Featured Extensions

Mozilla planet - Sun, 02/07/2017 - 02:30


Pick of the Month: Privacy Badger

by EFF Technologists
Protects you from spying ads and invisible trackers.

“Works without any problems, causes no site loading issues, and is more trustworthy than other, similar programs.”

Featured: AdBlock for Firefox

by AdBlock
A robust ad blocker that takes aim at all forms of ads—pop-ups, banners, pre-rolls, and more.

“Best ad blocker out there.”

Featured: Disconnect

by Disconnect
Another great privacy protecting extension, Disconnect blocks invisible trackers and helps speed up your Firefox experience.

“One of the most important browser add-ons out there. Thanks!”

Featured: Easy YouTube Video Downloader Express

by Dishita
A very simple-to-use YouTube downloader, and one of the few to offer 1080p full-HD video and 256kbps MP3 downloads.

“Brilliant for downloading MP3’s and MP4’s.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post July’s Featured Extensions appeared first on Mozilla Add-ons Blog.


Daniel Pocock: A FOSScamp by the beach

Mozilla planet - Fri, 30/06/2017 - 10:47

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see this photo of Open Labs board members attending the town hall for the signing of an open data agreement:

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.


Mozilla Open Innovation Team: A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities

Mozilla planet - Fri, 30/06/2017 - 02:32
Photo credit: cyberdee via Visual Hunt / CC BY-NC-SA

Another year, another press story letting us know Open Source has a diversity problem. But this isn’t news — women, people of color, parents, non-technical contributors, cis/transgender and other marginalized people and allies have been sharing stories of challenge and overcoming for years. It can’t be enough to count who makes it through the gauntlet of tasks and exclusive cultural norms that lead to a first pull request; it’s not enough to celebrate increased diversity on stage at technical conferences when the audience remains homogeneous and abuse goes unchallenged.

Open source is missing out on diverse perspectives and experiences that can drive change for a better world because we’re stuck in our ways — continually leaning on long-held assumptions about why we lose people. At Mozilla, we believe that to truly influence positive change in Diversity & Inclusion in our communities, and more broadly in open source, we need to learn, empathize — and innovate. We’re committed to building on the good work of our peers to further grow through action — building bridges and collaborating with other communities also investing in D&I.

This year, leading with our organizational strategy for D&I, we are investing in our communities, informed by three months of research. Qualitative research was conducted across the globe, with over 85 interviews conducted as part of identity or focus groups, including interviews in participants’ first languages; for areas with low bandwidth (or participants who preferred not to speak on video) we interviewed via Telegram.

Quantitative data was analyzed from various sources, including the Mozilla Reps portal, the Mozillian Sentiment Survey, a series of applications to Global Leadership events, regional meetups, a regional community survey, and various smaller data sources.

For five weeks, beginning July 3rd, this blog series will share key findings — challenges, and experiments we’re investing in for the remainder of the year and into next. As part of this, we intend to build bridges between our work and the research and work of other open source communities. At the end of this series we’ll post a link to schedule a presentation of this work to your community for input and future collaboration.

Onward!

A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Emma Irwin: A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities

Mozilla planet - Fri, 30/06/2017 - 02:28

Cross-posted to our Open Innovation Blog

Another year, another press story letting us know Open Source has a diversity problem. But this isn’t news — women, people of color, parents, non-technical contributors, cis/transgender and other marginalized people and allies have been sharing stories of challenge and overcoming for years. It can’t be enough to count who makes it through the gauntlet of tasks and exclusive cultural norms that lead to a first pull request; it’s not enough to celebrate increased diversity on stage at technical conferences when the audience remains homogeneous and abuse goes unchallenged.

Open source is missing out on diverse perspectives and experiences that can drive change for a better world because we’re stuck in our ways — continually leaning on long-held assumptions about why we lose people. At Mozilla, we believe that to truly influence positive change in Diversity & Inclusion in our communities, and more broadly in open source, we need to learn, empathize — and innovate. We’re committed to building on the good work of our peers to further grow through action — building bridges and collaborating with other communities also investing in D&I.

This year, leading with our organizational strategy for D&I, we are investing in our communities, informed by three months of research. Qualitative research was conducted across the globe, with over 85 interviews conducted as part of identity or focus groups, including interviews in participants’ first languages; for areas with low bandwidth (or participants who preferred not to speak on video) we interviewed via Telegram.

 

Quantitative data was analyzed from various sources, including the Mozilla Reps portal, the Mozillian Sentiment Survey, a series of applications to Global Leadership events, regional meetups, a regional community survey, and various smaller data sources.

For five weeks, beginning July 3rd, this blog series will share key findings — challenges, and experiments we’re investing in for the remainder of the year and into next. As part of this, we intend to build bridges between our work and the research and work of other open source communities. At the end of this series we’ll post a link to schedule a presentation of this work to your community for input and future collaboration.

Cross-posted to our Open Innovation Blog

Feature image photo credit: cyberdee via Visual Hunt / CC BY-NC-SA



Robert O'Callahan: Patch On Linux Kernel Stable Branches Breaks rr

Mozilla planet - Fri, 30/06/2017 - 01:35

A change in 4.12rc5 breaks rr. We're trying to get it fixed before 4.12 is released, and I think that will be OK. Unfortunately that change has already been backported to 3.18.57, 4.4.72, 4.9.32 and 4.11.5 :-( (all released on June 14, and I guess arriving in distros a bit later). Obviously we'll try to get the 4.12 fix also backported to those branches, but that will take a little while.

The symptoms are that long, complex replays fail with "overshot target ticks=... by N", where N is generally a pretty large number (> 1000). If you look in the trace file, the value N will usually be similar to the difference between the target ticks and the previous ticks value for that task --- i.e. we tried to stop after N ticks but we actually stopped after about N*2 ticks. Unfortunately, rr tests don't seem to be affected.

I'm not sure if there's a reasonable workaround we can use in rr, or if there is one, whether it's worth effort to deploy. That may depend on how the conversation with upstream goes.


Tim Taubert: Verified Binary Multiplication for GHASH

Mozilla planet - Thu, 29/06/2017 - 19:45

Previously I introduced some very basic Cryptol and SAWScript, and explained how to reason about the correctness of constant-time integer multiplication written in C/C++.

In this post I will touch on using formal verification as part of the code review process, in particular showing how, by using the Software Analysis Workbench, we saved ourselves hours of debugging when rewriting the GHASH implementation for NSS.

What’s GHASH again?

GHASH is part of the Galois/Counter Mode, a mode of operation for block ciphers. AES-GCM for example uses AES as the block cipher for encryption, and appends a tag generated by the GHASH function, thereby ensuring integrity and authenticity.

The core of GHASH is multiplication in GF(2^128), a characteristic-two finite field with coefficients in GF(2); they’re either zero or one. Polynomials in GF(2^m) can be represented as m-bit numbers, with each bit corresponding to a term’s coefficient. In GF(2^3) for example, x^2 + 1 may be represented as the binary number 0b101 = 5.

Additions and subtractions in finite fields are “carry-less” because the coefficients must be in GF(p), for any GF(p^m). As x * y is equivalent to adding x to itself y times, we can call multiplication in finite fields “carry-less” too. In GF(2) addition is simply XOR, so we can say that multiplication in GF(2^m) is equal to binary multiplication without carries.

Note that the term carry-less only makes sense when talking about GF(2^m) fields that are easily represented as binary numbers. Otherwise one would rather talk about multiplication in finite fields without comparing it to standard integer multiplication.

Franziskus’ post nicely describes why and how we updated our AES-GCM code in NSS. In case a user’s CPU is not equipped with the Carry-less Multiplication (CLMUL) instruction set, we need to provide a fallback and implement carry-less, constant-time binary multiplication ourselves, using standard integer multiplication with carry.

bmul() for 32-bit machines

The basic implementation of our binary multiplication algorithm is taken straight from Thomas Pornin’s excellent constant-time crypto post. To support 32-bit machines the best we can do is multiply two uint32_t numbers and store the result in a uint64_t.

For the full GHASH, Karatsuba decomposition is used: multiplication of two 128-bit integers is broken down into nine calls to bmul32(x, y, ...). Let’s take a look at the actual implementation:

/* Binary multiplication x * y = r_high << 32 | r_low. */
void
bmul32(uint32_t x, uint32_t y, uint32_t *r_high, uint32_t *r_low)
{
    uint32_t x0, x1, x2, x3;
    uint32_t y0, y1, y2, y3;
    uint32_t m1 = (uint32_t)0x11111111;
    uint32_t m2 = (uint32_t)0x22222222;
    uint32_t m4 = (uint32_t)0x44444444;
    uint32_t m8 = (uint32_t)0x88888888;
    uint64_t z0, z1, z2, z3;
    uint64_t z;

    /* Apply bitmasks. */
    x0 = x & m1;
    x1 = x & m2;
    x2 = x & m4;
    x3 = x & m8;
    y0 = y & m1;
    y1 = y & m2;
    y2 = y & m4;
    y3 = y & m8;

    /* Integer multiplication (16 times). */
    z0 = ((uint64_t)x0 * y0) ^ ((uint64_t)x1 * y3) ^
         ((uint64_t)x2 * y2) ^ ((uint64_t)x3 * y1);
    z1 = ((uint64_t)x0 * y1) ^ ((uint64_t)x1 * y0) ^
         ((uint64_t)x2 * y3) ^ ((uint64_t)x3 * y2);
    z2 = ((uint64_t)x0 * y2) ^ ((uint64_t)x1 * y1) ^
         ((uint64_t)x2 * y0) ^ ((uint64_t)x3 * y3);
    z3 = ((uint64_t)x0 * y3) ^ ((uint64_t)x1 * y2) ^
         ((uint64_t)x2 * y1) ^ ((uint64_t)x3 * y0);

    /* Merge results. */
    z0 &= ((uint64_t)m1 << 32) | m1;
    z1 &= ((uint64_t)m2 << 32) | m2;
    z2 &= ((uint64_t)m4 << 32) | m4;
    z3 &= ((uint64_t)m8 << 32) | m8;
    z = z0 | z1 | z2 | z3;

    *r_high = (uint32_t)(z >> 32);
    *r_low = (uint32_t)z;
}

Thomas’ explanation is not too hard to follow. The main idea behind the algorithm are the bitmasks m1 = 0b00010001..., m2 = 0b00100010..., m4 = 0b01000100..., and m8 = 0b10001000.... They respectively have the first, second, third, and fourth bit of every nibble set. This leaves “holes” of three bits between each “data bit”, so that with those applied at most a quarter of the 32 bits are equal to one.

Per standard integer multiplication, eight times eight bits will at most add eight carry bits of value one together, thus we need sufficiently sized holes per digit that can hold the value 8 = 0b1000. Three-bit holes are big enough to prevent carries from “spilling” over, they could even handle up to 15 = 0b1111 data bits in each of the two integer operands.

Review, tests, and verification

The first version of the patch came with a bunch of new tests, the vectors taken from the GCM specification. We previously had no such low-level coverage, all we had were a number of high-level AES-GCM tests.

When reviewing, after looking at the patch itself and applying it locally to see whether it builds and tests succeed, the next step I wanted to try was to write a Cryptol specification to prove the correctness of bmul32(). Thanks to the built-in pmult function that took only a few minutes.

m <- llvm_load_module "bmul.bc";

let {{
  bmul32 : [32] -> [32] -> ([32], [32])
  bmul32 a b = (take`{32} prod, drop`{32} prod)
      where prod = pad (pmult a b)
            pad x = zero # x
}};

The SAWScript needed to properly parse the LLVM bitcode and formulate the equivalence proof is straightforward, it’s basically the same as shown in the previous post.

llvm_verify m "bmul32" [] do {
  x <- llvm_var "x" (llvm_int 32);
  y <- llvm_var "y" (llvm_int 32);
  llvm_ptr "r_high" (llvm_int 32);
  r_high <- llvm_var "*r_high" (llvm_int 32);
  llvm_ptr "r_low" (llvm_int 32);
  r_low <- llvm_var "*r_low" (llvm_int 32);

  let res = {{ bmul32 x y }};
  llvm_ensure_eq "*r_high" {{ res.0 }};
  llvm_ensure_eq "*r_low" {{ res.1 }};

  llvm_verify_tactic abc;
};

Compile to bitcode and run SAW. After just a few seconds, it will tell us that it succeeded in proving the equivalence of the two implementations.

$ saw bmul.saw
Loading module Cryptol
Loading file "bmul.saw"
Successfully verified @bmul32

bmul() for 64-bit machines

bmul32() is called nine times, each time performing 16 multiplications. That’s 144 multiplications in total for one GHASH evaluation. If we had a bmul64() for 128-bit multiplication with uint128_t we’d need to call it only thrice.

The naive approach taken in the first patch revision was to just double the bitsize of the arguments and variables, and also extend the bitmasks. If you paid close attention to the previous section you might notice a problem here already. If not, it will become clear in a few moments.

typedef unsigned __int128 uint128_t;

/* Binary multiplication x * y = r_high << 64 | r_low. */
void
bmul64(uint64_t x, uint64_t y, uint64_t *r_high, uint64_t *r_low)
{
    uint64_t x0, x1, x2, x3;
    uint64_t y0, y1, y2, y3;
    uint64_t m1 = (uint64_t)0x1111111111111111;
    uint64_t m2 = (uint64_t)0x2222222222222222;
    uint64_t m4 = (uint64_t)0x4444444444444444;
    uint64_t m8 = (uint64_t)0x8888888888888888;
    uint128_t z0, z1, z2, z3;
    uint128_t z;

    /* Apply bitmasks. */
    x0 = x & m1;
    x1 = x & m2;
    x2 = x & m4;
    x3 = x & m8;
    y0 = y & m1;
    y1 = y & m2;
    y2 = y & m4;
    y3 = y & m8;

    /* Integer multiplication (16 times). */
    z0 = ((uint128_t)x0 * y0) ^ ((uint128_t)x1 * y3) ^
         ((uint128_t)x2 * y2) ^ ((uint128_t)x3 * y1);
    z1 = ((uint128_t)x0 * y1) ^ ((uint128_t)x1 * y0) ^
         ((uint128_t)x2 * y3) ^ ((uint128_t)x3 * y2);
    z2 = ((uint128_t)x0 * y2) ^ ((uint128_t)x1 * y1) ^
         ((uint128_t)x2 * y0) ^ ((uint128_t)x3 * y3);
    z3 = ((uint128_t)x0 * y3) ^ ((uint128_t)x1 * y2) ^
         ((uint128_t)x2 * y1) ^ ((uint128_t)x3 * y0);

    /* Merge results. */
    z0 &= ((uint128_t)m1 << 64) | m1;
    z1 &= ((uint128_t)m2 << 64) | m2;
    z2 &= ((uint128_t)m4 << 64) | m4;
    z3 &= ((uint128_t)m8 << 64) | m8;
    z = z0 | z1 | z2 | z3;

    *r_high = (uint64_t)(z >> 64);
    *r_low = (uint64_t)z;
}

Tests and another equivalence proof

The above version of bmul64() passed the GHASH test vectors with flying colors, which tricked reviewers who had only just learned the basic idea of the algorithm into thinking it looked just fine. Fallible humans. Let’s update the proofs and see what happens.

bmul : {n,m} (fin n, n >= 1, m == n*2 - 1) => [n] -> [n] -> ([n], [n])
bmul a b = (take`{n} prod, drop`{n} prod)
    where prod = pad (pmult a b : [m])
          pad x = zero # x

Instead of hardcoding bmul for 32-bit integers we use the polymorphic type n to denote the size in bits; m is mostly a helper to make the signature a tad more readable. We can now reason about carry-less n-bit binary multiplication.
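For intuition, the spec's semantics can be modeled in a few lines of Python (names mine): compute the carry-less product, pad it to 2n bits, and split it into high and low n-bit halves.

```python
def pmult(a, b):
    """Carry-less (GF(2) polynomial) product of two integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def bmul(a, b, n):
    """Model of the polymorphic Cryptol spec for n-bit inputs:
    the 2n-bit padded product, split into (high, low) n-bit halves."""
    prod = pmult(a, b)              # at most 2*n - 1 bits; the padding is implicit
    return prod >> n, prod & (1 << n) - 1

print(bmul(0b101, 0b110, 3))  # → (3, 6), i.e. the product 0b011110 split in half
```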

Duplicating the SAWScript spec and running :s/32/64 is easy, but certainly nicer is adding a function that takes n as a parameter and returns a spec for n-bit arguments.

let SpecBinaryMul n = do {
    x <- llvm_var "x" (llvm_int n);
    y <- llvm_var "y" (llvm_int n);

    llvm_ptr "r_high" (llvm_int n);
    r_high <- llvm_var "*r_high" (llvm_int n);
    llvm_ptr "r_low" (llvm_int n);
    r_low <- llvm_var "*r_low" (llvm_int n);

    let res = {{ bmul x y }};
    llvm_ensure_eq "*r_high" {{ res.0 }};
    llvm_ensure_eq "*r_low" {{ res.1 }};

    llvm_verify_tactic abc;
};

llvm_verify m "bmul32" [] (SpecBinaryMul 32);
llvm_verify m "bmul64" [] (SpecBinaryMul 64);

We use two instances of the bmul spec to prove correctness of bmul32() and bmul64() sequentially. The second verification will take a lot longer before yielding results.

$ saw bmul.saw
Loading module Cryptol
Loading file "bmul.saw"
Successfully verified @bmul32
When verifying @bmul64:
Proof of Term *(Term Ident "r_high") failed.
Counterexample:
  %x: 15554860936645695441
  %y: 17798150062858027007
  lss__alloc0: 262144
  lss__alloc1: 8
Term *(Term Ident "r_high")
Encountered: 5413984507840984561
Expected:    5413984507840984531
saw: user error ("llvm_verify" (bmul.saw:31:1): Proof failed.)

Proof failed. As you probably expected by now, the bmul64() implementation is erroneous and SAW gives us a specific counterexample to investigate further. It took us a while to understand the failure but it seems very obvious in hindsight.
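The failure is also easy to reproduce outside of SAW. Below is a Python port of the faulty bmul64() (exact big-integer arithmetic, so it matches the uint128_t version bit for bit) checked against a reference carry-less multiply on SAW's counterexample; the helper names are mine:

```python
def clmul(x, y):
    """Reference carry-less (GF(2) polynomial) multiplication."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        y >>= 1
    return r

def bmul64_buggy(x, y):
    """Port of the faulty bmul64(): 4-bit-spaced masks leave only 3-bit holes."""
    masks = [0x1111111111111111 << s for s in range(4)]
    xs = [x & m for m in masks]
    ys = [y & m for m in masks]
    z = 0
    for k in range(4):
        zk = 0
        for i in range(4):
            zk ^= xs[i] * ys[(k - i) % 4]   # plain integer multiply, as in the C code
        z |= zk & ((masks[k] << 64) | masks[k])
    return z >> 64, z & (1 << 64) - 1

x, y = 15554860936645695441, 17798150062858027007  # SAW's counterexample
r_high, _ = bmul64_buggy(x, y)
assert r_high != clmul(x, y) >> 64  # carry spill corrupts the high word
```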

Fixing the bmul64() bitmasks

As already shown above, bitmasks leaving three-bit holes between data bits can avoid carry spilling for multiplications of up to two 15-bit integers. Using every fourth bit of a 64-bit argument however yields 16 data bits each, so carries can overwrite data bits. We need bitmasks with four-bit holes.
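The counting argument can be checked directly: a multiplication column collects one partial bit per data bit of the operands, and a sum of s one-bits occupies s.bit_length() bits, which must fit within the bit spacing so it never reaches the next data column. A small Python check (a sketch of the argument, not code from the patch):

```python
m_4spaced = 0x1111111111111111  # every 4th bit set: 3-bit holes, 16 data bits
m_5spaced = 0x1084210842108421  # every 5th bit set: 4-bit holes, 13 data bits

data_bits_4 = bin(m_4spaced).count("1")
data_bits_5 = bin(m_5spaced).count("1")

# a column sum of s one-bit partial products spans s.bit_length() bits
assert data_bits_4.bit_length() > 4   # 16 needs 5 bits, but only 4 fit before
                                      # the next data bit -> carries overwrite data
assert data_bits_5.bit_length() <= 5  # 13 fits in 4 bits -> stays inside the hole
```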

/* Binary multiplication x * y = r_high << 64 | r_low. */
void bmul64(uint64_t x, uint64_t y, uint64_t *r_high, uint64_t *r_low)
{
    uint128_t x1, x2, x3, x4, x5;
    uint128_t y1, y2, y3, y4, y5;
    uint128_t r, z;

    /* Define bitmasks with 4-bit holes. */
    uint128_t m1 = (uint128_t)0x2108421084210842 << 64 | 0x1084210842108421;
    uint128_t m2 = (uint128_t)0x4210842108421084 << 64 | 0x2108421084210842;
    uint128_t m3 = (uint128_t)0x8421084210842108 << 64 | 0x4210842108421084;
    uint128_t m4 = (uint128_t)0x0842108421084210 << 64 | 0x8421084210842108;
    uint128_t m5 = (uint128_t)0x1084210842108421 << 64 | 0x0842108421084210;

    /* Apply bitmasks. */
    x1 = x & m1; y1 = y & m1;
    x2 = x & m2; y2 = y & m2;
    x3 = x & m3; y3 = y & m3;
    x4 = x & m4; y4 = y & m4;
    x5 = x & m5; y5 = y & m5;

    /* Integer multiplication (25 times) and merge results. */
    z = (x1 * y1) ^ (x2 * y5) ^ (x3 * y4) ^ (x4 * y3) ^ (x5 * y2);
    r = z & m1;
    z = (x1 * y2) ^ (x2 * y1) ^ (x3 * y5) ^ (x4 * y4) ^ (x5 * y3);
    r |= z & m2;
    z = (x1 * y3) ^ (x2 * y2) ^ (x3 * y1) ^ (x4 * y5) ^ (x5 * y4);
    r |= z & m3;
    z = (x1 * y4) ^ (x2 * y3) ^ (x3 * y2) ^ (x4 * y1) ^ (x5 * y5);
    r |= z & m4;
    z = (x1 * y5) ^ (x2 * y4) ^ (x3 * y3) ^ (x4 * y2) ^ (x5 * y1);
    r |= z & m5;

    *r_high = (uint64_t)(r >> 64);
    *r_low = (uint64_t)r;
}

m1, …, m5 are the new bitmasks. m1 equals 0b0010000100001..., the others are each shifted by one. As the number of data bits per argument is now 64/5 <= n < 64/4 we need 5*5 = 25 multiplications. With three calls to bmul64() that’s 75 in total.
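As a cheap sanity check before the full SAW proof, the new masks can be exercised with a randomized comparison. Here is a Python port of the fixed bmul64() (exact big-integer arithmetic, so it matches the uint128_t version) checked against a reference carry-less multiply; names are mine:

```python
import random

def clmul(x, y):
    """Reference carry-less (GF(2) polynomial) multiplication."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        y >>= 1
    return r

# the five 128-bit masks with 4-bit holes from the fixed bmul64()
MASKS = [
    (0x2108421084210842 << 64) | 0x1084210842108421,
    (0x4210842108421084 << 64) | 0x2108421084210842,
    (0x8421084210842108 << 64) | 0x4210842108421084,
    (0x0842108421084210 << 64) | 0x8421084210842108,
    (0x1084210842108421 << 64) | 0x0842108421084210,
]

def bmul64_fixed(x, y):
    """Port of the corrected bmul64(): returns (r_high, r_low)."""
    xs = [x & m for m in MASKS]
    ys = [y & m for m in MASKS]
    r = 0
    for k in range(5):
        z = 0
        for i in range(5):
            z ^= xs[i] * ys[(k - i) % 5]   # 25 integer multiplications in total
        r |= z & MASKS[k]
    return r >> 64, r & (1 << 64) - 1

random.seed(0)
for _ in range(1000):
    x, y = random.getrandbits(64), random.getrandbits(64)
    prod = clmul(x, y)
    assert bmul64_fixed(x, y) == (prod >> 64, prod & (1 << 64) - 1)
```

A randomized check like this is no substitute for the proof, of course; SAW covers all 2^128 input pairs, not a thousand of them.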

Run SAW again and, after about an hour, it will tell us it successfully verified @bmul64.

$ saw bmul.saw
Loading module Cryptol
Loading file "bmul.saw"
Successfully verified @bmul32
Successfully verified @bmul64

You might want to take a look at Thomas Pornin’s version of bmul64(). It is essentially the faulty version that SAW failed to verify; however, he works around the overflow by calling it twice, passing the arguments bit-reversed the second time. He invokes bmul64() six times, which results in a total of 96 multiplications.

Some final thoughts

One of the takeaways is that an implementation can pass all the test vectors given by a spec and still be incorrect. That is not too surprising: spec authors can’t possibly predict edge cases arising from implementation approaches they haven’t thought about.

Using formal verification as part of the review process was definitely a wise decision. We likely saved hours of debugging intermittently failing connections, or random interoperability problems reported by early testers. I’m confident this wouldn’t have made it much further down the release line.

We of course added an extra test that covers that specific flaw, but the next step definitely should be proper CI integration. The Cryptol code has already been written and there is no reason not to run it on every push. Verifying the full GHASH implementation would be ideal. The Cryptol code is almost trivial:

ghash : [128] -> [128] -> [128] -> ([64], [64])
ghash h x buf = (take`{64} res, drop`{64} res)
    where prod = pmod (pmult (reverse h) xor) <|x^^128 + x^^7 + x^^2 + x + 1|>
          xor = (reverse x) ^ (reverse buf)
          res = reverse prod

Proving the multiplication of two 128-bit numbers for a 256-bit product will unfortunately take a very very long time, or maybe not finish at all. Even if it finished after a few days that’s not something you want to automatically run on every push. Running it manually every time the code is touched might be an option though.

Categorieën: Mozilla-nl planet

Mozilla Open Design Blog: Join us on our research trip to learn more about Conscious Choosers!

Mozilla planet - do, 29/06/2017 - 19:10

Atlanta! Kansas City! Austin! Are you interested in helping us make the Internet more healthy, open and accessible?

The Mozilla Audience Insights team, with the help of our good friend and researcher Roberta Tassi, and the remote support of information designers Giorgio Uboldi and Matteo Azzi, will be visiting your cities to understand more about Conscious Choosers – a group of people whose commitment to clear values and beliefs drives their behavior and choices in terms of the brands, companies and organizations they support and use. We would like to learn more about their experiences and the role that the internet and technology play in their lives. This is an important group for Mozilla to understand because we believe people who think this way can also help us with our mission to keep the Internet open, accessible and healthy.

 

We would love to meet folks in person in:

Atlanta: July 11 – July 14, 2017

Kansas City: July 15 – July 19, 2017

Austin: July 20 – July 25, 2017

 

A few options to participate:

# Are you interested in meeting in person and contributing to the conversation with other people in the community?

Join us for a group discussion on the role of the Internet in our daily life, as individuals and societies, and why it’s important to keep it healthy.

# Are you a group of volunteers, a community or a non-profit organization that promotes some sort of activism and are targeting the Conscious Chooser audience as well?

We would like to come visit you and know more about what you do at the local level.

# Do you know your city and community well and want to learn more about human-centered design?

Join us as a local guide and participate in the whole research process.  You’ll get to experience a real field activity and synthesis with an expert team of designers and researchers.

If you answered yes to any of the above, or if you know anyone who fits the description, please reach us at audience_insights@mozilla.com

 

One more possibility:

# Are you someone who has strong personal values and beliefs and you expect the companies you support to have the same standards? Have you rejected a brand because you do not believe in their company values and business practices? Do you carefully research a product and company before you purchase?

Join us for individual interviews and help us better understand the values that drive your decisions and the motivations behind them.

 

We believe a healthy Internet is diverse and inclusive, so we would like to connect with a diverse group of participants. If you’re interested, start by filling out this form, so we can learn a little bit about you. We will handle your information as described in our Privacy Policy and will delete your information if you are not selected as a participant. We will contact you by July 22 if you have been selected.

For those who are not located in these US cities, you can participate in the discussion online on GitHub.

If we encounter an interesting nugget during our discussion, we will be sure to post it to try and gather more thoughts from you all!

If you have any questions, please feel free to reach out to us at: audience_insights@mozilla.com.

The post Join us on our research trip to learn more about Conscious Choosers! appeared first on Mozilla Open Design.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Introducing HumbleNet: a cross-platform networking library that works in the browser

Mozilla planet - do, 29/06/2017 - 18:50

HumbleNet started out as a project at Humble Bundle in 2015 to support an initiative to port peer-to-peer multiplayer games at first to asm.js and now to WebAssembly. In 2016, Mozilla’s web games program identified the need to enable UDP (User Datagram Protocol) networking support for web games, and asked if they could work with Humble Bundle to release the project as open source. Humble Bundle graciously agreed, and Mozilla worked with OutOfOrder.cc to polish and document HumbleNet. Today we are releasing the 1.0 version of this library to the world!

Why another networking library?

When the idea of HumbleNet first emerged we knew we could use WebSockets to enable multiplayer gaming on the web. This approach would require us to either replace the entire protocol with WebSockets (the approach taken by the asm.js port of Quake 3), or to tunnel UDP traffic through a WebSocket connection to talk to a UDP-based server at a central location.

In order to work, both approaches require a middleman to handle all network traffic between all clients. WebSockets is good for games that require a reliable ordered communication channel, but real-time games require a lower latency solution. And most real-time games care more about receiving the most recent data than getting ALL of the data in order. WebRTC’s UDP-based data channel fills this need perfectly. HumbleNet provides an easy-to-use API wrapper around WebRTC that enables real-time UDP connections between clients using the WebRTC data channel.

What exactly is HumbleNet?

HumbleNet is a simple C API that wraps WebRTC and WebSockets and hides away all the platform differences between browser and non-browser platforms. The current version of the library exposes a simple peer-to-peer API that allows for basic peer discovery and the ability to easily send data (via WebRTC) to other peers. In this manner, you can build a game that runs on Linux, macOS, and Windows, while using any web browser — and they can all communicate in real-time via WebRTC.  This means no central server (except for peer discovery) is needed to handle network traffic for the game. The peers can talk directly to each other.

HumbleNet itself uses a single WebSocket connection to manage peer discovery. This connection only handles requests such as “let me authenticate with you”, “what is the peer ID for the server named ‘bobs-game-server’?”, and “connect me to peer #2345”. After the peer connection is established, the games communicate directly over WebRTC.

HumbleNet demos

We have integrated HumbleNet into asm.js ports of Quake 2 and Quake 3, and we provide a simple Unity3D demo as well.

Here is a simple video of me playing Quake 3 against myself. One game running in Firefox 54 (general release), the other in Firefox Developer Edition.

Getting started

You can find pre-built redistributables at https://humblenet.github.io/. These include binaries for Linux, macOS, Windows, a C# wrapper, Unity3D plugin, and emscripten (for targeting asm.js or WebAssembly).

Starting your peer server

Read the documentation about the peer server on the website. In general, for local development, simply starting the peer server is good enough. By default it will run in non-SSL mode on port 8080.

Using the HumbleNet API

Initializing the library

To initialize HumbleNet just call humblenet_init() and then later humblenet_p2p_init(). The second call will initiate the connection to the peer server with the specified credentials.

humblenet_init();

// this initializes the P2P portion of the library, connecting to the given
// peer server with the game token/secret (used by the peer server to validate
// the client). The 4th parameter is for future use to authenticate the user
// with the peer server.
humblenet_p2p_init("ws://localhost:8080/ws", "game token", "game secret", NULL);

Getting your local peer id

Before you can send any data to other peers, you need to know what your own peer ID is. This can be done by periodically polling the humblenet_p2p_get_my_peer_id() function.

// initialization loop (getting a peer)
static PeerId myPeer = 0;

while (myPeer == 0) {
    // allow the polling to run
    humblenet_p2p_wait(50);
    // fetch a peer
    myPeer = humblenet_p2p_get_my_peer_id();
}

Sending data

To send data, we call humblenet_p2p_sendto. The 3rd parameter is the send mode type. Currently HumbleNet implements 2 modes: SEND_RELIABLE and SEND_RELIABLE_BUFFERED. The buffered version will attempt to buffer several small messages locally and send one larger message to the other peer, which is broken apart transparently on the other end.

void send_message(PeerId peer, MessageType type, const char* text, int size)
{
    if (size > 255) {
        return;
    }

    uint8_t buff[MAX_MESSAGE_SIZE];
    buff[0] = (uint8_t)type;
    buff[1] = (uint8_t)size;
    if (size > 0) {
        memcpy(buff + 2, text, size);
    }

    humblenet_p2p_sendto(buff, size + 2, peer, SEND_RELIABLE, CHANNEL);
}

Initial connections to peers

When initially connecting to a peer for the first time you will have to send an initial message several times while the connection is established. The basic approach here is to send a hello message once a second, and wait for an acknowledge response before assuming the peer is connected. Thus, minimally, any application will need 3 message types: HELLO, ACK, and some kind of DATA message type.

if (newPeer.status == PeerStatus::CONNECTING) {
    time_t now = time(NULL);
    if (now > newPeer.lastHello) {
        // try once a second
        send_message(newPeer.id, MessageType::HELLO, "", 0);
        startPeerLastHello = now;
    }
}

Retrieving data

To actually retrieve data that has been sent to your peer you need to use humblenet_p2p_peek and humblenet_p2p_recvfrom. If you assume that all messages are smaller than a maximum size, then a simple loop like this can process any pending messages. Note: messages larger than your buffer size will be truncated. Using humblenet_p2p_peek you can see the size of the next message on the specified channel.

uint8_t buff[MAX_MESSAGE_SIZE];
bool done = false;

while (!done) {
    PeerId remotePeer = 0;
    int ret = humblenet_p2p_recvfrom(buff, sizeof(buff), &remotePeer, CHANNEL);
    if (ret < 0) {
        if (remotePeer != 0) {
            // disconnected client
        } else {
            // error
            done = true;
        }
    } else if (ret > 0) {
        // we received data, process it
        process_message(remotePeer, buff, sizeof(buff), ret);
    } else {
        // 0 return value means no more data to read
        done = true;
    }
}

Shutting down the library

To disconnect from the peer server, other clients, and shut down the library, simply call humblenet_shutdown.

humblenet_shutdown();

Finding other peers

HumbleNet currently provides a simple “DNS”-like method of locating other peers. To use this you simply register a name with one client, and then create a virtual peer on the other clients. Take the client-server style approach of Quake 3, for example, and have your server register its name as “awesome42”.

humblenet_p2p_register_alias("awesome42");

Then, on your other peers, create a virtual peer for awesome42.

PeerID serverPeer = humblenet_p2p_virtual_peer_for_alias("awesome42");

Now the client can send data to serverPeer and HumbleNet will take care of translating the virtual peer to the actual peer once it resolves the name.

We have two systems on the roadmap that will improve the peer discovery system.  One is an event system that allows you to request a peer to be resolved, and then notifies you when it’s resolved. The second is a proper lobby system that allows you to create, search, and join lobbies as a more generic means of finding open games without needing to know any name up front.

Development Roadmap

We have a roadmap of what we plan on adding now that the project is released. Keep an eye on the HumbleNet site for the latest development.

Future work items include:

  1. Event API
    1. Allows a simple SDL2-style polling event system so that game code can easily check for various events from the peer server in a cleaner way, such as connects, disconnects, etc.
  2. Lobby API
    1. Uses the Event API to build a means of creating lobbies on the peer server in order to locate game sessions (instead of having to register aliases).
  3. WebSocket API
    1. Adds in support to easily connect to any websocket server with a clean simple API.
How can I contribute?

If you want to help out and contribute to the project, HumbleNet is being developed on GitHub: https://github.com/HumbleNet/humblenet/. Use the issue tracker and pull requests to contribute code. Be sure to read the CONTRIBUTING.md guide on how to create a pull request.

Categorieën: Mozilla-nl planet

Anthony Hughes: Some-Hands

Mozilla planet - do, 29/06/2017 - 05:06

[Opinions expressed herein are my own and have not been endorsed by Mozilla]

This week Mozillians from around the world are taking part in the biannual tradition of descending on a city for a week-long all-hands meeting. In this case the location is San Francisco. Unfortunately, for the first time in 9 years, I’m not participating in this tradition.

My work(fromhome)station for the Mozilla SF All-hands, June 2017

So what, what are you really missing?

On the surface it’s basically a 4-day meeting marathon but to me the meetings are the least valuable part of the week. To me these events are about relationships. All-hands is an opportunity to meet friends new and old, to share stories of survival and to laugh in the face of our failures. It is a time to talk candidly with leadership, to participate in real-time discussions with people I normally speak with over IRC or e-mail, and to talk with people I never get a chance to otherwise. Sharing stories in person is the best thing about these events.

This sounds awesome, why would you not go?

I want to begin by crediting Mozilla for standing up to the cultural regression that is occurring. They came out against Trump’s travel ban and have taken extra steps to ensure legal support during our travels and security at the All-hands. I commend them for taking these steps but it is still not enough to persuade me to risk crossing the border.

First and foremost I know some Mozillians have been barred from going simply because of their background. I chose not to go out in protest of Trump’s draconian policies and in solidarity with those of my peers who haven’t been given the same choice.

Secondly I am concerned that the attitudes in the White House have given license to bigotry. As someone who identifies as a member of the LGBTQ community I am worried about a regressive, draconian executive order being signed targeting this community. If this were to happen I would fully expect there to be protests in San Francisco. This would become a distraction from everything I’d come to accomplish, as I wouldn’t think twice about joining in these protests, possibly resulting in my deportation.

Finally I want to avoid US border guards. I know myself, I respect authority as long as they respect my dignity and treat me as a human being. I will not hesitate to fight back and make my voice heard if I feel mistreated. The likely outcome of that is detainment and/or being escorted out. Statistically the potential of this happening is low, but it’s not zero. I have chosen to avoid this situation altogether. I’m just not willing to put myself or Mozilla in this position.

My experience at the US border has never been a positive one. Whether I was traveling for business and had the proper clearance, or if I was just heading down to the US for vacation, US Border guards have made me feel progressively less welcome in their country (clearly I’m not alone). Under Trump this has reached a point where I can’t be bothered anymore. Life is short and there’s many more welcoming countries I’ve yet to explore.

What about the next all-hands?

I always look forward to these gatherings. It’s often a rollercoaster of emotions and I always leave more tired but more re-energized than when I arrived. I often rediscover my passion to fight for the open web. It pains me to be missing out but I know it’d pain me more if I’d gone knowing some of my peers were being deprived of this opportunity due to an untenable political situation. I hope this is a mere blip and that I can one day join my friends in the US but for now it is terre sans homme, for business and for pleasure.

 

Categorieën: Mozilla-nl planet

Gervase Markham: My Addons

Mozilla planet - do, 29/06/2017 - 01:59

Firefox Nightly (will be 56) already no longer supports addons which are not multiprocess-compatible. And Firefox 57 will not support “Legacy” addons – those which use XUL, XPCOM or the Addons SDK. I just started using Nightly instead of Aurora as my main browser, at Mark Mayo’s request :-), and this is what I found (after doing “Update Addons”):

  • Addons installed: 37
  • Non-multiprocess-compatible addons (may also be marked Legacy): 21 (57%)
  • Legacy addons: 5 (14%)
  • Addons which will work in 57, if nothing changes: 11 (29%)

Useful addons which no longer work as of now are: 1-Click YouTube Video Downloader, Advertising Cookie Opt-Out, AutoAuth, Expiry Canary (OK, I wrote that one, that’s my fault), Google Translator, Live HTTP Headers, Mass Password Reset, RESTClient, and User Agent Switcher.

Useful addons which will also no longer work in 57 (if nothing changes) include: Adblock Plus, HTTPS Everywhere, JSONView, and Send to Kodi.

I’m sure Adblock Plus is being updated, because it would be sheer madness if we went ahead and it were not. As for the rest – who knows? There doesn’t seem to be a way of finding out other than researching each one individually.

In the Firefox (I think it was) Town Hall, there was a question asked about addons and whether we felt that we were in a good place in terms of people not having a bad experience with their addons stopping working. The answer came back that we were. I fully admit I may not be a typical user, but it seems like this will not be my experience… :-(

Categorieën: Mozilla-nl planet

Daniel Stenberg: Denied entry

Mozilla planet - wo, 28/06/2017 - 23:47

 – Sorry, you’re not allowed entry to the US on your ESTA.

The lady who delivered this message to me early this Monday morning worked behind the check-in counter at the Arlanda airport. I was there, trying to check in for my two-leg trip to San Francisco and the Mozilla “all hands” meeting of the summer of 2017. My chance for a while ahead to meet up with colleagues from all around the world.

This short message prevented me from embarking on one journey, but instead took me on another.

Returning home

I was in a bit of a shock over this treatment, really. I mean, I wasn’t treated particularly badly or anything, but just the fact that they downright refused to take me on for unspecified reasons wasn’t easy to swallow. I sat down for a few moments trying to gather my thoughts on what to do next. I then sent a few tweets out expressing my deep disappointment at what happened, emailed my manager and some others at Mozilla to say that I couldn’t come to the meeting, and then finally walked out the door again and traveled back home.

This tweet sums up what I felt at the time:

Going back home. To cry. To curse. To write code from home instead. Fricking miserable morning. No #sfallhands for me.

— Daniel Stenberg (@bagder) 26 juni 2017

Then the flood

That Monday passed with some casual conversations with people of what I had experienced, and then…

Someone posted to hacker news about me. That post quickly rose to the top position and it began. My twitter feed suddenly got all crazy with people following me and retweeting my rejection tweets from yesterday. Several well-followed people retweeted me and that caused even more new followers and replies.

By the end of the Tuesday, I had about 2000 new followers and twitter notifications that literally were flying by at a high speed.

I was contacted by writers and reporters. The German Linux Magazine was first out to post about me, and then golem.de did the same. I talked to Kate Conger of Gizmodo, who wrote Mozilla Employee Denied Entry to the United States. The Register wrote about me. I was for a moment considered for a TV interview, but I think they realized that we had too few facts to actually know why I was denied, so maybe it wasn’t really that TV newsworthy.

These articles of course helped boosting my twitter traffic even more.

In the flood of responses, the vast majority were positive and supportive of me. Lots of people highlighted the role of curl and acknowledged that my role in that project has been beneficial for quite a number of internet related software in the world. A whole bunch of the responses offered to help me in various ways. The one most highlighted is probably this one from Microsoft’s Chief Legal Officer Brad Smith:

I’d be happy to have one of our U.S. immigration legal folks talk with you to see if there’s anything we can do to help. Let me know.

— Brad Smith (@BradSmi) 27 juni 2017

I also received a bunch of emails. Some of them from people who offered help – and I must say I’m deeply humbled and grateful by the amount of friends I apparently have and the reach this got.

Some of the emails also echoed the spirit of some of the twitter replies I got: quite a few Americans feel guilty, ashamed or otherwise apologize for what happened to me. However, I personally do not at all think of this setback as something that my American friends are behind. And I have many.

Mozilla legal

Tuesday evening I had a phone call with our (Mozilla’s) legal chief about my situation, and I helped to clarify exactly what I had done, what I’d been told and what had happened. There’s a team working now to help me sort out what happened and why, and what I and we can do about it so that I don’t get to experience this again the next time I want to travel to the US. People are involved on both the US and the Swedish sides of things.

Personally I don’t have any plans to travel to the US in the near future so there’s no immediate rush. I had already given up attending this Mozilla all-hands.

Repercussions

Mark Nottingham sent an email on the QUIC working group’s mailing list, and here follows two selected sections from it:

You may have seen reports that someone who participates in this work was recently refused entry to the US*, for unspecified reasons.

We won’t hold any further interim meetings in the US, until there’s a change in this situation. This means that we’ll either need to find suitable hosts in Canada or Mexico, or our meeting rotation will need to change to be exclusively Europe and Asia.

I trust I don’t actually need to point out that I am that “someone” and again I’m impressed and humbled by the support and actions in my community.

Now what?

I’m now (end of Wednesday, 60 hours since the check-in counter) at 3000 more twitter followers than what I started out with this Monday morning. This turned out to be a totally crazy week and it has severely impacted my productivity. I need to get back to writing code, I’m getting behind!

I hope we’ll get some answers soon as to why I was denied and what I can do to fix this for the future. When I get that, I will share all the info I can with you all.

So, back to work!

Thanks again

Before I forget: thank you all. Again. With all my heart. The amount of love I’ve received these last two days is amazing.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Building the Web of Things

Mozilla planet - wo, 28/06/2017 - 20:36

Mozilla is working to create a Web of Things framework of software and services that can bridge the communication gap between connected devices. By providing these devices with web URLs and a standardized data model and API, we are moving towards a more decentralized Internet of Things that is safe, open and interoperable.

The Internet and the World Wide Web are built on open standards which are decentralized by design, with anyone free to implement those standards and connect to the network without the need for a central point of control. This has resulted in the explosive growth of hundreds of millions of personal computers and billions of smartphones which can all talk to each other over a single global network.

As technology advances from personal computers and smartphones to a world where everything around us is connected to the Internet, new types of devices in our homes, cities, cars, clothes and even our bodies are going online every day.

The Internet of Things

The “Internet of Things” (IoT) is a term to describe how physical objects are being connected to the Internet so that they can be discovered, monitored, controlled or interacted with. Like any advancement in technology, these innovations bring with them enormous new opportunities, but also new risks.

At Mozilla our mission is “to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.”

This mission has never been more important than today, a time when everything around us is being designed to connect to the Internet. As new types of devices come online, they bring with them significant new challenges around security, privacy and interoperability.

Many of the new devices connecting to the Internet are insecure, do not receive software updates to fix vulnerabilities, and raise new privacy questions around the collection, storage, and use of large quantities of extremely personal data.

Additionally, most IoT devices today use proprietary vertical technology stacks which are built around a central point of control and which don’t always talk to each other. When they do talk to each other it requires per-vendor integrations to connect those systems together. There are efforts to create standards, but the landscape is extremely complex and there is still no single dominant model or market leader.

A chart of leading proprietary IoT stacks

The Web of Things

Using the Internet of Things today is a lot like sharing information on the Internet before the World Wide Web existed. There were competing hypertext systems and proprietary GUIs, but the Internet lacked a unifying application layer protocol for sharing and linking information.

The “Web of Things” (WoT) is an effort to take the lessons learned from the World Wide Web and apply them to IoT. It’s about creating a decentralized Internet of Things by giving Things URLs on the web to make them linkable and discoverable, and defining a standard data model and APIs to make them interoperable.

A table showing Web of Things standards

The Web of Things is not just another vertical IoT technology stack to compete with existing platforms. It is intended as a unifying horizontal application layer to bridge together multiple underlying IoT protocols.

Rather than start from scratch, the Web of Things is built on existing, proven web standards like REST, HTTP, JSON, WebSockets and TLS (Transport Layer Security). The Web of Things will also require new web standards. In particular, we think there is a need for a Web Thing Description format to describe things, a REST style Web Thing API to interact with them, and possibly a new generation of HTTP better optimised for IoT use cases and for use by resource-constrained devices.

The Web of Things is not just a Mozilla initiative; there is already a well-established Web of Things community and related standardization efforts at the IETF, W3C, OCF and OGC. Mozilla plans to be a participant in this community to help define new web standards and promote best practices around privacy, security and interoperability.

From this existing work three key integration patterns have emerged for connecting things to the web, defined by the point at which a Web of Things API is exposed to the Internet.

Diagram comparing Direct, Gateway, and Cloud Integration Patterns

Direct Integration Pattern

The simplest pattern is the direct integration pattern where a device exposes a Web of Things API directly to the Internet. This is useful for relatively high powered devices which can support TCP/IP and HTTP and can be directly connected to the Internet (e.g. a WiFi camera). This pattern can be tricky for devices on a home network which may need to use NAT or TCP tunneling in order to traverse a firewall. It also more directly exposes the device to security threats from the Internet.

Gateway Integration Pattern

The gateway integration pattern is useful for resource-constrained devices which can’t run an HTTP server themselves and so use a gateway to bridge them to the web. This pattern is particularly useful for devices which have limited power or which use PAN network technologies like Bluetooth or ZigBee that don’t directly connect to the Internet (e.g. a battery powered door sensor). A gateway can also be used to bridge all kinds of existing IoT devices to the web.

Cloud Integration Pattern

In the cloud integration pattern the Web of Things API is exposed by a cloud server which acts as a remote gateway, with the device using some other protocol to communicate with that server on the back end. This pattern is particularly useful for large numbers of devices spread over a wide geographic area which need to be centrally co-ordinated (e.g. air pollution sensors).

Project Things by Mozilla

In the Emerging Technologies team at Mozilla we’re working on an experimental framework of software and services to help developers connect “things” to the web in a safe, secure and interoperable way.

Things Framework diagram

Project Things will initially focus on developing three components:

  • Things Gateway — An open source implementation of a Web of Things gateway which helps bridge existing IoT devices to the web
  • Things Cloud — A collection of Mozilla-hosted cloud services to help manage a large number of IoT devices over a wide geographic area
  • Things Framework — Reusable software components to help create IoT devices which directly connect to the Web of Things

Things Gateway

Today we’re announcing the availability of a prototype of the first component of this system, the Things Gateway. We’ve made available a software image you can use to build your own Web of Things gateway using a Raspberry Pi.

Things Gateway diagram

So far this early prototype has the following features:

  • Easily discover the gateway on your local network
  • Choose a web address which connects your home to the Internet via a secure TLS tunnel requiring zero configuration on your home network
  • Create a username and password to authorize access to your gateway
  • Discover and connect commercially available ZigBee and Z-Wave smart plugs to the gateway
  • Turn those smart plugs on and off from a web app hosted on the gateway itself
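To give a flavour of the last two features, here is a hypothetical sketch of the request a web app might construct to toggle a smart plug through the gateway's REST-style API; the URL path, property name and token handling are assumptions for illustration, not the gateway's finalized API:

```javascript
// Hypothetical sketch: building the HTTP request a web app might send
// to the gateway to switch a smart plug on or off. Paths and the
// bearer-token scheme are illustrative assumptions.
function buildToggleRequest(gatewayUrl, thingId, on, token) {
  return {
    url: `${gatewayUrl}/things/${thingId}/properties/on`,
    options: {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
        // Access to the gateway is authorized (username/password exchange
        // for a token is assumed here)
        'Authorization': `Bearer ${token}`
      },
      body: JSON.stringify({ on })
    }
  };
}

// Usage with Node's global fetch (Node 18+):
// const { url, options } = buildToggleRequest('https://gateway.local', 'plug-1', true, token);
// await fetch(url, options);
```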

We’re releasing this prototype very early on in its development so that hackers and makers can get their hands on the source code to build their own Web of Things gateway and contribute to the project from an early stage.

This initial prototype is implemented in JavaScript with a NodeJS web server, but we are exploring an adapter add-on system to allow developers to build their own Web of Things adapters using other programming languages like Rust in the future.
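The adapter add-on idea can be pictured as follows. This is a hypothetical sketch only; the class shape and method names are assumptions rather than the gateway's actual add-on API:

```javascript
// Hypothetical sketch of a gateway adapter add-on: a class that
// translates generic Web of Things property changes into commands for
// a vendor-specific protocol (ZigBee, Z-Wave, etc.).
class ExampleAdapter {
  constructor(sendToDevice) {
    // Vendor-specific transport, injected so the adapter stays generic
    this.sendToDevice = sendToDevice;
    this.devices = new Map();
  }

  addDevice(id, properties) {
    this.devices.set(id, { ...properties });
  }

  // Translate a generic property update into a vendor command and
  // record the new state.
  setProperty(deviceId, name, value) {
    const device = this.devices.get(deviceId);
    if (!device || !(name in device)) {
      throw new Error(`Unknown device or property: ${deviceId}.${name}`);
    }
    device[name] = value;
    this.sendToDevice(deviceId, { [name]: value });
    return device[name];
  }
}
```

An adapter layer like this is what would let the same gateway web app control devices speaking entirely different underlying protocols.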

Web Thing API

Our goal in building this IoT framework is to lead by example in creating a Web of Things implementation which embodies Mozilla’s values and helps drive IoT standards around security, privacy and interoperability. The intention is not just to create a Mozilla IoT platform but an open source implementation of a Web of Things API which anyone is free to implement themselves using the programming language and operating system of their choice.

To this end, we have started working on a draft Web Thing API specification to eventually propose for standardization. This includes a simple but extensible Web Thing Description format with a default JSON encoding, and a REST + WebSockets Web Thing API. We hope this pragmatic approach will appeal to web developers and help turn them into WoT developers who can help realize our vision of a decentralized Internet of Things.
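As a rough illustration of the draft Web Thing Description format's default JSON encoding, a simple smart plug might be described along these lines (the specific field names and paths here are illustrative, not normative):

```json
{
  "name": "Kitchen Smart Plug",
  "type": "onOffSwitch",
  "description": "A web-connected smart plug",
  "properties": {
    "on": {
      "type": "boolean",
      "description": "Whether the plug is switched on",
      "href": "/things/kitchen-plug/properties/on"
    }
  }
}
```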

We encourage developers to experiment with using this draft API in real life use cases and provide feedback on how well it works so that we can improve it.

Web Thing API spec - Member Submission

Get Involved

There are many ways you can contribute to this effort, some of which are:

  • Build a Web Thing — build your own IoT device which uses the Web Thing API
  • Create an adapter — Create an adapter to bridge an existing IoT protocol or device to the web
  • Hack on Project Things — Help us develop Mozilla’s Web of Things implementation

You can find out more at iot.mozilla.org and all of our source code is on GitHub. You can find us in #iot on irc.mozilla.org or on our public mailing list.


Mozilla Security Blog: Analysis of the Alexa Top 1M sites

Mozilla planet - wo, 28/06/2017 - 18:47

Prior to the release of the Mozilla Observatory a year ago, I ran a scan of the Alexa Top 1M websites. Despite having been available for years, modern defensive security technologies had frustratingly low usage rates. A lack of tooling combined with poor and scattered documentation had led to there being little awareness around countermeasures such as Content Security Policy (CSP), HTTP Strict Transport Security (HSTS), and Subresource Integrity (SRI).

A few months after the Observatory’s release — and 1.5M Observatory scans later — I reassessed the Top 1M websites. The situation appeared as if it was beginning to improve, with the use of HSTS and CSP up by approximately 50%. But were those improvements simply low-hanging fruit, or has the situation continued to improve over the following months?

| Technology | April 2016 | October 2016 | June 2017 | % Change |
|---|---|---|---|---|
| Content Security Policy (CSP) | .005%¹ / .012%² | .008%¹ / .021%² | .018%¹ / .043%² | +125% |
| Cookies (Secure/HttpOnly)³ | 3.76% | 4.88% | 6.50% | +33% |
| Cross-origin Resource Sharing (CORS)⁴ | 93.78% | 96.21% | 96.55% | +.4% |
| HTTPS | 29.64% | 33.57% | 45.80% | +36% |
| HTTP → HTTPS Redirection | 5.06%⁵ / 8.91%⁶ | 7.94%⁵ / 13.29%⁶ | 14.38%⁵ / 22.88%⁶ | +57% |
| Public Key Pinning (HPKP) | 0.43% | 0.50% | 0.71% | +42% |
| — HPKP Preloaded⁷ | 0.41% | 0.47% | 0.43% | -9% |
| Strict Transport Security (HSTS)⁸ | 1.75% | 2.59% | 4.37% | +69% |
| — HSTS Preloaded⁷ | .158% | .231% | .337% | +46% |
| Subresource Integrity (SRI) | 0.015%⁹ | 0.052%¹⁰ | 0.113%¹⁰ | +117% |
| X-Content-Type-Options (XCTO) | 6.19% | 7.22% | 9.41% | +30% |
| X-Frame-Options (XFO)¹¹ | 6.83% | 8.78% | 10.98% | +25% |
| X-XSS-Protection (XXSSP)¹² | 5.03% | 6.33% | 8.12% | +28% |

The pace of improvement across the web appears to be continuing at an astounding rate. Although a 36% increase in the number of sites that support HTTPS might seem small, the absolute numbers are quite large — it represents over 119,000 websites.

Not only that, but 93,000 of those websites have chosen to be HTTPS by default, with 18,000 of them forbidding any HTTP access at all through the use of HTTP Strict Transport Security.

The sharp jump in the rate of Content Security Policy (CSP) usage is similarly surprising. CSP can be difficult to implement for a new website, and retrofitting it to an existing site (which most of the Alexa Top 1M sites are) often requires extensive rearchitecting. Between steadily improving documentation, advances in CSP3 such as ‘strict-dynamic’, and CSP policy generators such as the Mozilla Laboratory, it appears that we might be turning a corner on CSP usage around the web.
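Every countermeasure measured above is ultimately just an HTTP response header. A minimal sketch of setting them from a Node.js handler follows; the specific policy values are illustrative examples, not recommendations for any particular site:

```javascript
// Illustrative sketch: the kind of response headers the survey measures.
// Policy values are examples only and would need tailoring per site.
function setSecurityHeaders(res) {
  const headers = {
    // A CSP with no 'unsafe-inline' in script-src or style-src
    'Content-Security-Policy': "default-src 'self'",
    // Require HTTPS for six months, the survey's threshold for HSTS
    'Strict-Transport-Security': 'max-age=15768000',
    'X-Content-Type-Options': 'nosniff',
    'X-Frame-Options': 'DENY',
    'X-XSS-Protection': '1; mode=block'
  };
  for (const [name, value] of Object.entries(headers)) {
    res.setHeader(name, value);
  }
  return headers;
}
```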

Observatory Grading

Despite this progress, the vast majority of large websites around the web continue to not use Content Security Policy and Subresource Integrity. As these technologies — when properly used — can nearly eliminate huge classes of attacks against sites and their users, they are given a significant amount of weight in Observatory scans.

As a result of their low usage rates amongst established websites, they typically receive failing grades from the Observatory. Nevertheless, I continue to see improvements across the board:

| Grade | April 2016 | October 2016 | June 2017 | % Change |
|---|---|---|---|---|
| A+ | .003% | .008% | .013% | +62% |
| A | .006% | .012% | .029% | +142% |
| B | .202% | .347% | .622% | +79% |
| C | .321% | .727% | 1.38% | +90% |
| D | 1.87% | 2.82% | 4.51% | +60% |
| F | 97.60% | 96.09% | 93.45% | -2.8% |

As 969,924 scans were successfully completed in the last survey, a decrease in failing grades by 2.8% implies that over 27,000 of the largest sites in the world have improved from a failing grade in the last eight months alone.

In fact, my research indicates that over 50,000 websites around the web have directly used the Mozilla Observatory to improve their grades, indicated by scanning their website, making an improvement, and then scanning their website again. Of these 50,000 websites, over 2,500 have improved all the way from a failing grade to an A or A+ grade.

When I first built the Observatory a year ago at Mozilla, I had never imagined that it would see such widespread use. 3.8M scans across 1.55M unique domains later, it seems to have made a significant difference across the internet. I feel incredibly lucky to work at a company like Mozilla that has provided me with a unique opportunity to work on a tool designed solely to make the internet a better place.

Please share the Mozilla Observatory and the Web Security Guidelines so that the web can continue to see improvements over the years to come!
Footnotes:

  1. Allows 'unsafe-inline' in neither script-src nor style-src
  2. Allows 'unsafe-inline' in style-src only
  3. Amongst sites that set cookies
  4. Disallows foreign origins from reading the domain’s contents within user’s context
  5. Redirects from HTTP to HTTPS on the same domain, which allows HSTS to be set
  6. Redirects from HTTP to HTTPS, regardless of the final domain
  7. As listed in the Chromium preload list
  8. max-age set to at least six months
  9. Percentage is of sites that load scripts from a foreign origin
  10. Percentage is of sites that load scripts
  11. CSP frame-ancestors directive is allowed in lieu of an XFO header
  12. Strong CSP policy forbidding 'unsafe-inline' is allowed in lieu of an XXSSP header

The post Analysis of the Alexa Top 1M sites appeared first on Mozilla Security Blog.
