Mozilla Nederland - the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 1 month, 1 week ago

Mozilla Future Releases Blog: DNS-over-HTTPS (DoH) Update – Recent Testing Results and Next Steps

Wed, 03/04/2019 - 00:21

We are giving several updates on our testing with DNS-over-HTTPS (DoH), a new protocol that uses encryption to protect DNS requests and responses. This post shares the latest results, what we’ve learned, and how we’re fine-tuning our next step in testing.

tl;dr: The results of our last performance test showed improvement or minimal impact when DoH is enabled. Our next experiment continues to test performance with Akamai and Cloudflare, and adds a performance test that takes advantage of a secure protocol for DNS resolvers set up between Cloudflare and Facebook.
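For readers who want to try DoH by hand, outside of these studies: Firefox's Trusted Recursive Resolver can be switched on through two preferences. A minimal user.js sketch (these are the real pref names; mode 2 means "try DoH first, fall back to native DNS"):

    // Enable DoH, using Cloudflare's resolver endpoint
    user_pref("network.trr.mode", 2);   // 2 = DoH first, native DNS as fallback
    user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");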

What we learned

Back in November 2018, we rolled out a test of DoH in the United States to look at possible impacts to Content Delivery Networks (CDNs). Our goal was to closely examine performance again, specifically the case when users get less localized DNS responses that could slow the browsing experience, even if the DNS resolver itself is accurate and fast. We worked with Akamai to help us understand more about the possible impact.

The results were strong! Like our previous studies, DoH had minimal impact or clearly improved the total time it takes to get a response from the resolver and fetch a web page.

Here’s a sample result for our “time to first byte” measurement:

In this case, the absolute difference across the experiment was a regression of less than 10 ms at the 50th percentile and below. At the 75th percentile and above, we saw no regression, or even a performance win, particularly on the long tail of slower connections. We saw very similarly shaped results for the rest of our measurements.

While exploring the data in the last experiment, we discovered a few things we’d like to know more about and will run three additional tests described below.

Additional testing on performance and privacy

First, we saw a higher error rate during DNS queries than expected, with and without DoH. We’d like to know more about those errors — are they something that the browser could handle better? This is something we’re still researching, and this next experiment adds better error capturing for analysis.

And while the performance was good with DoH, we’re curious if we could improve it even more. CDNs and other distributed websites provide localized DNS responses depending on where you are in the network. The goal is to send you to a host near you on the network and therefore give you the best performance, but if your DNS goes through Cloudflare, this might not happen. The EDNS Client Subnet extension allows a centralized resolver like Cloudflare to give the authoritative resolver for the site you are going to some information about your location, which they can use to tune their responses. Ordinarily this isn’t safe for two reasons: first, the query to the authoritative resolver isn’t encrypted and so it leaks your query; and second, the authoritative resolver might be operated by a third party who would then learn about your browsing activity. This was something we needed to test further.

Since our last experiment, Facebook has partnered with Cloudflare to offer DNS-over-TLS (DoT) as a pilot project, which plugs the privacy leak caused by EDNS Client Subnet (ECS) use. There are two key reasons this work improves your privacy when connecting to Facebook.  First, Facebook operates their own authoritative resolvers, so third parties don’t get your query. Second, DoT encrypts your query so snoopers on the network can’t see it.

With DoT in place, we want to see if Cloudflare sending ECS to Facebook’s resolver results in our users getting better response times. To figure this out, we added tests for resolver response and web page fetch for test URLs from Facebook systems.

The experiment will check locally if the browser has a Facebook login cookie. If it does, it will attempt to collect data once per day about the speed of queries to test URLs. This means that if you have never logged into Facebook, the experiment will not fetch anything from Facebook systems.

As with our last test, we aren’t passing any cookies. These domains aren’t ones that the user would automatically retrieve, and they contain only dummy content, so we aren’t disclosing anything to the resolver or to Facebook about users’ browsing behavior.

We’re also re-running the Akamai tests to get to the bottom of the error messages issue.

Here is a sample of the data that we are collecting as part of this experiment:
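As a purely illustrative sketch (the field names here are hypothetical, not the actual schema), such a payload is a small JSON object along these lines:

    {
      "dnsLookupTimeMs": 42,
      "timeToFirstByteMs": 203,
      "errorCode": null
    }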

The payload only contains timing and error information. For more information about how we are collecting telemetry for this experiment, read here. As always, you can see what data Firefox sends back to Mozilla in about:telemetry.

We believe these DoH and DoT deployments are both exciting steps toward providing greater security and privacy to all. We’re happy to support the work Cloudflare is doing that might encourage others to pursue DoH and DoT deployments.

How will the next test roll out?

Starting the week of April 1, a small portion of our United States-based users in the Release channel will receive the DoH treatment. As before, this study will use Cloudflare’s DNS-over-HTTPS service and will continue to provide in-browser notifications about the experiment so that participants are fully informed and have the opportunity to decline.

We are working to build a larger ecosystem of trusted DoH providers, and we hope to be able to experiment with other providers soon. As before, we will continue to share the results of the DoH tests and provide updates once future plans solidify.



Daniel Stenberg: The HTTP Workshop 2019 begins

Tue, 02/04/2019 - 23:54

The fourth season of my favorite HTTP series is back! The HTTP Workshop skipped last year but is back now with a three-day event organized by the very best: Mark, Martin, Julian and Roy. This time we’re in Amsterdam, the Netherlands.

35 people from all over the world walked into the room and sat down around the O-shaped table setup. Lots of known faces and representatives from a large variety of HTTP implementations, client-side and server-side – but happily enough also a few new friends attending their first HTTP Workshop. The companies with the most employees present include Apple, Facebook, Mozilla, Fastly, Cloudflare and Google, each with three or four people in the room.

Patrick McManus started off the morning with his presentation on HTTP conventional wisdom, trying to identify what has and hasn’t turned out to be a success in HTTP land in recent times. It triggered a few discussions on the specific points and how to judge them. I believe the general consensus ended up mostly agreeing with the slides. The topic of unshipping HTTP/0.9 support came up, but that is said not to be possible due to existing use. As a bonus, Anne van Kesteren posted a new bug on Firefox to remove it.

Mark Nottingham continued and did a brief presentation about the recent discussions in HTTPbis sessions during the IETF meetings in Prague last week.

Martin Thomson did a presentation about HTTP authority: basically, how a client decides where and whom to ask for a resource identified by a URI. This triggered an intense discussion that involved a lot of UI and UX but also trust, certificates and subjectAltNames, DNS and various secure DNS efforts, connection coalescing, DNSSEC, DANE, ORIGIN frame, alternative certificates and more.

Mike West explained to the room the concept of Signed Exchanges, which Chrome now supports: a way for server A to host content for server B while still letting the client verify that it is legitimate.

Tommy Pauly then talked to his slides, titled Website Fingerprinting. He covered different areas of a browser’s activity that are currently possible to monitor and use for fingerprinting, and what countermeasures exist to work against furthering that development. By looking at the full activity, including TCP flows and IP addresses, even many of our encrypted connections still allow for pretty accurate and extensive “Page Load Fingerprinting”. We need to be aware of this, and the discussion went on about what can or should be done to help out.

[Photo: The meeting is going on somewhere behind that red door.]

Lucas Pardue discussed and showed how we can use Wireshark (since the release of version 3) to intercept the TLS traffic of Firefox, Chrome or curl, and in the end make sure that the resulting PCAP file gets the necessary keys bundled in the same file. This is really convenient when you want to send that PCAP over to your protocol-debugging friends.

Roberto Peon presented his new idea for “Generic overlay networks”, a suggested way for clients to get resources from one out of several alternatives. A neighboring idea to Signed Exchanges, but still different. There was interest in furthering and deepening this discussion, and Roberto ended up saying he’d write up a draft for it.

Max Hils talked about intercepting QUIC and how the ability to do this kind of thing is very useful in many situations: during development, for debugging, and for checking what potentially bad stuff applications are actually doing on your own devices. Intercepting QUIC and HTTP/3 can thus also be valuable, but at least for now presents some challenges. (Max also happened to mention that the project he works on, mitmproxy, has more stars on github than curl, but I’ll just let it slide…)

Poul-Henning Kamp showed us vtest – a tool and framework for testing HTTP implementations that both Varnish and HAproxy are now using. Massaged the right way, this could develop into a generic HTTP test/conformance tool that could be valuable for and appreciated by even more users going forward.

Asbjørn Ulsberg showed us several current frameworks that are doing GET, POST or SEARCH with request bodies and discussed how this works with caching, proposing that SEARCH should be defined as cacheable. The room mostly acknowledged the problem, which has been discussed before, and agreed that the time is probably ripe to finally do something about it. Lots of users are already doing similar things, and caching of POST contents is in use, just not defined generically. SEARCH is an already-registered method that could get polished to work for this. It was also suggested that POST could possibly be modified to allow for caching in an opt-in way, and Mark volunteered to author a first draft elaborating how it could work.

Indonesian and Tibetan food for dinner rounded off a fully packed day.

Thanks Cory Benfield for sharing your notes from the day, helping me get the details straight!

Diversity

We’re a very homogeneous group of humans. Most of us are old white men, basically all clones and practically indistinguishable from each other. This is not diverse enough!

[Image: A big thank you to the HTTP Workshop 2019 sponsors!]



Firefox UX: An exception to our ‘No Guerrilla Research’ practice: A tale of user research at MozFest

Tue, 02/04/2019 - 22:31

Authors: Jennifer Davidson, Emanuela Damiani, Meridel Walkington, and Philip Walmsley

Sometimes, when you’re doing user research, things just don’t quite go as planned. MozFest was one of those times for us.

MozFest, the Mozilla Festival, is a vibrant conference and week-long “celebration for, by, and about people who love the internet.” Held at Ravensbourne University in London, the festival features nine floors of simultaneous sessions. The Add-ons UX team had the opportunity to host a workshop at MozFest about co-designing a new submission flow for browser extensions and themes. The workshop was a version of the Add-ons community workshop we held the previous day.

On the morning of our workshop, we showed up bright-eyed, bushy-tailed, and fully caffeinated. Materials in place, slides loaded…we were ready. And then, no one showed up.

Perhaps because 1) there was too much awesome stuff going on at the same time as our workshop, 2) we were in a back corner, and 3) we didn’t proactively advertise our talk enough.

After processing our initial heartache and disappointment, Emanuela, a designer on the team, suggested we try something we don’t do often at Mozilla, if at all: guerrilla research. Guerrilla user research usually means getting research participants from “the street.” For example, a researcher could stand in front of a grocery store with a tablet computer and ask people to use a new app. This type of research method is different than “normal” user research methods (e.g. field research in a person’s home, interviewing someone remotely over video call, conducting a usability study in a conference room at an office) because there is much less control in screening participants, and all the research is conducted in the public eye [1].

Guerrilla research is a polarizing topic in the broader user research field, and with good reason: while there are many reasons one might employ this method, it can be and is used as a means of getting research done on the cheap. The same thing that makes guerrilla research cheaper — convenient, easy data collection in one location — can also mean compromised research quality. Because we don’t have the same level of care and control with this method, the results of the work can be less effective compared to a study conducted with careful recruitment and facilitation.

For these reasons, we intentionally don’t use guerrilla research as a core part of the Firefox UX practice. However, on that brisk London day, with no participants in sight and our dreams of a facilitated session dashed, guerrilla research presented itself as a last-resort option, as a way we could salvage some of our time investment AND still get some valuable feedback from people on the ground. We were already there, we had the materials, and we had a potential pool of participants — people who, given their conference attendance, were more informed about our particular topic than those you would find in a random coffee shop. It was time to make the best of a tough situation and learn what we could.

And that’s just what we did.

Thankfully, we had four workshop facilitators, which meant we could evolve our roles to the new reality. Emanuela and Jennifer were traveling workshop facilitators. Phil, another designer on the team, took photos. Meridel, our content strategist, stationed herself at our original workshop location in case anyone showed up there, taking advantage of that time to work through content-related submission issues with our engineers.

[Photo: Jennifer and Emanuela taking the workshop “on the road”]

We boldly went into the halls at MozFest and introduced ourselves and our project. We had printouts and pens to capture ideas and get some sketching done with our participants.

[Photo: Jennifer with four participants]

In the end, seven people participated in our on-the-road co-design workshop. Because MozFest tends to attract a diverse group of attendees from journalists to developers to academics, some of these participants had experience creating and submitting browser extensions and themes. For each participant we:

  1. Approached them and introduced ourselves and the research we were trying to do. Asked if they wanted to participate. If they did, we proceeded with the following steps.
  2. Asked them if they had experience with extensions and themes and discussed that experience.
  3. Introduced the sketching exercise, where we asked them to come up with ideas about an ideal process to submit an extension or theme.
  4. Watched as the participant sketched ideas for the ideal submission flow.
  5. Asked the participant to explain their sketches, and took notes during their explanation.
  6. Asked participants if we could use their sketches and our discussion in our research. If they agreed, we asked them to read and sign a consent form.

The participants’ ideas, perspectives, and workflows fed into our subsequent team analysis and design time. Combined with materials and learnings from a prior co-design workshop on the same topic, the Add-ons UX team developed creator personas. We used those personas’ needs and workflows to create a submission flow “blueprint” (like a flowchart) that would match each persona.

[Image: A snippet of our submission flow “blueprint”]

In the end, the lack of workshop attendance was a good opportunity to learn from MozFest attendees we may not have otherwise heard from, and grow closer as a team as we had to think on our feet and adapt quickly.

Guerrilla research needs to be handled with care, with the realities of its limitations taken into account. Happily, we were able to get real value out of our particular application, with the note that the insights we gleaned from it were just one input into our final deliverables and not the sole source of research. We encourage other researchers and designers doing user research to consider adapting your methods when things don’t go as planned. It could recoup some of the resources and time you have invested, and potentially give you some valuable insights for your product design process.

Questions for us? Want to share your experiences with being adaptable in the field? Leave us a comment!

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! In alphabetical order, thanks to Stuart Colville, Mike Conca, Kev Needham, Caitlin Neiman, Gemma Petrie, Mara Schwarzlose, Amy Tsay, and Jorge Villalobos.

Reference

[1] “The Pros and Cons of Guerrilla Research for Your UX Project.” The Interaction Design Foundation, 2016, www.interaction-design.org/literature/article/the-pros-and-cons-of-guerrilla-research-for-your-ux-project.

Also published on Firefox UX Blog.



François Marier: Recovering from a botched hg histedit on a mercurial bookmark

Tue, 02/04/2019 - 17:16

If you are in the middle of a failed Mercurial hg histedit, you can normally do hg histedit --abort to cancel it, though sometimes you also have to reach out for hg update -C. This is the equivalent of git's git rebase --abort and it does what you'd expect.

However, if you go ahead and finish the history rewriting and only notice problems later, it’s not as straightforward. With git, I’d look into the reflog (git reflog) for the previous value of the branch pointer and simply git reset --hard to that value. Done.

Based on a Stack Overflow answer, I thought I could undo my botched histedit using:

hg unbundle ~/devel/mozilla-unified/.hg/strip-backup/47906774d58d-ae1953e1-backup.hg

but it didn't seem to work. Maybe it doesn't work when using bookmarks.

Here's what I ended up doing to fully revert my botched Mercurial histedit. If you know of a simpler way to do this, feel free to leave a comment.

Collecting the commits to restore

The first step was to collect all of the commit hashes I needed to restore. Luckily, I had submitted my patch to Try before changing it, and so I was able to look at the pushlog to get all of the commits at once.

If I didn't have that, I could also go to the last bookmark I pushed and click on parent commits until I hit the first one that's not mine. Then I could collect all of the commits using the browser's back button:

For that last one, I had to click on the changeset commit hash link in order to get the commit hash instead of the name of the bookmark (/rev/hashstore-crash-1434206).

Recreating the branch from scratch

This is what I did to export patches for each commit and then re-import them one after the other:

    for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe ; do
        hg export $c > ~/$c.patch
    done

    hg up ff8505d177b9
    hg bookmarks hashstore-crash-1434206-new

    for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe 4140cd9c67b0 ; do
        hg import ~/$c.patch
    done

Copying a bookmark

As an aside, if you want to make a copy of a bookmark before you do an hg histedit, it's not as simple as:

    hg up hashstore-crash-1434206
    hg bookmarks hashstore-crash-1434206-copy
    hg up hashstore-crash-1434206

While that seemed to work at the time, the histedit ended up messing with both of them.

An alternative that works is to push the bookmark to another machine. That way, if worst comes to worst, you can hg clone from there and hg export the commits you want to re-import using hg import.


François Marier: Mercurial commit series in Phabricator using Arcanist

Tue, 02/04/2019 - 17:16

Phabricator supports multi-commit patch series, but it’s not yet obvious how to do it using Mercurial. So this is the “hg” equivalent of this blog post for git users.

Note that other people have written tools and plugins to do the same thing and that an official client is coming soon.

Initial setup

I'm going to assume that you've setup arcanist and gotten an account on the Mozilla Phabricator instance. If you haven't, follow this video introduction or the excellent documentation for it (Bryce also wrote additionnal instructions for Windows users).

Make a list of commits to submit

First of all, use hg histedit to make a list of the commits that are needed:

    pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
    pick 5509b5db01a4 477987 Bug 1461515 - Fix and expand tracking annotation tes...
    pick e40312debf76 477988 Bug 1461515 - Make TP test fail if it uses the wrong...

Create Phabricator revisions

Now, create a Phabricator revision for each commit (in order, from earliest to latest):

    ~/devel/mozilla-unified (annotation-list-1461515)$ hg up ee4d9e9fcbad
    5 files updated, 0 files merged, 0 files removed, 0 files unresolved
    (leaving bookmark annotation-list-1461515)

    ~/devel/mozilla-unified (ee4d9e9)$ arc diff --no-amend
    Linting...
    No lint engine configured for this project.
    Running unit tests...
    No unit test engine is configured for this project.
    SKIP STAGING Phabricator does not support staging areas for this repository.
    Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2484
    Included changes:
        M modules/libpref/init/all.js
        M netwerk/base/nsChannelClassifier.cpp
        M netwerk/base/nsChannelClassifier.h
        M toolkit/components/url-classifier/Classifier.cpp
        M toolkit/components/url-classifier/SafeBrowsing.jsm
        M toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
        M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
        M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
        M xpcom/base/ErrorList.py

    ~/devel/mozilla-unified (ee4d9e9)$ hg up 5509b5db01a4
    3 files updated, 0 files merged, 0 files removed, 0 files unresolved

    ~/devel/mozilla-unified (5509b5d)$ arc diff --no-amend
    Linting...
    No lint engine configured for this project.
    Running unit tests...
    No unit test engine is configured for this project.
    SKIP STAGING Phabricator does not support staging areas for this repository.
    Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485
    Included changes:
        M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
        M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
        M toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

    ~/devel/mozilla-unified (5509b5d)$ hg up e40312debf76
    2 files updated, 0 files merged, 0 files removed, 0 files unresolved

    ~/devel/mozilla-unified (e40312d)$ arc diff --no-amend
    Linting...
    No lint engine configured for this project.
    Running unit tests...
    No unit test engine is configured for this project.
    SKIP STAGING Phabricator does not support staging areas for this repository.
    Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2486
    Included changes:
        M toolkit/components/url-classifier/tests/mochitest/classifiedAnnotatedPBFrame.html
        M toolkit/components/url-classifier/tests/mochitest/test_privatebrowsing_trackingprotection.html

Link all revisions together

In order to ensure that these commits depend on one another, click on that last phabricator.services.mozilla.com link, then click "Related Revisions" then "Edit Parent Revisions" in the right-hand side bar and then add the previous commit (D2485 in this example).

Then go to that parent revision and repeat the same steps to set D2484 as its parent.

Amend one of the commits

As it turns out my first patch wasn't perfect and I needed to amend the middle commit to fix some test failures that came up after pushing to Try. I ended up with the following commits (as viewed in hg histedit):

    pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
    pick c24f4d9e75b9 477992 Bug 1461515 - Fix and expand tracking annotation tes...
    pick 1840f68978a7 477993 Bug 1461515 - Make TP test fail if it uses the wrong...

which highlights that the last two commits changed and that I would have two revisions (D2485 and D2486) to update in Phabricator.

However, since the only reason why the third patch has a different commit hash is that its parent changed, there’s no need to upload it again to Phabricator. Lando doesn’t care about the parent hash and relies instead on the parent revision ID. It essentially applies diffs one at a time.

The trick was to pass the --update DXXXX argument to arc diff:

    ~/devel/mozilla-unified (annotation-list-1461515)$ hg up c24f4d9e75b9
    2 files updated, 0 files merged, 0 files removed, 0 files unresolved
    (leaving bookmark annotation-list-1461515)

    ~/devel/mozilla-unified (c24f4d9)$ arc diff --no-amend --update D2485
    Linting...
    No lint engine configured for this project.
    Running unit tests...
    No unit test engine is configured for this project.
    SKIP STAGING Phabricator does not support staging areas for this repository.
    Updated an existing Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485
    Included changes:
        M browser/base/content/test/general/trackingPage.html
        M netwerk/test/unit/test_trackingProtection_annotateChannels.js
        M toolkit/components/antitracking/test/browser/browser_imageCache.js
        M toolkit/components/antitracking/test/browser/browser_subResources.js
        M toolkit/components/antitracking/test/browser/head.js
        M toolkit/components/antitracking/test/browser/popup.html
        M toolkit/components/antitracking/test/browser/tracker.js
        M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
        M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
        M toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

Note that changing the commit message will not automatically update the revision details in Phabricator. This has to be done manually in the Web UI if required.


François Marier: Checking Your Passwords Against the Have I Been Pwned List

Tue, 02/04/2019 - 17:16

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to check whether or not any of them are compromised and should be changed immediately. This meant that I needed to download the list and do these lookups locally since it's not a good idea to send your current passwords to this third-party service.
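The lookup itself is just a local hash comparison. As a minimal sketch (assuming Node.js; the published list stores uppercase hex SHA-1 digests of the plaintext passwords), the hashing step looks like this:

    // Hash a candidate password the way the HIBP list stores it
    const crypto = require("crypto");

    function hibpDigest(password) {
      return crypto
        .createHash("sha1")
        .update(password, "utf8")
        .digest("hex")
        .toUpperCase();
    }

    // Compare this digest against the downloaded list
    console.log(hibpDigest("hunter2"));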

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to setup your database.


Hacks.Mozilla.Org: Crossing the Rust FFI frontier with Protocol Buffers

Tue, 02/04/2019 - 16:42

My team, the application services team at Mozilla, works on Firefox Sync, Firefox Accounts and WebPush.

These features are currently shipped on Firefox Desktop, Android and iOS browsers. They will soon be available in our new products such as our upcoming Android browser, our password manager Lockbox, and Firefox for Fire TV.

Solving for one too many targets

Until now, for historical reasons, these functionalities have been implemented in widely different fashions. They’ve been tightly coupled to the product they are supporting, which makes them hard to reuse. For example, there are currently three Firefox Sync clients: one in JavaScript, one in Java and another one in Swift.

Considering the size of our team, we quickly realized that our current approach to shipping products would not scale across more products and would lead to quality issues such as bugs or uneven feature-completeness across platform-specific implementations. About a year ago, we decided to plan for the future. As you may know, Mozilla is making a pretty big bet on the new Rust programming language, so it was natural for us to follow suit.

A cross-platform strategy using Rust

Our new strategy is as follows: we will build cross-platform components, implementing our core business logic using Rust and wrapping it in a thin platform-native layer, such as Kotlin for Android and Swift for iOS.

[Diagram: Rust component bundled in an Android wrapper]

The biggest and most obvious advantage is that we get one canonical code base, written in a safe and statically typed language, deployable on every platform. Every upstream business logic change becomes available to all our products with a version bump.

How we solved the FFI challenge safely

However, one of the challenges to this new approach is safely passing richly-structured data across the API boundary in a way that’s memory-safe and works well with Rust’s ownership system. We wrote the ffi-support crate to help with this, if you write code that does FFI (Foreign function interface) tasks you should strongly consider using it. Our initial versions were implemented by serializing our data structures as JSON strings and in a few cases returning a C-shaped struct by pointer. The procedure for returning, say, a bookmark from the user’s synced data looked like this:

[Diagram: Returning Bookmark data from Rust to Kotlin using JSON (simplified)]

So what’s the problem with this approach?

  • Performance: JSON serializing and de-serializing is notoriously slow, because performance was not a primary design goal for the format. On top of that, an extra string copy happens on the Java layer since Rust strings are UTF-8 and Java strings are UTF-16-ish. At scale, it can introduce significant overhead.
  • Complexity and safety: every data structure is manually parsed and deserialized from JSON strings. A data structure field modification on the Rust side must be reflected on the Kotlin side, or an exception will most likely occur.
  • Even worse, in some cases we were not returning JSON strings but C-shaped Rust structs by pointer: forget to update the Structure Kotlin subclass or the Objective-C struct and you have a serious memory corruption on your hands.

We quickly realized there was probably a better and faster way that could be safer than our current solution.

Data serialization with Protocol Buffers v.2

Thankfully, there are many data serialization formats out there that aim to be fast. The ones with a schema language will even auto-generate data structures for you!

After some exploration, we ended up settling on Protocol Buffers version 2.

The (relative) safety comes from the automated generation of data structures in the languages we care about. There is only one source of truth—the .proto schema file—from which all our data classes are generated at build time, both on the Rust and consumer side.
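As a hypothetical sketch of what such a schema file can look like (the message and field names here are illustrative, not the actual definitions):

    // bookmark.proto - illustrative sketch only
    syntax = "proto2";

    message Bookmark {
      required string guid  = 1;
      required string url   = 2;
      optional string title = 3;
    }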

protoc (the Protocol Buffers code generator) can emit code in more than 20 languages! On the Rust side, we use the prost crate which outputs very clean-looking structs by leveraging Rust derive macros.

[Diagram: Returning Bookmark data from Rust to Kotlin using Protocol Buffers 2 (simplified)]

And of course, on top of that, Protocol Buffers are faster than JSON.

There are a few downsides to this approach: it is more work to convert our internal types to the generated protobuf structs (e.g. a url::Url has to be converted to a String first), whereas back when we were using serde-json serialization, any struct implementing serde::Serialize was a line away from being sent over the FFI barrier. It also adds one more step to our build process, although it was pretty easy to integrate.

One thing I ought to mention: Since we ship both the producer and consumer of these binary streams as a unit, we have the freedom to change our data exchange format transparently without affecting our Android and iOS consumers at all.

A look ahead

Looking forward, there’s probably a high-level system that could be used to exchange data over the FFI, maybe based on Rust macros. There’s also been talk about using FlatBuffers to squeeze out even more performance. In our case Protobufs provided the right trade-off between ease-of-use, performance, and relative safety.

So far, our components are present on iOS, in Firefox and Lockbox, and on Lockbox Android, and they will soon be in our upcoming Android browser.

Firefox iOS has started to oxidize by replacing its password syncing engine with the one we built in Rust. The plan is to eventually do the same on Firefox Desktop as well.

If you are interested in helping us build the future of Firefox Sync and more, or simply following our progress, head to the application services Github repository.



The Firefox Frontier: Stop videos from automatically playing with new autoplay controls from Firefox

Tue, 02/04/2019 - 15:30

The web is 30 years old. Over its lifetime we’ve had developments that have brought us to peaks of delight and others to the pits of frustration. The blink tag, …



Mike Hoye: Occasionally Useful

Tue, 02/04/2019 - 15:30

A bit of self-promotion: the UsesThis site asked me their four questions a little while ago; it went up today.

A colleague once described me as “occasionally useful, in the same way that an occasional table is a table.” Which I thought was oddly nice of them.


Mozilla Localization (L10N): Changing the Language of Firefox Directly From the Browser

Tue, 02/04/2019 - 08:33
Language Settings

In Firefox there are two main user facing settings related to languages:

  • Web content: when you visit a web page, the browser will communicate to the server which languages you’d like to see content in. Technically, this is done by sending an Accept-Language HTTP header, which contains a list of locale codes in the user’s preferred order.
  • User interface: the language in which you want to see the browser (menus, preferences, etc.).

The difference between the two is not as intuitive as it might seem. A lot of users change the web content settings, and expect the user interface to change.
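As a concrete illustration of the web content side, a page can read the ordered preference list that the browser advertises (a short sketch using standard DOM APIs):

    // The ordered language preferences the browser exposes to web content,
    // matching the order sent in the Accept-Language header
    console.log(navigator.languages); // e.g. ["nl", "en-US", "en"]

    // The most preferred language as a single tag
    console.log(navigator.language);  // e.g. "nl"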

The Legacy Chaos

While the preferences for web content have been exposed in Firefox since its first versions, changing the language used for the user interface has always been challenging. Here’s the painful journey users had to face until a few months ago:

  • First of all, you need to be aware that there are other languages available. There are no preferences exposing this information, and documentation is spread across different websites.
  • You need to find and install a language pack – a special type of add-on – from addons.mozilla.org.
  • If the language used in the operating system is different from the one you’re trying to install in Firefox, you need to create a new preference in about:config, and set it to the correct locale code. Before Firefox 59 and intl.locale.requested, you would need to manually set the general.useragent.locale pref in any case, not just when there’s a discrepancy. (A sketch of these prefs follows after this list.)
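For reference, a minimal user.js sketch of that legacy step (the pref names are the ones mentioned above; the locale code is whatever your language pack provides):

    // Legacy way to request a UI locale after installing a language pack
    user_pref("intl.locale.requested", "fr");       // Firefox 59 and later
    // user_pref("general.useragent.locale", "fr"); // before Firefox 59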

The alternative is to install a build of Firefox already localized in your preferred language. But, once again, they’re not so easy to find. Imagine you work in a corporate environment that provides you with an operating system in English (en-US). You search the Web for “Firefox download”, and automatically end up on this download page, which doesn’t provide information on the language you’re about to download. You need to pay a lot of attention and notice the link to other languages.

If you already installed Firefox in the wrong language, you need to uninstall it, find the installer in the correct language, and reinstall it. As part of the uninstall process on Windows, we ask users to volunteer for a quick survey to explain why they’re uninstalling the browser, and the number of them saying something along the lines of “wrong language” or “need to reinstall in the right language” is staggering, especially considering this is an optional comment within an optional survey. It’s a clear signal, if one was ever needed, that things need to improve.

Everything Changes with Firefox 65

Fast forward to Firefox 65:

We introduced a new Language section. Directly from the General pane it’s now possible to switch between languages already available in Firefox, removing the need for manually setting preferences in about:config. What if the language is not available? Then you can simply Search for more languages… from the dropdown menu.

Add your preferred language to the list (French in this case).

And restart the browser to have the interface localized in French. Notice how the message is displayed in both languages, to give the user another hint that they selected the right language.

If you’re curious, you can see a diagram of the complete user interaction here.

A lot happens under the hood for this brief interaction:

  • When the user asks for more languages, Firefox connects to addons.mozilla.org via API to retrieve the list of languages available for the version in use.
  • When the user adds the language, Firefox downloads and installs the language pack for the associated locale code.
  • In order to improve the user experience, if available it also downloads dictionaries associated with the requested language.
  • When the browser is restarted, the new locale code is set as first in the intl.locale.requested preference.

This feature is enabled by default in Beta and Release versions of Firefox. Language packs are not reliable on Nightly, given that strings change frequently, and a language still considered compatible but incomplete could lead to a completely broken browser (a condition known as the “yellow screen of death”, where the XUL breaks due to missing DTD entities).

What’s Next

First of all, there are still a few bugs to fix (you can find a list in the dependencies of the tracking bug). Sadly, there are still several places in the code that assume the language will never change, and cache the translated content too aggressively.

The path forward has yet to be defined. Completing the migration to Fluent would drastically improve the user experience:

  • Language switching would be completely restartless.
  • Firefox would support a list of fallback locales. With the old technology, if a translation is missing it can only fall back to English. With Fluent, we can set a fallback chain, for example Ligurian->Italian->English.

There are also areas of the browser that are not covered by language packs, and would need to be rewritten (e.g. the profile manager). For these, the language used always remains the one packaged in the build.

The user experience could also be improved. For example, can we make selecting the language part of a multiplatform onboarding experience? There are a lot of hints that we could take from the host operating system to prompt the user about installing and selecting a different language. What if the user browses a lot of German content but uses the browser in English? Should we suggest that they switch languages?

With Firefox 66 we’re also starting to collect Telemetry data about internationalization settings – take a look at the table at the bottom of about:support – to understand more about our users, and how these changes in Preferences impact language distribution and possibly retention.

For once, Firefox is well ahead of the competition. For example, for Chrome it’s only possible to change language on Windows and Chromebook, while Safari only uses the language of your macOS.

This is the result of an intense cross-team effort. Special thanks to Zibi Braniecki for the initial idea and push, and all the work done under the hood to improve Firefox internationalization infrastructure, Emanuela Damiani for UX, and Mark Striemer for the implementation.


This Week In Rust: This Week in Rust 280

Tue, 02/04/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is sonic, a fast, lightweight & schema-less search backend. Thanks to Vikrant for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

251 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Thanks for walking through the process.

Quite the mental exercise, some people do Sudoku, others solve borrow puzzles!

Gambhiro on rust-users

Thanks to Tom Phinney for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.


Mozilla Addons Blog: April’s featured extensions

Tue, 02/04/2019 - 02:56


Pick of the Month: Disable WebRTC

by Chris Antaki
Do you use VPN? This extension prevents your IP address from leaking through WebRTC.

“Simple and effective!”

Featured: CSS Exfil Protection

by Mike Gualtieri
Gain protection against a particular type of attack that occurs through Cascading Style Sheets (CSS).

“I had no idea this was an issue until reading about it recently.”

Featured: Cookie Quick Manager

by Ysard
Take full control of the cookies you’ve accumulated while browsing.

“The best cookie manager I have tested (and I have tested a lot, if not them all!)”

Featured: Amazon Container

by JackymanCS4
Prevent Amazon from tracking your movements around the web.

(NOTE: Though similarly titled to Mozilla’s Facebook Container and Multi-Account Containers, this extension is not affiliated with Mozilla.)

“Thank you very much.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!



Firefox Nightly: Reducing Notification Permission Prompt Spam in Firefox

Mon, 01/04/2019 - 21:25

Permission prompts are a common sight on the web today. They allow websites to prompt for access to powerful features when needed, giving users granular and contextual choice about what to allow. The permission model has allowed browsers to ship features that would have presented risks to privacy and security otherwise.

However, over the last few years the ecosystem has seen a rise in unsolicited, out-of-context permission prompts being put in front of users, particularly ones that ask for permission to send push notifications.

In the interest of protecting our users and the web platform from this, we plan to experiment with restricting how and when websites can ask for notification permissions.

Rationalizing Push Notification Permission Spam

Push notifications are a powerful capability that enables new kinds of interactions with sites. It is hard to imagine a modern chat or social networking app that doesn’t send notifications. Since notifications can be sent after you leave a site, it is only natural that a site would need to ask for permission to show them.

[Image: A notification permission prompt]

Look familiar?

But anecdotal evidence tells us that there is an issue with notification permission prompts among our user base. As we browse the web, we regularly encounter these prompts and more often than not we become annoyed at them or don’t understand the intent of the website requesting the permission.

According to our telemetry data, the notifications prompt is by far the most frequently shown permission prompt, with about 18 million prompts shown on Firefox Beta in the month from Dec 25 2018 to Jan 24 2019. Not even 3% of these prompts got accepted by users. Most prompts are dismissed, while almost 19% of prompts caused users to leave the site immediately after being confronted with them. This is in stark contrast to the camera/microphone prompt, which has an acceptance rate of about 85%!

This leads us to believe that

  1. There are websites that show the notification prompt without the intent of using it to enhance the user experience, or that fail to convey this UX enhancement when prompting.
  2. There are websites that prompt for notification permission “too early”, without giving users enough context or time to decide if they want them, even if push notifications would significantly enhance the user experience on that site.

Last year Firefox introduced a new setting that allows users to entirely opt out of receiving new push notification permission prompts. This feature was well received among users who discovered it and understood the implications. But we still fail to protect the large portion of our users who do not explore their notification settings or don’t want to enforce such drastic measures. Thus, we are starting to explore other methods of preventing “permission spam” with two new experiments.

Experiment 1: Requiring User Interaction for Notification Permission Prompts in Nightly 68

A frequently discussed mitigation for this problem is requiring a user gesture, like a click or a keystroke, to trigger the code that requests permission. User interaction is a popular measure because it is often seen as a proxy for user consent and engagement with the website.

Firefox Telemetry from pre-release channels shows that very few websites request push notifications from a user gesture, indicating that this may be too drastic of a measure.

We suspect that these numbers might not paint a full picture of the resulting user experience, and we would like to get a real world impression of how requiring user interaction for notification permission prompts affects sites in the wild.

Hence, we will temporarily deny requests for permission to use Notifications unless they follow a click or keystroke in Firefox Nightly from April 1st to April 29th.

In the first two weeks of this experiment, Firefox will not show any user-facing notifications when the restriction is applied to a website. In the last two weeks of this experiment, Firefox will show an animated icon in the address bar (where our notification prompt normally would appear) when this restriction is applied. If the user clicks on the icon, they will be presented with the prompt at that time.

[Image: Prototype of the new prompt without user interaction]

During this time, we ask our Nightly audience to watch out for any sites that might want to show notifications, but are unable to do so. In such a case, you can file a new bug on Bugzilla, blocking bug 1536413.

Experiment 2: Collecting Interaction and Environment Data around Permission Prompts from Release Users

We suspect that requiring user interaction is not a perfect solution to this problem. To get to a better approach, we need to find out more about how our release user population interacts with permission prompts.

Hence, we are planning to launch a short-running experiment in Firefox Release 67 to gather information about the circumstances in which users interact with permission prompts. Have they been on the site for a long time? Have they rejected a lot of permission prompts before? The goal is to collect a set of possible heuristics for future permission prompt restrictions.

At Mozilla, this sort of data collection on a release channel is an exception to the rule, and requires strict data review. The experiment will run for a limited time, with a small percentage of our release user population.

Implications on Developers and User Experience

Web developers should anticipate that Firefox and other browsers may in the future decide to reject a website’s permission request based on automatically determined heuristics.

When such an automatic rejection happens, users may be able to revert this decision retroactively. The Permissions API offers an opportunity to monitor changes in permission state to handle this case.
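A sketch of that monitoring with the Permissions API (standard web platform calls):

    // Watch the notification permission for changes, e.g. a user later
    // granting a permission that was automatically rejected
    navigator.permissions.query({ name: "notifications" }).then((status) => {
      status.onchange = () => {
        console.log("Notification permission is now:", status.state);
        // e.g. subscribe for push once status.state === "granted"
      };
    });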

As a general principle, prompting for permissions should be done based on user interaction. Offering your users additional context, and delaying the prompt until the user chooses to show it, will not only future-proof your site, but likely also increase your user engagement and prompt acceptance rates.
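For example, a site can tie the request to an explicit opt-in control, which both satisfies a user-interaction requirement and gives the user context (a minimal sketch; the button id is hypothetical):

    // Only request notification permission in response to a user gesture
    document.getElementById("enable-notifications").addEventListener("click", () => {
      Notification.requestPermission().then((permission) => {
        if (permission === "granted") {
          new Notification("Notifications are now enabled!");
        }
      });
    });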

 


David Humphrey: The technology of nostalgia

Mon, 01/04/2019 - 19:04

Today one of my first year web students emailed me a question:

Today when I was searching some questions on StackOverflow, I found their website turns to a really interesting display and can be changed to the regular one back and forth by a button on the top. I guess it's some April Fool's day joke...how did they make the mouse pointer thing? There are stars dropping and tracking the pointer when you move it, I was thinking of inspecting it to get a sense, but then I realized I usually use the pointer to click on an element for inspecting, but I can't click the mouse itself, so I'm lost...Here's a website to explain what I'm saying: https://stackoverflow.com/questions/1098040/checking-if-a-key-exists-in-a-javascript-object?rq=1 Could you tell me how does that work, is this till done by CSS? or JS?

I went to look, and here is an animation of what I found.  Notice the trail of stars behind my mouse pointer as it moves:

My first thought was that they must have a mousemove handler on the body. I opened the dev tools and looked through the registered event listeners for mousemove. Sure enough, there was one registered on the document, and the code looked like this:

    function onMouseMove(e) {
      cursor.x = e.clientX;
      cursor.y = e.clientY;

      addParticle(
        cursor.x,
        cursor.y,
        possibleColors[Math.floor(Math.random() * possibleColors.length)]
      );
    }

Reading a bit further into the file revealed this comment:

    /*!
     * Fairy Dust Cursor.js
     * - 90's cursors collection
     * -- https://github.com/tholman/90s-cursor-effects
     * -- https://codepen.io/tholman/full/jWmZxZ/
     */

This is using tholman's cursor-effects JS library, and specifically the fairyDustCursor.

This code is really fun to read, especially for my early web students.  It's short, readable, not unnecessarily clever, and uses really common things in interesting ways.  Almost everything it does, my students have seen before--they just might not have thought to put it all together into one package like this.

Essentially how it works is that Particle objects are stored in an array, and each one gets added to the DOM as a <span>*</span> with a different colour, and CSS is used to move (translate) each away from an origin (the mouse pointer's x and y position). Over time (iterations of the requestAnimationFrame loop), each of these particles ages, and eventually dies, getting removed from the array and the DOM.
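A condensed sketch of that lifecycle (variable names simplified; the real library is linked above):

    // Simplified from fairyDustCursor: spawn, animate, and retire particles
    const particles = [];

    function addParticle(x, y, color) {
      const el = document.createElement("span");
      el.textContent = "*";
      el.style.color = color;
      el.style.position = "fixed";
      document.body.appendChild(el);
      particles.push({ el, x, y, lifeSpan: 120 });
    }

    function loop() {
      for (let i = particles.length - 1; i >= 0; i--) {
        const p = particles[i];
        p.y += 1;        // drift away from the origin
        p.lifeSpan -= 1; // age the particle each frame
        p.el.style.transform = `translate3d(${p.x}px, ${p.y}px, 0)`;
        if (p.lifeSpan < 0) {
          p.el.remove();          // dead particles leave the DOM...
          particles.splice(i, 1); // ...and the array
        }
      }
      requestAnimationFrame(loop);
    }
    requestAnimationFrame(loop);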

As I read the code, something else struck me. Both Stack Overflow and the cursor-effects library talk about this style of website being from the 90s. It's true, we didn't have the kind of refined and "delightful experiences" we take for granted today. It was a lot of flashing, banner ads, high-contrast colours, and people inventing (often badly) as they went.

Yet reading the code for how this effect was done, I couldn't help but pause to reflect on how modern it is at the same time.  Consider some of the browser APIs necessary to make this "90s" effect possible, and when they were first shipped (all dates are from caniuse.com):

  1. querySelector c. 2008 WebKit
  2. translate3d c. 2009 WebKit
  3. touchstart event c. 2009 Google
  4. requestAnimationFrame c. 2010 Mozilla
  5. pointer-events, touch-action c. 2012 Microsoft
  6. will-change c. 2014 Google

The progress that's been made on the web in the past 10 years is incredible.  In 2019 it only takes a few lines of code to do the kind of creative things we struggled with in 1999.  The web platform has evolved to something really great, and I love being part of it.


Will Kahn-Greene: Socorro: March 2019 happenings

Mon, 01/04/2019 - 18:00
Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

This blog post summarizes Socorro activities in March.



Daniel Stenberg: curl up 2019 is over

Mon, 01/04/2019 - 12:03

(I will update this blog post with more links to videos and PDFs to presentations as they get published, so come back later in case your favorite isn’t linked already.)

The third curl developers conference, curl up 2019, is now history. We gathered in the lovely Charles University in central Prague, where we sat down in an excellent classroom. After the HTTP symposium on the Friday, we spent the weekend diving deeper into protocols and curl details.

I started off the Saturday with The state of the curl project (youtube): an overview of how we’re doing right now in terms of stats, graphs and numbers from different aspects, then something about what we’ve done over the last year, and a quick look at what’s not so good and what we could work on going forward.

James Fuller took the next session with his Newbie guide to contributing to libcurl presentation. Things to consider and general best practices that could make your first steps into the project more likely to be pleasant!

Long term curl hacker Dan Fandrich (also known as “Daniel two” out of the three Daniels we have among our top committers) followed up with Writing an effective curl test, where he detailed what different tests we have in curl, what they’re for, and a little about how to write such tests.

(Photo: Sign seen at the curl up dinner reception Friday night)

After that I was back behind the desk in the classroom that we used for this event and I talked about The Deprecation of legacy crap (Youtube). How and why we are removing things, some things we are removing or will soon remove, and finally a little explainer on our new concept and handling of “experimental” features.

Igor Chubin then explained his new project for us: curlator: a framework for console services (Youtube). It’s an approach and tooling that makes it easier to provide access to shell- and console-oriented services over the web, using curl.

Me again. Governance, money in the curl project and someone offering commercial support (Youtube) was a presentation about how we intend for the project to join a legal entity, SFC, and a little about the money we have, what to spend it on, and how I feel it is good to keep the project separate from any commercial support ventures any of us might do!

While the list above might seem like more than enough, the day wasn’t over. Christian Schmitz also did his presentation on Using SSL root certificate from Mac/Windows.

Our local hero organizer James Fuller then spoiled us completely when we got around to having dinner at a monastery with beer-brewing monks and excellent food. Good food, good company and curl-related dinner subjects. That’s almost heaven defined!

Sunday

It was a daylight saving time morning, and you could tell. I’m sure it was not at all related to the beers from the night before…

James Fuller fired off the day by talking to us about Curlpipe (github), a DSL for building http execution pipelines.

(Photo: The classroom we used for the curl up presentations and discussions during Saturday and Sunday.)

Robin Marx then put in the next gear and entertained us for another hour with a protocol deep dive titled HTTP/3 (QUIC): the details (slides). For me personally this was exactly what I needed, as Robin has clearly kept up with more details and specifics in the QUIC and HTTP/3 protocol specifications than I’ve managed, and his talk helped the rest of the room get at least a little bit more in sync with current development.

Jakub Nesetril and Lukáš Linhart from Apiary then talked us through what they’re doing and thinking around web based APIs and how they and their customers use curl: Real World curl usage at Apiary.

Then I was up again and I got to explain to my fellow curl hackers about HTTP/3 in curl. Internal architecture, 3rd party libs and APIs.

Jakub Klímek explained to us in very clear terms the current and existing problems in his talk IRIs and IDNs: Problems of non-ASCII countries. Some of the problems involve curl, and while most of them have clear explanations, I think we have two lessons to learn from this: URLs are still as messy and undocumented as ever, and we might have some issues to fix in this area in curl.

To bring my fellow curl hackers up to speed on the details of the new API introduced in the last year, I then gave a presentation called The new URL API.

Clearly overdoing it for a single weekend, I then got the honors of doing the last presentation of curl up 2019, and for an audience that was about to die from exhaustion I talked about Internals. A walk-through of the architecture and what libcurl does when doing a transfer.

Summary

I ended up doing seven presentations during this single weekend. Not all of them were stellar or delivered with elegance, but I hope they were still valuable to some. I did not mean to steal anyone else’s time slot; I would gladly have given up time if we had had other speakers who wanted to say something. Let’s aim for more non-Daniel talkers next time!

A weekend like this is such a boost for inspiration, for morale and for my ego. All the friendly faces with the encouraging and appreciating comments will keep me going for a long time after this.

Thank you to our awesome and lovely event sponsors – shown in the curl up logo below! Without you, this sort of happening would not happen.

curl up 2020

I will of course want to see another curl up next year. There are no plans yet and we don’t know where to host it. I think it is valuable to move it around, but I think it is even more valuable to have a friend on the ground in that particular city to help us out. Once this year’s event has sunk in properly and a month or two has passed, the case for and organization of next year’s conference will commence. Stay tuned, and if you want to help host us, do let me know!


Categorieën: Mozilla-nl planet

Mozilla Thunderbird: All Thunderbird Bugs Have Been Fixed!

ma, 01/04/2019 - 10:00
April Fools!

We still have open bugs, but we’d like your help to close them!

We are grateful to have a very active set of users who generate a lot of bug reports, and we are requesting your help in sorting them, an activity called bug triage. We’re holding “Bug Days” on April 8th (all day, EU and US timezones) and April 13th (EU and US timezones until 4pm EDT). During these bug days we will log on and work as a community to triage as many bugs as possible. All you’ll need is a Bugzilla account and Thunderbird Daily, and we’ll teach you the rest! With several of us working at the same time we can help each other in real time – answering questions, sharing ideas, and enjoying being with like-minded people.

No coding or special skills are required, and you don’t need to be an expert or long term user of Thunderbird.

Some things you’ll be doing if you participate:

  • Help other users by checking their bug reports to see if you can reproduce the behavior of their reported problem.
  • Get advice about your own bug report(s).
  • Learn the basics about Thunderbird troubleshooting and how to contribute.

We’re calling this the “Game of Bugs”, named after the popular show Game of Thrones – where we will try to “slay” all the bugs. Those who participate fully in the event will get a Thunderbird Game of Bugs t-shirt for their participation (with the design below).

 Game of Bugs T-shirt design

Thunderbird: Game of Bugs

Sorry for the joke! But we hope you’ll join us on the 8th or the 13th via #tb-qa on Mozilla’s IRC so that we can put these bugs in their place which helps make Thunderbird even better. If you have any questions feel free to email ryan@thunderbird.net.

P.S. If you are unable to participate in bug day you can still help by checking out our Get Involved page on the website and contributing in the way you’d like!

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR14b1 available (now with H.264 video)

ma, 01/04/2019 - 05:52
TenFourFox Feature Parity Release 14 beta 1 is now available (downloads, hashes, release notes).

I had originally plotted three main features for this release, but getting the urgent FPR13 SPR1 out set me back a few days with confidence testing and rebuilds, and I have business trips and some vacation time coming up, so I jettisoned the riskiest of the three features (a set of JavaScript updates and an ugly hack to get Github and other sites working fully again) and concentrated on the other two. I'll be looking at that again for FPR15, so more on that later.

Before we get to the marquee features, though, there are two changes which you may not immediately notice. The first is a mitigation for a long-standing issue where some malicious sites keep popping up authentication modals using HTTP Auth. Essentially you can't do anything with the window until the modal is dealt with, so the site just asks for your credentials over and over, ultimately making the browser useless (as a means to make you call their "support line" where they can then social engineer their way into your computer). The ultimate solution is to make such prompts tab-modal rather than window-modal, but that's involved and sort of out of scope, so we now implement a change similar to what current Firefox does: a cap of three Cancels. If you cancel three times, the malicious site is not allowed to issue any more requests until you reload it. No actual data is leaked, assuming you don't type anything in, but it can be a nasty denial of service, and it would have succeeded in ruining your day on TenFourFox just as easily as on any other Firefox derivative. That said, just avoid iffy sites, yes?
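Conceptually, the mitigation boils down to a per-site counter. Here's a hedged sketch of the idea (hypothetical names; the actual logic lives inside the browser's HTTP auth prompting code, not in anything a page can see):

  // Conceptual sketch only: count cancels per site, stop prompting at 3.
  const MAX_CANCELS = 3;
  const cancelCounts = new Map();        // per-site cancel counter

  function shouldShowAuthPrompt(origin) {
    // After three cancels, suppress further prompts until the page reloads.
    return (cancelCounts.get(origin) || 0) < MAX_CANCELS;
  }

  function onAuthPromptCancelled(origin) {
    cancelCounts.set(origin, (cancelCounts.get(origin) || 0) + 1);
  }

  function onPageReloaded(origin) {
    cancelCounts.delete(origin);         // reloading resets the counter
  }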

The second change is more fundamental. For Firefox 66, Mozilla briefly experimented with setting a frame rate cap on low-end devices. Surprise, surprise: all of our systems are low-end devices! In FPR13 and prior, TenFourFox would try to push as many frames to the compositor as possible, no matter what it was trying to do, to achieve a 60fps target or better. However, probably none of our computers, with the possible exception of high-end G5s, were achieving 60fps consistently on most modern websites, and the browser would flail trying desperately to keep up. Instead, by setting a cap and enforcing it with software v-sync, frames aren't pushed as often and the browser can do more layout and rendering work per frame. Mozilla selected a 30fps cap, so that's what I selected as an arbitrary first cut. Some sites are less smooth, but many sites now render faster to first paint, particularly pages that do a lot of DOM transforms, because now the resulting visual changes are batched. This might seem like an obvious change to make, but the numbers had never been proven until then.

Mozilla ultimately abandoned this change in favour of a more flexible approach with the idle queue, but our older codebase doesn't support that, and we don't have the other issues they encountered anyway because we don't support Electrolysis or APZ. There are two things to look at: we shouldn't have the same scrolling issues because we scroll synchronously, but do report any regressions in default scrolling or obvious changes in scroll rate (include what you're doing the scrolling with, such as the scroll bar, a mouse scroll wheel or your laptop trackpad). The second thing to look at is whether the 30fps frame rate is the right setting for all systems. In particular, should 7400 or G3 systems go even lower, maybe 15fps? You can change this by setting layout.frame_rate to some other target frame rate value and restarting the browser. What setting seems to do best on your machine? Include RAM, OS and CPU speed. One other possibility is to look at reducing the target frame rate dynamically based on battery state, but this requires additional plumbing we don't support just yet.
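If you're curious what a frame rate cap looks like in principle, here's a hedged sketch (hypothetical, expressed as page JavaScript purely for illustration; the real cap is enforced inside the compositor with software v-sync):

  // Sketch of a software frame-rate cap via requestAnimationFrame.
  const TARGET_FPS = 30;                  // cf. the layout.frame_rate pref
  const FRAME_BUDGET = 1000 / TARGET_FPS; // ~33ms per frame at 30fps
  let lastFrame = 0;

  function onFrame(timestamp) {
    requestAnimationFrame(onFrame);       // always schedule the next tick
    if (timestamp - lastFrame < FRAME_BUDGET) {
      return;                             // too soon: skip this tick, leaving
                                          // the time for layout and rendering
    }
    lastFrame = timestamp;
    // ...per-frame DOM and rendering work goes here...
  }
  requestAnimationFrame(onFrame);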

So now the main event: H.264 video support. Olga gets the credit here for the original code, which successfully loads our separately-distributed build of ffmpeg so that we don't run afoul of any licenses by including it with the core browser. My first cut of this had issues where the browser ran out of memory on sites that ran lots of H.264 video as images (and believe me, this is not at all uncommon these days), but I got our build of ffmpeg trimmed down enough that it can now load the Vimeo front page and other sites generally without issue. Download the TenFourFox MP4 Enabler for either G4/7450 or G5 (this is a bit of a misnomer since we also use ffmpeg for faster MP3 and VP3 decoding, but I didn't want it confused with Olga's preexisting FFmpeg Enabler), download FPR14b1, run the Enabler to install the libraries and then start FPR14b1. H.264 video should now "just work." However, do note there may be a few pieces left to add for compatibility (for example, Twitter videos used to work, then something changed and now they don't, and I don't know why; but Imgur, YouTube and Vimeo seem to work fine).

There are some things to keep in mind. While ffmpeg has very good AltiVec support, H.264 video tends to be more ubiquitous and run at higher bitrates, which cancel out the gains; I wouldn't expect dramatic performance improvements relative to WebM and while you may see them in FPR14 relative to FPR13 remember that we now have a frame rate cap which probably makes the decoder more efficient. As mentioned before, I only support G4/7450 (and of those, 1.25GHz and up) and G5 systems; a G4/7400 will have trouble keeping up even with low bitrates and there's probably no hope for G3 systems at all. The libraries provided are very stripped down both for performance and to reduce size and memory pressure, so they're not intended as a general purpose ffmpeg build (in particular, there are no encoders, multiplexers or protocols, some codecs have been removed, and VP8/VP9 are explicitly disabled since our in-tree hyped-up libvpx is faster). You can build your own libraries and put them into the installation location if you want additional features (see the wiki instructions for supported versions and the build process), and you may want to do this for WebM in particular if you want better quality since our build has the loop filter and other postprocessing cut down for speed, but I won't offer support for custom libraries and you'd be on your own if you hit a bug. Finally, the lockout code I wrote when I was running into memory pressure issues is still there and will still cancel all decoding H.264 instances if any one of them fails to get memory for a frame, hopefully averting a crash. This shouldn't happen much now with the slimmer libraries but I still recommend as much RAM as your system can hold (at least 2GB). Oh, and one other thing: foxboxes work fine with H.264!

Now, enjoy some of the Vimeo videos you previously could only watch with the old PopOutPlayer, back when it actually still worked. Here are four of my favourites: Vicious Cycle (PG-13 for pixelated robot blood), Fired On Mars (PG-13 for an F-bomb), Other Half (PG-13 for an F-bomb and oddly transected humans), and Unsatisfying (unsatisfying). I've picked these not only because they're entertaining but also because they run the gamut from hand-drawn animation to CGI to live action and give you an idea of how your system performs. However, I strongly recommend you click the gear icon and select 360p before you start playback, even on a 2005 G5; HD performance is still best summarized as "LOL."

At least one of you asked how to turn it off. Fortunately, if you never install the libraries, it'll never be "on" (the browser will work as before). If you do install them, and decide you prefer the browser without it, you can either delete the libraries from ~/Library/TenFourFox-FFmpeg (stop the browser first just in case it has them loaded) or set media.ffmpeg.enabled to false.

The other, almost anticlimactic, change is that you can now embed JavaScript in your AppleScripts and execute it in browser tabs with run JavaScript. While preparing this blog post I discovered an annoying bug in the AppleScript support, but since no one has reported it so far, it must not be something anyone's actually hitting (or I guess no one's using it much yet). It will be fixed for the final release, which will come out in parallel with Firefox 60.7/67 on or about May 14.

Categorieën: Mozilla-nl planet

Martin Giger: Sustainable smart home with the TXT

zo, 31/03/2019 - 19:28

fischertechnik launched the smart home kit last year. A very good move on a conceptual level. Smart home and IoT (internet of things) are rapidly growing technology sectors. The unique placement of the TXT allows it to be a perfect introductory platform to this world. However, the smart home platform from fischertechnik relies on a …

The post Sustainable smart home with the TXT appeared first on Humanoids beLog.

Categorieën: Mozilla-nl planet

Daniel Stenberg: The future of HTTP Symposium

za, 30/03/2019 - 07:26

This year’s version of curl up started a little differently: with an afternoon of HTTP presentations. The event took place the same week the IETF meeting had just ended here in Prague, so we got the opportunity to invite people who possibly otherwise wouldn’t have been here… Of course this was only possible thanks to our awesome sponsors, visible in the image above!

Lukáš Linhart from Apiary started out with “Web APIs: The Past, The Present and The Future”. A journey through XML-RPC, SOAP and more. One final conclusion might be that we’re not quite done yet…

James Fuller from MarkLogic talked about “The Defenestration of Hypermedia in HTTP”. How HTTP web technologies have changed over time while the HTTP paradigms have survived for a very long time.

I talked about DNS-over-HTTPS. A presentation similar to the one I did before at FOSDEM, but in a shorter time so I had to talk a little faster!

Mike Bishop from Akamai (editor of the HTTP/3 spec and a long time participant in the HTTPbis work) talked about “The evolution of HTTP (from HTTP/1 to HTTP/3)” from HTTP/0.9 to HTTP/3 and beyond.

Robin Marx then rounded off the series of presentations with his tongue-in-cheek “HTTP/3 (QUIC): too big to fail?!”, where he provided a long list of challenges for QUIC and HTTP/3 to get deployed and become successful.

We ended this afternoon session with a casual Q&A session with all the presenters discussing various aspects of HTTP, the web, REST, APIs and the benefits and deployment challenges of QUIC.

I think most of us learned things this afternoon, and we could leave the very elegant Charles University room enriched and with more food for thought about these technologies.

We ended the evening with snacks and drinks kindly provided by Apiary.

(This event was not streamed and not recorded on video; you had to be there in person to enjoy it.)


Categorieën: Mozilla-nl planet
