
Mozilla Localization (L10N): L10n Report: April 2021 Edition

Mozilla planet - vr, 16/04/2021 - 18:55

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added
  • Cebuano (ceb)
  • Hiligaynon (hil)
  • Meiteilon (mni)
  • Papiamento (pap-AW)
  • Shilha (shi)
  • Somali (so)
  • Uyghur (ug)
Update on the communication channels

On April 3rd, as part of a broader strategy change at Mozilla, we moved our existing mailing lists (dev-l10n, dev-l10n-web, dev-l10n-new-locales) to Discourse. If you are involved in localization, please make sure to create an account on Discourse and set up your profile to receive notifications when there are new messages in the Localization category.

We also decided to shut down our existing Telegram channel dedicated to localization. This was originally created to fill a gap, given its broad availability on mobile, and the steep entry barrier required to use IRC. In the meantime, IRC has been replaced by chat.mozilla.org, which offers a much better experience on mobile platforms. Please make sure to check out the dedicated Wiki page with instructions on how to connect, and join our #l10n-community room.

New content and projects

What’s new or coming up in Firefox desktop

For all localizers working on Firefox, there is now a Firefox L10n Newsletter, including all information regarding the next major release of Firefox (89, aka MR1). Here you can find the latest issue, and you can also subscribe to this thread in discourse to receive a message every time there’s an update.

One important update is that the Firefox 89 cycle will last 2 extra weeks in Beta. These are the important deadlines:

  • Firefox 89 will move from Nightly to Beta on April 19 (unchanged).
  • It will be possible to update localizations for Firefox 89 until May 23 (previously May 9).
  • Firefox 89 will be released on June 1.

As a consequence, the Nightly cycle for Firefox 90 will also be two weeks longer.

What’s new or coming up in mobile

Like Firefox desktop, Firefox for iOS and Firefox for Android are still on the road to the MR1 release. I’ve published some details on Discourse here. Dates and info are still relevant, nothing changes in terms of l10n.

All strings for Firefox for iOS should already have landed.

Most strings for Firefox for Android should have landed.

What’s new or coming up in web projects

AMO:

The Voice Fill and Firefox Voice Beta extensions are being retired.

Common Voice:

The project is transitioning to the Mozilla Foundation. The announcement was made earlier this week. Some of the Mozilla staff who worked closely with the project will continue working on it in their new roles. The web part, that is, the site localization, will remain in Pontoon.

Firefox Accounts:

Beta was launched on March 17. The sprint cycle is now aligned with Firefox Nightly moving forward. The next code push will be on April 21. The cutoff to include localized strings is a week earlier than the code push date.

MDN:

All locales are disabled with the exception of fr, ja, zh-CN and zh-TW. There is a blog post about this decision. The team may add back more languages later. If that happens, attribution for the work done by community members will be retained in Pontoon. Nothing will be lost.

Mozilla.org:
  • Migration from .lang to .ftl has been completed. Strings containing brand and product names that were not converted properly will appear as warnings and will not be shown on the production site. Please resolve these issues as soon as possible.
  • A select few locales were chosen to be supported by a vendor service: ar, hi-IN, id, ja, and ms. The community managers were contacted about this change. The website should be fully localized in these languages by the first week of May. For more details on this change and for ways to report translation issues, please check out the announcement on Discourse.
Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Categorieën: Mozilla-nl planet

Jan-Erik Rediger: This Week in Glean: rustc, iOS and an M1

Mozilla planet - vr, 16/04/2021 - 16:00

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.

Back in February I got an M1 MacBook. That's Apple's new ARM-based hardware.

I got it with the explicit task to ensure that we are able to develop and build Glean on it. We maintain a Swift language binding, targeting iOS, and that one is used in Firefox iOS. Eventually these iOS developers will also have M1-based machines and want to test their code, thus Glean needs to work.

Here's what we need to get to work:

  • Compile the Rust portions of Glean natively on an M1 machine
  • Build & test the Kotlin & Swift language bindings on an M1 machine, even if non-native (e.g. Rosetta 2 emulation for x86_64)
  • Build & test the Swift language bindings natively and in the iPhone simulator on an M1 machine
  • Stretch goal: Get iOS projects using Glean running as well
Rust on an M1

Work on getting Rust compiled on M1 hardware started last year in June already, with the availability of the first developer kits. See Rust issue 73908 for all the work and details. First and foremost this required a new target: aarch64-apple-darwin. This landed in August and was promoted to Tier 2 [1] with the December release of Rust 1.49.0.

By the time I got my MacBook, compiling Rust code on it was as easy as on an Intel MacBook. Developers on Intel MacBooks can cross-compile just as easily:

rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin

Glean Python & Kotlin on an M1

Glean Python just ... worked. We use cffi to load the native library into Python. It gained aarch64 [2] macOS support in v14.4.1. My colleague glandium later contributed support code so we build release wheels for that target too. So it's both possible to develop & test Glean Python, as well as use it as a dependency without having a full Rust development environment around.

Glean Android is not that straightforward. Some of our transitive dependencies are based on years-old pre-built binaries of SQLite and of course there's not much support behind updating those Java libraries. It's possible; a friend managed to compile and run that library on an M1. But for Glean development we simply recommend relying on Rosetta 2 (the x86_64 compatibility layer) for now. It's as easy as:

arch -x86_64 $SHELL
make build-kotlin

At least if you have Java set up correctly... The default Android emulator isn't usable on M1 hardware yet, but Google is working on a compatible one: Android M1 emulator preview. It's usable enough for some testing, but for that part I most often switch back to my Linux Desktop (that has the additional CPU power on top).

Glean iOS on an M1

Now we're getting to the interesting part: Native iOS development on an M1. Obviously for Apple this is a priority: Their new machines should become the main machine people do iOS development on. Thus Xcode gained aarch64 support in version 12 long before the hardware was available. That caused quite some issues with existing tooling, such as the dependency manager Carthage. Here's the issue:

  • When compiling for iOS hardware you would pick a target named aarch64-apple-ios, because ... iPhones and iPads are ARM-based since forever.
  • When compiling for the iOS simulator you would pick a target named x86_64-apple-ios, because conveniently the simulator uses the host's CPU (that's what makes it fast)

So when the compiler saw x86_64 and iOS it knew "Ah, simulator target" and when it saw aarch64 and ios it knew "Ah, hardware". And everyone went with this, Xcode happily built both targets and, if asked to, was able to bundle them into one package.

With the introduction of Apple Silicon [3], the iOS simulator running on these machines would also be aarch64 [4], and its target would also contain ios, but it would not be for the iOS hardware.

Now Xcode and the compiler will get confused what to put where when building on M1 hardware for both iOS hardware and the host architecture.

So the compiler toolchain gained knowledge of a new thing: arm64-apple-ios14.0-simulator, explicitly marking the simulator target. The compiler knows from where to pick the libraries and other SDK files when using that target. You still can't put code compiled for arm64-apple-ios and arm64-apple-ios14.0-simulator into the same universal binary [5], because you can have each architecture only once (the arm64 part in there). That's what Carthage and others stumbled over.

Again Apple prepared for that: for a long time they have wanted you to use XCFramework bundles [6]. Carthage just didn't support that until recently; the 0.37.0 release fixed that.

That still leaves Rust behind, as it doesn't know the new -simulator target. But as always the Rust community is ahead of the game and deg4uss3r started adding a new target in Rust PR #81966. He got halfway there when I jumped in to push it over the finish line. How these targets work and how LLVM picks the right things to put into the compiled artifacts is severely underdocumented, so I had to go the trial-and-error route in combination with looking at LLVM source code to find the missing pieces. Turns out: the 14.0 in arm64-apple-ios14.0-simulator is actually important.

With the last missing piece in place, the new Rust target landed in February and is available in Nightly. Contrary to the main aarch64-apple-darwin or aarch64-apple-ios targets, the simulator target is not Tier 2 yet and thus no prebuilt support is available: rustup target add aarch64-apple-ios-sim does not work right now. I am now in discussions to promote it to Tier 2, but it's currently blocked by the RFC: Target Tier Policy.

It works on nightly however and in combination with another cargo capability I'm able to build libraries for the M1 iOS simulator:

cargo +nightly build -Z build-std --target aarch64-apple-ios-sim

For now Glean iOS development on an M1 is possible, but requires Nightly. Goal achieved, I can actually work with this!

In a future blog post I want to explain in more detail how to teach Xcode about all the different targets it should build native code for.

All The Other Projects

This was marked a stretch goal for a reason. This involves all the other teams with Rust code and the iOS teams too. We're not there yet and there's currently no explicit priority to make development of Firefox iOS on M1 hardware possible. But when it comes to it, Glean will be ready for it and the team can assist others to get it over the finish line.

Want to hear more about Glean and our cross-platform Rust development? Come to next week's Rust Linz meetup, where I will be talking about this.

Footnotes:

[1] See Platform Support for what the Tiers mean.
[2] The other name for that target.
[3] "Apple Silicon" is yet another name for what is essentially the same as "M1" or "macOS aarch64".
[4] Or arm64 for that matter. Yes, yet another name for the same thing.
[5] "Universal Binaries" have existed for a long time now and allow for one binary to include the compiled artifacts for multiple targets. It's how there's only one Firefox for Mac download which runs natively on either Mac platform.
[6] Yup, the main documentation they link to is a WWDC 2019 talk recording video.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Demoing The Pernosco Omniscient Debugger: Debugging Crashes In Node.js And GDB

Mozilla planet - vr, 16/04/2021 - 13:48

This post was written by Pernosco co-founder Kyle Huey.

Traditional debugging forms a hypothesis about what is going wrong with the program, gathers evidence to accept or reject that hypothesis, and repeats until the root cause of the bug is found. This process is time-consuming, and formulating useful hypotheses often requires deep understanding of the software being debugged. With the Pernosco omniscient debugger there’s no need to speculate about what might have happened, instead an engineer can ask what actually did happen. This radically simplifies the debugging process, enabling much faster progress while requiring much less domain expertise.

To demonstrate the power of this approach we have two examples from well-known and complex software projects. The first is an intermittently crashing node.js test. From a simple stack walk it is easy to see that the proximate cause of the crash is calling a member function with a NULL `this` pointer. The next logical step is to determine why that pointer is NULL. In a traditional debugging approach, this requires pre-existing familiarity with the codebase, or reading code and looking for places where the value of this pointer could originate from. Then an experiment, either poking around in an interactive debugger or adding relevant logging statements, must be run to see where the NULL pointer originates from. And because this test fails intermittently, the engineer has to hope that the issue can be reproduced again and that this experiment doesn’t disturb the program’s behavior so much that the bug vanishes.

In the Pernosco omniscient debugger, the engineer just has to click on the NULL value. With all program state available at all points in time, the Pernosco omniscient debugger can track this value back to its logical origin with no guesswork on the part of the user. We are immediately taken backwards to the point where the connection in question received an EOF and set this pointer to NULL. You can read the full debugging transcript here.

Similarly, with a crash in gdb, the proximate cause of the crash is immediately obvious from a stack walk: the program has jumped through a bad vtable pointer to NULL. Figuring out why the vtable address has been corrupted is not trivial with traditional methods: there are entire tools such as ASAN (which requires recompilation) or Valgrind (which is very slow) that have been designed to find and diagnose memory corruption bugs like this. But in the Pernosco omniscient debugger a click on the object’s pointer takes the user to where it was assigned into the global variable of interest, and another click on the value of the vtable pointer takes the user to where the vtable pointer was erroneously overwritten. Walk through the complete debugging session here.

As demonstrated in the examples above, the Pernosco omniscient debugger makes it easy to track down even classes of bugs that are notoriously difficult to work with such as race conditions or memory corruption errors. Try out Pernosco individual accounts or on-premises today!

Categorieën: Mozilla-nl planet

About:Community: In loving memory of Ricardo Pontes

Mozilla planet - vr, 16/04/2021 - 12:43

It brings us great sadness to share the news that a beloved Brazilian community member and Rep alumnus, Ricardo Pontes has recently passed away.

Ricardo was one of the first Brazilian community members, contributing for more than 10 years, a good friend, and a mentor to other volunteers.

His work was instrumental in the Firefox OS days and his passion inspiring. His passing finds us saddened and shocked. Our condolences to his family and friends.

Below are some words about Ricardo from fellow Mozillians (old and new)

  • Sérgio Oliveira (seocam): Everybody that knew Ricardo, or Pontes as we usually called him in the Mozilla community, knows that he had a strong personality (despite his actual height). He always stood for what he believed was right and fought for it, but always smiling, making jokes and playing around with the situations. He was a real fun partner in many situations, even the not-so-easy ones. We are lucky to have photos of Ricardo, since he was always behind the camera taking pictures of us, and always great pictures. Pontes, it was a great pleasure to defend the free Web side-by-side with you. I’ll miss you my friend.
  • Felipe Gomes: Ricardo was always a cheerful, upbeat person who had the gift of bringing all the groups together. Even during his struggle it was possible to see how people came together to pray for him and how dear he was to his friends and family. The memories we have of him are the memories he captured of us through his camera. Rest in peace, my friend.
  • Andrea Balle: Pontes is and always will be part of Mozilla Brazil. One of the first members, the “jurassic team” as we called it. Pontes was a generous, intelligent and high-spirited friend. I will always remember him as a person with great enthusiasm for sharing the things that he loved, including bikes, photography, technology and the free web. He will be deeply missed.
  • Armando Neto: I met Ricardo 10 years ago, in a hotel hallway, we were chatting about something I don’t remember, but I do remember we’re laughing, and I will always remember him that way in that hallway.. laughing.
  • Luciana Viana: Ricardo was quiet and reserved, but he observed everything and was always attentive to what was going on. We met thanks to Mozilla and had the opportunity to spend time together thanks to our countless trips: Buenos Aires, Cartagena, Barcelona, Toronto, unforgettable trips thanks to his presence, contributions and sense of humor. Rest in peace, dear Chuck. I ask God to comfort the hearts of his family.
  • Clauber Stipkovic: Thank you for everything, my friend. For all the laughter, for all the late nights we spent talking about mozilla, about life and what we expected from the future. Thank you for being our photographer and recording so many cool moments, that we spent together. Unfortunately your future was very short, but I am sure that you recorded your name in the history of everything you did. May your passage be smooth and peaceful.
  • Luigui Delyer (luiguild): Ricardo was present in the best days I have ever had in my life as a Mozillian. He taught me a lot, we enjoyed a lot, we traveled a lot, we taught a lot. His legacy is inevitable; his name will be forever in Mozilla’s history and in our hearts. May the family feel embraced by the entire world community that he helped to build.
  • Fabricio Zuardi: The memories I have of Ricardo are all of a person smiling, cheerful and in high spirits. He gave us wonderful records of happy moments. I wish comfort to his family and friends; he was a special person.
  • Guilermo Movia: I don’t remember when I first met Ricardo, but there were so many meetings and travels where our paths crossed. I remember him coming to Mar del Plata to help us take pictures for the “De todos, para todos” campaign. His pictures were always great, and showed the best of the community. Hope you can rest in peace.
  • Rosana Ardila: Ricardo was part of the soul of the Mozilla Brazil community, he was a kind and wonderful human being. It was humbling to see his commitment to the Mozilla Community. He’ll be deeply missed
  • Andre Garzia: Ricardo has been a friend and my Reps mentor for many years, it was through him and others that I discovered the joys of volunteering in a community. His example, wit, and smile, were always part of what made our community great. Ricardo has been an inspiring figure for me, not only because the love of the web that ties us all here but because he followed his passions and showed me that it was possible to pursue a career in what we loved. He loved photography, biking, and punk music, and that is how I chose to remember him. I’ll always cherish the memories we had travelling the world and exchanging stories. My heart and thoughts go to his beloved partner and family. I’ll miss you a lot my friend.
  • Lenno Azevedo: Ricardo was my second mentor in the Mozilla Reps program, where he guided me through the project, showing me the ropes and helping me become a good Rep. I will keep forever the lessons and encouragement he gave me over the years, especially in my current profession. I owe you one, partner. Thank you for everything, rest in peace!
  • Reuben Morais: Ricardo was a beautiful soul, full of energy and smiles. Meeting him at events was always an inspiring opportunity. His energy always made every gathering feel like we all knew each other as childhood friends, I remember feeling this even when I was new. He’ll be missed by all who crossed paths with him.
  • Rubén Martín (nukeador): Ricardo was key to support the Mozilla community in Brazil, as a creative mind he was always behind his camera trying to capture and communicate what was going on, his work will remember him online. A great memory comes to my mind about the time we shared back in 2013 presenting Firefox OS to the world from Barcelona’s Mobile World Congress. You will be deeply missed, all my condolences to his family and close friends. Obrigado por tudo, descanse em paz!
  • Pierros Papadeas: A creative and kind soul, Ricardo will be surely missed by the communities he engaged so passionately.
  • Gloria Meneses: Taking amazing photos, skating and supporting his local community. A very active mozillian who loved parties after long working Reps sessions and a beer lover, that’s how I remember Ricardo. The most special memories I have from Ricardo are In Cartagena at Firefox OS event, in Barcelona at Mobile world congress taking photos, in Madrid at Reps meetings and in the IRC channel supporting Mozilla Brazil. I still can’t believe it. Rest in peace Ricardo.
  • William Quiviger: I remember Ricardo being very soft spoken and gently but fiercely passionate about Mozilla and our mission. I remember his eyes lighting up when I approached him about joining the Reps program. Rest in peace Ricardo.
  • Fernando García (stripTM): I am very shocked by this news. It is so sad and so unfair.
  • Mário Rinaldi: Ricardo was a cheerful and jovial person; he will be greatly missed in this world.
  • Lourdes Castillo:  I will always remember Ricardo as a friend and brother who has always been dedicated to the Mozilla community. A tremendous person with a big heart. A hug to heaven and we will always remember you as a tremendous Mozillian and brother! Rest in peace my mozfriend
  • Luis Sánchez (lasr21) – The legacy of Ricardo’s passions will live on through the hundreds of new contributors that his work reached.
  • Miguel Useche: Ricardo was one of the first Mozillians I met outside my country. It was interesting to know someone who volunteered at Mozilla, did photography and loved skateboarding, just like me! I became a fan of his art and loved the little time I had the opportunity to share with him. Rest in peace bro!
  • Antonio Ladeia – Ricardo was a special guy, always happy and willing to help. I had the pleasure of meeting him. His death will make this world a little sadder.
  • Eduardo Urcullú (Urcu): Ricardo, better known as “O Pontes”, really was a very fun friend, although quiet when you didn’t yet know him well. I met him at a free software event back in 2010 (when I still had long hair xD); the photos he took with his camera and his situational humor are the things to remember him by. R.I.P. Pontes
  • Dave Villacreses (DaveEcu): Ricardo was part of the early group of supporters here in Latin America; he contributed to breathing life into our beloved LatAm community. I remember he loved photography and was full of ideas and interesting comments every time. Smart and proactive. It is a really sad moment for our entire community.
  • Arturo Martinez: I met Ricardo during the MozCamp LATAM, and since then we became good friends, our paths crossed several times during events, flights, even at the MWC, he was an amazing Mozillian, always making us laugh, taking impressive pictures, with a willpower to defend what he believed, with few words but lots of passion, please rest in peace my friend.
  • Adriano Cupello:  The first time we met, we were in Cartagena for the launch of Firefox OS and I met one of the most amazing group of people of my life.  Pontes was one of them and very quickly became an “old friend” like the ones we have known at school all our lives.  He was an incredible and strong character and a great photographer.  Also he was my mentor at Mozilla reps program. The last time we talked, we tried to have a beer, but due to the circumstances of work, we were unable to.  We schedule it for the next time, and this time never came.  This week I will have this beer thinking about him.  I would like to invite all of you in the next beer that you have with your friends or alone, to dedicate this one to his memory and to the great moments we spent together with him.  My condolences and my prayers to the family and his partner @cahcontri who fought a very painful battle to report his situation until the last day with all his love.  Thank you for all lovely memories you left in my mind! We will miss you a lot! Cheers Pontes!
  • Rodrigo Padula: There were so many events, beers, good conversations and so many jokes and laughs that I don’t even remember when I met Ricardo. We shared the same sense of humor and bad jokes. Surely only good memories will remain! Rest in peace Ricardo, we will miss you!
  • Brian King: I was fortunate to have met Ricardo several times. Although quiet, you felt his presence and he was a very cool guy. We’ll miss you, I hope you get that big photo in the sky. RIP Ricardo.

Some pictures of Ricardo’s life as a Mozilla contributor can be found here

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Built-in FTP implementation to be removed in Firefox 90

Mozilla planet - do, 15/04/2021 - 23:01

Last year, the Firefox platform development team announced plans to remove the built-in FTP implementation from the browser. FTP is a protocol for transferring files from one host to another.

The implementation is currently disabled in the Firefox Nightly and Beta pre-release channels and will be disabled when Firefox 88 is released on April 19, 2021. The implementation will be removed in Firefox 90.  After FTP is disabled in Firefox, the browser will delegate ftp:// links to external applications in the same manner as other protocol handlers.

With the deprecation, browserSettings.ftpProtocolEnabled will become read-only. Attempts to set this value will have no effect.
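
For extension authors, the practical effect is easy to see from a short snippet. The following is a minimal, hypothetical sketch (not from the announcement) that reads and tries to change the setting from a background script; it assumes the extension has the "browserSettings" permission.

// Hypothetical sketch: reading and trying to change the FTP setting.
async function inspectFtpSetting() {
  // Reading still works after the deprecation.
  const current = await browser.browserSettings.ftpProtocolEnabled.get({});
  console.log("ftpProtocolEnabled:", current.value, current.levelOfControl);

  // Once the setting is read-only, this resolves but changes nothing.
  const didSet = await browser.browserSettings.ftpProtocolEnabled.set({ value: true });
  console.log("set() had an effect:", didSet);
}

inspectFtpSetting();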

Most places where an extension may pass “ftp”, such as filters for proxy or webRequest, should not result in an error, but the APIs will no longer handle requests of those types.

To help offset this removal, ftp has been added to the list of supported protocol_handlers for browser extensions. This means that extensions will be able to prompt users to launch an FTP application to handle certain links.

Please let us know if you have any questions on our developer community forum.

The post Built-in FTP implementation to be removed in Firefox 90 appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Ryan Harter: Opportunity Sizing: Is the Juice Worth the Squeeze?

Mozilla planet - do, 15/04/2021 - 22:00

My peers at Mozilla are running workshops on opportunity sizing. If you're unfamiliar, opportunity sizing is when you take some broad guesses at how impactful some new project might be before writing any code. This gives you a rough estimate of what the upside for this work might be.

The …

Categorieën: Mozilla-nl planet

Alex Gibson: My eighth year working at Mozilla

Mozilla planet - do, 15/04/2021 - 02:00

What will 2020 bring? Your guess is as good as mine. My hope is it can only get better from here.

Fucking. Hell.

Well, that was the most short-sighted and optimistic take ever, eh? It feels like a decade since I wrote that, and a world away from where we all stand today. I would normally write a post on this day to talk about some of the work that I’ve been doing at Mozilla over the past 12 months, but that seems kinda insignificant right now. The global pandemic has hit the world hard, and while we’re slowly starting to recover, it’s going to be a long process. Many businesses worldwide, including Mozilla, felt the direct impact of the pandemic. I count myself fortunate to still have a stable job, and to be able to look after my family during this time. We’re all still healthy, and that’s all that really matters right now.

One thing that’s kept me going over the past year is seeing just how much people can come together to help and support each other. Family, friends, colleagues, management at work - have all been amazing. And as difficult as my kids have found the last 12 months, it motivates me to see them continue to bring enthusiasm to the world. No matter what’s happening that day, they can always cheer me up.

So I’m going to leave this short and just say stay safe. It’s going to be a truly global effort to get through this. Afterward, I’m sure there will likely be a new definition of “normal”. But I have hope that we are going to get there.

Categorieën: Mozilla-nl planet

Allen Wirfs-Brock: Personal Digital Habitats

Mozilla planet - do, 15/04/2021 - 00:35

In early 2013 I wrote the blog post Ambient Computing Challenge: Please Abstract My Digital Life. In it I lamented about the inessential complexity we all encounter living with a multitude of weakly integrated digital devices and services:

I simply want to think about all my “digital stuff” as things that are always there and always available.  No matter where I am or which device I’m using.

… My attention should always be on “my stuff.”  Different devices and different services should fade into the background.

In the eight years since I wrote that blog post not much has changed in how we think about and use our various devices. Each device is still a world unto itself. Sure, there are cloud applications and services that provide support for coordinating some of “my stuff” among devices. Collaborative applications and sync services are more common and more powerful—particularly if you restrict yourself to using devices from a single company’s ecosystem. But my various devices and their idiosyncratic differences have not “faded into the background.”

Why haven’t we done better? A big reason is conceptual inertia. It’s relatively easy for software developers to imagine and implement incremental improvements to the status quo. But before developers can create a new innovative system (or users can ask for one) they have to be able to envision it and have a vocabulary for talking about it. So, I’m going to coin a term, Personal Digital Habitat, for an alternative conceptual model for how we could integrate our personal digital devices. For now, I’ll abbreviate it as PDH because each of the individual words is important. However, if it catches on I suspect we will just say habitat, digihab, or just hab.

A Personal Digital Habitat is a federated multi-device information environment within which a person routinely dwells. It is associated with a personal identity and encompasses all the digital artifacts (information, data, applications, etc.) that the person owns or routinely accesses. A PDH overlays all of a person’s devices [1] and they will generally think about their digital artifacts in terms of common abstractions supported by the PDH rather than device- or silo-specific abstractions. But presentation and interaction techniques may vary to accommodate the physical characteristics of individual devices.

People will think of their PDH as the storage location of their data and other digital artifacts. They should not have to think about where among their devices the artifacts are physically stored. A PDH is responsible for making sure that artifacts are available from each of its federated devices when needed. As a digital repository, a PDH should be a “local-first software” system, meaning that it conforms to:

… a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.

Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. Local-first software: you own your data, in spite of the cloud. 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), October 2019, pages 154–178. doi:10.1145/3359591.3359737

There is one important difference between my concept of a PDH and what Kleppmann et al. describe. They talk about the need to “synchronize” a user’s data that may be stored on multiple devices and this will certainly be a fundamental (and hopefully transparent) service provided by a PDH. But they also talk at length about multi-user collaboration where simultaneous editing may occur. With every person having their own PDH, support for inter-PDH collaborative editing will certainly be important. But I think focusing on multi-user collaboration is a distraction from the personal nature of a PDH. The design of a PDH should optimize for intra-PDH activities. At any point in time, a person’s attention is generally focused on one thing. We will rarely make simultaneous edits to a single digital artifact from multiple devices federated into our PDHs. But we will rapidly shift our attention (and our editing behaviors) among our devices. Tracking intra-PDH attention shifts is likely to be useful for intra-PDH data synchronization. I’m intrigued with understanding how we might use explicit attention shifts such as touching a keyboard or picking up a tablet as clues about user intent.

So let’s make this all a little more concrete by stepping through a usage scenario. Assume that I’m sitting at my desk, in front of a large display [2]. Also sitting on the desk is a laptop, a tablet with a stylus, and a “phone” is in my pocket. These devices are all federated parts of my PDH. Among other things, this means that I have access to the same artifacts from all the devices and that I can frictionlessly switch among them.

  1. Initially, I’m typing formatted text into an essay that is visible on the desktop display.
  2. I pick up the tablet. The draft essay with the text I just typed is visible.
  3. I use the stylus to drag apart two paragraphs creating a drawing area and then make a quick block diagram. As I draw on the tablet, the desktop display also updates with the diagram.
  4. As I look at my drawing in context, I notice a repeated word in the preceding paragraph so I use a scratch-out gesture to delete the extra word. That word disappears from the desktop display.
  5. I put down the tablet, shift my attention back to the large display, and use a mouse and keyboard to select items in the diagram and type labels for them. If I glance at the tablet, I will see the labels.
  6. Shifting back to tablet, I use the stylus to add a couple of missing lines to the diagram.
  7. Suddenly, the desktop speaker announces, “Time to walk to the bus stop, meeting with Erik at Starbucks in 30 minutes.” The announcement might have come from any of the devices; the PDH software is aware of the proximity of my devices and makes sure only one speaks the alert.
  8. I put the laptop into my bag, check that I have my phone, and head to the bus stop.
  9. I walk fast and when I get to the bus stop I see on my phone that the bus won’t arrive for 5 minutes.
  10. From the PDH recent activities list on the phone I open the draft essay and reread what I recently wrote and attach a voice note to one of the paragraphs.
  11. Later, after the meeting, I stay at Starbucks for a while and use the laptop to continue working on my essay, following the suggestions in the voice notes I made at the bus stop.
  12. …and life goes on in my Personal Digital Habitat…

Personal Digital Habitat is an aspirational metaphor. Such metaphors have had an important role in the evolution of our computing systems. In the 1980s and 90s it was the metaphor of a virtual desktop with direct manipulation of icons corresponding to metaphorical digital artifacts that made personal computers usable by a majority of humanity. Since then we added additional metaphors such as clouds, webs, and stores that sell apps. But our systems still primarily work in terms of one person using one physical “computer” at a time—even though many of us, in a given day, frequently switch our attention among multiple computers. Personal Digital Habitat is a metaphor that can help us imagine how to unify all our computer-based personal devices and simplify our digital lives.

This essay was inspired by a twitter thread from March 22, 2021. The thread includes discussion of some technical aspects of PDHs. Future blog posts may talk about the technology and how we might go about creating them.

Footnotes

[1] The person’s general purpose computer-like devices, not the hundreds of special purpose devices such as “smart” light switches or appliances in the surrounding ambient computing environment. A PDH may mediate a person’s interaction with such ambient devices but such devices are not a federated part of the PDH. Should a smart watch be federated into a PDH? Yes, usually. How about a heart pacemaker? NO!

[2] Possibly hardwired to a “desktop” computer.
Categorieën: Mozilla-nl planet

Daniel Stenberg: curl 7.76.1 – h2 works again

Mozilla planet - wo, 14/04/2021 - 08:07

I’m happy to once again present a new curl release to the world. This time we decided to cut the release cycle short and do a quick patch release only two weeks since the previous release. The primary reason was the rather annoying and embarrassing HTTP/2 bug. See below for all the details.

Release presentation Numbers

the 199th release
0 changes
14 days (total: 8,426)

21 bug-fixes (total: 6,833)
30 commits (total: 27,008)
0 new public libcurl function (total: 85)
0 new curl_easy_setopt() option (total: 288)

0 new curl command line option (total: 240)
23 contributors, 10 new (total: 2,366)
14 authors, 6 new (total: 878)
0 security fixes (total: 100)
0 USD paid in Bug Bounties (total: 5,200 USD)

Bug-fixes

This was a very short cycle but we still managed to merge a few interesting fixes. Here are some:

HTTP/2 selection over HTTPS

This regression is the main reason for this patch release. I fixed an issue before 7.76.0 was released and, due to lack of covering tests with other TLS backends, nobody noticed that my fix also broke HTTP/2 selection over HTTPS when curl was built to use one of GnuTLS, BearSSL, mbedTLS, NSS, Schannel, Secure Transport or wolfSSL!

The problem I fixed for 7.76.0: I made sure that no internal code updates the HTTP version choice the user sets, but that it instead updates only the internal “we want this version” state. Without this fix, an application that reuses an easy handle could, without specifically asking for it, get another HTTP version in subsequent requests if a previous transfer had been downgraded. Clearly the fix was only partial.

The new fix should make HTTP/2 work and make sure the “wanted version” is used correctly. Fingers crossed!

Progress meter final update in parallel mode

When doing small and quick transfers in parallel mode with the command line tool, the logic could make the final progress update call get skipped!

file: support getting directories again

Another regression. A recent fix made curl not consider directories over FILE:// to show a size (if -I or -i is used). That did however also completely break “getting” such a directory…

HTTP proxy: only loop on 407 + close if we have credentials

When an HTTP(S) proxy returns a 407 response and closes the connection, curl would retry the request to it even if it had no credentials to use. If the proxy just consistently did the same 407 + close, curl would get stuck in a retry loop…

The fixed version now only retries the connection (with auth) if curl actually has credentials to use!

Next release cycle

The plan is to make the next cycle two weeks shorter, to get us back on the previously scheduled path. This means that if we open the feature window on Monday, it will be open for just a little over two weeks, then give us three weeks of only bug-fixes before we ship the next release on May 26.

The next one is expected to become 7.77.0. Due to the rather short feature window this coming cycle I also fear that we might not be able to merge all the new features that are waiting to get merged.

Categorieën: Mozilla-nl planet

Ludovic Hirlimann: My geeking plans for this summer

Thunderbird - do, 07/05/2015 - 10:39

During July I’ll be visiting family in Mongolia but I’ve also a few things that are very geeky that I want to do.

The first thing I want to do is plug in the RIPE Atlas probes I have. They are little devices that look like this:

Hello @ripe #Atlas !

They enable anybody with a RIPE Atlas or RIPE account to make measurements for DNS queries and other things. This helps make the global internet better. I have three of these probes I’d like to install. It’s good because last time I checked Mongolia didn’t have any active probe. These probes will also help the Internet become better in Mongolia. I’ll need to buy some network cables before leaving because finding these in Mongolia is going to be challenging. More on Atlas at https://atlas.ripe.net/.

The second thing I intend to do is map Mongolia a bit better through two projects. The first is related to Mozilla and maps GPS coordinates with Wi-Fi access points. Only a small part of the capital Ulaanbaatar is covered, as per https://location.services.mozilla.com/map#11/47.8740/106.9485. I want this to be much more complete, because having an open data source for this is important for the future. As mapping is my new thing, I’ll probably edit OpenStreetMap in order to make the urban parts of Mongolia that I’ll visit much more usable on all the services that use OSM as a source of truth. There is already a project to map the capital city at http://hotosm.org/projects/mongolia_mapping_ulaanbaatar but I believe OSM can serve more than just 50% of Mongolia’s population.

I got inspired to write this post by my son this morning; look what he is doing at 17 months:

Geeking on a Sun keyboard at 17 months
Categorieën: Mozilla-nl planet

Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - do, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that I would read aloud while the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox Gecko engine on an Android Linux kernel where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

[Screenshots: the Firefox OS home screen, the clock app, and the email app]

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
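
As a rough illustration of that advice (not code from the talk), here is what the same sort of small cache write and read might look like through localForage's promise-based API, assuming the library is loaded and exposes its usual localforage global:

// Hypothetical sketch using localForage instead of raw localStorage.
// localForage picks IndexedDB, WebSQL, or localStorage under the hood.
function saveHtmlCache(html) {
  // Asynchronous write; nothing blocks the event loop while data hits disk.
  return localforage.setItem('htmlCache', html);
}

function loadHtmlCache() {
  // Asynchronous read; resolves with null if the key does not exist.
  return localforage.getItem('htmlCache');
}

loadHtmlCache().then(function (cachedHtml) {
  if (cachedHtml) {
    console.log('Found cached HTML, ' + cachedHtml.length + ' characters');
  }
});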

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to gets its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

"DOMContentLoaded" fires when the document has been fully parsed and all scripts not tagged as "async" have run.  If there were stylesheets referenced prior to the script tags, the scripts will wait for those stylesheets to load.

"load" fires when the document has been fully loaded: stylesheets, images, everything.

But neither of these events has anything to do with the content in the page saying it's actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it's not there yet.  So Firefox OS just uses the "load" event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded "flash of unstyled content" problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard CommonJS loader semantics used by node.js and io.js are what you see on the first line: load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But wait, there's more!  Loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the 'text' plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.
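To make the plugin contract concrete, here is a stripped-down sketch of a 'text'-style loader plugin; error handling and the optimizer hooks are omitted, and this is not the real plugin's source:

// A loader plugin is just a module whose export has a load() method.
define(function () {
  return {
    // name:   the resource name after the '!', e.g. './foo.html'
    // req:    a require() scoped to the requesting module
    // onload: callback used to hand back the final value for the dependency
    load: function (name, req, onload, config) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', req.toUrl(name), true);
      xhr.onload = function () {
        onload(xhr.responseText);
      };
      xhr.send();
    }
  };
});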

So when our optimizer runs, it bundles up the core modules we use, plus the modules for our "message list" card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies, and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there's just one single JavaScript file to load, with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.
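A hypothetical r.js build profile along those lines might look like this; the module and directory names are illustrative, not the email app's actual configuration:

({
  appDir: '.',
  baseUrl: 'js',
  dir: 'build',
  modules: [
    // Core startup layer: includes the message list card plus any
    // text!/tmpl! resources it references, inlined by the plugins.
    { name: 'main' },
    // Other important cards get their own layers, excluding anything
    // already bundled into the core layer.
    { name: 'cards/compose',        exclude: ['main'] },
    { name: 'cards/message_reader', exclude: ['main'] }
  ]
})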

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
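As a sketch, using hypothetical card module names:

// After the message list card is shown, warm up the cards it can
// transition to.  The array form of require() is asynchronous, so this
// does not block the UI; we just get the modules (and their inlined
// templates) into memory before the user taps reply or opens a message.
function preloadNextCards() {
  setTimeout(function () {
    require(['cards/compose', 'cards/message_reader'], function () {
      // Nothing to do here; the load itself was the point.
    });
  }, 0);
}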

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page's DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed by multiple pages from the same document origin.
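The basic shape, sketched with hypothetical message types and file names:

// Main thread: start the back-end in a worker and talk to it purely
// through postMessage.  (With a SharedWorker you would construct
// new SharedWorker(url) and talk through worker.port instead.)
var backend = new Worker('js/backend-built.js');
backend.onmessage = function (evt) {
  if (evt.data.type === 'folderLoaded') {
    renderMessageList(evt.data.messages); // hypothetical UI hook
  }
};
backend.postMessage({ type: 'loadFolder', folder: 'INBOX' });

// Inside the worker script: no DOM access, just messages.
self.onmessage = function (evt) {
  if (evt.data.type === 'loadFolder') {
    // ...hit IndexedDB and/or the network here...
    self.postMessage({ type: 'folderLoaded', messages: [] });
  }
};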

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.
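In very rough terms, with the row height and container element as hypothetical placeholders:

// "Screens worth" constants; the real values were tuned on prototype
// hardware as described above.
var AHEAD_SCREENS = 3, BEHIND_SCREENS = 3;

// Size the scroll region to the whole folder, even though only a small
// window of message summary nodes will ever exist at once.
function sizeScrollRegion(container, messageCount, rowHeightPx) {
  container.style.height = (messageCount * rowHeightPx) + 'px';
}

// Which message indices deserve real DOM nodes right now?
function renderedRange(scrollTop, viewportPx, rowHeightPx, messageCount) {
  var perScreen = Math.ceil(viewportPx / rowHeightPx);
  var firstVisible = Math.floor(scrollTop / rowHeightPx);
  return {
    first: Math.max(0, firstVisible - BEHIND_SCREENS * perScreen),
    last: Math.min(messageCount - 1,
                   firstVisible + (1 + AHEAD_SCREENS) * perScreen)
  };
}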

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their 'top' style, but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
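The recycling step, as a sketch; updateRowContents is a hypothetical helper that fills in sender, subject, and so on:

// Detach the node before repositioning it so that, when it re-enters
// the DOM, the layerization logic treats it as static content instead
// of promoting it to its own layer.
function recycleRow(container, node, newIndex, rowHeightPx, message) {
  container.removeChild(node);
  node.style.top = (newIndex * rowHeightPx) + 'px';
  updateRowContents(node, message);
  container.appendChild(node);
}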

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1. Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it's a very good idea to use a library rather than rolling your own.

2. Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what's going on or at least help tell you whether you're making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something's wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome to let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

Categorieën: Mozilla-nl planet

Joshua Cranmer: Breaking news

Thunderbird - wo, 01/04/2015 - 09:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

Categorieën: Mozilla-nl planet

Joshua Cranmer: Why email is hard, part 8: why email security failed

Thunderbird - di, 13/01/2015 - 05:38
This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others failed?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriad cryptography enthusiast mailing lists, which often like to propose new paradigms for end-to-end email encryption, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I think it unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped on many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to do other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: lost private keys and key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser known is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 in a single folder—would still prefer to not have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

Categorieën: Mozilla-nl planet

Joshua Cranmer: A unified history for comm-central

Thunderbird - za, 10/01/2015 - 18:55
Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step and probably the hardest is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy after I wrote this post on git.mozilla.org. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash
set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204
# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a
# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets that are useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are simple directory-wide changes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I replaced the % in the usernames with the @ that hg users are used to seeing. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
  hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
    | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT

hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')

popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.

Categorieën: Mozilla-nl planet
