Mozilla Thunderbird: January 2024 Community Office Hours: Context Menu Updates
UPDATE: Our January Office Hours was fantastic! Here’s the full video replay.
A New Year of New Office Hours

We’re back from our end-of-year break, breaking in our new calendars, and ready to start 2024 with our renewed, refreshed, and refocused community office hours. Thank you to everyone who joined us for our November session! If you missed our chat about the new Cards View and the Thunderbird design process, you can find the video (which also describes the new format) in this blog post.
We’re excited for another year of bringing you expert insights from the Thunderbird Team and our broader community. To kick off 2024, and to build on November’s excellent discussion, we’ll be continuing our dive into another important aspect of the Thunderbird design process.
January Office Hours Topic: Message Context Menu

Mock-up: designs shown are not final and subject to change.

We’ve been working on some significant (and, we think, pretty fantastic) UI changes to Thunderbird. Besides the new Cards View, we have some exciting overhauls planned for the Message Context Menu (aka the right-click menu). UX Engineer Elizabeth Mitchell will discuss these changes and, most importantly, why we’re making them. Elizabeth is also one of the leaders in making Thunderbird accessible for all! We’re excited to hear how the new Message Context Menu will make your email experience easier and more effective.
If you’d like a sneak peek of the Context Menu plans, you can find them here.
And as always, if you have any questions you’d like to ask during the January office hours, you can e-mail them to officehours@thunderbird.net.
Join Us On Zoom

(Yes, we’re still on Zoom for now, but a Jitsi server for future office hours is in the works!)
When: January 25 at 18:00 UTC
Direct URL To Join: https://mozilla.zoom.us/j/92739888755
Meeting ID: 92739888755
Password: 365021
Dial by your location:
- +1 646 518 9805 US (New York)
- +1 669 219 2599 US (San Jose)
- +1 647 558 0588 Canada
- +33 1 7095 0103 France
- +49 69 7104 9922 Germany
- +44 330 088 5830 United Kingdom
- Find your local number: https://mozilla.zoom.us/u/adkUNXc0FO
The call will be recorded, and this post will be updated with a link to the recording afterwards.
Stay Informed About Future Thunderbird Releases and Events

Want to be notified about upcoming releases AND Community Office Hours? Subscribe to the Thunderbird Release and Events Calendar!
The post January 2024 Community Office Hours: Context Menu Updates appeared first on The Thunderbird Blog.
Mozilla Open Policy & Advocacy Blog: Platform Tilt: Documenting the Uneven Playing Field for an Independent Browser Like Firefox
Browsers are the principal gateway connecting people to the open Internet, acting as their agent and shaping their experience. The central role of browsers has long motivated us to build and improve Firefox in order to offer people an independent choice. However, this centrality also creates a strong incentive for dominant players to control the browser that people use. The right way to win users is to build a better product, but shortcuts can be irresistible — and there’s a long history of companies leveraging their control of devices and operating systems to tilt the playing field in favor of their own browser.
This tilt manifests in a variety of ways. For example: making it harder for a user to download and use a different browser, ignoring or resetting a user’s default browser preference, restricting capabilities to the first-party browser, or requiring the use of the first-party browser engine for third-party browsers.
For years, Mozilla has engaged in dialog with platform vendors in an effort to address these issues. With renewed public attention and an evolving regulatory environment, we think it’s time to publish these concerns using the same transparent process and tools we use to develop positions on emerging technical standards. So today we’re publishing a new issue tracker where we intend to document the ways in which platforms put Firefox at a disadvantage and engage with the vendors of those platforms to resolve them.
This tracker captures the issues we experience developing Firefox, but we believe in an even playing field for everyone, not just us. We encourage other browser vendors to publish their concerns in a similar fashion, and welcome the engagement and contributions of other non-browser groups interested in these issues. We’re particularly appreciative of the efforts of Open Web Advocacy in articulating the case for a level playing field and for documenting self-preferencing.
People deserve choice, and choice requires the existence of viable alternatives. Alternatives and competition are good for everyone, but they can only flourish if the playing field is fair. It’s not today, but it’s also not hard to fix if the platform vendors wish to do so.
We call on Apple, Google, and Microsoft to engage with us in this new forum to speedily resolve these concerns.
The post Platform Tilt: Documenting the Uneven Playing Field for an Independent Browser Like Firefox appeared first on Open Policy & Advocacy.
The Servo Blog: Tauri update: embedding prototype, offscreen rendering, multiple webviews, and more!
Back in November, we highlighted our ongoing efforts to make Servo more embeddable, and today we are a few steps closer!
Tauri is a framework for building desktop apps that combine a web frontend with a Rust backend, and work is already ongoing to expand it to mobile apps and other backend languages. But unlike, say, Electron or React Native, Tauri is both engine-agnostic and frontend-agnostic, allowing you to use any frontend tooling you like and whichever web engine makes the most sense for your users.
To integrate Servo with Tauri, we need to add support for Servo in WRY, the underlying webview library, and the developers of Tauri have created a proof of concept doing exactly that! While this is definitely not production-ready yet, you can play around with it by checking out the servo-wry-demo branch (permalink) and following the README.
While servoshell, our example browser, continues to be the “reference” for embedding Servo, this has its limitations in that servoshell’s needs are often simpler than those of a general-purpose embeddable webview. For example, the “minibrowser” UI needs the ability to reserve space at the top of the window, and hook the presenting of new frames to do extra drawing, but it doesn’t currently need multiple webviews.
This is where working with the Tauri team has been especially invaluable for Servo — they’ve used their experience integrating with other embeddable webviews to guide changes on the Servo side. Early changes include making it possible to position Servo webviews anywhere within a native window (@wusyong, #30088), and give them translucent or transparent backgrounds (@wusyong, #30488).
Support for multiple webviews in one window is needed for parity with the other WRY backends. Servo currently has a fairly pervasive assumption that only one webview is active at a time. We’ve found almost all of the places where this assumption was made (@delan, #30648), and now we’re breaking those findings into changes that can actually be reviewed and landed (@delan, #30840, #30841, #30842).
Support for multiple windows sounds similar, but it’s a lot harder. Servo handles user input and drawing with a component known for historical reasons as the “compositor”. Since the constellation — the heart of Servo — is currently associated with exactly one compositor, and the compositor is currently tightly coupled with the event loop of exactly one window, supporting multiple windows will require some big architectural changes. @paulrouget’s extensive research and prior work on making Servo embeddable will prove especially helpful.
Offscreen rendering is critical for integrating Servo with apps containing non-Servo components. For example, you might have a native app that uses Servo for online help or an OAuth flow, or a game that uses Servo for purchases or social features. We can now draw Servo to an offscreen framebuffer and let the app decide how to present it (@delan, #30767), rather than assuming control of the whole window, and servoshell now uses this ability except when the minibrowser is disabled (--no-minibrowser).
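The pattern described above can be sketched in plain Rust. This is an illustrative sketch only: the `OffscreenBuffer`, `engine_render`, and `app_present` names are stand-ins of ours, not Servo's actual embedding API. The point is the inversion of control: the engine draws into a buffer the embedder owns, and the embedder decides how (and whether) to present it.

```rust
/// A minimal stand-in for an offscreen framebuffer: raw RGBA8 pixels
/// owned by the embedding app, not by the engine.
struct OffscreenBuffer {
    width: usize,
    height: usize,
    pixels: Vec<u8>, // RGBA8, row-major
}

impl OffscreenBuffer {
    fn new(width: usize, height: usize) -> Self {
        OffscreenBuffer { width, height, pixels: vec![0; width * height * 4] }
    }
}

/// The "engine" renders into whatever buffer it is handed, instead of
/// assuming it controls the window's default framebuffer.
fn engine_render(target: &mut OffscreenBuffer) {
    for px in target.pixels.chunks_mut(4) {
        // A solid fill stands in for real page content.
        px.copy_from_slice(&[0x12, 0x34, 0x56, 0xff]);
    }
}

/// The embedder presents however it likes: blit to a window, upload as a
/// game texture, encode to video, etc. Here we just report what we got.
fn app_present(buffer: &OffscreenBuffer) -> (usize, usize, u8) {
    (buffer.width, buffer.height, buffer.pixels[0])
}

fn main() {
    let mut fb = OffscreenBuffer::new(640, 480);
    engine_render(&mut fb);
    let (w, h, _first_byte) = app_present(&fb);
    println!("rendered {}x{} offscreen", w, h);
}
```

In the real change (#30767), the buffer is a GPU framebuffer rather than a `Vec<u8>`, but the ownership split is the same: the app, not the engine, decides what happens to the rendered frame.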
Precompiling mozangle and mozjs would improve developer experience by reducing initial build times. We can now build the C++ parts of mozangle as a dynamic library (.so/.dylib/.dll) on Linux and macOS (@atbrakhi, mozangle#71), though more work is needed to distribute and make use of them.
We’re exploring two approaches to precompiling mozjs. The easier approach is to build the C++ parts as a static library (.a/.lib) and cache the generated Rust bindings (@wusyong, mozjs#439). Building a dynamic library (@atbrakhi, mozjs#432) will be more difficult, but it should reduce build times even further.
Many thanks to NLnet for sponsoring this work.
Adrian Gaudebert: The State of Adrian 2023
The year 2023 is over, so, as I have for the past three years, it's time to take stock of what I've done over the last twelve months. I'm starting this retrospective with the feeling of having done "nothing", but that's because I focused almost exclusively on one single project, our game Dawnmaker. As you'll see, the year was actually quite busy for me. On to the review!

Main Projects

Arpentor Studio

Our company, co-founded with Alexis, stagnated this year. We're still just the two of us, although we were three at two points during the year, with Aurélie on sound design at the start of the year, then Agathe on UX/UI design for two months. We have almost no money coming in, we don't pay ourselves, and we've also cut operating expenses to the minimum in order to last as long as possible.
Mechanically, running the studio took less of my time this year. I had to put together an application for the balance of a grant from the Region, run several iterations on Dawnmaker's budget for negotiations (which unfortunately led nowhere) with a publisher, and handle the monthly administrative upkeep (essentially, sending invoices to our accountant).
The main mistake I learned from, and corrected course on, in 2023 concerns our game's publishing strategy. Since early 2022, our roadmap had been built around signing a publisher, a partner who would finance and publish Dawnmaker. I now believe that was a mistake, all the more so in the current state of the video game industry: publishers are going through a period of financial drought, driven by many factors: the 2021 financial bubble that followed COVID and the sharp rise in gaming, the many very big releases of 2023, also delayed by COVID, which cannibalized sales of independent games, and of course bank interest rates, which have exploded. As a result, in 2023 publishers are skittish, and it has become very hard to sell them your game.
Basing your company's financial strategy on funding from an external partner over which we have no control therefore strikes me as an enormous risk. Yet it's the strategy of the vast majority of video game studios today, for a very simple reason: producing a video game is very expensive! For our part, we're currently fortunate to be able to work without salaries, thanks in particular to the RSA (France's basic income support). That situation, however, is neither enviable nor sustainable in the medium term.
Faced with all of this, I decided to change our strategy for Dawnmaker. We are no longer planning around finding a publisher. Our main plan is now to release the game ourselves (self-publishing), on a timeline that lets us both bring it to a quality we consider acceptable for a commercial product and avoid sinking ourselves financially. So we have two deadlines. By early March, we must have finished the game's vertical slice, a version that contains all of the game's systems but only part of its content. It's a high-quality product, close to the expected final state, and therefore representative of what we want to make. We'll then give ourselves about three months to look for a publisher again, while running a marketing campaign and making a few improvements to the game based on feedback from our testers. If by May we haven't secured funding, we'll release the game ourselves, most likely at the end of June. It will be a cut-down version of the game, far from the content we'd like it to have, but a version that is nonetheless functional and of professional quality. And of course, if a publisher commits to our game and finances it, we'll switch to the secondary plan: run a proper production phase, hire a few more people, and release a complete version of the game, probably in early 2025.
In short, Arpentor Studio is moving forward, but it's hard. 2024 will be a decisive year for the studio, with either a publisher signing on for our first game, or the game's release. Either way, it should bring some money into the company, which I'm looking forward to!
Dawnmaker

The project formerly known as Cities of Heksiga has changed names and is now called Dawnmaker! I spent most of my 2023 working on it, focusing on three areas: programming, game design (designing the game's rules and content), and marketing.
Dawnmaker changed a lot over the year. The game went from a very (very) basic 2D renderer to a 3D one at the start of the year, then came back to 2D over the summer. The move to 3D had been planned for a long time, but it turned out to be a mistake. Almost every publisher we showed the game to pointed it out. The question that made us turn back was: "what added value does 3D bring to the game?" We had a very hard time answering it…
So we backtracked, though not entirely, since I took the opportunity to rewrite the whole renderer with a new technology optimized for 2D. It served us well: the game looks much, much better now! It also runs better on my aging machine, which is a good sign for an eventual mobile release. I also improved our content editor so that Alexis can be as autonomous as possible when integrating building assets.
Here is a small timeline of the game's progress in 2023:
In January

In May

In November
Beyond the visuals, we added a lot of content (around forty new buildings, around twenty new cards), important mechanics (notably a roguelike-style progression loop), and many things to improve the game's playability (new interfaces, thanks in particular to contributions from Menica Folden, drag-and-drop for playing cards, small animations everywhere…).
As announced in my 2022 retrospective, Dawnmaker made enormous progress this year, going from a prototype to a real video game. There is still a lot left to sort out, though: the progression loop still isn't functional, there's no onboarding for new players, and a big chunk of the game's interface still needs rework… And all of that, ideally, by March 2024! Suffice to say: it's going to be tight. But the release is getting closer, and that feels great! Maybe you'll be able to buy Dawnmaker in 2024?
Side Projects

Souls

As last year, I barely had a chance to touch Souls, my old competitive card game project. But only "barely", because yes, I did take it back out of its box and played a game of it. It was a chance to remember where I'd left off, and above all to rediscover all the flaws of the current version. I'm still not actively working on it, but I have good hopes of getting back to it a bit in 2024.

Blog

At the start of the year, I set myself the goal of publishing six articles on this blog, one every two months. The goal was almost reached: I published five.
- The State of Adrian 2022
- How much does it cost to make a game like Dawnmaker?
- Dawnmaker's endless conundrum of infinite replayability
- Removing Dawnmaker's 3rd dimension
- The ruins of Dawnmaker's lost continent
Most of these articles are in English, because they also served as content for the Arpentor Studio newsletter, launched this year.
I struggled a bit to get into a regular writing habit, but I built myself a system during the year (basically, a reminder every two months), and since then I've stuck to it! I have good hopes of keeping up this pace in 2024, so I can keep sharing my experiences with you.
Other Games

In 2023 I finally joined a local game designers' association, the Compagnie des zAuteurs Lyonnais (CAL). It's an informal group of board game designers that meets regularly in Lyon's board game bars. It was my chance to finally get into that world, to try some very cool prototypes, and above all to show my own. Because yes, I did keep working, episodically, on game prototypes.
The first goes by the code name "Little Brass Imhotep", because it's designed to sit at the crossroads of Little Town, Brass: Birmingham, and Imhotep: The Duel. The central concept is this: there is a 5-by-5 board on which players construct buildings. These buildings can be activated to produce resources or victory points. Players have workers, which they place at the end of a row or column, thereby activating every building in that row or column. Constructing a building scores victory points and creates or improves an engine, but it also gives your opponent opportunities to exploit it.
The first playtests revealed many gaps in the game system, notably too much symmetry in resources and effects, which makes the core mechanic, constructing buildings, unappealing. The prototype has stayed there for now.
My second prototype of the year was born from the desire to cross the experience of a Magic draft (probably my favorite gaming experience) with the very short feedback loop of an autobattler. The first version of the game, code name "Cube Light" (yes, I know, I'm terrible at names), goes like this: at a table of 8 players, everyone receives a deck of 4 cards (the same for each person), then starts drafting packs of 4 cards. Each player then builds a 7-card deck, discarding one of their cards. Then you play a 1-on-1 match: each player draws three cards, then simultaneously places their 3 cards, face down, across three locations laid out in the middle of the table. Once the cards are placed, they are revealed in turn. Of course, each card has various effects, as do the locations, so you have to place your cards in the right spot, and anticipate the next cards you'll draw, to create powerful combinations. At the end of the second turn, the round is over, and you tally the combined power of the characters played at each location. A player with strictly more power than their opponent at a location controls it, and the player who controls the most locations wins the round. Then a new draft phase begins, with players changing seats. You build a 10-card deck and play a round of three turns. You repeat this for 4 rounds, and at the end of the last round, the player with the most victory points wins the game!
While building this prototype, I realized it comes extremely close to Challengers, an excellent game released in 2022 whose pitch is quite close to mine: reproducing the autobattler experience as a board game. My goal, however, is an experience closer to Magic's, that is, with more strategic decisions both during card selection (the draft phase) and during the rounds.
The first playtest revealed plenty of room for improvement, but the core of the game works well and forms a solid foundation. I hope to find time this year to pick this prototype back up and turn it into a fun game, at least for my group of Magic players.

My Recommendations of the Year

And that's it for my review of 2023's work! Time to wrap up with a more fun section. Once again this year, I'd like to share the cultural discoveries I most enjoyed over the last twelve months.

My Video Game of the Year

2023 was a lean year for video games for me. Maybe spending my days working on a game keeps me from fully enjoying others? Maybe it's because I spent much of my play time studying games related to Dawnmaker? Or is it simply circumstance, and no game truly grabbed me, or left a mark, this year?
Whatever the reason, the best game I played this year is Baldur's Gate 3. I'm a huge fan of the first two titles, on which I spent an enormous amount of time as a teenager. I approached the third installment with a lot of apprehension, but it didn't disappoint. The game really does feel like a pure Baldur's Gate, but a modern one. Some characters are very endearing, the story is gripping, and the content is gigantic. That's almost the only real downside for me, actually: I don't like missing things in a game, so I spent too much time scouring everything. And I know I still missed plenty, because the game is designed that way.
In short: Baldur's Gate 3 deserves its Game of the Year title.
My Board Games of the Year

Too hard to pick just one game this year, so here are two: Spirit Island and Brass: Birmingham! Two heavy games that demand a lot of thinking, one cooperative and the other competitive.
Spirit Island, the cooperative one, puts you in the role of the protective spirits of an island being invaded by colonists. Each spirit has a different playstyle, special abilities, and a unique set of starting cards. Solo or with your allies, you must develop your resources (gain more energy to play your cards, acquire new, more powerful cards…) and use your cards to destroy the invaders, prevent them from building towns and cities, and stop them from spreading blight across your lush island. It's a truly excellent game in which every turn is a big collective puzzle, with interactions between the players' abilities. As a bonus, its complexity strongly limits the "alpha player" effect, where one player directs all the others.
Brass: Birmingham, by contrast, is a competitive game set in the England of the Industrial Revolution. A pure management game, it has you constructing buildings (coal mines, foundries, factories, mills…) to develop your resources and score victory points. You also build canals and railways, sell resources, and adapt to the cards in your hand to position yourself on the map. There's a big planning aspect, counterbalanced by the importance of being opportunistic at times. It's not the number 1 game on BoardGameGeek for nothing!

My Comic of the Year

The Nice House on the Lake, volumes 1 and 2, take the prize for 2023! It's an American science fiction comic, a closed-room story that starts very simply and turns, very quickly, into something unsettling. There are interludes showing a dramatic future, a very enigmatic character at the center of the plot, and stakes that build progressively toward an opening, at the end of volume 2, that really makes you want to read what comes next! It's hard to say more, since all the pleasure of reading lies in discovering the plot, but it gets a strong recommendation from me.

My Book of the Year

Incredibly, my favorite book of 2023 isn't fiction but a productivity book: How to Take Smart Notes. In it, the author presents a note-taking method created by the sociologist Niklas Luhmann. The method is simple, but it takes real discipline for it to reach its full potential. In short: take temporary notes, constantly, then regularly turn them into "permanent" notes: self-sufficient, written-out notes that are, above all, systematically linked to other notes. The idea is to build up a base of notes that you reread regularly, following links and, most importantly, creating new ones whenever it's relevant. It's both a way to learn better, by forcing yourself to write down what you learn and the ideas you develop, and a way to structure your thinking and articulate your ideas, to transform them into novel, impactful tools.

Conclusions on 2023

Well then, what a strange year, as expected. Working this long on a single project, or nearly so, is exhausting. Fortunately, we were able to show concrete progress over the year, notably thanks to our Discord server and the newsletter I launched. But come review time, the feeling that nothing has moved forward is really strong, even though it's completely false. At the end of 2022, I declared that I still had plenty of energy; now, I have to admit that's less true. I'm counting on 2024 to shake all that up and recharge me!
With that, I thank you warmly for reading, I wish you a very happy 2024, and I'll see you soon on this blog for a big announcement about Dawnmaker!
Adrian Gaudebert: The State of Adrian 2022
It's time, belatedly, to take stock of my year 2022! As you'll read, the year was a busy one, which explains why I'm a little late writing this post… But to make it up to you, I've added a few cultural recommendations at the end!
So here's a summary of what I did in 2022…

Main Projects

Arpentor Studio

My main project in 2022 was obviously the video game studio we created, Alexis and I. I told most of the story in my post Starting a Games Studio [en], but I'd like to come back here to other aspects of that adventure, in particular some of the mistakes we made.
At the start of the year, we joined the Let's GO incubator, run by the regional association Game Only. Applying was an excellent decision: the program brought us an enormous amount of knowledge, contacts, and opportunities, plus some good fun too! But it led us to make a fundamental mistake: we let ourselves be carried along by the knowledge being handed to us, without asking whether it was actually relevant to apply it at that moment.
Concretely, we changed our initial plan. We had wanted to focus on creating a game relatively quickly, within one year to a year and a half. Swept up by the training sessions, especially those on funding, we revised that plan to make it bigger: involve more people, spend more money so we could ask for more, and so on. This change of strategy had several consequences:
- We spent an enormous amount of time on funding applications, pitch decks, and other money-seeking documents, and not enough actually working on our game, so we fell far behind on its production. Yet without a reasonably polished game, without a real demo showing our know-how, there is no hope of signing a contract with a publisher, without which we can't finish our game anyway.
- We counted on funding that, as it turned out, wasn't as easy to obtain as expected. We started paying Alexis and myself, we hired an employee, we committed to travel expenses for trade shows… Failing to obtain the main public funding we were counting on put us in a situation that could have become critical: bankruptcy. Fortunately for us, we managed to catch it early enough. Unfortunately, that meant parting with our employee, stopping our own salaries, and cutting our future expenses.
- We let our game grow, adding many features, until it reached a point where I estimated we would have needed a team of more than 10 people for a year and a half to finish it. There too, we managed to substantially reduce the game's scope and come back to something more reasonable for us, without compromising (too much) on the vision we had.
This year was therefore a trying one for me, something of a roller coaster: we spent part of the year dreaming of a big production, of staggering funding, of making a very ambitious game. And then the cinder block of reality crashed down onto the strawberry tartlet of our illusions, and we had to come back to more reasonable things, make hard decisions, and hurt people.
Despite all that, or thanks to all that, I learned an enormous amount in 2022: about game production, company strategy, recruitment, relations with publishers… The timing wasn't always right for learning those things, but I know we'll remember them when the time comes, and that it won't have been for nothing. The essential thing, as a great man recently told me, isn't to stop making mistakes: it's to always make new ones.
If I had to start over tomorrow, I would stick to the plan of starting small and growing very slowly. Start by making near game-jam games, in just a few days, then make a game in one month, then two, then four, and so on. The idea being to build skills, slowly but surely, across the entire video game production chain, and to get known by regularly releasing content. It's a model that has worked well for other studios, and it seems genuinely healthy to me for someone like me without 10 years of industry experience. It's also, I believe, a good way to create a financially stable company in this difficult business.
To conclude, Arpentor Studio is doing fine. At the end of the year, we made sure to right the ship, and we're now heading on a course that seems more coherent and safer to us. We probably won't release a game in 2023, but we'll make enormous progress on one, we'll grow the team, and we'll put everything in place to release the best possible game in 2024.
Status: in progress.
Cities of HeksigaA video game studio naturally implies a video game. It's not really a secret (even though I've said little about it): for a bit over a year we've been working on a game we currently call Cities of Heksiga. It's a single-player strategy game, for PC and mobile, set in a steampunk fantasy universe. It's something of a digital board game, in the vein of Terraforming Mars for example, mixing deck building (improving a deck of cards over the course of a game by acquiring ever stronger or more synergistic cards) with tile placement on a board. I won't say more for now because we still have a lot to stabilize, but that will come soon enough. Know that we're currently aiming for a release in the first half of 2024.
On this game I'm responsible for programming (the game is built with web technologies, in TypeScript, with an interface that uses Svelte) as well as game design, that is, designing the game's mechanics. Alexis, for his part, handles art direction, the creation of all the graphical assets, and the game's narrative. We're also supported by Aurélie, who creates the music and all the sound effects that enrich the experience.
In 2022 I worked on several prototypes of the game (I count at least a dozen according to our documentation), iterating each time on the game's core mechanics to find a formula that works. I made a few paper prototypes, but I quickly moved to digital versions, because our mechanics involved a whole set of calculations and automatic actions that were hard to perform by hand.
The Cities of Heksiga prototype as of January 12, 2023
I also worked on tooling, notably a content management tool for the game: I have a very simple interface that lets me quickly create a new building, or update an existing one, then export it all in a single click. The fact that we use web technologies makes me very efficient here, and I have high hopes of setting up a finely tuned game design workflow within a few months.
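For illustration only, here is a minimal sketch of what such a one-click export step could look like (the types and names here are purely hypothetical, not our actual code):

```typescript
// Hypothetical shape of a building entry in a content management tool.
interface Building {
  id: string;
  cost: number;
  production: Record<string, number>; // e.g. { wood: 1 }
}

// Serialize the whole content set as a single JSON payload the game can load.
// In a web-based tool, this is the entirety of the "export" button.
function exportContent(buildings: Building[]): string {
  return JSON.stringify({ version: 1, buildings }, null, 2);
}
```

The appeal of a web stack here is that the editing UI, the validation logic, and the game itself can all share these same TypeScript types.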
At the end of 2022, we finally, though with difficulty, finished our prototyping phase. That is, we consolidated the game's core mechanics and validated them (well, not quite, but it's underway and I'm confident), and we can now move on to the next step: creating a truly killer demo, and slowly fleshing out the game by adding new mechanics and content.
As I said in the previous section, we spent too little time working on this game this year. But that had one advantage: we had the time to get it playtested, and to gather calm, constructive feedback on the strengths and weaknesses of our various prototypes. In the end, we were able to identify fundamental problems and fix them, which would have been harder with our heads down in the work. A blessing in disguise!
In 2023, Cities of Heksiga should really take shape, going from a prototype to a proper demo, then to a vertical slice: a version representative of what we want the final game to be. We currently plan to release the game in the first half of 2024.
Status: in progress.
Side projects: SoulsSouls, my competitive online card game, took a long pause in 2022. In the middle of everything else, I simply didn't have the time to get back to it. But all my other work aims to build skills and create a context in which it will be possible to make Souls a success. So in a way, it's still moving forward!
Status: on pause.
Board Game Jam 2Here is my big side project of recent months: organizing a board game creation jam. It's an idea my friend Aurélien and I had had for a veeeery long time, which finally came together in early 2020 through the Game Dev Party association… only to be cut off in the middle by the announcement of the first lockdown. So I'm very happy to have finally seen a real Board Game Jam through to the end!
But what on earth is this thing, you ask? A jam is a creation event, originally for video games, done in teams, usually over a weekend. Around fifty people gather in one physical place, split into groups, and spend their weekend creating a video game from scratch. In Lyon, the Game Dev Party association has made organizing these events its specialty since 2011, and I've been an organizing member since 2012. A Board Game Jam is the same principle, but for tabletop games.
The table of materials made available to participants
The event took place in mid-January and was a clear success: about 40 participants and 9 games created over the weekend. The weekend went off without any major hitch (let's forget the few technical hiccups on Sunday evening), people seemed happy, and the games produced were incredibly engaging and varied.
I'm particularly delighted with this format. Working on a video game presents a real technical challenge: you have to program, illustrate, do the sound design… The iteration time is relatively long between the moment you have an idea and the moment you can actually test it, keyboard, mouse, or gamepad in hand. With tabletop games, that iteration time shrinks dramatically. A new card idea? A scrap of paper, a pencil, and presto: the card is created and ready to be tested.
Carrying this event was exhausting, but I'm proud of what we accomplished, and I'm counting heavily on other people to organize more events of this kind. Because it's really frustrating to watch all these people create games and not take part!!!
Status: done.
BlogSo I'm leaving a note for my future self to stay open about my work: it's draining to move forward without anything concrete "shipping", without the satisfaction of having finished something. So, 2022 Adrian: don't forget to talk about what you're doing, to show your progress, even if it's ugly, even if it barely works, because it will give you the feeling of moving forward, and it will help you a lot!
Missed! I only published two articles in 2022: How I did my market research on Steam [en] in March, then Starting a Games Studio [en] in August. The latter was a huge piece of work, spread over several months, but it's still far from enough for me. Fortunately, I did share my work, just elsewhere: on a Discord server we use for playtests of our game, and within the Let's GO incubator. I didn't feel the need to write more, even though it remains a goal I'd like to meet one day. I learned a lot from people who shared their experiences before me, and I want to give back in the same way. It's in that spirit that I wrote those two posts, but I think I can do more.
Alright, goal for 2023: six posts in the year, one every two months!
My recommendations of the yearTo wrap up this post, I want to try something new: recommending a few cultural works that left a mark on me this year.
My video game of the yearWithout question, Planet Crafter was my game of 2022. It mixes survival, exploration, and base building on an uninhabitable planet, with the goal of terraforming it. The game is in early access, but it already has a huge amount of content, and every update has been a clear improvement. I was blown away discovering certain places, I spent hours building myself a beautiful base, the pacing of the progression is excellent, there's always something to do; in short: I recommend you play Planet Crafter!
PS: I learned from the January issue of CanardPC that the creators of Planet Crafter are a couple from Toulouse. They made this game as a two-person team. Very impressive. :-)
My board game of the yearI was won over by Terraforming Mars: Expédition Arès. This blend of the cards from the wonderful original Terraforming Mars with the shared-action mechanic of Race for the Galaxy completely hit the mark for me. It's everything I love: pure engine building, with planning, a hint of bluffing, and just the right amount of resource management. It's accessible, and plays (relatively) fast, between 1h and 1h30.
My comic of the yearI award comic of the year to Bolchoï Arena, by Boulet and Aseyn. Volume 1 dates from 2018, but I only discovered the series in 2022 with the release of volume 3, in a series planned as five books. In this science fiction story, we follow a young woman's wanderings through the Bolchoï, a truly gigantic online virtual world that reproduces the known universe down to the last detail. Until, of course, crazy things start happening that raise a ton of questions. There's adventure, exploration, geopolitics, existential questions about our relationship to virtual worlds, and much more that I can't mention without spoiling. I really can't wait to read what comes next; the first three volumes are excellent!
My book of the yearAndy Weir, author of the SF novel The Martian (adapted for the screen in the eponymous film with Matt Damon; a very good adaptation, by the way), has released two other books: Artemis and Project Hail Mary. While Artemis is a very pleasant read, Project Hail Mary was a monumental slap in the face. The delightfully cynical main character, the flashback narration that steadily builds understanding and stakes, and an incredible twist in the middle of the book that completely changes the game: I loved this book, and I can only recommend it to everyone. It's a marvel.
Conclusions on the year 20222022 was an even more trying year than I had expected. But I learned an enormous amount, about many things. I was by turns a programmer, game designer, producer, entrepreneur, recruiter, organizer… That's a lot for one person, and it's exhausting, but I have no regrets! Through it all, I still genuinely managed to protect myself, to avoid overloading myself with work, to take (long) vacations, and that's a very good thing. I'm not burned out, I still have plenty of energy for 2023, and I'm confident about the future.
Happy New Year 2023 to all of you, dear readers, and thank you from the bottom of my heart for following me on these adventures!
Mozilla Localization (L10N): Advancing Mozilla’s mission through our work on localization standards
After the previous post highlighting what the Mozilla community and Localization Team achieved in 2023, it’s time to dive deeper into the work the team does in the area of localization technologies and standards.
A significant part of our work on localization at Mozilla happens within the space of Internet standards. We take seriously our commitments that stem from the Mozilla Manifesto:
We are committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.
To us, this means that it’s not enough to strive to improve the localization of our products, but that we need to improve the localizability of the Internet as a whole. We need to take the lessons we are learning from our work on Firefox, Thunderbird, websites, and all our other projects, and make them available to everyone, everywhere.
That’s a pretty lofty goal we’ve set ourselves, but to be fair it’s not just about altruism. With our work on Fluent and DOM Localization, we’re in a position where it would be far too easy to rest on our laurels, and to consider what we have “good enough”. To keep going forward and to keep improving the experiences of our developers and localizers, we need input from the outside that questions our premises and challenges us. One way for us to do that is to work on Internet standards, presenting our case to other experts in the field.
In 2023, a large part of our work on localization standards has been focused on Unicode MessageFormat 2 (aka “MF2”), an upcoming message formatting specification, as well as other specifications building on top of it. Work on this has been ongoing since late 2019, and Mozilla has been one of the core participants from the start. The base MF2 spec is now slated for an initial “technology preview” release as part of the Unicode CLDR release in spring 2024.
Compared to Fluent, MF2 corresponds to the syntax and formatting of a single message pattern. Separately, we’ve also been working on the syntax and representation of a resource format for messages (corresponding to Fluent’s FTL files), as well as championing JavaScript language proposals for formatting messages and parsing resources. Work on standardizing DOM localization (as in, being able to use just HTML to localize a website) is also getting started in W3C/WHATWG, but its development is contingent on all the preceding specifications reaching a more stable stage.
So, besides the long term goal of improving localization everywhere, what are the practical results of these efforts? The nature of this work is exploratory, so predicting results has not been, and will not be, entirely possible. One tangible benefit that we’ve been able to already identify and deploy is a reconsideration of how Fluent messages with internal selectors — like plurals — are presented to localizers: Rather than showing a message in pieces, we’ve adopted the MF2 approach of presenting a message with its selectors (possibly more than one) applying to the whole message. This duplicates some parts of the message, but it also makes it easier to read and to translate via machine translation, as well as ensuring that it is internally consistent across all languages.
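For example, a Fluent message with an internal plural selector looks like this (a generic FTL snippet, not taken from any specific product):

```fluent
emails =
    { $unreadEmails ->
        [one] You have one unread email.
       *[other] You have { $unreadEmails } unread emails.
    }
```

Under the MF2-style presentation, a localizer sees each variant as a complete message (“You have one unread email.” / “You have { $unreadEmails } unread emails.”) rather than seeing the selector machinery and the variant fragments separately.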
Another byproduct of this work is MF2’s message data model: Unlike anything before it, it is capable of representing all messages in all languages in all formats. We are currently refactoring our tools and internal systems around this data model, allowing us to deduplicate file format-specific tooling, making it easier to add new features and support new syntaxes. In Pontoon, this approach already made it easier to introduce syntax highlighting and improve the editing experience for right-to-left scripts. To hear more, you can join us at FOSDEM next month, where we’ll be presenting on this in more detail!
At Mozilla, we do not presume to have all the answers, or to always be right. Instead, we try to share what we have, and to learn from others. With many points of view, we gain greater insights – and we help make the world a better place for all peoples of all demographic characteristics.
Mozilla Localization (L10N): Mozilla Localization in 2023
The Mozilla localization community had a busy and productive 2023. Let’s look at some numbers that defined our year:
- 32 projects and 258 locales set up in Pontoon
- 3,685 new user registrations
- 1,254 active users, submitting at least one translation (on average 235 users per month)
- 432,228 submitted translations
- 371,644 approved translations
- 23,866 new strings to translate
Thank you to all the volunteers who contributed to Mozilla’s localization efforts over the last 12 months!
In case you’re curious about the lion theme: localization is often referred to as l10n, a numeronym which looks like the word lion. That’s why our team’s logo is a lion head, stylized as the original Mozilla logo by artist Shepard Fairey.
Pontoon DevelopmentA core area of focus in 2023 was pretranslation. From the start, our goal with this feature was to support the community by making it easier to leverage existing translations and provide a way to bootstrap translation of new content.
When pretranslation is enabled, any new string added in Pontoon will be pretranslated using a 100% match from translation memory or — if no match exists — we’ll leverage Google AutoML Translation engine with a model custom trained on the existing locale’s translation memory. Translations are stored in Pontoon with a special “pretranslated” status so that localizers can easily find and review them. Pretranslated strings are also saved to repositories (e.g. GitHub), and eventually ship in the product.
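The decision flow described above can be sketched roughly as follows (all names here are hypothetical; Pontoon’s actual implementation differs):

```typescript
// Hypothetical sketch of the pretranslation decision: prefer a perfect
// translation memory match, otherwise fall back to machine translation.
interface TmMatch {
  translation: string;
  quality: number; // match quality, 0-100
}

function pretranslate(
  tmLookup: (source: string) => TmMatch | null,
  machineTranslate: (source: string) => string, // e.g. a custom-trained MT engine
  source: string,
): { translation: string; status: "pretranslated" } {
  const match = tmLookup(source);
  // A 100% translation memory match is reused as-is...
  if (match && match.quality === 100) {
    return { translation: match.translation, status: "pretranslated" };
  }
  // ...otherwise the string goes through machine translation. Either way,
  // the result is stored with a special status so localizers can review it.
  return { translation: machineTranslate(source), status: "pretranslated" };
}
```

The key design point is the dedicated “pretranslated” status: the string can ship, but it remains clearly flagged for human review.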
You can find more details on how we approached testing and involved the community in this blog post from July. Over the course of 2023 we pretranslated 14,033 strings for 16 locales across 15 projects.
Towards the end of the year, we also worked on two features that have been long requested by users: 1) it’s now possible to use Pontoon with a light theme; and 2) we improved the translation experience on mobile, with the original 3-column layout adapting to smaller screen sizes.
Listening to user feedback remains our priority: in case you missed it, we have just published the results of a new survey, where we asked localizers which features they would like to see implemented in Pontoon. We look forward to implementing some of your fantastic ideas in 2024!
CommunityCommunity is at the core of Mozilla’s localization model, so it’s crucial to identify sustainability issues as early as possible. Relying only on completion levels, or on how quickly a locale can respond to urgent localization requests, is not sufficient to really understand the health of a community. Indeed, an extremely dedicated volunteer can mask deeper problems, and these issues only become visible — and urgent — when such a person leaves a project, potentially without a clear succession plan.
To prevent these situations, we’ve been researching ways to measure the health of each locale by analyzing multiple data points — for example, the number of new sign-ups actively contributing to localization and getting reviews from translators and managers — and we’ve started reaching out to specific communities to trial interventions. With the help of existing locale managers, this resulted in several promotions to translator (Arabic, Czech, German) or even manager (Czech, Russian, Simplified Chinese).
During these conversations with various local communities, we heard loud and clear how important in-person meetings are to understanding what Mozilla is working on, and how interacting with other volunteers and building personal connections is extremely valuable. Over the past few years, some unique external factors — COVID and an economic recession chief among them — made the organization of large scale events challenging. We investigated the feasibility of small-scale, local events organized directly by community members, but this initiative wasn’t successful since it required a significant investment of time and energy by localizers on top of the work they were already doing to support Mozilla with product localization.
To counterbalance the lack of in-person events and keep volunteers in the loop, we organized two virtual fireside chats for localizers in May and November (links to recordings).
What’s coming in 2024In order to strengthen our connection with existing and potential volunteers, we’re planning to organize regular online events this year. We intend to experiment with different formats and audiences for these events, while also improving our presence on social networks (did you know we’re on Mastodon?). Keep an eye out on this blog and Matrix for more information in the coming months.
As many of you have asked in the past, we also want to integrate email functionalities in Pontoon; users should be able to opt in to receive specific communications via email on top of in-app notifications. We also plan to experiment with automated emails to re-engage inactive users with elevated permissions (translators, managers).
It’s clear that a community can only be sustainable if there are active managers and translators to support new contributors. On one side, we will work to create onboarding material for new volunteers so that existing managers and translators can focus on the linguistic aspects. On the other, we’ll engage the community to discuss a refined set of policies that foster a more inclusive and transparent environment. For example, what should the process be when a locale doesn’t have a manager or active translator, yet there are contributors not receiving reviews? How long should an account retain elevated permissions if it’s apparently gone silent? What are the criteria for promotions to translator or manager roles?
For both initiatives, we will reach out to the community for feedback in the coming months.
As for Pontoon, you can expect some changes under the hood to improve performance and overall reliability, but also new user-facing features (e.g. fine-grained search, better translation memory management).
Thank you!We want to thank all the volunteers who have dedicated their time and skills to localizing Mozilla products. Your tireless efforts are essential in advancing the Mozilla mission of fostering an open and accessible internet for everyone.
Looking ahead, we are excited about the opportunities that 2024 brings. We look forward to working alongside our community to expand the impact of localization and continue breaking down language barriers. Your support is invaluable, and together, we will continue shaping a more inclusive digital world. Thank you for being an integral part of this journey.
Mozilla Open Policy & Advocacy Blog: Mozilla Weighs in on State Comprehensive Privacy Proposals
[Read our letters to legislators in Massachusetts and Maine.]
Today, Mozilla is calling for the passage of strong state privacy protections, such as those modeled on the American Data Privacy and Protection Act at the federal level. This action came in the form of letters to relevant committee leadership in the Massachusetts and Maine legislatures encouraging them to consider and pass proposals that have been introduced in their respective states.
At Mozilla, we believe that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. In the best of worlds, this “privacy for all” mindset would mean a law at the federal level that protects all Americans from abuse and misuse of their data, which is why we have advocated for decisive action to pass a comprehensive Federal privacy law.
Recently, however, even more states are considering enacting privacy protections. If crafted incorrectly, these protections could create a facade of privacy for users and risk enshrining harmful data practices in the marketplace. If crafted correctly, they could provide vital privacy protections and drive further conversation about federal legislation.
The proposals we weighed in on today meet the Mozilla standard for privacy because they: require data minimization; create strong security requirements; prohibit deceptive design that impairs individual autonomy; prohibit algorithmic discrimination; and more.
Mozilla has previously supported legislative and regulatory action in California, and we hope to see more state legislatures introduce and pass strong privacy legislation.
The post Mozilla Weighs in on State Comprehensive Privacy Proposals appeared first on Open Policy & Advocacy.
Firefox Developer Experience: [Reverted] Fixing keyboard navigation in Inspector Rules view
Given the feedback we received on this blog post and in other channels, we’re reverting this change, and the Enter key will work the way it did previously. The fix is already in Firefox Beta/Developer Edition 123.0b6, and will be in Firefox 122.0.1, which should be released 2024-02-06.
If you liked the “new” behavior that we were trying to introduce, you can enable it by navigating to about:config and setting devtools.inspector.rule-view.focusNextOnEnter to false. We also plan to expose this option in the settings UI (#1878490).
The new focus indicator style we introduced also revealed a couple of issues that we’ll tackle in upcoming releases:
- The closing bracket is focusable, and hitting Enter or Space while it has the focus will add a new property to the rule. This is definitely not self-explanatory, so we’ll try to make it better (#1876676). Note that you can still click anywhere in the rule where there is no item to add a new property, and we’ll keep it that way.
- It’s hard to tell, when an element is focused, whether or not it’s being edited (#1876674). This one is a bit trickier, as we want to limit layout shift when toggling edit mode, and we want a consistent focus indicator. We’ll experiment with various solutions to find what feels right.
Finally, I wanted to emphasize that we do want to hear (hopefully constructive) feedback from you, web developers, so we can make better choices to support you. You can do that on Mastodon, Twitter, Discourse, Element and of course, on Bugzilla, our bug tracker (you can connect with a Github account). We’re a very small team, we definitely don’t know everything and we can’t test all the new libraries, frameworks and workflows that are created. So we really rely on your feedback and bug reports to make Firefox Developer Tools better, faster and more solid.
Original ArticleStarting with Firefox 122, when editing a selector, a property name or a property value in the Inspector, the Enter key will no longer move the focus to the next input, but will validate what was entered and focus the matching element (#1861674). You can still use Ctrl + Enter (Cmd + Enter on macOS) or Tab to validate and move the focus to the next input.
<figcaption class="wp-element-caption">The Rules view after the background-color value was modified and validated with the Enter key. The value element is now focused (hence the focus indicator). Previously, this would have enabled the edit mode on the color property.</figcaption> Why?When you click on a selector, a property name or a property value, a text input appears to modify the underlying value. Previously, when the user hit Enter, we advanced the editor to the next editable property, which was also directly turned into a text input. This behavior seems to date back to the Firebug days, and every browser’s Developer Tools implemented it, as it allowed users to quickly edit multiple properties in a rule without leaving the keyboard.
In 2023 the Accessibility team at Mozilla ran an audit on DevTools and created a list of issues that needed to be fixed. One of the areas we focused on was the Inspector, and especially keyboard navigation in the Rules view. As we were fixing those issues and making the keyboard navigation better, it struck us that it was unnecessarily hard to exit “edit” mode with the keyboard only; the only way to do this was with the Esc key, but that also reverts any changes that were made in the text input! What I ended up doing most of the time was validating with Enter, which moves the focus to the next input, then hitting Esc to opt out of the edit mode.
This extra step (and the unnecessary CPU cycles that go with it) doesn’t seem justified when we already have other keyboard shortcuts that can validate the input and move to the next one: Tab, which already existed and works across all browsers, and Ctrl (Cmd on macOS) + Enter, which we added based on user feedback (#1873416).
On top of that, this could be confusing for non-sighted users. On the web, you navigate through the inputs of a form with the Tab key, and Enter should validate the form. The change we made brings the Rules view behavior closer to regular forms, which should be more comfortable for non-sighted users, as well as for people with no prior experience of the tool.
For those who’ve been using it for years or even decades (and all the DevTools team members fall into that category), we know this is going to take a bit of getting used to. We did fix some of the issues we saw in Tab and “edit mode” navigation, so when you hit Enter but wanted the focus to move to the next input, you should be able to hit Tab and then Enter to activate edit mode on the field you wanted to modify.
Again, we know this could be frustrating in the beginning, but, for us, the advantages this brings to the table make it worthwhile, and I hope it will for you too.
Eitan Isaacson: Introducing Spiel
I wrote the beginning of what I hope will be an appealing speech API for desktop Linux and beyond. It consists of two parts, a speech provider interface specification and a client library. My hope is that the simplicity of the design and its leverage of existing free desktop technologies will make adoption of this API easy.
Of course, Linux already has a speech framework in the form of Speech Dispatcher. I believe there have been a handful of technologies and recent developments in the free desktop space that offer a unique opportunity to build something truly special. They include:
D-BusD-Bus came about several years after Speech Dispatcher. It is worth pausing and thinking about the architectural similarities between a local speech service and a desktop IPC bus. The problems that Speech Dispatcher tackles, such as auto-spawning, wire protocols, IPC transports, session persistence, modularity, and others, have been generalized by D-Bus.
Instead of a specialized module for Speech Dispatcher, what if speech engines just exposed an interface on the session bus? With a service file they can automatically spawn and go away as needed.
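For instance, a standard session service file (with a hypothetical provider name; the real interface names are defined by the Spiel provider specification) is all the bus needs to spawn the engine on demand:

```ini
# /usr/share/dbus-1/services/org.example.Speech.service
# "org.example.Speech" and the Exec path are placeholders for illustration.
[D-BUS Service]
Name=org.example.Speech
Exec=/usr/bin/example-speech-engine
```

When a client calls a method on org.example.Speech, the bus launches the executable, and the process can exit again once it is idle.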
Flatpak (and Snap??)Flatpak offers a standardized packaging format that can encapsulate complex setups into a sandboxed installation with little to no thought of the dependency hell Linux users have grown accustomed to. One neat feature of Flatpaks is that they support exposing fully sandboxed D-Bus services, such as a speech engine. Flatpaks offer an out-of-band distribution model that sidesteps the limitations and fragmentation of traditional distro package streams. Flatpak repositories like Flathub are the perfect vehicle for speech engines because of the mix of proprietary and peculiar licenses that are often associated with them, for example…
Neural text to speechI have always been frustrated with the lack of naturally sounding speech synthesis in free software. It always seemed that the game was rigged and only the big tech platforms would be able to afford to distribute nice sounding voices. This is all quickly changing with a flurry of new speech systems covering many languages. It is very exciting to see this happening; it seems like there is a new innovation on this front every day. Because of the size of some of the speech models, and because of the eclectic copyrights associated with them, we can’t expect distros to preinstall them. Flatpaks and neural speech systems are a perfect match for this purpose.
Talking apps that aren’t screen readersIn recent years we have seen many new applications of speech synthesis entering the mainstream - navigation apps, e-book readers, personal assistants and smart speakers. When Speech Dispatcher was first designed, its primary audience was blind Linux users. As the use cases have ballooned so has the demand for a more generalized framework that will cater to a diverse set of users.
There is precedent for technology that was designed for disabled people becoming mainstream. Everyone benefits when a niche technology becomes conventional, especially those who depend on it most.
Questions and AnswersI’m sure you have questions, I have some answers. So now we will play our two roles, you the perplexed skeptic, unsure about why another software stack is needed, and me - a benevolent guide who can anticipate your questions.
Why are you starting from scratch? Can’t you improve Speech Dispatcher?Speech Dispatcher is over 20 years old. Of course, that isn’t a reason to replace it. After all, some of your favorite apps are even older. Perhaps there is room for incremental improvements in Speech Dispatcher. But, as I wrote above, I believe there are several developments in recent years that offer an opportunity for a clean slate.
I love eSpeak, what is all this talk about “naturally sounding” voices?eSpeak isn’t going anywhere. It has a permissive license, is very responsive, and is ergonomic for screen reader users who consume speech at high rates for long periods of time. We will have an eSpeak speech provider in this new framework.
Many other users, who rely on speech for narration or virtual assistants will prefer a more natural voice. The goal is to make those speech engines available and easy to install.
I know for a fact that you can use /insert speech engine/ with Speech DispatcherIt is true that with enough effort you can plug anything into Speech Dispatcher.
Speech Dispatcher depends on a fraught set of configuration files, scripts, executables and shared libraries. A user who wants to use a synthesis engine other than the default bundled one in their distro needs to open a terminal, carefully place resources in the right place and edit configuration files.
What plan do you have to migrate all the current applications that rely on Speech Dispatcher?

I don’t. Both APIs can coexist. I’m not a contributor or maintainer of Speech Dispatcher. There might always be a need for the unique features in Speech Dispatcher, and it might have another 20 years of service ahead.
I couldn’t help but notice you chose to write libspiel in C instead of a modern memory safe language with a strong ownership model like Rust.

Yes.
Support.Mozilla.Org: Introducing Mandy and Donna
Hey everybody,
I’m so thrilled to start 2024 with good news for you all. Mandy Cacciapaglia and Donna Kelly are joining our Customer Experience team as Product Support Manager for Firefox and Content Strategist, respectively. Here’s a bit from them both:
- Mandy Cacciapaglia — Product Support Manager for Firefox
Hi there! Mandy here — I am Mozilla’s new Product Support Manager for Firefox. I’m so excited to collaborate with this awesome group, and dive into Firefox reporting, customer advocacy and feedback, and product support so we can keep elevating our amazing browser. I’m based in NYC, and outside of work you will find me watercolor painting, backpacking, or reading mysteries.
- Donna Kelly — Content Strategist
Hi everyone! I’m Donna, and I am very happy to be here as your new Content Strategist on the Customer Experience team. I will be working on content strategy to improve our knowledge base, documentation, localization, and overall user experience!

In my free time, I love hanging out with my dog (a rescue tri-pawd named Sundae), hiking, reading (big Stephen King fan), playing video games, and anything involving food. Looking forward to getting to know everyone!
You’ll hear more from them in our next community call (which will be on January 17). In the meantime, please join me to congratulate and welcome both of them into the team!
Firefox Developer Experience: Geckodriver 0.34.0 Released
We are proud to announce the next major release of geckodriver 0.34.0. It ships with a new extension feature that has often been requested by the WebDriver community.
Contributions

With geckodriver being an open source project, we are grateful to get contributions from people outside of Mozilla:
- Mitesh Gulecha updated the Print command to also allow numbers to be used for printing single pages as PDF.
- James Hendry refactored our error handling code, now utilizing the anyhow and thiserror crates, and as such removed the unknown path error type which is not part of the WebDriver specification.
- Razvan Cojocaru improved the Firefox version check to allow Firefox distributions with custom prefixes for the application name.
Geckodriver is written in Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for geckodriver.
New Features

Support for “Virtual Authenticators”

Virtual Authenticators serve as a WebDriver Extension designed to simulate user authentication (WebAuthn) on web applications during automated testing. This functionality encompasses a range of methods, including passwords, biometrics, and security keys.
Geckodriver supports all available commands:
- Add Virtual Authenticator
- Remove Virtual Authenticator
- Add Credential
- Get Credentials
- Remove Credential
- Remove All Credentials
- Set User Verified
Specifying --port=0 as an argument allows geckodriver to dynamically find and use an available free port on the system. It’s important to note that when employing this argument, the final port value must be retrieved from the standard output (stdout).
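A test harness can scan geckodriver's stdout for the chosen port. Below is a minimal sketch; the exact shape of the log line ("Listening on host:port") is an assumption, so adjust the pattern to the output you actually observe:

```rust
/// Extract the port geckodriver picked when started with `--port=0`.
/// Returns None if the line doesn't match the expected shape.
fn parse_port(line: &str) -> Option<u16> {
    let (_, rest) = line.split_once("Listening on ")?;
    let (_, port) = rest.trim().rsplit_once(':')?;
    port.trim().parse().ok()
}

fn main() {
    // A line shaped like geckodriver's startup output (illustrative only).
    let line = "geckodriver\tINFO\tListening on 127.0.0.1:59905";
    assert_eq!(parse_port(line), Some(59905));
}
```

In a real harness you would read lines from the child process's stdout pipe until `parse_port` returns `Some(port)`, then connect your WebDriver client to that port.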
Fixes

- While searching for a default Firefox installation on the system, geckodriver used the Contents/MacOS/firefox-bin executable instead of the binary specified in the app bundle’s Info.plist file. This behavior resulted in a malfunction due to a regression in Firefox, particularly affecting the Firefox 121 release.
As usual links to the pre-compiled binaries for popular platforms and the source code are available on the GitHub repository.
The Rust Programming Language Blog: Announcing Rust 1.75.0
The Rust team is happy to announce a new version of Rust, 1.75.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.75.0 with:
$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.75.0.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
What's in 1.75.0 stable

async fn and return-position impl Trait in traits

As announced last week, Rust 1.75 supports use of async fn and -> impl Trait in traits. However, this initial release comes with some limitations that are described in the announcement post.
It's expected that these limitations will be lifted in future releases.
Pointer byte offset APIs

Raw pointers (*const T and *mut T) used to primarily support operations operating in units of T. For example, <*const T>::add(1) would add size_of::<T>() bytes to the pointer's address. In some cases, working with byte offsets is more convenient, and these new APIs avoid requiring callers to cast to *const u8/*mut u8 first.
- pointer::byte_add
- pointer::byte_offset
- pointer::byte_offset_from
- pointer::byte_sub
- pointer::wrapping_byte_add
- pointer::wrapping_byte_offset
- pointer::wrapping_byte_sub
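A minimal sketch of the difference, using only APIs stabilized in 1.75 (the values are arbitrary examples):

```rust
/// Byte distance between element `i` and element 0 of a u32 slice,
/// computed with the newly stabilized byte-offset pointer APIs.
fn byte_distance(v: &[u32], i: usize) -> isize {
    let p = v.as_ptr();
    // Safety: both pointers stay within the same allocation.
    unsafe { p.add(i).byte_offset_from(p) }
}

fn main() {
    let values: [u32; 3] = [10, 20, 30];
    let p = values.as_ptr();
    unsafe {
        // add(1) advances by one element (size_of::<u32>() = 4 bytes)...
        assert_eq!(*p.add(1), 20);
        // ...while byte_add(4) advances by 4 raw bytes, reaching the
        // same element without a cast through *const u8.
        assert_eq!(*p.byte_add(4), 20);
    }
    // byte_offset_from reports the distance in bytes, not elements.
    assert_eq!(byte_distance(&values, 2), 8);
}
```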
The Rust compiler continues to get faster, with this release including the application of BOLT to our binary releases, bringing a 2% mean wall time improvement on our benchmarks. This tool optimizes the layout of the librustc_driver.so library containing most of the rustc code, allowing for better cache utilization.
We are also now building rustc with -Ccodegen-units=1, which provides more opportunity for optimizations in LLVM. This optimization brought a separate 1.5% wall time mean win to our benchmarks.
In this release these optimizations are limited to x86_64-unknown-linux-gnu compilers, but we expect to expand that over time to include more platforms.
Stabilized APIs

- Atomic*::from_ptr
- FileTimes
- FileTimesExt
- File::set_modified
- File::set_times
- IpAddr::to_canonical
- Ipv6Addr::to_canonical
- Option::as_slice
- Option::as_mut_slice
- pointer::byte_add
- pointer::byte_offset
- pointer::byte_offset_from
- pointer::byte_sub
- pointer::wrapping_byte_add
- pointer::wrapping_byte_offset
- pointer::wrapping_byte_sub
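A couple of the newly stabilized APIs in action (a small sketch; the values are arbitrary examples):

```rust
use std::net::{IpAddr, Ipv6Addr};

fn main() {
    // Option::as_slice views an Option as a zero- or one-element slice,
    // handy for feeding an optional value into slice-based APIs.
    let some = Some(7);
    let none: Option<i32> = None;
    assert_eq!(some.as_slice(), &[7]);
    assert!(none.as_slice().is_empty());

    // IpAddr::to_canonical maps an IPv4-mapped IPv6 address back to IPv4.
    let mapped = IpAddr::V6("::ffff:192.0.2.1".parse::<Ipv6Addr>().unwrap());
    assert_eq!(mapped.to_canonical(), "192.0.2.1".parse::<IpAddr>().unwrap());
}
```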
These APIs are now stable in const contexts:
- Ipv6Addr::to_ipv4_mapped
- MaybeUninit::assume_init_read
- MaybeUninit::zeroed
- mem::discriminant
- mem::zeroed
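For instance, mem::discriminant can now be evaluated in a const item (a minimal sketch; the State enum is a made-up example):

```rust
use std::mem;

#[derive(Debug)]
enum State { Idle, Busy }

// As of 1.75, mem::discriminant is callable in const contexts.
const IDLE: mem::Discriminant<State> = mem::discriminant(&State::Idle);

fn main() {
    // The compile-time discriminant matches the runtime one...
    assert_eq!(IDLE, mem::discriminant(&State::Idle));
    // ...and differs for other variants.
    assert_ne!(IDLE, mem::discriminant(&State::Busy));
}
```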
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.75.0

Many people came together to create Rust 1.75.0. We couldn't have done it without all of you. Thanks!
Support.Mozilla.Org: 2023 in a nutshell
Hey SUMO nation,
As we’re inching closer towards 2024, I’d like to take a step back to reflect on what we’ve accomplished in 2023. It’s a lot, so let’s dive in!
- Overall pageviews
From Jan 1st to the end of November, we got a total of 255+ million pageviews on SUMO. Pageviews have been dropping consistently since 2018, and this time around we’re down 7% from last year. This is far from bad, though, as it is our lowest yearly drop since 2018.
- Forum
In the forum, we’ve seen an average of 2.8K questions per month this year. This is a 6.67% downturn from last year. We also see a downturn in our answer rate within 72 hours: 71%, compared to 75% last year. We also see a drop in our solved rate: 10% this year, compared to 14% last year. In a typical month, our average number of contributors on the forum, excluding OP, is around 200 (compared to 240 last year).
*See Support glossary

- KB
We see an increase across different KB contribution metrics this year, though. In total, we got 1,990 revisions (a 14% increase from last year) from 136 non-staff members. Our review rate this year is 80%, while our approval rate is 96% (compared to 73% and 95% in 2022). In total, we had 29 non-staff reviewers this year.
- Localization
On the localization side, the numbers are overall pretty normal. Total revisions are around 13K (same as last year) from 400 non-staff members, with a 93% review rate and 99% approval rate (compared to 90% and 99% last year), from a total of 118 non-staff reviewers.
- Social Support
Year to date, the Social Support contributors have sent a total of 850 responses (compared to 908 last year) and interacted with 1,645 conversations. Our resolved rate has dropped to 40.74%, compared to 70% last year. We have made major improvements on other metrics, though. For example, this year our contributors were responsible for a larger share of our total responses (75% in total, compared to 39.6% last year). Our conversion rate also improved, from 20% in 2022 to 52% this year. This means our contributors have taken on a larger role in answering the overall inbound questions and have replied more consistently than last year.
- Mobile Store Support
On the Mobile Store Support side, our contributors this year have contributed 1,260 replies and interacted with 3,149 conversations in total. That makes our conversion rate 36% this year, compared to 46% last year. Those are mostly contributions to non-English reviews.
In addition to the regular contribution, here are some of the community highlights from 2023:
- We did some internal assessment and external benchmarking in Q1, which informed our experiments in Q2. Learn the results of those experiments from this call.
- We also updated our contributor guidelines, including article review guidelines and created a new policy around the use of generative AI.
- By the end of the year, the Spanish community had done something really amazing: they managed to translate and update 70% of in-product desktop articles (as opposed to 11% when we started the call for help).
We’d also like to take this opportunity to highlight some of the Customer Experience team’s projects that we’ve tackled this year (some with close involvement and help from the community).
- Information Architecture (IA) — Josh Cajinarobleto
We split this one into two concurrent projects:
- Phase 1 Navigation Improvements — initial phase aims to:
- Surface the community forums in a clearer way
- Streamline the Ask a Question user flow
- Improve link text and calls-to-action to better match what users might expect when navigating on the site
- Updates to the main navigation and small changes to additional site UI (like sidebar menus, page headers, etc.) can be expected
- Cross-system content structure and hierarchy — the goal of this project is to:
- Improve our ability to gather data metrics across functional areas of SUMO (KB, ticketing, and forums)
- Improve recommended “next steps” by linking related content across KB and Forums
- Create opportunities for grouping and presenting content on SUMO by alternate categories and not just by product
- Research project — Cindi Jordan
Project Background:
- This research was conducted between August 2023 and November 2023. The goal of this project is to provide actionable insights on how to improve the customer experience of SUMO.
- Research approach:
- Stakeholder engagement process
- Surveyed 786 Mozilla Support users
- Conducted three rounds of interviews recruited from survey respondents:
- Sprint 1: Evaluated content and article structure
- Sprint 2: Evaluated the overall SUMO customer experience
- Sprint 3: Co-design of an improved SUMO experience
- This research was conducted by PH1 Research, who have conducted similar research for Mozilla in 2022.
- Please consider: Participants for this study were recruited via a banner ad in SUMO. As a result, these findings only reflect the experiences and needs of users who actively use SUMO. It does not reflect users who may not be aware of SUMO or have decided not to use it.
Executive Summary:
- Users consider SUMO a trustworthy and content-rich resource. SUMO offers resources that can appropriately help users of different technical levels. The most common user flow is via Google search. Very few are logging in to SUMO directly.
- The goal of SUMO should be to assist Mozilla users to improve their product experience. Content should be consolidated and optimized to show fewer, high quality results on Google search and SUMO search. The article experience should aim to boost relevance and task success. The SUMO website should aid users to diagnose systems, understand problems, find solutions, and discover additional resources when needed.
Recommendations:
- Our recommendation is that SUMO’s strategy should be to provide a self-service experience that makes users feel that Mozilla cares about their problems and offers a range of solutions appealing to various persona types (technical/non-technical).
- The pillars for making SUMO valuable to users should be:
- Confidence: As a user, I need to be confident that the resource provided will resolve my problem.
- Guidance: As a user, I need to feel guided through the experience of finding a solution, even when I don’t understand the problem or solutions available.
- Trust: As a user, I need to trust that the resources have been provided by a trustworthy authority on the subject (SUMO scores well here because of Mozilla).
- Wagtail migration — Abby Parise and team
- Modernizing our CMS can provide significant benefits in terms of user experience, performance, security, flexibility, collaboration, and analytics.
- This resulted in a decision to move forward with the plan to migrate our CMS to Wagtail — a modern, open-source content management system focused on flexibility and user experience.
- We are currently in the process of planning the next phases for implementation.
- Pocket migration to SUMO
- We successfully migrated and published 100% of previously identified Pocket help center content from HelpScout’s CMS to SUMO’s CMS, with proper redirects in place to ensure a seamless transition for the user.
- The localization community began efforts to help us localize the content, which had previously only been available in en-US.
- Firefox account to Mozilla account rebrand in early November.
- Officially supporting account users and login less support flow (read more about that here).
- Database migration from MySQL to Postgres
- This was a very challenging project, not only because we had to migrate our large codebase and very large data set from MySQL, but also because of the challenge of performing the actual data migration within a reasonable period of time, on the order of a few hours at most, so that we could minimize the disruption to users and contributors. In the end, it was a multi-month project comprising coordinated research, planning and effort between our engineering team and our SRE (Site Reliability Engineering) team. We’re now on a much better database foundation for the future, because:
- Postgres is better suited for enterprise-level applications like ours, with very large datasets, frequent write operations and complex queries.
- We can also take advantage of connection pooling via PgBouncer, which will improve our resilience under huge and often malicious traffic spikes (which have been occurring much more frequently during the past year).
- Last but not least, our database now supports the full Unicode character set, which means it can fully handle all characters, including emojis, in all languages. Our MySQL database had only limited Unicode support, due to its initial configuration, and rather than invest in resolving that, which would have meant a significant chunk of work, we decided to invest instead in Postgres.
This year, you all continue to impress us with the persistence and dedication that you show to Mozilla by contributing to our platform, despite the current state of our world right now. To every single one of you who contributed in one way or another to SUMO, I’d like to express my sincere gratitude because without you all, our platform is just an empty shell. To celebrate this, we’ve prepared this simple dashboard with contribution data that you can filter based on username so you can see how much you’ve accomplished this year (we talked about this in our last community call this year).
Let’s be proud of what we’ve accomplished to keep the internet as a global & public resource for everybody, and let’s keep on rocking the helpful web through 2024 and beyond!
If you’ve been watching from the sidelines and are interested in contributing to Mozilla Support, please head over to our Contribute page to learn more about our programs!
Mozilla Privacy Blog: Mozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy
[Read our full submission here]
Net neutrality – the concept that your internet provider should not be able to block, throttle, or prioritize elements of your internet service, such as to favor their own products or business partners – is on the docket again in the United States. With the FCC putting out a notice of proposed rulemaking (NPRM) to reinstate net neutrality, Mozilla weighed in last week with a clear message: the FCC should reestablish these common sense rules as soon as possible.
We have been fighting for net neutrality around the world for the better part of a decade and a half. Most notably, this included Mozilla’s challenge to the Trump FCC’s dismantling of net neutrality in 2018.
American internet users are on the cusp of renewed protections for the open internet. Our recently submitted comment to the FCC’s NPRM took a step back to remind the FCC and the public of the real benefits of net neutrality: Competition, Grassroots Innovation, Privacy, and Transparency and Accountability.
Simply put, if the FCC moves forward with reclassification of broadband as a Title II service, it will protect innovation in edge services; unlock vital privacy safeguards; and prevent ISPs from leveraging their market power to control people’s experiences online. With vast increases in our dependence on the internet since the COVID-19 pandemic, these protections are more important than ever.
We encourage others who are passionate about the open internet to file reply comments on the proceeding, which are due January 17, 2024.
You can read our full comment here.
The post Mozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy appeared first on Open Policy & Advocacy.
The Mozilla Blog: CAPTCHA successor Privacy Pass has no easy answers for online abuse
As much as the Web continues to inspire us, we know that sites put up with an awful lot of abuse in order to stay online. Denial of service attacks, fraud and other flavors of abusive behavior are a constant pressure on website operators.
One way that sites protect themselves is to find some way to sort “good” visitors from “bad.” CAPTCHAs are a widely loathed and unreliable means of distinguishing human visitors from automated solvers. Even worse, beneath this sometimes infuriating facade is a system that depends extensively on invasive tracking and profiling.
(You can find a fun overview of the current state of CAPTCHA here.)
Finding a technical solution to this problem that does not involve such privacy violations is an appealing challenge, but a difficult one. Well-meaning attempts can easily fail without giving due consideration to other factors. For instance, Google’s Web Environment Integrity proposal fell flat because of its potential to be used to unduly constrain personal choice in how to engage online (see our position for details).
Privacy Pass is a framework published by the IETF that is seen as having the potential to help address this difficult problem. It is a generalization of a system originally developed by Cloudflare to reduce their dependence on CAPTCHAs and tracking. For the Web, the central idea is that Privacy Pass might provide websites with a clean indication that a visitor is OK, separate from the details of their browsing history.
The way Privacy Pass works is that one website hands out special tokens to people the site thinks are OK. Other sites can ask people to give them a token. The second site then knows that a visitor with a token is considered OK by the first site, but they don’t learn anything else. If the second site trusts the first, they might treat people with tokens more favorably than those without.
The cryptography that backs Privacy Pass provides two interlocked guarantees:
- authenticity: the recipient of a token can guarantee that it came from the issuer
- privacy: the recipient of the token cannot trace the token to its issuance, which prevents them from learning who was issued each token
The central promise of Privacy Pass is that the privacy guarantee would allow the exchange of tokens to be largely automated, with your browser forwarding tokens between sites that trust you to sites that are uncertain. This would happen without your participation. Sites could use these tokens to reduce their dependence on annoying and ineffective CAPTCHAs.
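As a rough illustration of how authenticity and unlinkability can coexist, here is a toy RSA blind-signature flow. This is a sketch only: real Privacy Pass deployments use standardized blind RSA or VOPRF constructions with full-size keys, and the tiny primes and fixed blinding factor below are illustrative assumptions.

```rust
// Modular exponentiation: b^e mod m (values fit comfortably in u128).
fn mod_pow(mut b: u128, mut e: u128, m: u128) -> u128 {
    let mut r = 1u128;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { r = r * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    r
}

// Modular inverse via the extended Euclidean algorithm.
fn mod_inv(a: u128, m: u128) -> u128 {
    let (mut t, mut new_t) = (0i128, 1i128);
    let (mut r, mut new_r) = (m as i128, a as i128);
    while new_r != 0 {
        let q = r / new_r;
        (t, new_t) = (new_t, t - q * new_t);
        (r, new_r) = (new_r, r - q * new_r);
    }
    assert!(r == 1, "not invertible");
    ((t % m as i128 + m as i128) % m as i128) as u128
}

fn main() {
    // Issuer key: modulus n = p*q, public exponent e, private exponent d.
    let (p, q) = (1_000_003u128, 1_000_033u128);
    let (n, e) = (p * q, 65_537u128);
    let d = mod_inv(e, (p - 1) * (q - 1));

    // Client: pick a secret token value and blind it before issuance.
    let token = 123_456_789u128 % n;
    let r = 987_654_321u128 % n; // blinding factor (random in practice)
    let blinded = token * mod_pow(r, e, n) % n;

    // Issuer signs the *blinded* value: it never sees `token` itself.
    let blinded_sig = mod_pow(blinded, d, n);

    // Client unblinds; the pair (token, sig) is unlinkable to issuance.
    let sig = blinded_sig * mod_inv(r, n) % n;

    // Any site holding the public key (e, n) can verify authenticity.
    assert_eq!(mod_pow(sig, e, n), token);
}
```

The final assertion captures both guarantees at once: the signature verifies under the issuer's public key (authenticity), yet the issuer only ever saw the blinded value, so it cannot connect the redeemed token back to the issuance (privacy).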
Our analysis of Privacy Pass shows that while the technology is sound, applying that technology to an open system like the Web comes with a host of non-technical hazards.
We examine the privacy properties of Privacy Pass, how useful it might be, whether it could improve equity of access, and whether it might bias toward centralization. We find problems that aren’t technical in nature and that are hard to reconcile.
In considering how Privacy Pass might be deployed, there is a direct tension between privacy and open participation. The system requires token providers to be widely trusted to respect privacy, but our vision of an open Web means that restrictions on participation cannot be imposed lightly. Resolving this tension is necessary when deciding who can provide tokens.
The analysis concludes that the problem of abuse is not one that will yield to a technical solution like Privacy Pass. For a problem this challenging, technical options might not provide a comprehensive solution, but they need to do more than shift problems around. Technical solutions need to complement other measures. Privacy Pass does allow us to focus on the central problem of identifying abusive visitors, but there is a need to have safeguards in place that prevent a number of serious secondary problems.
Our analysis does not ultimately identify a path to building the non-technical safeguards necessary for a successful deployment of Privacy Pass on the Web.
Finally, we look at the deployments of Privacy Pass in Safari and Chrome browsers. We conclude that these deployments have inadequate safeguards for the problems we identify.
The post CAPTCHA successor Privacy Pass has no easy answers for online abuse appeared first on The Mozilla Blog.
The Rust Programming Language Blog: Announcing `async fn` and return-position `impl Trait` in traits
The Rust Async Working Group is excited to announce major progress towards our goal of enabling the use of async fn in traits. Rust 1.75, which hits stable next week, will include support for both -> impl Trait notation and async fn in traits.
This is a big milestone, and we know many users will be itching to try these out in their own code. However, we are still missing some important features that many users need. Read on for recommendations on when and how to use the stabilized features.
What's stabilizing

Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write impl Trait as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements Trait". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly.
/// Given a list of players, return an iterator
/// over their names.
fn player_names(
    players: &[Player]
) -> impl Iterator<Item = &String> {
    players
        .iter()
        .map(|p| &p.name)
}

Starting in Rust 1.75, you can use return-position impl Trait in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator:
trait Container {
    fn items(&self) -> impl Iterator<Item = Widget>;
}

impl Container for MyContainer {
    fn items(&self) -> impl Iterator<Item = Widget> {
        self.items.iter().cloned()
    }
}

So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return -> impl Future. Since these are now permitted in traits, we also permit you to write traits that use async fn.
trait HttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
    // ^^^^^^^^ desugars to:
    // fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody>;
}

Where the gaps lie

-> impl Trait in public traits

The use of -> impl Trait is still discouraged for general use in public traits and APIs for the reason that users can't put additional bounds on the return type. For example, there is no way to write this function in a way that is generic over the Container trait:
fn print_in_reverse(container: impl Container) {
    for item in container.items().rev() {
        // ERROR:                 ^^^
        // the trait `DoubleEndedIterator`
        // is not implemented for
        // `impl Iterator<Item = Widget>`
        eprintln!("{item}");
    }
}

Even though some implementations might return an iterator that implements DoubleEndedIterator, there is no way for generic code to take advantage of this without defining another trait. In the future we plan to add a solution for this. For now, -> impl Trait is best used in internal traits or when you're confident your users won't need additional bounds. Otherwise you should consider using an associated type.1
async fn in public traits

Since async fn desugars to -> impl Future, the same limitations apply. In fact, if you use bare async fn in a public trait today, you'll see a warning.
warning: use of `async fn` in public traits is discouraged as auto trait bounds cannot be specified
 --> src/lib.rs:7:5
  |
7 |     async fn fetch(&self, url: Url) -> HtmlBody;
  |     ^^^^^
  |
help: you can desugar to a normal `fn` that returns `impl Future` and add any desired bounds such as `Send`, but these cannot be relaxed without a breaking API change
  |
7 -     async fn fetch(&self, url: Url) -> HtmlBody;
7 +     fn fetch(&self, url: Url) -> impl std::future::Future<Output = HtmlBody> + Send;
  |

Of particular interest to users of async are Send bounds on the returned future. Since users cannot add bounds later, the error message is saying that you as a trait author need to make a choice: Do you want your trait to work with multithreaded, work-stealing executors?
Thankfully, we have a solution that allows using async fn in public traits today! We recommend using the trait_variant::make proc macro to let your users choose. This proc macro is part of the trait-variant crate, published by the rust-lang org. Add it to your project with cargo add trait-variant, then use it like so:
#[trait_variant::make(HttpService: Send)]
pub trait LocalHttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
}

This creates two versions of your trait: LocalHttpService for single-threaded executors and HttpService for multithreaded work-stealing executors. Since we expect the latter to be used more commonly, it has the shorter name in this example. It has additional Send bounds:
pub trait HttpService: Send {
    fn fetch(
        &self,
        url: Url,
    ) -> impl Future<Output = HtmlBody> + Send;
}

This macro works for async because impl Future rarely requires additional bounds other than Send, so we can set our users up for success. See the FAQ below for an example of where this is needed.
Dynamic dispatch

Traits that use -> impl Trait and async fn are not object-safe, which means they lack support for dynamic dispatch. We plan to provide utilities that enable dynamic dispatch in an upcoming version of the trait-variant crate.
How we hope to improve in the future

In the future we would like to allow users to add their own bounds to impl Trait return types, which would make them more generally useful. It would also enable more advanced uses of async fn. The syntax might look something like this:
trait HttpService = LocalHttpService<fetch(): Send> + Send;

Since these aliases won't require any support on the part of the trait author, it will technically make the Send variants of async traits unnecessary. However, those variants will still be a nice convenience for users, so we expect that most crates will continue to provide them.
Of course, the goals of the Async Working Group don't stop with async fn in traits. We want to continue building features on top of it that enable more reliable and sophisticated use of async Rust, and we intend to publish a more extensive roadmap in the new year.
Frequently asked questions

Is it okay to use -> impl Trait in traits?

For private traits you can use -> impl Trait freely. For public traits, it's best to avoid them for now unless you can anticipate all the bounds your users might want (in which case you can use #[trait_variant::make], as we do for async). We expect to lift this restriction in the future.
Should I still use the #[async_trait] macro?

There are a couple of reasons you might need to continue using async-trait:
- You want to support Rust versions older than 1.75.
- You want dynamic dispatch.
As stated above, we hope to enable dynamic dispatch in a future version of the trait-variant crate.
Is it okay to use async fn in traits? What are the limitations?

Assuming you don't need to use #[async_trait] for one of the reasons stated above, it's totally fine to use regular async fn in traits. Just remember to use #[trait_variant::make] if you want to support multithreaded runtimes.
The biggest limitation is that a type must always decide whether it implements the Send or the non-Send version of a trait. It cannot implement the Send version conditionally on one of its generics. This can come up in the middleware pattern: for example, a RequestLimitingService<T> that is an HttpService if T: HttpService.
Why do I need #[trait_variant::make] and Send bounds?

In simple cases you may find that your trait appears to work fine with a multithreaded executor. There are some patterns that just won't work, however. Consider the following:
fn spawn_task(service: impl HttpService + 'static) {
    tokio::spawn(async move {
        let url = Url::from("https://rust-lang.org");
        let _body = service.fetch(url).await;
    });
}

Without Send bounds on our trait, this would fail to compile with the error: "future cannot be sent between threads safely". By creating a variant of your trait with Send bounds, you avoid sending your users into this trap.
Note that you won't see a warning if your trait is not public, because if you run into this problem you can always add the Send bounds yourself later.
For a more thorough explanation of the problem, see this blog post.2
Can I mix async fn and impl trait?

Yes, you can freely move between the async fn and -> impl Future spellings in your traits and impls. This is true even when one form has a Send bound.3 This makes the traits created by trait_variant nicer to use.
trait HttpService: Send {
    fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody> + Send;
}

impl HttpService for MyService {
    async fn fetch(&self, url: Url) -> HtmlBody {
        // This works, as long as `do_fetch(): Send`!
        self.client.do_fetch(url).await.into_body()
    }
}

Why don't these signatures use impl Future + '_?

For -> impl Trait in traits we adopted the 2024 Capture Rules early. This means that the + '_ you often see today is unnecessary in traits, because the return type is already assumed to capture input lifetimes. In the 2024 edition this rule will apply to all function signatures. See the linked RFC for more.
Why am I getting a "refine" warning when I implement a trait with -> impl Trait?

If your impl signature includes more detailed information than the trait itself, you'll get a warning:
pub trait Foo {
    fn foo(self) -> impl Debug;
}

impl Foo for u32 {
    fn foo(self) -> String {
        //          ^^^^^^
        // warning: impl trait in impl method signature does not match trait method signature
        self.to_string()
    }
}

The reason is that you may be leaking more details of your implementation than you meant to. For instance, should the following code compile?
fn main() {
    // Did the implementer mean to allow
    // use of `Display`, or only `Debug` as
    // the trait says?
    println!("{}", 32.foo());
}

Thanks to refined trait implementations it does compile, but the compiler asks you to confirm your intent to refine the trait interface with #[allow(refining_impl_trait)] on the impl.
Conclusion

The Async Working Group is excited to end 2023 by announcing the completion of our primary goal for the year! Thank you to everyone who helpfully participated in design, implementation, and stabilization discussions. Thanks also to the users of async Rust who have given great feedback over the years. We're looking forward to seeing what you build, and to delivering continued improvements in the years to come.
1. Note that associated types can only be used in cases where the type is nameable. This restriction will be lifted once impl_trait_in_assoc_type is stabilized. ↩
2. Note that in that blog post we originally said we would solve the Send bound problem before shipping async fn in traits, but we decided to cut that from the scope and ship the trait-variant crate instead. ↩
3. This works because of auto-trait leakage, which allows knowledge of auto traits to "leak" from an item whose signature does not specify them. ↩
Mozilla Localization (L10N): 2024 Pontoon survey results
The results from the 2024 Pontoon survey are in and the 3 top-voted features we commit to implement are:
- Add ability to edit Translation Memory entries (611 votes).
- Improve performance of Pontoon translation workspace and dashboards (603 votes).
- Add ability to propose new Terminology entries (595 votes).
The remaining features ranked as follows:
- Add ability to preview Fluent strings in the editor (572 votes).
- Link project names in Concordance search results to corresponding strings (540 votes).
- Add “Copy translation from another locale as suggestion” batch action (523 votes).
- Add ability to receive automated notifications via email (521 votes).
- Add Timeline tab with activity to Project, Locale, ProjectLocale dashboards (501 votes).
- Add ability to read notifications one by one, or mark notifications as unread (495 votes).
- Add virtual keyboard with special characters to the editor (469 votes).
We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!
A total of 365 Pontoon users participated in the survey, 169 of whom voted on all features. Each user could give each feature 1 to 5 votes. Check out the full report.
We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!
Firefox Developer Experience: Firefox WebDriver Newsletter — 121
WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we’ve done as part of the Firefox 121 release cycle.
Contributions

With Firefox being an open source project, we are grateful to receive contributions from people outside of Mozilla.
WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette.
WebDriver BiDi

New: “browsingContext.contextDestroyed” event

browsingContext.contextDestroyed is a new event that allows clients to be notified when a context is discarded. This event will be emitted for instance when a tab is closed or when a frame is removed from the DOM. The event’s payload contains the context which was destroyed, the url of the context and the parent context id (for child contexts). Note that when closing a tab containing iframes, only a single event will be emitted for the top-level context to avoid unnecessary protocol traffic.
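As an illustration, a contextDestroyed event for a closed child frame might look something like this (a sketch based on the description above; the ids and url are invented, and the authoritative payload shape is defined by the WebDriver BiDi specification):

```json
{
  "type": "event",
  "method": "browsingContext.contextDestroyed",
  "params": {
    "context": "d3f1a9b2-1f64-4c88-a02e-3c5d1a7f9e10",
    "url": "https://example.com/frame.html",
    "parent": "67b77507-0728-496f-b951-72650ead8c8a"
  }
}
```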
Support for “userActivation” parameter in script.callFunction and script.evaluate

The userActivation parameter is a boolean which allows the script.callFunction and script.evaluate commands to execute JavaScript while simulating that the user is currently interacting with the page. This can be useful to use features which are only available on user activation, such as interacting with the clipboard. The default value for this parameter is false.
Support for “defaultValue” field in browsingContext.userPromptOpened event

The browsingContext.userPromptOpened event will now provide a defaultValue field set to the default value of user prompts of type “prompt”. If the default value was not provided (or was an empty string), the defaultValue field is omitted.
Here is an example payload for a window.prompt usage:
{
  "type": "event",
  "method": "browsingContext.userPromptOpened",
  "params": {
    "context": "67b77507-0728-496f-b951-72650ead8c8a",
    "type": "prompt",
    "message": "What is your favorite automation protocol",
    "defaultValue": "WebDriver BiDi"
  }
}

<figcaption class="wp-element-caption">Prompt example on a webpage.</figcaption>

Updates for the browsingContext.captureScreenshot command

The browsingContext.captureScreenshot command received several updates, some of which are not backwards-compatible.
First, the scrollIntoView parameter was removed. The parameter could lead to confusing results as it does not ensure the scrolled element becomes fully visible. If needed, it is easy to scroll into view using script.evaluate.
The clip parameter’s BoxClipRectangle value had its type property renamed from “viewport” to “box”.
Finally, a new origin parameter was added with two possible values: “document” or “viewport” (defaults to “viewport“). This argument allows clients to define the origin and bounds of the screenshot. Typically, in order to take “full page” screenshots, using the “document” value will allow the screenshot to expand beyond the viewport, without having to scroll manually. In combination with the clip parameter, this should allow more flexibility to take page, viewport or element screenshots.
For example, you can set origin to “document” and use the clip type “element” to take screenshots of elements without worrying about the scroll position or the viewport size:
{
  "context": "67b77507-0728-496f-b951-72650ead8c8a",
  "origin": "document",
  "clip": {
    "type": "element",
    "element": {
      "sharedId": "67b77507-0728-496f-b951-72650ead8c8a"
    }
  }
}

<figcaption class="wp-element-caption">Left: an example page scrolled to the top. Right: screenshot of the page footer, which was scrolled out of view and taller than the viewport, using origin “document” and clip type “element”.</figcaption>

Added context property for Window serialization

Serialized Window or Frame objects now contain a context property which holds the corresponding context id. This id can then be used to send commands to this Window/Frame and can also be exchanged with WebDriver Classic (Marionette).
Bug Fixes

- Fixed a bug where serialization of a Node nested inside a data structure (Array, Map, Set, etc.) was failing.
- browsingContext.navigate with wait set to “none” now always returns the correct navigation id.
- Marionette now supports serialization and deserialization of Window and Frame objects.
Mozilla Thunderbird: When Will Thunderbird For Android Be Released?
When will Thunderbird for Android be released? This is a question that comes up quite a lot, and we appreciate that you’re all excited to finally put Thunderbird in your pocket. It’s not a simple answer, but we’ll do our best to explain why things are taking longer than expected.
We have always been a bit vague on when we were going to release Thunderbird for Android. At first this was because we still had to figure out what features we wanted to add to K-9 Mail before we were comfortable calling it Thunderbird. Once we had a list, we estimated how long it would take to add those features to the app. Then something happened that always happens in software projects – things took longer than expected. So we cut down on features and aimed for a release at the end of 2023. As we got closer to the end of the year, it became clear that even with the reduced set of features, the release date would have almost certainly slipped into early 2024.
We then sat together and reevaluated the situation. In the end we decided that there’s no rush. We’ll work on the features we wanted in the app in the first place, because you deserve the best mobile experience we can give you. Once those features have been added, we’ll release the app as Thunderbird for Android.
Why Wait? Try K-9 Mail Now

But of course you don’t have to wait until then. All our development happens out in the open. The stable version of K-9 Mail contains all of the features we have already completed. The beta version of K-9 Mail contains the feature(s) we’re currently working on.
Both stable and beta versions can be installed via F-Droid or Google Play.
Thunderbird for Android / K-9 Mail: November/December 2023 Progress Report

K-9 Mail’s Future

Side note: Quite a few people seem to love K-9 Mail and have asked us to keep the robot dog around. We believe it should take relatively little effort to build two apps from one code base. The apps would be virtually identical, differing only in app name, app icon, and color scheme. So our current plan is to keep K-9 Mail around.
Whether you prefer metal dogs or mythical birds, we’ve got you covered.
The post When Will Thunderbird For Android Be Released? appeared first on The Thunderbird Blog.