Mozilla Nederland - The Dutch Mozilla community

About:Community: Firefox 66 new contributors

Mozilla planet - Mon, 11/03/2019 - 15:21

With the release of Firefox 66, we are pleased to welcome the 39 developers who contributed their first code change to Firefox in this release, 35 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:


Will Kahn-Greene: Socorro: February 2019 happenings

Mozilla planet - Mon, 11/03/2019 - 14:00
Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.
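For readers curious about the submission side, here is a rough sketch of the shape of a Breakpad-style crash submission: annotations go in as form fields and the minidump as a file part, POSTed as multipart/form-data. The endpoint, annotation keys, and field names below are illustrative assumptions, not a description of Socorro's actual API.

    // Hypothetical sketch of a crash submission (illustrative names only).
    async function submitCrashReport(minidumpBytes) {
      const form = new FormData();
      // Assumed annotation keys, for illustration.
      form.append("ProductName", "Firefox");
      form.append("Version", "66.0");
      form.append("ReleaseChannel", "release");
      // Assumed file field name, following Breakpad conventions.
      form.append("upload_file_minidump", new Blob([minidumpBytes]), "crash.dmp");

      // Assumed collector URL, for illustration.
      const response = await fetch("https://crash-reports.example.com/submit", {
        method: "POST",
        body: form,
      });
      // The collector replies with a crash ID that the processor and the
      // search interface can later use to find the report.
      return response.text();
    }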

This blog post summarizes Socorro activities in February.



The Servo Blog: This Month In Servo 126

Mozilla planet - Mon, 11/03/2019 - 01:30

In the past month, we merged 176 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online. Plans for 2019 will be published soon.

This week’s status updates are here.

Exciting works in progress

Notable Additions
  • jdm improved the rendering of 2d canvas paths with transforms applied.
  • sreeise implemented the DOM interfaces for audio, text, and video tracks.
  • ceyusa added support for hardware accelerated rendering in the media backend.
  • jdm prevented a panic when going back in history from a page using WebGL.
  • paulrouget enabled support for sharing Gecko’s VR process on Oculus devices.
  • asajeffrey made fullscreen content draw over top of any other page content.
  • jdm fixed a regression in hit-testing certain kinds of content.
  • paulrouget added automatic header file generation for the C embedding API.
  • jdm converted the Magic Leap port to use the official embedding API.
  • Manishearth added support for media track constraints to getUserMedia.
  • asajeffrey made the VR embedding API more flexible.
  • Manishearth implemented support for sending and receiving video streams over WebRTC.
  • jdm redesigned the media dependency graph to reduce time spent compiling Servo when making changes.
  • Manishearth added support for extended attributes on types in the WebIDL parser.
  • asajeffrey avoided a deadlock in the VR thread.
  • jdm fixed a severe performance problem when loading sites that use a lot of innerHTML modification.
  • asajeffrey implemented a test VR display that works on desktop.
  • Manishearth implemented several missing WebRTC callbacks.
  • jdm corrected the behaviour of the contentWindow API when navigating an iframe backwards in history.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Mike Conley: Firefox Front-End Performance Update #14

Mozilla planet - Sat, 09/03/2019 - 03:27

We’re only a few weeks away from Firefox 67 merging from the Nightly channel to Beta, and since my last update, a number of things have landed.

It’s the end of a long week for me, so I apologize for the brevity here. Let’s check it out!

Document Splitting Foundations for WebRender (In-Progress by Doug Thayer)

dthayer is still trucking along here – he’s ironed out a number of glitches, and kats is giving feedback on some APZ-related changes. dthayer is also working on a WebRender API endpoint for generating frames for multiple documents in a single transaction, which should help reduce the window of opportunity for nasty synchronization bugs.

Warm-up Service (In-Progress by Doug Thayer)

dthayer is pressing ahead with this experiment to warm up a number of critical files for Firefox shortly after the OS boots. He is working on a prototype that can be controlled via a pref that we’ll be able to test on users in a lab-setting (and perhaps in the wild as a SHIELD experiment).

Startup Cache Telemetry (In-Progress by Doug Thayer)

dthayer landed this Telemetry early in the week, and data has started to trickle in. After a few more days, it should be easier for us to make inferences on how the startup caches are operating out in the wild for our Nightly users.

Smoother Tab Animations (In-Progress by Felipe Gomes)

UX, Product and Engineering are currently hashing out the remainder of the work here. Felipe is also aiming to have the non-responsive tab strip bug fixed soon.

Lazier Hidden Window (Completed by Felipe Gomes)

After a few rounds of landings and backouts, this appears to have stuck! The hidden window is now created after the main window has finished painting, and this has resulted in a nice ts_paint (startup paint) win on our Talos benchmark!

[Figure: a graph of the ts_paint startup paint Talos benchmark. The highlighted node is the first mozilla-central build with the hidden window work. Lower is better, so this looks like a nice win!]

There’s still potential for more improvements on the hidden window, but that’s been split out to a separate project / bug.

Browser Adjustment Project (In-Progress by Gijs Kruitbosch)

This project appears to be reaching its conclusion, but with rather unsatisfying results. Denis Palmeiro from Vicky Chin’s team has done a bunch of testing of both the original set of patches that Gijs landed to lower the global frame rate (painting and compositing) from 60fps to 30fps for low-end machines, and the new patches that decrease the frequency of main-thread painting (but not compositing) to 30fps. Unfortunately, this has not yielded the page load wins that we wanted [1]. We’re still waiting to see if there’s at least a power-usage win here worth pursuing, but we’re almost ready to pull the plug on this one.

Better about:newtab Preloading (In-Progress by Gijs Kruitbosch)

Gijs has a set of patches that should make this possible, which will mean (in theory) that, more often than not, we’ll present a ready-to-roll about:newtab when users request one.

Unfortunately, there’s a small snag with a test failure in automation, but Gijs is on the case.

Experiments with the Process Priority Manager (In-Progress by Mike Conley)

The Process Priority Manager has been enabled in Nightly for a number of weeks now, and no new bugs have been filed against it. I filed a bug earlier this week to run a pref-flip experiment on Beta after the Process Priority Manager patches are uplifted later this month. Our hope is that this has a neutral or positive impact on both page load time and user retention!

Make the PageStyleChild load lazily (Completed by Mike Conley)

There’s an infrequently used feature in Firefox that allows users to switch between different CSS stylesheets that a page might offer. I’ve made the component that scans the document for alternative stylesheets much lazier, and also made it skip non-web pages, which means (at the very least) less code running when loading about:home and about:newtab.


  1. This was unexpected – we ran an experiment late in 2018 where we noticed that lowering the frame rate manually via the layout.frame_rate pref had a positive impact on page load time… unfortunately, this effect is no longer being observed. This might be due to other refresh driver work that has occurred in the meantime. 


Chris H-C: Blast from the Past: I filed a bug against Firefox 3.6.6

Mozilla planet - Fri, 08/03/2019 - 15:40

[Screenshot: the old Bugzilla duplicate finder UI with the text inside table cells not rendering at all]

On June 30, 2010 I was:

  • Sleepy. My daughter had just been born a few months prior and was spending her time pooping, crying, and not sleeping (as babies do).
  • Busy. I was working at Research in Motion (it would be three years before it would be renamed BlackBerry) on the BlackBerry Browser for BlackBerry 6.0. It was a big shift for us since that release was the first one using WebKit instead of the in-house “mango” rendering engine written in BlackBerry-mobile-dialect Java.
  • Keen. Apparently I was filing a bug against Firefox 3.6.6?!

Yeah. I had completely forgotten about this. Apparently while reading my RSS feeds in Google Reader (that doesn’t make me old, does it?) taking in news from Dragonmount about the Wheel of Time (so I guess I’ve always been a nerd, then) the text would sometimes just fail to render. I even caught it happening on the old Bugzilla “possible duplicate finder” UI (see above).

The only reason I was reminded this exists was because I received bugmail on my personal email address when someone accidentally added and removed themselves from the Cc list.

Pretty sure this bug, being no longer reproducible, still in UNCONFIRMED state, and filed against a pre-rapid-release version of Firefox, is something I should close. Yeah, I’ll just go and do that.

:chutten

 


Mike Taylor: A historical look at lowercase defaultstatus

Mozilla planet - Fri, 08/03/2019 - 07:00

The other day I was doing some research on DOM methods and properties that Chrome implements, and has a use counter for, but that don't exist in Firefox.

defaultstatus caught my eye, because like, there's also a use counter for defaultStatus.

(The discerning reader will notice there's a lowercase and a lowerCamelCase version. The less-discerning reader should maybe slow down and start reading from the beginning.)

As far as I know, there's no real spec for these old BOM (Baroque Object Model) properties. It's supposed to allow you to set the default value for window.status, but it probably hasn't done anything in your browser for years.

[Image: some baroque art shit]
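To make the lowercase/lowerCamelCase relationship concrete, here is roughly what the property looks like from a page's point of view (values are illustrative; behavior described is as of early 2019):

    // Pages could set a default message for the browser's status bar;
    // hovering a link would temporarily override it via window.status.
    window.defaultStatus = "Welcome!";  // the lowerCamelCase property
    window.defaultstatus = "Welcome!";  // the all-lowercase WebKit/Chrome alias

    // Which is also why it turns up in browser "inference": at the time of
    // writing this logs true in Chrome and Safari, false in Firefox.
    console.log("defaultstatus" in window);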

Chrome inherited lowercase defaultstatus from Safari, but I would love to know why Safari (or KHTML pre-fork?) added it, and why Opera, Firefox or IE never bothered. Did a site break? Did someone complain about a missing status on a page load? Did this all stem from a typo?

DOMWindow.idl has the following similar-ish comments over the years and probably more, but nothing that points to a bug:

"This attribute is an alias of defaultStatus and is necessary for legacy uses."

"For compatibility with legacy content."

It's hard to pin down exactly when it was added. It's in Safari 0.82's kjs_window.cpp. And in this "old" kde source tree as well. It is in current KHTML sources, so that suggests it was inherited by Safari after all.

Curious to see some code in the wild, I did some bigquerying with BigQuery on the HTTPArchive dataset and got a list of ~3000 sites that have a lowercase defaultstatus. Very exciting stuff.

There are at least 4 kinds of results:

1) False-positive results like var foo_defaultstatus. I could re-run the query, but global warming is real and making Google cloud servers compute more things will only hasten our own destruction.

2) User Agent sniffing, but without looking at navigator.userAgent. I guess you could call it User Agent inference, if you really cared to make a distinction.

Here's an example from some webmail script:

    O.L3 = function(n) {
      switch (n) {
        case 'ie': p = 'execScript'; break;
        case 'ff': p = 'Components'; break;
        case 'op': p = 'opera'; break;
        case 'sf':
        case 'gc':
        case 'wk': p = 'defaultstatus'; break;
      }
      return p && window[p] !== undefined;
    }

And another from some kind of design firm's site:

    browser = (function() {
      return {
        [snip]
        'firefox': window.sidebar,
        'opera': window.opera,
        'webkit': undefined !== window.defaultstatus,
        'safari': undefined !== window.defaultstatus && typeof CharacterData != 'function',
        'chrome': typeof window.chrome === 'object',
        [snip]
      }
    })();

3a) Enumerating over global built-ins. I don't know why people do this. I see some references to Babel, Ember, and JSHint. Are we making sure the scripts aren't leaking globals? Or trying to overwrite built-ins? Who knows.

3b) Actual usage, on old sites. Here's a few examples:

    <body background="images/bvs_green_bkg.gif" bgcolor="#598580" text="#A2FF00" onload="window.defaultstatus=document.title;return true;">

    <body onload="window.defaultstatus='Индийский гороскоп - ведическая астрология, джйотиш онлайн.'">

This one is my favorite, and not just because the site never calls it:

function rem() { window.defaultstatus="ok" }

OK, so what have we learned? I'm not sure we've learned much of anything, to be honest.

If Chrome were to remove defaultstatus, the code using it as intended wouldn't break; a new global would be set, but that's not a huge deal. I guess the big risk is breaking UA sniffing and ending up in an unanticipated code-path, or worse, opting users into some kind of "your undetected browser isn't supported, download Netscape 2" scenario.

Anyways, window.defaultstatus, or window.defaultStatus for that matter, isn't as cool or interesting as Caravaggio would have you believe. Thanks for reading.


Mozilla Thunderbird: FOSDEM 2019 and DeltaChat

Mozilla planet - Thu, 07/03/2019 - 22:35

During the last month we attended two events: FOSDEM, Europe’s premier free software event, and a meetup with the folks behind DeltaChat. At both events we met great people, had interesting conversations, and talked through potential future collaboration with Thunderbird. This post details some of our conversations and insights gathered from those events.

FOSDEM 2019

Magnus (Thunderbird Technical Manager), Kai (Thunderbird Security Engineer), and I (Ryan, Community Manager) arrived in Brussels for Europe’s premier free software event (free as in freedom, not beer): FOSDEM. I was excited to meet many of our contributors in-person who I’d only met online. It’s exhilarating to be looking someone in the eye and having a truly human interaction around something that you’re passionate about – this is what makes FOSDEM a blast.

There are too many conversations that we had to detail in their entirety in this blog post, but below are some highlights.

Chat over IMAP/Email

One thing we discussed at FOSDEM was Chat over IMAP with the people from Open-Xchange. Robert even gave a talk called “Break the Messaging Silos with COI”. They made a compelling case as to why email is a great medium for chat, and the idea of using a chat that lets you select the provider that stores your data – genius! We followed up FOSDEM with a meetup with the DeltaChat folks in Freiburg, Germany, where we discussed encryption and Chat over Email.

Encryption, Encryption, Encryption

We discussed encryption a lot, primarily because we have been thinking about it a lot as a project. With the rising awareness of users about privacy concerns in tech, services like Protonmail getting a lot of attention, and in acknowledgement that many Thunderbird users rely on encrypted Email for their security – it was important that we use this opportunity to talk with our sister projects, contributors, and users about how we can do better.

Sequoia-PGP

We were very grateful that the Sequoia-PGP team took the time to sit down with us and listen to our ideas and concerns around improving encrypted Email support in Thunderbird. Sequoia-PGP is an OpenPGP library written in Rust that appears to be pretty solid. There is a potential barrier to incorporating their work into Thunderbird in license compatibility (we use the MPL and they use the GPL). But we discussed a wide range of topics and have continued talking through what is possible following the event. It is my hope that we will find some way to collaborate going forward.

One thing that stood out to me about the Sequoia team was their true interest in seeing Thunderbird be the best that it can be, and they seemed to genuinely want to help us. I’m grateful to them for the time that they spent and look forward to getting another opportunity to sit with them and chat.

pEp

Following our discussion with the Sequoia team, we spoke to Volker of the pEp Foundation. Over dinner we discussed Volker’s vision of privacy by default and lowering the barrier of using encryption for all communication. We had spoken to Volker in the past, but it was great to sit around a table, enjoy a meal, and talk about the ways in which we could collaborate. pEp’s approach centers around key management and improved user experience to make encryption more understandable and easier to manage for all users (this is a simplified explanation, see pEp’s website for more information). I very much appreciated Volker taking the time to walk us through their approach, and sharing ideas as to how Thunderbird might move forward. Volker’s passion is infectious and I was happy to get to spend time with him discussing the pEp project.

EteSync

People close to me know that I have a strong desire to see encrypted calendar and contact sync become a standard (I’ve even grabbed the domains cryptdav.com and cryptdav.org). So when I heard that Tom of EteSync was at FOSDEM, I emailed him to set up a time to talk. EteSync is secure, end-to-end encrypted and privacy respecting sync for your contacts, calendars and tasks. That hit the mark!

In our conversation we discussed potential ways to work together, and I encouraged him to try and make this into a standard. He was quite interested and we talked through who we should pull into the conversation to move this forward. I’m happy to say that we’ve managed to get Thunderbird Council Chairman and Lightning Calendar author Philipp Kewisch in on the conversation – so I hope to see us move this along. I’m so glad that Tom created an implementation that will help people maintain their privacy online. We so often focus on securing our communication, but what about the data that is produced from those conversations? He’s doing important work and I’m glad that I was able to find ways to support his vision. Tom also gave a talk at FOSDEM this year, called “Challenges With Building End-to-End Encrypted Applications – Learnings From Etesync”.

Autocrypt on the Train

During FOSDEM we attended a talk about Autocrypt by Vincent Breitmoser. As we headed to Freiburg for our meetup with the people behind DeltaChat, we realized Vincent was on our train and managed to sit with him on the ride over. Vincent was going to the same meetup that we were, so it shouldn’t have been surprising, but it was great to get an opportunity to sit down with him and discuss how the Autocrypt project was doing and the state of email encryption in general.

Vincent reiterated Autocrypt’s focus on raising the floor on encryption, getting as many people using encryption keys as possible and handling some of the complexity around the exchange of keys. We had concerns around the potential for man-in-the-middle attacks when using Autocrypt; Vincent was upfront about that, and we had a useful discussion about balancing the risks and the ease of use of email security. Vincent’s sincerity and humble nature made the conversation an enjoyable one, and I came away having made a new friend. Vincent is a good guy, and following our meetup in Freiburg we have discussed other ways in which we could collaborate.

Other FOSDEM Conversations

Of course, I will inevitably leave out someone in recounting who we talked to at FOSDEM. I had many conversations with old friends, met new people, and shared ideas. I got to meet Elio Qoshi of Ura Design face-to-face for the first time, which was really awesome (they did a style guide and usability study for Thunderbird, and have contributed in a number of other ways). I spoke to the creators of Mailfence, a privacy-focused email provider.

I attended a lot of talks and had my head filled with new perspectives, had preconceived notions challenged, and learned a lot. I hope that we’ll get to return next year and share some of the work that we’re doing now!

DeltaChat in Freiburg

A while before finishing our FOSDEM planning, we were invited by Holger Krekel to come to Freiburg, Germany following FOSDEM and learn more about Chat over Email (as their group calls it), and their implementation – DeltaChat. They use Autocrypt in DeltaChat, so there were conversations about that as well. Patrick Brunschwig, the author of the  Enigmail add-on was also present, and had interesting insights to add to the encryption conversation.

Hanging out at a flat in Freiburg, we spent two days talking through Chat over Email support in Thunderbird, how we might improve encryption in Thunderbird core, and how Thunderbird can enhance its user experience around chat and encryption. Friedel, the author of rpgp, a Rust implementation of OpenPGP, showed up at the event and shared his insights – which we appreciated.

I also got an opportunity to talk with the core maintainer of DeltaChat, Björn Petersen, about the state of chat generally. He started DeltaChat in order to offer an alternative to the existing chat silos, with a focus on an experience that would be on par with the likes of Telegram, Signal, and WhatsApp.

Following more general conversations, I spoke with Björn, Janka, and Xenia about the chat experience in DeltaChat. We discussed what a Chat over Email implementation in Thunderbird might look like, and more broadly talked through other potential UX improvements in the app. Xenia described the process their team went through when polling DeltaChat users about potential improvements and what insights they gained in doing that. We chatted about how what they have learned might apply to Thunderbird and it was very enlightening.

At one point Holger took us to Freiburg’s Chaos Computer Club, and there we got to hang out and talk about a wide range of topics – mostly centered around open source software and privacy. I thought it was fascinating and I got to learn about new projects that are up and coming. I hope to be able to collaborate with some of them to improve Thunderbird. In the end I was grateful that Holger and the rest of the DeltaChat contributors encouraged us to join them for their meetup, and opened up their space for us so that we could spend time with them and learn from them.

Thanks for reading this post! I know it was long, but I hope you found it interesting and learned something from it.


Mozilla Open Policy & Advocacy Blog: One hour takedown deadlines: The wrong answer to Europe’s content regulation question

Mozilla planet - Thu, 07/03/2019 - 18:53

We’ve written a lot recently about the dangers that the EU Terrorist Content regulation poses to internet health and user rights, and efforts to combat violent extremism. One aspect that’s particularly concerning is the rule that all online hosts must remove ‘terrorist content’ within 60 minutes of notification. Here we unpack why that obligation is so problematic, and put forward a more nuanced approach to content takedowns for EU lawmakers.

Since the early days of the web, ‘notice & action’ has been the cornerstone of online content moderation. As there is so much user-generated content online, and because it is incredibly challenging for an internet intermediary to have oversight of each and every user activity, the best way to tackle illegal or harmful content is for online intermediaries to take ‘action’ (e.g. remove it) once they have been ‘notified’ of its existence by a user or another third party. Despite the fast-changing nature of internet technology and policy, this principle has shown remarkable resilience. While it often works imperfectly and there is much that could be done to make the process more effective, it remains a key tool for online content control.

Unfortunately, the EU’s Terrorist Content regulation stretches this tool beyond its limit. Under the proposed rules, all hosting services, regardless of their size, nature, or exposure to ‘terrorist content’, would be obliged to put in place technical and operational infrastructure to remove content within 60 minutes of notification. There are three key reasons why this is a major policy error:

  • Regressive burden: Not all internet companies are the same, and it is reasonable to suggest that in terms of online content control, those who have more should do more. More concretely, it is intuitive that a social media service with billions in revenue and users should be able to remove notified content more quickly than a small family-run online service with a far narrower reach. Unfortunately however, this proposal forces all online services – regardless of their means – to implement the same ambitious 60-minute takedown timeframe. This places a disproportionate burden on those least able to comply, giving an additional competitive advantage to the handful of already dominant online platforms.
  • Incentivises over-removal: A crucial aspect of the notice & action regime is the post-notification review and assessment. Regardless of whether a notification of suspected illegal content comes from a user, a law enforcement authority, or a government agency, it is essential that online services review the notification to assess its validity and conformity with basic evidentiary standards. This ‘quality assurance’ aspect is essential given how often notifications are either inaccurate, incomplete, or in some instances, bogus. However, a hard deadline of 60 minutes to remove notified content makes it almost impossible for most online services to do the kind of content moderation due diligence that would minimise this risk. What’s likely to result is the over-removal of lawful content. Worryingly, the risk is especially high for ‘terrorist content’ given its context-dependent nature and the thin line between intentionally terroristic and good-faith public interest reporting.
  • Little proof that it actually works: Most troubling about the European Commission’s 60-minute takedown proposal is that there doesn’t seem to be any compelling reason why 60 minutes is an appropriate or necessary timeframe. To date, the Commission has produced no research or evidence to justify this approach; a surprising state of affairs given how radically this obligation departs from existing policy norms. At the same time, a ‘hard’ 60-minute deadline strips the content moderation process of strategy and nuance, allowing for no distinction between the type of terrorist content, its likely reach, or the likelihood that it will incite terrorist offences. With no distinction there can be no prioritisation.

For context, the decision by the German government to mandate a takedown deadline of 24 hours for ‘obviously illegal’ hate speech in its 2017 ‘NetzDG’ law sparked considerable controversy on the basis of the risks outlined above. The Commission’s proposal brings a whole new level of risk. Ultimately, the 60-minute takedown deadline in the Terrorist Content regulation is likely to undermine the ability for new and smaller internet services to compete in the marketplace, and creates the enabling environment for interference with user rights. Worse, there is nothing to suggest that it will help reduce the terrorist threat or the problem of radicalisation in Europe.

From our perspective, the deadline should be replaced by a principle-based approach, which ensures the notice & action process is scaled according to different companies’ exposure to terrorist content and their resources. For that reason, we welcome amendments that have been suggested in some European Parliament committees that call for terrorist content to be removed ‘expeditiously’ or ‘without undue delay’ upon notification. This approach would ensure that online intermediaries make the removal of terrorist content from their services a key operational objective, but in a way which is reflective of their exposure, the technical architecture, their resources, and the risk such content is likely to pose.

As we’ve argued consistently, one of the EU Terrorist Content regulation’s biggest flaws is its lack of any proportionality criterion. Replacing the hard 60-minute takedown deadline with a principle-based approach would go a long way towards addressing that. While this won’t fix everything – there are still major concerns with regard to upload filtering, the unconstrained role of government agencies, and the definition of terrorist content – it would be an important step in the right direction.



Firefox UX: How to validate an idea when you’re not working in a startup.

Mozilla planet - Thu, 07/03/2019 - 17:05
I had a brilliant idea! How do I get stakeholders to understand whether the market sees it in the same way?

People in startups try hard to avoid spending time and money building a product that doesn’t achieve product/market fit, and so do tech companies. Resources are always limited. Making the right decisions about where to put those resources is a serious matter in organizations, and sometimes it’s even harder to make one than in a startup.

With ChecknShare, an experimental product idea from Mozilla Taipei for improving Taiwanese seniors’ online sharing experience, we learned a lot after doing several rounds of validation. In our retrospective meeting, we found the process could be made more efficient by validating our ideas and communicating with our stakeholders at the same time.

Here are 3 steps that I suggest for validating your idea:

Step 1: Define hypotheses with stakeholders

Having hypotheses in the planning stage is essential, but never forget to include stakeholders when making your beautiful list of hypotheses. Share your product ideas with stakeholders, and ask them if they have any questions. Take their questions into consideration to plan for a method which can cover them all.

Your stakeholders might be too busy to participate in the process of defining the hypotheses. It’s understandable, you just need to be sure they all agree on the hypotheses before you start validating.

Step 2: Identify the purpose of validating your idea

Are you just trying to get some feedback for further iteration? Or do you need to show some results to your stakeholders in order to get some engagement/ resources from them? The purpose might influence how you select the validation methods.

There are two types of validation methods: qualitative and quantitative. Quantitative methods focus on finding “what the results look like”, while qualitative methods focus on “why/how these results came about”. If you’re trying to get some insights for design iteration, knowing “why users have trouble falling in love with your idea” could be your first priority in the validation stage. Nevertheless, things might be different when you’re trying to get your stakeholders to agree.

From the path that ChecknShare has gone through, quantitative results were much more effective at influencing stakeholders, as concrete numbers were interpreted as a representation of a real-world situation. I’m not saying quantitative methods are “must-dos” during the validation stage, but be sure to select a method that speaks your stakeholders’ language.

Step 3: Select validation methods that validate the hypotheses precisely

With the hypotheses that were acknowledged by your stakeholders and the purpose behind the validation, you can select methods wisely without wasting time on inconsequential work.

In the following, I’m going to introduce the 5 validation methods that we conducted for ChecknShare and the lessons we’ve learned from each of them. I hope these shared lessons can help you find your perfect one. Starting with the qualitative methods:

Qualitative Validation Methods

1. Participatory Workshop

The participatory workshop was an approach for us to validate the initial ideas generated from the design sprint. During the co-design process, we had 6 participants who matched our target user criteria. We prioritized the scenario, got first-hand feedback on the ideas, and did quick iterations with our participants. (For more details on how we hosted the workshop, please take a look at the blog post I wrote previously.)

Although hosting a workshop externally can be challenging due to logistical work like recruiting relevant participants and finding a large space to accommodate people, we see the participatory workshop as a fast and effective approach for having early interactions with our target users.

2. Physical pitching survey

[Figure: the pitching session in a local learning center]

In order to see how our target market would react to the idea at an early stage, we hosted a pitching session in a local learning center that offered free courses for seniors to learn how to use smartphones. During the pitching session, we handed out paper questionnaires to investigate their smartphone behaviors, their interest in the idea, and their willingness to participate in our future user testing.

It was our first time experimenting with a physical survey instead of sitting in the office and deploying surveys through virtual platforms. A physical survey isn’t the best approach for getting a massive number of responses in a short time. However, we got a chance to talk to real people, see their emotional expressions as we pitched the idea, recruit user testing participants, and pilot test a potential channel for our future go-to-market strategy.

Moreover, we invited our stakeholders to attend the pitching session. It provided a chance for them to be immersed in the environment and feel more empathy for our target users. The priceless experience made our later conversations with stakeholders more realistic when we were evaluating the risk and potential of a target user group the team wasn’t very familiar with.

[Figure: our stakeholders chatting with seniors during the pitching session]

3. User Testing

During user testing, we were focusing on the satisfaction level of the product features and the usability of the UI flow. For the usability testing, we provided several pairs of paper prototypes for A/B testing participants’ understanding of the copy and UI design, and an interactive prototype to see if they could accomplish the tasks we assigned. The feedback indicated the areas that needed to be tweaked in the following iteration.

[Figure: A/B testing the product feature using paper prototypes]

User testing can yield very different results depending on how you design it. From our experience of conducting a user test that combined concept testing and usability testing, we learned that the usability testing could be postponed to the production stage, since polishing the detailed design was premature before production was officially kicked off by stakeholders.

Quantitative Validation Methods

When we realized that qualitative results didn’t speak our stakeholders’ language, we went back over our stakeholders’ questions holistically and applied quantitative methods to answer them. Here are the 2 methods we applied:

4. Online Survey

To understand the potential market size and the product value proposition, which our stakeholders considered of great importance, we designed an online survey that investigated current sharing behavior and feature preferences across different age groups. It helped us to see whether there were any other user segments similar to seniors, and what the priority of the features should be.

[Figure: the pie chart and bar chart reveal the portion of our target users]

[Figure: the EDM we sent out to spread the online survey]

The challenge of conducting an online survey is finding an efficient deployment channel with less bias. Since the age range of our target respondents was quite wide (from age 21 to 65, 9 segments), conducting the online survey became more time-consuming than we expected. To get at least 50 responses from each age bracket, we delivered survey invitations through Mozilla Taiwan’s social account, sent out an EDM in collaboration with our media partner, and also bought responses from SurveyMonkey.

When we reviewed the entire survey results with our stakeholders, we had a constructive discussion and made progress on defining our target audience and the value proposition based on solid numbers. An online survey can be an easier approach if the survey scope covers a narrower age range. To make constructive discussions happen earlier, we’d suggest running a quick survey once the product concept is settled.

5. Landing Page Test

We couldn’t just use a survey to investigate participants’ willingness to download the app, since it’s very hard to avoid leading questions. Therefore, the team decided to run a landing page test and see how the real market reacted to the product concept. We designed a landing page which contained a key message, a product introduction of the top 3 features, several CTA buttons for email signup, and a hidden email-collecting section that only showed when a participant clicked on a CTA button. We intentionally kept the page structure similar to a common landing page. (Have no idea what a landing page test is? Scott McLeod published a thorough landing page test guide which might be very helpful for you :)) Along with the landing page, we had an ad banner consistent with our landing page design.

We ran our ad on the Google Display Network for 5 days and got 10x more visitors than the previous online survey had responses, the largest number of participants of all the validations we conducted. The CTR and conversion rate were quite persuasive, so ChecknShare finally got support from our stakeholders and the team was able to start thinking about more details around design implementation.

Landing page tests are uncommon in Taiwan’s software industry, not to mention testing product concepts for seniors. We weren’t very confident about getting reliable results at the beginning, but the test ended up reaching more seniors than any other step in our long validation journey. Here I’ve summarized some suggestions for running a landing page test:

  • Set success criteria with stakeholders before running the test.
    Finding a reasonable benchmark target is essential. There’s no such thing as an absolute number for setting a KPI because it can vary depending on the region, acquisition channels, and the product category.
  • Make sure your copy can deliver the key product values in a 5–10 second read.
    The copy on both the ad and the landing page should be simple, clear, and touching. Simply pilot testing the copy with fresh eyes can be very insightful for copy iterations.
  • Reduce any factors that might influence the reading experience.
    Don’t let the website design ruin your test results. Remember to check the accessibility of your website (especially text size and contrast ratio). Pairing comprehensible illustrations, UI screens or even some animation of the UI flow with your copy can be very helpful in making it easier to understand.
The endless quantitative-qualitative dilemma

“What if I don’t have sufficient time to do both qualitative and quantitative testing?” you might ask.

We believe that having both qualitative and quantitative results is important; each supports the other. If you don’t have time to do both, take a step back, talk with your stakeholders, and think about which criteria are the most important ones that have to be true for the product to succeed.

There’s no perfect method to validate all types of hypotheses precisely. Keep asking yourself why you need to do this validation, and be creative.

References:
1. 8 tips for hosting your first participatory workshop — Tina Hsieh
2. How to setup a landing page for testing a business or product idea. — Scott McLeod
3. How to Test and Validate Startup Ideas — Mitch Robinson



Mark Surman: Mozilla, AI and internet health: an update

Mozilla planet - Wed, 06/03/2019 - 17:12

Last year the Mozilla team asked itself: what concrete improvements to the health of the internet do we want to tackle over the next 3–5 years?

We looked at a number of different areas we could focus on. Making the ad economy more ethical. Combating online harassment. Countering the rush to biometric everything. All worthy topics.

As my colleague Ashley noted in her November blog post, we settled in the end on the topic of ‘better machine decision making’. This means we will focus a big part of our internet health movement building work on pushing the world of AI to be more human — and more humane.

Earlier this year, we looked in earnest at how to get started. We have now mapped out a list of first steps we will take across our main program areas — and we’re digging in. Here are some of the highlights of the tasks we’ve set for ourselves this year:

Shape the agenda

  • Bring the ‘better machine decision making’ concept to life by leaning into a focus on AI in the Internet Health Report, MozFest and press pitches about our fellows.
  • Shake up the public narrative about AI by promoting — and funding — artists working on topics like automated censorship, behavioural manipulation and discriminatory hiring.
  • Define a specific (policy) agenda by bringing in senior fellows to ask questions like: ‘how do we use GDPR to push on AI issues?’; or ‘could we turn platforms into info fiduciaries?’

Connect Leaders

  • Highlight the role of AI in areas like privacy and discrimination by widely promoting the work of fellowship, host orgs and MozFest alumni working on these issues.
  • Promote ethics in computer science education through a $3.5M award fund for professors, knowing we need to get engineers thinking about ethics issues to create better AI.
  • Find allies working on AI + consumer tech issues by heavily focusing our ‘hosted fellowships’ in this area — and then building a loose coalition amongst host orgs.

Rally citizens

  • Show consumers how pervasive machine decision making is by growing the number of products that include AI covered in the Privacy Not Included buyers guide.
  • Shine a light on AI, misinformation and tech platforms through a high profile EU election campaign, starting with a public letter to Facebook and political ad transparency.
  • Lend a hand to developers who care about ethics and AI by exploring ideas like the Union of Concerned Technologists and an ‘ethics Q+A’ campaign at campus recruiting fairs.

We’re also actively refining our definition of ‘better machine decision making’ — and developing a more detailed theory of how we make it happen. A first step in this process was to update the better machine decision making issue brief that we first developed back in November. This process has proven helpful and gives us something crisper to work from. However, we still have a ways to go in setting out a clear impact goal for this work.

As a next step, I’m going to post a series of reflections that came to me in writing this document. I’m going to invite other people to do the same. I’m also going to work with my colleague Sam to look closely at Mozilla’s internet health theory of change through an AI lens — poking at the question of how we might change industry norms, government policy and consumer demand to drive better machine decision making.

The approach we are taking is: 1. dive in and take action; 2. reflect and refine our thinking as we go; 3. engage our community and allies as we do these things; and 4. rinse and repeat. Figuring out where we go — and where we can make concrete change on how AI gets made and used — has to be an iterative process. That’s why we’ll keep cycling through these steps as we go.

With that in mind, myself and others from the Mozilla team will start providing updates and reflections on our blogs. We’ll also be posting invitations to get involved as we go. And we will track it all on the nascent Mozilla AI wiki, which you can use to follow along and get involved.



Mozilla Reps Community: Rep of the Month – November 2018

Mozilla planet - Wed, 06/03/2019 - 12:06

Please join us in congratulating Viswaprasath KS, our Rep of the Month for November 2018!

Viswaprasath KS, also known as iamvp7, is a long-time Mozillian from India who joined the Mozilla Rep program in June 2013. By profession he works as a software developer. He initially started contributing with designs and SUMO (Army of Awesome). He was also part of the Firefox Student Ambassador E-Board and helped students to build exciting Firefox OS apps. In May 2014 he became one of the Firefox OS app reviewers.


Currently he is an active Mozilla TechSpeaker and loves to evangelise about WebExtensions and Progressive Web Apps. He has been an inspiration to many and loves to keep working towards a better web. He has worked extensively on Rust and WebExtensions, conducting many informative sessions on these topics recently. Together with other Mozillians he also wrote “Building Browser Extension”.

Thanks Viswaprasath, keep rocking the Open Web! :tada: :tada:

To congratulate him, please head over to Discourse!


Mozilla Open Policy & Advocacy Blog: Indian government allows expanded private sector use of Aadhaar through ordinance (but still no movement on data protection law)

Mozilla planet - Tue, 05/03/2019 - 11:34

On Thursday, the Indian government approved an ordinance — an extraordinary procedure allowing the government to enact legislation without Parliamentary approval — that threatens to dilute the impact of the Supreme Court’s decision last September.

The Court had placed fundamental limits on the otherwise ubiquitous use of Aadhaar, India’s biometric ID system, including the requirement of an authorizing law for any private sector use. While the ordinance purports to provide this legal backing, its broad scope could dilute both the letter and intent of the judgment. As per the ordinance, companies will now be able to authenticate using Aadhaar as long as the Unique Identification Authority of India (UIDAI) is satisfied that “certain standards of privacy and security” are met. These standards remain undefined, and especially in the absence of a data protection law, this raises serious concerns.

The swift movement to foster expanded use of Aadhaar is in stark contrast to the lack of progress on advancing a data protection bill that would safeguard the rights of Indians whose data is implicated in this system. Aadhaar continues to be effectively mandatory for a vast majority of Indian residents, given its requirement for the payment of income tax and various government welfare schemes. Mozilla has repeatedly warned of the dangers of a centralized database of biometric information and authentication logs.

The implementation of these changes with no public consultation only exacerbates the lack of public accountability that has plagued the project. We urge the Indian government to consider the serious privacy and security risks from expanded private sector use of Aadhaar. The ordinance will need to gain Parliamentary approval in the upcoming session (and within six months) or else it will lapse. We urge the Parliament not to push through this law which clearly dilutes the Supreme Court’s diktat, and any subsequent proposals must be preceded by wide public consultation and debate.

 



QMO: DevEdition 66 Beta 14 Friday, March 8th

Mozilla planet - Tue, 05/03/2019 - 11:09

Hello Mozillians,

We are happy to let you know that on Friday, March 8th, we are organizing the DevEdition 66 Beta 14 Testday. We’ll be focusing our testing on Firefox Screenshots, Search, and build installation & uninstallation.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!


Daniel Stenberg: Julia’s cheat sheet for curl

Mozilla planet - Tue, 05/03/2019 - 08:44

Julia Evans makes these lovely comic style cheat sheets for various linux/unix networking tools and a while ago she made one for curl. I figured I’d show it here if you happened to miss her awesome work.

And yes, people have already pointed out to her that


Ian Bicking: The Firefox Experiments I Would Have Liked To Try

Mozilla planet - Mon, 04/03/2019 - 07:00

I have been part of the Firefox Test Pilot team for several years. I had a long list of things I wanted to build. Some I didn’t personally want to build, but I thought they were interesting ideas. I didn’t get very far through this list at all, and now that Test Pilot is being retired I am unlikely to get to them in the future.

Given this I feel I have to move this work out of my head, and publishing a list of ideas seems like an okay way to do that. Many of these ideas were inspired by something I saw in the wild, sometimes a complete product (envy on my part!), or the seed of an idea embedded in some other product.

The experiments are a spread: some are little features that seem potentially useful. Others are features seen elsewhere that show promise from user research, but we could only ship them with confidence if we did our own analysis. Some of these are just ideas for how to explore an area more deeply, without a clear product in mind.

Test Pilot’s purpose was to find things worth shipping in the browser, which means some of these experiments aren’t novel, but there is an underlying question: would people actually use it? We can look at competitors to get ideas, but we have to ship something ourselves if we want to analyze the benefit.


Sticky Reader Mode

[Image: mockup of Sticky Reader Mode]

Give Reader Mode in Firefox a preference to make it per-domain sticky. E.g. if I use Reader Mode on nytimes.com and then if I visit an article on nytimes.com in the future it’ll automatically convert to reader mode. (The nytimes.com homepage would not be a candidate for that mode.)

I made an experiment in sticky-reader-mode, and I think it works really nicely. It changes the browsing experience significantly, and most importantly it doesn’t require frequent proactive engagement to change behavior. Lots of these proposed ideas are tools that require high engagement by the user, and if you don’t invoke the tool then they do nothing. In practice no one (myself included) remembers to invoke these tools. Once you click the preference on a site Sticky Reader Mode then you are opted in to this new experience with no further action required.

There are a bunch of similar add-ons. Sticky Reader Mode works a bit better than most because of its interface, and it will push you directly into Reader Mode without rendering the normal page. But it does this by using APIs that are not public to normal WebExtensions. As a result it can’t be shipped outside Test Pilot, and can’t go in addons.mozilla.org. So… just trust me, it’s great.
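For a feel of the overall shape using only public WebExtension APIs (unlike the actual experiment, which needs the non-public APIs mentioned above), here is a minimal background-script sketch. It assumes the "tabs" and "storage" permissions and a "stickyHosts" list kept in storage.local:

    // If a page finishes loading on a "sticky" host and Firefox considers it
    // reader-able, flip the tab into Reader Mode.
    async function isSticky(url) {
      const { stickyHosts = [] } = await browser.storage.local.get("stickyHosts");
      return stickyHosts.includes(new URL(url).hostname);
    }

    browser.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
      if (changeInfo.status !== "complete") return;
      // isArticle may settle shortly after load; a fuller version would also
      // watch changeInfo.isArticle.
      if (!tab.isArticle || tab.isInReaderMode) return;
      if (await isSticky(tab.url)) {
        browser.tabs.toggleReaderMode(tabId); // Firefox 58+
      }
    });

Note that this version still renders the normal page first; jumping straight into Reader Mode without that render is the part that needs the non-public APIs.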

Recently I’ve come upon Brave Speed Reader which is similar, but without per-site opt-in, and using machine learning to identify articles.

Cloud Browser

[Image: mockup of a Cloud Browser]

Run a browser/user-agent in the cloud and use a mobile view as a kind of semantic or parsed view on that user agent (the phone would just control the browser that is hosted on the cloud). At its simplest we just take the page, simplify it in a few ways, and send it on - similar to what Opera Mini does. The approach lends itself to a variety of task-oriented representations of remote content.

When I first wrote this down I had just stared at my phone while it took 30 seconds to show me a 404 page. The browser probably knew after a couple seconds that it was a 404 but it acted as a rendering engine and not a user agent, so the browser insisted on faithfully rendering the useless not found page.

Obviously running a full browser instance in the cloud is resource hungry and finicky but I think we could ignore those issues while testing. Those are hard but solved operational issues.

Prior art: Opera Mini does some of this. Puffin is specifically cloud rendering for mobile. Light Point does the same for security reasons.

I later encountered brow.sh which is another interesting take on this (specifically with html.brow.sh).

This is a very big task, but I still believe there’s tremendous potential in it. Most of my concepts are not mobile-based, in part because I don’t like mobile, I don’t like myself when using a mobile device, and it’s not something I want to put my energy into. But I still like this idea.

Modal Page Actions

[Image: mockup of Modal Page Actions]

This was tangentially inspired by Vivaldi’s Image Properties, not because of the interface, but thinking about how to fit high-information inspection tools into the browser.

The idea: instead of context menus, page actions, or other interaction points that are part of the “chrome”, implement one overlay interface: the do-something-with-this-page interface. Might also be do-something-with-this-element (e.g. replacing the 7 image-related context menu items: View Image, Copy Image, Copy Image Location, Save Image As, Email Image, Set As Desktop Background, and View Image Info).

The interface would be an overlay onto the page, similar to what happens when you start Screenshots:

[Image: the Screenshots interface]

Everything that is now in the Page Action menu (the ... in the URL bar), or in the context menu, would be available here. Some items might have a richer interface, e.g., Send Tab To Device would show the devices directly instead of using a submenu. Bookmarking would include some inline UI for managing the resulting bookmark, and so on.
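Mechanically, the overlay half of the idea is straightforward; here is a rough content-script sketch of a full-page modal layer (the actions, labels, and styling are placeholders, not the proposed design):

    // Dim the page and show a centered panel of actions, Screenshots-style.
    function showPageActionOverlay(actions) {
      const overlay = document.createElement("div");
      overlay.style.cssText =
        "position: fixed; inset: 0; z-index: 2147483647;" +
        "background: rgba(0, 0, 0, 0.5); display: flex;" +
        "align-items: center; justify-content: center;";
      const panel = document.createElement("div");
      panel.style.cssText =
        "background: #fff; padding: 16px; border-radius: 8px; min-width: 320px;";
      for (const action of actions) {
        const button = document.createElement("button");
        button.textContent = action.label;
        button.style.display = "block";
        button.addEventListener("click", () => {
          overlay.remove();
          action.run();
        });
        panel.appendChild(button);
      }
      // Clicking outside the panel dismisses the overlay.
      overlay.addEventListener("click", (event) => {
        if (event.target === overlay) overlay.remove();
      });
      overlay.appendChild(panel);
      document.documentElement.appendChild(overlay);
    }

    // Placeholder actions for illustration.
    showPageActionOverlay([
      { label: "Bookmark this page", run: () => console.log("bookmark") },
      { label: "Send tab to device", run: () => console.log("send tab") },
    ]);

Because this panel lives in the content area rather than in browser chrome, it runs straight into the concern discussed next.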

There was some pushback because of the line of death – that is, the idea that all trusted content must clearly originate from the browser chrome, and not the content area. I do not believe in the Line of Death: it’s something users could use to form trust, but I don’t believe they do use it (further user research required).

The general pattern is inspired by mobile interfaces which are typically much more modal than desktop interfaces. Modal interfaces have gotten a bad rap, I think somewhat undeserved: modal interfaces are also interfaces that guide you through processes, or ask you to explicitly dismiss the interface. It’s not unreasonable to expect someone to finish what they start.

Find+1

[Image: mockup of Find+1]

We have find-in-page but what about find-in-anything-linked-from-this-page?

Hit Cmd-Shift-F and you get an interface to do that. All the linked pages will be loaded in the background and as you search we show snippets of matching pages. Clicking on a snippet opens or focuses the tab and goes to where the search term was found.

I started experimenting in find-plus-one and encountered some challenges: hidden tabs aren’t good workers, loading pages in the background takes a lot of grinding in Firefox, and most links on pages are stupid (e.g., I don’t want to search your Careers page). An important building block would be a way to identify the important (non-navigational) parts of a page. Maybe lighter-weight ways to load pages (in other projects I’ve used CSP injection). The Copy Keeper concept did come about while I experimented with this.

A simpler implementation of this might simply do a text search of all your open tabs, which would be technically simpler and mostly an exercise in making a good representation of the results.
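As a starting point for that simpler variant, here is a hedged sketch (assuming a WebExtension with the "tabs" and "<all_urls>" permissions) that searches the text of every open tab and collects snippets for a results UI:

    async function findInOpenTabs(term) {
      const needle = term.toLowerCase();
      const tabs = await browser.tabs.query({ url: ["http://*/*", "https://*/*"] });
      const results = [];
      for (const tab of tabs) {
        try {
          // Pull the visible text out of the page.
          const [text] = await browser.tabs.executeScript(tab.id, {
            code: "document.body ? document.body.innerText : ''",
          });
          const index = text.toLowerCase().indexOf(needle);
          if (index !== -1) {
            results.push({
              tabId: tab.id,
              title: tab.title,
              // A short snippet around the first match, for display.
              snippet: text.slice(Math.max(0, index - 60), index + 60),
            });
          }
        } catch (e) {
          // Some tabs (about: pages, etc.) can't be scripted; skip them.
        }
      }
      return results;
    }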

Your Front Page

[Image: mockup of Your Front Page]

Create a front page of news from the sites you already visit. Like an RSS reader, but prepopulated with your history. This creates an immediate well-populated experience.

My initial thought was to use ad hoc parsers for popular news sites, and to run an experiment with just a long whitelist of news providers.

I got the feedback: why not just use RSS? Good question: I thought RSS was kind of passé, but I hadn’t looked for myself. I went on to do some analysis of RSS, and found it available for almost all news sites. The autodetection (<link rel=alternate>) is not as widely available, and it requires manual searching to find many feeds. Still RSS is a good way to get an up-to-date list of articles and their titles. Article content isn’t well represented and other article metadata is inaccurate or malformed (e.g., there are no useful tags). So using RSS would be very reasonable discovery mechanism, but an “RSS reader” doesn’t seem like a good direction on the current web.
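The autodetection step itself is cheap. Here is a small sketch of feed discovery, run over a fetched copy of a site you visit often (in an extension this fetch would need host permissions):

    // Collect RSS/Atom feed URLs advertised via <link rel="alternate">.
    function findFeeds(doc, baseUrl) {
      const selector = [
        'link[rel="alternate"][type="application/rss+xml"]',
        'link[rel="alternate"][type="application/atom+xml"]',
      ].join(", ");
      return Array.from(doc.querySelectorAll(selector))
        .map((link) => new URL(link.getAttribute("href"), baseUrl).href);
    }

    async function discoverFeeds(siteUrl) {
      const html = await (await fetch(siteUrl)).text();
      const doc = new DOMParser().parseFromString(html, "text/html");
      return findFeeds(doc, siteUrl);
    }

For sites that don't advertise a feed this way, you still need the manual searching mentioned above.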

Page Archive

This is bringing back old functionality from Page Shot, a project of mine which morphed into Firefox Screenshots: save full DOM copies of pages. What used to be fairly novel is now well represented by several projects (e.g., WebMemex or World Brain Memex).

Unfortunately I have never been able to really make this kind of tool part of my own day-to-day behavior, and I’ve become skeptical it can work for a general populace. But maybe there’s a way to package up this functionality that is more accessible, or happens more implicitly. I forked a version of Page Shot as pagearchive a while ago, with this in mind, but I haven’t (and likely won’t) come back to it.

Personal Historical Archive

This isn’t really a product idea, but instead an approach to developing products.

One can imagine many tools that directly interact or learn from the content of your browsing. There is both a privacy issue here and a privacy opportunity: looking at this data is creepy, but if the tools live in your user agent (that belongs to you and hosts your information locally) then it’s not so creepy.

But it’s really hard to make experiments on this because you need a bunch of data. If you build a tool that starts watching your browsing then it will only slowly build up interesting information. The actual information that is already saved in browser history is interesting, but in my experience it is too limited and of poor quality. For instance, it is quite hard to build up a navigational path from the history when you use multiple tabs.

A better iterative development approach would be one where you have a static set of all the information you might want, and you can apply tools to that information. If you find something good then later you can add new data collection to the browser, secure in the knowledge that it’s going to find interesting things.

I spent quite a bit of effort on this, and produced personal-history-archive. It’s something I still want to come back to. It’s a bit of a mess, because at various times it was retrofitted to collect historical information, to collect it on an ongoing basis, or to collect it when driven by a script. I also tried to build tools in parallel for doing analysis on the resulting database.

This is also a byproduct of experimentation with machine learning. I wanted to apply things I was learning to browser data, but the data I wanted wasn’t there. I spent all my time collecting and cleaning data, and ended up spending only a small amount of time analyzing the data. I suspect I’m not the only one who has done this.

Navigational Breadcrumbs

mockup of Navigational Breadcrumbs

When I click on a link I lose the reminder of why I clicked on it. What on the previous page led me to click on this? Was I promised something? Are there sibling links that I might want to go to directly instead of going back and selecting another link?

This tool would give you additional information about the page you are on, how you got there, and given where you came from, where you might go next. Would this be a sidebar? Overlay content? In a popup? I’m not sure.

Example: using this, if I click on a link from Reddit I will be able to see the title of the Reddit post (which usually doesn’t match the document title), and a link to comments on the page. If I follow a link from Twitter, I’ll be able to see the Tweet I came from.

This could be interesting paired with link preview (like a tentative forward). Maybe the mobile browser Linkbubbles (now integrated into Brave) has some ideas to offer.

Technically this will use some of the techniques from Personal History Archive, which tracks link sources.

This is based on the train of thought I wrote down in an HN comment – itself a response to Freeing the Web from the Browser.

I want to try this still, and have started a repo crossnav but haven’t put anything there yet. I think even some naive approaches could work, just trying to detect the category of link and the related links (e.g., on Reddit the category is other submissions, and the related links are things like comments).

Copy Keeper

mockup of Copy Keeper

A notebook/logbook that is filled in every time you copy from a web page. When you copy, it records (locally):

  • Text of selection
  • HTML of selection
  • Screenshot of the block element around the selection
  • Text around selection
  • Page URL and nearest anchor/id
  • Page title
  • Datetime

This overloads “copy” to mean “remember”.

Clips would be searchable, and could be moved back to the clipboard in different forms (text, HTML, image, bibliographical reference, source URL). Maybe clips would be browsable in a sidebar (maybe the sidebar has to be open for copies to be collected), or clips could be browsed in a normal tab (Library-style).
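
A minimal sketch of the capture side, assuming a WebExtension content script with the "storage" permission (the field names here are mine, not the prototype’s):

    declare const browser: any; // WebExtension global

    document.addEventListener("copy", () => {
      const selection = window.getSelection();
      if (!selection || selection.isCollapsed) return;

      const holder = document.createElement("div");
      holder.appendChild(selection.getRangeAt(0).cloneContents());

      const clip = {
        text: selection.toString(),
        html: holder.innerHTML,
        url: location.href,
        title: document.title,
        savedAt: new Date().toISOString(),
        // the screenshot and surrounding-text pieces would come from the
        // background script (e.g. tabs.captureVisibleTab), not shown here
      };

      browser.storage.local.get("clips").then(({ clips = [] }: { clips?: any[] }) => {
        browser.storage.local.set({ clips: [...clips, clip] });
      });
    });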

I created a prototype in copy-keeper. I thought it was interesting and usable, though whether it would actually get any use in practice I don’t know. It’s one of those tools that seems handy but requires effort, and as a result doesn’t get used.

Change Scout

mockup of Change Scout

(Wherein I both steal a name from another team, and turn it into a category…)

Change Scout will monitor a page for you, and notify you when it changes. Did someone edit the document? Was there activity on an issue? Did an article get updated? Put Change Scout to track it and it will tell you what changes and when.

It would monitor the page inside the browser, so it would have access to personalized and authenticated content. A key task would be finding ways to present changes in an interesting and compact way. In another experiment I tried some very simple change detection tools, and mostly ended up frustrated (small changes look very large to naive algorithms).
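
For illustration, the naive approach looks roughly like this (a WebExtension-style sketch with a made-up storage key), and its weakness is visible in the last line: any textual difference at all counts as a change:

    declare const browser: any; // WebExtension global; assumes the "storage" permission

    function snapshot(doc: Document): string {
      // collapse whitespace so layout-only churn doesn't register as a change
      return (doc.body.innerText || "").replace(/\s+/g, " ").trim();
    }

    async function checkForChanges(key: string): Promise<boolean> {
      const current = snapshot(document);
      const stored = await browser.storage.local.get(key);
      const previous: string | undefined = stored[key];
      await browser.storage.local.set({ [key]: current });
      // timestamps, view counters, and ads all trip this test, which is exactly
      // the over-reporting problem described above
      return previous !== undefined && previous !== current;
    }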

Popup Tab Switcher

Tab Switcher mockup

We take the exact UI of the Side View popup, but make it a tab switcher. “Recent Tabs” are the most recently focused tabs (weighted somewhat by how long you were on the tab), and then there’s the complete scrollable list. Clicking on an item simply focuses that tab. You can close tabs without focusing them.
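
The data side of this is small; a sketch of the recency list and the switch/close actions, assuming the WebExtension tabs API and Firefox’s tabs.Tab.lastAccessed property:

    declare const browser: any; // WebExtension global; assumes the "tabs" permission

    async function recentTabs() {
      const tabs = await browser.tabs.query({});
      // most recently focused first
      return tabs.sort((a: any, b: any) => (b.lastAccessed || 0) - (a.lastAccessed || 0));
    }

    async function focusTab(tab: { id: number; windowId: number }) {
      await browser.tabs.update(tab.id, { active: true });
      await browser.windows.update(tab.windowId, { focused: true });
    }

    async function closeTab(tabId: number) {
      await browser.tabs.remove(tabId); // close without focusing
    }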

I made a prototype in tab-switchr. In it I also added some controls to close tabs, which was very useful for my periodic tab cleanups. Given that it was a proactive tool, I surprised myself by using it frequently. There’s work in Firefox to improve this, unrelated to anything I’ve done. It reminds me a bit of various Tree-Style Tabs, which I both like because they make it easier to see my tabs, and dislike because I ultimately am settled on normal top-tabs. The popup interface is less radical but still provides many of the benefits.

I should probably clean this up a little and publish it.

Personal Podcast

Create your own RSS feed.

  • When you are on a page with some audio source, you can add the audio to your personal feed
  • When on an article, you can generate an audio version that will be added to the feed
  • You get an RSS feed with a random token to make it private (I don’t think podcast apps handle authentication well, but this requires research)
  • Maybe you can just send/text the link to add it to your preferred podcast app
  • If apps don’t accept RSS links very well, maybe something more complicated would be required. An app that just installs an RSS feed? We want to avoid the feed accidentally ending up in podcast directories.

Bookmark Manager

There’s a lot of low-rated bookmark managers in addons.mozilla.org and the Chrome Extension store. Let’s make our own low-rated bookmark manager!

But seriously, this would anticipate updates to the Library and built-in bookmark manager, which are deficient.

Some resources/ideas:

  • Comment with a few gripes
  • Google’s bookmark manager
  • Bookmark section on addons.mozilla.org
  • Bookmark organizers on addons.mozilla.org
  • Relevant WebExtension APIs

Extended Library

mockup of the Extended Library

The “Library” in Firefox is the combination history and bookmark browser you get if you use “Show all bookmarks” or “Show all history”.

In this idea we present the user with a record of their assets, wherever they are.

This is like a history view (and would be built from history), but would use heuristics to pick out certain kinds of things: docs you’ve edited, screenshots you’ve taken, tickets you’ve opened, etc. We’d be trying hard to find long-lived documents in your history, instead of transitional navigation, articles, things you’ve gotten to from public indexes, etc.

Automatically determining what should be tagged as a “library item” would be the hard part. But I think having an organic view of these items, regardless of underlying service, would be quite valuable. The browser has access to all your services, and it’s easy to forget what service hosts the thing you are thinking about.

Text Mobile Screenshot

mockup of Text Mobile Screenshot

This tool will render the tab in a mobile factor (using the devtools responsive design mode), take a full-page screenshot, and text the image and URL to a given number. Probably it would only support texting to yourself.

I’ve looked into this some, and getting the mobile view of a page is not entirely obvious and requires digging around deep in the browser. Devtools does some complicated stuff to display the mobile view. The rest is basic UI flows and operational support.

Email Readable

Emails the Reader Mode version of a site to yourself. In our research, people love to store things in Email, so why not?

Though it lacks the simplicity of this concept, Email Tabs contains this basic functionality. Email This does almost exactly this.

Your History Everywhere

An extension that finds and syncs your history between browsers (particularly between Chrome and Firefox).

This would use the history WebExtension APIs. Maybe we could create a Firefox Sync client in Chrome. Maybe it could be a general way to move things between browsers. Actual synchronization is hard, but creating read-only views into the data in another browser profile is much easier.
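
Reading history out through the WebExtension API is the easy half; a sketch (assuming the "history" permission) that dumps it into a portable shape another browser could import:

    declare const browser: any; // WebExtension global

    async function exportHistory(sinceMs = 0) {
      const items = await browser.history.search({
        text: "",          // empty string matches everything
        startTime: sinceMs,
        maxResults: 100000,
      });
      return items.map((item: any) => ({
        url: item.url,
        title: item.title,
        lastVisitTime: item.lastVisitTime,
        visitCount: item.visitCount,
      }));
    }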

Obviously there’s lots of work to synchronize this data between Firefox properties, and knowing the work involved this isn’t easy and often involves close work with the underlying platform. Without full access to the platform (like on Chrome) we’ll have to find ways to simplify the problem in order to make it feasible.

My Homepage

Everyone (with an FxA account) gets their own homepage on the web. It’s like Geocities! Or maybe closer to github.io.

But more seriously, it would be programmatically accessible simple static hosting. Not just for you to write your own homepage, but an open way for applications to publish user content, without those applications themselves turning into hosting platforms. We’d absorb all the annoyances of hosting content (abuse, copyright, quotas, ops, financing) and let open source developers focus on enabling interesting content generation experiences for users on the open web.

Here’s a general argument why I think this would be a useful thing for us to do. And another from Les Orchard.

Studying what Electron does for people

This is a proposal for user research:

Electron apps are being shipped for many services, including services that don’t require any special system integration. E.g., Slack doesn’t require anything that a web browser can’t do. Spotify maybe catches some play/pause keys, but is very close to being a web site. Yet there is perceived value in having an app.

The user research would focus on cases where the Electron app doesn’t have any/many special permissions. What gives the app value over the web page?

The goal would be to understand the motivations and constraints of users, so we could consider ways to make the in-browser experience equally pleasant to the Electron app.

App quick switcher

Per my previous item: why do I have an IRCCloud app? Why do people use Slack apps? Maybe it’s just because they want to be able to switch into and out of those apps quickly.

A proposed product solution: add a shortcut to any specific (pinned?) tab. Might be autocreated. Using the shortcut when the app is already selected will switch you back to your previously selected tab. Switching to the tab without the shortcut will display a gentle reminder that the shortcut exists (so you can train yourself to start using it).

To make it a little more fancy, I thought we might also be able to do a second related “preview” shortcut. This would let you peek into the window. I’m not sure what “peeking” means. Maybe we just show a popup with a screenshot of that other window.

Maybe this should all just overload ⌘1/2/3 (maybe shift-⌘1/etc for peeking). Note these shortcuts do not currently have memory – you can switch to the first tab with ⌘1, but you can’t switch back.

This is one suggested solution to Whatever Electron does for people.
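
A sketch of the shortcut mechanics using the WebExtension commands API; the command name toggle-app-tab, the manifest wiring, and how appTabId gets chosen are placeholders rather than part of the proposal above:

    declare const browser: any; // background script; needs "tabs" and a manifest.json "commands" entry

    let appTabId: number | null = null;        // set elsewhere, e.g. "use this tab as the app"
    let lastActiveTabId: number | null = null;
    let previousTabId: number | null = null;

    browser.tabs.onActivated.addListener((info: { tabId: number }) => {
      previousTabId = lastActiveTabId;
      lastActiveTabId = info.tabId;
    });

    browser.commands.onCommand.addListener(async (command: string) => {
      if (command !== "toggle-app-tab" || appTabId === null) return;
      const [active] = await browser.tabs.query({ active: true, currentWindow: true });
      // already on the app tab? jump back to where you were; otherwise jump to the app
      const target = active.id === appTabId ? previousTabId : appTabId;
      if (target !== null) await browser.tabs.update(target, { active: true });
    });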

I started some work in quick-switch-extension, but keyboard shortcuts were a bit wonky, and I couldn’t figure out useful additional functionality that would make it fun. Firefox (Nightly?) now has Ctrl-Tab functionality that takes you to recent tabs, mitigating this problem (though it is not nearly as predictable as what I propose here).

Just Save

Just Save saves a page. It’s like a bookmark. Or a remembering. Or an archive. Or all of those all at once.

Just Save is a one-click operation, though a popup does show up (similar in function to Pocket) that would allow some additional annotation of your saved page.

We save:

  1. Link
  2. Title
  3. Standard metadata
  4. Screenshot
  5. Frozen version of page
  6. Scroll position
  7. The tab history
  8. Remember the other open tabs, so if some of them are saved we offer later relations between them
  9. Time the page was saved
  10. Query terms that led to the page
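
As a sketch, the saved record could be a single structure mirroring that list (field names are mine, not from any implementation):

    interface SavedPage {
      url: string;
      title: string;
      metadata: Record<string, string>;   // standard <meta> values
      screenshot: string;                 // data: URL
      frozenHtml: string;                 // frozen copy of the page
      scrollY: number;
      tabHistory: string[];               // URLs this tab had visited
      openTabs: string[];                 // other tabs open at save time, for later relations
      savedAt: number;                    // epoch ms
      queryTerms?: string;                // search query that led to the page, if any
    }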

It’s like bookmarks, but purely focused on saving, while bookmarks do double-duty as a navigational tool. The tool encourages after-the-fact discovery and organization, not at-the-time-of-save choices.

And of course there’s a way to find and manage your saved pages. This idea needs more exploration of why you would return to a page or piece of information, and thus what we’d want to expose and surface from your history. We’ve done research, but it’s really just a start.

Open Search Combined Search

We have several open search providers. How many exist out there? How many could we find in history?

In theory Open Search is an API where a user could do personalized search across many properties, though I’m not sure whether a sufficient number of sites have enabled it.

Notes Commander

It’s Notes, but with slash commands.

In other words it’s a document, but if you complete a line that begins with a / then it will try to execute that command, appending or overwriting from that point.

So for instance /timestamp just replaces itself with a timestamp.

Maybe /page inserts the currently active tab. /search foo puts search results into the document, but as editable (and followable) links. /page save freezes the page as one big data link, and inserts that link into the note.
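
A toy version of the dispatch step, where a completed /command line is replaced by its output; /timestamp and /page here are the hypothetical commands from above:

    declare const browser: any; // WebExtension global, only needed for /page

    type Command = (args: string) => Promise<string>;

    const commands: Record<string, Command> = {
      timestamp: async () => new Date().toISOString(),
      page: async () => {
        const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
        return `${tab.title} <${tab.url}>`;
      },
    };

    async function runSlashLine(line: string): Promise<string> {
      const match = /^\/(\w+)\s*(.*)$/.exec(line.trim());
      if (!match) return line;                       // not a command, leave as-is
      const [, name, args] = match;
      const command = commands[name];
      return command ? command(args) : `(unknown command: /${name})`;
    }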

It’s a little like Slack, but in document form, and with the browser as the context instead of a messaging platform. It’s a little like a notebook programming interface, but less structured and more document-like.

The ability to edit the output of commands is particularly interesting to me, and represents the kind of ad hoc information organizing that we all do regularly.

I experimented some with this in Notes, and got it working a little bit, but working with CKEditor (that Notes is built on) was just awful and I couldn’t get anything to work well. Notes also has a very limited set of supported content (no images or links), which was problematic. Maybe it’s worth doing it from scratch (with ProseMirror or Slate?)

After I tried to mock this up, I realized that the underlying model is much too unclear in my mind. What’s this for? When is it for? What would a list of commands look like?

Another thing I realized while attempting a mockup is that there should be a rich but normalized way to represent pages and URLs and so forth. Often you’ll be referring to URLs of pages that are already open. You may want to open sets of pages, or see immediately which URLs are already open in a tab. A frozen version of a page should be clearly linked to the source of that page, which of course could be an open tab. There’s a lot of pieces to fit together here, both common nouns and verbs, all of which interact with the browser session itself.

Automater

Automation and scripting for your browser: make demonstrations for your browser, give it a name, and you have a repeatable script.

The scripts will happen in the browser itself, not via any backend or scraping tool. In case of failed expectations or changed sites, the script will halt and tell the user.

Scripts could be as simple as “open a new tab pointing to this page every weekday at 9am”, or could involve clipping information, or just doing a navigational pattern before presenting the page to a user.

There’s a huge amount of previous work in this area. I think the challenge here is to create something that doesn’t look like a programming language displayed in a table.

Sidekick

Sidekick is a sidebar interface to anything, or everything, contextually. Some things it might display:

  • Show you the state of your clipboard
  • Show you how you got to the current tab (similar to Navigational Breadcrumbs)
  • Show you other items from the search query that kicked off the current tab
  • Give quick navigation to nearby pages, given the referring page (e.g., the next link, or next set of links)
  • Show you buttons to activate other tabs you are likely to switch to from the current tab
  • Show shopping recommendations or other content-aware widgets
  • Let you save little tidbits (text, links, etc), like an extended clipboard or notepad
  • Show notifications you’ve recently received
  • Peek into other tabs, or load them inline somewhat like Side View
  • Checklists and todos
  • Copy a bunch of links into the sidebar, then treat them like a todo/queue

Possibly it could be treated like an extensible widget holder.

From another perspective: this is like a continuous contextual feature recommender. I.e., it would try to answer the question: what’s the feature you could use right now?

Timed Repetition

Generally in order to commit something to long-term memory you must revisit information later, ideally long enough that it’s a struggle.

Is anything we see in a browser worth committing to long-term memory? Sometimes it feels like nothing is worth remembering, but that’s a kind of nihilism based on the shitty aspects of typical web browsing behavior.

The interface would require some positive assertion: I want to know this. Probably you’d want to highlight the thing you’d “know”. Then, later, we’d want to come up with some challenge. We don’t need a “real” test that is verified by the browser, instead we simply need to ask some related question, then the user can say if they got it right or not (or remembered it or not).
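
The scheduling piece could be as crude as doubling the interval on success and resetting on failure; a sketch (my own rule of thumb, not something specified above):

    interface ReviewItem {
      selectionText: string;
      url: string;
      intervalDays: number;
      nextReview: number; // epoch ms
    }

    function scheduleNext(item: ReviewItem, remembered: boolean, now = Date.now()): ReviewItem {
      const intervalDays = remembered ? item.intervalDays * 2 : 1;  // double, or start over
      return {
        ...item,
        intervalDays,
        nextReview: now + intervalDays * 24 * 60 * 60 * 1000,
      };
    }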

Reader Mode improvements

Reader mode is a bit spartan. Maybe it could be a bit nicer:

  • Pick up some styles or backgrounds from the hosting site
  • Display images or other media differently or more prominently
  • Add back some markup or layout that Readability erases
  • Apply to some other kinds of sites that aren’t articles (e.g., a video site)
  • A multicolumn version like McReadability

Digest Mode

Inspired by Full Hacker News (comments): take a bunch of links (typically articles) and concatenate their content into one page.

Implicitly this requires Reader Mode parsing of the pages, though that is relatively cheap for “normal” articles. Acquiring a list of pages is somewhat less clear. Getting a list of pages is a kind of news/RSS question. Taking a page like Hacker News and figuring out what the “real” links are is another approach that may be interesting. Lists of related links are everywhere, yet hard to formally define.
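
A sketch of the concatenation step using Mozilla’s Readability library (the parser behind Reader Mode); fetching cross-origin pages and choosing the URL list are hand-waved here:

    import { Readability } from "@mozilla/readability";

    async function buildDigest(urls: string[]): Promise<string> {
      const sections: string[] = [];
      for (const url of urls) {
        const html = await (await fetch(url)).text();
        const doc = new DOMParser().parseFromString(html, "text/html");
        const article = new Readability(doc).parse();   // { title, content, ... } or null
        if (article) {
          sections.push(`<h1>${article.title}</h1>${article.content}`);
        }
      }
      return sections.join("<hr>");
    }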

This would work very nicely with complementary text summarization.

Open question: is this actually an interesting or useful way to consume information?

Firefox for X

There’s an underlying concept here worth explaining:

Feature development receives a lot of skepticism. And it’s reasonable: there’s a lot of conceit in a feature, especially embedded in a large product. Are people going to use a product or not because of some little feature? Or maybe the larger challenge: can some feature actually change behavior? Every person has their own thing going on, people aren’t interested in our theories, and really not that many people are interested in browsers. Familiar functionality – the back button, bookmarks, the URL bar, etc. – is what they expect, what they came for, and what they will gravitate to. Everything I’ve written so far in this list is something people won’t actually use.

A browser is particularly problematic because it’s so universal. It’s for sites and apps and articles. It’s for the young and the elderly, the experienced and not. It’s used for serious things, it’s used for concentration, and it’s used for dumb things and to avoid concentrating. How can you build a feature for everyone, targeting anything they might do? And if you build something, how can a person trust a new feature is really for them, not some other person? People are right to be skeptical of the new!

But we also know that most people regularly use more than one browser. Some people use Chrome for personal stuff, and Firefox for work. Some people do the exact opposite. Some people do their banking and finance in a specific browser. Some use a specific browser just for watching videos.

Which browser a person uses for which task is seemingly random. Maybe they were told to use a specific browser for one task, and then the other browser became the fallback. Maybe they heard somewhere once that one browser was more secure. Maybe Flash seemed broken on one browser when they were watching a video, and now a pattern has been set.

This has long seemed like an opportunity to me. Market a browser that actually claims to be the right browser for some of these purposes! Firefox has Developer Edition and it’s been reasonably successful.

This offers an opportunity for both Mozilla and Firefox users to agree on purpose. What is Firefox for? Everything! Is this feature meant for you? Unlikely! In a purpose-built browser both sides can agree what it’s trying to accomplish.

This idea often gets poo-pooed for how much work it is, but I think it’s simpler than it seems. Here’s what a “new browser” means:

  • Something you can find and download from its own page or site
  • It’s Firefox, but uses its own profile, keeping history/etc separate from other browser instances (including Firefox)
  • It has its own name and icon, and probably a theme to make it obvious what browser you are in
  • It comes with some browser extensions and prefs changed, making it more appropriate for the proposed use case

The approach is heavy on marketing and build tools, and light on actual browser engineering.

I also have gotten frequent feedback that Multi-Account Containers should solve all these use cases, but that gets everything backwards. People already understand multiple browsers, and having completely new entry points to bring people to Firefox is a feature, not a bug.

Sadly I think the time for this has passed, maybe in the market generally or maybe just for Mozilla. It would have been a very different approach to the browser.

Some of us in the Test Pilot team had some good brainstorming around actual concepts too, which is where I actually get excited about the ideas:

Firefox Study

For students, studying.

  • Integrate note-taking tools
  • Create project and class-based organizational tools, helping to organize tabs, bookmarks, and notes
  • Tools to document and organize deadlines
  • Citation generators

I don’t know what to do with online lectures and video, but it feels like there are some meaningful improvements to be made in that space. Video-position-aware notetaking tools?

I think the intentionality of opening a browser to study is a good thing. iPads are somewhat popular in education, and I suspect part of that is having a device that isn’t built around multitasking, and using an iPad means stepping away from regular computing.

Firefox Media

To watch videos. This requires very few features, but benefits from just being a separate profile, history, and icon.

There’s a small number of features that might be useful:

  • Cross-service search (like Can I Stream.it or JustWatch)
  • Search defaults to video search
  • Cross-service queue
  • Quick service-based navigation

I realize it’s a lot like Roku in an app.

Firefox for Finance

This is really just about security.

Funny story: people say they value security very highly. But if Mozilla wants to make changes in Firefox that increase security but break some sites – particularly insecure sites – people will then stop using Firefox. They value security highly, but still just below anything at all breaking. This is very frustrating for us.

At the same time, I kind of get it. I’m dorking around on the web and I click through to some dumb site, and I get a big ol’ warning or a blank page or some other weirdness. I didn’t even care about the page or its security, and here my browser is trying to make me care.

That’s true some of the time, but not others. If you are using Firefox for Finance, or Firefox Super Secure, or whatever we might call it, then you really do care.

There’s a second kind of security implied here as well: security from snooping eyes and on shared computers. Firefox Master Password is a useful feature here. Generally there’s an opportunity for secure data at rest.

This is also a vehicle for education in computer security, with an audience that we know is interested.

Firefox Low Bandwidth

Maybe we work with proxy services. Or just do lots of content blocking. In this browser we let content break (and give a control to load the full content), so long as you start out compact.

  • Cache content that isn’t really supposed to be cached
  • Don’t load some kinds of content
  • Block fonts and other seemingly-unimportant content
  • Monitoring tools to see where bandwidth usage is going

Firefox for Kids

Sadly making things for kids is hard, because you are obliged to do all sorts of things if you claim to target children, but you don’t have to do anything if kids just happen to use your tool.

There is an industry of tools in this area that I don’t fully understand, and I’d want to research before thinking about a feature list. But it seems like it comes down to three things:

  • Blocking problematic content
  • Encouraging positive content
  • Monitoring tools for parents

There’s something very uninspiring about that list: it feels long on negativity and short on positive engagement. Coming up with an answer to that is not a simple task.

Firefox Calm

Inspired by a bunch of things:

What would a calm Firefox experience look like? Or maybe it would be better to think about a calm presentation of the web. At some point I wrote out some short pitches:

  • Read without distraction: Read articles like they are articles, not interactive (and manipulative) experiences.
  • Stay focused on one thing at a time: Instead of a giant list of tabs and alerts telling you what we aren’t doing, automatically focus on the one thing you are doing right now.
  • Control your notifications: Instead of letting any site poke at you for any reason, notifications are kept to a minimum and batched.
  • Focused writing: When you need to focus on what you are saying, not what people are saying to you, enter focused writing mode.
  • Get updates without falling down a news hole: Avoid clickbait, don’t reload pages, just see updates from the sites you trust (relates to Your Front Page)
  • Pomodoro: let yourself get distracted… but only a little bit. The Pomodoro technique helps you switch between periods of focused work and letting yourself relax
  • Don’t even ask: Do you want notifications from the news site you visited once? Do you want videos to autoplay? Of course not, and we’ll stop even asking.
  • Suggestion-free browsing: Every page you look at isn’t an invitation to tell you what you should look at next. Remove suggested content, and do what YOU want to do next. (YouTube example)

Concluding thoughts

Not just the conclusion of this list, the conclusion of my work in this area…

Some challenges in the design process:

  1. Asking someone to do something new is hard, and unlikely to happen. My previous post (The Over-engaged Knowledge Worker) relates to this tension.
  2. … and yet a “problem” isn’t enough to get someone to do something either.
  3. If someone is consciously and specifically doing some task, then there’s an opportunity.
  4. Creating holistic solutions is unwelcome; unintuitively, each thing that adds to the size of a solution diminishes the breadth of problems the solution can solve.
  5. … and yet, abstract solutions without any clear suggestion of what they solve aren’t great either!
  6. Figuring out how to package functionality is a big deal.
  7. Approaches that increase the density of information or choices are themselves somewhat burdensome.
  8. … and yet context-sensitive approaches are unpredictable and distracting compared to consistent (if dense) functionality.
  9. I still believe there’s a wealth of material in the content of the pages people encounter. But it’s irregular and hard to understand, and it takes concerted, long-term effort to do something here.
  10. Lots of the easy stuff, the roads well traveled, are still hard for a lot of people. Maybe this can be fixed by optimizing current UI… but I think there’s still room for novel improvements to old ideas.
  11. User research is a really great place to start, but it’s not very prescriptive. It’s mostly problem-finding, not solution-finding.
  12. There’s some kinds of user research I wish I had access to, specifically really low level analysis of behavior. What’s in someone’s mind when they open a new tab, or reuse one? In what order do they scan the UI? What are mental models of a URL, of pages and how they change, in what order to people compose (mentally and physically) things they want to share… it feels like it can go on forever, and there would be a ton of detail in the results, but given all the other constraints these insights feel important.
  13. There are so many variables in an experiment that it’s hard to know what a failure really means. Every experiment that offers a novel experience involves several choices, and any one choice can cause the experiment to fail.

As Test Pilot comes to an end, I do find myself asking: is there room for qualitative improvements in desktop browser UI? Desktop computing is waning. User expectations of a browser are calcified. The only time people make a choice is when something breaks, and the only way to win is to not break anything and hope your competitor does break things.

So, is there room for improvement? Of course there is! The millions of hours spent every day in Firefox alone… this is actually important. Yes, a lot of things are at a local maximum, and we can A/B test little tweaks to get some suboptimal parts to their local maximum. But I do not believe in any way that the browsers we know are the optimal container. The web is bigger than browsers, bigger than desktop or mobile or VR, and a user agent can do unique things beyond any site or app.

And yet…

Categorieën: Mozilla-nl planet

Daniel Stenberg: alt-svc in curl

Mozilla planet - zo, 03/03/2019 - 16:45

RFC 7838 was published back in April 2016. It describes the new HTTP header Alt-Svc, or as the title of the document says, HTTP Alternative Services.

HTTP Alternative Services

An alternative service in HTTP lingo is quite simply another server instance that can provide the same service and act as the same origin as the original one. The alternative service can run on another port, on another host name, on another IP address, or over another HTTP version.

An HTTP server can inform a client about the existence of such alternatives by returning this Alt-Svc header. The header, which has an expiry time, tells the client that there’s an optional alternative to this service that is hosted on that host name, that port number using that protocol. If that client is a browser, it can connect to the alternative in the background and if that works out fine, continue to use that host for the rest of the time that alternative is said to work.

In reality, this header becomes a little similar to the DNS records SRV or URI: it points out a different route to the server than what the A/AAAA records for it say.

The Alt-Svc header came into life as an attempt to help out with HTTP/2 load balancing, since with the introduction of HTTP/2 clients would suddenly use much more persistent and long-living connections instead of the very short ones used for traditional HTTP/1 web browsing, which changed the nature of how connections are made. This way, a system that is about to go down can hint to clients how to continue using the service elsewhere.

Alt-Svc: h2="backup.example.com:443"; ma=2592000;

HTTP upgrades

Once that header was published, the by then already existing and deployed Google QUIC protocol switched to using the Alt-Svc header to hint clients (read “Chrome users”) that “hey, this service is also available over gQUIC”. (Prior to that, they used their own custom alternative header that basically had the same meaning.)

This is important because QUIC is not TCP. Resources on the web that are pointed out using the traditional HTTPS:// URLs, still imply that you connect to them using TCP on port 443 and you negotiate TLS over that connection. Upgrading from HTTP/1 to HTTP/2 on the same connection was “easy” since they were both still TCP and TLS. All we needed then was to use the ALPN extension and voila: a nice and clean version negotiation.

To upgrade a client and server communication into a post-TCP protocol, the only official way to do it is to first connect using the lowest common denominator that the HTTPS URL implies: TLS over TCP, and only once the server tells the client what more there is to try, the client can go on and try out the new toys.

For HTTP/3, this is the official way for HTTP servers to tell users about the availability of an HTTP/3 upgrade option.

curl

I want curl to support HTTP/3 as soon as possible, and as I’ve mentioned above, understanding Alt-Svc is a key prerequisite for a working “bootstrap”. curl needs to support Alt-Svc. While we’re implementing support for it, we can just as well support the whole concept and other protocol versions, not just limit it to HTTP/3 purposes.

curl will only consider received Alt-Svc headers when talking HTTPS, since only then can it know that it is actually speaking with the right host, one that has enough authority to point to other places.

Experimental

This is the first feature and code that we merge into curl under a new concept we use for “experimental” code. It is a way for us to mark this code as: we’re not quite sure exactly how everything should work, so we let users in to test it and help us smooth out the quirks, but as a consequence we might actually change how it works, both behavior- and API-wise, before we make the support official.

We strongly discourage anyone from shipping code marked experimental in production. You need to explicitly enable this in the build to get the feature. (./configure --enable-alt-svc)

But at the same time we urge and encourage interested users to test it out, try how it works and bring back your feedback, criticism, praise, bug reports and help us make it work the way we’d like it to work so that we can make it land as a “normal” feature as soon as possible.

Ship

The experimental alt-svc code has been merged into curl as of commit 98441f3586 (merged March 3rd 2019) and will be present in the curl code starting in the public release 7.64.1 that is planned to ship on March 27, 2019. I don’t have any time schedule for when to remove the experimental tag but ideally it should happen within just a few release cycles.

alt-svc cache

The curl implementation of alt-svc has an in-memory cache of known alternatives. It can also both save that cache to a text file and load that file back into memory. Saving the alt-svc cache to disk allows it to survive curl invocations and to truly work the way it was intended. The cache file stores the expire timestamp per entry so it doesn’t matter if you try to use a stale file.

curl --alt-svc

Caveat: I’m now describing how a feature works that I’ve just said above might change before it ships. With the curl tool you ask for alt-svc support by pointing out the alt-svc cache file to use, for example: curl --alt-svc altsvc-cache.txt https://example.com/. Or pass a "" (empty name) to make it not load or save any file. curl loads an existing cache from that file and, at the end, also saves the cache back to that file.

curl has also long featured fancy connection options such as --resolve and --connect-to, which both let a user control where curl connects, and which in many cases work a little like a static poor man’s alt-svc. Learn more about those in my curl another host post.

libcurl options for alt-svc

We start out the alt-svc support for libcurl with two separate options. One sets the file name to the alt-svc cache on disk (CURLOPT_ALTSVC), and the other control various aspects of how libcurl should behave in regards to alt-svc specifics (CURLOPT_ALTSVC_CTRL).

I’m quite sure that we will have reason to slightly adjust these when the HTTP/3 support comes closer to actually merging.

Categorieën: Mozilla-nl planet

Cameron Kaiser: Another choice for Intel TenFourFox users

Mozilla planet - zo, 03/03/2019 - 01:08
Waaaaaaay back when, I parenthetically mentioned in passing an anonymous someone(tm) trying to resurrect the then-stalled Intel port. Since then we now have a periodically updated unofficial and totally unsupported mainline Intel version, but it wasn't actually that someone who was working on it. That someone now has a release, too.

@OlgaTPark's Intel TenFourFox fork is a bit unusual in that it is based on 45.9 (yes, back before the FPR releases began), so it is missing later updates in the FPR series. On the other hand, it does support Tiger (mainline Intel TenFourFox requires at least 10.5), it additionally supports several features not supported by TenFourFox (by enabling Mozilla features in some of its operating system-specific flavours that are disabled in TenFourFox for reasons of Tiger compatibility), and it includes support for H.264 video with ffmpeg.

H.264 video has been a perennial request which I've repeatedly nixed for reasons of the MPEG LA threatening to remove and purée the genitals of those who would use its patents without a license, and more to the point using ffmpeg in Firefox and TenFourFox probably would have violated the spirit, if not the letter, of the Mozilla Public License. Currently, mainline Firefox implements H.264 using operating system support and the Cisco decoder as an external plugin component. Olga's scheme does much the same thing using a separate component called the FFmpeg Enabler, so it should be possible to implement the glue code in mainline TenFourFox, "allowing" the standalone, separately-distributed enabler to patch in the library and thus sidestepping at least the Mozilla licensing issue. The provided library is a fat dylib with PowerPC and Intel support and the support glue is straightforward enough that I may put experimental support for this mechanism in FPR14.

(Long-time readers will wonder why there is MP3 decoding built into TenFourFox, using minimp3 which itself borrows code from ffmpeg, if I have these objections. There are three simple reasons: MP3 patents have expired, it was easy to do, and I'm a big throbbing hypocrite. One other piece of "OlgaFox" that I'll backport either for FPR13 final or FPR14 is a correctness fix for our MP3 decoder which apparently doesn't trip up PowerPC, but would be good for Intel users.)

Ordinarily I don't like forks using the same name, even if I'm no longer maintaining the code, so that I can avoid receiving spurious support requests or bug reports on code I didn't write. For example, I asked the Oysttyer project to change names from TTYtter after I had ceased maintaining it so that it was clearly recognized they were out on their own, and they graciously did. In this case, though it might be slightly confusing, I haven't requested my usual policy because it is clearly and (better be) widely known that no Intel version of TenFourFox, no matter what version or what features, is supported by me.

On the other hand, if someone used Olga's code as a basis for, say, a 10.5-specific PowerPC fork of TenFourFox enabling features supported in that OS (a la the dearly departed AuroraFox), I would have to insist that the name be changed so we don't get people on Tenderapp with problem reports about it. Fortunately, Olga's release uses the names TenFiveFox and TenSixFox for those operating system-specific versions, and I strongly encourage anyone who wants to do such a Leopard-specific port to follow suit.

Releases can be downloaded from Github, and as always, there is no support and no promises of updates. Do not send support questions about this or any Intel build of TenFourFox to Tenderapp.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: March’s featured extensions

Mozilla planet - vr, 01/03/2019 - 20:23

Pick of the Month: Bitwarden – Free Password Manager

by 8bit Solutions LLC
Store your passwords securely (via encrypted vaults) and sync across devices.

“Works great, looks great, and it works better than it looks.”

Featured: Save Page WE

by DW-dev
Save complete pages or just portions as a single HTML file.

“Good for archiving the web!”

Featured: Terms of Service; Didn’t Read

by Abdullah Diaa, Hugo, Michiel de Jong
A clever tool for cutting through the gibberish of common ToS contracts you encounter around the web.

“Excellent time and privacy saver! Let’s face it, no one reads all the legalese in the ToS of each site used.”

Featured: Feedbro

by Nodetics
An advanced reader for aggregating all of your RSS/Atom/RDF sources.

“The best of its kind. Thank you.”

Featured: Don’t Touch My Tabs!

by Jeroen Swen
Don’t let clicked links take control of your current tab and load content you didn’t ask for.

“Hijacking ads! Deal with it now!”

Featured: DuckDuckGo Privacy Essentials

by DuckDuckGo
Search with enhanced security—tracker blocking, smarter encryption, private search, and other privacy perks.

“Perfect extension for blocking trackers while not breaking webpages.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post March’s featured extensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Bleach: stepping down as maintainer

Mozilla planet - vr, 01/03/2019 - 15:00
What is it?

Bleach is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML.

I'm stepping down

In October 2015, I had a conversation with James Socol that resulted in me picking up Bleach maintenance from him. That was a little over 3 years ago. In that time, I:

  • did 12 releases
  • improved the tests; switched from nose to pytest, added test coverage for all supported versions of Python and html5lib, added regression tests for xss strings in OWASP Testing Guide 4.0 appendix
  • worked with Greg to add browser testing for cleaned strings
  • improved documentation; added docstrings, added lots of examples, added automated testing of examples, improved copy
  • worked with Jannis to implement a security bug disclosure policy
  • improved performance (Bleach v2.0 released!)
  • switched to semver so the version number was more meaningful
  • did a rewrite to work with the extensive html5lib API changes
  • spent a couple of years dealing with the regressions from the rewrite
  • stepped up as maintainer for html5lib and did a 1.0 release
  • added support for Python 3.6 and 3.7

I accomplished a lot.

A retrospective on OSS project maintenance

I'm really proud of the work I did on Bleach. I took a great project and moved it forward in important and meaningful ways. Bleach is used by a ton of projects in the Python ecosystem. You have likely benefitted from my toil.

While I used Bleach on projects like SUMO and Input years ago, I wasn't really using Bleach on anything while I was a maintainer. I picked up maintenance of the project because I was familiar with it, James really wanted to step down, and Mozilla was using it on a bunch of sites--I picked it up because I felt an obligation to make sure it didn't drop on the floor and I knew I could do it.

I never really liked working on Bleach. The problem domain is a total fucking pain-in-the-ass. Parsing HTML like a browser--oh, but not exactly like a browser because we want the output of parsing to be as much like the input as possible, but as safe. Plus, have you seen XSS attack strings? Holy moly! Ugh!

Anyhow, so I did a bunch of work on a project I don't really use, but felt obligated to make sure it didn't fall on the floor, that has a pain-in-the-ass problem domain. I did that for 3+ years.

Recently, I had a conversation with Osmose that made me rethink that. Why am I spending my time and energy on this?

Does it further my career? I don't think so. Time will tell, I suppose.

Does it get me fame and glory? No.

Am I learning while working on this? I learned a lot about HTML parsing. I have scars. It's so crazy what browsers are doing.

Is it a community through which I'm meeting other people and creating friendships? Sort of. I like working with James, Jannis, and Greg. But I interact and work with them on non-Bleach things, too, so Bleach doesn't help here.

Am I getting paid to work on it? Not really. I did some of the work on work-time, but I should have been using that time to improve my skills and my career. So, yes, I spent some work-time on it, but it's not a project I've been tasked with to work on. For the record, I work on Socorro which is the Mozilla crash-ingestion pipeline. I don't use Bleach on that.

Do I like working on it? No.

Seems like I shouldn't be working on it anymore.

I moved Bleach forward significantly. I did a great job. I don't have any half-finished things to do. It's at a good stopping point. It's a good time to thank everyone and get off the stage.

What happens to Bleach?

I'm stepping down without working on what comes next. I think Greg is going to figure that out.

Thank you!

Jannis was a co-maintainer at the beginning because I didn't want to maintain it alone. Jannis stepped down and Greg joined. Both Jannis and Greg were a tremendous help and fantastic people to work with. Thank you!

Sam Snedders helped me figure out a ton of stuff with how Bleach interacts with html5lib. Sam was kind enough to deputize me as a temporary html5lib maintainer to get 1.0 out the door. I really appreciated Sam putting faith in me. Conversations about the particulars of HTML parsing--I'll miss those. Thank you!

While James wasn't maintaining Bleach anymore, he always took the time to answer questions I had. His historical knowledge, guidance, and thoughtfulness were crucial. James was my manager for a while. I miss him. Thank you!

There were a handful of people who contributed patches, too. Thank you!

Thank your maintainers!

My experience from 20 years of OSS projects is that many people are in similar situations: continuing to maintain something out of internal obligation long after they've stopped getting any value from the project.

Take care of the maintainers of the projects you use! You can't thank them enough for their time, their energy, their diligence, their help! Not just the big successful projects, but also the one-person projects, too.

Shout-out for PyCon 2019 maintainers summit

Sumana mentioned that PyCon 2019 has a maintainers summit. That looks fantastic! If you're in the doldrums of maintaining an OSS project, definitely go if you can.

Changes to this blog post

Update March 2, 2019: I completely forgot to thank Sam Snedders which is a really horrible omission. Sam's the best!

Categorieën: Mozilla-nl planet
