
Cameron Kaiser: TenFourFox FPR5b1 available

Mozilla planet - fr, 29/12/2017 - 05:51
TenFourFox Feature Parity Release 5 beta 1 is now available (downloads, hashes, release notes).

The biggest changes are lots and lots of AltiVec: additional AltiVec decoding acceleration for VP9 (you need MSE enabled to make the most of this, or things like YouTube will default to VP8), more usage of our VMX-accelerated string search package, and (new in this version) AltiVec-accelerated PNG image decoding. There are also a number of minor but notable improvements to layout compatibility, DOM and HTML5 support (most notably true passive event listeners, which should improve scrolling and general event performance on some sites), as well as some additional efficiency improvements to JavaScript. FPR5 also marks the end of most of the Quantum-related platform changes we can safely backport to our fork of 45; while there will be some additional optimization work, I will primarily be concentrating on actual new feature support rather than speedups, since most of the low-hanging fruit (and some of the stuff I had to jump a little for) has already been plucked.

There are two somewhat more aggressive changes in this version. The first is to throttle inactive or background tabs even more heavily to reduce their impact; normal Firefox runs animation frames on background tabs at 1Hz, but this version reduces that to a third of that rate (i.e., a tick every three seconds instead of every second). It's possible to make this even more aggressive, including not ticking any timers in background tabs at all, but I'm a little nervous about halting it entirely. Neither approach affects audio playing in inactive or background tabs, which I tested thoroughly in Amazon Music and YouTube. This should make having multiple tabs open and loaded a bit less heavyweight, particularly on single-processor Macs.

The second has to do with session store. Currently, every 25 seconds (Firefox's default is 15), the browser serializes its state and writes it out to disk so you can pick up where you left off in the event of a crash. I'm loath to halt this completely or make the interval much more than 60 seconds, but I also know that it drags on the browser and can spin SSD write cycles. This version increases the interval to 30 seconds, limits the number of forward pages written (up to 5 instead of unlimited), and purges old closed tabs more aggressively -- Firefox purges these after about two weeks, but we now purge closed tabs after 24 hours, which I thought was a good compromise. This means much less data is written and much less cruft accumulates as you browse, reducing the browser's overhead and memory usage over longer uptimes. However, I know this may upset those of you who have tab closure regret, so let me know if this drives you nuts, even though I am unlikely to reverse this change completely.

This version is timed to come out with Firefox ESR 52.6.0, which is not scheduled for release until January 23, so don't panic if you don't see much commit activity on GitHub for a while. 52.8.0, scheduled for May 7, will be the transition point to 60ESR. More on that when we get there.

For FPR6, I'm looking at a couple features to get us more HTML5 support points, and possibly something like date-time input controls or <details> and <summary> support. This may also be the first release with built-in adblock, though the adblock support will only be basic, will not be comprehensive, and may include blocking certain tracking scripts as well as image ads. It won't be enabled by default.

Party safe on New Year's.

Categorieën: Mozilla-nl planet

Oops! Mozilla's partnership with acclaimed TV series Mr Robot didn't go as ... - Techly

Nieuws verzameld via Google - fr, 29/12/2017 - 03:39


Oops! Mozilla's partnership with acclaimed TV series Mr Robot didn't go as ...
A recent partnership between Mozilla and the acclaimed television series Mr Robot intended to give users a fun Easter egg, but instead, it just freaked out the internet. Mozilla rolled out the mysterious extension called “Looking Glass 1.0.3 ...

Categorieën: Mozilla-nl planet

Mozilla Issues Critical Security Patch for Thunderbird Flaw - Dark Reading

Nieuws verzameld via Google - to, 28/12/2017 - 19:44

Mozilla Issues Critical Security Patch for Thunderbird Flaw
Dark Reading
The critical patch was one of five security bugs Mozilla fixed this month. Others include two vulnerabilities rated high, one moderate, and one low. Both of the highly rated security flaws affected the RSS feed. The moderate and low bugs affected RSS ...

Categorieën: Mozilla-nl planet

Don Marti: Predictions for 2018

Mozilla planet - to, 28/12/2017 - 09:00

Bitcoin to the moooon: The futures market is starting up, so here comes a bunch more day trader action. More important, think about all the bucket shops (I even saw an "invest in Bitcoin without owning Bitcoin" ad on public transit in London), legit financial firms, Libertarian true believers, and coins lost forever because of human error. Central bankers had better keep an eye on Bitcoin, though. Last recession we saw that printing money doesn't work as well as it used to, because it ends up in the hands of rich people who, instead of priming economic pumps with it, just drive up the prices of assets. I would predict "Entire Round of Quantitative Easing Gets Invested in Bitcoin Without Creating a Single New Job" but I'm saving that one for 2019. Central banks will need to innovate. Federal Reserve car crushers? Relieve medical debt by letting the UK operate NHS clinics at their consulates in the USA, and we trade them US green cards for visas that allow US citizens to get treated there? And—this is a brilliant quality of Bitcoin that I recognized too late—there is no bad news that could credibly hurt the value of a purely speculative asset.

The lesson for regular people here is not so much what to do with Bitcoin, but remember to keep putting some well-considered time into actions that you predict have unlikely but large and favorable outcomes. Must remember to do more of this.

High-profile Bitcoin kidnapping in the USA ends in tragedy: Kidnappers underestimate the amount of Bitcoin actually available to change hands, ask for more than the victim's family (or fans? a crowdsourced kidnapping of a celebrity is now a possibility) can raise in time. Huge news but not big enough to slow down something that the finance scene has already committed to.

Tech industry reputation problems hit open source. California Internet douchebags talk like a positive social movement but act like East Coast vampire squid—and people are finally not so much letting them define the terms of the conversation. The real Internet economy is moving to a three-class system: plutocrats, well-paid brogrammers with Aeron chairs, free snacks and good health insurance, and everyone else in the algorithmically-managed precariat. So far, people are more concerned about the big social and surveillance marketing companies, but open source has some of the same issues. Just as it was widely considered silly for people to call Facebook users "the Facebook community" in 2017, some of the "community" talk about open source will be questioned in 2018. Who's working for whom, and who's vulnerable to the risks of doing work whose value someone else extracts? College athletes are ahead of the open source scene on this one.

Adfraud becomes a significant problem for end users: Powerful botnets in data centers drove the pivot to video. Now that video adfraud is well-known, more of the fraud hackers will move to attribution fraud. This ties in to adtech consolidation, too. Google is better at beating simple to midrange fraud than the rest of the Lumascape, so the steady progress towards a two-logo Lumascape means fewer opportunities for bots in data centers.

Attribution fraud is nastier than servers-talking-to-servers fraud, since it usually depends on having fraudulent and legit client software on the same system—legit to be used for a human purchase, fraudulent to "serve the ad" that takes credit for it. Unlike botnets that can run in data centers, attribution fraud comes home with you. Yeech. Browsers and privacy tools will need to level up from blocking relatively simple Lumascape trackers to blocking cleverer, more aggressive attribution fraud scripts.

Wannabe fascists keep control of the US Congress, because your Marketing budget: "Dark" social campaigns (both ads and fake "organic" activity) are still a thing. In the USA, voter suppression and gerrymandering have been cleverly enough done that social manipulation can still make a difference, and it will.

In the long run, dark social will get filtered out by habits, technology, norms, and regulation—like junk fax and email spam before it—but we don't have a "long run" between now and November 2018. The only people who could make an impact on dark social now are the legit advertisers who don't want their brands associated with this stuff. And right now the expectations to advertise on the major social sites are stronger than anybody's ability to get an edgy, controversial "let's not SPONSOR ACTUAL F-----G NAZIS" plan through the 2018 marketing budget process.

Yes, the idea of not spending marketing money on supporting nationalist extremist forums is new and different now. What a year.

Bonus links

These Publishers Bought Millions Of Website Visits They Later Found Out Were Fraudulent

No boundaries for user identities: Web trackers exploit browser login managers

Best of 2017 #8: The World's Most Expensive Clown Show

My Internet Mea Culpa

2017 Was the Year I Learned About My White Privilege

With the people, not just of the people

When Will Facebook Take Hate Seriously?

Using Headless Mode in Firefox – Mozilla Hacks : the Web developer blog

Why Chuck E. Cheese’s Has a Corporate Policy About Destroying Its Mascot’s Head

Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Categorieën: Mozilla-nl planet

The Firefox Frontier: Firefox Extensions for New Year’s Resolutions

Mozilla planet - wo, 27/12/2017 - 15:00

It’s that time of year again where we endeavor to improve ourselves, to wash away poor habits of the past and improve our lot in life. Yet most of us … Read more

The post Firefox Extensions for New Year’s Resolutions appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Thunderbird holes make mail spoofing possible - AG Connect

Nieuws verzameld via Google - wo, 27/12/2017 - 13:58

Thunderbird holes make mail spoofing possible - AG Connect
AG Connect
Just before Christmas, the open-source Mozilla foundation updated its email program Thunderbird. The new version 52.5.2 closes several holes, including some critical ones. Besides a bug that let JavaScript run unchecked, there is a bug that ...

Categorieën: Mozilla-nl planet

Mozilla Patches Critical Bug in Thunderbird - Threatpost

Nieuws verzameld via Google - ti, 26/12/2017 - 20:12


Mozilla Patches Critical Bug in Thunderbird
Mozilla issued a critical security update to its popular open-source Thunderbird email client. The patch was part of a December release of five fixes that included two bugs rated high and one rated moderate and another low. Mozilla said Thunderbird ...
Mozilla patches one critical, two high flaws in Thunderbird - SC Magazine

Categorieën: Mozilla-nl planet

Mozilla patches spoofing flaw in email client Thunderbird

Nieuws verzameld via Google - ti, 26/12/2017 - 09:04

Mozilla patches spoofing flaw in email client Thunderbird
Mozilla has released a security update that fixes multiple vulnerabilities in the Thunderbird email client, including a flaw that makes it possible to spoof the sender. In Thunderbird 52.5.2, a total of five vulnerabilities have been patched ...

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 214

Mozilla planet - ti, 26/12/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community News & Blog Posts Crate of the Week

This week's crate is crossbeam-channel, a crate that improves multi-producer multi-consumer channels compared to what the standard library offers. Thanks to leodasvacas for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

118 pull requests were merged in the last week

New Contributors
  • Antal Szabó
  • Christopher Durham
  • Ed Schouten
  • Florian Keller
  • Jonas Platte
  • Matti Niemenmaa
  • Sam Green
  • Scott Abbey
  • Wilco Kusee
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Every great language needs a Steve.

aaron-lebo on Hacker News about @steveklabnik.

Thanks to Aleksey Kladov for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Categorieën: Mozilla-nl planet

Dustin J. Mitchell: FRustrations 1

Mozilla planet - mo, 25/12/2017 - 16:00

I’ve been hacking away at learning Rust for a bit more than a year now, building a Hawk crate and hacking on a distributed lock service named Rubbish (which will never amount to anything but gives me a purpose).

In the process, I’ve run into some limits of the language. I’m going to describe some of those in a series of posts starting with this one.

One of the general themes I’ve noticed is that lots of things work great in demos, where everything is in a single function (thus allowing lots of type inference) and most variables are 'static. Try to elaborate these demos into a working application, and the borrow checker immediately blocks your path.

Today’s frustration is a good example.

Ownership As Exclusion and Async Rust

A common pattern in Rust is to take ownership of a resource as a way of excluding other uses until the operation is finished. In particular, the send method of Sinks in the futures crate takes self, not &self. Because this is an async function, its result is a future that will yield the Sink on success.

The safety guarantee here is that only one async send can be performed at any given time. The language guarantees that nothing else can access the Sink until the send is complete.

Building a Chat Application

As part of learning about Tokio, Futures, and so on, I elected to build a chat application, starting with the simple pipelined server in the Tokio docs. This example uses Stream::and_then to map requests to responses, which makes sense for a strict request/response protocol, but does not make sense for a chat protocol. It should be possible to send or receive a message at any time in a chat protocol, so I modified the example to use send to send one message at a time:

let server = connections.for_each(move |(socket, _peer_addr)| {
    let (writer, _reader) = socket.framed(LineCodec).split();
    let server = writer.send("Hello, World!".to_string())
        .and_then(|writer2| writer2.send("Welcome to Chat.".to_string()))
        .then(|_| Ok(()));
    handle.spawn(server);
    Ok(())
});

This bit works fine: the resulting server greets each user, then drops the socket and disconnects them, as expected. Note the threading of the writer: the first writer.send is using the writer returned from split, while the second is using the result of the Future from the first (I have unnecessarily called it writer2 here for clarity). In fact, send moves self, so writer.send("Welcome to Chat".to_string()) would not be permitted as that value has been moved.

Based on how I would design a chat app in Python, JavaScript, or any other language, I chose to make a struct to represent a connected user in the chat:

pub struct ChatConnection {
    reader: SplitStream<Framed<TcpStream, LineCodec>>,
    writer: SplitSink<Framed<TcpStream, LineCodec>>,
    peer: SocketAddr,
}

impl ChatConnection {
    fn new(socket: TcpStream, peer: SocketAddr) -> ChatConnection {
        let (writer, reader) = socket.framed(LineCodec).split();
        ChatConnection {
            writer: writer,
            reader: reader,
            peer: peer,
        }
    }

    fn run(&self) -> Box<Future<Item = (), Error = ()>> {
        Box::new(self.writer
            .send("Welcome to Chat!".to_string())
            .then(|_| Ok(())))
    }
}

When a new connection arrives, other code allocates a new ChatConnection and calls its run method, spawning a task into the event loop with the resulting future.

This doesn’t work, though:

error[E0507]: cannot move out of borrowed content
  --> src/
   |
   |         Box::new(self.writer
   |                  ^^^^ cannot move out of borrowed content

There’s sense in this: the language is preventing multiple simultaneous sends. If it allowed self.writer to be accessible to other code while the Future was not complete, then that other code could potentially, unsafely, call send again.

But it makes it difficult to store the writer in a struct – something any reasonably complex application is going to need to do. The two Rust-approved solutions here are to always move writer around as a local variable (as done in the demo), or to move self in the run method (fn run(self) ..). The first “hides” writer in a thicket of closures, making it difficult or impossible to find when, for example, another user sends this one a private message. The second just moves the problem: now we have a ChatConnection object to which nothing but the run method is allowed to refer, meaning that nothing can communicate with it.

The Fix

The most obvious fix is to wrap the writer in another layer of abstraction with runtime safety guarantees. This means something like a Mutex, although the Mutex class will block a thread on conflict, which will result in deadlock in a single-threaded, asynchronous situation such as this one. I assume there is some Futures equivalent to Mutex that will return a Future<Item = Guard> which resolves when the underlying resource is available.

Looking at some of the existing chat examples, I see that they use futures::sync::mpsc channels to communicate between connections. This is in keeping with the stream/sink model (channels are just another form of a stream), but replaces the Future-yielding send with the non-blocking (but memory-unbounded) unbounded_send method.


I feel like this solution is “cheating”: the language makes it difficult to send messages on the channel it provides, so wrap it in another channel with better semantics. Even the code to connect those two channels is, to my eye, obfuscating the issue:

let socket_writer = rx.fold(writer, |writer, msg| {
    let amt = io::write_all(writer, msg.into_bytes());
    let amt = amt.map(|(writer, _)| writer);
    amt.map_err(|_| ())
});

That rx.fold function is doing a lot of work, but there is nary a comment to draw attention to this fact. Those accustomed to functional programming, and familiar with Rust’s use of the term “fold” for what most languages call “reduce”, might figure out what’s going on more quickly. The writer is moved into the accumulator for the fold (reduce) operation, then moved into the closure argument, and when the future is finished it is moved back into the accumulator for the next iteration. This is a clever application of the first Rust-approved solution above: move the writer around in local variables without ever landing it in a stable storage location.

So, this is a key characteristic of asynchronous Rust, without which programs will not compile. Yet these “examples”, which are meant to be instructive, bury the approach behind some clever stream combinators as if they are ashamed of it. The result is almost immediate frustration and confusion for the newcomer to asynchronous Rust trying to learn from these examples.

Categorieën: Mozilla-nl planet

Manish Goregaokar: Undefined vs Unsafe in Rust

Mozilla planet - snein, 24/12/2017 - 01:00

Recently Julia Evans wrote an excellent post about debugging a segfault in Rust. (Go read it, it’s good)

One thing it mentioned was

I think “undefined” and “unsafe” are considered to be synonyms.

This is … incorrect. However, we in the Rust community have never really explicitly outlined the distinction, so that confusion is on us! This blog post is an attempt to clarify the difference of terminology as used within the Rust community. It’s a very useful but subtle distinction and I feel we’d be able to talk about safety more expressively if this was well known.

Unsafe means two things in Rust, yay

So, first off, the waters are a bit muddied by the fact that Rust uses unsafe to mean both “within an unsafe {} block” and “something Bad is happening here”. It’s possible to have safe code within an unsafe block; indeed, this is the primary function of an unsafe block. Somewhat counterintuitively, the unsafe block’s purpose is to actually tell the compiler “I know you don’t like this code but trust me, it’s safe!” (where “safe” is the negation of the second meaning of “unsafe”, i.e. “something Bad is not happening here”).

Similarly, we use “safe code” to mean “code not using unsafe{} blocks” but also “code that is not unsafe”, i.e. “code where nothing bad happens”.

This blog post is primarily about the “something bad is happening here” meaning of “unsafe”. When referring to the other kind I’ll specifically say “code within unsafe blocks” or something like that.

Undefined behavior

In languages like C, C++, and Rust, undefined behavior is when you reach a point where the compiler is allowed to do anything with your code. This is distinct from implementation-defined behavior, where usually a given compiler/library will do a deterministic thing, however they have some freedom from the spec in deciding what that thing is.

Undefined behavior can be pretty scary. This is usually because in practice it causes problems when the compiler assumes “X won’t happen because it is undefined behavior”, and X ends up happening, breaking the assumptions. In some cases this does nothing dangerous, but often the compiler will end up doing wacky things to your code. Dereferencing a null pointer will sometimes cause segfaults (which is the compiler generating code that actually dereferences the pointer, making the kernel complain), but sometimes it will be optimized in a way that assumes it won’t and moves around code such that you have major problems.

Undefined behavior is a global property, based on how your code is used. The following function in C++ or Rust may or may not exhibit undefined behavior, based on how it gets used:

int deref(int* x) { return *x; }

// do not try this at home
fn deref(x: *mut u32) -> u32 { unsafe { *x } }

As long as you always call it with a valid pointer to an integer, there is no undefined behavior involved.

But in either language, if you use it with some pointer conjured out of thin air (or, like 0x01), that’s probably undefined behavior.

As it stands, UB is a property of the entire program and its execution. Sometimes you may have snippets of code that will always exhibit undefined behavior regardless of how they are called, but in general UB is a global property.

Unsafe behavior

Rust’s concept of “unsafe behavior” (I’m coining this term because “unsafety” and “unsafe code” can be a bit confusing) is far more scoped. Here, fn deref is “unsafe”1, even if you always call it with a valid pointer. The reason it is still unsafe is because it’s possible to trigger UB by only changing the “safe” caller code. I.e. “changes to code outside unsafe blocks can trigger UB if they include calls to this function”.

Basically, in Rust a bit of code is “safe” if it cannot exhibit undefined behavior under all circumstances of that code being used. The following code exhibits “safe behavior”:

unsafe {
    let x = 1;
    let raw = &x as *const u32;
    println!("{}", *raw);
}

We dereferenced a raw pointer, but we knew it was valid. Of course, actual unsafe blocks will usually be “actually totally safe” for less obvious reasons, and part of this is because unsafe blocks sometimes can pollute the entire module.

Basically, “safe” in Rust is a more local property. Code isn’t safe just because you only use it in a way that doesn’t trigger UB, it is safe because there is literally no way to use it such that it will do so. No way to do so without using unsafe blocks, that is2.

This is a distinction that’s possible to draw in Rust because it gives us the ability to compartmentalize safety. Trying to apply this definition to C++ is problematic; you can ask “is std::unique_ptr<T> safe?”, but you can always use it within code in a way that you trigger undefined behavior, because C++ does not have the tools for compartmentalizing safety. The distinction between “code which doesn’t need to worry about safety” and “code which does need to worry about safety” exists in Rust in the form of “code outside of unsafe {}” and “code within unsafe {}”, whereas in C++ it’s a lot fuzzier and based on expectations (and documentation/the spec).

So C++’s std::unique_ptr<T> is “safe” in the sense that it does what you expect but if you use it in a way counter to how it’s supposed to be used (constructing one from an invalid pointer, for example) it can blow up. This is still a useful sense of safety, and is how one regularly reasons about safety in C++. However it’s not the same sense of the term as used in Rust, which can be a bit more formal about what the expectations actually are.

So unsafe in Rust is a strictly more general concept – all code exhibiting undefined behavior in Rust is also “unsafe”, however not all “unsafe” code in Rust exhibits undefined behavior as written in the current program.

Rust furthermore attempts to guarantee that you will not trigger undefined behavior if you do not use unsafe {} blocks. This of course depends on the correctness of the compiler (it has bugs) and of the libraries you use (they may also have bugs) but this compartmentalization gets you most of the way there in having UB-free programs.

  1. Once again we have a slight difference between an “unsafe fn”, i.e. a function that needs an unsafe block to call and probably is unsafe, and an “unsafe function”, a function that exhibits unsafe behavior.

  2. This caveat and the confusing dual-usage of the term “safe” lead to the rather tautological-sounding sentence “Safe Rust code is Rust code that cannot cause undefined behavior when used in safe Rust code”

Categorieën: Mozilla-nl planet

Botond Ballo: Control Flow Visualizer (CFViz): an rr / gdb plugin

Mozilla planet - fr, 22/12/2017 - 19:59

rr (short for “record and replay”) is a very powerful debugging tool for C++ programs, or programs written in other compiled languages like Rust1. It’s essentially a reverse debugger, which allows you to record the execution of a program, and then replay it in the debugger, moving forwards or backwards in the replay.

I’ve been using rr for Firefox development at Mozilla, and have found it to be enormously useful.

One task that comes up very often while debugging is figuring out why a function produced a particular value. In rr, this is often done by going back to the beginning of the function, and then stepping through it line by line.

This can be tedious, particularly for long functions. To help automate this task, I wrote – in collaboration with my friend Derek Berger, who is learning Rust – a small rr plugin called Control Flow Visualizer, or CFViz for short.

To illustrate CFViz, consider this example function foo() and a call site for it:

example code

With the CFViz plugin loaded into rr, if you invoke the command cfviz while broken anywhere in the call to foo() during a replay, you get the following output:

example output

Basically, the plugin illustrates what path control flow took through the function, by coloring each line of code based on whether and how often it was executed. This way, you can tell at a glance things like:

  • which of several return statements produced the function’s return value
  • which conditional branches were taken during the execution
  • which loops inside the function are hot (were executed many times)

saving you the trouble of having to step through the function to determine this information.

CFViz’s implementation strategy is simple: it uses gdb’s Python API to step through the function of interest and see which lines were executed in what order. It then passes that information to a small Rust program which handles the formatting and colorization of the output.

While designed with rr in mind, CFViz also works with vanilla gdb, with the limitation that it will only visualize the rest of the function’s execution from the point where it was invoked (since, without rr, it cannot go backwards to the function’s starting point).

I’ve found CFViz to be quite useful for debugging Firefox’s C++ code. Hope you find it useful too!

CFViz is open source. Bug reports, patches, and other contributions are welcome!


1. rr also has a few important limitations: it only runs on Intel CPUs, and only on Linux (although there is a similar tool called Time-Travel Debugging for Windows)

Categorieën: Mozilla-nl planet

Firefox Test Pilot: Graduation Report: Activity Stream

Mozilla planet - fr, 22/12/2017 - 18:01

Activity Stream launched as one of the first Test Pilot experiments. Our goal with Activity Stream from the beginning has been to create new ways for Firefox users to interact with and benefit from their history and bookmarks. Web browsers have historically kept this valuable information tucked away, limiting its usefulness. We were inspired by some smart features in Firefox, like the Awesome Bar, and wanted to bring that kind of intelligence to the rest of the browser.

[Figure: SVP of Firefox Mark Mayo at the Mozilla All Hands in December 2016]

We believed that if people could easily get back to the pages they had recently viewed and saved, they would be happier and more productive. We wanted to help people rediscover where they had been and help them decide where to go next.

Here’s what we learned

Our first attempt at this included two new features in Firefox: a new New Tab page and a Library view to see all of your bookmarks and history ordered from newest to oldest.

[Figure: First version of Activity Stream on New Tab and the Library view]

While we were equally excited about the possibilities of both of these features, we found very quickly that people spent much more time interacting with New Tab. We decided that splitting our efforts between these two features wasn’t the best way to positively impact most people in Firefox and made the decision to retire the Library.

The good news is that this gave us more time to focus on New Tab. The first version included 4 major sections: Search, Top Sites, Spotlight (later renamed Highlights), and Top Activity. Each of these sections changed and morphed as we collected feedback through surveys, A/B tests, and user interviews.

<figcaption>Snapshot of the many experiments that we ran for each version of Activity Stream</figcaption>

Search

Up first was Search, which might be the most obvious. Or maybe it wasn’t. When we asked people what this search box did, many answered that it would search their history and bookmarks. That’s a pretty good guess considering the other items that are on the page. The problem is that it actually searches the web using your default search engine (Google or Bing for example). Because of this feedback, we changed the label of the search box to say “Search the Web”. This seemed to clear things up for most people.

<figcaption>Awesome Bar search box, toolbar search box, New Tab search box, oh my!</figcaption>

One of the most surprising things that we learned while running this experiment is that around thirty percent of New Tab interactions were with that search box. You might wonder why that’s so surprising, but if you look at Firefox closely, you’ll notice that there are actually two other search boxes above this one: the Awesome Bar in the top left and the Search box in the top right. We believe that the New Tab search box is this popular because it’s in the content of the page and reminds people of the familiar search box from their favorite search engine.

Top Sites

After the search box, we have the ever popular Top Sites, which are, well, your top… sites. To be more specific, the sites (or pages) that show up here are ones that you have visited both frequently and fairly recently. This is the same technology that powers the Awesome Bar in Firefox, and it’s called frecency. Basically, it’s good at guessing which sites you might want to visit based on your recent browsing. We made some minor changes to the algorithm that powers Top Sites, but the bigger changes that we made were visual.
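As a rough illustration of the idea (this is not Firefox’s actual frecency formula, which uses bucketed visit weights; the function and half-life here are made up for the sketch):

```python
import time

def frecency_score(visit_count, last_visit_ts, now=None, half_life_days=30.0):
    """Toy frecency: visit frequency damped by exponential recency decay.
    A site that is both frequent AND recent scores highest."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - last_visit_ts) / 86400.0)
    decay = 0.5 ** (age_days / half_life_days)  # halves every 30 days
    return visit_count * decay

def top_sites(history, n=6, now=None):
    """history: list of (url, visit_count, last_visit_timestamp) tuples."""
    ranked = sorted(history, key=lambda h: frecency_score(h[1], h[2], now=now),
                    reverse=True)
    return [url for url, _, _ in ranked[:n]]
```

With this kind of scoring, a site visited ten times in the last day outranks one visited a hundred times ten months ago, which matches the "frequent and fairly recent" behavior described above.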

The Top Sites tiles in previous versions of Firefox used large screenshots of the sites you visited. We wanted something that was both more compact and easier to recognize, given the other items that were on the page, and decided to use icons to represent each site.

<figcaption>Top Sites in previous versions of Firefox were a large grid of screenshots</figcaption>

This seemed like a pretty obvious solution that would mirror the app launchers that people were familiar with on both their phones and laptops. The problem was that it wasn’t all that obvious in the end. Many sites had poor quality icons that were very small. This made it difficult for people to recognize their favorite sites.

<figcaption>The first version of Top Sites had smaller icons with a matching background color</figcaption>

We addressed this by creating our own collection of high-quality icons. Unfortunately for our icon artists, there are an endless number of sites on the web and therefore too many icons for us to hand curate. The other problem with icons is that they’re great for home pages but not so good for specific sections or pages on a site. So you might see a Reddit or CNN icon that looks like the home page when it was actually a specific page on the site.

This made it difficult to guess where an icon might take you. In the end, we settled on the best of both worlds. For home pages with a nice icon, we give you that in all its glory. For sections of a site or where a large icon isn’t available, we combine the small icon with a screenshot to give you some extra hints about which page you’ll land on.

<figcaption>The final version has a large icon when available and otherwise a small icon with screenshot</figcaption>

Highlights… or was it Spotlight?

Next up on New Tab was the ever changing Spotlight section. The name Spotlight didn’t last for too long thanks to another feature with that same name in a certain popular (mac)OS. We settled on the name Highlights as a replacement even though to this day we worry that it isn’t quite right. We’ve debated the name several times since but always end up back at Highlights. The original idea for this section is that it would be the “highlights” of your recent activity in the browser that you would see in the more expansive Library view.

<figcaption>Earlier version of Highlights with different kinds of content mixed together</figcaption>

We actually spent a lot of time iterating on this section. Our goal was to provide a similar feature to Top Sites but in reverse. Rather than showing you the things you visited most, we wanted to show you the things you had just discovered and might want to get back to again. Ideally these would be things you might have bookmarked had you thought of it.

We ended up with a fairly sophisticated system where Firefox would assign each of your recently visited pages and bookmarks a score, and it would show you the items with the highest score each time you opened a new tab. We gave bookmarks more points since you had told us they were important and that way they would hang around and be available to you for a little bit longer.
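The shape of that system can be sketched in a few lines. The weights below are invented for illustration; the real Activity Stream coefficients differed and were tuned over many experiments:

```python
from datetime import datetime, timedelta

BOOKMARK_BONUS = 100   # hypothetical extra points for bookmarks
WEEK_HOURS = 168       # recency credit fades out over a week

def highlight_score(item, now):
    """item: dict with 'url', 'visited' (datetime), 'is_bookmark' (bool)."""
    hours_old = (now - item["visited"]).total_seconds() / 3600
    score = max(0.0, WEEK_HOURS - hours_old)  # newer pages score higher
    if item["is_bookmark"]:
        score += BOOKMARK_BONUS  # bookmarks hang around a little longer
    return score

def highlights(items, now, n=9):
    """Show the n highest-scoring items each time a new tab opens."""
    return sorted(items, key=lambda i: highlight_score(i, now), reverse=True)[:n]
```

Note how a 4-day-old bookmark can outrank an hour-old history entry here, which is exactly the non-chronological ordering that ended up confusing users.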

<figcaption>We ran many experiments on Highlights. Not all of them were as conclusive as we hoped.</figcaption>

For many of us on the team, this was a really great feature that we loved using. Unfortunately, when interviewing users, especially those using New Tab for the first time, they found Highlights to be confusing. They didn’t understand why items weren’t in chronological order (thanks to the scoring system) and when the section was empty, they didn’t know what to expect.

We made a number of changes to address these concerns. We went back to a simpler version of Highlights that is mostly chronological with bookmarks showing up first. We also added little ? bubbles to explain the different sections and give users quick access to customization. Finally, we added message boxes to explain the sections when they were empty.
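That simpler ordering is easy to express as a sort key; a sketch using made-up (url, timestamp, is_bookmark) tuples:

```python
def simple_highlights(items, n=9):
    """Bookmarks first, then history, each group newest-first.
    items: list of (url, timestamp, is_bookmark) tuples."""
    return sorted(items, key=lambda i: (not i[2], -i[1]))[:n]
```

Unlike a scoring system, this ordering is predictable: users can always explain why an item appears where it does.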

<figcaption>We added message boxes to explain sections that were empty</figcaption>

Top Activity

Last (and maybe least) we had Top Activity at the bottom of the page. Somewhat like Highlights, Top Activity was meant to be some of the most interesting things from your recent history. In reality, it was just the first few items from the Library view.

<figcaption>Top Activity showed the most recent entries from history</figcaption>

This actually turned out to be a more effective feature than we had anticipated. We had a lot of positive feedback about easy access to the most recently visited pages. We did soon realize though that Top Activity and Highlights were remarkably similar features and decided to combine them. Through a few different iterations we ended up with the 9 cards you are familiar with in Highlights today.


Something that became clear through much of our testing is that people wanted to customize their New Tab. We found ourselves wanting the same thing in different ways. Some people wanted two rows of Top Sites. Others wanted to remove the search box and still others wanted to choose between just history or bookmarks in Highlights. So we added a whole slew of customization options to a nice side panel where it’s easy to see what your New Tab will look like as you make changes.

<figcaption>Preferences let you customize the sections on New Tab</figcaption>

Recommended by Pocket

So those are all the sections, right? Well, almost! Last year we tested some content recommendations with our good friends at Pocket. We had some mixed results back then and some technical challenges that kept us from doing additional tests. Since that time though, Mozilla acquired Pocket, and we’re now part of the same company! This made it even easier to run experiments together, and so we did. Recently we shipped the latest version of this feature, called Recommended by Pocket, which helps you find the most interesting articles from around the web.

<figcaption>Pocket recommendations help you find the most interesting articles from around the web</figcaption>

We rolled this out as a test so that not everyone received this feature to begin with. We compared how much people used New Tab with and without this feature, and we were excited to find that people used New Tab more when this was enabled.

<figcaption>Percentage of New Tab page views where a user clicked on something on the page</figcaption>

These results gave us the confidence to ship Pocket recommendations in a number of key countries including the United States, Canada, and Germany.


All of these lessons and iterations came together into the really great New Tab experience that we have today:

Many of the details are different but most of the big ideas are very much the same. We have stayed focused on helping people connect to the places they’ve been and hope to help them find where they might want to go next. We could not have done any of this without the amazing help, feedback, and patience of you, our loyal test pilots. Thank you so much for joining us on this journey!

Here’s what happens next

The exciting news is that we shipped this feature as part of Firefox Quantum! We continue to learn and iterate and look forward to making these features even better for all of our Firefox users.

Thank you again for your help, and we encourage you to participate in helping other Test Pilot experiments learn and grow the same way that we’ve done with this one.

Graduation Report: Activity Stream was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Mozilla Open Innovation Team: Open Innovation for Inclusion

Mozilla planet - fr, 22/12/2017 - 03:07

In the second half of 2017 Mozilla’s Open Innovation team worked on initiatives aimed at exploring new ways to advance Web accessibility. Through a decentralized design sprint we arrived at a number of early concepts which could help enhance accessibility of Web content in Firefox.

Designing with the Crowd

We partnered with Stanford University for a user-centric open design sprint. Technology is permeating most human interactions, but we still have very centralized design processes that include only a few people. We wanted to experiment with an open innovation approach that would allow users with accessibility needs to take an active part in the design process. Our chosen path to tackle this challenge allowed for a collaborative form of crowdsourcing. Instead of relying on individual work, we got our participants to work in teams across countries, time zones and professional expertise.
The design sprint ran for one week, and the 113 participants who signed up online joined the Slack channel we used to coordinate interaction. We had a very diverse group of people in terms of background, geography, gender and age.

In fact, 42% of our participants either have disabilities themselves or care for someone who does. Participants from this group were essential to the sprint outcomes, as they brought direct experience to the design process, which inspired other participants with expertise in design and coding.

We narrowed down the problem space by focusing on three specific user groups: elderly people, people with severely limited gestures and people with cognitive impairments.

Winning Ideas from our Decentralized Design Sprint

The sprint resulted in over 60 early stage ideas for how to make browsing with Firefox more accessible. From those ideas Mozilla’s Test Pilot and our Accessibility team chose the 5 that best fulfilled the overarching criteria of the sprint: ideas that showed an understanding of the user needs, demonstrated empathy, were unique, addressed a real problem, had real-world applicability and applicability beyond accessibility needs.

The winning ideas are:

  • Verbose Mode: A voice that guides users in the browsing experience. From: Casey Rigby, Brian Hochhalter, Chandan Baba, Theresa Anderson, Daniel Alexander
  • Onboarding for all abilities: Including new Firefox users from the first interaction. From: Jason Przewoznik, Rahul Sawn, Sohan Subhash, Drashti Kaushik
  • Numbered commands: Helping navigate voice commands. From: Angela, Mary Brennan, Sherry Feng, smcharg
  • Browser based breadcrumbs: Help users understand where they are in the web. From: Phil Daquila, Sherry Stanley, Anne Zbitnew, Ilene E
  • Color the web for readability: Control the colors of websites to match your readability preferences. From: Bri Norton, Jessica Fung, Kristing Maughan, Parisa Nikzap, Neil McCaffrey, Tiffany Chen

Our Test Pilot and Accessibility teams have drawn a lot of inspiration from the ideas that came from the participants and will include some of the ideas in their product development explorations. Of particular interest were ideas designed around voice as a user interface. As our Machine Learning Group at Mozilla invests research and development resources in the still-young field of speech recognition, we want to encourage further iterations on how to make the technology applicable to accessibility needs. If you want to join the Slack channel and continue iterating on these ideas, please contact us. We are committed to keeping this process open for everyone.

We’d like to thank all participants who contributed to this initiative, and we invite you to continue following us on our next steps.

If you’d like to learn more about a second accessibility design initiative we ran together with the Test Pilot team and students of the Design and Media School Ravensbourne in London, check out the Test Pilot blog.

Open Innovation for Inclusion was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Kim Moir: Distributed teams: Better communication and engagement

Mozilla planet - to, 21/12/2017 - 21:46

As I have written before, I work on a very distributed team.

<figcaption class="wp-caption-text">Mozilla release engineering around the world.</figcaption>

I always think that as a distributed team, we have to overcome friction to communicate. If we all worked in the same physical office, you could just walk over to someone’s desk and look at the same screen to debug a problem.  Instead, we have to talk in slack, irc, a video chat, email, or issue trackers.  When the discussion takes place in a public forum, some people hesitate to discuss the issue.  It’s sometimes difficult to admit you don’t know something, even if the team culture is welcoming and people are happy to answer questions.

Over the last month or so I’ve been facilitating the releng team meeting. We have one meeting a week, and the timeslot rotates so that one week it’s convenient for folks in Europe, and the next week for those in Pacific timezones. The people in Eastern timezones usually attend both since it overlaps our work days in both cases. We have a shared document with discussion items and status updates. As part of each person’s update on the work they are doing, I asked them to add:

1) Shoutout to someone you’d like to thank for helping you this week, or someone who you’d like to recognize for doing great work that helped the team

2) Where you need help

One of the things about writing tooling for a large distributed system such as Mozilla’s build and release pipeline is that a lot of the time, things just work. There are many ongoing projects to make components of it more resilient or more scalable. So it’s good to publicly acknowledge that the work they are doing is appreciated, and not just in the case of heroic work to address an operational failure.

Sometimes it’s surprising what people are thankful for – you may think it’s something small, but it makes a difference in people’s happiness. For example, conducting code reviews quickly so people can move forward with landing their patches. Other times, larger projects get the shoutout. For example, when we were getting ready for the Quantum release, Johan wrote a document about all the update scenarios we needed to implement so release management were on the same page as us. Ben wrote some tests for these update scenarios so we could ensure they were implemented in an automated fashion. Thanking people for their work feels great and improves team engagement.

<figcaption class="wp-caption-text">Go team!</figcaption>

Asking people to indicate where they are stuck or need help normalizes asking for help as part of team culture. Whether you are new to the team or have been on it for a long time, people see that it’s okay to describe where they are stuck understanding the root of a problem or how to implement a solution. When you have the whole team in a room, people can jump in with suggestions or point you to other people with the expertise to help. Also, if you have too much work on your plate, someone who just finished a project may be able to jump in, which allows the team to redistribute workload more effectively.

At Rail’s suggestion, some people started having regularly scheduled 1x1s with people who aren’t their manager.  I started having 1x1s with Nick, who lives in New Zealand.  Our work days don’t overlap for very long so we haven’t worked much together in the past.  This has been great as we got to know each other better and can share expertise.  I was in a course earlier this year where a colleague mentioned that a sign of a dysfunctional team is when everyone talks to the manager, but team members don’t talk to each other.  So regularly scheduled 1x1s with teammates are a fantastic way to get to know people better, and gain new skills.

We have been working on migrating our build and release pipeline to a new system. During this migration, Ben and Aki would often announce that they would be in a shared video conference room for a few hours in the afternoon, in case people needed help. This was another great way to reduce friction when people got stuck solving a problem.  We could just go and ask.  A lot of the time, the room was silent as people worked, but we could have a quick conversation.  Even if you knew the solution to a problem, it was useful to talk about your approach with other team members to ensure you were on the right path.

The final thing is that Mihai created a shared drive of team pictures. I gave a presentation last week and included many team pictures. I really like to show the human side of teams, and nothing shows that better than pictures of people having fun together. So it’s really awesome that we have an archive of team pictures that we can look at and use when we showcase our work.

In summary, these are some things that have worked for our distributed team:

  1. Saying thanks to team members and asking for help in regularly scheduled team meetings.
  2. Regularly scheduled 1x1s with teammates you want to get to know better or learn new skills from
  3. Regularly scheduled video conferences for project teams to assist with debugging
  4. Shared drive for team pictures

If you work on a distributed team, what strategies do you use to help your team communicate more effectively?

Further reading

Categorieën: Mozilla-nl planet

Joel Maher: running tests by bugzilla component instead of test suite

Mozilla planet - to, 21/12/2017 - 18:07

Over the years we have had great dreams of running our tests in many different ways. There was a dream of ‘hyperchunking’ where we would run everything in hundreds of chunks, finishing all the tests in just a couple of minutes. This idea is difficult for many reasons, so we shifted to ‘run-by-manifest’; while we sort of do this now for mochitest, we don’t for web-platform-tests, reftest, or xpcshell. Both of these models require work on how we schedule and report data, which isn’t too hard to solve, but does require a lot of additional work and supporting 2 models in parallel for some time.

In recent times, there has been an ongoing conversation about ‘run-by-component’. Let me explain. We have all files in the tree mapped to Bugzilla components. In fact, almost all manifests have a clean list of tests that map to the same component. Why not schedule, run, and report our tests by the same Bugzilla component?
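Since the manifest-to-component mapping already exists in the tree, grouping tests this way is mechanically simple. A toy sketch (the paths and component names below are made up for illustration):

```python
from collections import defaultdict

def group_by_component(manifest_components):
    """manifest_components: {manifest_path: 'Product :: Component'},
    e.g. derived from the in-tree file metadata.
    Returns {component: [manifest, ...]} for scheduling one task per component."""
    groups = defaultdict(list)
    for manifest, component in manifest_components.items():
        groups[component].append(manifest)
    return dict(groups)
```

Each resulting group then becomes one schedulable, reportable unit instead of an arbitrary numbered chunk.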

I got excited near the end of the Austin work week as I started working on this to see what would happen.


This is hand-crafted to show the top-level products, and when we expand those products you can see all the components:


I just used the first 3 letters of each component until there was a conflict, then I hand edited exceptions.
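That abbreviation scheme is easy to mechanize; a sketch that raises on collisions so they can be resolved via hand-edited overrides (the component names here are just examples):

```python
def short_names(components, overrides=None):
    """Map each component to the first three letters of its name; any
    conflict must be resolved via the overrides dict, mirroring the
    hand-edited exceptions described above."""
    overrides = overrides or {}
    seen = {}  # abbrev -> component, to detect collisions
    for comp in components:
        abbrev = overrides.get(comp, comp[:3].lower())
        if abbrev in seen:
            raise ValueError(
                f"conflict: {comp!r} and {seen[abbrev]!r} both map to {abbrev!r}")
        seen[abbrev] = comp
    return {comp: ab for ab, comp in seen.items()}
```

Raising on conflict rather than silently renaming keeps the exception list explicit and reviewable.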

What is great here is we can easily schedule networking-only tests:


and what you would see is:


^ keep in mind that in this example I am using the same push and just filtering, but I did test on a smaller scale for a bit with just Core-networking until I got it working.

What would we use this for:

  1. collecting code coverage on components instead of random chunks which will give us the ability to recommend tests to run with more accuracy than we have now
  2. tools like SETA will be more deterministic
  3. developers can filter in treeherder on their specific components and see how green they are, etc.
  4. easier backfilling of intermittents for sheriffs as tests are not moving around between chunks every time we add/remove a test

While I am excited about the 4 reasons above, this is far from being production ready.  There are a few things we would need to solve:

  1. My current patch takes a list of manifests associated with Bugzilla components and runs all manifests related to that component; we would need to sanitize all manifests to only have tests related to one component (or solve this differently)
  2. My current patch iterates through all possible test types; this is grossly inefficient, but the best I could do with mozharness. I suspect with a slight bit of work I could have reftest/xpcshell working, and likewise web-platform-tests. Ideally we would run all tests from a source checkout and use |./mach test <component>| and it would find what needs to run
  3. What do we do when we need to chunk certain components? Right now I hack on taskcluster to duplicate a ‘component’ test for each component in a .json file; we also cannot specify platform-specific features and lose a lot of the functionality that we gain with taskcluster. I assume some simple thought and a feature or two would allow us to retain all the features of taskcluster with the simplicity of component-based scheduling
  4. We would need a concrete method for defining the list of components (#2 solves this for the harnesses).  Currently I add raw .json into the taskcluster decision task since it wouldn’t find the file I had checked into the tree when I pushed to try.  In addition, finding the right code names and mappings would ideally be automatic, but might need to be a manual process.
  5. when we run tests in parallel, they will have to be different ‘platforms’ such as linux64-qr, linux64-noe10s.  This is much easier in the land of taskcluster, but a shift from how we currently do things.
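On the chunking question in point 3 above, even a naive fallback would preserve the component mapping: split an oversized component’s manifest list into fixed-size chunks and schedule each chunk as its own task. A hypothetical sketch:

```python
def chunk_manifests(manifests, max_per_chunk=20):
    """Split one component's manifest list into fixed-size chunks so large
    components (e.g. Core :: DOM) can still run as parallel tasks while
    small ones stay as a single task."""
    return [manifests[i:i + max_per_chunk]
            for i in range(0, len(manifests), max_per_chunk)]
```

Unlike today’s round-robin chunking, a test never migrates between chunks when unrelated tests are added or removed elsewhere, which is what makes backfilling deterministic.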

This is something I wanted to bring visibility to: many see this as the next stage of how we test at Mozilla, and I am glad for tools like taskcluster, mozharness, and common mozbase libraries (especially manifestparser) which have made this a simple hack. There is still a lot to learn here; we see a lot of value in going this direction, but we may be looking at the value and not the dangers. What problems do you see with this approach?

Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Dec. 21, 2017

Mozilla planet - to, 21/12/2017 - 17:00

Reps Weekly Meeting Dec. 21, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet


Mozilla Open Innovation Team: Applying Open Practices — Arduino

Mozilla planet - to, 21/12/2017 - 14:42

This is the fifth post in our Open by Design series describing findings from industry research on how companies use open practices, share knowledge, work, or influence in order to shape a market towards their business goals. This time we’ll take a look at Arduino, a name synonymous with hardware hacking for the masses.

Since 2003, this 50-person company, with offices in Europe and the US, has built out a robust ecosystem of accessible, open electronics ideal for prototyping new technology and exploring novel hardware applications. The first Arduino board was introduced in 2005 to help design students without prior experience in electronics or micro-controller programming to create working prototypes connecting the physical world to the digital world. It has grown to become the world’s most popular teaching platform for physical prototyping. Arduino launched an integrated development environment (IDE) in 2015, and also has begun offering services to build and customize teaching materials suited to the specific needs of its educational partners.

Behind the widespread adoption of its hardware platform there is a focus on a guiding mission and a clearly-defined user group: making technology open and accessible for non-technical beginners. All hardware design and development decisions feed into keeping the experience optimal and consistent for this target group, attracting a solid, stable base of fans.

The popularity of an open-source platform does not, however, necessarily translate to a sustainable business model. One consequence of Arduino’s growing popularity has been the proliferation of non-licensed third-party versions of its boards. What can’t be cloned is Arduino’s model of community collaboration, strategic partnerships, and mix of open and closed practices — all primary forces in driving their ongoing success.

“Being open means you engage a lot of people with different skills and expertise — you create an ecosystem that is much more diverse than any company could create by itself. It also provides a lot of momentum for the company at the core, that is driving it (…) We are making something that exists no matter what happens to the company, it will continue to exist, it will still have a life of its own.”Dave Mellis, Co-Founder and former Software Lead — Arduino

Arduino was originally conceived as an educational kit to help creative people learn physical computing, and has always relied heavily on Learning from Use: which in this case, involved putting prototypes in front of students to study their learning process goals and frustrations, to gather ideas for how the kits could be made less confusing and more user-friendly. CEO Massimo Banzi personally teaches a number of Arduino workshops each year, giving him direct experiential knowledge that helps prioritize the organization’s hardware R&D efforts.

Continual improvement of its prototyping kits has extended Arduino’s popularity beyond technologists and designers, capturing the attention of artists who are interested in engaging with tech. Arduino’s specific focus on users with expertise in music, performance, and visual art who are passionate about publishing and sharing their work has increased the platform’s visibility, scope and speed of adoption.

On the IDE side, Arduino relies on a more expert community of designer-software specialists who have developed more in-depth technical expertise, and want to push the boundaries by creating custom libraries. In this community, a more familiar approach to open source is employed: creating together with a community of developers who form an essential part of the product development team.

With the launch of Arduino Education in 2015, the team brought creating together closer to the front end of the innovation timeline, collaborating to define services and materials with end users. Long before launching the service under the Arduino brand, the team conducted early explorations with teachers and schools to ensure the product was ideal for an established base of educators. This close collaboration averts the risk normally associated with new product development by ensuring a core community of users before the product is launched.

The Benefits of Participation

Arduino’s engagement with educational institutions and the maker community ensures a continuous feedback loop with end users, resulting in Better Products & Services in hardware. ‘Better’ in this case does not mean ‘technically more advanced’ than competitors — but rather that the boards, kits and instructions better fulfill Arduino’s educational mission. With the IDE, Arduino relies on an open source approach to Lower Product Development Costs. The educational services business — an extension of the Arduino brand — has co-developed its strategy with voices from the educational institutions, helping to anticipate their specific needs, driving even greater Adoption.

Alex Klepel & Gitte Jonsdatter (CIID)

Applying Open Practices — Arduino was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Andy McKay: The Mozilla Bug Firehose - Design Decisions

Mozilla planet - to, 21/12/2017 - 09:00

There could be many blog posts about the Mozilla bug firehose. This is just about dealing with one particular aspect.

When a bug comes into Mozilla it needs to get triaged - someone needs to figure out what to do with it. Triaging is an effort to get bugs appropriately classified and to see how critical each one is. Part of shipping a product every 6 weeks is that we have to try and fix crucial bugs in each release. To do that you have to read the bug reports and try to understand what's happening.

We set aside a certain amount of time for triage meetings every week to triage those bugs. With product, engineering management, QA management and a bunch of engineers, those meetings are expensive to the whole organisation. So we started getting pretty ruthless in those triage meetings to keep them as short as possible.

One category of bugs that we got a lot of (an awful lot of) in WebExtensions is the "feature request". This is where someone is asking for a feature that we currently don't provide in the WebExtensions API. One such bug could slow down a whole triage as all the people look at it, think about it, and try to decide if it's a good idea or not. That's a terribly expensive and inefficient way to spend the triage meeting.

Instead we decided that we can do a few things:

  • We can usually determine quickly if a bug is within the bounds of WebExtensions or not. If not, it gets closed quickly.
  • We can usually determine quickly if a bug is reasonable or not. If it is, it goes into the backlog.
  • The rest go into a separate bucket called design-decision-needed.

Then we have a separate design-decision-needed meeting. For that meeting we do a few things:

  • Pick a few bugs from the bucket. For arbitrary reasons we pick the 3 oldest and 3 newest.
  • Try to involve the community, so we let the community know about the meeting and the bugs beforehand. All our meetings are public, but we specifically feel the community should be involved in this one.
  • Ping the reporter before the meeting asking if they want to come to the meeting or enter more details in the bug.
  • Try to spend 5 minutes on each bug.
  • Try to find someone to argue for the bug, especially when everyone thinks it shouldn't happen.

There are currently 107 bugs needing a decision; 57 have been denied, 133 have been approved and 2 deferred. The deferred bugs are ones we still have no real idea what to do with.

We are hoping this process means that:

  • Contributors can feel comfortable that before they start working on a bug, they know the patch will be accepted.
  • Everyone's time is respected, from developers to the reporters.
  • The process constrains internal developers' time to prevent them from being overwhelmed by external requests.

The single best part of this process was that we got our awesome Community Manager, Caitlin, to run these triages. She does a much better job of working with the contributors and making them feel welcome than I can.

So far we feel this process has been pretty good. One thing we need to improve is following up with comments on the bug quickly after the meeting. Some have fallen through the cracks and we struggle to remember later on what we discussed. Ideally, we do need more contributors to work on the bugs that have been marked as design-decision-approved. They are all there for the taking!

Categorieën: Mozilla-nl planet