Mozilla Nederland
The Dutch Mozilla community

Subscribe to the Mozilla planet feed
Planet Mozilla - http://planet.mozilla.org/
Updated: 1 week 5 days ago

Cameron Kaiser: TenFourFox FPR5b1 available

Fri, 29/12/2017 - 05:51
TenFourFox Feature Parity Release 5 beta 1 is now available (downloads, hashes, release notes).

The biggest changes are lots and lots of AltiVec: additional AltiVec decoding acceleration for VP9 (you need MSE enabled to make the most of this, or things like YouTube will default to VP8), more usage of our VMX-accelerated string search package, and (new in this version) AltiVec-accelerated PNG image decoding. There are also a number of minor but notable improvements to layout compatibility, DOM and HTML5 support (most notably true passive event listeners, which should improve scroll and general event performance on some sites), as well as some additional efficiency improvements to JavaScript. FPR5 also marks mostly the end of all of the Quantum-related platform changes we can safely backport to our fork of 45; while there will be some additional optimization work, I will primarily be concentrating on actual new support rather than speedups since most of the low-hanging fruit and some of the stuff I have to jump a little for has already been plucked.

There are two somewhat more aggressive things in this version. The first is to even more heavily throttle inactive or background tabs to reduce their impact; the normal Firefox runs animation frames on background tabs at 1Hz, but this version reduces that to a third (i.e., a tick every three seconds instead of every second). It's possible to make this even more aggressive, including just not ticking any timers in background tabs at all, but I'm a little nervous about halting it entirely. Neither approach affects audio playing in inactive or background tabs, which I tested thoroughly in Amazon Music and YouTube. This should make having multiple tabs open and loaded a bit less heavyweight, particularly on single processor Macs.
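For the curious, desktop Firefox exposes the background frame rate through a hidden preference; a user.js sketch, assuming the standard pref name (TenFourFox's tripled throttle may be hard-coded rather than pref-driven):

    // user.js sketch: frames per second for throttled (background) tabs.
    // 1 is the stock Firefox value; this TenFourFox build effectively
    // ticks once every three seconds instead.
    user_pref("layout.throttled_frame_rate", 1);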

The second has to do with session store. Currently, every 25 seconds (Firefox has a default of 15), the browser serializes its state and writes it out to disk so you can pick up where you left off in the event of a crash. I'm loath to completely halt this or make the interval much more than 60 seconds, but I also know that this does drag on the browser and can also burn SSD write cycles. This version increases the interval to 30 seconds, but also caps the number of forward pages written (up to 5 instead of unlimited), and purges old closed tabs more aggressively -- Firefox purges these after about two weeks, but we now purge old tabs every 24 hours, which I thought was a good compromise. This means much less data is written and much less cruft accumulates as you browse, reducing the browser's overhead and memory usage over longer uptimes. However, I know this may upset those of you who have tab closure regret, so let me know if this drives you nuts, even though I am unlikely to reverse this change completely.
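In stock Firefox the equivalent knobs live in about:config; a hedged user.js sketch using the desktop Firefox pref names (this TenFourFox build changes the defaults as described above):

    // user.js sketch: session-store write interval, in milliseconds
    // (Firefox ships 15000; this release moves to the equivalent of 30000).
    user_pref("browser.sessionstore.interval", 30000);
    // Cap forward pages serialized per tab (-1 means unlimited).
    user_pref("browser.sessionstore.max_serialize_forward", 5);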

This version is timed to come out with Firefox ESR 52.6.0, which is not scheduled for release until January 23, so don't panic if you don't see much commit activity on GitHub for a while. 52.8.0, scheduled for May 7, will be the transition point to 60ESR. More on that when we get there.

For FPR6, I'm looking at a couple features to get us more HTML5 support points, and possibly something like date-time input controls or <details> and <summary> support. This may also be the first release with built-in adblock, though the adblock support will only be basic, will not be comprehensive, and may include blocking certain tracking scripts as well as image ads. It won't be enabled by default.

Party safe on New Year's.

Categories: Mozilla-nl planet

Don Marti: Predictions for 2018

Thu, 28/12/2017 - 09:00

Bitcoin to the moooon: The futures market is starting up, so here comes a bunch more day trader action. More important, think about all the bucket shops (I even saw an "invest in Bitcoin without owning Bitcoin" ad on public transit in London), legit financial firms, Libertarian true believers, and coins lost forever because of human error. Central bankers had better keep an eye on Bitcoin, though. Last recession we saw that printing money doesn't work as well as it used to, because it ends up in the hands of rich people who, instead of priming economic pumps with it, just drive up the prices of assets. I would predict "Entire Round of Quantitative Easing Gets Invested in Bitcoin Without Creating a Single New Job" but I'm saving that one for 2019. Central banks will need to innovate. Federal Reserve car crushers? Relieve medical debt by letting the UK operate NHS clinics at their consulates in the USA, and we trade them US green cards for visas that allow US citizens to get treated there? And—this is a brilliant quality of Bitcoin that I recognized too late—there is no bad news that could credibly hurt the value of a purely speculative asset.

The lesson for regular people here is not so much what to do with Bitcoin, but remember to keep putting some well-considered time into actions that you predict have unlikely but large and favorable outcomes. Must remember to do more of this.

High-profile Bitcoin kidnapping in the USA ends in tragedy: Kidnappers underestimate the amount of Bitcoin actually available to change hands, ask for more than the victim's family (or fans? a crowdsourced kidnapping of a celebrity is now a possibility) can raise in time. Huge news but not big enough to slow down something that the finance scene has already committed to.

Tech industry reputation problems hit open source. California Internet douchebags talk like a positive social movement but act like East Coast vampire squid—and people are finally not so much letting them define the terms of the conversation. The real Internet economy is moving to a three-class system: plutocrats, well-paid brogrammers with Aeron chairs, free snacks and good health insurance, and everyone else in the algorithmically-managed precariat. So far, people are more concerned about the big social and surveillance marketing companies, but open source has some of the same issues. Just as it was widely considered silly for people to call Facebook users "the Facebook community" in 2017, some of the "community" talk about open source will be questioned in 2018. Who's working for whom, and who's vulnerable to the risks of doing work whose value someone else extracts? College athletes are ahead of the open source scene on this one.

Adfraud becomes a significant problem for end users: Powerful botnets in data centers drove the pivot to video. Now that video adfraud is well-known, more of the fraud hackers will move to attribution fraud. This ties in to adtech consolidation, too. Google is better at beating simple to midrange fraud than the rest of the Lumascape, so the steady progress towards a two-logo Lumascape means fewer opportunities for bots in data centers.

Attribution fraud is nastier than servers-talking-to-servers fraud, since it usually depends on having fraudulent and legit client software on the same system—legit to be used for a human purchase, fraudulent to "serve the ad" that takes credit for it. Unlike botnets that can run in data centers, attribution fraud comes home with you. Yeech. Browsers and privacy tools will need to level up from blocking relatively simple Lumascape trackers to blocking cleverer, more aggressive attribution fraud scripts.

Wannabe fascists keep control of the US Congress, because your Marketing budget: "Dark" social campaigns (both ads and fake "organic" activity) are still a thing. In the USA, voter suppression and gerrymandering have been cleverly enough done that social manipulation can still make a difference, and it will.

In the long run, dark social will get filtered out by habits, technology, norms, and regulation—like junk fax and email spam before it—but we don't have a "long run" between now and November 2018. The only people who could make an impact on dark social now are the legit advertisers who don't want their brands associated with this stuff. And right now the expectations to advertise on the major social sites are stronger than anybody's ability to get an edgy, controversial "let's not SPONSOR ACTUAL F-----G NAZIS" plan through the 2018 marketing budget process.

Yes, the idea of not spending marketing money on supporting nationalist extremist forums is new and different now. What a year.

Bonus links

These Publishers Bought Millions Of Website Visits They Later Found Out Were Fraudulent

No boundaries for user identities: Web trackers exploit browser login managers

Best of 2017 #8: The World's Most Expensive Clown Show

My Internet Mea Culpa

2017 Was the Year I Learned About My White Privilege

With the people, not just of the people

When Will Facebook Take Hate Seriously?

Using Headless Mode in Firefox – Mozilla Hacks : the Web developer blog

Why Chuck E. Cheese’s Has a Corporate Policy About Destroying Its Mascot’s Head

Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Categories: Mozilla-nl planet

The Firefox Frontier: Firefox Extensions for New Year’s Resolutions

Wed, 27/12/2017 - 15:00

It’s that time of year again where we endeavor to improve ourselves, to wash away poor habits of the past and improve our lot in life. Yet most of us … Read more

The post Firefox Extensions for New Year’s Resolutions appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 214

Tue, 26/12/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is crossbeam-channel, a crate that improves multi-producer multi-consumer channels compared to what the standard library offers. Thanks to leodasvacas for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

118 pull requests were merged in the last week

New Contributors
  • Antal Szabó
  • Christopher Durham
  • Ed Schouten
  • Florian Keller
  • Jonas Platte
  • Matti Niemenmaa
  • Sam Green
  • Scott Abbey
  • Wilco Kusee
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Every great language needs a Steve.

aaron-lebo on Hacker News about @steveklabnik.

Thanks to Aleksey Kladov for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Categories: Mozilla-nl planet

Dustin J. Mitchell: FRustrations 1

Mon, 25/12/2017 - 16:00
Foreword

I’ve been hacking away learning Rust for a bit more than a year now, building a Hawk crate and hacking on a distributed lock service named Rubbish (which will never amount to anything but gives me a purpose).

In the process, I’ve run into some limits of the language. I’m going to describe some of those in a series of posts starting with this one.

One of the general themes I’ve noticed is lots of things work great in demos, where everything is in a single function (thus allowing lots of type inference) and most variables are 'static. Try to elaborate these demos out into a working application, and the borrow checker immediately blocks your path.

Today’s frustration is a good example.

Ownership As Exclusion and Async Rust

A common pattern in Rust is to take ownership of a resource as a way of excluding other uses until the operation is finished. In particular, the send method of Sinks in the futures crate takes self, not &self. Because this is an async function, its result is a future that will yield the Sink on success.

The safety guarantee here is that only one async send can be performed at any given time. The language guarantees that nothing else can access the Sink until the send is complete.
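Concretely, the futures 0.1 signature has roughly this shape (paraphrased from memory, so treat the details as approximate):

    // Sink::send consumes the sink and returns a Send<Self> future,
    // which yields the sink back (Item = Self) once the item has been
    // accepted and flushed.
    fn send(self, item: Self::SinkItem) -> Send<Self>
    where
        Self: Sized;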

Building a Chat Application

As part of learning about Tokio, Futures, and so on, I elected to build a chat application, starting with the simple pipelined server in the Tokio docs. This example uses Stream::and_then to map requests to responses, which makes sense for a strict request/response protocol, but does not make sense for a chat protocol. It should be possible to send or receive a message at any time in a chat protocol, so I modified the example to use send to send one message at a time:

    let server = connections.for_each(move |(socket, _peer_addr)| {
        let (writer, _reader) = socket.framed(LineCodec).split();
        let server = writer.send("Hello, World!".to_string())
            .and_then(|writer2| writer2.send("Welcome to Chat.".to_string()))
            .then(|_| Ok(()));
        handle.spawn(server);
        Ok(())
    });

This bit works fine: the resulting server greets each user, then drops the socket and disconnects them, as expected. Note the threading of the writer: the first writer.send is using the writer returned from split, while the second is using the result of the Future from the first (I have unnecessarily called it writer2 here for clarity). In fact, send moves self, so writer.send("Welcome to Chat".to_string()) would not be permitted as that value has been moved.

Based on how I would design a chat app in Python, JavaScript, or any other language, I chose to make a struct to represent a connected user in the chat:

    pub struct ChatConnection {
        reader: SplitStream<Framed<TcpStream, LineCodec>>,
        writer: SplitSink<Framed<TcpStream, LineCodec>>,
        peer: SocketAddr,
    }

    impl ChatConnection {
        fn new(socket: TcpStream, peer: SocketAddr) -> ChatConnection {
            let (writer, reader) = socket.framed(LineCodec).split();
            ChatConnection {
                writer: writer,
                reader: reader,
                peer: peer,
            }
        }

        fn run(&self) -> Box<Future<Item = (), Error = ()>> {
            Box::new(self.writer
                .send("Welcome to Chat!".to_string())
                .then(|_| Ok(())))
        }
    }

When a new connection arrives, other code allocates a new ChatConnection and calls its run method, spawning a task into the event loop with the resulting future.

This doesn’t work, though:

    error[E0507]: cannot move out of borrowed content
      --> src/main.rs:82:18
       |
       |         Box::new(self.writer
       |                  ^^^^ cannot move out of borrowed content

There’s sense in this: the language is preventing multiple simultaneous sends. If it allowed self.writer to be accessible to other code while the Future was not complete, then that other code could potentially, unsafely, call send again.

But it makes it difficult to store the writer in a struct – something any reasonably complex application is going to need to do. The two Rust-approved solutions here are to always move writer around as a local variable (as done in the demo), or to move self in the run method (fn run(self) ..). The first “hides” writer in a thicket of closures, making it difficult or impossible to find when, for example, another user sends this one a private message. The second just moves the problem: now we have a ChatConnection object to which nothing but the run method is allowed to refer, meaning that nothing can communicate with it.
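For concreteness, here is what that second option looks like; a sketch that compiles but consumes the connection:

    // Sketch: taking self by value satisfies the borrow checker, because
    // the whole ChatConnection (writer included) moves into the future.
    // The cost: afterwards nothing else can refer to this connection.
    fn run(self) -> Box<Future<Item = (), Error = ()>> {
        Box::new(self.writer
            .send("Welcome to Chat!".to_string())
            .then(|_| Ok(())))
    }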

The Fix

The most obvious fix is to wrap the writer in another layer of abstraction with runtime safety guarantees. This means something like a Mutex, although the Mutex class will block a thread on conflict, which will result in deadlock in a single-threaded, asynchronous situation such as this one. I assume there is some Futures equivalent to Mutex that will return a Future<Item = Guard> which resolves when the underlying resource is available.

Looking at some of the existing chat examples, I see that they use futures::sync::mpsc channels to communicate between connections. This is in keeping with the stream/sink model (channels are just another form of a stream), but replacing the Future-yielding send with the non-blocking (but memory-unbounded) unbounded_send method.
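A minimal sketch of that pattern with the futures 0.1 API (the names here are illustrative):

    // Each connection owns the receiving half (rx); any task holding a
    // clone of tx can queue a line without touching the writer directly.
    let (tx, rx) = futures::sync::mpsc::unbounded::<String>();
    tx.unbounded_send("psst, a private message".to_string())
        .expect("receiver was dropped");
    // rx is then drained into the socket writer, as the fold below shows.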

Frustration

I feel like this solution is “cheating”: the language makes it difficult to send messages on the channel it provides, so wrap it in another channel with better semantics. Even the code to connect those two channels is, to my eye, obfuscating this issue:

    let socket_writer = rx.fold(writer, |writer, msg| {
        let amt = io::write_all(writer, msg.into_bytes());
        let amt = amt.map(|(writer, _)| writer);
        amt.map_err(|_| ())
    });

That rx.fold function is doing a lot of work, but there is nary a comment to draw attention to this fact. Those accustomed to functional programming, and familiar with Rust’s use of the term “fold” for what most languages call “reduce”, might figure out what’s going on more quickly. The writer is moved into the accumulator for the fold (reduce) operation, then moved into the closure argument, and when the future is finished it is moved back into the accumulator for the next iteration. This is a clever application of the first Rust-approved solution above: move the writer around in local variables without ever landing it in a stable storage location.

So, this is a key characteristic of asynchronous Rust, without which programs will not compile. Yet these “examples”, which are meant to be instructive, bury the approach behind some clever stream combinators as if they are ashamed of it. The result is almost immediate frustration and confusion for the newcomer to asynchronous Rust trying to learn from these examples.

Categories: Mozilla-nl planet

Manish Goregaokar: Undefined vs Unsafe in Rust

Sun, 24/12/2017 - 01:00

Recently Julia Evans wrote an excellent post about debugging a segfault in Rust. (Go read it, it’s good)

One thing it mentioned was

I think “undefined” and “unsafe” are considered to be synonyms.

This is … incorrect. However, we in the Rust community have never really explicitly outlined the distinction, so that confusion is on us! This blog post is an attempt to clarify the difference of terminology as used within the Rust community. It’s a very useful but subtle distinction and I feel we’d be able to talk about safety more expressively if this was well known.

Unsafe means two things in Rust, yay

So, first off, the waters are a bit muddied by the fact that Rust uses unsafe to mean both “within an unsafe {} block” and “something Bad is happening here”. It’s possible to have safe code within an unsafe block; indeed this is the primary function of an unsafe block. Somewhat counterintuitively, the unsafe block’s purpose is to actually tell the compiler “I know you don’t like this code but trust me, it’s safe!” (where “safe” is the negation of the second meaning of “unsafe”, i.e. “something Bad is not happening here”).

Similarly, we use “safe code” to mean “code not using unsafe{} blocks” but also “code that is not unsafe”, i.e. “code where nothing bad happens”.

This blog post is primarily about the “something bad is happening here” meaning of “unsafe”. When referring to the other kind I’ll specifically say “code within unsafe blocks” or something like that.

Undefined behavior

In languages like C, C++, and Rust, undefined behavior is when you reach a point where the compiler is allowed to do anything with your code. This is distinct from implementation-defined behavior, where usually a given compiler/library will do a deterministic thing, however they have some freedom from the spec in deciding what that thing is.

Undefined behavior can be pretty scary. This is usually because in practice it causes problems when the compiler assumes “X won’t happen because it is undefined behavior”, and X ends up happening, breaking the assumptions. In some cases this does nothing dangerous, but often the compiler will end up doing wacky things to your code. Dereferencing a null pointer will sometimes cause segfaults (which is the compiler generating code that actually dereferences the pointer, making the kernel complain), but sometimes it will be optimized in a way that assumes it won’t and moves around code such that you have major problems.

Undefined behavior is a global property, based on how your code is used. The following function in C++ or Rust may or may not exhibit undefined behavior, based on how it gets used:

    // C++
    int deref(int* x) { return *x; }

    // Rust (do not try this at home)
    fn deref(x: *mut u32) -> u32 {
        unsafe { *x }
    }

As long as you always call it with a valid pointer to an integer, there is no undefined behavior involved.

But in either language, if you use it with some pointer conjured out of thin air (or, like 0x01), that’s probably undefined behavior.
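To make that concrete, here are the two kinds of call site for the fn deref above (a sketch):

    let v: u32 = 7;
    deref(&v as *const u32 as *mut u32); // fine: valid pointer, no UB
    deref(0x01 as *mut u32);             // probably UB: pointer from thin air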

As it stands, UB is a property of the entire program and its execution. Sometimes you may have snippets of code that will always exhibit undefined behavior regardless of how they are called, but in general UB is a global property.

Unsafe behavior

Rust’s concept of “unsafe behavior” (I’m coining this term because “unsafety” and “unsafe code” can be a bit confusing) is far more scoped. Here, fn deref is “unsafe”[1], even if you always call it with a valid pointer. The reason it is still unsafe is because it’s possible to trigger UB by only changing the “safe” caller code. I.e. “changes to code outside unsafe blocks can trigger UB if they include calls to this function”.

Basically, in Rust a bit of code is “safe” if it cannot exhibit undefined behavior under all circumstances of that code being used. The following code exhibits “safe behavior”:

    unsafe {
        let x = 1;
        let raw = &x as *const u32;
        println!("{}", *raw);
    }

We dereferenced a raw pointer, but we knew it was valid. Of course, actual unsafe blocks will usually be “actually totally safe” for less obvious reasons, and part of this is because unsafe blocks sometimes can pollute the entire module.

Basically, “safe” in Rust is a more local property. Code isn’t safe just because you only use it in a way that doesn’t trigger UB, it is safe because there is literally no way to use it such that it will do so. No way to do so without using unsafe blocks, that is[2].

This is a distinction that’s possible to draw in Rust because it gives us the ability to compartmentalize safety. Trying to apply this definition to C++ is problematic; you can ask “is std::unique_ptr<T> safe?”, but you can always use it within code in a way that you trigger undefined behavior, because C++ does not have the tools for compartmentalizing safety. The distinction between “code which doesn’t need to worry about safety” and “code which does need to worry about safety” exists in Rust in the form of “code outside of unsafe {}” and “code within unsafe {}”, whereas in C++ it’s a lot fuzzier and based on expectations (and documentation/the spec).
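As an illustration of that compartmentalization (my sketch, not from the original post), a function can keep its unsafe block internal so that no caller in safe code can trigger UB through it:

    // The public API is safe: callers outside unsafe {} cannot misuse it,
    // because the bounds check guards the raw-pointer read.
    fn first_element(slice: &[u32]) -> Option<u32> {
        if slice.is_empty() {
            None
        } else {
            // Safe: a non-empty slice's data pointer is valid to read.
            unsafe { Some(*slice.as_ptr()) }
        }
    }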

So C++’s std::unique_ptr<T> is “safe” in the sense that it does what you expect but if you use it in a way counter to how it’s supposed to be used (constructing one from an invalid pointer, for example) it can blow up. This is still a useful sense of safety, and is how one regularly reasons about safety in C++. However it’s not the same sense of the term as used in Rust, which can be a bit more formal about what the expectations actually are.

So unsafe in Rust is a strictly more general concept – all code exhibiting undefined behavior in Rust is also “unsafe”; however, not all “unsafe” code in Rust exhibits undefined behavior as written in the current program.

Rust furthermore attempts to guarantee that you will not trigger undefined behavior if you do not use unsafe {} blocks. This of course depends on the correctness of the compiler (it has bugs) and of the libraries you use (they may also have bugs) but this compartmentalization gets you most of the way there in having UB-free programs.

  1. Once again we have a slight difference between an “unsafe fn”, i.e. a function that needs an unsafe block to call and probably is unsafe, and an “unsafe function”, a function that exhibits unsafe behavior.

  2. This caveat and the confusing dual-usage of the term “safe” lead to the rather tautological-sounding sentence “Safe Rust code is Rust code that cannot cause undefined behavior when used in safe Rust code”

Categories: Mozilla-nl planet

Botond Ballo: Control Flow Visualizer (CFViz): an rr / gdb plugin

Fri, 22/12/2017 - 19:59

rr (short for “record and replay”) is a very powerful debugging tool for C++ programs, or programs written in other compiled languages like Rust[1]. It’s essentially a reverse debugger, which allows you to record the execution of a program, and then replay it in the debugger, moving forwards or backwards in the replay.

I’ve been using rr for Firefox development at Mozilla, and have found it to be enormously useful.

One task that comes up very often while debugging is figuring out why a function produced a particular value. In rr, this is often done by going back to the beginning of the function, and then stepping through it line by line.

This can be tedious, particularly for long functions. To help automate this task, I wrote – in collaboration with my friend Derek Berger, who is learning Rust – a small rr plugin called Control Flow Visualizer, or CFViz for short.

To illustrate CFViz, consider this example function foo() and a call site for it:

[Image: example code]

With the CFViz plugin loaded into rr, if you invoke the command cfviz while broken anywhere in the call to foo() during a replay, you get the following output:

[Image: example output]

Basically, the plugin illustrates what path control flow took through the function, by coloring each line of code based on whether and how often it was executed. This way, you can tell at a glance things like:

  • which of several return statements produced the function’s return value
  • which conditional branches were taken during the execution
  • which loops inside the function are hot (were executed many times)

saving you the trouble of having to step through the function to determine this information.

CFViz’s implementation strategy is simple: it uses gdb’s Python API to step through the function of interest and see which lines were executed in what order. It then passes that information to a small Rust program which handles the formatting and colorization of the output.

While designed with rr in mind, CFViz also works with vanilla gdb, with the limitation that it will only visualize the rest of the function’s execution from the point where it was invoked (since, without rr, it cannot go backwards to the function’s starting point).
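A typical session might look something like this (a sketch; it assumes the plugin has already been loaded into your rr/gdb session):

    $ rr replay
    (rr) break foo
    (rr) continue
    Breakpoint 1, foo (...) at example.cpp:12
    (rr) cfviz
    ... colorized listing of foo(), each line tinted by execution count ...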

I’ve found CFViz to be quite useful for debugging Firefox’s C++ code. Hope you find it useful too!

CFViz is open source. Bug reports, patches, and other contributions are welcome!

Footnotes

1. rr also has a few important limitations: it only runs on Intel CPUs, and only on Linux (although there is a similar tool called Time-Travel Debugging for Windows)


Categories: Mozilla-nl planet

Firefox Test Pilot: Graduation Report: Activity Stream

Fri, 22/12/2017 - 18:01

Activity Stream launched as one of the first Test Pilot experiments. Our goal with Activity Stream from the beginning has been to create new ways for Firefox users to interact with and benefit from their history and bookmarks. Web browsers have historically kept this valuable information tucked away, limiting its usefulness. We were inspired by some smart features in Firefox, like the Awesome Bar, and wanted to bring that kind of intelligence to the rest of the browser.

[Image: SVP of Firefox Mark Mayo at the Mozilla All Hands in December 2016]

We believed that if people could easily get back to the pages they had recently viewed and saved, they would be happier and more productive. We wanted to help people rediscover where they had been and help them decide where to go next.

Here’s what we learned

Our first attempt at this included two new features in Firefox: a new New Tab page and a Library view to see all of your bookmarks and history ordered from newest to oldest.

[Image: First version of Activity Stream on New Tab and the Library view]

While we were equally excited about the possibilities of both of these features, we found very quickly that people spent much more time interacting with New Tab. We decided that splitting our efforts on these two features wasn’t the best way to positively impact most people in Firefox and made the decision to retire the Library.

The good news is that this gave us more time to focus on New Tab. The first version included 4 major sections: Search, Top Sites, Spotlight (later renamed Highlights), and Top Activity. Each of these sections changed and morphed as we collected feedback through surveys, A/B tests, and user interviews.

[Image: Snapshot of the many experiments that we ran for each version of Activity Stream]

Search

Up first was Search, which might be the most obvious. Or maybe it wasn’t. When we asked people what this search box did, many answered that it would search their history and bookmarks. That’s a pretty good guess considering the other items that are on the page. The problem is that it actually searches the web using your default search engine (Google or Bing for example). Because of this feedback, we changed the label of the search box to say “Search the Web”. This seemed to clear things up for most people.

[Image: Awesome Bar search box, toolbar search box, New Tab search box, oh my!]

One of the most surprising things that we learned while running this experiment is that around thirty percent of New Tab interactions were with that search box. You might wonder why that’s so surprising, but if you look at Firefox closely, you’ll notice that there are actually two other search boxes above this one: the Awesome Bar in the top left and the Search box in the top right. We believe that the New Tab search box is this popular because it’s in the content of the page and reminds people of the familiar search box from their favorite search engine.

Top Sites

After the search box, we have the ever popular Top Sites which are, well, your top… sites. To be more specific the sites (or pages) that show up here are ones that you have visited both frequently and fairly recently. This is the same technology that powers the Awesome Bar in Firefox, and it’s called frecency. Basically it’s good at guessing which sites you might want to visit based on your recent browsing. We made some minor changes to the algorithm that powers Top Sites but the bigger changes that we made were visual.

The Top Sites tiles in previous versions of Firefox used large screenshots of the sites you visited. We wanted something that was both more compact and easier to recognize, given the other items that were on the page, and decided to use icons to represent each site.

[Image: Top Sites in previous versions of Firefox were a large grid of screenshots]

This seemed like a pretty obvious solution that would mirror the app launchers that people were familiar with on both their phones and laptops. The problem was that it wasn’t all that obvious in the end. Many sites had poor quality icons that were very small. This made it difficult for people to recognize their favorite sites.

[Image: The first version of Top Sites had smaller icons with a matching background color]

We addressed this by creating our own collection of high-quality icons. Unfortunately for our icon artists, there are an endless number of sites on the web and therefore too many icons for us to hand curate. The other problem with icons is that they’re great for home pages but not so good for specific sections or pages on a site. So you might see a Reddit or CNN icon that looks like the home page when it was actually a specific page on the site.

This made it difficult to guess where an icon might take you. In the end, we settled on the best of both worlds. For home pages with a nice icon, we give you that in all its glory. For sections of a site or where a large icon isn’t available, we combine the small icon with a screenshot to give you some extra hints about which page you’ll land on.

[Image: The final version has a large icon when available and otherwise a small icon with screenshot]

Highlights… or was it Spotlight?

Next up on New Tab was the ever changing Spotlight section. The name Spotlight didn’t last for too long thanks to another feature with that same name in a certain popular (mac)OS. We settled on the name Highlights as a replacement even though to this day we worry that it isn’t quite right. We’ve debated the name several times since but always end up back at Highlights. The original idea for this section is that it would be the “highlights” of your recent activity in the browser that you would see in the more expansive Library view.

[Image: Earlier version of Highlights with different kinds of content mixed together]

We actually spent a lot of time iterating on this section. Our goal was to provide a similar feature to Top Sites but in reverse. Rather than showing you the things you visited most, we wanted to show you the things you had just discovered and might want to get back to again. Ideally these would be things you might have bookmarked had you thought of it.

We ended up with a fairly sophisticated system where Firefox would assign each of your recently visited pages and bookmarks a score, and it would show you the items with the highest score each time you opened a new tab. We gave bookmarks more points since you had told us they were important and that way they would hang around and be available to you for a little bit longer.

[Image: We ran many experiments on Highlights. Not all of them were as conclusive as we hoped.]

For many of us on the team, this was a really great feature that we loved using. Unfortunately, when interviewing users, especially those using New Tab for the first time, they found Highlights to be confusing. They didn’t understand why items weren’t in chronological order (thanks to the scoring system) and when the section was empty, they didn’t know what to expect.

We made a number of changes to address these concerns. We went back to a simpler version of Highlights that is mostly chronological with bookmarks showing up first. We also added little ? bubbles to explain the different sections and give users quick access to customization. Finally, we added message boxes to explain the sections when they were empty.

[Image: We added message boxes to explain sections that were empty]

Top Activity

Last (and maybe least) we had Top Activity at the bottom of the page. Somewhat like Highlights, Top Activity was meant to be some of the most interesting things from your recent history. In reality, it was just the first few items from the Library view.

[Image: Top Activity showed the most recent entries from history]

This actually turned out to be a more effective feature than we had anticipated. We had a lot of positive feedback about easy access to the most recently visited pages. We did soon realize though that Top Activity and Highlights were remarkably similar features and decided to combine them. Through a few different iterations we ended up with the 9 cards you are familiar with in Highlights today.

Customization

Something that became clear through much of our testing is that people wanted to customize their New Tab. We found ourselves wanting the same thing in different ways. Some people wanted two rows of Top Sites. Others wanted to remove the search box and still others wanted to choose between just history or bookmarks in Highlights. So we added a whole slew of customization options to a nice side panel where it’s easy to see what your New Tab will look like as you make changes.

[Image: Preferences let you customize the sections on New Tab]

Recommended by Pocket

So those are all the sections, right? Well, almost! Last year we tested some content recommendations with our good friends at Pocket. We had some mixed results back then and some technical challenges that kept us from doing additional tests. Since that time though, Mozilla acquired Pocket, and we’re now part of the same company! This made it even easier to run experiments together and so we did. Recently we shipped the latest version of this feature called Recommended by Pocket, which helps you find the most interesting articles from around the web.

[Image: Pocket recommendations help you find the most interesting articles from around the web]

We rolled this out as a test so that not everyone received this feature to begin with. We compared how much people used New Tab with and without this feature, and we were excited to find that people used New Tab more when this was enabled.

[Image: Percentage of New Tab page views where a user clicked on something on the page]

These results gave us the confidence to ship Pocket recommendations in a number of key countries including the United States, Canada, and Germany.

Conclusion

All of these lessons and iterations came together into the really great New Tab experience that we have today:

Many of the details are different but most of the big ideas are very much the same. We have stayed focused on helping people connect to the places they’ve been and hope to help them find where they might want to go next. We could not have done any of this without the amazing help, feedback, and patience of you, our loyal test pilots. Thank you so much for joining us on this journey!

Here’s what happens next

The exciting news is that we shipped this feature as part of Firefox Quantum! We continue to learn and iterate and look forward to making these features even better for all of our Firefox users.

Thank you again for your help, and we encourage you to participate in helping other Test Pilot experiments learn and grow the same way that we’ve done with this one.

Graduation Report: Activity Stream was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

Mozilla Open Innovation Team: Open Innovation for Inclusion

Fri, 22/12/2017 - 03:07

In the second half of 2017 Mozilla’s Open Innovation team worked on initiatives aimed at exploring new ways to advance Web accessibility. Through a decentralized design sprint we arrived at a number of early concepts which could help enhance accessibility of Web content in Firefox.

Designing with the Crowd

We partnered with Stanford University for a user-centric open design sprint. Technology is permeating most human interactions, but we still have very centralized design processes that only include a few people. We wanted to experiment with an open innovation approach that would allow users with accessibility needs to take an active part in the design process. Our chosen path to tackle this challenge allowed for a collaborative form of crowdsourcing. Instead of relying on individual work, we got our participants to work in teams across countries, time zones and areas of professional expertise.
The design sprint ran for one week, and the 113 participants who signed up online joined the Slack channel we used to coordinate interaction. We had a very diverse group of people in terms of background, geography, gender and age.

In fact, 42% of our participants either have disabilities themselves or care for someone in need. Participants from this group were essential for the sprint outcomes as they brought direct experiences to the design process, which inspired other participants with expertise in design and coding.

We narrowed down the problem space by focusing on three specific user groups: elderly people, people with severely limited gestures and people with cognitive impairments.

Winning Ideas from our Decentralized Design Sprint

The sprint resulted in over 60 early stage ideas for how to make browsing with Firefox more accessible. From those ideas Mozilla’s Test Pilot and our Accessibility team chose the 5 that best fulfilled the overarching criteria of the sprint: ideas that showed an understanding of the user needs, demonstrated empathy, were unique, addressed a real problem, had real-world applicability and applicability beyond accessibility needs.

The winning ideas are:

  • Verbose Mode: A voice that guides users in the browsing experience. From: Casey Rigby, Brian Hochhalter, Chandan Baba, Theresa Anderson, Daniel Alexander
  • Onboarding for all abilities: Including new Firefox users from the first interaction. From: Jason Przewoznik, Rahul Sawn, Sohan Subhash, Drashti Kaushik
  • Numbered commands: Helping navigate voice commands. From: Angela, Mary Brennan, Sherry Feng, smcharg
  • Browser based breadcrumbs: Help users understand where they are in the web. From: Phil Daquila, Sherry Stanley, Anne Zbitnew, Ilene E
  • Color the web for readability: Control the colors of websites to match your readability preferences. From: Bri Norton, Jessica Fung, Kristing Maughan, Parisa Nikzap, Neil McCaffrey, Tiffany Chen

Our Test Pilot and Accessibility teams have drawn a lot of inspiration from the ideas that came from the participants and will include some of the ideas in their product development explorations. Of particular interest were ideas designed around voice as user interface. As our Machine Learning Group at Mozilla invests research and development resources in the still young field of speech recognition, we want to encourage further iterations on how to make the technology applicable to address accessibility needs. If you want to join the Slack channel and continue iterating on these ideas, please contact us at firefoxaccessibility@cs.stanford.edu. We are committed to keeping this process open for everyone.

We’d like to thank all participants who have contributed to this initiative, and we invite you to continue following us on our next steps.

If you’d like to learn more about a second accessibility design initiative we ran together with the Test Pilot team and students of the Design and Media School Ravensbourne in London, check out the Test Pilot blog.

Open Innovation for Inclusion was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

Kim Moir: Distributed teams: Better communication and engagement

Thu, 21/12/2017 - 21:46

As I have written before, I work on a very distributed team.

[Image: Mozilla release engineering around the world.]

I always think that as a distributed team, we have to overcome friction to communicate. If we all worked in the same physical office, you could just walk over to someone’s desk and look at the same screen to debug a problem.  Instead, we have to talk in slack, irc, a video chat, email, or issue trackers.  When the discussion takes place in a public forum, some people hesitate to discuss the issue.  It’s sometimes difficult to admit you don’t know something, even if the team culture is welcoming and people are happy to answer questions.

Over the last month or so I’ve been facilitating the releng team meeting. We have one meeting a week, and the timeslot rotates so that one week it’s convenient for folks in Europe, the other time it’s convenient for those in Pacific timezones.  The people in Eastern timezones usually attend both since it overlaps our work days in both cases.  We have a shared document where we have discussion items and status updates.  As part of each person’s update on the work they are doing I asked them to add:

1) Shoutout to someone you’d like to thank for helping you this week, or someone who you’d like to recognize for doing great work that helped the team

2) Where you need help

One of the things about writing tooling for a large distributed system such as Mozilla’s build and release pipeline is that a lot of the time, things just work. There are many ongoing projects to make components of it more resilient or more scalable. So it’s good to publicly acknowledge that the work they are doing is appreciated, and not just in the case of heroic work to address an operational failure.

Sometimes it’s surprising what people are thankful for – you may think it’s something small but it makes a difference in people’s happiness.   For example, conducting code reviews quickly so people can move forward with landing their patches.  Other times, it’s a larger projects that get the shoutout.  For example, when we were getting ready for the Quantum release, Johan wrote a document about all the update scenarios we needed to implement so release management were on the same page as us.  Ben wrote some tests to test these update scenarios so we could ensure they were implemented in an automated fashion. Thanking people for their work feels great and improves team engagement.

[Image: Go team!]

Asking people to indicate where they are stuck or need help normalizes asking for help as part of team culture. Whether you are new to the team or have been working on it for a long time, people see that it’s okay to describe where they are stuck understanding the root of a problem or how to implement a solution. When you have all the team in a room, people can jump in with suggestions or point you to other people with the expertise to help. Also, if you have too much work on your plate, someone who just finished a project may be able to jump in, which allows the team to redistribute workload more effectively.

At Rail’s suggestion, some people started having regularly scheduled 1x1s with people who aren’t their manager.  I started having 1x1s with Nick, who lives in New Zealand.  Our work days don’t overlap for very long so we haven’t worked much together in the past.  This has been great as we got to know each other better and can share expertise.  I was in a course earlier this year where a colleague mentioned that a sign of a dysfunctional team is when everyone talks to the manager, but team members don’t talk to each other.  So regularly scheduled 1x1s with teammates are a fantastic way to get to know people better, and gain new skills.

We have been working on migrating our build and release pipeline to a new system. During this migration, Ben and Aki would often announce that they would be in a shared video conference room for a few hours in the afternoon, in case people needed help. This was another great way to reduce friction when people got stuck solving a problem.  We could just go and ask.  A lot of the time, the room was silent as people worked, but we could have a quick conversation.  Even if you knew the solution to a problem, it was useful to talk about your approach with other team members to ensure you were on the right path.

The final thing is that Mihai created a shared drive of team pictures. I gave a presentation last week, and included many team pictures. I really like to show the human side of teams, and nothing shows that better than pictures of people having fun together. So it’s really awesome that we have an archive of team pictures that we can look at and use when showcasing our work.

In summary, these are some things that have worked for our distributed team

  1. Saying thanks to team members and asking for help in regularly scheduled team meetings.
  2. Regularly scheduled 1x1s with teammates you want to get to know better or learn new skills from
  3. Regularly scheduled video conferences for project teams to assist with debugging
  4. Shared drive for team pictures

If you work on a distributed team, what strategies do you use to help your team communicate more effectively?

Further reading


Categories: Mozilla-nl planet

Joel Maher: running tests by bugzilla component instead of test suite

Thu, 21/12/2017 - 18:07

Over the years we have had great dreams of running our tests in many different ways.  There was a dream of ‘hyperchunking’ where we would run everything in hundreds of chunks finishing in just a couple of minutes for all the tests.  This idea is difficult for many reasons, so we shifted to ‘run-by-manifest’, while we sort of do this now for mochitest, we don’t for web-platform-tests, reftest, or xpcshell.  Both of these models require work on how we schedule and report data which isn’t too hard to solve, but does require a lot of additional work and supporting 2 models in parallel for some time.

In recent times, there has been an ongoing conversation about ‘run-by-component’. Let me explain. We have all files in the tree mapped to bugzilla components. In fact, almost all manifests have a clean list of tests that map to the same component. Why not schedule, run, and report our tests by bugzilla component?
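For context, that mapping comes from Files annotations in moz.build files; a typical entry looks something like this (component name chosen for illustration):

    # moz.build (illustrative): map every file under this directory
    # to a Bugzilla product/component pair.
    with Files("**"):
        BUG_COMPONENT = ("Core", "Networking")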

I got excited near the end of the Austin work week as I started working on this to see what would happen.

[Image: rbc]

This is hand crafted to show top level products, and when we expand those products you can see all the components:

[Image: rbc_expanded]

I just used the first 3 letters of each component until there was a conflict, then I hand edited exceptions.

What is great here is we can easily schedule networking-only tests:

[Image: rbc_scheduling]

and what you would see is:

[Image: rbc_networking]

^ Keep in mind that in this example I am using the same push, just filtering -- but I did test on a smaller scale for a bit with just Core-networking until I got it working.

What would we use this for:

  1. collecting code coverage on components instead of random chunks which will give us the ability to recommend tests to run with more accuracy than we have now
  2. tools like SETA will be more deterministic
  3. developers can filter in treeherder on their specific components and see how green they are, etc.
  4. easier backfilling of intermittents for sheriffs as tests are not moving around between chunks every time we add/remove a test

While I am excited about the 4 reasons above, this is far from being production ready.  There are a few things we would need to solve:

  1. My current patch takes a list of manifests associated with bugzilla components and runs all manifests related to that component; we would need to sanitize all manifests to only have tests related to one component (or solve this differently)
  2. My current patch iterates through all possible test types- this is grossly inefficient, but the best I could do with mozharness- I suspect a slight bit of work and I could have reftest/xpcshell working, likewise web-platform tests.  Ideally we would run all tests from a source checkout and use |./mach test <component>| and it would find what needs to run
  3. What do we do when we need to chunk certain components? Right now I hack on taskcluster to duplicate a ‘component’ test for each component in a .json file; we also cannot specify platform-specific features, and we lose a lot of the functionality that we gain with taskcluster. I assume some simple thought and a feature or two would allow us to retain all the features of taskcluster with the simplicity of component-based scheduling
  4. We would need a concrete method for defining the list of components (#2 solves this for the harnesses).  Currently I add raw .json into the taskcluster decision task since it wouldn’t find the file I had checked into the tree when I pushed to try.  In addition, finding the right code names and mappings would ideally be automatic, but might need to be a manual process.
  5. When we run tests in parallel, they will have to be different ‘platforms’ such as linux64-qr and linux64-noe10s. This is much easier in the land of taskcluster, but a shift from how we currently do things.

This is something I wanted to bring visibility to: many see this as the next stage of how we test at Mozilla, and I am glad for tools like taskcluster, mozharness, and common mozbase libraries (especially manifestparser) which have made this a simple hack. There is still a lot to learn here; we see a lot of value in going this direction, but we may be looking for the value and not the dangers. What problems do you see with this approach?


Categories: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Dec. 21, 2017

Thu, 21/12/2017 - 17:00

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categories: Mozilla-nl planet

Mozilla Open Innovation Team: Applying Open Practices — Arduino

Thu, 21/12/2017 - 14:42

This is the fifth post in our Open by Design series describing findings from industry research on how companies use open practices, share knowledge, work, or influence in order to shape a market towards their business goals. This time we’ll take a look at Arduino, a name synonymous with hardware hacking for the masses.

Since 2003, this 50-person company, with offices in Europe and the US, has built out a robust ecosystem of accessible, open electronics ideal for prototyping new technology and exploring novel hardware applications. The first Arduino board was introduced in 2005 to help design students without prior experience in electronics or micro-controller programming to create working prototypes connecting the physical world to the digital world. It has grown to become the world’s most popular teaching platform for physical prototyping. Arduino launched an integrated development environment (IDE) in 2015, and also has begun offering services to build and customize teaching materials suited to the specific needs of its educational partners.

Behind the widespread adoption of its hardware platform there is a focus on a guiding mission and a clearly-defined user group: making technology open and accessible for non-technical beginners. All hardware design and development decisions feed into keeping the experience optimal and consistent for this target group, attracting a solid, stable base of fans.

The popularity of an open-source platform does not, however, necessarily translate to a sustainable business model. One consequence of Arduino’s growing popularity has been the proliferation of non-licensed third-party versions of its boards. What can’t be cloned is Arduino’s model of community collaboration, strategic partnerships, and mix of open and closed practices — all primary forces in driving their ongoing success.

“Being open means you engage a lot of people with different skills and expertise — you create an ecosystem that is much more diverse that any company could create by itself. It also provides a lot of momentum for the company at the core, that is driving it (…) We are making something that exists no matter what happens to the company, it will continue to exist, it will still have a life of its own.”Dave Mellis, Co-Founder and former Software Lead — Arduino

Arduino was originally conceived as an educational kit to help creative people learn physical computing, and has always relied heavily on Learning from Use, which in this case involved putting prototypes in front of students to study their learning process, goals and frustrations, to gather ideas for how the kits could be made less confusing and more user-friendly. CEO Massimo Banzi personally teaches a number of Arduino workshops each year, giving him direct experiential knowledge that helps prioritize the organization’s hardware R&D efforts.

Continual improvement of its prototyping kits has extended Arduino’s popularity beyond technologists and designers, capturing the attention of artists who are interested in engaging with tech. Arduino’s specific focus on users with expertise in music, performance, and visual art who are passionate about publishing and sharing their work has increased the platform’s visibility, scope and speed of adoption.

On the IDE side, Arduino relies on a more expert community of designer-software specialists who have developed more in-depth technical expertise, and want to push the boundaries by creating custom libraries. In this community, a more familiar approach to open source is employed: creating together with a community of developers who form an essential part of the product development team.

With the launch of Arduino Education in 2015, the team brought creating together closer to the front end of the innovation timeline, collaborating with end users to define services and materials. Long before launching the service under the Arduino brand, the team conducted early explorations with teachers and schools to ensure the product was ideal for an established base of educators. This close collaboration reduces the risk normally associated with new product development by ensuring a core community of users exists before the product launches.

The Benefits of Participation

Arduino’s engagement with educational institutions and the maker community ensures a continuous feedback loop with end users, resulting in Better Products & Services in hardware. ‘Better’ in this case does not mean ‘technically more advanced’ than competitors — but rather that the boards, kits and instructions better fulfill Arduino’s educational mission. With the IDE, Arduino relies on an open source approach to Lower Product Development Costs. The educational services business — an extension of the Arduino brand — has co-developed its strategy with voices from the educational institutions, helping to anticipate their specific needs, driving even greater Adoption.

Alex Klepel & Gitte Jonsdatter (CIID)


Applying Open Practices — Arduino was originally published in Mozilla Open Innovation on Medium.

Categorieën: Mozilla-nl planet

Andy McKay: The Mozilla Bug Firehose - Design Decisions

do, 21/12/2017 - 09:00

There could be many blog posts about the Mozilla bug firehose. This is just about dealing with one particular aspect.

When a bug comes into Mozilla it needs to get triaged: someone needs to figure out what to do with it. Triaging is an effort to classify bugs appropriately and determine how critical each one is. Part of shipping a product every 6 weeks is that we have to fix crucial bugs in each release. To do that you have to read the bug reports and try to understand what's happening.

We set aside time every week for triage meetings to work through those bugs. With product, engineering management, QA management and a bunch of engineers involved, those meetings are expensive to the whole organisation. So we started getting pretty ruthless about keeping them as short as possible.

One category of bug that we got a lot of (an awful lot of) in WebExtensions is the "feature request". This is where someone is asking for a feature that we currently don't provide in the WebExtensions API. A single such bug could slow down a whole triage as everyone looks at it, thinks about it, and tries to decide whether it's a good idea or not. That's a terribly expensive and inefficient way to spend a triage meeting.

Instead we decided that we can do a few things:

  • We can usually determine quickly if a bug is within the bounds of WebExtensions or not. If not, it gets closed quickly.
  • We can usually determine quickly if a bug is reasonable or not. If it is, it goes into the backlog.
  • The rest go into a separate bucket called design-decision-needed.

Then we have a separate design-decision-needed meeting. For that meeting we do a few things:

  • Pick a few bugs from the bucket. For arbitrary reasons we pick the 3 oldest and 3 newest.
  • Try to involve the community, so we let the community know about the meeting and the bugs beforehand. All our meetings are public, but we specifically feel the community should be involved in this one.
  • Ping the reporter before the meeting asking if they want to come to the meeting or enter more details in the bug.
  • Try to spend 5 minutes on each bug.
  • Try to find someone to argue for the bug, especially when everyone thinks it shouldn't happen.

There are currently 107 bugs awaiting a decision; 57 have been denied, 133 have been approved, and 2 have been deferred. The deferred bugs are ones we still have no real idea what to do with.

We are hoping this process means that:

  • Contributors can feel comfortable that before they start working on a bug, they know the patch will be accepted.
  • Everyone's time is respected, from developers to the reporters.
  • The process constrains internal developers' time to prevent them from being overwhelmed by external requests.

The single best part of this process was that we got our awesome Community Manager, Caitlin, to run these triages. She does a much better job of working with the contributors and making them feel welcome than I could.

So far we feel this process has been pretty good. One thing we need to improve is following up with comments on the bug quickly after the meeting; some have fallen through the cracks and we struggle to remember later what we discussed. We also need more contributors to work on the bugs that have been marked as design-decision-approved. They are all there for the taking!

Categorieën: Mozilla-nl planet

Mozilla B-Team: happy shiny bmo push day

do, 21/12/2017 - 05:41

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1423391] Add additional phabricator settings to generate_conduit_data.pl for running bmo-extensions demo
  • [1424787] Due Date on bug modal is 1 day ahead
  • [1424155] Write scripts to import/export attachments to disk
  • [1376826] New HTML Header for BMO
  • [1425119] Allow the new-bug interface to be used
  • [1424940] Support HTML5 datepicker
  • [1403777] Migrate urlbase from params to localconfig
  • [1409957] Create polling daemon to query Phabricator for recent transactions and update bug data according to revision changes
  • [1422329] The phabricator conduit API method feed.query_id return data format has changed so the phabbugz_feed.pl daemon needs to be updated
  • [1420771] Remove global footer
  • [1426424] feed daemon complains when trying to set an inactive review flag
  • [1361890] Remove asset concatenation
  • [1426117] Failure when opening a bug: Invalid parameter passed to Bugzilla::Bug::new_from_list: It must be numeric.
  • [905763] Fix named anchors in various pages so that the Sandstone theme header can be set to a fixed position
  • [1424408] “Sign in with GitHub” button triggers a bugzilla security error, if I’m viewing a page with e.g. “t=”

discuss these changes on mozilla.tools.bmo.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Rust in 2017: what we achieved

do, 21/12/2017 - 01:00

Rust’s development in 2017 fit into a single overarching theme: increasing productivity, especially for newcomers to Rust. From tooling to libraries to documentation to the core language, we wanted to make it easier to get things done with Rust. That desire led to a roadmap for the year, setting out 8 high-level objectives that would guide the work of the team.

How’d we do? Really, really well.

There’s not room in a single post to cover everything that happened, but we’ll cover some of the highlights below.

The goals for 2017

Rust should have a lower learning curve

Rust should have a pleasant edit-compile-debug cycle
  • The cargo check workflow
    • Cargo now offers a check subcommand which can be used to speed up the edit-compile cycle when you’re working on getting your code to pass the compiler’s checks. This mode skips producing executable artifacts for crates in the dependency tree, instead doing just enough work to type-check the current crate. (A minimal sketch follows the benchmarks note below.)
  • Incremental recompilation
    • The cornerstone of our approach to improving compilation times is incremental recompilation, allowing rebuilds to reuse significant pieces of work from prior compilations. Over the course of the year we have put a lot of work into making this happen and now we are happy to announce that incremental compilation will start riding the trains with the next beta version of the compiler in January and become available on the stable channel with Rust 1.24 in February!
    • You can see how incremental recompilation performs in practice on some of our key benchmarks below. Note that -opt refers to optimized builds, “best case” refers to a recompilation with no changes, and println refers to a recompilation with a small change, like adding a println call to a function body. We expect the 50+% speedups we’re seeing now to continue to grow next year as we push incremental recompilation more deeply through the compiler.
    • Together with the changes in the compiler we will also update Cargo to use incremental recompilation by default for select use cases, so you can take advantage of improved compile times without the need for additional configuration. Of course you will also be able to opt into and out of the feature on a case by case basis as you see fit.

Incremental recompilation benchmarks
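
As a rough sketch of the cargo check workflow mentioned above (the toy crate here is my own illustration, not code from the post):

```rust
// src/lib.rs of a toy crate. Running `cargo check` from the crate root
// performs parsing, type checking, and borrow checking on this code but
// skips code generation and linking, which is why it finishes faster
// than a full `cargo build`.
pub fn double(x: i32) -> i32 {
    x * 2
}

// A deliberate type error here (say, returning a &str) would be reported
// by `cargo check` just as it would be by `cargo build`.
```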

Rust should provide a solid, but basic IDE experience
  • Rust now has solid IDE support in IntelliJ and via the Rust Language Server (RLS). Whether you prefer a fully-featured IDE or a more lightweight editor with IDE features, you can boost your productivity by taking advantage of great Rust integration.
  • IntelliJ. Rust has official support in JetBrains’ IDEs (IntelliJ IDEA, CLion, WebStorm, etc.), which includes:
    • Finding types, functions and traits across the whole project, its dependencies and the standard library.
    • Hierarchical overview of the symbols defined in the current file.
    • Search for all implementations of a given trait.
    • Go to definition of symbol at cursor.
    • Navigation to the parent module.
    • Refactoring and code generation
  • RLS. The RLS is an editor-independent source of intelligence about Rust programs. It is used to power Rust support in many editors including Visual Studio Code, Visual Studio, and Atom, with more in the pipeline. It is on schedule for a 1.0 release in early 2018, but is currently available in preview form for all channels (nightly, beta, and stable). It supports:
    • Code completion (using Racer)
    • Go to definition (and peek definition if the editor supports it)
    • Find all references
    • Find impls for a type or trait
    • Symbol search (current file and project)
    • Reformatting using rustfmt, renaming
    • Apply error suggestions (e.g., to add missing imports)
    • Docs and types on hover
    • Code generation using snippets
    • Cargo tasks
    • Installation and update of the RLS (via rustup)
Rust should integrate easily into large build systems
  • Alternative registries. Cargo now has unstable support for installing crates from registries other than crates.io. This will enable companies to manage and use internal crates as easily as open source crates. Work is underway developing crate servers that are more tailored for private use than the crates.io server is.
  • Cargo as a component. A lot of work this year went into gathering constraints from stakeholders who want to integrate Rust crates into a large existing build system (like Bazel). The Cargo team has formulated a vision of Cargo as a suite of components that can be customized or swapped out, making it easy for an external build system to manage the work it is built to do, while still integrating with crates.io and with Cargo workflows. While we did not get as far as we hoped in terms of implementing this vision, there is ongoing work spiking out “build plan generation” to a sufficient degree that it can support the Firefox build system and Tup. This initial spike should provide a good strawman for further iteration in early 2018.
Rust should provide easy access to high quality crates

Rust should be well-equipped for writing robust servers
  • Futures and Tokio
    • Much of the story for Rust on the server has revolved around async I/O. The futures crate was introduced in late 2016, and the Tokio project (which provides a networking-focused event loop for use with futures) published its 0.1 release early in 2017. Since then, there’s been significant work building out the “Tokio ecosystem”, and a lot of feedback about the core primitives. Late in the year, the Tokio team proposed a significant API revamp to streamline and clarify the crate’s API, and work is underway on a book dedicated to asynchronous programming in Rust. This latest round of work is expected to land very early in 2018. (A small sketch of the futures combinator style appears after this list.)
  • Async ecosystem
  • Generators
    • Thanks to a heroic community effort, Rust also saw experimental generator support land in 2017! That support provides the ingredients necessary for async/await notation, which is usable today on nightly. Further work in this area is expected to be a high priority in early 2018. (A minimal generator example also follows this list.)
  • Web frameworks
    • Finally, sophisticated web frameworks like Rocket (sync) and Gotham (async) have continued to evolve this year, and take advantage of Rust’s expressivity to provide a robust but productive style of programming.
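
To give a flavor of the futures 0.1 combinator style described in the Futures and Tokio item above (the crate version and this example are my own assumptions, not code from the post):

```rust
// Assumes futures = "0.1" in Cargo.toml (the pre-revamp API).
extern crate futures;

use futures::future;
use futures::Future;

fn main() {
    // Build a computation as a chain of combinators; nothing executes yet.
    let fut = future::ok::<u32, ()>(1)
        .map(|x| x + 1)
        .and_then(|x| future::ok(x * 2));

    // wait() drives the future to completion by blocking the current
    // thread; a real server would hand it to Tokio's event loop instead.
    assert_eq!(fut.wait(), Ok(4));
}
```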
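And a sketch of the experimental generator syntax as it looked on nightly at the time; the feature names and the exact Generator trait signature were still in flux, so treat the details as illustrative:

```rust
// Nightly-only at the time: generators behind feature gates.
#![feature(generators, generator_trait)]

use std::ops::{Generator, GeneratorState};

fn main() {
    // A closure containing `yield` becomes a generator.
    let mut counter = || {
        yield 1;
        yield 2;
        return "done";
    };

    // Each resume() call runs the generator up to its next yield point.
    assert_eq!(counter.resume(), GeneratorState::Yielded(1));
    assert_eq!(counter.resume(), GeneratorState::Yielded(2));
    assert_eq!(counter.resume(), GeneratorState::Complete("done"));
}
```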
Rust should have 1.0-level crates for essential tasks
  • Libz Blitz. The library team launched the Libz Blitz this year, a major effort to vet and improve a large number of foundational crates and push them toward 1.0 releases. It was a massive community effort: we performed a crowd-sourced “crate evaluation” every two weeks, fully vetting a crate against a clear set of guidelines, assessing the issue tracker, and sussing out any remaining design questions. While not all of the assessed crates have published a 1.0 yet, they are all very close to doing so. The full list includes: log, env_logger, rayon, mio, url, num_cpus, semver, mime, reqwest, tempdir, threadpool, byteorder, bitflags, cc-rs, walkdir, same-file, memmap, lazy_static, flate2. (A small taste of one of these crates appears after this list.)
  • API Guidelines. A great by-product of the Libz Blitz is the API Guidelines book, which consolidates the official library team API guidance as informed by the standard library and the Libz Blitz process.
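
As a small taste of one of the vetted crates, here is a minimal lazy_static sketch (the crate version and the example itself are assumptions, not taken from the post):

```rust
// Assumes lazy_static = "1.0" in Cargo.toml.
#[macro_use]
extern crate lazy_static;

use std::collections::HashMap;

lazy_static! {
    // A global initialized lazily, on first access, by arbitrary code --
    // something a plain `static` item cannot do in Rust.
    static ref CRATE_SUMMARIES: HashMap<&'static str, &'static str> = {
        let mut m = HashMap::new();
        m.insert("rayon", "data parallelism");
        m.insert("mio", "low-level async I/O");
        m
    };
}

fn main() {
    println!("mio provides {}", CRATE_SUMMARIES["mio"]);
}
```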
Rust’s community should provide mentoring at all levels
  • We ran 5 RustBridge Workshops in 2017, in Kyiv, Ukraine; Mexico City, Mexico; Portland, OR, USA; Zurich, Switzerland; and Columbus, OH, USA! RustBridge workshops aim to get underrepresented folks started in Rust. Attendees get an introduction to syntax and concepts, work on some exercism exercises, and build a web application that delivers emergency compliments and crab pictures. We hope to scale this program and help more folks run more workshops in 2018!
  • The Increasing Rust’s Reach program brought people with skills from other areas (such as teaching) and with different experiences into Rust so that we can improve in areas where the community is missing these skills and experiences. The participants have helped immensely, and many are planning to continue helping in the Rust community going forward. We’re glad they’re here! Here are some blog posts about the experience:
  • Last but not least, we also launched the first Rust impl Period. This was an ambitious effort to simultaneously help get a lot of new people contributing to the Rust ecosystem while also getting a lot of things done. To that end, we created 40+ working groups, each with their own focus area, leaders, and chat channel. These groups identified good “entry points” for people who wanted to contribute, and helped mentor them through the changes needed. This event was a wild success and resulted in changes and contributions to all areas of Rust, ranging from the compiler internals to documentation to the ecosystem at large. To those of you who participated, a great big thank you — and please keep contributing! To those of you who didn’t get a chance, don’t worry: we hope to make this a regular tradition.
2018

We’ll be spinning up the 2018 roadmap process in the very near future; watch this space!

Thank you!

We got a staggering amount of work done this year — and the “we” here includes an equally staggering number of people. Because the work has been spread out over so many facets of the project, it’s hard to provide a single list of people who contributed. For the impl period specifically, you can see detailed contribution lists in the newsletters:

but of course, there have been contributions of all kinds during the year.

In this post, I’d like to specifically call out the leaders and mentors who have helped orchestrate our 2017 work. Leadership of this kind — where you are working to enable others — is hard work and not recognized enough. So let’s hand it to these folks!

  • Cargo
    • carols10cents, for sustained leadership and mentoring work throughout the year on crates.io.
  • Community
  • Compiler
    • nikomatsakis, for an incredible amount of leadership, organization, and mentoring work, and for a lot of high-value hacking on NLL in particular.
    • arielb1, likewise for mentoring and hacking work, spanning both NLL and the rest of the compiler.
    • michaelwoerister, for pushing continuously on delivering incremental recompilation, and creating opportunities for others to join in throughout the year.
    • eddyb, for continuing to act as a general compiler guru, and for tackling some truly heavy lifts around const generics this year.
  • Dev tools
    • nrc, for overseeing the dev tools group as a whole, and for steady work toward shipping the RLS and rustfmt, despite many thorny infrastructure problems to get there.
    • matklad, for the incredible work on IntelliJ Rust.
    • xanewok, for enormous efforts making the RLS a reality.
    • fitzgen, for happily corralling a huge contributor base around bindgen.
  • Docs
    • steveklabnik, for launching and overseeing a hugely exciting revamp of rustdoc.
    • quietmisdreavus, for overseeing tons of activity in the docs world, but most especially for helping the community significantly improve rustdoc this year.
  • Infrastructure
    • mark-simulacrum, for getting the perf website to a highly useful state, and for overhauling rustbuild to better support contribution.
    • aidanhs, for coordinating maintenance of crater.
  • Language
    • withoutboats, for keeping us focused on the programmer experience and for helping the community navigate discussion around very thorny language design issues.
    • cramertj, for keeping us focused on shipping, and in particular building consensus around some of the topics where that’s been hardest to find: impl Trait, and module system changes.
    • nikomatsakis, for making the NLL RFC so accessible, and pioneering the idea of using a separate repo for it to allow for greater participation.
  • Libraries
    • brson, for envisioning and largely overseeing the Libz Blitz initiative.
    • kodraus, for gracefully taking over the Libz Blitz and seeing it to a successful conclusion.
    • dtolnay, for taking on the API guidelines work and getting it to a coherent and polished state.
    • budziq, for a ton of work coordinating and editing contributions to the cookbook.
    • dhardy, for leading a heroic effort to revamp the rand crate.

Technical leaders are an essential ingredient for our success, and I hope in 2018 we can continue to grow our leadership pool, and get even more done — together.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Plugging in on Policy

wo, 20/12/2017 - 23:00
Mozilla Tech Policy Fellows continue to lead policy conversations around the world.

 

When Mozilla rolled out a new fellowship focused on tech policy this past June, the goal was to gather some of the world’s top policymakers in tech to continue advancing the important initiatives they were working on in government as fellows with Mozilla.

We rounded up 10 fellows from the U.S., Brazil, India, and Kenya as part of the initial cohort. Fellows are spending the year keeping the Internet open and free both by furthering the crucial work they had already been leading, and by finding new ways to add to forward-thinking policy efforts.

Fellows are urging policymakers to keep net neutrality in the United States and to adopt it in India, they’re promoting data privacy and security in East Africa and Brazil, and they’re encouraging increased access to high-quality broadband in vulnerable communities in rural, urban, and tribal areas everywhere. The fellows have all described their work in depth on Mozilla’s network blog:

Alan Davidson is advancing policies and practices to support building the field of public interest technologists — the next generation of leaders with expertise in technology and public policy who we need to guide our society through coming challenges such as encryption, autonomous vehicles, blockchain, cybersecurity, and more.

Amina Fazlullah is exploring policies that will help lower the cost of broadband access, support broad adoption, ensure that applications are developed with the most vulnerable users in mind, promote a fair and open Internet, and identify and highlight the good work of digital inclusion organizations around the world.

Camille Fischer worked on policies that support legal protections for privacy rights in the U.S. Camille completed her fellowship and recently joined the Electronic Frontier Foundation as a fellow working on free speech and government transparency.

Caroline Holland is working to promote competition for a healthy Internet to make sure consumers have access to affordable and competitive high speed broadband and equal access to the lawful content they desire.

Amba Kak is moving policies forward on net neutrality, zero rating and the open Internet in India, including supporting the country’s recent commitment to comprehensive net-neutrality protection.

In East Africa, Linet Kwamboka is working to promote policies that support data protection and privacy as well as data literacy for the public.

Terah Lyons explored the global role of stakeholders from public, private, civil society, and academic communities in AI policy, addressing issues related to ethics, accountability, the future of work, and safety and control. Terah recently completed her fellowship and joined the Partnership on AI as its first Executive Director.

From Brazil, Marília Monteiro is working to analyze tech policy issues from a consumer protection perspective to ensure that policymakers are balancing consumer interests with technology and innovation advances.

Jason Schultz is exploring AI’s impact on open technologies including the need for new methods both to measure the negative impacts of AI closure and to adapt alternatives in meaningful technological, economic, and social ways.

Gigi Sohn is continuing her nearly 30-year fight for fast, fair, and open networks, including working to keep net neutrality in the U.S.

The Tech Policy Fellows gathered at MozFest, Mozilla’s annual festival for the open Internet movement, in October. They led workshops, roundtables, and panels, and — of course — met Foxy.

Mozilla Tech Policy Fellow Amina Fazlullah meets Foxy at MozFest.

Fellows are also contributing to the upcoming 2018 version of the Internet Health Report (you can get involved with that project, too!) and they are working closely with a dedicated Advisory Board, made up of seven top experts and supporters of a free and open Internet located in six different countries.

Mozilla will begin recruiting for a new cohort of fellows in 2018 — keep an eye out for our announcement and help us bring together even more amazing tech policy leaders to advance this crucial work.

The post Plugging in on Policy appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Chris H-C: Another Stay of Execution for Firefox Users on Windows XP

wo, 20/12/2017 - 21:44


Firefox users who are on Windows XP now have until August 28, 2018 to upgrade their machines. In the grand Internet tradition I will explore this by pretending someone is asking me questions.

Why?

The last Firefox release supporting Windows XP is Firefox ESR 52. Previously Firefox ESR 52 was set to end-of-life on or around May of 2018 after the next ESR, Firefox ESR 59, had been released and stabilized. Now, with an email to the Enterprise, Dev Platform, Firefox Dev, and Mobile Firefox Dev mailing lists, :Sylvestre has announced that the next ESR will be Firefox ESR 60, which extends the Firefox ESR 52 end-of-life to August 28, 2018.

No, not “Why did it change,” Why should anyone still care about Windows XP? Hasn’t it been out-of-service for a decade or something?

Not quite a decade, but the last release of Windows XP was over nine years ago, and even Microsoft’s extended support tapped out nearly four years ago.

But as to why we should care… well, Windows XP still accounts for a large-ish portion of the Firefox user base. I don’t have public numbers in front of me, but you can see the effect the Windows XP population had on the Firefox Hardware Report when we diverted those users to ESR this past March. At that time they were nearly 8.5% of all Firefox users, more than all versions of Firefox for Mac combined.

Also, it’s possible that these users may be some of the most vulnerable of the Internet’s users. They deserve our thought.

Oh, okay, fine. If they matter so much, why aren’t we supporting them forever?

As you can see from the same Firefox Hardware Report, the number of Windows XP Firefox users was in steady decline. At some point our desire and capability to support this population of users can no longer match up with our desire to ship the best experience to the most users.

Given the slope of the decline in the weeks leading up to when we migrated Windows XP Firefox users to Firefox ESR, we ought to be getting pretty close to zero. We hate to remove support from any users, but there was a real cost to supporting Windows XP.

For instance, the time between the ESR branch being cut and the first Windows XP-breaking change was a mere six days. And it wasn’t on purpose; we were just fixing something somewhere in Gecko in a way that Windows XP didn’t like.

So who are we going to drop support for next?

I don’t know of any plans to drop support for any Operating Systems in the near future. I expect we’ll drop support for older compilers in our usual manner, but not OSs.

That pretty much sums it up.

If you have any questions about Firefox ESR 60, please check out the Firefox ESR FAQ.

:chutten


Categorieën: Mozilla-nl planet

Firefox Test Pilot: Students Pitch New Accessibility Features for Firefox

wo, 20/12/2017 - 20:56

In early October, the Test Pilot team helped kick off an undergraduate product design course at design and media school Ravensbourne in London. The partnership with Ravensbourne was spearheaded by Mozilla’s Open Innovation team with the goal of generating ideas for Mozilla’s innovation pipeline. With Test Pilot as the “client” for the course, students working in seven small teams have been iterating on design concepts to advance accessibility in Firefox. Test Pilot team members John Gruen and I were back in London last month to evaluate the students’ final product pitches. We ultimately chose two winning teams.

First Place: Team Spectrum

The winning team set out to improve the browser experience for individuals with phonological dyslexia. After conducting both secondary and primary research, the team identified an opportunity to help alleviate the tired eyes and general cognitive overhead that individuals with dyslexia experience when reading text on the web. Team Spectrum’s solution is an add-on that allows people to enlarge and embolden text and apply color filters to increase contrast — all from the browser toolbar.

Screenshot from Team Spectrum’s prototype showing a large cursor intended to make it easier to enlarge and embolden text for more comfortable reading

The team conducted some initial usability testing and found that their solution increased reported ease of reading and reading speed.

Honorable Mention: Team Elderline

The second-place team was interested in helping older adults use the browser more easily by creating an add-on to surface customization options. Through their research, Team Elderline determined that the biggest opportunities were to support adults over 75 years old in reading online more comfortably, reducing confusing ads on webpages, and understanding the icons that represent valuable browser controls. The team’s solution is a control panel that lets people easily manipulate the current webpage to make content more accessible and manageable.

Screenshot from Team Elderline’s prototype showing control panel options tailored to the current webpage

Based on what they learned from initial usability testing of their concept, Team Elderline’s final concept emphasized plain language and clearer selection indicators.

In addition to the pitches by Teams Spectrum and Elderline, the other student teams presented a range of intriguing concepts to support challenges encountered by older adults, students with dyslexia, and individuals with Attention Deficit Hyperactivity Disorder (ADHD) in the browser.

Next Steps

We will be sharing the winning concepts with the broader Test Pilot team and investigating the feasibility of turning the concepts into future experiments. Thanks to all of the Ravensbourne students for your fresh thinking on improving Firefox through accessibility! Read more about our collaboration with Ravensbourne on their blog.

Students Pitch New Accessibility Features for Firefox was originally published in Firefox Test Pilot on Medium.

Categorieën: Mozilla-nl planet
