
Hacks.Mozilla.Org: Avoiding race conditions in SharedArrayBuffers with Atomics

Mozilla planet - wo, 14/06/2017 - 17:44

This is the 3rd article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

In the last article, I talked about how using SharedArrayBuffers could result in race conditions. This makes working with SharedArrayBuffers hard. We don’t expect application developers to use SharedArrayBuffers directly.

But library developers who have experience with multithreaded programming in other languages can use these new low-level APIs to create higher-level tools. Then application developers can use these tools without touching SharedArrayBuffers or Atomics directly.

Layer diagram showing SharedArrayBuffer + Atomics as the foundation, and JS libraries and WebAssembly threading building on top

Even though you probably shouldn’t work with SharedArrayBuffers and Atomics directly, I think it’s still interesting to understand how they work. So in this article, I’ll explain what kinds of race conditions concurrency can bring, and how Atomics help libraries avoid them.

But first, what is a race condition?

Drawing of two threads racing towards memory

 

Race conditions: an example you may have seen before

A pretty straightforward example of a race condition can happen when you have a variable that is shared between two threads. Let’s say one thread wants to load a file and the other thread checks whether it exists. They share a variable, fileExists, to communicate.

Initially, fileExists is set to false.

Two threads working on some code. Thread 1 is loading a file if fileExists is true, and thread 2 is setting fileExists
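In code, the two threads in the figure look roughly like this (a hedged sketch; loadFile and logError are hypothetical helpers):

// Shared between both threads.
let fileExists = false;

// Thread 1: load the file if it exists.
if (fileExists) {
  loadFile();                        // hypothetical helper
} else {
  logError("file does not exist");   // hypothetical helper
}

// Thread 2: mark the file as existing.
fileExists = true;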

As long as the code in thread 2 runs first, the file will be loaded.

Diagram showing thread 2 going first and file load succeeding

But if the code in thread 1 runs first, then it will log an error to the user, saying that the file does not exist.

Diagram showing thread 1 going first and file load failing

But the error itself isn’t the real problem, and it’s not that the file doesn’t exist. The real problem is the race condition: the result depends on which thread happens to run first.

Many JavaScript developers have run into this kind of race condition, even in single-threaded code. You don’t have to understand anything about multithreading to see why this is a race.

However, there are some kinds of race conditions which aren’t possible in single-threaded code, but that can happen when you’re programming with multiple threads and those threads share memory.

Different classes of race conditions and how Atomics help

Let’s explore some of the different kinds of race conditions you can have in multithreaded code and how Atomics help prevent them. This doesn’t cover all possible race conditions, but should give you some idea why the API provides the methods that it does.

Before we start, I want to say again: you shouldn’t use Atomics directly. Writing multithreaded code is a known hard problem. Instead, you should use reliable libraries to work with shared memory in your multithreaded code.

Caution sign

With that out of the way…

Race conditions in a single operation

Let’s say you had two threads that were incrementing the same variable. You might think that the end result would be the same regardless of which thread goes first.

Diagram showing two threads incrementing a variable in turn

But even though, in the source code, incrementing a variable looks like a single operation, when you look at the compiled code, it is not a single operation.

At the CPU level, incrementing a value takes three instructions. That’s because the computer has both long-term memory and short-term memory. (I talk more about how this all works in another article).

Drawing of a CPU and RAM

All of the threads share the long-term memory. But the short-term memory—the registers—is not shared between threads.

Each thread needs to pull the value from memory into its short-term memory. After that, it can run the calculation on that value in short-term memory. Then it writes that value back from its short-term memory to the long-term memory.

Diagram showing a variable being loaded from memory to a register, then being operated on, and then being stored back to memory

If all of the operations in thread 1 happen first, and then all the operations in thread 2 happen, we will end up with the result that we want.

Flow chart showing instructions happening sequentially on one thread, then the other

But if they are interleaved in time, the value that thread 2 has pulled into its register gets out of sync with the value in memory. This means that thread 2 doesn’t take thread 1’s calculation into consideration. Instead, it just clobbers the value that thread 1 wrote to memory with its own value.

Flow chart showing instructions interleaved between threads
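Spelled out as a trace, one such lost-update interleaving looks like this (starting value 0, each thread incrementing by 1):

// Thread 1: load 0 from memory into its register
// Thread 2: load 0 from memory into its register
// Thread 1: add 1 -> its register now holds 1
// Thread 2: add 1 -> its register now holds 1
// Thread 1: store 1 back to memory
// Thread 2: store 1 back to memory (clobbers thread 1's result: final value is 1, not 2)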

One thing atomic operations do is take these operations that humans think of as being single operations, but which the computer sees as multiple operations, and make the computer see them as single operations, too.

This is why they’re called atomic operations. It’s because they take an operation that would normally have multiple instructions—where the instructions could be paused and resumed—and make it so that they all happen seemingly instantaneously, as if it were one instruction. It’s like an indivisible atom.

Instructions encased in an atom

Using atomic operations, the code for incrementing would look a little different.

Atomics.add(sabView, index, 1)
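For context, here is a hedged sketch of the setup such a call assumes (the buffer size and names are illustrative):

// Create shared memory and an integer view onto it.
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const sabView = new Int32Array(sab);   // sabView[0] starts at 0
const index = 0;

// In each thread: one indivisible read-modify-write, so no increment is lost.
Atomics.add(sabView, index, 1);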

Now that we’re using Atomics.add, the different steps involved in incrementing the variable won’t be mixed up between threads. Instead, one thread will finish its atomic operation and prevent the other one from starting. Then the other will start its own atomic operation.

Flow chart showing atomic execution of the instructions

The Atomics methods that help avoid this kind of race are:

  • Atomics.add
  • Atomics.sub
  • Atomics.and
  • Atomics.or
  • Atomics.xor
  • Atomics.exchange

You’ll notice that this list is fairly limited. It doesn’t even include things like division and multiplication. A library developer could create atomic-like operations for other things, though.

To do that, the developer would use Atomics.compareExchange. With this, you get a value from the SharedArrayBuffer, perform an operation on it, and only write it back to the SharedArrayBuffer if no other thread has updated it since you first checked. If another thread has updated it, then you can get that new value and try again.
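As a concrete illustration, here is a minimal sketch of an atomic-like multiply built that way (atomicMultiply is a hypothetical helper; sabView is assumed to be an Int32Array over a SharedArrayBuffer):

function atomicMultiply(sabView, index, factor) {
  let oldValue = Atomics.load(sabView, index);
  while (true) {
    const newValue = oldValue * factor;
    // Write newValue only if no other thread changed the slot since we read it.
    const valueSeen = Atomics.compareExchange(sabView, index, oldValue, newValue);
    if (valueSeen === oldValue) {
      return newValue;      // our write won
    }
    oldValue = valueSeen;   // another thread got there first: retry with its value
  }
}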

Race conditions across multiple operations

So those Atomic operations help avoid race conditions during “single operations”. But sometimes you want to change multiple values on an object (using multiple operations) and make sure no one else is making changes to that object at the same time. Basically, this means that during every pass of changes to an object, that object is on lockdown and inaccessible to other threads.

The Atomics object doesn’t provide any tools to handle this directly. But it does provide tools that library authors can use to handle this. What library authors can create is a lock.

Diagram showing two threads and a lock

If code wants to use locked data, it has to acquire the lock for the data. Then it can use the lock to lock out the other threads. Only it will be able to access or update the data while the lock is active.

To build a lock, library authors would use Atomics.wait and Atomics.wake, plus other ones such as Atomics.compareExchange and Atomics.store. If you want to see how these would work, take a look at this basic lock implementation.
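To give a rough idea of the shape such a lock takes, here is a stripped-down sketch (not the linked implementation; lockView is assumed to be an Int32Array over a SharedArrayBuffer, where 0 means unlocked and 1 means locked):

const UNLOCKED = 0;
const LOCKED = 1;

function lock(lockView, index) {
  // Try to flip 0 -> 1. If another thread holds the lock, sleep until woken.
  // (Atomics.wait only works in workers, never on the main thread.)
  while (Atomics.compareExchange(lockView, index, UNLOCKED, LOCKED) !== UNLOCKED) {
    Atomics.wait(lockView, index, LOCKED);
  }
}

function unlock(lockView, index) {
  Atomics.store(lockView, index, UNLOCKED);
  Atomics.wake(lockView, index, 1);   // wake one waiting thread
}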

In this case, thread 2 would acquire the lock for the data and set the value of locked to true. This means thread 1 can’t access the data until thread 2 unlocks.

Thread 2 gets the lock and uses it to lock up shared memory

If thread 1 needs to access the data, it will try to acquire the lock. But since the lock is already in use, it can’t. The thread would then wait—so it would be blocked—until the lock is available.

Thread 1 waits until the lock is unlocked

Once thread 2 is done, it would call unlock. The lock would notify one or more of the waiting threads that it’s now available.

Thread 1 is notified that the lock is available

That thread could then scoop up the lock and lock up the data for its own use.

Thread 1 uses the lock

A lock library would use many of the different methods on the Atomics object, but the methods that are most important for this use case are:

  • Atomics.wait
  • Atomics.wake

Race conditions caused by instruction reordering

There’s a third synchronization problem that Atomics take care of. This one can be surprising.

You probably don’t realize it, but there’s a very good chance that the code you’re writing isn’t running in the order you expect it to. Both compilers and CPUs reorder code to make it run faster.

For example, let’s say you’ve written some code to calculate a total. You want to set a flag when the calculation is finished.

subTotal = price + fee;  // line 1
total += subTotal;       // line 2
isDone = true;           // line 3

To compile this, we need to decide which register to use for each variable. Then we can translate the source code into instructions for the machine.

Diagram showing what that would equal in mock assembly
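The mock assembly in the diagram maps to something along these lines (a hedged reconstruction; the register assignments are illustrative):

// line 1: subTotal = price + fee  ->  add r3, r1, r2
// line 2: total += subTotal       ->  add r4, r4, r3   (needs r3 from line 1)
// line 3: isDone = true           ->  mov r5, 1        (no dependency on r3 or r4)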

So far, everything is as expected.

What’s not obvious if you don’t understand how computers work at the chip level (and how the pipelines that they use for executing code work) is that line 2 in our code needs to wait a little bit before it can execute.

Most computers break down the process of running an instruction into multiple steps. This makes sure all of the different parts of the CPU are busy at all times, so it makes the best use of the CPU.

Here’s one example of the steps an instruction goes through:

  1. Fetch the next instruction from memory
  2. Figure out what the instruction is telling us to do (aka decode the instruction), and get the values from the registers
  3. Execute the instruction
  4. Write the result back to the register


So that’s how one instruction goes through the pipeline. Ideally, we want to have the second instruction following directly after it. As soon as it has moved into stage 2, we want to fetch the next instruction.

The problem is that there is a dependency between instruction #1 and instruction #2.

Diagram of a data hazard in the pipeline

We could just pause the CPU until instruction #1 has updated subTotal in the register. But that would slow things down.

To make things more efficient, what a lot of compilers and CPUs will do is reorder the code. They will look for other instructions which don’t use subTotal or total and move those in between those two lines.

Drawing of line 3 of the assembly code being moved between lines 1 and 2

This keeps a steady stream of instructions moving through the pipe.

Because line 3 didn’t depend on any values in line 1 or 2, the compiler or CPU figures it’s safe to reorder like this. When you’re running in a single thread, no other code will even see these values until the whole function is done, anyway.

But when you have another thread running at the same time on another processor, that’s not the case. The other thread doesn’t have to wait until the function is done to see these changes. It can see them almost as soon as they are written back to memory. So it can tell that isDone was set before total.

If you were using isDone as a flag that the total had been calculated and was ready to use in the other thread, then this kind of reordering would create race conditions.

Atomics attempt to solve some of these bugs. When you use an Atomic write, it’s like putting a fence between two parts of your code.

Atomic operations aren’t reordered relative to each other, and other operations aren’t moved around them. In particular, two operations that are often used to enforce ordering are:

  • Atomics.store
  • Atomics.load

All variable updates above Atomics.store in the function’s source code are guaranteed to be done before Atomics.store is done writing its value back to memory. Even if the non-Atomic instructions are reordered relative to each other, none of them will be moved below a call to Atomics.store which comes below in the source code.

And all variable loads after Atomics.load in a function are guaranteed to be done after Atomics.load fetches its value. Again, even if the non-atomic instructions are reordered, none of them will be moved above an Atomics.load that comes above them in the source code.

Diagram showing Atomics.store and Atomics.load maintaining order
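Put together, the pattern looks something like this (a hedged sketch; the shared views and the price and fee values are illustrative):

// Shared between the two threads (hypothetical setup):
//   const data = new Float64Array(dataBuffer);   // data[0] will hold total
//   const flag = new Int32Array(flagBuffer);     // flag[0] plays the role of isDone

// Thread 1: compute, then publish.
data[0] = price + fee;        // plain writes...
Atomics.store(flag, 0, 1);    // ...are guaranteed to land before this store

// Thread 2: spin until the flag is set, then read safely.
while (Atomics.load(flag, 0) === 0) {
  // spinning; see the note below
}
console.log(data[0]);         // the total is guaranteed to be visible here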

Note: The while loop I show here is called a spinlock and it’s very inefficient. And if it’s on the main thread, it can bring your application to a halt. You almost certainly don’t want to use that in real code.

Once again, these methods aren’t really meant for direct use in application code. Instead, libraries would use them to create locks.

Conclusion

Programming multiple threads that share memory is hard. There are many different kinds of race conditions just waiting to trip you up.

Drawing of shared memory with a dragon and "Here be dragons" above

This is why you don’t want to use SharedArrayBuffers and Atomics in your application code directly. Instead, you should depend on proven libraries by developers who are experienced with multithreading, and who have spent time studying the memory model.

It is still early days for SharedArrayBuffer and Atomics. Those libraries haven’t been created yet. But these new APIs provide the basic foundation to build on top of.

Categories: Mozilla-nl planet

Mozilla Launches Campaign to Raise Awareness for Internet Health

Mozilla Blog - wo, 14/06/2017 - 17:00

Today, Mozilla unveils several initiatives including an event focused on Internet Health with special guests DeRay McKesson, Lauren Duca and more, a brand new podcast, new tech to help create a voice database, as well as some local SF pop-ups.

Mozilla is doing this to draw the public’s attention to mounting concern over the consolidation of power online, including the Federal Communications Commission’s proposed actions to kill net neutrality.

New Polling

60 percent of people in the U.S. are worried about online services being owned by a small number of companies, according to a new Mozilla/Ipsos poll released today.

“The Internet is a vital tool that touches every aspect of modern life,” said Mark Surman, Mozilla’s Executive Director. “If you care about freedom of speech, economic growth and a level playing field, then you care about guarding against those who would throttle, lock down or monopolize the web as if they owned it.”

According to another Mozilla/Ipsos poll, 76 percent of people in the U.S. support net neutrality.

“At Mozilla, we’re fueling a movement to ensure the web is something that belongs to all of us. Forever,” Surman added.

“A Night for Internet Health”

On Thursday, June 29, Mozilla will host “A Night for Internet Health” — a free live event featuring prominent thinkers, performers, and political voices discussing power, progress, and life on the Web.

Mozilla will be joined by musician Neko Case, Pod Save the People host DeRay McKesson, Teen Vogue columnist Lauren Duca, comedian Moshe Kasher, tech media personality Veronica Belmont, and Sens. Al Franken and Ron Wyden via video.

The event is from 7-10 p.m. (PDT), June 29 at the SFJazz Center in San Francisco. Tickets will be available through the Center’s Box Office starting on June 15.

Credentials are available for media.

IRL podcast

On June 26, Mozilla will debut the podcast IRL: Because Online Life is Real Life. Host Veronica Belmont will share stories from the wilds of the Web, and real talk about online issues that affect us all.

People can listen to the IRL trailer or pre-subscribe to IRL on Apple Podcasts, Stitcher, Pocket Casts, Overcast, or RadioPublic.

Project Common Voice: The World’s First Crowdsourced Voice Database

Voice-enabled devices represent the next major disruption, but access to databases is expensive and doesn’t include a diverse set of accents and languages. Mozilla’s Project Common Voice aims to solve the problem by inviting people to donate samples of their voices to a massive global project that will allow anyone to quickly and easily train voice-enabled applications. Mozilla will make this resource available to the public later this year.

The project will be featured at guerrilla pop-ups in San Francisco, where people can also create custom tote bags or grab a T-shirt that expresses their support for a healthy Internet and net neutrality.

Locations:

Pop-ups:
  • Wednesday, June 28: From noon – 6 p.m. PDT at Justin Herman Plaza in San Francisco.
  • Thursday, June 29: From 7 – 10 p.m. PDT at SFJazz in San Francisco.
  • Friday, June 30 – Saturday, July 1: From noon – 6 p.m. PDT at Union Square in San Francisco.

SF Take-Over

Beginning on Monday, June 19, Mozilla will launch a provocative advertising campaign across San Francisco and online, highlighting what’s at stake with the attacks on net neutrality and power consolidation on the web.

The advertisements juxtapose opposing messages, highlighting the power dynamics of the Internet and offering steps people can take to create a healthier Internet. For example, one advertisement contrasts “Let’s Kill Innovation” with “Actually, let’s not. Raise your voice for net neutrality.”

San Franciscans and visitors will see the ads across the city: they will be placed along Market and Embarcadero Streets and at San Francisco Airport, projected on buildings, and will also run online, on radio, on social media and on prominent websites.

About Mozilla

Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets and mobile devices. For more information, visit www.mozilla.org.

The post Mozilla Launches Campaign to Raise Awareness for Internet Health appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet


Robert O'Callahan: New "rr pack" Command

Mozilla planet - wo, 14/06/2017 - 15:04

I think there's huge potential to use rr for debugging cloud services. Apparently right now interactive debugging is mostly not used in the cloud, which makes sense — it's hard to identify the right process to debug, much less connect to it, and even if you could, stopping it for interactive analysis would likely interfere too much with your distributed system. However, with rr you could record any number of process executions without breaking your system, identify the failed runs after the fact, and debug them at your leisure.

Unfortunately there are a couple of problems making that difficult right now. One is that the largest cloud providers don't support the hardware performance counter rr needs. I'm excited to hear that Amazon has recently enabled some HW performance counters on dedicated hosts — hopefully they can be persuaded to add the retired-conditional-branch counter to their whitelist (and someone can fix the Xen PMU virtualization bug that breaks rr). Another problem is that rr's traces aren't easy to move from one machine to another. I've started addressing this problem by implementing a new rr command, rr pack.

There are two problems with rr traces. One is that on filesystems that do not support "reflink" file copies, to keep recording overhead low we sometimes hardlink files into the trace, or for system libraries we just assume they won't change even if we can't hardlink them. This means traces are not fully self-contained in the latter case, and in the former case the recording can be invalidated if the files change. The other problem is that every time an mmap occurs we clone/link a new file into the trace, even if a previous mmap mapped the same file, because we have no fast way of telling if the file has changed or not. This means traces appear to contain large numbers of large files but many of those files are duplicates.

rr pack fixes both of those problems. You run it on a trace directory in-place. It deduplicates trace files by computing a cryptographic hash (BLAKE2b, 256 bits) of each file and keeping only one file for any given hash. It identifies needed files outside the trace directory, including hardlinks to files outside it, and copies them into the trace directory. It rewrites trace records (the mmaps file) to refer to the new files, so the trace format hasn’t changed. You should be able to copy around the resulting trace, and modify any files outside the trace, without breaking it. I tried pretty hard to ensure that interrupted rr pack commands leave the trace intact (using fsync and atomic rename); of course, an interrupted rr pack may not fully pack the trace so the operation should be repeated. Successful rr pack commands are idempotent.

We haven't really experimented with trace portability yet so I can't say how easy it will be to just zip up a trace directory and replay it on a different computer. We know that currently replaying on a machine with different CPUID values is likely to fail, but we have a solution in the works for that — Kyle's patches to add ARCH_SET_CPUID to control "CPUID faulting" are in Linux kernel 4.12 and will let rr record and replay CPUID values.

Categories: Mozilla-nl planet

Air Mozilla: Rust Bay Area Meetup June 2017

Mozilla planet - wo, 14/06/2017 - 04:00

Rust Bay Area Meetup June 2017 https://www.meetup.com/Rust-Bay-Area/ Tentative agenda:
  • Andrew Stone from VMware talking about Haret
  • William Morgan from Buoyant talking about linkerd-tcp

Categories: Mozilla-nl planet

Sean McArthur: hyper v0.11

Mozilla planet - di, 13/06/2017 - 22:27

The async release of hyper is here, version 0.11.0. There’s an updated website, and new guides to try to help you get up to speed with all the changes.

hyper is an HTTP library built in Rust, providing fast and safe client and server implementations.

v0.11

This release marks a form of stability for async hyper. This isn’t saying hyper’s API won’t continue to evolve (and break), but that when such a break happens, it will happen in a v0.12, and the changes will be concentrated. It should be possible to start building frameworks and tools using v0.11.

Even before v0.11 was tagged, many were so excited by the prospects of async hyper that they already started using it. Some examples:

  • sccache has been using hyper’s Client to manage resources in S3.
  • npm uses hyper for their Registry change stream.

Async

The biggest deal here, of course, is the switch to non-blocking (or “async”) IO. This has been the push for this release for a long time, and the landscape in the Rust community changed a lot while we were working on this. Last year, Tokio, a framework for building asynchronous network protocols, was released. There are a lot of great things to say about it, and hyper has embraced it fully.

This means a big change in API.

For instance, Request and Response bodies are no longer used via the std::io::{Read, Write} traits. Instead, bodies are Streams of bytes. Streams are essentially a Future that can resolve multiple times, which matches how an async connection works: bunches of bytes are received at different times.

By integrating with Tokio, hyper and the community gain a lot. Adding in Transport Layer Security is just combining hyper::server::Http with something like tokio_tls::TlsServer. That same TlsServer can be plugged into any protocol, and Http can be wrapped in any other community piece implementing the right trait. The same can be done with other concepts, like generic timeouts.

Hop over to the guides if you’d like to see how to get working examples.

Headers

Being a large breaking change release, an opportunity was taken to refine the headers system in hyper. Some standout changes:

  • A Raw type was added, and the set_raw, get_raw, etc. methods now use it. It allows for a more ergonomic way of adding raw header values, and it’s also faster in most cases.
  • The HeaderFormat trait has been merged into the Header trait. They were previously separate due to trait object safety rules, but now that trait methods can have a where Self: Sized added, there is no need to separate them.
  • The semantics of Header::fmt_header were clarified. Most of the time, headers can be written on one line. There is the rare exception (technically only Set-Cookie is specified) where each “value” must be on a separate line. Now, fmt_header receives a hyper::header::Formatter, with only a fmt_line method. Pretty much every header can just implement std::fmt::Display, and call f.fmt_line(self), but now Set-Cookie doesn’t need to use a hack to format itself.

Performance

hyper v0.10 was no slouch. It can churn through requests and pump out responses. However, as it uses blocking IO, it hits a problem when you have tons of connections to your server at the same time. Blocking IO means it needs to use multiple threads, only being able to deal with 1 connection per thread. Threads, when you have a lot, get to be expensive.1 So, switching to non-blocking IO means that we keep going fast, but each additional connection isn’t nearly as expensive.

hyper v0.11 is fast2, and handles thousands of connections like a champ.

Changelog

The changes are big. There is a changelog if you want to see all of them. The changelog tries to only contain changes from v0.10, but it’s not exhaustive.

Thanks

There are a lot of people to thank for getting this release out the door. This really is a fantastic community.

Next

hyper is now tracking the Futures and Tokio crates. Work is happening in there as well, as we find patterns and problems that aren’t unique to hyper, and should be available for any async protocol.

There has been community desire (and on the hyper team too!) to stabilize some sort of http crate. This would contain types for handling statuses, methods, versions, and headers, but without client or server or protocol version implementations. We’re trying to find a good design that supports all the possible use cases, and HTTP1 and HTTP2, without sacrificing any performance. Once such a thing exists, hyper would likely replace the types it uses with those.

In doing the above, that may mean that hyper’s current headers system won’t fit. It might make sense to break that out into its own crate, so that people who want typed headers can have them, while a bare bones server could live without them. This would also help reqwest in its road to 1.0, since it publicly exports hyper::headers, but hyper likely won’t reach v1.0 before it.

And of course, we always want to go faster. That will never stop!

v0.11.0

Again, go get it! Read the new guides. Tell us what you think!

  1. hyper uses a set number of threads, not growing as more connections are made. It’s a different trade off, but not too relevant for explaining why non-blocking IO is better. 

  2. hyper doesn’t lead the pack in benchmarks (yet), but it’s not in the back either. The last benchmark put it at 58% of the fastest contender’s requests per second. Since that benchmark was published, some significant low-hanging improvements were made. A new preview should be available soon. And we’ll keep going!

Categories: Mozilla-nl planet

Air Mozilla: Rust Libs Meeting 2017-06-13

Mozilla planet - di, 13/06/2017 - 22:00

Rust Libs Meeting 2017-06-13 walkdir crate evaluation

Categories: Mozilla-nl planet

The Best Firefox Ever

Mozilla Blog - di, 13/06/2017 - 21:00

With E10s, our new version of Firefox nails the “just right” balance between memory and speed


On the Firefox team, one thing we always hear from our users is that they rely on the web for complex tasks like trip planning and shopping comparisons. That often means having many tabs open. And the sites and web apps running in those tabs often have lots of things going on: animations, videos, big pictures and more. Complex sites are more and more common. The average website today is nearly 2.5 megabytes – the same size as the original version of the game Doom, according to Wired. Up until now, a complex site in one Firefox tab could slow down all the others. That often meant a less than perfect browsing experience.

To make Firefox run even complex sites faster, we’ve been changing it to run using multiple operating system processes. Translation? The old Firefox used a single process to run all the tabs in a browser. Modern browsers split the load into several independent processes. We named our project to split Firefox into multiple processes ‘Electrolysis’ (or E10s) after the chemical process that divides water into its core elements. E10s is the largest change to Firefox code in our history. And today we’re launching our next big phase of the E10s initiative.

A Faster Firefox With Four Content Processes

With today’s release, Firefox uses up to four processes to run web page content across all open tabs. This means that a heavy, complex web page in one tab has a much lower impact on the responsiveness and speed in other tabs. By separating the tabs into separate processes, we make better use of the hardware on your computer, so Firefox can deliver you more of the web you love, with less waiting.

I’ve been living with this turned on by default in the pre-release version of Firefox (Nightly). The performance improvements are remarkable. Besides running faster and crashing less, E10s makes websites feel smoother. Even busy pages, like Facebook newsfeeds, spool out smoothly and cleanly. After making the switch to Firefox with E10s, now I can’t live without it.

Firefox 54 with E10s makes sites run much better on all computers, especially on computers with less memory. Firefox aims to strike the “just right” balance between speed and memory usage. To learn more about Firefox’s multi-process architecture, and how it’s different from Chrome’s, check out Ryan Pollock’s post about the search for the Goldilocks browser.

Multi-Process Without Memory Bloat

Firefox Wins Memory Usage Comparison

In our tests comparing memory usage for various browsers, we found that Firefox used significantly less RAM than other browsers on Windows 10, macOS, and Linux. (RAM stands for Random Access Memory, the type of memory that stores the apps you’re actively running.) This means that with Firefox you can browse freely, but still have enough memory left to run the other apps you want to use on your computer.

The Best Firefox Ever

This is the best release of Firefox ever, with improvements that will be very noticeable to even casual users of our beloved browser. Several other enhancements are shipping in Firefox today, and you can visit our release notes to see the full list. If you’re a web developer, or if you’ve built a browser extension, check out the Hacks Blog to read about all the new Web Platform and WebExtension APIs shipping today.

As we continue to make progress on Project Quantum, we are pushing forward in building a completely revamped browser made for modern computing. It’s our goal to make Firefox the fastest and smoothest browser for PCs and mobile devices. Through the end of 2017, you’ll see some big jumps in capability and performance from Team Firefox. If you stopped using Firefox, try it again. We think you’ll be impressed. Thank you and let us know what you think.

The post The Best Firefox Ever appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet


Air Mozilla: Selling Your Attention: The Web and Advertising with Tim Wu

Mozilla planet - di, 13/06/2017 - 21:00

The Web and Advertising with Tim Wu. You don’t need cash to search Google or to use Facebook, but they’re not free. We pay for these services with our attention and with...

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 54: E10S-Multi, WebExtension APIs, CSS clip-path

Mozilla planet - di, 13/06/2017 - 20:57

“E10S-Multi:” A new multi-process model for Firefox

Today’s release completes Firefox’s transformation into a fully multi-process browser, running many simultaneous content processes in addition to a UI process and, on Windows, a special GPU process. This design makes it easier to utilize all of the cores available on modern processors and, in the future, to securely sandbox web content. It also improves stability, ensuring that a single content process crashing won’t take out all of your other tabs, nor the rest of the browser.

Illustration of Firefox's new multi-process architecture, showing one Firefox UI process talking to four Content Processes. Each content process has several tabs within it.

An initial version of multi-process Firefox (codenamed “Electrolysis”, or “e10s” for short) debuted with Firefox 48 last August. This first version moved Firefox’s UI into its own process so that the browser interface remains snappy even under load. Firefox 54 takes this further by running many content processes in parallel: each one with its own RAM and CPU resources managed by the host operating system.

Additional processes do come with a small degree of memory overhead, no matter how well optimized, but we’ve worked wonders to reduce this to the bare minimum. Even with those optimizations, we wanted to do more to ensure that Firefox is respectful of your RAM. That’s why, instead of spawning a new process with every tab, Firefox sets an upper limit: four by default, but configurable by users (dom.ipc.processCount in about:config). This keeps you in control, while still letting Firefox take full advantage of multi-core CPUs.
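For example, a user who wants a different cap could change it in about:config, or equivalently in a user.js file (the value 8 here is just an illustration):

// user.js -- raise the content-process cap from the default of four
user_pref("dom.ipc.processCount", 8);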

To learn more about Firefox’s multi-process architecture, check out this Medium post about the search for the “Goldilocks” browser.

New WebExtension APIs

Firefox continues its rapid implementation of new WebExtension APIs. These APIs are designed to work cross-browser, and will be the only APIs available to add-ons when Firefox 57 launches this November.

Most notably, it’s now possible to create custom DevTools panels using WebExtensions. For example, the screenshot below shows the Chrome version of the Vue.js DevTools running in Firefox without any modifications. This dramatically reduces the maintenance burden for authors of devtools add-ons, ensuring that no matter which framework you prefer, its tools will work in Firefox.

Screenshot of Firefox showing the Vue.js DevTools extension running in Firefox
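A minimal sketch of what this looks like from an extension (the file names are illustrative, and the script must be loaded by the page named in the manifest's devtools_page key):

// devtools.js
browser.devtools.panels.create(
  "My Panel",            // title shown in the DevTools tab strip
  "icons/panel.png",     // panel icon
  "panel/index.html"     // document rendered inside the panel
).then((panel) => {
  panel.onShown.addListener(() => console.log("panel shown"));
});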

Additionally, many more WebExtension APIs landed in this release. Read about the full set of new and changed APIs on the Add-ons Blog, or check out the complete WebExtensions documentation on MDN.

CSS shapes in clip-path

The CSS clip-path property allows authors to define which parts of an element are visible. Previously, Firefox only supported clipping paths defined as SVG files. With Firefox 54, authors can also use CSS shape functions for circles, ellipses, rectangles or arbitrary polygons (Demo).

Like many CSS values, clipping shapes can be animated. There are some rules that control how the interpolation between values is performed, but long story short: as long as you are interpolating between the same shapes, or polygons with the same number of vertices, you should be fine. Here’s how to animate a circular clipping:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.
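The same effect can also be driven from JavaScript with the Web Animations API (a rough sketch; the selector and shape values are illustrative):

const el = document.querySelector(".clipped");
el.animate(
  [
    { clipPath: "circle(10% at 50% 50%)" },
    { clipPath: "circle(60% at 50% 50%)" }
  ],
  { duration: 1500, iterations: Infinity, direction: "alternate" }
);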

You can also dynamically change clipping according to user input, like in this example that features a “periscope” effect controlled by the mouse:

See the Pen clip-path (periscope) by ladybenko (@ladybenko) on CodePen.

To learn more, check our article on clip-path from last week.

Project Dawn

Lastly, the release of Firefox 54 marks the completion of the Project Dawn transition, eliminating Firefox’s pre-beta release channel, codenamed “Aurora.” Firefox releases now move directly from Nightly into Beta every six weeks. Firefox Developer Edition, which was based on Aurora, is now based on Beta.

For early adopters, we’ve also made Firefox Nightly for Android available on Google Play.

Categories: Mozilla-nl planet

Daniel Stenberg: HTTP Workshop s03e02

Mozilla planet - di, 13/06/2017 - 17:29

(Season three, episode two)

Previously, on the HTTP Workshop. Yesterday ended with a much appreciated group dinner and now we’re back energized and eager to continue blabbing about HTTP frames, headers and similar things.

Martin from Mozilla talked on “connection management is hard”. Parts of the discussion were around the HTTP/2 connection coalescing that I’ve blogged about before. The ORIGIN frame is a draft for a suggested way for servers to more clearly announce which origins they can answer for on that connection, which should reduce the frequency of 421 responses. The ORIGIN frame overrides DNS and will allow coalescing even for origins that don’t otherwise resolve to the same IP addresses. Also discussed: the Alt-Svc header, a suggested CERTIFICATE frame, and how an HTTP/2 server knows for which origins it can do PUSH.

A lot of positive words were expressed about the ORIGIN frame. Wildcard support?

Willy from HA-proxy talked about his Memory and CPU efficient HPACK decoding algorithm. Personally, I think the award for the best slides of the day goes to Willy’s hand-drawn notes.

Lucas from BBC talked about usage data for iPlayer, how much data and how many requests they serve, and how their largest share of users are “non-browsers”. Lucas mentioned their work on writing a libcurl adaptation to make gstreamer use it instead of libsoup. Lucas’ talk triggered a lengthy discussion on what needs there are and how (if at all) you can divide clients into browsers and non-browsers.

Wenbo from Google spoke about Websockets and showed usage data from Chrome. The median websockets connection time is 20 seconds and around 10% are shorter than 0.5 seconds. At the 97th percentile they live over an hour. The connection success rates for Websockets are depressingly low when done in the clear while the situation is better when done over HTTPS. For some reason the success rate on Mac seems to be extra low, and Firefox telemetry seems to agree. Websockets over HTTP/2 (or not) is an old hot topic that brought us back to reiterate issues we’ve debated a lot before. This time we also got a lovely and long side track into web push and how that works.

Roy talked about Waka, an HTTP replacement protocol idea and concept that Roy has been carrying around for a long time (he started this in 2001) and which he is now coming back to do actual work on. A big part of the discussion was focused around the wakli compression ideas: what the idea is and how it could be done and evaluated. Also, Roy is not a fan of content negotiation and wants it done differently, so he’s addressing that in Waka.

Vlad talked about his suggestion for how to do cross-stream compression in HTTP/2 to significantly enhance compression ratio when, for example, switching to many small resources over h2 compared to a single huge resource over h1. The security aspect of this feature is what catches most people’s attention, and it dominated the following discussion. How can we make sure this doesn’t leak sensitive information? What protocol mechanisms exist or can we invent to help out making this work in a way that is safer (by default)?

Trailers. This is again a favorite topic that we’ve discussed before and that has resurfaced. There are people around the table who’d like to see support for trailers, and we discussed the same topic in the HTTP Workshop in 2016 as well. The corresponding issue on trailers filed in the fetch github repo shows a lot of the concerns.

Julian brought up the subject of “7230bis” – when and how do we start the work. What do we want from such a revision? Fixing the bugs seems like the primary focus. “10 years is too long until update”.

Kazuho talked about “HTTP/2 attack mitigation” and how to handle clients doing many parallel slow POST requests to a CDN and them having an origin server behind that runs a new separate process for each upload.

And with this, the day and the workshop 2017 was over. Thanks to Facebook for hosting us. Thanks to the members of the program committee for driving this event nicely! I had a great time. The topics, the discussions and the people – awesome!

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 186

Mozilla planet - di, 13/06/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community News & Blog Posts Crate of the Week

This week's crate is structopt, a crate that lets you auto-derive your command-line options from a struct to parse them into. Thanks to m4b for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

115 pull requests were merged in the last week.

New Contributors
  • Arthur Arnold
  • Campbell Barton
  • Fuqiao Xue
  • gentoo90
  • Inokentiy Babushkin
  • Michael Killough
  • Nick Whitney
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Categories: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - June 13, 2017

Mozilla planet - di, 13/06/2017 - 02:00

Here’s what happened on the MozMEAO SRE team from June 6th - June 13th.

Current work

Frankfurt Kubernetes cluster provisioning

We’re provisioning a new Kubernetes 1.6.4 cluster in Frankfurt (eu-central-1). This cluster takes advantage of features in new versions of kops, helm, and kubectl.

We’ve modified our New Relic, Datadog, and mig DaemonSets with tolerations so we can gather system metrics from both K8s master and worker nodes.

The first apps to be installed in this cluster will be bedrock and basket.

Basket move to Kubernetes

Basket has been moved to Kubernetes! We experienced some networking issues in our Virginia Kubernetes cluster, so traffic has been routed away from this cluster for the time being.

Snippets

The Firefox 56 activity stream will ship to some users, with some form of snippets integration.

Links
Categories: Mozilla-nl planet

Aaron Klotz: Why I prefer using CRITICAL_SECTIONs for mutexes in Windows Nightly builds

Mozilla planet - ma, 12/06/2017 - 23:50

In the past I have argued that our Nightly builds, both debug and release, should use CRITICAL_SECTIONs (with full debug info) for our implementation of mozilla::Mutex. I’d like to illustrate some reasons why this is so useful.

They enable more utility in WinDbg extensions

Every time you initialize a CRITICAL_SECTION, Windows inserts the CS’s debug info into a process-wide linked list. This enables their discovery by the Windows debugging engine, and makes the !cs, !critsec, and !locks commands more useful.

They enable profiling of their initialization and acquisition

When the “Create user mode stack trace database” gflag is enabled, Windows records the call stack of the thread that called InitializeCriticalSection on that CS. Windows also records the call stack of the owning thread once it has acquired the CS. This can be very useful for debugging deadlocks.

They track their contention counts

Since every CS has been placed in a process-wide linked list, we may now ask the debugger to dump statistics about every live CS in the process. In particular, we can ask the debugger to output the contention counts for each CS in the process. After running a workload against Nightly, we may then take the contention output, sort it in descending order, and determine which CRITICAL_SECTIONs are the most contended in the process.

We may then want to more closely inspect the hottest CSes to determine whether there is anything that we can do to reduce contention and all of the extra context switching that entails.

In Summary

When we use SRWLOCKs or initialize our CRITICAL_SECTIONs with the CRITICAL_SECTION_NO_DEBUG_INFO flag, we are denying ourselves access to this information. That’s fine on release builds, but on Nightly I think it is worth having around. While I realize that most Mozilla developers have not used this until now (otherwise I would not be writing this blog post), this rich debugger info is one of those things that you do not miss until you do not have it.

For further reading about critical section debug info, check out this archived article from MSDN Magazine.

Categories: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 12 Jun 2017

Mozilla planet - ma, 12/06/2017 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categories: Mozilla-nl planet

Air Mozilla: Rain of Rust - 2nd online meeting

Mozilla planet - ma, 12/06/2017 - 18:00

Rain of Rust - 2nd online meeting This event belongs to a series of online Rust events that we run in the month of June, 2017

Categories: Mozilla-nl planet

Daniel Stenberg: HTTP Workshop – London edition. First day.

Mozilla planet - ma, 12/06/2017 - 17:40

The HTTP workshop series is back for a third time this northern hemisphere summer. The selected location for the 2017 version is London and this time we’re down to a two-day event (we seem to remove a day every year)…

Nothing in this blog entry is a quote to be attributed to a specific individual but they are my interpretations and paraphrasing of things said or presented. Any mistakes or errors are all mine.

At 9:30 this clear Monday morning, 35 persons sat down around a huge table in a room in the Facebook offices. Most of us are the same familiar faces that have already participated in one or two HTTP workshops, but we also have a set of people this year who haven’t attended before. Getting fresh blood into these discussions is certainly valuable. Most major players are represented, including Mozilla, Google, Facebook, Apple, Cloudflare, Fastly, Akamai, HA-proxy, Squid, Varnish, BBC, Adobe and curl!

Mark (independent, co-chair of the HTTP working group as well as the QUIC working group) kicked it all off with a presentation on quic and where it is right now in terms of standardization and progress. The upcoming draft-04 is becoming the first implementation draft even though the goal for interop is set basically at handshake and some very basic data interaction. The quic transport protocol is still in a huge flux and things have not settled enough for it to be interoperable right now to a very high level.

Jana from Google presented on quic deployment over time and how it now accounts for about 7% of internet traffic. The Android YouTube app's switch to QUIC last year showed a huge bump in usage numbers. Quic is largely about reducing latency, and the numbers show that users really do get a reduction. By its nature, it improves the situation most for those who currently have the worst connections.

It doesn’t solve first world problems, this solves third world connection issues.

The currently observed 2x CPU usage increase for QUIC connections, as compared to h2+TLS, is mostly blamed on the Linux kernel, which apparently is not nearly as good at this job as it should be. Things have clearly been more optimized for TCP over the years, leaving room for improvement on the UDP side going forward. “Making kernel bypassing an interesting choice”.

Alan from Facebook talked about header compression for quic and presented data, graphs and numbers on how HPACK(-for-quic), QPACK and QCRAM compare when used for quic in different networking conditions and scenarios. Those are the three current header compression alternatives open for quic; Alan first explained the basics behind them and then how they compare when run in his simulator. The current HPACK version (adapted to quic) seems to be out of the question for head-of-line-blocking reasons. The QCRAM suggestion seems to run well but has two main flaws: it requires an awkward layering violation, and possibly an annoying reframing on resends. Clearly some more experiments can be done, possibly with a hybrid where some QCRAM ideas are brought into QPACK. Alan hopes to get his simulator open sourced in the coming months, which will then allow more people to experiment and reproduce his numbers.

Hooman from Fastly spoke about problems and challenges with HTTP/2 server push, the 103 Early Hints HTTP response, and cache digests. This took the discussions on push into the weeds and into the dark protocol corners we've been in before, and all sorts of ideas and suggestions were brought up. Some of them have been discussed before without being resolved yet, and some ideas were new, at least to me. The general consensus seems to be that push is fairly complicated and there are a lot of corner cases and murky areas that haven't been clearly documented, but it is a feature that is now being used, and for the CDN use case it can help with a lot more than “just an RTT”. But is the 103 response perhaps good enough for most of the cases?

How well server push fares is something the QUIC working group is interested in, since the question was already asked this morning whether a first version of quic could be made without push support. The jury is still out on that, I think.

ekr from Mozilla spoke about TLS 1.3 and 0-RTT: what the TLS 1.3 handshake looks like and how applications and servers can take advantage of the new 0-RTT and “0.5-RTT” features. TLS 1.3 has already passed WGLC and there are now “only” a few issues pending to be solved. Taking advantage of 0-RTT in an HTTP world opens up interesting questions and issues, as HTTP request resends and retries are becoming increasingly prevalent.

Next: day two.

Categorieën: Mozilla-nl planet

Tarek Ziadé: Molotov, Arsenic & Geckodriver

Mozilla planet - ma, 12/06/2017 - 08:05

Molotov is the load testing tool we're using for stressing our web services at Mozilla QA.

It's a very simple framework based on asyncio & aiohttp that lets you run tests with a lot of concurrent coroutines. Using an event loop makes it quite efficient at running a lot of concurrent requests against a single endpoint. Molotov is used with another tool to perform distributed load tests from the cloud. But even if you use it from your laptop, it can send a fair amount of load. On one project, we were able to kill the service with one MacBook sending 30,000 requests per second.
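To give an idea of the shape of a test, here is a minimal sketch of a Molotov load-test scenario; the endpoint is hypothetical:

import molotov

@molotov.scenario(1)
async def hit_health_endpoint(session):
    # session behaves like an aiohttp client session managed by the worker;
    # many of these coroutines run concurrently on a single event loop
    async with session.get('http://localhost:8080/health') as resp:
        assert resp.status == 200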

Molotov is also handy to run integration tests. The same scenario used to load test a service can be used to simulate a few users on a service and make sure it behaves as expected.

But the tool can only test HTTP(S) endpoints via aiohttp.Client, so if you want to run tests through a real browser, you need to use a tool like Selenium, or drive the browser directly via Marionette for example.

Running real browsers in Molotov can make sense for some specific use cases. For example, you can have a scenario where several users interact on a web page and have the JS executed there: a chat app, a shared pad, etc.

But the problem with the Selenium Python libraries is that they are all written (as far as I know) in a synchronous fashion. They can of course be used in Molotov, but each call would block the loop and defeat concurrency.
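One classic workaround, sketched here with a hypothetical synchronous client, is to push each blocking call onto a thread pool with run_in_executor; that keeps the loop responsive, but it reintroduces threads, which is part of why an async-native client is attractive:

import asyncio

def blocking_click(browser, selector):
    # stand-in for a synchronous Selenium-style call
    browser.find_element_by_css_selector(selector).click()

async def click_without_blocking(loop, browser, selector):
    # hand the blocking call to the default thread pool executor so the
    # event loop can keep scheduling other coroutines in the meantime
    await loop.run_in_executor(None, blocking_click, browser, selector)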

The other limitation is that one instance of a browser cannot be shared by several concurrent users. For instance in Firefox, even though Marionette is internally built in an async way, if two concurrent scripts try to change the active tab at the same time, they will break each other's scenarios.

Introducing Arsenic

By the time I was thinking about building an async library to drive browsers, I had an interesting conversation with Jonas Obrist whom I had met at Pycon Malaysia last year. He was in the process of writing an asynchronous Selenium client for his needs. We ended up agreeing that it would be great to collaborate on an async library that would work against the new WebDriver protocol, which defines HTTP endpoints a browser can serve.

WebDriver is going to be implemented in all browsers, and a library that uses that protocol would be able to drive all kinds of browsers. In Firefox we have a similar feature with Marionette, which is a TCP server you can use to drive Firefox. But eventually, Firefox will implement WebDriver.

Geckodriver is Mozilla's WebDriver implementation, and can be used to proxy calls to Firefox. Geckodriver is an HTTP server that translates WebDriver calls into Marionette calls, and also deals with starting and stopping Firefox.
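Since geckodriver is just an HTTP server, you can get a feel for the protocol with nothing more than aiohttp. Here is a rough sketch, assuming geckodriver is already running on its default port (4444); the exact response shape has varied between versions:

import asyncio
import aiohttp

async def new_webdriver_session():
    async with aiohttp.ClientSession() as http:
        # "New Session" is the entry point of the WebDriver protocol;
        # geckodriver takes care of launching Firefox for us
        async with http.post('http://localhost:4444/session',
                             json={'capabilities': {}}) as resp:
            return await resp.json()  # contains the new session id

print(asyncio.get_event_loop().run_until_complete(new_webdriver_session()))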

And Arsenic is the async WebDriver client Jonas started. It's already working great. The project is here on Github: https://github.com/HDE/arsenic
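To give a taste of the user side, here is a minimal sketch based on the API shown in Arsenic's docs, assuming Firefox and geckodriver are installed locally:

import asyncio
from arsenic import get_session, services, browsers

async def visit_example():
    # get_session starts geckodriver, which in turn starts Firefox
    async with get_session(services.Geckodriver(), browsers.Firefox()) as session:
        await session.get('http://example.com')

asyncio.get_event_loop().run_until_complete(visit_example())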

Molotov + Arsenic == Molosonic

To use Arsenic with Molotov, I just need to pass along the event loop used by the load testing tool, and also make sure that at most one Firefox browser runs per Molotov worker. We want one browser instance attached to each session instance while the test is running.

The setup_session and teardown_session fixtures are the right place to start and stop a browser via Arsenic. To make the setup even easier, I've created a small extension for Molotov called Molosonic, which takes care of running a Firefox browser and attaching it to the worker session.

In the example below, a browser is created every time a worker starts a new session:

import molotov
from molosonic import setup_browser, teardown_browser

@molotov.setup_session()
async def _setup_session(wid, session):
    await setup_browser(session)

@molotov.teardown_session()
async def _teardown_session(wid, session):
    await teardown_browser(session)

@molotov.scenario(1)
async def example(session):
    firefox = session.browser
    await firefox.get('http://example.com')

That's all it takes to use a browser in Molotov in an asynchronous way, thanks to Arsenic. From there, driving a test that simulates several users hitting a webpage and interacting through it involves some synchronization subtleties, which I will demonstrate in a tutorial I am still working on.

All these projects are still very new and not ready for prime time, but you can still check out Arsenic's docs at http://arsenic.readthedocs.io

Beyond Molotov use cases, Arsenic is a very exciting project if you need a way to drive browsers in an async program. And async programming is tomorrow's standard in Python.

Categorieën: Mozilla-nl planet

Firefox Nightly: Date/Time Inputs Enabled on Nightly

Mozilla planet - ma, 12/06/2017 - 06:01

Exciting! Firefox is now providing simple and chic interfaces for representing, setting and picking a time or date on Nightly. Various content attributes defined in the HTML standard, such as @step, @min, and @max, are implemented for finer-grained control over data values.

Take a closer look at this feature, and come join us in improving it and its browser compatibility!

What’s Currently Supported

<input type=time>

The default format is shown below.

Here is how it looks when you are setting a value for a time. The value provided must be in the format “hh:mm[:ss[.mmm]]”, according to the spec.
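In markup, setting an initial value might look like this (the value here is a hypothetical example in that format):

<input type="time" value="09:30">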

Note that there is no picker for <input type=time>. We decided not to support it since we think it’s easier and faster to enter a time using the keyboard than selecting it from a picker. If you have a different opinion, let us know!

<input type=date>

The layout of an input field for a date is shown below. If the @value attribute is provided, it must be in the format “yyyy-mm-dd”, according to the spec.
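For instance (a hypothetical value in that format):

<input type="date" value="2017-06-12">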

A date picker pops up when you click on the input field. You can set a date either by typing in the field or by selecting one from the picker.


Validation

Date/Time inputs allow you to set content attributes like @min, @max, @step or @required to constrain the accepted date/time values.

For example, you can set the @min and @max attributes for <input type=time>, and if the user selects a time outside of the specified range, a validation error message is shown to let the user know the expected range.

By setting the @step attribute, you can specify the expected date/time interval values. For example:
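As an illustration (the values are hypothetical), @step on a time input is specified in seconds, so the following accepts only times on 15-minute boundaries between 09:00 and 18:00:

<input type="time" min="09:00" max="18:00" step="900">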

Localization

<input type=date> and <input type=time> input boxes are automatically formatted based on your browser locale, meaning the language of the Firefox build you downloaded and installed. This is the same as Firefox's interface language.

This is how <input type=time> looks in the Traditional Chinese version of Firefox!

The calendar picker for <input type=date> is also formatted based on your browser language. Hence, the first day of the week can be Monday or Sunday, depending on your browser language. Note that this is not configurable.

Only the Gregorian calendar system is supported at the moment. All dates and times are converted to the ISO 8601 format, as specified in the spec, before being submitted to the web server.

Happy Hacking

Wondering how you can help us make this feature more awesome? Download the latest Firefox Nightly and give it a try.


If you are looking for more fun, you can try some more examples on MDN.

If you encounter an issue, report it on Bugzilla, filling in the “summary” and “description” fields.

If you are an enthusiastic developer and would like to contribute, we have features in our backlog that you are welcome to pick up! The user interaction behaviors and visual styles are well defined in the specs.

Thanks,
The Date/Time Inputs Team

Categorieën: Mozilla-nl planet
