Mozilla Nederland: The Dutch Mozilla community

Mozilla Addons Blog: Add-on Policies Update: Newtab and Search

Mozilla planet - Mon, 28/10/2019 - 19:22

As part of our ongoing work to make add-ons safer for Firefox users, we are updating our Add-on Policies to add clarification and guidance for developers regarding data collection. The following is a summary of the changes, which will go into effect on December 2, 2019.

  • Search functionality provided or loaded by the add-on must not collect search terms or intercept searches that are going to a third-party search provider.
  • If the collection of visited URLs or user search terms is required for the add-on to work, the user must provide affirmative consent (i.e., explicit opt-in from the user) at first-run, since that information can contain personal information. For more information on how to create a data collection consent dialog, refer to our best practices.
  • Add-ons must not load or redirect to a remote new tab page. The new tab page must be contained within the add-on.

You can preview the policies and ensure your extensions abide by them to avoid any disruption. If you have questions about these updated policies or would like to provide feedback, please post to this forum thread.

The post Add-on Policies Update: Newtab and Search appeared first on Mozilla Add-ons Blog.

Nathan Froyd: evaluating bazel for building firefox, part 1

Mozilla planet - Mon, 28/10/2019 - 18:11

After the Whistler All-Hands this past summer, I started seriously looking at whether Firefox should switch to using Bazel for its build system.

The motivation behind switching build systems was twofold.  The first motivation was that build times are one of the most visible developer-facing aspects of the build system and everybody appreciates faster builds.  What’s less obvious, but equally important, is that making builds faster improves automation: less time waiting for try builds, more flexibility to adjust infrastructure spending, and less turnaround time with automated reviews on patches submitted for review.  The second motivation was that our build system is used by exactly one project (ok, two projects), so there’s a lot of onboarding cost both in terms of developers who use the build system and in terms of developers who need to develop the build system.  If we could switch to something more off-the-shelf, we could improve the onboarding experience and benefit from work that other parties do with our chosen build system.

You may have several candidates in mind that you think we should have evaluated instead.  We did look at other candidates (although perhaps none so deeply as Bazel), and all of them have various issues that make them unsuitable for a switch.  The reasons for rejecting other possibilities fall into two broad categories: not enough platform support (read: Windows support) and being unlikely to deliver on making builds faster and/or improving the onboarding/development experience.  I'll cover the projects we looked at in a separate post.

With that in mind, why Bazel?

Bazel advertises itself with the tagline "{Fast, Correct} – Choose two".  What's sitting behind that tagline is that when building software via, say, Make, it's very easy to write Makefiles in such a way that builds are fast, but occasionally (or not-so-occasionally) fail because somebody forgot to specify "to build thing X, you need to have built thing Y".  The build usually doesn't fail, because thing Y usually happens to get built before thing X anyway: maybe the scheduling algorithm for parallel execution in make chooses to build Y first 99.9% of the time, and 99% of those times, building Y finishes prior to even starting to build X.

The typical solution is to become more conservative in how you build things such that you can be sure that Y is always built before X…but typically by making the dependency implicit by, say, ordering the build commands Just So, and not by actually making the dependency explicit to make itself.  Maybe specifying the explicit dependency is rather difficult, or maybe somebody just wants to make things work.  After several rounds of these kinds of fixes, you wind up with Makefiles that are (probably) correct, but probably not as fast as they could be, because you've likely serialized build steps that could have been executed in parallel.  And untangling such systems to the point that you can properly parallelize things and that you don't regress correctness can be…challenging.

(I’ve used make in the above example because it’s a lowest-common denominator piece of software and because having a concrete example makes differentiating between “the software that runs the build” and “the specification of the build” easier.  Saying “the build system” can refer to either one and sometimes it’s not clear from context which is in view.  But you should not assume that the problems described above are necessarily specific to make; the problems can happen no matter what software you rely on.)

Bazel advertises a way out of the quagmire of probably correct specifications for building your software.  It does this—at least so far as I understand things, and I’m sure the Internet will come to correct me if I’m wrong—by asking you to explicitly specify dependencies up front.  Build commands can then be checked for correctness by executing the commands in a “sandbox” containing only those files specified as dependencies: if you forgot to specify something that was actually needed, the build will fail because the file(s) in question aren’t present.
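
As a concrete sketch, explicit dependencies in a Bazel BUILD file look roughly like this (the targets and file names are made up for illustration, not taken from any real project):

cc_library(
    name = "y",
    srcs = ["y.cpp"],
    hdrs = ["y.h"],
)

cc_library(
    name = "x",
    srcs = ["x.cpp"],
    # If this deps entry were missing, a sandboxed build of :x would fail,
    # because y.h would never appear inside the sandbox.
    deps = [":y"],
)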

Having a complete picture of the dependency graph enables faster builds in three different ways.  The first is that you can maximally parallelize work across the build.  The second is that Bazel comes with built-in facilities for farming out build tasks to remote machines.  Note that all build tasks can be distributed, not just C/C++/Rust compilation as via sccache.  So even if you don’t have a particularly powerful development machine, you can still pretend that you have a large multi-core system at your disposal.  The third is that Bazel also comes with built-in facilities for aggressive caching of build artifacts.  Again, like remote execution, this caching applies across all build tasks, not just C/C++/Rust compilation.  In Firefox development terms, this is Firefox artifact builds done “correctly”: given appropriate setup, your local build would simply download whatever was appropriate for the changes in your current local tree and rebuild the rest.
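
In practice, both of these facilities are largely a matter of configuration; a hypothetical .bazelrc might contain entries along these lines (the endpoints are placeholders, not real services):

# Share and reuse build artifacts through a remote cache.
build --remote_cache=grpcs://cache.example.invalid

# Farm build actions out to a remote execution service.
build --remote_executor=grpcs://remote.example.invalid
build --jobs=200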

Having a complete picture of the dependency graph enables a number of other nifty features.  Bazel comes with a query language for the dependency graph, enabling you to ask questions like “what jobs need to run given that these files changed?”  This sort of query would be valuable for determining what jobs to run in automation; we have a half-hearted (and hand-updated) version of this in things like files-changed in Taskcluster job specifications.  But things like “run $OS tests for $OS-only changes” or “run just the mochitest chunk that contains the changed mochitest” become easy.
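
For example, such queries might look like the following (the labels here are invented for illustration and are not real Firefox targets):

# Which targets -- and therefore which automation jobs -- are affected by this file?
bazel query 'rdeps(//..., //dom/base:nsDocument.cpp)'

# Everything a given target needs in order to build.
bazel query 'deps(//browser:firefox)'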

It's worth noting here that we could indeed work towards having the entire build graph available all at once in the current Firefox build system.  And we have remote execution and caching abilities via sccache, even more so now that sccache-dist is being deployed in Mozilla offices.  We think we have a reasonable idea of what it would take to work towards Bazel-esque capabilities with our current system; the question at hand is how a switch to Bazel compares to that and whether a switch would be more worthwhile for the health of the Firefox build system over the long term.  Future posts are going to explore that question in more detail.

Firefox UX: Prototyping Firefox With CSS Grid

Mozilla planet - Mon, 28/10/2019 - 16:26

Prototyping with HTML and CSS grid is really helpful for understanding flexibility models. I was able to understand how my design works in a way that was completely different than doing it in a static design program.

Links:

Transcript:
So I’m working on a design of the new address bar in Firefox. Our code name for it is the QuantumBar. There’s a lot of pieces to this. But one of the things that I’ve been trying to figure out is how it fits into the Firefox toolbar and how it collapses and expands, the squishiness of the toolbar, and trying to kinda rethink that by just building it in code. So I have this sort of prototype here where I’ve recreated the Firefox top toolbar here in HTML, and you can see how it like collapses and things go away and expands as I grow and shrink this here.

I’ve used CSS Grid to do a lot of this layout. And here I’ve turned on the grid lines just for this section of the toolbar. It’s one of the many grids that I have here. But I wanna point out these like flexible spaces over here, and then this part here, the actual QuantumBar piece, right? So you can see I’ve played around with some different choices about how big things can be at this giant window size here. And I was inspired by my friend, Jen Simmons, who’s been talking about Grid for a long time, and she was explaining how figuring out whether to use minmax or auto or fr units. It’s something that is, you can only really figure out by coding it up and a little trial and error here. And it allowed me to understand better how the toolbar works as you squish it and maybe come up with some better ways of making it work.

Yeah, as we squish it down, maybe here we wanna prioritize the width of this because this is where the results are gonna show up in here, and we let these flexible spaces squish a little sooner and a little faster. And that’s something that you can do with Grid and some media queries like at this point let’s have it squished this way, at this point let’s have it squished another way. Yeah, and I also wanted to see how it would work then if your toolbar was full of lots of icons and you have the other search bar and no spacers, how does that work? And can we prioritize maybe the size of the address bar a little more so that because you’ll notice on the top is the real Firefox, we can get in weird situations where this, it’s just not even usable anymore. And maybe we should stop it from getting that small and still have it usable.
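
A stripped-down sketch of the idea in CSS might look like this (this is not the actual prototype code; the class name, track sizes and breakpoint are invented):

.toolbar {
  display: grid;
  /* nav buttons | flexible space | address bar | flexible space | icons */
  grid-template-columns: auto 1fr minmax(20ch, 60ch) 1fr auto;
}

@media (max-width: 800px) {
  .toolbar {
    /* Let the spacers give way sooner so the address bar keeps its width. */
    grid-template-columns: auto 0.5fr minmax(20ch, 1fr) 0.5fr auto;
  }
}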

Anyway it’s a thing I’ve been playing with and what I’ve found was that using HTML and CSS to mock this up had me understand it in a way that was way better than doing it in some sort of static design program.

Wladimir Palant: Avast Online Security and Avast Secure Browser are spying on you

Mozilla planet - Mon, 28/10/2019 - 08:47

Are you one of the allegedly 400 million users of Avast antivirus products? Then I have bad news for you: you are likely being spied upon. The culprit is the Avast Online Security extension that these products urge you to install in your browser for maximum protection.

But even if you didn’t install Avast Online Security yourself, it doesn’t mean that you aren’t affected. This isn’t obvious but Avast Secure Browser has Avast Online Security installed by default. It is hidden from the extension listing and cannot be uninstalled by regular means, its functionality apparently considered an integral part of the browser. Avast products promote this browser heavily, and it will also be used automatically in “Banking Mode.” Given that Avast bought AVG a few years ago, there is also a mostly identical AVG Secure Browser with the built-in AVG Online Security extension.

Avast watching you while browsing the web

Summary of the findings

When Avast Online Security extension is active, it will request information about your visited websites from an Avast server. In the process, it will transmit data that allows reconstructing your entire web browsing history and much of your browsing behavior. The amount of data being sent goes far beyond what’s necessary for the extension to function, especially if you compare to competing solutions such as Google Safe Browsing.

Avast Privacy Policy covers this functionality and claims that it is necessary to provide the service. Storing the data is considered unproblematic due to anonymization (I disagree), and Avast doesn’t make any statements explaining just how long it holds on to it.

What is happening exactly?

Using the browser's developer tools, you can look at an extension's network traffic. If you do it with Avast Online Security, you will see a request to https://uib.ff.avast.com/v5/urlinfo whenever a new page loads in a tab:

Request performed by Avast Online Security in Chrome's developer tools

So the extension sends some binary data and in return gets information on whether the page is malicious or not. The response is then translated into the extension icon to be displayed for the page. You can clearly see the full address of the page in the binary data, including the query part and anchor. The rest of the data is somewhat harder to interpret; I'll get to it soon.

This request isn’t merely sent when you navigate to a page, it also happens whenever you switch tabs. And there is an additional request if you are on a search page. This one will send every single link found on this page, be it a search result or an internal link of the search engine.

What data is being sent?

The binary UrlInfoRequest data structure used here can be seen in the extension source code. It is rather extensive however, with a number of fields being nested types. Also, some fields appear to be unused, and the purpose of others isn’t obvious. Finally, there are “custom values” there as well which are a completely arbitrary key/value collection. That’s why I decided to stop the extension in the debugger and have a look at the data before it is turned into binary. If you want to do it yourself, you need to find this.message() call in scripts/background.js and look at this.request after this method is called.

The interesting fields were:

  • uri: The full address of the page you are on.
  • title: Page title if available.
  • referer: Address of the page that you got here from, if any.
  • windowNum / tabNum: Identifiers of the window and tab that the page loaded into.
  • initiating_user_action / windowEvent: How exactly you got to the page, e.g. by entering the address directly, using a bookmark or clicking a link.
  • visited: Whether you visited this page before.
  • locale: Your country code, which seems to be guessed from the browser locale. This will be "US" for US English.
  • userid: A unique user identifier generated by the extension (the one visible twice in the screenshot above, starting with "d916"). For some reason this one wasn't set for me when Avast Antivirus was installed.
  • plugin_guid: Seems to be another unique user identifier, the one starting with "ceda" in the screenshot above. Also not set for me when Avast Antivirus was installed.
  • browserType / browserVersion: Type (e.g. Chrome or Firefox) and version number of your browser.
  • os / osVersion: Your operating system and exact version number (the latter only known to the extension if Avast Antivirus is installed).

And that’s merely the fields which were set. The data structure also contains fields for your IP address and a hardware identifier but in my tests these stayed unused. It also seems that for paying Avast customers the identifier of the Avast account would be transmitted as well.

What does this data tell about you?

The data collected here goes far beyond merely exposing the sites that you visit and your search history. Tracking tab and window identifiers as well as your actions allows Avast to create a nearly precise reconstruction of your browsing behavior: how many tabs do you have open, what websites do you visit and when, how much time do you spend reading/watching the contents, what do you click there and when do you switch to another tab. All that is connected to a number of attributes allowing Avast to recognize you reliably, even a unique user identifier.

If you now think "but they still don't know who I am" – think again. Even assuming that none of the website addresses you visited expose your identity directly, you likely have a social media account. There have been a number of publications showing that, given a browsing history, the corresponding social media account can be identified in most cases. For example, this 2017 study concludes:

Of the 374 people who confirmed the accuracy of our de-anonymization attempt, 268 (72%) were the top candidate generated by the MLE, and 303 participants (81%) were among the top 15 candidates. Consistent with our simulation results, we were able to successfully de-anonymize a substantial proportion of users who contributed their web browsing histories.

With the Avast data being far more extensive, it should allow identifying users with an even higher precision.

Isn’t this necessary for the extension to do its job?

No, the data collection is definitely unnecessary to this extent. You can see this by looking at how Google Safe Browsing works, the current approach being largely unchanged compared to how it was integrated in Firefox 2.0 back in 2006. Rather than asking a web server for each and every website, Safe Browsing downloads lists regularly so that malicious websites can be recognized locally.

No information about you or the sites you visit is communicated during list updates. […] Before blocking the site, Firefox will request a double-check to ensure that the reported site has not been removed from the list since your last update. This request does not include the address of the visited site, it only contains partial information derived from the address.

I've seen a bunch of similar extensions by antivirus vendors, and so far all of them provided this functionality by asking the antivirus app. Presumably, the antivirus has all the required data locally and doesn't need to consult the web service every time. In fact, I could see Avast Online Security also consult the antivirus application for the websites you visit if this application is installed. It's an additional request, however; the request to the web service goes out regardless. Update (2019-10-29): I understand this logic better now, and the requests made to the antivirus application have a different purpose.

Wait, but Avast Antivirus isn’t always installed! And maybe the storage requirements for the full database exceed what browser extensions are allowed to store. In this case the browser extension has no choice but to ask the Avast web server about every website visited. But even then, this isn’t a new problem. For example, the Mozilla community had a discussion roughly a decade ago about whether security extensions really need to collect every website address. The decision here was: no, sending merely the host name (or even a hash of it) is sufficient. If higher precision is required, the extension could send the full address only if a potential match is detected.
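
A rough sketch of that more privacy-friendly approach in extension code could look like this (hypothetical code, not taken from any real product; the endpoint is a placeholder):

async function lookupHost(hostname) {
  // Hash the host name locally; only the hash leaves the browser.
  const bytes = new TextEncoder().encode(hostname);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const hash = Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
  // The full address would only be sent in a follow-up request if the server
  // reports a potential match and higher precision is actually needed.
  return fetch("https://lookup.example.invalid/check?h=" + hash.slice(0, 16));
}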

What about the privacy policy?

But Avast has a privacy policy. They surely explained there what they need this data for and how they handle it. There will most certainly be guarantees in there that they don’t keep any of this data, right?

Let’s have a look. The privacy policy is quite long and applies to all Avast products and websites. The relevant information doesn’t come until the middle of it:

We may collect information about the computer or device you are using, our products and services running on it, and, depending on the type of device it is, what operating systems you are using, device settings, application identifiers (AI), hardware identifiers or universally unique identifiers (UUID), software identifiers, IP Address, location data, cookie IDs, and crash data (through the use of either our own analytical tools or tools provided by third parties, such as Crashlytics or Firebase). Device and network data is connected to the installation GUID.

We collect device and network data from all users. We collect and retain only the data we need to provide functionality, monitor product and service performance, conduct research, diagnose and repair crashes, detect bugs, and fix vulnerabilities in security or operations (in other words, fulfil our contract with you to provision the service).

Unfortunately, after reading this passage I still don’t know whether they retain this data for me. I mean, “conduct research” for example is a very wide term and who knows what data they need to do it? Let’s look further.

Our AntiVirus and Internet security products require the collection of usage data to be fully functional. Some of the usage data we collect include:

[…]

  • information about where our products and services are used, including approximate location, zip code, area code, time zone, the URL and information related to the URL of sites you visit online

[…]

We use this Clickstream Data to provide you malware detection and protection. We also use the Clickstream Data for security research into threats. We pseudonymize and anonymize the Clickstream Data and re-use it for cross-product direct marketing, cross-product development and third party trend analytics.

And that seems to be all of it. In other words, Avast will keep your data and they don’t feel like they need your approval for that. They also reserve the right to use it in pretty much any way they like, including giving it to unnamed third parties for “trend analytics.” That is, as long as the data is considered anonymized. Which it probably is, given that technically the unique user identifier is not tied to you as a person. That your identity can still be deduced from the data – well, bad luck for you.

Edit (2019-10-29): I got a hint that Avast acquired Jumpshot a bunch of years ago. And if you take a look at the Jumpshot website, they list “clickstream data from 100 million global online shoppers and 20 million global app users” as their product. So you now have a pretty good guess as to where your data is going.

Conclusions

Avast Online Security collecting personal data of their users is not an oversight and not necessary for the extension functionality either. The extension attempts to collect as much context data as possible, and it does so on purpose. The Avast privacy policy shows that Avast is aware of the privacy implications here. However, they do not provide any clear retention policy for this data. They rather appear to hold on to the data forever, feeling that they can do anything with it as long as the data is anonymized. The fact that browsing data can usually be deanonymized doesn’t instill much confidence however.

This is rather ironic given that all modern browsers have phishing and malware protection built in that does essentially the same thing but with a much smaller privacy impact. In principle, Avast Secure Browser has this feature as well, it being Chromium-based. However, all Google services have been disabled and removed from the settings page – the browser won’t let you send any data to Google, sending way more data to Avast instead.

Update (2019-10-28): Somehow I didn’t find existing articles on the topic when I searched initially. This article mentions the same issue in passing, it was published in January 2015 already. The screenshot there shows pretty much the same request, merely with less data.

IRL (podcast): “The Weird Kids at the Big Tech Party” from ZigZag

Mozilla planet - Mon, 28/10/2019 - 08:05

Season 4 of ZigZag is about examining the current culture of business and work, figuring out what needs to change, and experimenting with new ways to do it. Sign up for their newsletter and subscribe to the podcast for free wherever you get your podcasts.

Niko Matsakis: why async fn in traits are hard

Mozilla planet - Sat, 26/10/2019 - 06:00

After reading boats' excellent post on asynchronous destructors, I thought it might be a good idea to write some about async fn in traits. Support for async fn in traits is probably the single most common feature request that I hear about. It's also one of the more complex topics. So I thought it'd be nice to do a blog post kind of giving the "lay of the land" on that feature – what makes it complicated? What questions remain open?

I’m not making any concrete proposals in this post, just laying out the problems. But do not lose hope! In a future post, I’ll lay out a specific roadmap for how I think we can make incremental progress towards supporting async fn in traits in a useful way. And, in the meantime, you can use the async-trait crate (but I get ahead of myself…).

The goal

In some sense, the goal is simple. We would like to enable you to write traits that include async fn. For example, imagine we have some Database trait that lets you do various operations against a database, asynchronously:

trait Database {
    async fn get_user(
        &self,
    ) -> User;
}

Today, you should use async-trait

Today, of course, the answer is that you should use dtolnay's excellent async-trait crate. This allows you to write almost what we wanted:

#[async_trait]
trait Database {
    async fn get_user(&self) -> User;
}

But what is really happening under the hood? As the crate’s documentation explains, this declaration is getting transformed to the following. Notice the return type.

trait Database {
    fn get_user(&self) -> Pin<Box<dyn Future<Output = User> + Send + '_>>;
}

So basically you are returning a boxed dyn Future – a future object, in other words. This desugaring is rather different from what happens with async fn in other contexts – but why is that? The rest of this post is going to explain some of the problems that async fn in traits is trying to solve, which may help explain why we have a need for the async-trait crate to begin with!

Async fn normally returns an impl Future

We saw that the async-trait crate converts an async fn to something that returns a dyn Future. This is in contrast to the async fn desugaring that the Rust compiler uses, which produces an impl Future. For example, imagine that we have an inherent method async fn get_user() defined on some particular service type:

impl MyDatabase {
    async fn get_user(&self) -> User { ... }
}

This would get desugared to something similar to:

impl MyDatabase {
    fn get_user(&self) -> impl Future<Output = User> + '_ { ... }
}

So why does async-trait do something different? Well, it’s because of “Complication #1”…

Complication #1: returning impl Trait in traits is not supported

Currently, we don't support -> impl Trait return types in traits. Logically, though, we basically know what the semantics of such a construct should be: it is equivalent to a kind of associated type. That is, the trait is promising that invoking get_user will return some kind of future, but the precise type will be determined by the details of the impl (and perhaps inferred by the compiler). So, if we know logically how impl Trait in traits should behave, what stops us from implementing it? Well, let's see…

Complication #1a. impl Trait in traits requires GATs

Let’s return to our Database example. Imagine that we permitted async fn in traits. We would therefore desugar

trait Database {
    async fn get_user(&self) -> User;
}

into something that returns an impl Future:

trait Database {
    fn get_user(&self) -> impl Future<Output = User> + '_;
}

and then we would in turn desugar that into something that uses an associated type:

trait Database {
    type GetUser<'s>: Future<Output = User> + 's;
    fn get_user(&self) -> Self::GetUser<'_>;
}

Hmm, did you notice that I wrote type GetUser<'s>, and not type GetUser? Yes, that's right, this is not just an associated type, it's actually a generic associated type. The reason for this is that async fns always capture all of their arguments – so whatever type we return will include the &self as part of it, and therefore it has to include the lifetime 's. So, that's one complication: we have to figure out generic associated types.

Now, in some sense that’s not so bad. Conceptually, GATs are fairly simple. Implementation wise, though, we’re still working on how to support them in rustc – this may require porting rustc to use chalk, though that’s not entirely clear. In any case, this work is definitely underway, but it’s going to take more time.

Unfortunately for us, GATs are only the beginning of the complications around async fn (and impl Trait) in traits!

Complication #2: send bounds (and other bounds)

Right now, when you write an async fn, the resulting future may or may not implement Send – the result depends on what state it captures. The compiler infers this automatically, basically, in typical auto trait fashion.

But if you are writing generic code, you may well need to require that the resulting future is Send. For example, imagine we are writing a finagle_database thing that, as part of its inner working, happens to spawn off a parallel thread to get the current user. Since we're going to be spawning a thread with the result from d.get_user(), that result is going to have to be Send, which means we're going to want to write a function that looks something like this[1]:

fn finagle_database<D: Database>(d: &D)
where
    for<'s> D::GetUser<'s>: Send,
{
    ... spawn(d.get_user()); ...
}

This example seems "ok", but there are four complications:

  • First, we wrote the name GetUser, but that is something we introduced as part of “manually” desugaring async fn get_user. What name would the user actually use?
  • Second, writing for<'s> D::GetUser<'s> is kind of grody, we’re obviously going to want more compact syntax (this is really an issue around generic associated types in general).
  • Third, our example Database trait has only one async fn, but obviously there might be many more. Probably we will want to make all of them Send or none – so you can expect a lot more grody bounds in a real function!
  • Finally, forcing the user to specify which exact async fns have to return Send futures is a semver hazard.

Let me dig into those a bit.

Complication #2a. How to name the associated type?

So we saw that, in a trait, returning an impl Trait value is equivalent to introducing a (possibly generic) associated type. But how should we name this associated type? In my example, I introduced a GetUser associated type as the result of the get_user function. Certainly, you could imagine a rule like “take the name of the function and convert it to camel case”, but it feels a bit hokey (although I suspect that, in practice, it would work out just fine). There have been other proposals too, such as typeof expressions and the like.

Complication #2b. Grody, complex bounds, especially around GATs.

In my example, I used the strawman syntax for<'s> D::GetUser<'s>: Send. In real life, unfortunately, the bounds you need may well get more complex still. Consider the case where an async fn has generic parameters itself:

trait Foo {
    async fn bar<A, B>(a: A, b: B);
}

Here, the future that results from bar is only going to be Send if A: Send and B: Send. This suggests a bound like

where for<A: Send, B: Send> { S::bar<A, B>: Send }

From a conceptual point-of-view, bounds like these are no problem. Chalk can handle them just fine, for example. But I think this is pretty clearly a problem and not something that ordinary users are going to want to write on a regular basis.

Complication #2c. Listing specific associated types reveals implementation details

If we require functions to specify the exact futures that are Send, that is not only tedious, it could be a semver hazard. Consider our finagle_database function – from its where clause, we can see that it spawns off get_user into a scoped thread. But what if we wanted to modify it in the future to spawn off more database operations? That would require us to modify the where-clauses, which might in turn break our callers. Seems like a problem, and it suggests that we might want some way to say "all possible futures are Send".

Conclusion: We might want a new syntax for propagating auto traits to async fns

All of this suggests that we might want some way to propagate auto traits through to the results of async fns explicitly. For example, you could imagine supporting async bounds, so that we might write async Send instead of just Send:

pub fn finagle_database<DB>(t: DB)
where
    DB: Database + async Send,
{
}

This syntax would be some kind of "default" that expands to explicit Send bounds on both DB and all the futures potentially returned by DB.

Or perhaps we’d even want to avoid any syntax, and somehow “rejigger” how Send works when applied to traits that contain async fns? I’m not sure about how that would work.

It's worth pointing out that this same problem can occur with impl Trait in return position[2], or indeed with any associated types. Therefore, we might prefer a syntax that is more general and not tied to async.

Complication #3: supporting dyn traits that have async fns

Now imagine that we had our trait Database, containing an async fn get_user. We might like to write functions that operate over dyn Database values. There are many reasons to prefer dyn Database values:

  • We don’t want to generate many copies of the same function, one per database type;
  • We want to have collections of different sorts of databases, such as a Vec<Box<dyn Database>> or something like that.

In practice, a desire to support dyn Trait comes up in a lot of examples where you would want to use async fn in traits.

Complication #3a: dyn Trait have to specify their associated type values

We’ve seen that async fn in traits effectively desugars to a (generic) associated type. And, under the current Rust rules, when you have a dyn Trait value, the type must specify the values for all associated types. If we consider our desugared Database trait, then, it would have to be written dyn Database<GetUser<'s> = XXX>. This is obviously no good, for two reasons:

  1. It would require us to write out the full type for the GetUser, which might be super complicated.
  2. And anyway, each dyn Database is going to have a distinct GetUser type. If we have to specify GetUser, then, that kind of defeats the point of using dyn Database in the first place, as the type is going to be specific to some particular service, rather than being a single type that applies to all services.

Complication #3b: no "right choice" for X in dyn Database<GetUser<'s> = X>

When we’re using dyn Database, what we actually want is a type where GetUser is not specified. In other words, we just want to write dyn Database, full stop, and we want that to be expanded to something that is perhaps “morally equivalent” to this:

dyn Database<GetUser<'s> = dyn Future<..> + 's>

In other words, all the caller really wants to know when it calls get_user is that it gets back some future which it can poll. It doesn’t want to know exactly which one.

Unfortunately, actually using dyn Future<..> as the type there is not a viable choice. We probably want a Sized type, so that the future can be stored, moved into a box, etc. We could imagine then that dyn Database defaults its “futures” to Box<dyn Future<..>> instead – well, actually, Pin<Box<dyn Future>> would be a more ergonomic choice – but there are a few concerns with that.

First, using Box seems rather arbitrary. We don’t usually make Box this “special” in other parts of the language.

Second, where would this box get allocated? The actual trait impl for our service isn’t using a box, it’s creating a future type and returning it inline. So we’d need to generate some kind of “shim impl” that applies whenever something is used as a dyn Database – this shim impl would invoke the main function, box the result, and return that.

Third, because a dyn Future type hides the underlying future (that is, indeed, its entire purpose), it also blocks the auto trait mechanism from figuring out if the result is Send. Therefore, when we make e.g. a dyn Database type, we need to specify not only the allocation mechanism we’ll use to manipulate the future (i.e., do we use Box?) but also whether the future is Send or not.
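
To make the second point above a bit more concrete, here is a hand-written sketch of the kind of "shim" that would have to be generated (hypothetical code, nothing that exists today; Database here is the desugared trait whose get_user returns an impl Future):

use std::future::Future;
use std::pin::Pin;

// Wrap a concrete impl and box the future it returns, so callers only ever
// see a Pin<Box<dyn Future>> and never the concrete future type.
struct BoxingShim<D>(D);

impl<D: Database> BoxingShim<D> {
    fn get_user(&self) -> Pin<Box<dyn Future<Output = User> + '_>> {
        Box::pin(self.0.get_user())
    }
}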

Now you see why async-trait desugars the way it does

After reviewing all these problems, we now start to see where the design of the async-trait crate comes from:

  • To avoid Complications #1 and #2, async-trait desugars async fn to return a dyn Future instead of an impl Future.
  • To avoid Complication #3, async-trait chooses for you to use a Pin<Box<dyn Future + Send>> (you can opt-out from the Send part). This is almost always the correct default.

All in all, it’s a very nice solution.

The only real drawback here is that there is some performance hit from boxing the futures – but I suspect it is negligible in almost all applications. I don’t think this would be true if we boxed the results of all async fns; there are many cases where async fns are used to create small combinators, and there the boxing costs might start to add up. But only boxing async fns that go through trait boundaries is very different. And of course it’s worth highlighting that most languages box all their futures, all of the time. =)

Summary

So to sum it all up, here are some of the observations from this article:

  • async fn desugars to a fn returning impl Trait, so if we want to support async fn in traits, we should also support fns that return impl Trait in traits.
    • It’s worth pointing out also that sometimes you have to manually desugar an async fn to a fn that returns impl Future to avoid capturing all your arguments, so the two go hand in hand.
  • Returning impl Trait in a trait is equivalent to an associated type in the trait.
    • This associated type does need to be nameable, but what name should we give this associated type?
    • Also, this associated type often has to be generic, especially for async fn.
  • Applying Send bounds to the futures that can be generated is tedious, grody, and reveals semver details. We probably need some way to make that more ergonomic.
    • This quite likely applies to the general impl Trait case too, but it may come up somewhat less frequently.
  • We do want the ability to have dyn Trait versions of traits that contain associated functions and/or impl Trait return types.
    • But currently we have no way to have a dyn Trait without fully specifying all of its associated types; in our case, those associated types have a 1-to-1 relationship with the Self type, so that defeats the whole point of dyn Trait.
    • Therefore, in the case of dyn Trait, we would want to have the async fn within returning some form of dyn Future. But we would have to effectively “hardcode” two choices:
      • What form of pointer to use (e.g., Box)
      • Is the resulting future Send, Sync, etc
    • This applies to the general impl Trait case too.

The goal of this post was just to lay out the problems. I hope to write some follow-up posts digging a bit into the solutions – though for the time being, the solution is clear: use the async-trait crate.

Footnotes
  1. Astute readers might note that I’m eliding a further challenge, which is that you need a scoping mechanism here to handle the lifetimes. Let’s assume we have something like Rayon’s scope or crossbeam’s scope available. 

  2. Still, consider a trait IteratorX that is like Iterator, where the adapters return impl Trait. In such a case, you probably want a way to say not only “I take a T: IteratorX + Send” but also that the IteratorX values returned by calls to map and the like are Send. Presently you would have to list out the specific associated types you want, which also winds up revealing implementation details. 

Robert O'Callahan: Pernosco Demo Video

Mozilla planet - Fri, 25/10/2019 - 00:34

Over the last few years we have kept our work on the Pernosco debugger mostly under wraps, but finally it's time to show the world what we've been working on! So, without further ado, here's an introductory demo video showing Pernosco debugging a real-life bug:

This demo is based on a great gdb tutorial created by Brendan Gregg. If you read his blog post, you'll get more background and be able to compare Pernosco to the gdb experience.

Pernosco makes developers more productive by providing scalable omniscient debugging — accelerating existing debugging strategies and enabling entirely new strategies and features — and by integrating that experience into cloud-based workflows. The latter includes capturing test failures occurring in CI so developers can jump into a debugging session with one click on a URL, separating failure reproduction from debugging so QA staff can record test failures and send debugger URLs to developers, and letting developers collaborate on debugging sessions.

Over the next few weeks we plan to say a lot more about Pernosco and how it benefits software developers, including a detailed breakdown of its approach and features. To see those updates, follow @_pernosco_ or me on Twitter. We're opening up now because we feel ready to serve more customers and we're keen to talk to people who think they might benefit from Pernosco; if that's you, get in touch. (Full disclosure: Pernosco uses rr, so for now we're limited to x86-64 Linux and statically compiled languages like C/C++/Rust.)

The Mozilla Blog: Longtime Mozilla board member Bob Lisbonne moves from Foundation to Corporate Board; Outgoing CEO Chris Beard Corporate Board Term Ends

Mozilla planet - Thu, 24/10/2019 - 20:15

Today, Mozilla Co-Founder and Chairwoman Mitchell Baker announced that Mozilla Foundation Board member Bob Lisbonne has moved to the Mozilla Corporation Board; and as part of a planned, phased transition, Mozilla Corporation’s departing CEO Chris Beard has stepped down from his role as a Mozilla Corporation board member.

“We are in debt to Chris for his myriad contributions to Mozilla,” said Mozilla Chairwoman and Co-Founder Mitchell Baker. “We’re fortunate to have Bob make this shift at a time when his expertise is so well matched for Mozilla Corporation’s current needs.”

Bob has been a member of the Mozilla Foundation Board since 2006, but his contributions to the organization began with Mozilla’s founding. Bob played an important role in converting the earlier Netscape code into open source code and was part of the team that launched the Mozilla project in 1998.

“I’m incredibly fortunate to have been involved with Mozilla for over two decades,” said Bob Lisbonne. “Creating awesome products and services that advance the Mozilla mission remains as important as ever. In this new role, I’m eager to contribute my expertise and help advance the Internet as a global public resource, open and accessible to all.”

During his tenure on the Mozilla Foundation board, Bob has been a significant creative force in building both the Foundation's programs — in particular the programs that led to MozFest — and the strength of the board. As he moves to the Mozilla Corporation Board, Bob will join the other Mozilla Corporation Board members in selecting, onboarding, and supporting a new CEO for Mozilla Corporation. Bob's experience across innovation, investment, strategy and execution in the startup and technology arenas is particularly well suited to Mozilla Corporation's setting.

Bob’s technology career spans 25 years, during which he played key roles as entrepreneur, venture capitalist, and executive. He was CEO of internet startup Luminate, and a General Partner with Matrix Partners. He has served on the Boards of companies which IPO’ed and were acquired by Cisco, HP, IBM, and Yahoo, among others. For the last five years, Bob has been teaching at Stanford University’s Graduate School of Business.

With Bob’s move and Chris’ departure, the Mozilla Corporation board will include: Mitchell Baker, Karim Lakhani, Julie Hanna, and Bob Lisbonne. The remaining Mozilla Foundation board members are: Mitchell Baker, Brian Behlendorf, Ronaldo Lemos, Helen Turvey, Nicole Wong and Mohamed Nanabhay.

The Mozilla Foundation board will begin taking steps to fill the vacancy created by Bob’s move. At the same time, the Mozilla Corporation board’s efforts to expand its make-up will continue.

Founded as a community open source project in 1998, Mozilla currently consists of two organizations: the 501(c)3 Mozilla Foundation, which backs emerging leaders and mobilizes citizens to create a global movement for the health of the internet; and its wholly owned subsidiary, the Mozilla Corporation, which creates products, advances public policy and explores new technologies that give people more control over their lives online, and shapes the future of the internet platform for the public good. Each is governed by a separate board of directors. The two organizations work in concert with each other and a global community of tens of thousands of volunteers under the single banner: Mozilla.

Because of its unique structure, Mozilla stands apart from its peers in the technology and social enterprise sectors globally as one of the most impactful and successful social enterprises in the world.

The post Longtime Mozilla board member Bob Lisbonne moves from Foundation to Corporate Board; Outgoing CEO Chris Beard Corporate Board Term Ends appeared first on The Mozilla Blog.

The Firefox Frontier: Firefox Extension Spotlight: Enhancer for YouTube

Mozilla planet - Thu, 24/10/2019 - 18:00

“I wanted to offer a useful extension people can trust,” explains Maxime RF, creator of Enhancer for YouTube, a browser extension providing a broad assortment of customization options so you … Read more

The post Firefox Extension Spotlight: Enhancer for YouTube appeared first on The Firefox Frontier.

Jan-Erik Rediger: This Week in Glean: A Release

Mozilla planet - Thu, 24/10/2019 - 17:30

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

Last week's blog post: This Week in Glean: Glean on Desktop (Project FOG) by chutten.

Back in June when Firefox Preview shipped, it also shipped with Glean, our new Telemetry library, initially targeting mobile platforms. Georg recently blogged about the design principles of Glean in Introducing Glean — Telemetry for humans.

Plans for improving mobile telemetry for Mozilla go back as far as December 2017. The first implementation of the Glean SDK was started around August 2018, all written in Kotlin (though back then it was mostly ideas in a bunch of text documents). This implementation shipped in Firefox Preview and was used up until now.

On March 18th I created an initial Rust workspace. This kicked off a rewrite of Glean using Rust to become a cross-platform telemetry SDK to be used on Android, iOS and, eventually, desktop platforms again.

1382 commits later[1] I tagged v19.0.0[2].

Obviously that doesn't make people use it right away, but given that all consumers of Glean right now are Mozilla products, it's up to us to get them to use it. So Alessio did just that by upgrading Android Components, a collection of Android libraries to build browsers or browser-like applications, to this new version.

This will soon roll out to nightly releases of Firefox Preview and, assuming we don't hit any larger bugs, hit the release channel in about 2 weeks. Additionally, that finally unblocks the Glean team to work on new features, ironing out some sharp edges and bringing Glean to Firefox on Desktop. Oh, and of course we still need to actually release it for iOS.

Thanks

Glean in Rust is the project I've been constantly working on since March. But getting it to a release was a team effort with help from a multitude of people and teams.

Thanks to everyone on the Glean SDK team:

Thanks to Frank Bertsch for a ton of backend work as well as the larger Glean pipeline team led by Mark Reid, to ensure we can handle the incoming telemetry data and also reliably analyse it. Thanks to the Data Engineering team led by Katie Parlante. Thanks to Mihai Tabara, the Release Engineering team and the Cloud Operations team, to help us with the release on short notice. Thanks to the Application Services team for paving the way of developing mobile libraries with Rust and to the Android Components team for constant help with Android development.

[1] Not all of which are just code for the Android version. There's a lot of documentation too.

[2] This is the first released version. This is just the version number that follows after the Kotlin implementation. Version numbers are cheap.

Hacks.Mozilla.Org: From js13kGames to MozFest Arcade: A game dev Web Monetization story

Mozilla planet - Thu, 24/10/2019 - 16:32

This is a short story of how js13kGames, an online “code golf” competition for web game developers, tried out Web Monetization this year. And ended up at the Mozilla Festival, happening this week in London, where we’re showcasing some of our winning entries.

Decorative banner for the js13K Games MozFest Arcade

A brief history of js13kGames

The js13kGames online competition for HTML5 game developers is constantly evolving. We started in 2012, and we run every year from August 13th to September 13th. In 2017, we added a new A-Frame category.

You still had to build web games that would fit within the 13 kilobyte zipped package as before, but the new category added the A-Frame framework "for free", so it wasn't counted towards the size limit. The new category resulted in some really cool entries.

Fast forward twelve months to 2018 – the category changed its name to WebXR. We added Babylon.js as a second option. In 2019, the VR category was extended again, with Three.js as the third library of choice. Thanks to the Mozilla Mixed Reality team we were able to give away three Oculus Quest devices to the winning entries.

The evolution of judging mechanics

The process for judging js13kGames entries has also evolved. At the beginning, about 60 games were submitted each year. Judges could play all the games to judge them fairly. In recent years, we’ve received nearly 250 entries. It’s really hard to play all of them, especially since judges tend to be busy people. And then, how can you be sure you scored fairly?

That’s why we introduced a new voting system. The role of judges changed: they became experts focused on giving constructive feedback, rather than scoring. Expert feedback is valued highly by participants, as one of the most important benefits in the competition.

At the same time, Community Awards became the official results. We upgraded the voting system with the new mechanism of “1 on 1 battles.” By comparing two games at once, you can focus and judge them fairly, and then move on to vote on another pair.

Voters compared the games based on consistent criteria: gameplay, graphics, theme, etc. This made “Community” votes valuable to developers as a feedback mechanism also. Developers could learn what their game was good at, and where they could improve. Many voting participants also wrote in constructive feedback, similar to what the experts provided. This feedback was accurate and eventually valuable for future improvements.

Web Monetization in the world of indie games

js13kGames Web Monetization with Coil

This year we introduced the Web Monetization category in partnership with Coil. The challenge to developers was to integrate Web Monetization API concepts within their js13kGames entries. Out of 245 games submitted overall, 48 entries (including WebXR ones) had implemented the Web Monetization API. It wasn’t that difficult.

Basically, you add a special monetization meta tag to index.html:

<!DOCTYPE HTML>
<html>
<head>
  <meta charset="utf-8">
  <title>Flood Escape</title>
  <meta name="monetization" content="your_payment_pointer">
  // ...
</head>

And then you need to add code to detect if a visitor is a paid subscriber (to Coil or any other similar service available in the future):

if (document.monetization && document.monetization.state === 'started') {
  // do something
}

You can do this detection via an event too:

function startEventHandler(event) {
  // do something
}

document.monetization.addEventListener('monetizationstart', startEventHandler);

If the monetization event starts, that means the visitor has been identified as a paying subscriber. Then they can receive extra or special content: be it more coins, better weapons, shorter cooldown, extra level, or any other perk for the player.
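
Putting the two snippets together, one possible pattern looks like this (a sketch; the perk function and what it unlocks are up to your game):

function enableSubscriberPerks() {
  // e.g. unlock an extra level, better weapons, or shorter cooldowns
}

if (document.monetization) {
  if (document.monetization.state === 'started') {
    // Payment was already streaming when the game booted.
    enableSubscriberPerks();
  } else {
    // Otherwise wait for the stream to start.
    document.monetization.addEventListener('monetizationstart', enableSubscriberPerks);
  }
}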

It’s that simple to implement web monetization! No more bloated, ever changing SDKs to place advertisements into the games. No more waiting months for reports to see if spending time on this was even worth it.

The Web Monetization API gives game developers and content creators a way to monetize their creative work, without compromising their values or the user experience. As developers, we don’t have to depend on annoying in-game ads that interrupt the player. We can get rid of tracking scripts invading player privacy. That’s why Enclave Games creations never have any ads. Instead, we’ve implemented the Web Monetization API. We now offer extra content and bonuses to subscribers.

See you at MozFest

This all leads to London for the 2019 Mozilla Festival. Working with Grant for the Web, we’ve prepared something special: MozFest Arcade.

If you’re attending Mozfest, check out our special booth with game stations, gamepads, virtual reality headsets, and more. You will be able to play Enclave Games creations and js13kGames entries that are web-monetized! You can see for yourself how it all works under the hood.

Grant for the Web is a $100M fund to boost open, fair, and inclusive standards and innovation in web monetization. It is funded and led by Coil, working in collaboration with founding collaborators Mozilla and Creative Commons. (Additional collaborators may be added in the future.) A program team, led by Loup Design & Innovation, manages the day-to-day operations of the program.

It aims to distribute grants to web creators who would like to try web monetization as their business model, to earn revenue, and offer real competition to intrusive advertising, paywalls, and closed marketplaces.

If you’re in London, please join us at the Friday’s Science Fair at MozFest House. You can learn more about Web Monetization, Grant for the Web, while playing cool games. Also, you can get a free Coil subscription in the process. Join us through the weekend at the Indie Games Arcade at Ravensbourne University!

The post From js13kGames to MozFest Arcade: A game dev Web Monetization story appeared first on Mozilla Hacks - the Web developer blog.

Mike Hoye: The State Of Mozilla, 2019

Mozilla planet - Wed, 23/10/2019 - 18:52

As I’ve done in previous years, here’s The State Of Mozilla, as observed by me and presented by me to our local Linux user group.

Presentation: https://www.youtube.com/embed/RkvDnIGbv4w

And Q&A: https://www.youtube.com/embed/jHeNnSX6GcQ

Nothing tectonic in there – I dodged a few questions, because I didn’t want to undercut the work that was leading up to the release of Firefox 70, but mostly harmless stuff.

Can’t be that I’m getting stockier, though. Must be the shirt that’s unflattering. That’s it.

Mozilla Addons Blog: Firefox Preview/GeckoView Add-ons Support

Mozilla planet - Wed, 23/10/2019 - 17:00

Back in June, Mozilla announced Firefox Preview, an early version of the new browser for Android that is built on top of Firefox’s own mobile browser engine, GeckoView. We’ve gotten great feedback about the superior performance of GeckoView so far. Not only is it faster than ever, it also opens up many opportunities for building deeper privacy features that we have already started exploring, and a lot of users were wondering what this step meant for add-ons.

We’re happy to confirm that GeckoView is currently building support for extensions through the WebExtensions API. This feature will be available in Firefox Preview, and we are looking forward to offering a great experience for both mobile users and developers.

Bringing GeckoView and Firefox Preview up to par with the APIs that were supported previously in Firefox for Android won’t happen overnight. For the remainder of 2019 and leading into 2020, we are focusing on building support for a selection of content from our Recommended Extensions program that work well on mobile and cover a variety of utilities and features.

At the moment, Firefox Preview does not yet officially support extensions. While some members of the community have discovered that some extensions inadvertently work in Firefox Preview, we do not recommend attempting to install them until they are officially supported as other issues may arise. We expect to implement support for the initial selection of extensions in the first half of 2020, and will post updates here as we make progress.

If you haven’t yet had a chance, why don’t you give Firefox Preview a try and let us know what you think?

The post Firefox Preview/GeckoView Add-ons Support appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: The two-value syntax of the CSS Display property

Mozilla planet - wo, 23/10/2019 - 16:54

If you like to read release notes, then you may have spotted in the Firefox 70 notes a line about the implementation of the two-value syntax of the display CSS property. Or maybe you saw a mention in yesterday’s Firefox 70 roundup post. Today I’ll explain what this means, and why understanding this two-value syntax is important despite only having an implementation in Firefox right now.

The display property

The display property is how we change the formatting context of an element and its children. In CSS some elements are block by default, and others are inline. This is one of the first things you learn about CSS.

The display property enables switching between these states. For example, this means that an h1, usually a block element, can be displayed inline. Or a span, initially an inline element, can be displayed as a block.

More recently we have gained CSS Grid Layout and Flexbox. To access these we also use values of the display property — display: grid and display: flex. Only when the value of display is changed do the children become flex or grid items and begin to respond to the other properties in the grid or flexbox specifications.
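As a minimal sketch of that idea (the class name and grid values are illustrative, not from the article), changing the container’s display value is all it takes for the children to start behaving as grid items:

/* Until display is changed, the children are laid out in normal flow */
.cards {
  display: grid;                  /* the children now become grid items */
  grid-template-columns: 1fr 1fr; /* and begin to respond to grid properties */
  gap: 1rem;
}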

Two-value display – span with display: flex

What grid and flexbox demonstrate, however, is that an element has both an outer and an inner display type. When we use display: flex we create a block-level element, with flex children. The children are described as participating in a flex formatting context. You can see this if you take a span and apply display: flex to it — the span is now block-level. It behaves as block-level things do in relationship to other boxes in the layout. It’s as if you had applied display: block to the span; however, we also get the changed behavior of the children. In the CodePen below you can see that the string of text and the em have become two flex items.

See the Pen “Mozilla Hacks two-value Display: span with display: flex” by rachelandrew (@rachelandrew) on CodePen.

Two-value display – span with display: grid

Grid layout behaves in the same way. If we use display: grid we create a block-level element and a grid formatting context for the children. We also have methods to create an inline-level box with flex or grid children with display: inline-flex and display: inline-grid. The next example shows a div, normally a block-level element, behaving as an inline element with grid item children.

As an inline element, the box does not take up all the space in the inline dimension, and the following string of text displays next to it. The children however are still grid items.

See the Pen “Mozilla Hacks two-value display: inline-grid” by rachelandrew (@rachelandrew) on CodePen.
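In case the embedded demos above don’t render in this format, the core of each one is a single declaration on the container (the selectors and track sizes are illustrative):

/* A span is normally inline; display: flex makes it block-level with flex children */
span.flex-demo {
  display: flex;
}

/* A div is normally block; display: inline-grid makes it inline-level with grid children */
div.inline-grid-demo {
  display: inline-grid;
  grid-template-columns: repeat(2, 100px);
}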

Refactoring display

As the above examples show, the outer display type of an element is always block or inline, and dictates how the box behaves in the normal flow of the document. The inner display type then changes the formatting context of the children.

To better describe this behavior, the CSS Display specification has been refactored to allow for display to accept two values. The first describes whether the outer display type is block or inline, whereas the second value describes the formatting of the children. This table shows how some of these new values map to the single values – now referred to as legacy values – in the spec.

Single value      New value
block             block flow
flow-root         block flow-root
inline            inline flow
inline-block      inline flow-root
flex              block flex
inline-flex       inline flex
grid              block grid
inline-grid       inline grid

There are more values of display, including lists and tables; to see the full set of values visit the CSS Display Specification.

We can see how this would work for Flexbox. If I want to have a block-level element with flex children I use display: block flex, and if I want an inline-level element with flex children I use display: inline flex. The below example will work in Firefox 70.

See the Pen “Mozilla Hacks two-value Display: two value flex values” by rachelandrew (@rachelandrew) on CodePen.
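If the embedded demo doesn’t render here, the two-value declarations from that example boil down to something like this (class names are illustrative) and will work in Firefox 70:

/* A block-level box whose children participate in a flex formatting context */
.block-flex {
  display: block flex;
}

/* An inline-level box whose children participate in a flex formatting context */
.inline-flex {
  display: inline flex;
}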

Our trusty display: block and display: inline don’t remain untouched either: display: block becomes display: block flow – that is, a block element with children participating in normal flow. A display: inline element becomes display: inline flow.

display: inline-block and display: flow-root

This all becomes more interesting if we look at a couple of values of display – one new, one which dates back to CSS2. Inline boxes in CSS are designed to sit inside a line box, the anonymous box which wraps each line of text in a sentence. This means that they behave in certain ways: if you add padding to all of the edges of an inline box, such as in the example below where I have given the inline element a background color, the padding applies. And yet, it does not push the surrounding line boxes away in the block direction. In addition, inline boxes do not respect width or height (or inline-size and block-size).

Using display: inline-block causes the inline element to contain this padding, and to accept the width and height properties. It remains an inline thing however; it continues to sit in the flow of text.

In this next CodePen I have two span elements, one regular inline and the other inline-block, so that you can see the difference in layout that this value causes.

See the Pen “Mozilla Hacks two-value display: inline-block” by rachelandrew (@rachelandrew) on CodePen.
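Reduced to its essentials, that comparison looks something like this (the class name, color and size are illustrative):

span {
  background-color: gold;
  padding: 1em;          /* applies to both spans, but only pushes line boxes away on the inline-block one */
}

.inline-block {
  display: inline-block;
  width: 10em;           /* width and height are now respected */
}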

We can then take a look at the newer value of display, flow-root. If you give an element display: flow-root it becomes a new block formatting context, becoming the root element for a new normal flow. Essentially, this causes floats to be contained. Also, margins on child elements stay inside the container rather than collapsing with the margin of the parent.

In the next CodePen, you can compare the first example without display: flow-root and the second with display: flow-root. The image in the first example pokes out of the bottom of the box, as it has been taken out of normal flow. Floated items are taken out of flow and shorten the line boxes of the content that follows. However, the actual box does not contain the element, unless that box creates a new block formatting context.

The second example does have flow-root and you can see how the box with the grey background now contains the float, leaving a gap underneath the text. If you have ever contained floats by setting overflow to auto, then you were achieving the same thing, as overflow values other than the default visible create a new block formatting context. However, there can be some additional unwanted effects such as clipping of shadows or unexpected scrollbars. Using flow-root gives you the creation of a block formatting context (BFC) without anything else happening.

See the Pen “Mozilla Hacks two-value display: flow-root” by rachelandrew (@rachelandrew) on CodePen.
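Stripped down to the relevant rules, the float-containment comparison above amounts to this (the selectors are illustrative):

.box img {
  float: left;          /* the image is taken out of normal flow */
}

.box.contained {
  display: flow-root;   /* creates a new block formatting context, so the float is contained */
}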

The reason to highlight display: inline-block and display: flow-root is that these two things are essentially the same. The well-known value of inline-block creates an inline flow-root, which is why the new two-value version of display: inline-block is display: inline flow-root. It does exactly the same job as the flow-root value which, in a two-value world, becomes display: block flow-root.

You can see both of these values used in this last example, using Firefox 70.

See the Pen “Mozilla Hacks two-value display: inline flow-root and block flow-root” by rachelandrew (@rachelandrew) on CodePen.
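Put another way, these pairs of declarations behave identically; here is a minimal sketch (the selectors are illustrative):

.a { display: inline-block; }       /* legacy value */
.b { display: inline flow-root; }   /* its two-value equivalent */

.c { display: flow-root; }          /* legacy value */
.d { display: block flow-root; }    /* its two-value equivalent */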

Can we use this two-value syntax?

With support currently available only in Firefox 70, it is too early to start using this two-value syntax in production. Currently, other browsers do not support it: asking for display: block flex will be treated as invalid except in Firefox. Since you can access all of the functionality using the one-value syntax, which will remain as aliases of the new syntax, there is no reason to suddenly jump to these.
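That said, because browsers simply drop declarations they don’t understand and fall back to the last valid one, you could experiment safely today by layering the two syntaxes, along these lines (the selector is illustrative):

.layout {
  display: flex;        /* single-value syntax, understood by all current browsers */
  display: block flex;  /* two-value syntax; ignored where unsupported, used by Firefox 70 */
}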

However, they are important to be aware of, in terms of what they mean for CSS. They properly explain the interaction of boxes with other boxes, in terms of whether they are block or inline, plus the behavior of the children. For understanding what display is and does, I think they make for a very useful clarification. As a result, I’ve started to teach display using these two values to help explain what is going on when you change formatting contexts.

It is always exciting to see new features being implemented, and I hope that other browsers will also implement these two-value versions soon. And then, in the not too distant future we’ll be able to write CSS in the same way as we now explain it, clearly demonstrating the relationship between boxes and the behavior of their children.

The post The two-value syntax of the CSS Display property appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Mike Hoye: 80×25

Mozilla planet - wo, 23/10/2019 - 14:02

Every now and then, my brain clamps on to obscure trivia like this. It takes so much time. “Because the paper beds of banknote presses in 1860 were 14.5 inches by 16.5 inches, a movie industry cartel set a standard for theater projectors based on silent film, and two kilobytes is two kilobytes” is as far back as I have been able to push this, but let’s get started.

In August of 1861, by order of the U.S. Congress and in order to fund the Union’s ongoing war efforts against the treasonous secessionists of the South, the American Banknote Company started printing what were then called “Demand Notes”, but soon widely known as “greenbacks”.

It’s difficult to research anything about the early days of American currency on Wikipedia these days; that space has been thoroughly colonized by the goldbug/sovcit cranks. You wouldn’t notice it from a casual examination, which is of course the plan; that festering rathole is tucked away down in the references, where articles will fold a seemingly innocuous line somewhere into the middle, tagged with an exceptionally dodgy reference. You’ll learn that “the shift from demand notes to treasury notes meant they could no longer be redeemed for gold coins[1]” – which is strictly true! – but if you chase down that footnote you wind up somewhere with a name like “Lincoln’s Treason – Fiat Currency, Maritime Law And The U.S. Treasury’s Conspiracy To Enslave America”, which I promise I am only barely exaggerating about.

It’s not entirely clear if this is a deliberate exercise in coordinated crank-wank or just years of accumulated flotsam from the usual debate-club dead-enders hanging off the starboard side of the Overton window. There’s plenty of idiots out there that aren’t quite useful enough to work the k-cups at the Heritage Institute, and I guess they’re doing something with their time, but the whole thing has a certain sinister elegance to it that the Randroid crowd can’t usually muster. I’ve got my doubts either way, and I honestly don’t care to dive deep enough into that sewer to settle them. Either way, it’s always good to be reminded that the goldbug/randroid/sovcit crank spectrum shares a common ideological klancestor.

Mercifully that is not what I’m here for. I am here because these first Demand Notes, and the Treasury Notes that came afterwards, were – on average, these were imprecise times – 7-3/8” wide by 3-1/4” tall.

I haven’t been able to precisely answer the “why” of that – I believe, but do not know, that this is because of the specific dimensions of the presses they were printed on. Despite my best efforts I haven’t been able to find the exact model and specifications of that device. I’ve asked the U.S. Congressional Research Service for some help with this, but between them and the Bureau of Engraving and Printing, we haven’t been able to pin it down. From my last correspondence with them:

Unfortunately, we don’t have any materials in the collection identifying the specific presses and their dimension for early currency production. The best we can say is that the presses used to print currency in the 1860s varied in size and model. These presses went by a number of names, including hand presses, flat-bed presses, and spider presses. They also were capable of printing sheets of paper in various sizes. However, the standard size for printing securities and banknotes appears to have been 14.5 inches by 16.5 inches. We hope this bit of information helps.

… which is unfortunate, but it does give us some clarity. A 16.5″ by 14.5″ printing sheet lets you print eight 7-3/8” by 3-1/4″ notes to size, with a fraction of an inch on either side for trimming.

The answer to that question starts to matter about twenty years later, on the heels of the 1880 American Census. The census is mandated to be performed once a decade, and the United States population had grown some 30% since the previous one; even with enormous effort the final tabulations weren’t finished until 1888, an unacceptable delay.

One of the 1880 Census’ early employees was a man named Herman Hollerith, a recent graduate of the Columbia School of Mines who’d been invited to join the Census efforts early on by one of his professors. The Census was one of the most important social and professional networking exercises of the day, and Hollerith correctly jumped at the opportunity:

The absence of a permanent institution meant the network of individuals with professional census expertise scattered widely after each census. The invitation offered a young graduate the possibility to get acquainted with various members of the network, which was soon to be dispersed across the country.

As an aside, that invitation letter is one of the most important early documents in the history of computing for lots of reasons, including this one:

The machine in that picture was the third generation of the “Hollerith Tabulator”, notable for the replaceable plugboard that made it reprogrammable. I need to find some time to dig further into this, but that might be the first multipurpose, if not “general purpose” as we’ve come to understand it, electronic computation device. This is another piece of formative tech that emerged from this era, one that led directly to the removable panels (and ultimately the general componentization) of later computing hardware.

Well before the model 3, though, was the original 1890 Hollerith Census Tabulator that relied on punchcards much like this one.

Hollerith took the inspiration for those punchcards from the “punch photographs” used by some railways at the time to make sure that tickets belonged to the passengers holding them. You can see a description of one patent for them here dating to 1888, but Hollerith relates the story from a few years earlier:

One thing that helped me along in this matter was that some time before I was traveling in the west and I had a ticket with what I think was called a punch photograph. When the ticket was first presented to a conductor he punched out a description of the individual, as light hair, dark eyes, large nose etc. So you see I only made a punch photograph of each person.

Tangentially: this is the birth of computational biometrics. And as you can see from this extract from The Railway News (Vol. XLVIII, No. 1234, published Aug. 27, 1887), people have been concerned about harassment because of unfair assessment by the authorities from day one:

punch-photograph

After experimenting with a variety of card sizes Hollerith decided that to save on production costs he’d use the same boxes the U.S. Treasury was using for the currency of the day: the Demand Note. Punch cards stayed about that shape, punched with devices that looked a lot like this for about 20 years until Thomas Watson Sr. (IBM’s first CEO, from whom the Watson computer gets its name) asked Clair D. Lake and J. Royden Peirce to develop a new, higher data-density card format.

Tragically, this is the part where I need to admit an unfounded assertion. I’ve got data, the pictures line up and numbers work, but I don’t have a citation. I wish I did.

Take a look at “Type Design For Typewriters: Olivetti”, written by Maria Ramos Silvia. (You can see a historical talk from her on the history of typefaces here that’s also pretty great.)

Specifically, take a look on page 46 at Mikron Piccolo, Mikron Condensed. The fonts don’t precisely line up – see the different “4”, for example, when comparing it to the typesetting of IBM’s cards – but the size and spacing do. In short: a line of 80 characters, each separated by a space, is the largest round number of digits that the tightest typesetting of the day would allow to be fit on a single 7-3/8” wide card: a 20-point condensed font.

I can’t find a direct citation for this; that’s the only disconnect here. But the spacing all fits, the numbers all work, and I’d bet real money on this: that when Watson gave Lake the task of coming up with a higher information-density punch card, Lake looked around at what they already had on the shelf – a typewriter with the highest-available character density of the day, on cards they could manage with existing and widely-available tooling – and put it all together in 1928. The fact that a square hole – a radical departure from the standard circular punch – was a patentable innovation at the time was just icing on the cake.

The result of that work is something you’ll certainly recognize, the standard IBM punchcard, though of course there’s lot more to it than that. Witness the full glory of the Card Stock Acceptance Procedure, the protocol for measuring folding endurance, air resistance, smoothness and evaluating the ash content, moisture content and pH of the paper, among many other things.

At one point sales of punchcards and related tooling constituted a completely bonkers 30% of IBM’s annual profit margin, so you can understand that IBM had a lot invested in getting that consistently, precisely correct.

At around this time John Logie Baird invented the first “mechanical television”; like punchcards, the first television cameras were hand-cranked devices that relied on something called a Nipkow disk, a mechanical tool for separating images into sequential scan lines, a technique that survives in electronic form to this day. By linearizing the image signal Baird could transmit the image’s brightness levels via a simple radio signal and in 1926 he did just that, replaying that mechanically encoded signal through a CRT and becoming the inventor of broadcast television. He would go on to pioneer colour television – originally called Telechrome, a fantastic name I’m sad we didn’t keep – but that’s a different story.

Baird’s original “Televisor” showed its images on a 7:3 aspect ratio vertically oriented cathode ray tube, intended to fit the head and shoulders of a standing person, but that wouldn’t last.

For years previously, silent films had been shot on standard 35MM stock, but the addition of a physical audio track to 35MM film stock didn’t leave enough space for the visual area. So – after years of every movie studio having its own preferred aspect ratio, which required its own cameras, projectors, film stock and tools (and and and) – in 1929 the movie industry agreed to settle on the Society of Motion Picture And Television Engineers’ proposed standard of 0.8 inches by 0.6 inches, what became known as the Academy Ratio, or as we better know it today, 4:3.

Between 1932 and 1952, when widescreen for cinemas came into vogue as a differentiator from standard television, just about all the movies made in the world were shot in that aspect ratio, and just about every cathode ray tube made came in that shape, or one that could display it reliably. In 1953 studios started switching to a wider “Cinemascope”, to aggressively differentiate themselves from television, but by then television already had a large, thoroughly entrenched install base, and 4:3 remained the standard for in-home displays – and CRT manufacturers – until widescreen digital television came to market in the 1990s.

As computers moved from teleprinters – like, physical, ink-on-paper line printers – to screens, one byproduct of that standardization was that if you wanted to build a terminal, you either used that aspect ratio or you started making your own custom CRTs, a huge barrier to market entry. You can do that if you’re IBM, and you’re deeply reluctant to if you’re anyone else. So when DEC introduced their VT50 terminal, a successor to the earlier VT05, that’s what they shipped, and with only 1KB of display RAM (one kilobyte!) it displayed only twelve rows of widely-spaced text. Math is unforgiving, and 80×12=960; even one more row breaks the bank. The VT52 and its successor the VT100, though, doubled that capacity, giving users the opulent luxury of two entire kilobytes of display memory, laid out with a font that fit nicely on that 4:3 screen. The VT100 hit the market in August of 1978, and DEC sold more than six million of them over the product’s lifespan.

You even got an extra whole line to spare! Thanks to the magic of basic arithmetic 80×25 just sneaks under that opulent 2k limit with 48 bytes to spare.

This is another point where direct connections get blurry, because 1976 to 1984 was an incredibly fertile time in the history of computing. After a brief period where competing terminal standards effectively locked software to the hardware that it shipped on, the VT100 – being the first terminal to market fully supporting the recently codified ANSI standard control and escape sequences – quickly became the de-facto standard, and soon afterwards the de-jure, codified in ANSI-X3.64/ECMA-48. CP/M, soon to be replaced with PC-DOS and then MS-DOS, came from this era, with ANSI.SYS being the way DOS programs talked to the display from DOS 2.0 through to the beginning of Windows. Then in 1983 the Apple IIe was introduced, the first Apple computer to natively support an 80×24 text display, doubling the 40×24 default of their earlier hardware. The original XTerm, first released in 1984, was also created explicitly for VT100 compatibility.

Fascinatingly, the early versions of the ECMA-48 standard specify that this standard isn’t solely meant for displays, specifying that “examples of devices conforming to this concept are: an alpha-numeric display device, a printer or a microfilm output device.”

A microfilm output device! This exercise dates to a time when microfilm output was a design constraint! I did not anticipate that cold-war spy-novel flavor while I was dredging this out, but it’s there and it’s magnificent.

It also dates to a time that the market was shifting quickly from mainframes and minicomputers to microcomputers – or, as we call them today, “computers” – as reasonably affordable desktop machines that humans might possibly afford and that companies might own a large number of, meaning this is also where the spectre of backcompat starts haunting the industry – This moment in a talk from the Microsoft developers working on the Windows Subsystem for Linux gives you a sense of the scale of that burden even today. In fact, it wasn’t until the fifth edition of ECMA-48 was published in 1991, more than a decade after the VT100 hit the market, that the formal specification for terminal behavior even admitted the possibility (Appendix F) that a terminal could be resized at all, meaning that the existing defaults were effectively graven in stone during what was otherwise one of the most fertile and formative periods in the history of computing.

As a personal aside, my two great frustrations with doing any kind of historical CS research remain the incalculable damage that academic paywalls have done to the historical record, and the relentless insistence this industry has on justifying rather than interrogating the status quo. This is how you end up on Stack Overflow spouting unresearched nonsense about how “4 pixel wide fonts are untidy-looking”. I’ve said this before, and I’ll say it again: whatever we think about ourselves as programmers and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize, and by telling and retelling these unsourced, inaccurate just-so stories without ever doing the work of finding the real truth, we’re betraying ourselves, our history and our future. But it’s pretty goddamned difficult to convince people that they should actually look things up instead of making up nonsense when actually looking things up, even for a seemingly simple question like this one, can cost somebody on the outside edge of an academic paywall hundreds or thousands of dollars.

So, as is now the usual in these things:

  • There are technical reasons,
  • There are social reasons,
  • It’s complicated, and
  • Open access publication or GTFO.

But if you ever wondered why just about every terminal in the world is eighty characters wide and twenty-five characters tall, there you go.

Categorieën: Mozilla-nl planet

Mozilla GFX: Dramatically reduced power usage in Firefox 70 on macOS with Core Animation

Mozilla planet - di, 22/10/2019 - 16:52

In Firefox 70 we changed how pixels get to the screen on macOS. This allows us to do less work per frame when only small parts of the screen change. As a result, Firefox 70 drastically reduces the power usage during browsing.

Power usage, in Watts, as displayed by Intel Power Gadget. Lower numbers are better.

In short, Firefox 70 improves power usage by 3x or more for many use cases. The larger the Firefox window and the smaller the animation, the bigger the difference. Users have reported much longer battery life, cooler machines and less fan spinning.

I’m seeing a huge improvement over here too (2015 13″ MacBook Pro with scaled resolutions on internal display as well as external 4K display). Prior to this update I literally couldn’t use Firefox because it would spin my fans way up and slow down my whole computer. Thank you, I’m very happy to finally see Core Animation being implemented.

Charlie Siegel

After so many years, I have been able to use Firefox on my Mac – I used to test every Firefox release, and nothing had worked in the past.

Vivek Kapoor

I usually try nightly builds every few weeks but end up going back to Edge Chromium or Chrome for speed and lack of heat. This makes my 2015 mbp without a dedicated dGPU become a power sipper compared to earlier builds.

atiensivu

Read on for the technical details behind these changes.

Technical Details

Let’s take a brief look at how the Firefox compositing pipeline works. There are three major steps to getting pixels on the screen:

Step 1: Firefox draws pixels into “Gecko layers”.

Step 2: The Firefox “compositor” assembles these Gecko layers to produce the rendering of the window.

Step 3: The operating system’s window manager assembles all windows on the screen to produce the screen content.

The improvements in Firefox 70 were the result of reducing the work in steps 2 and 3: In both steps, we were doing work for the entire window, even if only a small part of the window was updating.

Why was our compositor always redrawing the entire window? The main reason was the lack of convenient APIs on macOS for partial compositing.

The Firefox compositor on macOS makes use of hardware acceleration via OpenGL. Apple’s OpenGL documentation recommends the following method of getting OpenGL content to the screen: You create an NSOpenGLContext, you attach it to an NSView (using -[NSOpenGLContext setView:]), and then you render to the context’s default framebuffer, filling the entire framebuffer with fresh content. At the end of each frame, you call -[NSOpenGLContext flushBuffer]. This updates the screen with your rendered content.

The crucial limitation here is that flushBuffer gives you no way to indicate which parts of the OpenGL context have changed. This is a limitation which does not exist on Windows: On Windows, the corresponding API has full support for partial redraws.

Every Firefox window contains one OpenGL context, which covers the entire window. Firefox 69 was using the API described above. So we were always redrawing the whole window on every change, and the window manager was always copying our entire window to the screen on every change. This turned out to be a problem despite the fact that these draws were fully hardware accelerated.

Enter Core Animation

Core Animation is the name of an Apple framework which lets you create a tree of layers (CALayer). These layers usually contain textures with some pixel content. The layer tree defines the positions, sizes, and order of the layers within the window. Starting with macOS 10.14, all windows use Core Animation by default, as a way to share their rendering with the window manager.

So, does Core Animation have an API which lets us indicate which areas inside an OpenGL context have changed? No, unfortunately it does not. However, it provides a number of other useful capabilities, which are almost as good and in some cases even better.

First and foremost, Core Animation lets us share a GPU buffer with the window manager in a way that minimizes copies: We can create an IOSurface and render to it directly using OpenGL by treating it as an offscreen framebuffer, and we can assign that IOSurface to a CALayer. Then, when the window manager composites that CALayer onto the screen surface, it will read directly from our GPU buffer with no additional copies. (IOSurface is the macOS API which provides a handle to a GPU buffer that can be shared between processes. It’s worth noting that the ability to assign an IOSurface to the CALayer contents property is not properly documented. Nevertheless, all major browsers on macOS now make use of this API.)

Secondly, Core Animation lets us display OpenGL rendering in multiple places within the window at the same time and update it in a synchronized fashion. This was not possible with the old API we were using: Without Core Animation, we would have needed to create multiple NSViews, each with their own NSOpenGLContext, and then call flushBuffer on each context on every frame. There would have been no guarantee that the rendering from the different contexts would end up on the screen at the same time. But with Core Animation, we can just group updates from multiple layers into the same CATransaction, and the screen will be updated atomically.

Having multiple layers allows us to update just parts of the window: Whenever a layer is mutated in any way, the window manager will redraw an area that includes the bounds of that layer, rather than the bounds of the entire window. And we can mark individual layers as opaque or transparent. This cuts down the window manager’s work some more for areas of the window that only contain opaque layers. With the old API, if any part of our OpenGL context’s default framebuffer was transparent, we needed to make the entire OpenGL context transparent.

Lastly, Core Animation allows us to move rendered content around in the window cheaply. This is great for efficient scrolling. (Our current compositor does not yet make use of this capability, but future work in WebRender will take advantage of it.)

The Firefox Core Animation compositor

How do we make use of those capabilities in Firefox now?

The most important change is that Firefox is now in full control of its swap chain. In the past, we were asking for a double-buffered OpenGL context, and our rendering to the default framebuffer was relying on the built-in swap chain. So on every frame, we could guess that the existing framebuffer content was probably two frames old, but we could never know for sure. Because of this, we just ignored the framebuffer content and re-rendered the entire buffer. In the new world, Firefox renders to offscreen buffers of its own creation and it knows exactly which pixels of each buffer need to be updated and which pixels still contain valid content. This allows us to reduce the work in step 2 drastically: Our compositor can now finally do partial redraws. This change on its own is responsible for most of the power savings.

In addition, each Firefox window is now “tiled” into multiple square Core Animation layers whose contents are rendered separately. This cuts down on work in step 3.

And finally, Firefox windows are additionally split into transparent and opaque parts: Transparent CALayers cover the “vibrant” portions of the window, and opaque layers cover the rest of the window. This saves some more work in step 3. It also means that the window manager does not need to redraw the vibrancy blur effect unless something in the vibrant part of the window changes.

The rendering pipeline in Firefox on macOS now looks as follows:

Step 1: Firefox draws pixels into “Gecko layers”.

Step 2: For each square CALayer tile in the window, the Firefox compositor combines the relevant Gecko layers to redraw the changed parts of that CALayer.

Step 3: The operating system’s window manager assembles all updated windows and CALayers on the screen to produce the screen content.

You can use the Quartz Debug app to visualize the improvements in step 3. Using the “Flash screen updates” setting, you can see that the window manager’s repaint area in Firefox 70 (on the right) is a lot smaller when a tab is loading in the background:

https://i.imgur.com/ks6z0IB.mp4

And in this screenshot with the “Show opaque regions” feature, you can see that Firefox now marks most of the window as opaque (green):

Future Work

We are planning to build onto this work to improve other browsing use cases: Scrolling and full screen video can be made even more efficient by using Core Animation in smarter ways. We are targeting WebRender for these further optimizations. This will allow us to ship WebRender on macOS without a power regression.

Acknowledgements

We implemented these changes with over 100 patches distributed among 28 bugzilla bugs. Matt Woodrow reviewed the vast majority of these patches. I would like to thank everybody involved for their hard work. Thanks to Firefox contributor Mark, who identified the severity of this problem early on, provided sound evidence, and was very helpful with testing. And thanks to all the other testers that made sure this change didn’t introduce any bugs, and to everyone who followed along on Bugzilla.

During the research phase of this project, the Chrome source code and the public Chrome development notes turned out to be an invaluable resource. Chrome developers (mostly Chris Cameron) had already done the hard work of comparing the power usage of various rendering methods on macOS. Their findings accelerated our research and allowed us to implement the most efficient approach right from the start.

Questions and Answers
  • Are there similar problems on other platforms?
    • Firefox uses partial compositing on some platforms and GPU combinations, but not on all of them. Notably, partial compositing is enabled in Firefox on Windows for non-WebRender, non-Nvidia systems on reasonably recent versions of Windows, and on all systems where hardware acceleration is off. Firefox currently does not use partial compositing on Linux or Android.
  • OpenGL on macOS is deprecated. Would Metal have posed similar problems?
    • In some ways yes, in other ways no. Fundamentally, in order to get Metal content to the screen, you have to use Core Animation: you need a CAMetalLayer. However, there are no APIs for partial updates of CAMetalLayers either, so you’d need to implement a solution with smaller layers similarly to what was done here. As for Firefox, we are planning to add a Metal back-end to WebRender in the future, and stop using OpenGL on machines that support Metal.
  • Why was this only a problem now? Did power usage get worse in Firefox 57?
    • As far as we are aware, the power problem did not start with Firefox Quantum. The OpenGL compositor has always been drawing the entire window ever since Firefox 4, which was the first version of Firefox that came with hardware acceleration. We believe this problem became more serious over time simply because screen resolutions increased. Especially the switch to retina resolutions was a big jump in the number of pixels per window.
  • What do other browsers do?
    • Chrome’s compositor tries to use Core Animation as much as it can and has a fallback path for some rare unhandled cases. And Safari’s compositor is entirely Core Animation based; Safari basically skips step 2.
  • Why does hardware accelerated rendering have such a high power cost per pixel?
    • The huge degree to which these changes affected power usage surprised us. We have come up with some explanations, but this question probably deserves its own blog post. Here’s a summary: At a low level, the compositing work in step 2 and step 3 is just copying of memory using the GPU. Integrated GPUs share their L3 cache and main memory with the CPU. So they also share the memory bandwidth. Compositing is mostly memory bandwidth limited: The destination pixels have to be read, the source texture pixels have to be read, and then the destination pixel writes have to be pushed back into memory. A screen worth of pixels takes up around 28MB at the default scaled retina resolution (1680×1050@2x). This is usually too big for the L3 cache, for example the L3 cache in my machine is 8MB big. So each screenful of one layer of compositing takes up 3 * 28MB of memory bandwidth. My machine has a memory bandwidth of ~28GB/s, so each screenful of compositing takes about 3 milliseconds. We believe that the GPU runs at full frequency while it waits for memory. So you can estimate the power usage by checking how long the GPU runs each frame.
  • How does this affect WebRender’s architecture? Wasn’t the point of WebRender to redraw the entire window every frame?
    • These findings have informed substantial changes to WebRender’s architecture. WebRender is now adding support for native layers and caching, so that unnecessary redraws can be avoided. WebRender still aims to be able to redraw the entire window at full frame rate, but it now takes advantage of caching in order to reduce power usage. Being able to paint quickly allows more flexibility and fewer performance cliffs when making layerization decisions.
  • Details about the measurements in the chart:
    • These numbers were collected from test runs on a Macbook Pro (Retina, 15-inch, Early 2013) with an Intel HD Graphics 4000, on macOS 10.14.6, with the default Firefox window size of a new Firefox profile, at a resolution of 1680×1050@2x, at medium display brightness. The numbers come from the PKG measurement as displayed by the Intel Power Gadget app. The power usage in the idle state on this machine is 3.6W, so we subtracted the 3.6W baseline from the displayed values for the numbers in the chart in order to get a sense of Firefox’s contribution. Here are the numbers used in the chart (after the subtraction of the 3.6W idle baseline):
      • Scrolling: before: 16.4W, after: 9.4W
      • Spinning Square: before: 12.8W, after: 2.9W
      • YouTube Video: before: 28.4W, after: 16.4W
      • Google Docs Idle: before: 7.4W, after: 1.6W
      • Tagesschau Audio Player: before: 24.4W, after: 4.1W
      • Loading Animation, low complexity: before: 4.7W, after: 1.8W
      • Loading Animation, medium complexity: before: 7.7W, after: 2.1W
      • Loading Animation, high complexity: before: 19.4W, after: 1.8W
    • Details on the scenarios:
    • Some users have reported even higher impacts from these changes than what our test machine showed. There seem to be large variations in power usage from compositing on different Mac models.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 70 — a bountiful release for all

Mozilla planet - di, 22/10/2019 - 15:45

Firefox 70 is released today, and includes great new features such as secure password generation with Lockwise and the new Firefox Privacy Protection Report; you can read the full details in the Firefox 70 Release Notes.

Amazing user features and protections aside, we’ve also got plenty of cool additions for developers in this release. These include DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, and JS numeric separators. In this article, we’ll take a closer look at some of the highlights!

For all the details, check out the following:

Note that the new Mozilla Developer YouTube channel will have videos covering many of the features mentioned below. Why not subscribe, so you can get them when they come out?

HTML forms and secure passwords

To enable the generation of secure passwords (as mentioned above) we’ve updated HTML <input> elements; any <input> element of type password will have an option to generate a secure password available in the context menu, which can then be stored in Lockwise.

For example, take the following:

<input type="password">

In the Firefox UI, you’ll then be able to generate a secure password like so:

Context menu showing password generation option

In addition, any type="password" field with autocomplete="new-password" set on it will have an autocomplete UI to generate a new password in-context.

Note: It is advisable to use autocomplete="new-password" on password change and registration forms as a strong signal to password managers that a field expects a new password, not an existing one.

CSS

Let’s turn our attention to the new CSS features in Firefox 70.

New options for styling underlines!

Firefox 70 introduces three new properties related to text decoration/underline:

  • text-decoration-thickness: sets the thickness of lines added via text-decoration.
  • text-underline-offset: sets the distance between a text decoration and the text it is set on. Bear in mind that this only works on underlines.
  • text-decoration-skip-ink: sets whether underlines and overlines are drawn if they cross descenders and ascenders. The default value, auto, causes them to only be drawn where they do not cross over a glyph. To allow underlines to cross glyphs, set the value to none.

So, for example, the following code:

h1 {
  text-decoration: underline red;
  text-decoration-thickness: 3px;
  text-underline-offset: 6px;
}

will give you this kind of effect:

a heading with a thick, red, offset underline that isn't drawn over the heading's descenders
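text-decoration-skip-ink isn’t shown in that screenshot; used on its own it would look something like this (the selector is illustrative):

a {
  text-decoration: underline;
  text-decoration-skip-ink: none; /* draw the underline straight through descenders */
}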

Two-keyword display values

For years, the humble display property has taken a single value, whether we are talking about simple display choices like block, inline, or none, or newer display modes like flex or grid.

However, as Rachel explains, the boxes on your page have an outer display type, which determines how the box is laid out in relation to other boxes on the page, and an inner display type, which determines how the box’s children will behave. Browsers have done this for a while, but it has only been specified recently. The new set of two-keyword values allow you to explicitly specify the outer and inner display values.

In supporting browsers (just Firefox at the time of writing), the single keyword values we know and love will map to new two-keyword values, for example:

  • display: flex; is equivalent to display: block flex;
  • display: inline-flex; is equivalent to display: inline flex;

Rachel will explain this in more detail in an upcoming blog post. For now, watch this space!

JavaScript

Now let’s move on to the JavaScript.

Numeric separators

Numeric separators are now supported in JavaScript — underscores can now be used as separators in large numbers, so that they are more readable. For example:

let myNumber = 1_000_000_000_000;
console.log(myNumber); // Logs 1000000000000

let myHex = 0xA0_B0_C0;
console.log(myHex); // Logs 10531008

Numeric separators are usable with any kind of numeric literal, including BigInts.

Intl improvements

We’ve improved JavaScript i18n (internationalization), starting with the implementation of the Intl.RelativeTimeFormat.formatToParts() method. This is a special version of Intl.RelativeTimeFormat.format() that returns an array of objects, each one representing a part of the value, rather than returning a string of the localized time value.

const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });

rtf.format(-2, "day"); // Returns "2 days ago"

rtf.formatToParts(-2, "day");
/* Returns
  [
    { type: "integer", value: "2", unit: "day" },
    { type: "literal", value: " days ago" }
  ]
*/

This is useful because it allows you to easily isolate the numeric value out of the string, for example.

In addition, Intl.NumberFormat.format() and Intl.NumberFormat.formatToParts() now accept BigInt values.

Performance improvements

JavaScript has got generally faster thanks to our new baseline interpreter! You can learn more by reading The Baseline Interpreter: a faster JS interpreter in Firefox 70.

Developer tools

There is a whole host of awesome new things going on with Firefox 70 developer tools. Let’s find out what they are!

Inactive CSS rules indicator in rules panel

Inactive CSS properties in the Rules view of the Page Inspector are now colored gray and have an information icon displayed next to them. The properties are technically valid, but won’t have any effect on the element. When you hover over the info icon, you’ll see a useful message about why the CSS is not being applied, including a hint about how to fix the problem and a “Learn more” link for more information.

For example, in this case our grid-auto-columns property is inactive because we are trying to apply it to an element that is not a grid container:

A warning message saying that the grid-auto-columns property is inactive because the element does not have display: grid applied

And in this case, our flex property is inactive because we are trying to apply it to an element that is not a flex item. (Its parent is not a flex container.):

A warning message saying that the flex property is inactive because the element's parent is not a flex container

To fix this second issue, we can go into the inspector, find the element’s parent (a <div> in this case), and apply display: flex; to it:

The Rules panel showing the parent element with display: flex applied

Our fix is shown in the Changes panel, and from there can be copied and put into our code base. Sorted!

the changes panel, which can be used to copy the code changes you made, and then paste them back into your codebase

Pause on DOM Mutation in Debugger

In complex dynamic web apps it is sometimes hard to tell which script changed the page and caused the issue when you run into a problem. DOM Mutation Breakpoints (aka DOM Change Breakpoints) let you pause scripts that add, remove, or change specific elements.

Try inspecting any element on your page. When you right-click/Ctrl + click it in the HTML inspector, you’ll see a new context menu item “Break on…”, with the following sub-items:

  • Subtree modification
  • Attribute modification
  • Node removal

Once a DOM mutation breakpoint is set, you’ll see it listed under “DOM Mutation Breakpoints” in the right-hand pane of the Debugger; this is also where you’ll see breaks reported.

the debugger interface showing paused DOM mutation code after a DOM mutation breakpoint was hit

For more details, see Break on DOM mutation. If you find them useful for your work, you might find Event Listener Breakpoints and XHR Breakpoints useful too!

Color contrast information in the color picker!

In the CSS Rules view, you can click foreground colors with the color picker to determine if their contrast with the background color meets accessibility guidelines.

a color picker showing color contrast information between the foreground and background colors

Accessibility inspector: keyboard checks

The Accessibility inspector‘s Check for issues dropdown now includes keyboard accessibility checks:

The Firefox accessibility inspector, showing accessibility check options: contrast, keyboard, or text labels

Selecting this option causes Firefox to go through each node in the accessibility tree and highlight all that have a keyboard accessibility issue:

the accessibility inspector showing the results of keyboard checks - several messages highlighting where and what the problems are

Hovering over or clicking each one will reveal information about what the issue is, along with a “Learn more” link for more details on how to fix it.

Try it now, on a web page near you!

Web socket inspector

In Firefox DevEdition, the Network monitor now has a new “Messages” panel, which appears when you are monitoring a web socket connection (i.e. a 101 response). This can be used to inspect web socket frames sent and received through the connection.

Read Firefox’s New WebSocket Inspector to find out more. Note that this functionality was originally supposed to be in Firefox 70 general release, but we had a few more bugs to iron out, so expect it in Firefox 71! For now, you can use it in DevEdition, and please share your constructive feedback!

The post Firefox 70 — a bountiful release for all appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

About:Community: Firefox 70 new contributors

Mozilla planet - di, 22/10/2019 - 15:21

With the release of Firefox 70, we are pleased to welcome the 45 developers who contributed their first code change to Firefox in this release, 32 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categorieën: Mozilla-nl planet

The Mozilla Blog: Latest Firefox Brings Privacy Protections Front and Center Letting You Track the Trackers

Mozilla planet - di, 22/10/2019 - 15:01

Our push this year has been building privacy-centric features in our products that are on by default. With this move, we’re taking the guesswork out of how to give yourself more privacy online thanks to always-on features like blocking third-party tracking cookies and cryptominers also known as Enhanced Tracking Protection. Since July 2 we’ve blocked more than 450 billion tracking requests that attempt to follow you around the web.

450 billion trackers have been blocked with Enhanced Tracking Protection

Much of this work has been behind the scenes — practically invisible to you — making it so that whenever you use Firefox, the privacy protections are working for you in the background.

But now with growing threats to your privacy, it’s clear that you need more visibility into how you’re being tracked online so you can better combat it. That’s why today we’re introducing a new feature that offers you a free report outlining the number of third-party and social media trackers blocked automatically by the Firefox browser with Enhanced Tracking Protection.

In some ways a browser is like a car, where the engine drives you to the places you want to go and a dashboard tells you the basics like how fast you’re going or whether you need gas. Nowadays, most cars go beyond the basics, and dashboards tell you much more than ever, like when you need to brake or when a car is in your blind spot, essentially taking extra steps to protect you. Similar to a car’s dashboard, we created an easy-to-view report within Firefox that shows you the extra steps it takes to protect you when you’re online. So you can enjoy your time without worrying who’s tracking you, potentially using your data or browsing history without your knowledge.

Here’s how Firefox’s Privacy Protections report works for you:



The Firefox Privacy Protections report includes:
    • See how many times Enhanced Tracking Protection blocks an attempt to tag you with cookies –  One of the many unseen ways that Firefox keeps you safe is to block third-party tracking cookies. It’s part of our Enhanced Tracking Protection that we launched by default in September. It prevents third-party trackers from building a profile of you based on your online activity. Now, you’ll see the number of cross-site and social media trackers, fingerprinters and cryptominers we blocked on your behalf.
    • Keep up to date on data breaches with Firefox Monitor –  Data breaches are not uncommon, so it’s more important than ever to stay on top of your email accounts and passwords. Now, you can view at a glance a summary of the number of unsafe passwords that may have been used in a breach, so that you can take action to update and change those passwords.
    • Manage your passwords and synced devices with Firefox Lockwise – Now, you can get a brief look at the number of passwords you have safely stored with Firefox Lockwise. We’ve also added a button you can click to view and update your logins. You’ll also have the ability to quickly view and manage how many devices you are syncing and sharing your passwords with.

“The industry uses dark patterns to push people to “consent” to an unimaginable amount of data collection. These interfaces are designed to push you to allow tracking your behavior as you browse the web,” said Selena Deckelmann, Senior Director of Firefox Engineering at Mozilla. “Firefox’s Data Privacy Principles are concise and clear. We respect your privacy, time, and attention. You deserve better. For Firefox, this is business as usual. And we extend this philosophy to how we protect you from others online.”

Stay up-to-date on Your Personalized Privacy Protections

There are a couple of ways to access your personalized Firefox privacy protections. First, when you visit a site and see a shield icon in the address bar, Enhanced Tracking Protection is at work; Firefox blocks 10 billion trackers every day (that’s billion with a B), stopping thousands of companies from viewing your online activity. Now, when you click on the shield icon, then click on Show Report, you’ll see a complete overview.

Click on the shield icon, then click on Show Report, to see a complete overview

 

The number of cross-site and social media trackers, fingerprinters and cryptominers

 

A complete overview of your Privacy Protections

Another way to access the report is to visit here. The Privacy Protections section of your report is based on your recent week’s online activities.

Keep your passwords safe with Lockwise’s new password generator and improved management

As a further demonstration of our commitment to your privacy and security, we’ve built visible consumer-facing products like Monitor and Lockwise, available to you when you sign up for a Firefox account. Equipped with this information, you can take full advantage of the products and services that are also part of this latest release.

Last year, Firefox Lockwise (previously Lockbox) launched as a Test Pilot experiment to safely store and take your passwords everywhere. Since then, we’ve incorporated feedback from users, such as launching it on Android in addition to desktop and iOS. Today, we’ve added two of the most requested features for Lockwise, now available in Firefox: a password generator with improved management, plus integrated updates on breached accounts with Firefox Monitor.

Take a look at the improved Lockwise dashboard:



The newest Firefox release lets users generate and manage passwords with Firefox Lockwise and stay informed about data breaches with Firefox Monitor, both of which are now even better integrated with the Firefox browser and its features.

  • Generate new, secure passwords – With multiple accounts like email, banks, retailers or delivery services, it can be tough to come up with creative and secure passwords rather than the typical 123456 or your favorite sports team, which are not secure at all. Now, when you create an account you’ll be auto-prompted to let Lockwise generate a safe password, which you can save directly in the Firefox browser. For current accounts, you can right click in the password field to access securely generated passwords through the fill option. All securely generated passwords are auto-saved to your Firefox Lockwise account.
  • Improved dashboard to manage your passwords – To access the new Lockwise dashboard, click on the main menu button located on the far right of your toolbar. It looks like ☰ with three parallel lines. From there click on “Logins and Passwords”, you’ll see the new and improved Firefox Lockwise dashboard open in a new tab, which allows you to search, sort, create, update, delete and manage your passwords to all your accounts. Plus, you’ll see a notification from Firefox Monitor if the account may have been involved in a data breach.
  • Take your passwords with you everywhere – Use saved passwords in the Firefox browser on any device by downloading Firefox Lockwise for Android and iOS. With a Firefox Account, you can sync all your logins between Firefox browsers and the Firefox Lockwise apps to auto-fill and safely access your passwords across devices whenever you are on the go.
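
For readers curious what “securely generated” could look like under the hood, here is a minimal sketch in Python of a generator built on a cryptographically secure random source. This is an illustration of the general technique, not Lockwise’s actual implementation; the character set and default length are assumptions.

    # Illustrative sketch only; not Lockwise's actual implementation.
    # The `secrets` module draws from a cryptographically secure random source.
    import secrets
    import string

    def generate_password(length: int = 16) -> str:
        """Return a random password drawn from letters, digits, and symbols."""
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # e.g. q7Rz_xM4p!TbW2e%
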
Preventing additional data leaks in today’s Firefox release

We’re always looking for ways to protect your privacy, and as you know, there are multiple ways that companies can access your data. One of them is the HTTP referrer: a piece of data sent along with an HTTP request that tells the destination site which page the request came from, and it can be leveraged to track you from site to site. Additionally, companies can collect and sell this data to third parties and use it to build user profiles. Initially launched in Private Browsing mode in January 2018, this protection now extends beyond Private Browsing: in today’s Firefox release, we strip path information from the HTTP referrer sent to third-party trackers to prevent additional data leaks.
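
To make “stripping path information” concrete, here is a rough sketch of the idea in Python. It is an illustration only, not Firefox’s internal implementation, and the example URL is made up.

    # Illustrative sketch only; not Firefox's internal implementation.
    from urllib.parse import urlparse

    def trim_referrer(referrer: str) -> str:
        """Reduce a full referrer URL to scheme://host/, dropping path and query."""
        parts = urlparse(referrer)
        return f"{parts.scheme}://{parts.netloc}/"

    full = "https://example.com/health/condition?query=sensitive"
    print(trim_referrer(full))  # -> https://example.com/

The tracker can still see which site the request came from, but no longer the specific page or query you were looking at.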

To see what else is new or what we’ve changed in today’s release, you can check out our release notes.

Check out and download the latest version of Firefox, available here.

 

The post Latest Firefox Brings Privacy Protections Front and Center Letting You Track the Trackers appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

The Mozilla Blog: The Illusion of choice and the need for default privacy protection

Mozilla planet - di, 22/10/2019 - 15:00

Since July 2019, Firefox’s Enhanced Tracking Protection has blocked over 450 billion third-party tracking requests from exploiting user data for profit. This shocking number reveals the sheer scale of online tracking, and it highlights why the current advertising-industry push for transparency, choice and “consent” as a solution to online privacy simply won’t work. The solutions put forth by other tech companies and the ad industry provide the illusion of choice. Let’s step through the reasons why that is and why we ultimately felt it necessary to enable Enhanced Tracking Protection by default.

A few months ago, we began to enable Enhanced Tracking Protection, which protects Firefox users from cookie-based tracking by default. We did this for a few reasons:

1. People do not expect their data to be sent to, and collected by, third-party companies as they browse the web. For example, 72% of people do not expect that Facebook uses “Like” buttons to collect data about a person’s online activity on websites outside of Facebook (when the buttons are not actually clicked). Many in the ad industry will point to conversion rates for behaviorally targeted ads as evidence for consumers being okay with the privacy tradeoff, but people don’t know they are actually making such a tradeoff. And even if they were aware, we shouldn’t expect them to have the information necessary to evaluate the tradeoff. When people are asked explicitly about it, they are generally opposed. 68% of people believe that using online tracking to tailor advertisements is unethical.

2. The scale of the problem is immense. We currently see about 175 tracking domains being blocked per Firefox client per day. This has very quickly resulted in over 450B trackers being blocked in total since July. You can see the numbers accelerate at the beginning of September, after we enabled Enhanced Tracking Protection for all users.
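
As a rough back-of-envelope check on how these two figures fit together, here is a short Python sketch. The length of the measurement window is an assumption; the per-client rate is treated as roughly constant, which it is not in practice.

    # Back-of-envelope arithmetic relating the two figures above.
    # The ~110-day window (early July to late October) is an assumption.
    total_blocked = 450e9        # trackers blocked since July (from the post)
    per_client_per_day = 175     # per-client rate from the post, assumed roughly constant
    days = 110                   # assumed measurement window

    client_days = total_blocked / per_client_per_day
    avg_daily_clients = client_days / days
    print(f"{client_days:.2e} client-days of blocking")                  # ~2.57e+09
    print(f"~{avg_daily_clients / 1e6:.0f} million clients per day on average")  # ~23

Since the protection only became the default for everyone in September, the number of clients contributing on a given recent day is higher than this flat average suggests.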

Estimate of Tracking Requests Blocked by Firefox with Enhanced Tracking Protection

It should be clear from these numbers that users would be quickly overwhelmed if they were required to make individual choices about data sharing with this many companies.

3. The industry uses dark patterns to push people to “consent” via cookie/consent banners.

We’ve all had to click through consent banners every time we visit a new site. Let’s walk through the dark patterns in one large tech company’s consent management flow as an example, keeping in mind that this experience is not unique — you can find plenty of other examples just like this one. This particular consent management flow shows how these interfaces are often designed counterintuitively so that users likely don’t think they are agreeing to be tracked. We’ve redacted the company name in the example to focus on the content of the experience.

To start off, we’re presented with a fairly standard consent prompt, which is meant to allow the site visitor to make an informed choice about how their data can be collected and used. However, note that clicking anywhere on the page provides “consent”. It only gets worse from here…

Consent prompt on large tech company website

If the user manages to click “Manage Ad Cookies” before clicking elsewhere on the page, they are given the options to “Save Settings” or “Allow All”. According to the highlighted text, clicking either of these buttons at this point provides consent to all partners to collect user data.  Users are not given the option to “Disable All”.

Confusing consent dialog

Instead, if a user wants to manage consent, they have to click the link labeled “view vendor consent”. Wording matters here! If a person is skimming through that dialog, they’ll assume that link is informational. This consent flow is constructed to make the cognitive load required to protect oneself as high as possible, while providing ample opportunity to “take the easy way out” and allow all tracking.

Finally, users who make it to the consent management section of the flow are presented with 415 individual sliders. The website provides a global toggle, but let’s assume a user actually wants to make informed choices about each partner. After all, that is the point, right?

Confusing consent mechanism

Eight of the 415 privacy policies linked from the consent management page are inaccessible. They throw certificate errors, fail to resolve, or time out.

Error loading privacy policies for 3rd party partners
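
A check like the one described above is straightforward to reproduce. The sketch below is one assumed way to test whether the linked policies load; the URLs and the timeout are placeholders, not the actual methodology behind this post.

    # Minimal sketch of checking whether linked privacy policies load.
    # The URLs and the 10-second timeout are placeholders for illustration.
    import requests

    policy_urls = [
        "https://example-partner-1.test/privacy",
        "https://example-partner-2.test/privacy-policy",
    ]

    for url in policy_urls:
        try:
            response = requests.get(url, timeout=10)
            status = f"HTTP {response.status_code}"
        except requests.exceptions.SSLError:
            status = "certificate error"
        except requests.exceptions.Timeout:
            status = "timed out"
        except requests.exceptions.ConnectionError:
            status = "failed to resolve or connect"
        print(url, "->", status)
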

The 407 privacy policies that load correctly total over 1.3 million words. That will take the average adult over 86 hours — two solid work weeks — just to read. That doesn’t even consider the time needed to reflect on that information and make an informed choice.
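
The 86-hour figure is simple arithmetic, assuming an average adult reading speed of roughly 250 words per minute (the reading speed is an assumption; the word count comes from this post):

    # Reading-time arithmetic behind the "over 86 hours" figure.
    # The 250 words-per-minute reading speed is an assumed average.
    total_words = 1_300_000
    words_per_minute = 250

    minutes = total_words / words_per_minute
    hours = minutes / 60        # ~86.7 hours
    work_weeks = hours / 40     # ~2.2 forty-hour work weeks
    print(f"{hours:.0f} hours, or roughly {work_weeks:.1f} forty-hour work weeks")
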

Proposals for “transparency and consent” as a solution to rampant web tracking should be seen for what they really are: proposals to continue business as usual.

Thankfully Firefox blocks almost all of the third-party cookies loaded on the page by default, despite the deceptive methods used to get the visitor to “consent”.

Tracking Cookies Blocked in Firefox by Default

While it is easy to focus on this particular example, this experience is far from unique. The sheer volume of tracker blocking that we see with Firefox’s Enhanced Tracking Protection (around 175 blocks per client per day) confirms that the average individual would never be able to make informed choices about whether or not individual companies can collect their data. This also highlights how tech companies need to do more if they are really serious about privacy, rather than push the burden onto their customers.

Firefox already blocks tracking by default. Today, a new version of Firefox is being released that makes it clear when tracking attempts are happening without your knowledge and highlights how Firefox is keeping you safe.

We invite you to download and try the Firefox browser here.

 

The post The Illusion of choice and the need for default privacy protection appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet
