
Mozilla Future Releases Blog: What’s next in making Encrypted DNS-over-HTTPS the Default

Mozilla planet - Fri, 06/09/2019 - 20:10

In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol, and since June 2018 we’ve been running experiments in Firefox to ensure the performance and user experience are great. We’ve also been surprised and excited by the more than 70,000 users who have already chosen on their own to explicitly enable DoH in Firefox Release edition. We are close to releasing DoH in the USA, and we have a few updates to share.

After many experiments, we’ve demonstrated that we have a reliable service whose performance is good, that we can detect and mitigate key deployment problems, and that most of our users will benefit from the greater protections of encrypted DNS traffic. We feel confident that enabling DoH by default is the right next step. When DoH is enabled, users will be notified and given the opportunity to opt out.

This post includes results of our latest experiment, configuration recommendations for systems administrators and parental controls providers, and our plans for enabling DoH for some users in the USA.

Results of our Latest Experiment

Our latest DoH experiment was designed to help us determine how we could deploy DoH, honor enterprise configuration and respect user choice about parental controls.

We had a few key learnings from the experiment.

  • We found that OpenDNS’ parental controls and Google’s safe-search feature were rarely configured by Firefox users in the USA. In total, 4.3% of users in the study used OpenDNS’ parental controls or safe-search. Surprisingly, there was little overlap between users of safe-search and OpenDNS’ parental controls. As a result, we’re reaching out to parental controls operators to find out more about why this might be happening.
  • We found 9.2% of users triggered one of our split-horizon heuristics. The heuristics were triggered in two situations: when websites were accessed whose domains had non-public suffixes, and when domain lookups returned both public and private (RFC 1918) IP addresses. There was also little overlap between users of our split-horizon heuristics, with only 1% of clients triggering both heuristics.
Moving Forward

Now that we have these results, we want to tell you about the approach we have settled on to address managed networks and parental controls. At a high level, our plan is to:

  • Respect user choice for opt-in parental controls and disable DoH if we detect them;
  • Respect enterprise configuration and disable DoH unless explicitly enabled by enterprise configuration; and
  • Fall back to operating system defaults for DNS when split horizon configuration or other DNS issues cause lookup failures.

We’re planning to deploy DoH in “fallback” mode; that is, if domain name lookups using DoH fail or if our heuristics are triggered, Firefox will fall back and use the default operating system DNS. This means that for the minority of users whose DNS lookups might fail because of split horizon configuration, Firefox will attempt to find the correct address through the operating system DNS.
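
For reference, this behavior maps onto a small set of about:config preferences that Firefox already exposes; the values below reflect current documentation and may change as the rollout progresses:

network.trr.uri  - URL of the DoH resolver Firefox should use
network.trr.mode - 0: DoH off, use the OS resolver (today's default)
                   2: DoH first, fall back to the OS resolver on failure (the "fallback" mode described above)
                   3: DoH only, never fall back
                   5: DoH explicitly disabled by the user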

In addition, Firefox already detects that parental controls are enabled in the operating system, and if they are in effect, Firefox will disable DoH. Similarly, Firefox will detect whether enterprise policies have been set on the device and will disable DoH in those circumstances. If an enterprise policy explicitly enables DoH, which we think would be awesome, we will also respect that. If you’re a system administrator interested in how to configure enterprise policies, please find documentation here. If you find any bugs, please report them here.
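
To make that concrete, here is a minimal policies.json sketch for managed deployments; the DNSOverHTTPS policy shown is the one documented in the Firefox policy templates, and flipping Enabled to true is how an enterprise would explicitly turn DoH on rather than off:

{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": false,
      "Locked": true
    }
  }
}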

Options for Providers of Parental Controls

We’re also working with providers of parental controls, including ISPs, to add a canary domain to their blocklists. This helps us in situations where the parental controls operate on the network rather than an individual computer. If Firefox determines that our canary domain is blocked, this will indicate that opt-in parental controls are in effect on the network, and Firefox will disable DoH automatically. If you are a provider of parental controls, details are available here. Please reach out to us for more information at doh-canary-domain@mozilla.com. We’re also interested in connecting with commercial blocklist providers, in the US and internationally.
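
As a concrete illustration, the canary domain Mozilla uses for this signal is use-application-dns.net, and a network answers NXDOMAIN for it to indicate that DoH should stay off. In Unbound, for example, that is a single local-zone rule (equivalent configuration exists for BIND and most commercial resolvers):

# unbound.conf: signal that this network opts out of application DNS (DoH)
local-zone: "use-application-dns.net." always_nxdomain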

This canary domain is intended for use in cases where users have opted in to parental controls. We plan to revisit the use of this heuristic over time, and we will be paying close attention to how the canary domain is adopted. If we find that it is being abused to disable DoH in situations where users have not explicitly opted in, we will revisit our approach.

Plans for Enabling DoH Protections by Default

We plan to gradually roll out DoH in the USA starting in late September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will let you know when we’re ready for 100% deployment. For the moment, we encourage enterprise administrators and parental control providers to check out our config documentation and get in touch with any questions.

 

The post What’s next in making Encrypted DNS-over-HTTPS the Default appeared first on Future Releases.

Categories: Mozilla-nl planet

Honza Bambas: Visual Studio Code auto-complete displays MDN reference for CSS and HTML tags

Mozilla planet - Fri, 06/09/2019 - 13:25

Mozilla Developer Network (now MDN Web Docs) is great, probably the best Web development reference site of them all. And therefore even Microsoft now defaults to us in Visual Studio Code.

Snippet from their Release Notes for 1.38.0:

Languages: MDN Reference for HTML and CSS

VS Code now displays a URL pointing to the relevant MDN Reference in completion and hover of HTML & CSS entities:

HTML & CSS MDN Reference

We thank the MDN documentation team for their effort in curating mdn-data / mdn-browser-compat-data and making MDN resources easily accessible by VS Code.

The post Visual Studio Code auto-complete displays MDN reference for CSS and HTML tags appeared first on mayhemer's blog.

Categories: Mozilla-nl planet

Mozilla VR Blog: Semantic Placement in Augmented Reality using MrEd

Mozilla planet - Thu, 05/09/2019 - 20:57
Semantic Placement in Augmented Reality using MrEd

In this article we’re going to take a brief look at how we may want to think about placement of objects in Augmented Reality. We're going to use our recently released lightweight AR editing tool MrEd to make this easy to demonstrate.

Designers often express ideas in a domain appropriate language. For example a designer may say “place that chair on the floor” or “hang that photo at eye level on the wall”.

However when we finalize a virtual scene in 3d we often keep only the literal or absolute XYZ position of elements and throw out the original intent - the deeper reason why an object ended up in a certain position.

It turns out that it’s worth keeping the intention - so that when AR scenes are re-created for new participants or in new physical locations, the scenes still “work” - they are still satisfying experiences - even if some aspects change.

In a sense this recognizes the Japanese term 'Wabi-Sabi'; that aesthetic placement is always imperfect and contends between fickle forces. Describing placement in terms of semantic intent is also similar to responsive design on the web or the idea of design patterns as described by Christopher Alexander.

Let’s look at two simple examples of semantic placement in practice.

1. Relative to the Ground

When you’re placing objects in augmented reality you often want to specify that those objects should be relationally placed in a position relative to other objects. A typical, in fact ubiquitous, example of placement is that often you want an object to be positioned relative to “the ground”.

Sometimes the designer's intent is to select the highest relative surface underneath the object in question (such as placing a lamp on a table) or at other times to select the lowest relative surface underneath an object (such as say placing a kitten on the floor under a table). Often, as well, we may want to express a placement in the air - such as say a mailbox, or a bird.

In this very small example I’ve attached a ground detection script to a duck, and then sprinkled a few other passive objects around the scene. As the ground is detected the duck will pop down from a default position to be offset relative to the ground (although still in the air). See the GIF above for an example of the effect.

To try this scene out yourself you will need WebXR for iOS which is a preview of emerging WebXR standards using iOS ARKit to expose augmented reality features in a browser environment. This is the url for the scene above in play mode (on a WebXR capable device):

https://painted-traffic.glitch.me/.mred/build/?mode=play&doc=doc_103575453

Here is what it should look like in edit mode:

Semantic Placement in Augmented Reality using MrEd

You can also clone the glitch and edit the scene yourself (you’ll want to remember to set a password in the .env file and then login from inside MrEd). See:

https://glitch.com/edit/#!/painted-traffic

Here’s my script itself:

/// #title grounded
/// #description Stick to Floor/Ground - dynamically and constantly searching for low areas nearby
({
  start: function(evt) {
    this.sgp.startWorldInfo()
  },
  tick: function(e) {
    let floor = this.sgp.getFloorNear({point:e.target.position})
    if(floor) {
      e.target.position.y = floor.y
    }
  }
})

This is relying on code baked into MrEd (specifically inside of findFloorNear() in XRWorldInfo.js if you really want to get detailed).

In the above example I begin by calling startWorldInfo() to start painting the ground planes (so that I can see them since it’s nice to have visual feedback). And, every tick, I call a floor finder subroutine which simply returns the best guess as to the floor in that area. The floor finder logic in this case is pre-defined but one could easily imagine other kinds of floor finding strategies that were more flexible.

2. Follow the player

Another common designer intent is to make sure that some content is always visible to the player. As designers in virtual or augmented reality it can be more challenging to direct a user’s attention to virtual objects. These are 3d immersive worlds; the player can be looking in any direction. Some kind of mechanic is needed to help make sure that the player sees what they need to see.

One common simple solution is to build an object that stays in front of the user. This can be itself a combination of multiple simpler behaviors. An object can be ordered to seek a position in front of the user, be at a certain height, and ideally billboarded so that any signage or message is always legible.

In this example a sign is decorated with two separate scripts, one to keep the sign in front of the player, and another to billboard the sign to face the player.

https://painted-traffic.glitch.me/.mred/build/?mode=edit&doc=doc_875751741&doctype=vr
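
For illustration, here is a rough sketch of what the “stay in front” and billboard behaviors could look like, written in the same style as the grounding script above. The getPlayerPose() accessor and lookAt() helper are hypothetical stand-ins for whatever MrEd actually exposes, so treat this as pseudocode rather than a script you can paste in:

// Hypothetical "stay in front of the player" + billboard behavior (pseudocode)
({
  tick: function(e) {
    let player = this.sgp.getPlayerPose()   // hypothetical accessor
    // Park the sign about 1.5m ahead of the player, at eye height.
    e.target.position.x = player.position.x + player.forward.x * 1.5
    e.target.position.z = player.position.z + player.forward.z * 1.5
    e.target.position.y = player.position.y
    // Billboard: rotate the sign so its face points back at the player.
    e.target.lookAt(player.position)        // hypothetical helper
  }
})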

Closing thoughts

We’ve only scratched the surface of the kinds of intent that could be expressed or combined together. If you want to dive deeper there is a longer list in a separate article (Laundry List of UX Patterns). I also invite you to help extend the industry; think both about what high level intentions you mean when you place objects and also how you'd communicate those intentions.

The key insight here is that preserving semantic intent means thinking of objects as intelligent, able to respond to simple high level goals. Virtual objects are more than just statues or art at a fixed position, but can be entities that can do your bidding, and follow high level rules.

Ultimately future 3d tools will almost certainly provide these kinds of services - much in the way CSS provides layout directives. We should also expect to see conventions emerge as more designers begin to work in this space. As a call to action, it's worth it to notice the high level intentions that you want, and to get the developers of the tools that you use to start to incorporate those intentions as primitives.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Debugging TypeScript in Firefox DevTools

Mozilla planet - Thu, 05/09/2019 - 16:49

Firefox Debugger has evolved into a fast and reliable tool chain over the past several months and it’s now supporting many cool features. Though primarily used to debug JavaScript, did you know that you can also use Firefox to debug your TypeScript applications?

Before we jump into real world examples, note that today’s browsers can’t run TypeScript code directly. It’s important to understand that TypeScript needs to be compiled into JavaScript before it’s included in an HTML page.

Also, debugging TypeScript is done through a source-map, and so we need to instruct the compiler to produce a source-map for us as well.

You’ll learn the following in this post:

  • Compiling TypeScript to JavaScript
  • Generating source-map
  • Debugging TypeScript

Let’s get started with a simple TypeScript example.

TypeScript Example

The following code snippet shows a simple TypeScript hello world page.

// hello.ts

interface Person {
  firstName: string;
  lastName: string;
}

function hello(person: Person) {
  return "Hello, " + person.firstName + " " + person.lastName;
}

function sayHello() {
  let user = { firstName: "John", lastName: "Doe" };
  document.getElementById("output").innerText = hello(user);
}

TypeScript (TS) is very similar to JavaScript and the example should be understandable even for JS developers unfamiliar with TypeScript.

The corresponding HTML page looks like this:

// hello.html

<!DOCTYPE html>
<html>
<head>
  <script src="hello.js"></script>
</head>
<body>
  <button onclick="sayHello()">Say Hello!</button>
  <div id="output"></div>
</body>
</html>

Note that we are including the hello.js not the hello.ts file in the HTML file. Today’s browsers can’t run TS directly, and so we need to compile our hello.ts file into regular JavaScript.

The rest of the HTML file should be clear. There is one button that executes the sayHello() function and <div id="output"> that is used to show the output (hello message).

The next step is to compile our TypeScript into JavaScript.

Compiling TypeScript To JavaScript

To compile TypeScript into JavaScript you need to have a TypeScript compiler installed. This can be done through NPM (Node Package Manager).

npm install -g typescript

Using the following command, we can compile our hello.ts file. It should produce a JavaScript version of the file with the *.js extension.

tsc hello.ts

In order to produce a source-map that describes the relationship between the original code (TypeScript) and the generated code (JavaScript), you need to use an additional --sourceMap argument. It generates a corresponding *.map file.

tsc hello.ts --sourceMap

Yes, it’s that simple.

You can read more about other compiler options if you are interested.
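
If your project uses a tsconfig.json rather than command-line flags, the same option can be switched on there; a minimal example:

{
  "compilerOptions": {
    "sourceMap": true
  },
  "files": ["hello.ts"]
}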

The generated JS file should look like this:

function hello(person) {
    return "Hello, " + person.firstName + " " + person.lastName;
}
function sayHello() {
    var user = { firstName: "John", lastName: "Doe" };
    document.getElementById("output").innerText = hello(user);
}
//# sourceMappingURL=hello.js.map

The most interesting thing is probably the comment at the end of the generated file. The syntax comes from old Firebug times and refers to a source map file containing all information about the original source.

Are you curious what the source map file looks like? Here it is.

{ "version":3, "file":"hello.js", "sourceRoot":"", "sources":["hello.ts"], "names":[], "mappings": "AAKA,SAAS,OAAO,CAAC,MAAc;IAC7B,OAAO,SAAS,GAAG,MAAM,CAAC,SAAS,GAAG,GAAG,GAAG,MAAM,CAAC,QAAQ,CAAC;AAC9D,CAAC;AAED,IAAI,IAAI,GAAG;IACT,SAAS,EAAE,MAAM;IACjB,QAAQ,EAAE,KAAK;CAChB,CAAC;AAEF,SAAS,QAAQ;IACf,QAAQ,CAAC,cAAc,CAAC,QAAQ,CAAC,CAAC,SAAS,GAAG,OAAO,CAAC,IAAI,CAAC,CAAC;AAC9D,CAAC" }

It contains information (including location) about the generated file (hello.js), the original file (hello.ts), and, most importantly, mappings between those two. With this information, the debugger knows how to interpret the TypeScript code even if it doesn’t know anything about TypeScript.

The original language could be anything (Rust, C++, etc.) and with a proper source-map, the debugger knows what to do. Isn’t that magic?

We are all set now. The next step is loading our little app into the Debugger.

Debugging TypeScript

The debugging experience is no different from how you’d go about debugging standard JS. You’re actually debugging the generated JavaScript, but since source-map is available the debugger knows how to show you the original TypeScript instead.

This example is available online, so if you are running Firefox you can try it right now.

Let’s start with creating a breakpoint on line 9 in our original TypeScript file. To hit the breakpoint you just need to click on the Say Hello! button introduced earlier.

Debugging TypeScript

See, it’s TypeScript there!

Note the Call stack panel on the right side, it properly shows frames coming from hello.ts file.

One more thing: If you are interested in seeing the generated JavaScript code you can use the context menu and jump right into it.

This action should navigate you to the hello.js file and you can continue debugging from the same location.

You can see that the Sources tree (on the left side) shows both these files at the same time.

Map Scopes

Let’s take a look at another neat feature that allows inspection of variables in both original and generated scopes.

Here is a more complex glitch example.

  1. Load https://firefox-devtools-example-babel-typescript.glitch.me/
  2. Open DevTools Toolbox and select the Debugger panel
  3. Create a breakpoint in Webpack/src/index.tsx file on line 45
  4. The breakpoint should pause JS execution immediately

Screenshot of the DevTools debugger panel allowing inspection of variables in both original and generated scopes

Notice the Scopes panel on the right side. It shows variables coming from generated (and also minified) code and it doesn’t correspond to the original TSX (TypeScript with JSX) code, which is what you see in the Debugger panel.

There is a weird e variable instead of localeTime, which is actually used in the source code.

This is where the Map scopes feature comes in handy. In order to see the original variables (used in the original TypeScript code) just click the Map checkbox.

Debugger panel in Firefox DevTools,using the Map checkbox to see original TypeScript variables

See, the Scopes panel shows the localeTime variable now (and yes, the magic comes from the source map).

Finally, if you are interested in where the e variable comes from, jump into the generated location using the context menu (like we just did in the previous example).

DevTools showing Debugger panel using the context menu to locate the e variable

Stay tuned for more upcoming Debugger features!

Jan ‘Honza’ Odvarko

The post Debugging TypeScript in Firefox DevTools appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Georg Fritzsche: Introducing Glean — Telemetry for humans

Mozilla planet - Thu, 05/09/2019 - 12:25
Introducing Glean — Telemetry for humans
Glean logo — subtitled “telemetry for humans”

When Firefox Preview shipped, it was also the official launch of Glean, our new mobile product analytics & telemetry solution true to Mozilla’s values. This post goes into how we got there and what its design principles are.

Background

In the last few years, Firefox development has become increasingly data-driven. Mozilla’s larger data engineering team builds & maintains most of the technical infrastructure that makes this possible; from the Firefox telemetry code to the Firefox data platform and hosting analysis tools. While data about our products is crucial, Mozilla has a rare approach to data collection, following our privacy principles. This includes requiring data review for every new piece of data collection to ensure we are upholding our principles — even when it makes our jobs harder.

One great success story for us is having the Firefox telemetry data described in machine-readable and clearly structured form. This encourages best practices like mandatory documentation, steering towards lean data practices and enables automatic data processing — from generating tables to powering tools like our measurement dashboard or the Firefox probe dictionary.

However, we also learned lessons about what didn’t work so well. While the data types we used were flexible, they were hard to interpret. For example, we use plain numbers to store counts, generic histograms to store multiple timespan measures and allow for custom JSON submissions for uncovered use-cases. The flexibility of these data types means it takes work to understand how to use them for different use-cases & leaves room for accidental error on the instrumentation side. Furthermore, it requires manual effort in interpreting & analysing these data points. We noticed that we could benefit from introducing higher-level data types that are closer to what we want to measure — like data types for “counters” and “timing distributions”.

What about our mobile telemetry?

Another factor was that our mobile product infrastructure was not yet well integrated with the Firefox telemetry infrastructure described above. Different products used different analytics solutions & different versions of our own mobile telemetry code, across Android & iOS. Also, our own mobile telemetry code did not describe its metrics in machine-readable form. This meant analysis was potentially different for each product & new instrumentations were higher effort. Integrating new products into the Firefox telemetry infrastructure meant substantial manual effort.

From reviewing the situation, one main question came up: What if we could provide one consistent telemetry SDK for our mobile products, bringing the benefits of our Firefox telemetry infrastructure but without the above mentioned drawbacks?

Introducing Glean

In 2018, we looked at how we could integrate Mozilla’s mobile products better. Putting together what we learned from our existing Firefox Telemetry system, feedback from various user interviews and what we found mattered for our mobile teams, we decided to reboot our telemetry and product analytics solution for mobile. We took input from a cross-functional set of people, data science, engineering, product management, QA and others to form a complete picture of what was required.

From that, we set out to build an end-to-end solution called Glean, consisting of different pieces:

  • Product-side tools — The data enters our system here through the Glean SDK, which is what products integrate and record data into. It provides mobile APIs and aims to hide away the complexities of reliable data collection.
  • Services — This is where the data is stored and made available for analysis, building on our Firefox data platform.
  • Data Tools — Here our users are able to look at the data, performing analysis and setting up dashboards. This goes from running SQL queries, visualizing core product analytics to data scientists digging deep into the raw data.

Our main goal was to support our typical mobile analytics & engineering use-cases efficiently, which came down to the following principles:

  • Basic product analytics are collected out-of-the-box in a standardized way. A baseline of analysis is important for all our mobile applications, from counting active users to retention and session times. This is supported out-of-the-box by our SDK and works consistently across mobile products that integrate it.
  • No custom code is required for adding new metrics to a product. To make our engineers more productive, the SDK keeps the amount of instrumentation code required for metrics as small as possible. Engineers only need to specify what they want to instrument, with which semantics and then record the data using the Glean SDK.
  • New metrics should be available for basic analysis without additional effort. Once a released product is enabled for Glean, getting access to newly added metrics shouldn’t require a time-consuming process. Instead they should show up automatically, both for end-to-end validation and basic analysis through SQL.

To make sure that what we build is true to Mozilla’s values, encourages best practices and is sustainable to work with, we added these principles:

  • Lean data practices are encouraged through SDK design choices. It’s easy to limit data collection to only what’s necessary and documentation can be generated easily, aiding both transparency & understanding for analysis.
  • Use of standardized data types & registering them in machine-readable files. By having collected data described in machine-readable files, our various data tools can read them and support metrics automatically, without manual work, including schema generation, etc.
  • Introduce high-level metric types, so APIs & data tools can better match the use-cases. To make the choice easier for which metric type to use, we introduced higher-level data types that offer clear and understandable semantics — for example, when you want to count something, you use the “counter” type. This also gives us opportunities to offer better tooling for the data, both on the client and for data tooling.
  • Basic semantics on how the data is collected are clearly defined by the library. To make it easier to understand the general semantics of our data, the Glean SDK will define and document when which kind of data will get sent. This makes data analysis easier through consistent semantics.

One crucial design choice here was to use higher-level metric types for the collected metrics, while not supporting free-form submissions. This choice allows us to focus the Glean end-to-end solution on clearly structured, well-understood & automatable data and enables us to scale analytics capabilities more efficiently for the whole organization.

Let’s count something

So how does this work out in practice? To have a more concrete example, let’s say we want to introduce a new metric to understand how many times new tabs are opened in a browser.

In Glean, this starts from declaring that metric in a YAML file. In this case we’ll add a new “counter” metric:

browser.usage:
  tab_opened:
    type: counter
    description: Count how often a new tab is opened.
    …

Now from here, an API is automatically generated that the product code can use to record when something happens:

import org.mozilla.yourApplication.GleanMetrics.BrowserUsage

override fun tabOpened() {
    BrowserUsage.tabOpened.add()
}

That’s it, everything else is handled internally by the SDK — from storing the data, packaging it up correctly and sending it out.

This new metric can then be unit-tested or verified in real-time, using a web interface to confirm the data is coming in. Once the product change is live, data starts coming in and shows up in standard data sets. From there it is available to query using SQL through Redash, our generic go-to data analysis tool. Other tools can also later integrate it, like the measurement dashboard or Amplitude.
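
For example, a unit test for this counter could look roughly like the Kotlin below. The testHasValue()/testGetValue() helpers are the per-metric test APIs the Glean SDK documents; initializing Glean for tests (e.g. via its test rule) is omitted here:

import org.mozilla.yourApplication.GleanMetrics.BrowserUsage
import org.junit.Assert.assertEquals
import org.junit.Assert.assertTrue
import org.junit.Test

class BrowserUsageTest {
    @Test
    fun countsOpenedTabs() {
        // Record the metric exactly as product code would.
        BrowserUsage.tabOpened.add()

        // Verify it through the SDK's metric test API.
        assertTrue(BrowserUsage.tabOpened.testHasValue())
        assertEquals(1, BrowserUsage.tabOpened.testGetValue())
    }
}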

Of course there is a set of other metric types available, including events, dates & times and other typical use cases.

Want to see how this looks in code? You can take a look at the Glean Android sample app, especially the metrics.yaml file and its main activity.

What’s next?

The first version of the Glean solution went live to support the launch of Firefox Preview, with an initial SDK support for Android applications & a priority set of data tools. iOS support for the SDK is already planned for 2019, as is improved & expanded integration with different analysis tools. We are also actively considering support for desktop platforms, to make Glean a true cross-platform analytics SDK.

If you’re interested in learning more, you can check out:

We’ll certainly expand on more technical details in upcoming blog posts.

Special thanks

While this project took contributions from a lot of people, I especially want to call out Frank Bertsch (data engineering lead), Alessio Placitelli (Glean SDK lead) and Michael Droettboom (data engineer & SDK engineer). Without their substantial contributions to design & implementation, this project would not have been possible.

Introducing Glean — Telemetry for humans was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

David Humphrey: Some Assembly Required

Mozilla planet - Wed, 04/09/2019 - 22:30

In my open source courses, I spend a lot of time working with new developers who are trying to make sense of issues on GitHub and figure out how to begin.  When it comes to how people write their issues, I see all kinds of styles.  Some people write for themselves, using issues like a TODO list: "I need to fix X and Y."  Other people log notes from a call or meeting, relying on the collective memory of those who attended: "We agreed that so-and-so is going to do such-and-such."  Still others write issues that come from outside the project, recording a bug or some other problem: "Here is what is happening to me..."

Because I'm getting ready to take another cohort of students into the wilds of GitHub, I've been thinking once more about ways to make this process better.  Recently I spent a number of days assembling furniture from IKEA with my wife.  Spending that much time with Allen keys got me thinking about what we could learn from IKEA's work to enable contribution from customers.

I am not a furniture maker.  Not even close.  While I own some power tools, most were gifts from my father, who actually knows how to wield them.  I'm fearless when it comes to altering bits in a computer; but anything that puts holes in wood, metal, or concrete terrifies me.  And yet, like so many other people around the world, I've "built" all kinds of furniture in our house--at least I've assembled it.

In case you haven't bought furniture from IKEA, they are famous for designing not only the furniture itself, but also the materials and packaging, and for saving cost by offloading most of the assembly to the customer.  Each piece comes with instructions, showing the parts manifest, tools you'll need (often simple ones are included), and pictorial, step-wise instructions for assembling the piece.

IKEA's model is amazing: amazing that people will do it, amazing that it's doable at all by the general public!  You're asking people to do a task that they a) probably have never done before; b) probably won't do again.  Sometimes you'll buy 4 of some piece, a chair, and through repeated trial and error, get to the point where you can assemble it intuitively.  But this isn't the normal use case.  For the most part, we buy something we don't have, assemble it, and then we have it.  This means that the process has to work during the initial attempt, without training.  IKEA is keen that it work because they don't want you to return it, or worse, never come back again.

Last week I assembled all kinds of things for a few rooms in our basement: chairs, a couch, tables, etc.  I spent hours looking at, and working my way through IKEA instructions.  Take another look at the Billy instructions I included above.  Here's some of what I notice:

  • It starts with the end-goal: here is how things should look when you're done
  • It tells you what tools you'll need in order to make this happen and, importantly, imposes strict limits on the sorts of tools that might be required.  An expert could probably make use of more advanced tools; but this isn't for experts.
  • It gives you a few GOTCHAs to avoid up front.  "Be careful to do it this way, not that way." This repeats throughout the rest of the steps.  Like this, not that.
  • It itemizes and names (via part number) all the various pieces you'll need.  There should be 16 of these, 18 of these, etc.
  • It takes you step-by-step through manipulating the parts on the floor into the product you saw in the store, all without words.
  • Now look at how short this thing is.  The information density is high even though the complexity is low.

It got me thinking about lessons we could learn when filing issues in open source projects.  I realize that there isn't a perfect analogy between assembling furniture and fixing a bug.  IKEA mass produces the same bookshelf, chairs, and tables, and these instructions work on all of them.  Meanwhile, a bug (hopefully) vanishes as soon as it's fixed.  We can't put the same effort into our instructions for a one-off experience as we can for a mass produced one.  However, in both cases, I would stress that the experience is similar for the person working through the "assembly": it's often their first time following these steps.

When filing a GitHub issue, what could we learn from IKEA instructions?

  1. Show the end goal of the work.  "This issue is about moving this button to the right.  Currently it looks like this and we want it to look like this."  A lot of people do this, especially with visual, UI related bugs.  However, we could do a version of it on lots of non-visual bugs too.  Here is what you're trying to achieve with this work.  When we file bugs, we assume this is always clear.  But imagine it needs to be clear based solely on these "instructions."
  2. List the tools you'll need to accomplish this, and include any that are not common.  We do this sometimes. "Go read the CONTRIBUTING.md page."  That might be enough.  But we could put more effort into calling out specific things you'll need that might not be obvious, URLs to things, command invocation examples, etc.  I think a lot of people bristle at the idea of using issues to teach people "common knowledge."  I agree that there's a limit to what is reasonable in an issue (recall how short IKEA's was).  But we often err on the side of not-enough, and assume that our knowledge is the same as our reader's.  It almost certainly won't be if this is for a new contributor.
  3. Call out the obstacles in the way of accomplishing this work.  Probably there are some things you should know about how the tests run when we change this part of the code.  Or maybe you need to be aware that we need to run some script after we update things in this directory.  Any mistakes that people have made in the past, and which haven't been dealt with through automation, are probably in scope here.  Even better, put them in a more sticky location like the official docs, and link to them from here.
  4. Include a manifest of the small parts involved.  For example, see the lines of code here, here, and here.  You'll have to update this file, this file, and that file.  This is the domain of the change you're going to need to make.  Be clear about what's involved.  I've done this a lot, and it often doesn't take much time when you know the code well.  However, for the new contributor, this is a lifesaver.
  5. Include a set of steps that one could follow on the way to making this fix.  This is especially important in the case that changes need to happen in a sequence.

These steps aren't always possible or practical.  But it takes less work than you might think, and the quality of the contributions you get as a result is worth the upfront investment.  In reality, you'll likely end up having to do it in reviews after the fact, when people get it wrong.  Try doing it at the beginning.

Here's a fantastic example of how to do it well.  I've tweeted about this in the past, but Devon Abbott's issue in the Lona repo is fantastic: https://github.com/airbnb/Lona/issues/338.  Here we see many of the things outlined above.  As a result of this initial work, one of my students was able to jump in.

I want to be careful to not assume that everyone has time to do all this when filing bugs.  Not all projects are meant for external contributors (GitHub actually needs some kind of signal so that people know when to engage and when to avoid certain repos), and not all developers on GitHub are looking to mentor or work with new contributors.  Regardless, I think we could all improve our issues if we thought back to these IKEA instructions from time to time.  A lot of code fixes and regular maintenance tasks should really feel more like assembling furniture vs. hand carving a table leg.  There's so much to do to keep all this code working, we are going to have to find ways to engage and involve new generations of developers who need a hand getting started.

Categories: Mozilla-nl planet

The Firefox Frontier: Stop video autoplay with Firefox

Mozilla planet - Wed, 04/09/2019 - 18:00

You know that thing where you go to a website and a video starts playing automatically? Sometimes it’s a video created by the site, and sometimes it’s an ad. That … Read more

The post Stop video autoplay with Firefox appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Debugging WebAssembly Outside of the Browser

Mozilla planet - Wed, 04/09/2019 - 16:31

WebAssembly has begun to establish itself outside of the browser via dedicated runtimes like Mozilla’s Wasmtime and Fastly’s Lucet. While the promise of a new, universal format for programs is appealing, it also comes with new challenges. For instance, how do you debug .wasm binaries?

At Mozilla, we’ve been prototyping ways to enable source-level debugging of .wasm files using traditional tools like GDB and LLDB.

The screencast below shows an example debugging session. Specifically, it demonstrates using Wasmtime and LLDB to inspect a program originally written in Rust, but compiled to WebAssembly.

This type of source-level debugging was previously impossible. And while the implementation details are subject to change, the developer experience—attaching a normal debugger to Wasmtime—will remain the same.
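
Concretely, the workflow looks roughly like the commands below. The program name is a placeholder and the exact Wasmtime invocation varies between versions (debug-info generation may need to be enabled with a flag), so treat this as a sketch rather than copy-paste instructions:

# Build the Rust program for the WASI target (a debug build keeps DWARF info).
$ cargo build --target wasm32-wasi

# Attach LLDB to the Wasmtime process that runs the module.
$ lldb -- wasmtime target/wasm32-wasi/debug/hello.wasm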

By allowing developers to examine programs in the same execution environment as a production WebAssembly program, Wasmtime’s debugging support makes it easier to catch and diagnose bugs that may not arise in a native build of the same code. For example, the WebAssembly System Interface (WASI) treats filesystem access more strictly than traditional Unix-style permissions. This could create issues that only manifest in WebAssembly runtimes.
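
For example, a WASI program only sees directories that the host explicitly grants it; with the Wasmtime CLI that grant happens per run, so file access that works in a native build can fail under WebAssembly until the directory is preopened:

# Grant the module access to the current directory only.
$ wasmtime --dir=. program.wasm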

Mozilla is proactively working to ensure that WebAssembly’s development tools are capable, complete, and ready to go as WebAssembly expands beyond the browser.

Please try it out and let us know what you think.

Note: Debugging using Wasmtime and LLDB should work out of the box on Linux with Rust programs, or with C/C++ projects built via the WASI SDK.

Debugging on macOS currently requires building and signing a more recent version of LLDB.

Unfortunately, LLDB for Windows does not yet support JIT debugging.

Thanks to Lin Clark, Till Schneidereit, and Yury Delendik for their assistance on this post, and for their work on WebAssembly debugging.

The post Debugging WebAssembly Outside of the Browser appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Daniel Stenberg: FIPS ready with curl

Mozilla planet - Wed, 04/09/2019 - 09:13

Download wolfSSL fips ready (in my case I got wolfssl-4.1.0-gplv3-fips-ready.zip)

Unzip the source code somewhere suitable

$ cd $HOME/src
$ unzip wolfssl-4.1.0-gplv3-fips-ready.zip
$ cd wolfssl-4.1.0-gplv3-fips-ready

Build the fips-ready wolfSSL and install it somewhere suitable

$ ./configure --prefix=$HOME/wolfssl-fips --enable-harden --enable-all
$ make -sj
$ make install

Download curl, the normal curl package. (in my case I got curl 7.65.3)

Unzip the source code somewhere suitable

$ cd $HOME/src
$ unzip curl-7.65.3.zip
$ cd curl-7.65.3

Build curl with the just recently built and installed fips ready wolfSSL version.

$ LD_LIBRARY_PATH=$HOME/wolfssl-fips/lib ./configure --with-wolfssl=$HOME/wolfssl-fips --without-ssl
$ make -sj

Now, verify that your new build matches your expectations by:

$ ./src/curl -V

It should show that it uses wolfSSL and that all the protocols and features you want are enabled and present. If not, iterate until it does!

“FIPS Ready means that you have included the FIPS code into your build and that you are operating according to the FIPS enforced best practices of default entry point, and Power On Self Test (POST).”

Categories: Mozilla-nl planet

Nick Fitzgerald: Combining Coverage-Guided and Generation-Based Fuzzing

Mozilla planet - Wed, 04/09/2019 - 09:00

Coverage-guided fuzzing and generation-based fuzzing are two powerful approaches to fuzzing. It can be tempting to think that you must either use one approach or the other at a time, and that they can’t be combined. However, this is not the case. In this blog post I’ll describe a method for combining coverage-guided fuzzing with structure-aware generators that I’ve found to be both effective and practical.

What is Generation-Based Fuzzing?

Generation-based fuzzing leverages a generator to create random instances of the fuzz target’s input type. The csmith program, which generates random C source text, is one example generator. Another example is any implementation of the Arbitrary trait from the quickcheck property-testing framework. The fuzzing engine repeatedly uses the generator to create new inputs and feeds them into the fuzz target.

// Generation-based fuzzing algorithm.

fn generate_input(rng: &mut impl Rng) -> MyInputType {
    // Generator provided by the user...
}

loop {
    let input = generate_input(rng);
    let result = run_fuzz_target(input);

    // If the fuzz target crashed/panicked/etc report
    // that.
    if result.is_interesting() {
        report_interesting(input);
    }
}

The generator can be made structure aware, leveraging knowledge about the fuzz target to generate inputs that are more likely to be interesting. They can generate valid C programs for fuzzing a C compiler. They can make inputs with the correct checksums and length prefixes. They can create instances of typed structures in memory, not just byte buffers or strings. But naïve generation-based fuzzing can’t learn from the fuzz target’s execution to evolve its inputs. The generator starts from square one each time it is invoked.

What is Coverage-Guided Fuzzing?

Rather than throwing purely random inputs at the fuzz target, coverage-guided fuzzers instrument the fuzz target to collect code coverage. The fuzzer then leverages this coverage information as feedback to mutate existing inputs into new ones, and tries to maximize the amount of code covered by the total input corpus. Two popular coverage-guided fuzzers are libFuzzer and AFL.

// Coverage-guided fuzzing algorithm.

let corpus = initial_set_of_inputs();

loop {
    let old_input = choose_one(&corpus);
    let new_input = mutate(old_input);
    let result = run_fuzz_target(new_input);

    // If the fuzz target crashed/panicked/etc report
    // that.
    if result.is_interesting() {
        report_interesting(new_input);
    }

    // If executing the input hit new code paths, add
    // it to our corpus.
    if result.executed_new_code_paths() {
        corpus.insert(new_input);
    }
}

The coverage-guided approach is great at improving a fuzzer’s efficiency at creating new inputs that poke at new parts of the program, rather than testing the same code paths repeatedly. However, in its naïve form, it is easily tripped up by the presence of things like checksums in the input format: mutating a byte here means that a checksum elsewhere must be updated as well, or else execution will never proceed beyond validating the checksum to reach more interesting parts of the program.
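
To make the checksum pitfall concrete, here is a small illustrative target (not from either fuzzer's documentation); parse_body stands in for the code we actually want to explore, and the checksum is deliberately a toy one:

fn parse_body(_body: &[u8]) {
    // ... the interesting code we actually want the fuzzer to reach ...
}

fn my_checksummed_fuzz_target(data: &[u8]) {
    if data.len() < 4 {
        return;
    }
    let (body, tail) = data.split_at(data.len() - 4);
    let expected = u32::from_le_bytes([tail[0], tail[1], tail[2], tail[3]]);

    // Toy checksum: wrapping sum of the body bytes. A random byte flip in
    // `body` almost never keeps this equation true, so naive mutation
    // rarely gets past the guard below.
    let actual = body.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32));
    if actual != expected {
        return;
    }

    parse_body(body);
}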

The Problem

Coverage-based fuzzing lacks a generator’s understanding of which inputs are well-formed, while generation-based fuzzing lacks coverage-guided fuzzing’s ability to mutate inputs strategically. We would like to combine coverage-guided and generation-based fuzzing to get the benefits of both without the weaknesses of either.

So how do we implement a fuzz target?

When writing a fuzz target for use with coverage-guided fuzzers, you’re typically given some byte buffer, and you feed that into your parser or process it in whatever way is relevant.

// Implementation of a fuzz target for a
// coverage-guided fuzzer.
fn my_coverage_guided_fuzz_target(input: &[u8]) {
    // Do stuff with `input`...
}

However, when writing a fuzz target for use with a generation-based fuzzer, instead of getting a byte buffer, you’re given a structure-aware input that was created by your generator.

// Implementation of a fuzz target for a generation-
// based fuzzer using `csmith`.
fn my_c_smith_fuzz_target(input: MyCsmithOutput) {
    // Compile the C source text that was created
    // by `csmith`...
}

// Implementation of a generator and fuzz target for
// a generation-based fuzzer using the `quickcheck`
// property-testing framework.
impl quickcheck::Arbitrary for MyType {
    fn arbitrary(g: &mut impl quickcheck::Gen) -> MyType {
        // Generate a random instance of `MyType`...
    }
}

fn my_quickcheck_fuzz_target(input: MyType) {
    // Do stuff with the `MyType` instance that was
    // created by `MyType::arbitrary`...
}

The signatures for coverage-guided and generation-based fuzz targets have different input types, and it isn’t obvious how we can reuse a single fuzz target with each approach to fuzzing separately, let alone combine both approaches at the same time.

A Solution

As a running example, let’s imagine we are authoring a crate that converts back and forth between RGB and HSL colors.

/// A color represented with RGB.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Rgb {
    pub r: u8,
    pub g: u8,
    pub b: u8,
}

impl Rgb {
    pub fn to_hsl(&self) -> Hsl {
        // ...
    }
}

/// A color represented with HSL.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Hsl {
    pub h: f64,
    pub s: f64,
    pub l: f64,
}

impl Hsl {
    pub fn to_rgb(&self) -> Rgb {
        // ...
    }
}

Now, let’s start by writing a couple structure-aware test case generators by implementing quickcheck::Arbitrary for our color types, and then using quickcheck’s default test runner to get generation-based fuzzing up and running.

Here are our Arbitrary implementations:

use rand::prelude::*;
use quickcheck::{Arbitrary, Gen};

impl Arbitrary for Rgb {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        Rgb {
            r: g.gen(),
            g: g.gen(),
            b: g.gen(),
        }
    }
}

impl Arbitrary for Hsl {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        Hsl {
            h: g.gen_range(0.0, 360.0),
            s: g.gen_range(0.0, 1.0),
            l: g.gen_range(0.0, 1.0),
        }
    }
}

Now we need to define what kinds of properties we want to check and what oracles we want to implement to check those properties for us. One of the simplest things we can assert is "doing operation X does not panic or crash", but we might also assert that converting RGB into HSL and back into RGB again is the identity function.

pub fn rgb_to_hsl_doesnt_panic(rgb: Rgb) {
    let _ = rgb.to_hsl();
}

pub fn rgb_to_hsl_to_rgb_is_identity(rgb: Rgb) {
    assert_eq!(rgb, rgb.to_hsl().to_rgb());
}

#[cfg(test)]
mod tests {
    quickcheck::quickcheck! {
        fn rgb_to_hsl_doesnt_panic(rgb: Rgb) -> bool {
            super::rgb_to_hsl_doesnt_panic(rgb);
            true
        }
    }

    quickcheck::quickcheck! {
        fn rgb_to_hsl_to_rgb_is_identity(rgb: Rgb) -> bool {
            super::rgb_to_hsl_to_rgb_is_identity(rgb);
            true
        }
    }
}

Now we can run cargo test and quickcheck will do some generation-based fuzzing for our RGB and HSL conversion crate — great!

However, so far we have just done plain old generation-based fuzzing. We also want to get the advantages of coverage-guided fuzzing without giving up our nice structure-aware generators.

An easy and practical — yet still effective — way to add coverage-guided fuzzing into the mix, is to treat the raw byte buffer input provided by the coverage-guided fuzzer as a sequence of random values and implement a “random” number generator around it. If our generators only use “random” values from this RNG, never from any other source, then the coverage-guided fuzzer can mutate and evolve its inputs to directly control the structure-aware fuzzer and its generated test cases!

The BufRng type from my bufrng crate is exactly this “random” number generator implementation:

// Snippet from the `bufrng` crate.

use rand_core::{Error, RngCore};
use std::iter::{Chain, Repeat};
use std::slice;

/// A "random" number generator that yields values
/// from a given buffer (and then zeros after the
/// buffer is exhausted).
pub struct BufRng<'a> {
    iter: Chain<slice::Iter<'a, u8>, Repeat<&'a u8>>,
}

impl BufRng<'_> {
    /// Construct a new `BufRng` that yields values
    /// from the given `data` buffer.
    pub fn new(data: &[u8]) -> BufRng {
        BufRng {
            iter: data.iter().chain(iter::repeat(&0)),
        }
    }
}

impl RngCore for BufRng<'_> {
    fn next_u32(&mut self) -> u32 {
        let a = *self.iter.next().unwrap() as u32;
        let b = *self.iter.next().unwrap() as u32;
        let c = *self.iter.next().unwrap() as u32;
        let d = *self.iter.next().unwrap() as u32;
        (a << 24) | (b << 16) | (c << 8) | d
    }

    // ...
}

BufRng will generate “random” values, which are copied from its given buffer. Once the buffer is exhausted, it will keep yielding zero.

let mut rng = BufRng::new(&[1, 2, 3, 4]);

assert_eq!(
    rng.gen::<u32>(),
    (1 << 24) | (2 << 16) | (3 << 8) | 4,
);

assert_eq!(rng.gen::<u32>(), 0);
assert_eq!(rng.gen::<u32>(), 0);
assert_eq!(rng.gen::<u32>(), 0);

Because there is a blanket implementation of quickcheck::Gen for all types that implement rand_core::RngCore, we can use BufRng with any quickcheck::Arbitrary implementation.

Finally, here is a fuzz target definition for a coverage-guided fuzzer that uses BufRng to reinterpret the input into something that our Arbitrary implementations can use:

fn my_fuzz_target(input: &[u8]) {
    // Create a `BufRng` from the raw input byte buffer
    // given to us by the fuzzer.
    let mut rng = BufRng::new(input);

    // Generate an `Rgb` instance with it.
    let rgb = Rgb::arbitrary(&mut rng);

    // Assert our properties!
    rgb_to_hsl_doesnt_panic(rgb);
    rgb_to_hsl_to_rgb_is_identity(rgb);
}

With BufRng, going from quickcheck property tests to combined structure-aware test case generation and coverage-guided fuzzing only required minimal changes!
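
To actually run this under a coverage-guided fuzzer, the only piece left is a thin entry point. With cargo fuzz and libfuzzer-sys that file could look like the sketch below, assuming my_fuzz_target is exported from the crate under test (called my_colors here as a placeholder):

// fuzz/fuzz_targets/colors.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    my_colors::my_fuzz_target(data);
});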

Conclusion

We can get the benefits of both structure-aware test case generators and coverage-guided fuzzing in an easy, practical way by interpreting the fuzzer’s raw input as a sequence of random values that our generator uses. Because BufRng can be used with quickcheck::Arbitrary implementations directly, it is easy to make the leap from generation-based fuzzing with quickcheck property tests to structure-aware, coverage-guided fuzzing with libFuzzer or AFL. With this setup, the fuzzer can both learn from program execution feedback to create new inputs that are more effective, and it can avoid checksum-style pitfalls.

If you’d like to learn more, check out these resources:

  • Structure-Aware Fuzzing with libFuzzer. A great tutorial describing a few different ways to do exactly what it says on the tin.

  • Write Fuzzable Code by John Regehr. A nice collection of things you can do to make your code easier to fuzz and get the most out of your fuzzing.

  • cargo fuzz. A tool that makes using libFuzzer with Rust a breeze.

  • quickcheck. A port of the popular property-testing framework to Rust.

  • bufrng. A “random” number generator that yields pre-determined values from a buffer (e.g. the raw input generated by libFuzzer or AFL).

Finally, thanks to Jim Blandy and Jason Orendorff for reading an early draft of this blog post. This text is better because of their feedback, and any errors that remain are my own.

Post Script

After I originally shared this blog post, a few people made a few good points that I think are worth preserving and signal boosting here!

Rohan Padhye et al implemented this same idea for Java, and they wrote a paper about it: JQF: Coverage-Guided Property-Based Testing in Java

Manish Goregaokar pointed out that cargo-fuzz uses the arbitrary crate, which has its own Arbitrary trait that is different from quickcheck::Arbitrary. Notably, its paradigm is “generate an instance of Self from this buffer of bytes” rather than “generate an instance of Self from this RNG” like quickcheck::Arbitrary does. If you are starting from scratch, rather than reusing existing quickcheck tests, it likely makes sense to implement the arbitrary::Arbitrary trait instead of (or in addition to!) the quickcheck::Arbitrary trait, since using it with a coverage-guided fuzzer doesn’t require the BufRng trick.
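
For comparison, here is a rough sketch of the same target written against the arbitrary crate instead of BufRng, assuming the color crate (my_colors, a placeholder) derives arbitrary::Arbitrary for Rgb via the crate's derive feature:

// fuzz/fuzz_targets/colors_typed.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

// Assumes #[derive(arbitrary::Arbitrary)] on Rgb in the crate under test.
fuzz_target!(|rgb: my_colors::Rgb| {
    my_colors::rgb_to_hsl_doesnt_panic(rgb);
    my_colors::rgb_to_hsl_to_rgb_is_identity(rgb);
});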

John Regehr pointed out that this pattern works well when small mutations to the input string result in small changes to the generated structure. It works less well when a small change (e.g. at the start of the string) guides the generator to a completely different path, generating a completely different structure, which leads the fuzz target down an unrelated code path, and ultimately we aren’t really doing “proper” coverage-guided, mutation-based fuzzing anymore. Rohan Padhye reported that he experimented with ways to avoid this pitfall, but found that the medicine was worse than the affliction: the overhead of avoiding small changes in the prefix vastly changing the generated structure was greater than just running the fuzz target on the potentially very different generated structure.

One thing that I think would be neat to explore would be a way to avoid this problem by design: design an Arbitrary-style trait that instead of generating an arbitrary instance of Self, takes &mut self and makes an arbitrary mutation to itself as guided by an RNG. Maybe we would allow the caller to specify if the mutation should grow or shrink self, and then you could build test case reduction “for free” as well. Here is a quick sketch of what this might look like:

pub enum GrowOrShrink {
    // Make `self` bigger.
    Grow,
    // Make `self` smaller.
    Shrink,
}

trait ArbitraryMutation {
    // Mutate `self` with a random mutation to be either bigger
    // or smaller. Return `Ok` if successfully mutated, `Err`
    // if `self` can't get any bigger/smaller.
    fn arbitrary_mutation(
        &mut self,
        rng: &mut impl Rng,
        grow_or_shrink: GrowOrShrink,
    ) -> Result<(), ()>;
}

// Example implementation for `u64`.
impl ArbitraryMutation for u64 {
    fn arbitrary_mutation(
        &mut self,
        rng: &mut impl Rng,
        grow_or_shrink: GrowOrShrink,
    ) -> Result<(), ()> {
        match grow_or_shrink {
            GrowOrShrink::Grow if *self == u64::MAX => Err(()),
            GrowOrShrink::Grow => {
                *self = rng.gen_range(*self + 1, u64::MAX);
                Ok(())
            }
            GrowOrShrink::Shrink if *self == 0 => Err(()),
            GrowOrShrink::Shrink => {
                *self = rng.gen_range(0, *self - 1);
                Ok(())
            }
        }
    }
}
Categories: Mozilla-nl planet

The Firefox Frontier: Recommended Extensions program—where to find the safest, highest quality extensions for Firefox

Mozilla planet - Tue, 03/09/2019 - 18:00

Extensions can add powerful customization features to Firefox—everything from ad blockers and tab organizers to enhanced privacy tools, password managers, and more. With thousands of extensions to choose from—either those … Read more

The post Recommended Extensions program—where to find the safest, highest quality extensions for Firefox appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet

Mozilla Addons Blog: Mozilla’s Manifest v3 FAQ

Mozilla planet - Tue, 03/09/2019 - 17:01
What is Manifest v3?

Chrome versions the APIs they provide to extensions, and the current format is version 2. The Firefox WebExtensions API is nearly 100% compatible with version 2, allowing extension developers to easily target both browsers.

In November 2018, Google proposed an update to their API, which they called Manifest v3. This update includes a number of changes that are not backwards-compatible and will require extension developers to take action to remain compatible.

A number of extension developers have reached out to ask how Mozilla plans to respond to the changes proposed in v3. Following are answers to some of the frequently asked questions.

Why do these changes negatively affect content blocking extensions?

One of the proposed changes in v3 is to deprecate a very powerful API called blocking webRequest. This API gives extensions the ability to intercept all inbound and outbound traffic from the browser, and then block, redirect or modify that traffic.
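
For context, this is what the blocking form of the API lets an extension do today: run arbitrary logic for every request and decide the outcome itself (the shouldBlock helper below is hypothetical):

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Arbitrary logic can run here: consult filter lists, heuristics,
    // per-site state, and so on.
    if (shouldBlock(details.url)) {   // shouldBlock is a hypothetical helper
      return { cancel: true };
    }
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);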

In its place, Google has proposed an API called declarativeNetRequest. This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves. Extensions would still be able to use webRequest but only to observe requests, not to modify or block them.

As a result, some content blocking extension developers have stated they can no longer maintain their add-on if Google decides to follow through with their plans. Those who do continue development may not be able to provide the same level of capability for their users.

Will Mozilla follow Google with these changes?

In the absence of a true standard for browser extensions, maintaining compatibility with Chrome is important for Firefox developers and users. Firefox is not, however, obligated to implement every part of v3, and our WebExtensions API already departs in several areas under v2 where we think it makes sense.

Content blocking: We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them.

Background service workers: Manifest v3 proposes the implementation of service workers for background processes to improve performance. We are currently investigating the impact of this change, what it would mean for developers, and whether there is a benefit in continuing to maintain background pages.

Runtime host permissions: We are evaluating the proposal in Manifest v3 to give users more granular control over the sites they give permissions to, and investigating ways to do so without too much interruption and confusion.

Cross-origin communication: In Manifest v3, content scripts will have the same permissions as the page they are injected in. We are planning to implement this change.

Remotely hosted code: Firefox already does not allow remote code as a policy. Manifest v3 includes a proposal for additional technical enforcement measures, which we are currently evaluating and intend to also enforce.

Will my extensions continue to work in Manifest v3?

Google’s proposed changes, such as the use of service workers in the background process, are not backwards-compatible. Developers will have to adapt their add-ons to accommodate these changes.

That said, the changes Google has proposed are not yet stabilized. Therefore, it is too early to provide specific guidance on what to change and how to do so. Mozilla is waiting for more clarity and has begun investigating the effort needed to adapt.

We will provide ongoing updates about changes necessary on the add-ons blog.

What is the timeline for these changes?

Given Manifest v3 is still in the draft and design phase, it is too early to provide a specific timeline. We are currently investigating what level of effort is required to make the changes Google is proposing, and identifying where we may depart from their plans.

Later this year we will begin experimenting with the changes we feel have a high chance of being part of the final version of Manifest v3, and that we think make sense for our users. Early adopters will have a chance to test our changes in the Firefox Nightly and Beta channels.

Once Google has finalized their v3 changes and Firefox has implemented the parts that make sense for our developers and users, we will provide ample time and documentation for extension developers to adapt. We do not intend to deprecate the v2 API before we are certain that developers have a viable path forward to migrate to v3.

Keep your eyes on the add-ons blog for updates regarding Manifest v3 and some of the other work our team is up to. We welcome your feedback on our community forum.

 

The post Mozilla’s Manifest v3 FAQ appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools

Mozilla planet - di, 03/09/2019 - 16:15

For our latest excellent adventure, we’ve gone and cooked up a new Firefox release. Version 69 features a number of nice new additions including JavaScript public instance fields, the Resize Observer and Microtask APIs, CSS logical overflow properties (e.g. overflow-block), and @supports for selectors.

We will also look at highlights from the raft of new debugging features in the Firefox 69 DevTools, including console message grouping, event listener breakpoints, and text label checks.

This blog post provides merely a set of highlights; for all the details, check out the following:

The new CSS

Firefox 69 supports a number of new CSS features; the most interesting are as follows.

New logical properties for overflow

69 sees support for some new logical properties — overflow-block and overflow-inline — which control the overflow of an element’s content in the block or inline dimension respectively.

These properties map to overflow-x or overflow-y, depending on the content’s writing-mode. Using these new logical properties instead of overflow-x and overflow-y makes your content easier to localize, especially when adapting it to languages using a different writing direction. They can also take the same values — visible, hidden, scroll, auto, etc.

Note: Look at Handling different text directions if you want to read up on these concepts.

@supports for selectors

The @supports at-rule has long been very useful for selectively applying CSS only when a browser supports a particular property, or doesn’t support it.

Recently this functionality has been extended so that you can apply CSS only if a particular selector is or isn’t supported. The syntax looks like this:

@supports selector(selector-to-test) { /* insert rules here */ }

We are supporting this functionality by default in Firefox 69 onwards. Find some usage examples here.

JavaScript gets public instance fields

The most interesting addition we’ve had to the JavaScript language in Firefox 69 is support for public instance fields in JavaScript classes. This allows you to specify properties you want the class to have up front, making the code more logical and self-documenting, and the constructor cleaner. For example:

class Product {
  name;
  tax = 0.2;
  basePrice = 0;
  price;

  constructor(name, basePrice) {
    this.name = name;
    this.basePrice = basePrice;
    this.price = (basePrice * (1 + this.tax)).toFixed(2);
  }
}

Notice that you can include default values if wished. The class can then be used as you’d expect:

let bakedBeans = new Product('Baked Beans', 0.59);
console.log(`${bakedBeans.name} cost $${bakedBeans.price}.`);

Private instance fields (which can’t be set or referenced outside the class definition) are very close to being supported in Firefox, and also look to be very useful. For example, we might want to hide the details of the tax and base price. Private fields are indicated by a hash symbol in front of the name:

#tax = 0.2;
#basePrice = 0;

The wonder of WebAPIs

There are a couple of new WebAPIs enabled by default in Firefox 69. Let’s take a look.

Resize Observer

Put simply, the Resize Observer API allows you to easily observe and respond to changes in the size of an element’s content or border box. It provides a JavaScript solution to the often-discussed lack of “element queries” in the web platform.

A trivial example might be something like the following (resize-observer-border-radius.html, see the source also), which adjusts the border-radius of a <div> as it gets smaller or bigger:

const resizeObserver = new ResizeObserver(entries => {
  for (let entry of entries) {
    if (entry.contentBoxSize) {
      entry.target.style.borderRadius = Math.min(100, (entry.contentBoxSize.inlineSize/10) +
                                                      (entry.contentBoxSize.blockSize/10)) + 'px';
    } else {
      entry.target.style.borderRadius = Math.min(100, (entry.contentRect.width/10) +
                                                      (entry.contentRect.height/10)) + 'px';
    }
  }
});

resizeObserver.observe(document.querySelector('div'));

“But you can just use border-radius with a percentage”, I hear you cry. Well, sort of. But that quickly leads to ugly-looking elliptical corners, whereas the above solution gives you nice square corners that scale with the box size.

Another, slightly less trivial example is the following (resize-observer-text.html , see the source also):

if (window.ResizeObserver) {
  const h1Elem = document.querySelector('h1');
  const pElem = document.querySelector('p');
  const divElem = document.querySelector('body > div');
  const slider = document.querySelector('input');

  divElem.style.width = '600px';

  slider.addEventListener('input', () => {
    divElem.style.width = slider.value + 'px';
  })

  const resizeObserver = new ResizeObserver(entries => {
    for (let entry of entries) {
      if (entry.contentBoxSize) {
        h1Elem.style.fontSize = Math.max(1.5, entry.contentBoxSize.inlineSize/200) + 'rem';
        pElem.style.fontSize = Math.max(1, entry.contentBoxSize.inlineSize/600) + 'rem';
      } else {
        h1Elem.style.fontSize = Math.max(1.5, entry.contentRect.width/200) + 'rem';
        pElem.style.fontSize = Math.max(1, entry.contentRect.width/600) + 'rem';
      }
    }
  });

  resizeObserver.observe(divElem);
}

Here we use the resize observer to change the font-size of a header and paragraph as a slider’s value is changed, causing the containing <div> to change its width. This shows that you can respond to changes in an element’s size, even if they have nothing to do with the viewport size changing.

So to summarise, Resize Observer opens up a wealth of new responsive design work that was difficult to achieve with CSS features alone. We’re even using it to implement a new responsive version of our new DevTools JavaScript console!

Microtasks

The Microtasks API provides a single method — queueMicrotask(). This is a low-level method that enables us to directly schedule a callback on the microtask queue. It schedules code to be run immediately before control returns to the event loop, so you are assured a reliable running order (using setTimeout(() => {}, 0), for example, can give unreliable results).

The syntax is as simple to use as other timing functions:

self.queueMicrotask(() => {
  // function contents here
})

The use cases are subtle, but make sense when you read the explainer section in the spec. The biggest benefactors here are framework vendors, who like lower-level access to scheduling. Using this will reduce hacks and make frameworks more predictable cross-browser.
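As a small illustration of that ordering guarantee (a sketch, not from the original post), a microtask queued during a script always runs before a zero-delay timeout queued at the same time:

console.log('sync 1');

setTimeout(() => console.log('timeout'), 0);     // waits for the event loop
queueMicrotask(() => console.log('microtask'));  // runs before control returns to it

console.log('sync 2');

// Logged order: "sync 1", "sync 2", "microtask", "timeout"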

Developer tools updates in 69

There are various interesting additions to the DevTools in 69, so be sure to go and check them out!

Event breakpoints and async functions in the JS debugger

The JavaScript debugger has some cool new features for stepping through and examining code:

New remote debugging

In the new shiny about:debugging page, you’ll find a grouping of options for remotely debugging devices, with more to follow in the future. In 69, we’ve enabled a new mechanism for allowing you to remotely debug other versions of Firefox, on the same machine or other machines on the same network (see Network Location).

Console message grouping

In the console, we now group together similar error messages, with the aim of making the console tidier, spamming developers less, and making them more likely to pay attention to the messages. In turn, this can have a positive effect on security/privacy.

The new console message grouping looks like this, when in its initial closed state:

When you click the arrow to open the message list, it shows you all the individual messages that are grouped:

Initially the grouping occurs on CSP, CORS, and tracking protection errors, with more categories to follow in the future.

Flex information in the picker infobar

Next up, we’ll have a look at the Page inspector. When using the picker, or hovering over an element in the HTML pane, the infobar for the element now shows when it is a flex container, item, or both.

website nav menu with infobar pointing out that it is a flex item

See this page for more details.

Text Label Checks in the Accessibility Inspector

A final great feature to mention is the new text label checks feature of the Accessibility Inspector.

When you choose Check for issues > Text Labels from the dropdown box at the top of the Accessibility Inspector, it marks each node in the accessibility tree with a warning sign if it is missing a descriptive text label. The Checks pane on the right-hand side then gives a description of the problem, along with a Learn more link that takes you to more detailed information available on MDN.

WebExtensions updates

Last but not least, let’s give a mention to WebExtensions! The main feature to make it into Firefox 69 is User Scripts — these are a special kind of extension content script that, when registered, instructs the browser to insert the given scripts into pages that match the given URL patterns.
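For the curious, registration looks roughly like the sketch below. This is an illustration only: it assumes an extension whose manifest declares the user_scripts key, and the file name and match pattern are made up.

// Run from the extension's background page.
async function registerExampleUserScript() {
  // "highlight-headings.js" and the match pattern are hypothetical.
  await browser.userScripts.register({
    js: [{ file: "highlight-headings.js" }],
    matches: ["*://example.org/*"],
    runAt: "document_idle"
  });
}

registerExampleUserScript();

The object the register() promise resolves to can later be used to unregister the script again.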

See also

In this post we’ve reviewed the main web platform features added in Firefox 69. You can also read up on the main new features of the Firefox browser — see the Firefox 69 Release Notes.

The post Firefox 69 — a tale of Resize Observer, microtasks, CSS, and DevTools appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Botond Ballo: A case study in analyzing C++ compiler errors: why is the compiler trying to copy my move-only object?

Mozilla planet - di, 03/09/2019 - 16:00

Recently a coworker came across a C++ compiler error message that seemed baffling, as they sometimes tend to be.

We figured it out together, and in the hope of perhaps saving some others from being stuck on it too long, I thought I’d describe it.

The code pattern that triggers the error can be distilled down into the following:

#include <utility> // for std::move

// A type that's move-only.
struct MoveOnly {
  MoveOnly() = default;

  // copy constructor deleted
  MoveOnly(const MoveOnly&) = delete;

  // move constructor defaulted or defined
  MoveOnly(MoveOnly&&) = default;
};

// A class with a MoveOnly field.
struct S {
  MoveOnly field;
};

// A function that tries to move objects of type S
// in a few contexts.
void foo() {
  S obj;

  // move it into a lambda
  [obj = std::move(obj)]() {
    // move it again inside the lambda
    S moved = std::move(obj);
  }();
}

The error is:

test.cpp: In lambda function:
test.cpp:19:28: error: use of deleted function ‘S::S(const S&)’
   19 |     S moved = std::move(obj);
      |                             ^
test.cpp:11:8: note: ‘S::S(const S&)’ is implicitly deleted because the default definition would be ill-formed:
   11 | struct S {
      |        ^
test.cpp:11:8: error: use of deleted function ‘MoveOnly::MoveOnly(const MoveOnly&)’
test.cpp:6:3: note: declared here
    6 |   MoveOnly(const MoveOnly&) = delete;
      |   ^~~~~~~~

The reason the error is baffling is that we’re trying to move an object, but getting an error about a copy constructor being deleted. The natural reaction is: “Silly compiler. Of course the copy constructor is deleted; that’s by design. Why aren’t you using the move constructor?”

The first thing to remember here is that deleting a function using = delete does not affect overload resolution. Deleted functions are candidates in overload resolution just like non-deleted functions, and if a deleted function is chosen by overload resolution, you get a hard error.

Any time you see an error of the form “use of deleted function F”, it means overload resolution has already determined that F is the best candidate.

In this case, the error suggests S’s copy constructor is a better candidate than S’s move constructor, for the initialization S moved = std::move(obj);. Why might that be?

To reason about the overload resolution process, we need to know the type of the argument, std::move(obj). In turn, to reason about that, we need to know the type of obj.

That’s easy, right? The type of obj is S. It’s right there: S obj;.

Not quite! There are actually two variables named obj here. S obj; declares one in the local scope of foo(), and the capture obj = std::move(obj) declares a second one, which becomes a field of the closure type the compiler generates for the lambda expression. Let’s rename this second variable to make things clearer and avoid the shadowing:

// A function that tries to move objects of type S in a few contexts.
void foo() {
  S obj;

  // move it into a lambda
  [capturedObj = std::move(obj)]() {
    // move it again inside the lambda
    S moved = std::move(capturedObj);
  }();
}

We can see more clearly now that, in std::move(capturedObj), we are referring to the captured variable, not the original.

So what is the type of capturedObj? Surely, it’s the same as the type of obj, i.e. S?

The type of the closure type’s field is indeed S, but there’s an important subtlety here: by default, the closure type’s call operator is const. The lambda’s body becomes the body of the closure’s call operator, so inside it, since we’re in a const method, the type of capturedObj is const S!

At this point, people usually ask, “If the type of capturedObj is const S, why didn’t I get a different compiler error about trying to std::move() a const object?”

The answer to this is that std::move() is somewhat unfortunately named. It doesn’t actually move the object, it just casts it to a type that will match the move constructor.

Indeed, if we look at the standard library’s implementation of std::move(), it’s something like this:

template <typename T>
typename std::remove_reference<T>::type&&
move(T&& t) {
  return static_cast<typename std::remove_reference<T>::type&&>(t);
}

As you can see, all it does is cast its argument to an rvalue reference type.

So what happens if we call std::move() on a const object? Let’s substitute T = const S into that return type to see what we get: const S&&. It works, we just get an rvalue reference to a const.

Thus, const S&& is the argument type that gets used as the input to choosing a constructor for S, and this is why the copy constructor is chosen (the move constructor is not a match at all, because binding a const S&& argument to an S&& parameter would violate const-correctness).

An interesting question to ask is, why is std::move() written in a way that it compiles when passed a const argument? The answer is that it’s meant to be usable in generic code, where you don’t know the type of the argument, and want it to behave on a best-effort basis: move if it can, otherwise copy. Perhaps there is room in the language for another utility function, std::must_move() or something like that, which only compiles if the argument is actually movable.

Finally, how do we solve our error? The root of our problem is that capturedObj is const because the lambda’s call operator is const. We can get around this by declaring the lambda as mutable:

void foo() {
  S obj;

  [capturedObj = std::move(obj)]() mutable {
    S moved = std::move(capturedObj);
  }();
}

which makes the lambda’s call operator not be const, and all is well.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Today’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default

Mozilla planet - di, 03/09/2019 - 15:00

Today, Firefox on desktop and Android will — by default — empower and protect all our users by blocking third-party tracking cookies and cryptominers. This milestone marks a major step in our multi-year effort to bring stronger, usable privacy protections to everyone using Firefox.

Firefox’s Enhanced Tracking Protection gives users more control

For today’s release, Enhanced Tracking Protection will automatically be turned on by default for all users worldwide as part of the ‘Standard’ setting in the Firefox browser and will block known “third-party tracking cookies” according to the Disconnect list. We first enabled this default feature for new users in June 2019. As part of this journey we rigorously tested, refined, and ultimately landed on a new approach to anti-tracking that is core to delivering on our promise of privacy and security as central aspects of your Firefox experience.

Currently over 20% of Firefox users have Enhanced Tracking Protection on. With today’s release, we expect to provide protection for 100% of our users by default. Enhanced Tracking Protection works behind-the-scenes to keep a company from forming a profile of you based on their tracking of your browsing behavior across websites — often without your knowledge or consent. Those profiles and the information they contain may then be sold and used for purposes you never knew or intended. Enhanced Tracking Protection helps to mitigate this threat and puts you back in control of your online experience.

You’ll know when Enhanced Tracking Protection is working when you visit a site and see a shield icon in the address bar:

 

When you see the shield icon, you should feel safe that Firefox is blocking thousands of companies from your online activity.

For those who want to see which companies we block, you can click on the shield icon, go to the Content Blocking section, then Cookies. It should read Blocking Tracking Cookies. Then, click on the arrow on the right hand side, and you’ll see the companies listed as third party cookies that Firefox has blocked:

If you want to turn off blocking for a specific site, click on the Turn off Blocking for this Site button.

Protecting users’ privacy beyond tracking cookies

Cookies are not the only entities that follow you around on the web, trying to use what’s yours without your knowledge or consent. Cryptominers, for example, access your computer’s CPU, ultimately slowing it down and draining your battery, in order to generate cryptocurrency — not for your benefit but for someone else’s. We introduced the option to block cryptominers in previous versions of Firefox Nightly and Beta and are including it in the ‘Standard Mode’ of your Content Blocking preferences as of today.

Another type of script that you may not want to run in your browser is the fingerprinting script. These scripts harvest a snapshot of your computer’s configuration when you visit a website. The snapshot can then also be used to track you across the web, an issue that has been present for years. To get protection from fingerprinting scripts, Firefox users can turn on ‘Strict Mode.’ In a future release, we plan to turn fingerprinting protections on by default.

Also in today’s Firefox release

To see what else is new or what we’ve changed in today’s release, you can check out our release notes.

Check out and download the latest version of Firefox available here.

The post Today’s Firefox Blocks Third-Party Tracking Cookies and Cryptomining by Default appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla Cloud Services Blog: A New Policy for Mozilla Location Service

Mozilla planet - di, 03/09/2019 - 14:00

Several years ago we started a geolocation experiment called the Mozilla Location Service (MLS) to create a location service built on open-source software and powered through crowdsourced location data. MLS provides geolocation lookups based on publicly observable cell tower and WiFi access point information. MLS has served the public interest by providing location information to open-source operating systems, research projects, and developers.

Today Mozilla is announcing a policy change regarding MLS. Our new policy will impose limits on commercial use of MLS. Mozilla has not made this change by choice. Skyhook Holdings, Inc. contacted Mozilla some time ago and alleged that MLS infringed a number of its patents. We subsequently reached an agreement with Skyhook that avoids litigation. While the terms of the agreement are confidential, we can tell you that the agreement exists and that our MLS policy change relates to it. We can also confirm that this agreement does not change the privacy properties of our service: Skyhook does not receive location data from Mozilla or our users.

Our new policy preserves the public interest heart of the MLS project. Mozilla has never offered any commercial plans for MLS and had no intention to do so. Only a handful of entities have made use of MLS API Query keys for commercial ventures. Nevertheless, we regret having to impose new limits on MLS. Sometimes companies have to make difficult choices that balance the massive cost and uncertainty of patent litigation against other priorities.

Mozilla has long argued that patents can work to inhibit, rather than promote, innovation. We continue to believe that software development, and especially open-source software, is ill-served by the patent system. Mozilla endeavors to be a good citizen with respect to patents. We offer a free license to our own patents under the Mozilla Open Software Patent License Agreement. We will also continue our advocacy for a better patent system.

Under our new policy, all users of MLS API Query keys must apply.  Non-commercial users (such as academic, public interest, research, or open-source projects) can request an MLS API Query key capped at a daily usage limit of 100,000. This limit may be increased on request. Commercial users can request an MLS API Query key capped at a daily usage limit of 100,000. The daily limit cannot be increased for commercial uses and those keys will expire after 3 months. In effect, commercial use of MLS will now be of limited duration and restricted in volume.

Existing keys will expire on March 1, 2020. We encourage non-commercial users to re-apply for continued use of the service. Keys for a small number of commercial users that have been exceeding request limits will expire sooner. We will reach out to those users directly.

Location data and services are incredibly valuable in today’s connected world. We will continue to provide an open-source and privacy respecting location service in the public interest. You can help us crowdsource data by opting-in to the contribution option in our Android mobile browser.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 302

Mozilla planet - di, 03/09/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is cargo-udeps, a cargo subcommand to find unused dependencies.

Thanks to Christopher Durham for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

214 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Threads are for working in parallel, async is for waiting in parallel.

ssokolow on /r/rust

Thanks to Philipp Oppermann for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Categorieën: Mozilla-nl planet

Cameron Kaiser: The deformed yet thoughtful offspring of AppleScript and Greasemonkey

Mozilla planet - ma, 02/09/2019 - 22:15
Ah, AppleScript. I can't be the only person who's thinking Apple plans to replace AppleScript with Swift because it's not new and sexy anymore. And it certainly has its many rough edges and Apple really hasn't done much to improve this, which are clear signs it's headed for a room-temperature feet-first exit.

But, hey! If you're using TenFourFox, you're immune to Apple's latest self-stimulatory bright ideas. And while I'm trying to make progress on TenFourFox's various deficiencies, you still have the power to make sites work the way you want thanks to TenFourFox's AppleScript-to-JavaScript "bridge." The bridge lets you run JavaScript within the page and sample or expose data back to AppleScript. With AppleScript's other great powers, like even running arbitrary shell scripts, you can connect TenFourFox to anything else on the other end with AppleScript.

Here's a trivial example. Go to any Github wiki page, like, I dunno, the one for TenFourFox's AppleScript support. If there's a link there for more wiki entries, go ahead and click on it. It doesn't work (because of issue 521). Let's fix that!

You can either cut and paste the below examples directly into Script Editor and click the Run button to run them, or you can cut and paste them into a text file and run them from the command line with osascript filename, or you can be a totally lazy git and just download them from SourceForge. Unzip them and double click the file to open them in Script Editor.

In the below examples, change TenFourFoxWhatever to the name of your TenFourFox executable (TenFourFoxG5, etc.). Every line highlighted in the same colour is a continuous line. Don't forget the double quotes!

Here's the script for Github's wiki.

tell application "TenFourFoxWhatever"
    tell current tab of front browser window to run JavaScript "

    // Don't run if not on github wiki.
    if (!window.location.href.match(/\\/\\/github.com\\//i) ||
        !window.location.href.match(/\\/wiki\\//)) {
        window.alert('not a Github wiki page');
        return;
    }

    // Display the hidden links
    let nwiki=document.getElementById('wiki-pages-box').getElementsByClassName('wiki-more-pages');
    while (nwiki.length > 0) {
        nwiki.item(0).classList.remove('wiki-more-pages');
    }

    // Remove the 'more pages' link (should be only one)
    let jwiki=document.getElementById('wiki-pages-box').getElementsByClassName('wiki-more-pages-link');
    if (jwiki.length > 0)
        jwiki.item(0).style.display = 'none';

    "
end tell

Now, have the current tab on any Github wiki page. Run the script. Poof! More links! (If you run it on a page that isn't Github, it will give you an error box.)

Most of you didn't care about that. Some of you use your Power Macs for extremely serious business like YouTube videos. I ain't judging. Let me help you get rid of the crap, starting with Weird Al's anthem to alumin(i)um foil.

With comments in the five figures from every egoist fruitbat on the interwebs with an opinion on Weird Al, that's gonna take your poor Power Mac a while to process. Plus all those suggested videos! Let's make those go away!

tell application "TenFourFoxWhatever"
    tell current tab of front browser window to run JavaScript "

    // Don't run if not on youtube.
    if (!window.location.href.match(/\\.youtube.com\\//i) ||
        !window.location.href.match(/\\/watch/)) {
        window.alert('not a YouTube video page');
        return;
    }

    // Remove secondary column and comments.
    // Wrap in try blocks in case the elements don't exist yet.
    try {
        document.getElementById('secondary').innerHTML = '';
        document.getElementById('secondary').style.display = 'none';
    } catch(e) { }
    try {
        document.getElementById('comments').innerHTML = '';
        document.getElementById('comments').style.display = 'none';
    } catch(e) { }

    "
end tell

This script not only makes those areas invisible, it even nukes their internal contents. This persists from video to video unless you reload the page.

As an interesting side effect, you'll notice that the video area scaled to meet the new width of the browser, but the actual video didn't. I consider this a feature rather than a bug because the browser isn't trying to enable a higher-resolution stream or scale up the video for display, so the video "just plays better." Just make sure you keep the mouse pointer out of this larger area or the browser will now have to composite the playback controls.

You can add things to a page, too, instead of just taking things away. Issue 533 has been one of our long-playing issues which has been a particular challenge because it requires a large parser update to fix. Fortunately, Webpack has been moving away from uglify and as sites upgrade their support (Citibank recently did so), this problem should start disappearing. Unfortunately UPS doesn't upgrade as diligently, so right now you can't track packages with TenFourFox from the web interface; you just get this:

Let's fix it! This script is a little longer, so you will need to download it. Here are the major guts though:

    // Attempt to extract tracking number.
    let results = window.location.href.match(
        /^https:..www.ups.com.track.loc=([a-zA-Z_]+)\\&tracknum=([a-zA-Z0-9]+)\\&/
    );
    if (!results || results.length != 3) {
        window.alert('Unable to determine UPS tracking number.');
        return;
    }

    // Construct payload.
    let locale = results[1];
    let tn = results[2];
    let payload = JSON.stringify({'Locale':locale,'TrackingNumber':[tn]});

A bit of snooping on UPS’s site from the Network tab in Firefox 69 on my Talos II shows that it uses an internal JSON API. We can inject script to complete the request that TenFourFox can’t yet make. Best of all, it will look to UPS like it’s coming from inside the house, er, the browser ... because it is. Even the cookies are passed. When we get the JSON response back, we can process that and display it:

    // For each element, display delivery date and status.
    // You can add more fields here from the JSON.
    output.innerHTML = '';
    data.trackDetails.forEach(function(pkg) {
        output.innerHTML += (pkg.trackingNumber+' '+
            pkg.packageStatus+' '+
            pkg.scheduledDeliveryDate+'<p>');
    });
}

So we just hit Run on the script, and ...

... my package arrives tomorrow.

Some of you will remember a related concept in Classilla called stelae, which were scraps of JavaScript that would automatically execute when you browse to a site the stela covers. I chose not to implement those in precisely that fashion in TenFourFox because the check imposes its own overhead on the browser on every request, and manipulating the DOM is a lot easier (and in TenFourFox, a lot more profitable) than manipulating the actual raw HTML and CSS that a stela does (and Classilla, because of its poorer DOM support, must). Plus, by being AppleScript, you can run them from anywhere at any time (even from the command line), including the very-convenient ever-present-if-you-enable-it Script menu, and they run only when you actually run them.

The next question I bet some of you will ask is, that's all fine for you because you're a super genius™, but how do us hoi polloi know the magic JavaScript incantations to chant? I wrote these examples to give you general templates. If you want to make a portion of the page disappear, you can work with the YouTube example and have a look with TenFourFox's built-in Inspector to find the ID or class of the element to nuke. Then, getElementById('x') will find the element with id="x", or getElementsByClassName('y') will find all elements with y in their class="..." (see the Github example). Make those changes and you can basically make it work. Remove the block limiting it to certain URLs if you don't care about it. If you do it wrong, look at the Browser Console window for the actual error message from JavaScript if you get an error back.
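As a starting point, here is a minimal generic "make this part of the page disappear" template along those lines. The id 'x' and class 'y' are placeholders for whatever the Inspector shows you; paste the body inside the run JavaScript string of one of the scripts above.

    // Hide a single element by its id (placeholder id 'x').
    let el = document.getElementById('x');
    if (el) {
        el.style.display = 'none';
    }

    // Hide every element carrying a given class (placeholder class 'y').
    let nodes = document.getElementsByClassName('y');
    for (let i = 0; i < nodes.length; i++) {
        nodes.item(i).style.display = 'none';
    }

An index loop is used instead of fancier iteration on purpose: it works in older engines like TenFourFox's, and hiding elements (rather than removing the class you queried for) keeps the live collection stable while you loop over it.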

For adding functionality, though, this requires looking at what Firefox does on a later system. On my Talos II I had the Network tab in the Inspector open and ran a query for the tracking number and was able to see what came back, and then compared it with what TenFourFox was doing to find what was missing. I then simulated the missing request. This took about 15 minutes to do, granted given that I understood what was going on, but the script will still give you a template for how to do these kinds of specialized requests. (Be careful, though, about importing data from iffy sites that could be hacked or violating the same-origin policy. The script bridge has special privileges and assumes you know what you're doing.) Or, if you need more fields than the UPS script is providing, just look at the request the AppleScript sends and study the JSON object the response passes back, then add the missing fields you want to the block above. Tinker with the formatting. Sex it up a bit. It's your browser!

One last note. You will have noticed the scripts in the screen shot (and the ones you download) look a little different. That's because they use a little magic to figure out what TenFourFox you're actually running. It looks like this:

set tenfourfox to do shell script "ps x | perl -ne '/(TenFourFox[^.]+)\.app/ && print($1) && exit 0'"
if {tenfourfox starts with "TenFourFox"} then
    tell application tenfourfox
        tell «class pCTb» of front «class BWin» to «event 104FxrJS» given «class jscp»:"

This group of commands runs a quick script through Perl to find the first TenFourFox instance running (another reason to start TenFourFox before running foxboxes). However, because we dynamically decide the application we'll send AppleEvents to (i.e., "tell-by-variable"), the Script Editor doesn't have a dictionary available, so we have to actually provide the raw classes and events the dictionary would ordinarily map to. Otherwise it is exactly identical to tell current tab of front browser window to run JavaScript " and this is actually the true underlying AppleEvent that gets sent to TenFourFox. If TenFourFox isn't actually found, then we can give you a nice error message instead of the annoying "Where is ?" window that AppleScript will give you for orphaned events. Again, if you don't want to type these scripts in, grab them here.

No, I'm not interested in porting this to mainline Firefox, but the source code is in our tree if someone else wants to. At least until Apple decides that all other scripting languages than the One True Swift Language, even AppleScript, must die.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR16 available

Mozilla planet - zo, 01/09/2019 - 05:52
TenFourFox Feature Parity Release 16 final is now available for testing (downloads, hashes, release notes). This final version has a correctness fix to the VMX text fragment scanner found while upstreaming it to mainline Firefox for the Talos II, as well as minor outstanding security updates. Assuming no issues, it will become live on Monday afternoon-evening Pacific time (because I'm working on Labor Day).
Categorieën: Mozilla-nl planet

About:Community: Firefox 69 new contributors

Mozilla planet - vr, 30/08/2019 - 17:41

With the release of Firefox 69, we are pleased to welcome the 50 developers who contributed their first code change to Firefox in this release, 39 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categorieën: Mozilla-nl planet
