
Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - https://planet.mozilla.org/

Karl Dubost: Saving Webcompat images as a microservice

Thu, 21/11/2019 - 02:40

Update: You may want to fast forward to the latest part… of this blog post. (Head explodes).

Thinking out loud on separating our images into a separate service. The initial goal was to push the images to the cloud, but I think we could take a first step: keep the images on our server, but instead of the current save, send them to another service, say upload.webcompat.com, with an HTTP PUT. That service would then save them locally.

That would allow us to do two things:

  1. Virtualize the core app on Heroku if needed.
  2. Replace the microservice with another cloud hosting solution when we are ready.

All of this is mainly thinking for now.

Anatomy of our environment

config/environment.py defines:

UPLOADS_DEFAULT_DEST = os.environ.get('PROD_UPLOADS_DEFAULT_DEST')
UPLOADS_DEFAULT_URL = os.environ.get('PROD_UPLOADS_DEFAULT_URL')

The maximum size limit for images is defined in __init__.py. Currently in views.py, there is a route for localhost uploads.

# set limit of 5.5MB for file uploads
# in practice, this is ~4MB (5.5 / 1.37)
# after the data URI is saved to disk
app.config['MAX_CONTENT_LENGTH'] = 5.5 * 1024 * 1024
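
To illustrate the arithmetic in that comment: the 1.37 factor is the approximate overhead of wrapping an image in a base64 data URI (base64 alone adds about a third), so the 5.5 MB request cap corresponds to roughly 4 MB of decoded image data. A quick back-of-the-envelope check:

# Back-of-the-envelope check of the effective image size allowed by the cap.
# The 1.37 ratio is taken from the comment above (base64 data URI overhead).
max_content_length = 5.5 * 1024 * 1024            # what Flask enforces on the request body
effective_image_bytes = max_content_length / 1.37  # approximate size of the decoded image
print(round(effective_image_bytes / (1024 * 1024), 2))  # ~4.01 MB of raw image data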

The localhost part would probably not change much. This is just for reading the image URLs.

if app.config['LOCALHOST']:
    @app.route('/uploads/<path:filename>')
    def download_file(filename):
        """Route just for local environments to send uploaded images.

        In production, nginx handles this without needing to touch
        the Python app.
        """
        return send_from_directory(
            app.config['UPLOADS_DEFAULT_DEST'], filename)

Then the API for uploads is defined in api/uploads.py.

This is where the production route is defined.

@uploads.route('/', methods=['POST'])
def upload():
    '''Endpoint to upload an image.

    If the image asset passes validation, it's saved as:
        UPLOADS_DEFAULT_DEST + /year/month/random-uuid.ext

    Returns a JSON string that contains the filename and url.
    '''
    …  # cut some stuff.
    try:
        upload = Upload(imagedata)
        upload.save()
        data = {
            'filename': upload.get_filename(upload.image_path),
            'url': upload.get_url(upload.image_path),
            'thumb_url': upload.get_url(upload.thumb_path)
        }
        return (json.dumps(data), 201, {'content-type': JSON_MIME})
    except (TypeError, IOError):
        abort(415)
    except RequestEntityTooLarge:
        abort(413)

upload.save() is basically what we would replace with an HTTP PUT to a microservice.
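
A minimal sketch of what that replacement could look like, using the requests library. The upload.webcompat.com route and the image_data / content_type attributes are illustrative assumptions for this post, not the actual Upload API:

import requests

def save_remote(upload):
    """Send a validated image to the upload microservice with an HTTP PUT.

    Hypothetical sketch: assumes the service exposes PUT /uploads/<path>
    and that the Upload object carries the raw bytes and a content type.
    """
    url = 'https://upload.webcompat.com/uploads/{}'.format(upload.image_path)
    response = requests.put(
        url,
        data=upload.image_data,
        headers={'Content-Type': upload.content_type},
        timeout=10)
    # Let the caller handle a failed upload (4xx/5xx) as it sees fit.
    response.raise_for_status()
    return upload.image_path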

What is Amazon S3 doing?

In these musings, I wonder if we could mimic the way Amazon S3 operates at a very high level. No need to replicate everything. We just need to save some bytes into a folder structure.

boto3 has documentation for uploading files.

def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """
    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        response = s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True

We could keep the image validation on the webcompat.com side; once the naming and checking is done, we can save the file to a service the same way AWS does.

So our privileged service could accept images and save them locally, in the same folder structure, as a separate Flask app. And later on, we could adjust it to use S3.
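
A minimal sketch of that privileged receiving service, assuming a bare Flask app whose only job is to write the bytes it receives to disk (the paths and environment variable name are illustrative, not the real webcompat code); validation is assumed to have happened on the webcompat.com side:

import os
from flask import Flask, request, abort

app = Flask(__name__)
UPLOADS_ROOT = os.environ.get('UPLOADS_DEFAULT_DEST', '/data/uploads')

@app.route('/uploads/<path:filename>', methods=['PUT'])
def put_upload(filename):
    """Accept an already-validated image from webcompat.com and save it locally."""
    root = os.path.realpath(UPLOADS_ROOT)
    destination = os.path.realpath(os.path.join(root, filename))
    if not destination.startswith(root + os.sep):
        abort(400)  # refuse paths that would escape the uploads folder
    os.makedirs(os.path.dirname(destination), exist_ok=True)
    with open(destination, 'wb') as image_file:
        image_file.write(request.get_data())
    return ('', 201)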

Surprise. Surprise.

I just found out that each time you put an image in an issue or a comment, GitHub makes a private copy of this image. Not sure if it's borderline with regards to property rights.

If you enter:

![I'm root](http://www.la-grange.net/2019/01/01/2535-misere)

Then it creates this markup.

<p><a target="_blank" rel="noopener noreferrer" href="https://camo.githubusercontent.com/a285646de4a7c3b3cdd3e82d599e46607df8d3cc/687474703a2f2f7777772e6c612d6772616e67652e6e65742f323031392f30312f30312f323533352d6d6973657265"><img src="https://camo.githubusercontent.com/a285646de4a7c3b3cdd3e82d599e46607df8d3cc/687474703a2f2f7777772e6c612d6772616e67652e6e65742f323031392f30312f30312f323533352d6d6973657265" alt="I'm root" data-canonical-src="http://www.la-grange.net/2019/01/01/2535-misere" style="max-width:100%;"></a></p>

And we can notice that the img src is pointing to… GitHub?
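
The long hexadecimal segment in that camo URL is simply the original image URL encoded byte by byte, which is easy to verify:

# The path segment after the camo digest decodes back to the original URL.
camo_path = ('687474703a2f2f7777772e6c612d6772616e67652e6e6574'
             '2f323031392f30312f30312f323533352d6d6973657265')
print(bytes.fromhex(camo_path).decode('ascii'))
# prints: http://www.la-grange.net/2019/01/01/2535-misere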

I checked in my server logs to be sure. And I found…

140.82.115.251 - - [20/Nov/2019:06:44:54 +0000] "GET /2019/01/01/2535-misere HTTP/1.1" 200 62673 "-" "github-camo (876de43e)"

That will seriously challenge the OKR for this quarter.

Update: 2019-11-21 So I tried to decipher what was really happening. It seems GitHub acts as a proxy using camo, but it also has a caching system keeping a real copy of the images, instead of just proxying them. And this can become a problem in the context of webcompat.com.

Early on, we had added s3.amazonaws.com to our connect-src since we had uses that were making requests to https://s3.amazonaws.com/github-cloud. However, this effectively opened up our connect-src to any Amazon S3 bucket. We refactored our URL generation and switched all call sites and our connect-src to use https://github-cloud.s3.amazonaws.com to reference our bucket.

GitHub is hosting the images on Amazon S3.

Otsukare!

Categories: Mozilla-nl planet

The Firefox Frontier: Firefox Extension Spotlight: Image Search Options

Wed, 20/11/2019 - 23:29

Let’s say you stumble upon an interesting image on the web and you want to learn more about it, like… where did it come from? Who are the people in … Read more

The post Firefox Extension Spotlight: Image Search Options appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet

The Mozilla Blog: Can Your Holiday Gift Spy on You?

Wed, 20/11/2019 - 16:57
Mozilla is unveiling its annual holiday ranking of the creepiest and safest connected devices. Our researchers reviewed the security and privacy features and flaws of 76 popular gifts for 2019’s *Privacy Not Included guide.

Mozilla today launches the third-annual *Privacy Not Included, a report and shopping guide identifying which connected gadgets and toys are secure and trustworthy — and which aren’t. The goal is two-fold: to arm shoppers with the information they need to choose gifts that protect the privacy of their friends and family, and to spur the tech industry to do more to safeguard consumers.

Mozilla researchers reviewed 76 popular connected gifts available for purchase in the United States across six categories: Toys & Games; Smart Home; Entertainment; Wearables; Health & Exercise; and Pets. Researchers combed through privacy policies, sifted through product and app specifications, reached out to companies about their encryption and bug bounty programs, and more. As a result, we can answer questions like: How accessible is the privacy policy, if there is one? Does the product require strong passwords? Does it collect biometric data? And are there automatic security updates?

The guide also showcases the Creep-O-Meter, an interactive tool allowing shoppers to rate the creepiness of a product using an emoji sliding scale from “Super Creepy” to “Not Creepy.”

Says Ashley Boyd, Mozilla’s Vice President of Advocacy: “This year we found that many of the big tech companies like Apple and Google are doing pretty well at securing their products, and you’ll see that most products in the guide meet our Minimum Security Standards. But don’t let that fool you. Even though devices are secure, we found they are collecting more and more personal information on users, who often don’t have a whole lot of control over that data.”

For the first time ever, this year’s guide is launching alongside new longform research from Mozilla’s Internet Health Report. Two companion articles are debuting alongside the guide and provide additional context and insight into the realm of connected devices: what’s working, what’s not, and how consumers can wrestle back control. The articles include “How Smart Homes Could Be Wiser,” an exploration of why trustworthy connected devices are so scarce, and what consumers can do to remedy this. And “5 key decisions for every smart device,” a look at five key areas manufacturers should address when designing private and secure connected devices.

*Privacy Not Included highlights include:

Top trends identified by Mozilla researchers include:

  • Good on security, questionable on privacy: Many of the big tech companies like Apple and Google are doing pretty well at securing their products. But even when devices are secure, they can still collect a lot of data about users. This year saw an expansion of smart home ecosystems from big tech companies, allowing companies like Amazon to reach deeper into users’ lives. Customer data is also being used in ways users may not have anticipated, even if it’s stated in the privacy policy. For instance, Ring users may not realize their videos are being used in marketing campaigns and that photos of all visitors are stored on servers.
  • Small companies are not doing so well on privacy and security: Smaller companies often do not have the resources to prioritize the privacy and security of their products. Many of the products in the pet category, for example, seem weak on privacy and security. Mozilla could only confirm that four of the 13 products meet our Minimum Security Standards. The $500 Litter Robot 3 Connect didn’t even have a privacy policy for the device or the app the device uses. Also, it appears to use the default password “neverscoop” to connect the device to WiFi.
  • Privacy policy readability is improving: Companies are making strides in how they present privacy information, with a lot more privacy pages — like those by Roomba and Apple — being written in simple, accessible language and housed in one central place.
  • Products are becoming more privacy friendly, but sometimes at a cost to consumers: Sonos removed the microphone for the Sonos One SL to make it more privacy-friendly, while Parrot, which made one of the creepiest products in the 2018 guide, launched the Anafi drone, which met the Minimum Security Standards. However, Parrot left the low end consumer market: the Anafi drone costs $700.

 

*Privacy Not Included builds on Mozilla’s work to ensure the internet remains open, safe, and accessible to all people. Mozilla’s initiatives include its annual Internet Health Report; its roster of Fellows who develop research, policies, and products around privacy, security, and other internet health issues; and its advocacy campaigns, such as putting public pressure on apps like Snapchat and Instagram to let users know if they are using facial emotion recognition software.

 

About Mozilla

Mozilla is a nonprofit that believes the internet must always remain a global public resource, open and accessible to all. Its work is guided by the Mozilla Manifesto. The direct work of the Mozilla Foundation focuses on fueling the movement for an open Internet. Mozilla does this by connecting open Internet leaders with each other and by mobilizing grassroots activists around the world. The Foundation is also the sole shareholder in the Mozilla Corporation, the maker of Firefox and other open source tools. Mozilla Corporation functions as a self-sustaining social enterprise — money earned through its products is reinvested into the organization.

The post Can Your Holiday Gift Spy on You? appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Multiple-column Layout and column-span in Firefox 71

Wed, 20/11/2019 - 16:13

Firefox 71 is an exciting release for anyone who cares about CSS Layout. While I am very excited to have subgrid available in Firefox, there is another property that I’ve been keeping an eye on. Firefox 71 implements column-span from Multiple-column Layout. In this post I’ll explain what it is and a little about the progress of the Multiple-column Layout specification.

Multiple-column Layout, usually referred to as multicol, is a layout method that does something quite different to layout methods such as flexbox and grid. If you have some content marked up and displaying in Normal Flow, and turn that into a multicol container using the column-width or column-count properties, it will display as a set of columns. Unlike Flexbox or Grid however, the content inside the columns flows just as it did in Normal Flow. The difference is that it now flows into a number of anonymous column boxes, much like content in a newspaper.

See the Pen "Columns with multicol" by rachelandrew (@rachelandrew) on CodePen.

Multicol is described as fragmenting the content when it creates these anonymous column boxes to display content. It does not act on the direct children of the multicol container in a flex or grid-like way. In this way it is most similar to the fragmentation that happens when we print a web document, and the content is split between pages. A column-box is essentially the same thing as a page.

What is column-span?

We can use the column-span property to take an element appearing in a column, and cause it to span across all of the columns. This is a pattern common in print design. In the CodePen below I have two such spanning elements:

  • The h1 is inside the article as the first child element and is spanning all of the columns.
  • The h2 is inside the second section, and also spans all of the columns.

See the Pen "Columns with multicol and column-span" by rachelandrew (@rachelandrew) on CodePen.

This example highlights a few things about column-span. Firstly, it is only possible to span all of the columns, or no columns. The allowable values for column-span are all, or none.

Secondly, when a span interrupts the column boxes, we end up with two lines of columns. The columns are created in the inline direction above the spanning element, then they restart below. Content in the columns does not “jump over” the spanning element and continue.

In addition, the h1 is a direct child of the multicol container, however the h2 is not. The h2 is nested inside a section. This demonstrates the fact that items do not need to be a direct child to have column-span applied to them.

Firefox has now joined other browsers in implementing the column-span property. This means that we have good support for the property across all major browsers, as the Compat data for column-span shows.

The compat data for column-span on MDN

The multicol specification

My interest in the implementation of column-span is partly because I am one of the editors of the multicol specification. I volunteered to edit the multicol specification as it had been stalled for some time, with past resolutions by the WG not having been edited into the spec. There were also a number of unresolved issues, many of which were to do with the column-span feature. I started work by digging through the mailing list archives to find these issues and resolutions where we had them. I then began working through them and editing them into the spec.

At the time I started working on the specification it was at Candidate Recommendation (CR) status, which indicates that the specification is deemed to be fairly complete. Given the number of issues, the WG decided to return it to Working Draft (WD) status while these issues were resolved.

CSS development needs teamwork between browsers and spec editors

As a spec editor, it’s exciting when features are being implemented, as it helps to progress the spec. CSS is created via an iterative and collaborative process; the CSS WG do not create a complete specification and fling it over the wall at browser engineers. The process involves working on a feature in the WG, which browser engineers try to implement. Questions and problems discovered during that implementation phase are brought back to the working group. The WG then have to decide what to do about such issues, and the spec editor then gets the job of clarifying the spec based on the resolution. The process repeats — each time we tease out issues. Any lack of clarity could cause an interoperability issue if two browsers interpreted the description of the feature in a different way.

Based on the work that Mozilla have been doing to implement column-span, several issues were brought to the CSS WG and discussed in our calls and face-to-face meetings. We’ve been able to make the specification much clearer on a number of issues with column-span and related issues. Therefore, I’m very happy to have a new property implemented across browsers, and also happy to have a more resilient spec! We recently published an updated WD of multicol, which includes many changes made during the time Mozilla were implementing multicol in Firefox.

Other multicol related issues

With the implementation of column-span, multicol will work in much the same way across browsers. We do have an outstanding issue with regards to the column-fill property, which controls how the columns are filled. The default way that multicol fills columns is to try to balance the content, so equal amounts of content end up in each column.

By using the column-fill property, you can change this behavior to fill columns sequentially. This would mean that a multicol container with a height could fill columns to the specified height, potentially leaving empty columns if there was not enough content.

See the Pen "Columns with multicol and column-fill" by rachelandrew (@rachelandrew) on CodePen.

Due to specification ambiguity, Firefox and Chrome do different things if the multicol container does not have a height. Chrome ignores the column-fill property and balances, whereas Firefox fills the first column with all of the content. This is the kind of issue that arises when we have a vague or unclear spec. It’s not a case of a browser “getting things wrong”, or trying to make the lives of web developers hard. It’s what happens when specifications aren’t crystal clear! For anyone interested, the somewhat lengthy issue trying to resolve this is here. Most developers won’t come across this issue in practice. However, if you are seeing differences when using column-fill, it is worth knowing about.

The implementation of column-span is a step towards making multicol robust and useful on the web. To read more about multicol and possible use cases see the Guides to Multicol on MDN, and my article When And How To Use Multiple-column Layout.

The post Multiple-column Layout and column-span in Firefox 71 appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Mozilla Security Blog: Updates to the Mozilla Web Security Bounty Program

Tue, 19/11/2019 - 16:10

Mozilla was one of the first companies to establish a bug bounty program and we continually adjust it so that it stays as relevant now as it always has been. To celebrate the 15 years of the 1.0 release of Firefox, we are making significant enhancements to the web bug bounty program.

Increasing Bounty Payouts

We are doubling all web payouts for critical, core and other Mozilla sites as per the Web and Services Bug Bounty Program page. In addition, we are tripling payouts for Remote Code Execution on critical sites to $15,000!

Adding New Critical Sites to the Program

As we are constantly improving the services behind Firefox, we also need to ensure that sites we consider critical to our mission get the appropriate attention from the security community. Hence we have extended our web bug bounty program to include the following sites over the last 6 months:

  • Autograph – a cryptographic signature service that signs Mozilla products.
  • Lando – Mozilla’s new automatic code-landing service which allows us to easily commit Phabricator revisions to their destination repository.
  • Phabricator – a code management tool used for reviewing Firefox code changes.
  • Taskcluster – the task execution framework that supports Mozilla’s continuous integration and release processes (promoted from core to critical).

Adding New Core Sites to the Program

The sites we consider core to our mission have also been extended to include:

  • Firefox Monitor – a site where you can register your email address so that you can be informed if your account details are part of a data breach.
  • Localization – a service contributors can use to help localize Mozilla products.
  • Payment Subscription – a service that is used as the interface in front of the payment provider (Stripe).
  • Firefox Private Network – a site from which you can download a desktop extension that helps secure and protect your connection everywhere you use Firefox.
  • Ship It – a system that accepts requests for releases from humans and translates them into information and requests that our Buildbot-based release automation can process.
  • Speak To Me – Mozilla’s Speech Recognition API.

The new payouts have already been applied to the most recently reported web bugs.

We hope the new sites and increased payments will encourage you to have another look at our sites and help us keep them safe for everyone who uses the web.

Happy Birthday, Firefox. And happy bug hunting to you all!

The post Updates to the Mozilla Web Security Bounty Program appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Creating UI Extensions for WebThings Gateway

Tue, 19/11/2019 - 16:00

Version 0.10 of Mozilla’s WebThings Gateway brings support for extension-type add-ons. Released last week, this powerful new capability lets developers modify the user interface (UI) to their liking with JavaScript and CSS.

Although the initial set of extension APIs is fairly minimal, we believe that they will already enable a large amount of functionality. To go along with the UI extensions, developers can also extend the gateway’s REST API with their own handlers, allowing for back-end analytics, for example.

In this post, we’ll walk through a simple example to get you started with building your own extension.

The Basics

If you’re completely new to building add-ons for the WebThings Gateway, there are a couple things you should know.

An add-on is a set of code that runs alongside the gateway. In the case of extensions, the code runs as part of the UI in the browser. Add-ons can provide all sorts of functionality, including support for new devices, the ability to notify users via some outlet, and now, extending the user interface.

Add-ons are packaged up in a specific way and can then be published to the add-on list, so that they can be installed by other users. For best results, developers should abide by these basic guidelines.

Furthermore, add-ons can theoretically be written in any language, as long as they know how to speak to the gateway via IPC (interprocess communication). We provide libraries for Node.js and Python.

The New APIs

There are two new groups of APIs you should know about.

First, the front end APIs. Your extension should extend the Extension class, which is global to the browser window. This gives you access to all of the new APIs. In this 0.10 release, extensions can add new entries to the top-level menu and show and hide top-level buttons. Each extension gets an empty block element that they can draw to as they please, which can be accessed via the menu entry or some other means.

Second, the back end APIs. An add-on can register a new APIHandler. When an authenticated request is made to /extensions/<extension-id>/api/*, your API handler will be invoked with request information. It should send back the appropriate response.

Basic Example

Now that we’ve covered the basics, let’s walk through a simple example. You can find the code for this example on GitHub. Want to see the example in Python, instead of JavaScript? It’s available here.

This next example is really basic: create a form, submit the form, and echo the result back as JSON.

Let’s go ahead and create our API handler. For this example, we’ll just echo back what we received.

const {APIHandler, APIResponse} = require('gateway-addon');
const manifest = require('./manifest.json');

/**
 * Example API handler.
 */
class ExampleAPIHandler extends APIHandler {
  constructor(addonManager) {
    super(addonManager, manifest.id);
    addonManager.addAPIHandler(this);
  }

  async handleRequest(request) {
    if (request.method !== 'POST' || request.path !== '/example-api') {
      return new APIResponse({status: 404});
    }

    // echo back the body
    return new APIResponse({
      status: 200,
      contentType: 'application/json',
      content: JSON.stringify(request.body),
    });
  }
}

module.exports = ExampleAPIHandler;

The gateway-addon library provides nice wrappers for the API requests and responses. You fill in the basics: status code, content type, and content. If there is no content, you can omit those fields.

Now, let’s create a UI that can actually use the new API we’ve just made.

(function() {
  class ExampleExtension extends window.Extension {
    constructor() {
      super('example-extension');
      this.addMenuEntry('Example Extension');

      this.content = '';
      fetch(`/extensions/${this.id}/views/content.html`)
        .then((res) => res.text())
        .then((text) => {
          this.content = text;
        })
        .catch((e) => console.error('Failed to fetch content:', e));
    }

    show() {
      this.view.innerHTML = this.content;

      const key =
        document.getElementById('extension-example-extension-form-key');
      const value =
        document.getElementById('extension-example-extension-form-value');
      const submit =
        document.getElementById('extension-example-extension-form-submit');
      const pre =
        document.getElementById('extension-example-extension-response-data');

      submit.addEventListener('click', () => {
        window.API.postJson(
          `/extensions/${this.id}/api/example-api`,
          {[key.value]: value.value}
        ).then((body) => {
          pre.innerText = JSON.stringify(body, null, 2);
        }).catch((e) => {
          pre.innerText = e.toString();
        });
      });
    }
  }

  new ExampleExtension();
})();

The above code does the following things:

  1. Adds a top-level menu entry for our extension.
  2. Loads some HTML asynchronously from the server.
  3. Sets up an event listener for the form to submit it and display the results.

The HTML loaded from the server is not a full document, but rather a snippet, since we’re using it to fill in a <section> tag. You could do all this synchronously within the JavaScript, but it can be nice to keep the view content separate. The manifest for this add-on instructs the gateway which resources to load, and which are allowed to be accessed via the web:

{ "author": "Mozilla IoT", "content_scripts": [ { "css": [ "css/extension.css" ], "js": [ "js/extension.js" ] } ], "description": "Example extension add-on for Mozilla WebThings Gateway", "gateway_specific_settings": { "webthings": { "exec": "{nodeLoader} {path}", "primary_type": "extension", "strict_max_version": "*", "strict_min_version": "0.10.0" } }, "homepage_url": "https://github.com/mozilla-iot/example-extension", "id": "example-extension", "license": "MPL-2.0", "manifest_version": 1, "name": "Example Extension", "short_name": "Example", "version": "0.0.3", "web_accessible_resources": [ "css/*.css", "images/*.svg", "js/*.js", "views/*.html" ] }

The content_scripts property of the manifest tells the gateway which CSS and JavaScript files to load into the UI. Meanwhile, the web_accessible_resources tells it which files can be accessed by the extension over HTTP. This format is based on the WebExtension manifest.json format, so if you’ve ever built a browser extension, it may look familiar to you.

As a quick note to developers, this new manifest.json format is required for all add-ons now, as it replaces the old package.json format.

Testing the Add-on

To test, you can do the following on your Raspberry Pi or development machine.

  1. Clone the repository:
     cd ~/.mozilla-iot/addons
     git clone https://github.com/mozilla-iot/example-extension
  2. Restart the gateway: sudo systemctl restart mozilla-iot-gateway
  3. Enable the add-on by navigating to Settings -> Add-ons in the UI. Click the Enable button for “Example Extension”. You then need to refresh the page for your extension to show up in the UI, as extensions are loaded when the page first loads.
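
If you want to exercise the new endpoint outside of the gateway UI, a small sketch with Python's requests library works too. The gateway address and token below are placeholders you would substitute with your own values, since the gateway only accepts authenticated requests:

import requests

# Placeholders: point these at your own gateway and a token you created
# through the gateway's authorization settings.
GATEWAY = 'http://gateway.local:8080'
TOKEN = 'your-jwt-token'

response = requests.post(
    GATEWAY + '/extensions/example-extension/api/example-api',
    json={'hello': 'world'},
    headers={
        'Authorization': 'Bearer ' + TOKEN,
        'Accept': 'application/json',
    },
    timeout=10)
# The example handler simply echoes the JSON body back.
print(response.status_code, response.json())
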
Wrapping Up

Hopefully this has been helpful. The example itself is not very useful, but it should give you a nice skeleton to start from.

Another possible use case we’ve identified is creating a custom UI for complex devices, where the auto-generated UI is less than ideal. For instance, an adapter add-on could add an alternate UI link which just links to the extension, e.g. /extensions/<extension-id>. When accessed, the UI will bring up the extension’s interface.

If you have more questions, you can always reach out on Discourse, GitHub, or IRC (#iot). We can’t wait to see what you build!

The post Creating UI Extensions for WebThings Gateway appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Alessio Placitelli: GeckoView + Glean = Fenix performance metrics

Tue, 19/11/2019 - 14:48
(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. The previous post of the series lives here.) This week in … →
Categories: Mozilla-nl planet

Mike Hommey: Five years of git-cinnabar

Tue, 19/11/2019 - 10:04

On this very day five years ago, I committed the initial code of what later became git-cinnabar. It is kind of an artificial anniversary, because I didn’t actually publish anything until 3 weeks later, and I also had some prototypes months earlier.

The earlier prototypes of what I’ll call “pre-git-cinnabar” could handle doing git clone hg::https://hg.mozilla.org/mozilla-central (that is, creating a git clone of a Mercurial repository), but they couldn’t git pull later. That pre-git-cinnabar initial commit, however, was the first version that did.

The state of the art back then was similar git helpers, the most popular choice being Felipec’s git-remote-hg, or the opposite tool: hg-git, a Mercurial plugin that allows pushing to a git repository.

They both had the same caveats: they were slow to handle a repository the size of mozilla-central back then, and both required a local mercurial repository (hidden in the .git directory in the case of Felipec’s git-remote-hg).

This is what motivated me to work on pre-git-cinnabar, which was also named git-remote-hg back then because of how git requires a git-remote-hg executable to handle hg::-prefixed urls.

Fast forward five years, mozilla-central has grown tremendously, and another mozilla-unified repository was created that aggregates the various mozilla branches (esr*, release, beta, central, integration/*).

git-cinnabar went through multiple versions, multiple changes to the metadata it keeps, and while I actually haven’t cumulatively worked all that much on it considering the number of years, a lot of progress has been made.

But let’s go back to the 19th of November 2014. Thankfully, Mercurial allows stripping everything past a certain date, which artificially restores the state of the repository at that date. Unfortunately, pre-git-cinnabar only supports the old Mercurial bundle format, which both the mozilla-central and mozilla-unified repositories now don’t allow. So pre-git-cinnabar can’t clone them out of the box anymore. It’s still possible to allow it in mirror repositories, but because they now use generaldelta, that incurs a server-side conversion that is painfully slow (the hg.mozilla.org server rejects clients that don’t support the new format for this reason).

So for testing purposes, I set up an nginx reverse-proxy and cache, such that the conversion only happens once, and performed clones multiple times, taking any bundling and conversion cost out of the equation. I then tested the current version of Felipec’s git-remote-hg, the current version of hg-git, pre-git-cinnabar, and the last git-cinnabar release (0.5.2 as of writing), on some AWS instances, with Xeon Platinum 8124M 3GHz CPUs. That’s a different CPU from what I had back in 2014, yielding some different results from what I wrote in that first announcement.

I’ve thus cloned both mozilla-central (denoted m-c) and mozilla-unified (denoted m-u), with simulated old states of the repositories. Mozilla-unified didn’t exist before 2016, but it’s still interesting to simulate its content as if it had existed because it lets us see how the tools perform with the additional branches it contains, and the implications they have on how the data is stored in the repository.

Note: I didn’t test older versions of git-remote-hg or hg-git to see how they performed at the time, and how things evolved for them.

Clone times in 2014

There are multiple things of note in the results above:

  • I wrote back then that cloning took 12 hours with git-remote-hg and 30 minutes with pre-git-cinnabar on the machine I used. And while cloning with pre-git-cinnabar on more modern hardware was much faster (16 minutes), cloning with git-remote-hg wasn’t. The pre-git-cinnabar improvement could, though, be attributed in part to improvements in git-fast-import itself (I haven’t tested older versions). But it’s also not impossible that git-remote-hg regressed. Only further testing would tell.
  • mozilla-unified is bigger than mozilla-central, because it is a superset, and that reflects on the clone times, but hg-git and pre-git-cinnabar are both much slower to clone mozilla-unified than you’d expect from the difference in size, especially hg-git. git-cinnabar made a lot of progress in that regard.
  • I hadn’t tested hg-git back then, but while it’s not as slow as git-remote-hg, it’s still horribly slow for a repository this size.

Let’s now look at the .git sizes:

.git sizes in 2014

Those are the sizes for the .git directory fresh after cloning. In all cases, git gc --aggressive would make the clone smaller, at the cost of CPU time (although not significantly smaller in the git-cinnabar case). And after you spent 12 hours cloning, are you really going to spend another large number of hours on a git gc to save disk space?

It is worth noting that in the case of hg-git, this doesn’t include the size of the mercurial repository required to maintain the git repository, while it is included for git-remote-hg, where it is hidden in .git, as mentioned earlier. That puts them about on par w.r.t size.

It’s interesting how close hg-git and git-remote-hg are in disk usage, when the former uses dulwich, a pure Python implementation of Git, and the latter uses git-fast-import. pre-git-cinnabar used git-fast-import too, but optimized the data it sent to git-fast-import to allow for a more compact .git. Recent git-cinnabar made it even better, although it doesn’t use git-fast-import directly, instead using a helper derived from git-fast-import.

But that was 2014. Let’s look at how things evolved over time, by taking “snapshots” of the repositories at one year interval, starting in November 2007.

Clone times over time

Of note:

  • pre-git-cinnabar somehow invalidated the nginx cache for years >= 2016 for mozilla-unified, which didn’t allow me to get reliable measures.
  • Things went well out of hand with git-remote-hg and hg-git, so much so that I wasn’t able to get results for git-remote-hg clones for 2019 in time for this post. They’re now getting clone times that count in days rather than hours.
  • Things are getting much worse with mozilla-unified, relatively to mozilla-central, for hg-git than they do for git-remote-hg or git-cinnabar, while they were really bad with pre-git-cinnabar.
  • pre-git-cinnabar clone times for mozilla-central are indistinguishable from git-cinnabar’s at this scale (but see further below).
  • the progression is not linear, but the progression in repository size wasn’t linear either. In order to get a slightly better picture, it is better to look at the clone times vs. the size of the repositories. One measure of that size is the number of objects they contain (changesets, manifests and file revisions).

Clone times over repo size

The progression here looks more linear, but still not quite linear. The difference between the mozilla-central and mozilla-unified clone times is the most damning, especially for hg-git and pre-git-cinnabar. At this scale things don’t look so bad for git-cinnabar, but looking closer, they aren’t actually much better:

Clone times over repo size, pre-git-cinnabar and git-cinnabar only

mozilla-central clone times have slightly improved since pre-git-cinnabar days, at least more than the comparison with hg-git and git-remote-hg suggested. mozilla-unified clone times, however, have dramatically improved (notwithstanding the fact that it’s currently not possible to clone with pre-git-cinnabar at all directly from hg.mozilla.org).

But clone times are starting to get a little out of hand, especially for mozilla-unified, which is why I’ve recently added support for “clone bundles”. But I also have work in progress that I expect will make non-bundle clones faster too, and hopefully more linear.

As for .git sizes:

.git sizes over repo size

  • hg-git and git-remote-hg are still hand in hand.
  • Here the progression is mostly linear, with almost no difference between mozilla-central and mozilla-unified, as one could expect.
  • I think the larger increase in size between what would be 2017 and 2018 is largely due to the testing/web-platform/meta/MANIFEST.json file.
  • People who try to clone the Mozilla repositories with hg-git or git-remote-hg at this point better have a lot of time and a lot of free disk space.

While git-cinnabar is demonstrably significantly faster than both git-remote-hg and hg-git by a large margin for the Mozilla repositories (more than an order of magnitude), looking at the data more closely revealed something interesting that can be pictured in the following graph, plotting how much slower than git-cinnabar the other tools are.

Clone time ratios against git-cinnabar

The ratio is not constant, and has surprisingly been decreasing steadily since 2016, correlating with the observation that clone times are getting slower more quickly than the repositories are growing. But they are doing more so with git-cinnabar than with the other tools. Clone times with git-cinnabar have multiplied by more than 5 in five years, for a repository that only has 2.5 times more objects. At this pace, in five more years, clones will take well above 10 hours, and that’s not counting for the accelerated slowdown. Hopefully, the current work in progress will help.

It’s also interesting to see how the ratios changed after 2011 between mozilla-central and mozilla-unified. 2011 is when Firefox 4 was released and the release process switched to multiple repositories, which mozilla-unified, well, unified in a single repository. So mozilla-unified and mozilla-central were largely identical when stripped of commits after 2011 and started diverging afterwards.

To conclude this rather long post, pre-git-cinnabar improved the state of the art to clone large Mercurial repositories, and git-cinnabar went further in the past five years. But without more work, things will get out of hand. And that only accounts for clone times. I haven’t spent much time working on other aspects, like negotiation times (the time it takes to agree with a Mercurial server what the git clone has in common with it), or bundling times (the time it takes to generate a bundle to send a Mercurial server). Both are the relevant parts of push times.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 313

Tue, 19/11/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2020

Find all #Rust2020 posts at Read Rust.

Crate of the Week

This week's crate is wasmtime, a standalone JIT-style runtime for WebAssembly.

Thanks to Josh Triplett for the suggestions!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

252 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events (Online, Africa, Asia Pacific, Europe, North America)

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

This week, we have two quotes:

Telling a programmer there's already a library to do X is like telling a songwriter there's already a song about love.

PeteCordell on twitter, as quoted in a recent Rust Gamedev meetup

Well a Museum purpose is also memory safety, I guess.

/u/xav_19 on /r/rust commenting on a post asking why "The Rust Programming Language" is sold in Washington D.C.'s spy museum's gift shop

Thanks to Matthieu M. and ZiCog for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Categories: Mozilla-nl planet

Chris H-C: Distributed Teams: Why I Don’t Go to the Office More Often

Mon, 18/11/2019 - 16:00

I was invited to a team dinner as part of a work week the Data Platform team was having in Toronto. I love working with these folks, and I like food, so I set about planning my logistics.

The plan was solid, but unimpressive. It takes three hours or so to get from my home to the Toronto office by transit, so I’d be relying on the train’s WiFi to allow me to work on the way to Toronto, and I’d be arriving home about 20min before midnight.

Here’s how it went:

  1. 0800 Begin
  2. 0816 Take the GRT city bus to Kitchener train station
  3. 0845 Try to find a way to get to the station (the pedestrian situation around the station is awful)
  4. 0855 Learn that my 0918 train is running 40min late.
  5. 0856 Purchase a PRESTO card for my return journey, being careful to not touch any of the blood stains on the vending machine. (Seriously. Someone had a Bad Time at Kitchener Station recently)
  6. 0857 Learn that they had removed WiFi from the train station, so the work I’ll be able to do is limited to what I can manage on my phone’s LTE
  7. 0900 Begin my work day (Slack and IRC only), and eat the breakfast I packed because I didn’t have time at home.
  8. 0943 Train arrives only 35min late. Goodie.
  9. 0945 Learn from the family occupying my seat that I actually bought a ticket for the wrong day. Applying a discount code didn’t keep the date and time I selected, and I didn’t notice until it was too late. Sit in a different seat and wonder what the fare inspector will think.
  10. 0950 Start working from my laptop. Fear of authority can build on its own time, I have emails to answer and bugs to shuffle.
  11. 1030 Fare inspector finally gets around to me as my nervousness peaks. Says they’ll call it in and there might be an adjustment charge to reschedule it.
  12. 1115 Well into Toronto, the fare inspector just drops my ticket into my lap on their way to somewhere else. I… guess everything’s fine?
  13. 1127 Train arrives at Toronto Union Station. Disconnect WiFi, disembark and start walking to the office. (Public transit would be slower, and I’m saving my TTC token for tonight’s trip)
  14. 1145 Arrive at MoTo just in time for lunch.

Total time to get to Mozilla Toronto: 3h45min. Total distance traveled: 95km. Total cost: $26 for the Via Rail ticket, $2.86 for the GRT city bus.

The way back wasn’t very much better. I had to duck out of dinner at 8pm to have a hope of getting home before the day turned into tomorrow:

  1. 2000 Leave the team dinner, say goodnights. Start walking to the subway
  2. 2012 At the TTC subway stop learn that the turnstiles don’t take tokens any more. Luckily there’s someone in the booth to take my fare.
  3. 2018 Arrive at Union station and get lost in the construction. I thought the construction was done (the construction is never done).
  4. 2025 Ask at the PRESTO counter how to use PRESTO properly. I knew it was pay-by-distance but I was taking a train _and_ a bus, so I wasn’t sure if I needed to tap in between the two modes (I do. Tap before the train, after the train, on the bus when you get on, and on the bus when you get off. Seems fragile, but whatever).
  5. 2047 Learn that the train’s been rescheduled 6min later. Looks like I can still make my bus connection in Bramalea.
  6. 2053 Tap on the thingy, walk up the flights of stairs to the train, find a seat.
  7. 2102 “Due to platform restrictions, the doors on car 3107 will not open at Bramalea”… what car am I on? There’s no way to tell from where I’m sitting.
  8. 2127 Arrive at Bramalea. I’m not on car 3107.
  9. 2130 Learn that there’s one correct way to leave the platform and I took the other one that leads to the parking lot. Retrace my steps.
  10. 2132 Tap the PRESTO on the thingy outside the station building (closed)
  11. 2135 Tap the PRESTO on the thingy inside the bus. BEEP BEEP. Bus driver says insufficient funds. That can’t be, I left myself plenty of room. Tick tock.
  12. 2136 Cold air aching in my lungs from running I load another $20 onto the PRESTO
  13. 2137 Completely out of breath, tap the PRESTO on the thingy inside the bus. Ding. Collapse in a seat. Bus pulls out just moments later.
  14. 2242 Arrive in Kitchener. Luckily the LRT, running at 30min headways, is 2min away. First good connection of the day.
  15. 2255 This is the closest the train can get me. There’s a 15min wait (5 of which I’ll have to walk in the cold to get to the stop) for a bus that’ll get me, in 7min, within a 10min walk from home. I decide to walk instead, as it’ll be faster.
  16. 2330 Arrive home.

Total time to get home: 3h30min. Total distance traveled: 103km. Total cost: $3.10 for the subway token, $46 PRESTO ($6 for the card, $20 for the fare, $20 for the surprise fare), $2.86 for the LRT.

At this point I’ve been awake for over 20 hours.

Is it worth it? Hard to say. Every time I plan one of these trips I look forward to it. Conversations with office folks, eating office lunch, absconding with office snacks… and this time I even got to go out to dinner with a bunch of data people I work with all the time!

But every time I do this, as I’m doing it, or as I’m recently back from doing it… I don’t feel great about it. It’s essentially a full work day (nearly eight full hours!) just in travel to spend 5 hours in the office, and (this time) a couple hours afterwards in a restaurant.

Ultimately this — the share of my brain I need to devote purely to logistics, the manifold ways things can go wrong, the sheer _time_ it all takes — is why I don’t go into the office more often.

And the people are the reason I do it at all.

:chutten

Categories: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla Mornings on the future of openness and data access in the EU

Mon, 18/11/2019 - 10:51

On 10 December, Mozilla will host the next installment of our Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

The next installment will focus on openness and data access in the European Union. We’re bringing together an expert panel to discuss how the European Commission should approach a potential framework on data access, sharing and re-use.

Speakers

Agustín Reyna
Head of Legal and Economic Affairs
BEUC, the European Consumer Organisation

Benjamin Ledwon
Head of Brussels Office
Bitkom

Maud Sacquet
Public Policy Manager
Mozilla Corporation

Moderated by Jennifer Baker
EU tech journalist

Logistical information

10 December, 2019
08:30 – 10:30
The Office cafe, Rue d’Arlon 80, Brussels 1040

Register your attendance here

 

The post Mozilla Mornings on the future of openness and data access in the EU appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

QMO: Firefox 71 Beta 12 Testday – November 22nd

Mon, 18/11/2019 - 10:07

Hello Mozillians,

We are happy to let you know that Friday, November 22nd, we are organizing the Firefox 71 Beta 12 Testday. We’ll be focusing our testing on: Inactive CSS.

Check out the detailed instructions via this gdoc.

*Note that these events are no longer held on etherpad docs since public.etherpad-mozilla.org was disabled.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Categories: Mozilla-nl planet

Botond Ballo: Trip Report: C++ Standards Meeting in Belfast, November 2019

Fri, 15/11/2019 - 16:00
Summary / TL;DR

Project | What’s in it? | Status
C++20 | See below | On track
Library Fundamentals TS v3 | See below | Under development
Concepts | Constrained templates | In C++20
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Published!
Executors | Abstraction for where/how code runs in a concurrent context | Targeting C++23
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Published! Not in C++20.
Ranges | Range-based algorithms and views | In C++20
Coroutines | Resumable functions (generators, tasks, etc.) | In C++20
Modules | A component system to supersede the textual header file inclusion model | In C++20
Numerics TS | Various numerical facilities | Under active development
C++ Ecosystem TR | Guidance for build systems and other tools for dealing with Modules | Under active development
Contracts | Preconditions, postconditions, and assertions | Under active development
Pattern matching | A match-like facility for C++ | Under active development
Reflection TS | Static code reflection mechanisms | Publication imminent
Reflection v2 | A value-based constexpr formulation of the Reflection TS facilities | Under active development
Metaclasses | Next-generation reflection facilities | Early development

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of November 25, 2019). If you encounter such a link, please check back in a few days.

Introduction

Last week I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Belfast, Northern Ireland. This was the third and last committee meeting in 2019; you can find my reports on preceding meetings here (July 2019, Cologne) and here (February 2019, Kona), and previous ones linked from those. These reports, particularly the Cologne one, provide useful context for this post.

At the last meeting, the committee approved and published the C++20 Committee Draft (CD), a feature-complete draft of the C++20 standard which includes wording for all of the new features we plan to ship in C++20. The CD was then sent out to national standards bodies for a formal ISO ballot, where they have the opportunity to file technical comments on it, called “NB (national body) comments”.

We have 10-15 national standards bodies actively participating in C++ standardization, and together they have filed several hundred comments on the CD. This meeting in Belfast was the first of two ballot resolution meetings, where the committee processes the NB comments and approves any changes to the C++20 working draft needed to address them. At the end of the next meeting, a revised draft will be published as a Draft International Standard (DIS), which will likely be the final draft of C++20.

NB comments typically ask for bug and consistency fixes related to new features added to C++20. Some of them ask for fixes to longer-standing bugs and consistency issues, and some for editorial changes such as fixes to illustrative examples. Importantly, they cannot ask for new features to be added (or at least, such comments are summarily rejected, though the boundary between bug fix and feature can sometimes be blurry).

Occasionally, NB comments ask for a newly added feature to be pulled from the working draft due to it not being ready. In this case, there were comments requesting that Modules and Coroutines (among other things) be postponed to C++23 so they can be better-baked. I’m pleased to report that no major features were pulled from C++20 at this meeting. In cases where there were specific technical issues with a feature, we worked hard to address them. In cases of general “this is not baked yet” comments, we did discuss each one (at length in some cases), but ultimately decided that waiting another 3 years was unlikely to be a net win for the community.

Altogether, over half of the NB comments have been addressed at this meeting, putting us on track to finish addressing all of them by the end of the next meeting, as per our standardization schedule.

While C++20 NB comments were prioritized above all else, some subgroups did have time to process C++23 proposals as well. No proposals were merged into the C++23 working draft at this time (in fact, a “C++23 working draft” doesn’t exist yet; it will be forked from C++20 after the C++20 DIS is published at the end of the next meeting).

Procedural Updates

A few updates to the committee’s structure and how it operates:

  • As the Networking TS prepares to be merged into C++23, it has been attracting more attention, and the committee has been receiving more networking-related proposals (notable among them, one requesting that networking facilities be secure by default), so the Networking Study Group (SG4) has been re-activated so that a dedicated group can give these proposals the attention they deserve.
  • An ABI Review Group (ARG) was formed, comprised of implementors with ABI-related expertise on various different platforms, to advise the committee about the ABI impacts of proposed changes. The role of this group is not to set policy (such as to what extent we are willing to break ABI compatibility), but rather to make objective assessments of ABI impact on various platforms, which other groups can then factor into their decision-making.
  • Not something new, just a reminder: the committee now tracks its proposals in GitHub. If you’re interested in the status of a proposal, you can find its issue on GitHub by searching for its title or paper number, and see its status — such as which subgroups it has been reviewed by and what the outcome of the reviews were — there.
  • At this meeting, GitHub was also used to track NB comments, one issue per comment, and you can also see their status and resolution (if any) there.
Notes on this blog post

This blog post will be a bit different from previous ones. I was asked to chair the Evolution Working Group Incubator (EWG-I) at this meeting, which meant that (1) I was not in the Evolution Working Group (EWG) for most of the week, and thus cannot report on EWG proceedings in as much detail as before; and (2) the meeting and the surrounding time has been busier for me than usual, leaving less time for blog post writing.

As a result, in this blog post, I’ll mostly stick to summarizing what happened in EWG-I, and then briefly mention a few highlights from other groups. For a more comprehensive list of what features are in C++20, what NB comment resolutions resulted in notable changes to C++20 at this meeting, and which papers each subgroup looked at, I will refer you to the excellent collaborative Reddit trip report that fellow committee members have prepared.

Evolution Working Group Incubator (EWG-I)

EWG-I (pronounced “oogie” by popular consensus) is a relatively new subgroup, formed about a year ago, whose purpose is to give feedback on and polish proposals that include core language changes — particularly ones that are not in the purview of any of the domain-specific subgroups, such as SG2 (Modules), SG7 (Reflection), etc. — before they proceed to EWG for design review.

EWG-I met for two and a half days at this meeting, and reviewed 17 proposals. All of this was post-C++20 material.

I’ll go through the proposals that were reviewed, categorized by the review’s outcome.

Forwarded to EWG

The following proposals were considered ready to progress to EWG in their current state:

  • Narrowing contextual conversions to bool. This proposal relaxes a recently added restriction which requires an explicit conversion from integer types to bool in certain contexts. The motivation for the restriction was noexcept(), to remedy the fact that it was very easy to accidentally declare a function as noexcept(f()) (which means “the function is noexcept if f() returns a nonzero value”) instead of noexcept(noexcept(f())) (which means “the function is noexcept if f() doesn’t throw”), and this part of the restriction was kept. However, the proposal argued there was no need for the restriction to also cover static_assert and if constexpr. A short sketch of both cases follows this list.
  • Structured bindings can introduce a pack. This allows a structured binding declaration to introduce a pack, e.g. auto [...x] = f();, where f() is a function that returns a tuple or other decomposable object, and x is a newly introduced pack of bindings, one for each component of the object; the pack can then be expanded as x... just like a function parameter pack.
  • Reserving attribute names for future use. This reserves attribute names in the global attribute namespace, as well as the attribute namespace std (or std followed by a number) for future standardization.
  • Accessing object representations. This fixes a defect introduced in C++20 that makes it undefined behaviour to access the bytes making up an object (its “object representation”) by reinterpret_casting its address to char*.
  • move = relocates. This introduces “relocating move constructors”, which are move constructors declared using = relocates in place of = default. This generates the same implementation as for a defaulted move constructor, but the programmer additionally guarantees to the compiler that it can safely optimize a move followed by destruction of the old object into a memcpy of the bytes into the new location, followed by a memcpy of the bytes of a default-constructed instance of the type into the old location. This essentially allows compilers to optimize moves of many types (such as std::shared_ptr), as well as of arrays / vectors of such types, into memcpys. Currently, only types which have an explicit = relocates move constructor declaration are considered relocatable in this way, but the proposal is compatible with future directions where the relocatability of a type is inferred from that of its members (such as in this related proposal).
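
To make the first bullet concrete, here is a minimal sketch of both the noexcept pitfall that motivated the restriction and the two contexts the proposal relaxes. The names are made up for illustration, and whether the int-condition forms compile without an explicit != 0 depends on how much of the relaxation your compiler implements:

constexpr int f() { return 1; }

// The pitfall: this says "g() is noexcept because f() returns a nonzero value",
// not "g() is noexcept if f() doesn't throw". The restriction is kept for noexcept.
// int g() noexcept(f());

int g() noexcept(noexcept(f()));   // what was almost certainly intended

// The contexts the proposal relaxes: under the strict C++20 wording, both of
// these would need an explicit `flags != 0`.
template <int flags>
void configure() {
    static_assert(flags, "flags must be nonzero");
    if constexpr (flags) {
        // ...
    }
}
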
Forwarded to EWG with modifications

For the following proposals, EWG-I suggested specific revisions, or adding discussion of certain topics, but felt that an additional round of EWG-I review would not be helpful, and the revised paper should go directly to EWG:

  • fiber_context – fibers without scheduler. This is the current formulation of “stackful coroutines”, or rather a primitive on top of which stackful coroutines and other related things like fibers can be built. It was seen by EWG-I so that we can brainstorm possible interactions with other language features. TLS came up, as discussed in more detail in the EWG section.
  • Making operator?: overloadable. See the paper for motivations, which include SIMD blend operations and expression template libraries. The biggest sticking point here is that we don’t yet have a language mechanism for making the operands lazily evaluated, the way the built-in operator behaves. However, not all use cases want lazy evaluation; moreover, the logical operators (&& and ||) already have this problem. EWG-I considered several mitigations for this, but ultimately decided to prefer an unrestricted ability to overload this operator, relying on library authors to choose wisely whether or not to overload it.
  • Make declaration order layout mandated. This is largely standardizing existing practice: compilers technically have the freedom to reorder class fields with differing access specifiers, but none are known to do so, and this is blocking future evolution paths for greater control over how fields are laid out in memory. The “modification” requested here is simply to catalogue the implementations that have been surveyed for this.
  • Portable optimisation hints. This standardizes __builtin_assume() and similar facilities for giving the compiler a hint it can use for optimization purposes. EWG-I expressed a preference for an attribute-based ([[assume(expr)]]) syntax. A tiny sketch of such a hint follows this list.
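
As a small illustration of what such a hint buys the optimizer (the attribute spelling below is only the one EWG-I preferred, so you cannot rely on it yet; today you would use a compiler-specific builtin such as Clang’s __builtin_assume or MSVC’s __assume, and the function name here is made up):

int bucket_index(int hash) {
    [[assume(hash >= 0)]];   // proposed spelling; compiler-specific builtins today
    // Knowing hash is non-negative lets the compiler turn this signed division
    // into a simple shift, without the fix-up code needed for negative values.
    return hash / 64;
}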

Note that almost all of the proposals that were forwarded to EWG have been seen by EWG-I at a previous meeting, sometimes on multiple occasions, and revised since then. It’s rare for EWG-I to forward an initial draft (“R0”) of a proposal; after all, its job is to polish proposals and save time in EWG as a result.

Forwarded to another subgroup

The following proposals were forwarded to a domain-specific subgroup:

  • PFA: a generic, extendable and efficient solution for polymorphic programming. This proposed a mechanism for generalized type erasure, so that types like std::function (which is a type-erased wrapper for callable objects) can easily be built for any interface. EWG-I forwarded this to the Reflection Study Group (SG7) because the primary core language facility involves synthesizing a new type (the proxy / wrapper) based on an existing one (the interface). EWG-I also recommended expressing the interface as a regular type, rather than introducing a new facade entity to the language.
Feedback given

For the following proposals, EWG-I gave the author feedback, but did not consider it ready to forward to another subgroup. A revised proposal would come back to EWG-I.

No proposals were outright rejected at this meeting, but the nature of the feedback did vary widely, from requesting minor tweaks, to suggesting a completely different approach to solving the problem.

  • Provided operator= returns lvalue-ref on an rvalue. This attempts to rectify a long-standing inconsistency in the language, where operator= for a class type can be called on temporaries, which is not allowed for built-in types; this can lead to accidental dangling. EWG-I agreed that it would be nice to resolve this, but asked the author to assess how much code this would break, so we can reason about its feasibility.
  • Dependent static assertion. The problem this tries to solve is that static_assert(false) in a dependent context fails eagerly, rather than being delayed until instantiation. The proposal introduces a new syntax, static_assert<T>(false), where T is some template parameter that’s in scope, for the delayed behaviour. EWG-I liked the goal, but not the syntax. Other approaches were discussed as well (such as making static_assert(false) itself have the delayed behaviour, or introducing a static_fail() operator), but did not have consensus. (The sketch after this list shows the problem and today’s common workaround.)
  • Generalized pack declaration and usage. This is an ambitious proposal to make working with packs and pack-like types much easier in the language; it would allow drastically simplifying the implementations of types like tuple and variant, as well as making many compile-time programming tasks much easier. A lot of the feedback concerned whether packs should become first-class language entities (a “language tuple” of sorts, as previously proposed), or remain closer to their current role as dependent constructs that only become language entities after expansion.
  • Just-in-time compilation. Another ambitious proposal, this takes aim at use cases where static polymorphism (i.e. use of templates) is desired for performance, but the parameters (e.g. the dimensions of a matrix) are not known at compile time. Rather than being a general-purpose JIT or eval() like mechanism, the proposal aims to focus on the ability to instantiate some templates at runtime. EWG-I gave feedback related to syntax, error handling, restricting runtime parameters to non-types, and consulting the Reflection Study Group.
  • Interconvertible object representations. This proposes a facility to assert, at compile time, that one type (e.g. a struct containing two floats) has the same representation in memory as another (e.g. an array of two floats). EWG-I felt it would be more useful if the proposed annotation would actually cause the compiler to use the target layout.
  • Language support for class layout control. This aims to allow the order in which class members are laid out in memory to be customized. It was reviewed by SG7 (Reflection) as well, which expressed a preference for performing the customization in library code using reflection facilities, rather than having a set of built-in layout strategies defined by the core language. EWG-I preferred a keyword-based annotation syntax over attributes, though metaclasses might obviate the need for a dedicated syntax.
  • Epochs: a backward-compatible language evolution mechanism. This was probably the most ambitious proposal EWG-I looked at, and definitely the one that attracted the largest audience. It proposes a mechanism similar to Rust’s editions for evolving the language in ways we have not been able to so far. Central to the proposal is the ability to combine different modules which use different epochs in the same program. This generated a lot of discussion around the potential for fracturing the language into dialects, the granularity at which code opts into an epoch, and what sorts of new features should be allowed in older epochs, among other topics.
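
For readers unfamiliar with the static_assert(false) problem mentioned above, here is a minimal sketch of it together with the workaround commonly used today (the dependent_false helper and the serialize function are illustrative; the helper is a widespread idiom, not part of the proposal):

#include <type_traits>

template <typename>
inline constexpr bool dependent_false = false;   // common workaround idiom

template <typename T>
void serialize(const T& value) {
    if constexpr (std::is_arithmetic_v<T>) {
        // ... write the value ...
    } else {
        // Ill-formed even if this branch is never instantiated: the condition
        // isn't dependent, so the assertion fires eagerly.
        // static_assert(false, "serialize: unsupported type");

        // Today's workaround: make the condition depend on T so the failure is
        // delayed until an offending instantiation actually happens.
        static_assert(dependent_false<T>, "serialize: unsupported type");
    }
}
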
Thoughts on the role of EWG-I

Having spent time in both EWG and EWG-I, one difference that’s apparent is that EWG is the tougher crowd: features that make it successfully through EWG-I are often still shot down in EWG, sometimes on their first presentation. If EWG-I’s role is to act as a filter for EWG, it is effective in that role already, but there is probably potential for it to be more effective.

One dynamic that you often see play out in the committee is the interplay between “user enthusiasm” and “implementer skepticism”: users are typically enthusiastic about new features, while implementers will often try to curb enthusiasm and be more realistic about a feature’s implementation costs, interactions with other features, and source and ABI compatibility considerations. I’d say that EWG-I tends to skew more towards “user enthusiasm” than EWG does, hence the more permissive outcomes. I’d love for more implementers to spend time in EWG-I, though I do of course realize they’re in short supply and are needed in other subgroups.

Evolution Working Group

As mentioned, I didn’t spend as much time in EWG as usual, but I’ll call out a few of the notable topics that were discussed while I was there.

C++20 NB comments

As with all subgroups, EWG prioritized C++20 NB comments first.

  • The C++20 feature that probably came closest to removal at this meeting was class types as non-type template parameters (NTTPs). Several NB comments pointed out issues with their current specification and asked for either the issues to be resolved, or the feature to be pulled. Thankfully, we were able to salvage the feature. The fix approach involves axing the feature’s relationship with operator==, and instead having template argument equivalence be based on a structural identity, essentially a recursive memberwise comparison. This allows a larger category of types to be NTTPs, including unions, pointers and references to subobjects, and, notably, floating-point types. For class types, only types with public fields are allowed at this time, but future directions for opting in types with private fields are possible. (A tiny example follows this list.)
  • Parenthesized initialization of aggregates also came close to being removed but was fixed instead.
  • A suggestion to patch a functionality gap in std::is_constant_evaluated() by introducing a new syntactic construct if consteval was discussed at length but rejected. The feature may come back in C++23, but there are enough open design questions that it’s too late for C++20.
  • To my mild (but pleasant) surprise, ABI isolation for member functions, a proposal which divorces a method’s linkage from whether it is physically defined inline or out of line, and which was previously discussed as something that’s too late for C++20 but which we could perhaps sneak in as a Defect Report after publication, was now approved for C++20 proper. (It did not get to a plenary vote yet, where it might be controversial.)
  • A minor consistency fix between constexpr and consteval was approved.
  • A few Concepts-related comments:
    • The ability to constrain non-templated functions was removed because their desired semantics were unclear. They could come back in C++23 with clarified semantics.
    • One remaining visual ambiguity in Concepts is that in a template parameter list, Foo Bar can be either a constrained type template parameter (if Foo names a concept) or a non-type template parameter (if Foo names a type). The compiler knows which by looking up Foo, but a reader can’t necessarily tell just by the syntax of the declaration. A comment proposed resolving this by changing the syntax to Foo auto Bar for the type parameter case (similar to the syntax for abbreviated function templates). There was no consensus for this change; a notable counter-argument is that the type parameter case is by far the more common one, and we don’t want to make the common case more verbose (and the non-type syntax can’t be changed because it’s pre-existing).
    • Another comment pointed out that Concept<X> can also mean two different things: a type constraint (which is satisfied by a type T if Concept<T, X> is true), or an expression which evaluates Concept applied to the single argument X. The comment suggested disambiguating by e.g. changing the first case to Concept<, X>, but there was no consensus for this either.
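
To make the class-type NTTP resolution concrete, here is a tiny sketch of the kind of type it allows (the names are made up; the key point is that all members are public and themselves structural):

struct Color {
    unsigned char r, g, b;   // all public, all structural: usable as an NTTP
};

template <Color C>
struct Pixel { /* ... */ };

// Equivalence is a recursive memberwise comparison; no user-provided
// operator== is consulted.
Pixel<Color{255, 0, 0}> a;
Pixel<Color{255, 0, 0}> b;   // same specialization as `a`
Pixel<Color{0, 0, 255}> c;   // a different specialization

template <double Scale>      // floating-point NTTPs are now allowed too
struct Scaled { /* ... */ };
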
Post-C++20 material

Having gotten through all Evolutionary NB comments, EWG proceeded to review post-C++20 material. Most of this had previously gone through EWG-I (you might recognize a few that I mentioned above because they went through EWG-I this week).

  • (Approved) Reserving attribute names for future use. In addition to approving this for C++23, EWG also approved it as a C++20 Defect Report. (Thanks to Erich Keane for pointing that out!)
  • (Approved) Accessing object representations. EWG agreed with the proposal’s intent and left it to the Core Working Group to figure out the exact way to specify this intent.
  • (Further work) std::fiber_context – stackful context switching. This was discussed at some length, with at least one implementer expressing significant reservations due to the feature’s interaction with thread-local storage (TLS). Several issues related to TLS were raised, such as the fact that compilers can cache pointers to TLS across function calls, and if a function call executes a fiber switch that crosses threads (i.e. the fiber is resumed on a different OS thread), the cache becomes invalidated without the compiler having expected that; addressing this at the compiler level would be a performance regression even for code that doesn’t use fibers, because the compiler would need to assume that any out-of-line function call could potentially execute a fiber switch. A possible alternative that was suggested was to have a mechanism for a user-directed kernel context switch that would allow coordinating threads of execution (ToEs) in a co-operative way without needing a distinct kind of ToE (namely, fibers).
  • (Further work) Structured bindings can introduce a pack. EWG liked the direction, but some implementers expressed concerns about the implementation costs, pointing out that in some implementations, handling of packs is closely tied to templates, while this proposal would allow packs to exist outside of templates. The author and affected implementers will discuss the concerns offline.
  • (Further work) Automatically generate more operators. This proposal aims to build on the spaceship operator’s model of rewriting operators (e.g. rewriting a < b to a <=> b < 0), and allow other kinds of rewriting, such as rewriting a += b to a = a + b. EWG felt any such facility should be strictly opt-in (e.g. you could give your class an operator+=(...) = default to opt into this rewriting, but it wouldn’t happen by default), with the exception of rewriting a->b to (*a).b (and the less common a->*b to (*a).*b) which EWG felt could safely happen by default. (The spaceship-style rewriting this builds on is sketched just after this list.)
  • (Further work) Named character escapes. This would add a syntax for writing unicode characters in source code by using their descriptive names. Most of the discussion concerned the impacts of implementations having to ship a unicode database containing such descriptive names. EWG liked the direction but called for further exploration to minimize such impacts.
  • (Further work) tag_invoke. This concerns making it easier to write robust customization points for library facilities. There was a suggestion of trying to model the desired operations more directly in the language, and EWG suggested exploring that further.
  • (Rejected) Homogeneous variadic function parameters. This would have allowed things like template <typename T> void foo(T...); to mean “foo is a function template that takes zero or more parameters, all of the same type T“. There were two main arguments against this. First, it would introduce a novel interpretation of template-ids (foo<int> no longer names a single specialization of foo, it names a family of specializations, and there’s no way to write a template-id that names any individual specialization). The objection that seems to have played the larger role in the proposal’s rejection, however, is that it breaks the existing meaning of e.g. (int...) as an alternative way of writing (int, ...) (meaning, an int parameter followed by C-style variadic parameters). While the (int...) form is not allowed in C (and therefore, not used by any C libraries that a C++ project might include), apparently a lot of old C++ code uses it. The author went to some lengths to analyze a large dataset of open-source C++ code for occurrences of such use (of which there were vanishingly few), but EWG felt this wasn’t representative of the majority of C++ code out there, most of which is proprietary.
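
For context on the rewriting model that the “more operators” proposal builds on, here is what C++20 already does for comparisons (a standard-conforming sketch with made-up names); the proposal would extend this style of rewriting, strictly opt-in, to operators like +=:

#include <compare>

struct Version {
    int major = 0;
    int minor = 0;
    // Defaulting <=> lets the compiler rewrite the relational operators.
    auto operator<=>(const Version&) const = default;
};

bool older(Version a, Version b) {
    return a < b;   // rewritten by the compiler to (a <=> b) < 0
}
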
Other Highlights
  • Ville Voutilainen’s paper proposing a high-level direction for C++23 was reviewed favourably by the committee’s Direction Group.
  • While I wasn’t able to attend the Reflection Study Group (SG7)’s meeting (it happened on a day EWG-I was in session), I hear that interesting discussions took place. In particular, a proposal concerning side effects during constant evaluation prompted SG7 to consider whether we should revise the envisioned metaprogramming model and take it even further in the direction of “compile-time programming is like regular programming”, such that if you e.g. wanted compile-time output, then rather than using a facility like the proposed constexpr_report, you could just use printf (or whatever you’d use for runtime output). The Circle programming language was pointed to as prior art in this area. SG7 did not make a decision about this paradigm shift, just encouraged exploration of it.
  • The Concurrency Study Group (SG1) came to a consensus on a design for executors. (Really, this time.)
  • The Networking Study Group (SG4) pondered whether C++ networking facilities should be secure by default. The group felt that we should standardize facilities that make use of TLS if and when they are ready, but not block networking proposals on it.
  • The deterministic exceptions proposal was not discussed at this meeting, but one of the reactions that its previous discussions have provoked is a resurgence in interest in better optimizing today’s exceptions. There was an evening session on this topic, and benchmarking and optimization efforts were discussed.
  • web_view was discussed in SG13 (I/O), and I relayed some of Mozilla’s more recent feedback; the proposal continues to enjoy support in this subgroup. The Library Evolution Incubator did not get a chance to look at it this week.
Next Meeting

The next meeting of the Committee will be in Prague, Czech Republic, the week of February 10th, 2020.

Conclusion

This was an eventful and productive meeting, as usual, with the primary accomplishment being improving the quality of C++20 by addressing national body comments. While C++20 is feature-complete and thus no new features were added at this meeting, we did solidify the status of recently added features such as Modules and class types as non-type template parameters, greatly increasing the chances of these features remaining in the draft and shipping as part of C++20.

There is a lot I didn’t cover in this post; if you’re curious about something I didn’t mention, please feel free to ask in a comment.

Other Trip Reports

In addition to the collaborative Reddit report which I linked to earlier, here are some other trip reports of the Belfast meeting that you could check out:


Karl Dubost: Best viewed with… Mozilla Dev Roadshow Asia 2019

vr, 15/11/2019 - 09:26

I was invited by Sandra Persing to participate in the Mozilla Developer Roadshow 2019 in Asia. The event is going through 5 cities: Tokyo, Seoul, Taipei, Singapore, Bangkok. I committed to the Tokyo and Seoul stops. The other speakers are still on the road. As I'm writing this, they are speaking in Taipei, while I'm back home.

Let's go through the talk and then some random notes about the audience, people and cities.

The Webcompat Talk

The talk was half about webcompat and half about the tools that help developers using Firefox avoid Web compatibility issues. The part about the Firefox devtools was introduced by Daisuke. My notes here are a bit longer than what I actually said. I have the leisure of more space.

Let's talk about webcompat

Intro slide

Market dominance by one browser is not new. It has happened a couple of times already. An article by Wired from September 2008 (time flies!) has a nice graph of the browser market share over time. The first dominant browser was Netscape in 1996, the child of Mosaic. It already had influence on the other players. I remember the introduction of images and tables. For example, I remember a version of Netscape where we absolutely had to close the table element; if not, the full table was not displayed. This had consequences for the web developer job (at that time, we were webmasters). In this case the consequence was arguably a good one, by imposing cleaner, leaner code.

browser market shares

Then Internet Explorer entered the market and took it all. The browser being distributed with Windows, it became de facto the default engine in all work environments. Internet Explorer reached its dominance peak around 2003, then started to decline through the efforts of Firefox. Firefox never reached a peak (and that's good!). Its maximum market share worldwide was probably around 20%-30% in 2009. Since then, there has been a steady decline in Firefox's market share. The issue is not the loss of market share; the issue is dominance by one player, whichever player it is. I would not be comfortable having Firefox as a dominant browser either. A balance between all browsers is healthy.

note: World market shares are interesting, but they do not represent the full picture. There can be extreme diversity between markets. That was especially the case 10 years ago. A browser would have 80% of the market share in a specific country and 0% in another one. The issue is amplified by mobile operators. It happened in the Japanese market, which went from zero to a very high dominance of Safari on iOS, and then to a shared dominance between Chrome (Android) and Safari (iOS).

The promises of a website

Fantasy website

When a website is sold to a client, we sell the package: the look and feel, the design. In the case of web applications, performance, conversion rates, and user engagement pledges are added to the basket. We very rarely talk about the engine. And yet people are more and more conscious of the quality and origin of the food they buy, the sustainability of the materials used to build a house, the maintenance cost of their car.

There are a lot of omissions in what is promised to the client. This is accentuated by the complete absence of thinking about the resilience of the information contained on the website. Things change radically when we introduce the notions of archivability, resilience over time, and robustness across device diversity and obsolescence. These are interesting questions that should be part of the process of designing a website.

years ago websites were made of files; now they are made of dependencies. — Simon Pitt

A simple question to ask: "What does the content of this page become when the site is no longer updated and the server is no longer maintained?" A lot of valuable information disappears from the Web every day, just because we didn't ask the right questions.

But I'm drifting a bit from webcompat. Let's get back on track.

The reality of a website

And here the photo of an actual website.

mechanics workshop

It is dirty, messy, full of dependencies and forgotten bits. Sometimes different versions of the same library are used in the code, with conflicting ways of doing things. The CSS is botched, the JS is in a dire state of complexity, the HTML and accessibility are a deeply nested soup of code where the meaning has been totally forgotten. With the rise of frameworks such as ReactJS and their components, we fare a lot worse in terms of semantics than we did a couple of years ago with table layouts.

These big piles of code have consequences. Maintainability suffers. Web compatibility issues increase. By Web compatibility I mean the ability of a website to work correctly on any device, in any context. Not in the sense that it should look the same everywhere, but in the sense that any user should be able to perform the tasks they came to do.

Examples?

  • Misconfigured user agent sniffing creating a ping-pong game (HTTP/JS redirection) between a mobile and a desktop site.
  • User agent detection in JavaScript code to deliver a specific feature, which fails when the user agent changes or the browser is fixed.
  • Feature detection used to switch behaviour depending on the browser. window.event was not standard and was not implemented in Firefox for a long time. Webcompat issues pushed Mozilla to implement it to solve them. In return, it created new webcompat issues, because some sites were using this to mean "Firefox, not IE" and then chose between keyCode and charCode, which had yet another series of unintended consequences.
  • WebKit prefixes for CSS and JS… Circa 2015, 20% of Japanese and Chinese mobile websites were breaking in one way or another in Firefox on Android. It made it impossible to have a reasonable impact on the Japanese market or to create an alliance with an operator to distribute Firefox more widely. So some of the WebKit prefixes became part of the Web platform, because a new browser can't exist if it does not support these aliases. Forget about asking web developers to do the right thing: some sites are not maintained anymore, but users are still using them.

The list goes on.

The ultimate victim?

one person in the subway

The user who decided to use a specific browser for personal reasons is the victim. They are caught between a website not doing the right thing and a tool (the browser) not working properly. If you are not a user of the market's dominant browser, you are in for a bumpy ride on the Web. Your browser choice becomes more a conviction about doing the right thing than something that makes your life easier.

This should not be.

A form to save the Web

webcompat site screenshot

We, the Web compatibility team, created a form to help users report issues about websites working in one browser but not others. Reports can be made for any browser, not only Mozilla Firefox (which I work for). The majority of our issues come from Firefox for two reasons.

  1. Because Firefox does not have market dominance, web developers do not test in Firefox. This is even more acute on mobile.
  2. Bug reporting is integrated into the Firefox UI (Developer Edition and Nightly releases).

The issues are triaged by an amazing team of three people (Ciprian, Oana and Sergiu), and we ping the right people working for other browser companies to analyze the issues. I would love more participation from the other browsers, but right now it is too irregular to be effective.

On the Mozilla side, we have a good crew.

Diagnosis: a web plumbers team

person in a workshop

Every day, someone on the Mozilla webcompat team (Dennis, Ksenia, Thomas and myself) handles the diagnosis of incoming issues. We call this diagnosis. Remember the messy website picture? Well, we roll up our sleeves and dig right into the greasy bolts and screws. Minified code, broken code, unmaintained libraries, ill-defined CSS: we go through it and try to make sense of it, sometimes without access to the original sources.

Our goal is to determine if it's really one of these:

  • not a webcompat issue. There's a difference, but it is created by different widget rendering or by browser features specific to one vendor (font inflation, for example)
  • a bug in Gecko (core engine of Firefox) that needs to be fixed
  • a mistake of the website

Once the website has been diagnosed, we have options…

Charybdis and Scylla: Difficult or… dirty

chess player, people in phone booth, tools to package a box

The options once we know what the issue is are not perfect.

Difficult: We can try to do outreach. That means trying to find the owner of the website or the developer who created it. It's a very difficult task to discover who is in charge, and it depends heavily on the country and the type of business the site is doing. Contacting a bank site and getting it fixed is nearly impossible. Finding the Japanese developer of a big corporation's website is very hard (there is a lot of secrecy going around). It's a costly process, and the results are not always great. If you own a broken website, please contact us.

We should probably create a search engine for broken websites, so people can find their own sites.

Dirty: The other option is to fix things on the browser side. The ideal scenario is when there really is a bug in Firefox that Mozilla needs to fix. But sometimes the webcompat team pushes Gecko engineers to introduce a dirty fix into the code, so Firefox can be compatible with what a big website is doing. This can be a site intervention that modifies the site's code on the fly, or it can be a more general issue which requires fixing both the browser and the technical specification.

The specification… ??? Yes. Back to browser market share and its influence on the world. If a dominant browser implements the specification incorrectly, it doesn't matter what is right. Web developers will use the feature as it behaves in the dominant browser. This solidifies the technical behaviour of a feature, creates the burden of implementing a different behaviour on smaller browsers, and eventually leads to changing the specification to match what everyone was forced to implement.

The Web is amazing!

korean palace corridor and scissors

All of that said, the Web is an amazing place. It's a space which gave us the possibility to express ourselves, to discover each other, to work together, to create beautiful things on a daily basis. Just continue doing that, but be mindful that the web is for everyone.

Notes from a dilettantish speaker

I wanted to mention a couple of things, but I think I will do that in a separate blog post with other things that have happened this week.

Otsukare!


The Firefox Frontier: Here’s why pop culture and passwords don’t mix

do, 14/11/2019 - 19:40

Were they on a break or not?! For nearly a decade, Ross and Rachel’s on-screen relationship was a point of contention for millions of viewers around the world. It’s no … Read more

The post Here’s why pop culture and passwords don’t mix appeared first on The Firefox Frontier.


Mozilla Security Blog: Adding CodeQL and clang to our Bug Bounty Program

do, 14/11/2019 - 19:03

At Github Universe, Github announced the GitHub Security Lab, an initiative to help secure open source software alongside the community and an initial set of partners including Mozilla. As part of this announcement, Github is providing free access to CodeQL, a security research tool which makes it easier to identify flaws in open source software. Mozilla has used these tools privately for the past two years, and has been very impressed and hopeful about how these tools will improve software security. Mozilla recognizes the need to scale security to work automatically, and tighten the feedback loop in the development <-> security auditing/engineering process.

One of the ways we’re supporting this initiative at Mozilla is through renewed investment in automation and static analysis. We think the broader Mozilla community can participate, and we want to encourage it. Today, we’re announcing a new area of our bug bounty program to encourage the community to use the CodeQL tools.  We are exploring the use of CodeQL tools and will award a bounty – above and beyond our existing bounties – for static analysis work that identifies present or historical flaws in Firefox.

The highlights of the bounty are:

  • We will accept static analysis queries written in CodeQL or as clang-based checkers (clang analyzer, clang plugin using the AST API or clang-tidy).
  • Each previously unknown security vulnerability your query matches will be eligible for a bug bounty per the normal policy.
  • The query itself will also be eligible for a bounty, the amount dependent upon the quality of the submission.
  • Queries that match historical issues but do not find new vulnerabilities are eligible. This means you can look through our historical advisories to find examples of issues you can write queries for.
  • Mozilla and Github’s Bug Bounties are compatible, not exclusive, so if you meet the requirements of both, you are eligible to receive bounties from both. (More details below.)
  • The full details of this program are available at our bug bounty program’s homepage.

When fixing any security bug, retrospective is an important part of the remediation process which should provide answers to the following questions: Was this the only instance of this issue? Is this flaw representative of a wider systemic weakness that needs to be addressed? And most importantly: can we prevent an issue like this from ever occurring again? Variant analysis, driven manually, is usually the way to answer the first two questions. And static analysis, integrated in the development process, is one of the best ways to answer the third.

Besides our existing clang analyzer checks, we’ve made use of CodeQL over the past two years to do variant analysis. This tool allows identifying bugs both in the context of targeted, zero-false-positive queries, and more expansive results where the manual analysis starts from a more complete and less noise-filled point than simple string matching. To see examples of where we’ve successfully used CodeQL, we have a meta tracking bug that illustrates the types of bugs we’ve identified.

We hope that security researchers will try out CodeQL too, and share both their findings and their experience with us. And of course regardless of how you find a vulnerability, you’re always welcome to submit bugs using the regular bug bounty program. So if you have custom static analysis tools, fuzzers, or just the mainstay of grep and coffee – you’re always invited.

Getting Started with CodeQL

Github is publishing a guide covering how to use CodeQL at https://securitylab.github.com/tools/codeql

Getting Started with Clang Analyzer

We currently have a number of custom-written checks in our source tree. So the easiest way to write and run your query is to build Firefox, add ‘ac_add_options --enable-clang-plugin’ to your mozconfig, add your check, and then ‘./mach build’ again.

To learn how to add your check, you can review this recent bug that added a couple of new checks – it shows how to add a new plugin to Checks.inc, ChecksIncludes.inc, and additionally how to add tests. This particular plugin also adds a couple of attributes that can be used in the codebase, which your plugin may or may not need. Note that depending on how you view the diffs, it may appear that the author modified existing files, but actually they copied an existing file, then modified the copy.
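
If you have never written a clang-based check before, the following standalone sketch may help you picture what one looks like. To be clear, this is not Mozilla's in-tree plugin framework described above: it is a minimal LibTooling tool using the AST matcher API, the check itself (flagging calls to strcpy) and every name in it are made up for illustration, and the option-parsing API differs slightly between LLVM versions.

// strcpy_check.cpp -- minimal standalone AST-matcher tool (illustrative only).
#include "clang/AST/ASTContext.h"
#include "clang/AST/Expr.h"
#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"
#include "clang/Basic/Diagnostic.h"
#include "clang/Tooling/CommonOptionsParser.h"
#include "clang/Tooling/Tooling.h"
#include "llvm/Support/CommandLine.h"

using namespace clang;
using namespace clang::ast_matchers;
using namespace clang::tooling;

static llvm::cl::OptionCategory Category("strcpy-check options");

// Match every call whose callee is a function named "strcpy".
static const StatementMatcher StrcpyCall =
    callExpr(callee(functionDecl(hasName("strcpy")))).bind("call");

class StrcpyReporter : public MatchFinder::MatchCallback {
public:
  void run(const MatchFinder::MatchResult &Result) override {
    if (const auto *Call = Result.Nodes.getNodeAs<CallExpr>("call")) {
      DiagnosticsEngine &Diags = Result.Context->getDiagnostics();
      unsigned ID = Diags.getCustomDiagID(
          DiagnosticsEngine::Warning,
          "call to strcpy; consider a bounded alternative");
      Diags.Report(Call->getBeginLoc(), ID);
    }
  }
};

int main(int argc, const char **argv) {
  // Note: older LLVM releases use a public CommonOptionsParser constructor
  // instead of the create() factory; adjust for the version you build against.
  auto Options = CommonOptionsParser::create(argc, argv, Category);
  if (!Options) {
    llvm::errs() << Options.takeError();
    return 1;
  }
  ClangTool Tool(Options->getCompilations(), Options->getSourcePathList());

  StrcpyReporter Reporter;
  MatchFinder Finder;
  Finder.addMatcher(StrcpyCall, &Reporter);
  return Tool.run(newFrontendActionFactory(&Finder).get());
}

In Mozilla's tree, the matcher and the diagnostic would instead live in a check registered through Checks.inc and ChecksIncludes.inc, as shown in the example bug linked above.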

Future of CodeQL and clang within our Bug Bounty program

We retain the ability to be flexible. We’re planning to evaluate the effectiveness of the program when we reach $75,000 in rewards or after a year. After all, this is something new for us and for the bug bounty community. We—and Github—welcome your communication and feedback on the plan, especially candid feedback. If you’ve developed a query that you consider more valuable than what you think we’d reward – we would love to hear that. (If you’re keeping the query, hopefully you’re submitting the bugs to us so we can see that we are not meeting researcher expectations on reward.) And if you spent hours trying to write a query but couldn’t get over the learning curve – tell us and show us what problems you encountered!

We’re excited to see what the community can do with CodeQL and clang; and how we can work together to improve on our ability to deliver a browser that answers to no one but you.

The post Adding CodeQL and clang to our Bug Bounty Program appeared first on Mozilla Security Blog.


Mozilla Addons Blog: 2019 Add-ons Community Meetup in London

do, 14/11/2019 - 17:15

At the end of October, the Firefox add-ons team hosted a day-long meetup with a group of privacy extension developers as part of the Mozilla Festival in London, UK. With 2019 drawing to a close, this meetup provided an excellent opportunity to hear feedback from developers involved in the Recommended Extensions program and to get input about some of our plans for 2020.

Recommended Extensions

Earlier this summer we launched the Recommended Extensions program to provide Firefox users with a list of curated extensions that meet the highest standards of security, utility, and user experience. Participating developers agree to actively maintain their extensions and to have each new version undergo a code review. We invited a handful of Recommended developers to attend the meetup and gather their feedback about the program so far. We also discussed more general issues around publishing content on addons.mozilla.org (AMO), such as ways of addressing user concerns over permission prompts.

Scott DeVaney, Senior Editorial & Campaign Manager for AMO, led a session on ways developers can improve a few key experiential components of their extensions. These tips may be helpful to the developer community at large:

  • AMO listing page. Use clear, descriptive language to convey exactly what your extension does and how it benefits users. Try to avoid overly technical jargon that average users might not understand. Also, screenshots are critical. Be sure to always include updated, relevant screenshots that really capture your extension’s experience.
  • Extension startup/post-install experience. First impressions are really important. Developers are encouraged to take great care in how they introduce new users to their extension experience. Is it clear how users are supposed to engage with the content? Or are they left to figure out a bunch of things on their own with little or no guidance? Conversely, is the guidance too cumbersome (i.e. way too much text for a user to comfortably process?)
  • User interface. If your extension involves customization options or otherwise requires active user engagement, be sure your settings management is intuitive and all UI controls are obvious.

  • Monetization. It is of course entirely fine for developers to solicit donations for their work or possibly even charge for a paid service. However, monetary solicitation should be tastefully appropriate. For instance, some extensions solicit donations just after installation, which makes little sense given the extension hasn’t proven any value to the user yet. We encourage developers to think through their user experience to find the most compelling moments to ask for donations or attempt to convert users to a paid tier.

WebExtensions API and Manifest v3

One of our goals for this meetup was to learn more about how Firefox extension developers will be affected by Chrome’s proposed changes to their extensions API (commonly referred to as Manifest v3).  As mentioned in our FAQ about Manifest v3, Mozilla plans to adopt some of these changes to maintain compatibility for developers and users, but will diverge from Chrome where it makes sense.

Much of the discussion centered around the impact of changes to the `blocking webRequest` API and replacing background scripts with service workers. Attendees outlined scenarios where changes in those areas will cause breakage to their extensions, and the group spent some time exploring possible alternative approaches for Firefox to take. Overall, attendees agreed that Chrome’s proposed changes to host permission requests could give users more say over when extensions can run. We also discussed ideas on how the WebExtensions API could be improved in light of the goals Manifest v3 is pursuing.

More information about changes to the WebExtensions API for Manifest v3 compatibility will be available in early 2020. Many thanks to everyone who has contributed to this conversation over the last few months on our forums, mailing list, and blogs!

Firefox for Android

We recently announced that Firefox Preview, Mozilla’s next generation browser for Android built on GeckoView, will support extensions through the WebExtensions API. Members of the Android engineering team will build select APIs needed to initially support a small set of Recommended Extensions.

The group discussed a wishlist of features for extensions on Android, including support for page actions and browser actions, history search, and the ability to manipulate context menus. These suggestions will be considered as work on Firefox Preview moves forward.

Thank you

Many thanks to the developers who joined us for the meetup. It was truly a pleasure to meet you in person and to hear first hand about your experiences.

The add-ons team would also like to thank Mandy Chan for making us feel at home in Mozilla’s London office and all of her wonderful support during the meetup.

The post 2019 Add-ons Community Meetup in London appeared first on Mozilla Add-ons Blog.


Hacks.Mozilla.Org: Thermostats, Locks and Extension Add-ons – WebThings Gateway 0.10

do, 14/11/2019 - 16:38

Happy Things Thursday! Today we are releasing WebThings Gateway 0.10. If you have a gateway using our Raspberry Pi builds then it should already have automatically updated itself.

This new release comes with support for thermostats and smart locks, as well as an updated add-ons system including extension add-ons, which enable developers to extend the gateway user interface. We’ve also added localisation settings so that you can choose your country, language, time zone and unit preferences. From today you’ll be able to use the gateway in American English or Italian, but we’re already receiving contributions of translations in different languages!

Thermostat and lock in Things UI

Thermostats

Version 0.10 comes with support for smart thermostats like the Zigbee Zen Thermostat, the Centralite HA 3156105 and the Z-Wave Honeywell TH8320ZW1000.

Thermostat UI

You can view the current temperature of your home remotely, set a heating or cooling target temperature and set the current heating mode. You can also create rules which react to temperature or control your heating/cooling via the rules engine. In this way, you could set the heating to come on at a particular time of day or change the colour of lights based on how warm it is, for example.

Smart Locks

Ever wonder if you’ve forgotten to lock your front door? Now you can check when you get to work, and even lock or unlock the doors remotely. With the help of the rules engine, you can also set rules to lock doors at a particular time of day or notify you when they are unlocked.

Lock UI

So far we have support for Zigbee and Z-Wave smart locks like the Yale YRD226 Deadbolt and Yale YRD110 Deadbolt.

Extension Add-ons

Version 0.10 also comes with a revamped add-ons system which includes a new type of add-on called extensions. Like a browser extension, an extension add-on can be used to augment the gateway’s user interface.

For example, an extension can add its own entry in the gateway’s main menu and display its own dedicated screen with new functionality.

Together with a new mechanism for add-on developers to extend the gateway’s REST API, this opens up a whole new world of possibilities for add-on developers to customise the gateway.

Note that the updated add-ons system comes with a new manifest format inspired by Web Extensions. Michael Stegeman’s blog post explains in more depth how to use the new add-ons system. We’ll walk you through building your own extension add-on.

Localisation Settings

Many add-ons use location-specific data like weather, sunrise/sunset and tide times, but it’s no fun to have to configure your location for each add-on. It’s now possible to choose your country, time zone and language via the gateway’s web interface.

With time zone support, time-based rules should now correctly adjust for daylight savings time in your region. Since the gateway is configured to use Greenwich Mean Time by default, your rules may show times you didn’t expect at first. To fix this, you’ll need to set your time zone appropriately and adjust your rule times. You can also set your preference of unit used to display temperature, to either degrees Celsius or Fahrenheit.

And finally, many of you have asked for the user interface to support multiple languages. We are shipping with an Italian translation in this release thanks to our resident Italian speaker Kathy. We already have French, Dutch and Polish translations in the pipeline thanks to our wonderful community. Stand by for more information on how to contribute to translations in your language!

API Changes & Standardisation

For developers, in addition to the new add-ons system, it’s now possible to communicate with all the gateway’s web things via a single WebSocket connection. Previously it was necessary to open a WebSocket per device, so this is a significant enhancement.

We’ve recently started the Web Thing Protocol Community Group at the W3C with the intention of standardising this WebSocket sub-protocol in order to further improve interoperability on the Web of Things. We welcome developers to join this group to contribute to the standardisation process.

Coming Soon

Coming up next, expect Mycroft voice controls, translations into more languages and new ways to install and use WebThings Gateway.

As always, you can head over to the forums for support. And we welcome your contributions on GitHub.

The post Thermostats, Locks and Extension Add-ons – WebThings Gateway 0.10 appeared first on Mozilla Hacks - the Web developer blog.


Hacks.Mozilla.Org: Upcoming notification permission changes in Firefox 72

wo, 13/11/2019 - 16:30

Notifications. Can you keep count of how many websites or services prompt you daily for permission to send notifications? Can you remember the last time you were thrilled to get one?

Earlier this year we decided to reduce the amount of unsolicited notification permission prompts people receive as they move around the web using the Firefox browser. We see this as an intrinsic part of Mozilla’s commitment to putting people first when they are online.

In preparation, we ran a series of studies and experiments. We wanted to understand how to improve the user experience and reduce annoyance. In response, we’re now making some changes to the workflow for how sites ask users for permission to send them notifications. Firefox will require explicit user interaction on all notification permission prompts, starting in Firefox 72.

For the full background story, and details of our analysis and experimentation, please read Restricting Notification Permission Prompts in Firefox. Today, we want to be sure web developers are aware of the upcoming changes and share best practices for these two key scenarios:

  1. How to guide the user toward the prompt.
  2. How to acknowledge the user changing the permission.

an animation showing the browser UI where a user can click on the small permission icon that appears in the address bar.

We anticipate that some sites will be impacted by changes to the user flow. We suspect that many sites do not yet deal with the latter in their UX design. Let’s briefly walk through these two scenarios:

How to guide the user toward the prompt

Ideally, sites that want permission to notify or alert a user already guide them through this process. For example, they ask if the person would like to enable notifications for the site and offer a clickable button.

document.getElementById("notifications-button").addEventListener("click", () => {
  Notification.requestPermission().then(setupNotifications);
});

Starting with Firefox 72, the notification permission prompt is gated behind a user gesture. We will not deliver prompts on behalf of sites that do not follow the guidance above. Firefox will instantly reject the promise returned by Notification.requestPermission() and PushManager.subscribe(). However, the user will see a small notification permission icon in the address bar.

Note that because PushManager.subscribe() requires a ServiceWorkerRegistration, Firefox will carry user-interaction flags through promises that return ServiceWorkerRegistration objects. This enables popular examples to continue to work when called from an event handler.

Firefox shows the notification permission icon after a successful prompt. The user can select this icon to make changes to the notification permission. For instance, if they decide to grant the site the permission, or change their preference to no longer receive notifications.

How to acknowledge the user changing the permission

When the user changes the notification permission through the notification permission icon, this is exposed via the Permissions API:

navigator.permissions.query({ name: "notifications" }).then(status => {
  status.onchange = () => potentiallyUpdateNotificationPermission(status.state);
  potentiallyUpdateNotificationPermission(status.state);
});

We believe this improves the user experience and makes it more consistent, and it allows the site's interface to stay aligned with the notification permission. Please note that the code above works in earlier versions of Firefox as well. However, users are unlikely to change the notification permission dynamically in earlier Firefox releases. Why? Because there was no notification permission icon in the address bar.

Our studies show that users are more likely to engage with prompts that they’ve interacted with explicitly. We’ve seen that through pre-prompting in the user interface, websites can inform the user of the choice they are making before presenting a prompt. Otherwise, unsolicited prompts are denied in over 99% of cases.

We hope these changes will lead to a better user experience for all and better and healthier engagement with notifications.

The post Upcoming notification permission changes in Firefox 72 appeared first on Mozilla Hacks - the Web developer blog.


Mozilla Privacy Blog: Mozilla plays role in Kenya’s adoption of crucial data protection law

di, 12/11/2019 - 21:45

The Kenyan Data Protection and Privacy Act 2019, was signed into law last week. This GDPR-like law is the first data protection law in Kenyan history, and marks a major step forward in the protection of Kenyans’ privacy. Mozilla applauds the Government of Kenya, the National Assembly, and all stakeholders who took part in the making of this historic law. It is indeed a huge milestone that sees Kenya become the latest addition to the list of countries with data protection related laws in place; providing much-needed safeguards to its citizens in the digital era.

Strong data protection laws are critical in ensuring that user rights are protected; that companies and governments are compelled to appropriately handle the data that they are entrusted with. As part of its policy work in Africa, Mozilla has been at the forefront in advocating for the new law since 2018. The latest development is most welcome, as Mozilla continues to champion the 5 policy hot-spots that are key to Africa’s digital transformation.

Mozilla is pleased to see that the Data Protection Act is consistent with international data protection standards, through its approach to placing users’ rights at the centre of the digital economy. Mozilla also applauds the creation of an empowered data protection commission with a high degree of independence from the government. The law also imposes strong obligations on data controllers and processors. It requires them to abide by principles of meaningful user consent, collection limitation, purpose limitation, data minimization, and data security.

It is commendable that the law has maintained integrity throughout the process, and many of the critical comments Mozilla submitted on the initial Data Protection Bill (2018) have been reflected in the final act. The suggestions included the requirement for robust protections of data subjects with the rights to rectification, erasure of inaccurate data, objection to processing of their data, as well as the right to access, and to be informed of the use of their data; with the aim of providing users with control over their personal data and online experiences.

Mozilla continues to be actively engaged in advocating for strong data privacy and protection in the entire African region, where fewer than 30 countries have a data protection law. Considering that the region has seen the world’s fastest growth in internet use over the past decade, the continent is poised for great opportunities around accessing the internet. However, without the requisite laws in place, many users, often accessing the internet for the first time, will be put at risk.

The post Mozilla plays role in Kenya’s adoption of crucial data protection law appeared first on Open Policy & Advocacy.

