Mozilla Nederland

The Dutch Mozilla community

Air Mozilla: MozFest 2017 - Volunteer Meetup 30th August

Mozilla planet - wo, 30/08/2017 - 20:30

MozFest 2017 - Volunteer Meetup 30th August First Meetup for Mozilla Festival 2017 Volunteers

Categories: Mozilla-nl planet

Mozilla Security Blog: Removing Disabled WoSign and StartCom Certificates from Firefox 58

Mozilla planet - wo, 30/08/2017 - 19:47

In October 2016, Mozilla announced that, as of Firefox 51, we would stop validating new certificates chaining to the root certificates listed below that are owned by the companies WoSign and StartCom.

The announcement also indicated our intent to eventually completely remove these root certificates from Mozilla’s Root Store, so that we would no longer validate any certificates issued by those roots. That time has now arrived. We plan to release the relevant changes to Network Security Services (NSS) in November, and then the changes will be picked up in Firefox 58, due for release in January 2018. Websites using certificates chaining up to any of the following root certificates need to migrate to another root certificate.

This announcement applies to the root certificates with the following names:

  • CA 沃通根证书
  • Certification Authority of WoSign
  • Certification Authority of WoSign G2
  • CA WoSign ECC Root
  • StartCom Certification Authority
  • StartCom Certification Authority G2

Mozilla Security Team

The post Removing Disabled WoSign and StartCom Certificates from Firefox 58 appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Air Mozilla: Weekly SUMO Community Meeting August 30, 2017

Mozilla planet - wo, 30/08/2017 - 18:00

Weekly SUMO Community Meeting August 30, 2017 This is the SUMO weekly call

Categories: Mozilla-nl planet

Chris H-C: The Photonization of about:telemetry

Mozilla planet - wo, 30/08/2017 - 15:37

This summer I mentored :flyingrub for a Google Summer of Code project to redesign about:telemetry. You can read his Project Submission Document here.

Background

Google Summer of Code is a program funded by Google to pay students worldwide to contribute in meaningful ways to open source projects.

about:telemetry is a piece of Firefox’s UI that allows users to inspect the anonymous usage data we collect to improve Firefox. For instance, we look at the maximum number of tabs our users have open during a session (someone or several someones have more than one thousand tabs open!). If you open up a tab in Firefox and type in about:telemetry (then press Enter), you’ll see the interface we provide for users to examine their own data.

Mozilla is committed to putting users in control of their data. about:telemetry is a part of that.

Then

When :flyingrub started work on about:telemetry, it looked like this (Firefox 55):

[Screenshot: oldAboutTelemetry]

It was… functional. Mostly it was intended to be used by developers to ensure that data collection changes to Firefox actually changed the data that was collected. It didn’t look like part of Firefox. It didn’t look like any other about: page (browse to about:about to see a list of about: pages). It didn’t look like much of anything.

Now

After a few months of polishing and tweaking and input from UX, it looks like this (Firefox Nightly 57):

[Screenshot: newAboutTelemetry]

Well that’s different, isn’t it?

It has been redesigned to follow the Photon Design System so that it matches how Firefox 57 looks. It has been reorganized into more functional groups, gained a new top-level search, and received dozens of small tweaks to usability and visibility so you can see more of your data at once and get to it faster.

[Screenshot: newAboutTelemetry-histograms.png]

Soon

Just because Google Summer of Code is done doesn’t mean about:telemetry is done. Work on about:telemetry continues… and if you know some HTML, CSS, and JavaScript you can help out! Just pick a bug from the “Depends on” list here, and post a comment asking if you can help out. We’ll be right with you to help get you started. (Though you may wish to read this first, since it is more comprehensive than this blog post.)

Even if you can’t or don’t want to help out, you can sneak a peek at the new design by downloading and using Firefox Nightly. It is blazing fast with a slick new design and comes with excellent new features to help it be your agent on the Web.

We expect :flyingrub will continue to contribute to Firefox (as his studies allow, of course. He is a student, and his studies should be first priority now that GSoC is done), and we thank him very much for all of his good work this Summer.

:chutten


Categories: Mozilla-nl planet

Chris AtLee: Taskcluster migration update

Mozilla planet - wo, 30/08/2017 - 11:40
All your nightlies are belong to Taskcluster

In January I announced that we had just migrated Linux nightly builds to Taskcluster.

We completed a huge milestone in July: starting in Firefox 56, we've been doing all our nightly Firefox builds in Taskcluster.

https://media.giphy.com/media/MOWPkhRAUbR7i/giphy.gif

This includes all Windows, macOS, Linux, and Android builds. You can see all the builds and repacks on Treeherder.

In August, after 56 merged to Beta, we've also been doing our Firefox Beta builds using Taskcluster. We're on track to ship Firefox 56, built in Taskcluster, to release users at the end of September.

Windows and macOS each had their own challenges to get them ready to build and ship to our nightly users.

Windows signing

We've had Windows builds running in Taskcluster for quite a while now. The biggest missing piece stopping us from shipping these builds was signing. Windows builds end up being a bit complicated to sign.

First, each compiled .exe and .dll binary needs to be signed. Signing binaries in Windows changes their contents, and so we need to regenerate some files that depend on the exact contents of binaries. Next, we need to create packages in various formats: a "setup.exe" for installing Firefox, and also MAR files for updates. Each of these package formats in turn needs to be signed.

In buildbot, this process was monolithic. All of the binary generation and signing happened as part of the same build process. The same process would also publish symbols to the symbol server and publish updates to Balrog. The downside of this monolithic process is that it adds additional dependencies to the build, which is already a really long process. If something goes wrong with signing, or publishing updates, you don't want to have to restart a 2-hour build!

As part of our migration to Taskcluster, we decided that builds should minimize their external dependencies. This means that the build task produces only unsigned binaries, and it is the responsibility of downstream tasks to sign them. We also wanted discrete tasks for symbol and update submission.

One wrinkle in this approach is that the logic that defines how to create a setup.exe package or a MAR file lives in tree. We didn't want to run that code in the same context as the code that generates signatures.

Our solution to this was to create a sequence of build -> signing -> repackage -> signing tasks. The signing tasks run in a restricted environment while the build and repackage tasks have access to the build system in order to produce the required artifacts. Using the chain of trust, we can demonstrate that the artifacts weren't tampered with between intermediate tasks.

Finally, we need to consider l10n repacks. We ship Firefox in over 90 locales. The repacking process downloads the en-US build and replaces the English strings with localized strings. Each of these repacks needs to be based on the signed en-US build. Each will also generate its own setup.exe and complete MAR for updates.

macOS performance (and why your build directory matters)

Like Windows, we've had macOS builds running on Taskcluster for a long time. Also like Windows, we had to solve signing for macOS.

However, the biggest blocker for the macOS build migration was a performance bug. Builds produced on Taskcluster showed some serious performance regressions as compared to the builds produced on buildbot.

Many very smart people have looked at this bug since it was first discovered in February. They compared library versions being used. They compared compiler versions and compiler flags. They even inspected the generated assembly code from both systems.

Mike Shal stumbled across the first clue to what was going on in June: if he stripped the Taskcluster binaries, then the performance problems disappeared! At this point we decided that we could go ahead and ship these builds to nightly users, knowing that the performance regression would disappear on beta and release.

Later on, Mike realized that it's not the presence or absence of symbols in the binary that cause the performance hit, it's what directory the builds are done in. On buildbot we build under /builds/..., and on Taskcluster we build under /home/...

https://media.giphy.com/media/zjQrmdlR9ZCM/giphy.gif

Read the bug for more gory details. This is definitely one of the strangest bugs I've seen.

Lessons learned

We learned quite a bit in the process of migrating Windows and macOS nightly builds to Taskcluster.

First, we gained a huge amount of experience with the in-tree scheduling system. There's a bit of a learning curve to climb, but it's an extremely powerful and flexible system. Many kudos to Dustin for his work creating the foundation of this system here. His blog post, "What's So Special About "In-Tree"?", is a great explanation of why having this code as part of Firefox's repository is so important.

One of the killer features of having all the scheduling logic live in-tree is that you can do quite a bit of work locally, without requiring any build infrastructure. This is extremely useful when working on the complex build / signing / repackage sequence of tasks described above. You can make your changes, generate a new task graph, and inspect the results.

Once you're happy with your local changes, you can push them to try to validate your local testing, get your patch reviewed, and then finally land it in gecko. Your scheduling changes will take effect as soon as they land into the repo. This made it possible for us to do a lot of testing on another project branch, and then merge the code to central once we were ready.

What's next?

We're on track to ship builds produced in Taskcluster as part of the 56.0 release scheduled for late September. After that the only Firefox builds being produced by buildbot will be for ESR52.

Meanwhile, we've started tackling the remaining parts of release automation. We prioritized getting nightly and CI builds migrated to Taskcluster; however, some parts of the release process are still implemented in Buildbot.

We're aiming to have release automation completely migrated off of buildbot by the end of the year. We've already seen many benefits from migrating CI to Taskcluster, and migrating the release process will realize many of those same benefits.

Thanks!

Thank you for reading this far!

Members from the Release Engineering, Release Operations, Taskcluster, Build, and Product Integrity teams all were involved in finishing up this migration. Thanks to everyone involved (there are a lot of you!) for getting us across the finish line here.

In particular, if you come across one of these fine individuals at the office, or maybe on IRC, I'm sure they would appreciate a quick "thank you":

  • Aki Sasaki
  • Dustin Mitchell
  • Greg Arndt
  • Joel Maher
  • Johan Lorenzo
  • Justin Wood
  • Kim Moir
  • Mihai Tabara
  • Mike Shal
  • Nick Thomas
  • Rail Aliiev
  • Rob Thijssen
  • Simon Fraser
  • Wander Costa
Categories: Mozilla-nl planet

Myk Melez: Headless Firefox in Node.js with selenium-webdriver

Mozilla planet - wo, 30/08/2017 - 09:47

As of version 56 (currently in Beta), Firefox supports running headlessly on Windows, macOS, and Linux. Brendan Dahl has previously described how to use SlimerJS to drive headless Firefox. You can also drive it via the W3C WebDriver API, and this blog post explains how to do that in Node.js with the selenium-webdriver package.

(For a similar introduction using Python on Windows, see Andre Perunicic’s Using Selenium with Headless Firefox.)

First, ensure you have a version of Firefox that supports headless. On Linux, the current release version (55) is sufficient. On Windows and macOS, however, you’ll need at least version 56, which is currently in Beta (scheduled for release next month). You can also use Developer Edition (based on Beta) or a Nightly build; any pre-release build will do.

Next, install geckodriver (and ensure it’s on your PATH). You can download and install it manually from the geckodriver releases page, or you can install it using NPM via npm install -g geckodriver or yarn global add geckodriver (mind node-geckodriver #30 on Windows). On macOS, you can also use Homebrew to install it via brew install geckodriver.

Finally, create a Node project, initializing it with your favorite package management tool and installing the selenium-webdriver package:

mkdir project-dir
cd project-dir
npm --yes init                  # yarn --yes init
npm install selenium-webdriver  # yarn add selenium-webdriver

Now you’re ready to drive headless Firefox from Node scripts in your project.

For example, here’s how to create a script that searches for “testing” on the Mozilla Developer Network and takes a screenshot of the result. It uses features available only in Node 8, but scroll to the bottom for a reference to the equivalent for Node 6.

First, import some useful core Node methods:

const { writeFile } = require('fs');
const { promisify } = require('util');

Then import APIs from selenium-webdriver and selenium-webdriver/firefox:

const { Builder, By, Key, promise, until } = require('selenium-webdriver');
const firefox = require('selenium-webdriver/firefox');

Tell selenium-webdriver to disable its “promise manager” so we can use Node’s native async/await (which will become unnecessary when the promise manager is removed in selenium-webdriver #2969):

promise.USE_PROMISE_MANAGER = false;

Then create a Binary instance:

const binary = new firefox.Binary();

On Windows and macOS, if you have multiple versions of Firefox installed, configure it with the distribution channel (NIGHTLY, AURORA, BETA) to ensure you get the correct one:

const binary = new firefox.Binary(firefox.Channel.NIGHTLY);

On Linux, if you’d like to use a different version of Firefox than the one on your PATH, specify the path to the executable:

const binary = new firefox.Binary('/path/to/firefox');

Add the --headless argument to the binary:

binary.addArguments("--headless");

(Eventually selenium-webdriver #4591 will make this a driver configuration option.)

Then start Firefox with the Binary you previously created:

const driver = new Builder()
  .forBrowser('firefox')
  .setFirefoxOptions(new firefox.Options().setBinary(binary))
  .build();

Finally, tell Firefox to load the Mozilla Developer Network home page, enter “testing” into its search form, hit the RETURN key to submit the form, await loading of the search results page, take a screenshot of the page, and save the screenshot data to a screenshot.png file in your current directory:

async function main() {
  await driver.get('https://developer.mozilla.org/');
  await driver.findElement(By.id('home-q')).sendKeys('testing', Key.RETURN);
  await driver.wait(until.titleIs('Search Results for "testing" | MDN'));
  await driver.wait(async () => {
    const readyState = await driver.executeScript('return document.readyState');
    return readyState === 'complete';
  });
  const data = await driver.takeScreenshot();
  await promisify(writeFile)('screenshot.png', data, 'base64');
  await driver.quit();
}

main();

That’s it!

For the complete script, along with a version that works on Node 6, see the headless-examples repository on GitHub. And for additional information on using selenium-webdriver, see the selenium-webdriver README, the API documentation, and this directory of example scripts.

Note: Updated on 2017 September 1 to specify headless mode using the --headless command-line argument rather than the MOZ_HEADLESS=1 environment variable.

Categories: Mozilla-nl planet

Daniel Stenberg: Easier HTTP requests with h2c

Mozilla planet - wo, 30/08/2017 - 00:15

I spend a large portion of my days answering questions and helping people use curl and libcurl. With more than 200 command line options it certainly isn’t always easy to find the correct ones, in combination with the Internet and protocols being pretty complicated things at times… not to mention the constant problem of bad advice, like code samples on Stack Overflow that repeat non-recommended patterns.

The notorious -X abuse is a classic example, or why not the widespread disease called too much use of the --insecure option (at a recent count, there were more than 118,000 instances of "curl --insecure" in code hosted on GitHub alone).

Sending HTTP requests with curl

HTTP (and HTTPS) is by far the most used protocol out of the ones curl supports. curl can be used to issue just about any HTTP request you can think of, even if it isn’t always immediately obvious exactly how to do it.

h2c to the rescue!

h2c is a new command line tool and associated web service that, when passed a complete HTTP request dump, converts it into a corresponding curl command line. When that curl command line is then run, it will generate exactly(*) the HTTP request you gave h2c.

h2c stands for “headers to curl”.

Many times you’ll read documentation somewhere online or find a protocol/API description showing off a full HTTP request. “This is what the request should look like. Now send it.” That is one use case h2c can help out with.

Example use

Here we have an HTTP request that does Basic authentication with the POST method and a small request body. Do you know how to tell curl to send it?

The request:

POST /receiver.cgi HTTP/1.1
Host: example.com
Authorization: Basic aGVsbG86eW91Zm9vbA==
Accept: */*
Content-Length: 5
Content-Type: application/x-www-form-urlencoded

hello

I save the request above in a text file called ‘request.txt’ and ask h2c to give the corresponding curl command line:

$ ./h2c < request.txt
curl --http1.1 --header User-Agent: --user "hello:youfool" --data-binary "hello" https://example.com/receiver.cgi

If we add "--trace-ascii dump" to that command line, run it, and then inspect the dump file after curl has completed, we can see that it did indeed issue the HTTP request we asked for!

Web Site

Maybe you don’t want to install another command line tool written by me in your system. The solution is the online version of h2c, which is hosted on a separate portion of the official curl web site:

https://curl.haxx.se/h2c/

The web site lets you paste a full HTTP request into a text form and the page then shows the corresponding curl command line for that request.

h2c “as a service”

Inception alert: you can also use the web version of h2c by sending over an HTTP request to it using curl. You’ll then get nothing but the correct curl command line output on stdout.

To send off the same file we used above:

curl --data-urlencode http@request.txt https://curl.haxx.se/h2c/

or of course if you rather want to pass your HTTP request to curl on stdin, that’s equally easy:

cat request.txt | curl --data-urlencode http@- https://curl.haxx.se/h2c/
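
And because the service is a plain HTTP endpoint, you don’t even need curl to call it. Here is a hedged sketch in JavaScript for a fetch-capable runtime (a modern browser or a recent Node); the form field name "http" is inferred from the --data-urlencode http@request.txt option above, and requestDump simply holds the contents of request.txt:

// Sketch: ask the h2c web service for a curl command line from JavaScript.
// Assumes a runtime with global fetch; the "http" form field name is
// inferred from the curl invocations above.
const requestDump = `POST /receiver.cgi HTTP/1.1
Host: example.com
Authorization: Basic aGVsbG86eW91Zm9vbA==
Accept: */*
Content-Length: 5
Content-Type: application/x-www-form-urlencoded

hello`;

fetch('https://curl.haxx.se/h2c/', {
  method: 'POST',
  // URLSearchParams produces an application/x-www-form-urlencoded body.
  body: new URLSearchParams({ http: requestDump }),
})
  .then((response) => response.text())
  .then((curlCommandLine) => console.log(curlCommandLine));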

Early days, you can help!

h2c was created just a few days ago. I’m sure there are bugs, issues and quirks to iron out. You can help! File issues or submit pull requests!

(*) = barring bugs, there are still some edge cases where the exact HTTP request won’t be possible to repeat, but where we instead will attempt to do “the right thing”.

Categories: Mozilla-nl planet

Frédéric Wang: The AMP Project and Igalia working together to improve WebKit and the Web Platform

Mozilla planet - wo, 30/08/2017 - 00:00
TL;DR

The AMP Project and Igalia have recently been collaborating to improve WebKit’s implementation of the Web platform. Both teams are committed to making the Web better and we expect that all developers and users will benefit from this effort. In this blog post, we review some of the bug fixes and features currently being considered:

  • Frame sandboxing: Implementing sandbox values to allow trusted third-party resources to open unsandboxed popups or restrict unsafe operations of malicious ones.

  • Frame scrolling on iOS: Trying to move to a more standard and interoperable approach via iframe elements; addressing miscellaneous issues with scrollable nodes (e.g. visual artifacts while scrolling, view not scrolled when using “Find Text”…).

  • Root scroller: Finding a solution to the old interoperability issue about how to scroll the main frame; considering a new rootScroller API.

Some demo pages for frame sandboxing and scrolling are also available if you wish to test features discussed in this blog post.

Introduction

AMP is an open-source project to enable websites and ads that are consistently fast, beautiful and high-performing across devices and distribution platforms. Several interoperability bugs and missing features in WebKit have caused problems for AMP users and for Web developers in general. Although it is possible to add platform-specific workarounds to AMP, the best way to help the Web Platform community is to directly fix these issues in WebKit, so that everybody can benefit from these improvements.

Igalia is a consulting company with a team dedicated to Web Platform developments in all open-source Web Engines (Chromium, WebKit, Servo, Gecko) working in the implementation and standardization of miscellaneous technologies (CSS Grid/flexbox, ECMAScript, WebRTC, WebVR, ARIA, MathML, etc). Given this expertise, the AMP Project sponsored Igalia so that they can lead these developments in WebKit. It is worth noting that this project aligns with the Web Predictability effort supported by both Google and Igalia, which aims at making the Web more predictable for developers. In particular, the following aspects are considered:

  • Interoperability: Effort is made to write Web Platform Tests (WPT), to follow Web standards and ensure consistent behaviors between web engines or operating systems.
  • Compatibility: Changes are carefully analyzed using telemetry techniques or user feedback in order to avoid breaking compatibility with previous versions of WebKit.
  • Reducing footguns: Removals of non-standard features (e.g. CSS vendor prefixes) are attempted while new features are carefully introduced.

Below we provide further description of the WebKit improvements, showing concretely how the above principles are followed.

Frame sandboxing

A sandbox attribute can be specified on the iframe element in order to enable a set of restrictions on any content it hosts. These conditions can be relaxed by specifying a list of values such as allow-scripts (to allow javascript execution in the frame) or allow-popups (to allow the frame to open popups). By default, the same restrictions apply to a popup opened by a sandboxed frame.

Figure 1: Example of sandboxed frames (Can they navigate their top frame or open popups? Are such popups also sandboxed?)

However, sometimes this behavior is not wanted. Consider for example the case of an advertisement inside a sandboxed frame. If a popup is opened from this frame then it is likely that a non-sandboxed context is desired on the landing page. In order to handle this use case, a new allow-popups-to-escape-sandbox value has been introduced. The value is now supported in Safari Technology Preview 34.
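
As a rough illustration of how these flags fit together, here is a minimal sketch (the ad URL is a placeholder): a sandboxed frame that may run scripts and open popups, where those popups escape the sandbox thanks to the new value.

// Minimal sketch of the sandbox flags discussed above (the ad URL is hypothetical).
const frame = document.createElement('iframe');
frame.src = 'https://ads.example.com/ad.html';
// HTMLIFrameElement.sandbox is a DOMTokenList, so flags can be added individually.
frame.sandbox.add('allow-scripts', 'allow-popups', 'allow-popups-to-escape-sandbox');
document.body.appendChild(frame);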

While performing that work, it was noticed that some WPT tests for the sandbox attribute were still failing. It turns out that WebKit does not really follow the rules to allow navigation. More specifically, navigating a top context is never allowed when such context corresponds to an opened popup. We have made some changes to WebKit so that it behaves more closely to the specification. This is integrated into Safari Technology Preview 35 and you can for example try this W3C test. Note that this test requires changing preferences to allow popups.

It is worth noting that web engines may slightly depart from the specification regarding the previously mentioned rules. In particular, WebKit checks a same-origin condition to be sure that one frame is allowed to navigate another one. WebKit has always contained a special case to ignore this condition when a sandboxed frame with the allow-top-navigation flag tries to navigate its top frame. This feature, sometimes known as “frame busting,” has been used by third-party resources to perform malicious auto-redirecting. As a consequence, Chromium developers proposed to restrict frame busting to the case where the navigation is triggered by a user gesture.

According to Chromium’s telemetry frame busting without a user gesture is very rare. But when experimenting with the behavior change of allow-top-navigation several regressions were reported. Hence it was instead decided to introduce the allow-top-navigation-by-user-activation flag in order to provide this improved safety context while still preserving backward compatibility. We implemented this feature in WebKit and it is now available in Safari Technology Preview 37.

Finally, another proposed security improvement is to use an allow-modals flag to explicitly allow sandboxed frames to display modal dialogs (with alert, prompt, etc). That is, the default behavior for sandboxed frames will be to forbid such modal dialogs. Again, such a change of behavior must be done with care. Experiments in Chromium showed that the usage of modal dialogs in sandboxed frames is very low and no users complained. Hence we implemented that behavior in WebKit and the feature should arrive in Safari Technology Preview soon.

Check out the frame sandboxing demos if you want to test the new allow-popups-to-escape-sandbox, allow-top-navigation-by-user-activation and allow-modals flags.

Frame scrolling on iOS

Apple’s UI choice was to (almost) always “flatten” (expand) frames so that users do not need to scroll them. The rationale for this is that it avoids being trapped in a hierarchy of nested frames. Changing that behavior is likely to cause a big backward compatibility issue on iOS so for now we proposed a less radical solution: Add a heuristic to support the case of “fullscreen” iframes used by the AMP Project. Note that such exceptions already exist in WebKit, e.g. to avoid making offscreen content visible.

We thus added the following heuristic into WebKit Nightly: do not flatten out-of-flow iframes (e.g. position: absolute) that have viewport units (e.g. vw and vh). This includes the case of the “fullscreen” iframe previously mentioned. For now it is still under a developer flag so that WebKit developers can control when they want to enable it. Of course, if this is successful we might consider more advanced heuristics.

The fact that frames are never scrollable in iOS is an obvious interoperability issue. As a workaround, it is possible to emulate such “scrollable nodes” behavior using overflow: scroll nodes with the -webkit-overflow-scrolling: touch property set. This is not really ideal for our Web Predictability goal as we would like to get rid of browser vendor prefixes. Also, in practice such workarounds lead to even more problems in AMP as explained in these blog posts. That’s why implementing scrolling of frames is one of the main goals of this project and significant steps have already been made in that direction.

Figure 2: C++ classes involved in frame scrolling

The (relatively complex) class hierarchy involved in frame scrolling is summarized in Figure 2. The frame flattening heuristic mentioned above is handled in the WebCore::RenderIFrame class (in purple). The WebCore::ScrollingTreeFrameScrollingNodeIOS and WebCore::ScrollingTreeOverflowScrollingNodeIOS classes from the scrolling tree (in blue) are used to scroll, respectively, the main frame and overflow nodes on iOS. Scrolling of non-main frames will obviously have some code to share with the former, but it will also have some parts in common with the latter. For example, passing an extra UIScrollView layer is needed instead of relying on the one contained in the WKWebView of the main frame. An important step is thus to introduce a special class for scrolling inner frames that would share some logic from the two other classes and some refactoring to ensure optimal code reuse. Similar refactoring has been done for scrolling node states (in red) to move the scrolling layer parameter into WebCore::ScrollingStateNode instead of having separate members for WebCore::ScrollingStateOverflowScrollingNode and WebCore::ScrollingStateFrameScrollingNode.

The scrolling coordinator classes (in green) are also important, for example to handle hit testing. At the moment, this is not really implemented for overflow nodes but it might be important to have it for scrollable frames. Again, one sees that some logic is shared for asynchronous scrolling on macOS (WebCore::ScrollingCoordinatorMac) and iOS (WebCore::ScrollingCoordinatorIOS) in ancestor classes. Indeed, our effort to make frames scrollable on iOS is also opening the possibility of asynchronous scrolling of frames on macOS, something that is currently not implemented.

Figure 4: Video of this demo page on WebKit iOS with experimental patches to make frames scrollable (2017/07/10)

Finally, some more work is necessary in the render classes (purple) to ensure that the layer hierarchies are correctly built. Patches have been uploaded and you can view the result on the video of Figure 4. Notice that this work has not been reviewed yet and there are known bugs, for example with overlapping elements (hit testing not implemented) or position: fixed elements.

Various other scrolling bugs were reported, analyzed and sometimes fixed by Apple. The switch from overflow nodes to scrollable iframes is unlikely to address them. For example, the “Find Text” operation in iOS has advanced features done by the UI process (highlight, smart magnification) but the scrolling operation needed only works for the main frame. It looks like this could be fixed by unifying a bit the scrolling code path with macOS. There are also several jump and flickering bugs with position: fixed nodes. Finally, Apple fixed inconsistent scrolling inertia used for the main frame and the one used for inner scrollable nodes by making the former the same as the latter.

Root Scroller

The CSSOM View specification extends the DOM element with some scrolling properties. That specification indicates that the element to consider to scroll the main view is document.body in quirks mode while it is document.documentElement in no-quirks mode. This is the behavior that has always been followed by browsers like Firefox or Internet Explorer. However, WebKit-based browsers always treat document.body as the root scroller. This interoperability issue has been a big problem for web developers. One convenient workaround was to introduce the document.scrollingElement property, which returns the element to use for scrolling the main view (document.body or document.documentElement) and was recently implemented in WebKit. Use this test page to verify whether your browser supports the document.scrollingElement property and which DOM element is used to scroll the main view in no-quirks mode.
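
In practice, document.scrollingElement lets authors scroll the main view without guessing which element the engine uses. A minimal sketch, with a fallback for engines that have not shipped the property yet:

// Sketch: scroll the main view without hard-coding body vs. documentElement.
const mainScroller = document.scrollingElement || document.documentElement;
mainScroller.scrollTop = 0; // jump back to the top of the page
// In no-quirks mode, spec-compliant engines report document.documentElement here.
console.log(mainScroller);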

Nevertheless, this does not solve the issue with existing web pages. Chromium’s Web Platform Predictability team has made a huge communication effort with Web authors and developers which has drastically reduced the use of document.body in no-quirks mode. For instance, Chromium’s telemetry in Figure 3 indicates that the percentage of document.body.scrollTop in no-quirks pages has gone from 18% down to 0.0003% during the past three years. Hence the Chromium team is now considering shipping the standard behavior.

Figure 3: Use of document.body.scrollTop in no-quirks mode over time (Chromium's UseCounter)

In WebKit, the issue has been known for a long time and an old attempt to fix it was reverted for causing regressions. For now, we imported the CSSOM View tests and just marked the one related to the scrolling element as failing. An analysis of the situation has been left on WebKit’s bug; depending on how things evolve on Chromium’s side, we could take up the discussion and implementation work in WebKit.

Related to that work, a new API is being proposed to set the root scroller to an arbitrary scrolling element, giving more flexibility to authors of Web applications. Today, this is unfortunately not possible without losing some of the special features of the main view (e.g. on iOS, Safari’s URL bar is hidden when scrolling the main view to maximize the screen space). Such API is currently being experimented in Chromium and we plan to investigate whether this can be implemented in WebKit too.

Conclusion

In the past months, The AMP Project and Igalia have worked on analyzing some interoperability issues and fixing them in WebKit. Many improvements for frame sandboxing are going to be available soon. Significant progress has also been made for frame scrolling on iOS and collaboration continues with Apple reviewers to ensure that the work will be integrated in future versions of WebKit. Improvements to “root scrolling” are also being considered although they are pending on the evolution of the issues on Chromium’s side. All these efforts are expected to be useful for WebKit users and the Web platform in general.


Last but not least, I would like to thank Apple engineers Simon Fraser, Chris Dumez, and Youenn Fablet for their reviews and help, as well as Google and the AMP team for supporting that project.

Categories: Mozilla-nl planet

Air Mozilla: Intern Presentations: Round 6: Tuesday, August 29th

Mozilla planet - di, 29/08/2017 - 22:00

 Tuesday, August 29th Intern Presentations 5 presenters Time: 1:00PM - 2:15PM (PDT) - each presenter will start every 15 minutes 3 MTV, 1 PDX, 1 TOR

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Life After Flash: Multimedia for the Open Web

Mozilla planet - di, 29/08/2017 - 16:53

Flash delivered video, animation, interactive sites and, yes, ads to billions of users for more than a decade, but now it’s going away. Adobe will drop support for Flash by 2020. Firefox no longer supports Flash out of the box, and neither does Chrome. So what’s next? There are tons of open standards that can do what Flash does, and more.

Truly Open Multimedia

Flash promised to deliver one unifying platform for building and delivering interactive multimedia websites. And, for the most part, it delivered. But the technology was never truly open and accessible, and Flash Player was too resource-hungry for mobile devices. Now open-source alternatives can do everything Flash does—and more. These are the technologies you should learn if you’re serious about building tomorrow’s interactive web, whether you’re doing web animation, games, or video.

Web Animation

 

CSS
CSS animation is relatively new, but it’s the easiest way to get started with web animation. CSS is made to style websites with basic rules that dictate layout, typography, colors, and more. With the release of CSS3, animations are now baked into the standard, and as a developer, it’s up to you to tell the browser how to animate. CSS is human readable, which means it basically does what it says on the tin. For example, the property “animation-direction” does exactly that: it specifies the direction of your animation.

Right now you can create smooth, seamless animations with CSS. It’s simple to create keyframes, adjust timing, animate opacity, and more.  And all the animations work with anything you’d style normally with CSS: text, images, containers, and so on.

You can do animation with CSS, even if you’re unfamiliar with programming languages. Like many open-source projects, the code is out there on the web for you to play around with. Mozilla has also created (and maintains) exhaustive CSS animation documentation. Most developers recommend using CSS animation for simple projects and JavaScript for more complex sites.

JavaScript
Developers have been animating with JavaScript since the early days. Basic mouseover scripts have been around for more than two decades and today JavaScript, along with HTML5 <canvas> elements, can do some pretty amazing things. Even simple scripts can yield great results. With JavaScript, you can draw shapes, change colors, move and change images, and animate transparency. JavaScript animation uses the SVG (scalable vector graphics) format for animations, meaning artwork is actually drawn live based on math rather than being loaded and rendered. That means they remain crisp at any scale (thus the name) and can be completely controlled. SVG offers anti-aliased rendering, pattern and gradient fills, sophisticated filter-effects, clipping to arbitrary paths, text and animations. And, of course, it’s an open standard W3C recommendation rather than a closed binary. Using SVG, JavaScript, and CSS3, developers can create impressive interactive animations that don’t require any specialized formats or players.
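
To make that concrete, here is a minimal sketch of script-driven SVG animation; it assumes the page contains an SVG circle with id="dot" and simply moves it with requestAnimationFrame.

// Sketch: animate an SVG circle from script.
// Assumes markup like <svg><circle id="dot" cx="0" cy="50" r="10"/></svg>.
const dot = document.getElementById('dot');
let start = null;

function step(timestamp) {
  if (start === null) start = timestamp;
  const elapsed = timestamp - start;
  // Move 0.1 CSS pixels per millisecond and wrap around every 3 seconds.
  dot.setAttribute('cx', (elapsed / 10) % 300);
  requestAnimationFrame(step);
}

requestAnimationFrame(step);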

JavaScript animation can be very refined, including bouncing, stop, pause, rewind, or slow down. It’s also interactive and can be programmed to respond to mouse clicks and rollovers. The new Web Animations API, built with JavaScript, lets you fine-tune animations with more control over keyframes and elements, but it’s still in the early, experimental phases of development and some features may not be supported by all browsers.
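
For instance, here is a hedged sketch of the Web Animations API in action; it assumes an element with id="box" and shows the kind of playback control that plain CSS animations don’t expose to script:

// Sketch: animate an element with the Web Animations API (the element ID is hypothetical).
const box = document.getElementById('box');
const animation = box.animate(
  [
    { opacity: 0, transform: 'translateX(0)' },      // keyframe at 0%
    { opacity: 1, transform: 'translateX(200px)' },  // keyframe at 100%
  ],
  { duration: 1000, iterations: Infinity, direction: 'alternate', easing: 'ease-in-out' }
);

// The returned Animation object can be controlled from script.
animation.pause();
animation.playbackRate = 0.5; // slow it down
animation.play();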

Additionally, JavaScript animations can be programmed to respond to input fields, form submissions, and keystrokes. And that makes it perfect for building web games.

Web Games

At one time, Flash ruled web games. It was easy to learn, use, and distribute. It was also robust, able to deliver massively multiplayer online games to millions. But today it’s possible to deliver the same—if not better—experience using JavaScript, HTML5, WebGL and WebAssembly. With modern browsers and open-source frameworks, it’s possible to build 3D action shooters, RPGs, adventure games, and more. In fact, you can now even create fully immersive virtual reality experiences for the web with technologies like WebVR and A-Frame.

Web games rely on an ecosystem of open-source frameworks and platforms to work. Each one plays an important role, from visuals to controls to audio to networking. The Mozilla Developer Network has a thorough list of technologies that are currently in use. Here are just a few of them and what they’re used for:

WebGL
Lets you create high-performance, hardware-accelerated 3D (and 2D) graphics from Web content. This is a Web-supported implementation of OpenGL ES 2.0. WebGL 2 goes even further, enabling OpenGL ES 3.0 level of support in browsers.

JavaScript
JavaScript, the programming language used on the Web, works well in browsers and is getting faster all the time. It’s already used to build thousands of games and new game frameworks are being developed constantly.

HTML audio
The <audio> element lets you easily play simple sound effects and music. If your needs are more involved, check out the Web Audio API for real audio processing power!

Web Audio API
This API for controlling the playback, synthesis, and manipulation of audio from JavaScript code lets you create awesome sound effects as well as play and manipulate music in real time.
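
As a small illustration, this sketch synthesizes a short beep entirely in script with the Web Audio API:

// Sketch: play a quarter-second 440 Hz beep with the Web Audio API.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const oscillator = audioCtx.createOscillator();
const gain = audioCtx.createGain();

oscillator.type = 'square';
oscillator.frequency.value = 440; // A4
gain.gain.value = 0.2;            // keep the volume down

oscillator.connect(gain);
gain.connect(audioCtx.destination);

oscillator.start();
oscillator.stop(audioCtx.currentTime + 0.25);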

WebSockets
The WebSocket API lets you connect your app or site to a server to transmit data back and forth in real-time. Perfect for multiplayer turn-based or event-based gaming, chat services, and more.
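
A minimal sketch of the API; the server URL and message format here are made up for illustration and are not part of the standard:

// Sketch: relay turn-based game moves over a WebSocket (URL and message shape are hypothetical).
const socket = new WebSocket('wss://game.example.com/play');

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'move', x: 3, y: 5 }));
});

socket.addEventListener('message', (event) => {
  const update = JSON.parse(event.data);
  console.log('opponent moved to', update.x, update.y);
});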

WebRTC
WebRTC is an ultra-fast API that can be used by video-chat, voice-calling, and P2P-file-sharing Web apps. It can be used for real-time multiplayer games that require low latency.

WebAssembly
HTML5/JavaScript game engines are better than ever, but they still can’t quite match the performance of native apps. WebAssembly promises to bring near-native performance to web apps. The technology lets browsers run compiled C/C++ code, including games made with engines like Unity and Unreal.

With WebAssembly, web games will be able to take advantage of multithreading. Developers will be able to produce staggering 3D games for the web that run close to the same speed as native code, but without compromising on security. It’s a tremendous breakthrough for gaming — and the open web. It means that developers will be able to build games for any computer or system that can access the web. And because they’ll be running in browsers, it’ll be easy to integrate online multiplayer modes.
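
From the JavaScript side, loading a compiled module is already straightforward. A hedged sketch, where game.wasm and its exported update() function are hypothetical names (older browsers may need WebAssembly.instantiate with an ArrayBuffer instead of instantiateStreaming):

// Sketch: load and call into a WebAssembly module ("game.wasm" and "update" are hypothetical).
const imports = {
  env: {
    // An import the module might expect; here just a logging hook.
    log_score: (score) => console.log('score:', score),
  },
};

WebAssembly.instantiateStreaming(fetch('game.wasm'), imports)
  .then(({ instance }) => {
    instance.exports.update(16.7); // advance the game simulation by one frame
  });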

Additionally, there are many HTML5/JavaScript game engines out there. These engines take care of the basics like physics and controls, giving developers a framework/world to build on. They range from lightweight and fast, like atom and Quick 2D engines, to full-featured 3D engines like WhitestormJS and Gladius. There are dozens to choose from, each with their own unique advantages and disadvantages for developers. But in the end, they all produce games that can be played on modern web browsers without plug-ins. And most of those games can run on less-powerful hardware, meaning you can reach even more users. In fact, games written for the web can run on tablets, smartphones, and even smart TVs.

MDN has extensive documentation on building web games and several tutorials on building games using pure JavaScript and the Phaser game framework. It’s a great place to start for web game development.

Video

Most video services have already switched to HTML5-based streaming using web technologies and open codecs; others are  sticking with the Flash-based FLV or FV4 codecs. As stated earlier, Flash video formats rely on software rendering that can tax web browsers and mobile platforms. Modern video codecs can use hardware rendering for video playback, greatly increasing responsiveness and efficiency. Unfortunately, there’s only one way to switch from Flash to HTML5: Re-encoding your video. That means converting your source material into HTML5-friendly formats via a free converter like FFmpeg and Handbrake.

Mozilla is actively helping to build and improve the HTML5-friendly and open-source video format WebM. It’s based on the Matroska container and uses VP8 and VP9 video codecs and Vorbis or Opus codecs.

Once your media has been converted to an HTML5-friendly format, you can repost your videos on your site. HTML5 has built-in media controls, so there’s no need to install any players. It’s as easy as pie. Just use a single line of HTML:

<video src="videofile.webm" controls></video>

Keep in mind that native controls are inconsistent between browsers. Because they’re made with HTML5, however, you can customize them with CSS and link them to your video with JavaScript. That means you can build for accessibility, add your own branding, and keep the look and feel consistent between browsers.
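
For example, here is a minimal sketch of a custom play/pause control wired to the <video> element above (the element IDs are placeholders):

// Sketch: a custom play/pause button for an HTML5 video (element IDs are hypothetical).
const video = document.getElementById('my-video');
const button = document.getElementById('play-button');

button.addEventListener('click', () => {
  if (video.paused) {
    video.play();
    button.textContent = 'Pause';
  } else {
    video.pause();
    button.textContent = 'Play';
  }
});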

HTML5 can also handle adaptive streaming with Media Source Extensions (MSEs). Although they may be difficult to set up on their own, you can use pre-packaged players like Shaka Player and JW Player that can handle the details.
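
As an example of how little glue code such players need, here is a hedged sketch using Shaka Player; the manifest URL is a placeholder and the API details may differ between versions, so check Shaka’s own documentation:

// Sketch: adaptive streaming with Shaka Player (the manifest URL is a placeholder).
shaka.polyfill.installAll(); // smooth over browser differences
const video = document.getElementById('my-video');
const player = new shaka.Player(video);

player.load('https://example.com/stream/manifest.mpd')
  .then(() => console.log('Stream loaded'))
  .catch((error) => console.error('Error loading stream', error));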

The developers at MDN have created an in-depth guide for converting Flash video to HTML5 video with many more details on the process. Fortunately, it’s not as difficult as it seems.

Flash Forward

The future of the web is open (hopefully) and Flash, despite being a great tool for creatives, wasn’t open enough. Thankfully, many open source tools  can do what Flash does, and more. But we’re still in the early stages and creating animations, interactive websites, and web games takes some coding knowledge. Everything you need to know is out there, just waiting for you to learn it.

Open web technologies promise to be better than Flash ever was, and will be accessible to anyone with an Internet connection.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Flash, In Memoriam

Mozilla planet - di, 29/08/2017 - 16:52

Adobe will drop Flash by 2020. Firefox no longer supports Flash out of the box, and neither does Chrome. The multimedia platform is being replaced with open internet technologies like HTML5, CSS3, and JavaScript. But at one time, Flash was cutting edge. It inspired a generation of animators and developers and gave us some fantastic websites, games, TV shows, and even movies.

Macromedia launched Flash 1.0 (originally FutureWave SmartSketch) in 1996 with a grand vision: A single multimedia platform that would work flawlessly in any browser or any computer. No pesky browser interoperability issues, no laborious cross-browser testing. Just experiences that looked and acted the same in every browser.

A slick GUI, novel drawing and animation tools, and a simple scripting language made Flash a smash hit. Many artists, developers, filmmakers, and storytellers (myself included) were smitten. The platform sparked a revolution of multimedia websites rife with elaborate mouseover effects, thumping electronic music, and motion-sickness-inducing transitions. Corporations and businesses of all shapes and sizes created Flash websites. Millions of Flash-based games hit the web via sites like Newgrounds and many popular games were developed with Flash, including Angry Birds, Clash of Clans, FarmVille, AdventureQuest and Machinarium.

Flash also became a popular animation tool. Hit kids’ shows like Pound Puppies and My Little Pony: Friendship is Magic and comedy series like Total Drama and Squidbillies were made exclusively in Flash. The 2009 Academy Award nominated animated movie The Secret of Kells was also made in Flash. Then, of course, there was the Internet phenomenon Homestar Runner—animated web series, interactive website, and games hub.

In 2005, Macromedia was purchased by Adobe. That same year, YouTube launched. The streaming video service used the Flash player to deliver video to millions. At one time, 75% of all video content on the web was delivered via the Flash player.

Over the years, Flash grew, but didn’t necessarily improve. Its codebase became bloated and processor-power hungry. Then Apple released the iPhone, famously without Flash support. Flash used software rendering for video, which hurt battery life and performance on mobile devices. Instead, Apple recommended the HTML5 <video> tag  for video delivery on the web, using formats which can be rendered in hardware much more efficiently. YouTube added support for HTML5-friendly video and in 2015 announced that it would drop all support for Flash.

Flash is also, at its core, a closed and proprietary platform. Its code is controlled exclusively by Adobe with little or no community support.

Finally, Adobe itself announced the end of Flash. The company will no longer support Flash after 2020. It will continue to support Adobe AIR, however, which packages Flash material and scripts into a runtime for desktop and mobile devices.

Flash undoubtedly made a huge contribution to the web, despite its drawbacks. It triggered a wave of creativity and inspired millions of people around the world to create digital media for the web.

In my next post, Life After Flash, I’ll walk you through some of new open standards, tools, and technologies that make online multimedia more performant and interactive than ever.

Categories: Mozilla-nl planet

Honza Bambas: Mozilla Log Analyzer added basic network diagnostics

Mozilla planet - di, 29/08/2017 - 14:57

[Screenshot: Mozilla Log Analyzer objects search results]

A few weeks ago I published Mozilla Log Analyzer (logan). It is a very helpful tool in itself when diagnosing our logs, but looking at the log lines doesn’t give answers about what’s wrong or right with network request scheduling. The lack of other tools, like Backtrack, makes informed decisions on many projects dealing with performance and prioritization hard or even impossible. The same applies to verification of the changes.

Hence, I’ve added simple network diagnostics to logan to get at least some notion of how we do with network request and response parallelization during a single page load. It doesn’t track dependencies, meaning where exactly a request originates from, like which script has added the DOM node leading to a new request (hmm… maybe bug 1394369 will help?) or what all has to load to satisfy DOMContentLoaded or early first paint. That’s not within logan’s powers right now, sorry, and I don’t plan on investing much time in it. My time will be given to Backtrack.

But what logan can give us now is a breakdown of all requests being opened and active before and during a request you pick as your ‘hero request.’ It may tell you what the concurrent bandwidth utilization was during the request in question, what lower priority requests have been scheduled, been active or even finished before the hero request, what requests were blocking the socket your request was finally dispatched on, and so on…

To obtain this diagnostic breakdown, use the current Nightly (at this time it’s Firefox 57) and capture logs from the parent AND also child processes with the following modules set:

MOZ_LOG=timestamp,sync,nsHttp:5,cache2:5,DocumentLeak:5,PresShell:5,DocLoader:5,nsDocShellLeak:5,RequestContext:5,LoadGroup:5,nsSocketTransport:5

(sync is optional, but you never know.)

Make sure you let the page you are analyzing load; it’s OK to cancel too. It’s best to close the browser then, and only after that load all the produced logs (parent + children) into logan. Find your ‘hero’ nsHttpChannel. Expand it and then click its breadcrumb at the top of the search results. There is a small [ diagnose ] button at the top. Clicking it brings you to the breakdown page with a number of sections listing the selected channel and also all concurrent channels according to a few conditions I found interesting.

This all is tracked on github and open to enhancements.

The post Mozilla Log Analyzer added basic network diagnostics appeared first on mayhemer's blog.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 197

Mozilla planet - di, 29/08/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community News & Blog Posts Crate of the Week

Sadly, we had no nomination for the crate of the week.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

120 pull requests were merged in the last week

New Contributors
  • David Ross
  • Evgeniy A. Dushistov
  • Jouan Amate
  • Matthew Hammer
  • Michal 'vorner' Vaner
  • Samuel Holland
  • Sebastian Humenda
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

We're currently writing up the discussions, we'd love some help. Check out the tracking issue for details.

PRs:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Abomonation has no safe methods. […] If you are concerned about safety, it may be best to avoid Abomonation all together. It does several things that may be undefined behavior, depending on how undefined behavior is defined.

Frank McSherry in Abomonation docs.

Thanks to Adwhit for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Categories: Mozilla-nl planet

Emma Humphries: Triage Summary 2017-08-28

Mozilla planet - di, 29/08/2017 - 02:56

It's the weekly report on the state of triage in Firefox-related components.

Poll

Would you like a logged in BMO home page like: https://fitzgen.github.io/bugzilla-todos/?

https://twitter.com/triagegirl/status/902327322178609153

Hotspots

The components with the most untriaged bugs remain the JavaScript Engine and Build Config.

| **Rank** | **Component**                | **Last Week** | **This Week** |
|----------|------------------------------|---------------|---------------|
| 1        | Core: JavaScript Engine      | 471           | 477           |
| 2        | Core: Build Config           | 450           | 459           |
| 3        | Firefox for Android: General | 406           | 408           |
| 4        | Firefox: General             | 246           | 254           |
| 5        | Core: General                | 235           | 241           |
| 6        | Core: XPCOM                  | 178           | 180           |
| 7        | Core: JavaScript: GC         | 168           | 171           |
| 8        | Core: Networking             | 161           | 159           |
|          | All Components               | 8,703         | 8,822         |

Please make sure you’ve made it clear what, if anything will happen with these bugs.

Not sure how to triage? Read https://wiki.mozilla.org/Bugmasters/Process/Triage.

Next Release

| **Version**                     | 56    | 56  | 57   | 57    | 57    |
|---------------------------------|-------|-----|------|-------|-------|
| **Date**                        | 7/31  | 8/7 | 8/14 | 8/21  | 8/28  |
| **Untriaged this Cycle**        | 4,479 | 479 | 835  | 1,196 | 1,481 |
| **Unassigned Untriaged this Cycle** | 3,674 | 356 | 634  | 968   | 1,266 |
| **Affected this Release**       | 139   | 125 | 123  | 119   | 42    |
| **Enhancements**                | 103   | 3   | 5    | 11    | 17    |
| **Orphaned P1s**                | 192   | 196 | 191  | 183   | 18    |
| **Inactive P1s**                | 179   | 157 | 152  | 155   | 13    |
| **Stale Bugs**                  | –     | –   | –    | –     | 117   |

What should we do with these bugs? Bulk close them? Make them into P3s? Bugs without decisions add noise to our system, cause despair in those trying to triage bugs, and leaves the community wondering if we listen to them.

Bugs I Want to Close

There are over 10,000 unconfirmed bugs with no activity in at least a year.

https://mzl.la/2wNLrbS

There are over 45,000 bugs which are not in the General or Untriaged components with no activity in at least a year.

https://mzl.la/2wNXiGC

I would like to close these.

Once the noise-free closing script is ready, my plan is to automate bug stewardship.

Methods and Definitions

In this report I talk about bugs in Core, Firefox, Firefox for Android, Firefox for IOs, and Toolkit which are unresolved, not filed from treeherder using the intermittent-bug-filer account*, and have no pending needinfos.

By triaged, I mean a bug has been marked as P1 (work on now), P2 (work on next), P3 (backlog), or P5 (will not work on but will accept a patch).

https://wiki.mozilla.org/Bugmasters#Triage_Process

A triage decision is not the same as a release decision (status and tracking flags.)

https://mozilla.github.io/triage-report/#report

Untriaged Bugs in Current Cycle

Bugs filed since the start of the Firefox 57 release cycle (August 2nd, 2017) which do not have a triage decision.

https://mzl.la/2wzJxLP

Recommendation: review bugs you are responsible for (https://bugzilla.mozilla.org/page.cgi?id=triage_owners.html) and make triage decision, or RESOLVE.

Untriaged Bugs in Current Cycle Affecting Next Release

Bugs marked status_firefox56 = affected and untriaged.

https://mzl.la/2wzjHaH

Enhancements in Release Cycle

Bugs filed in the release cycle which are enhancement requests, severity = enhancement, and untriaged.

https://mzl.la/2wzCBy8

Recommendation: product managers should review and mark as P3, P5, or RESOLVE as WONTFIX.

High Priority Bugs without Owners

Bugs with a priority of P1, which do not have an assignee, have not been modified in the past two weeks, and do not have pending needinfos.

https://mzl.la/2sJxPbK

Recommendation: review priorities and assign bugs, re-prioritize to P2, P3, P5, or RESOLVE.

Inactive High Priority Bugs

There are 159 bugs with a priority of P1, which have an assignee, but have not been modified in the past two weeks.

https://mzl.la/2u2poMJ

Recommendation: review assignments, determine if the priority should be changed to P2, P3, P5 or RESOLVE.

Stale Bugs

Bugs in need of review by triage owners. Updated weekly.

https://mzl.la/2wNyONP

* New intermittents are filed as P5s, and we are still cleaning up bugs after this change; see https://bugzilla.mozilla.org/show_bug.cgi?id=1381587, https://bugzilla.mozilla.org/show_bug.cgi?id=1381960, and https://bugzilla.mozilla.org/show_bug.cgi?id=1383923

If you have questions or enhancements you want to see in this report, please reply to me here, on IRC, or on Slack, and thank you for reading.



comment count unavailable comments
Categorieën: Mozilla-nl planet

William Lachance: Functional is the future

Mozilla planet - ma, 28/08/2017 - 23:02

Just spent well over an hour tracking down a silly bug in my code. For the mission control project, I wrote this very simple API method that returns a cached data structure to our front end:

    def measure(request):
        channel_name = request.GET.get('channel')
        platform_name = request.GET.get('platform')
        measure_name = request.GET.get('measure')
        interval = request.GET.get('interval')
        if not all([channel_name, platform_name, measure_name]):
            return HttpResponseBadRequest("All of channel, platform, measure required")

        data = cache.get(get_measure_cache_key(platform_name, channel_name, measure_name))
        if not data:
            return HttpResponseNotFound("Data not available for this measure combination")

        if interval:
            try:
                min_time = datetime.datetime.now() - datetime.timedelta(seconds=int(interval))
            except ValueError:
                return HttpResponseBadRequest("Interval must be specified in seconds (as an integer)")

            # Return any build data in the interval
            empty_buildids = set()
            for (build_id, build_data) in data.items():
                build_data['data'] = [d for d in build_data['data'] if d[0] > min_time]
                if not build_data['data']:
                    empty_buildids.add(build_id)

            # don't bother returning empty indexed data
            for empty_buildid in empty_buildids:
                del data[empty_buildid]

        return JsonResponse(data={'measure_data': data})

As you can see, it takes three required parameters (channel, platform, and measure) and one optional one (interval), picks out the required data structure, filters it a bit, and returns it. This is almost what we wanted for the frontend; unfortunately, the time zone information isn’t quite what we want, since the strings that are returned don’t tell the frontend that they’re in UTC format: they need a ‘Z’ appended to them for that.
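
For context, the frontend consumes this endpoint with a plain GET request. The URL path below is hypothetical (the post doesn't name the route), and the channel/platform/measure values are just placeholders; only the parameter names come from the view above.

    # Hypothetical request against the view above; the route and parameter
    # values are illustrative, only the parameter names come from the code.
    import requests

    resp = requests.get("https://missioncontrol.example.org/api/measure/", params={
        "channel": "release",
        "platform": "windows",
        "measure": "main_crashes",   # placeholder measure name
        "interval": 86400,           # optional: only the last day of build data
    })
    resp.raise_for_status()
    measure_data = resp.json()["measure_data"]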

After a bit of digging, I found out that Django’s JSON serializer will only add the ‘Z’ if the datetime’s tzinfo is set. So I figured out a simple pattern for adding that (using the dateutil library, which we are fortunately already using):

    from dateutil.tz import tzutc

    datetime.datetime.fromtimestamp(mydatestamp.timestamp(), tz=tzutc())

I tested this quickly on the python console and it seemed to work great. But when I added the code to my function, the unit tests mysteriously failed. Can you see why?

    for (build_id, build_data) in data.items():
        # add utc timezone info to each date, so django will serialize a
        # 'Z' to the end of the string (and so javascript's date constructor
        # will know it's utc)
        build_data['data'] = [
            [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:]
            for d in build_data['data'] if d[0] > min_time
        ]

Trick question: there’s actually nothing wrong with this code. But if you look at the block in context (see the top of the post), you see that it’s only executed if interval is specified, which it isn’t necessarily. The first case that my unit tests executed didn’t specify interval, so fail they did. It wasn’t immediately obvious to me why this was happening, so I went on a wild-goose chase of trying to figure out how the Django context might have been responsible for the unexpected output, before realizing my basic logic error.

This was fairly easily corrected (my updated code applies the datetime-mapping unconditionally to the set of optionally-filtered results) but perfectly illustrates my issue with idiomatic python: while the language itself has constructs like map and reduce that support the functional programming model, the language strongly steers you towards writing things in an imperative style that makes costly and annoying mistakes like this much easier to make. Yes, list and dictionary comprehensions are nice and compact but they start to break down in the more complex cases.
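
For the record, the corrected version looks roughly like this. It is a reconstruction from the description above rather than the actual patch, so treat it as a sketch; cache and get_measure_cache_key are assumed to be the same helpers used in the original view.

    # Sketch of the corrected view, reconstructed from the description (not the actual
    # patch). The tz-mapping now runs unconditionally; the time filter only applies
    # when an interval was requested. get_measure_cache_key is an assumed project helper.
    import datetime

    from dateutil.tz import tzutc
    from django.core.cache import cache
    from django.http import HttpResponseBadRequest, HttpResponseNotFound, JsonResponse


    def measure(request):
        channel_name = request.GET.get('channel')
        platform_name = request.GET.get('platform')
        measure_name = request.GET.get('measure')
        interval = request.GET.get('interval')
        if not all([channel_name, platform_name, measure_name]):
            return HttpResponseBadRequest("All of channel, platform, measure required")

        data = cache.get(get_measure_cache_key(platform_name, channel_name, measure_name))
        if not data:
            return HttpResponseNotFound("Data not available for this measure combination")

        min_time = None
        if interval:
            try:
                min_time = datetime.datetime.now() - datetime.timedelta(seconds=int(interval))
            except ValueError:
                return HttpResponseBadRequest("Interval must be specified in seconds (as an integer)")

        empty_buildids = set()
        for (build_id, build_data) in data.items():
            # always attach tzinfo so the serializer emits a trailing 'Z';
            # only apply the time filter when an interval was actually given
            build_data['data'] = [
                [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:]
                for d in build_data['data']
                if min_time is None or d[0] > min_time
            ]
            if not build_data['data']:
                empty_buildids.add(build_id)

        for empty_buildid in empty_buildids:
            del data[empty_buildid]

        return JsonResponse(data={'measure_data': data})

When no interval is given the filter predicate collapses to a no-op, which is what makes the mapping unconditional.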

As an experiment, I wrote up what this function might look like in a pure functional style with immutable data structures:

    def transform_and_filter_data(build_data):
        new_build_data = copy.copy(build_data)
        new_build_data['data'] = [
            [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:]
            for d in build_data['data'] if d[0] > min_time
        ]
        return new_build_data

    transformed_build_data = {k: v for k, v in {k: transform_and_filter_data(v) for k, v in data.items()}.items() if len(v['data']) > 0}

A work of art it isn’t, and definitely not “pythonic”. Compare this to a similar piece of code written in JavaScript (ES6) with lodash (using a hypothetical tzified function):

    let transformedBuildData = _.filter(_.mapValues(data, (buildData) => ({
        ...buildData,
        data: buildData['data']
            .filter(datum => datum[0] > minTimestamp)
            .map(datum => [tzified(datum[0])].concat(datum.slice(1)))
    })), (data, buildId) => data['data'].length > 0);

A little bit easier to understand, but more importantly it comes across as idiomatic and natural in a way that the python version just doesn’t. I’ve been happily programming Python for the last 10 years, but it’s increasingly feeling like time to move on to greener pastures.

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 28 Aug 2017

Mozilla planet - ma, 28/08/2017 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

Mozilla GFX: WebRender newsletter #3

Mozilla planet - ma, 28/08/2017 - 19:57

WebRender work is coming along nicely. I haven’t managed to properly track what landed this week so the summary below is somewhat short. This does little justice to the great stuff that is happening on the side.

For example, I won’t list the many bugs that Sotaro finds and fixes on a daily basis, or the continuous efforts Kats puts into keeping Gecko’s repository in sync with WebRender’s, or Ryan’s work on cbindgen (the tool we made to auto-generate C bindings for WebRender), or the unglamorous refactoring I got myself into in order to get some parts of Gecko to integrate with WebRender without breaking the web. Lee has been working on the dirty and gory details of fonts for a while but that won’t make it to the newsletter until it lands. Morris’s work on display items conversion hasn’t yet received due credit here, nor has Jerry’s work on handling the many (way too many) texture formats that have to be supported by WebRender for video playback. Meanwhile Gankro is working on changes to the Rust language itself that will make our life easier when dealing with fallible allocation, and Kvark, after reviewing most of what lands in the WebRender repo and triaging all of the issues, manages to find the time to add tools to measure pixel coverage of render passes, and plenty of other things I don’t even know about because following everything closely would be a full-time job.

You get the idea. I just wanted to give a little shout-out to the people working on very important parts of the project that may not always appear in the highlights below, either because the work hasn’t landed yet, because I missed it, or because it was hidden behind Glenn’s usual round of epic optimization.

Notable WebRender changes
  • Glenn optimized the allocation of clip masks. Improvements with this fix on a test case generated from running cnn.com in Gecko:
GPU time: 10ms -> 1.7ms. Clip target allocations: 54 -> 1. CPU compositor time: 2.8ms -> 1.8ms. CPU backend time: 1.8ms -> 1.6ms.

Notable Gecko changes
  • Jeff landed tiling support for blob images. Tiling is currently only used for very large images, but when used we get parallel rasterization across tiles for free.
  • Fallback blob images are no longer manually clipped. This means that we don’t have to redraw them while scrolling anymore. This gives a large performance improvement when scrolling mozilla.org.

Categorieën: Mozilla-nl planet

Shruti Jasoria: Replicate Distribution on Perfherder

Mozilla planet - zo, 27/08/2017 - 20:30

I would have put up this post a tad earlier had the Game of Thrones season finale not consumed my entire afternoon today. What an episode! It’s saddening that the final season of the show requires a two-year-long wait.

All the code which I have written for GSoC has been merged!

After squashing Bug 1273513 and Bug 1164891, I started working on Bug 1350384, which constituted a major portion of my GSoC project.

Up till now, Perfherder provided aggregated results and graphical visualizations of various test suites in its comparison view, but not for the individual replicates that are used to generate them. For tests where there is a large natural variation in these individual numbers, it can be difficult to determine whether there is a regression when the summarised values change, because there could be many different underlying reasons for the change. For these cases, a detailed view of the individual test results could be extremely helpful.

These individual test results can now be analysed using the new replicate view. This can be accessed from the subtest comparison:

Link to the replicate distribution view from subtest comparison.

In this view, bar charts are used to compare the replicate results. The values for the base and original projects run side by side in the order in which they were obtained.

The new replicate view.

This feature is currently available for the Talos framework when two specific revisions are compared.

With this bug comes an end to Google Summer of Code. It was a truly amazing experience. Over this summer, I have developed a better understanding of how Perfherder works and my front-end development skills have improved a lot. I got a chance to fly to San Francisco and attend the All Hands meeting. The best part about GSoC was the fact that the code which I have written would make an impact on how performance testing is done at Mozilla.

In other news, my seventh semester in college has begun. There’s so much to explore. This semester, I’ll try my hand at Artificial Intelligence and Cryptography. And probably a new Mozilla project too.

I hope you find the features which I have added to Perfherder useful. :)

Categorieën: Mozilla-nl planet
