Mozilla Nederland: The Dutch Mozilla community

Don Marti: Welcome Planet Mozilla readers

Mozilla planet - fr, 10/11/2017 - 09:00

Welcome Planet Mozilla readers. (I finally figured out how to do a tagged feed for this blog, to go along with the full feed. So now you can get the items from the tagged feed on Planet Mozilla.)

The main feed has some items that aren't in the Mozilla feed.

Anyway, if you're coming to Austin, please mark your calendar now.

Two more links: I'm on Keybase and Mozillians. And @dmarti on Twitter.

Gian-Carlo Pascutto: Linux sandboxing improvements in Firefox 57

Mozilla planet - to, 09/11/2017 - 18:19
Firefox 57 not only ships a large amount of performance improvements and a UI refresh, it also contains a number of technological improvements under the hood. One of these is that the security sandbox was tightened, making it harder for attackers - should they find a security hole in Firefox in the first place - to escalate that attack against the rest of the system, or your data.

The content process - that is the one that renders the web pages from the internet and executes JavaScript - is now blocked from reading large parts of the filesystem, with some exceptions for libraries, configuration information, themes and fonts. Notably, it is no longer possible to read private information in the home directory or the Firefox user profile, even if Firefox were to be compromised.

We could not block the web content rendering entirely from reading the filesystem because Firefox still uses GTK directly - we draw webpage HTML widgets using the same theming as the native desktop. Rather than postpone the security improvements till this is reworked, we've elected to work around this by allowing a few very specific locations through. Similar things apply to the use of PulseAudio (to be fixed around Firefox 59 with a new Rust based audio backend), ffmpeg (media decoding must stay sandboxed) and WebGL.

We've made sure this works on all common, and many not-so-common, configurations. So most of you can stop reading here, but for those who like to know more details or are tinkerers, the following might be of interest. Due to the infinite configurability of Linux systems, it's always possible there will be cases where a non-standard setup can break things, and we've kept Firefox configurable, so you can at least help yourself if you're so inclined.

For example, we know that in Firefox 57, configuring your system fonts to be loaded from the same directory where your downloads are stored (a rather insecure configuration, for that matter!) can cause those fonts to appear blank in web pages.

The following settings are available in about:config:

security.sandbox.content.level

This determines the strictness of the sandbox. 0 disables everything, 1 filters dangerous system calls, 2 additionally blocks writing to the filesystem, and 3 adds blocking of (most) reading from the filesystem. This is a high-level knob; use it only to quickly check whether an issue is caused by sandboxing. After changing it, you'll have to restart Firefox for it to take effect.
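
If you prefer to script this rather than flip it in about:config, here is a minimal user.js sketch (the user.js file lives in your Firefox profile directory; the value shown is an example only):

    // Example only: temporarily drop to level 2 to test whether filesystem read
    // blocking is causing an issue, then remove this line (restoring the default,
    // which is 3 in Firefox 57) and restart Firefox.
    user_pref("security.sandbox.content.level", 2);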

If lowering security.sandbox.content.level fixes your problems, turn it back to the default value (3 in Firefox 57) and restart Firefox with the MOZ_SANDBOX_LOGGING=1 environment variable set, which will log every access the sandbox allows or blocks. "Denied" messages will give you a clue about what is being blocked. Don't forget to file a bug in Bugzilla, so we can track the problem and, if possible, make things work by default.

security.sandbox.content.read_path_whitelist

List of paths (directories and files) that Firefox is additionally allowed to read from, separated by commas. You can add things here if Firefox can't reach some libraries, config files or fonts that are in a non-standard location, but avoid pointing it to directories that contain personal information.
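
As an illustration, a minimal user.js sketch (the paths are made up for the example, not recommendations):

    // Example only: let the content process read fonts or config files installed
    // in a non-standard location. Both paths are hypothetical -- substitute your
    // own, and keep directories with personal data out of this list.
    user_pref("security.sandbox.content.read_path_whitelist",
              "/opt/custom-fonts/,/usr/local/share/mytheme/");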

security.sandbox.content.write_path_whitelist

List of paths that Firefox is additionally allowed to write to, separated by commas. It should almost never be necessary to change this.

security.sandbox.content.syscall_whitelist

List of system call numbers that Firefox will additionally allow, separated by commas. A disallowed system call will crash Firefox with a message mentioning a "seccomp violation". It should almost never be necessary to change this. We'd particularly like to hear from you in Bugzilla if you require it.

About:Community: Firefox 57 new contributors

Mozilla planet - to, 09/11/2017 - 17:31

With the upcoming release of Firefox 57, we are pleased to welcome the 53 developers who contributed their first code change to Firefox in this release, 47 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Air Mozilla: Reps Weekly Meeting Nov. 9, 2017

Mozilla planet - to, 09/11/2017 - 17:00

Reps Weekly Meeting Nov. 9, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Open Innovation Team: Applying Open Practices — Automattic

Mozilla planet - to, 09/11/2017 - 12:15

In this article, the third in our series on being ‘Open by Design,’ we focus on Automattic, a company which, despite many challengers (Medium in blogging, Squarespace in websites), has managed to build a sustainable business upon an open platform. The symbiotic relationship between the open source platform (wordpress.org) and the commercial service provider (wordpress.com) is an interesting case of how to balance contribution and commerce, and nurture a growing ecosystem.

Founded by Matt Mullenweg in 2005, Automattic is a globally dispersed team of about 550 employees, which develops and manages a set of services and commercial projects for WordPress users. The WordPress Foundation (wordpress.org), established in 2010, holds the trademark for the WordPress project, whose core is released under the GPL, and is the lighthouse for the WordPress developer and user community.

While the two organisations are separate, Automattic invests in cultivating and nurturing the Wordpress community, the ecosystem of web developers and small-scale entrepreneurs who develop plugins and themes for Wordpress. This community collectively manages the Wordpress open source core code.

[Figure: WordPress Contributors Community]

Diversity and Self-Determination: Key Drivers for Organic Growth?

Achieving a large-scale, self-sustaining community is a holy grail for many organisations — yet success stories are few and far between. So how has Automattic managed to become the catalyst for a global, diverse, passionate community of contributors, developers and end users, which continues to expand not only geographically but also in the variety of people involved? At least in part, this is thanks to Automattic’s embrace of democratic principles in community design: a relatively small team from Automattic ‘manages’ the WordPress community to facilitate dispute resolution and to ensure that the tone of communication is respectful and that all individuals feel welcome regardless of gender or experience level.

Matt believes in open source, democracy — but not open democracy. He’s not excited about tolerating trolls … he’s not so committed to openness that he would let it affect the organisation or culture.
— Simon Phipps, Managing Director, Meshed Insights

Remember that there are people who are making their livelihood based on WordPress. Think before you type: the person asking a question may come from a very different position than you do.
— Experience Designer, Automattic, presenting to a WordPress community audience

Turning Over Control

Another aspect of Automattic’s community success is their deliberate relinquishment of control over the development of the WordPress platform: the contributing community determines the direction of development in a democratic forum which is not beholden to Automattic.

Automattic’s approach to WordCamps is similarly light-touch. WordCamps are face-to-face meetups, held around the world, between people using, developing and building businesses based on WordPress. In 2016 there were 115 WordCamps held in 41 countries, attended by over 62,000 people. Automattic offers organizing expertise, partial funding, and some community management, to ensure relevance and reach in local communities. But every WordCamp is proposed, run and co-financed by local communities. WordCamp events have become an important business networking opportunity, where a diverse set of service providers, designers, developers, and website-related businesses meet potential partners. The growth of this global small-business ecosystem has contributed significantly to the fact that over 25% of websites are now built with WordPress.

We sponsor the WordCamp program heavily as Jetpack and as WooCommerce. To be honest our direct return there is low, but we want to be sure that other potential sponsors are seeing us as a major player in supporting the WordPress core — giving back to the community who keep WordPress alive.
 — Jetpack Growth Engineer, Automattic

Gifting — a Sure-Fire Success?

As with many open source organisations, Automattic’s business was established through Gifting. On the face of it, turning technology ownership and management over to the community seems a great strategy for ensuring optimal feature development. The practice brings challenges too, however: as the community has adopted and eventually come to rely on the PHP code base, plans to advance Automattic’s technology roadmap have met with resistance. For many, the old platform simply worked fine and fulfilled their needs. What’s the problem, then? Given the growth of competing platforms such as Squarespace and Medium, which benefit from faster, more responsive technology stacks, the community’s reluctance to move on from the PHP code base creates a risk of platform obsolescence in the long run. With project Calypso, a new WordPress.com interface built from the ground up, Automattic has been attempting to get around this challenge — by developing the project themselves. This has led to a higher than expected investment of Automattic’s own resources.

Calypso had a very mixed reaction from our community — some people see it as a visionary brave move that can ensure the long term sustainability. Some feel it’s a waste of time, they want to keep using PHP.
— Jetpack Lead Growth Engineer, Automattic

The hope is that when the community sees the value of upgrading from PHP for their individual purposes, Calypso can become a successful community-owned and managed project.

Benefits Automattic Realises via Participation Modes

Automattic’s key innovation is in how the organisation increases market share and adoption by growing and nurturing an expanding ecosystem of entrepreneurs and small businesses that increase the overall usage of the WordPress core. By opening up ownership of the WordPress core to this organised community of developers and designers who rely on it for their business, Automattic provides the structure for users to take ownership and co-develop tools. This has in turn led to a better market fit for those tools, driving preference through better products and services. Like the other parts of this ecosystem, Automattic relies on the collectively maintained WordPress core as the basis for its hosting and service businesses, and in that way benefits from lower product development and operating costs. Success, however, is not guaranteed: an organisation applying open practices cannot take for granted that individual community goals automatically align with its own business goals.

Gitte Jonsdatter & Alex Klepel

Applying Open Practices — Automattic was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla Addons Blog: Q&A with Developer Stefan Van Damme

Mozilla planet - to, 09/11/2017 - 02:47

This is a guest post from Mozilla technical writer Judy DeMocker. She recently chatted with Stefan Van Damme about his extension Turn Off the Lights, and his experience porting it from its original Google Chrome version. Take it away, Judy…

Stefan Van Damme had a small problem—but it happened all the time. He liked to watch videos online, but video players on sites like YouTube don’t eliminate the other content on the screen—and that makes it hard to focus on the show. So Stefan, who lives in Antwerp, Belgium, built his first browser add-on to dim the lights on distracting content. And since so many people love movies, he built it for seven different browsers for users around the world.

Stefan’s extension, Turn Off the Lights, has been downloaded more than 3 million times. With that many users, it’s critical for him to be able to update it quickly and easily, without spending days or weeks on maintenance. So he’s excited about the new WebExtensions API, which makes it easy for him to port his extensions to Google Chrome, Mozilla Firefox, and Microsoft Edge using a universal code base.

Turn Off the Lights in action.

 

Porting to Firefox

What browser did you first create your extension for?
Google Chrome

Why was it important for you to write your extension for Firefox?
It is important to me that everyone can have the Turn Off the Lights experience in their favorite web browser. And Firefox is still one of the most popular web browsers out there today.

Did you migrate your add-on from the legacy Firefox platform, XUL? How difficult was that?
In the first version of Turn Off the Lights, I used the XUL technology. If I had to migrate that to the new version, it would be difficult. However, I already had the Chrome extension, so migrating that code to Firefox was very easy. There was only one file I had to change, the manifest file. I didn't have to touch any of the other files.
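
For context, here is a hypothetical, minimal manifest.json sketch of the kind of browser-specific difference involved; the Firefox-specific applications.gecko block and the add-on ID are illustrative assumptions, not Stefan's actual manifest:

    {
      "manifest_version": 2,
      "name": "Turn Off the Lights (example)",
      "version": "1.0",
      "permissions": ["activeTab", "storage"],
      "browser_action": { "default_title": "Turn Off the Lights" },
      "applications": {
        "gecko": {
          "id": "example@turnoffthelights.example",
          "strict_min_version": "52.0"
        }
      }
    }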

How difficult was it to learn and write to the WebExtensions API? (1 = easiest; 10 = hardest)
Since Firefox now supports the WebExtensions API, it was very easy to take code that runs on Chrome or Edge and put it on Firefox. I can use the same code base and just change the settings to work with each browser. If I continue with Chrome extensions, then it’s just a “1,” very easy.

Did you find all the functionality of your XUL add-on in the WebExtensions API? Or did you have to learn a new way to write the same features?
At the time I wrote the XUL add-on from my Chrome extension code, it was difficult, but I got all the functions inside. Today WebExtensions have more APIs, even those that extend outside the website content. For example, the extension can now dim the toolbar of Firefox thanks to the browser.theme API. And that is very unique and also cool.

What problems, if any, did you experience developing for Firefox?
Mostly I had trouble with the performance of the browser. If I click on my gray lamp button, it goes very slowly to that capacity level. On other browsers, it’s one click and done. I understand Mozilla is working hard to improve this.

What do you think of the new Quantum version of Firefox?
I see some good improvement in the Firefox Quantum web browser. That is what I like, and it can also be good for my users.

Tools & Resources

How has the technology changed since 2009?
At first, I used Notepad ++ on Windows to write my code. Now I use a Mac and Microsoft Visual Studio. Visual Studio is a better experience for both platforms. I can use it on Mac and Windows (using Boot Camp). I can switch to a Windows PC and use the same developer kit to write code and test it also.

How long does it take to publish a Firefox extension?
It’s very quick to publish an update to an add-on. Normally I just zip it and click on “Publish” and it’s done. Yesterday, I updated my Date Today add-on, and it took 10 to 15 minutes.

How is adoption of your new extension?
It’s good. Turn Off the Lights has been downloaded more than 3,000,000 times. I’ve set up my website to detect a visitor’s browser and send them to the correct hyperlink, so they can download the version that works for them.

How long does it take to update your different extensions?
So in browsers like Chrome, Firefox, and Opera, it takes about two hours to update my add-on. I do one or two major updates for Turn Off the Lights a year, for instance moving from version 3.3 to 3.4. Those take more time. But it’s worth it. I get feedback from my users that those updates provide better harmony in the current web experience.

What resources helped you learn about the WebExtensions API?
The MDN website was helpful. I was working with the Chrome documentation, but their site only shows information for the Chrome platform. That’s a minus for the Google team. They didn’t have a browser compatibility table that could show me if a feature is available on another web browser.

What help, if any, did you get from Mozilla?
I didn’t talk to anybody at Mozilla. But I do report bugs and performance issues. It’s important to get a great experience on all web browsers.

What advice would you give other developers who are thinking of creating extensions for Firefox?
Just do it. And, listen to your users’ feedback. They are the experts on how you can improve your Firefox extension.

The post Q&A with Developer Stefan Van Damme appeared first on Mozilla Add-ons Blog.

Mike Hoye: Planetary Alignment

Mozilla planet - wo, 08/11/2017 - 22:24

I’m going to be closing down a number of disused or underused sub-Planets off of Planet Mozilla later this week. If you have any objections to this, you should let me know directly.

I’m probably going to do it anyway, but I promise I’ll at least hear you out.

Air Mozilla: The Joy of Coding - Episode 119

Mozilla planet - wo, 08/11/2017 - 19:00

The Joy of Coding - Episode 119 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla VR Blog: Multiview support lands in Servo: architecture and optimizations

Mozilla planet - wo, 08/11/2017 - 18:37

We have implemented a new WebGL architecture and support for the multiview extension in Servo that will make our WebGL and WebVR much faster. The Multiview extension allows for stereo rendering in a single pass and can improve VR rendering performance up to 40%. We implemented it as a WebVR 1.1 extension and it’s compatible with all the mobile headsets (Google Daydream, Samsung Gear VR, and Snapdragon VR).

We’ve also been working on a multiview-enabled render path in Three.js and plan to land it upstream once the extension is standardized. This will allow for great optimizations for everyone using A-Frame or raw Three.js.

Additionally, we have improved the Servo VR build scripts. In parallel, the Servo team continues to work on embedding APIs and they are going to build a high level drop-in replacement for the Android WebView component.

WebGL architecture redesign

The new Servo WebGL architecture brings render path optimizations, improved source code organization and testability, better synchronization, faster compilation times, and a flexible design to get the best out of new features such as multiview.

The new WebGL render path reduces the steps and latency for each WebGL call to hit the driver and improves the memory footprint of creating new WebGL-based canvases. The whole WebGL implementation has been moved to its own component in Servo, instead of sharing the same code base as WebRender, speeding up the development cycle.

The new component gets rid of fixed IpcSender<T> types and relies on a variety of trait types and some templating in the main WebGLThread struct. This makes it possible to switch GL threading and command queuing models using cargo features at compile time or using runtime preferences (e.g. use a more performant command queue when multiprocess is not enabled, or enable straight multiview rendering to the FBOs exposed by VR headsets).

All WebGL calls are by default queued and sent to the WebGL GPU process. This provides the best security and parallelism because the JavaScript thread does not have access to the GPU and JS code can be run ahead while running heavy GL stuff in a different thread or process.

We also toyed with an experimental WebGL threading model which uses an in-script GPU thread and fully synchronous GPU calls. This approach allows for less parallelism but provides the lowest VR latency when the render tick can complete within a safe frame time. Some WebVR browsers introduce extra frames of latency, partly due to remoting GL commands, which can be very noticeable in VR. We want to make these kinds of optimizations configurable for packaging WebGL/WebVR applications. When you package trusted source code, some validations, error checking, and security rules that the spec enforces could be relaxed in favor of performance and latency gains.

Multiview architecture

The new WebGL architecture, combined with the existing cross-platform rust-webvr library provided a solid base for Multiview integration into Servo.

Our first step was to implement multiview-enabled VR FBOs in the rust-webvr library. Under the hood it uses the OVR_multiview extension, which lets us bind a texture array to an FBO and reuse a single GPU command buffer for stereo rendering in a single render pass. Once the extension is active, the gl_ViewID_OVR built-in variable can be used in vertex or fragment shaders to render the specifics for each eye/view:
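
(The original post's shader snippet isn't preserved in this text. Below is a minimal WebGL-flavored sketch of the same idea, assuming a WebGL2 context gl, the OVR_multiview2 extension, and placeholder width/height values; the native rust-webvr code uses the equivalent OpenGL extension directly.)

    // Sketch only: single-pass stereo rendering into a layered FBO, one layer per eye.
    const ext = gl.getExtension("OVR_multiview2");           // null if unsupported
    const colorTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
    gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, width, height, 2);
    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fbo);
    ext.framebufferTextureMultiviewOVR(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                       colorTex, 0, /* baseViewIndex */ 0, /* numViews */ 2);

    // In the vertex shader, gl_ViewID_OVR picks the per-eye view-projection matrix:
    const vertexShaderSource = `#version 300 es
    #extension GL_OVR_multiview2 : require
    layout(num_views = 2) in;
    uniform mat4 uViewProjection[2];
    in vec4 aPosition;
    void main() {
      gl_Position = uViewProjection[gl_ViewID_OVR] * aPosition;
    }`;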

For the Servo integration we decided to directly expose the VR framebuffers provided by the headsets, using the opaque multiview framebuffers approach proposed in the WEBGL_multiview draft. This enables the use of multiview in WebGL 1.0 as long as the WebGL implementation accepts GLSL ES 3.00 shaders (which isn't true in all browsers). We updated our ANGLE dependency in order to support the OVR_multiview shader validations, but it didn't work correctly and we had to submit some ANGLE patches upstream for correct transpilation.

Servo supports multiview rendering straight to the headsets (e.g. Daydream GVR context). This render path doesn’t require any texture copy in the rendering process of a frame, which improves memory footprint and latency.

The technical part was solved but there wasn’t a clean way to expose multiview for WebVR in JavaScript with the current status of the specs:

  • WebVR 1.1 API is not multiview friendly because the API uses a side by side rendered canvas element.
  • WebVR 2.0 API includes multiview support but it’s still under heavy churn and with a “do not implement” status.

For now we opted to include part of the WebVR 2.0 WebGLFramebuffer API in WebVR 1.1 using an ad-hoc extension method vrDisplay.getViews(). We adapted an official WebVR sample to test the API. This is the entry point:
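
(The sample's code isn't preserved here either. The following is a hypothetical sketch of how such an entry point might be used; every name other than vrDisplay.getViews() is an assumption for illustration.)

    // Hypothetical sketch -- not the actual webvr.info sample code.
    function onVRFrame() {
      vrDisplay.requestAnimationFrame(onVRFrame);
      const views = vrDisplay.getViews ? vrDisplay.getViews() : null;
      if (views && views.length) {
        // Multiview path: one pass renders both eyes into the opaque framebuffer.
        gl.bindFramebuffer(gl.FRAMEBUFFER, views[0].framebuffer);
        drawSceneMultiview(views);
      } else {
        // Fallback: classic WebVR 1.1 side-by-side rendering into the canvas.
        drawSceneForEye("left");
        drawSceneForEye("right");
      }
      vrDisplay.submitFrame();
    }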

All the native work we are doing will be reused when we implement the WebVR 2.0 API. That also applies to the efforts we are making to add multiview support in Three.js. We are using opaque WebGLFramebuffers, which will make all the contributions totally compatible with the WebVR 2.0 spec.

We used a webvr.info sample to measure the impact of multiview on our WebVR implementation. We changed it to use duplicated draw calls to make it more CPU-bound and test a more extreme case. We plan to do a lot more detailed comparisons using Three.js once all the patches are ready. From our measurements, you can expect up to 40% improvements in CPU-bound applications.

Conclusions

We love to save draw calls and squeeze performance. Multiview will be a performance booster for WebVR, improving the quality of VR experiences in the browser. We are also really glad to help the WebVR community by contributing the multiview support to Three.js.

We will keep improving the WebGL and WebVR implementations in Servo. We will soon start adding AR capabilities and improve Firefox by sharing our optimizations under the Quantum project. Ah! And we've already kicked off the Servo WebGL 2.0 implementation ;)

Air Mozilla: Weekly SUMO Community Meeting November 8, 2017

Mozilla planet - wo, 08/11/2017 - 18:00

Weekly SUMO Community Meeting November 8, 2017 This is the SUMO weekly call

Eric Shepherd: Results of the MDN “Thin Pages” SEO experiment

Mozilla planet - wo, 08/11/2017 - 17:22

The MDN team has been working on a number of experiments designed to make decisions around prioritization of various forms of SEO problems we should strive to resolve. In this document, we will examine the results of our first such experiment, the “Thin Pages” experiment.

The goal of this experiment was to select a number of pages on MDN which are considered “thin”—that is, too short to be usefully analyzed—and update them using guidelines provided by our SEO contractor.

Once the changes were made and a suitable time had passed, we re-evaluated the pages’ statistics to determine whether or not the changes had an appreciable effect. With that information in hand, we then made a determination as to whether or not prioritizing this work makes sense.

The content updates

We selected 20 pages, choosing from across the /Web and /Learn areas of MDN and across the spectrum of usage levels. Some pages had little to no traffic at the outset, while others were heavily trafficked. Then, I went through these pages and updated them substantially, adding new content and modernizing their layouts in order to bring them up to a more useful size.

The changes were mostly common-sense ones:

  • Pages that aren’t necessary were deleted (as it turns out, none of the pages we selected were in this category).
  • Ensuring each page had all of the sections they’re meant to have.
  • Ensuring that every aspect of the topic is covered fully.
  • Ensuring that examples are included and cover an appropriate set of cases.
  • Ensuring that all examples include complete explanations of what they’re doing and how they work.
  • Ensuring that pages include not only the standard tags, but additional tags that may add useful keywords to the page.
  • Fleshing out ideas that aren’t fully covered.

The pages we chose to update are:

The results

After making the changes we were able to make in a reasonable amount of time, we then allowed the pages some time to percolate through Google’s crawler and such. Then we re-measured the impression and click counts, and the results were not quite what we expected.

First, of all the pages involved, only a few actually got any search traffic at all. The following pages were not seen by users searching on Google during either the starting or the ending analysis period:

The remaining pages did generally see measurable gains, some of them quite high, but none are clearly outside the range of growth expected given MDN’s ongoing growth:

| Page URL | Clicks (June 1–30) | Impressions (June 1–30) | Clicks (Sept. 24 – Oct. 23) | Impressions (Sept. 24 – Oct. 23) | Clicks Chg. % | Impressions Chg. % |
|---|---|---|---|---|---|---|
| https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queries | 15 | 112 | 111 | 2600 | 640.00% | 2221.43% |
| https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/translateZ | 1789 | 6331 | 1866 | 9004 | 4.30% | 42.22% |
| https://developer.mozilla.org/en-US/docs/Web/HTML/Inline_elements | 3151 | 60793 | 4729 | 100457 | 50.08% | 65.24% |

This is unfortunately not a very large data set, but we can draw some crude results from it. We’ll also continue to watch these pages over the next few months to see if there’s any further change.

The number of impressions went up, in some cases dramatically. But there’s just not enough data here to be sure whether this was related to the thin-page revisions or to other factors, such as the recent large-scale improvements to the HTML docs.

Uncertainties

There are, as mentioned already, some uncertainties around these results:

  • The number of pages that had useful results was smaller than we would have preferred.
  • We had substantial overall site growth during the same time period, and certain areas of the site were heavily overhauled. Both of these facts may have impacted the results.
  • We only gave the pages a couple of months after making the changes before measuring the results. We were advised that six months is a more helpful time period to monitor (so we’ll look again in a few months).
Decisions

After reviewing these results, and weighing the lack of solid data at this stage, we did come to some initial conclusions, which are open to review if the numbers change going forward:

  1. We won’t launch a full-scale formal project around fixing thin pages. It’s just not worth it given the dodginess of the numbers we have thus far.
  2. We will, however, update the meta-documentation to incorporate the recommendations around thin pages. That means providing advice about the kinds of content to include, reminding people to be thorough, reminding writers to include plenty of samples that cover a variety of use cases and situations, and so forth. We will also add a new “SEO” area to the meta docs that covers these recommendations more explicitly in terms of their SEO impact.
  3. We will check these numbers again in a couple of months to see if there’s been further improvement. The recommendation was to wait six months for results, but we did not have that kind of time.
Discussion?

For discussion of this experiment, and of the work updating MDN that will come from it, I encourage you to follow up or comment in this thread on the Mozilla Discourse site.

Hacks.Mozilla.Org: Go beyond console.log with the Firefox Debugger

Mozilla planet - wo, 08/11/2017 - 17:00

console.log is no debugger. It’s great for figuring out what your JavaScript app is up to, but it’s limited to spitting out a minimal amount of information. If your code is complex, you’ll need a proper debugger. That’s why we’ve added a new section to the Firefox DevTools Playground that’s all about debugging. We’ve built four basic lessons that use the Firefox Debugger to examine and repair a simple JavaScript to-do app.

Introducing the Debugger Playground

The lessons are completely free and the to-do app code is available for download from GitHub.

These lessons are a new format for us and we’re very excited to bring them to you. We’re always looking for new ways to help developers learn things and improve the daily workflow. If you have an idea, let us know. We’ll be expanding the Playground in the coming months and we’d love to hear from developers like you.

If you’re not familiar with the Firefox Debugger, take a look at the debugger docs on MDN and watch this quick intro video:

Now let’s take a look at a lesson from the new Debugger Playground. Ever use console.log to find the value of a variable? There’s an easier and more accurate way to do this with the Debugger.

Use Debugger to find the value of a variable

It’s much easier to find a variable with Firefox Debugger than it is with console.log. Here’s how it works:

Let’s take a look at a simple to-do app. Open the to-do app in new tab.

This app has a function called addTodo which will take the value of the input form, create an object, and then push that object onto an array of tasks. Let’s test it out by adding a new task. You’d expect to have this new task added to the list, but instead you see “[object HTMLInputElement]”.

Something is broken, and we need to debug the code. The temptation is to start adding console.log throughout the function, to pinpoint where the problem is. The approach might look something like this:

    const addTodo = e => {
      e.preventDefault();
      const title = document.querySelector(".todo__input");
      console.log('title is: ', title);
      const todo = { title };
      console.log('todo is: ', todo);
      items.push(todo);
      saveList();
      console.log('The updated to-do list is: ', items);
      document.querySelector(".todo__add").reset();
    };

This can work, but it is cumbersome and awkward. We also have to remember to remove these lines after fixing the code. There’s a much better way to do it with the Debugger using what is called a breakpoint…
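
As a hedged illustration of where the breakpoint approach leads (the .value fix below is an inference from the "[object HTMLInputElement]" symptom described earlier, not necessarily the lesson's exact solution): pausing on the line that builds the todo object shows that title holds the input element itself rather than its text.

    // Sketch only: set a breakpoint on the `const todo = { title };` line.
    // The Debugger's Scopes panel shows `title` is the HTMLInputElement, not the typed text.
    const addTodo = e => {
      e.preventDefault();
      const title = document.querySelector(".todo__input").value; // read the value, not the element
      const todo = { title };
      items.push(todo);
      saveList();
      document.querySelector(".todo__add").reset();
    };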

Learn more on the Debugger Playground

The Debugger Playground covers the basics of using the Firefox Debugger, examining the call stack, setting conditional breakpoints, and more. We know there’s a steep learning curve to using the Debugger (and debugging JavaScript), so we’ve pieced together a simple to-do app that’s easy to understand and decode. It’s also useful to run in your browser to keep things on track throughout your work day. The app is available here for download on GitHub. Grab it and then head over to the Playground to walk through the lessons there.

Let us know what you’d like to see next. We’re working on new lessons about the latest web technologies and we’d love to hear from you. Post in the comments below.

Anne van Kesteren: Using GitHub

Mozilla planet - wo, 08/11/2017 - 12:31

I’ve been asked a few times how I stay on top of GitHub:

  • I only watch repositories I can reasonably keep up with in aggregate.
  • I aggressively unsubscribe from threads.
  • I trust others to mention me if needed.
  • I disabled email and exclusively use /notifications. This collapses long threads to a single item.

This works well for me, it may work for you.

What I miss is Bugzilla’s needinfo. I could see this as a persistent notification that cannot be dismissed until you go into the thread and perform the action asked of you. What I also miss on /notifications is the ability to see if someone mentioned me in a thread. I often want to unsubscribe based on the title, but I don’t always do it out of fear of neglecting someone.

Daniel Stenberg: Firefox Quantum

Mozilla planet - wo, 08/11/2017 - 11:34

Next week, Mozilla will release Firefox 57. Also referred to as Firefox Quantum, from the project name we’ve used for all the work that has been put into making this the most awesome Firefox release ever. This is underscored by the fact that I’ve gotten mailed release-swag for the first time during my four years so far as a Mozilla employee.

Firefox 57 is the major milestone hundreds of engineers have worked really hard toward during the last year or so, and most of the efforts have been focused on performance, or perhaps perceived end user snappiness. Early comments I’ve read and heard hint that it is quite notable, too. I think every single Mozilla engineer (and most non-engineers as well) has contributed to at least some parts of this, and of course many have done a lot. My personal contributions to 57 are not much to write home about, but are mostly a stream of minor things that combined at least move the notch forward.

[edited out some secrets I accidentally leaked here.] I’m a proud Mozillian and being part of a crowd that has put together something as grand as Firefox 57 is an honor and a privilege.

Releasing a product to hundreds of millions of end users across the world is interesting. People get accustomed to things, get emotional and don’t particularly like change very much. I’m sure Firefox 57 will also get a fair share of sour feedback and comments written in uppercase. That’s inevitable. But sometimes, in order to move forward and do good stuff, we have to make some tough decisions for the greater good that not everyone will agree with.

This is however not the end of anything. It is rather the beginning of a new Firefox. The work on future releases goes on, we will continue to improve the web experience for users all over the world. Firefox 58 will have even more goodies, and I know there are much more good stuff planned for the releases coming in 2018 too…

Onwards and upwards!

(Update: as I feared in this text, I got a lot of negativism, vitriol and criticism in the comments to this post. So much that I decided to close down comments for this entry and delete the worst entries.)

Air Mozilla: Free Open Shared: A conversation with UN Special Rapporteur on Freedom of Expression David Kaye about the global threats to freedom of expression online

Mozilla planet - wo, 08/11/2017 - 03:30

 A conversation with UN Special Rapporteur on Freedom of Expression David Kaye about the global threats to freedom of expression online On November 7, Wikimedia Foundation, and Mozilla, and the International Justice Resource Center will host UN Special Rapporteur on Freedom of Expression David Kaye for...

Air Mozilla: Martes Mozilleros, 08 Nov 2017

Mozilla planet - wo, 08/11/2017 - 02:00

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

David Humphrey: When the music's over, turn out the lights

Mozilla planet - ti, 07/11/2017 - 20:49

This week I've been thinking about ways in which GitHub could do a better job with projects and code as they age. Unfortunately, since technology is fundamentally about innovation, growth, and development, we don't tend (or want) to talk about decline, neglect, or endings. GitHub has some great docs on how to create a new repo, how to fork an existing repo, and how to file bugs, make PRs, etc. What they don't have is any advice for what you should do when you're done.

GitHub isn't alone in this. Long ago I wrote about the Ethics of Virtual Endings, and how the game WebKinz failed my daughter when she was done wanting to play. It turned out to be impossible to properly say goodbye to a virtual pet, whose neglect would go on indefinitely, and lead to sickness and suffering. I wrote then that "endings are, nevertheless, as real as beginnings, and need great care," and nothing has changed. Today, instead of dealing with a child and her virtual pets, I'm thinking about adults and their software projects.

It's important to establish the fact that every repo on GitHub is going to stop getting updated, cease being maintained, drift into the past, and die. I don't think people realize this. It can be hard to see it, since absence is difficult to observe unless you know what used to be. Our attention is drawn to what's new and hot. GitHub directs our attention to repos that are Trending. To be clear, I love this page, because I too love seeing what's new, who is doing great work (side note: my friend Kate Hudson is currently at the top of what's trending today for git-flight-rules), and what's happening around me. I've had my own repos there in the past, and it's a nice feeling.

Without taking anything away from the phenomenal growth at GitHub, let me also show you a contribution graph:

This is the graph of every project on GitHub. The years might not line up, the contribution levels might be different, and the duration of activity might be wider; but make no mistake: the longest stretch in every project is going to be the flat-line that follows its final commit. Before you tell me that natural selection simply weeds out failed projects, and good ones go on, this was a very successful project.

Software solves a problem for a group of people at a given time in a given context. At some point, it's over. Either the money runs out, or the problem goes away, or the people lose interest, or the language dies, or any number of other things happens. But at some point, it's done. And that's OK. That's actually how it's always been. I've been programming non-stop for 35 years, and I could tell you all kinds of nostalgia-filled tales of software from another time, software that can't be used today.

I say used intentionally. You can't use most software from the past. As a binary, as a product, as a project, they have died. But we can do much more with software than just use it. I spend more time reading code than I do writing it. Often I need to fix a bug, and doing so involves studying parallel implementations to see what they do and don't share in common. Other times I'm trying to learn how to approach a problem, and want to see how someone else did it. Still other times I'm interested in picking up patterns and practices from developers I admire. There are lots of reasons that one might want to read vs. run a piece of code, and having access to it is important.

Which brings me back to GitHub. If we can agree that software projects are all going to end at some point, it seems logical to plan for it. As a community we over-plan and over-engineer every aspect of our work, with continuous, automated processes running 24x7 for every semi-colon insertion and removal. You'd think we'd have some kind of "best practice" or process ready to deploy when it's time to call it a day. However, if we do, I'm not aware of it.

What I see instead are people trying to cope with the lack of such a process. GitHub is overflowing with abandoned repos that have been forked and forgotten. I've seen people add a note in their README file to indicate the project is no longer maintained. I've seen other people do a similar thing, but point to some new repo and suggest people go there. The more typical thing you see is that PRs and Issues start to pile up without an answer. Meanwhile, maintainers take to Medium to write long essays about burnout and the impossibilities of maintaining projects on their own. It's a hard problem.

I think GitHub could help to improve things systematically by addressing the end of a project as a first-class thing worthy of some design and engineering effort. For example, I would argue that after a certain period of inactivity, it's no longer useful to have Issues and PRs open for a dead repository. After the developers have moved on to other things, the code, however, continues to be useful. Long before GitHub existed, we all dealt with source tarballs, or random code archives on the web. We didn't worry about the age of the code if it did what we needed. By always forcing a project-management lens on a repo, GitHub misses the opportunity to also be a great archive.

Another thing I'd like to see is some better UX for repos changing hands. GitHub does make it possible to move a repo to a new org or account. However, since everything in git is a clone, there's no reason that we shouldn't make cloning a dead project a bit easier. This week I've been working on a project that needed to do HLS live video streaming from nginx. The repo you get when you search for this is https://github.com/arut/nginx-rtmp-module. This makes sense, since this is where the work began. However, what you don't see when you go there is that you should actually probably use https://github.com/sergey-dryabzhinsky/nginx-rtmp-module, which is now quite a bit ahead. It would be great if GitHub offered to help me find this info from a project page: given that a fork on GitHub has gone further than this original repo, why not point me to it?

Bound up in this problem are the often unspoken and conflicting expectations of maintainers, downstream developers, and users. We love surprise: the release of something great, a demo, a hack, a new thing in the world that you didn't see coming. We hate the opposite: the end of a thing we love, the lack of updates, the disappearance without explanation. I think GitHub makes this worse by pretending that everything is always, or about to be, worked on. Commits, Issues, Pull Requests, Insight graphs, contributions--things are happening here! The truth is, lots and lots of what's there will never be touched again, and we should be honest about that so that no one is led to believe something that isn't true. Sure, the lights are still on, but nobody lives here anymore.

Cameron Kaiser: And now for something completely different: is the 1GHz Sonnet G4 card worth it?

Mozilla planet - ti, 07/11/2017 - 17:46
First of all, ObTenFourFox announcements: we are on track for TenFourFox Feature Parity Release 4 launching with Firefox 57/52.5 (but still supporting classic extensions, because we actually like our users) on November 14. All new features and updates have stuck, so the only new changes will be the remaining security patches and certificate/pin updates. In the meantime, I have finally started work on adding AltiVec-accelerated VP9 intra frame prediction to our in-tree fork of libvpx, the WebM decoder library. This is the last major portion of the VP9 codec that was lacking AltiVec SIMD acceleration, which I'm doing as a more or less direct port of the Intel SSE2 version with some converted MMX and SSE routines; we don't use the loop filter and have not since VP9 was first officially supported in TenFourFox. Already there are some obvious performance improvements but the partial implementation that I've checked in so far won't be enabled in FPR4 since I haven't tested it thoroughly on G4 systems yet. The last little bit will be rewriting the convolution and averaging code sections that are still in unaccelerated generic C and a couple little odds and ends. Watch for the first draft to appear in FPR5.

Also, in the plain-layouts-are-beautiful dept., I encountered a fun search engine for the way the Web used to be. Floodgap is listed, of course. More about that philosophy at a later date.

On to the main event. One of the computers in my stable of systems is my beloved Power Macintosh 7300, a classic Old World beige PCI Power Mac. This 7300 served as my primary personal computer -- at that time with a 500MHz Sonnet G3, 192MB of RAM and a Rage Orion 3D card -- for about three and a half years and later became the first gopher.floodgap.com before I resurrected it as a gaming system. Currently it has 1GB of RAM, the max for this unit; the same Rage Orion (RAGE 128 GL) 3-D accelerator, which I prefer to PCI Radeons for those games that have 3-D support but weren't patched for various changes in the early Radeon cards; two 7200rpm SCSI drives; a 24x CDROM; a (rather finicky) Orange Micro OrangePC 620 "PC on a card" with 128MB of RAM and a 400MHz AMD K6-II CPU; and, most relevantly to this article, a Sonnet Crescendo/PCI 800MHz G4 CPU upgrade card, running a PowerPC G4 7455 CPU with 256K L2 cache at CPU speed and 1MB of L3 at 200MHz. The system boots Mac OS 9.1 and uses CPU Director to disable speculative access and, for those hardware situations that require it, L2 and L3 caches.

Overall, this system runs pretty well. It naturally can chug through Classilla pretty well, but it also has the Mac ports of a large number of games from a smattering of 68K titles to software-rendered titles like Doom, System Shock, Full Throttle, Wing Commander III and up through 3-D titles near the end of OS 9's life such as Shogo MAD and Quake III and its derivatives like Star Trek Voyager: Elite Force. The PC card boots both Windows 95 OSR2 and Windows 98 to run games like Outlaws and Dark Forces II: Jedi Knight that were never ported to PowerPC Mac OS or OS X.

It's a project of mine to trick this sucker out, which is why I jumped at the chance to buy one when three of the nearly unobtainium 1.0 GHz G4 Sonnet Crescendo/PCI cards turned up on eBay unused in original boxes and factory livery. Although Sonnet obviously makes faster processor upgrades for later Power Macs, and in fact I have one of their dual 1.8GHz upgrades in my FW400 MDD (the Mac that replaced the 7300 as my daily driver), this was the fastest you could cram in a pre-G3 beige PCI Power Mac, i.e., pretty much anything with PCI slots from the 7300 to the 9600. Only the sticker on the box would have told you this was more than the prior top-of-the-line 800MHz card; nothing else mentioned anything of it, not even the manual (an info sheet was tucked inside to reassure you). The urban legend goes that Sonnet's board manufacturer under contract was out of business and Freescale-Motorola was no longer producing the 800MHz 7455. This was clearly the end of the Crescendo/PCI product since it didn't make enough money to be redesigned for a new manufacturer, but left Sonnet with about 140 otherwise useless daughtercards for which no CPU was available either. Possibly as an enticement, Freescale offered to complete Sonnet's order with 1GHz parts instead, which would have been a more-or-less drop-in replacement, and Sonnet quietly sold out their remaining stock with the faster chip installed. Other than a couple blowout NOS deals, all of which would sell out nearly instantly, this was the first time in years that I ever saw one of these cards offered. (I won't comment on the price offered by this gentleman, but clearly I was willing to pay it.)

The Crescendo/PCI cards struggle against the relatively weak system bus speed of these Macs which tops out at 50MHz. I've heard apocryphally of hacks to exceed this, but the details are unknown to me and all of them also allegedly have compatibility problems ranging from moderate to serious, so I won't discuss them here. To counter that, the 1GHz card not only increases its L3 cache speed from 200MHz to 250MHz (using the same 4:1 multiplier as the 800MHz card it's based on), but doubles its size to a beefy 2MB (the L2 cache remains 256K, at full CPU speed). The system must slow to the bus speed for video and other peripherals, but CPU-bound tasks will hit the slower RAM much less. None of this is unusual for this type of upgrade, and anyone in the market for a card like this is already well aware it won't be as fast as a dedicated system. The real question for someone like me who has an investment in such a system is, is it worth finding such a beast to know you've pushed your beloved classic beige Mac to its absolute limit, or is the 800MHz card the extent of practicality?

First, let's look at the card itself. I've photographed it front and back compared with the 800MHz card.

With the exception of some minor board revisions, the two cards are nearly identical except for the stickers and the larger heat sink. More about that in a moment.

If your system already had the 800MHz card in it, the 1GHz card can simply be swapped in; the Mac OS extension and OpenFirmware patches are the same. (If not, the available Sonnet Crescendo installers will work.) Using my lovely wife as a convenient monitor stand while swapping the CPUs, for which I still haven't been forgiven, I swapped cards and immediately fired up MacBench 5 to see what difference it made. And boy howdy, does it:

The card doesn't bench 3.33x the speed of the baseline G3/300 used by MacBench, but it does get almost 2.5x the speed. It runs about 25% faster than the G4/800, which makes sense given the clock speed differential and the fact that the MacBench code probably entirely fits within the caches of both upgrade cards.

Besides the louder fan, the other thing I noticed right away was that CPU-bound tasks like Virtual PC definitely improve. It is noticeably, if not dramatically, smoother than the 800MHz card, and the responsiveness is obviously better.

With this promising start, I fired up Quake III. It didn't feel a great deal faster but I didn't find this surprising, since beyond a certain threshold games of this level are generally limited by the 3D card rather than the CPU. I was about to start checking framerates when, about a minute into the game, the 7300 abruptly froze. I rebooted and tried again. This time it got around 45 seconds in before locking up. I tried Elite Force. Same thing. RAVE Quake and GLQuake could run for awhile, but in general the higher-end 3-D accelerated games just ground the system to a halt. Perhaps I had a defective card? Speculative I/O accesses were already disabled, so I turned off the L2 and the L3 just to see if there was some bad SRAM in there, though I would have thought the stress test with MacBench and Virtual PC would have found any glitches. Indeed, other than making OS 9 treacle in January, it failed to make any difference, implying the card itself was probably not defective. My wife was put back into monitor stand service and the 800MHz card was replaced. Everything functioned as it did before. So what happened?

In this system there are two major limitations, both of which probably contributed: heat, and power draw. Notice that larger heat sink, which would definitely imply the 1GHz card draws more watts and therefore generates more heat within a small, largely passively cooled case in which there are also two 7200rpm hard disks, a passively cooled 3D accelerator and an actively cooled PC card. Yes, all those little fans inside the unit certainly do get a bit loud when the system is cranked up.

The other problem is making all those things work within a 150W power envelope, the maximum the stock Power Mac 7300 power supply can put out. Let's add this all up. For the two 7200rpm SCSI drives we have somewhere between 20 and 25W draw each, so say 50W for the two of them if they're chugging away. Each PCI card can pull up to a maximum of 25W per spec; while the PC card was not running during these tests, it was probably not drawing nothing, and the Rage Orion was probably pulling close to its limits, so say 30-35W. The CD-ROM probably pulls around 5W when idle. If we assume a generous, low-power draw of about 2W per RAM stick, that's eight 128MB sticks to equal our gigabyte and 15-20W total. Finally, the CPU card is harder to compute, but Freescale's specs on the 1GHz 7455 estimate around 15 to 22W for the CPU alone, not including the very busy 2MB SRAM in the L3; add another 5 or so for that. That's up to 137W of power draw plus any other motherboard resources in play, and we're charitably assuming the PSU can continuously put out at max to maintain that. If there's any power sag, that could be enough to glitch the CPU. Running this close to the edge, the 3-6W power differential between the 800MHz and 1GHz cards is no longer a rounding error.
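
Summing those rough estimates (all figures are the approximations from the paragraph above):

    Two 7200rpm SCSI drives                  ~50 W
    PCI cards (busy Rage Orion + idle
      OrangePC 620)                          ~30-35 W
    CD-ROM (idle)                            ~5 W
    Eight 128MB RAM sticks                   ~15-20 W
    1GHz 7455 CPU                            ~15-22 W
    2MB L3 SRAM                              ~5 W
    -----------------------------------------------
    Total                                    ~120-137 W against a 150 W supply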

Now, if heat and/or power were the rate limiting characteristics, I could certainly yank the PC card or get rid of one of the hard drives, but that's really the trick, isn't it? The entire market for these kinds of processor upgrades consists of people like me who have a substantial investment in their old hardware, and that investment often consists of other kinds of power hungry upgrades. Compared to the 800MHz G4, the 1GHz card clearly pushes the envelope just enough extra to kick a system probably already at its limits over the edge. It's possible Sonnet had some inkling of this, and if so, that could be one reason why they never had a 1GHz G4 card in regular production for the beige Power Macs.

The 1GHz card is still a noticeable improvement particularly in CPU-bound tasks; the 2MB of L3 cache in particular helps to reduce the need to hit slower RAM on the system bus. For gaming, however, these cards have never been the optimal choice even though they can get many titles within reach of previously unsupported configurations; on PCI Power Macs, the 3D accelerator has to be accessed over the bus as well, and it's usually the 3D accelerator that limits overall framerate in higher-end titles. In addition, none of these CPU cards are particularly power-thrifty and it's pretty clear this uses more juice than any other such card. Overall, if you can get your hands on one and you have a beefier PSU like an 8500 (225W) or a 9600 (390W), this would be a great upgrade if you can find one at a nice price and certainly the biggest grunt you can get out of that class of system. If you have a smaller 150W system like my 7300 or the other Outrigger Power Macs, however, I'd look at your power budget first and see if this is just going to be a doorstop. Right now, unfortunately, mine is now just a spare in a box because of all the other upgrades. And that's a damn shame.

Marco Zehe: Firefox 57 from an NVDA user’s perspective

Mozilla planet - ti, 07/11/2017 - 15:57

Firefox 57, also known as Firefox Quantum, will be released on November 14. It will bring some significant changes to the Firefox rendering engine to improve performance and open the door for more new features in the future. Here is what you need to know if you are a user of the NVDA screen reader.

For users of the NVDA screen reader, some of these changes may initially seem like a step backward. To make the accessibility features work with the new architecture, we had to make some significant changes which will initially feel less performant than before. Especially complex pages and web applications such as Facebook or Gmail will feel slower to NVDA users in this Firefox release.

Improvements in the pipeline

Fortunately, NVDA users will only have to put up with these slowdowns for one Firefox release. Firefox 58, which will move to beta the moment Firefox 57 is released, will already improve performance so significantly that most smaller pages will feel as snappy as before, larger pages will take a lot less time to be loaded into NVDA’s browse mode buffer, and web applications such as Gmail or Facebook will feel more fluid.

And we’re not stopping there. In Firefox Nightly, then on version 59, performance improvements will continue, and more big pages and web applications should return to a normal working speed with NVDA.

I need full speed

If you do require Firefox to perform as fast as before and cannot or do not want to wait until the above mentioned performance improvements arrive on your machine, you have the option to switch to the Extended Support Release (ESR), which is on version 52 and will receive security fixes until long into 2018.

However, we encourage you to stick with us on the current release if you possibly can. Your findings, if you choose to report them to us, will greatly help us improve Firefox even faster, because even we might not think of all the scenarios and sites that are part of your day-to-day browsing.

I want to stick with you. How can I help?

That’s great! If you encounter any big problems, like pages that take unusually long to load, we want to know about them. We already know that long Wikipedia articles such as the one about World War I will take about 12 seconds to load on an average Windows 10 machine and a current NVDA release. In Firefox 58 beta, we will have brought this down to less than 8 seconds already, and the goal is to bring that time down even further. So if you really want to help, you can choose to upgrade to our beta channel and re-test the problem you encountered there. If it is already improved, you can be certain we’re on top of the underlying problem. If not, we definitely want to know where you found the problem and what steps led to it.

And if you really want to be on the bleeding edge, getting the latest fixes literally hours or days after they landed in our source code, you can choose to update to our Firefox Nightly channel, and get new builds of Firefox twice a day. There, if you encounter problems like long lags, or even crashes, they will be very closely tied to what we were recently working on, and we will be able to resolve the problems quickly, before they even hit the next beta cycle.

In conclusion

We know we’re asking a lot of you since you’ve always had a very fast and efficient browsing experience when you used Firefox in combination with NVDA. And we are truly sorry that we’ll have to temporarily slip here. But rest assured that we’re working hard with the full team to kick Firefox back into gear so that each followup release will bring us back closer to where we were before 57, plus the added benefits Quantum brings for all users.

More information

The post Firefox 57 from an NVDA user’s perspective appeared first on Marco's Accessibility Blog.

Gervase Markham: The Future Path of Censorship

Mozilla planet - ti, 07/11/2017 - 15:03

On Saturday, I attended the excellent ORGCon in London, put on by the Open Rights Group. This was a conference with a single track and a full roster of speakers – no breakouts, no seminars. And it was very enjoyable, with interesting contributions from names I hadn’t heard before.

One of those was Jamie Bartlett, who works at the think tank Demos. He gave some very interesting insights into the nature and future of extremism. He talked about the dissolving of the centre-left/centre-right consensus in the UK, and the rise of views further out on the wings of politics. He feels this is a good thing, as this is always the source of political change, but it seems like the ability and scope to express those views is being reduced and suppressed.

He (correctly, in my view) identified the recent raising by Amber Rudd, the Home Secretary, of the penalty for looking at extremist content on the web to 15 years as a sign of weakness, because they know they can’t actually stop people looking using censorship so have to scare them instead.

The insight which particularly stuck with me was the following. He suggested that in the next decade in the West, two things will happen to censorship. Firstly, it will get more draconian, as governments try harder to suppress things and pass more laws requiring ISPs to censor people’s feeds. Secondly, it will get less effective, as tools like Tor and VPNs become more mainstream and easier to use. This is a concerning combination for those concerned about freedom of speech.
