Mozilla Nederland: The Dutch Mozilla community

David Humphrey: When the music's over, turn out the lights

Mozilla planet - Tue, 07/11/2017 - 20:49

This week I've been thinking about ways in which GitHub could do a better job with projects and code as they age. Unfortunately, since technology is fundamentally about innovation, growth, and development, we don't tend (or want) to talk about decline, neglect, or endings. GitHub has some great docs on how to create a new repo, how to fork an existing repo, and how to file bugs, make PRs, etc. What they don't have is any advice for what you should do when you're done.

GitHub isn't alone in this. Long ago I wrote about the Ethics of Virtual Endings, and how the game WebKinz failed my daughter when she was done wanting to play. It turned out to be impossible to properly say goodbye to a virtual pet, whose neglect would go on indefinitely, and lead to sickness and suffering. I wrote then that "endings are, nevertheless, as real as beginnings, and need great care," and nothing has changed. Today, instead of dealing with a child and her virtual pets, I'm thinking about adults and their software projects.

It's important to establish the fact that every repo on GitHub is going to stop getting updated, cease being maintained, drift into the past, and die. I don't think people realize this. It can be hard to see it, since absence is difficult to observe unless you know what used to be. Our attention is drawn to what's new and hot. GitHub directs our attention to repos that are Trending. To be clear, I love this page, because I too love seeing what's new, who is doing great work (side note: my friend Kate Hudson is currently at the top of what's trending today for git-flight-rules), and what's happening around me. I've had my own repos there in the past, and it's a nice feeling.

Without taking anything away from the phenomenal growth at GitHub, let me also show you a contribution graph:

This is the graph of every project on GitHub. The years might not line up, the contribution levels might be different, and the duration of activity might be wider; but make no mistake: the longest stretch in every project is going to be the flat-line that follows its final commit. Before you tell me that natural selection simply weeds out failed projects, and good ones go on, this was a very successful project.

Software solves a problem for a group of people at a given time in a given context. At some point, it's over. Either the money runs out, or the problem goes away, or the people lose interest, or the language dies, or any number of other things happens. But at some point, it's done. And that's OK. That's actually how it's always been. I've been programming non-stop for 35 years, and I could tell you all kinds of nostalgia-filled tales of software from another time, software that can't be used today.

I say used intentionally. You can't use most software from the past. As a binary, as a product, as a project, they have died. But we can do much more with software than just use it. I spend more time reading code than I do writing it. Often I need to fix a bug, and doing so involves studying parallel implementations to see what they do and don't share in common. Other times I'm trying to learn how to approach a problem, and want to see how someone else did it. Still other times I'm interested in picking up patterns and practices from developers I admire. There are lots of reasons that one might want to read vs. run a piece of code, and having access to it is important.

Which brings me back to GitHub. If we can agree that software projects are all going to end at some point, it seems logical to plan for it. As a community we over-plan and over-engineer every aspect of our work, with continuous, automated processes running 24x7 for every semi-colon insertion and removal. You'd think we'd have some kind of "best practice" or process ready to deploy when it's time to call it a day. However, if we do, I'm not aware of it.

What I see instead are people trying to cope with the lack of such a process. GitHub is overflowing with abandoned repos that have been forked and forgotten. I've seen people add a note in their README file to indicate the project is no longer maintained. I've seen other people do a similar thing, but point to some new repo and suggest people go there. The more typical thing you see is that PRs and Issues start to pile up without an answer. Meanwhile, maintainers take to Medium to write long essays about burnout and the impossibilities of maintaining projects on their own. It's a hard problem.

I think GitHub could help to improve things systematically by addressing the end of a project as a first-class thing worthy of some design and engineering effort. For example, I would argue that after a certain point of inactivity, it's no longer useful to have Issues and PRs open for a dead repository. Even after the developers have moved on to other things, however, the code continues to be useful. Long before GitHub existed, we all dealt with source tarballs, or random code archives on the web. We didn't worry about the age of the code, if it did what we needed. By always forcing a project-management lens on a repo, GitHub misses the opportunity to also be a great archive.

Another thing I'd like to see is some better UX for repos changing hands. GitHub does make it possible to move a repo to a new org or account. However, since everything in git is a clone, there's no reason that we shouldn't make cloning a dead project a bit easier. This week I've been working on a project that needed to do HLS live video streaming from nginx. The repo you get when you search for this is https://github.com/arut/nginx-rtmp-module. This makes sense, since this is where the work began. However, what you don't see when you go there is that you should actually probably use https://github.com/sergey-dryabzhinsky/nginx-rtmp-module, which is now quite a bit ahead. It would be great if GitHub offered to help me find this info from a project page: given that a fork on GitHub has gone further than this original repo, why not point me to it?

Bound up in this problem are the often unspoken and conflicting expectations of maintainers, downstream developers, and users. We love surprise: the release of something great, a demo, a hack, a new thing in the world that you didn't see coming. We hate the opposite: the end of a thing we love, the lack of updates, the disappearance without explanation. I think GitHub makes this worse by pretending that everything is always, or about to be, worked on. Commits, Issues, Pull Requests, Insight graphs, contributions--things are happening here! The truth is, lots and lots of what's there will never be touched again, and we should be honest about that so that no one is led to believe something that isn't true. Sure, the lights are still on, but nobody lives here anymore.

Categories: Mozilla-nl planet

Cameron Kaiser: And now for something completely different: is the 1GHz Sonnet G4 card worth it?

Mozilla planet - Tue, 07/11/2017 - 17:46
First of all, ObTenFourFox announcements: we are on track for TenFourFox Feature Parity Release 4 launching with Firefox 57/52.5 (but still supporting classic extensions, because we actually like our users) on November 14. All new features and updates have stuck, so the only new changes will be the remaining security patches and certificate/pin updates. In the meantime, I have finally started work on adding AltiVec-accelerated VP9 intra frame prediction to our in-tree fork of libvpx, the WebM decoder library. This is the last major portion of the VP9 codec that was lacking AltiVec SIMD acceleration, which I'm doing as a more or less direct port of the Intel SSE2 version with some converted MMX and SSE routines; we don't use the loop filter and have not since VP9 was first officially supported in TenFourFox. Already there are some obvious performance improvements but the partial implementation that I've checked in so far won't be enabled in FPR4 since I haven't tested it thoroughly on G4 systems yet. The last little bit will be rewriting the convolution and averaging code sections that are still in unaccelerated generic C and a couple little odds and ends. Watch for the first draft to appear in FPR5.

Also, in the plain-layouts-are-beautiful dept., I encountered a fun search engine for the way the Web used to be. Floodgap is listed, of course. More about that philosophy at a later date.

On to the main event. One of the computers in my stable of systems is my beloved Power Macintosh 7300, a classic Old World beige PCI Power Mac. This 7300 served as my primary personal computer -- at that time with a 500MHz Sonnet G3, 192MB of RAM and a Rage Orion 3D card -- for about three and a half years and later became the first gopher.floodgap.com before I resurrected it as a gaming system. Currently it has 1GB of RAM, the max for this unit; the same Rage Orion (RAGE 128 GL) 3-D accelerator, which I prefer to PCI Radeons for those games that have 3-D support but weren't patched for various changes in the early Radeon cards; two 7200rpm SCSI drives; a 24x CDROM; a (rather finicky) Orange Micro OrangePC 620 "PC on a card" with 128MB of RAM and a 400MHz AMD K6-II CPU; and, most relevantly to this article, a Sonnet Crescendo/PCI 800MHz G4 CPU upgrade card, running a PowerPC G4 7455 CPU with 256K L2 cache at CPU speed and 1MB of L3 at 200MHz. The system boots Mac OS 9.1 and uses CPU Director to disable speculative access and, for those hardware situations that require it, L2 and L3 caches.

Overall, this system runs pretty well. It naturally can chug through Classilla pretty well, but it also has the Mac ports of a large number of games from a smattering of 68K titles to software-rendered titles like Doom, System Shock, Full Throttle, Wing Commander III and up through 3-D titles near the end of OS 9's life such as Shogo MAD and Quake III and its derivatives like Star Trek Voyager: Elite Force. The PC card boots both Windows 95 OSR2 and Windows 98 to run games like Outlaws and Dark Forces II: Jedi Knight that were never ported to PowerPC Mac OS or OS X.

It's a project of mine to trick this sucker out, which is why I jumped at the chance to buy one when three of the nearly unobtainium 1.0 GHz G4 Sonnet Crescendo/PCI cards turned up on eBay unused in original boxes and factory livery. Although Sonnet obviously makes faster processor upgrades for later Power Macs, and in fact I have one of their dual 1.8GHz upgrades in my FW400 MDD (the Mac that replaced the 7300 as my daily driver), this was the fastest you could cram in a pre-G3 beige PCI Power Mac, i.e., pretty much anything with PCI slots from the 7300 to the 9600. Only the sticker on the box would have told you this was more than the prior top-of-the-line 800MHz card; nothing else mentioned anything of it, not even the manual (an info sheet was tucked inside to reassure you). The urban legend goes that Sonnet's board manufacturer under contract was out of business and Freescale-Motorola was no longer producing the 800MHz 7455. This was clearly the end of the Crescendo/PCI product since it didn't make enough money to be redesigned for a new manufacturer, but left Sonnet with about 140 otherwise useless daughtercards for which no CPU was available either. Possibly as an enticement, Freescale offered to complete Sonnet's order with 1GHz parts instead, which would have been a more-or-less drop-in replacement, and Sonnet quietly sold out their remaining stock with the faster chip installed. Other than a couple blowout NOS deals, all of which would sell out nearly instantly, this was the first time in years that I ever saw one of these cards offered. (I won't comment on the price offered by this gentleman, but clearly I was willing to pay it.)

The Crescendo/PCI cards struggle against the relatively weak system bus speed of these Macs which tops out at 50MHz. I've heard apocryphally of hacks to exceed this, but the details are unknown to me and all of them also allegedly have compatibility problems ranging from moderate to serious, so I won't discuss them here. To counter that, the 1GHz card not only increases its L3 cache speed from 200MHz to 250MHz (using the same 4:1 multiplier as the 800MHz card it's based on), but doubles its size to a beefy 2MB (the L2 cache remains 256K, at full CPU speed). The system must slow to the bus speed for video and other peripherals, but CPU-bound tasks will hit the slower RAM much less. None of this is unusual for this type of upgrade, and anyone in the market for a card like this is already well aware it won't be as fast as a dedicated system. The real question for someone like me who has an investment in such a system is, is it worth finding such a beast to know you've pushed your beloved classic beige Mac to its absolute limit, or is the 800MHz card the extent of practicality?

First, let's look at the card itself. I've photographed it front and back compared with the 800MHz card.

With the exception of some minor board revisions, the two cards are nearly identical except for the stickers and the larger heat sink. More about that in a moment.

If your system already had the 800MHz card in it, the 1GHz card can simply be swapped in; the Mac OS extension and OpenFirmware patches are the same. (If not, the available Sonnet Crescendo installers will work.) Using my lovely wife as a convenient monitor stand while swapping the CPUs, for which I still haven't been forgiven, I swapped cards and immediately fired up MacBench 5 to see what difference it made. And boy howdy, does it:

The card doesn't bench 3.33x the speed of the baseline G3/300 used by MacBench, but it does get almost 2.5x the speed. It runs about 25% faster than the G4/800, which makes sense given the clock speed differential and the fact that the MacBench code probably entirely fits within the caches of both upgrade cards.

Besides the louder fan, the other thing I noticed right away was that CPU-bound tasks like Virtual PC definitely improve. It is noticeably, if not dramatically, smoother than the 800MHz card, and the responsiveness is obviously better.

With this promising start, I fired up Quake III. It didn't feel a great deal faster but I didn't find this surprising, since beyond a certain threshold games of this level are generally limited by the 3D card rather than the CPU. I was about to start checking framerates when, about a minute into the game, the 7300 abruptly froze. I rebooted and tried again. This time it got around 45 seconds in before locking up. I tried Elite Force. Same thing. RAVE Quake and GLQuake could run for awhile, but in general the higher-end 3-D accelerated games just ground the system to a halt. Perhaps I had a defective card? Speculative I/O accesses were already disabled, so I turned off the L2 and the L3 just to see if there was some bad SRAM in there, though I would have thought the stress test with MacBench and Virtual PC would have found any glitches. Indeed, other than making OS 9 treacle in January, it failed to make any difference, implying the card itself was probably not defective. My wife was put back into monitor stand service and the 800MHz card was replaced. Everything functioned as it did before. So what happened?

In this system there are two major limitations, both of which probably contributed: heat, and power draw. Notice that larger heat sink, which would definitely imply the 1GHz card draws more watts and therefore generates more heat within a small, largely passively cooled case in which there are also two 7200rpm hard disks, a passively cooled 3D accelerator and an actively cooled PC card. Yes, all those little fans inside the unit certainly do get a bit loud when the system is cranked up.

The other problem is making all those things work within a 150W power envelope, the maximum the stock Power Mac 7300 power supply can put out. Let's add this all up. For the two 7200rpm SCSI drives we have somewhere between 20 and 25W draw each, so say 50W for the two of them if they're chugging away. Each PCI card can pull up to a maximum of 25W per spec; while the PC card was not running during these tests, it was probably not drawing nothing, and the Rage Orion was probably pulling close to its limits, so say 30-35W. The CD-ROM probably pulls around 5W when idle. If we assume a generous, low-power draw of about 2W per RAM stick, that's eight 128MB sticks to equal our gigabyte and 15-20W total. Finally, the CPU card is harder to compute, but Freescale's specs on the 1GHz 7455 estimate around 15 to 22W for the CPU alone, not including the very busy 2MB SRAM in the L3; add another 5 or so for that. That's up to 137W of power draw plus any other motherboard resources in play, and we're charitably assuming the PSU can continuously put out at max to maintain that. If there's any power sag, that could be enough to glitch the CPU. Running this close to the edge, the 3-6W power differential between the 800MHz and 1GHz cards is no longer a rounding error.
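
Tallied up, the rough worst-case estimates above (ballpark figures, not measurements) look like this:

50W (two SCSI disks) + 35W (Rage Orion plus idle PC card) + 5W (CD-ROM) + 20W (eight RAM sticks) + 27W (1GHz 7455 plus its L3 SRAM) ≈ 137W

against a 150W supply, before counting the motherboard itself.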

Now, if heat and/or power were the rate limiting characteristics, I could certainly yank the PC card or get rid of one of the hard drives, but that's really the trick, isn't it? The entire market for these kinds of processor upgrades consists of people like me who have a substantial investment in their old hardware, and that investment often consists of other kinds of power hungry upgrades. Compared to the 800MHz G4, the 1GHz card clearly pushes the envelope just enough extra to kick a system probably already at its limits over the edge. It's possible Sonnet had some inkling of this, and if so, that could be one reason why they never had a 1GHz G4 card in regular production for the beige Power Macs.

The 1GHz card is still a noticeable improvement, particularly in CPU-bound tasks; the 2MB of L3 cache in particular helps to reduce the need to hit slower RAM on the system bus. For gaming, however, these cards have never been the optimal choice even though they can get many titles within reach of previously unsupported configurations; on PCI Power Macs, the 3D accelerator has to be accessed over the bus as well, and it's usually the 3D accelerator that limits overall framerate in higher-end titles. In addition, none of these CPU cards are particularly power-thrifty and it's pretty clear this uses more juice than any other such card. Overall, if you have a beefier PSU like an 8500 (225W) or a 9600 (390W) and can find one of these cards at a nice price, it would be a great upgrade and certainly the biggest grunt you can get out of that class of system. If you have a smaller 150W system like my 7300 or the other Outrigger Power Macs, however, I'd look at your power budget first and see if this is just going to be a doorstop. Right now, unfortunately, mine is just a spare in a box because of all the other upgrades. And that's a damn shame.

Categories: Mozilla-nl planet

Marco Zehe: Firefox 57 from an NVDA user’s perspective

Mozilla planet - Tue, 07/11/2017 - 15:57

Firefox 57, also known as Firefox Quantum, will be released on November 14. It will bring some significant changes to the Firefox rendering engine to improve performance and open the door for more new features in the future. Here is what you need to know if you are a user of the NVDA screen reader.

For users of the NVDA screen reader, some of these changes may initially seem like a step backward. To make the accessibility features work with the new architecture, we had to make some significant changes which will initially feel less performant than before. Especially complex pages and web applications such as Facebook or Gmail will feel slower to NVDA users in this Firefox release.

Improvements in the pipeline

Fortunately, NVDA users will only have to put up with these slowdowns for one Firefox release. Firefox 58, which will move to beta the moment Firefox 57 is released, will already improve performance so significantly that most smaller pages will feel as snappy as before, larger pages will take a lot less time to be loaded into NVDA’s browse mode buffer, and web applications such as Gmail or Facebook will feel more fluid.

And we’re not stopping there. In Firefox Nightly, then on version 59, performance improvements will continue, and more big pages and web applications should return to a normal working speed with NVDA.

I need full speed

If you do require Firefox to perform as fast as before and cannot or do not want to wait until the above mentioned performance improvements arrive on your machine, you have the option to switch to the Extended Support Release (ESR), which is on version 52 and will receive security fixes until long into 2018.

However, we encourage you to stick with us on the current release if you possibly can. Your findings, if you choose to report them to us, will greatly help us improve Firefox even faster, because even we might not think of all the scenarios and day-to-day sites that matter to you.

I want to stick with you. How can I help?

That’s great! If you encounter any big problems, like pages that take unusually long to load, we want to know about them. We already know that long Wikipedia articles such as the one about World War I will take about 12 seconds to load on an average Windows 10 machine and a current NVDA release. In Firefox 58 beta, we will have brought this down to less than 8 seconds already, and the goal is to bring that time down even further. So if you really want to help, you can choose to upgrade to our beta channel and re-test the problem you encountered there. If it is already improved, you can be certain we’re on top of the underlying problem. If not, we definitely want to know where you found the problem and what steps led to it.

And if you really want to be on the bleeding edge, getting the latest fixes literally hours or days after they landed in our source code, you can choose to update to our Firefox Nightly channel, and get new builds of Firefox twice a day. There, if you encounter problems like long lags, or even crashes, they will be very closely tied to what we were recently working on, and we will be able to resolve the problems quickly, before they even hit the next beta cycle.

In conclusion

We know we’re asking a lot of you since you’ve always had a very fast and efficient browsing experience when you used Firefox in combination with NVDA. And we are truly sorry that we’ll have to temporarily slip here. But rest assured that we’re working hard with the full team to kick Firefox back into gear so that each followup release will bring us back closer to where we were before 57, plus the added benefits Quantum brings for all users.

More information

The post Firefox 57 from an NVDA user’s perspective appeared first on Marco's Accessibility Blog.

Categories: Mozilla-nl planet

Gervase Markham: The Future Path of Censorship

Mozilla planet - Tue, 07/11/2017 - 15:03

On Saturday, I attended the excellent ORGCon in London, put on by the Open Rights Group. This was a conference with a single track and a full roster of speakers – no breakouts, no seminars. And it was very enjoyable, with interesting contributions from names I hadn’t heard before.

One of those was Jamie Bartlett, who works at the think tank Demos. He gave some very interesting insights into the nature and future of extremism. He talked about the dissolving of the centre-left/centre-right consensus in the UK, and the rise of views further out on the wings of politics. He feels this is a good thing, as this is always the source of political change, but it seems like the ability and scope to express those views are being reduced and suppressed.

He (correctly, in my view) identified the recent raising by Amber Rudd, the Home Secretary, of the penalty for looking at extremist content on the web to 15 years as a sign of weakness, because they know they can’t actually stop people looking using censorship so have to scare them instead.

The insight which particularly stuck with me was the following. He suggested that in the next decade in the West, two things will happen to censorship. Firstly, it will get more draconian, as governments try harder to suppress things and pass more laws requiring ISPs to censor people’s feeds. Secondly, it will get less effective, as tools like Tor and VPNs become more mainstream and easier to use. This is a worrying combination for those concerned about freedom of speech.

Categories: Mozilla-nl planet

The Mozilla Blog: Don’t Buy Gifts That Snoop: Introducing Mozilla’s Holiday Buyers’ Guide

Mozilla planet - Tue, 07/11/2017 - 14:00
This gift-giving season, Mozilla is helping you choose gadgets that respect your online privacy and security: advocacy.mozilla.org/privacynotincluded

 

Is your smart toaster spying on you? Does your toddler’s new toy have an easily-hackable microphone or camera?

This holiday season, don’t buy your loved ones an Internet-connected gadget that compromises their privacy or security — no matter how nifty or cute that gadget may be.

Today, Mozilla is publishing *Privacy Not Included — a shopping companion to help consumers identify Internet-connected products that meet their privacy and security needs.

Mozilla’s researchers reviewed dozens of popular toys, game consoles, exercise gadgets, and smart home accessories ranging in price from $25 to $900. We asked critical questions, like:

Does this product have privacy controls? Does the company share data with third parties? And does the company claim to obey child-related privacy rules? Our goal: To make products’ privacy and security features as obvious as their price.

Our reviews are guided by the Digital Standard, a comprehensive rubric for evaluating items’ privacy and security features. The Standard is developed by Consumer Reports and its partners Disconnect, Ranking Digital Rights, and the Cyber Independent Testing Lab.

We also integrated Talk — an open-source commenting platform built by Mozilla — across our buyers’ guide, so consumers can talk to one another. *Privacy Not Included is available in both English and Spanish.

We’re releasing *Privacy Not Included at a critical moment. Every day, more and more products — from cars to dolls to salt shakers — connect to the Internet and collect our personal data. But people feel they can’t control these connected devices, according to a recent Mozilla poll of 190,000 individuals across scores of countries. 35% of respondents were “wary and nervous” about the future of IoT, and 45% feared a “loss of privacy.”

Unfortunately, the expectation in digital life today is that it’s the consumer’s responsibility to protect their online privacy and security. It’s the consumer’s job to wield VPNs and encryption, and to master a host of other technical tools.  

It’s important to empower consumers — but it’s not enough. Makers of digital products must prioritize online privacy and security. We don’t ask people to install their own seat belts to stay safe in cars. Why are we asking people to install VPNs to stay safe online?

“Right now, the Internet of Things is at an inflection point,” Mark Surman, Mozilla’s Executive Director, recently wrote. “It’s pervasive, but also still in its infancy. Rules have yet to be written, and social mores yet to be established. There are many possible futures — some darker than others.”

With *Privacy Not Included, we can help shoppers choose more responsible technology. We can also do something bigger — fuel a movement for rules and mores that enshrine online privacy and security. We can demand change from the businesses that make digital products, and the governments that oversee them, to ensure privacy and security are built into our digital lives.

Ashley Boyd is VP, Advocacy at Mozilla.

The post Don’t Buy Gifts That Snoop: Introducing Mozilla’s Holiday Buyers’ Guide appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 207

Mozilla planet - Tue, 07/11/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is futures-await, a crate to simplify writing futures-based async code. Thanks to LilianMoraru for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

125 pull requests were merged in the last week

New Contributors
  • David Wood
  • Fredrik Larsson
  • Jonathan Behrens
  • Lance John
  • laurent
  • matt rice
  • Rolf Karp
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

durka42: IMO the name "dangling" is scary enough :)
Havvy gives durka42 a ptr::dangling::().
durka42 declines to unwrap() it

durka42 and Havvy discussing PR #45527.

Thanks to Centril for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Categories: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - November 7, 2017

Mozilla planet - Tue, 07/11/2017 - 01:00

Here’s what happened on the MozMEAO SRE team from October 31st - November 7th.

Current work

SUMO

Work progresses on a SUMO development environment for use with Kubernetes in AWS.

MDN Links
Categories: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: Kuma Report, October 2017

Mozilla planet - Tue, 07/11/2017 - 01:00

Here’s what happened in October in Kuma, the engine of MDN:

  • MDN Migrated to AWS
  • Continued Migration of Browser Compatibility Data
  • Shipped tweaks and fixes

Here’s the plan for November:

  • Ship New Compat Table to Beta Users
  • Improve Performance of MDN and the Interactive Editor
  • Update Localization of KumaScript Macros

I’ve also included an overview of the AWS migration project, and an introduction to our new AWS infrastructure in Kubernetes, which helps make this the longest Kuma Report yet.

Done in October

MDN Migrated to AWS

On October 10, we moved MDN from Mozilla’s SCL3 datacenter to a Kubernetes cluster in the AWS us-west2 (Oregon) region. The database move went well, but we needed five times the web resources as the maintenance mode tests. We were able to smoothly scale up in the four hours we budgeted for the migration. Dave Parfitt and Ryan Johnson did a great job implementing a flexible set of deployment tools and monitors, that allowed us to quickly react to and handle the unexpected load.

The extra load was caused by mdn.mozillademos.org, which serves user uploads and wiki-based code samples. These untrusted resources are served from a different domain so that browsers will protect MDN users from the worst security issues. I excluded these resources from the production traffic tests, which turned out to be a mistake, since they represent 75% of the web traffic load after the move.

Ryan and I worked to get this domain behind a CDN. This included avoiding a Vary: Cookie header that was being added to all responses (PR 4469), and adding caching headers to each endpoint (PR 4462 and PR 4476).
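
To give a flavour of what "adding caching headers" means in practice, here is a minimal, hypothetical sketch using Django's standard cache decorator (Kuma is a Django application); the real changes in the PRs above are more involved and may differ in detail:

# Hypothetical example: let the CDN cache an untrusted-domain view for a day.
from django.views.decorators.cache import cache_control

@cache_control(public=True, max_age=60 * 60 * 24)  # Cache-Control: public, max-age=86400
def code_sample(request, sample_id):
    response = render_code_sample(request, sample_id)  # hypothetical helper
    # Avoid a Vary: Cookie header, so the CDN can share one cached copy across users.
    if response.has_header("Vary"):
        del response["Vary"]
    return response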

We added CloudFront to the domain on October 26. Now most of these resources are served from the CloudFront CDN, which is fast and often closer to the MDN user (for example, served to French users from a server in Europe rather than California). Over a week, 197 GB was served from the CDN, versus 3 GB (1.5%) served from Kuma.

Bytes to User

There is a reduced load on Kuma as well. The CDN can handle many requests, so Kuma doesn’t see them at all. The CDN periodically checks with Kuma that content hasn’t changed, which often requires a short 304 Not Modified rather than the full response.

Backend requests for attachments have dropped by 45%:

Attachment Throughput

Code sample requests have dropped by 96%:

Code Sample Throughput

We continue to use a CDN for our static assets, but not for developer.mozilla.org itself. We’d have to do similar work to add caching headers, ideally splitting anonymous content from logged-in content. The untrusted domain had 4 endpoints to consider, while developer.mozilla.org has 35 to 50. We hope to do this work in 2018.

Continued Migration of Browser Compatibility Data

The Browser Compatibility Data project was the most active MDN project in October. Another 700 MDN pages use the BCD data, bringing us up to 2200 MDN pages, or 35.5% of the pages with compatibility data.

Daniel D. Beck continues migrating the CSS data, which will take at least the rest of 2017. wbamberg continues to update WebExtension and API data, which needs to keep up with browser releases. Chris Mills migrated the Web Audio data with 32 PRs, starting with PR 433. This data includes mixin interfaces, and prompted some discussion about how to represent them in BCD in issue #472.

Florian Scholz added MDN URLs in PR 344, which will help BCD integrators to link back to MDN for more detailed information.

Browser names and versions are an important part of the compatibility data, and Florian and Jean-Yves Perrier worked to formalize their representation in BCD. This includes standardization of the first version, preferring “33” to “33.0” (PR 447 and more), and fixing some invalid version numbers (PR 449 and more). In November, BCD will add more of this data, allowing automated validation of version data, and enabling some alternate ways to present compat data.

Florian continues to release a new NPM package each Monday, and enabled tag-based releases (PR 565) for the most recent 0.0.12 release. mdn-browser-compat-data had over 900 downloads last month.

Shipped Tweaks and Fixes

There were 276 PRs merged in October:

Many of these were from external contributors, including several first-time contributions. Here are some of the highlights:

Planned for November

Ship New Compat Table to Beta Users

Stephanie Hobson and Florian are collaborating on a new compat table design for MDN, based on the BCD data. The new format summarizes support across desktop and mobile browsers, while still allowing developers to dive into the implementation details. We’ll ship this to beta users on 2200 MDN pages in November. See Beta Testing New Compatability Tables on Discourse for more details.

New Compat Table

Improve Performance of MDN and the Interactive Editor

Page load times have increased with the move to AWS. We’re looking into ways to increase performance across MDN. You can follow our MDN Post-migration project for more details. We also want to enable the interactive editor for all users, but we’re concerned about further increasing page load times. You can follow the remaining issues in the interactive-examples repo.

Update Localization of KumaScript Macros

In August, we planned the toolkit we’d use to extract strings from KumaScript macros (see bug 1340342). We put implementation on hold until after the AWS migration. In November, we’ll dust off the plans and get some sample macros converted. We’re hopeful the community will make short work of the rest of the macros.

MDN in AWS

The AWS migration project started in November 2014, bug 1110799. The original plan was to switch by summer 2015, but the technical and organizational hurdles proved harder than expected. At the same time, the team removed many legacy barriers making Kuma hard to migrate. A highlight of the effort was the Mozilla All Hands in December 2015, where the team merged several branches of work-in-progress code to get Kuma running in Heroku. Thanks to Jannis Leidel, Rob Hudson, Luke Crouch, Lonnen, Will Kahn-Greene, David Walsh, James Bennet, cyliang, Jake, Sean Rich, Travis Blow, Sheeri Cabral, and everyone else who worked on or influenced this first phase of the project.

The migration project rebooted in Summer 2016. We switched to targeting Mozilla Marketing’s deployment environment. I split the work into smaller steps leading up to AWS. I thought each step would take about a month. They took about 3 months each. Estimating is hard.

2016 MDN Tech Plan

Changes to MDN Services

MDN no longer uses Apache to serve files and proxy Kuma. Instead, Kuma serves requests directly using gunicorn with the meinheld worker. I did some analysis in January, and Dave Parfitt and Ryan Johnson led the effort to port Apache features to Kuma:

  • Redirects are implemented with Paul McLanahan’s django-redirect-urls.
  • Static assets (CSS, JavaScript, etc.) are served directly with WhiteNoise.
  • Kuma handles the domain-based differences between the main website and the untrusted domain.
  • Miscellaneous files like robots.txt, sitemaps, and legacy files (from the early days of MDN) are served directly.
  • Kuma adds security headers to responses.

Another big change is how the services are run. The base unit of implementation in SCL3 was multi-purpose virtual machines (VMs). In AWS, we are switching to application-specific Docker containers.

In SCL3, the VMs were split into 6 user-facing web servers and 4 backend Celery servers. In AWS, the EC2 servers act as Docker hosts. Docker uses operating system virtualization, which has several advantages over machine virtualization for our use cases. The Docker images are distributed over the EC2 servers, as chosen by Kubernetes.

SCL3 versus AWS Servers

The SCL3 servers were maintained as long-running servers, using Puppet to install security updates and new software. The servers were multi-purpose, used for Kuma, KumaScript, and backend Celery processes. With Docker, we instead use a Python/Kuma image and a node.js/KumaScript image to implement MDN.

SCL3 versus AWS MDN units

The Python/Kuma image is configurable through environment variables to run in different domains (such as staging or production), and to be configured as one of our three main Python services:

  • web - User-facing Kuma service
  • celery - Backend Kuma processes outside of the request loop
  • api - A backend Kuma service, used by KumaScript to render pages. This avoids an issue in SCL3 where KumaScript API calls were competing with MDN user requests.
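
As a rough illustration of this one-image, many-roles pattern (the variable name and commands below are simplified guesses, not Kuma's actual entrypoint):

# Hypothetical container entrypoint: one Docker image, behaviour picked by an env var.
import os
import subprocess

ROLE = os.environ.get("KUMA_ROLE", "web")  # assumed variable name

COMMANDS = {
    # User-facing web service: gunicorn with the meinheld worker.
    "web": ["gunicorn", "--worker-class", "meinheld.gmeinheld.MeinheldWorker", "kuma.wsgi"],
    # Internal rendering API used by KumaScript.
    "api": ["gunicorn", "kuma.wsgi"],
    # Backend task workers.
    "celery": ["celery", "-A", "kuma", "worker"],
}

subprocess.run(COMMANDS[ROLE], check=True)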

Our node.js/KumaScript service is also configured via environment variables, and implements the fourth main service of MDN:

  • kumascript - The node.js service that renders wiki pages

Building the Docker images involves installing system software, installing the latest code, creating the static files, compiling translations, and preparing other run-time assets. AWS deployments are the relatively fast process of switching to newer Docker images. This is an improvement over SCL3, which required doing most of the work during deployment while developers watched.

An Introduction to Kubernetes

Kubernetes is a system for automating the deployment, scaling, and management of containerized applications. Kubernetes’s view of MDN looks like this:

AWS MDN from Kubernetes' Perspective

A big part of understanding Kubernetes is learning the vocabulary. Kubernetes Concepts is a good place to start. Here’s how some of these concepts are implemented for MDN:

  • Ten EC2 instances in AWS are configured as Nodes, and joined into a Kubernetes Cluster. Our “Portland Cluster” is in the us-west2 (Oregon) AWS region. Nine Nodes are available for application usage, and the master Node runs the Cluster.
  • The mdn-prod Namespace collects the resources that need to collaborate to make MDN work. The mdn-stage Namespace is also in the Portland Cluster, as well as other Mozilla projects.
  • A Service defines a service provided by an application at a TCP port. For example, a webserver provides an HTTP service on port 80.
    • The web service is connected to the outside world via an AWS Elastic Load Balancer (ELB), so it can be reached at https://developer.mozilla.org (the main site) and https://mdn.mozillademos.org (the untrusted resources).
    • The api and kumascript services are available inside the cluster, but not routed to the outside world.
    • celery doesn’t accept HTTP requests, and so it doesn’t get a Service.
  • The application that provides a service is defined by a Deployment, which declares what Docker image and tag will be used, how many replicas are desired, the CPU and memory budget, what disk volumes should be mounted, and what the environment configuration should be.
  • A Kubernetes Deployment is a higher-level object, implemented with a ReplicaSet, which then starts up several Pods to meet the demands. ReplicaSets are named after the Service plus a random number, such as web-61720, and the Pods are named after the ReplicaSets plus a random string, like web-61720-s7l.

ReplicaSets and Pods come into play when new software is rolled out. The Deployment creates a new ReplicaSet for the desired state, and creates new Pods to implement it, while it destroys the Pods in the old ReplicaSet. This rolling deployment ensures that the application is fully available while new code and configurations are deployed. If something is wrong with the new code that makes the application crash immediately, the deployment is cancelled. If it goes well, the old ReplicaSet is kept around, making it easier to rollback for subtler bugs.

Kubernetes Rolling Deployment

This deployment style puts the burden on the developer to ensure that the two versions can run at the same time. Caution is needed around database changes and some interface changes. In exchange, deployments are smooth and safe with no downtime. Most of the setup work is done when the Docker images are created, so deployments take about a minute from start to finish.

Kubernetes takes control of deploying the application and ensures it keeps running. It allocates Pods to Nodes (called Scheduling), based on the CPU and memory budget for the Pod, and the existing load on each Node. If a Pod terminates, due to an error or other cause, it will be restarted or recreated. If a Node fails, replacement Pods will be created on surviving Nodes.

The Kubernetes system allows several ways to scale the application. We used some for handling the unexpected load of the user attachments:

  • We went from 10 to 11 Nodes, to increase the total capacity of the Cluster.
  • We scaled the web Deployment from 6 to 20 Pods, to handle more simultaneous connections, including the slow file requests.
  • We scaled the celery Deployment from 6 to 10 Pods, to handle the load of populating the cold cache.
  • We adjusted the gunicorn worker threads from 4 to 8, to increase the simultaneous connections.
  • We rolled out new code to improve caching.

There are many more details, which you can explore by reading our configuration files in the infra repo. We use Jinja for our templates, which we find more readable than the Go templates used by many Kubernetes projects. We’ll continue to refine these as we adjust and improve our infrastructure. You can see our current tasks by following the MDN Post-migration project.

Categories: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 06 Nov 2017

Mozilla planet - Mon, 06/11/2017 - 20:00

Mozilla Weekly Project Meeting

The Monday Project Meeting

Categories: Mozilla-nl planet

Eitan Isaacson: Phoropter: A Vision Simulator

Mozilla planet - Mon, 06/11/2017 - 18:57

After porting Aaron’s NoCoffee extension to Firefox, I thought it would be neat to make a camera version of that. Something you can carry around with you, and take snapshots of websites, signs, or print material. You can then easily share the issues you see around you.

I’m calling it Phoropter, and you can see it here (best viewed with Chrome or Firefox on Android).

I could imagine this is what Pokémon Go is like if instead of creatures you collected mediocre designs.

Say you are looking at a London Underground map, and you notice the legend is completely color reliant. Looking through Phoropter you will see what the legend would look like to someone with protanopia, red-green color blindness.


You can then grab a snapshot with the camera icon and get a side-by-side photo that shows the difference in perception. You can now alert the transit authorities, or at least shame them on Twitter.

A side-by-side snapshot of the London Tube's legend with typical vision on the left and protonopia on the right

Once you get into it, it’s quite addicting. No design is above scrutiny.

A page from a workbook displayed side-by-side with typical and green-red blindness.

I started this project thinking I could pull it off with CSS filters on a video element, but it turns out that is way too slow. So I ended up using WebGL via glfx.js. I tried to make it as progressive as possible; you can add it to your home screen. I won't bore you with the details; check out the source when you have a chance.

There are still many more filters I can add later. In the meantime, open this in your mobile browser and,

Collect Them All!


Categories: Mozilla-nl planet

The Firefox Frontier: Firefox for Funsies

Mozilla planet - Mon, 06/11/2017 - 16:54

Extensions—special tools and features you can add to Firefox—can make your browser do very serious things like help protect your online privacy, block ads, help with large media downloads, re-organize …

The post Firefox for Funsies appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Async Pan/Zoom (APZ) lands in Firefox Quantum

Mozilla planet - Mon, 06/11/2017 - 16:25

Asynchronous pan and zoom (APZ) is landing in Firefox Quantum, which means jank-free, smooth scrolling for all! We talked about APZ in this earlier article, but here’s a recap of how it works:

Until now, scrolling was part of the main JavaScript thread. This meant that when JavaScript code was being executed, the user could not scroll the page. With APZ, scrolling is decoupled from the JavaScript thread, and happens on its own, leading to a smoother scrolling experience, especially in slower devices, like mobile phones. There are some caveats, like checkerboarding, when scrolling happens faster than the browser is able to render the page, but even this is a reasonable trade-off for a better experience overall, in which the browser stays responsive and does not seem to hang or freeze.

In Firefox to date, we’ve gotten APZ working for some input methods (trackpad and mouse wheel), but in Quantum all of them will be supported, including touch and keyboard.

What does this mean for developers?

  • The scroll event will have a short delay until it is triggered.
  • There are circumstances in which the browser has to disable APZ, but we can prevent some of them with our code.
The scroll event

Without APZ, while the JavaScript thread is blocked, scrolling doesn’t occur and thus the scroll event is not triggered. But now, with APZ, this scrolling happens regardless of whether or not the thread is blocked.

However, there is something we need to be aware of: now there will be a delay between the scrolling taking place and the scroll event being dispatched.

Usually this delay will be of a few frames only, but sometimes we can bypass it by using a pure CSS solution instead of JavaScript. Some common uses cases that rely on scrolling events are sticky banners, or parallax scrolling.

In the case of sticky banners, i.e. those which remain fixed in the same position regardless of scrolling, there is already a CSS property to achieve this, so there is no need to track user scrolling via JavaScript. Meet position: sticky!

.banner {
    position: -webkit-sticky;
    position: sticky;
    top: 0;
    left: 0;
    /* … */
}

Sticky banner demo - screenshot

Note: You can check out the live demo here.

Parallax scrolling is a popular effect in games, movies and animation. It creates the illusion of depth in a 2D environment by scrolling layers at different speeds. In the real world you can observe a similar effect when you are riding a vehicle: things that are closer to the road pass by really quickly (e.g. traffic signs, trees, etc.) whereas elements that are located further away move much more slowly (e.g. mountains, forests, etc.).

In this demo, parallax scrolling is achieved with only CSS. If you scroll, you will see how objects belong to different “layers” that move at different speeds: a spaceship, text, stars…

Parallax demo - screenshot

The trick to achieve parallax scrolling with CSS uses a combination of perspective and translateZ. When perspective has a value other than zero, translations over the Z axis will create the illusion of the element being closer to or further from the user. The further the element, the smaller it will appear and the slower it will move when the user scrolls. This is just what we need to achieve the parallax effect! To counter the “getting smaller” bit, we scale up the element.

.stars {
    transform: translateZ(-4px) scale(5);
    /* … */
}

It’s also important to note that perspective must be applied to a container that wraps all the parallax layers, and not to the layers themselves:

.parallax-wrapper {
    perspective: 1px;
    /* … */
}

You can read more about these techniques in the Scroll linked effects page on MDN, or this Pure CSS Parallax Websites article.

Preventing delays in scrolling

Sometimes, the browser needs to delay or disable APZ because it doesn’t know whether a user action to initiate scrolling will be cancelled (for instance, by calling preventDefault on a wheel or touch event), or whether the user focus switches to an element that should get the input instead of scrolling. In these cases, scrolling is delayed so the browser can ensure consistency.

Note: Events that can delay scrolling by calling preventDefault are: wheel, touchstart, touchmove –plus the deprecated DOMMouseScroll, mousewheel and mozMousePixelScroll.

For events, there are two potential solutions:

It is possible to attach the event listener to the element that really needs it, instead of listening globally with document or window. In this solution, APZ is delayed only when that element triggers the event, but does not affect the rest of the page.

Another potential solution is to set the passive flag to true in the event listener. By using this flag, we tell the browser that we will not call preventDefault in the handler of that event, so it knows that scrolling will happen and does not need to wait until the callback is executed.

container.addEventListener('touchstart', function () {
    // your handler here
}, { passive: true });

You can read more about this technique for improved scrolling performance on MDN.

Keep in mind that APZ is very conservative in regards to keyboard input, and will be disabled for this input method many times. For instance, a click or mousedown can potentially change the focus, and maybe the input via keyboard should get directed to the newly focused element (like a spacebar keystroke to a <textarea>), instead of it being a scroll action. Unfortunately, there’s no coding workaround for these cases.

Altogether, I think that the experience that APZ provides for users is worth the small inconveniences, like the checkerboarding or the event delays. If you have any questions about APZ, feel free to leave a comment here!

Categories: Mozilla-nl planet

Adblock Plus: Adblock Plus browser add-on gets comfy with Firefox 57

Mozilla planet - Mon, 06/11/2017 - 15:42

As Firefox fans and users know, Firefox will release version 57 later in November. The new version of the browser will only allow add-ons that are compatible with the WebExtensions API, so the Adblock Plus development team has already been busy getting our award-winning add-on ready.

Today, we’ve released Adblock Plus 3.0 for Firefox, our first Firefox release based on Firefox’s new WebExtensions rules.

Aside from all the things under the hood, you will immediately notice a few differences in the new ABP for Firefox. First and foremost, it will just look different; those who also use ABP for Chrome or Opera will notice some aesthetic similarities, for sure. Otherwise, you’ll probably pick out the following:

  • A new Adblock Plus icon: Our icon now works the same as it does for Chrome users. Specifically, this means that a counter will display the number of blocked requests, so users know more quickly what’s going on in the background and how many ads are being blocked. The more detailed statistics previously displayed in the icon’s tooltip are gone. The icon will open the bubble UI, same as in Chrome.
  • A similar-looking issue reporter: We added an issue reporter to this release, so this feature, which was part of the previous ABP for Firefox, wouldn’t go missing. Using it is also very similar to the old one. Right now we’re not able to collect as many issues as before, but we’ll improve that in subsequent releases.

Adblock Plus worked hard to release our 3.0 browser extension for Firefox early because all Firefox add-ons have to convert to the new WebExtensions API by the time Mozilla releases Firefox 57 later in the month. This is not even to mention those already running the development build of 57, on which the old extensions API does not work. Given that, there will be a few features that longtime Adblock Plus for Firefox users will miss in the new release. Rest assured that we’re working as hard as we can to bring as many features as possible to ABP using the new WebExtensions API for Firefox.

Cheers to all the Firefox development going on right now!

Categories: Mozilla-nl planet

Mozilla GFX: WebRender newsletter #9

Mozilla planet - Mon, 06/11/2017 - 14:09

Another late newsletter as I was side-tracked by conference travel, followed by a holiday and then a training. My apologies. Brace yourselves though, there has been a lot of good stuff landing in WebRender and Gecko during that time.

Enabling WebRender in Firefox Nightly

The set of prefs to enable WebRender changes every now and then. From now on, I will do recap of the current steps at the top of each newsletter.

In about:config:
– set “gfx.webrender.enabled” to true,
– set “gfx.webrender.blob-images” to true,
– if you are on Linux, set “layers.acceleration.force-enabled” to true.

Note that WebRender can only be enabled in Firefox Nightly.

Notable WebRender changes
  • Glenn mitigated the performance impact of the new box shadows somewhat, and followed it up with another improvement. This drops the GPU time on bugzilla from ~20ms to ~8ms. There’s still quite a bit of potential for improvement (#1894 among others), but this is probably the most noticeable performance change of the last few days.
  • Lee added communication of Gecko font settings (anti-aliasing, hinting, variations, etc.) to WebRender.
  • Hovering over tabs no longer causes the chrome to flash, thanks to some debugging work by Markus and a fix by Glenn.
  • Glenn fixed a visual glitch with rounded corners of invisible borders.
  • Lee and Markus fixed a ton of correctness issues related to text rendering: #1767, #1876, #1953, #1969.
  • Further improvements to serialization performance.
  • Glenn fixed the brightness and invert filters.
  • Kvark worked around a shader compiler bug.
  • subsevenx2001 fixed a very annoying border radius glitch.
Notable Gecko changes
  • Sotaro and Nical fixed a bug that caused video frames to use all of the memory.
  • Kats rewrote the way Gecko assembles clipping and scrolling information before it gets sent to WebRender, in a way that is more direct and faster (bugs 1409446 and 1405359).
  • Various people increased the number of display items that utilize WebRender for rendering:
  • Ethan fixed video centering.
  • Jeff shared fonts between blob images and WebRender. This means we can reuse fonts across different blob images. Not sharing fonts was one of the most noticeable performance problems with blob images. Sharing fonts will also mean that we can break blob images into smaller tiles which will rasterize in parallel.
  • Sotaro fixed various glitches and crashes on Windows (bugs 1410304, 1403439 and 1409594).
  • Sotaro fixed canvases that stopped updating (bugs 1401609 and 1411395).
  • Sotaro increased parallelism between slow frames, by making the different stages of frame rendering overlap more.
  • Ethan improved the memory usage for SVG significantly, by grouping all elements of an SVG image into one layer.
  • Markus enabled vibrancy on macOS and fixed text rendering on top of vibrancy. The text in the tab bar with the default theme is now readable again.
  • Andrew implemented decoding raster/vector images directly into shared memory, avoiding a copy into WebRender. It may be enabled by turning on the image.mem.shared pref.
  • Gankro fixed the way text decoration is drawn across element boundaries.

Also, thanks a lot to Markus for helping gather info for the newsletter!


Categorieën: Mozilla-nl planet

Nick Cameron: When will the RLS be released?

Mozilla planet - fr, 03/11/2017 - 21:46

tl;dr: the RLS is currently in 'preview' status. The RLS preview is usable with stable Rust from version 1.21. We hope to have an official (1.0, non-preview) release of the RLS in early 2018.

The Rust Language Server (RLS) is the smarts behind support for Rust in Visual Studio Code, Atom, and many other editors. It is a key component of the Rust IDE story, and we expect it to be used by a large chunk of the Rust community. The RLS is still a work in progress, but its stability and quality are improving and we are working towards a 1.0 release. In this blog post, I'm going to go over how the RLS is being versioned and distributed.

Since the RLS integrates closely with the compiler, it is 'inside the unstable boundary'. To support developers using the stable toolchain, therefore, the RLS needs to be part of the Rust distribution. For most users, that means the RLS will be installed using Rustup. The RLS will not be installed by default (since many users won't use it); editors which use the RLS should install it automatically (our Visual Studio Code extension does this, for example). If you're using a well-supported editor and Rustup, then most of the issues discussed here will be handled for you automatically.

The RLS preview

The RLS is currently in preview status, which means it is pre-1.0 in terms of stability, quality, and feature-completeness. We hope to announce a 1.0 release at the end of 2017 or early in 2018.

The naming of the RLS Rustup component reflects this status: it is currently called rls-preview. When we make our first (1.0) release it will be renamed to rls. Rustup and your editor will take care of renaming, and users of rls-preview should be moved to rls without any intervention.

Rust channels

The RLS preview can (from Rust version 1.21) be installed with the Rust nightly, beta, and stable toolchains.

Future changes to the RLS will 'ride the trains' from nightly to beta to stable Rust releases. When we are ready for the RLS to be released, the change from rls-preview to rls will also follow this model.

For nightly RLS users, there is a slight hiccup. Because the RLS is so closely linked with the compiler, it is sometimes impossible to make a change to the compiler without breaking the RLS. When this happens there might be a day or a few days where the nightly channel is missing the rls-preview component.

If you want to keep using the RLS you will need to either stick to a nightly release which includes the RLS (either by avoiding rustup update or by updating to a specific nightly which includes the RLS), or use the stable or beta channels. We realise this is sub-optimal for nightly users and we plan to mitigate this somewhat with UI improvements to rustup.

Version numbers

Since the RLS is being distributed with Rust, we want to link its version numbers to the Rust version numbers in a straightforward way. However, we also want to maintain semver compatibility.

While the RLS is in preview, it will use a 0.x.y versioning scheme, where 'x' corresponds to the Rust version. So the first version of the RLS distributed with Rust 1.21.0 will be 0.121.0. Once we have an official release, we will use a 1.x.y scheme where '1.x' corresponds to the Rust version, so (assuming we are out of preview) the first version of the RLS distributed with Rust 1.30.0 will be 1.30.0.

In both cases 'y' is incremented with each RLS version available on beta or stable. Different versions of the RLS on the nightly channel will have the same version number, but can be identified by the build date. (We will release multiple versions of the RLS on nightly, a small number of versions on beta, and hopefully never on stable.) For example, we might have version 1.30.2, which would be the third version of the RLS distributed with Rust 1.30 (stable or beta).
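
To make the mapping concrete, here is a throwaway sketch (purely illustrative; the function and its name are mine and not part of any RLS tooling, it just encodes the scheme described above):

    // Illustrative only: computes the RLS version implied by a Rust toolchain
    // version under the scheme above. `patch` counts RLS releases on that
    // channel and is not derived from the Rust version.
    function rlsVersion(rustVersion, preview, patch = 0) {
      const [major, minor] = rustVersion.split(".");      // "1.21.0" -> ["1", "21", "0"]
      return preview
        ? `0.${major}${minor}.${patch}`                   // Rust 1.21.0 -> RLS 0.121.0 (preview)
        : `${major}.${minor}.${patch}`;                   // Rust 1.30.0 -> RLS 1.30.0 (post-1.0)
    }

    console.log(rlsVersion("1.21.0", true));   // "0.121.0"
    console.log(rlsVersion("1.30.0", false));  // "1.30.0"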

Other tools

We have no firm plans for other tools, but if this pattern works for the RLS, we expect to follow it for Rustfmt and perhaps other tools.

When will the RLS be ready for its 1.0 release?

We're hoping late 2017 or early 2018. Our criteria for a release are about stability for a majority of users. I.e., we are not blocking a release on any new features, but the features that exist today should work without bugs for nearly all users, nearly all of the time.

One key area for many users is support for Cargo workspaces; this is mostly implemented, thanks to Xanewok (see this blog post for a summary, and look for some earlier blog posts for more details). The remaining work is ensuring that the implementation works reliably across different project layouts, and then ironing out any bugs.

We also want to improve our coverage of Rust code - currently there are a few places where we lack sufficient detail in our data (unions are an obvious example). We won't be perfect here, but we plan to make some big improvements.

Exactly what will be in the 1.0 release is tracked in this milestone and this issue.

Beyond 1.0

Of course development doesn't stop with the first official release. There is plenty to work on post-1.0. There are many ways to incrementally improve the IDE experience - squash bugs, improve performance, and improve usability. There are also plenty of new features to implement - I'm particularly excited about adding more refactoring support and integrated debugging.

One of the biggest 'under the covers' changes will be working with incremental compilation. Currently, incremental compilation only covers code generation, which the RLS skips completely. Once type checking can reliably be done incrementally, we can take advantage of that in the RLS to give much quicker responses. We should then be able to use the compiler for code completion, to give a better experience, and for new features such as showing the traits which are implemented for a type.

Categorieën: Mozilla-nl planet

Armen Zambrano: Thank you, Mozilla, for caring for me

Mozilla planet - fr, 03/11/2017 - 21:15

Background story: I’ve been working with Mozilla full-time since 2009 (contributor in 2007 — intern in 2008). I’ve been working with the release engineering team, the automation team (A-team) and now within the Product Integrity organization. In all these years I’ve been blessed with great managers, smart and helpful co-workers, and enthusiastic support to explore career opportunities. It is an environment that has helped me flourish as a software engineer.

I will go straight to some of the benefits that I’ve enjoyed this year.

Parental leave

Three months at 100% of my salary. I did not earn bonus payouts during that time; however, it was worth it for the time I spent with my firstborn. We bonded very much, I learned how to take care of my family while my wife worked, and I can proudly say that he’s a “daddy’s boy” :) (Not that I spoil him!).

Working from home 100% of the time

My favourite benefit. Period.

It really helps me as an employee, as I don’t enjoy commuting and I tend to talk a lot when I’m in the office. My family is very respectful of my work hours and I’m able to have deep-thought sessions in the comfort of my own home.

This is not a benefit that a lot of companies give, especially the bigger ones which expect you to relocate and come often to the office. I chuckle when I hear a company offer to let its employees work from home only a couple of days per week.

Wellness benefits

I appreciate that Mozilla allocates some of its budget to pay for anything related to employee wellness (mental, spiritual & physical). Knowing that if I don’t use it I will lose it causes me to think about ways to apply the money to help me stay in shape.

Learning support/budget

This year, after a re-org and many years of doing the same work, I found myself in need of a new adventure — I get bored if I don’t feel as though I’m learning. With my manager’s support (thanks jmaher!), I embarked on a journey to become a front-end developer. Mozilla also supported me by paying for me to complete a React Nanodegree as part of the company’s learning budget.

To my great surprise, React has become rather popular inside Mozilla, and there is great need for front-end work within my org. It was also a nice surprise to see that switching from Python to JavaScript was not as difficult as I thought it would be.

Thank you, Mozilla, for your continued support!

Categorieën: Mozilla-nl planet

Air Mozilla: Brown Bag: Mozilla Support's Community on Firefox 57

Mozilla planet - fr, 03/11/2017 - 17:15

Mozilla Support's Community on Firefox 57: Check out what has been happening in the Support community, join SUMO for some reminiscing, questions, and the amazing efforts being done post migration attempts...

Categorieën: Mozilla-nl planet

Mozilla Localization (L10N): In Memoriam: Mamadou Niang, Fulah localizer

Mozilla planet - fr, 03/11/2017 - 15:34

Guest post from Mozilla Fulah community leader, Ibrahima Saar. UPDATE: Added pictures and a PayPal link for donations to Mamadou’s family.

It is with deep pain that we announce that our friend and teammate Mamadou Niang died in an accident while traveling from the rural town of Matam to Dakar to attend a workshop at his organization’s headquarters. We had just had a meeting the day before to sprint to our 31st October deadline for Firefox localization. His last words, the morning before he traveled, were: “Don’t worry, I will be available to work on Pontoon while away.” Mamadou Niang worked as a Fulah specialist and rural development project coordinator for the organization Tostan (humanitarian and literacy work). He was the co-organizer of our 2014 workshop in Dakar while also very busy moving around villages on his motorbike. Niang was also the person who took care of many families, as he was the only one earning a salary in a large family. He was my friend and mentee, and I am so sad.

After our online meeting, he told me that he would leave for the town of Thiès the next morning. Thiès is about 500 kilometers from where he worked. Mamadou himself was from a small village called Aram, on the Senegal River in the far north of the country. Aram is a well-known village because a very famous Fulah singer, who invented a new musical genre called “Pekaan”, comes from there. It is a fishermen’s village where fishing is everyone’s traditional occupation, and that occupation is culturally very important because it involves not only fishing itself but all the practices that go with it.

That community of villages across the north of Senegal and the south of Mauritania is well known for its knowledge of the water and the spirits of the river. If some of you remember, you might have heard that many Firefox terms in Fulah are derived from the practice of fishing. Terms like “aspect ratio” and “Time Out” come directly from that community’s fishing tools and practices. Mamadou and I are both from that community and are both specialists in the language. That, plus the fact that he was working in the field in rural areas, made Mamadou Niang a valuable asset for the Fulah localization team.

On Wednesday, October 25th, Mamadou was on a trip to his organization’s headquarters for a workshop. He frequently traveled there on public transport to attend meetings, submit reports, and the like. Last year, he posted a photo on Facebook warning people that the trip from Matam to Dakar was extremely difficult and dangerous and that they should be very careful. He also called on the government to repair the roads and make them safer. Sadly, he died in a public transport accident in the same type of vehicle he had warned about on Facebook.

When he left in the morning, I told him that we would chat after he arrived. He also assured me that he would be available to work on Pontoon while away. The day before, we had struggled to get him to migrate his account from Pootle to Pontoon, since I could not see him on the team list to change his permissions. He had an extremely slow connection and we only succeeded late in the afternoon. In the end, I suggested he translate one string so that I could see him on the list of contributors, which he did, and I finally added him to the Fulah team.

I had been mentoring Mamadou Niang for a few years, and I was so happy to see him contribute so much, especially on Firefox OS back in 2014. He was also very active spreading the word about Firefox, because he had first-hand contact with people learning local languages as part of his work for Tostan. In 2014 he was very active helping me recruit people to participate in the workshop we organized in Dakar, the capital of Senegal, from the 3rd to the 6th of March. He was very valuable to me because most of the people who ended up participating in the workshop did not know me, and there was no other way I could get in touch with them; most of them work in rural areas where literacy work is needed most. That workshop was the first time I met him in person, along with many of the other participants. Since then, we had become very good friends, and we chatted on Facebook or spoke on the phone virtually every day.

Although he was very busy traveling across the countryside on his motorbike, he helped a lot with translation work on Pootle. After we migrated translations to Pontoon, this was his first time coming to the new platform to set up his account and start working. Unfortunately, that lasted less than 24 hours.

We will miss Mamadou very much because he was so kind, so helpful to everyone, and always joking. He was also very active in his village, helping with human development and literacy projects. He was a husband and a young father who took care of many families, and he leaves his family in sorrow and concerned for the future. May his soul rest in peace.

A fund is being raised for Mamadou Niang’s family. If you are interested in contributing, please visit PayPal.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Keeping Tabs on the Tab API

Mozilla planet - fr, 03/11/2017 - 14:00

Tabs are central to the modern browsing experience, so much so that it is hard to imagine that we once browsed the Internet without them, one single window at a time. Now, it’s common to have several tabs open at once — perhaps one playing music, several with online articles you want to read later (pro tip: check out Pocket for this use case), and of course, a few tabs with whatever you are supposed to be working on at the moment.

The Past

From the start, Firefox extensions that dealt with tabs were a natural fit and have proven to be quite popular. The good news is that there are already hundreds of extensions written with the WebExtensions API to help you configure, organize and otherwise manage your browser tabs. You can arrange your tabs as tiles or in a tree, put them on the side of the browser, or control where new tabs open, just to name a few.

Unfortunately, not every feature that was available in the past can be offered using the WebExtensions API. Several of the most popular tab extensions under the legacy add-on system used the unrestricted nature of that environment to offer powerful and unique features. Along with that power, however, came security risks. The WebExtensions API seeks to temper those risks by providing limited access to browser internals.

The Future

We’re working to support additional tab features, but how we achieve this goal will be shaped by our dedication to Web standards, the speed and stability of Firefox, our product vision, and especially our commitment to security and privacy and the principles of the Manifesto. It’s clear that some previously available tab features will not be available under the WebExtensions API; they just can’t be accommodated without potentially compromising user security or privacy.

However, we believe many other features can be added. Providing as much tab-related functionality as we can within these constraints is a high priority. Starting with tab hiding, you can expect to see additional functions added to the WebExtensions API over the next several releases that will allow developers to create rich, compelling extensions to style, manage and organize browser tabs.
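
As a rough illustration of where this is heading, here is a minimal background-script sketch (illustrative only: browser.tabs.query() and the "tabs" permission exist today, while the browser.tabs.hide() call is shown as it is expected to work once tab hiding lands, so treat it as an assumption):

    // background.js (sketch); the "tabs" permission is needed to read tab titles.
    // List the tabs in the current window, most recently used first.
    // (lastAccessed is a Firefox-specific tab property.)
    browser.tabs.query({ currentWindow: true }).then((tabs) => {
      tabs
        .sort((a, b) => b.lastAccessed - a.lastAccessed)
        .forEach((tab) => console.log(tab.index, tab.title));

      // Once tab hiding is available, stale tabs could be tucked away here,
      // e.g. browser.tabs.hide(staleTabIds); -- hypothetical until the API ships.
    });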

All of this, of course, will be part of our push for open Web standards. However, while that process proceeds at its own pace, don’t expect to see us stand still. Using feedback from developers, we will continue to innovate within the WebExtensions API, providing new ways to surprise and delight users. As always, thank you for using Firefox and helping ensure that individuals have the ability to shape the Internet and their own experiences on it.

The post Keeping Tabs on the Tab API appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet
