Mozilla Nederland: the Dutch Mozilla community

The Mozilla Blog: Latest Mozilla VPN features keep your data safe

Mozilla planet - Tue, 30/03/2021 - 17:49

It’s been less than a year since we launched Mozilla VPN, our fast and easy-to-use Virtual Private Network service, brought to you by a trusted name in online consumer security and privacy. Since then we have added Mozilla VPN to Mac and Linux, joining our existing offerings on Windows, Android and iOS. As restrictions slowly ease and people become more comfortable leaving their homes, Mozilla VPN is one way to keep your information safe when you go online: it provides encryption and device-level protection of your connection and information whenever you are on the web.

Today, we’re launching two new features to give you an added layer of protection with our trusted Mozilla VPN service. Mozilla has a reputation for building products that help you keep your information safe. These new features will help users do the following:

For those who watch out for unsecured networks

If you’re someone who keeps Mozilla VPN off and prefers to turn it on manually, this feature will help you out. We’ll notify you when you’ve joined a network that is not password protected or uses weak encryption. Just click the notification to turn Mozilla VPN on, giving you an added layer of protection and ensuring every conversation you have over the network is encrypted. This feature is available on Windows, Linux, Mac, Android and iOS.

For those at home who want to keep all their devices connected

Occasionally, you might need to print out forms for an upcoming doctor visit or your kid’s worksheets to keep them busy. Now we’ve added Local Area Network Access, so your devices can talk to each other without you having to turn off your VPN. Just make sure the box is checked in Network Settings when you are on your home network. This feature is available on Windows, Linux, Mac and Android.

Why use our trusted Mozilla VPN service?

Since our launch last year, thousands of people have signed up to use our trusted Mozilla VPN service. Mozilla has built a reputation for products that respect your privacy and keep your information safe. With Mozilla VPN you can be sure your activity is encrypted across all applications and websites, whatever device you are on.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand. We plan to expand to other countries this spring.

We know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business. Check out the Mozilla VPN and subscribe today from our website.

 

The post Latest Mozilla VPN features keep your data safe appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Andrew Halberstadt: Advanced Mach Try

Mozilla planet - Tue, 30/03/2021 - 15:30

Following up on last week’s post on some mach try fundamentals, I figured it would be worth posting some actual concrete tips and tricks. So without further ado, here are some things you can do with ./mach try that you may not have known about, in rapid-fire format.


Daniel Stenberg: HOWTO backdoor curl

Mozilla planet - Tue, 30/03/2021 - 11:24

I’ve previously blogged about the possible backdoor threat to curl. This post repeats some of that, but it is also a refreshed take on the subject several years later, in the shadow of the recent PHP backdoor commits of March 28, 2021. Nowadays, “supply chain attacks” are a hot topic.

Since you didn’t read that PHP link: an unknown project outsider managed to push a commit into the PHP master source code repository with a change (made to look as if done by two project regulars) that obviously inserted a backdoor that could execute custom code when a client tickled a modified server the right way.

[Image: partial screenshot of a diff of the offending commit in question]

The commits were apparently detected very quickly. I haven’t seen any proper analysis of exactly how they were performed, but to me that’s not the ultimate question. I’d rather talk and think about this threat from a curl perspective.

PHP is extremely widely used and so is curl, but where PHP is (mostly) server-side running code, curl is client-side.

How to get malicious code into curl

I’d like to think about this problem from an attacker’s point of view. There are but two things an attacker needs to do to get a backdoor in, plus a third adjacent step that needs to happen:

  1. Make a backdoor change that is hard to detect and appears innocent to a casual observer, while actually still being able to do its “job”
  2. Get that change landed in the master source code repository branch
  3. The code needs to be included in a curl release that is used by the victim/target

These are not simple steps. The third step, getting into a release, is not strictly always necessary because there are sometimes people and organizations that run code off the bleeding edge master repository (against our advice I should add).

Writing the backdoor code

As we saw, the PHP attack failed rather miserably at step 1, making the attack code look innocuous, although we can suspect that maybe that was done on purpose. In 2010 there was a lengthy discussion about an alleged backdoor in OpenBSD’s IPSEC stack that had presumably been in place for years, and even though that particular backdoor was never proven to be real, the idea that it can be done certainly is.

Every time we fix a security problem in curl there’s that latent nagging question in the back of our collective minds: was this flaw placed here deliberately? Historically, we’ve not seen any such attacks against curl. I can say this with a high degree of certainty since almost all of the existing security problems detected and reported in curl were reported by me…!

The best attack code would probably do something minor that would have a huge impact in a special context for which the attacker has planned to use it. I mean minor as in a NULL-pointer dereference or a use-after-free or something. This is because doing a full-fledged generic stack-based buffer overflow is much harder to land undetected. Maybe going with a single-byte overwrite outside of a malloc could be the way, like it was back in 2016 when such a flaw in c-ares was used as the first step in a multi-flaw exploit sequence to execute remote code as root on ChromeOS…

Ideally, the commit should also include an actual bug-fix that would be the public facing motivation for it.

Get that code landed in the repo

Okay let’s imagine that you have produced code that actually is a useful bug-fix or feature addition but with an added evil twist, and you want that landed in curl. I can imagine several different theoretical ways to do it:

  1. A normal pull-request and land using the normal means
  2. Tricking or forcing a user with push rights to circumvent the review process
  3. Use a weakness somewhere and land the code directly without involving existing curl team members

The Pull Request method

I’ve never seen this attempted. Submit the pull-request to the project through the usual means and argue that the commit fixes a bug – which could be true.

This means the backdoor patch has to go through all testing and reviews with flying colors to get merged. I’m not saying this is impossible, but I will claim that it is very hard, and also a very big gamble for an attacker. Presumably it is a fairly big job just to get the code for this attack to work, so maybe a less risky way to land the code is preferable? But then which way is likely to have the most reliable outcome?

The tricking a user method

Social engineering is very powerful. I can’t claim that our team is immune to that so maybe there’s a way an outsider could sneak in behind our imaginary personal walls and make us take a shortcut for a made up reason that then would circumvent the project’s review process.

We can even imagine more forceful “convincing”, such as direct threats against persons or their families: “push this code or else…”. This of course cannot be protected against with 2FA, better passwords or the like. Forcing a user to do it is also likely to eventually become known, and then the commit would immediately be reverted.

Tricking a user doesn’t make the commit avoid testing and scrutinizing after the fact. When the code has landed, it will be scanned and tested in a hundred CI jobs that include a handful of static code analyzers and memory/address sanitizers.
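As a tiny illustration of the kind of automated scrutiny a landed commit faces (the compiler invocation, file name and flaw below are all invented for illustration; real CI adds dedicated static analyzers and address/memory sanitizers on top of this), even ordinary compiler warnings treated as errors will reject some classes of sneaky code:

```shell
# Hypothetical sketch: a "stealth" assignment-instead-of-comparison bug
# that a -Wall -Werror build step rejects outright.
set -e
work=$(mktemp -d)
cat > "$work/patch.c" <<'EOF'
#include <stdio.h>
int check(int uid) {
    if (uid = 0)          /* backdoor-ish: assigns instead of compares */
        return 1;         /* would grant access to everyone */
    return 0;
}
int main(void) { printf("%d\n", check(1000)); return 0; }
EOF
if cc -Wall -Werror -o "$work/patch" "$work/patch.c" 2>/dev/null; then
    echo "build passed"
else
    echo "build rejected by compiler warnings"
fi
```

Sanitizer jobs catch a different class of change again: memory errors that compile cleanly but misbehave at runtime.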

Tricking a user could land the code, but it can’t make it stick unless the code is written as the perfect stealth change. It really needs to be that good attack code to work out. Additionally: circumventing the regular pull-request + review procedure is unusual, so I believe it is likely that such a commit will be reviewed and commented on after the fact, with questions about it and likely follow-up actions.

The exploiting a weakness method

A weakness in this context could be a security problem in the hosting software, or even a rogue admin at the company that hosts the main source code git repo. Something that allows code to get pushed into the repository without it being the work of one of the existing team members. This seems to be the method used in the PHP attack.

This is a hard method as well. Not only does it shortcut reviews, it is also done in the name of someone on the team who knows for sure that they didn’t do the commit, and again, the commit will be tested and poked at anyway.

For all of us who sign our git commits, detecting such a forged commit is easy and quickly done. In the curl project we don’t have mandatory signed commits so the lack of a signature won’t actually block it. And who knows, a weakness somewhere could even possibly find a way to bypass such a requirement.
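In practice that check is a one-liner. Here is a self-contained sketch (the demo repo and commit are invented; in a real audit you would run `git verify-commit` inside the project's clone against the suspicious commit hash), assuming git is available:

```shell
# Hypothetical sketch: checking a commit's GPG signature after the fact.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email dev@example.com
git config user.name "Demo Dev"
echo 'fix' > file.txt
git add file.txt
git commit -q -m "innocent-looking fix"   # unsigned, like a forged commit

# verify-commit exits non-zero when there is no valid signature
if git verify-commit HEAD 2>/dev/null; then
    echo "signature OK"
else
    echo "no valid signature on this commit"
fi
```

Without mandatory signed commits, the missing signature doesn't block the push; it only helps a reviewer who thinks to look.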

The skip-git-altogether methods

As I’ve described above, it is really hard even for a skilled developer to write a backdoor and have that landed in the curl git repository and stick there for longer than just a very brief period.

If the attacker instead can just sneak the code directly into a release archive then it won’t appear in git, it won’t get tested and it won’t get easily noticed by team members!

curl release tarballs are made by me, locally on my machine. After I’ve built the tarballs I sign them with my GPG key and upload them to the curl.se origin server for the world to download. (Web users don’t actually hit my server when downloading curl. The user visible web site and downloads are hosted by Fastly servers.)

An attacker who could infect my release scripts (which btw are also in the git repository) or do something to my machine could get something into the tarball and then have me sign it, creating the “perfect backdoor”: one that isn’t detectable in git and requires someone to diff the release against git in order to find, which usually isn’t done by anyone that I know of.
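That diff, which nobody routinely does, can be sketched like this (the directories and file names below are invented; with a real release you would compare the unpacked tarball against `git archive` output for the corresponding tag, ignoring files the release scripts legitimately generate):

```shell
# Hypothetical sketch: spot files that exist in a release archive but
# not in the git export it was supposedly built from.
set -e
work=$(mktemp -d)
mkdir "$work/git-export" "$work/release"

echo 'int main(void){return 0;}' > "$work/git-export/main.c"
cp "$work/git-export/main.c" "$work/release/"
# The attacker's addition lives only in the release archive:
echo '/* evil */' > "$work/release/backdoor.c"

# diff -rq lists anything that differs between the two trees
diff -rq "$work/git-export" "$work/release" || \
    echo "release does not match the git export"
```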

But such an attacker would not only have to breach my development machine; such an infection of the release scripts would be awfully hard to pull off. Not impossible, of course. I do my best to maintain proper login hygiene, updated operating systems and the use of strong passwords and encrypted communications everywhere. But I’m also human, so I’m bound to make occasional mistakes.

Another way could be for the attacker to breach the origin download server and replace one of the tarballs there with an infected version, hoping that people skip verifying the signature when they download it, or don’t otherwise notice that the tarball has been modified. I do my best at maintaining server security to keep that risk to a minimum. Most people download the latest release, so it’s enough if a subset checks the signature for the attack to be revealed sooner rather than later.
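Verifying the download is the user's countermeasure here. The real check for a curl release is its detached GPG signature (along the lines of `gpg --verify curl-X.Y.Z.tar.gz.asc curl-X.Y.Z.tar.gz` against the maintainer's public key). As a simplified, self-contained stand-in, the same tamper-detection idea with a checksum file looks like this (file names invented, GNU coreutils assumed; note a bare checksum only helps if it is fetched from somewhere the attacker doesn't control, which is why signatures are the real answer):

```shell
# Hypothetical sketch: detect a swapped tarball via a published checksum.
set -e
work=$(mktemp -d)
cd "$work"
echo 'pretend tarball contents' > curl-7.76.0.tar.gz
sha256sum curl-7.76.0.tar.gz > SHA256SUMS      # published by the project

sha256sum -c SHA256SUMS                        # passes: "...: OK"

echo 'injected bytes' >> curl-7.76.0.tar.gz    # attacker swaps the file
sha256sum -c SHA256SUMS || echo "tarball was modified after publication"
```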

The further-down-the-chain method

As an attacker, get into the supply chain somewhere else: find a weaker link in the chain between the curl release tarball and the target system for your attack. If you can trick or social engineer someone else along the way into using your evil curl tarball instead of the actual upstream tarball, that might be easier and give you more bang for your buck. Perhaps you target your particular distribution’s or operating system’s release engineers and pretend to be from the curl project, make up a story and send over a tarball to help them out…

Fake a security advisory and send a bad patch directly to someone you know builds their own curl/libcurl binaries?

Better ways?

If you can think of other or better ways to get malicious code into a victim’s machine via curl, let me know! If you find a security problem, we will reward you for it!

Similarly, if you can think of ways or practices on how we can improve the project to further increase our security I’ll be very interested. It is an ever-moving process.

Dependencies

Added after the initial post. Lots of people have mentioned that curl can be built with many dependencies, and maybe one of those would be an easier or better target. Maybe so, but they are products of their own individual projects, and an attack on those projects/products would not be an attack on curl or a backdoor in curl, by my way of looking at it.

In the curl project we ship the source code for curl and libcurl, and the users, the ones who build the binaries from that source code, get the dependencies too.

Credits

Image by SeppH from Pixabay


Support.Mozilla.Org: Introducing Daryl Alexsy

Mozilla planet - Tue, 30/03/2021 - 11:19

Hey everybody,

Please join us in welcoming Daryl Alexsy to the Customer Experience team! Daryl is a Senior User Experience Designer who will be helping SUMO as well as the MDN team. Please say hi to Daryl!

 

Here’s a short introduction from her:

Hi everyone! I’m Daryl, and I’ll be joining the SUMO team as a UX designer. I am looking forward to working together with you all to create a better experience for both readers and contributors of the platform, so please don’t hesitate to reach out with any observations or suggestions for how we can make that happen.

 

Welcome Daryl!


Cameron Kaiser: The end of TenFourFox and what I've learned from it

Mozilla planet - Mon, 29/03/2021 - 19:40

Now that I have your attention.

I've been mulling TenFourFox's future for a while now in light of certain feature needs that are far bigger than a single primary developer can reasonably embark upon, and recent unexpected changes to my employment, plus other demands on my time, have unfortunately accelerated this decision.

TenFourFox FPR32 will be the last official feature parity release of TenFourFox. (A beta will come out this week, stay tuned.) However, there are still many users of TenFourFox — the update server reports about 2,000 daily checkins on average — and while nothing has ever been owed or promised I also appreciate that many people depend on it, so there will be a formal transition period. After FPR32 is released TenFourFox will drop to security parity and the TenFourFox site will become a placeholder. Security parity means that the browser will only receive security updates plus certain critical fixes (as I define them, such as crash wallpaper, basic adblock and the font blacklist). I will guarantee security and stability patches through and including Firefox 93 (scheduled for September 7) to the best of my ability, which is also the point at which Firefox 78ESR will stop support, and I will continue to produce, generate and announce builds of TenFourFox with those security updates on the regular release schedule with chemspills as required. There will be no planned beta releases after FPR32 but Tenderapp will remain available to triage bugfixes for new changes only.

After that date, for my own use I will still make security patches backported from the new Firefox 91ESR publicly available on Github and possibly add any new features I personally need, but I won't promise these on any particular timeline, I won't make or release any builds for people to download, I won't guarantee any specific feature or fix, I won't guarantee timeliness or functionality, and there will be no more user support of any kind including on Tenderapp. I'll call this "hobby mode," because the browser will be a hobby I purely maintain for myself, with no concessions, no version tags (rolling release only), no beta test period and no regular schedule. You can still use it, but if you want to do so, you will be responsible for building the browser yourself and this gives you a few months to learn how. Also, effective immediately, there will be no further updates to TenFourFoxBox, the QuickTime Enabler, the MP4 Enabler or the TenFourFox Downloader, though you will still be able to download them.

Unless you have a patch or pull request or it's something I care about, if you open an issue on Github it will be immediately closed. Similarly, any currently open issues I don't intend to address will be wound down over the next few weeks. However, this blog and the Github wiki will still remain available indefinitely, including all the articles, and all downloads on SourceForge will remain accessible as well. I'll still post here as updates are available along with my usual occasional topics of relevance to Power Mac users.

Classilla, for its part, is entering "hobby mode" today and I will do no further official public work on it. However, I am releasing the work I've already done on 9.3.4, such as it is, plus support for using Crypto Ancienne for self-hosted TLS 1.2 if you are a Power MachTen user (or running it in Classic or under Mac OS in Rhapsody). You can read more about that on Old VCR, my companion retrocomputing blog.

I'm proud of what we've accomplished. While TenFourFox was first and foremost a browser for me personally, it obviously benefited others. It kept computers usable that today are over fifteen years old, many of them even older. In periods of a down economy and a global pandemic this helped people make ends meet and keep using what they had an investment in. One of my favourite reports was from a missionary in Myanmar using a beat-up G4 mini over a dialup modem; I hope he is safe during the present unrest.

I'm also proud of the fair number of TenFourFox features that were successfully backported or completely new. TenFourFox was the first and still one of the few browsers on PowerPC Mac OS X to support TLS 1.3 (or even 1.2), and we are the only such browser with a JavaScript JIT. We also finished a couple features long planned for mainline Firefox but that never made it, such as our AppleScript (and AppleScript-JavaScript bridge) support. Our implementation even lets you manipulate webpages that may not work properly to function usefully. Over the decade TenFourFox has existed we also implemented our own native date and time controls, basic ad block, advanced Reader View (including sticky and automatic features), additional media support (MP3, MP4 and WebP), additional features and syntax to JavaScript, and AltiVec acceleration in whatever various parts of the browser we could. There are also innumerable backported bug fixes throughout major portions of the browser which repair long-standing issues. All of this kept Firefox 45, our optimal platform base, useful for far longer than the sell-by date and made it an important upstream source for other legacy browsers (including, incredibly, OS/2). You can read about the technical differences in more detail.

Many people have contributed to TenFourFox and to the work above, and they're credited in the About window. Some, like Chris T, Ken Cunningham and OlgaTPark, still contribute. I've appreciated everyone's work on the source code, the localizations and their service in the user support forums. They've made the job a little easier. There are not enough thank yous for these good people.

When September rolls around, if you don't want to build the browser yourself it is possible some downstream builders like Olga may continue backports. I don't speak for them and I can't make promises on their behalf. Olga's builds run on 10.4, 10.5 and 10.6. If you choose to make your own builds and release them to users, please use a different name for your builds than TenFourFox so that I don't get bothered for support for your work (Olga has a particular arrangement with me but I don't intend to repeat it for others).

You might also consider another browser. On PowerPC 10.5 your best alternative is Leopard WebKit. It has not received recent updates but many of you use it already. I don't maintain or work on LWK, but there is some TenFourFox code in it, and Tobias has contributed to TenFourFox as well. If you don't want to use Safari specifically, LWK can be relinked against most WebKit shells including Stainless and Roccat.

If you are using TenFourFox on 10.6, you could try using Firefox Legacy, which is based on Firefox 52. It hasn't been updated in about a year but it does have a more recent platform base than official Firefox for 10.6 or TenFourFox.

However, if you are using TenFourFox on 10.4 (PowerPC or Intel), I don't have any alternative suggestions for you. I am not aware of any other vaguely modern browser that supports Tiger. Although some users have tried TenFourKit, it does not support TLS 1.1 or 1.2 (only Opera 10.63 does), and OmniWeb, Camino, Firefox 3.6 and the briefly available Tor Browser for PowerPC are now too old to recommend for any reasonable current use.

So, that's the how. Here's the why and what. I have a fairly firm rule that I don't maintain software I don't personally use. The reason for that is mostly time, since I don't have enough spare cycles to work on stuff that doesn't benefit me personally, but it's also quality: I can't maintain a quality product if I don't dogfood it myself. And my G5 has not been my daily driver for a good couple years; my daily driver is the Raptor Talos II. I do use the G5 but for certain specific purposes and not on a regular daily basis.

Additionally, I'm tired. It's long evenings coding to begin with, but actual development time is only the start of it. It's also tying up the G5 for hours to chug out the four architecture builds and debug (at least) twice a release cycle, replying to bug reports, scanning Bugzilla, reading the changelogs for security updates and keeping up with new web features in my shrinking spare time after doing the 40+-hour-a-week job I actually get paid for. Time, I might add, which is taken away from my other hobbies and my personal relaxation, and time which I would not need to spend if I did this purely as a hobby and never released any of it. Now that Firefox is on a four-week release schedule, it's just more than I feel I can continue to commit to and I'm neglecting the work I need to do on the system that I really do use every day.

We're running on fumes technologically as well. Besides various layout and DOM features we don't support well like CSS grid, there are large JavaScript updates we'll increasingly need which are formidably complex tasks. The biggest is async and await support which landed in Firefox 52, and which many sites now expect to run at all. However, at the time it required substantial changes to both JavaScript and the runtime environment and had lots of regressions and bugs to pick up. We have some minimal syntactic support for the feature but it covers only the simplest of use cases incompletely. There are also front end changes required to deal with certain minifiers (more about this in a moment) but they can all be traced back to a monstrous 2.5MB commit which is impossible to split up piecemeal. We could try to port 52ESR as a whole, but we would potentially suffer some significant regressions in the process, and because there is no Rust support for 32-bit PowerPC on OS X we couldn't build anything past Firefox 54 anyway. All it does is just get us that much closer to an impenetrable dead end. It pains me to say so, but it's just not worth it, especially if I, the browser's only official beneficiary, am rarely using it personally these days. It's best to hang it up here while the browser still works for most practical purposes and people can figure out their next move, rather than vainly struggling on with token changes until the core is totally useless.

Here is what I have learned working on TenFourFox and, for that matter, Classilla.

Writing and maintaining a browser engine is fricking hard and everything moves far too quickly for a single developer now. However, JavaScript is what probably killed TenFourFox quickest. For better or for worse, web browsers' primary role is no longer to view documents; it is to view applications that, by sheer coincidence, sometimes resemble documents. You can make workarounds to gracefully degrade where we have missing HTML or DOM features, but JavaScript is pretty much run or don't, and more and more sites just plain collapse if any portion of it doesn't. Nowadays front ends have become impossible to debug by outsiders and the liberties taken by JavaScript minifiers are demonstrably not portable. No one cares because it works okay on the subset of browsers they want to support, but someone bringing up the rear like we are has no chance because you can't look at the source map and no one on the dev side has interest in or time for helping out the little guy. Making test cases from minified JavaScript is an exercise in untangling spaghetti that has welded itself together with superglue all over your chest hair, worsened by the fact that stepping through JavaScript on geriatric hardware with a million event handlers like waiting mousetraps is absolute agony. With that in mind, who's surprised there are fewer and fewer minority browser engines? Are you shocked that attempts like NetSurf, despite its best intentions and my undying affection for it, are really just toys if they lack full script runtimes? Trying and failing to keep up with the scripting treadmill is what makes them infeasible to use. If you're a front-end engineer and you throw in a dependency on Sexy Framework just because you can, don't complain when you only have a minority of browser choices because you're a big part of the problem.

Infrastructure is at least as important as the software itself. A popular product incurs actual monetary costs to service it. It costs me about US$600 a month, on average, to run my home data center where Floodgap sits (about ten feet away from this chair) between network, electricity and cooling costs. TenFourFox is probably about half its traffic, so offloading what we can really reduces the financial burden, along with the trivial amount of ad revenue which basically only pays for the domain names. Tenderapp for user support, SourceForge for binary hosting, Github for project management and Blogger for bloviating are all free, along with Google Code where we originally started, which helped a great deal in making the project more sustainable for me personally even if ultimately I was shifting those ongoing costs to someone else. However, the biggest investment is time: trying to stick to a regular schedule when the ground is shifting under your feet is a big chunk out of my off hours, and given that my regular profession is highly specialized and has little to do with computing, you can't really pay me enough to dedicate my daily existence to TenFourFox or any other open-source project because I just don't scale. (We never accepted donations anyway, largely to avoid people thinking they were "buying" something.) I know some people make their entire living from free open source projects. I think those people are exceptions and noteworthy precisely because of their rarity. Most open source projects, even ones with large userbases, are black holes ultimately and always will be.

Gecko has a lot of technical baggage, but it is improving by leaps and bounds, and it is supported by an organization that has the Internet's best interests at heart. I have had an active Bugzilla account since 2004 and over those 16+ years I doubt I would have gotten the level of assistance or cooperation from anyone else that I've received from Mozilla employees and other volunteers. This is not to say that Mozilla (both MoFo and MoCo) has not made their blunders, or that I have agreed personally with everything they've done, and with respect to sustainability MoCo's revenues in particular are insufficiently diversified (speaking of black holes). But given my experience with other Mozillians and our shared values I would rather trust Mozilla any day with privacy and Web stewardship than, say, Apple, who understandably are only interested in what sells iDevices, and Google, who understandably are only interested in what improves the value proposition of their advertising platforms. And because Chrome and Chromium effectively represent the vast majority of desktop market share, Google can unilaterally drive standards and force everyone to follow. Monopolies, even natural ones, may be efficient but that doesn't make them benign. I'll always be a Firefox user for that reason and I still intend to continue contributing code.

Now for the mildly controversial part of this post and the one that will make a few people mad, but this is the end of TenFourFox, and a post-mortem must be comprehensive. For this reason I've chosen to disable comments on this entry. Here is what you should have learned from TenFourFox (much the same thing users should have learned from any open-source project where the maintainer eventually concluded it was more trouble than it was worth).

If you aren't paying for the software, then please don't be a jerk. There is a human at the other end of those complaints and unless you have a support contract, that person owes you exactly nothing. Whining is exhausting to read and "doesn't work" reports are unavoidably depressing, disparaging or jokey comments are unkind, and making reports nastier or more insistent doesn't make your request more important. This is true whether or not your request is reasonable or achievable, but it's certainly more so when it isn't.

As kindly as I can put it, not all bug reports are welcome. Many are legitimately helpful and improve the quality of the browser, and I did appreciate the majority of the reports I got, but even helpful bug reports objectively mean more work for me though it was work I usually didn't mind doing. Unfortunately, the ones that are unhelpful are at best annoying (and at worst incredibly frustrating) because they mean unhappy people with problems that may never be solvable.

The bug reports I liked least were the ones that complained about some pervasive, completely disabling flaw permeating the entire browser from top to bottom. Invariably this was that the browser "was slow," but startup crashes were probably a distant second place. The bug report would inevitably add something along the lines of this should be obvious, or talk about the symptom(s) as if everyone must be experiencing it (them).

I'm not doubting what people say they're seeing. But you should also consider that asserting the software has such a grave fault effectively alleges I either don't use the software or care about it, or I would have noticed. Most of the time my reply was to point out that my reply was being made in the browser itself, and to point out that we had regular beta phases where the alleged issue had not surfaced, so no, it must not be that pervasive, and let's figure out why your computer behaves that way. As far as the browser being slow, well, that's part personal expectation and part technical differences. TenFourFox would regularly win benchmarks against other Power Mac browsers because its JavaScript JIT would stomp everything else, but its older Mozilla branch has weaker pixelpushing and DOM that is demonstrably slower than WebKit, and no Power Mac browser is going to approach the performance you would get on an Intel Mac with any browser. Some of this is legitimate criticism, but overall if that's what you're expecting, TenFourFox will disappoint you. And it certainly did disappoint some people, who felt completely empowered to ignore all that context and say so.

Here is another unwelcome bug report, sometimes part of those same reports: "Version X+1 does something bad that Version X didn't, so I went back to Version X (or I've switched to another browser). Please let me know when it's fixed."

As a practical consideration, if you have such a serious issue that you can't use the browser for your desired purpose, then I guess you do what you gotta do. But consider that you may also be saying you don't care about solving the problem. Part of it is, like the last report, the sometimes incorrect assumption that everyone else must be seeing what you're seeing. But the other part is that because you've already reverted to the previous release, you no longer have any actual investment in the problem being solved. If it actually is a problem that can be fixed, and I do fix it, you're using the previous version and may or may not be in a position to test the fix. And if it's a problem I can't observe, then it won't get fixed, assuming it actually does exist, because I don't see it on Version X+1 myself and the person who can see it, i.e., you, has bailed out. If you want me to fix it, especially if you are unwilling or unable to fix it yourself, then you need to stick with it like I'm sticking with it.

What should you do? Phrase it better. Post your reports with the attitude that you are just one user, using free software, from the humility of your own personal experience on your own system. Make it clear you don't expect anything from the report, you are grateful the software exists, you intend to keep using it and this is your small way of giving back. Say this in words because I can't see your face or hear your voice. Write "thank you" and mean it. Acknowledge the costs in time and money to bring it to you. Tell me what's good about it and what you use it for. That's how you create a relationship where I can see you as a person and not a demand request, and where you can see me as a maintainer and not a vending machine. Value my work so that I can value your insights into it. Politeness, courtesy and understanding didn't go out the window just because we're interacting through a computer screen.

Goodness knows I haven't been perfect and I've lost my temper at times with people (largely justifiably, I think, but still). All of us are only human. But today, looking back on everything that's happened, I'm still proud of TenFourFox and I'm still glad I started working on it over 10 years ago. Here's the first functional build of Firefox 4.0b7pre on Tiger (what became the first beta of TenFourFox), dated October 15, 2010:

This was back when Mozilla was sending thank-you cards to Firefox 4 beta testers. TenFourFox survived a lot of times when I thought it was finished for one technical reason or another, and it's still good enough for the couple thousand people who use it every day and the few thousand more who use it occasionally. It kept a lot of perfectly good hardware out of landfills. And most of all, it got me years more out of my own Quad G5 and iBook G4, and it still works well enough for the times I do still need it. Would I embark upon it again, knowing everything I know now and all the work and sweat that went into it? Heck yeah. In a heartbeat.

It was worth it.

Categorieën: Mozilla-nl planet

The Firefox Frontier: How one woman founder pivoted her company online while supporting small businesses

Mozilla planet - ma, 29/03/2021 - 18:48

Eighteen years ago Susie Daly started Renegade Craft as a way to build a community of artists through in-person events. When COVID-19 and the corresponding shutdown put a stop to … Read more

The post How one woman founder pivoted her company online while supporting small businesses appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: MDN localization in March — Tier 1 locales unfrozen, and future plans

Mozilla planet - do, 25/03/2021 - 17:05

Since we last talked about MDN localization, a lot of progress has been made. In this post we’ll talk you through the unfreezing of Tier 1 locales, and the next steps in our plans to stop displaying non-active and unmaintained locales.

Tier 1 locales unfrozen!

It has been a long time coming, but we’ve finally achieved our goal of unfreezing our Tier 1 locales. The fr, ja, ru, zh-CN, and zh-TW locales can now be edited, and we have active teams working on each of these locales. We added Russian (ru) to the list very recently, after great interest from the community helped us rapidly assemble a team to maintain those docs — we are really excited about making progress here!

If you are interested in helping out with these locales, or asking questions, you can find all the information you need at our all-new translated-content README. This includes:

  • How to contribute
  • The policies in place to govern the work
  • Who is in the active localization teams
  • How the structure is kept in sync with the en-US version.

We’d like to thank everyone who helped us get to this stage, especially the localization team members who have stepped up to help us maintain our localized content:

Stopping the display of unmaintained locales on MDN

Previously we said that we were planning to stop the display of all locales except for en-US, and our Tier 1 locales.

We’ve revised this plan a little since then — we looked at the readership figures of each locale, as a percentage of the total MDN traffic, and decided that we should keep a few more than just the 5 we previously mentioned. Some of the viewing figures for non-active locales are quite high, so we thought it would be wise to keep them and try to encourage teams to start maintaining them.

In the end, we decided to keep the following locales:

  • en-US
  • es
  • ru (already unfrozen)
  • fr (already unfrozen)
  • zh-CN (already unfrozen)
  • ja (already unfrozen)
  • pt-BR
  • ko
  • de
  • pl
  • zh-TW (already unfrozen)

We are planning to stop displaying the other 21 locales. Many of them have very few pages, a high percentage of which are out-of-date or otherwise flawed, and we estimate that the total traffic we will lose by removing all these locales is less than 2%.

So what does this mean?

We are intending to stop displaying all locales outside the top ten by a certain date. The date we have chosen is April 30th.

We will remove all the source content for those locales from the translated-content repo, and put it in a new retired translated content repo, so that anyone who still wants to use this content in some way is welcome to do so. We highly respect the work that so many people have done on translating MDN content over the years, and want to preserve it in some way.

We will redirect the URLs for all removed articles to their en-US equivalents — this solves an often-mentioned issue whereby people would rather view the up-to-date English article than the low-quality or out-of-date version in their own language, but find it difficult to do so because of the way MDN works.

We are also intending to create a new tool whereby, if you see a really outdated page, you can press a button saying “retire content” to open up a pull request that, when merged, will move the page to the retired content repo.

After this point, we won’t revive anything — the journey to retirement is one way. This may sound harsh, but we are taking determined steps to clean up MDN and get rid of out-of-date and out-of-remit content that has been around for years in some cases.

The post MDN localization in March — Tier 1 locales unfrozen, and future plans appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Friend of Add-ons: Mélanie Chauvel

Mozilla planet - do, 25/03/2021 - 16:00

I’m pleased to announce our newest Friend of Add-ons, Mélanie Chauvel! After becoming interested in free and open source software in 2012, Mélanie started contributing code to Tab Center Redux, a Firefox extension that displays tabs vertically on the sidebar. When the developer stopped maintaining it, she forked a version and released it as Tab Center Reborn.

As she worked on Tab Center Reborn, Mélanie became thoroughly acquainted with the tabs API. After running into a number of issues where the API didn’t behave as expected, or didn’t provide the functionality her extension needed, she started filing bugs and proposing new features for the WebExtensions API.

Changing code in Firefox can be scary to new contributors because of the size and complexity of the codebase. As she started looking into her pain points, Mélanie realized that she could make some of the changes she wanted to see. “WebExtensions APIs are implemented in JavaScript and are relatively isolated from the rest of the codebase,” she says. “I saw that I could fix some of the issues that bothered me and took a stab at it.”

Mélanie added two new APIs: sidebarAction.toggle, which can toggle the visibility of the sidebar if it belongs to an extension, and tabs.warmup, which can reduce the amount of time it takes for an inactive tab to load. She also made several improvements to the tabs.duplicate API. Thanks to her contributions, new duplicated tabs are activated as soon as they are opened, extensions can choose where a duplicate tab should be opened, and duplicating a pinned tab no longer causes unexpected visual glitches.

Mélanie is also excited to see and help others contribute to open source projects. One of her most meaningful experiences at Mozilla has been filing an issue and seeing a new contributor fix it a few weeks later. “It made me happy to be part of the path of someone else contributing to important projects like Firefox. We often feel powerless in our lives, and I’m glad I was able to help others participate in something bigger than them,” Mélanie says.

These days, Mélanie is working on translating Tab Center Reborn into French and Esperanto and contributing code to other open-source projects including Mastodon, Tusky, Rust, Exa, and KDE. She also enjoys playing puzzle games, exploring vegan cooking and baking, and watching TV shows and movies with friends.

Thank you for all of your contributions, Mélanie! If you’re a fan of Mélanie’s work and wish to offer support, you can buy her a coffee or contribute on Liberapay.

If you are interested in contributing to the add-ons ecosystem, please visit our Contribution wiki.

The post Friend of Add-ons: Mélanie Chauvel appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Mozilla Thunderbird: Mailfence Encrypted Email Suite in Thunderbird

Mozilla planet - do, 25/03/2021 - 12:33
Mailfence Encrypted Email Suite in Thunderbird

Today, the Thunderbird team is happy to announce that we have partnered with Mailfence to offer their encrypted email service in Thunderbird’s account setup. To check this out, you click on “Get a new email address…” when you are setting up an account. We are excited that those using Thunderbird will have this easily accessible option to get a new email address from a privacy-focused provider with just a few clicks.

Why partner with Mailfence?

It comes down to two important shared values: a commitment to privacy and open standards. Mailfence has built a private and secure email experience, whilst using open standards that ensure its users can use clients like Thunderbird with no extra hoops to jump through – which respects their freedom. Also, Mailfence has been doing this for longer than most providers have been around and this shows real commitment to their cause.

We’ve known we wanted to work with the Mailfence team for well over a year, and this is just the beginning of our collaboration. We’ve made it easy to get an email address from Mailfence, and their team has created many great guides on how to get the most out of their service in Thunderbird. The goal is that, in the near future, Mailfence users will benefit from automatic sync of their contacts and calendars – as well as their email.

Why is this important?

If we’ve learned anything about the tech landscape these last few years, it’s that big tech doesn’t always have your best interests in mind. Big tech has based its business model on the harvesting and exploitation of data. The data these companies gobble up is used for discrimination and manipulation – not to mention the damage done when that data is sold to or stolen by really bad actors.

We wanted to give our users an alternative, and we want to continue to show our users that you can communicate online and leverage the power of the Internet without giving up your right to privacy. Mailfence is a great service that we want to share with our community and users, to show there are good options out there.

Patrick De-Schutter, Co-Founder of Mailfence, makes an excellent case for why this partnership is important:

“Thunderbird’s mission and values completely align with ours. We live in times of ever growing Internet domination by big tech companies. These have repeatedly shown a total disrespect of online privacy and oblige their users to sign away their privacy through unreadable Terms of Service. We believe this is wrong and dangerous. Privacy is a fundamental human right. With this partnership, we create a user-friendly privacy-respecting alternative to the Big Tech offerings that are centered around the commodification of personal data.”

How to try out Mailfence

If you want to give Mailfence a try right now (and are already using Thunderbird), open Thunderbird’s account settings, click “Account Actions” and then “Add Mail Account”. There you will see the option to “Get a new email address”. Select Mailfence as your provider and choose your desired username, and you will be prompted to set up your account. Once you have done this, your account will be set up in Thunderbird and you will be able to start your Mailfence trial.

It is our sincere hope that our users will give Mailfence a try because using services that respect your freedom and privacy is better for you, and better for society at large. We look forward to deepening our relationship with Mailfence and working hand-in-hand with them to improve the Thunderbird experience for those using their service.

We’ll share more about our partnership with Mailfence, as well as our other efforts to promote privacy and open standards as the year progresses. We’re so grateful to get to work with great people who share our values, and to then share that work with the world.

Categorieën: Mozilla-nl planet

Niko Matsakis: Async Vision Doc Writing Sessions II

Mozilla planet - do, 25/03/2021 - 05:00

I’m scheduling two more public drafting sessions for tomorrow, March 26th:

If you’re available and have interest in one of those issues, please join us! Just ping me on Discord or Zulip and I’ll send you the Zoom link.

I also plan to schedule more sessions next week, so stay tuned!

The vision…what?

Never heard of the async vision doc? It’s a new thing we’re trying as part of the Async Foundations Working Group:

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

Read the full blog post for more.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.51.0

Mozilla planet - do, 25/03/2021 - 01:00

The Rust team is happy to announce a new version of Rust, 1.51.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.51.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.51.0 on GitHub.

What's in 1.51.0 stable

This release represents one of the largest additions to the Rust language and Cargo in quite a while, stabilizing an MVP of const generics and a new feature resolver for Cargo. Let's dive right into it!

Const Generics MVP

Before this release, Rust allowed you to have your types be parameterized over lifetimes or types. For example if we wanted to have a struct that is generic over the element type of an array, we'd write the following:

struct FixedArray<T> {
//               ^^^ Type generic definition.
    list: [T; 32]
    //     ^ Where we're using it.
}

If we then use FixedArray<u8>, the compiler will make a monomorphic version of FixedArray that looks like:

struct FixedArray<u8> {
    list: [u8; 32]
}

This is a powerful feature that allows you to write reusable code with no runtime overhead. However, until this release it hasn't been possible to easily be generic over the values of those types. This was most notable in arrays which include their length in their type definition ([T; N]), which previously you could not be generic over. Now with 1.51.0 you can write code that is generic over the values of any integer, bool, or char type! (Using struct or enum values is still unstable.)

This change now lets us have our own array struct that's generic over its type and its length. Let's look at an example definition, and how it can be used.

struct Array<T, const LENGTH: usize> {
//              ^^^^^^^^^^^^^^^^^^^ Const generic definition.
    list: [T; LENGTH]
    //        ^^^^^^ We use it here.
}

Now if we then used Array<u8, 32>, the compiler will make a monomorphic version of Array that looks like:

struct Array<u8, 32> {
    list: [u8; 32]
}
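To show what this enables in practice, here is a small usage sketch (our own illustration, not from the release notes): a function that is generic over an array's length, which works on stable Rust as of 1.51.

```rust
// A hypothetical helper, generic over the array length N.
fn first_and_last<T: Copy, const N: usize>(arr: [T; N]) -> Option<(T, T)> {
    if N == 0 {
        None
    } else {
        Some((arr[0], arr[N - 1]))
    }
}

fn main() {
    // The compiler infers N = 3 and N = 0 from the arguments.
    assert_eq!(first_and_last([1, 2, 3]), Some((1, 3)));
    let empty: [i32; 0] = [];
    assert_eq!(first_and_last(empty), None);
}
```

Because N is part of the type, the zero-length case is handled without any runtime length check on the caller's side.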

Const generics adds an important new tool for library designers in creating new, powerful compile-time safe APIs. If you'd like to learn more about const generics you can also check out the "Const Generics MVP Hits Beta" blog post for more information about the feature and its current restrictions. We can't wait to see what new libraries and APIs you create!

array::IntoIter Stabilisation

As part of const generics stabilising, we're also stabilising a new API that uses it, std::array::IntoIter. IntoIter allows you to create a by-value iterator over any array. Previously there wasn't a convenient way to iterate over owned values of an array, only references to them.

fn main() {
    let array = [1, 2, 3, 4, 5];

    // Previously
    for item in array.iter().copied() {
        println!("{}", item);
    }

    // Now
    for item in std::array::IntoIter::new(array) {
        println!("{}", item);
    }
}

Note that this is added as a separate method instead of .into_iter() on arrays, as that currently introduces some amount of breakage; currently .into_iter() refers to the slice by-reference iterator. We're exploring ways to make this more ergonomic in the future.
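As a small illustration (ours, not from the release notes) of why by-value iteration matters, this sketch moves non-Copy values out of an array, something a by-reference iterator cannot do:

```rust
fn main() {
    let names = [String::from("const generics"), String::from("IntoIter")];
    let mut owned: Vec<String> = Vec::new();

    // Each `s` is an owned String moved out of the array;
    // `names.iter()` would only yield &String references.
    for s in std::array::IntoIter::new(names) {
        owned.push(s);
    }

    assert_eq!(owned, vec!["const generics", "IntoIter"]);
}
```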

Cargo's New Feature Resolver

Dependency management is a hard problem, and one of the hardest parts of it is just picking what version of a dependency to use when it's depended on by two different packages. This doesn't just include its version number, but also what features are or aren't enabled for the package. Cargo's default behaviour is to merge features for a single package when it's referred to multiple times in the dependency graph.

For example, let's say you had a dependency called foo with features A and B, which was being used by packages bar and baz, but bar depends on foo+A and baz depends on foo+B. Cargo will merge both of those features and compile foo as foo+AB. This has a benefit that you only have to compile foo once, and then it can be reused for both bar and baz.
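For concreteness, the two manifests in that scenario might look something like this (hypothetical package and feature names, sketched for illustration):

```toml
# bar/Cargo.toml
[dependencies]
foo = { version = "1.0", features = ["A"] }

# baz/Cargo.toml
[dependencies]
foo = { version = "1.0", features = ["B"] }

# With the default resolver, a build that pulls in both bar and baz
# compiles foo once, with features A and B both enabled (foo+AB).
```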

However, this also comes with a downside. What if a feature enabled in a build-dependency is not compatible with the target you are building for?

A common example of this in the ecosystem is the optional std feature included in many #![no_std] crates, that allows crates to provide added functionality when std is available. Now imagine you want to use the #![no_std] version of foo in your #![no_std] binary, and use the foo at build time in your build.rs. If your build time dependency depends on foo+std, your binary now also depends on foo+std, which means it will no longer compile because std is not available for your target platform.

This has been a long-standing issue in cargo, and with this release there's a new resolver option in your Cargo.toml, where you can set resolver="2" to tell cargo to try a new approach to resolving features. You can check out RFC 2957 for a detailed description of the behaviour, which can be summarised as follows.

  • Dev dependencies — When a package is shared as a normal dependency and a dev-dependency, the dev-dependency features are only enabled if the current build is including dev-dependencies.
  • Host Dependencies — When a package is shared as a normal dependency and a build-dependency or proc-macro, the features for the normal dependency are kept independent of the build-dependency or proc-macro.
  • Target dependencies — When a package appears multiple times in the build graph, and one of those instances is a target-specific dependency, then the features of the target-specific dependency are only enabled if the target is currently being built.

While this can lead to some crates compiling more than once, this should provide a much more intuitive development experience when using features with cargo. If you'd like to know more, you can also read the "Feature Resolver" section in the Cargo Book for more information. We'd like to thank the cargo team and everyone involved for all their hard work in designing and implementing the new resolver!

[package]
resolver = "2"

# Or if you're using a workspace
[workspace]
resolver = "2"

Splitting Debug Information

While not often highlighted in the release, the Rust teams are constantly working on improving Rust's compile times, and this release marks one of the largest improvements in a long time for Rust on macOS. Debug information maps the binary code back to your source code, so that the program can give you more information about what went wrong at runtime. In macOS, debug info was previously collected into a single .dSYM folder using a tool called dsymutil, which can take some time and use up quite a bit of disk space.

Collecting all of the debuginfo into this directory helps in finding it at runtime, particularly if the binary is being moved. However, it does have the drawback that even when you make a small change to your program, dsymutil will need to run over the entire final binary to produce the final .dSYM folder. This can sometimes add a lot to the build time, especially for larger projects, as all dependencies always get recollected, but this has been a necessary step as without it Rust's standard library didn't know how to load the debug info on macOS.

Recently, Rust backtraces switched to using a different backend which supports loading debuginfo without needing to run dsymutil, and we've stabilized support for skipping the dsymutil run. This can significantly speed up builds that include debuginfo and significantly reduce the amount of disk space used. We haven't run extensive benchmarks, but have seen a lot of reports of people's builds being a lot faster on macOS with this behavior.

You can enable this new behaviour by setting the -Csplit-debuginfo=unpacked flag when running rustc, or by setting the split-debuginfo [profile] option to unpacked in Cargo. The "unpacked" option instructs rustc to leave the .o object files in the build output directory instead of deleting them, and skips the step of running dsymutil. Rust's backtrace support is smart enough to know how to find these .o files. Tools such as lldb also know how to do this. This should work as long as you don't need to move the binary to a different location while retaining the debug information.

[profile.dev]
split-debuginfo = "unpacked"

Stabilized APIs

In total, this release saw the stabilisation of 18 new methods for various types like slice and Peekable. One notable addition is the stabilisation of ptr::addr_of! and ptr::addr_of_mut!, which allow you to create raw pointers to unaligned fields. Previously this wasn't possible because Rust requires &/&mut to be aligned and point to initialized data, and &addr as *const _ would then cause undefined behaviour as &addr needs to be aligned. These two macros now let you safely create unaligned pointers.

use std::ptr;

#[repr(packed)]
struct Packed {
    f1: u8,
    f2: u16,
}

let packed = Packed { f1: 1, f2: 2 };

// `&packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
let raw_f2 = ptr::addr_of!(packed.f2);

assert_eq!(unsafe { raw_f2.read_unaligned() }, 2);

The following methods were stabilised.

Other changes

There are other changes in the Rust 1.51.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.51.0

Many people came together to create Rust 1.51.0. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

The Firefox Frontier: How two women are taking on the digital ad industry one brand at a time

Mozilla planet - wo, 24/03/2021 - 22:31

In the fall of 2016, Nandini Jammi co-founded Sleeping Giants to expose for brands how their digital advertisements were showing up on websites that they didn’t intend their marketing efforts … Read more

The post How two women are taking on the digital ad industry one brand at a time appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Mozilla Explains: What is an IP address?

Mozilla planet - wo, 24/03/2021 - 19:34

Every time you are on the internet, IP addresses are playing an essential role in the information exchange to help you see the sites you are requesting. Yet, there is … Read more

The post Mozilla Explains: What is an IP address? appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Play Store Support Program Updates

Mozilla planet - wo, 24/03/2021 - 15:32

TL;DR: By the end of March 2021, the Play Store Support program will be moving from the Respond Tool to Conversocial. If you want to keep helping Firefox for Android users by responding to their reviews in the Google Play Store, please fill out this form to request a Conversocial account. You can learn more about the program here.

 

In late August last year, to support the transition of Firefox for Android from the old engine (fennec) to the new one (fenix), we officially introduced a tool we built in-house, called the Respond Tool, to support the Play Store Support campaign. The Respond Tool lets contributors and staff reply to reviews under 3 stars on the Google Play Store. That program was known as Play Store Support.

We learned a lot from the campaign and identified a number of improvements to functionality and user experience that were necessary. In the end, we decided to migrate the program from the Respond Tool to Conversocial, a third-party tool that we are already using with our community to support users on Twitter. This change will enable us to:

  • Segment reviews and set priorities.
  • Filter out reviews with profanity.
  • See when users change their ratings.
  • Track trends with a powerful reporting dashboard.
  • Save costs and engineering resources.

As a consequence of this change, we’re going to decommission the Respond Tool by March 31, 2021. You’re encouraged to request an account in Conversocial if you want to keep supporting Firefox for Android users. You can read more about the decommission plan in the Contributor Forum.

We have also updated the guidelines to reflect this change that you can learn more from the following article: Getting started with Play Store Support.

This will not be possible without your help

All this would not be possible without contributors like you, who have been helping us provide great support for Firefox for Android users through the Respond Tool. From the Play Store Support campaign last year until today, 99 contributors have helped reply to a total of 14,484 reviews on the Google Play Store.

I’d like to extend my gratitude to Paul W, Christophe V, Andrew Truong, Danny Colin, and Ankit Kumar who have been very supportive and accommodating by giving us feedback throughout the transition process.

We’re excited about this change and hope that you can help us to spread the word and share this announcement to your fellow contributors.

Let’s keep on rocking the helpful web!

 

On behalf of the SUMO team,

Kiki

Categorieën: Mozilla-nl planet

Giorgio Maone: Welcome SmartBlock: Script Surrogates for the masses!

Mozilla planet - di, 23/03/2021 - 19:29

Today Mozilla released Firefox 87, introducing SmartBlock, a new feature which "intelligently fixes up web pages that are broken by our tracking protections, without compromising user privacy [...] by providing local stand-ins for blocked third-party tracking scripts. These stand-in scripts behave just enough like the original ones to make sure that the website works properly. They allow broken sites relying on the original scripts to load with their functionality intact."

As long time NoScript users may recall, this is exactly the concept behind "Script Surrogates", which I developed more than ten years ago as a NoScript "Classic" module.

In fact, in its launch post Mozilla kindly wants "to acknowledge the NoScript and uBlock Origin teams for helping to pioneer this approach."

It's not the first time that concepts pioneered by NoScript percolate into mainstream browsers: from content blocking to XSS filters, I must admit it gets me emotional every time :)

Script Surrogates unfortunately could not be initially ported to NoScript Quantum, due to the radically different browser extensions technology it was forced into. Since then, many people using NoScript and other content blockers have been repeatedly asking for this feature to come back because it "fixed" many sites without requiring unwanted scripts (such as Google Analytics, for instance) to be enabled or ad-blocking / anti-tracking extensions to be disabled.

Script Surrogates were significantly more powerful, flexible and user-hackable than SmartBlock, and I find myself missing them in several circumstances.

I'm actually planning (i.e. trying to secure time and funds) to bring back Script Surrogates as a stand-alone extension for Firefox-based and Chromium-based browsers, both on desktop and mobile devices. This tool would complement and enhance the whole class of content blockers (including but not limited to NoScript), without requiring the installation of NoScript itself. Furthermore, its core functionality (on-demand script injection/replacement, native object wrapping/emulation...) would be implemented as NoScript Commons Library modules, ready to be reused by other browser extensions, as is already happening with FSF's in-progress project JS-Shield.

In the meanwhile, we can all enjoy Script Surrogates' "light", mainstream younger sibling, built into Firefox (and therefore coming soon to the Tor Browser too). Yay Mozilla!

Categorieën: Mozilla-nl planet

Daniel Stenberg: Github steel

Mozilla planet - di, 23/03/2021 - 17:02

I honestly don’t know what particular thing I did to get this, but GitHub gave me a 3D-printed steel version of my 2020 GitHub contribution “matrix”. You know that thing on your GitHub profile that normally looks something like this:

The gift package included this friendly note:

Hi @bagder,

As we welcome 2021, we want to thank and congratulate you on what you brought to 2020. Amidst the year’s challenges, you found time to continue giving back and contributing to the community.

Your hard work, care, and attention haven’t gone unnoticed.

Enclosed is your 2020 GitHub contribution graph, 3D printed in steel. You can also view it by pointing your browser to https://github.co/skyline. It tells a personal story only you can truly interpret.

Please accept this small gift as a token of appreciation on behalf of all of us here at GitHub, and everyone who benefits from your work.

Thank you and all the best for the year ahead!

With <3, from GitHub

I think I’ll put it under one of my screens here on my desk for now. It measures 145 mm x 30 mm x 30 mm and weighs 438 grams.

Thanks GitHub!

Update: the print is done by shapeways.com

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: In March, we see Firefox 87

Mozilla planet - di, 23/03/2021 - 16:56

Nearing the end of March now, and we have a new version of Firefox ready to deliver some interesting new features to your door. This month, we’ve got some rather nice DevTools additions in the form of prefers-color-scheme media query emulation and toggling :target pseudo-classes, some very useful additions to editable DOM elements: the beforeinput event and getTargetRanges() method, and some nice security, privacy, and macOS screenreader support updates.

This blog post provides merely a set of highlights; for all the details, check out the following:

Developer tools

In developer tools this time around, we’ve first of all updated the Page Inspector to allow simulation of prefers-color-scheme media queries, without having to change the operating system to trigger light or dark mode.

Open the DevTools, and you’ll see a new set of buttons in the top right corner:

Two buttons marked with sun and moon icons

When pressed, these enable the light and dark preference, respectively. Selecting either button deselects the other. If neither button is selected then the simulator does not set a preference, and the browser renders using the default feature value set by the operating system.
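For reference, what the simulator toggles is the standard prefers-color-scheme media feature. A minimal stylesheet that reacts to it might look like this (a hypothetical example, not tied to any particular site):

```css
/* Light styles apply by default */
body {
  background: #ffffff;
  color: #222222;
}

/* Applied when the user — or the DevTools simulator — prefers dark */
@media (prefers-color-scheme: dark) {
  body {
    background: #222222;
    color: #eeeeee;
  }
}
```

Pressing the sun or moon button switches which of these rules applies, without touching the operating-system setting.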

Another nice addition: the Page Inspector’s CSS pane can now be used to toggle the :target pseudo-class for the currently selected element, in addition to the pseudo-classes that were already available (:hover, :active, etc.)

Firefox devtools CSS rules pane, showing a body selector with a number of following declarations, and a bar up the top with several pseudo classes written inside it

Find out more about this at Viewing common pseudo-classes.

Better control over user input: beforeinput and getTargetRanges()

The beforeinput event and getTargetRanges() method are now enabled by default. They allow web apps to override text edit behavior before the browser modifies the DOM tree, providing more control over text input to improve performance.

The global beforeinput event is sent to an <input> element — or any element whose contenteditable attribute is set to true — immediately before the element’s value changes. The getTargetRanges() method of the InputEvent interface returns an array of static ranges that will be affected by a change to the DOM if the input event is not canceled.

As an example, say we have a simple comment system where users can edit their comments live inside a contenteditable container, but we don’t want them to edit the commenter’s name or other valuable metadata. Some sample markup might look like this:

<p contenteditable>
  <span>Mr Bungle:</span>
  This is my comment; isn't it good!
  <em>-- 09/16/21, 09.24</em>
</p>

Using beforeinput and getTargetRanges(), this is now really simple:

const editable = document.querySelector('[contenteditable]');

editable.addEventListener('beforeinput', e => {
  const targetRanges = e.getTargetRanges();
  const parentTag = targetRanges[0].startContainer.parentElement.tagName;
  if (parentTag === 'SPAN' || parentTag === 'EM') {
    e.preventDefault();
  }
});

Here we respond to the beforeinput event so that each time a change to the text is attempted, we get the target range that would be affected by the change, find out if it is inside a <span> or <em> element, and if so, run preventDefault() to stop the edit happening. Voila — non-editable text regions inside editable text. Granted, this could be handled in other ways, but think beyond this trivial example — there is a lot of power to unlock here in terms of the control you’ve now got over text input.

Security and privacy

Firefox 87 sees some valuable security and privacy changes.

Referrer-Policy changes

First of all, the default Referrer-Policy has been changed to strict-origin-when-cross-origin (from no-referrer-when-downgrade), reducing the risk of leaking sensitive information in cross-origin requests. Essentially this means that by default, path and query string information are no longer included in HTTP Referrers.
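To illustrate with a made-up URL: under the old default, a cross-origin request from a page at https://example.com/users/settings?id=123 sent the full URL in the Referer header; under the new default, only the origin is sent:

```
Referer: https://example.com/users/settings?id=123    (old default: no-referrer-when-downgrade)
Referer: https://example.com/                         (new default: strict-origin-when-cross-origin)
```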

You can find out more about this change at Firefox 87 trims HTTP Referrers by default to protect user privacy.

SmartBlock

We also wanted to bring our new SmartBlock feature to the attention of our readers. SmartBlock provides stand-ins for tracking scripts blocked by Firefox (e.g. when in private browsing mode), getting round the often-experienced problem of sites failing to load or not working properly when those tracking scripts are blocked and therefore not present.

The provided stand-in scripts behave close enough to the original ones that they allow sites that rely on them to load and behave normally. And best of all, these stand-ins are bundled with Firefox. No communication needs to happen with the third-party at all, so the potential for any tracking to occur is greatly diminished, and the affected sites may even load quicker than before.

Learn more about SmartBlock at Introducing SmartBlock

VoiceOver support on macOS

Firefox 87 sees us shipping our VoiceOver screen reader support on macOS! No longer will you have to switch over to Chrome or Safari to do significant parts of your accessibility testing.

Check it out now, and let us know what you think.

The post In March, we see Firefox 87 appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Andrew Halberstadt: Understanding Mach Try

Mozilla planet - di, 23/03/2021 - 15:51

There is a lot of confusion around mach try. People frequently ask “How do I get task X in mach try fuzzy?” or “How can I avoid getting backed out?”. This post is not so much a tip as an explanation of how mach try works and its relationship to the CI system (taskgraph). Armed with this knowledge, I hope you’ll be able to use mach try a little more effectively.

Categorieën: Mozilla-nl planet

Mozilla Accessibility: VoiceOver Support for macOS in Firefox 87

Mozilla planet - di, 23/03/2021 - 14:35

Screen readers, an assistive technology that allows people to engage with computers through synthesized speech or a braille display, are available on all of the platforms where Firefox runs. However, until today we’ve had a gap in our support for this important technology. Firefox for Windows, Linux, Android, and iOS all work with the popular and included screen readers on those platforms, but macOS screen reader support has been absent.

For over a year the Firefox accessibility team has worked to bring high quality VoiceOver support to Firefox on macOS. Last August we delivered a developer preview of Firefox working with VoiceOver, and in December we expanded that preview to all Firefox consumers. With Firefox 87, we think it’s complete enough for everyday use. Firefox 87 supports all the most common VoiceOver features and performs well. Users should be able to easily navigate through web content and all of the browser’s primary interface without problems.

If you’re a Mac user, and you rely on a screen reader, now’s the time to give Firefox another try. We think you’ll enjoy the experience and look forward to your feedback. You can learn more about Firefox 87 and download a copy at the Firefox release notes.

The post VoiceOver Support for macOS in Firefox 87 appeared first on Mozilla Accessibility.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Firefox 87 introduces SmartBlock for Private Browsing

Mozilla planet - di, 23/03/2021 - 13:55

Today, with the launch of Firefox 87, we are excited to introduce SmartBlock, a new intelligent tracker blocking mechanism for Firefox Private Browsing and Strict Mode. SmartBlock ensures that strong privacy protections in Firefox are accompanied by a great web browsing experience.

Privacy is hard

At Mozilla, we believe that privacy is a fundamental right and that everyone deserves to have their privacy protected while they browse the web. Since 2015, as part of the effort to provide a strong privacy option, Firefox has included the built-in Content Blocking feature that operates in Private Browsing windows and Strict Tracking Protection Mode. This feature automatically blocks third-party scripts, images, and other content from being loaded from cross-site tracking companies reported by Disconnect. By blocking these tracking components, Firefox Private Browsing windows prevent them from watching you as you browse.

In building these extra-strong privacy protections in Private Browsing windows and Strict Mode, we have been confronted with a fundamental problem: introducing a policy that outright blocks trackers on the web inevitably risks blocking components that are essential for some websites to function properly. This can result in images not appearing, features not working, poor performance, or even the entire page not loading at all.

New Feature: SmartBlock

To reduce this breakage, Firefox 87 is now introducing a new privacy feature we are calling SmartBlock. SmartBlock intelligently fixes up web pages that are broken by our tracking protections, without compromising user privacy.

SmartBlock does this by providing local stand-ins for blocked third-party tracking scripts. These stand-in scripts behave just enough like the original ones to make sure that the website works properly. They allow broken sites relying on the original scripts to load with their functionality intact.

The SmartBlock stand-ins are bundled with Firefox: no actual third-party content from the trackers are loaded at all, so there is no chance for them to track you this way. And, of course, the stand-ins themselves do not contain any code that would support tracking functionality.
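To illustrate the general idea — this is a hypothetical sketch, not Firefox’s actual stand-in code — a stand-in for a blocked analytics script just needs to expose the API surface the page expects, while recording nothing:

```javascript
// Hypothetical stand-in for a blocked third-party analytics script.
// Page code that calls these methods keeps working, but no data is
// collected and nothing is sent anywhere.
const exampleAnalytics = {
  init(config) { /* accept configuration, do nothing */ },
  track(event, data) { /* swallow tracking calls */ },
  // Sites often gate their own setup on a "ready" callback, so the
  // stand-in still has to invoke it.
  ready(cb) { if (typeof cb === 'function') cb(); }
};

// Site code that would normally break when the real script is blocked:
exampleAnalytics.init({ id: 'EXAMPLE-1234' });
exampleAnalytics.track('pageview', { path: '/' });
exampleAnalytics.ready(() => console.log('page features initialized'));
```

The key point is that the shim preserves behavior the site depends on (the ready callback still fires) while the tracking calls become no-ops.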

In Firefox 87, SmartBlock will silently stand in for a number of common scripts classified as trackers on the Disconnect Tracking Protection List. Here’s an example of a performance improvement:


An example of SmartBlock in action. Previously (left), the website tiny.cloud had poor loading performance in Private Browsing windows in Firefox because of an incompatibility with strong Tracking Protection. With SmartBlock (right), the website loads properly again, while you are still fully protected from trackers found on the page.

We believe the SmartBlock approach provides the best of both worlds: strong protection of your privacy with a great browsing experience as well.

These new protections in Firefox 87 are just the start! Stay tuned for more SmartBlock innovations in upcoming versions of Firefox.

The team

This work was carried out in a collaboration between the Firefox webcompat and anti-tracking teams, including Thomas Wisniewski, Paul Zühlcke and Dimi Lee with support from many Mozillians including Johann Hofmann, Rob Wu, Wennie Leung, Mikal Lewis, Tim Huang, Ethan Tseng, Selena Deckelmann, Prangya Basu, Arturo Marmol, Tanvi Vyas, Karl Dubost, Oana Arbuzov, Sergiu Logigan, Cipriani Ciocan, Mike Taylor, Arthur Edelstein, and Steven Englehardt.

We also want to acknowledge the NoScript and uBlock Origin teams for helping to pioneer this approach.


The post Firefox 87 introduces SmartBlock for Private Browsing appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Pagina's