Mozilla Nederland
The Dutch Mozilla community

The Firefox Frontier: Tracking Diaries with Melanie Ehrenkranz

Mozilla planet - vr, 31/01/2020 - 15:09

In Tracking Diaries, we invited people from all walks of life to share how they spent a day online while using Firefox’s privacy protections to keep count of the trackers … Read more

The post Tracking Diaries with Melanie Ehrenkranz appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: The 2020 Rust Event Lineup

Mozilla planet - vr, 31/01/2020 - 01:00

A new decade has started, and we are excited about the Rust conferences coming up. Each conference is an opportunity to learn about Rust, share your knowledge, and to have a good time with your fellow Rustaceans. Read on to learn more about the events we know about so far.

FOSDEM
February 2nd, 2020

FOSDEM stands for the Free and Open Source Software Developers’ European Meeting. At this event, software developers from around the world meet up, share ideas and collaborate. FOSDEM will be hosting a Rust devroom workshop that aims to present the features and possibilities offered by Rust, as well as some of the many exciting tools and projects in its ecosystem.

Located in Brussels, Belgium

RustFest Netherlands
Q2, 2020

The RustFest Netherlands team are working hard behind the scenes on getting everything ready. We hope to tell you more soon so keep an eye on the RustFest blog and follow us on Twitter!

Located in the Netherlands

Rust+GNOME Hackfest
April 29th to May 3rd, 2020

The goal of the Rust+GNOME hackfest is to improve the interactions between Rust and the GNOME libraries. During this hackfest, we will be improving the interoperability between Rust and GNOME, improving the support of GNOME libraries in Rust, and exploring solutions to create GObject APIs from Rust.

Located in Montréal, Quebec

Rust LATAM
May 22nd-23rd, 2020

Where Rust meets Latin America! Rust Latam is Latin America's leading event for and by the Rust community. Two days of interactive sessions, hands-on activities and engaging talks to bring the community together. Schedule to be announced at this link.

Located in Mexico City, Mexico

Oxidize
July, 2020

The Oxidize conference is about learning and improving your programming skills with embedded systems and IoT in Rust. The conference plans on having one day of guided workshops for developers looking to start or improve their Embedded Rust skills, one day of talks by community members, and a two-day development session focused on Hardware and Embedded subjects in Rust. The exact starting date is yet to be announced.

Located in Berlin, Germany

RustConf
August 20th-21st, 2020

The official RustConf will be taking place in Portland, Oregon, USA. Last year's conference was amazing, and we are excited to see what happens next. See the website and Twitter for updates as the event date approaches!

Located in Oregon, USA

Rusty Days
Fall, 2020

Rusty Days is a new conference located in Wroclaw, Poland. Rustaceans of all skill levels are welcome. The conference is still being planned. Check out the information on their site and Twitter as we get closer to fall.

Located in Wroclaw, Poland

RustLab
October 16th-17th, 2020

RustLab 2020 is a two-day conference with talks and workshops. The date is set, but the talks are still being planned. We expect to learn more details as we get closer to the date of the conference.

Located in Florence, Italy

For the most up-to-date information on events, visit timetill.rs. For meetups, and other events see the calendar.

Categorieën: Mozilla-nl planet

Armen Zambrano: Web performance issue — reoccurrence

Mozilla planet - do, 30/01/2020 - 17:08
Web performance issue — reoccurrence

In June we discovered that Treeherder’s UI slowdowns were due to database slowdowns (for full details you can read this post). After a couple of months of investigation, we made various changes to the RDS setup. The changes that made the most significant impact were doubling the DB size to double our IOPS cap and adding Heroku auto-scaling for web nodes. Alternatively, we could have used Provisioned IOPS instead of General Purpose SSD storage to double the IOPS, but that would have cost over $1,000/month more.

Looking back, we made the mistake of not involving AWS from the beginning (I didn’t know we could have used their help). The AWS support team would have looked at the database and would likely have recommended the parameter changes required for a write-intensive workload (the changes they recommended during our November outage — see bug 1597136 for details). We did not have any issues for the next four months; however, their help would have saved a lot of time and would have prevented the major outage we had in November.

There were some good things that came out of these two episodes: the team has learned how to better handle DB issues, there are improvements we can make to prevent future incidents (see bug 1599095), we created an escalation path, and we worked closely as a team to get through the crisis (thanks bobm, camd, dividehex, ekyle, fubar, habib, kthiessen & sclements for your help!).

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.41.0

Mozilla planet - do, 30/01/2020 - 01:00

The Rust team is happy to announce a new version of Rust, 1.41.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.41.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.41.0 on GitHub.

What's in 1.41.0 stable

The highlights of Rust 1.41.0 include relaxed restrictions for trait implementations, improvements to cargo install, a more git-friendly Cargo.lock, and new FFI-related guarantees for Box<T>. See the detailed release notes to learn about other changes not covered by this post.

Relaxed restrictions when implementing traits

To prevent breakages in the ecosystem when a dependency adds a new trait impl, Rust enforces the orphan rule. The gist of it is that a trait impl is only allowed if either the trait or the type being implemented is local to (defined in) the current crate as opposed to a foreign crate. Exactly what "local" means gets complicated, however, when generics are involved.

Before Rust 1.41.0, the orphan rule was unnecessarily strict, getting in the way of composition. As an example, suppose your crate defines the BetterVec<T> struct, and you want a way to convert your struct to the standard library's Vec<T>. The code you would write is:

impl<T> From<BetterVec<T>> for Vec<T> {
    // ...
}

...which is an instance of the pattern:

impl<T> ForeignTrait<LocalType> for ForeignType<T> {
    // ...
}

In Rust 1.40.0 this impl was forbidden by the orphan rule, as both From and Vec are defined in the standard library, which is foreign to the current crate. There were ways to work around the limitation, such as the newtype pattern, but they were often cumbersome or even impossible in some cases.

While it's still true that both From and Vec are foreign, the trait (in this case From) is parameterized by a local type. Therefore, Rust 1.41.0 allows this impl.
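
To make this concrete, here is a minimal sketch of how the now-allowed impl might look in full; the BetterVec definition and its items field are made up for this illustration:

// Hypothetical struct; the `items` field is assumed for this sketch.
pub struct BetterVec<T> {
    items: Vec<T>,
}

// Allowed since Rust 1.41.0: the foreign trait `From` is parameterized
// by the local type `BetterVec<T>`.
impl<T> From<BetterVec<T>> for Vec<T> {
    fn from(v: BetterVec<T>) -> Vec<T> {
        v.items
    }
}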

For more details, read the stabilization report and the RFC proposing the change.

cargo install updates packages when outdated

With cargo install, you can install binary crates in your system. The command is often used by the community to install popular CLI tools written in Rust.

Starting from Rust 1.41.0, cargo install will also update existing installations of the crate if a new release came out since you installed it. Before this release the only option was to pass the --force flag, which reinstalls the binary crate even if it's up to date.
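
For example, assuming you already have the ripgrep crate installed, the behavior would look roughly like this:

cargo install ripgrep          # since 1.41.0: updates the binary if a newer release exists
cargo install --force ripgrep  # still available: reinstalls even when already up to date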

Less conflict-prone Cargo.lock format

To ensure consistent builds, Cargo uses a file named Cargo.lock, containing dependency versions and checksums. Unfortunately, the way the data was arranged in it caused unnecessary merge conflicts when changing dependencies in separate branches.

Rust 1.41.0 introduces a new format for the file, explicitly designed to avoid those conflicts. This new format will be used for all new lockfiles, while existing lockfiles will still rely on the previous format. You can learn about the choices leading to the new format in the PR adding it.

More guarantees when using Box<T> in FFI

Starting with Rust 1.41.0, we have declared that a Box<T>, where T: Sized, is now ABI-compatible with the C language's pointer (T*) types. So if you have an extern "C" Rust function, called from C, your Rust function can now use Box<T>, for some specific T, while using T* in C for the corresponding function. As an example, on the C side you may have:

// C header

// Returns ownership to the caller.
struct Foo* foo_new(void);

// Takes ownership from the caller; no-op when invoked with NULL.
void foo_delete(struct Foo*);

...while on the Rust side, you would have:

#[repr(C)]
pub struct Foo;

#[no_mangle]
pub extern "C" fn foo_new() -> Box<Foo> {
    Box::new(Foo)
}

// The possibility of NULL is represented with the `Option<_>`.
#[no_mangle]
pub extern "C" fn foo_delete(_: Option<Box<Foo>>) {}

Note however that while Box<T> and T* have the same representation and ABI, a Box<T> must still be non-null, aligned, and ready for deallocation by the global allocator. To ensure this, it is best to only use Boxes originating from the global allocator.

Important: At least at present, you should avoid using Box<T> types for functions that are defined in C but invoked from Rust. In those cases, you should directly mirror the C types as closely as possible. Using types like Box<T> where the C definition is just using T* can lead to undefined behavior.
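
A minimal sketch of that mirroring on the Rust side, assuming the same foo_new/foo_delete functions from the example above were instead defined in C:

// Declarations for C-defined functions: mirror the C pointer type directly
// instead of using Box<Foo>.
extern "C" {
    fn foo_new() -> *mut Foo;
    fn foo_delete(foo: *mut Foo);
}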

To read more, consult the documentation for Box<T>.

Library changes

In Rust 1.41.0, we've made a number of additions to the standard library; see the detailed release notes for the full list of stabilized APIs.

Reducing support for 32-bit Apple targets soon

Rust 1.41.0 is the last release with the current level of compiler support for 32-bit Apple targets, including the i686-apple-darwin target. Starting from Rust 1.42.0, these targets will be demoted to the lowest support tier.

You can learn more about this change in this blog post.

Other changes

There are other changes in the Rust 1.41.0 release: check out what changed in Rust, Cargo, and Clippy. We also have started landing MIR optimizations, which should improve compile time: you can learn more about them in the "Inside Rust" blog post.

Contributors to 1.41.0

Many people came together to create Rust 1.41.0. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

Karl Dubost: Week notes - 2020 w04 - worklog - Python

Mozilla planet - di, 28/01/2020 - 18:32
Monday

Some webcompat diagnosis. Nothing exciting in the issues found, except maybe something about clipping and scrolling. Update: there is a bug! Thanks Daniel

Tuesday

Scoped the requests to the webhook to the actual repo. That way we do not do useless work or, worse, create conflicting label assignments. I struggled a bit with the mock. Basically, for the test I wanted to avoid making the call to GitHub. When I think about it, there are possibly two options:

  • Mocking.
  • or putting a flag in the code to skip the call when running in the test environment. Not sure what the best strategy is.

I also separated some tests which were tied together under the same function, so that it is clearer when one of them is failing.

Phone meeting with the webcompat team this night from 23:00 to midnight. Minutes.

About Mocking

Yes, mocking is evil in unit tests, but it becomes necessary when you have dependencies on external services (that you do not control). A good reminder is that you need to mock the function where it is actually called and not where it is imported from. In my case, I wanted to write a couple of tests for our webhook without actually sending requests to GitHub. The HTTP response from GitHub which interests us would be either:

  • 4**
  • 200

So I created a mock for the successful case, where the call would actually be made. I added comments here to explain.

@patch('webcompat.webhooks.new_opened_issue')
def test_new_issue_right_repo(self, mock_proxy):
    """Test that repository_url matches the CONFIG for public repo.

    Success is:
    payload: 'gracias amigos'
    status: 200
    content-type: text/plain
    """
    json_event, signature = event_data('new_event_valid.json')
    headers = {
        'X-GitHub-Event': 'issues',
        'X-Hub-Signature': 'sha1=2fd56e551f8243a4c8094239916131535051f74b',
    }
    with webcompat.app.test_client() as c:
        mock_proxy.return_value.status_code = 200
        rv = c.post(
            '/webhooks/labeler',
            data=json_event,
            headers=headers
        )
        self.assertEqual(rv.data, b'gracias, amigo.')
        self.assertEqual(rv.status_code, 200)
        self.assertEqual(rv.content_type, 'text/plain')

Wednesday

Asia Dev Roadshow

Sandra has published the summary and the videos of the Developer Roadshow in Asia. This is the talk we gave about Web Compatibility and devtools in Seoul.

Anonymous reporting

Still working on our new anonymous reporting workflow.

Started to work on the PATCH for the issue when it is moderated positively, but before adding code I needed to refactor a bit so we don't end up with a pile of things. I think we can simplify further. Unit tests make it so much easier to move things around. When moving code between modules and files, we break tests, and then we need to fix both the code and the tests so everything works again. But in the end we know that all the essential features are still working.

Skipping tests before completion

I had ideas for tests and I didn't want to forget them, so I wanted to add them to the code right away, so that they would be there, but without making the test suite fail.

I could use pass:

def test_patch_not_acceptable_issue(self):
    pass

but this will be silent, and so you might forget about them. Then I thought, let's use the NotImplementedError

def test_patch_not_acceptable_issue(self):
    raise NotImplementedError

but here everything will break and the test suite will stop working. So not good. I searched and I found unittest.SkipTest

def test_patch_not_acceptable_issue(self):
    """Test for not acceptable issues from private repo.

    payload: 'Moderated issue rejected'
    status: 200
    content-type: text/plain
    """
    raise unittest.SkipTest('TODO')

Exactly what I needed for nose.
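
As a side note, the standard library also has a decorator form that achieves the same result; a minimal sketch, reusing the same test name as above:

import unittest

class TestWebhook(unittest.TestCase):
    # The decorator marks the test as skipped without running its body.
    @unittest.skip('TODO')
    def test_patch_not_acceptable_issue(self):
        """Test for not acceptable issues from private repo."""
        pass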

(env) ~/code/webcompat.com % nosetests tests/unit/test_webhook.py -v

gives:

Extract browser label name. ... ok
Extract 'extra' label. ... ok
Extract dictionary of metadata for an issue body. ... ok
Extract priority label. ... ok
POST without bogus signature on labeler webhook is forbidden. ... ok
POST with event not being 'issues' or 'ping' fails. ... ok
POST without signature on labeler webhook is forbidden. ... ok
POST with an unknown action fails. ... ok
GET is forbidden on labeler webhook. ... ok
Extract the right information from an issue. ... ok
Extract list of labels from an issue body. ... ok
Validation tests for GitHub Webhooks: Everything ok. ... ok
Validation tests for GitHub Webhooks: Missing X-GitHub-Event. ... ok
Validation tests for GitHub Webhooks: Missing X-Hub-Signature. ... ok
Validation tests for GitHub Webhooks: Wrong X-Hub-Signature. ... ok
Test that repository_url matches the CONFIG for public repo. ... ok
Test when repository_url differs from the CONFIG for public repo. ... ok
Test the core actions on new opened issues for WebHooks. ... ok
Test for acceptable issues comes from private repo. ... SKIP: TODO
Test for rejected issues from private repo. ... SKIP: TODO
Test for issues in the wrong repo. ... SKIP: TODO
Test the private scope of the repository. ... ok
Test the public scope of the repository. ... ok
Test the unknown of the repository. ... ok
Test the signature check function for WebHooks. ... ok
POST with PING events just return a 200 and contains pong. ... ok

----------------------------------------------------------------------
Ran 26 tests in 0.102s

OK (SKIP=3)

It doesn't fail the test suite, but at least I know I have work to do. We can perfectly see what is missing.

Test for acceptable issues comes from private repo. ... SKIP: TODO
Test for rejected issues from private repo. ... SKIP: TODO
Test for issues in the wrong repo. ... SKIP: TODO

Thursday and Friday

I dedicated most of my time to advancing the new anonymous reporting workflow. The interesting part of the process was having tests and having to refactor some functions a couple of times so that they made more sense.

Tests are really a safe place to make progress. A new function will break test results, and we work to fix the tests and/or the function until we reach a cleaner place. Then we work on the next modification of the code. Tests become a lifeline in your development.

Another thing I realized is that it is maybe time we create a new module for our issues themselves. It would model and instantiate our issues, and we could use it in multiple places. Currently we have too much back and forth parsing text, accessing dictionary items, etc. We can probably improve this with a dedicated module. Probably for phase 2 of our new workflow project.

Also, I have not been as effective as I wished. The windmill of thoughts about my ex-work colleagues’ future is running wild.

Otsukare!

Categorieën: Mozilla-nl planet

Mozilla Thunderbird: Thunderbird’s New Home

Mozilla planet - di, 28/01/2020 - 17:15

As of today, the Thunderbird project will be operating from a new wholly owned subsidiary of the Mozilla Foundation, MZLA Technologies Corporation. This move has been in the works for a while as Thunderbird has grown in donations, staff, and aspirations. This will not impact Thunderbird’s day-to-day activities or mission: Thunderbird will still remain free and open source, with the same release schedule and people driving the project.

There was a time when Thunderbird’s future was uncertain, and it was unclear what was going to happen to the project after it was decided Mozilla Corporation would no longer support it. But in recent years donations from Thunderbird users have allowed the project to grow and flourish organically within the Mozilla Foundation. Now, to ensure future operational success, following months of planning, we are forging a new path forward. Moving to MZLA Technologies Corporation will not only allow the Thunderbird project more flexibility and agility, but will also allow us to explore offering our users products and services that were not possible under the Mozilla Foundation. The move will allow the project to collect revenue through partnerships and non-charitable donations, which in turn can be used to cover the costs of new products and services.

Thunderbird’s focus isn’t going to change. We remain committed to creating amazing, open source technology focused on open standards, user privacy, and productive communication. The Thunderbird Council continues to steward the project, and the team guiding Thunderbird’s development remains the same.

Ultimately, this move to MZLA Technologies Corporation allows the Thunderbird project to hire more easily, act more swiftly, and pursue ideas that were previously not possible. More information about the future direction of Thunderbird will be shared in the coming months.

Update: A few of you have asked how to make a contribution to Thunderbird under the new corporation, especially when using the monthly option. Please check out our updated site at give.thunderbird.net!

Categorieën: Mozilla-nl planet

The Mozilla Blog: Mapping the power of Mozilla’s Rebel Alliance

Mozilla planet - di, 28/01/2020 - 08:03

At Mozilla, we often speak of our contributor communities with gratitude, pride and even awe. Our mission and products have been supported by a broad, ever-changing rebel alliance — full of individual volunteers and organizational contributors — since we shipped Firefox 1.0 in 2004. It is this alliance that comes up with new ideas, innovative approaches and alternatives to the ongoing trends towards centralisation and an internet that doesn’t always work in the interests of people.

But we’ve been unable to speak in specifics. And that’s a problem, because the threats to the internet we love have never been greater. Without knowing the strength of the various groups fighting for a healthier internet, it’s hard to predict or achieve success.

We know there are thousands around the globe who help build, localize, test, debug, deploy, and support our products and services. They help us advocate for better government regulation and ‘document the web’ through the Mozilla Developer Network. They speak about Mozilla’s mission and privacy-preserving products and technologies at conferences around the globe. They help us host events too, like this year’s 10th anniversary of MozFest, where participants hacked on how to create a multi-lingual, equitable internet and so much more.

With the publication of the Mozilla and the Rebel Alliance report, we can now speak in specifics. And what we have to say is inspiring. As we rise to the challenges of today’s internet, from the injustices of the surveillance economy to widespread misinformation and the rise of untrustworthy AI, we take heart in how powerful we are as a collective.

Making the connections

In 2018, well over 14,000 people supported Mozilla by contributing their expertise, work, creativity, and insights. Between 2017 and 2019, more than 12,000 people contributed to Firefox. These counts only consider those people whose contributions we can see, such as through Bugzilla, GitHub, or Kitsune, our support platform. They don’t include non-digital contributions. Firefox and Gecko added almost 3,500 new contributors in 2018. The Mozilla Developer Network added over 1,000 in 2018. 52% of all traceable contributions in 2018 came from individual volunteers and commercial contributors, not employees.

Firefox Community Health

The report’s network graphs demonstrate that there are numerous Mozilla communities, not one. Many community members participate across multiple projects: core contributors participate in an average of 4.3 of them. Our friends at Analyse & Tal helped create an interactive version of Mozilla’s contributor communities, highlighting common patterns of contribution and distinguishing between levels of contribution by project. Also, it’s important to note what isn’t captured in the report: the value of social connections, the learning and the mutual support people find in our communities.

We can make a reasonable estimate of the discrete value of some contributions from our rebel alliance. For example, community contributions comprise 58% of all filed Firefox regression bugs, which are particularly costly in their impact on the number of people who use and keep using the browser.

But the real value in our rebel alliance and their contributions is in how they inform and amplify our voice. The challenges around the state of the internet are daunting: disinformation, algorithmic bias and discrimination, the surveillance economy and greater centralisation. We believe this report shows that with the creative strength of our diverse contributor communities, we’re up for the fight.

If you’d like to contribute yourself: check out various opportunities here or dive right into one of our Activate Campaigns!

The post Mapping the power of Mozilla’s Rebel Alliance appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Adrian Gaudebert: react-content-marker Released – Marking Content with React

Mozilla planet - di, 28/01/2020 - 07:25

Last year, in a React side-project, I had to replace some content in a string with HTML markup. That is not a trivial thing to do with React, as you can't just put HTML as a string in your content, unless you want to use dangerouslySetInnerHTML — which I don't. So, I hacked a little code to smartly split my string into an array of sub-strings and DOM elements.

More recently, while working on Translate.Next — the rewrite of Pontoon's translate page to React — I stumbled upon the same problem. After looking around the Web for a tool that would solve it, and coming up empty-handed, I decided to write my own and make it a library.

Introducing react-content-marker v1.0

react-content-marker is a library for React to mark content in a string based on rules. These rules can be simple strings or regular expressions. Let's look at an example.

Say you have a blob of text, and you want to make the numbers in that text more visible, for example by making them bold.

const content = 'The fellowship had 4 Hobbits but only 1 Dwarf.';

Matching numbers can be done with a simple regex: /(\d+)/. If we turn that into a parser:

const parser = {
    rule: /(\d+)/,
    tag: x => <strong>{ x }</strong>,
};

We can now use that parser to create a content marker, and use it to enhance our content:

import createMarker from 'react-content-marker';

const Marker = createMarker([parser]);

render(<Marker>{ content }</Marker>);

This will show:

The fellowship had 4 Hobbits but only 1 Dwarf. (with the numbers now wrapped in <strong> elements)

Hurray!

Advanced usage

Passing parsers

The first thing to note is that you can pass any number of parsers to the createMarker function, and they will all be called in turn. The order of the parsers is very important though, because content that has already been marked will not be parsed again. Let's look at another example.

Say you have a rule that matches content between brackets: /({.*})/, and a rule that matches content between brackets that contains only capital letters: /({[A-Z]+})/. Now let's say you are marking this content: I have {CATCOUNT} cats. Whichever rule you pass first will match the content between brackets, and the second rule will not apply. You thus need to make sure that your rules are ordered so that the most important ones come first. Generally, that means you want to have the more specific rules first.

The reason why this happens is that, behind the scenes, the matched content is turned into a DOM element, and parsers ignore non-string content. With the previous example, the initial string, I have {CATCOUNT} cats, would be turned into ['I have ', <mark>{CATCOUNT}</mark>, ' cats'] after the first parser is called. The second one then only looks at 'I have ' and ' cats', which do not match.
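
A minimal sketch of that ordering, using the createMarker API from above (the mark elements and the className are my own illustration, not something the library prescribes):

// More specific rule first: placeholders containing only capital letters...
const capitalsParser = {
    rule: /({[A-Z]+})/,
    tag: x => <mark className="placeholder">{ x }</mark>,
};

// ...then the general catch-all rule for anything between brackets.
const bracketsParser = {
    rule: /({.*})/,
    tag: x => <mark>{ x }</mark>,
};

const Marker = createMarker([capitalsParser, bracketsParser]);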

Using regex

The second important thing to know relates to regex. You might have noticed that I put parentheses in my examples above: they are required for the algorithm to capture content. But that also gives you more flexibility: you can use a regex that matches some content that you do not want to mark. Let's say you want to match only the name of someone who's being greeted, with this rule: /hello (\w+)/i. Applying it to Hello Adrian will only mark the Adrian part of that content.
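
As a sketch, such a parser could look like this (the mark element is again just an illustration):

// Only the captured group (the name) gets marked, not the word "hello".
const greetingParser = {
    rule: /hello (\w+)/i,
    tag: x => <mark>{ x }</mark>,
};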

Sometimes, however, you need to use more complex regex that include several groups of parentheses. When that's the case, by default react-content-marker will mark the content of the last non-null capturing group. In such cases, you can add a matchIndex number to your parser: that index will be used to select the capture group to mark.

Here's a simple example:

const parser = {
    rule: /(hello (world|folks))/i,
    tag: x => <b>{ x }</b>,
};

Applying this rule to Hello World will, by default, only make the "World" part bold, because the last non-null capture group is the one that gets marked. If we want to instead make the whole match bold, we'll have to use matchIndex:

const parser = {
    rule: /(hello (world|folks))/i,
    matchIndex: 0,
    tag: x => <b>{ x }</b>,
};

Now our entire string will correctly be made bold: Hello World.

Advanced example

If you're interested in looking at an advanced usage example of this library, I recommend you check out how we use it in Pontoon, Mozilla's localization platform. We have a long list of parsers there, and they have a lot of edge-cases.

Installation and stuff

react-content-marker is available on npm, so you can easily install it with your favorite javascript package manager:

npm install -D react-content-marker
# or
yarn add react-content-marker

The code is released under the BSD 3-Clause License, and is available on GitHub. If you hit any problems with it, or have a use case that is not covered, please file an issue. And of course, you are always welcome to contribute a patch!

I hope this is useful to someone out there. It has been for me at least, on Pontoon and on several React-based side-projects. I like how flexible it is, and I believe it does more than any other similar tools I could find around the Web.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR19b1 available

Mozilla planet - di, 28/01/2020 - 07:00
TenFourFox Feature Parity Release 19 beta 1 is now available (downloads, hashes, release notes). I was originally going to do more iteration on Reader mode in FPR19, but in a possible recurrence of the issue that broke SourceForge downloads temporarily, a user reported on Tenderapp they had a site that was failing in the same way.

On the test system I was able to reproduce the problem and it was due to the selected cipher having insufficient cryptographic strength to pass HTTP/2 TLS profile validation. The selected cipher was one I added as a stopgap for FPR7 to fix another site which was still working (and did not use HTTP/2, hence it didn't exhibit the issue). Disabling that cipher restored the new failing site, but caused the site I put the workaround for in FPR7 to fail, so in no situation could I get both sites to be happy with the set available. Although I didn't really want to do this, the only real solution here was to upgrade NSS, the underlying cryptographic library, to add additional more modern ciphers to replace the older one that now needed to be reverted. With this in place and some other fixes, now both sites work, and this probably fixes others.

The reason I was reticent to update NSS (and the underlying NSPR library) was because of some custom changes and because I was worried changes in cipher coverage would break compatibility. However, there wasn't a lot of choice here, so I manually patched up our custom AltiVec-accelerated NSPR to a current release and spliced in a newer NSS overlaid with our build system changes. I tested this on a few sites I knew to be using old crypto libraries and they still seemed to connect fine, and as a nice side benefit some of the more modern ciphers are more efficient and therefore improve throughput a bit. It also makes the likelihood of successfully updating TenFourFox to support TLS 1.3 much higher; if this sticks, I may attempt this as soon as FPR20.

There are a couple sundry minor changes to be implemented at final release, mostly minor bug fixes, but I want to get this beta in testing as quickly as possible within the shrinking rapid release timeframe. I have otherwise intentionally limited the scope of FPR19 to mostly just the crypto upgrade so that we have a clear regression range. If you notice sites have stopped being accessible in FPR19, please verify they are working in FPR18 (people say "I remember it worked," but sites change more than TenFourFox does, so please check and save us some time here), and if it does indeed work in FPR18 report it in the comments so I can analyse the problem. I am very unlikely to revert this change given that it's necessary going forward and probably the best of the available options, but if I can add exceptions that don't compromise overall security I'm willing to do so in the name of supporting backwards compatibility with sites the browser used to be able to access. FPR19 goes final parallel with Firefox 73 and 68.5 somewhere around February 11.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 323

Mozilla planet - di, 28/01/2020 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is test-case, a framework for parameterized testing.

Thanks to Synek317 for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

261 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is basically Haskell's athletic younger brother. Not as intellectual, but still smart and lifts weights.

icefox, Jan 22 in community-Discord #games-and-graphics

Thanks to Duane for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.

Categorieën: Mozilla-nl planet

Patrick Cloke: Squashing Django Migrations

Mozilla planet - ma, 27/01/2020 - 22:34

The Django migration system is great for modifying your database schema after a database is live. If you’re like me, you quickly end up with many 10s or 100s of migrations. There’s nothing inherently wrong with this, but there’s a few cases where it gets tiresome to …

Categorieën: Mozilla-nl planet

Wladimir Palant: Avast's broken data anonymization approach

Mozilla planet - ma, 27/01/2020 - 16:27

Avast used to collect the browsing history of their users without informing them and turn this data into profits via their Jumpshot subsidiary. After a public outcry and considerable pressure from browser vendors they decided to change their practices, so that only data of free antivirus users would be collected and only if these explicitly opt in. Throughout the entire debacle Avast maintained that the privacy impact wasn’t so wild because the data is “de-identified and aggregated,” so that Jumpshot clients never get to see personally identifiable information (PII).

Conveyor belt putting false noses on avatars in a futile attempt of masking their identity
Symbolic image, not the actual Avast De-identification Engine

The controversy around selling user data didn’t come up just now. Back in 2015 AVG (which was acquired by Avast later) changed their privacy policy in a way that allowed them to sell browser history data. At that time Graham Cluley predicted:

But let’s not kid ourselves. Advertisers aren’t interested in data which can’t help them target you. If they really didn’t feel it could help them identify potential customers then the data wouldn’t have any value, and they wouldn’t be interested in paying AVG to access it.

From what I’ve seen now, his statement was spot on and Avast’s data anonymization is nothing but a fig leaf.

Overview of Avast’s “de-identification”

No technical details on the “de-identification” were shared, neither in Avast’s public statements, nor when they communicated with me and I asked about it. My initial conclusion was that the approach is considered a company secret, or maybe that it simply doesn’t exist. So imagine my surprise when I realized that the approach was actually well-documented and public. In fact, Avast Forum features a post on it written in 2015 by none other than the then-CTO Ondřej Vlček. There is an example showing how it works:

With a shopping site like Amazon, the URL before stripping contains some PII:

https://www.amazon.com/gp/buy/addressselect/handlers/edit-address.html?ie=UTF8&addressID=jirptvmsnlp&addressIdToBeDeleted=&enableDeliveryPreferences=1&from=&isBillingAddress=&numberOfDistinctItems=1&showBackBar=0&skipFooter=0&skipHeader=0&hasWorkingJavascript=1

The algorithm automatically replaces the PII with the word REMOVED in order to protect our users’ privacy, like this:

https://www.amazon.com/gp/buy/addressselect/handlers/edit-address.html?ie=UTF8&addressID=REMOVED&addressIdToBeDeleted=&enableDeliveryPreferences=1&from=&isBillingAddress=&numberOfDistinctItems=1&showBackBar=0&skipFooter=0&skipHeader=0&hasWorkingJavascript=1

So when you edit your shipping address on Amazon, there will be a number of parameters in the page address collected by Avast. Only the addressID parameter is actually related to your identity however, so this one will be removed. But how does Avast know that addressID is the only problematic parameter here?

The patented data scrubbing approach

The forum post doesn’t document the decision process. Turns out however, there is US patent 2016 / 0203337 A1 filed by Jumpshot Inc. in January 2016. As it is the nature of all patents, their contents are publicly visible. This particular patent describes the methodology for removing private information from “clickstream data” (that’s Avast speak for your browsing history along with any context information they can get).

Most of the patent is trivial arrangements. It describes how Avast passes around browsing history data they receive from a multitude of users, even going as far as documenting parsing of web addresses. But the real essence of the patent is contained in merely a few sentences:

If there are many users which saw the same value of parameter, then it is safe to consider the value to be public. If a majority of values of a parameter are public, it is safe to conclude that parameter does not contain PII. On the other hand, if it is determined that a vast majority of values of a parameter are seen by very few users, it may be likely that that the parameter contains private information.

So if in their data a particular parameter typically has only values associated with a specific user (like addressID in the example above), that parameter is considered to carry personal information. Other parameters have values that are seen by many users (like hasWorkingJavascript in the example above), so the parameter is considered unproblematic and left unchanged. That looks like a feasible approach that will be able to scale with the size of the internet and adapt to changes automatically. And yet it doesn’t really solve the problem.
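
To illustrate the heuristic, here is a minimal sketch in Python of how such a classifier could work. It is my own reconstruction from the patent's description, not Avast's actual code; the data layout and the 50% threshold are assumptions:

from collections import defaultdict

def classify_parameters(observations, uniqueness_threshold=0.5):
    """observations: iterable of (user_id, param_name, param_value) tuples."""
    users_per_value = defaultdict(set)
    for user_id, name, value in observations:
        users_per_value[(name, value)].add(user_id)

    # For each parameter, count how many of its values were seen by only one user.
    counts = defaultdict(lambda: [0, 0])  # name -> [single-user values, total values]
    for (name, _value), users in users_per_value.items():
        counts[name][1] += 1
        if len(users) == 1:
            counts[name][0] += 1

    # A parameter whose values are mostly unique to single users is treated as PII.
    return {
        name: 'REMOVED' if single / total > uniqueness_threshold else 'keep'
        for name, (single, total) in counts.items()
    }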

Side-note: How is their approach different from this patent filed by Amazon several years earlier that they actually cite? Beats me. I’m not a patent lawyer but I strongly suspect that they will lose this patent should there ever be a disagreement with Amazon.

How Amazon would deanonymize this data

The example used by Ondřej Vlček makes it very obvious who Avast tries to protect against. I mean, the address identifier they removed there is completely useless to me. Only Amazon, with access to their data, could turn that parameter value into user’s identity. So the concern is that Jumpshot customers (and Amazon could be one) owning large websites could cross-reference Jumpshot data with their own to deanonymize users. Their patent confirms this concern when explaining implicit private information.

But what if Amazon cannot see that addressID parameter any more? They can no longer determine directly which user the browsing history belongs to. But they could still check which users edited their address at this specific time. That’s probably going to be too many users at Amazon’s scale, so they will have to check which users edited their address at time X and then completed the purchase at time Z. That should be sufficient to identify a single user.

And if Jumpshot doesn’t expose request times to their clients or merely shows the dates without the exact times? Still, somebody like Amazon could for example take all the products viewed in a particular browser history and check it against their logs. Each individual product has been viewed by a large number of users, yet the combination of them is a sure way to identify a single user. Mission accomplished, anonymization failed.

How everybody else could deanonymize this data

Not everybody has access to the same amounts of data as Amazon or Google. Does this mean that in most scenarios Jumpshot data can be considered properly anonymized? Unfortunately not. Researchers already realized that social media contain huge amounts of publicly accessible data, which is why their deanonymization demonstrations such as this one focused on cross-referencing “anonymous” browsing histories with social media.

And if you think about it, it’s really not complicated. For example, if Avast were collecting my data, they would have received the web address https://twitter.com/pati_gallardo/status/1219582233805238272 which I visited at some point. This address contains no information about me, plenty of other people visited it as well, so it would have been passed on to Jumpshot clients unchanged. And these could retrieve the list of likes for the post. My Twitter account is one of the currently 179 who’s on that list.

Is that a concern? Not if it’s only one post. However, somebody could check all Twitter posts that I visited in a day for example, checking the list of likes for each of them, counting how often each user appears on these lists. I’m fairly certain that my account will be by far the most common one.

And that’s not the only possible approach of course. People usually get to a Twitter post because they follow either the post author or somebody who retweeted the post. The intersection of the extended follower groups should become pretty small for a bunch of Twitter posts already.

I merely used Twitter as an example here. In case of Facebook or Instagram the publicly available data would in most cases also suffice to identify the user that the browsing history belongs to. So – no, the browsing history data collected by Avast from their users and sold by Jumpshot is by no means anonymous.

What about aggregation?

But there is supposedly aggregation as well. In his forum post, Ondřej Vlček explicitly describes how data from all users is combined on a per-domain and per-URL basis. He says:

To further protect our users’ privacy, we only accept websites where we can observe at least 20 users.

And also:

These aggregated results are the only thing that Avast makes available to Jumpshot customers and end users.

This actually sounds good and could resolve the issues with the data anonymization. If it is true that is. On the now removed Jumpshot page advertising its “clickstream data” product it says:

Get ready to go deep. Dive in to understand the complete path to purchase, right down to individual products.

So at least some Jumpshot customers would get not only aggregated statistics but also the exact path through a website. Could that also be aggregated data? Yes, but it would require finding at least 20 users taking exactly the same path. It would mean that lots of data would have to be thrown away because users take an unusual path – the very data which could provide the insights advertised here. So while aggregated data here isn’t impossible, it’s also pretty unlikely.

A recent article published by PCMag also makes me suspect that the claims regarding aggregation aren’t entirely true. Their research indicates that some Jumpshot customers could access browser histories of individual users. Given everything else we know so far, I consider these claims credible.

Categorieën: Mozilla-nl planet

Daniel Stenberg: curl ootw: -k really means insecure

Mozilla planet - ma, 27/01/2020 - 09:57

(ootw stands for option of the week)

Long form: --insecure. The title is a little misleading, but this refers to the lowercase -k option.

This option has existed in the curl tool since the early days, and has been frequently misused ever since. Or I should perhaps call it “overused”.

Truly makes a transfer insecure

The name of the long form of this option was selected and chosen carefully to properly signal what it does: it makes a transfer insecure that could otherwise be done securely.

The option tells curl to skip the verification of the server’s TLS certificate – it will skip the cryptographic certificate verification (that it was signed by a trusted CA) and it will skip other certificate checks, like that it was made for the host name curl connects to and that it hasn’t expired etc.

For SCP and SFTP transfers, the option makes curl skip the known hosts verification. SCP and SFTP are SSH-based and they don’t use the general CA-based PKI setup.

If the transfer isn’t using TLS or SSH, then this option has no effect or purpose as such transfers are already insecure by default.

Unable to detect MITM attacks

When this option is used, curl cannot detect man-in-the-middle attacks, as it no longer checks that it actually connects to the correct server. That is insecure.

Slips into production

One of the primary reasons to avoid this option in your scripts as far as possible is that it is very easy to let it slip through and get shipped in production scripts. At that point you have practically converted perfectly secure transfers into very insecure ones.

Instead, you should work on getting an updated CA cert bundle that holds certificates so that you can verify your server. For test and local servers you can get the server cert and use that to verify it subsequently.
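
As a hedged example of that approach for a self-signed test server (the host and file names are placeholders), you could extract the server’s certificate once and then point curl at it:

# save the server's certificate to a file
openssl s_client -connect test.example.com:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > test-server.pem

# verify against that certificate instead of disabling verification
curl --cacert test-server.pem https://test.example.com/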

Optionally, if you are running a copy of your production setup on a test or local server and just have a server name mismatch, you can fix that in your test scripts too, simply by telling curl which server name to use.

libcurl options

In libcurl speak, the -k functionality is accomplished with two different options: CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER. You should not allow your application to set them to 0.

Since many bindings to libcurl use the same option names you can also find PHP programs etc setting these to zero, and you should always treat that as a warning sign!
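
For illustration only, this is roughly what -k corresponds to in libcurl; the pattern is shown so it is easy to recognize and remove, since leaving both options at their defaults keeps verification enabled:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    /* the libcurl equivalent of curl -k / --insecure: do NOT ship this */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}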

Why does it exist?

Every now and then people suggest we should remove this option.

It does serve a purpose in the chicken-and-egg scenario where you don’t have a proper certificate locally to verify your server against, and sometimes the server you want to quickly poll is wrongly configured so that a proper check fails.

How do you use it?

curl -k https://example.com

You’d often use it in combination with -v to view the TLS and certificate information in the handshake so that you can fix it and remove -k again.

Related options

--cacert tells curl where to find the CA cert bundle. If you use an HTTPS proxy and want the -k functionality for the proxy itself, you want --proxy-insecure.

An alternative approach to using a CA cert bundle for TLS based transfers, is to use public key pinning, with the --pinnedpubkey option.
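
A short example of that option (the hash below is a placeholder, not a real key fingerprint):

curl --pinnedpubkey 'sha256//[base64-hash-of-the-public-key]' https://example.com/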

Categorieën: Mozilla-nl planet

The Mozilla Blog: Firefox Team Looks Within to Lead Into the Future

Mozilla planet - vr, 24/01/2020 - 21:14

For Firefox products and services to meet the needs of people’s increasingly complex online lives, we need the right organizational structure. One that allows us to respond quickly as we continue to excel at delivering existing products and develop new ones into the future.

Today, I announced a series of changes to the Firefox Product Development organization that will allow us to do just that, including the promotion of long-time Mozillian Selena Deckelmann to Vice President, Firefox Desktop.

“Working on Firefox is a dream come true,” said Selena Deckelmann, Vice President, Firefox Desktop. “I collaborate with an inspiring and incredibly talented team, on a product whose mission drives me to do my best work. We are all here to make the internet work for the people it serves.”

Selena Deckelmann, VP Firefox Desktop

During her eight years with Mozilla, Selena has been instrumental in helping the Firefox team address over a decade of technical debt, beginning with transitioning all of our build infrastructure over from Buildbot. As Director of Security and then Senior Director, Firefox Runtime, Selena led her team to some of our biggest successes, ranging from big infrastructure projects like Quantum Flow and Project Fission to key features like Enhanced Tracking Protection and new services like Firefox Monitor. In her new role, Selena will be responsible for growth of the Firefox Desktop product and search business.

Rounding out the rest of the Firefox Product Development leadership team are:

Joe Hildebrand, who moves from Vice President, Firefox Engineering into the role of Vice President, Firefox Web Technology. He will lead the team charged with defining and shipping our vision for the web platform.

James Keller, who currently serves as Senior Director, Firefox User Experience, will help us better navigate the difficult trade-off between empowering teams and maintaining a consistent user journey. This work is critically important because, since the Firefox Quantum launch in November 2017, we have been focused on putting the user back at the center of our products and services. That begins with a coherent, engaging and easy-to-navigate experience in the product.

I’m extraordinarily proud to have such a strong team within the Firefox organization that we could look internally to identify this new leadership team.

These Mozillians and I will eventually be joined by two additional team members: one who will head up our Firefox Mobile team and another who will lead the team that has been driving our paid subscription work. Searches for both roles will be posted.

Alongside Firefox Chief Technology Officer Eric Rescorla and Vice President, Product Marketing Lindsey Shepard, I look forward to working with this team to meet Mozilla’s mission and serve internet users as we build a better web.

You can download Firefox here.

The post Firefox Team Looks Within to Lead Into the Future appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Daniel Stenberg: Coming to FOSDEM 2020

Mozilla planet - vr, 24/01/2020 - 16:48

I’m going to FOSDEM again in 2020; this will be my 11th consecutive year traveling to this awesome conference in Brussels, Belgium.

At this my 11th FOSDEM visit I will also deliver my 11th FOSDEM talk: “HTTP/3 for everyone“. It will happen at 16:00 Saturday the 1st of February 2020, in Janson, the largest room on the campus. (My third talk in the main track.)

For those who have seen me talk about HTTP/3 before, this talk will certainly have overlaps, but I’m also always refreshing and improving the slides, and I update them as the process moves on, things change and I get feedback. I spoke about HTTP/3 already at FOSDEM 2019 in the Mozilla devroom (at which time there was a looong line of people who tried, but couldn’t get a seat in the room) – but I think you’ll find that there are enough changes and improvements in this talk to keep you entertained this year as well!

If you come to FOSDEM, don’t hesitate to come say hi and grab a curl sticker or two – I intend to bring and distribute plenty – and talk curl, HTTP and Internet transfers with me!

You will most likely find me at my talk, in the cafeteria area or at the wolfSSL stall. (DM me on twitter to pin me down! @bagder)

Categorieën: Mozilla-nl planet
