Mozilla Nederland

The Dutch Mozilla community

Mozilla Privacy Blog: Mozilla statement on the Christchurch terror attack

Mozilla planet - Fri, 15/03/2019 - 22:19

Like millions of people around the world, the Mozilla team has been deeply saddened by the news of the terrorist attack against the Muslim community in Christchurch, New Zealand.

The news of dozens of people killed and injured while praying in their place of worship is truly upsetting and absolutely abhorrent.

This is a call to all of us to look carefully at how hate spreads and is propagated, and to stand against it.

The post Mozilla statement on the Christchurch terror attack appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

Mark Surman: VP search update — and Europe

Mozilla planet - Fri, 15/03/2019 - 15:03

A year ago, Mozilla Foundation started a search for a VP, Leadership Programs. The upshot of the job: work with people from around the world to build a movement to ensure our digital world stays open, healthy and humane. A year on, we’re in the second round of this search — finding the person to drive this work isn’t easy. However, we’re getting closer, so it’s time for an update.

At a nuts and bolts level, the person in this role will support teams at Mozilla that drive our thought leadership, fellowships and events programs. This is a great deal of work, but fairly straightforward. The tricky part is helping all the people we touch through these programs connect up with each other and work like a movement — driving to real outcomes that make digital life better.

While the position is global in scope, it will be based in Europe. This is in part because we want to work more globally, which means shifting our attention out of North America and towards African, European, Middle Eastern and South Asian time zones. Increasingly, it is also because we want to put a significant focus on Europe itself.

Europe is one of the places where a vision of an internet that balances public and private interests, and that respects people’s rights, has real traction. This vision spans everything from protecting our data to keeping digital markets open to competition to building a future where we use AI responsibly and ethically. If we want the internet to get better globally then learning from, and being more engaged with, Europe and Europeans has to be a central part of the plan.

The profile for this position is quite unique. We’re looking for someone who can think strategically and represent Mozilla publicly, while also leading a distributed team within the organization; has a deep feel for both the political and technical aspects of digital life; and shares the values outlined in the Mozilla Manifesto. We’re also looking for someone who will add diversity to our senior leadership team.

In terms of an update: we retained the recruiting firm Perrett Laver in January to lead the current round of the search. We recently met with the recruiters to talk over 50 prospective candidates. There are some great people in there — people coming from the worlds of internet governance, open content, tech policy and the digital side of international development. We’re starting interviews with a handful of these people over the coming weeks — and still keeping our ear to the ground for a few more exceptional candidates as we do.

Getting this position filled soon is critical. We’re at a moment in history where the world really needs more people rolling up their sleeves to create a better digital world — this position is about gathering and supporting these people. The good news: I’m starting to feel optimistic that we can get this position filled by the middle of 2019.

PS. If you want to learn more about this role, here is the full recruiting package.

The post VP search update — and Europe appeared first on Mark Surman.

Categories: Mozilla-nl planet

Dave Townsend: Bridging an internal LAN to a server’s Docker containers over a VPN

Mozilla planet - Thu, 14/03/2019 - 21:07

I recently decided that the basic web hosting I was using wasn't quite as configurable or powerful as I would like, so I have started paying for a VPS and am slowly moving all my sites over to it. One of the things I decided was that I wanted the majority of services it ran to be running under Docker. Docker has its pros and cons, but the thing I like about it is that I can define what services run, how they run and where they store all their data in a single place, separate from the rest of the server. So now I have a /srv/docker directory which contains everything I need to back up to ensure I can reinstall all the services easily, mostly regardless of the rest of the server.

As I was adding services I quickly realised I had a problem to solve. Some of the services were obviously external facing, nginx for example. But a lot of them, web management interfaces and the like, should not be exposed to the public internet yet still needed to be accessible. So I wanted to figure out how to easily access them remotely.

I considered just setting up port forwarding or a SOCKS proxy over ssh. But this would mean having to connect to ssh whenever needed, and either defining all the ports and docker IPs (which I would then have to make static) in the ssh config or switching proxies in my browser whenever I needed to access a service; it would also only really support web protocols.

Exposing them publicly anyway but requiring passwords was another option, though I wasn't a big fan of this either. It would require configuring an nginx reverse proxy or something similar every time I added a new service, and I thought I could come up with something better.

At first I figured a VPN was going to be overkill, but eventually I decided that once set up it would give me the best experience. I also realised I could then set up a persistent VPN from my home network to the VPS so when at home, or when connected to my home network over VPN (already set up) I would have access to the containers without needing to do anything else.

Alright, so I have a home router that handles two networks, the LAN and its own VPN clients. Then I have a VPS with a few docker networks running on it. I want them all to be able to access each other, and as a bonus I want to be able to just use names to connect to the docker containers; I don't want to have to remember static IP addresses. This is essentially just using a VPN to bridge the networks, which is covered in many other places, but I had to visit so many places to put all the pieces together that I thought I'd explain it in my own words, if only so I have a single place to read when I need to do this again.

In my case the networks behind my router are 10.10.* for the local LAN and 10.11.* for its VPN clients. On the VPS I configured my docker networks to be under 10.12.*.

0. Configure IP forwarding.

The zeroth step is to make sure that IP forwarding is enabled and not firewalled any more than it needs to be on both router and VPS. How you do that will vary and it’s likely that the router will already have it enabled. At the least you need to use sysctl to set net.ipv4.ip_forward=1 and probably tinker with your firewall rules.
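For illustration, on a typical Linux host that boils down to something like the following (file names and firewall details vary by distribution, so treat this as a sketch rather than a recipe):

# enable forwarding immediately
sysctl -w net.ipv4.ip_forward=1

# and persist it across reboots, e.g. in /etc/sysctl.d/99-ip-forward.conf
net.ipv4.ip_forward = 1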

1. Set up a basic VPN connection.

First you need to set up a simple VPN connection between the router and the VPS. I ended up making the VPS the server since I can then connect directly to it from another machine either for testing or if my home network is down. I don’t think it really matters which is the “server” side of the VPN, either should work, you’ll just have to invert some of the description here if you choose the opposite.

There are many, many tutorials on doing this so I'm not going to talk about it much. The one thing to note is that you must be using certificate authentication (most tutorials cover setting this up), so the VPS can identify the router by its common name. Don't add any "route" configuration yet. You could use redirect-gateway in the router config to make some of this easier, but that would then mean that all your internet traffic (from everything on the home LAN) goes through the VPN, which I didn't want. I set the VPN addresses to be in 10.12.10.* (this subnet is not used by any of the docker networks).
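For orientation, a stripped-down OpenVPN server config for just this step might look something like the lines below (the certificate and key paths are placeholders, and a real config will need more than this):

dev tun
server 10.12.10.0 255.255.255.0
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh.pem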

Once you’re done here the router and the VPS should be able to ping their IP addresses on the VPN tunnel. The VPS IP is 10.12.10.1, the router’s gets assigned on connection. They won’t be able to reach beyond that yet though.

2. Make the docker containers visible to the router.

Right now the router isn’t able to send packets to the docker containers because it doesn’t know how to get them there. It knows that anything for 10.12.10.* goes through the tunnel, but has no idea that other subnets are beyond that. This is pretty trivial to fix. Add this to the VPS’s VPN configuration:

push "route 10.12.0.0 255.255.0.0"

When the router connects to the VPS the VPN server will tell it that this route can be accessed through this connection. You should now be able to ping anything in that network range from the router. But neither the VPS nor the docker containers will be able to reach the internal LANs. In fact if you try to ping a docker container’s IP from the local LAN the ping packet should reach it, but the container won’t know how to return it!

3. Make the local LAN visible to the VPS.

This took me a while to figure out. I'm not quite sure why, but you can't just add something similar to the VPN's client configuration. Instead the server side has to know in advance what networks a client is going to give access to. So again you're going to be modifying the VPS's VPN configuration. First the simple part. Add this to the configuration file:

route 10.10.0.0 255.255.0.0
route 10.11.0.0 255.255.0.0

This makes OpenVPN modify the VPS's routing table, telling it that it can direct all traffic to those networks to the VPN interface. This isn't enough though. The VPN service will receive that traffic but not know where to send it on to. There could be many clients connected; which one has those networks? You have to add some client-specific configuration. Create a directory somewhere and add this to the configuration file:

client-config-dir /absolute/path/to/directory

Do NOT be tempted to use a relative path here. It took me more time than I'd like to admit to figure out that when running as a daemon the OpenVPN service won't be able to find it if it is a relative path. Now, create a file in the directory; the filename must be exactly the common name of the router's VPN certificate. Inside it put this:

iroute 10.10.0.0 255.255.0.0
iroute 10.11.0.0 255.255.0.0

This tells the VPN server that this is the client that can handle traffic to those networks. So now everything should be able to ping everything else by IP address. That would be enough if I didn’t also want to be able to use hostnames instead of IP addresses.

4. Setting up DNS lookups.

Getting this bit to work depends on what DNS server the router is running. In my case (and many cases) this was dnsmasq, which makes this fairly straightforward. The first step is setting up a DNS server that will return results for queries for the running docker containers. I found the useful dns-proxy-server project. It runs as the default DNS server on the VPS; for lookups it looks for docker containers with a matching hostname and, if none is found, forwards the request on to an upstream DNS server. The VPS can now find a docker container's IP address by name.

For the router (and so anything on the local LAN) to be able to look them up it needs to be able to query the DNS server on the VPS. This meant giving the DNS container a static IP address (the only one this entire setup needs!) and making all the docker hostnames share a domain suffix. Then add this line to the router’s dnsmasq.conf:

server=/<domain>/<dns ip>

This tells dnsmasq that any time it receives a query for *.<domain> it passes the request on to the VPS's DNS container.

5. Done!

Everything should be set up now. Enjoy your direct access to your docker containers. Sorry this got long but hopefully it will be useful to others in the future.

Categories: Mozilla-nl planet

The Mozilla Blog: Thank you, Denelle Dixon

Mozilla planet - Thu, 14/03/2019 - 21:00

I want to take this opportunity to thank Denelle Dixon for her partnership, leadership and significant contributions to Mozilla over the last six years.

Denelle joined Mozilla Corporation in September 2012 as an Associate General Counsel and rose through the ranks to lead our global business and operations as our Chief Operating Officer. Next month, after an incredible tour of duty at Mozilla, she will step down as a full-time Mozillian to join the Stellar Development Foundation as their Executive Director and CEO.

As a key part of our senior leadership team, Denelle helped to build a stronger, more resilient Mozilla, including leading the acquisition of Pocket, orchestrating our major partnerships, and helping refocus us to unlock the growth opportunities ahead. Denelle has had a huge impact here — on our strategy, execution, technology, partners, brand, culture, people, the list goes on. Although I will miss her partnership deeply, I will be cheering her on in her new role as she embarks on the next chapter of her career.

As we conduct a search for our next COO, I will be working more closely with our business and operations leaders and teams as we execute on our strategy that will give people more control over their connected lives and help build an Internet that’s healthier for everyone.

Thank you, Denelle, for everything, and all the best on your next adventure!

The post Thank you, Denelle Dixon appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Fast, Bump-Allocated Virtual DOMs with Rust and Wasm

Mozilla planet - Thu, 14/03/2019 - 17:54

Dodrio is a virtual DOM library written in Rust and WebAssembly. It takes advantage of both Wasm’s linear memory and Rust’s low-level control by designing virtual DOM rendering around bump allocation. Preliminary benchmark results suggest it has best-in-class performance.

Background Virtual DOM Libraries

Virtual DOM libraries provide a declarative interface to the Web’s imperative DOM. Users describe the desired DOM state by generating a virtual DOM tree structure, and the library is responsible for making the Web page’s physical DOM reflect the user-generated virtual DOM tree. Libraries employ some diffing algorithm to decrease the number of expensive DOM mutation methods they invoke. Additionally, they tend to have facilities for caching to further avoid unnecessarily re-rendering components which have not changed and re-diffing identical subtrees.

Bump Allocation

Bump allocation is a fast, but limited approach to memory allocation. The allocator maintains a chunk of memory, and a pointer pointing within that chunk. To allocate an object, the allocator rounds the pointer up to the object’s alignment, adds the object’s size, and does a quick test that the pointer didn’t overflow and still points within the memory chunk. Allocation is only a small handful of instructions. Likewise, deallocating every object at once is fast: reset the pointer back to the start of the chunk.

The disadvantage of bump allocation is that there is no general way to deallocate individual objects and reclaim their memory regions while other objects are still in use.

These trade-offs make bump allocation well-suited for phase-oriented allocations: that is, a group of objects that will all be allocated during the same program phase, used together, and finally deallocated together.
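To make that phase-oriented pattern concrete, here is a minimal, self-contained sketch using the bumpalo crate (the bump allocator Dodrio builds on); the allocated objects are just stand-ins:

use bumpalo::Bump;

fn main() {
    // Allocation phase: create the arena and fill it with objects.
    // Each allocation only bumps a pointer; there is no per-object bookkeeping.
    let bump = Bump::new();
    let greeting: &str = bump.alloc_str("Hello, World!");
    let numbers: &[u64] = bump.alloc_slice_copy(&[1, 2, 3, 4]);

    // Use phase: the objects are used together.
    println!("{} {:?}", greeting, numbers);

    // Deallocation phase: dropping (or resetting) the arena frees everything
    // at once by rewinding the bump pointer.
    drop(bump);
}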

Pseudo-code for bump allocation:

bump_allocate(size, align):
    aligned_pointer = round_up_to(self.pointer, align)
    new_pointer = aligned_pointer + size
    if no overflow and new_pointer < self.end_of_chunk:
        self.pointer = new_pointer
        return aligned_pointer
    else:
        handle_allocation_failure()

Dodrio from a User’s Perspective

First off, we should be clear about what Dodrio is and is not. Dodrio is only a virtual DOM library. It is not a full framework. It does not provide state management, such as Redux stores and actions or two-way binding. It is not a complete solution for everything you encounter when building Web applications.

Using Dodrio should feel fairly familiar to anyone who has used Rust or virtual DOM libraries before. To define how a struct is rendered as HTML, users implement the dodrio::Render trait, which takes an immutable reference to self and returns a virtual DOM tree.

Dodrio uses the builder pattern to create virtual DOM nodes. We intend to support optional JSX-style, inline HTML templating syntax with compile-time procedural macros, but we’ve left it as future work.

The 'a and 'bump lifetimes in the dodrio::Render trait’s interface and the where 'a: 'bump clause enforce that the self reference outlives the bump allocation arena and the returned virtual DOM tree. This means that if self contains a string, for example, the returned virtual DOM can safely use that string by reference rather than copying it into the bump allocation arena. Rust’s lifetimes and borrowing enable us to be aggressive with cost-saving optimizations while simultaneously statically guaranteeing their safety.

“Hello, World!” example with Dodrio:

struct Hello {
    who: String,
}

impl Render for Hello {
    fn render<'a, 'bump>(&'a self, bump: &'bump Bump) -> Node<'bump>
    where
        'a: 'bump,
    {
        span(bump)
            .children([text("Hello, "), text(&self.who), text("!")])
            .finish()
    }
}

Event handlers are given references to the root dodrio::Render component, a handle to the virtual DOM instance that can be used to schedule re-renders, and the DOM event itself.

Incrementing counter example with Dodrio:

struct Counter {
    count: u32,
}

impl Render for Counter {
    fn render<'a, 'bump>(&'a self, bump: &'bump Bump) -> Node<'bump>
    where
        'a: 'bump,
    {
        let count = bumpalo::format!(in bump, "{}", self.count);
        div(bump)
            .children([
                text(count.into_bump_str()),
                button(bump)
                    .on("click", |root, vdom, _event| {
                        let counter = root.unwrap_mut::<Counter>();
                        counter.count += 1;
                        vdom.schedule_render();
                    })
                    .children([text("+")])
                    .finish(),
            ])
            .finish()
    }
}

Additionally, Dodrio has a proof-of-concept API for defining rendering components in JavaScript. This reflects the Rust and Wasm ecosystem’s strong integration story for JavaScript, which enables both incremental porting to Rust and heterogeneous, polyglot applications where just the most performance-sensitive code paths are written in Rust.

A Dodrio rendering component defined in JavaScript:

class Greeting {
    constructor(who) {
        this.who = who;
    }

    render() {
        return {
            tagName: "p",
            attributes: [{ name: "class", value: "greeting" }],
            listeners: [{ on: "click", callback: this.onClick.bind(this) }],
            children: [
                "Hello, ",
                {
                    tagName: "strong",
                    children: [this.who],
                },
            ],
        };
    }

    async onClick(vdom, event) {
        // Be more excited!
        this.who += "!";

        // Schedule a re-render.
        await vdom.render();

        console.log("re-rendering finished!");
    }
}

Using a rendering component defined in JavaScript:

#[wasm_bindgen]
extern "C" {
    // Import the JS `Greeting` class.
    #[wasm_bindgen(extends = Object)]
    type Greeting;

    // And the `Greeting` class's constructor.
    #[wasm_bindgen(constructor)]
    fn new(who: &str) -> Greeting;
}

// Construct a JS rendering component from a `Greeting` instance.
let js = JsRender::new(Greeting::new("World"));

Finally, Dodrio exposes a safe public interface, and we have never felt the need to reach for unsafe when authoring Dodrio rendering components.

Internal Design

Both virtual DOM tree rendering and diffing in Dodrio leverage bump allocation. Rendering constructs bump-allocated virtual DOM trees from component state. Diffing batches DOM mutations into a bump-allocated “change list” which is applied to the physical DOM all at once after diffing completes. This design aims to maximize allocation throughput, which is often a performance bottleneck for virtual DOM libraries, and minimize bouncing back and forth between Wasm, JavaScript, and native DOM functions, which should improve temporal cache locality and avoid out-of-line calls.

Rendering Into Double-Buffered Bump Allocation Arenas

Virtual DOM rendering exhibits phases that we can exploit with bump allocation:

  1. A virtual DOM tree is constructed by a Render implementation,
  2. it is diffed against the old virtual DOM tree,
  3. saved until the next time we render a new virtual DOM tree,
  4. when it is diffed against that new virtual DOM tree,
  5. and then finally it and all of its nodes are destroyed.

This process repeats ad infinitum.

Virtual DOM tree lifetimes and operations over time:

------------------- Time ------------------->
Tree 0: [ render | ------ | diff ]
Tree 1:          [ render | diff | ------ | diff ]
Tree 2:                           [ render | diff | ------ | diff ]
Tree 3:                                            [ render | diff | ------ | diff ]
...

At any given moment in time, only two virtual DOM trees are alive. Therefore, we can double buffer two bump allocation arenas that switch back and forth between the roles of containing the new or the old virtual DOM tree:

  1. A virtual DOM tree is rendered into bump arena A,
  2. the new virtual DOM tree in bump arena A is diffed with the old virtual DOM tree in bump arena B,
  3. bump arena B has its bump pointer reset,
  4. bump arenas A and B are swapped.
Double buffering bump allocation arenas for virtual DOM tree rendering:

------------------- Time ------------------->
Arena A: [ render | ------ | diff | reset | render | diff | -------------- | diff | reset | render | diff ...
Arena B:          [ render | diff | -------------- | diff | reset | render | diff | -------------- | diff ...

This bump allocation approach to virtual DOM tree construction is similar to how a generational garbage collector works, except that in our case, as soon as we finish with a frame, we know that the whole of the old virtual DOM tree is garbage. This means we can get away without any of the bookkeeping a generational collector has to do, such as write barriers, remembered sets, and tracing pointers. After each frame, we can simply reset the old arena’s bump pointer. Furthermore, we don’t run the risk of a nursery collection promoting all of the old virtual DOM’s about-to-be-garbage nodes to the tenured object space when allocating the new virtual DOM.
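As a rough sketch (illustrative only, not Dodrio’s actual internals), the double-buffering bookkeeping can be as small as this:

use bumpalo::Bump;

struct DoubleBuffer {
    current: Bump,  // arena the next virtual DOM tree is rendered into
    previous: Bump, // arena holding last frame's tree, used for diffing
}

impl DoubleBuffer {
    fn new() -> Self {
        DoubleBuffer {
            current: Bump::new(),
            previous: Bump::new(),
        }
    }

    // Called once per frame, after the new tree (in `current`) has been
    // diffed against the old tree (in `previous`).
    fn finish_frame(&mut self) {
        // Everything in `previous` is now garbage: rewind its bump pointer.
        self.previous.reset();
        // Swap roles: this frame's tree becomes the "old" tree, and the
        // freshly reset arena becomes the render target for the next frame.
        std::mem::swap(&mut self.current, &mut self.previous);
    }
}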

Diffing and Change Lists

Dodrio uses a naïve, single-pass algorithm to diff virtual DOM trees. It walks both the old and new trees in unison and builds up a change list of DOM mutation operations whenever an attribute, listener, or child differs between the old and the new tree. It does not currently use any sophisticated algorithms to minimize the number of operations in the change list, such as longest common subsequence or patience diffing.

The change lists are constructed during diffing, applied to the physical DOM, and then destroyed. The next time we render a new virtual DOM tree, the process is repeated. Since at most one change list is alive at any moment, we use a single bump allocation arena for all change lists.

A change list’s DOM mutation operations are encoded as instructions for a custom stack machine. While an instruction’s discriminant is always a 32-bit integer, instructions are variably sized as some have immediates while others don’t. The machine’s stack contains physical DOM nodes (both text nodes and elements), and immediates encode pointers and lengths of UTF-8 strings.

The instructions are emitted on the Rust and Wasm side, and then batch interpreted and applied to the physical DOM in JavaScript. Each JavaScript function that interprets a particular instruction takes four arguments:

  1. A reference to the JavaScript ChangeList class that represents the stack machine,
  2. a Uint8Array view of Wasm memory to decode strings from,
  3. a Uint32Array view of Wasm memory to decode immediates from,
  4. and an offset i where the instruction’s immediates (if any) are located.

It returns the new offset in the 32-bit view of Wasm memory where the next instruction is encoded.

There are instructions for:

  • Creating, removing, and replacing elements and text nodes,
  • adding, removing, and updating attributes and event listeners,
  • and traversing the DOM.

For example, the AppendChild instruction has no immediates, but expects two nodes to be on the top of the stack. It pops the first node from the stack, and then calls Node.prototype.appendChild with the popped node as the child and the node that is now at top of the stack as the parent.

Emitting the AppendChild instruction:

// Allocate an instruction with zero immediates.
fn op0(&self, discriminant: ChangeDiscriminant) {
    self.bump.alloc(discriminant as u32);
}

/// Immediates: `()`
///
/// Stack: `[... Node Node] -> [... Node]`
pub fn emit_append_child(&self) {
    self.op0(ChangeDiscriminant::AppendChild);
}

Interpreting the AppendChild instruction:

function appendChild(changeList, mem8, mem32, i) {
    const child = changeList.stack.pop();
    top(changeList.stack).appendChild(child);
    return i;
}

On the other hand, the SetText instruction expects a text node on top of the stack, and does not modify the stack. It has a string encoded as pointer and length immediates. It decodes the string, and calls the Node.prototype.textContent setter function to update the text node’s text content with the decoded string.

Emitting the SetText instruction:

// Allocate an instruction with two immediates.
fn op2(&self, discriminant: ChangeDiscriminant, a: u32, b: u32) {
    self.bump.alloc([discriminant as u32, a, b]);
}

/// Immediates: `(pointer, length)`
///
/// Stack: `[... TextNode] -> [... TextNode]`
pub fn emit_set_text(&self, text: &str) {
    self.op2(
        ChangeDiscriminant::SetText,
        text.as_ptr() as u32,
        text.len() as u32,
    );
}

Interpreting the SetText instruction:

function setText(changeList, mem8, mem32, i) {
    const pointer = mem32[i++];
    const length = mem32[i++];
    const str = string(mem8, pointer, length);
    top(changeList.stack).textContent = str;
    return i;
}

Preliminary Benchmarks

To get a sense of Dodrio’s speed relative to other libraries, we added it to Elm’s Blazing Fast HTML benchmark that compares rendering speeds of TodoMVC implementations with different libraries. They claim that the methodology is fair and that the benchmark results should generalize. They also subjectively measure how easy it is to optimize the implementations to improve performance (for example, by adding well-placed shouldComponentUpdate hints in React and lazy wrappers in Elm). We followed the same methodology and disabled Dodrio’s on-by-default, once-per-animation-frame render debouncing, giving it the same handicap that the Elm implementation has.

That said, there are some caveats to these benchmark results. The React implementation had bugs that prevented it from completing the benchmark, so we don’t include its measurements below. If you are curious, you can look at the original Elm benchmark results to see how it generally fared relative to some of the other libraries measured here. Second, we made an initial attempt to update the benchmark to the latest version of each library, but quickly got in over our heads, and therefore this benchmark is not using the latest release of each library.

With that out of the way, let’s look at the benchmark results. We ran the benchmarks in Firefox 67 on Linux. Lower is better, and means faster rendering times.

Benchmark results:

Library                  Optimized?   Milliseconds
Ember 2.6.3              No           3542
Angular 1.5.8            No           2856
Angular 2                No           2743
Elm 0.16                 No           4295
Elm 0.17                 No           3170
Dodrio 0.1-prerelease    No           2181
Angular 1.5.8            Yes          3175
Angular 2                Yes          2371
Elm 0.16                 Yes          4229
Elm 0.17                 Yes          2696

Dodrio is the fastest library measured in the benchmark. This is not to say that Dodrio will always be the fastest in every scenario — that is undoubtedly false. But these results validate Dodrio’s design and show that it already has best-in-class performance. Furthermore, there is room to make it even faster:

  • Dodrio is brand new, and has not yet had the years of work poured into it that other libraries measured have. We have not done any serious profiling or optimization work on Dodrio yet!

  • The Dodrio TodoMVC implementation used in the benchmark does not use shouldComponentUpdate-style optimizations, like other implementations do. These techniques are still available to Dodrio users, but you should need to reach for them much less frequently because idiomatic implementations are already fast.

Future Work

So far, we haven’t invested in polishing Dodrio’s ergonomics. We would like to explore adding type-safe HTML templates that boil down to Dodrio virtual DOM tree builder invocations.

Additionally, there are a few more ways we can potentially improve Dodrio’s performance.

For both ergonomics and further performance improvements, we would like to start gathering feedback informed by real world usage before investing too much more effort.

Evan Czaplicki pointed us to a second benchmark — krausest/js-framework-benchmark — that we can use to further evaluate Dodrio’s performance. We look forward to implementing this benchmark for Dodrio and gathering more test cases and insights into performance.

Further in the future, the WebAssembly host bindings proposal will enable us to interpret the change list’s operations in Rust and Wasm without trampolining through JavaScript to invoke DOM methods.

Conclusion

Dodrio is a new virtual DOM library that is designed to leverage the strengths of both Wasm’s linear memory and Rust’s low-level control by making extensive use of fast bump allocation. If you would like to learn more about Dodrio, we encourage you to check out its repository and examples!

Thanks to Luke Wagner and Alex Crichton for their contributions to Dodrio’s design, and participation in brainstorming and rubber ducking sessions. We also discussed many of these ideas with core developers on the React, Elm, and Ember teams, and we thank them for the context and understanding these discussions ultimately brought to Dodrio’s design. A final round of thanks to Jason Orendorff, Lin Clark, Till Schneidereit, Alex Crichton, Luke Wagner, Evan Czaplicki, and Robin Heggelund Hansen for providing valuable feedback on early drafts of this document.

Categories: Mozilla-nl planet

Mozilla Reps Community: Rep of the Month – February 2019

Mozilla planet - Thu, 14/03/2019 - 13:32

Please join us in congratulating Edoardo Viola, our Rep of the Month for February 2019!

Edoardo is a long-time Mozillian from Italy and has been a Rep for almost two years. He’s a Resource Rep and was on the Reps Council until January. When he’s not busy with Reps work, Edoardo is a Mentor in the Open Leadership Training Program. In the past he has contributed to Campus Clubs as well as MozFest, where he was a Space Wrangler for the Web Literacy Track.


Recently Edoardo helped out at FOSDEM in Brussels as part of the Mozilla volunteers organizing our presence. He staffed the booth and helped moderate the Mozilla Dev Room. He also contributes to the Internet Health Report as part of the volunteer team, giving input for the next edition of the report.

To congratulate him, please head over to Discourse!

Categories: Mozilla-nl planet

Eric Rahm: Doubling the Number of Content Processes in Firefox

Mozilla planet - Thu, 14/03/2019 - 00:13

Over the past year, the Fission MemShrink project has been working tirelessly to reduce the memory overhead of Firefox. The goal is to allow us to start spinning up more processes while still maintaining a reasonable memory footprint. I’m happy to announce that we’ve seen the fruits of this labor: as of version 66 we’re doubling the default number of content processes from 4 to 8.

Doubling the number of content processes is the logical extension of the e10s-multi project. Back when that project wrapped up we chose to limit the default number of processes to 4 in order to balance the benefits of multiple content processes — fewer crashes, better site isolation, improved performance when loading multiple pages — with the impact on memory usage for our users.

Our telemetry has looked really good: if we compare beta 59 (roughly when this project started) with beta 66, where we decided to let the increase be shipped to our regular users, we see a virtually unchanged total memory usage for our 25th, median, and 75th percentile and a modest 9% increase for the 95th percentile on Windows 64-bit.

Doubling the number of content processes and not seeing a huge jump is quite impressive. Even on our worst-case-scenario stress test — AWSY which loads 100 pages in 30 tabs, repeated 3 times — we only saw a 6% increase in memory usage when turning on 8 content processes when compared to when we started the project.

This is a huge accomplishment and I’m very proud of the loose-knit team of contributors who have done some phenomenal feats to get us to this point. There have been some big wins, but really it’s the myriad of minor improvements that compounded into a large impact. This has ranged from delay-loading browser JavaScript code until it’s needed (or not at all), to low-level changes to packing C++ data structures more efficiently, to large system-wide changes to how we generate bindings that glue together our JavaScript and C++ code. You can read more about the background of this project and many of the changes in our initial newsletter and the follow-up.

While I’m pleased with where we are now, we still have a way to go to get our overhead down even further. Fear not, for we have quite a few changes in the pipeline, including a fork server to help further reduce memory usage on Linux and macOS, work to share font data between processes, and work to share more CSS data between processes. In addition to reducing overhead we now have a tab unloading feature in Nightly 67 that will proactively unload tabs when it looks like you’re about to run out of memory. So far the results in reducing the number of out-of-memory crashes are looking really good and we’re hoping to get that released to a wider audience in the near future.

Categories: Mozilla-nl planet

The Firefox Frontier: Get better password management with Firefox Lockbox on iPad

Mozilla planet - Wed, 13/03/2019 - 16:01

We access the web on all sorts of devices, from our laptops to our phones to our tablets. And we need our passwords everywhere to log into an account. This … Read more

The post Get better password management with Firefox Lockbox on iPad appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet

Daniel Stenberg: Looking for the Refresh header

Mozilla planet - Tue, 12/03/2019 - 22:49

The other day someone filed a bug on curl that we don’t support redirects with the Refresh header. This took me down a rabbit hole of Refresh header research and I’ve returned to share with you what I learned down there.

tl;dr Refresh is not a standard HTTP header.

As you know, an HTTP redirect is specified to use a 3xx response code and a Location: header to point out the new URL (I use the term URL here but you know what I mean). This has been the case since RFC 1945 (HTTP/1.0). According to an old mail from Roy T Fielding (dated June 1996), Refresh “didn’t make it” into that spec. That was the first “real” HTTP specification. (And the HTTP we used before 1.0 didn’t even have headers!)

The little detail that it never made it into the 1.0 spec or any later one doesn’t seem to have affected the browsers. Still today, browsers keep supporting the Refresh header as a sort of Location: replacement even though it seems never to have been present in an HTTP spec.
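For illustration (this example is mine, not from any spec), a response that redirects this way might look like:

HTTP/1.1 200 OK
Content-Type: text/html
Refresh: 0; url=https://example.com/new-page/

The value is a number of seconds to wait before loading the new resource, optionally followed by the URL to load; with only a number, the browser simply reloads the current page after that delay.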

In good company

curl is not the only HTTP library that doesn’t support this non-standard header. The popular python library requests apparently doesn’t according to this bug from 2017, and another bug was filed about it already back in 2011 but it was just closed as “old” in 2014.

I’ve found no support in wget or wget2 either for this header.

I didn’t do any further extensive search for other toolkits’ support, but it seems that the browsers are fairly alone in supporting this header.

How common is the Refresh header?

I decided to make an attempt to figure it out, and for this venture I used the Rapid7 data trove. The method that data is collected with may not be the best – it scans the IPv4 address range and sends an HTTP request to each TCP port 80, setting the IP address in the Host: header. The result of that scan is 52+ million HTTP responses from different and current HTTP origins. (Exactly 52254873 responses in my 59GB data dump, dated end of February 2019).

Results from my scans
  • Location is used in 18.49% of the responses
  • Refresh is used in 0.01738% of the responses (exactly 9080 responses featured them)
  • Location is thus used 1064 times more often than Refresh
  • In 35% of the cases when Refresh is used, Location is also used
  • curl thus handles 99.9939% of the redirects in this test
Additional notes
  • When Refresh is the only redirect header, the response code is usually 200 (with 404 being the second most common)
  • When both headers are used, the response code is almost always 30x
  • When both are used, it is common to redirect to the same target and it is also common for the Refresh header value to only contain a number (for the number of seconds until “refresh”).
Refresh from HTML content

Redirects can also be done by meta tags in HTML, sending the refresh that way, but I have not investigated how common that is, as it isn’t strictly speaking HTTP and so is outside of my research (and interest) here.

In use, not documented, not in the spec

Just another undocumented corner of the web.

When I posted about these findings on the HTTPbis mailing list, it was pointed out that WHATWG mentions this header in their iana page. I say mention because calling that documenting would be a stretch…

It is not at all clear exactly what the header is supposed to do and it is not documented anywhere. It’s not exactly a redirect, but almost?

Will/should curl support it?

A decision hasn’t been made about it yet. With such a very low use frequency and since we’ve managed fine without support for it so long, maybe we can just maintain the situation and instead argue that we should just completely deprecate this header use from the web?

Updates

After this post first went live, I got some further feedback and data that are relevant and interesting.

  • Yoav Wiess created a patch for Chrome to count how often they see this header used in real life.
  • Eric Lawrence pointed out that IE had several incompatibilities in its Refresh parser back in the day.
  • Boris pointed out (in the comments below) the WHATWG documented steps for handling the header.
  • The use of <meta> tag refresh in contents is fairly high. The Chrome counter says almost 4% of page loads!
Categories: Mozilla-nl planet

Andrew Halberstadt: Task Configuration at Scale

Mozilla planet - Tue, 12/03/2019 - 22:10

A talk I did for the Automationeer’s Assemble series on how Mozilla handles complexity in their CI configuration.


Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Iodide: an experimental tool for scientific communication and exploration on the web

Mozilla planet - Tue, 12/03/2019 - 16:03

In the last 10 years, there has been an explosion of interest in “scientific computing” and “data science”: that is, the application of computation to answer questions and analyze data in the natural and social sciences. To address these needs, we’ve seen a renaissance in programming languages, tools, and techniques that help scientists and researchers explore and understand data and scientific concepts, and to communicate their findings. But to date, very few tools have focused on helping scientists gain unfiltered access to the full communication potential of modern web browsers. So today we’re excited to introduce Iodide, an experimental tool meant to help scientists write beautiful interactive documents using web technologies, all within an iterative workflow that feels similar to other scientific computing environments.

Exploring the Lorenz attractor then examining the code in Iodide

Iodide in action.

 

Beyond being just a programming environment for creating living documents in the browser, Iodide attempts to remove friction from communicative workflows by always bundling the editing tool with the clean readable document. This diverges from IDE-style environments that output presentational documents like .pdf files (which are then divorced from the original code) and cell-based notebooks which mix code and presentation elements. In Iodide, you can get both a document that looks however you want it to look, and easy access to the underlying code and editing environment.

Iodide is still very much in an alpha state, but following the internet aphorism “If you’re not embarrassed by the first version of your product, you’ve launched too late”, we’ve decided to do a very early soft launch in the hopes of getting feedback from a larger community. We have a demo that you can try out right now, but expect a lot of rough edges (and please don’t use this alpha release for critical work!). We’re hoping that, despite the rough edges, if you squint at this you’ll be able to see the value of the concept, and that the feedback you give us will help us figure out where to go next.

How we got to Iodide

Data science at Mozilla

At Mozilla, the vast majority of our data science work is focused on communication. Though we sometimes deploy models intended to directly improve a user’s experience, such as the recommendation engine that helps users discover browser extensions, most of the time our data scientists analyze our data in order to find and share insights that will inform the decisions of product managers, engineers and executives.

Data science work involves writing a lot of code, but unlike traditional software development, our objective is to answer questions, not to produce software. This typically results in some kind of report — a document, some plots, or perhaps an interactive data visualization. Like many data science organizations, at Mozilla we explore our data using fantastic tools like Jupyter and R-Studio. However, when it’s time to share our results, we cannot usually hand off a Jupyter notebook or an R script to a decision-maker, so we often end up doing things like copying key figures and summary statistics to a Google Doc.

We’ve found that making the round trip from exploring data in code to creating a digestible explanation and back again is not always easy. Research shows that many people share this experience. When one data scientist is reading through another’s final report and wants to look at the code behind it, there can be a lot of friction; sometimes tracking down the code is easy, sometimes not. If they want to attempt to experiment with and extend the code, things obviously get more difficult still. Another data scientist may have your code, but may not have an identical configuration on their machine, and setting that up takes time.


The virtuous circle of data science work.

 

Why is there so little web in science?

Against the background of these data science workflows at Mozilla, in late 2017 I undertook a project that called for interactive data visualization. Today you can create interactive visualizations using great libraries for Python, R, and Julia, but for what I wanted to accomplish, I needed to drop down to Javascript. This meant stepping away from my favorite data science environments. Modern web development tools are incredibly powerful, but extremely complicated. I really didn’t want to figure out how to get a fully-fledged Javascript build toolchain with hot module reloading up and running, but short of that I couldn’t find much aimed at creating clean, readable web documents within the live, iterative workflow familiar to me.

I started wondering why this tool didn’t exist — why there’s no Jupyter for building interactive web documents — and soon zoomed out to thinking about why almost no one uses Javascript for scientific computing. Three big reasons jumped out:

  1. Javascript itself has a mixed reputation among scientists for being slow and awkward;
  2. there aren’t many scientific computing libraries that run in the browser or that work with Javascript; and,
  3. as I’d discovered, there are very few scientific coding tools that enable fast iteration loop and also grant unfiltered access to the presentational capabilities in the browser.

These are very big challenges. But as I thought about it more, I began to think that working in a browser might have some real advantages for the kind of communicative data science that we do at Mozilla. The biggest advantage, of course, is that the browser has arguably the most advanced and well-supported set of presentation technologies on the planet, from the DOM to WebGL to Canvas to WebVR.

Thinking on the workflow friction mentioned above, another potential advantage occurred to me: in the browser, the final document need not be separate from the tool that created it. I wanted a tool designed to help scientists iterate on web documents (basically single-purpose web apps for explaining an idea)… and many tools we were using were themselves basically web apps. For the use case of writing these little web-app-documents, why not bundle the document with the tool used to write it?

By doing this, non-technical readers could see my nice looking document, but other data scientists could instantly get back to the original code. Moreover, since the compute kernel would be the browser’s JS engine, they’d be able to start extending and experimenting with the analysis code immediately. And they’d be able to do all this without connecting to remote computing resources or installing any software.

Towards Iodide

I started discussing the potential pros and cons of scientific computing in the browser with my colleagues, and in the course of our conversations, we noticed some other interesting trends.

Inside Mozilla we were seeing a lot of interesting demos showing off WebAssembly, a new way for browsers to run code written in languages other than Javascript. WebAssembly allows programs to be run at incredible speed, in some cases close to native binaries. We were seeing examples of computationally-expensive processes like entire 3D game engines running within the browser without difficulty. Going forward, it would be possible to compile best-in-class C and C++ numerical computing libraries to WebAssembly and wrap them in ergonomic JS APIs, just as the SciPy project does for Python. Indeed, projects had started to do this already.

WebAssembly makes it possible to run code at near-native speed in the browser.

We also noticed the Javascript community’s willingness to introduce new syntax when doing so helps people to solve their problem more effectively. Perhaps it would be possible to emulate some of the key syntactic elements that make numerical programming more comprehensible and fluid in MATLAB, Julia, and Python — matrix multiplication, multidimensional slicing, broadcast array operations, and so on. Again, we found other people thinking along similar lines.

With these threads converging, we began to wonder if the web platform might be on the cusp of becoming a productive home for scientific computing. At the very least, it looked like it might evolve to serve some of the communicative workflows that we encounter at Mozilla (and that so many others encounter in industry and academia). With the core of Javascript improving all the time and the possibility of adding syntax extensions for numerical programming, perhaps JS itself could be made more appealing to scientists. WebAssembly seemed to offer a path to great science libraries. The third leg of the stool would be an environment for creating data science documents for the web. This last element is where we decided to focus our initial experimentation, which brought us to Iodide.

The anatomy of Iodide

Iodide is a tool designed to give scientists a familiar workflow for creating great-looking interactive documents using the full power of the web platform. To accomplish that, we give you a “report” — basically a web page that you can fill in with your content — and some tools for iteratively exploring data and modifying your report to create something you’re ready to share. Once you’re ready, you can send a link directly to your finalized report. If your colleagues and collaborators want to review your code and learn from it, they can drop back to an exploration mode in one click. If they want to experiment with the code and use it as the basis of their own work, with one more click they can fork it and start working on their own version.

Read on to learn a bit more about some of the ideas we’re experimenting with in an attempt to make this workflow feel fluid.

The Explore and Report Views

Iodide aims to tighten the loop between exploration, explanation, and collaboration. Central to that is the ability to move back and forth between a nice looking write-up and a useful environment for iterative computational exploration.

When you first create a new Iodide notebook, you start off in the “explore view.” This provides a set of panes including an editor for writing code, a console for viewing the output from code you evaluate, a workspace viewer for examining the variables you’ve created during your session, and a “report preview” pane in which you can see a preview of your report.


Editing a Markdown code chunk in Iodide’s explore view.

 

By clicking the “REPORT” button in the top right corner, the contents of your report preview will expand to fill the entire window, allowing you to put the story you want to tell front and center. Readers who don’t know how to code or who aren’t interested in the technical details are able to focus on what you’re trying to convey without having to wade through the code. When a reader visits the link to the report view, your code will run automatically. If they want to review your code, simply clicking the “EXPLORE” button in the top right will bring them back into the explore view. From there, they can make a copy of the notebook for their own explorations.


Moving from explore to report view.

 

Whenever you share a link to an Iodide notebook, your collaborator can always access to both of these views. The clean, readable document is never separated from the underlying runnable code and the live editing environment.

Live, interactive documents with the power of the Web Platform

Iodide documents live in the browser, which means the computation engine is always available. Whenever you share your work, you share a live interactive report with running code. Moreover, since the computation happens in the browser alongside the presentation, there is no need to call a language backend in another process. This means that interactive documents update in real-time, opening up the possibility of seamless 3D visualizations, even with the low-latency and high frame-rate required for VR.


Contributor Devin Bayly explores MRI data of his brain

 

Sharing, collaboration, and reproducibility

Building Iodide in the web simplifies a number of the elements of workflow friction that we’ve encountered in other tools. Sharing is simplified because the write-up and the code are available at the same URL rather than, say, pasting a link to a script in the footnotes of a Google Doc. Collaboration is simplified because the compute kernel is the browser and libraries can be loaded via an HTTP request like any webpage loads script — no additional languages, libraries, or tools need to be installed. And because browsers provide a compatibility layer, you don’t have to worry about notebook behavior being reproducible across computers and OSes.

To support collaborative workflows, we’ve built a fairly simple server for saving and sharing notebooks. There is a public instance at iodide.io where you can experiment with Iodide and share your work publicly. It’s also possible to set up your own instance behind a firewall (and indeed this is what we’re already doing at Mozilla for some internal work). But importantly, the notebooks themselves are not deeply tied to a single instance of the Iodide server. Should the need arise, it should be easy to migrate your work to another server or export your notebook as a bundle for sharing on other services like Netlify or Github Pages (more on exporting bundles below under “What’s next?”). Keeping the computation in the client allows us to focus on building a really great environment for sharing and collaboration, without needing to build out computational resources in the cloud.

Pyodide: The Python science stack in the browser

When we started thinking about making the web better for scientists, we focused on ways that we could make working with Javascript better, like compiling existing scientific libraries to WebAssembly and wrapping them in easy to use JS APIs. When we proposed this to Mozilla’s WebAssembly wizards, they offered a more ambitious idea: if many scientists prefer Python, meet them where they are by compiling the Python science stack to run in WebAssembly.

We thought this sounded daunting — that it would be an enormous project and that it would never deliver satisfactory performance… but two weeks later Mike Droettboom had a working implementation of Python running inside an Iodide notebook. Over the next couple of months, we added Numpy, Pandas, and Matplotlib, which are by far the most used modules in the Python science ecosystem. With help from contributors Kirill Smelkov and Roman Yurchak at Nexedi, we landed support for Scipy and scikit-learn. Since then, we’ve continued adding other libraries bit by bit.

Running the Python interpreter inside a Javascript virtual machine adds a performance penalty, but that penalty turns out to be surprisingly small — in our benchmarks, around 1x-12x slower than native on Firefox and 1x-16x slower on Chrome. Experience shows that this is very usable for interactive exploration.


Running Matplotlib in the browser enables its interactive features, which are unavailable in static environments

 

Bringing Python into the browser creates some magical workflows. For example, you can import and clean your data in Python, and then access the resulting Python objects from Javascript (in most cases, the conversion happens automatically) so that you can display them using JS libraries like d3. Even more magically, you can access browser APIs from Python code, allowing you to do things like manipulate the DOM without touching Javascript.
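As a small taste of that last point, a Python chunk along these lines (a sketch relying on Pyodide’s js module; the element content is made up) can drive the page directly:

from js import document

# Build and insert a DOM node from Python, without writing any JavaScript.
el = document.createElement("p")
el.textContent = "Rendered from Python"
document.body.appendChild(el)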

Of course, there’s a lot more to say about Pyodide, and it deserves an article of its own — we’ll go into more detail in a follow up post next month.

JSMD (JavaScript MarkDown)

Just as in Jupyter and R’s R-Markdown mode, in Iodide you can interleave code and write-up as you wish, breaking your code into “code chunks” that you can modify and run as separate units. Our implementation of this idea parallels R Markdown and MATLAB’s “cell mode”: rather than using an explicitly cell-based interface, the content of an Iodide notebook is just a text document that uses a special syntax to delimit specific types of cells. We call this text format “JSMD”.

Following MATLAB, code chunks are defined by lines starting with %% followed by a string indicating the language of the chunk below. We currently support chunks containing Javascript, CSS, Markdown (and HTML), Python, a special “fetch” chunk that simplifies loading resources, and a plugin chunk that allows you to extend Iodide’s functionality by adding new cell types.

We’ve found this format to be quite convenient. It makes it easy to use text-oriented tools like diff viewers and your own favorite text editor, and you can perform standard text operations like cut/copy/paste without having to learn shortcuts for cell management. For more details you can read about JSMD in our docs.
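For a feel of the format, a tiny JSMD notebook might look something like this (a hypothetical sketch; the chunk names follow the list above):

%% md
# A tiny analysis
Some write-up in *Markdown*.

%% js
const doubled = [1, 2, 3].map((x) => x * 2);

%% py
import numpy as np
np.mean([1, 2, 3])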

What’s next?

It’s worth repeating that we’re still in alpha, so we’ll be continuing to improve overall polish and squash bugs. But in addition to that, we have a number of features in mind for our next round of experimentation. If any of these ideas jump out as particularly useful, let us know! Even better, let us know if you’d like to help us build them!

Enhanced collaborative features

As mentioned above, so far we’ve built a very simple backend that allows you to save your work online, look at work done by other people, and quickly fork and extend existing notebooks made by other users, but these are just the initial steps in a useful collaborative workflow.

The next three big collaboration features we’re looking at adding are:

  1. Google Docs-style comment threads
  2. The ability to suggest changes to another user’s notebook via a fork/merge workflow similar to Github pull requests
  3. Simultaneous notebook editing like Google Docs.

At this point, we’re prioritizing them in roughly that order, but if you would tackle them in a different order or if you have other suggestions, let us know!

More languages!

We’ve spoken to folks from the R and Julia communities about compiling those languages to WebAssembly, which would allow their use in Iodide and other browser-based projects. Our initial investigation indicates that this should be doable, but that implementing these languages might be a bit more challenging than Python. As with Python, some cool workflows open up if you can, for example, fit statistical models in R or solve differential equations in Julia, and then display your results using browser APIs. If bringing these languages to the web interests you, please reach out — in particular, we’d love help from FORTRAN and LLVM experts.

Export notebook archive

Early versions of Iodide were self-contained runnable HTML files, which included both the JSMD code used in the analysis and the JS code used to run Iodide itself, but we've moved away from this architecture. Later experiments have convinced us that the collaboration benefits of having an Iodide server outweigh the advantages of managing files on your local system. Nonetheless, these experiments showed us that it's possible to take a runnable snapshot of an Iodide notebook by inlining the Iodide code along with any data and libraries used by a notebook into one big HTML file. This might end up being a bigger file than you'd want to serve to regular users, but it could prove useful as a perfectly reproducible and archivable snapshot of an analysis.

Iodide to text editor browser extension

While many scientists are quite used to working in browser-based programming environments, we know that some people will never edit code in anything other than their favorite text editor. We really want Iodide to meet people where they are already, including those who prefer to type their code in another editor but want access to the interactive and iterative features that Iodide provides. To serve that need, we’ve started thinking about creating a lightweight browser extension and some simple APIs to let Iodide talk to client-side editors.

Feedback and collaboration welcome!

We’re not trying to solve all the problems of data science and scientific computing, and we know that Iodide will not be everyone’s cup of tea. If you need to process terabytes of data on GPU clusters, Iodide probably doesn’t have much to offer you. If you are publishing journal articles and you just need to write up a LaTeX doc, then there are better tools for your needs. If the whole trend of bringing things into the browser makes you cringe a little, no problem — there are a host of really amazing tools that you can use to do science, and we’re thankful for that! We don’t want to change the way anyone works, and for many scientists web-focused communication is beside the point. Rad! Live your best life!

But for those scientists who do produce content for the web, and for those who might like to do so if they had tools designed to support the way they work: we'd really love to hear from you!

Please visit iodide.io, try it out, and give us feedback (but again: keep in mind that this project is in alpha phase — please don’t use it for any critical work, and please be aware that while we’re in alpha everything is subject to change). You can take our quick survey, and Github issues and bug reports are very welcome. Feature requests and thoughts on the overall direction can be shared via our Google group or Gitter.

If you’d like to get involved in helping us build Iodide, we’re open source on Github. Iodide touches a wide variety of software disciplines, from modern frontend development to scientific computing to compilation and transpilation, so there are a lot of interesting things to do! Please reach out if any of this interests you!

Huge thanks to Hamilton Ulmer, William Lachance, and Mike Droettboom for their great work on Iodide and for reviewing this article.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Use Firefox Send to safely share files for free

Mozilla planet - di, 12/03/2019 - 14:04

Moving files around the web can be complicated and expensive, but with Firefox Send it doesn’t have to be. There are plenty of services that let you send files for … Read more

The post Use Firefox Send to safely share files for free appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Introducing Firefox Send, Providing Free File Transfers while Keeping your Personal Information Private

Mozilla planet - di, 12/03/2019 - 14:03

At Mozilla, we are always committed to people's security and privacy. It's part of our long-standing Mozilla Manifesto. We are continually looking for new ways to fulfill that promise, whether it's through the browser, apps or services. So it felt natural to graduate one of our popular Test Pilot experiments, Firefox Send (send.firefox.com), into a full product. Send is a free encrypted file transfer service that allows users to safely and simply share files from any browser. Send will also be available as an Android app in beta later this week. Now that it's a keeper, we've made it even better, offering higher upload limits and greater control over the files you share.

Here’s how Firefox Send works:



Encryption & Controls at your fingertips

Imagine the last time you moved into a new apartment or purchased a home and had to share financial information like your credit report over the web. In situations like this, you may want to offer the recipient one-time or limited access to those files. With Send, you can feel safe that your personal information does not live somewhere in the cloud indefinitely.

Send uses end-to-end encryption to keep your data secure from the moment you share to the moment your file is opened. It also offers security controls that you can set. You can choose when your file link expires, the number of downloads, and whether to add an optional password for an extra layer of security.

Choose when your file link expires, the number of downloads and add an optional password

Share large files & navigate with ease

Send also makes it simple to share large files – perfect for sharing professional design files or collaborating on a presentation with co-workers. With Send you can quickly share files up to 1GB. To send files up to 2.5GB, sign up for a free Firefox account.

Send makes it easy for your recipient, too. No hoops to jump through. They simply receive a link to click and download the file. They don’t need to have a Firefox account to access your file. Overall, this makes the sharing experience seamless for both parties, and as quick as sending an email.

Sharing large files is simple and quick

We know there are several cloud sharing solutions out there, but as a continuation of our mission to bring you more private and safer choices, you can trust that your information is safe with Send. As with all Firefox apps and services, Send is Private By Design, meaning all of your files are protected and we stand by our mission to handle your data privately and securely.

Whether you’re sharing important personal information, private documents or confidential work files you can start sending your files for free with Firefox Send.

The post Introducing Firefox Send, Providing Free File Transfers while Keeping your Personal Information Private appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Apply for a Mozilla Fellowship

Mozilla planet - di, 12/03/2019 - 14:00
We’re seeking technologists, activists, policy experts, and scientists devoted to a healthy internet. Apply to be a 2019-2020 Mozilla Fellow

 

Today, we’re opening applications for Mozilla Fellowships. Mozilla is seeking technologists, activists, policy experts, and scientists who are building a more humane digital world:

http://mozilla.fluxx.io/apply/fellowship

Mozilla Fellows work on the front lines of internet health, at a time when the internet is entwined with everything from elections and free expression to justice and personal safety. Fellows ensure the internet remains a force for good — empowerment, equality, access — and also combat online ills, like abuse, exclusion, and closed systems.

Mozilla is particularly interested in individuals whose expertise aligns with our 2019 impact goal: “better machine decision making,” or ensuring the artificial intelligence in our lives is designed with responsibility and ethics top of mind. For example: Fellows might research how disinformation spreads on Facebook. Or, build a tool that identifies the blind spots in algorithms that detect cancer. Or, advocate for a “digital bill of rights” that protects individuals from invasive facial recognition technology.

During a 10-month tenure, Mozilla Fellows may run campaigns, build products, and influence policy. Along the way, Fellows receive competitive funding and benefits; mentorship and trainings; access to the Mozilla network and megaphone; and more. Mozilla Fellows hail from a range of disciplines and geographies: They are scientists in the UK, human rights researchers in Germany, tech policy experts in Nigeria, and open-source advocates in New Zealand. The Mozilla Fellowship runs from October 2019 through July 2020.

Specifically, we’re seeking Fellows who identify with one of three profiles:

  • Open web activists: Individuals addressing issues like privacy, security, and inclusion online. These Fellows will embed at leading human rights and civil society organizations from around the world, working alongside the organizations and also exchanging advocacy and technical insights among each other.

  • Tech policy professionals: Individuals who examine the interplay of technology and public policy — and craft legal, academic, and governmental solutions.

  • Scientists and researchers: Individuals who infuse open-source practices and principles into scientific research. These Fellows are based at the research institution with which they are currently affiliated.

Learn more about Mozilla Fellowships, and then apply. Part 1 of the applications closes on Monday April 8, 2019 at 5:00pm ET. Below, meet a handful of current Mozilla Fellows:

 

Valentina Pavel

Valentina is a digital rights advocate working on privacy, freedom of speech, and open culture. Valentina is currently investigating the implications of digital feudalism, and exploring different visions for shared data ownership. Read her latest writing.

 

 

Selina Musuta

Selina is a web developer and infosec expert. Selina is currently embedded at Consumer Reports, and supporting the organization’s privacy and security work.

 

 

 

Julia Lowndes | @juliesquid

Julia is an environmental scientist and open-source advocate. Julia is currently evangelizing openness in the scientific community, and training fellow researchers how to leverage open data and processes. Learn about her latest project.

 

 

Richard Whitt | @richardswhitt

Richard is a tech policy expert and industry veteran. Richard is currently exploring how to re-balance the user-platform dynamic online, by putting powerful AI and other emerging tech in the hands of users. Read his recent essay in Fast Company.

The post Apply for a Mozilla Fellowship appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: EU takes major step forward on government vulnerability disclosure review processes

Mozilla planet - di, 12/03/2019 - 13:49

We’ve argued for many years that governments should implement transparent processes to review and disclose the software vulnerabilities that they learn about. Such processes are essential for the cybersecurity of citizens, businesses, and governments themselves. For that reason, we’re delighted to report that the EU has taken a crucial step forward in that endeavour, by giving its cybersecurity agency an explicit new mandate to help European governments establish and implement these processes where requested.

The just-adopted EU Cybersecurity Act is designed to increase the overall level of cybersecurity across the EU, and a key element of the approach focuses on empowering the EU’s cybersecurity agency (‘ENISA’) to play a more proactive role in supporting the Union’s Member States in cybersecurity policy and practices. Since the legislative proposal was launched in 2017, we’ve argued that ENISA should be given the power to support EU Member States in the area of government vulnerability disclosure (GVD) review processes.

Malicious actors can exploit vulnerabilities to cause significant harm to individuals and businesses, and can cripple critical infrastructure. At the same time, governments often learn about software vulnerabilities and face competing incentives as to whether to disclose the existence of the vulnerability to the affected company immediately, or delay disclosure so they can use the vulnerability as an offensive/intelligence-gathering tool. For those reasons, it’s essential that governments have processes in place for reviewing and coordinating the disclosure of the software vulnerabilities that they learn about, as a key pillar in their strategy to defend against the nearly daily barrage of cybersecurity attacks, hacks, and breaches.

For several years, we’ve been at the forefront of calls for governments to put in place these processes. In the United States, we spoke out strongly in favor of the Protecting Our Ability to Counter Hacking Act (PATCH Act) and participated in the Centre for European Policy Studies’ Task Force on Software Vulnerability Disclosure, a broad stakeholder taskforce that in 2018 recommended EU and national-level policymakers to implement GVD review processes. In that context, our work on the EU Cybersecurity Act is a necessary and important continuation of this commitment.

We’re excited to see continued progress on this important issue of cybersecurity policy. The adoption of the Cybersecurity Act by the European Parliament today ensures that, for the first time, the EU has given legal recognition to the importance of EU Member States putting in place processes to review and manage the disclosure of vulnerabilities that they learn about. In addition, by giving the EU Cybersecurity Agency the power to support Member States in developing and implementing these processes upon request, the EU will help ensure that Member States with weaker cybersecurity resilience are supported in implementing this ‘next generation’ of cybersecurity policy.

We applaud EU lawmakers for this forward-looking approach to cybersecurity, and are excited to continue working with policymakers within the 28 EU Member States to see this vision for government vulnerability disclosure review processes realised at national and EU level. This will help Europe and all Europeans to be more secure.

Further reading:

The post EU takes major step forward on government vulnerability disclosure review processes appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 277

Mozilla planet - di, 12/03/2019 - 05:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is validator, a crate offering simple validation for Rust structs. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

173 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Online

Asia Pacific

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Ownership is hard. It indeed is. And you managed to do that exact hard thing by hand, without any mechanical checks. (Or so you think.)

– @Cryolite on twitter (translated from Japanese)

Thanks to Xidorn Quan and GolDDranks for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Meet the newest walled garden

Mozilla planet - di, 12/03/2019 - 00:59

Recently, Mark Zuckerberg posted a lengthy note outlining Facebook’s vision to integrate its three messaging services – WhatsApp, Messenger, and Instagram (through its Direct messaging functionality) – into one privacy and security oriented platform. The post positioned Facebook’s future around individual and small group conversations, rather than the “public forum” style communications through Facebook’s newsfeed platform. Initial coverage of the move, largely critical, has focused on the privacy and security aspects of this integrated platform, the history of broken promises on privacy and the changes that would be needed for Facebook’s business model to realize the goal. However, there’s a yet darker side to the proposal, one mostly lost in the post and coverage so far: Facebook is taking one step further to make its family of services into the newest walled garden, at the expense of openness and the broader digital economy.

Here’s the part of the post that highlights the evolution in progress:


Sounds good on its face, right? Except, what Facebook is proposing isn’t interoperability as most use that term. It’s more like intraoperability – making sure the various messaging services in Facebook’s own walled garden all can communicate with each other, not with other businesses and services. In the context of this post, it seems clear that Facebook will intentionally box out other companies, apps, and networks in the course of this consolidation. Rather than creating the next digital platform to take the entire internet economy forward, encouraging downstream innovation, investment, and growth, Facebook is closing out its competitors and citing privacy and security considerations as its rationale.

This is not an isolated incident – it’s a trend. For example, on August 1, 2018, Facebook shut off the “publish_actions” feature in one of its core APIs. This change gutted the practical ability of independent companies and developers to interoperate with Facebook’s services. Some services were forced to disconnect Facebook profiles or stop interconnecting with Facebook entirely. Facebook subsequently changed a long-standing platform policy that had prohibited the use of their APIs to compete, but the damage was done, and the company’s restrictive practices continue.

We can see further evidence of the intent to create a silo under the guise of security in Zuckerberg’s note where he says: “Finally, it would create safety and spam vulnerabilities in an encrypted system to let people send messages from unknown apps where our safety and security systems couldn’t see the patterns of activity.” Security and spam are real problems, but interoperability doesn’t need to mean opening a system up to every incoming message from any service. APIs can be secured by tokens and protected by policies and legal systems.

Without doubt, Facebook needs to prioritize the privacy of its users. Shutting down overly permissive APIs, even at the cost of some amount of competition and interoperability, can be necessary for that purpose – as with the Cambridge Analytica incident. But there’s a difference between protecting users and building silos. And designing APIs that offer effective interoperability with strong privacy and security guarantees is a solvable problem.

The long-term challenge we need to be focused on with Facebook isn’t just whether we can trust the company with our privacy and security – it’s also whether they’re using privacy and security simply as a cover to get away with anticompetitive behavior.

How does this square with the very active conversations around competition and centralization in tech we’re witnessing around the world today? The German competition authority just issued a decision forcing Facebook to stop sharing data amongst its services. This feels like quite the discordant note for Facebook to be casting, even as the company is (presumably) thinking about how to comply with the German decision. Meanwhile, the Federal Trade Commission is actively pursuing an investigation into Facebook’s data practices. And regulators in the U.S., the European Union, India, Israel, and Australia are actively reviewing their antitrust and competition laws to ensure they can respond to the challenges posed by technology and data.

It’s hard to say whether integrating its messaging services will further entrench Facebook’s position, or make it harder to pursue the kinds of remedies directed by the Bundeskartellamt and being considered by politicians around the world. But it seems like Facebook is on a collision course towards finding out.

If Facebook believes that messaging as a platform offers incredible future innovation, the company has a choice. It could either seek to develop that potential within a silo, the way AT&T fostered innovation in telephones in the 1950s – or it could try the way the internet was built to work: offering real interoperability on reasonable terms so that others can innovate downstream.

The post Meet the newest walled garden appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Chris AtLee: Smaller Firefox Updates

Mozilla planet - ma, 11/03/2019 - 23:03

Back in 2014 I blogged about several ideas about how to make Firefox updates smaller.

Since then, we have been able to implement some of these ideas, and we also landed a few unexpected changes!

tl;dr

It's hard to measure exactly what the impact of all these changes is over time. As Firefox continues to evolve, new code and dependencies are added and old code is removed, while at the same time the build system and installer/updater continue to see improvements. Nevertheless, I was interested in comparing what the cumulative impact of these changes has been.

To attempt a comparison, I've taken the latest release of Firefox as of March 6, 2019, which is Firefox 65.0.2. Since most of our users are on Windows, I've downloaded the win64 installer.

Next, I tried to reverse some of the changes made below. I re-compressed omni.ja, used bz2 compression for the MAR files, re-added the deleted images and startup cache, and used the old version of mbsdiff to generate the partial updates.

Format                         Current Size   "Old" Size    Improvement (%)
Installer                      45,693,888     56,725,712    19%
Complete Update                49,410,488     70,366,869    30%
Partial Update (from 64.0.2)   14,935,692     28,080,719    47%

Small updates FTW!

Ideally most of our users are getting partial updates from version to version, and a nearly 50% reduction in partial update size is quite significant! Smaller updates mean users can update more quickly and reliably!

One of the largest contributors to our partial update sizes right now are the binary diff size for compiled code. For example, the patch for xul.dll alone is 13.8MB of the 14.9MB partial update right now. Diffing algorithms like courgette could help here, as could investigations into making our PGO process more deterministic.

Here are some of the things we've done to reduce update sizes in Firefox.

Shipping uncompressed omni.ja files

This one is a bit counter-intuitive. omni.ja files are basically just zip files, and were originally shipped as regular compressed zips. The zip format compresses each file in the archive independently, in contrast to something like .tar.bz2 where the entire archive is compressed at once. Having the individual files in the archive compressed makes both types of updates inefficient: complete updates are larger because compressing (in the MAR file) already compressed data (in the ZIP file) doesn't yield good results, and partial updates are larger because calculating a binary diff between two compressed blobs also doesn't yield good results. In addition, our Windows installers have long used LZMA compression, and after switching to LZMA for update compression as well, LZMA applied to the raw data achieves much better compression ratios than LZMA applied to zip (deflate) compressed data.

The expected impact of this change was ~10% smaller complete updates, ~40% smaller partial updates, and ~15% smaller installers for Windows 64 en-US builds.
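
As a back-of-the-envelope illustration (this is not Mozilla's packaging code, and the file contents are synthetic), the effect can be reproduced with Python's standard library: compressing each zip member individually leaves far less redundancy for a later LZMA pass to exploit than storing the members raw.

```python
# Rough sketch: LZMA over a stored (uncompressed) zip vs. LZMA over a deflated zip.
import io
import lzma
import zipfile

def build_zip(files, compression):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=compression) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

# Synthetic stand-in for the highly redundant JS resources packed into omni.ja.
files = {f"modules/mod{i}.js": b"function f() { return 42; }\n" * 200 for i in range(50)}

deflated = build_zip(files, zipfile.ZIP_DEFLATED)  # each member compressed on its own
stored = build_zip(files, zipfile.ZIP_STORED)      # members stored uncompressed

print("LZMA over deflated zip:", len(lzma.compress(deflated)))
print("LZMA over stored zip:  ", len(lzma.compress(stored)))
```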

Using LZMA compression for updates

Pretty straightforward idea: LZMA does a better job of compression than bz2. We also looked at brotli and zstd for compression, but LZMA performs the best so far for updates, and we're willing to spend quite a bit of CPU time to compress updates for the benefit of faster downloads.

LZMA compressed updates were first shipped for Firefox 56.

The expected impact of this change was 20% reduction for Windows 64 en-US updates.

Disable startup cache generation

This came out of some investigation into why partial updates were so large. I remember digging into this in the Toronto office with Jeff Muizelaar, and we noticed that the startup cache files were among the largest contributors to partial update sizes. The buildid was encoded into the header of startup cache files, which effectively changes the entire compressed file between releases. It was unclear whether shipping these files provided any benefit, so we experimented with turning them off. Telemetry didn't show any impact on startup times, and so we stopped shipping the startup cache as of Firefox 55.

The expected impact of this change was about 25% for a Windows 64 en-US partial update.

Optimized bsdiff

Adam Gashlin was working on a new binary diffing tool called bsopt, meant to generate patch files compatible with bspatch. As part of this work, he discovered that a few changes to the current mbsdiff implementation could substantially reduce partial update sizes. This first landed in Firefox 61.

The expected impact of this change was around 4.5% for partial updates for Window 64 builds.

Removed unused theme images

We removed nearly 1MB of unused images from Firefox 55. This shrinks all complete updates and full installers by about 1MB.

Optimize png images

By using a tool called zopflipng, we were able to losslessly recompress PNG files in-tree, and reduce the total size of these files by 2.4MB, or about 25%.

Reduce duplicate files we ship

We removed a few hundred kilobytes of duplicate files from Firefox 52, and put in place a check to prevent further duplicates from being shipped. It's hard to measure the long term impact of this, but I'd like to think that we've kept bloat to a minimum!

Categorieën: Mozilla-nl planet

Firefox Nightly: Firefox Student Projects in 2018: A Recap

Mozilla planet - ma, 11/03/2019 - 19:24

Firefox is an open-source project, created by a vibrant community of paid and volunteer contributors from all over the world. Did you know that some of those contributors are students, who are sponsored or given course credit to make improvements to Firefox?

In this blog post, we want to talk about some student projects that have wrapped up recently, and also offer the students themselves an opportunity to reflect on their experience working on them.

If you or someone you know might be interested in developing Firefox as a student, there are some handy links at the bottom of this article to help get you started with some student programs. Not a student? No problem – come hack with us anyways!

Now let’s take a look at some interesting things that have happened in Firefox recently, thanks to some hard-working students.

 

Multi-select Tabs by Abdoulaye O. Ly

In the summer of 2018, Abdoulaye O. Ly worked on Firefox Desktop for Google Summer of Code. His project was to work on adding multi-select functionality to the tab bar in Firefox Desktop, and then adding context menu items to work on sets of tabs rather than individual ones. This meant Abdoulaye would be making changes in the heart of one of the most complicated and most important parts of Firefox’s user interface.

Abdoulaye’s project was a smash success! After a few months of baking and polish in Nightly, multi-select tabs shipped enabled by default to the general Firefox audience in Firefox 64. It was one of the top-line features for that release!

You can try it right now by holding down Ctrl/Cmd and clicking on individual tabs in the tab bar. You can also hold down Shift and select a range of tabs. Then, try right-clicking on the range to perform some operations. This way, you can bookmark a whole set of tabs, send them to another device, or close them all at once!

Here’s what Abdoulaye had to say about working on the project:

Being part of the multi-select project was one of my best experiences so far. Indeed, I had the opportunity to implement features that are being used by millions of Firefox users. In addition, it gave me the privilege of receiving a bunch of constructive reviews from my mentor and other Mozilla engineers, which has greatly boosted my software development skills. Now, I feel less intimidated when skimming through large code bases. Another aspect on which I have also made significant progress is on team collaboration, which was the most challenging part of my GSoC internship.

We want to thank Abdoulaye for collaborating with us on this long sought-after feature! He will continue his involvement at Mozilla with a summer internship in our Toronto office. Congratulations on shipping, and great work!

 

Better Certificate Error Pages by Trisha Gupta

University student Trisha Gupta contributed to Firefox as part of an Outreachy open source internship.

Her project was to make improvements to the certificate error pages that Firefox users see when a website presents a (seemingly) invalid security certificate. These sorts of errors can show up for a variety of reasons, only some of which are the fault of the websites themselves.

The Firefox user experience and security engineering teams collaborated on finding ways to convey these types of errors to users in a better, more understandable way. They produced a set of designs, fine-tuned them in user testing, and handed them off to Trisha so she could start implementing them in Firefox.

In some cases, this meant adding entirely new pages, such as the “clock skew” error page. That page tells users that the certificate error is caused by their system clocks being off by a few years, which happens a lot more often than one might think.

The new certificate error page displayed when the computer clock is set incorrectly.

What year is it?

We are really grateful for Trisha’s fantastic work on this important project, and for the Outreachy program that enabled her to find this opportunity. Here is what Trisha says about her internship:

The whole experience working with the Mozillians was beyond fruitful. Everyone on my team, especially my mentor were very helpful and welcoming. It was my first time working with such a huge company and such an important team. Hence, the biggest challenge for me was to practically not let anybody down. I was very pleasantly surprised when at All Hands, all my team members were coding in homerooms together, and each one of us had something to learn from the other, regardless of the hierarchy or position or experience! It was overwhelming yet motivating to see the quality of code, the cohesion in teamwork and the common goal to make the web safer.

Right from filing bugs, to writing patches, to seeing them pass the tests and not breaking the tree, the whole process of code correction, review, testing and submission was a great learning experience. Of course, seeing the error pages live in Nightly was the most satisfying feeling ever, and I am super grateful to Mozilla for the best summer of my life!”

Trisha’s project is currently enabled in Firefox Nightly and Beta and scheduled to be released to the general user population in Firefox 66.

 

Dark Theme Darkening by Dylan Stokes, Vivek Dhingra, Lian Zhengyi, Connor Masini, and Bogdan Pozderca

Michigan State University students Dylan Stokes, Vivek Dhingra, Lian Zhengyi, Connor Masini, and Bogdan Pozderca extended Firefox's Theming API to allow theme authors to style more parts of the browser UI and to increase cross-browser compatibility with Google Chrome themes.

The students worked as part of their CSE498: Collaborative Design course, often called their “Capstone.” Students enroll in this course during their last year of an undergraduate degree in computer science and are assigned a company in the industry to work with for a semester project.

With the work contributed by this team, themes can now customize the Firefox menu, findbar, location bar and search box dropdown, icon colors and more. These new features are also used by the “Dark” theme that ships by default with Firefox. Dylan Stokes published an end-of-semester blog post on the Mozilla Hacks blog that goes into further details of the technical work that the team did. The following video was created by the team to recap their work: Customizable theme development in Firefox 61
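
For readers who want to experiment, here is a rough sketch of what a WebExtension theme manifest using some of these newly stylable areas can look like. The property names below are quoted from memory rather than from the team's patches, so treat the theme documentation as the source of truth:

```json
{
  "manifest_version": 2,
  "name": "Illustrative dark-ish theme",
  "version": "1.0",
  "theme": {
    "colors": {
      "frame": "#0c0c0d",
      "toolbar": "#323234",
      "toolbar_text": "#f9f9fa",
      "toolbar_field": "#474749",
      "toolbar_field_text": "#f9f9fa",
      "popup": "#2a2a2e",
      "popup_text": "#f9f9fa",
      "icons": "#f9f9fa"
    }
  }
}
```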

Here’s what Vivek had to say about the project:

Working on ‘Dark Theme Darkening’ was one of the most rewarding experiences of my student life. It was extremely motivating to code something and see that deployed on different Firefox branches in weeks. Emphasis on thorough testing, rigorous code reviews, efficient communication with Mozilla contributors across the globe and working in a fast pace team environment are just some of the many useful experiences I had as part of this project.

Mike [Conley] and Jared [Wein] were also kind enough to spend a weekend with us in East Lansing, MI. None of us expected that as we got started on the project so we were very thankful for their time and effort for having a coding marathon with us, and answering all our questions.

Mike and Jared also had a very thorough project plan for us which made it easier to work in an organized manner. We also had the opportunity to interact with Mozilla contributors and employees across the world as we worked on different tasks. I love working in diverse teams and as part of this project, I had the opportunity to do that a lot.

Dylan also had some comments about the project:

Working on the Dark Theme Darkening project was an amazing opportunity. It was the first time I got a sense of developing for a large application. It was a bit overwhelming at first, but our mentors: Jared, Mike, and Tim [Nguyen] were extremely helpful in guiding us to get started crushing our first bugs.

The code review process really helped me grow as a developer. Not only was I writing working code; I was writing efficient, production level code. It still amazes me that code that I was able to write is currently being used by millions of users.

One thing that surprised me the most was the community. Anyone I interacted with wanted to help in anyway they could. We were all part of a team and everyone’s end goal was to create a web browser that is fast for good.

The students’ work for the Dark Theme Darkening project shipped in Firefox 61. See their work by going to the Firefox menu, choosing Customize, then enabling the Dark theme. You can also create your own theme with the colors of your choice.

 

Other projects

We’ve just shown you 3 projects, but a whole bunch of other amazing students have been working on improving Firefox in 2018. A few more examples:

And that’s just the projects that focused on Firefox development itself! The Mozilla community mentored so many projects that it would be too much for us to list them all here, so please check them out on the project pages for GSoC and Outreachy.

 

Get Involved

The projects described above are just a small sampling of the type of work that students have contributed to the Mozilla project. If you’re interested in getting involved, consider signing up to Mozilla’s Open Source Student Network and keep checking for opportunities at Google Summer of Code, Outreachy, codetribute. Finally, you should consider applying for an internship with Mozilla. Applications for the next recruiting cycle (Summer 2020) open in September/October 2019.

Feel free to connect with any of us through Mozilla’s IRC servers if you have any questions related to becoming a new contributor to Firefox. Thanks, and happy hacking!

Thanks

We are really grateful for all Mozilla contributors who invested countless hours mentoring a student in the last year and for all the great people driving our student programs behind the scenes.

A special thank you to Mike Conley and Jared Wein for authoring this blog post with me.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Spring Cleaning with Browser Extensions

Mozilla planet - ma, 11/03/2019 - 15:55

Flowers in bloom, birds singing, cluttered debris everywhere. It’s Spring cleaning season. We may not be able to help with that mystery odor in the garage, but here are some … Read more

The post Spring Cleaning with Browser Extensions appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet
