Mozilla Nederland: the Dutch Mozilla community

Mic Berman: What do you want for your life? Knowing oneself

Mozilla planet - ma, 07/08/2017 - 16:52
[Image: Roman "know thyself" mosaic (Socrates)]

 

Many people wiser than me have offered knowing yourself as a valuable pursuit, one that brings great rewards.

 

Here are a few of my favourite quotes on why to do this:

 

This is how I invest in knowing myself - I hope it inspires you to create your own practice:

  1. I spend time understanding my motivations, my values in action or inaction, and my triggers. I work with my coach to deconstruct situations that were particularly difficult or rewarding, or where I was overwhelmed by emotion and didn't feel I could think rationally. I check in to get crystal clear on what is going on, how I am feeling, what the trigger(s) were, and how I will choose to act going forward.
  2. I challenge myself in areas where I want to understand more about myself by reading, going to lectures, and sharing honestly with learned or experienced friends.
  3. I keep a daily journal - particularly around areas of my life I want to change or improve, like being on time and creating sufficient time in my day for reflection. I've long run my calendar to 'maximize time and service', i.e. every available minute is spent in meetings, working on a project, etc. This is not only unsustainable for me, it doesn't leave me any room for the unexpected or, more importantly, an opportunity to reflect on what may have just happened or to prepare for what or who I may be seeing next. That is not fair to me or to the people I work with.
Categorieën: Mozilla-nl planet

Mic Berman: How are you taking care of yourself?

Mozilla planet - ma, 07/08/2017 - 16:42

The leaders I coach drive themselves and their teams to great achievements, are engaged in what they do, love their work and have passion and compassion in how they work for their teams and customers. They face tough situations - impossible-seeming deadlines or goals, difficult conversations, constant re-balancing of work-life priorities, and crazy business scenarios we’ve never faced before.

Their days can be both energizing and completely draining. And each day they face those choices and predicaments, at times with full grace and at others with total foolishness.

Along the way I hear and offer the questions: how are you taking care of yourself? How will you rejuvenate? How will you maintain balance? I ask these questions of the leaders I work with so that they can keep driving their goals, over-achieving each day and showing up for the important people in their lives :)

 

I focus on three ways to do this myself.

  • Knowing myself - spending time to understand and check in with my values, triggers, and motivations.

  • Doing a daily practice - I've created a daily and weekly practice that touches on my mind, body and spirit. This discipline and evolving practice keeps me learning, present and 'in balance'.

  • Being discerning about my influences - choosing the people, experiences and beauty that influence my life, and what's important about that today, this week, this month or this year.

Categorieën: Mozilla-nl planet

Shing Lyu: Porting Chrome Extension to Firefox

Mozilla planet - ma, 07/08/2017 - 08:37

Edit: Andreas from the Mozilla Add-on team pointed out a few errors. I'll keep them here until I can inline them into the post:

  • Do NOT create a new listing for the extension on AMO; upload the new version and replace your legacy extension using the same listing.
  • The user drop is related to https://blog.mozilla.org/addons/2017/06/21/upcoming-changes-usage-statistics/
  • The web-ext run should work without an ID
  • strict_min_version is not mandatory

Three years ago, I wrote FocusBlocker to help me focus on my master's thesis. It's basically a website blocker that stops me from checking Facebook every five minutes. But it is different from other blockers like LeechBlock, which require you to set a fixed schedule. FocusBlocker lets you set a quota, e.g. I can browse Facebook for 10 minutes, then it gets blocked for 50 minutes. So as long as you have remaining quota, you can check Facebook anytime. I'm glad that other people find it useful, and I even got my first donation through AMO from happy users.

Since this extension serves my needs, I haven't been actively maintaining it or adding new features. But I was aware of Firefox's transition from the legacy Add-on SDK to the WebExtension API, so before the WebExtension API was fully available, I started to migrate it to Chrome's extension format. But I didn't get the time to actually migrate it back to Firefox until a user emailed me asking for a WebExtension version. I looked into the statistics: the daily active user count had dropped from ~1000 to ~300. That's when I rolled up my sleeves and actually migrated it in one day. Here is how I did it and what I've learned from the process.

[Chart: daily active user count over time]

What needs to be changed

To evaluate the scope of the work, we first need to look at what APIs I used. The FocusBlocker Chrome version uses three main APIs:

  • chrome.tabs: To monitor new tabs opening and to actually block existing tabs.
  • chrome.alarms: To set timers for blocking and unblocking.
  • chrome.storage.sync: To store the settings and persist the timer across browser restarts.

It’s nice that these APIs are all supported (at least the parts I used) in Firefox, so I don’t really need to modify any JavaScript code.

I loaded the manifest directly in Firefox's about:debugging page (you can also consider using the convenient web-ext command line tool), but Firefox rejected it.

[Screenshot: about:debugging]

That's because Firefox requires you to set a unique id for each extension (you can read more about the id requirement here). I also set a minimum version of Firefox on which the extension works (although, per the corrections above, strict_min_version is not mandatory), like so:

"applications": { "gecko": { "id": "focusblocker@shing.lyu", "strict_min_version": "48.0" } },

There is one more modification needed. In my Chrome extension I used the old options_page setting to set the preference page, but Firefox only supports the newer options_ui. You can also apply the browser's system style to your settings page, so the UI looks like part of the Firefox settings; Firefox generalized the name of this option from chrome_style to browser_style. So this is what I needed to add to my manifest.json file (and remove the options_page setting):

"options_ui": { "page": "options.html", "browser_style": true },

[Screenshots: about:addons and the options page with browser_style applied]

That's all I needed to port the extension from Chrome to Firefox. Super easy! The WebExtension team really did a good job of making extensions compatible. In case you are curious, you can find the full source code of FocusBlocker on GitHub.

Publishing the extension on AMO

To publish the extension on addons.mozilla.org, you need to package all the files into a zip archive and upload it. Here are some tips for passing the review more easily.

  • You can’t just upload a WebExtension-API-backed extension to replace your already-listed legacy extension, so please create a new listing.
  • Don't pack any unnecessary files into the zip; exclude all temporary test files.
  • Remove or comment out all the console.log() calls. It's not a strict requirement, but it will make the review process much smoother.
  • If you use any third party library, consider including (i.e. “vendoring”) the file into the zip, or at least upload the source for review.
  • If you've uploaded one version and you'd like to make some modifications or fixes, you need to bump the version number, no matter how small the change is.

Firefox is planning to completely roll out the new format in version 57 (around November 2017). So if you have a legacy Firefox extension, or a Chrome extension you want to convert, now is the perfect time.

If you want to try out the new FocusBlocker, please head to the install page. You can also find the Chrome version here.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Stabilizing The rr Trace Format With Cap’n Proto

Mozilla planet - ma, 07/08/2017 - 06:00

In the past we've modified the rr trace format quite frequently, and there has been no backward or forward compatibility. In particular most of the time when people update rr — and definitely when updating between releases — all their existing traces become unreplayable. This is a problem for rr-based services, so over the last few weeks I've been fixing it.

Prior to stabilization I made all the trace format updates that were obviously already desirable. I extended the event counter to 64 bits since a pathological testcase could overflow 2^31 events in less than a day. I simplified the event types to eliminate some unnecessary or redundant events. I switched the compression algorithm from zlib to brotli.

Of course it's not realistic to expect that the trace format is now perfect and won't ever need to be updated again. We need an extensible format so that future versions of rr can add to it and still be able to read older traces. Enter Cap’n Proto! Cap’n Proto lets us write a schema describing types for our trace records and then update that schema over time in constrained ways. Cap’n Proto generates code to read and write records and guarantees that data using older versions of the schema is readable by implementations using newer versions. (It also has guarantees in the other direction, but we're not planning to rely on them.)

This has all landed now, so the next rr release should be the last one to break compatibility with old traces. I say should, because something could still go wrong!

One issue that wasn't obvious to me when I started writing the schema is that rr can't use Cap'n Proto's Text type — because that requires text to be valid UTF-8, and most of rr's strings are data like Linux pathnames, which are not guaranteed to be valid UTF-8. For those I had to use the Data type instead (an array of bytes).
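As a quick illustration (this snippet is mine, not from the post), a Linux path really can contain bytes that no UTF-8 string is allowed to hold:

// Unix-only illustration: a path containing bytes that are not valid UTF-8.
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;

let path = OsStr::from_bytes(b"/tmp/\xff\xfe-not-utf8");
// It is a perfectly good OsStr, but it cannot be viewed as a &str,
// so a schema field typed as Text could not represent it.
assert!(path.to_str().is_none());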

Another interesting issue involves choosing between signed and unsigned integers. For example a file descriptor can't be negative, but Unix file descriptors are given type int in kernel APIs ... so should the schema declare them signed or not? I made them signed, on the grounds that we can then check while reading traces that the values are non-negative, and when using the file descriptor we don't have to worry about the value overflowing as we coerce it to an int.
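To make that concrete, here is a minimal sketch of the kind of check a trace reader gets to do; the function and error handling are hypothetical, not rr's actual code:

use std::convert::TryFrom;

// Hypothetical helper: the schema stores the fd as a signed integer, so a
// reader can reject corrupt traces up front and coerce safely afterwards.
fn fd_from_trace(raw: i64) -> Result<i32, String> {
    if raw < 0 {
        return Err(format!("invalid trace: negative fd {}", raw));
    }
    i32::try_from(raw).map_err(|_| format!("fd {} does not fit in an int", raw))
}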

I wrote a microbenchmark to evaluate the performance impact of this change. It performs 500K trivial (non-buffered) system calls, producing 1M events (an 'entry' and 'exit' event per system call). My initial Cap’n Proto implementation (using "packed messages") slowed rr recording down from 12 to 14 seconds. After some profiling and small optimizations, it slows rr recording down from 9.5 to 10.5 seconds — most of the optimizations benefited both configurations. I don't think this overhead will have any practical impact: any workload with such a high frequency of non-buffered system calls is already performing very poorly under rr (the non-rr time for this test is only about 20 milliseconds), and if it occurred in practice we'd buffer the relevant system calls.

One surprising datum is that using Cap’n Proto made the event data significantly smaller — from 7.0MB to 5.0MB (both after compression with brotli-5). I do not have an explanation for this.

Another happy side effect of this change is that it's now a bit easier to read rr traces from other languages supported by Cap’n Proto.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR2 available

Mozilla planet - zo, 06/08/2017 - 07:51
As I type in what is not quite the worst hotel room in Mountain View while Rockford Files reruns play in the background, TenFourFox FPR2 final is available for testing (downloads, hashes, release notes). The original plan was not to have a Debug build with this release, but we're still trying to smoke out issue 72, so there is a Debug build as well. Again, it is not intended for general use unless you know what you're doing and why.

The only differences between this and the beta, besides the usual certificate, HPKP and HSTS updates, are some additional debug sections in the widget code for issue 72 and the remaining security and stability update backports. One of these updates fixes a bug in HTTP/2 transactions which helps reduce latency and dropped connections on some sites, notably many Google properties and some CDNs, and affects pretty much any version of Firefox since HTTP/2 support was added. As always, the plan is to go live on Monday PM Pacific.

Day 2 of the Vintage Computer Festival West is tomorrow! Be there, or, um, be not there! And that is clearly worse!

Categorieën: Mozilla-nl planet

Ehsan Akhgari: Quantum Flow Engineering Newsletter #18

Mozilla planet - vr, 04/08/2017 - 10:04

This has been a busy week. A lot of fixes have landed, setting up the Firefox 57 cycle for a good start. On the platform side, a notable change that will be in the upcoming Nightly is the fix for document.cookie using synchronous IPC. This super popular API call slows down various web pages today in Firefox, and starting from tomorrow, the affected pages should experience a great speedup. I have sometimes seen the slowdown caused by this one issue amount to a second or more in some situations. Thanks a lot to Amy and Josh for their hard work on this feature. The readers of these newsletters know that the work on fixing this issue has gone on for a long time, and it's great to see it land early in the cycle.

On the front-end side, more and more of the UI changes of Photon are landing in Nightly. One of the overall changes that I have seen is that the interface is starting to feel a lot more responsive and snappy than it was around this time last year. This is due to many different details. A lot of work has gone into fixing rough edges in the performance of the existing code, some of which I have covered but most of which is under the Photon Performance project. Also the new UI is built with performance in mind, so for example where animations are used, they use the compositor and don't run on the main thread. All of the pieces of this performance puzzle are coming together nicely, and it is great to see that this hard work is paying off.

On the Speedometer front, things are progressing at a fast pace. We have been fixing issues that have been on our list from the previous findings, which has somewhat slowed down the pace of finding new issues to work on. The SpiderMonkey team hasn't waited around, though, and is continually finding new optimization opportunities through further investigation. There is still more work to be done there!

I will now move on to acknowledge the great work of all of those who helped make Firefox faster last week. I hope I am not mistakenly forgetting any names here!

Categorieën: Mozilla-nl planet

Cameron Kaiser: And now for something completely different: when good Mac apps go bought

Mozilla planet - vr, 04/08/2017 - 09:38
The Unarchiver, one of the more handy tools for, uh, unarchiving, um, archives, is now a commercial app. 3.11.1 can run on 10.4 PowerPC, but the "legacy" download they offer has a defective resource fork, and the source code is no longer available.

The same author also wrote an image display tool called Xee. 2.2 would run on 10.4 PowerPC. After Unarchiver's purchase, it seems Xee was part of the same deal and now only Xee 3 is available.

Fortunately my inveterate digital hoarding habit came in handy, because I managed to get both a working archive of The Unarchiver 3.11.1 and Xee 2.2 and the source code, so I can try to maintain them for our older platforms. (Xee I have compiling and running happily; Unarchiver will need a little work, but it's doable.) But that's kind of the trick, isn't it? If I hadn't thought to grab these and their source code a number of months ago as part of my standard operating procedure, they'd be gone, probably forever. I'm sure MacPaw (the new owners) are good people but I don't foresee them putting any time in to toss a bone to legacy Power Macs, let alone actually continue support. When these things happen without warning to a long-time open source free utility, that's even worse.

That said, the X Lossless Decoder, which I use regularly to rip CDs and change audio formats and did donate to, is still trucking along. Here's a real Universal app: it runs on any system from 10.4 to 10.12, on PowerPC or Intel, and the latest version of July 29, 2017 actually fixes a Tiger PowerPC bug. I'm getting worried about its future on our old machines, though: it's a 32-bit Intel app, and Apple has ominously said High Sierra "will be the last macOS release to support 32-bit apps without compromise." They haven't said what they mean by that, but my guess is that 10.14 might be the first release where Intel 32-bit Carbon apps either no longer run or have certain features disabled, and it's very possible 10.15 might not run any 32-bit applications (Carbon or Cocoa) at all. It might be possible to build 64-bit Intel and still lipo it with a 32-bit PowerPC executable, but these are the kinds of situations that get previously working configurations tossed in the eff-it bucket, especially if the code bases for each side of the fat binary end up diverging substantially. I guess I'd better grab a source snapshot too just in case.

As these long-lived apps founder and obsolesce, if you want something kept right, you keep it yourself.

Categorieën: Mozilla-nl planet

David Teller: Towards a JavaScript Binary AST

Mozilla planet - do, 03/08/2017 - 23:31
In this blog post, I would like to introduce the JavaScript Binary AST, an ongoing project that we hope will help make webpages load faster, along with a number of other benefits.

A little background

Over the years, JavaScript has grown from one of the slowest scripting languages available to a high-performance powerhouse, fast enough that it can run desktop, server, mobile and even embedded applications, whether through web browsers or other environments.
Categorieën: Mozilla-nl planet

Air Mozilla: Intern Presentations: Round 3: Thursday, August 3rd

Mozilla planet - do, 03/08/2017 - 22:00

Thursday, August 3rd Intern Presentations. 10 presenters. Time: 1:00PM - 3:30PM (PDT); each presenter will start every 15 minutes. 8 presenters in SF, 2 in PDX.

Categorieën: Mozilla-nl planet


Firefox Test Pilot: Gaining Insights Early: Concept Evaluation for Firefox Send

Mozilla planet - do, 03/08/2017 - 21:23

Earlier this week, the Firefox Send experiment launched in Test Pilot. The experiment allows people to transfer files in a simple and secure way using Firefox.

The idea for Send stemmed from what we’ve learned from past research about what people do online. For instance, from the multi-phase study on workflows conducted by the Firefox User Research team last year, we know that transferring files — images, text documents, videos — to oneself or other people is an atomic step within many common workflows. Once the Test Pilot team has an idea for an experiment, one of the first steps in our process is to evaluate the viability of that idea or concept.

Concept evaluation vs. usability testing

During the early idea stage of an experiment, our user research efforts focus on trying to understand the problem space the experiment is intended to address. We do this by seeking answers to the following types of questions:

  • Does the experiment idea address an existing need participants have?
  • Is the experiment intelligible by participants who might use it?
  • Which, if any, mental models do participants most closely associate with the experiment idea?
  • Is the experiment idea comparable to any tools participants are already using? If so, how is the experiment idea unique?
  • What expectations do participants have of the experiment?
  • Do participants have any concerns about the experiment?

The findings from this early research help determine the kinds and amount of effort that our product, UX, visual design, content strategy, and engineering team members will invest in the experiment moving forward. At this stage, we are less concerned with how usable an early design of an experiment is because we know that as we grow our understanding of existing needs, behaviors, mental models, and attitudes, the design could change significantly.

Research design for Firefox Send

To evaluate the idea for Firefox Send, we conducted remote interviews with individuals who reported having transferred a file online in the last week. Five participants were recruited to represent a mix of age, gender, ethnicity, U.S. location, level of educational attainment, household income, and primary desktop browser. Participants were asked to complete a series of think-aloud tasks using a desktop prototype of Send and were also asked about the ways they currently transfer files online. Each interview lasted approximately 45 minutes, and in addition to the researcher, other members of the Test Pilot team joined the interviews as notetakers and observers.

One of the screens from the prototype used in the Firefox Send concept evaluation

What We Learned

  • Participants use a variety of methods to transfer files online. Email was the most common method reported. Methods like email that involve uploading locally-stored files in order to share were perceived as more secure — because of greater perceived control over local files — than sharing that commenced from files already in the cloud.
  • Participants could not tell what the Send feature did based on the early UI for the browser toolbar.
  • Participants expected to be able to email the share link from the Send UI, which would require integration with email services.
  • Participants were unclear whether people receiving files transferred via Send had to use Firefox to access the files.
  • Participants expected file view and download settings to be more flexible than suggested by the prototype UI. For example, one participant noted that the default one download limit would be problematic for her because she sometimes needs to download the same file on her work computer and then on her computer at home.
  • No participants expressed preference for cloud versus peer-to-peer file sharing.

What We Needed to Do After Research

The Send concept evaluation study produced a list of 15 detailed recommendations for the Test Pilot team. To summarize, we needed to take three actions:

  1. Make the Send functionality more discernible (and accessible) in the browser
  2. Make the secure nature of transferring files via Send apparent
  3. Give people more control over how long and/or how many times shared files can be viewed

What’s next

Now that Firefox Send has launched, we will monitor metrics and conduct additional qualitative research to understand the usability of the experiment for people transferring files as well as people receiving files. Give Send a try.

The report

The full report contains detailed findings and recommendations

View the full report from this study. The Test Pilot team is working hard to make all of our user research reports public in the remainder of this year. As we do this, links to other study-related documents may break. If you have any questions about details related to Test Pilot user research, please contact Sharon Bautista at: sharon@mozilla.com

Gaining Insights Early: Concept Evaluation for Firefox Send was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Aug. 03, 2017

Mozilla planet - do, 03/08/2017 - 18:00

Reps Weekly Meeting Aug. 03, 2017. This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet


Mozilla Addons Blog: Extension Examples: See the APIs in Action

Mozilla planet - do, 03/08/2017 - 17:00

In the past year, we’ve added a tremendous amount of add-on documentation to MDN Web Docs. One resource we’ve spent time building out is the Extension Examples repository on GitHub, where you can see sample extension code using various APIs. This is helpful for seeing how WebExtensions APIs are used in practice, and it is especially helpful for people just getting started building extensions.

To make the example extensions easier to understand, there is a short README page for each example. There is also a page on MDN Web Docs that lists the JavaScript APIs used in each example.

With the work the Firefox Developer Tools team has completed for add-on developers, it is easier to temporarily install extensions in Firefox for debugging purposes. Feel free to try it out with the example extensions.

As we ramp up our efforts for Firefox 57, expect more documentation and examples to be available on MDN Web Docs and our GitHub repository. There are currently 47 example extensions, and you can help grow the collection by following these instructions.

Let us know in the comments if you find these examples useful, or contact us using these methods. We encourage you to contribute your own examples as well!

Thank you to all who have contributed to growing the repository.

The post Extension Examples: See the APIs in Action appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Mozilla Open Innovation Team: Frameworks for Governance, Incentive and Consequence in FOSS

Mozilla planet - do, 03/08/2017 - 16:44

This is the fourth in a series of posts reporting findings from research into the state of D&I in Mozilla’s communities. The current state of our communities is a mix, when it comes to inclusivity: we can do better, and as with the others, this blog post is an effort to be transparent about what we’ve learned in working toward that goal.

Mobilizing the Community Participation Guidelines

In May 2017 after extensive community feedback we revised our guidelines to be much more specific, comprehensible, and actionable.

Click to view Mozilla’s Community Participation Guidelines in Full

In community D&I research interviews, we asked people what they knew about Mozilla's Community Participation Guidelines. A majority were not aware of the CPG or, we suspect, shared guesses based on what they knew about Codes of Conduct generally. And while awareness is growing thanks to circulated feedback and learning opportunities, there remain many 'myths to bust' around our guidelines, who they apply to, and why they are as much a toolkit for community health and empowerment as they are for consequence.

…this moment in time is a pivotal one for all open projects who have adopted a Code of Conduct: We’re at a critical stage in making inclusive open project governance effective, and understood — real.

And this is not only true for Mozilla. In recent conversations with other open project leaders, I've started to see that this moment in time is a pivotal one for all open projects that have adopted a Code of Conduct: We're at a critical stage in making inclusive open project governance effective, and understood — real. While effectively enforcing our guidelines will at times feel uncomfortable, and even be met with resistance, there are far more people who will celebrate and embrace empowerment, safety and inclusion.

Photo credit: florianric via Visual Hunt

I tried to imagine the necessary components in developing a framework for embedding our CPG in our community workflows and culture. (And as much as possible we need to collaborate with other open source communities, building-on and extending each other’s work.)

Education — Curated learning resources, online and in-person that deliver meaningful and personalized opportunities to interact with the guidelines, and ways to measure educational approaches to inclusion across differences including cultural and regional ones.

Culture & Advocacy — Often the first time people interact with a Code of Conduct it’s in response to something negative — the CPG needs champions and experiments in building trust, self-reflection, empowerment, psychological safety, and opportunity.

Designed Reporting & Resolution Processes — Well-designed resolution processes mean getting serious about building templates, resources, investigative methods, and decision-making workflows. We're starting to do just this, testing it with regional community conflicts. It also means building on the work of our peers in other open communities; and we're starting to do that too.

Consultation and Consensus — As part of resolution process — understanding and engaging key stakeholders, and important perspectives will drive effective resolutions, and key health initiatives. Right now this is showing up in formation of conflict-specific working groups, but it should also leverage what we’ve learned from the past.

Development — Strengthening our guidelines by treating them as a living document, improving as we learn.

Standardizing Incentive

Photo credit: mozillaeu via VisualHunt

Mozilla communities are filled with opportunity — opportunity to learn, grow, innovate, build, collaborate and be the change the world needs. And this enthusiasm overflowed in interviews — even when gatekeeping, and other negative attributes of community health were present.

While feeling valued was important, our interviews highlighted the need for contributors to surface and curate their accomplishments in formats that can be validated by external sources as having real-world-value.

Despite the positive sentiment and optimism, we heard a great deal of frustration (and some tears) when people were asked to discuss elements of participatory design that made contributing feel valuable to them. Opportunity, recognition and resources were perceived to be largely dependent on staff and core contributors. Additionally, recognition itself varies wildly across the project, to the point of omitting or inflating achievement and impact on the project. We heard that those best at being seen are also the loudest and most consistent at seeking recognition — further proof that meritocracy doesn't exist.

While feeling valued was important, our interviews highlighted the need for contributors to surface and curate their accomplishments in formats that can be validated by external sources as having real-world-value.

“Social connections are the only way volunteers progress: You are limited by what you know, and who you know, not by what you do” (community interview)

Emerging from this research was a sense that standards for recognition across the project would be incredibly valuable in combating variability, creating visions for success and surfacing achievements. Minimally, standards help people understand where they are going, and the potential of their success; most optimistically, standards make contributing a portal for learning and achievement to rival formal education and mentorship programs. The success of diverse groups is almost certainly dependent on getting recognition right.

If you are involved in open source project governance, please reach out! I would love to talk to you — to build a bridge between our work and yours ❤

Our next post in this series ‘Designing Inclusive Events’, will be published in the second week of August.

Frameworks for Governance, Incentive and Consequence in FOSS was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet


Nick Fitzgerald: Scrapmetal — Scrap Your Rust Boilerplate

Mozilla planet - do, 03/08/2017 - 09:00

TLDR: I translated some of the code and ideas from Scrap Your Boilerplate: A Practical Design Pattern for Generic Programming by Lämmel and Peyton Jones to Rust and it’s available as the scrapmetal crate.

Say we work on some software that models companies, their departments, sub-departments, employees, and salaries. We might have some type definitions similar to this:

pub struct Company(pub Vec<Department>);
pub struct Department(pub Name, pub Manager, pub Vec<SubUnit>);

pub enum SubUnit {
    Person(Employee),
    Department(Box<Department>),
}

pub struct Employee(pub Person, pub Salary);
pub struct Person(pub Name, pub Address);
pub struct Salary(pub f64);

pub type Manager = Employee;
pub type Name = &'static str;
pub type Address = &'static str;

One of our companies has had a morale problem lately, and we want to transform it into a new company where everyone is excited to come in every Monday through Friday morning. But we can’t really change the nature of the work, so we figure we can just give the whole company a 10% raise and call it close enough. This requires writing a bunch of functions with type signatures like fn(self, k: f64) -> Self for every type that makes up a Company, and since we recognize the pattern, we should be good Rustaceans and formalize it with a trait:

pub trait Increase: Sized {
    fn increase(self, k: f64) -> Self;
}

A company with increased employee salaries is made by increasing the salaries of each of its departments’ employees:

impl Increase for Company {
    fn increase(self, k: f64) -> Company {
        Company(
            self.0
                .into_iter()
                .map(|d| d.increase(k))
                .collect()
        )
    }
}

A department with increased employee salaries is made by increasing its manager’s salary and the salary of every employee in its sub-units:

impl Increase for Department {
    fn increase(self, k: f64) -> Department {
        Department(
            self.0,
            self.1.increase(k),
            self.2
                .into_iter()
                .map(|s| s.increase(k))
                .collect(),
        )
    }
}

A sub-unit is either a single employee or a sub-department, so either increase the employee’s salary, or increase the salaries of all the people in the sub-department respectively:

impl Increase for SubUnit {
    fn increase(self, k: f64) -> SubUnit {
        match self {
            SubUnit::Person(e) => {
                SubUnit::Person(e.increase(k))
            }
            SubUnit::Department(d) => {
                SubUnit::Department(Box::new(d.increase(k)))
            }
        }
    }
}

An employee with an increased salary, is that same employee with the salary increased:

impl Increase for Employee {
    fn increase(self, k: f64) -> Employee {
        Employee(self.0, self.1.increase(k))
    }
}

And finally, a lone salary can be increased:

impl Increase for Salary {
    fn increase(self, k: f64) -> Salary {
        Salary(self.0 * (1.0 + k))
    }
}

Pretty straightforward.

But at the same time, that’s a whole lot of boilerplate. The only interesting part that has anything to do with actually increasing salaries is the impl Increase for Salary. The rest of the code is just traversal of the data structures. If we were to write a function to rename all the employees in a company, most of this code would remain the same. Surely there’s a way to factor all this boilerplate out so we don’t have to manually write it all the time?
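To make the repetition concrete, here is a sketch (mine, not from the post) of what just the first impl of a hypothetical Rename trait would look like; everything but the method name is identical in shape to Increase:

// Hypothetical Rename trait: the traversal boilerplate is repeated verbatim,
// only the operation being mapped changes.
pub trait Rename: Sized {
    fn rename(self, new_name: Name) -> Self;
}

impl Rename for Company {
    fn rename(self, new_name: Name) -> Company {
        Company(
            self.0
                .into_iter()
                .map(|d| d.rename(new_name))
                .collect(),
        )
    }
}

// ...and so on for Department, SubUnit, and Employee, with the one
// interesting impl being Rename for Person.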

In the paper Scrap Your Boilerplate: A Practical Design Pattern for Generic Programming, Lämmel and Peyton Jones show us a way to do just that in Haskell. And it turns out the ideas mostly translate into Rust pretty well, too. This blog post explores that translation, following much the same outline from the original paper.

When we’re done, we’ll be able to write the exact same salary increasing functionality with just a couple lines:

// Definition
let increase = |s: Salary| Salary(s.0 * 1.1);
let mut increase = Everywhere::new(Transformation::new(increase));

// Usage
let new_company = increase.transform(old_company);

We have a few different moving parts involved here:

  • A function that transforms a specific type: FnMut(T) -> T. In the increase example this is the closure |s: Salary| Salary(s.0 * 1.1).

  • We have Transformation::new, which lifts the transformation function from transforming a single, specific type (FnMut(T) -> T) into transforming all types (for<U> FnMut(U) -> U). If we call this new transformation with a value of type T, then it will apply our T-specific transformation function. If we call it with a value of any other type, it simply returns the given value.

    Of course, Rust doesn’t actually support rank-2 types, but we can work around this by passing a trait with a generic method, anywhere we wanted to pass for<U> FnMut(U) -> U as a parameter. This trait gets implemented by Transformation:

// Essentially, for<T> FnMut(T) -> T
pub trait GenericTransform {
    fn transform<U>(&mut self, t: U) -> U;
}
  • Next is Everywhere::new, whose result is also a for<U> FnMut(U) -> U (aka implements the GenericTransform trait). This is a combinator that takes a generic transformation function, and traverses a tree of values, applying the generic transformation function to each value along the way.

  • Finally, behind the scenes there are two traits: Term and Cast. The former provides enumeration of a value’s immediate edges in the value tree. The latter enables us to ask some generic U if it is a specific T. These traits completely encapsulate the boilerplate we’ve been trying to rid ourselves of, and neither require any implementation on our part. Term can be generated mechanically with a custom derive, and Cast can be implemented (in nightly Rust) with specialization.

Next, we’ll walk through the implementation of each of these bits.

Implementing Cast

The Cast trait is defined like so:

trait Cast<T>: Sized {
    fn cast(self) -> Result<T, Self>;
}

Given some value, we can try and cast it to a T or if that fails, get the original value back. You can think of it like instanceof in JavaScript, but without walking some prototype or inheritance chain. In the original Haskell, cast returns the equivalent of Option<T>, but we need to get the original value back if we ever want to use it again because of Rust’s ownership system.

To implement Cast requires specialization, which is a nightly Rust feature. We start with a default blanket implementation of Cast that fails to perform the conversion:

impl<T, U> Cast<T> for U {
    default fn cast(self) -> Result<T, Self> {
        Err(self)
    }
}

Then we define a specialization for when Self is T that allows the cast to succeed:

impl<T> Cast<T> for T {
    fn cast(self) -> Result<T, Self> {
        Ok(self)
    }
}

That’s it!

Here is Cast in action:

assert_eq!(Cast::<bool>::cast(1), Err(1));
assert_eq!(Cast::<bool>::cast(true), Ok(true));

Implementing Transformation

Once we have Cast, implementing generic transformations is easy. If we can cast the value to our underlying non-generic transformation function’s input type, then we call it. If we can’t, then we return the given value:

pub struct Transformation<F, U>
where
    F: FnMut(U) -> U,
{
    f: F,
}

impl<F, U> GenericTransform for Transformation<F, U>
where
    F: FnMut(U) -> U,
{
    fn transform<T>(&mut self, t: T) -> T {
        // Try to cast the T into a U.
        match Cast::<U>::cast(t) {
            // Call the transformation function and then cast
            // the resulting U back into a T.
            Ok(u) => match Cast::<T>::cast((self.f)(u)) {
                Ok(t) => t,
                Err(_) => unreachable!("If T=U, then U=T."),
            },
            // Not a U, return unchanged.
            Err(t) => t,
        }
    }
}

For example, we can lift the logical negation function into a generic transformer. For booleans, it will return the complement of the value, for other values, it leaves them unchanged:

let mut not = Transformation::new(|b: bool| !b);
assert_eq!(not.transform(true), false);
assert_eq!(not.transform("str"), "str");

Implementing Term

The next piece of the puzzle is Term, which enumerates the direct children of a value. It is defined as follows:

pub trait Term: Sized {
    fn map_one_transform<F>(self, f: &mut F) -> Self
    where
        F: GenericTransform;
}

In the original Haskell, map_one_transform is called gmapT for “generic map transform”, and as mentioned earlier GenericTransform is a workaround for the lack of rank-2 types, and would otherwise be for<U> FnMut(U) -> U.

It is important that map_one_transform does not recursively call its children’s map_one_transform methods. We want a building block for making all different kinds of traversals, not one specific traversal hard coded.

If we were to implement Term for Employee, we would write this:

impl Term for Employee {
    fn map_one_transform<F>(self, f: &mut F) -> Self
    where
        F: GenericTransform,
    {
        Employee(f.transform(self.0), f.transform(self.1))
    }
}

And for SubUnit, it would look like this:

impl Term for SubUnit {
    fn map_one_transform<F>(self, f: &mut F) -> Self
    where
        F: GenericTransform,
    {
        match self {
            SubUnit::Person(e) => SubUnit::Person(f.transform(e)),
            SubUnit::Department(d) => SubUnit::Department(f.transform(d)),
        }
    }
}

On the other hand, a floating point number has no children to speak of, and so it would do less:

impl Term for f64 {
    fn map_one_transform<F>(self, _: &mut F) -> Self
    where
        F: GenericTransform,
    {
        self
    }
}

Note that each of these implementations is driven purely by the structure of the implementation's type: enums transform whichever variant they are, structs and tuples transform each of their fields, etc. It's 100% mechanical and 100% uninteresting.

It’s easy to write a custom derive for implementing Term. After that’s done, we just add #[derive(Term)] to our type definitions:

#[derive(Term)]
pub struct Employee(pub Person, pub Salary);

// Etc...
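The derive itself isn't shown in the post, but purely as an illustration, a proc macro that generates map_one_transform for tuple structs might look roughly like this (crate setup, enums, and the query half are omitted; the real derive in scrapmetal covers more):

// Illustrative sketch only: emit the Term::map_one_transform boilerplate
// for tuple structs such as Employee(Person, Salary).
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, Data, DeriveInput, Fields, Index};

#[proc_macro_derive(Term)]
pub fn derive_term(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = &input.ident;

    // Build `Foo(f.transform(self.0), f.transform(self.1), ...)`.
    let body = match &input.data {
        Data::Struct(s) => match &s.fields {
            Fields::Unnamed(fields) => {
                let transformed = (0..fields.unnamed.len()).map(|i| {
                    let idx = Index::from(i);
                    quote! { f.transform(self.#idx) }
                });
                quote! { #name(#(#transformed),*) }
            }
            _ => panic!("this sketch only handles tuple structs"),
        },
        _ => panic!("this sketch only handles structs"),
    };

    let expanded = quote! {
        impl Term for #name {
            fn map_one_transform<F>(self, f: &mut F) -> Self
            where
                F: GenericTransform,
            {
                #body
            }
        }
    };
    expanded.into()
}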

Implementing Everywhere

Everywhere takes a generic transformation and then uses Term::map_one_transform to recursively apply it to the whole tree. It does so in a bottom up, left to right order.

Its definition and constructor are trivial:

pub struct Everywhere<F>
where
    F: GenericTransform,
{
    f: F,
}

impl<F> Everywhere<F>
where
    F: GenericTransform,
{
    pub fn new(f: F) -> Everywhere<F> {
        Everywhere { f }
    }
}

Then, we implement GenericTransform for Everywhere. First we recursively map across the value’s children, then we transform the given value. This transforming of children first is what causes the traversal to be bottom up.

impl<F> GenericTransform for Everywhere<F>
where
    F: GenericTransform,
{
    fn transform<T>(&mut self, t: T) -> T
    where
        T: Term,
    {
        let t = t.map_one_transform(self);
        self.f.transform(t)
    }
}

If instead we wanted to perform a top down traversal, our choice to implement mapping non-recursively for Term enables us to do so:

impl<F> GenericTransform for EverywhereTopDown<F>
where
    F: GenericTransform,
{
    fn transform<T>(&mut self, t: T) -> T
    where
        T: Term,
    {
        // Calling `transform` before `map_one_transform` now.
        let t = self.f.transform(t);
        t.map_one_transform(self)
    }
}

So What?

At this point, you might be throwing up your hands and complaining about all the infrastructure we had to write in order to get to the two line solution for increasing salaries in a company. Surely all this infrastructure is at least as much code as the original boilerplate? Yes, but this infrastructure can be shared for all the transformations we ever write, and not just for companies, but for values of all types!

For example, if we wanted to make sure every employee in the company was a good culture fit, we might want to rename them all to “Juan Offus”. This is all the code we’d have to write:

// Definition
let rename = |p: Person| Person("Juan Offus", p.1);
let mut rename = Everywhere::new(Transformation::new(rename));

// Usage
let new_company = rename.transform(old_company);

Finally, the paper notes that this technique is more future proof than writing out the boilerplate:

Furthermore, if the data types change – for example, a new form of SubUnit is added – then the per-data-type boilerplate code must be re-generated, but the code for increase [..] is unchanged.

Queries

What if instead of consuming a T and transforming it into a new T, we wanted to non-destructively produce some other kind of result type R? In the Haskell code, generic queries have this type signature:

forall a. Term a => a -> R

Translating this into Rust, thinking about ownership and borrowing semantics, and using a trait with a generic method to avoid rank-2 function types, we get this:

// Essentially, for<T> FnMut(&T) -> R
pub trait GenericQuery<R> {
    fn query<T>(&mut self, t: &T) -> R
    where
        T: Term;
}

Similar to the Transformation type, we have a Query type, which lifts a query function for a particular U type (FnMut(&U) -> R) into a generic query over all types (for<T> FnMut(&T) -> R aka GenericQuery). The catch is that we need some way to create a default instance of R for the cases where our generic query function is invoked on a value that isn’t of type &U. This is what the D: FnMut() -> R is for.

pub struct Query<Q, U, D, R>
where
    Q: FnMut(&U) -> R,
    D: FnMut() -> R,
{
    make_default: D,
    query: Q,
}

When constructing a Query, and our result type R implements the Default trait, we can use Default::default as D:

impl<Q, U, R> Query<Q, U, fn() -> R, R>
where
    Q: FnMut(&U) -> R,
    R: Default,
{
    pub fn new(query: Q) -> Query<Q, U, fn() -> R, R> {
        Query {
            make_default: Default::default,
            query,
        }
    }
}

Otherwise, we require a function that we can invoke to give us a default value when we need one:

impl<Q, U, D, R> Query<Q, U, D, R>
where
    Q: FnMut(&U) -> R,
    D: FnMut() -> R,
{
    pub fn or_else(make_default: D, query: Q) -> Query<Q, U, D, R> {
        Query {
            make_default,
            query,
        }
    }
}

Here we can see Query in action:

let mut char_to_u32 = Query::or_else(|| 42, |c: &char| *c as u32);
assert_eq!(char_to_u32.query(&'a'), 97);
assert_eq!(char_to_u32.query(&'b'), 98);
assert_eq!(char_to_u32.query("str is not a char"), 42);

Next, we extend the Term trait with a map_one_query method, similar to map_one_transform, that applies the generic query to each of self’s direct children.

Note that this produces zero or more R values, not a single R! The original Haskell code returns a list of R values, and its laziness allows one to only actually compute as many as end up getting used. But Rust is not lazy, and is much more explicit about things like physical layout and storage of values. We don’t want to allocate a (generally small) vector on the heap for every single map_one_query call. Instead, we use a callback interface, so that callers can decide if and when to heap allocate the results.

pub trait Term: Sized {
    // ...

    fn map_one_query<Q, R, F>(&self, query: &mut Q, each: F)
    where
        Q: GenericQuery<R>,
        F: FnMut(&mut Q, R);
}

Implementing map_one_query for Employee would look like this:

impl Term for Employee {
    // ...

    fn map_one_query<Q, R, F>(&self, q: &mut Q, mut f: F)
    where
        Q: GenericQuery<R>,
        F: FnMut(&mut Q, R),
    {
        let r = q.query(&self.0);
        f(q, r);

        let r = q.query(&self.1);
        f(q, r);
    }
}

And implementing it for SubUnit like this:

impl Term for SubUnit {
    // ...

    fn map_one_query<Q, R, F>(&self, q: &mut Q, mut f: F)
    where
        Q: GenericQuery<R>,
        F: FnMut(&mut Q, R),
    {
        match *self {
            SubUnit::Person(ref p) => {
                let r = q.query(p);
                f(q, r);
            }
            SubUnit::Department(ref d) => {
                let r = q.query(d);
                f(q, r);
            }
        }
    }
}

Once again, map_one_query’s implementation directly falls out of the structure of the type: querying each field of a struct, matching on a variant and querying each of the matched variant’s children. It is also mechanically implemented inside #[derive(Term)].
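Because the caller controls the callback, the caller also decides whether results get collected at all. A small sketch (the values are illustrative, and the Query type's GenericQuery impl is assumed from the description above):

// Collect the query results for an Employee's two direct children into a Vec.
let mut salary_of = Query::new(|s: &Salary| Some(s.0));
let employee = Employee(Person("Ada", "123 Main St"), Salary(1000.0));

let mut results: Vec<Option<f64>> = Vec::new();
employee.map_one_query(&mut salary_of, |_query, r| results.push(r));

// One entry per direct child: the Person yields the default (None),
// the Salary yields Some(1000.0).
assert_eq!(results, vec![None, Some(1000.0)]);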

The final querying puzzle piece is a combinator putting the one-layer querying traversal together with generic query functions into recursive querying traversal. This is very similar to the Everywhere combinator, but now we also need a folding function to reduce the multiple R values we get from map_one_query into a single resulting R value.

Here is its definition and constructor:

pub struct Everything<Q, R, F>
where
    Q: GenericQuery<R>,
    F: FnMut(R, R) -> R,
{
    q: Q,
    fold: F,
}

impl<Q, R, F> Everything<Q, R, F>
where
    Q: GenericQuery<R>,
    F: FnMut(R, R) -> R,
{
    pub fn new(q: Q, fold: F) -> Everything<Q, R, F> {
        Everything {
            q,
            fold,
        }
    }
}

We implement the Everything query traversal top down by querying the given value before mapping the query across its children and folding their results together. The wrapping into and unwrapping out of Options allow fold and the closure to take r by value; Option is essentially acting as a “move cell”.

impl<Q, R, F> GenericQuery<R> for Everything<Q, R, F>
where
    Q: GenericQuery<R>,
    F: FnMut(R, R) -> R,
{
    fn query<T>(&mut self, t: &T) -> R
    where
        T: Term,
    {
        let mut r = Some(self.q.query(t));
        t.map_one_query(
            self,
            |me, rr| {
                r = Some((me.fold)(r.take().unwrap(), rr));
            },
        );
        r.unwrap()
    }
}
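If the Option-as-move-cell trick is unfamiliar, here is the same idea in isolation (a standalone toy, not scrapmetal code): take the value out, build a new one, and put it back, so the closure never has to return anything.

// Illustrative only: Option used as a "move cell".
let mut cell = Some(String::from("hello"));
{
    let mut append = |suffix: &str| {
        let s = cell.take().unwrap(); // move the String out...
        cell = Some(s + suffix);      // ...and move a new one back in
    };
    append(" world");
}
assert_eq!(cell.as_deref(), Some("hello world"));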

With Everything defined, we can perform generic queries! For example, to find the highest salary paid out in a company, we can query by grabbing an Employee’s salary (wrapped in an Option because we could have a shell company with no employees), and folding all the results together with std::cmp::max:

use std::cmp::max;

// Definition
let get_salary = |e: &Employee| Some(e.1.clone());
let mut query_max_salary = Everything::new(Query::new(get_salary), max);

// Usage
let max_salary = query_max_salary.query(&some_company);

If we were only querying for a single value, for example a Department with a particular name, the Haskell paper shows how we could leverage laziness to avoid traversing the whole search tree once we’ve found an acceptable answer. This is not an option for Rust. To have equivalent functionality, we would need to thread a break-or-continue control value from the query function through to map_one_query implementations. I haven’t implemented this, but if you want to, send me a pull request ;-)

However, we can prune subtrees from the search/traversal with the building blocks we've defined so far. For example, EverywhereBut is a generic transformer combinator that only transforms the subtrees for which its predicate returns true, and leaves other subtrees as they are:

pub struct EverywhereBut<F, P>
where
    F: GenericTransform,
    P: GenericQuery<bool>,
{
    f: F,
    predicate: P,
}

impl<F, P> GenericTransform for EverywhereBut<F, P>
where
    F: GenericTransform,
    P: GenericQuery<bool>,
{
    fn transform<T>(&mut self, t: T) -> T
    where
        T: Term,
    {
        if self.predicate.query(&t) {
            let t = t.map_one_transform(self);
            self.f.transform(t)
        } else {
            t
        }
    }
}

What's Next?

The paper continues by generalizing transforms, queries, and monadic transformations into brain-twisting generic folds over the value tree. Unfortunately, I don’t think that this can be ported to Rust, but maybe you can prove me wrong. I don’t fully grok it yet :)

If the generic folds can't be expressed in Rust, that means that for every new kind of generic operation we might want to perform (e.g. adding a generic cloning operation for<T> FnMut(&T) -> T) we would need to extend the Term trait and its custom derive. The consequence is that downstream crates are constrained to only use the operations predefined by scrapmetal, and can't define their own arbitrary new operations.

The paper is a fun read — go read it!

Finally, check out the scrapmetal crate, play with it, and send me pull requests. I still need to implement Term for all the types that are exported in the standard library, and would love some help in this department. I’d also like to figure out what kinds of operations should come prepackaged, what kinds of traversals and combinators should be built in, and of course some help implementing them.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Fighting Crime Shouldn’t Kill the Internet

Mozilla planet - do, 03/08/2017 - 02:39

The internet has long been a vehicle for creators and commerce. Yesterday, the Senate introduced a bill that would impose significant limitations on the protections that have created vibrant online communities and content platforms, and that allow users to create and consume uncurated material. While well-intentioned, the bill places liabilities on intermediaries that would chill online speech and commerce. This is a counterproductive way to address sex trafficking, the ostensible purpose of the bill.

The internet, from its inception, started as a place to foster platforms and creators. In 1996 a law was passed that was intended to limit illegal content online – the Communications Decency Act (CDA). However, section 230 of the CDA provided protections for intermediaries: if you don’t know about particular illegal content, you aren’t held responsible for it. Intermediaries include platforms, websites, ISPs, and hosting providers, who as a result of CDA 230 are not held responsible for the actions of users. Section 230 is one of the reasons that YouTube, Facebook, Medium and online commenting systems can function without the technical burden or legal risk of screening every piece of user-generated content. Online platforms – love ‘em or hate ‘em – have enabled millions of less technical creators to share their work and opinions.

A fundamental part of the CDA is that it only punishes “knowing conduct” by intermediaries. This protection is missing from the changes this new bill proposes to CDA 230. The authors of the bill appear to be trying to preserve this core balance of the CDA – but they don’t add the “knowing conduct” language back into CDA 230 itself. Because they put it in the sex trafficking criminal statute instead, only Federal criminal cases would need to show that the site knew about the problematic content. The bill would introduce gaps in liability protections into CDA 230 that are not so easily covered. State laws can target intermediary behavior too, and without a “knowing conduct” standard in the CDA directly, platforms of all types could be held liable for conduct of others that they know nothing about. This is also true of the (new) Federal civil right of action that this bill introduces. That means a small drafting choice strikes at the heart of the safe harbor provisions that make CDA 230 a powerful driver of the internet.

This bill is not well scoped to solve the problem, and does not impact the actual perpetrators of sex trafficking. Counterintuitively, it disincentivizes content moderation by removing the safe harbor around the moderation (including automated moderation) that companies develop, in part to detect illegal content like trafficking. And why would a company want to help law enforcement find criminal content on its service when someone is going to turn around and sue it for having had that content in the first place? Small and startup companies who rely on the safe harbor to innovate would face a greater risk environment for any user activity they facilitate. And users would have a much harder time finding places to do business, create, and speak.

The bill claims that CDA was never intended to protect websites that promote trafficking – but it was carefully tailored to ensure that intermediaries are not responsible for the conduct of their users. It has to work this way in order for the internet we know and love to exist. That doesn’t mean law enforcement can’t do its job – the CDA was built to provide ways to go after the bad guys (and to incentivize intermediaries to help). The proposed bill doesn’t do that.

The post Fighting Crime Shouldn’t Kill the Internet appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Daniel Stenberg: The curl bus factor

Mozilla planet - wo, 02/08/2017 - 23:57

bus factor: the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel.

Projects should strive to survive

If a project is worth using and deploying today and if it is a project worth sending patches to right now, it is also a project that should position itself to survive a loss of key individuals. However unlikely or unfortunate such an event would be.

Tools to calculate bus factor

All the available tools that determine the bus factor for a given project run only on the code: they check commits, code churn, or how many files each person has made a significant share of the changes in, and so on.

This number is really impossible to figure out without tools, and tools really cannot take “general knowledge” into account, or the fact that this person answers a lot of email on the list, or that this person already has 48k reputation on Stack Overflow for responding to questions about the project.

The bus factor as evaluated by a tool pretty much has to be about amount of code, size of code or number of code changes, which may or may not be a good indicator of who knows what about the code. Those who author and commit changes probably have a good idea but a real problem is that you can’t reverse that view and say that just because you didn’t commit or change something, you don’t know. Do you know more about the code if you did many commits? Do you know more about the code if you changed more lines of code?

We can’t prove or assume lack of knowledge or interest by an absence of commits, edits or changes. And yet we can’t calculate bus factor if there’s no tool or way to calculate it.

A look at curl

curl is soon 20 years old and boasts some 22,000 commits. I’m the author of about 57% of them, and the second-most committer (who’s not involved anymore) has about 12%. That means the two top committers have done 15.3k commits out of the 22k. If we for simplicity calculate the bus factor based on commit numbers, we’d need 8,580 commits from others, with me stopping completely, to reach a bus factor >2 (the point where the 2 top committers have less than 50% of the commits), which at the current commit rate would take about 5 years. And it would take about 3 years just to push the factor above 1. So even when new people join the project, they have a really hard time significantly changing the bus factor…
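To make the arithmetic explicit, here is a small back-of-the-envelope check of those numbers (the figures are rounded, and the snippet is only there to show the calculation):

// Back-of-the-envelope check of the commit numbers above (rounded).
fn main() {
    let total_commits: f64 = 22_000.0;
    let top_two_commits: f64 = 15_300.0; // roughly 57% + 12% of the total

    // Bus factor > 2 requires the top two committers to hold less than
    // 50% of all commits. If they stop committing entirely, the rest of
    // the project has to add `x` commits so that
    // top_two / (total + x) < 0.5, i.e. x > 2 * top_two - total.
    let extra_needed = 2.0 * top_two_commits - total_commits;
    println!("extra commits needed from others: ~{}", extra_needed); // ~8600
}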

The image above shows the relative share of commits done in the curl project’s git source code repository (as a share of the total amount) by the top 4 commiters from January 1 2010 to July 5 2017 (click for higher resolution). The top dotted line shows the combined share of all four (at 82% right now) and the dark blue line is my share. You can see how my commit share has shrunk from 72% down to 57% over these last 7.5 years. If this trend holds, I’ll have less than 50% of the total commits done in curl in 3-4 years.

At the same time, the thicker light blue line that climbs up to the right is the total number of authors in the git repository, which recently surpassed 500 as you can see. (That line uses the right Y-axis.)

We’re approaching 1600 individually named contributors thanked in the project, and every release we do (we ship one every 8 weeks) has around 40 contributors, out of which typically around half are newcomers. The long tail is very long and the number of drive-by, just-once contributors is high. Also note how the number 1600 is way higher than the 500-something who have authored commits. Lots of people contribute in other ways.

When we ask our users “why don’t you contribute (more) to the project?” (which we do annually), what do they answer? They say it’s because 1) everything works, 2) I don’t have time, 3) things get fixed fast enough, 4) I don’t know the programming language, 5) I don’t have the energy.

Only as the 6th answer (at 5% in 2017) comes “other”, where some people actually say they wouldn’t know where to start, and so on.

All of this taken together: there are no visible signs of us suffering from having a low bus factor. Lots of signs that people can do things when they want to if I don’t do it. Lots of signs that the code and concepts are understood.

Lots of signs that a low bus factor is not a big problem here. Or perhaps rather that the bus factor isn’t really as low as any tool would calculate it.

What if I…

Do I know who would pick up the project and move on if I die today? No. We’re a 100% volunteer-driven project. We create one of the world’s most widely used software components (easily more than three billion instances and counting) but we don’t know who’ll be around tomorrow to work on it. I can’t know because that’s not how the project works.

Given the extremely wide use of our stuff, given the huge contributor base, given the vast amounts of documentation and tests I think it’ll work out.

A large bus factor doesn’t necessarily make a project a better place to ask questions. We’ve seen projects in the past where the N persons involved are all from the same company, and when that company removes its support for that project, those people all go away. High bus factor, no people to ask.

Finally, let me just add that I would of course love to have many more committers and contributors in the curl project, and I think we would be an even better project if we did. But that’s a separate issue.

Categorieën: Mozilla-nl planet

About:Community: Firefox 55 new contributors

Mozilla planet - wo, 02/08/2017 - 22:23

With the release of Firefox 55, we are pleased to welcome the 108 developers who contributed their first code change to Firefox in this release, 89 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categorieën: Mozilla-nl planet
