Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 1 month, 3 days ago

Mozilla Open Innovation Team: Reflection — Inclusive Organizing for Open Source Communities

Tue, 11/07/2017 - 15:08

This is the second in a series of posts reporting findings from three months of research into the state of D&I in Mozilla’s communities. When it comes to inclusivity, the current state of our communities is mixed: we can do better, and this blog post is an effort to be transparent about what we’ve learned in working toward that goal.

If we are to truly unleash the full value of a participatory, open Mozilla, we need to personalize and humanize Mozilla’s value by designing diversity and inclusion into our strategy. To unlock the full potential of an open innovation model we need radical change in how we develop community and who we include.

Photo credit: national museum of american history via VisualHunt.com / CC BY-NC

We learned that while bottom-up, or grassroots, organizing enables ideas and vision to surface within Mozilla, those who succeed in that environment tend to come from the same backgrounds (usually the more dominant or privileged ones), and often hold tenured* community roles.

*tenured — roles assigned by Mozilla, or legacy roles in the community, that exist without timelines or cycles for renewal/election.

Our research dove into many areas of inclusive organizing, and three clusters of recommendations surfaced:

1. Provide Organizational Support to Identity Groups

For a diversity of reasons, including safety, friendship, mentorship, advocacy and empowerment, identity groups serve both as a safe space and as a springboard into, and out of, greater community participation, for the betterment of both. Those in Mozilla’s Tech Speakers program spoke very positively of the opportunity, responsibility and camaraderie provided by this program, which brings together contributors focused on building speaking skills for conferences.

“Building a sense of belonging in small groups, no matter the focus (Tech Speakers, Women of Color, Spanish Speaking), is an inclusion methodology.” — Non-Binary contributor, Europe

Womoz (Women of Mozilla) showed up in our research as an example of an identity group trying to solve and overcome systemic problems faced by women in open source. However, without centralized goals or structured organizational support for the program, communities were left on their own to define the group, its goals, who is included, and why. As a result we found vastly different perspectives on, and implementations of, ‘Womoz’, ranging from successful inclusion methodologies to exclusionary or even relegating groups:

  • As a ‘label’: a catch-all for non-male and non-binary contributors.
  • A way for women to gather that aligns with cultural traditions for convening of young women — and approved by families within specific cultural norms.
  • A safe, and empowered way to meet and talk with people of the same gender-identity about topics and problems specific to women.
  • A way to escape toxic communities where discussion and opportunity is dominated by men, or where women were purposely excluded.
  • An online community with several different channels for sharing advocacy (mailing list, Telegram group, website, wiki page).
  • As a stereotypical way to suggest that women contribute only in specific areas (such as crafts or non-technical work), or to relegate women to those areas.

For identity groups to fully flourish and to avoid negative manifestations like tokenism and relegation, the recommendation is that organizational support (dedicated strategic planning, staffing, time, and resources) should be prioritized where leadership and momentum exist, and where diversity can thrive. Mozilla is already investing in an identity-group pilot with staff and core contributors, which is exciting progress on what we’ve learned. The full potential is that identity groups act as ‘diversity launch pads’ into and out of the larger community, for the benefit of both.

2. Build Project-Wide Strategies for Toxic Behavior

Photo by Paul Riismandel BY-NC-SA 2.0

Research exposed toxic behavior as a systemic issue being tackled in many different ways, and by both employees and community leaders truly dedicated to creating healthy communities.

Our insights amplified external findings on toxic behavior, which show that toxic people tend to be highly productive, masking the true impact of behavior that deters and excludes large numbers of people. Furthermore, the perceived value of, and reliance on, the work of toxic individuals and regional communities was suspected to complicate interventions and dilute their effectiveness.

Within this finding we have three recommendations:

  1. Develop regionally-specific strategies for community organizing. Global regions with laws and cultural norms that exclude, or limit opportunities for, women and other marginalized groups demonstrate the challenge of a global approach to open source community health. Generally, we found that those with cultural and economic privilege within a given local context were also the majority of the people who succeeded in participation. Where toxic communities and individuals exist, the disproportionate representation of diverse groups was magnified to the point where standard approaches to community health were not successful. Specialized strategies might look very different per region, but with investment they can amplify the potential of marginalized people within.
  2. Create an organization-wide strategy for tracking decisions, and outcomes, of Community Participation Guidelines violations. While some projects and project maintainers have been successful in banning and managing toxic behavior, there is no central mechanism for tracking those decisions for the benefit of others who may be faced with similar decisions, or the same people. We learned of at least one case where a banned toxic contributor surfaced in another area of the project without the knowledge of that team. Efforts in this vein, including recent revisions to the guidelines and current efforts to train community leaders and build a central mechanism, need to be continued and expanded.
  3. Build team strategies for managing conflict and toxic behavior. A range of approaches to community health surfaced both success and struggle in managing conflict and toxic behavior within projects and community events. What appears to be important for successful moderation and resolution is that, within project teams, specific roles are designated, with training, accountability and transparency.

3. Develop Inclusive Community Leadership Models

In qualitative interviews and data analysis, ‘gatekeeping’* showed up as one of the greatest blockers for diversity in communities.

Characteristics of gatekeeping in Mozilla’s communities included withholding opportunity and recognition (like vouching), purposeful exclusion, delaying or limiting language translation of opportunities, as well as lying about the availability or existence of leadership roles and event invitations/applications. We also heard a lot about nepotism, as well as both passive and deliberate banishment of individuals — especially women, who showed a sharp decrease in participation over time compared with men.

*Gatekeeping is noted as only one reason why participation by women decreased, and we will expand on that in other posts.

Much of this data reinforced what we know about the myth of meritocracy, and that despite its promise, meritocracy actually enables toxic, exclusionary norms that prevent development of resilient, diverse communities.

“New people are not really welcomed, they are not given a lot of information on purpose.” (male contributor, South America)

As a result of these findings, we recommend that all community leadership roles be designed with accountability for community health, inclusion, and surfacing the achievements of others as core functions. Furthermore, renewable cycles for community roles should be required, with designed challenges (like elections) for terms that extend beyond one year, and with outreach to diverse groups for participation.

The long-term goal is that leadership begins a cultural shift in community that radiates empowerment for diverse groups who might not otherwise engage.

Our next post in this series, ‘We See You — Reaching Diverse Audiences in FOSS’, will be published on July 21st. Until then, check out ‘Increasing Rust’s Reach’, an initiative that emerged from a design sprint focused on this research.

Reflection — Inclusive Organizing for Open Source Communities was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

The Mozilla Blog: Mozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide

Tue, 11/07/2017 - 14:59

For most countries around the world, school is out, and parents are reconnecting with their kids to enjoy road trips and long days. Many of our Mozilla employees have benefited from the expanded parental leave program we introduced last year to spend quality time with their families. The program offers childbearing parents up to 26 weeks of fully paid leave, and non-childbearing parents (foster and adoptive parents, and partners of childbearing parents) up to 12 weeks of fully paid leave.

This July, we completed the global roll-out of the program, making Mozilla a leader in the tech industry and among organizations with a worldwide employee base.

What makes Mozilla’s parental leave program unique

Here’s what sets us apart from other tech companies and other organizations:

  • 2016 Lookback: A benefit for employees who welcomed a child in the calendar year prior to the expanded benefit being rolled out.
  • Global Benefit: As a US-based company with employees all over the world, we chose to offer it to employees around the world — US, Canada, Belgium, Finland, France, Germany, the Netherlands, Spain, Sweden, UK, Australia, New Zealand, Taiwan.
  • Fully Paid Leave: All parents receive their full salary during their leave.

What our Mozilla employees have to say:

“Our second son was born in January 2017. When I heard about the new policy that Mozilla would launch globally one month before, I first was not sure how that would work out with the statutory parental leave rules in Germany. But I have to say that I first enjoyed working with Rachel to work out all the details — and now I get to enjoy a summer with my family. The second child has changed my life completely, it was hard to match work and family needs. I am grateful that I will have time to give back to my son and my family and grow even closer together.” Dominik Strohmeier, based in Berlin, Germany. Two children, with second child born in 2017.

Chelsea Novak with baby

“Our daughter was born in 2016,” says Chelsea Novak, Firefox Editorial Lead. “When Mozilla announced this new parental leave policy we were excited for parents that were expecting in 2017, but a little sad that we missed out. Having Mozilla extend these new parental leave benefits to us was very generous and gave us some precious time with our family that we weren’t expecting.”  Chelsea and Matej Novak, both longtime Canadian Mozilla employees, based in Toronto. Two children, ages 1 and 3.

 

 

 

“I started with Mozilla in the beginning of 2016, and delivered my child that same year. When I first heard of the policy, I didn’t think the new parental leave would apply to me. Then, Rachel told me the good news. I was amazed that they would extend the parental leave policy to me so that I can take additional time off in 2017.  Mozilla is so generous to parents like myself to enjoy special moments like watching my daughter take her first steps or saying her first words.”   Jen Boscacci, based in Mountain View, California.  Two children, with second child born in 2016.

 

Maura Tuohy with baby

“Being able to take advantage of the 26 weeks of leave — and have the flexibility of when to take it — was an incredible gift for our family. Knowing that the company was so supportive made the experience as stress free as having a newborn can be! I’m so grateful to work for such a progressive and kind company — not just in policies but in culture and practice.”  Maura Tuohy, based in San Francisco.  Her first child was born in 2017.

 

 

This program helps us embrace and celebrate families of all kinds, whether through birth, adoption or foster care. We expanded our support for both childbearing and non-childbearing parents, independent of gender or situation. We value our Mozilla employees, and we know that juggling work and family responsibilities is no easy feat.

The post Mozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Shing Lyu: Install Ubuntu 16.04 on ThinkPad 13 (2nd Gen)

Tue, 11/07/2017 - 09:48

It has been a while since my last post. I’ve been busy for the first half of this year, but now I have more free time. Hopefully I can get back to my usual pace of one post per month.

My old laptop (an Inhon Carbonbook, now discontinued) had a swollen battery. I kept using it for a few months, but then the battery squeezed my keyboard so much that I could no longer type correctly. After some research I decided to buy the ThinkPad 13 because it provides decent hardware for its price, and the weight (~1.5 kg) is acceptable. Every time I get a new computer, the first thing I do is get Linux up and running. So here are my notes on how to install Ubuntu Linux on it.

TL;DR: Everything works out of the box. Just remember to turn off secure boot and shrink the disk in Windows before you install.

Before Installing Ubuntu

First we need to free up some disk space for Ubuntu. If you are going to delete Windows completely, you can skip the following steps.

  • (In Windows) Right click on the start menu, select “PowerShell (administrator)”, then run diskmgmt.msc.
  • Right-click the C: drive and select the “Shrink Volume” option.

Before we can install Ubuntu, we need to disable the secure boot feature in BIOS.

  • Press Shift+restart to be able to get into BIOS, otherwise you’ll keep booting into Windows.
  • Press Enter followed by F1 to go into BIOS during boot.
  • Disable Secure Boot.

BIOS secure_boot secure_boot_sub_menu

UEFI boot seems to work with Ubuntu 16.04’s installer, so you can keep all the UEFI options enabled in the BIOS. Download the Ubuntu installation image and use a bootable USB disk creator that supports UEFI, for example Rufus.

Installing Ubuntu

Installing Ubuntu should be pretty straightforward if you’ve done it before.

  • Go to BIOS again and set the USB drive as top priority boot medium.
  • Boot into Ubuntu, select “Install Ubuntu”.
  • Follow the installer wizard.
  • Select “Something else” when asked how to partition the disk.
  • Create a linux-swap partition; I went with 4GB for 8GB of RAM, following Red Hat’s suggestion, but there are different theories out there, so use your own judgement here.
  • Create an EXT4 main partition and set the mount point to /.
  • After the installer finishes, reboot. You should see the GRUB2 menu. The Windows option should also work without trouble.

GRUB2

What works?

Almost everything works out of the box. All the media keys, like volume control, brightness, and the WiFi and Bluetooth toggles, are recognized by the built-in Unity desktop. But I am using i3, so I have to set them up myself; more on this later. The USB Type-C port is a nice surprise for me. It supports charging, so you can charge with any Type-C charger. I tested Apple’s MacBook charger and it works well. I also tested Apple’s and Dell’s Type-C multi-port adapters and both work, so I can plug my external monitor and my keyboard/mouse into it and it works like a docking station.

media_key

A side note: I’m also glad to find that Ubuntu now uses the Fcitx IME by default. Most of the IME bugs I found in previous versions with ibus are now gone.

What doesn’t work?

Nothing that I’m aware of. The only complaint I have is that the Ethernet and Wi-Fi interface naming scheme has changed (e.g. enp0s31f6 instead of eth0). But I believe that is something we can fix in software (see the note below the photo). People also complain that the power button and the hinge are not very sturdy, but I guess that’s the compromise you have to make for the relatively cheap price.

power_button
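A note on the interface names mentioned above: one commonly cited fix, which I have not verified on this particular machine so treat it as a sketch, is to disable the predictable-names scheme with a kernel parameter and regenerate GRUB:

# In /etc/default/grub, add net.ifnames=0 to the kernel command line:
GRUB_CMDLINE_LINUX="net.ifnames=0"
# Then regenerate the GRUB configuration and reboot:
sudo update-grub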

More on setting up media keys for i3 window manager

Since I use the i3 window manager, I don’t have Unity to handle all my media keys’ functionality. But it’s not hard to set them up using the following i3 config:

bindsym XF86AudioRaiseVolume exec amixer -q set Master 2dB+ unmute
bindsym XF86AudioLowerVolume exec amixer -q set Master 2dB- unmute
# Must assign device "pulse" to successfully unmute
bindsym XF86AudioMute exec amixer -D pulse set Master toggle
# Only two levels of brightness
bindsym XF86MonBrightnessUp exec xrandr --output eDP-1 --brightness 0.8
bindsym XF86MonBrightnessDown exec xrandr --output eDP-1 --brightness 0.5

The only drawback is that the LED indicator on the mute button might be out of sync with the real software state.

Conclusion

Overall, the ThinkPad 13 is a decent laptop for its price range. Ubuntu 16.04 works out of the box. No sweat! If you are looking for a good Linux laptop but can’t afford a ThinkPad X1 Carbon or Dell’s XPS 13/15, the ThinkPad 13 might be a good choice.

Categories: Mozilla-nl planet

Andy McKay: Manual review of add-ons

Tue, 11/07/2017 - 09:00

As we start to expand WebExtension APIs beyond parity with Chrome, a common theme is appearing in bug comments when proposing new APIs. That theme is something like "we'll have to give add-ons using that API a special manual review".

Put simply, that's not happening. Either we feel comfortable with an API and everyone can use it, or we don't implement it. There won't be any special manual review process for WebExtensions for specific APIs.

Manual review has quite a few problems but bluntly, it costs Mozillians resources and time and upsets developers.

On the cost side, we’ve had to put an awful lot of developer and reviewer (both paid and volunteer) time into reviewing extensions. There are tools and sites maintained by Mozilla to support the review process.

But more than that, developers have told us loud and clear that they dislike the review process and complain about it. It causes delays, and developers get upset when people (many of whom are volunteers) aren’t able to turn around reviews within reasonable time scales.

Further, this makes things harder for developers because it forces them to upload unobfuscated sources, something that is getting harder and harder as webpack, browserify and other tools gain in popularity.

And finally, manual review isn’t perfect. It’s hard to review code, look for all the possible security and policy problems, and ensure that a questionable API isn’t doing something we feel uncomfortable with.

Manual review has its place at Mozilla, but one thing we shouldn’t be doing is placing more burdens on the process. We should be aiming to streamline review and ease the burden on reviewers and developers.

The result is we've got to either say no to the API or find a way to make everyone comfortable with the API.

Categories: Mozilla-nl planet

Andy McKay: Mail filters

Tue, 11/07/2017 - 09:00

Wil posted on his blog some mail filters he uses to cope with all the incoming mail. Here are a few of mine:

Highlight mentions on mentored bugs:

Matches: from:bugzilla-daemon@mozilla.org "X-Bugzilla-Mentors"
Do this: Skip Inbox, Apply label "Bugzilla/Mentored"

Filter out intermittents:

Matches: from:bugzilla-daemon@mozilla.org "X-Bugzilla-Keywords: intermittent-failure"
Do this: Skip Inbox, Apply label "Bugzilla/Intermittents"

Filter down by a specific product and component:

Matches: from:bugzilla-daemon@mozilla.org "X-Bugzilla-Product: Firefox" "X-Bugzilla-Component: Extension Compatibility"
Do this: Skip Inbox, Apply label "Bugzilla/Extension Compat"
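
The same pattern works for the other X-Bugzilla-* headers that Bugzilla adds to its bugmail. For example, to label brand-new bugs separately (the header value here is from memory, so check a raw message from your Bugzilla instance before relying on it):

Matches: from:bugzilla-daemon@mozilla.org "X-Bugzilla-Type: new"
Do this: Skip Inbox, Apply label "Bugzilla/New"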
Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 190

Tue, 11/07/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'.

This week’s friend of the forest is Guillaume Gomez, whose influence is evident everywhere you look in Rust.

Crate of the Week

Sadly, no crate was nominated this week.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

113 pull requests were merged in the last week

New Contributors
  • boreeas
  • Kornel
  • oyvindln
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

An interesting issue:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Unsafe is your friend! It's maybe not a friend like you would invite to your sister's wedding, or the christening of her first-born child. But it's sort of the friend who lives in the country and has a pick-up truck and 37 guns. And so you might not want to hang out with them all the time, but if you need something blown up he is there for you.

Simon Heath on game development in Rust (at 38:35 in video).

Thanks to G2P and David Tolnay for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Categories: Mozilla-nl planet

Niko Matsakis: Non-lexical lifetimes: draft RFC and prototype available

Tue, 11/07/2017 - 06:00

I’ve been hard at work the last month or so on trying to complete the non-lexical lifetimes RFC. I’m pretty excited about how it’s shaping up. I wanted to write a kind of “meta” blog post talking about the current state of the proposal – almost there! – and how you could get involved with helping to push it over the finish line.

TL;DR

What can I say, I’m loquacious! In case you don’t want to read the full post, here are the highlights:

  • The NLL proposal is looking good. As far as I know, the proposal covers all major intraprocedural shortcomings of the existing borrow checker. The appendix at the end of this post talks about the problems that we don’t address (yet).
  • The draft RFC is available in a GitHub repository:
    • Read it over! Open issues! Open PRs!
    • In particular, if there is some pattern you think may not be covered, please let me know about it by opening an issue.
  • There is a working prototype as well:
    • The prototype includes region inference as well as the borrow checker.
    • I hope to expand it to become the normative prototype of how the borrow checker works, allowing us to easily experiment with extensions and modifications – analogous to Chalk.
Background: what the proposal aims to fix

The goal of this proposal is to fix the intra-procedural shortcomings of the existing borrow checker. That is, to fix those cases where, without looking at any other functions or knowing anything about what they do, we can see that some function is safe. The core of the proposal is the idea of defining reference lifetimes in terms of the control-flow graph, as I discussed (over a year ago!) in my introductory blog post; but that alone isn’t enough to address some common annoyances, so I’ve grown the proposal somewhat. In addition to defining how to infer and define non-lexical lifetimes themselves, it now includes an improved definition of the Rust borrow checker – that is, how to decide which loans are in scope at any particular point and which actions are illegal as a result.

When combined with RFC 2025, this means that we will accept two more classes of programs. First, what I call “nested method calls”:

impl Foo {
    fn add(&mut self, value: Point) { ... }
    fn compute(&self) -> Point { ... }
    fn process(&mut self) {
        self.add(self.compute()); // Error today! But not with RFC 2025.
    }
}

Second, what I call “reference overwrites”. Currently, the borrow checker forbids you from writing code that updates an &mut variable whose referent is borrowed. This most commonly shows up when iterating down a slice in place (try it on play):

fn search(mut data: &mut [Data]) -> bool {
    loop {
        if let Some((first, tail)) = data.split_first_mut() {
            if is_match(first) {
                return true;
            }
            data = tail; // Error today! But not with the NLL proposal.
        } else {
            return false;
        }
    }
}

The problem here is that the current borrow checker sees that data.split_first_mut() borrows *data (which has type [Data]). Normally, when you borrow some path, then all prefixes of the path become immutable, and hence borrowing *data means that, later on, modifying data in data = tail is illegal. This rule makes sense for “interior” data like fields: if you’ve borrowed the field of a struct, then overwriting the struct itself will also overwrite the field. But the rule is too strong for references and indirection: if you overwrite an &mut, you don’t affect the data it refers to. You can workaround this problem by forcing a move of data (e.g., by writing {data}.split_first_mut()), but you shouldn’t have to. (This issue has been filed for some time as #10520, which also lists some other workarounds.)
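
For comparison, here is roughly what that workaround looks like applied to the search example above (a sketch only; the single change from the earlier listing is the forced move of data):

fn search(mut data: &mut [Data]) -> bool {
    loop {
        // `{data}` moves the reference out of the variable, so the result of
        // `split_first_mut()` borrows from a temporary rather than from `data`,
        // and the reassignment below is accepted by today's borrow checker.
        if let Some((first, tail)) = {data}.split_first_mut() {
            if is_match(first) {
                return true;
            }
            data = tail; // No error with the forced move.
        } else {
            return false;
        }
    }
}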

Draft RFC

The Draft RFC is almost complete. I’ve created a GitHub repository containing the text. I’ve also opened issues with some of the things I wanted to get done before posting it, though the descriptions are vague and it’s not clear that all of them are necessary. If you’re interested in helping out – please, read it over! Open issues on things that you find confusing, or open PRs with suggestions, typos, whatever. I’d like to make this RFC into a group effort.

The prototype

The other thing that I’m pretty excited about is that I have a working prototype of these ideas. The prototype takes as input individual .nll files, each of which contains a few struct definitions as well as the control-flow graph of a single function. The tests are aimed at demonstrating some particular scenario. For example, the borrowck-walk-linked-list.nll test covers the “reference overwrites” that I was talking about earlier. I’ll go over it in some detail to give you the idea.

The test begins with struct declarations. These are written in a very concise form because I was too lazy to make it more user-friendly:

struct List<+> { value: 0, successor: Box<List<0>> }

// Equivalent to:
// struct List<T> {
//     value: T,
//     successor: Box<List<T>>
// }

As you can see, the type parameters are not named. Instead, we specify the variance (+ here means “covariant”). Within the function body, we reference type parameters via a number, counting backwards from the end of the list. Since there is only one parameter (T, in the Rust example), then 0 refers to T.

(In real life, this struct would use Option<Box<List<T>>>, but the prototype doesn’t model enums yet, so this is using a simplified form that is “close enough” from the point-of-view of the checker itself. We also don’t model raw pointers yet. PRs welcome!)

After the struct definitions, there are some let declarations, declaring the global variables:

let list: &'list mut List<()>;
let value: &'value mut ();

Perhaps surprisingly, the named lifetimes like 'list and 'value correspond to inference variables. That is, they are not like named lifetimes in a Rust function – which are the one major thing I’ve yet to implement – but rather correspond to inference variables. Giving them names allows for us to add “assertions” (we’ll see one later) that test what results got inferred. You can also use '_ to have the parser generate a unique name for you if you don’t feel like giving an explicit one.

After the local variables comes the control-flow graph, as a series of basic-block declarations:

block START {
    list = use();
    goto LOOP;
}

Here, list = use() means “initialize list and use the (empty) list of arguments”. I’d like to improve this to support named function prototypes, but for now the prototype just has the idea of an ‘opaque use’. Basic blocks can optionally have successors, specified using goto.

One thing the prototype understands pretty well are borrows:

block LOOP {
    value = &'b1 mut (*list).value;
    list = &'b2 mut (*list).successor.data;
    use(value);
    goto LOOP EXIT;
}

An expression like &'b1 mut (*list).value borrows (*list).value mutably for the lifetime 'b1 – note that the lifetime of the borrow itself is independent from the lifetime where the reference ends up. Perhaps surprisingly, the reference can have a bigger lifetime than the borrow itself: in particular, a single reference variable may be assigned from multiple borrows in disjoint parts of the graph.

Finally, the tests support two kinds of assertions. First, you can mark a given line of code as being “in error” by adding a //! comment. There isn’t one in this example, but you can see them in other tests; these identify errors that the borrow checker would report. We can also have assertions of various kinds. These check the output from lifetime inference. This test has a single assertion:

assert LOOP/0 in 'b2;

This assertion specifies that the point LOOP/0 (that is, the start of the loop) is contained within the lifetime 'b2 – that is, we realize that the reference produced by (*list).successor.data may still be in use at LOOP/0. But note that this does not prevent us from reassigning list (nor borrowing (*list).successor.data). This is because the new borrow checker is smart enough to understand that list has been reassigned in the meantime, and hence that the borrows from different loop iterations do not overlap.

Conclusion and how you can help

I think the NLL proposal itself is close to being ready to submit – I want to add a section on named lifetimes first, and add them to the prototype – but there is still lots of interesting work to be done. Naturally, reading and improving the RFC would be useful. However, I’d also like to improve the prototype. I would like to see it evolve into a more complete – but simplified – model of the borrow checker, that could serve as a good basis for analyzing the Rust type system and investigating extensions. Ideally, we would merge it with chalk, as the two complement one another: put together, they form a fairly complete model of the Rust type system (the missing piece is the initial round of type checking and coercion, which I would eventually like to model in chalk anyhow). If this vision interests you, please reach out! I have open issues on both projects, though I’ve not had time to write in tons of details – leave a comment if something sparks your interest, and I’d be happy to give more details and mentor it to completion as well.

Questions or comments?

Take it to internals!

Appendix: What the proposal won’t fix

I also want to mention a few kinds of borrow check errors that the current RFC will not eliminate – and is not intended to. These are generally errors that cross procedural boundaries in some form or another. For each case, I’ll give a short example, and give some pointers to the current thinking in how we might address it.

Closure desugaring. The first kind of error has to do with the closure desugaring. Right now, closures always capture local variables, even if the closure only uses some sub-path of the variable internally:

let get_len = || self.vec.len(); // borrows `self`, not `self.vec`
self.vec2.push(...);             // error: self is borrowed

This was discussed on an internals thread; as I commented there, I’d like to fix this by making the closure desugaring smarter, and I’d love to mentor someone through such an RFC! However, it is out of scope for this one, since it does not concern the borrow check itself, but rather the details of the closure transformation.

Disjoint fields across functions. Another kind of error is when you have one method that only uses a field a and another that only uses some field b; right now, you can’t express that, and hence these two methods cannot be used “in parallel” with one another:

impl Foo {
    fn get_a(&self) -> &A { &self.a }
    fn inc_b(&mut self) { self.b.value += 1; }
    fn bar(&mut self) {
        let a = self.get_a();
        self.inc_b(); // Error: self is already borrowed
        use(a);
    }
}

The fix for this is to refactor so as to expose the fact that the methods operate on disjoint data. For example, one can factor out the methods into methods on the fields themselves:

fn bar(&mut self) {
    let a = self.a.get();
    self.b.inc();
    use(a);
}

This way, when looking at bar() alone, we see borrows of self.a and self.b, rather than two borrows of self. Another technique is to introduce “free functions” (e.g., get(&self.a) and inc(&mut self.b)) that expose more clearly which fields are operated upon, or to inline the method bodies. I’d like to fix this, but there are a lot of considerations at play: see this comment on an internals thread for my current thoughts. (A similar problem sometimes arises around Box<T> and other smart pointer types; the desugaring leads to rustc being more conservative than you might expect.)

Self-referential structs. The final limitation we are not fixing yet is the inability to have “self-referential structs”. That is, you cannot have a struct that stores, within itself, an arena and pointers into that arena, and then move that struct around. This comes up in a number of settings. There are various workarounds: sometimes you can use a vector with indices, for example, or the owning_ref crate. The latter, when combined with associated type constructors, might be an adequate solution for some uses cases, actually (it’s basically a way of modeling “existential lifetimes” in library code). For the case of futures especially, the ?Move RFC proposes another lightweight and interesting approach.

Categories: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - July 11, 2017

Tue, 11/07/2017 - 02:00

Here’s what happened on the MozMEAO SRE team from July 5th - July 11th.

This week’s report is brief as the team is returning from the Mozilla San Francisco All Hands and vacation.

Current work

Static site hosting

Kubernetes
  • Our main applications are being moved to our new Frankfurt Kubernetes cluster.
Links
Categories: Mozilla-nl planet

J.C. Jones: Cutting over Let's Encrypt's Statistics to Map/Reduce

Mon, 10/07/2017 - 22:54

We're changing the methodology used to calculate the Let's Encrypt Statistics page, primarily to better cope with the growth of Let's Encrypt. Over the past several months it's become clear that the existing methodology is less accurate than we had expected, over-counting the number of websites using Let's Encrypt, and the number of active certificates. The new methodology is more easily spot-checked, and thus, we believe, is more accurate.

We're planning to soon cut over all data in the Let's Encrypt statistics dataset used for the graphs, and use the new and more accurate data from 3 July 2017 onward. Because of this, the data and graphs will show that between 2 and 3 July the count of Active Certificates falls ~14%, and the counts of Registered Domains and Fully-Qualified Domain Names each fall by ~7%.

Growth Discontinuity

You can preview the new graphs at https://ct.tacticalsecret.com/, as well as look at the old and new datasets, and a diff.

Shifting Methodology

These days I volunteer to process the Certificate Transparency logs for Let's Encrypt's statistics.

Previously, I used a tool to process Certificate Transparency logs and insert metadata into a SQL database, and then made queries against that SQL database to derive all of the statistics we display and use for Let's Encrypt's growth. As the database size has gotten larger, it has been increasingly expensive to maintain. The SQL methodology wasn't intended for long-term statistics, but as with most established infrastructure, it was difficult to carve out the time needed to revamp it.

The revamp is finally ready, using a Map/Reduce approach that can scale much further than a SQL database.
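
To give a rough idea of the shape of that approach (this is purely an illustrative sketch, not the actual pipeline, and the field names are invented), the map step emits an (issuance-day, FQDN) pair for every name on every certificate, and the reduce step collapses those pairs into per-day sets of unique names:

from collections import defaultdict

def map_certificates(certs):
    # certs: an iterable of dicts with hypothetical "not_before" (a datetime)
    # and "san_names" (a list of strings) fields.
    for cert in certs:
        day = cert["not_before"].date()
        for fqdn in cert["san_names"]:
            yield day, fqdn.lower()

def reduce_unique_names(pairs):
    # Collapse (day, fqdn) pairs into per-day sets, then count them.
    per_day = defaultdict(set)
    for day, fqdn in pairs:
        per_day[day].add(fqdn)
    return {day: len(names) for day, names in per_day.items()}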

Why did the old way overcount?

Some of the domain overcounting appears to have been due to domains on SAN certificates sometimes not being purged when those certificates expired without being renewed. This only happens in cases where the domains are part of a SAN cert, and the SAN cert is then re-issued with a somewhat different set of domains. The domains that were removed, while expired, were still counted. It appears that this seeming edge case happened quite a lot for some hosting providers.

The active certificate overcounting is in part due to the timing of new certificates being added during nightly maintenance, which were essentially double-counted. Jacob pointed out that if Let's Encrypt had average issuance, for every hour maintenance takes, the active certificate count would inflate by ~5%. Maintenance with the SQL code took between 1 and 4 hours to complete each night, so this could easily account for the discrepancy in the active certificate count.

There are likely other more subtle counting errors, too.

How do we know it's better now?

The nature of the new Map/Reduce effort produces discrete lists of domains for each issuance day, which are more easily inspected for debugging, so I feel more confident in it. These domain lists are also available as (huge) datasets (which I should move to S3 for performance reasons) at https://ct.tacticalsecret.com/. Line counts in the "FQDN" and "RegDom" lists should match the figures for the most recent day's entry. At least, so far they have...

Reprocessing historical logs?

It's technically possible to re-process the historical data in Certificate Transparency for Let's Encrypt to ensure more accuracy, but I've not yet decided whether I will do this. All the data and software is public, so others could perform this effort, if desired.

Technical Details

The SQL effort moved around through 2016 from various hosting providers to get the best deal on RAM to keep the growing database in check, ultimately moving to Amazon's RDS last winter. A single db.r3.large RDS instance is handling the size well, but is quite expensive for this use case.

The new Map/Reduce effort is currently on a single Amazon m4.xlarge EC2 instance with 150 GB of disk space to hold the 90 days of raw CT log information, the daily summary files, and the 90-day summaries that populate the statistics. This EC2 instance only needs to run about 2 hours a day to catch-up on Certificate Transparency, and then produce the resulting data set. When it needs to scale upward again, I'll likely move to an Apache Spark cluster.

We'll see how fast Let's Encrypt needs it. :)

(Also posted at https://community.letsencrypt.org/t/adjustments-to-the-lets-encrypt-statistics-methodology/37866)

Categories: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 10 Jul 2017

Mon, 10/07/2017 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting

Categories: Mozilla-nl planet

Mozilla Reps Community: Reps Mobilizer Experiment

Mon, 10/07/2017 - 15:25

During the second quarter of 2017, and in order to understand how to better identify, recruit and support mobilizers, we decided to run a small experiment with a reduced set of existing “best in class” mobilizers and walk with them during their work supporting technical communities.

Why

The Reps program is a program for core mobilizers, who create, grow, sustain and engage communities around Mozilla projects. There are still areas for improvement before it becomes a state-of-the-art mobilizer program, so we wanted to identify what those areas are and what changes we can implement.

Participants

Bob Chao (Taiwan) – WebVR

A long-time contributor, Bob has been empowering and growing different Mozilla-related communities in Taiwan, most recently Rust and WebVR.

Full Report

 

Srushtika Neelakantam (India) – WebVR

Deeply involved with the WebVR community since its formation, Srushtika has been empowering the local community in India for a few years now. She has even written a book about WebVR.

Full Report

 

Daniele Scasciafratte (Italy) – WebExtensions

An extremely involved contributor, Daniele has been supporting the community in Italy for many years. He has been key to developing the first Add-ons activity for the MozActivate campaign.

Full Report

Vigneshwer Dhinakaran (India) – Rust

He has been key to the formation and growth of the Rust community in India, and he is the author of a book about the technology.

Full Report

 

Process Overview

We decided to use a human centered design approach to test this hypothesis. Each project started with a research phase followed by multiple iterations of potential solutions. Each iteration involved testing, reflecting on the learnings and iterating on the approach.

Overall main learnings
  1. A certain degree of understanding of the technology is needed for the mobilizer to be truly effective and understand the communities.
  2. It’s key to devote enough time to do research and understand the local environment and the potential contributors’ needs.
    1. After solid research, we can start thinking about, and implementing, the best channels for communication within the community (sync and async), as well as for information distribution (announcements, materials…)
  3. There are two important areas when working with technical communities:
    1. Getting people excited about the tech and the community
    2. Keeping people engaged after the main activities take place. The top priority should be designing for the follow-up instead of the activity.
  4. Establishing a direct conversation between mobilizers and the functional area staff is key for having a correct direction and impactful outputs.
    1. Teams that work with more closed tools (Slack) presented a bigger challenge.

As a result of these learnings we will evaluate a set of recommendations to improve the Reps program, and we will share some early ideas soon on the Reps Discourse.

Thank you Vigneshwer, Daniele, Srushtika and Bob; your work is an inspiration to all Reps and to the rest of Mozilla. You have demonstrated strong leadership and impact-oriented strategic thinking that will help others follow in your steps.

Categories: Mozilla-nl planet

Mozilla Open Design Blog: Zilla Slab: A common language through a shared font

Mon, 10/07/2017 - 13:00

How do the rules change when you design in the open? That is a question I asked myself many times as I prepared to join the Mozilla brand team back in November 2016. Having never worked in an open source company, I knew that I would need to prepare myself for a mental shift.

Brand design systems are often managed tightly by internal creative teams, with strict guidelines. The guidelines are shared, like canons, with outside agencies. At Mozilla, we pride ourselves on our relationship with a passionate community of volunteers who outnumber our employees 10 to 1. With this community, strict control doesn’t work, but we still need to provide a design system that includes their views and results in better designs. Here’s a conversation discussing the process with Yuliya Gorlovetsky, Mozilla’s Associate Creative Director, and font expert Peter Biľak of Typotheque, who helped with the font design.

Tools not rules

Logo refinement from Johnson Banks and Fontsmith.

Yuliya: When I joined the Mozilla brand team, the new identity was close to completion. The concept relied heavily on typography, and Johnson Banks had chosen a slab serif for the logo. Slab serifs are less common and less well known than other serifs. They are often bolder and sharper, with a strong personality. We saw an opportunity for Mozilla to build on this by creating a custom font to support the new identity system – a distinct typeface for the identity refresh that we planned to open source for all to use.

Jordan Gushwa, a Creative Lead at Mozilla Foundation, and I both agreed that adding an elegant yet flexible slab serif would help us refine the brand. It would allow us to design experiences that lead with typography and written messages, akin to experiences that you might find in lifestyle magazines or design publications when the opportunity presented itself. Mozilla is investing in content creation, as it becomes more important today to tell the stories about how we stay safe and connected on the web.

To develop the Zilla Slab, we worked with Peter Biľak and Nikola Djurek from Typotheque. We respected their work and valued their expertise in multi-language support. I remember that Peter was surprised during our first call when I said we were interested in Slab Serifs that would be at home in top notch magazines or web publications. He happily added those options to our font explorations.

Peter: Slab serif typefaces, sometimes called Egyptians, are usually angular and robust in construction. They hint at a technological underpinning, with aesthetics that are characteristic of the early 19th century Industrial Revolution.

When Yuliya called us at Typotheque, the decision had already been made to use a Slab serif typeface. I wanted to understand the rationale and possibly consider alternatives. Quickly I came to understand that this was a well-informed decision, looking for less explored areas of typeface categorisation.

Yuliya: Peter pulled together multiple options of baseline slab serifs from his catalog and showed us how they could evolve to support our logo design needs. Looking for a font that could flex from display to body copy, we quickly settled on Tesla as our base font. Tesla had a good balance of original details but also the evenness that would support a variety of subject matter.

 

Getting into the details

At the start, the main focus was on the logotype. Peter kept us accountable by illustrating the impact that the decisions we made for the logotype letters would have on the full font family. When showing different logo options, he would extract the letters from the logo and show how the different design decisions would show up in other letters of a typeface.

This way we could consider, side by side, not only the logo but also the full typeface we would have. It was a hard line to walk. You want to just focus on the logo, because that is the combination of letters that will appear time and time again, but since we wanted the logo to connect to the font, we had to find a place of compromise. We needed to make decisions that equally supported both the logo and the font. Most of the exploration played out in the three main letters: m, z, and a. It’s amazing how many people will ask if they can stop coming to meetings when you talk through 10 different ‘a’ options.

Peter: We looked at various slab serif models: earlier models proposed by Johnson Banks, which were typical heavy geometric slabs, and then sans serif typefaces that we considered turning into slabs. From our collection we looked at Irma Text Slab, Charlie, Lumin and Zico, before settling on Tesla Slab as a starting point. Yuliya was intrigued by models that exhibited unusual traits — shifted axes of contrast of thick and thin strokes, based on cursive writing rather than geometric construction, or even an asymmetric serif structure. The lowercase ‘a’ is a more complex letter that provides clues to how other letters may look in a full exploration of the logotype. We wanted to bring an angled stroke to the top of the ‘a’, mimicking the slashes of the internet protocol. The other letters would then need to follow the same construction principles.

Yuliya: We were able to narrow it down to two design directions:

  1. A dressed down simplified slab font with a more geometric “a”. Geometric fonts are created from basic shapes, such as straight, monolinear lines and circular shapes. They lack ornamentation, and rarely appear with serifs. This “a”, that we considered, was more round, upright, and had no serif on the end.
  2. A more serifed font that had a more humanist “a” with a very pronounced serif. Humanist serifs are the very first kind of Roman typeface. The letters were originally drawn with a pen held at a consistent angle, creating a consistent visual rhythm. This “a”, that we considered, had a very pronounced two-level serif, that was tilted at a slight angle to respond to the “/”  that came before it.

 

Peter: Most slab typefaces are static and geometric in construction. “Static” refers here to the axis of contrast: when the axis is at 0 degrees, typefaces are usually described as static; when there is an angle, they become ‘dynamic’. We experimented with injecting more humanistic values into the traditional slab model, with the help of more calligraphic stroke terminations, which not only improve legibility but create a more flowing rhythm of letter shapes.

Yuliya: After much debate and 35 rounds of review, we had narrowed the directions down to two top choices. We then guerrilla-tested them: we showed the two directions to some folks within Mozilla, and to some colleagues and friends outside of Mozilla, and asked which one they gravitated towards and why. We also asked folks what emotions the different directions inspired in them. These conversations gave us the insight to proceed with the more humanist version, but to simplify the serif on the “a”. The result was an overall simplification of the font, which we lovingly call Zilla Slab. We think that Zilla Slab is a casual and contemporary slab serif that still has a good amount of quirk.

We launched the logo in the middle of January and applied it to just a few assets: web headers, signage around the office and some print applications. After the launch, Peter and his team continued to work with us through the details of the full font family. Peter regularly shared the progress, which we in turn shared with many designers and fellow type enthusiasts across Mozilla. It turns out a lot of folks across Mozilla self-identify as type nerds!

Sophisticated italics

Italics came next, and they are graceful and sharp. As a humanist slab, the Zilla Slab italics are closer to true italics and add a softness to the overall typography system. There are moments when I’m reviewing work and I have to pause and stare at the curved slant detail in the v, w, x, and y of the italic letters.

Peter: Italics are generally not used for longer texts.  The function of italics is to emphasise short passages, so they offer more space for expression. Since we aimed to make Zilla Slab a more affable typeface with humanistic elements, the Italics offered an opportunity to go fully in that direction. We based the Italic not on a slanted Roman, but on a cursive broad-nib writing style. Diagonals in italic often break the rhythm of writing, so we introduced curved diagonals that work well with cursive italic by maintaining a smoother flow.

 

Building in the highlight effect

Yuliya: Johnson Banks originally modeled the highlight effect for the identity system on the functional act of highlighting a piece of type with your mouse on a screen or within software, or code inspector.

From the original guideline from Johnson Banks. The red rectangle is the size of the colon rectangle in the logo.

 

We continued to expand the identity system and ask people internally for feedback on it. As we put it to use, we quickly discovered that typing out the words and manually drawing the highlight box to contain those letters not only took a lot of effort, but also created visual inconsistencies. Sizing the letters and the highlight box separately, and having them come together in the same way time and time again, requires a lot of math and visual tuning. Following our idea of focusing on tools that enable people rather than rules that restrict, we asked Typotheque to create a true highlight version of Zilla Slab. The highlight weight shows the counter shapes between letters rather than the letterforms themselves. These additional weights of Zilla Slab make it easy for anyone to contribute to the brand identity without being limited by design software or design knowledge. Building the logic and rules into the font makes it a seamless part of the system.

Peter: Crafting a wordmark or a logo is usually a different process from developing a functional typeface, as wordmarks can create more context-specific design and brand solutions, which may not work well in a font. With Zilla Slab, we worked on the wordmark and the typeface at the same time, and had to anticipate how the wordmark features could be translated to other glyphs not present in the original wordmark, possibly extending even to other writing scripts.

Typing “Mozilla” in Zilla Slab Highlight Bold

 

Yuliya: The final touch in making the font a completely functional tool for the brand came when we asked Peter whether it would be possible to automate Zilla Slab so that the Mozilla logo could be typed out using the bold highlight weight of Zilla Slab. This would allow browsers and native applications to turn “Mozilla” automatically into “moz://a” in specific cases. It frees people from needing to place and attach a static image version of the logo, and frees us from managing those files. A logo for an open source company, typed out in its open source font.

Peter: Since the Zilla font is used to create the new Mozilla logo, we included an OpenType substitution feature, triggered by the ligature function, that replaces the ‘ill’ letters with ‘://’. This should only replace the ‘ill’ when intended — not in a word like ‘illustration’, but always in ‘mozilla’ or ‘Mozilla’, for example.

 

Looking Ahead

Design work from the design team. Special attribution to Patrick Crawford.

 

Over five months of development, as Peter’s team worked through the font details, our design team worked in parallel to test Zilla Slab in a broad range of design applications. It has proven to be as unique and flexible as we had hoped it would be. Zilla Slab has taken its place as the unifying component of our design system. After all, it’s through typography that we see language, and at Mozilla we all have a lot to say.

The roll out of Zilla Slab has also helped to unite our different design teams and give all of our Mozilla contributors a shared visual voice. We are launching Zilla Slab with support for 70 European Latin based languages, and we can’t wait to continue our work with Typotheque and localize it to additional alphabets.

This is one of our first shared tools within our identity system. We will keep adding more tools, writing about them, and designing these tools in the open with you.

Download Zilla Slab on Github or Google Fonts.

The post Zilla Slab: A common language through a shared font appeared first on Mozilla Open Design.

Categories: Mozilla-nl planet

Mozilla VR Blog: Easily customized environments using the Aframe-Environment-Component

ma, 10/07/2017 - 12:23
Easily customized environments using the Aframe-Environment-Component

Get a fresh and new environment for your A-Frame demos and experiments with the aframe-environment component!

Just include the aframe-environment-component.min.js component in your html file, add an <a-entity environment></a-entity> to your <a-scene>, and voila!


<html>
  <head>
    <script src="path/to/aframe.js"></script>
    <script src="path/to/aframe-environment-component.js"></script>
  </head>
  <body>
    <a-scene>
      <a-entity environment></a-entity>
    </a-scene>
  </body>
</html>

The component generates a new environment with presets for lights and geometry. These presets can be easily customized by using the inspector (ctrl + alt + i) and tweaking the individual values until you find the look you like. Presets are a combination of property values that define a particular style; they are a starting point that you can later customize:

<a-entity environment="preset: goldmine; sunPosition: 1 5 -2; groundColor: #742"><a-entity>

You can view and try all the presets from the aframe-environment-component example page.

And of course, the component is fully customizable without a preset:

<a-entity environment="skyType: gradient; skyColor: #1d7444; horizonColor: #7ae0e0; groundTexture: checkerboard; groundColor: #523c60; groundColor2: #544264; dressing: cubes; dressingAmount: 15; dressingColor: #7c5c45"></a-entity>

TIP: If you are using the inspector and are happy with the look of your environment, open your browser's dev tools (F12) and copy the latest parameters from the console.

Customizing your environment

The environment component defines four different aspects of the scene: lighting, sky, ground terrain and dressing objects.

Lighting and mood

The lighting in your scene is easily adjusted by changing the sunPosition property. Scene objects will subtly receive a bounce light from the ground, and the color of the fog will also change to match the sky color at the horizon.


To fully control the lighting of the scene, you can disable the environment lights with lighting: none, and you can set lighting: point if you want a point light instead of a distant light for the sun.

Add realism to your scene by toggling on the shadow parameter and adding the shadow component to objects that should cast shadows onto the ground. Learn more about A-Frame shadows.
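A rough sketch of both ideas together follows; the values are arbitrary, and it assumes the environment's shadow parameter is a simple boolean toggle:

<a-scene>
  <!-- point-light sun, low over the horizon, with environment shadows switched on -->
  <a-entity environment="lighting: point; sunPosition: 2 1 -1; shadow: true"></a-entity>
  <!-- this box opts in to casting a shadow onto the ground via A-Frame's shadow component -->
  <a-box position="0 1 -3" color="#7c5c45" shadow="cast: true"></a-box>
</a-scene>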

Sky and atmosphere

The 200m-radius sky dome can have a basic color, a top-down gradient, or a realistic atmospheric look by using skyType: atmosphere. Lowering the sun near or below the horizon will give you a starry night sky.
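For instance, a minimal night-sky sketch (the sun position is an arbitrary value just below the horizon):

<a-entity environment="skyType: atmosphere; sunPosition: 0.5 -0.1 -1"></a-entity>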

Ground terrain

The ground is a flat, subdivided plane that can be deformed into various terrain patterns like hills, canyons, or spikes. Its appearance can also be customized through textures and colors.

The center play area where the player is initially positioned is always flat, so nobody will get buried ;)

The grid property will add a grid texture to the ground and can be adjusted to different colors and patterns.

Dressing objects

A sky and ground with nothing more can sometimes be a little too plain. The environment component includes many families of objects that can be used to spice up your scene, including cubes, pyramids, towers, mushrooms and more. Among other parameters, you can adjust their variation using dressingVariance, or the ratio of objects placed inside versus outside the play area with dressingOnPlayArea.
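For example, a sketch combining the dressing parameters named above (the exact value formats are assumptions; check the component's documentation for the authoritative ones):

<a-entity environment="dressing: mushrooms; dressingAmount: 100; dressingColor: #795449; dressingVariance: 2 2 2; dressingOnPlayArea: 0.2"></a-entity>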

All dressing objects share the same material and are all merged in one single geometry for better performance.


Further customization

To see the full list of parameters of the component, check out GitHub's aframe-environment-component repository.

Help make this component better

We could use your help!

  • File github issues
  • Create a new preset
  • Share your presets! So anyone can copy/paste and even try live
  • Create new dressing geometries
  • Create new procedural textures
  • Create new ground types
  • Create new grid styles

Feel free to send a pull request to the repository!

Performance considerations

The main idea of this component is to provide a complete and visually interesting environment by including a single JavaScript file, with no extra includes or dependencies. This requires that assets be embedded in the JavaScript or (in most cases) generated procedurally. Despite the computing time and increased file size, both options are normally faster than requesting and waiting for additional textures or model files.

Apart from dressingAmount, the various parameters make little difference in terms of performance.

Categorieën: Mozilla-nl planet

Christian Heilmann: Debugging JavaScript – console.loggerheads?

za, 08/07/2017 - 19:35

The last two days I ran a poll on Twitter asking people what they use to debug JavaScript.

  • console.log() which means you debug in your editor and add and remove debugging steps there
  • watches which means you instruct the (browser) developer tools to log automatically when changes happen
  • debugger; which means you debug in your editor but jump into the (browser) developer tools (see the short sketch after this list)
  • breakpoints which means you debug in your (browser) developer tools
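For anyone who hasn't used the last two much, here is a minimal, made-up illustration of where each option lives; the function and values are purely for demonstration:

function total(prices) {
  let sum = 0;
  for (const price of prices) {
    // 1) console.log(): extra code in your source, output shows up in the console
    console.log('adding', price);
    // 3) debugger;: also extra code, but it pauses execution right here
    //    whenever the developer tools are open
    debugger;
    sum += price;
  }
  // 2) watches and 4) breakpoints live in the developer tools UI instead:
  //    for example, watch the value of `sum` or set a breakpoint on the next line,
  //    without touching the source at all.
  return sum;
}

total([1.5, 2, 3.25]);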

The reason was that, having worked with editors and developer tools in browsers, I was curious how much either is used. I also wanted to challenge my own impression of being a terrible developer for not using the great tools we have to the fullest. Frankly, I feel overwhelmed by the offerings and choices we have, and I felt that I am pretty much set in my ways of developing.

Developer tools for the web have come on in leaps and bounds over the last few years, and a lot of effort from browser makers goes into them. They are seen as a sign of how important the browser is. The overall impression is that when you get the inner circle of technically savvy people excited about your product, the others will follow. Furthermore, making it easier to build for your browser and giving developers insights into what is going on should lead to better products running in your browser.

I love the offerings we have in browser developer tools these days, but I don’t quite find myself using all the functionality. Turns out, I am not alone:

The results of 3970 votes in my survey were overwhelmingly in favour of console.log() as a debugging mechanism.

Poll results: 67% console, 2% watches, 15% debugger and 16% breakpoints.

Both the Twitter poll and its correlating Facebook post had some interesting reactions.

  • As with any overly simple poll about programming, a lot of people argued with the way the question was framed and rightfully pointed out that they use a combination of all of them.
  • There was also a lot of questioning why alert() wasn’t an option, as this is even easier than console.log().
  • There was quite some confusion about debugger; – it seems it isn’t that common.
  • There was only a small amount of trolling – thanks.
  • There were also quite a few mentions of how tests and test-driven development make debugging unimportant.

There is no doubt that TDD and tests make for fewer surprises and are good development practice, but that wasn’t quite the question here. I also happily discard the numerous mentions of “I don’t make mistakes”. I was pretty happy to have had only one mention of document.write(), although you do still see it a lot in JavaScript introduction courses.

What this shows me is a few things I’ve encountered myself doing:

  • Developers who’ve been developing in a browser world have largely been conditioned to use simple editors, not IDEs. We’ve been conditioned to use a simple alert() or console.log() in our code to find out that something went wrong. In a lot of cases, this is “good enough”
  • With browser developer tools becoming more sophisticated, we use breakpoints and step-by-step debugging when there are more baffling things to figure out. After all, console.log() doesn’t scale when you need to track various changes. It is, however, not our first go-to. This is still adding something to our code, rather than moving away from the editor to the debugger
  • I sincerely hope that most of the demands for alert() were made in a joking fashion. Alert had its merits, as it halted the execution of JavaScript in a browser. But all it gives you is a string, and a display of [object Object] is not the most helpful.
Why aren’t we using breakpoint debugging?

There should not be any question that breakpoint debugging is vastly superior to simply writing values into the console from our code:

  • You get proper inspection of the whole state and environment instead of one value
  • You get all the other insights proper debuggers give you like memory consumption, performance and so on
  • It is a cleaner way of developing. All that goes in your code is what is needed for execution. You don’t mix debugging and functionality. A stray console.log() can give out information that could be used as an attack vector. A forgotten alert() is a terrible experience for our end users. A forgotten debugger; statement or breakpoint is much less likely to happen, as it pauses execution of our code. I also remember that, in the past, console.log() in loops had quite a performance impact on our code.

Developers who are used to an IDE for their work are much more likely to know their way around breakpoint debugging and to use it instead of extra code. Since I started working at Microsoft, I have encountered a lot of people in my job who would never touch a console.log() or an alert(). As one response to the poll rightfully pointed out, it is simpler:

It's even longer to write console.log than to put a breakpoint...

— Chen Eshchar (@cheneshchar) July 6, 2017

So why do we keep using console logging in our code rather than the far superior debugging workflow our browser tooling gives us?

I think it boils down to a few things:

  • Convenience and conditioning – we’ve been doing this for years, and it is easy. We don’t need to change and we feel familiar with this kind of back and forth between editor and browser
  • Staying in one context – we write our code in our editors, and we spent a lot of time customising and understanding that one. We don’t want to spend the same amount of work on learning debuggers when logging is good enough
  • Inconvenience of differences in implementation – whilst most debuggers work the same there are differences in their interfaces. It feels taxing to start finding your way around these.
  • Simplicity and low barrier of entry – the web became the big success it is by being independent of platform and development environment. It is simple to show a person how to use a text editor and debug by putting console.log() statements in your JavaScript. We don’t want to overwhelm new developers by overloading them with debugger information or tell them that they need a certain debugging environment to start developing for the web.

The latter is the big one that stops people embracing the concept of more sophisticated debugging workflows. Developers who start out with IDEs are much more used to breakpoint debugging, because it is built into their development tools rather than requiring a switch of context. The downside of IDEs is that they have a high barrier to entry. They are much more complex tools than text editors, many are expensive, and above all they are huge. It is not fun to download a few gigabytes for each update, and frankly, for some developers it is not even possible.

How I started embracing breakpoint debugging

One thing that made it much easier for me to embrace breakpoint debugging is switching to Visual Studio Code as my main editor. It is still a light-weight editor and not a full IDE (Visual Studio, Android Studio and XCode are also on my machine, but I dread using them as my main development tool) but it has in-built breakpoint debugging. That way I have the convenience of staying in my editor and I get the insights right where I code.

For a node.js environment, you can see this in action in this video:
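The embedded video doesn't carry over into this text-only version. As a rough sketch of what that setup involves, a minimal launch configuration for VS Code's node debugger might look like the following; the field names follow VS Code's documented format, but treat the exact shape as an assumption rather than a transcript of the video:

// .vscode/launch.json (comments are allowed in this file)
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug current file",
      "program": "${file}"
    }
  ]
}

With that in place, you set a breakpoint in the editor gutter, start debugging, and the variables and call stack show up right next to the code.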

Are hackable editors, linters and headless browsers the answer?

I get the feeling that this is the future, and it is great that we have tools like Electron that allow us to write light-weight, hackable and extensible editors in TypeScript or just plain JavaScript. Whilst in the past the editor you used was a black art for web developers, we can now actively take part in adding features to them.

I’m even more a fan of linters in editors. I like that Word tells me I wrote terrible grammar by showing me squiggly green or red underlines. I like that an editor flags up problems with your code whilst you code it. It seems a better way to teach than having people make mistakes, load the results in a browser and then see what went wrong in the browser tools. It is true that it is a good way to get accustomed to using those and – let’s be honest – our work is much more debugging than coding. But by teaching new developers about environments that tell them things are wrong before they even save them we might turn this ratio around.

I’m looking forward to more movement in the editor space and I love that we are able to run code in a browser and get results back without having to switch the user context to that browser. There’s a lot of good things happening and I want to embrace them more.

We build more complex products these days – for good or for worse. It may be time to reconsider our development practices and – more importantly – how we condition newcomers when we tell them to work like we learned it.

Categorieën: Mozilla-nl planet

Mozilla Localization (L10N): New L10n Report: July Edition

za, 08/07/2017 - 02:08

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers:

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community and locales added
  • azz (Sierra Puebla Nahuatl): it was onboarded in recent months and already made a lot of progress.
  • be (Belarusian): when we sadly had to drop Belarusian from our desktop and mobile builds due to inactivity, new contributors immediately contacted us to revive localization work. A big shout-out to this newly formed community!
  • tg (Tajik): successfully localized their first project, Thimble!
New content and projects

What’s new or coming up in Firefox desktop

Big changes are coming in Firefox 57, with some of them sneaking into 56 (the version currently localized in Nightly, until August 7). The new feature with the most significant impact on localization is the new onboarding experience: it consists of both an onboarding overlay (a tour displayed by clicking the fox icon in the New tab), and onboarding notifications displayed at the bottom of the New tab.

If you haven’t seen them yet (we always make sure to tweet the link), we strongly suggest reading the latest news about Photon in the Photon Engineering Newsletter (here’s the latest, #8).

On a side note, you should be using Nightly for your daily browsing: it’s exciting (changes every day!) and fundamental to ensuring the quality of your localization.

There is a bug on file to stop localizing about:networking, given how small the target is (users debugging network issues) and how obscure some of these strings are.

A final reminder: The deadline to update Firefox Beta is July 25. Remember that Firefox Beta should mainly be used to fix small issues, since new translations added directly to Beta need to be manually added to the Nightly project in your localization tools.

What’s new or coming up in Test Pilot

The new set of experiments, originally planned for July 5, has been moved to July 25. Also make sure to read this mail on dev-l10n if you have issues testing the website on the dev server.

What’s new or coming up in mobile
  • Mobile (both Android and iOS projects) is going to align with the visual changes coming up on desktop by getting a revamped UI thanks to Photon. Check it out!
    • Firefox for Android Photon meta-bug.
    • Please note that due to the current focus on desktop with Firefox 57, Firefox for Android work is slower than usual. Expect to see more and more Photon updates on Nightly builds though as time passes by.
  • We recently launched Focus for Android v1 with 52 languages! Have you tried it out yet? Reviews speak for themselves. Expect a new release soon, and with that, more locales (and of course, more features and improvements to the app)!
  • Mobile experiments are on the rise. The success of Focus is paving the way to many other great mobile projects. Stay tuned on our mailing list because there just may be cool stuff arriving very soon!
What’s new or coming up in web projects
  • With the new look and feel, a new set of Firefox pages and a unified footer were released for l10n. Make sure to localize firefox/shared.lang before localizing the new pages. Try to complete these new pages before the deadline, or they will redirect to English on August 15.
  • Monthly snippets have expanded to more locales. Last month, we launched the first set in RTL locales: ar, fa, he, and ur. The team is considering creating regional specific snippets.
  • A set of Internet Health pages were launched. Some recent updates were made to fit the new look and layout. Many communities have participated in localizing some or all pages.
  • The newly updated Community Participation Guideline is now localized in six languages: de, es, fr, hi-IN, pt-BR, and zh-TW. Thanks to the impacted communities for reviewing the document before publishing.
  • Expect more updates of existing pages in the coming months so the look and feel are consistent between pages.
What’s new or coming up in Foundation projects
  • Thimble, the educational code editor, got a makeover and useful new features, all of which got localized in more than 20 locales.
  • The fundraising campaign will start ramping up earlier than November this year, so it’s a great idea to make sure the project is up-to-date for your locale, if it isn’t already.
  • The EU Copyright campaign is in slow mode over the summer while MEPs are on holiday, but we will be back at full speed in September before committees vote.
  • We will launch an Internet of Things survey over the summer to get a better understanding of what people know about IoT.
Newly published localizer facing documentation

Events
  • Next l10n workshop will be in Asuncion, Paraguay (August)
  • Berlin l10n workshop is coming up in September!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Opportunities

Accomplishments

Some numbers

Friends of the Lion

Image by Elio Qoshi

  • Shout-out to all Mozilla RTL communities who have been doing a great job at filing and fixing bugs on mobile – as well as providing much needed feedback and insights during these past few months! Tomer, Reza, ItielMaN, Umer, Amir, Manel, Yaron, Abdelrahman, Yaseen – just to name a few. Thank you!
  • Thanks to Jean-Bernard and Adam for jumping in to translate the new Firefox pages in French.
  • Thanks to Nihad of the Bosnian community for leading the effort in localizing the mozilla.org site that is now in production.
  • A big thank you to Duy Thanh for his effort in rebuilding the Vietnamese community and his ongoing localization contribution.
  • The kab (Kabyle) community started a while back, and their engagement is impressive across all products and projects.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Image by Slip Premier, Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)

Categorieën: Mozilla-nl planet

Smokey Ardisson: iOSify Bookmarklets and adding bookmarklets on iOS

za, 08/07/2017 - 01:14

I don’t do much web browsing on iOS (I just can’t handle the small screen), but I do visit a number of sites regularly, some of which do stupid things that make them annoying or hard to use, especially in Mobile Safari. On the desktop, it’s usually easy to “fix” these annoyances with a bit of JavaScript or CSS or a quick trip to the Document Inspector, but none of these are readily available on iOS. Fortunately, however, bookmarklets still work in Mobile Safari.

Unfortunately, though, adding bookmarklets to Mobile Safari is cumbersome at best. Unless you sync all of your bookmarks from the desktop, it’s almost impossible to add a bookmarklet to Mobile Safari unless the bookmarklet’s author has done some work for you. On the desktop, you’d typically just drag the in-page bookmarklet link to your bookmarks toolbar and be done, or control-/right-click on the in-page bookmarklet link and make a new bookmark using the context menu. One step, so simple a two-year-old could do it. The general process of adding a bookmarklet to Mobile Safari goes like this:

  1. Bookmark a page, any page, in order to add a bookmark
  2. Manually edit the aforementioned bookmark’s URL to make it a bookmarklet, i.e. by pasting the bookmarklet’s code
  3. Fun! :P

To make things slightly easier, Digital Inspiration has a collection of common bookmarklets that you can bookmark directly and then edit back into functioning bookmarklets.1 It’s still two steps, but step 2 becomes much simpler (probably a five-year-old could do it). This is great if Digital Inspiration has the bookmarklet you want (or if the bookmarklet’s author has included an “iOS-friendly” link on the page), but what if you want to add Alisdair McDiarmid’s Kill Sticky Headers bookmarklet?

To solve that problem, I wrote “iOSify Bookmarklets”—a quick-and-dirty sort-of “meta-bookmarklet” to turn any standard in-page bookmarklet link into a Mobile Safari-friendly bookmarkable link.

Once you add iOSify Bookmarklets to Mobile Safari (more on that below), you tap it in your bookmarks to convert the in-page bookmarklet link into a tappable link, tap the link to “load” it, bookmark the resulting page, and then edit the URL of the new bookmark to “unlock” the bookmarklet.

Say you’re visiting http://example.com/foo and it has a bookmarklet, bar, that you want to add to Mobile Safari.

  1. Open your Mobile Safari bookmarks and tap iOSify Bookmarklets. (The page appears unchanged afterwards, but iOSify Bookmarklets did some work for you.)
  2. Tap the in-page link to the bookmarklet (bar) you want to add to Mobile Safari. N.B. It may appear again that nothing happens, but if you tap the location bar and finger-scrub, you should see the page’s URL has been changed to include the code for the “bar” bookmarklet after a ?bookmarklet# string.
  3. Tap the Share icon in Mobile Safari’s bottom bar and add the current page as a bookmark; you can’t edit the URL at this point, so just choose Done.
  4. Tap the Bookmarks icon, then Edit, then the bookmark you just added. Edit the URL and delete everything before the javascript: in the bookmark’s URL. (Tap Done when finished, and Done again to exit editing mode.)

The “bar” bookmarklet is now ready for use on any page on the web.

Here’s an iOS-friendly bookmarkable version of iOSify Bookmarklets (tap this link, then start at step 3 above to add this to Mobile Safari): iOSify Bookmarklets

The code, for those who like that sort of thing:
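(The embedded code doesn't survive this text-only aggregation. As a rough idea of the approach only, and not Ardisson's original script, a sketch based on the workflow above and the footnote below might look like this:)

javascript:(function () {
  // Rewrite every in-page javascript: link so that tapping it loads a normal URL
  // carrying the bookmarklet code after "?bookmarklet#". That URL can then be
  // bookmarked and trimmed back down to the javascript: part, as in step 4 above.
  var base = location.href.split('#')[0].split('?')[0];
  [].forEach.call(document.querySelectorAll('a[href^="javascript:"]'), function (a) {
    a.href = base + '?bookmarklet#' + a.getAttribute('href');
  });
})();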

I hope this is helpful to someone out there :-)

        

1 For the curious, Digital Inspiration uses query strings and fragments in the URL in order to include the bookmarklet code in the page URL you bookmark, and iOSify Bookmarklets borrows this method. ↩︎

Categorieën: Mozilla-nl planet

Mozilla VR Blog: Link Traversal and Portals in A-Frame

vr, 07/07/2017 - 01:28
Link Traversal and Portals in A-Frame

Demo for the impatient (it requires controllers: Oculus, HTC Vive, Daydream or GearVR)

A-Frame 0.6.0 and Firefox Nightly now support navigating across pages while in VR mode. WebVR has finally earned the Web badge. The Web gets its name from the structure of interconnected content that uses the link as the glue. Until now, the VR Web was fragmented and had to be consumed in bite-size pieces, since the VR session was not preserved on page navigation. In the first iteration of the WebVR API we focused on displaying pixels and meeting the performance requirements on a single page. Thanks to the efforts of Doshheng Mu and Kip, Firefox Nightly now also ships the mechanism that enables a seamless transition between VR sites.


Link traversal in VR relies on the vrdisplayactivate event. It is fired on the window object on page load if the preceding site was presenting content in the headset.

To enter VR mode for the first time, the user is expected to explicitly trigger VR mode with an action like a mouse click or a keyboard shortcut, to prevent sites from taking control of the headset inadvertently. Once VR is engaged, subsequent page transitions can present content in the headset without further user intervention. A page can be granted permission to enter VR automatically by simply attaching an event handler:

window.addEventListener('vrdisplayactivate', function (evt) {
  /* A page can now start presenting in the headset */
  vrDisplay.requestPresent([{ source: myCanvas }]).then(function () { ... });
});

Links in A-Frame

A-Frame 0.6.0 ships with a link component and an a-link primitive. The link component can be configured in several ways:

<a-entity link="href: index.html; title:My Home; image: #homeThumb"></a-entity> PropertyDescription hrefURL where the link points to titleText displayed on the link. href is used if not defined onevent that triggers link traversal image360 panorama used as scene preview in the portal colorBackground color of the portal highlightedtrue if the link is highlighted highlightedColorcolor used to highlight the link visualAspectEnabledenable/disable the visual aspect if you want to implement your own

The a-link primitive provides a compact interface that feels like the traditional <a> tag that we are all used to.

<a-link href="index.html" image="#thumbHome" title="my home"></a-link>

The image property points to the <img> element that will be used as a background of the portal and the title is the text displayed on the link.

The UX of VR links

Using the wonderful art of arturitu, both the A-Frame component and the primitive come with a first interpretation of how links could be visually represented in VR. It is a starting point for an exciting conversation that will develop over the next years.

Our first approach addresses several problems we identified:

The visual aspect of links should be consistent.

So users can quickly identify the interconnected experiences at a glance. Thanks to Kip's shader wisdom, we chose a portal representation that gives each link a distinct look representative of the referenced content. A-Frame provides a built-in screenshot component to easily generate the 360 panoramas necessary for the portal preview.


Links should be useful at any distance.

Portals help discriminate between nearby links, but the information becomes less useful at a distance. From far away, portals alone can be difficult to spot because they might blend with the scene background or become hard to see at wide angles. To solve this, we made links fade into a solid fuchsia circle with a white border that grows in thickness with distance, so all links have a consistent look (colors are configurable). Portals will also face the camera to avoid wide angles that reduce the visible surface.


Links should provide information about the referenced experience.

One can use the surrounding space to contextualize the link and hint at where it will point. In addition, links themselves display either the title or the URL they refer to. This provides additional information to the user about the linked content.


There should be a convenient way to explore the visible links.

While portals are a good way to preview an experience, it can be hard to explore the available options if the user has to move around the scene to inspect the links one by one. We developed a peek feature that allows the user to point at any link and quickly zoom into the preview without having to move.


Next steps

One of the limitations of the current API is that a Web developer needs to manually point to the thumbnails that the links use to render the portals. We want to explore ways, via meta tags, a web manifest or other conventions, for a site to provide the thumbnail for third-party links to consume. This way a Web developer has more control over how their website will be represented in other pages.

Another rough edge is what happens when, after navigation, a page takes a long time to load or ends up in an error. There's no way at the moment to inform the user of those scenarios while keeping the VR headset on. We want to explore ways for the browser itself to intervene in VR mode and keep the user properly informed at each step when leaving, loading and finally navigating to a new piece of VR content.

Conclusion

With in-VR page navigation we're now one step closer to materializing the Open Metaverse on top of the existing Web infrastructure. We hope you find our link proposal inspiring and that it sparks a good conversation. We don't really know what links will look like in the future: doors, inter-dimensional portals or exit burritos... We cannot wait to see what you come up with. All the code and demos are already available as part of the 0.6.0 version of A-Frame. Happy hacking!


Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Jul. 06, 2017

do, 06/07/2017 - 18:00

Reps Weekly Meeting Jul. 06, 2017. This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet

The Mozilla Blog: New Research: Is an Ad-Supported Internet Feasible in Emerging Markets?

do, 06/07/2017 - 16:56
Fresh research conducted by Caribou Digital and funded by Mozilla explores digital advertising models in the Global South — whether they can succeed, and what that means for users, businesses, and the health of the Internet

Since the Internet’s earliest days, advertising has been the linchpin of the digital economy, supporting businesses from online journalism to social networking. Indeed, two of the five largest companies in the world — Facebook and Google — earn almost all of their revenue through digital advertising.

As the Internet reaches new users in India, Kenya, and elsewhere across the Global South, this model is following close behind. But is the digital advertising model that has evolved in developed economies sustainable in emerging economies? And if it’s not: What does it mean for the billions of users who are counting on the Internet to unlock new pathways to education, economic growth, and innovation?

Publishers see drastically less revenue per user in these regions, partly because low-income populations are less valuable to advertisers, and partly because constraints on the user experience — low-quality hardware, unreliable network coverage, and a dearth of local content — fundamentally limit how people engage with digital content and services.

As a result, users in emerging markets will have fewer choices, as local content providers and digital businesses will struggle to earn enough from their home markets to compete with the global platforms.

Today, we’re publishing “Paying Attention to the Poor: Digital Advertising in Emerging Markets.”

It’s fresh research conducted by Caribou Digital and funded by Mozilla that explores the barriers traditional digital advertising models face in emerging economies; the consequent impact on users, businesses, and the health of the Internet; and what new models are emerging.  

In summary:

Ad revenue-wise, there is an order-of-magnitude difference between users in developed economies and users in the Global South.

Facebook earns a quarterly ARPU (average revenue per user) of $1.41 in Africa and Latin America, and $2.07 in Asia-Pacific — an order of magnitude less than  the $19.81 it earns in the U.S. and Canada

As a result, just over half of Facebook’s total global revenue comes from only 12% of its users

The high cost of data in emerging markets is one driver of ad blocking

Due to prohibitive data costs and slower network speeds, many Internet users in emerging markets use proxy browsers, such as UC Browser or Opera Mini, which reduce data consumption and also block ads

One report by PageFair claims over 309 million users around the world used mobile ad blockers in 2016 — with 89 million hailing from India and 28 million hailing from Indonesia

A dearth of user data — or, the “personal data gap” — presents another challenge to advertisers.

In developed economies, data profiling and ad targeting has been a boon to advertisers. But in the Global South, people have much smaller digital footprints

Limited online shopping, a glut of open-source Android devices, and a tendency toward multiple, fragmented social media accounts dilutes the value of personal data to advertisers

Limited advertising revenue in emerging markets challenges local innovation and competition.

Publishers and developers follow the money. As a result, content is targeted to, and localized for, developed markets like the U.S. or Japan — even producers in emerging markets will ignore their domestic market in favor of more lucrative ones

Large companies like Facebook have the resources to subsidize forays into unprofitable markets; smaller companies do not. As a result, the reigning giants become further entrenched

A lack of local content can have deeply negative implications.

Availability of local content is a key demand-side driver for increasing Internet access for marginalized populations, and localized media can foster inclusion and support democratic institutions

But without viable economic models for supporting this content, opportunity is squandered. Presently, the majority of digital content — including user-generated content such as Wikipedia — is in English

The outlook for digital advertising-supported businesses in emerging markets is bleak.

Low monetization rates will continue to limit the types of Internet businesses that can flourish in the Global South

To succeed, businesses in the Global South have to build more strategically, working toward profitability (and not user growth) from the very beginning

These constraints demand new business model innovations for an Internet ecosystem that is evolving differently in the Global South

“Sponsored data” or “incentivized action” models which offer free data in return for engagement with an advertiser’s content are one approach to mitigating the access and affordability constraint

Transactional revenue models, such as those seen in digital financial services, will play an increasingly important role as payments infrastructure matures

You can read the full report here.

In the coming weeks and months, Mozilla and Caribou Digital will share our findings with allies across the Internet health space — the network of NGOs, institutions, and individuals who are working toward a more healthy web. We hope our learnings will help unlock innovative solutions that balance commercial success with openness and freedom online.

The post New Research: Is an Ad-Supported Internet Feasible in Emerging Markets? appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Bay Area Progress Report

do, 06/07/2017 - 08:15

I'm in the middle of a three-week work trip to the USA.

Monday last week I met with a couple of the people behind the Julia programming language, who have been using and contributing to rr. We had good discussions and it's good to put faces to names.

Monday night through to Friday night I was honoured to be a guest at Mozilla's all-hands meeting in San Francisco. I had a really great time catching up with a lot of old friends. I was pleased to find more Mozilla developers using rr than I had known about or expected; they're mostly very happy with the tool and some of them have been using esoteric features like chaos mode with success. We had a talk about rr and I demoed some of the new stuff Kyle and I have been working on, and talked about which directions might be most useful to Mozilla.

Saturday through Monday I went camping in Yosemite National Park with some friends. We camped in the valley on Saturday night, then on Sunday hiked down from Tioga Road (near the Lukens Lake trailhead) along Yosemite Creek to north of Yosemite Falls and camped there overnight. The next day we went up Eagle Peak for a great view over Yosemite Valley, then hiked down past the Falls back to the valley. It's a beautiful place and rather unlike any New Zealand tramping I've done — hot, dry, various sorts of unusual animals, and ever-present reminders about bears. There were a huge number of people in the valley for the holiday weekend!

Tuesday was a bit more relaxed. Being the 4th of July, I spent the afternoon with friends playing games — Bang and Lords of Waterdeep, two of my favourites.

Today I visited Brendan and his team at Brave to talk about rr. On Friday I'll give a talk at Stanford. On Monday I'll be up in Seattle giving a talk at the University of Washington, then on Tuesday I'll be visiting Amazon to talk about the prospects for rr in the cloud. Then on Wednesday through Friday I'll be attending Usenix ATC in Santa Clara and giving yet another rr talk! On Saturday I'll finally go home.

I really enjoy talking to people about my work, and learning more about people's debugging needs, and I really really enjoy spending time with my American friends, but I do miss my family a lot.

Categorieën: Mozilla-nl planet
