Mozilla Nederland: The Dutch Mozilla community

Nick Desaulniers: GCC vs LLVM Q3 2017: Active Developer Counts

Mozilla planet - Tue, 05/09/2017 - 09:20

A blog post from a few years ago that really stuck with me was Martin Olsson’s Browser Engines 2015: Commit Rates and Active Developer Counts, where he shows information about the number of authors and commits to popular web browsers. The graphs and analysis had interesting takeaways, like showing the obvious split in Blink and WebKit, and the relative number of contributors to the projects. Martin had data comparing GCC to LLVM from Q4 2015, but I wanted to see what the data looked like now, in Q3 2017, so I simply reran the numbers and am sharing my findings here. Luckily, Martin open sourced the scripts he used for measurements, so they could be rerun.

Commit count and active authors in the previous 60 days are rough estimates of project health; the scripts don’t/can’t account for unique authors (the same author using different git commit info), and commit frequency is meaningless for comparing developers who commit early and commit often, but let’s take a look. Counting active contributors over 60 days cuts out folks who do commit to either code base, just not as often. Lies, damned lies, and statistics, right? Or torture the data enough, and it will confess to anything…
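Martin’s scripts aren’t reproduced here, but the heart of the measurement boils down to something like this Node.js sketch (a rough equivalent of what the scripts compute, not Martin’s actual code; it assumes it is run from inside a local clone of the repository):

// count-activity.js - a rough sketch of the measurement, not Martin's script
const { execSync } = require('child_process');

// One author email per commit in the last 60 days
const emails = execSync('git log --since="60 days ago" --format=%ae', { encoding: 'utf8' })
  .trim()
  .split('\n')
  .filter(Boolean);

console.log('commits in last 60 days: ' + emails.length);
console.log('active authors in last 60 days: ' + new Set(emails).size);

Note that this under-counts unique authors exactly as described above, since the same person may commit under different emails.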

Note that LLVM is split into a few repositories (llvm the common base, clang the C/C++ frontend, libc++ the C++ runtime, compiler-rt the sanitizers/built-ins/profiler lib, lld the linker, clang-tools-extra the utility belt, lldb the debugger (there are more, these are the most active LLVM projects)). Later, I refer to LLVM as the grouping of these repos.

There are a lot of caveats with this data. I suspect that the separate LLVM repos have a lot of contributor overlap and thus fewer active contributors when looked at in aggregate. That is to say, you can’t simply add them up, else you’d be double counting a bunch. Also, the comparison is not quite fair, since the front-end-language and back-end-target support in these two massive projects does not overlap in a lot of places.

LLVM’s 60-day active contributor count is ~3x-5x GCC’s and growing, while GCC’s count of roughly 100 hasn’t changed much since ‘04. It’s safe to say GCC is not dying; it’s going steady and chugging away as it has been, but it seems LLVM has been very strong in attracting active contributors. Either way, I’m thankful to have not one, but two high quality open source C/C++ compilers.


This Week In Rust: This Week in Rust 198

Mozilla planet - Tue, 05/09/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is brain, a programming language transpiler to brainfuck of all things! Thank you, icefoxen, for the weird suggestion. It's appreciated!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

120 pull requests were merged in the last week

New Contributors
  • Andrew Gauger
  • Andy Gauge
  • Jeremy Sorensen
  • Lukas H
  • Phlosioneer
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

We're currently writing up the discussions, and we'd love some help. Check out the tracking issue for details.

PRs:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

  • you can ask a Future “are we there yet”, to which it can answer “yes”, “no”, or “don’t make me come back there”
  • an Iterator is something you can keep asking “more?” until it gets fed up and stops listening
  • Display is just a way to say “show me your moves”, with the other formatting traits being other dance moves
  • if something isn’t Send, then it’s a cursed item you can’t give away, it’s yours to deal with
  • if something isn’t Sync, then it won’t even appear for other people, it’s possibly an apparition inside your head
  • things that are Clone can reproduce asexually, but only on command
  • things that are Copy won’t bother waiting for you

@QuietMisdreavus on Twitter.

Thanks to Havvy for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.


Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - September 5, 2017

Mozilla planet - Tue, 05/09/2017 - 02:00

Here’s what happened on the MozMEAO SRE team from August 29th - September 5th.

Current work

Deis Workflow: Final Release

The final release of Deis Workflow is scheduled for September 9th, 2017. We use Deis Workflow to help run Basket, Bedrock, Snippets, and Careers, so each project will need to be modified to use Kubernetes directly (instead of interfacing with Kubernetes via Deis).

More info here.

MDN Migration to AWS

Analytics eval

We’re evaluating Snowplow to see if it will meet our analytics needs.

Upcoming Portland Deis 1 cluster decommissioning

The decommissioning of the Deis 1 cluster in Portland has been pushed out until next week due to support issues related to other applications.

Links

The Rust Programming Language Blog: Rust 2017 Survey Results

Mozilla planet - Tue, 05/09/2017 - 02:00

It’s that time of the year when we take a good look at how things are going by asking the community at large – both Rust users and non-users. And wow, did you respond!

This year we had 5,368 responses. That’s over 2,000 more responses than we had last year!

The scale of the feedback was both inspiring and humbling, and we’ve worked hard to read through all of your comments and suggestions. There were so many helpful ideas and experiences that people shared, and we truly do appreciate it. Without further ado, let’s take a look.

 66.9% Rust users, 9.9% stopped using, 23.2% never used

Just as we saw last year, two-thirds of responses are from Rust users and the remainder from non-users. This year, we separated the “don’t use Rust” group into those who used Rust and stopped and those who never used Rust. It’s inspiring to see so many developers wanting to help us make Rust better (even if they don’t use it!) so that it can reach even more people.

We’ll talk more about Rust non-users later in this report, but first let’s look at the responses from Rust users.

Using Rust

 0.5% less than a day, 4.2% less than a week, 13.1% less than a month, 39.7% less than a year, 42.5% over a year

This year, we’re seeing a growing number of experienced users sticking with Rust, with the “more than a year” group growing to over 42% (up from 30% last year). The beginners are also an impressively large set, with the “less than a month” crowd at just about 18%, meaning that every month we’re currently attracting new users equal to nearly a fifth of our existing user base, even as it grows larger.

 36.5% less than 1000 lines, 46.3% 1000 to 10000 lines, 14.2% 10000 to 100000 lines, 2.1% over 100000, 0.9% don't know

People are working with ever-larger amounts of Rust, with medium- and large-scale lines of code totals both nearly doubling since last year as a percentage of the whole, now making up 16% of respondents (up from last year’s 8.9%). This shows a growing interest in using Rust in ever-larger projects, and a growing need for tools to support this growth.

 17.5% daily, 43.3% weekly, 24.4% monthly, 14.9% rarely

Despite the rising amount of code developers are working with, we’re seeing a small downtick in both daily and weekly users. Daily users have fallen from 19% to 17.5%, and weekly users have fallen from 48.8% to 43.3%. This could be a natural transition in this stage of our growth, as a broader range of developers begin using Rust.

Path to Stability

 92.5% no, 7.5% yes

In the last year, we made big strides in reducing the breakage caused by new releases of the compiler. Last year, 16.2% of respondents said that upgrading to a new stable Rust compiler broke their code. This year, that number has fallen to 7.5% of respondents. This is a huge improvement, and one we’re proud of, though we’ll continue working to push this down even further.

Chart shows strong support for nightly and current stable releases

Developers have largely opted to move to nightly or a recent stable (with some on beta), showing that developers are eager to upgrade and do so quickly. This simplifies the support structure a bit from last year, where developers were on a wider range of versions.

Stable users now make up 77.9% of Rust users. Unfortunately, despite our efforts with procedural macros and helping move crates like Serde to stable, we still have work to do to encourage people to move away from the nightly Rust compiler. This year shows an increase in nightly users: at 1,852 votes, nightly now represents 51.6% of respondents, up from last year’s 48.8%.

How we use Rust

 90.2% rustup, 18.9% linux distros, 5% homebrew, 4.7% official .msi, 3.1% official tarball, 1.4% official mac pkg

One of the big success stories with Rust tooling was rustup, the Rust toolchain installer. Last year, we saw a wide diversity in the ways people were installing Rust. This year, many of them have moved to using rustup as their main way of installing Rust, now totalling 3,205 of the responses, which moves it from last year’s 52.8% to 90.2%.

 80.9% Linux, 35.5% macOS, 31.5% Windows, 3.2% BSD-variant

Linux still features prominently as one of the main platforms Rust developers choose. Of note, we also saw a rise in the use of Windows as a developer platform at 1,130 of the 3,588 total respondents, putting it at 31.5% of respondents, up from 27.6% of last year.

 91.5% Linux, 46.7% Windows, 38.2% macOS, 16.8% embedded, 13.2% WebAssembly and asm.js, 9.9% Android, 8.9% BSD-variant, 5.3% Apple iOS

Next, we asked what platforms people were targeting with their Rust projects. While we see a similar representation of desktop OSes here, we also see a growing range of other targets. Android and iOS are at healthy 9.9% and 5.3% respectively, both almost 10x larger than last year’s percentages. Embedded also has had substantial growth since last year’s single-digit percentage. As a whole, cross-compilation has grown considerably since this time last year.

 45.8% vim, 33.8% vscode, 16.1% intellij, 15.7% atom, 15.4% emacs, 12.2% sublime, 1.5% eclipse, 1.5% visual studio

Among editors, vim remains king, though we see healthy growth in VSCode adoption at 34.1% (up from last year’s 3.8%). This growth no doubt has been helped by VSCode being one of the first platforms to get support for the Rust Language Server.

 4.4% full-time, 16.6% part-time, 2.9% no but company uses Rust, 57.6% no, 2.4% not sure, 16.1% not applicable

Rust in the workplace has also continued to grow. This year’s 4.4% full-time and 16.6% part-time Rust workers show a tick up from last year’s 3.7% full-time and 16.1% part-time.

 18.9% less than 1000 lines, 56% 1000 to 10000 lines, 23.1% 10000 to 100000 lines, 2% more than 100000 lines

Users who use Rust part-time in their companies showed a growth in larger projects since last year, with the medium- and large-scale projects taking up more percentage of total projects this time around.

 1.9% less than 1000 lines, 27.9% 1000 to 10000 lines, 52.6% 10000 to 100000 lines, 17.5% more than 100000 lines

Likewise, full-time Rust commercial users saw medium- and large-scale projects grow to take a larger part of the pie, with projects over 100,000 lines of code making up almost 18% of all full-time commercial respondents, and a large shift in the 10,000-100,000 lines range from 39.7% up to 52.6%.

Feeling Welcome

 75.1% feel welcome, 1.3% don't feel welcome, 23.6% don't know

An important goal for the Rust community is to be welcoming to new users, whether they are current users or potential users in the future. We’re pleased to see that over three-quarters of all respondents said they feel welcome in the Rust community, with 23.6% not sure.

chart showing 81.4% not underrepresented, and a variety of underrepresented, with no category above 5%

The demographics of respondents stayed about the same year over year. Diversity and inclusiveness continue to be vital goals for the Rust project at all levels. The Rust Bridge initiative aims for diversity at the entry level. The Rust Reach project, launched this year, brings in a wide range of expertise from people underrepresented in the Rust world, and pairs them with Rust team members to make Rust more accessible to a wider audience.

Stopped using Rust

New this year, we separated out the people who had stopped using Rust from those who had never used Rust to better understand why they stopped. Let’s take a look first at when they stopped.

 3.2% less than a day, 18.5% less than a week, 43.1% less than a month, 30.2% less than a year, 4.9% more than a year

The first surprise we had here was how long people gave Rust a try before they stopped. Our initial hunch was that people would give up using Rust in the first day, or possibly the first week, if it didn’t suit them or their project. Instead, what we see is that people tried Rust for a much longer time on average than that.

Themes from people who stopped using Rust:

  • 23% responded that Rust is too difficult to use.
  • 20% responded that they didn’t have enough time to learn and use Rust effectively.
  • 10% responded that tools aren’t mature enough.
  • 5% responded they needed better IDE support.
  • The rest mentioned needing support for Rust in their jobs, having finished the project they needed Rust for, being turned away by Rust’s syntax, not being able to think of a project to build, or having had a bad interaction with the Rust community.
Not using Rust

 666 company doesn't use Rust, 425 Rust is too intimidating, hard to learn, or too complicated, 295 Rust doesn't solve a problem for me, 255 Rust doesn't have good IDE support, 209 Rust doesn't have libraries I need, 161 Rust seems too risky for production, 89 Rust doesn't support platforms I need, 73 Rust doesn't have tools I need

While the learning curve and language complexity still played a role in preventing people from picking up Rust, one aspect that resonated with many people is that there just simply aren’t enough active commercial projects in Rust for people to be a part of. For some, they could surmount the learning curve if there was strong incentive to do so.

Areas for Improvement

Finally, at the end of the survey we provided a free-form area to talk about where Rust could improve. Before we get to the themes we saw, we wanted to give a big “thank you!” to everyone who posted thoughtful comments. There are many, many good ideas, which we will be making available to the respective sub-teams for future planning. With that, let’s look at the themes that were important this year:

  • 17% of responses underscored the need for better ergonomics in the language. People had many suggestions about how to improve Rust for day-to-day use, to allow for easier prototyping, to work with async programming more easily, and to be more flexible with more data structure types. Just as before, the need for a much easier and smoother experience with the borrow checker and how to work with lifetimes was a popular request.
  • 16% of responses talk about the importance of creating better documentation. These covered topics ranging from helping users transition from other languages, to creating more examples and sample projects, to helping people get started with various tasks or crates, to creating video resources to facilitate learning.
  • 15% of responses point out that library support needs to improve. People mention the need for a strong set of core libraries, the difficulty of finding high quality crates, the general maturity of the crates and the crate ecosystem, and the need for libraries to cover a wide range of areas (e.g. web, GUIs, networking, databases, etc). Additionally, people mentioned that libraries can be hard to get started with, depending on their API design and amount of documentation.
  • 9% of the responses encouraged us to continue to build our IDE support. Again, this year underscored that there is a sizeable section of developers who need support for Rust in their IDEs and tools. The Rust Language Server, the ongoing effort to support IDEs broadly, was mentioned as one of the top items people are looking forward to this year, and comments pointed to these efforts needing to reach stable and grow support into other IDEs, as well as continuing to grow the number of available features.
  • 8% of responses mention learning curve specifically. As more developers try to pick up Rust or teach it to coworkers and friends, they’re finding that there aren’t sufficient resources to do so effectively and that Rust itself resists a smooth learning experience.
  • Other strong themes included the need for: faster compile times, more corporate support of Rust (including jobs), better language interop, improved tooling, better error messages, more marketing, less marketing, and improved support for web assembly.
Conclusion

We’re blown away by the response this year. Not only is this a much larger number of responses than we had last year, but we’re also seeing a growing diversity in what people are using Rust for. Thank you so much for your thoughtful replies. We look forward to using your feedback, your suggestions, and your experience to help us plan for next year.


Daniel Pocock: Spyware Dolls and Intel's vPro

Mozilla planet - Mon, 04/09/2017 - 08:09

Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May, when Intel confirmed that a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking
  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping in to the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is actually changing, either in what vendors are offering or in terms of how companies and governments outside the US buy technology?
Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and reply on the thread.


Mozilla Addons Blog: September’s Featured Extensions

Mozilla planet - Sat, 02/09/2017 - 01:00


Pick of the Month: Search Image

by Didier Lafleur
Highlight any text and perform a Google image search with a couple clicks.

“I’ve been looking for something like this for years, to the point I wrote my own script. This WORKS for me.”

Featured: Cookie AutoDelete

by Kenny Do
Automatically delete stagnant cookies from your closed tabs. Offers whitelist capability, as well.

“Very good replacement for Self-Destructing Cookies.”

Featured: Tomato Clock

by Samuel Jun
A super simple but effective time management tool. Use Tomato Clock to break your work bursts into meaningful 25-minute “tomato” intervals.

“A nice way to track my productivity for the day.”

Featured: Country Flags & IP Whois

by Andy Portmen
This extension will display the country flag of a website’s server location. Simple, informative.

“It does what it should.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post September’s Featured Extensions appeared first on Mozilla Add-ons Blog.


Sean McArthur: Bye Mozilla, Hello Buoyant

Mozilla planet - Fri, 01/09/2017 - 21:45
Bye, Mozilla

Today is my last day as a Mozilla employee.

It hurts to say that. I love Mozilla.[1]

I loved waking up to work knowing that I was working for you. For everyone’s internet. Truly, even if you feel Firefox is inferior to your preferred browser, you must admit that an internet ruled by profit-driven businesses is no one’s dream. What Mozilla does, by working to provide an alternative browser choice, is allow a non-profit organization to have a voice. Without Firefox, the group of people that make up Mozilla would just be yelling at the closed doors of “tech giants”.

I got to work on some amazing technology, and with superb humans. The concept of Persona is exactly the kind of thing that Mozilla’s voice can push for: a way to fix passwords, while curbing identity providers from tracking your every action. Unfortunately, we couldn’t get enough adoption before we realized that Firefox needed more help. Firefox was hurting, and without Firefox, well, our voice doesn’t mean much. I dream that we can attack that problem again someday.

Taking our Identity team off Persona, we boosted Firefox Sync from a nerd toy into something that all Firefox users could benefit from. This is actually quite important, something that can oftentimes be forgotten even inside the organization. With Sync benefiting from Firefox Accounts, users gain a whole lot more value from installing Firefox on multiple devices. Firefox’s Awesomebar is still far better at finding things than Chrome’s, and in my own experience, it’s only gotten better since it can remember links I open on my phone or tablet.

The superb humans really are superb. Yes, they’re intelligent. But that’s the boring part. Many people are. They stand out, instead, because of their empathy, their optimism, their voice, their loyalty. My team members are loyal to each other, because each of us is loyal to the mission: a free, open internet where the user is in command. We all wanted each other to succeed, because that always meant wins for you, the user.

And yet, it’s time for me to start the next step in my journey. I still wish Mozilla the very best. You should definitely be using Firefox. And I hope I’ll bump into my friends plenty of times more in the future.

Hello, Buoyant

Starting Monday, I’ll be working for Buoyant.

Over the past few years, I’ve been learning and writing Rust. I’ve really dug into the community,[2] and been absolutely loving working on tools for HTTP and servers and clients and whatnot.

Buoyant is working on tools that help big websites scale. One such tool is linkerd, described as a service mesh. This tool helps websites that receive godzillions of requests, and so it needs to be fast and use little memory. Also, it’s 2017, and releasing a new tool that has CVEs about memory unsafety every couple of months isn’t really acceptable when we have alternatives. So, Rust!

It turns out, we’re a great fit! I’ll be continuing to work on HTTP pieces in Rust. In fact, this means I’ll now be working in Rust full-time, so hopefully pieces should be built faster. I’ll be working in open source still, so hey, perhaps you will still benefit!

This is a really sad day for me, but I’m also super excited for next week![3]

  1. I’ve been at Mozilla for over 6 years! It’s like I’m leaving part of my family, part of how I identify myself in the world. Not many places really grip you personally like Mozilla does. 

  2. I really tried to get into the nodejs community a few years ago, but eventually ran into enough cases of elitism that I gave up. Thankfully, the Rust community is fantastic any way I can measure. 

  3. Worst. Roller coaster. Ever. 


Ehsan Akhgari: Quantum Flow Engineering Newsletter #22

Mozilla planet - Fri, 01/09/2017 - 07:59

With around three weeks left in the development cycle for Firefox 57, everyone seems to be busy getting the last fixes in to shape up this long-awaited release.  On the Quantum Flow project, we have kept up with the triage of the incoming bug reports that are tagged as [qf], and as we’re getting closer to the beta uplift date, the realistic opportunity for fixing bugs is getting narrower, and as such the bar for prioritizing incoming bug reports as [qf:p1] keeps getting higher.  This matches with the overall shift in focus in the past few weeks towards getting all the ongoing work that is targeting Firefox 57 under control to make sure we manage to do as much of what we have planned to do for this release as possible.

This past week we made more progress on optimizing the performance of Firefox for the Speedometer V2 benchmark.  Besides many of the usual optimizations, which you will read about in the acknowledgement section of the newsletter, one noteworthy item was David Major’s investigation into adding this benchmark to the set of pages that we load to train the PGO profile we use on Windows builds.  This allowed the MSVC code generator to generate better optimized code using the profile information and bought us a few benchmark score points.  Of course, earlier similar attempts hadn’t really gained us better performance, and it’s unclear whether this change will stick or get backed out due to PGO specific crashes or whatnot, but in the meantime we’re not stopping landing other improvements to Firefox for this benchmark either!  At the time of this writing, the Firefox Health Dashboard puts our benchmark score on Nightly within 4.07% of Chrome’s.

Another piece of news worth mentioning related to Speedometer is that Speedometer tests with Stylo were recently enabled on AWFY.  As can be seen on the reference hardware score page, Stylo builds are now a bit faster than normal Gecko when running Speedometer.  This has been achieved through the hard work of many people on the Stylo team, and I’d like to take a moment to thank them, and especially call out Bobby Holley, who helped make sure that we have a great performance story here.

In other performance related news, this past week the first implementation of our cooperative preemptive scheduling of web page JavaScript, more commonly known as Quantum DOM, landed.  The design document describes some of the background information, which may be helpful if you need to understand the details of what the new world looks like.  For now, this feature is disabled by default while the ongoing work to iron out the remaining issues continues.

The Quantum DOM project has been a massive overhaul of our codebase.  A huge part of it has been the “labeling” project that Bevis Tseng has been tirelessly leading for many months now.  The basic idea behind this part of the project is to give each runnable a name and indicate which tab or document the runnable is associated with (I’m simplifying a bit, please see the wiki page for more details.)  Bill McCloskey had a great suggestion for the performance story section of this newsletter: to highlight some performance lessons we have learned through this project, and how it ended up uncovering some unexpected performance issues in Firefox!

Bevis has some telemetry analysis which measures the number of runnables of a certain type (to view the interesting part, please scroll down to the “full runnable list” section).  This analysis has been used to prioritize which runnables need to be worked on next for labeling purposes.  But as this list shows the relative frequencies of runnables, we’ve ended up finding several surprises in where some runnables are showing up on this list, which have uncovered performance issues which would otherwise be very difficult to detect and diagnose.  Here are a few examples (thanks to Bill for enumerating them!):

  • We used to send the DidComposite notification to every tab, regardless of whether it was in the foreground or background.  We tried to fix this once, but that fix actually only addressed a related issue involving multiple windows.  The real issue finally got fixed later.
  • We used to have a “startup refresh driver” which was only supposed to run for a few milliseconds during startup.  However, it was showing up as #33 on the list of runnables.  We found out that it was never being disabled after it was started, so if we ever started running the startup refresh driver, it would run indefinitely in that browsing session and get to the top of the list.  Unfortunately, while this runnable disappeared for a while after that bug was fixed, it is now back and we’re not sure why.
  • We found out that MediaStreamGraphStableStateRunnable is #20 on this list, which was surprising as this runnable is only supposed to be used for WebRTC and WebAudio, neither being extremely popular features on the Web.  Randell Jesup found out that there is a bug causing the runnable to be continually dispatched after a WebRTC or WebAudio session is over.
  • We run a runnable for the intersection observer feature a lot.  We tried to cut the frequency of this runnable once, but it doesn’t seem to have helped much.  This runnable still shows up quite high on the list, as #6.

I encourage people to look at the telemetry analysis to see if they can spot a runnable with a familiar name which appears too high on the list.  It’s very likely that there are other performance bugs lurking in our codebase which this tool can help uncover.

Now, please allow me to take a moment to acknowledge the hard work of everyone who helped make Firefox faster this past week.  I hope I’m not forgetting any names!


Robert O'Callahan: rr Trace Portability

Mozilla planet - Fri, 01/09/2017 - 05:39

We want to be able to record an rr trace on one machine but copy it to another machine for replay. For example, you might record a failing test on one machine and copy the trace to a developer's machine for debugging. Or, you might record a failure locally and upload the trace to some cloud service for analysis. In short: on rr master, this works!

It turned out there were only two big issues to solve. We needed a way to make traces fully self-contained, because for efficiency we don't always copy all needed files into the trace during recording. rr pack addressed that. rr pack also compacts the trace by eliminating duplicate copies of the same file. Switching to brotli also reduced trace size, as did using Cap'n Proto for trace data.

The other big issue was handling CPUID instructions. We needed a way to ensure that during replay CPUID instructions returned the same results as they did during recording — they generally won't if you switch machines. Modern Intel hardware supports "CPUID faulting", i.e. you can configure the CPU to trap every time a CPUID instruction occurs. Linux didn't expose this capability to user-space, so last year Kyle Huey did the hard work of adding a Linux system-call API to expose it: the ARCH_GET/SET_CPUID subfeature of arch_prctl. It works very much like the existing PR_GET/SET_TSC, which give control over the faulting of RDTSC/RDTSCP instructions. Getting the feature into the upstream kernel was a bit of an ordeal, but that's a story for another day. It finally shipped in the 4.12 kernel.

When CPUID faulting is available, rr recording stores the results of all CPUID instructions in the trace, and rr replay intercepts all CPUID instructions and takes their results from the trace. With this in place, we're able to move traces from one machine/distro/kernel to another and replay them successfully.

We also support situations where CPUID faulting is not available on the recording machine but is on the replay machine. At the start of recording we save all available CPUID data (there are only a relatively small number of possible CPUID "leaves"), and then rr replay traps CPUID instructions and emulates them using the stored data.

Caveat: the user is responsible for ensuring the destination machine supports all instructions and other CPU features used by the recorded program. At some point we could add an rr feature to mask the CPUID values reported during recording so you can limit the CPU features a recorded program uses. (We actually already do this internally so that applications running under rr believe that RTM transactional memory and RDRAND, which rr can't handle, are not available.)

CPUID faulting is supported on most modern Intel CPUs, at least on Ivy Bridge and its successor Core architectures. Kyle also added support to upstream Xen and KVM to virtualize it, and even emulate it regardless of whether the underlying hardware supports it. However, VM guests running on older Xen or KVM hypervisors, or on other hypervisors, probably can't use it. And as mentioned, you will need a Linux 4.12 kernel or later.


The Firefox Frontier: Keyboard Shortcuts: Command your QWERTY

Mozilla planet - Fri, 01/09/2017 - 04:57

At this point, even Grandma has found CTRL + S, V and P. But Friends, these are merely the tip of the iceberg. There’s a whole language of keyboard shortcuts … Read more

The post Keyboard Shortcuts: Command your QWERTY appeared first on The Firefox Frontier.


Air Mozilla: Bay Area Rust Meetup August 2017

Mozilla planet - Fri, 01/09/2017 - 04:00

Bay Area Rust Meetup August 2017 (https://www.meetup.com/Rust-Bay-Area/). Siddon Tang from PingCAP will be speaking about Futures and gRPC in Rust. Sean Leffler will be talking about Rust's Turing complete typesystem.


The Mozilla Blog: Statement on U.S. DACA Program

Mozilla planet - Fri, 01/09/2017 - 03:57

We believe that the young people who would benefit from the Deferred Action for Childhood Arrivals (DACA) deserve the opportunity to take their full and rightful place in the U.S. The possible changes to the DACA that were recently reported would remove all benefits and force people out of the U.S. – that is simply unacceptable.

Removing DREAMers from classrooms, universities, internships and workforces threatens to put the very innovation that fuels our technology sector at risk. Just as we said with previous Executive Orders on Immigration, the freedom for ideas and innovation to flow across borders is something we strongly believe in as a tech company. More importantly, it is something we know is necessary to fulfill our mission to protect and advance the internet as a global public resource that is open and accessible to all.

We can’t allow talent to be pushed out or forced into hiding. We also shouldn’t stand by and allow families to be torn apart. More importantly, as employers, industry leaders and Americans — we have a moral obligation to protect these children from ill-willed policies and practices. Our future depends on it.

We want DREAMers to continue contributing to this country’s future and we do not want people to live in fear.  We urge the Administration to keep the DACA program intact. At the same time, we urge leaders in government to enact a bipartisan permanent solution, one that will allow these bright minds to prosper in the country we know and love.

The post Statement on U.S. DACA Program appeared first on The Mozilla Blog.


Rabimba: Linux Foundation Open Networking Summit

Mozilla planet - Fri, 01/09/2017 - 02:16
The Open Networking Summit took place on April 3-6, where Enterprise, Cloud and Service Providers gathered in Santa Clara, California to share insights, highlight innovation and discuss the future of open source networking. I was invited to give a talk about Web Virtual Reality and A-Frame at it.

So, Open Networking Summit (ONS) actually consists of two events – there might be more, but I was involved with two. ONS is the big event itself. There is also the Symposium on SDN Research (SOSR). This is an academic conference that accepts papers. There were some pretty fantastic papers at the conference. My favorite one was a system called “NEAt: Network Error Auto-Correct”. The idea here is that the system keeps track of what’s going on with your network, notices problems and automatically corrects them. It was designed for an SDN setup where you have a controller that is responding to changes in the network and telling systems what to do.
The event was held at San Jose Convention center and was pretty packed up. Keynotes were sprawled across the first floor with a very big auditorium that encompassed the whole of the first floor. The individual talks were assigned different rooms on the two floors.
Poster sessions were held on the second floor near another hall where the accompanying talks with the poster were going on.
The talks were not recorded. I had roughly 35 people in my talk, which was a pretty perfect audience size without being too overwhelming.
A previous version of my talk is available here. It would be great to have some feedback on it, though the content has changed quite a bit since then.
I frankly received quite a lot of interest in the talk and questions about it. The questions mostly involved authoring tools for WebVR and how we can create scenes that interact with industrial hardware.
Something that urged me to work on some pet projects I will write about later.

What do you think about how networking and industry can merge with WebVR and VR in general? Let me know in the comments or on Twitter. I will soon be posting my take on it with a few live examples and demos.


Allen Wirfs-Brock: Some ECMAScript Explanations and Stories for Dave

Mozilla planet - Fri, 01/09/2017 - 00:45

Dave Winer recently blogged about his initial thoughts after dipping his toes into using some modern JavaScript features. He ends by suggesting that I might have some explanations and stories about the features he is using. I’ve given talks that cover some of this, and normally I might just respond via some terse tweets. But Dave believes that blog posts should be responded to by blog posts, so I’m taking a try at blogging back to him.

What To Call It?

The JavaScript language is defined by a specification maintained by the Ecma International standards organization. Because of trademark issues, dating back to 1996, the specification could not use the name JavaScript.  So they coined the name ECMAScript instead. Contrary to some myths, ECMAScript and JavaScript are not different languages. “ECMAScript” is simply the name used within the specification where it would really like to say “JavaScript”.

Standards organizations like to identify documents using numbers. The ECMAScript specification’s number is ECMA-262.  Each time an update to the specification is approved as “the standard” a new edition of ECMA-262 is released. Editions are sequentially numbered. Dave said “ES6 is the newest version of JavaScript”.  So, what is “ES6”? ES6 is colloquial shorthand for “ECMA-262, Edition 6”.  ES6 was published as a standard in 2015. The actual title of the ES6 specification is ECMAScript 2015 Language Specification and the preferred shorthand name is ECMAScript 2015 or just ES2015.

So, why the year-based designation? The 6th edition of ECMA-262 took a long time to develop, arguably 15 years. As ES6 was approaching publication, TC39 (the Technical Committee within Ecma International that develops the ECMAScript specifications) already knew that it wanted to change its process in a way that enabled yearly maintenance updates. That meant a new edition of ECMA-262 every year with a new edition number. After a few years we would be talking about ES6, ES7, ES8, ES9, ES10, ES11, etc. Those numbers quickly lose any context for people who aren’t deeply involved in the standards development process. Who would know whether the current standard is ES7, ES8, or ES9? Was some feature introduced in ES6 or ES7? TC39 couldn’t eliminate the actual edition numbers (standards organizations love their document numbers) but it could change the document title. We decided that TC39 would incorporate the year of release into the document’s title and encourage people to use the year when referring to a specific edition. So, the “newest version of JavaScript” is ECMA-262, Edition 8 and its title is ECMAScript 2017 Language Specification. Some people still refer to it as ES8, but the preferred shorthand name is ECMAScript 2017 or just ES2017.

But saying “ECMAScript” or mentioning specific ECMAScript editions is confusing to many people and probably is unnecessary for most situations.  The common name of the language really is JavaScript and unless you are talking about the actual specification document you probably don’t need to utter “ECMAScript”. But you may need to distinguish between old versions of JavaScript and what is implemented by newer, modern implementations.  The big change in the language and its specification occurred with  ES2015.  The subsequent editions make relatively small incremental extensions and corrections to what was standardized in 2015.  So, here is my recommendation.  Generally you should  just say “JavaScript” meaning the language as it is used in browsers, Node.js, and other environments.  If you need to specifically talk about JavaScript implementations that are based upon ECMAScript specifications published prior to ES2015 say “legacy JavaScript”. If you need to specifically talk about JavaScript that includes ES2015 (or later) features say “modern JavaScript”.

Can You Use It Yet?

Except for modules almost all of ES2015-ES2017 is implemented in the current versions of all the major evergreen browsers (Chrome, Firefox, Safari, Edge). Also in current versions of Node.js. If you need to write code that will run on non-evergreen browsers such as IE you can use Babel to pre-compile modern JavaScript code into legacy JavaScript code.

Module support exists in all of the evergreen browsers, but some of them still require setting a flag to use it.  Native ECMAScript module support will hopefully ship in Node.js in spring 2018. In the meantime @std/esm enables use of ECMAScript modules in current Node releases.

Block Scoped Declaration (let and const)

The main motivation for block scoped declarations was to eliminate the “closure in loop” bug hazard that many JavaScript programmers have encountered when they set event handlers within a loop. The problem is that var declarations look like they should be local to the loop body but in fact are hoisted to the top of the current function, and hence each event handler defined in the loop uses the last value assigned to such variables.

Replacing var with let gives each iteration of the loop a distinct variable binding.  So each event handler captures different variables with the values that were current when the event handler was installed:
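The code samples from the original post aren’t preserved on this page, but here is a minimal sketch of the hazard and the fix (buttons is an illustrative array of DOM elements):

// With var there is a single binding, hoisted to the top of the
// enclosing function, so every handler logs the final value of i.
for (var i = 0; i < buttons.length; i++) {
  buttons[i].onclick = function () { console.log(i); }; // always buttons.length
}

// With let each iteration gets a fresh binding, so every handler
// captures the value that was current when it was installed.
for (let i = 0; i < buttons.length; i++) {
  buttons[i].onclick = function () { console.log(i); }; // 0, 1, 2, ...
}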

The hardest part about adding block scoped declarations to ECMAScript was coming up with a rational set of rules for how the declarations should interact with the already existing var declaration form. We could not change the semantics of var without breaking backwards compatibility, which is something we try to never do. But we didn’t want to introduce new WTF surprises in programs that use both var and let. Here are the basic rules we eventually arrived at:
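The original list of rules didn’t survive on this page, but their flavor can be sketched with a few examples (an illustration of the ES2015 semantics, not the original list):

function f() {
  var x = 1;
  let x = 2;    // SyntaxError: x was already declared with var in this scope
}

function g() {
  let y = 1;
  {
    var y = 2;  // SyntaxError: the hoisted var conflicts with the outer let
  }
}

function h() {
  let z = 1;
  {
    let z = 2;  // OK: a new block-scoped binding that shadows the outer z
  }
}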


Most browsers, except for IE, had implemented const declarations (but without block scoping) starting in the early naughts. Firefox implemented block scoped let declarations (but not with exactly the same semantics as ES2015) in 2006.  By the time TC39 started seriously working on what ultimately became ES2015, the keywords const and let had become so ingrained in our minds that we didn’t really consider any other alternatives. I regret that.  In retrospect, I think we should have used let in place of const for declaring immutable variable bindings because that is the most common use case. In fact, I’m pretty sure that many developers use let instead of const for variables they don’t intend to change, simply because let has fewer characters to type. If we had used let in place of const then perhaps var would have been adequate for the relatively rare cases where a mutable variable binding is needed.  A language with only let and var would have been simpler than what we ended up with using const, let, and var.

Arrow Functions

One of the primary motivations for arrow functions was to eliminate another JavaScript bug hazard: the “wrong this” problem that occurs when you capture a function expression (for example, as an event handler) but forget that this used inside the function expression will not be the same value as this in the context where you created the function expression.  Conciseness was a consideration in the design of arrow functions, but fixing the “wrong this” problem was the real driver.
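A minimal sketch of the problem and the arrow-function fix (Counter and button are illustrative names):

function Counter(button) {
  this.count = 0;
  button.addEventListener('click', function () {
    this.count++; // bug: inside this handler, `this` is the button element
  });
}

function FixedCounter(button) {
  this.count = 0;
  button.addEventListener('click', () => {
    this.count++; // an arrow function keeps the `this` of FixedCounter
  });
}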

I’ve heard several JS programmers comment that at first they didn’t like arrow functions but that the syntax grew on them over time. Your mileage may vary. Here are a couple of good articles that address arrow function reluctance.

Modules

Actually, ES modules weren’t inspired by Node modules. But a lot of work went into making them feel familiar to people who were used to Node modules. In fact, ES modules are semantically more similar to the Pascal modules that Dave remembers than they are to Node modules.  The big difference is that in the ES design (and Pascal modules) the interfaces between modules are statically defined, while in the Node modules design module interfaces are dynamically defined. With static module interfaces the inter-dependencies between a set of modules are precisely defined by the source code prior to executing any code.  With dynamic modules, the module interfaces cannot be fully understood without actually executing the code of the modules.  Or stated another way, ES module interfaces are declaratively defined while Node module interfaces are imperatively defined. Static module systems better support the creation of ahead-of-time tools such as accurate module dependency linters or module linkers. Such tools for dynamic module interfaces usually depend upon heuristics that analyze modules as if they had static interfaces.  Such analysis can be wrong if the actual dynamic interface construction does things that the heuristics didn’t account for.
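A sketch of the contrast (file names are illustrative):

// ES module: the interface is static, fully known before any code runs.
// math.mjs
export function square(x) { return x * x; }

// main.mjs
import { square } from './math.mjs';

// Node (CommonJS) module: the interface is built imperatively, by running code.
// math.js
module.exports.square = function (x) { return x * x; };
if (process.env.EXTRAS) { // nothing stops a module from computing its exports
  module.exports.cube = function (x) { return x * x * x; };
}

// main.js
const { square } = require('./math.js');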

The work on the ES module design actually started before the first release of Node. There were early proposals for dynamic module interfaces that are more like what Node adopted.  But TC39 made an early decision that declarative static module interfaces were a better design, for the long term. There has been much controversy about this decision. Unfortunately, it has created issues for Node which have been difficult for them to resolve. If TC39 had anticipated the rapid adoption of Node and the long time it would take to finish “ES6” we might have taken the dynamic module interface path. I’m glad we didn’t and I think it is becoming clear that we made the right choice.

Promises

Strictly speaking, the legacy JavaScript language didn’t do async at all.  It was host environments such as  browsers and Node that defined the APIs that introduced async programming into JavaScript.

ES2015 needed to include promises because they were being rapidly adopted by the developer community (including by new browser APIs) and we wanted to avoid the problem of competing incompatible promise libraries or of a browser defined promise API that didn’t take other host environments into consideration.

The real benefit of ES2015 promises is that they provided a foundation for better async abstractions that do bury more of the BS within the runtime.  Async functions, introduced in ES2017, are the “better way” to do async.  In the pipeline for the near future is Async Iteration, which further simplifies a common async use case.
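For example, here is the same operation written against bare ES2015 promises and then as an ES2017 async function (fetchUser is an illustrative promise-returning API):

// ES2015: an explicit promise chain
function getUserName(id) {
  return fetchUser(id)
    .then(function (user) { return user.name; })
    .catch(function () { return 'unknown'; });
}

// ES2017: the same logic, but it reads like synchronous code
async function getUserNameAsync(id) {
  try {
    const user = await fetchUser(id);
    return user.name;
  } catch (e) {
    return 'unknown';
  }
}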


Air Mozilla: Intern Presentations: Round 7: Thursday, August 31st

Mozilla planet - Thu, 31/08/2017 - 22:00

Thursday, August 31st Intern Presentations: 6 presenters. Time: 1:00PM - 2:30PM (PDT); each presenter will start every 15 minutes. Location: SF.


Mozilla VR Blog: glTF Exporter in three.js and A-Frame

Mozilla planet - Thu, 31/08/2017 - 21:27
A brief introduction

When creating WebVR experiences, developers usually face a common problem: it’s hard to find assets other than just basic primitives. There are several 3D packages to generate custom objects and scenes, and although they give you the option to export to a common file format like Collada or OBJ, each exporter saves the information in a slightly different way. Because of these differences, when we try to import these files in the 3D engine that we are using, we often find that the result we see on the screen is quite different from what we created initially.


The Khronos Group created the glTF 3D file format to have an open, application agnostic and well defined structure that can be imported and exported in a consistent way. The resulting file is smaller than most of the available alternatives, and it’s also optimized for real time applications to be fast to read, since we don’t need to consolidate the data: once we’ve read the buffers we can push them directly to the GPU.
The main features that glTF provides and a 3D file format comparison can be found in this article by Juan Linietsky.

A few months ago feiss wrote an introduction to the glTF workflow he used to create the assets for our A-Saturday-Night demo.
Many things have improved since then. The glTF Blender exporter is already stable and has glTF 2.0 support. The same goes for three.js and A-Frame: both have much better support for 2.0.
Now most of the pain he experienced converting from Blender to Collada and then to glTF is gone, and we can export directly to glTF from Blender.

glTF is here to stay, and its support has grown widely in the last months, being available in most of the 3D web engines and applications out there like three.js, babylonjs, cesium, sketchfab, blocks...
The following video from the first glTF BOF (held at Siggraph this year) illustrates how the community has embraced the format:


glTF Exporter on the web

One of the most requested features for A-Painter has been the ability to export to some standard format so people could reuse the drawing as an asset or placeholder in 3D content creation software (3ds Max, Maya,...) or engines like Unity or Unreal.
I started playing with the idea of exporting to OBJ, but a lot of changes were required on the original three.js exporter because of the lack of full triangle_strip support, so I left it on standby.

#A-painter triangleStrip lines exporter to OBJ, #wip :) /cc @utopiah @feiss #aframevr pic.twitter.com/skxbcJtoXy

— Fernando Serrano (@fernandojsg) January 16, 2017

After seeing all the industry support and adoption of glTF at Siggraph 2017 I decided to give it a second try.

The work was much easier than expected thanks to the nice three.js / A-Frame loaders that Don McCurdy and Takahiro have been driving. I thought it would be great to export content created directly on the web to glTF, and it would serve as a great excuse to go deep into the spec and understand it better.

glTF Exporter in three.js

Thanks to the great glTF spec documentation and examples, I got a glTF exporter working pretty fast.

The first version of the exporter has already landed in r87, but it is still at an early stage and under development. There’s an open issue if you want to get involved and follow the conversations about the missing features: https://github.com/mrdoob/three.js/issues/11951

API

The API follows the same structure as the existing exporters available in three.js:

  • Create an instance of THREE.GLTFExporter.
  • Call parse with the objects or scene that you want to export.
  • Get the result in a callback and use it as you want.
var gltfExporter = new THREE.GLTFExporter();
gltfExporter.parse( input, function( result ) {
  // Serialize the glTF output and offer it as a download
  var output = JSON.stringify( result, null, 2 );
  console.log( output );
  downloadJSON( output, 'scene.gltf' );
}, options );

More detailed and updated information for the API can be found on the three.js docs
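As an illustration, an options object might look like this (option names may change while the exporter is under development, so treat these as examples and check the docs):

var options = {
  trs: false,        // export position/rotation/scale instead of a single matrix
  onlyVisible: true  // skip objects whose visible flag is false
};
gltfExporter.parse( scene, function ( result ) { /* ... */ }, options );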

Together with the exporter I created a simple example in three.js that tries to combine the different types of primitives, helpers, rendering modes and materials, exposing all the options the exporter has, so we could use it as a testing scene throughout development.


Integration in three.js editor

The integration with the three.js editor was pretty straightforward, and I think it’s one of the most useful features: since the editor supports importing plenty of 3D formats, it can be used as an advanced converter from these formats to glTF, allowing the user to delete unneeded data, tweak parameters, modify materials, etc. before exporting.



glTF Exporter on A-Frame

Please note that since three.js r87 is required to use the GLTFExporter, currently only the master branch of A-Frame is supported; the first compatible stable version will be 0.7.0, to be released later this month.
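Until 0.7.0 ships, that means pointing your page at a master build of A-Frame; a sketch, assuming the unofficial master build URL the A-Frame docs suggested at the time (verify against the current docs):

<script src="https://rawgit.com/aframevr/aframe/master/dist/aframe-master.min.js"></script>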

Integration with A-Frame inspector

After the successful integration with the three.js editor, the next step was to integrate the same functionality into the A-Frame inspector.
I’ve added two options to export content to glTF:

  • Clicking on the export icon on the scenegraph will export the whole scene to glTF


  • Clicking on the entity’s attributes panel will export the selected entity to glTF


Exporter component in A-Frame

Last but not least, I’ve created an A-Frame component so users could export scenes and entities programmatically.

The API is quite simple: just call the export function from the gltf-exporter system:

sceneEl.systems['gltf-exporter'].export(input, options);

The function accepts several different input values: none (export the whole scene), a single entity, an array of entities, or a NodeList (e.g. the result of a querySelectorAll).

The options accepted are the same as the original three.js function.
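For illustration, a few hedged calls showing the accepted input types (the selectors here are made up for the example):

var exporter = sceneEl.systems['gltf-exporter'];
exporter.export();                                       // no input: export the whole scene
exporter.export(document.querySelector('#box'));         // a single entity
exporter.export(document.querySelectorAll('.drawing'));  // a NodeList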

A-Painter exporter

The whole story wouldn’t be complete if the initial request that got me into glTF weren’t satisfied :) After all the previous work described above, it was trivial to add support for exporting to glTF in A-Painter.

  • Include the aframe-gltf-exporter-component script:
<script src="https://unpkg.com/aframe-gltf-exporter-component/dist/aframe-gltf-exporter-component.min.js"></script>
  • Attach the component to a-scene:
<a-scene gltf-exporter/>
  • And finally register a shortcut (g) to save the current drawing to glTF:
if (event.keyCode === 71) { // Export to glTF (g)
  var drawing = document.querySelector('.a-drawing');
  self.sceneEl.systems['gltf-exporter'].export(drawing);
}



Extra: Exporter bookmarklet

While developing the exporter I found it very useful to create a bookmarklet that injects the exporter code into any A-Frame or three.js page. This way I could export the whole scene just by clicking on it.
If AFRAME is defined, it will export AFRAME.scenes[0], as that is the default loaded scene. If not, it will look for the global variable scene, which is the one most commonly used in three.js examples.
It is not bulletproof, so you may need to make some changes if it doesn’t work on your app, probably by looking for something other than scene.

To use it, create a new bookmark in your browser and paste the following code into the URL input box:

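A minimal sketch of such a bookmarklet, based on the behavior described above (the exporter script URL is an assumption, and your page may expose the scene differently):

javascript:(function () {
  // Prefer the default A-Frame scene; otherwise fall back to a global three.js `scene`.
  var target = window.AFRAME ? AFRAME.scenes[0].object3D : window.scene;
  if (!target) { alert('No scene found'); return; }
  // Inject the exporter script, then parse the scene and download the result.
  var script = document.createElement('script');
  script.src = 'https://threejs.org/examples/js/exporters/GLTFExporter.js'; // assumed location
  script.onload = function () {
    new THREE.GLTFExporter().parse(target, function (result) {
      var blob = new Blob([JSON.stringify(result)], { type: 'application/json' });
      var link = document.createElement('a');
      link.href = URL.createObjectURL(blob);
      link.download = 'scene.gltf';
      link.click();
    });
  };
  document.body.appendChild(script);
})();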

What’s next?

At Mozilla we are committed to helping improve the glTF specification and its ecosystem.
glTF will keep evolving, and many interesting features are being proposed in the roadmap discussion. If you have any suggestions, don't hesitate to comment there, since all proposals are being discussed and taken into account.

As I stated before, the glTF exporter is still at an early stage, but it is being actively developed, so please feel free to jump into the discussion to help prioritize new features.

Finally: wouldn't it be great to see more content creation tools on the web with glTF support, so you don't depend on a desktop application to generate your assets?

Categorieën: Mozilla-nl planet

David Bryant: Mozilla Developer Roadshow: Asia Chapter

Mozilla planet - to, 31/08/2017 - 21:18

Mozilla Developer Roadshow events are fun, informative sessions for people who build the Web. Over the past eight months we’ve held thirty-six events all over the world, spreading the word about the latest in Mozilla and Firefox technologies. Now we’re heading to Asia with the goals of finding local experts and connecting the community. Some of our most successful moments have been when we were able to bring local event organizers together to forge lasting relationships. Our first Asia event is in Singapore at the PayPal headquarters on September 19. (Check here for a full list of the cities.)

I’m excited to be coming along and taking part in some of those events, so I wanted to know what to anticipate and to get a little perspective from someone immersed in the local developer community. To do that I chatted with Hui Jing Chen, a front-end engineer based in Singapore who speaks globally on CSS Grid.

Q: What would you like to have come out of the event in Singapore? Should we look forward to more opportunities for collaboration between Mozilla and developers in Singapore and Asia?

Hui Jing (HJ): I definitely want to have more collaboration between Mozilla and developers in this region (Southeast Asia). I am aware that a lot of the work on web technologies comes out of Europe or North America, and there are lots of factors at play here, including the fact that digital computing was kickstarted in those regions. But it is the WORLD wide web, and I think it is important that developers from other regions contribute to the development of the web as well. For example, WebRTC expert Dr. Alex Gouaillard runs CoSMo Software Consultancy out of Singapore, and they are key contributors to WebRTC’s development. Understandably, it will take time for our region to catch up, but I hope events like this encourage developers in the region to not only be users of web technologies, but shapers of them as well.

David (DB): And independent of where the technology might come from, clearly the use of the web on a day-to-day basis is as much if not more so driven by what people are doing in Asia and the information (or experiences) they need. We know from our steady stream of developer relations efforts and our Tech Speakers activities that the more engaged we are with developers in this region the richer the web will be and the better sense we’ll have of where the web needs to go. So yes, more opportunities for collaboration would be marvelous!

Q: Meetups have been great regional allies for our Developer Roadshows — What are the unique cultural aspects of the Singapore/Malaysia MeetUp Communities?

HJ: My web development career has taken place completely in Singapore, so I can only speak about the Singapore meetup community, but I find that there is less “networking” at the meetups, in that you’ll see pockets of people chatting with each other, but a large number of people show up to listen to the talk and then leave immediately after. Maybe this happens universally; I can’t say for sure that this is unique, though.

DB: That’s something we’ve heard and seen elsewhere too. In part that’s why we like the smaller, more frequent, more community-oriented approach we’ve taken for our Developer Roadshows as opposed to more traditional conference-style events. Our hope is that keeping it more intimate, hosting jointly with well-established local partners, and engaging with an existing local community will give people a more comfortable way of considering ongoing collaboration opportunities yet still have an informative core topic that brings them together in the first place.

Q: Tell me a little bit about some challenges working with and participating with the community.

HJ: I’m the co-organizer of Talk.CSS, which is Singapore’s CSS meetup, and in general, the challenge is in finding new speakers. The community in Singapore is really great, so finding venues is never the problem, it’s usually getting people to speak that is much trickier. I sometimes joke that I’m amazed I still have friends left because I’ve almost strong-armed all of them to speak at my meetup at some point in time, and they’re all too polite to say no. This could be an Asian thing, but people here are a bit more reserved, and if they’ve done something cool, they’re less compelled to stand up in front of everyone and share what they did.

DB: Hmmm, perhaps that’s something we can help you with. (And I mean the finding speakers part, not the still having friends part. :-)

Q: Every region has its particular special interests and strengths. What are some things that the Singapore and possibly Malaysian community does exceptionally well?

HJ: Singapore has an exceptionally strong tech community (at least from what I’ve heard from my friends outside of Singapore). This can be attributed to the efforts of key people (who we will hopefully meet in Singapore) who are super active when it comes to organizing events, helping out newbies, encouraging developers to start their own meetups, and generally making the tech community in Singapore really vibrant.

For example, webuild.sg is the go-to resource for all the tech meetups in Singapore, which is especially helpful if you want to start your own. They also have their own podcast, where they interview developers on their respective areas of expertise. Engineers.sg was originally a one-man operation that recorded almost every tech meetup in Singapore; it has since expanded into an entirely volunteer-run team.

DB: I wasn’t familiar with webuild.sg, but now that you’ve pointed it out to me I keep finding valuable information on the site, for example on organizing events and contributing to open source. So it’s not only a vital resource for the community in Singapore but valuable elsewhere too.

Q: What expectations should we have as a team visiting from the US/Europe?

HJ: Locals are generally more reserved; usually the people who ask questions or speak up more are foreigners from Western countries. There is a sizeable population of developers from all over the world here in Singapore, so meetup attendance is very diverse. It seems that most people are more comfortable approaching speakers individually after the talk rather than during an open Q&A session.

DB: Individual conversations afterward are something I know our presenters and Roadshow team like very much too. I think our format for the Developer Roadshow works well for that so am looking forward to meeting people and talking to them one-on-one.

Q: Diversity and inclusion are very much highlighted in our tech communities — is this an issue of discussion here in Singapore?

HJ: These issues are not as hotly discussed here as in America, I think, largely because Singapore has always been a multicultural society. I’m not saying racism and misogyny do not exist here, but I dare say very few people are overtly so. I think the gender ratio in tech is male-dominated all over the world, including here.

DB: Certainly this is an issue that varies by region, though we’re committed to expressing our support for diversity and inclusion across all developer communities. That means, for example, having a clear code of conduct for events to promote the largest number of participants with the most varied backgrounds. And we love having these Developer Roadshow events play a part in that, having heard attendees express their delight when they meet other folks from similar backgrounds or come to hear presenters with diverse backgrounds. I know from talking to other people about their company’s developer outreach efforts that we’re going to see even more progress in this space going forward.

Our Developer Roadshow events have been enjoyable and very popular, and I’m looking forward to the upcoming sessions in Asia. We’ll have more later in the year in other locations around the world too, and by the time 2017 is over we will have held about fifty-five sessions — more than one a week. Hopefully one has been near enough for you to take part, and as we’re keen to keep the program going, one will be near you again soon. Let us know if not, though, and we’ll see what we can do!

Mozilla Developer Roadshow: Asia Chapter was originally published in Mozilla Tech on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Aug. 31, 2017

Mozilla planet - to, 31/08/2017 - 18:00

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and to invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet
