Mozilla Nederland: the Dutch Mozilla community

Daniel Pocock: Spyware Dolls and Intel's vPro

Mozilla planet - ma, 04/09/2017 - 08:09

Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May, when Intel confirmed that a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking
  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping into the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is changing, either in what vendors are offering or in terms of how companies and governments outside the US buy technology?
Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and click here to reply on the thread.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: September’s Featured Extensions

Mozilla planet - za, 02/09/2017 - 01:00

Pick of the Month: Search Image

by Didier Lafleur
Highlight any text and perform a Google image search with a couple of clicks.

“I’ve been looking for something like this for years, to the point I wrote my own script. This WORKS for me.”

Featured: Cookie AutoDelete

by Kenny Do
Automatically delete stagnant cookies from your closed tabs. Offers whitelist capability, as well.

“Very good replacement for Self-Destructing Cookies.”

Featured: Tomato Clock

by Samuel Jun
A super simple but effective time management tool. Use Tomato Clock to break your work into meaningful 25-minute “tomato” intervals.

“A nice way to track my productivity for the day.”

Featured: Country Flags & IP Whois

by Andy Portmen
This extension will display the country flag of a website’s server location. Simple, informative.

“It does what it should.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post September’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Sean McArthur: Bye Mozilla, Hello Buoyant

Mozilla planet - vr, 01/09/2017 - 21:45
Bye, Mozilla

Today is my last day as a Mozilla employee.

It hurts to say that. I love Mozilla.1

I loved waking up to work knowing that I was working for you. For everyone’s internet. Truly, even if you feel Firefox is inferior to your preferred browser, you must admit that an internet ruled by profit-driven businesses is no one’s dream. What Mozilla does, by working to provide an alternative browser choice, is allow a non-profit organization to have a voice. Without Firefox, the group of people that make up Mozilla would just be yelling at the closed doors of “tech giants”.

I got to work on some amazing technology, and with superb humans. The concept of Persona is exactly the kind of thing that Mozilla’s voice can push for: a way to fix passwords, while curbing identity providers from tracking your every action. Unfortunately, we couldn’t get enough adoption before we realized that Firefox needed more help. Firefox was hurting, and without Firefox, well, our voice doesn’t mean much. I dream that we can attack that problem again someday.

Taking our Identity team off Persona, we boosted Firefox Sync from a nerd toy into something that all Firefox users could benefit from. This is actually quite important, something that can oftentimes be forgotten even inside the organization. With Sync benefiting from Firefox Accounts, users gain a whole lot more value from installing Firefox on multiple devices. Firefox’s Awesomebar is still far better at finding things than Chrome is, and in my own experience, it’s only gotten better since it can remember links I open on my phone or tablet.

The superb humans really are superb. Yes, they’re intelligent. But that’s the boring part. Many people are. They stand out, instead, because of their empathy, their optimism, their voice, their loyalty. My team members are loyal to each other, because each of us is loyal to the mission: a free, open internet where the user is in command. We all wanted each other to succeed, because that always meant wins for you, the user.

And yet, it’s time for me to start the next step in my journey. I still wish Mozilla the very best. You should definitely be using Firefox. And I hope I’ll bump into my friends plenty of times more in the future.

Hello, Buoyant

Starting Monday, I’ll be working for Buoyant.

Over the past few years, I’ve been learning and writing Rust. I’ve really dug into the community2, and been absolutely loving working on tools for HTTP and servers and clients and whatnot.

Buoyant is working on tools that help big websites scale. One such tool is linkerd, described as a service mesh. This tool helps websites that are receiving godzillions of requests, so it needs to be fast and use little memory. Also, it’s 2017, and releasing a new tool that has CVEs about memory unsafety every couple of months isn’t really acceptable when we have alternatives. So, Rust!

It turns out, we’re a great fit! I’ll be continuing to work on HTTP pieces in Rust. In fact, this means I’ll now be working in Rust full-time, so hopefully pieces should be built faster. I’ll be working in open source still, so hey, perhaps you will still benefit!

This is a really sad day for me, but I’m also super excited for next week!3

  1. I’ve been at Mozilla for over 6 years! It’s like I’m leaving part of my family, part of how I identify myself in the world. Not many places really grip you personally like Mozilla does. 

  2. I really tried to get into the nodejs community a few years ago, but eventually ran into enough cases of elitism that I gave up. Thankfully, the Rust community is fantastic by any measure. 

  3. Worst. Roller coaster. Ever. 

Categorieën: Mozilla-nl planet

Ehsan Akhgari: Quantum Flow Engineering Newsletter #22

Mozilla planet - vr, 01/09/2017 - 07:59

With around three weeks left in the development cycle for Firefox 57, everyone seems to be busy getting the last fixes in to shape up this long-awaited release.  On the Quantum Flow project, we have kept up with the triage of the incoming bug reports that are tagged as [qf], and as we’re getting closer to the beta uplift date, the realistic opportunity for fixing bugs is getting narrower, and as such the bar for prioritizing incoming bug reports as [qf:p1] keeps getting higher.  This matches with the overall shift in focus in the past few weeks towards getting all the ongoing work that is targeting Firefox 57 under control to make sure we manage to do as much of what we have planned to do for this release as possible.

This past week we made more progress on optimizing the performance of Firefox for the Speedometer V2 benchmark.  Besides many of the usual optimizations, which you will read about in the acknowledgement section of the newsletter, one noteworthy item was David Major’s investigation for adding this benchmark to the set of pages that we load to train the PGO profile we use on Windows builds.  This allowed the MSVC code generator to generate better optimized code using the profile information and bought us a few benchmark score points.  Of course, earlier similar attempts hadn’t really gained us better performance, and it’s unclear whether this change will stick or get backed out due to PGO specific crashes or whatnot, but in the mean time we’re not stopping landing other improvements to Firefox for this benchmark either!  At the time of this writing, the Firefox Health Dashboard puts our benchmark score on Nightly within a 4.07% difference compared to Chrome.

Another item worthy of mention related to Speedometer is that recently Speedometer tests with Stylo were enabled on AWFY.  As can be seen on the reference hardware score page, Stylo builds are now a bit faster than normal Gecko when running Speedometer.  This has been achieved by the hard work of many people on the Stylo team and I’d like to take a moment to thank them, and especially call out Bobby Holley who helped make sure that we have a great performance story here.

In other performance related news, this past week the first implementation of our cooperative preemptive scheduling of web page JavaScript, more commonly known as Quantum DOM, landed.  The design document describes some of the background information which may be helpful if you need to understand the details of what the new world looks like.  For now, this feature is disabled by default while the ongoing work to iron out the remaining issues continues.

The Quantum DOM project has been a massive overhaul of our codebase.  A huge part of it has been the “labeling” project that Bevis Tseng has been tirelessly leading for many months now.  The basic idea behind this part of the project is to give each runnable a name and indicate which tab or document the runnable is associated with (I’m simplifying a bit, please see the wiki page for more details.)  Bill McCloskey had a great suggestion about some performance lessons that we have learned through this project for the performance story section of this newsletter, which was to highlight how this project ended up uncovering some unexpected performance issues in Firefox!

Bevis has some telemetry analysis which measures the number of runnables of a certain type (to view the interesting part, please scroll down to the “full runnable list” section).  This analysis has been used to prioritize which runnables need to be worked on next for labeling purposes.  But as this list shows the relative frequencies of runnables, we’ve ended up finding several surprises in where some runnables are showing up on this list, which have uncovered performance issues which would otherwise be very difficult to detect and diagnose.  Here are a few examples (thanks to Bill for enumerating them!):

  • We used to send the DidComposite notification to every tab, regardless of whether it was in the foreground or background.  We tried to fix this once, but that fix actually only addressed a related issue involving multiple windows.  The real problem finally got fixed later.
  • We used to have a “startup refresh driver” which was only supposed to run for a few milliseconds during startup.  However, it was showing up as #33 on the list of runnables.  We found out that it was never being disabled after it was started, so if we ever started running the startup refresh driver, it would run indefinitely in that browsing session and get to the top of the list.  Unfortunately, while this runnable disappeared for a while after that bug was fixed, it is now back and we’re not sure why.
  • We found out that MediaStreamGraphStableStateRunnable is #20 on this list, which was surprising as this runnable is only supposed to be used for WebRTC and WebAudio, neither being extremely popular features on the Web.  Randell Jesup found out that there is a bug causing the runnable to be continually dispatched after a WebRTC or WebAudio session is over.
  • We run a runnable for the intersection observer feature a lot.  We tried to cut the frequency of this runnable once, but it doesn’t seem to have helped much.  This runnable still shows up quite high on the list, as #6.

I encourage people to look at the telemetry analysis to see if they can spot a runnable with a familiar name which appears too high on the list.  It’s very likely that there are other performance bugs lurking in our codebase which this tool can help uncover.

Now, please allow me to take a moment to acknowledge the hard work of everyone who helped make Firefox faster this past week.  I hope I’m not forgetting any names!

Categorieën: Mozilla-nl planet

Robert O'Callahan: rr Trace Portability

Mozilla planet - vr, 01/09/2017 - 05:39

We want to be able to record an rr trace on one machine but copy it to another machine for replay. For example, you might record a failing test on one machine and copy the trace to a developer's machine for debugging. Or, you might record a failure locally and upload the trace to some cloud service for analysis. In short: on rr master, this works!

It turned out there were only two big issues to solve. We needed a way to make traces fully self-contained, because for efficiency we don't always copy all needed files into the trace during recording. rr pack addressed that. rr pack also compacts the trace by eliminating duplicate copies of the same file. Switching to brotli also reduced trace size, as did using Cap'n Proto for trace data.

The other big issue was handling CPUID instructions. We needed a way to ensure that during replay CPUID instructions returned the same results as they did during recording — they generally won't if you switch machines. Modern Intel hardware supports "CPUID faulting", i.e. you can configure the CPU to trap every time a CPUID instruction occurs. Linux didn't expose this capability to user-space, so last year Kyle Huey did the hard work of adding a Linux system-call API to expose it: the ARCH_GET/SET_CPUID subfeature of arch_prctl. It works very much like the existing PR_GET/SET_TSC, which give control over the faulting of RDTSC/RDTSCP instructions. Getting the feature into the upstream kernel was a bit of an ordeal, but that's a story for another day. It finally shipped in the 4.12 kernel.

When CPUID faulting is available, rr recording stores the results of all CPUID instructions in the trace, and rr replay intercepts all CPUID instructions and takes their results from the trace. With this in place, we're able to move traces from one machine/distro/kernel to another and replay them successfully.

We also support situations where CPUID faulting is not available on the recording machine but is on the replay machine. At the start of recording we save all available CPUID data (there are only a relatively small number of possible CPUID "leaves"), and then rr replay traps CPUID instructions and emulates them using the stored data.

Caveat: the user is responsible for ensuring the destination machine supports all instructions and other CPU features used by the recorded program. At some point we could add an rr feature to mask the CPUID values reported during recording so you can limit the CPU features a recorded program uses. (We actually already do this internally so that applications running under rr believe that RTM transactional memory and RDRAND, which rr can't handle, are not available.)

CPUID faulting is supported on most modern Intel CPUs, at least on Ivy Bridge and its successor Core architectures. Kyle also added support to upstream Xen and KVM to virtualize it, and even emulate it regardless of whether the underlying hardware supports it. However, VM guests running on older Xen or KVM hypervisors, or on other hypervisors, probably can't use it. And as mentioned, you will need a Linux 4.12 kernel or later.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Keyboard Shortcuts: Command your QWERTY

Mozilla planet - vr, 01/09/2017 - 04:57

At this point, even Grandma has found CTRL + S, V and P. But Friends, these are merely the tip of the iceberg. There’s a whole language of keyboard shortcuts … Read more

The post Keyboard Shortcuts: Command your QWERTY appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Air Mozilla: Bay Area Rust Meetup August 2017

Mozilla planet - vr, 01/09/2017 - 04:00

Bay Area Rust Meetup August 2017 (https://www.meetup.com/Rust-Bay-Area/). Siddon Tang from PingCAP will be speaking about Futures and gRPC in Rust. Sean Leffler will be talking about Rust's Turing-complete type system.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Statement on U.S. DACA Program

Mozilla planet - vr, 01/09/2017 - 03:57

We believe that the young people who would benefit from the Deferred Action for Childhood Arrivals (DACA) program deserve the opportunity to take their full and rightful place in the U.S. The possible changes to DACA that were recently reported would remove all benefits and force people out of the U.S. – that is simply unacceptable.

Removing DREAMers from classrooms, universities, internships and workforces threatens to put the very innovation that fuels our technology sector at risk. Just as we said with previous Executive Orders on Immigration, the freedom for ideas and innovation to flow across borders is something we strongly believe in as a tech company. More importantly it is something we know is necessary to fulfill our mission to protect and advance the internet as a global public resource that is open and accessible to all.

We can’t allow talent to be pushed out or forced into hiding. We also shouldn’t stand by and allow families to be torn apart. More importantly, as employers, industry leaders and Americans — we have a moral obligation to protect these children from ill-willed policies and practices. Our future depends on it.

We want DREAMers to continue contributing to this country’s future and we do not want people to live in fear.  We urge the Administration to keep the DACA program intact. At the same time, we urge leaders in government to enact a bipartisan permanent solution, one that will allow these bright minds to prosper in the country we know and love.

The post Statement on U.S. DACA Program appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Rabimba: Linux Foundation Open Networking Summit

Mozilla planet - vr, 01/09/2017 - 02:16
The Open Networking Summit took place on April 3-6, where Enterprise, Cloud and Service Providers gathered in Santa Clara, California to share insights, highlight innovation and discuss the future of open source networking. I was invited to give a talk about Web Virtual Reality and A-Frame at it. So, Open Networking Summit (ONS) actually consists of two events – there might be more, but I was involved with two. ONS is the big event itself. There is also the Symposium on SDN Research (SOSR). This is an academic conference that accepts papers. There were some pretty fantastic papers at the conference. My favorite one was a system called “NEAt: Network Error Auto-Correct”. The idea here is that the system keeps track of what’s going on with your network, notices problems and automatically corrects them. It was designed for an SDN setup where you have a controller that is responding to changes in the network and telling systems what to do.
The event was held at the San Jose Convention Center and was pretty packed. Keynotes were held in a very big auditorium that encompassed the whole of the first floor. The individual talks were assigned different rooms on the two floors.
Poster sessions were held on the second floor near another hall where the accompanying talks with the poster were going on.
The talks were not recorded. I had roughly 35 people in my talk, which was a pretty perfect audience size to have without being too overwhelmed.
A previous version of my talk is available here. It would be great to have some feedback on it, though the content has changed quite a bit since then.
I frankly received quite a lot of interest in the talk and questions about it. The questions mostly involved authoring tools for WebVR and how we can create scenes that interact with industrial hardware.
All of this urged me to work on some pet projects, which I will write about later.

What do you think about how networking and industry can merge with WebVR and VR in general? Let me know in the comments or on Twitter. I will soon be posting my take on it with a few live examples and demos.

Categorieën: Mozilla-nl planet

Allen Wirfs-Brock: Some ECMAScript Explanations and Stories for Dave

Mozilla planet - vr, 01/09/2017 - 00:45

Dave Winer recently blogged about his initial thoughts after dipping his toes into using some modern JavaScript features. He ends by suggesting that I might have some explanations and stories about the features he is using. I’ve given talks that cover some of this, and normally I might just respond via some terse tweets. But Dave believes that blog posts should be responded to by blog posts, so I’m taking a try at blogging back to him.

What To Call It?

The JavaScript language is defined by a specification maintained by the Ecma International standards organization. Because of trademark issues, dating back to 1996, the specification could not use the name JavaScript.  So they coined the name ECMAScript instead. Contrary to some myths, ECMAScript and JavaScript are not different languages. “ECMAScript” is simply the name used within the specification where it would really like to say “JavaScript”.

Standards organizations like to identify documents using numbers. The ECMAScript specification’s number is ECMA-262.  Each time an update to the specification is approved as “the standard” a new edition of ECMA-262 is released. Editions are sequentially numbered. Dave said “ES6 is the newest version of JavaScript”.  So, what is “ES6”? ES6 is colloquial shorthand for “ECMA-262, Edition 6”.  ES6 was published as a standard in 2015. The actual title of the ES6 specification is ECMAScript 2015 Language Specification and the preferred shorthand name is ECMAScript 2015 or just ES2015.

So, why the year-based designation?  The 6th edition of ECMA-262 took a long time to develop, arguably 15 years. As ES6 was approaching publication, TC39 (the Technical Committee within Ecma International that develops the ECMAScript specifications) already knew that it wanted to change its process in a way that enabled yearly maintenance updates.  That meant a new edition of ECMA-262 every year with a new edition number. After a few years we would be talking about ES6, ES7, ES8, ES9, ES10, ES11, etc. Those numbers quickly lose any context for people who aren’t deeply involved in the standards development process. Who would know whether the current standard is ES7, or ES8, or ES9? Was some feature introduced in ES6 or ES7? TC39 couldn’t eliminate the actual edition numbers (standards organizations love their document numbers) but it could change the document title.  We decided that TC39 would incorporate the year of release into the document’s title and encourage people to use the year when referring to a specific edition. So, the “newest version of JavaScript” is ECMA-262, Edition 8 and its title is ECMAScript 2017 Language Specification. Some people still refer to it as ES8, but the preferred shorthand name is ECMAScript 2017 or just ES2017.

But saying “ECMAScript” or mentioning specific ECMAScript editions is confusing to many people and probably is unnecessary for most situations.  The common name of the language really is JavaScript and unless you are talking about the actual specification document you probably don’t need to utter “ECMAScript”. But you may need to distinguish between old versions of JavaScript and what is implemented by newer, modern implementations.  The big change in the language and its specification occurred with  ES2015.  The subsequent editions make relatively small incremental extensions and corrections to what was standardized in 2015.  So, here is my recommendation.  Generally you should  just say “JavaScript” meaning the language as it is used in browsers, Node.js, and other environments.  If you need to specifically talk about JavaScript implementations that are based upon ECMAScript specifications published prior to ES2015 say “legacy JavaScript”. If you need to specifically talk about JavaScript that includes ES2015 (or later) features say “modern JavaScript”.

Can You Use It Yet?

Except for modules, almost all of ES2015-ES2017 is implemented in the current versions of all the major evergreen browsers (Chrome, Firefox, Safari, Edge), as well as in current versions of Node.js. If you need to write code that will run on non-evergreen browsers such as IE, you can use Babel to pre-compile modern JavaScript code into legacy JavaScript code.

Module support exists in all of the evergreen browsers, but some of them still require setting a flag to use it.  Native ECMAScript module support will hopefully ship in Node.js in spring 2018. In the meantime @std/esm enables use of ECMAScript modules in current Node releases.

Block Scoped Declaration (let and const)

The main motivation for block scoped declarations was to eliminate the “closure in loop” bug hazard that many JavaScript programmers have encountered when they set event handlers within a loop. The problem is that var declarations look like they should be local to the loop body but in fact are hoisted to the top of the current function, and hence each event handler defined in the loop uses the last value assigned to such variables.

Replacing var with let gives each iteration of the loop a distinct variable binding, so each event handler captures different variables with the values that were current when the event handler was installed.
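As a minimal sketch of the hazard and the fix (my example, not from the original post, with plain callbacks standing in for event handlers):

```javascript
// With var, every closure shares the single hoisted binding; with let,
// each loop iteration gets its own fresh binding.
const withVar = [];
for (var i = 0; i < 3; i++) {
  withVar.push(() => i); // all three closures see the final `i`
}

const withLet = [];
for (let j = 0; j < 3; j++) {
  withLet.push(() => j); // each closure captures its own `j`
}

console.log(withVar.map((f) => f())); // [ 3, 3, 3 ]
console.log(withLet.map((f) => f())); // [ 0, 1, 2 ]
```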

The hardest part about adding block scoped declaration to ECMAScript was coming up with a rational set of rules for how the declaration  should interact with the already existing var declaration form. We could not change the semantics of var without breaking backwards compatibility, which is something we try to never do. But, we didn’t want to introduce new WTF surprises in programs that use both var and let. Here are the basic rules we eventually arrived at:


Most browsers, except for IE, had implemented const declarations (but without block scoping) starting in the early naughts. Firefox implemented block scoped let declarations (but not exactly the same semantics as ES2015) in 2006.  By the time TC39 started seriously working on what ultimately became ES2015, the keywords const and let had become ingrained in our minds such that we didn’t really consider any other alternatives. I regret that.  In retrospect, I think we should have used let in place of const for declaring immutable variable bindings because that is the most common use case. In fact, I’m pretty sure that many developers use let instead of const for variables they don’t intend to change, simply because let has fewer characters to type. If we had used let in place of const then perhaps var would have been adequate for the relatively rare cases where a mutable variable binding is needed.  A language with only let and var would have been simpler than what we ended up with using const, let, and var.

Arrow Functions

One of the primary motivations for arrow functions was to eliminate another JavaScript bug hazard: the “wrong this” problem that occurs when you capture a function expression (for example, as an event handler) but forget that this used inside the function expression will not be the same value as this in the context where you created the function expression.  Conciseness was a consideration in the design of arrow functions, but fixing the “wrong this” problem was the real driver.
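Here is a small sketch of the hazard (a hypothetical example of mine, with a toy stand-in for an event registration API):

```javascript
// A plain `function () { this.count++; }` callback would get its own
// `this` when invoked bare (undefined in strict mode), so `this.count`
// would not refer to `counter`. The arrow function instead captures
// `this` lexically from the enclosing attach() call.
const counter = {
  count: 0,
  attach(register) {
    register(() => { this.count++; });
  },
};

// Toy stand-in for addEventListener/setTimeout: immediately fires the handler.
const fire = (handler) => handler();

counter.attach(fire);
console.log(counter.count); // 1
```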

I’ve heard several JS programmers comment that at first they didn’t like arrow functions but that they grew on them over time. Your mileage may vary. Here are a couple of good articles that address arrow function reluctance.

Modules

Actually, ES modules weren’t inspired by Node modules. But a lot of work went into making them feel familiar to people who were used to Node modules. In fact, ES modules are semantically more similar to the Pascal modules that Dave remembers than they are to Node modules.  The big difference is that in the ES design (and Pascal modules) the interfaces between modules are statically defined, while in the Node modules design module interfaces are dynamically defined. With static module interfaces, the inter-dependencies between a set of modules are precisely defined by the source code prior to executing any code.  With dynamic modules, the module interfaces cannot be fully understood without actually executing the code of the modules.  Or stated another way, ES module interfaces are declaratively defined while Node module interfaces are imperatively defined. Static module systems better support the creation of ahead-of-time tools such as accurate module dependency linters or module linkers. Such tools for dynamic module interfaces usually depend upon heuristics that analyze modules as if they had static interfaces.  Such analysis can be wrong if the actual dynamic interface construction does things that the heuristics didn’t account for.

The work on the ES module design actually started before the first release of Node. There were early proposals for dynamic module interfaces that are more like what Node adopted.  But TC39 made an early decision that declarative static module interfaces were a better design, for the long term. There has been much controversy about this decision. Unfortunately, it has created issues for Node which have been difficult for them to resolve. If TC39 had anticipated the rapid adoption of Node and the long time it would take to finish “ES6” we might have taken the dynamic module interface path. I’m glad we didn’t and I think it is becoming clear that we made the right choice.
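The static/dynamic distinction above can be sketched like this (my contrived example; the module and export names are hypothetical, and the CommonJS loader is simulated in-memory):

```javascript
// A dynamic (CommonJS-style) module interface: the exported names are
// computed at run time, so no tool can know the interface without
// actually executing this code.
function loadDynamicModule() {
  const module = { exports: {} };
  for (const name of ['encode', 'decode']) {
    module.exports[name] = () => name; // interface built imperatively
  }
  return module.exports;
}

const dynamic = loadDynamicModule();
console.log(Object.keys(dynamic)); // [ 'encode', 'decode' ]

// By contrast, an ES module's interface is fixed in the source text and
// known before any code runs:
//   // codec.js
//   export function encode() {}
//   export function decode() {}
//   // main.js
//   import { encode, decode } from './codec.js';
```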

Promises

Strictly speaking, the legacy JavaScript language didn’t do async at all.  It was host environments such as  browsers and Node that defined the APIs that introduced async programming into JavaScript.

ES2015 needed to include promises because they were being rapidly adopted by the developer community (including by new browser APIs) and we wanted to avoid the problem of competing incompatible promise libraries or of a browser-defined promise API that didn’t take other host environments into consideration.

The real benefit of ES2015 promises is that they provided a foundation for better async abstractions that do bury more of the BS within the runtime. Async functions, introduced in ES2017, are the “better way” to do async. In the pipeline for the near future is Async Iteration, which further simplifies a common async use case.
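As a rough sketch of what “burying more of the BS within the runtime” means in practice (the `delay` helper is invented for illustration), here is the same asynchronous step written with a raw ES2015 promise and with an ES2017 async function:

```javascript
// A minimal promise-returning helper (hypothetical example function).
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

// ES2015 promise style: the continuation lives in an explicit callback.
delay(10, 'first').then(v => console.log(v));

// ES2017 async function style: the same control flow reads sequentially;
// the runtime handles the callback plumbing behind `await`.
async function run() {
  const v = await delay(10, 'second');
  console.log(v);
}
run();
```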

Categorieën: Mozilla-nl planet

Air Mozilla: Intern Presentations: Round 7: Thursday, August 31st

Mozilla planet - do, 31/08/2017 - 22:00

 Thursday, August 31st Intern Presentations 6 presenters Time: 1:00PM - 2:30PM (PDT) - each presenter will start every 15 minutes 6 SF

Categorieën: Mozilla-nl planet


Mozilla VR Blog: glTF Exporter in three.js and A-Frame

Mozilla planet - do, 31/08/2017 - 21:27
A brief introduction glTF Exporter in three.js and A-Frame

When creating WebVR experiences, developers usually face a common problem: it’s hard to find assets other than basic primitives. There are several 3D packages for generating custom objects and scenes, each with its own file formats, and although they give you the option to export to a common file format like Collada or OBJ, each exporter saves the information in a slightly different way. Because of these differences, when we import these files into the 3D engine we are using, we often find that the result we see on screen is quite different from what we created initially.

glTF Exporter in three.js and A-Frame

The Khronos Group created the glTF 3D file format to have an open, application-agnostic and well-defined structure that can be imported and exported in a consistent way. The resulting file is smaller than most of the available alternatives, and it’s also optimized for real-time applications: it’s fast to read because the data doesn’t need to be consolidated; once we’ve read the buffers we can push them directly to the GPU.
The main features that glTF provides, and a comparison of 3D file formats, can be found in this article by Juan Linietsky.
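To make the “no consolidation needed” point concrete, here is a hedged sketch of the top level of a glTF 2.0 asset (the field values are illustrative): the JSON indexes straight into binary buffers through bufferViews and accessors, so a loader can hand the data to the GPU without restructuring it:

```javascript
// Minimal glTF 2.0 skeleton (values are illustrative): one triangle's
// positions live in an external binary buffer; the JSON only describes
// how to slice and interpret those bytes.
const gltf = {
  asset: { version: '2.0' },
  buffers: [{ uri: 'triangle.bin', byteLength: 36 }],          // raw bytes
  bufferViews: [{ buffer: 0, byteOffset: 0, byteLength: 36 }], // a slice
  accessors: [{                                                // its layout
    bufferView: 0,
    componentType: 5126, // FLOAT
    count: 3,            // three vertices
    type: 'VEC3'
  }],
  meshes: [{ primitives: [{ attributes: { POSITION: 0 } }] }]
};

// 3 vertices * 3 floats * 4 bytes each = the declared buffer length.
console.log(gltf.accessors[0].count * 3 * 4); // 36
```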

A few months ago feiss wrote an introduction to the glTF workflow he used to create the assets for our A-Saturday-Night demo.
Many things have improved since then. The glTF Blender exporter is now stable and supports glTF 2.0. The same goes for three.js and A-Frame: both have much better support for 2.0.
Now, most of the pain he experienced converting from Blender to Collada and then to glTF is gone, and we can export directly to glTF from Blender.

glTF is here to stay and its support has grown widely in recent months; it is available in most of the 3D web engines and applications out there, like three.js, Babylon.js, Cesium, Sketchfab, Blocks ...
The following video from the first glTF BOF (held at SIGGRAPH this year) illustrates how the community has embraced the format:


glTF Exporter on the web

One of the most requested features for A-Painter has been the ability to export to some standard format so people could reuse the drawing as an asset or placeholder in 3D content creation software (3ds Max, Maya,...) or engines like Unity or Unreal.
I started playing with the idea of exporting to OBJ, but a lot of changes were required to the original three.js exporter because of the lack of full triangle_strip support, so I put it on standby.

#A-painter triangleStrip lines exporter to OBJ, #wip :) /cc @utopiah @feiss #aframevr pic.twitter.com/skxbcJtoXy

— Fernando Serrano (@fernandojsg) January 16, 2017

After seeing all the industry support and adoption of glTF at Siggraph 2017 I decided to give it a second try.

The work was much easier than expected thanks to the nice THREE.js / A-Frame loaders that Don McCurdy and Takahiro have been driving. I thought it would be great to export content created directly on the web to glTF, and it would serve as a great excuse to go deep on the spec and understand it better.

glTF Exporter in three.js

Thanks to the great glTF spec documentation and examples, I got a glTF exporter working pretty fast.

The first version of the exporter has already landed in r87, but it is still at an early stage and under development. There’s an open issue if you want to get involved and follow the conversations about the missing features: https://github.com/mrdoob/three.js/issues/11951

API

The API follows the same structure as the existing exporters available in three.js:

  • Create an instance of THREE.GLTFExporter.
  • Call parse with the objects or scene that you want to export.
  • Get the result in a callback and use it as you want.
var gltfExporter = new THREE.GLTFExporter();
gltfExporter.parse( input, function ( result ) {
    var output = JSON.stringify( result, null, 2 );
    console.log( output );
    downloadJSON( output, 'scene.gltf' );
}, options );

More detailed and up-to-date information on the API can be found in the three.js docs.

Together with the exporter, I created a simple example in three.js combining the different types of primitives, helpers, rendering modes and materials, and exposing all the options the exporter has, so we could use it as a testing scene throughout development.

glTF Exporter in three.js and A-Frame

Integration in three.js editor

The integration with the three.js editor was pretty straightforward, and I think it’s one of the most useful features: since the editor supports importing plenty of 3D formats, it can be used as an advanced converter from these formats to glTF, allowing the user to delete unneeded data, tweak parameters, modify materials, etc. before exporting.

glTF Exporter in three.js and A-Frame


glTF Exporter on A-Frame

Please note that, as three.js r87 is required to use the GLTFExporter, currently only the master branch of A-Frame is supported; the first compatible stable version will be 0.7.0, to be released later this month.

Integration with A-Frame inspector

After the successful integration with three.js’ editor the next step was to integrate the same functionality into the A-Frame inspector.
I’ve added two options to export content to glTF:

  • Clicking on the export icon on the scenegraph will export the whole scene to glTF

glTF Exporter in three.js and A-Frame

  • Clicking on the entity’s attributes panel will export the selected entity to glTF

glTF Exporter in three.js and A-Frame

Exporter component in A-Frame

Last but not least, I’ve created an A-Frame component so users could export scenes and entities programmatically.

The API is quite simple, just call the export function from the gltf-exporter system:

sceneEl.systems['gltf-exporter'].export(input, options);

The function accepts several different input values: none (export the whole scene), one entity, an array of entities, or a NodeList (e.g. the result of a querySelectorAll).

The options accepted are the same as the original three.js function.
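The dispatch over those input shapes can be sketched like this (the normalizing helper below is an assumption standing in for A-Frame’s real implementation, shown only to illustrate the accepted inputs):

```javascript
// Hypothetical helper: normalize the export() input to a list of entities.
function normalizeExportInput(input, sceneEntities) {
  if (input === undefined || input === null) return sceneEntities; // whole scene
  if (Array.isArray(input)) return input;                   // array of entities
  if (typeof input.length === 'number') return Array.from(input); // NodeList
  return [input];                                           // a single entity
}

const box = { tagName: 'A-BOX' };       // stand-ins for entity elements
const sphere = { tagName: 'A-SPHERE' };
const sceneEntities = [box, sphere];

console.log(normalizeExportInput(undefined, sceneEntities).length);     // 2
console.log(normalizeExportInput(box, sceneEntities).length);           // 1
console.log(normalizeExportInput([box, sphere], sceneEntities).length); // 2
```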

A-Painter exporter

The whole story wouldn’t be complete if the initial issue that got me into glTF wasn’t resolved :) After all the previous work described above, it was trivial to add support for exporting to glTF in A-Painter.

  • Include the aframe-gltf-exporter-component script:
<script src="https://unpkg.com/aframe-gltf-exporter-component/dist/aframe-gltf-exporter-component.min.js"></script>
  • Attach the component to a-scene:
<a-scene gltf-exporter/>
  • And finally register a shortcut (g) to save the current drawing to glTF:
if (event.keyCode === 71) { // Export to glTF (g)
  var drawing = document.querySelector('.a-drawing');
  self.sceneEl.systems['gltf-exporter'].export(drawing);
}

glTF Exporter in three.js and A-Frame


Extra: Exporter bookmarklet

While developing the exporter I found it very useful to create a bookmarklet that injects the exporter code into any A-Frame or three.js page. This way I could export the whole scene just by clicking on it.
If AFRAME is defined, it will export AFRAME.scenes[0], as that is the default scene loaded. If not, it will look for a global variable called scene, which is the name most commonly used in three.js examples.
It is not bulletproof, so you may need to make some changes if it doesn’t work on your app, probably by looking for something other than scene.

To use it you should create a new bookmark on your browser and paste the following code on the URL input box:

glTF Exporter in three.js and A-Frame

What’s next?

At Mozilla we are committed to helping improve the glTF specification and its ecosystem.
glTF will keep evolving, and many interesting features are being proposed in the roadmap discussion. If you have any suggestions, don't hesitate to comment there, since all proposals are discussed and taken into account.

As I stated before, the glTF exporter is still at an early stage, but it's being actively developed, so please feel free to jump into the discussion to help prioritize new features.

Finally: wouldn't it be great to see more content creation tools on the web with glTF support, so you don't depend on a desktop application to generate your assets?

Categorieën: Mozilla-nl planet

David Bryant: Mozilla Developer Roadshow: Asia Chapter

Mozilla planet - do, 31/08/2017 - 21:18

Mozilla Developer Roadshow events are fun, informative sessions for people who build the Web. Over the past eight months we’ve held thirty-six events all over the world spreading the word about the latest in Mozilla and Firefox technologies. Now we’re heading to Asia with the goals of finding local experts and connecting the community. Some of our most successful moments have been when we were able to bring local event organizers together to forge lasting relationships. Our first Asia event is in Singapore at the PayPal headquarters on September 19. (Check here for a full list of the cities.)

I’m excited to be coming along and be part of some of those events and so wanted to know what to anticipate plus get a little perspective from someone immersed in the local developer community. To do that I chatted with Hui Jing Chen, a front-end engineer based in Singapore who speaks globally on CSS Grid.

Q: What would you like to have come out of the event in Singapore? Should we look forward to more opportunities for collaboration between Mozilla and developers in Singapore and Asia?

Hui Jing (HJ): I definitely want to have more collaboration between Mozilla and developers in this region (Southeast Asia). I am aware that a lot of the work on web technologies comes out of Europe or North America, and there are lots of factors at play here, including the fact that digital computing was kickstarted in those regions. But it is the WORLD wide web, and I think it is important that developers from other regions contribute to the development of the web as well. For example, WebRTC expert Dr. Alex Gouaillard runs CoSMo Software Consultancy out of Singapore, and they are key contributors to WebRTC’s development. Understandably, it will take time for our region to catch up, but I hope events like this encourage developers in the region to not only be users of web technologies, but shapers of them as well.

David (DB): And independent of where the technology might come from, clearly the use of the web on a day-to-day basis is as much if not more so driven by what people are doing in Asia and the information (or experiences) they need. We know from our steady stream of developer relations efforts and our Tech Speakers activities that the more engaged we are with developers in this region the richer the web will be and the better sense we’ll have of where the web needs to go. So yes, more opportunities for collaboration would be marvelous!

Q: Meetups have been great regional allies for our Developer Roadshows — What are the unique cultural aspects of the Singapore/Malaysia MeetUp Communities?

HJ: My web development career has taken place completely in Singapore, so I can only speak about the Singapore meetup community, but I find that there is less “networking” at the meetups, in that, you’ll see pockets of people chatting with each other, but a large number of people show up to listen to the talk then leave immediately after. Maybe this happens universally, I can’t say for sure that this is unique though.

DB: That’s something we’ve heard and seen elsewhere too. In part that’s why we like the smaller, more frequent, more community-oriented approach we’ve taken for our Developer Roadshows as opposed to more traditional conference-style events. Our hope is that keeping it more intimate, hosting jointly with well-established local partners, and engaging with an existing local community will give people a more comfortable way of considering ongoing collaboration opportunities yet still have an informative core topic that brings them together in the first place.

Q: Tell me a little bit about some challenges working with and participating with the community.

HJ: I’m the co-organizer of Talk.CSS, which is Singapore’s CSS meetup, and in general, the challenge is in finding new speakers. The community in Singapore is really great, so finding venues is never the problem, it’s usually getting people to speak that is much trickier. I sometimes joke that I’m amazed I still have friends left because I’ve almost strong-armed all of them to speak at my meetup at some point in time, and they’re all too polite to say no. This could be an Asian thing, but people here are a bit more reserved, and if they’ve done something cool, they’re less compelled to stand up in front of everyone and share what they did.

DB: Hmmm, perhaps that’s something we can help you with. (And I mean the finding speakers part, not the still having friends part. :-)

Q: Every region has its particular special interests and strengths. What are some things that the Singapore and possibly Malaysian community does exceptionally well?

HJ: Singapore has an exceptionally strong tech community (at least from what I’ve heard from my friends outside of Singapore). This can be attributed to the efforts from key people, who we will hopefully meet in Singapore, who are super active when it comes to organizing events, helping out newbies, encouraging developers to start their own meetups, and generally just making the tech community in Singapore really vibrant.

For example, webuild.sg is the go-to resource for all the tech meetups in Singapore, which is especially helpful if you want to start your own. They also have their own podcast, where they interview developers on their respective areas of expertise. Engineers.sg was originally a one-man operation which records almost every tech meetup in Singapore, and has now expanded into an entirely volunteer run team.

DB: I wasn’t familiar with webuild.sg, but now that you’ve pointed it out to me I keep finding valuable and informative information on the site, for example on organizing events and contributing to open source. So it’s not only a vital resource for the community in Singapore but valuable elsewhere too.

Q: What expectations should we have as a team visiting from the US/Europe?

HJ: Locals are generally more reserved, in that, usually the people who ask questions or speak up more are foreigners from Western countries. There is a sizeable population of developers from all over the world here in Singapore, so meetup attendance is very diverse. It seems that most people are more comfortable approaching speakers individually after the talk rather than during an open Q&A session.

DB: Individual conversations afterward are something I know our presenters and Roadshow team like very much too. I think our format for the Developer Roadshow works well for that so am looking forward to meeting people and talking to them one-on-one.

Q: Diversity and inclusion are very much highlighted in our tech communities — is this an issue of discussion here in Singapore?

HJ: These issues are not as hotly discussed here as in America, I think, largely because Singapore has always been a multicultural society. I’m not saying racism and misogyny do not exist here, but I dare say very few people are overtly so. I think the gender ratio in tech is male-dominated all over the world, including here.

DB: Certainly this is an issue that varies by region, though we’re committed to expressing our support for diversity and inclusion across all developer communities. That means, for example, having a clear code of conduct for events to promote the largest number of participants with the most varied backgrounds. And we love having these Developer Roadshow events play a part in that, having heard attendees express their delight when they meet other folks from similar backgrounds or come to hear presenters with diverse backgrounds. I know from talking to other people about their company’s developer outreach efforts that we’re going to see even more progress in this space going forward.

Our Developer Roadshow events have been enjoyable and very popular, and I’m looking forward to the upcoming sessions in Asia. We’ll have more later in the year in other locations around the world too, and by the time 2017 is over we will have held about fifty-five sessions — more than one a week. Hopefully one has been near enough for you to take part, and as we’re keen to keep the program going, there will be more soon. Let us know if not, though, and we’ll see what we can do!

Mozilla Developer Roadshow: Asia Chapter was originally published in Mozilla Tech on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Aug. 31, 2017

Mozilla planet - do, 31/08/2017 - 18:00

Reps Weekly Meeting Aug. 31, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet


Matěj Cepl: EconTalk, Future of Cars and Telecommuting

Mozilla planet - do, 31/08/2017 - 14:50

(This has been comment on the episode of EconTalk)

It seems to me that, however awesome this interview was (and it was), it is still in danger of being the same kind of prediction as the one about colourful faxes.

I think we are standing on the edge of the end …

Categorieën: Mozilla-nl planet

The Mozilla Blog: A ₹1 Crore Fund to Support Open Source Projects in India

Mozilla planet - do, 31/08/2017 - 03:00

Today Mozilla is announcing the launch of “Global Mission Partners: India”, an award program specifically focused on supporting open source and free software.  The new initiative builds on the existing “Mission Partners” program. Applicants based in India can apply for funding to support any open source/free software projects which significantly further Mozilla’s mission.

Our mission, as embodied in our Manifesto, is to ensure the Internet is a global public resource, open and accessible to all; an Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.

We know that many other software projects around the world, and particularly in India, share the goals of a free and open Internet with us, and we want to use our resources to help and encourage others to work towards this end.

If you are based in India and you think your project qualifies, Mozilla encourages you to apply.  You can find the complete guidelines about this exciting award program on Mozilla’s wiki page.

The minimum award for a single application to the “Global Mission Partners: India” initiative is ₹1,25,000, and the maximum is ₹50,00,000.

The deadline for applications for the initial batch of “Global Mission Partners: India” is the last day of September 2017, at midnight Indian Time. Organizations can apply beginning today, in English or Hindi.

You can find a version of this post in Hindi here.

The post A ₹1 Crore Fund to Support Open Source Projects in India appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.20

Mozilla planet - do, 31/08/2017 - 02:00

The Rust team is happy to announce the latest version of Rust, 1.20.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.20 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.20.0 on GitHub.

What’s in 1.20.0 stable

In previous Rust versions, you can already define traits, structs, and enums that have “associated functions”:

struct Struct;

impl Struct {
    fn foo() {
        println!("foo is an associated function of Struct");
    }
}

fn main() {
    Struct::foo();
}

These are called “associated functions” because they are functions that are associated with the type, that is, they’re attached to the type itself, and not any particular instance.

Rust 1.20 adds the ability to define “associated constants” as well:

struct Struct;

impl Struct {
    const ID: u32 = 0;
}

fn main() {
    println!("the ID of Struct is: {}", Struct::ID);
}

That is, the constant ID is associated with Struct. Like functions, associated constants work with traits and enums as well.

Traits have an extra ability with associated constants that gives them some extra power. With a trait, you can use an associated constant in the same way you’d use an associated type: by declaring it, but not giving it a value. The implementor of the trait then declares its value upon implementation:

trait Trait {
    const ID: u32;
}

struct Struct;

impl Trait for Struct {
    const ID: u32 = 5;
}

fn main() {
    println!("{}", Struct::ID);
}

Before this release, if you wanted to make a trait that represented floating point numbers, you’d have to write this:

trait Float {
    fn nan() -> Self;
    fn infinity() -> Self;
    ...
}

This is slightly unwieldy, but more importantly, because they’re functions, they cannot be used in constant expressions, even though they only return a constant. Because of this, a design for Float would also have to include constants as well:

mod f32 {
    const NAN: f32 = 0.0f32 / 0.0f32;
    const INFINITY: f32 = 1.0f32 / 0.0f32;

    impl Float for f32 {
        fn nan() -> Self {
            f32::NAN
        }

        fn infinity() -> Self {
            f32::INFINITY
        }
    }
}

Associated constants let you do this in a much cleaner way. This trait definition:

trait Float {
    const NAN: Self;
    const INFINITY: Self;
    ...
}

Leads to this implementation:

mod f32 {
    impl Float for f32 {
        const NAN: f32 = 0.0f32 / 0.0f32;
        const INFINITY: f32 = 1.0f32 / 0.0f32;
    }
}

Much cleaner, and more versatile.

Associated constants were proposed in RFC 195, almost exactly three years ago. It’s been quite a while for this feature! That RFC contained all associated items, not just constants, and so some of them, such as associated types, were implemented faster than others. In general, we’ve been doing a lot of internal work for constant evaluation, to increase Rust’s capabilities for compile-time metaprogramming. Expect more on this front in the future.

We’ve also fixed a bug with the include! macro in documentation tests: for relative paths, it erroneously was relative to the working directory, rather than to the current file.

See the detailed release notes for more.

Library stabilizations

There’s nothing super exciting in libraries this release, just a number of solid improvements and continued stabilizing of APIs.

The unimplemented! macro now accepts messages that let you say why something is not yet implemented.

We upgraded to Unicode 10.0.0.

min and max on floating point types were rewritten in Rust, no longer relying on cmath.

We are shipping mitigations against Stack Clash in this release, notably, stack probes, and skipping the main thread’s manual stack guard on Linux. You don’t need to do anything to get these protections other than using Rust 1.20.

We’ve added a new trio of sorting functions to the standard library: slice::sort_unstable_by_key, slice::sort_unstable_by, and slice::sort_unstable. You’ll note that these all have “unstable” in the name. Stability is a property of sorting algorithms that may or may not matter to you, but now you have both options! Here’s a brief summary: imagine we had a list of words like this:

rust
crate
package
cargo

Two of these words, cargo and crate, both start with the letter c. A stable sort that sorts only on the first letter must produce this result:

crate
cargo
package
rust

That is, because crate came before cargo in the original list, it must also be before it in the final list. An unstable sort could provide that result, but could also give this answer too:

cargo
crate
package
rust

That is, the results may not be in the same original order.

As you might imagine, fewer constraints often mean faster results. If you don’t care about stability, these sorts may be faster for you than the stable variants. As always, it’s best to check both and see! These functions were added by RFC 1884, if you’d like more details, including benchmarks.
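The stable/unstable distinction is language-agnostic; as a quick illustration (in JavaScript rather than Rust, since JavaScript's built-in sort is specified to be stable in modern engines), here is the word list above sorted by first letter only:

```javascript
// Sorting only on the first letter: "crate" and "cargo" compare equal.
const words = ['rust', 'crate', 'package', 'cargo'];

const byFirstLetter = (a, b) => a.charCodeAt(0) - b.charCodeAt(0);

// A stable sort must keep "crate" before "cargo", matching their
// original relative order; an unstable sort would be free to swap them.
console.log([...words].sort(byFirstLetter));
```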

Additionally, the following APIs were also stabilized:

See the detailed release notes for more.

Cargo features

Cargo has some nice upgrades this release. First of all, your crates.io authentication token used to be stored in ~/.cargo/config. As a configuration file, this would often be stored with 644 permissions, that is, world-readable. But it has a secret token in it. We’ve moved the token to ~/.cargo/credentials, so that it can be permissioned 600, and hidden from other users on your system.

If you used secondary binaries in a Cargo package, you know that they’re kept in src/bin. However, sometimes, you want multiple secondary binaries that have significant logic; in that case, you’d have src/bin/client.rs and src/bin/server.rs, and any submodules for either of them would go in the same directory. This is confusing. Instead, we now conventionally support src/bin/client/main.rs and src/bin/server/main.rs, so that you can keep larger binaries more separate from one another.

See the detailed release notes for more.

Contributors to 1.20.0

Many people came together to create Rust 1.20. We couldn’t have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet
