Planet Mozilla

Botond Ballo: Trip Report: C++ Standards Meeting in Albuquerque, November 2017

Mon, 20/11/2017 - 16:00
Summary / TL;DR

Project | What's in it? | Status
C++17 | See below | Publication imminent
Library Fundamentals TS v2 | Source code information capture and various utilities | Published!
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Nearing feature-completion; expect PDTS ballot at next meeting
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it headed for C++20
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Publication imminent
Ranges TS | Range-based algorithms and views | Publication imminent
Coroutines TS | Resumable functions, based on Microsoft's await design | Publication imminent
Modules TS | A component system to supersede the textual header file inclusion model | Resolution of comments on Proposed Draft in progress
Numerics TS | Various numerical facilities | Under active development; no new progress
Graphics TS | 2D drawing API | Under active design review; no new progress
Reflection | Code introspection and (later) reification mechanisms | Introspection proposal awaiting wording review; targeting a Reflection TS
Contracts | Preconditions, postconditions, and assertions | Proposal under wording review

Some of the links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of November 27, 2017). If you encounter such a link, please check back in a few days.


A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Albuquerque, New Mexico. This was the third committee meeting in 2017; you can find my reports on previous meetings here (February 2017, Kona) and here (July 2017, Toronto). These reports, particularly the Toronto one, provide useful context for this post.

With the final C++17 International Standard (IS) having been voted for publication, this meeting was focused on C++20, and the various Technical Specifications (TS) we have in flight, most notably Modules.

What’s the status of C++17?

The final C++17 International Standard (IS) was sent off for publication in September. The final document is based on the Draft International Standard (DIS), with only minor editorial changes (nothing normative) to address comments on the DIS ballot; it is now in ISO’s hands, and official publication is imminent.

In terms of implementation status, the latest versions of GCC and Clang both have complete support for C++17, modulo bugs. MSVC is said to be on track to be C++17 feature-complete by March 2018; if that ends up being the case, C++17 will be the quickest standard version to date to be supported by these three major compilers.


C++20

This is the second meeting that the C++20 Working Draft has been open for changes. (To use a development analogy, think of the current Working Draft as “trunk”; it was opened for changes as soon as C++17 “branched” earlier this year). Here, I list the changes that have been voted into the Working Draft at this meeting. For a list of changes voted in at the previous meeting, see my Toronto report.

Technical Specifications

In addition to the C++ International Standard, the committee publishes Technical Specifications (TS), which can be thought of as “feature branches” (to continue the development analogy from above), where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

At the last meeting, we published three TSes: Coroutines, Ranges, and Networking. The next step for these features is to wait for a while (usually at least a year) to give users and implementers a chance to try them out and provide feedback. Once we’re confident the features are ripe for final standardization, they will be merged into a future version of the International Standard (possibly C++20).

Modules TS

The Modules TS made significant progress at the last meeting: its Proposed Draft (PDTS) was published and circulated for balloting, a process where national standards bodies evaluate, vote on, and submit comments on a proposed document. The ballot passed, but numerous technical comments were submitted that the committee intends to address before final publication.

A lot of time at this meeting was spent working through those comments. Significant progress was made, but not enough to vote out the final published TS at the end of the meeting. The Core Working Group (CWG) intends to hold a teleconference in the coming months to continue reviewing comment resolutions. If they get through them all, a publication vote may happen shortly thereafter (also by teleconference); otherwise, the work will be finished, and the publication vote held, at the next meeting in Jacksonville.

I summarize some of the technical discussion about Modules that took place at this meeting below.

Modules implementations are also progressing: in addition to Clang and MSVC, Facebook has been contributing to a GCC implementation.

Parallelism TS v2

The Parallelism TS v2 is feature-complete, with one final feature, a template library for parallel for loops, voted in at this meeting. A vote to send it out for its PDTS ballot is expected at the next meeting.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet) continues to be under active development. Three new features targeting it have received design approval at this meeting: std::cell, a facility for deferred reclamation; apply() for synchronized_value; and atomic_ref. An initial working draft that consolidates the various features slated for the TS into a single document is expected at the next meeting.
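
As a taste of one of these, here is a minimal sketch of atomic_ref based on the proposed interface; treat the details as assumptions, since they may change before the TS ships:

    #include <atomic>

    // Sketch: perform atomic operations on an ordinary, non-atomic int.
    void increment_shared(int& counter) {
        std::atomic_ref<int> ref(counter);            // wraps the existing object
        ref.fetch_add(1, std::memory_order_relaxed);  // atomic increment
    }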

Executors, slated for a separate TS, are making progress: the Concurrency Study Group approved the design of the unified executors proposal, thereby breaking the deadlock that has been holding the feature up for a number of years.

Stackful coroutines continue to be a unique beast of their own. I’ve previously reported them to be slated for the Concurrency TS v2; I’m not sure whether that’s still the case. They change the semantics of code in ways that impact the core language, and thus need to be reviewed by the Evolution Working Group; one potential concern is that the proposal may not be implementable on all platforms (iOS came up as a concrete example during informal discussion). For the time being, the proposal is still being looked at by the Concurrency Study Group, where there continues to be strong interest in standardizing them in some form, but the details remain to be nailed down; I believe the latest development is that an older API proposal may end up being preferred over the latest call/cc one.

Future Technical Specifications

There are some planned future Technical Specifications that don’t have an official project or working draft yet:


Reflection

The static introspection / “reflexpr” proposal (see its summary, design, and specification for details), headed for a Reflection TS, has been approved by the Evolution and Library Evolution Working Groups, and is awaiting wording review. The Reflection Study Group (recently renamed to “Compile-Time Programming Study Group”) approved an extension to it, concerning reflection over functions, at this meeting.

There are more reflection features to come beyond what will be in the static introspection TS. One proposal that has been drawing a lot of attention is metaclasses, an updated version of which was reviewed at this meeting (details below).


Graphics

I’m not aware of much new progress on the planned Graphics TS (containing 2D graphics primitives inspired by cairo) since the last meeting. The latest draft spec can be found here, and is still on the Library Evolution Working Group’s plate.


Numerics

Nothing particularly new to report here either; the Numerics Study Group did not meet this week. The high-level plan for the TS remains as outlined previously. There are concrete proposals for several of the listed topics, but no working draft for the TS yet.

Other major features

Concepts

As I related in my previous report, Concepts was merged into C++20, minus abbreviated function templates (AFTs) and related features which remain controversial.

I also mentioned that there will likely be future proposals to get back AFTs in some modified form that addresses the main objection to them (that knowing whether a function is a template or not requires knowing whether the identifiers in its signature name types or concepts). Two such proposals were submitted in advance of this meeting; interestingly, both of them proposed a very similar design: an adjective syntax, where in an AFT a concept name would act as an adjective tacked onto the thing it’s constraining – most commonly, for a type concept, typename or auto. So instead of void sort(Sortable& s);, you’d have void sort(Sortable& auto s);, and that makes it clear that a template is being defined.

These proposals were not discussed at this meeting, because some of the authors of the original Concepts design could not make it to the meeting. I expect a lively discussion in Jacksonville.

Now that Concepts are in the language, the question of whether new library proposals should make use of them naturally arose. The Library Evolution Working Group’s initial guidance is “not yet”. The reason is that most libraries require some foundational concepts to build their more specific concepts on top of, and we don’t want different library proposals to duplicate each other / reinvent the wheel in that respect. Rather, we should start by adding a well-designed set of foundational concepts, and libraries can then start building on top of those. The Ranges TS is considered a leading candidate for providing that initial set of foundational concepts.

Operator Dot

I last talked about overloading operator dot a year ago, when I mentioned that there are two proposals for this: the original one, and an alternative approach that achieves a similar effect via inheritance-like semantics.

There hasn’t been much activity on those proposals since then. I think that’s for two reasons. First, the relevant people have been occupied with Concepts. Second, as the reflection proposals develop, people are increasingly starting to see them as a more general mechanism to satisfy operator dot’s use cases. The downside, of course, is that reflection will take longer to arrive in C++, while one of the above two proposals could plausibly have been in C++20.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

All proposals discussed in EWG at this meeting were targeting C++20 (except for Modules, where we discussed some changes targeting the Modules TS). I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Standardizing feature test macros (and another paper effectively asking for the same thing). Feature test macros are macros like __cpp_lambdas that tell you whether your compiler or standard library supports a particular feature, without having to resort to the more indirect approach of having a version check for each of your supported compilers. The committee maintains a list of them, but they’re not an official part of the standard, and this has led some implementations to refuse to support them, thus significantly undermining their usefulness. To rectify this, it was proposed that they be made part of the official standard. This was first proposed at the last meeting, but failed to gain consensus at that time. It appears that people have since been convinced (possibly by the arguments laid out in the linked papers), as this time around EWG approved the proposal. (A usage sketch appears after this list.)
  • Bit-casting object representations. This is a library proposal, but EWG was asked for guidance regarding making this function constexpr, which requires compiler support. EWG decided that it could be made constexpr for all types except a few categories – unions, pointers, pointers-to-members, and references – for which that would have been tricky to implement. (A sketch follows this list.)
    • As a humorous side-note about this proposal, since it could only apply to “plain old data” types (more precisely, trivially copyable types; as mentioned above, “plain old data” was deprecated as a term of art), one of the potential names the authors proposed for the library function was pod_cast. Sadly, this was voted down in favour of bit_cast.
  • Language support for empty objects. This addresses some of the limitations of the empty base optimization (such as not being able to employ it with types that are final or otherwise cannot be derived from) by allowing data members to opt out of the rule that requires them to occupy at least 1 byte using an attribute, [[no_unique_address]]. The resulting technique is called the “empty member optimization”. (A sketch follows this list.)
  • Efficient sized delete for variable-sized classes. I gave some background on this in my previous post. The authors returned with sign-off from all relevant implementers, and a clearer syntax (the “destroying delete” operator is now identified by a tag type, as in operator delete(Type*, std::destroying_delete_t)), and the proposal was approved.
  • Attributes for likely and unlikely statements. This proposal has been updated as per previous EWG feedback to allow placing the attribute on all statements. It was approved with one modification: placing the attribute on a declaration statement was forbidden, because other attributes on declaration statements consistently apply to the entity being declared, not the statement itself.
  • Deprecate implicit capture of *this. Only the implicit capture of *this via [=] was deprecated; EWG felt that disallowing implicit capture via [&] would break too much idiomatic code.
  • Allow pack expansions in lambda init-capture. There was no compelling reason to disallow this, and the workaround of constructing a tuple to store the arguments and then unpacking it is inefficient.
  • String literals as template parameters. This fixes a longstanding limitation in C++ where there was previously no way to do compile-time processing of strings in such a way that the value of the string could affect the type of the result (as an example, think of a compile-time regex parsing library where the resulting type defines an efficient matcher (DFA) for the regex). The syntax is very simple: template <auto& String>; the auto then gets deduced as const char[N] (or const char16_t[N] etc. depending on the type of the string literal passed as argument) where N is the length of the string. (You can also write template <const char (&String)[N]> if you know N, but you can’t write template <size_t N, const char (&String)[N]> and have both N and String deduced from a single string literal template argument, because EWG did not want to create a precedent for a single template argument matching two template parameters. That’s not a big deal, though: using the auto form, you can easily recover N via traits, and even constrain the length or the character type using a requires-clause.) A sketch follows this list.
  • A tweak to the Contracts proposal. An issue came up during CWG review of the proposal regarding inline functions with assertion checks inside them: what should happen if the function is called from two translation units, one of which is compiled with assertion checks enabled and one of them not? EWG’s answer was that, as with NDEBUG today, this is technically an ODR (one definition rule) violation. The behaviour in practice is fairly well understood: the linker will pick one version or the other, and that version will be used by both translation units. (There are some potential issues with this: what if, while compiling a caller in one of the translation units, the optimizer assumed that the assertion was checked, but the linker picks the version where the assertion isn’t checked? That can result in miscompilation. The topic remains under discussion.)
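
To illustrate the feature-test-macros bullet, here is a check as it might appear in portable code; the macro name and value come from the committee’s published list:

    // Prefer checking the feature itself over sniffing compiler versions.
    #if defined(__cpp_lambdas) && __cpp_lambdas >= 200907
    auto make_adder(int n) { return [n](int x) { return x + n; }; }
    #else
    // fall back to a hand-written function object here
    #endif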
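
For the bit-cast bullet, a sketch assuming the function ends up in a <bit> header with the proposed signature:

    #include <bit>      // assumed location of bit_cast
    #include <cstdint>

    // Reinterpret a float's object representation as an integer at compile time,
    // without the UB of reinterpret_cast or union type-punning.
    constexpr std::uint32_t float_bits(float f) {
        return std::bit_cast<std::uint32_t>(f);
    }
    static_assert(float_bits(1.0f) == 0x3f800000u);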
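
For the empty-objects bullet, a sketch of the empty member optimization, assuming the attribute is adopted as proposed:

    struct Empty {};  // a stateless policy type; could even be final

    struct Widget {
        [[no_unique_address]] Empty policy;  // may occupy zero bytes
        int value;
    };

    // On implementations that honour the attribute, sizeof(Widget) == sizeof(int).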
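
And for the string-literals bullet: the auto& form can be exercised today with a named constexpr array; passing the literal directly is what the proposal adds:

    #include <cstddef>

    template <auto& String>
    constexpr std::size_t length() {
        return sizeof(String) - 1;  // String deduces as const char (&)[N]
    }

    constexpr char greeting[] = "hello";  // today the array needs a name...

    static_assert(length<greeting>() == 5);
    // ...whereas the proposal would also allow: length<"hello">()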

There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:

  • Fixing small-ish functionality gaps in concepts. This consisted of three parts, two of which were accepted:
    • requires-clauses in lambdas. This was accepted.
    • requires-clauses in template template parameters. Also accepted.
    • auto as a parameter type in regular (non-lambda) functions. This was mildly controversial due to the similarity to AFTs, whose design is still under discussion, so it was deferred to be dealt with together with AFTs.
  • Access specifiers and specializations.
  • Deprecating “plain old data” (POD).
  • Default constructible and assignable stateless lambdas.

    Proposals for which further work is encouraged:

    • Standard containers and constexpr. This is the latest version of an ongoing effort by compiler implementers and others to get dynamic memory allocation working in a constexpr context. The current proposal allows most forms of dynamic allocation and related constructs during constant evaluation: non-trivial destructors, new and delete expressions, placement new, and use of std::allocator; this allows reusing a lot of regular code, including code that uses std::vector, in a constexpr context. Direct use of operator new is not allowed, because that returns void*, and constant evaluation needs to track the type of dynamically allocated objects. There is also a provision to allow memory that is dynamically allocated during constant evaluation to survive to runtime, at which point it’s treated as static storage. EWG liked the direction (and particularly the fact that compiler writers were on the same page regarding its implementability) and encouraged development of a more concrete proposal along these lines. (A sketch follows this list.)
    • Supporting offsetof for stable-layout classes. “Stable-layout” is a new proposed category of types, broader than “standard-layout”, for which offsetof could be implemented. EWG observed that the definition of “standard-layout” itself could be broadened a bit to include most of the desired use cases, and expressed a preference for doing that instead of introducing a new category. There was also talk of potentially supporting offsetof for all types, which may be proposed separately as a follow-up.
    • short float. This proposal for a 16-bit floating-point type was approved by EWG earlier this year, but came back for some reason. There was some re-hashing of previous discussions about whether the standard should mandate the size (16 bits) and IEEE behaviour.
    • Adding alias declarations to concepts. This paper proposed three potential enhancements to concept declarations to make writing concepts easier. EWG was not particularly convinced about the need for this, but believed at least the first proposal could be entertained given stronger motivation.
    • [[uninitialized]] attribute. This attribute is intended to suppress compiler warnings about variables that are declared but not initialized in cases where this is done intentionally, thus facilitating the use of such warnings in a codebase to catch unintentional cases. EWG pointed out that most compilers these days warn not about uninitialized declarations, but uninitialized uses. There was also a desire to address the broader use case of allocating dynamic memory that is purposely uninitialized (e.g. std::vector<char> buffer(N) currently zero-initializes the allocated memory).
    • Relaxed incomplete multidimensional array type declaration. This is a companion proposal to the std::mdspan library proposal, which is a multi-dimensional array view. It would allow writing things like std::mdspan<double[][][]> to denote a three-dimensional array where the size in each dimension is determined at runtime. Note that you still would not be able to create an object of type double[][][]; you could only use it in contexts that do not require creating an object, like a template argument. Basically, mdspan is trying to (ab)use array types as a mini-DSL to describe its dimensions, similar to how std::function uses function types as a mini-DSL to describe its signature. This proposal was presented before, when mdspan was earlier in its design stage, and EWG did not find it sufficiently motivating. Now that mdspan is going forward, the authors tried again. EWG was open to entertaining the idea, but only if technical issues such as the interaction with template argument deduction are ironed out.
    • Class types in non-type template parameters. This has been proposed before, but EWG was stuck on the question of how to determine equivalence (something you need to be able to do for template arguments) for values of class types. Now, operator<=> has given us a way to move forward on this question, basically by requiring that class types used in non-type template parameters have a defaulted operator<=>. It was observed that there is some overlap with the proposal to allow string literals as template parameters (since one way to pass a character array as a template parameter would be to wrap it in a struct), but it seemed like they also each have their own use cases and there may be room for both in the language. (A sketch follows this list.)
    • Dynamic library loading. The C++ standard does not talk about dynamic libraries, but some people would find it useful to have a standardized library interface for dealing with them anyways. EWG was asked for input on whether it would be acceptable to standardize a library interface without saying too much about its semantics (since specifying the semantics would require that the C++ standard start talking about dynamic libraries, and specifying their behaviour in relation to exceptions, thread-local storage, the One Definition Rule, and so on). EWG was open to this direction, but suggested that the library interface be made much more general, as in its current incarnation it seemed to be geared towards certain platforms and unimplementable on others.
    • Various proposed extensions to the Modules TS, which I talk about below.
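
    To make the constexpr-container bullet above concrete, this is the kind of code the proposal aims to make valid (a sketch, assuming the proposal is adopted roughly as described):

        #include <vector>

        // Dynamic allocation during constant evaluation: the vector is created,
        // used, and destroyed entirely at compile time.
        constexpr int sum_of_squares(int n) {
            std::vector<int> v;
            for (int i = 1; i <= n; ++i)
                v.push_back(i * i);
            int sum = 0;
            for (int x : v)
                sum += x;
            return sum;
        }

        static_assert(sum_of_squares(3) == 14);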
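
    And for the class-types-as-template-parameters bullet, a sketch of how the defaulted operator<=> requirement might look in practice:

        #include <compare>

        struct Point {
            int x, y;
            // Defaulted <=> gives the compiler a well-defined notion of equivalence.
            auto operator<=>(const Point&) const = default;
        };

        template <Point Origin>
        constexpr int manhattan(int x, int y) {
            return (x - Origin.x) + (y - Origin.y);
        }

        static_assert(manhattan<Point{1, 1}>(3, 4) == 5);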

    There was also a proposal for recursive lambdas that wasn’t discussed because its author realized it needed some more work first.

    Rejected proposals:

    • A proposed trait has_padding_bits, the need for which came up during review of an atomics-related proposal by the Concurrency Study Group. EWG expressed a preference for an alternative approach that removed the need for the trait by putting the burden on compiler implementers to make things work correctly.
    • Attributes for structured bindings. This was proposed previously and rejected on the basis of insufficient motivation. The author came back with additional motivation: thread-safety attributes such as [[guarded_by]] or [[locks_held]]. However, it was pointed out that the individual bindings are just aliases to fields of an (unnamed) object, so it doesn’t make sense to apply attributes to them; attributes can be applied to the deconstructed object as a whole, or to one of its fields at the point of the field’s declaration.
    • Keeping the alias syntax extendable. This proposed reverting the part of the down with typename! proposal, approved at the last meeting, that allowed omitting the typename in using alias = typename T::type; where T was a dependent type. The rationale was that even though today only a type is allowed in that position (thus making the typename disambiguator redundant), this prevents us from reusing the same syntax for expression aliases in the future. EWG already considered this, and didn’t find it compelling: the preference was to make the “land grab” for a syntax that is widely used today, instead of keeping it in reserve for a hypothetical future feature.
    • Forward without forward. The idea here is to abbreviate the std::forward<decltype(x)>(x) boilerplate that often occurs in generic code, to >>x (i.e. a unary >> operator applied to x). EWG sympathized with the desire to eliminate this boilerplate, but felt that >>, or indeed any other unary operator, would be too confusing of a syntax, especially when occurring after an = in a lambda init-capture (e.g. [foo=>>foo](...){ ... }). EWG was willing to entertain a keyword instead, but the best people could come up with was fwdexpr and that didn’t have consensus; as a result, the future of this proposal is uncertain.
    • Relaxing the rules about invoking an explicit constructor with a braced-init-list. This would have allowed, among a few other changes, writing return {...}; instead of return T{...}; in a function whose declared return type is T, even if the invoked constructor was explicit. This has been proposed before, but rejected on the basis that it makes it easy to introduce bugs (see e.g. this response). The author proposed addressing those concerns by introducing some new rules to limit the cases in which this was allowed, but EWG did not find the motivation sufficiently compelling to further complicate C++’s already complex initialization rules.
    • Another attempt at standardizing arrays of runtime bound (ARBs, a pared-down version of C’s variable-length arrays), and a C++ wrapper class for them, stack_array. ARBs and a wrapper class called dynarray were previously headed for standardization in the form of an Array Extensions TS, before the project was scrapped because dynarray was found to be unimplementable. This proposal would solve the implementability concerns by restricting the usage of stack_array (e.g. it couldn’t be used as a class member). EWG was concerned that the restrictions would result in a type that’s not very usable. (It was pointed out that a design to make such a type more composable was proposed previously, but the author didn’t have time to pursue it further.) Ultimately, EWG didn’t feel that this proposal had a better chance of succeeding than the last time standardization of ARBs was attempted. However, a future direction that might be more promising was outlined: introducing a core language “allocation expression” that allocates an unnamed (and runtime-sized) stack array and returns a non-owning wrapper, such as a std::span, to access it.
    • A modern C++ signature for main(). This would have introduced a new signature for main() (alongside the existing allowed signatures) that exposed the command-line arguments using an iterable modern C++ type rather than raw pointers (the specific proposal was int main(std::initializer_list<std::string_view>)). EWG was not convinced that such a thing would be easier to use and learn than int main(int argc, char* argv[]);. It was suggested that instead, a trivial library facility that took argc and argv as inputs and exposed an iterable interface could be provided (a sketch follows this list); alternatively (or in addition), a way to access command-line arguments from anywhere in the program (similar to Rust’s std::env::args()) could be explored.
    • Abbreviated lambdas for fun and profit. This proposal would introduce a new abbreviated syntax for single-expression lambdas; a previous version of it was presented and largely rejected in Kona. Not much has changed to sway EWG’s opinion since then; if anything, additional technical issues were discovered.

      For example, one of the features of the abbreviated syntax is “automatic SFINAE”. That is, [x] => expr would mean [x] -> decltype(expr) { return expr; }; the appearance of expr in the return type rather than just the body would mean that a substitution failure in expr wouldn’t be a hard error; it would just remove the function overload being considered from the overload set (see the paper for an example). However, it was pointed out that in e.g. [x] -> decltype(x) { return x; }, the x in the decltype and the x in the body refer to two different entities: the first refers to the variable in the enclosing scope that is captured, and the second to the captured copy. If we try to make [x] => x “expand to” that, then we get into a situation where the x in the abbreviated form refers to two different entities for two different purposes, which would be rather confusing. Alternatively, we could say that in the abbreviated form, x refers to the captured copy for both purposes, but then we are applying SFINAE in new scenarios, and some implementers are strongly opposed to that.

      It was also pointed out that the abbreviated form’s proposed return semantics were “return by reference”, while regular lambdas are “return by value” by default. EWG felt it would be confusing to have two different defaults like this.
    • Making the lambda capture syntax more liberal in what it accepts. C++ currently requires that in a lambda capture list, the capture-default, if present, come before any explicit captures. This proposal would have allowed them to be written in any order; in addition, it would have allowed repeating variables that are covered by the capture-default as explicit captures for emphasis. EWG didn’t find the motivation for either of these changes compelling.
    • Lifting overload sets into objects. This is a resurrection of an earlier proposal to allow passing around overload sets as objects. It addressed previous concerns with that proposal by making the syntax more explicit: you’d pass []f rather than just f, where f was the name of the overloaded function. There were also provisions for passing around operators, and functions that performed member access. EWG’s feedback was that this proposal seems to be confused between two possible sets of desired semantics:
      1. a way to build super-terse lambdas, which essentially amounts to packaging up a name; the overload set itself isn’t formed at the time you create the lambda, only later when you instantiate it
      2. a way to package and pass around overload sets themselves, which would be formed at the time you package them

      EWG didn’t have much of an appetite for #1 (possibly because it had just rejected another terse-lambda proposal), and argued that #2 could be achieved using reflection.
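
    As an illustration of the library-facility alternative that EWG suggested for the main() proposal above, a trivial wrapper might look like this; the name args is invented here for the sketch:

        #include <string_view>
        #include <vector>

        // Hypothetical helper: expose argc/argv as an iterable range.
        std::vector<std::string_view> args(int argc, char* argv[]) {
            return std::vector<std::string_view>(argv, argv + argc);
        }

        int main(int argc, char* argv[]) {
            for (std::string_view arg : args(argc, argv)) {
                // process each argument
                (void)arg;
            }
        }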

    Discussion papers

    There were also a few papers submitted to EWG that weren’t proposals per se, just discussion papers.

    These included a paper arguing that Concepts does not significantly improve upon C++17, and a response paper arguing that it in fact does. The main issue was whether Concepts delivers on its promise of making template error messages better; EWG’s consensus was that it does when compared to unconstrained templates, but perhaps not as much as one would hope when compared to C++17 techniques for constraining templates, like enable_if. There may be room for implementations (to date there is just the one in GCC) to do a better job here. (Of course, Concepts are also preferable over enable_if in other ways, such as being much easier to read.)

    There was also a paper describing the experiences of the author teaching Concepts online. One of the takeaways here is that students don’t tend to find the variety of concept declaration syntaxes confusing; they tend to mix them freely, and they tend to like the abbreviated function template (AFT) syntax.


    Modules

    I mentioned above that a significant focus of the meeting was to address the national body comments on the Modules PDTS, and hopefully get to a publication vote on the final Modules TS.

    EWG looked at Modules on two occasions: first to deal with PDTS comments that had language design implications, and second to look at new proposals concerning Modules. The latter were all categorized as “post-TS”: they would not target the Modules TS, but rather “Modules v2”, the next iteration of Modules (for which the ship vehicle has not yet been decided).

    Modules TS

    The first task, dealing with PDTS comments in EWG, was a short affair. Any comment that proposed a non-trivial design change, or even remotely had the potential to delay the publication of the Modules TS, was summarily rejected (with the intention that the concern could be addressed in Modules v2 instead). It was clear that the committee leadership was intent on shipping the Modules TS by the end of the meeting, and would not let it get derailed for any reason.

    “That’s a good thing, right?” you ask. After all, the sooner we ship the Modules TS, the sooner people can start trying it out and providing feedback, and thus the sooner we can get a refined proposal into the official standard, right? I think the reality is a bit more nuanced than that. As always, it’s a tradeoff: if we ship too soon, we can risk shipping a TS that’s not sufficiently polished for people to reasonably implement and use it; then we don’t get much feedback and we effectively waste a TS cycle. In this case, I personally feel like EWG could have erred a bit more on the side of shipping a slightly more polished TS, even if that meant delaying the publication by a meeting (it ended up being delayed by at least a couple of months anyways). That said, I can also sympathize with the viewpoint that Modules has been in the making for a very long time and we need to ship something already.

    Anyways, for this reason, most PDTS comments that were routed to EWG were rejected. (Again, I should emphasize that this means “rejected for the TS”, not “rejected forever”.) The only non-rejection response that EWG gave was to comment US 041, where EWG confirmed that the intent was that argument-dependent lookup could find some non-exported entities in some situations.

    Of course, there were other PDTS comments that weren’t routed to EWG because they weren’t design issues; these were routed to CWG, and CWG spent much of the week looking at them. At one point towards the end of the week, CWG did consult EWG about a design issue that came up. The question concerned whether a translation unit that imports a module sees a class type declared in that module as complete or incomplete in various situations. Some of the possibilities that have to be considered here are whether the module exports the class’s forward declaration, its definition, or both; whether the module interface unit contains a definition of the class (exported or not) at all; and whether the class appears in the signature of an exported entity (such as a function) without itself being exported.

    There are various use cases that need to be considered when deciding the behaviour here. For example, a module may want to export functions that return or take as parameters pointers or references to a type that’s “opaque” to the module’s consumer, i.e. the module’s consumer can’t create an instance of such a class or access its fields; that’s a use case for exporting a type as incomplete. At the same time, the module author may want to avoid splitting her module into separate interface and implementation units at all, and thus wants to define the type in the interface unit while still exporting it as incomplete.

    The issue that CWG got held up on was that the rules as currently specified seemed to imply that in a consumer translation unit, an imported type could be complete and incomplete at the same time, depending on how it was named (e.g. directly vs. via decltype(f()) where it was the return type of a function f). Some implementers indicated that this would be a significant challenge to implement, as it would require a more sophisticated implementation model for types (where completeness was a property of “views of types” rather than of types themselves) that no existing language feature currently requires.

    Several alternatives were proposed which avoided these implementation challenges. While EWG was favourable to some of them, there was also opposition to making what some saw as a design change to the Modules TS at this late stage, so it was decided that the TS would go ahead with the current design, possibly annotated as “we know there’s a potential problem here”, and it would be fixed up in v2.

    I find the implications of this choice a bit unfortunate. It sounded like the implementers who described this model as a significant challenge to implement are not planning to implement it (after all, it’s going to be fixed in v2; why redesign your compiler’s type system if ultimately you won’t need it?). Other implementers may or may not implement this model. Either way, we’ll either have implementation divergence, or all implementations will agree on a de facto model that’s different from what the spec says. This is one of those cases where I feel like waiting to polish the spec a bit more, so that it’s not shipped in a known-to-be-broken state, may have been advisable.

    I mentioned in my previous report that I thought the various Modules implementers didn’t talk to each other enough about their respective implementation strategies. I still feel like that’s very much the case. I feel like discussing each other’s implementation approaches in more depth would have unearthed this issue, and allowed it to be dealt with, sooner.

    Modules v2

    Now moving on to the proposals targeting Modules v2 that EWG reviewed:

    • Two of them (module interface imports, and namespace pervasiveness and modules), it turned out, were already addressed in the Modules TS by changes made in response to PDTS comments.
    • Placement of module declarations. Currently, if a module unit contains declarations in the global module, the module declaration (which effectively “starts” the module) needs to go after those global declarations. However, this makes it more difficult for both humans and tools to find the module declaration. This paper proposes a syntax that allows having the module declaration be the first declaration in the file, while still having a way to place declarations in the global module. It was observed that this proposal would make it easier to make module a context-sensitive keyword, which has also been requested. EWG encouraged continued exploration in this direction (a combined sketch of this and the next proposal appears after this list).
    • Module partitions. This iterates on the previous module partitions proposal (found in this paper), with a new syntax: module basename : partition; (unlike in the previous version, partition here is not a keyword, it’s the partition’s name). EWG liked this approach as well. Module partitions also make proclaimed-ownership-declarations unnecessary, so those can be axed. (See the sketch after this list.)
    • Making module names strings. Currently, module names are identifier sequences separated by dots, with the dots not necessarily implying a hierarchical relationship; they are mapped onto files in an implementation-defined manner. Making them strings instead would allow mapping onto the filesystem more explicitly. There was no consensus for this change in EWG.
    • Making module a context-sensitive keyword. As always, making a common word like module a hard keyword breaks someone. In this case, it shows up as an identifier in many mature APIs like Vulkan, CUDA, DirectX 9, and others, and in some of these cases (like Vulkan) the name is enshrined into a published specification. In some cases, the problem can be solved by making the keyword context-sensitive, and that’s the case for module (especially if the proposal about the placement of module declarations is accepted). EWG agreed to make the keyword context-sensitive. The authors of this paper asked if this could be done for the TS rather than for Modules v2; that request was rejected, but implementers indicated that they would implement it as context-sensitive ASAP, thus avoiding problems in practice.
    • Modules TS does not support intended use case. The bulk of the concerns here were addressed in the Modules TS while addressing PDTS comments, except for a proposed extension to allow using-declarations with an unqualified name. EWG indicated it was open to such an extension for v2.
    • Two papers about support for exporting macros, which remains one of the most controversial questions about Modules. The first was a “rumination” paper, which was mostly arguing that we need a published TS and deployment experience before we can settle the question; the second argued that having deployed modules (clang’s pre-TS implementation) in a large codebase (Apple’s), it’s clear that macro support is necessary. A number of options for preserving hygiene, such as only exporting and importing individual macros, were discussed. EWG expressed a lukewarm preference to continuing to explore macro support, particularly with such fine-grained control for hygiene.
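
    To visualize the placement and partitions proposals together, here is a purely hypothetical sketch; the spellings below are the proposed ones and may well change for Modules v2:

        module;                       // proposed: the file opens with a module-related marker
        #include <cassert>            // declarations here land in the global module
        module my.library : helpers;  // the module declaration, naming partition "helpers"

        export int twice(int x) {
            assert(x >= 0);
            return 2 * x;
        }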

    Other Working Groups

    The Library Evolution Working Group, as usual, reviewed a decent number of proposed new library features. While I can’t give a complete listing of the proposals discussed and their outcomes (having been in EWG all week), I’ll mention a few highlights of accepted proposals:

    Targeting C++20:

    std::span (formerly called array_view) is also targeting C++20, but has not quite gotten final approval from LEWG yet.

    Targeting the Library Fundamentals TS v3:

    • mdspan, a multi-dimensional array view. (How can a multi-dimensional array view be approved sooner than a single-dimensional one, you ask? It’s because mdspan is targeting a TS, but span is targeting the standard directly, so span needs to meet a higher bar for approval.)
    • std::expected<T, E>, a “value or error” variant type very similar to Rust’s Result

    Targeting the Ranges TS:

    • Range adaptors (“views”) and utilities. Range views are ranges that lazily consume elements from an underlying range, while performing an additional operation like transforming the elements or filtering them. This finally gives C++ a standard facility that’s comparable to C#’s LINQ (sans the SQL syntax), Java 8’s streams, or Rust’s iterators. C++11 versions of the facilities proposed here are available today in the range-v3 library (which was in turn inspired by Boost.Range).
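
    For a flavour of these adaptors, here is a small sketch using the range-v3 library mentioned above; the exact header and namespace spellings vary between range-v3 versions, so treat them as assumptions:

        #include <range/v3/view/filter.hpp>
        #include <range/v3/view/transform.hpp>
        #include <vector>

        std::vector<int> numbers{1, 2, 3, 4, 5, 6};

        // A lazy pipeline: nothing is computed until the view is iterated.
        auto even_squares = numbers
            | ranges::view::filter([](int n) { return n % 2 == 0; })
            | ranges::view::transform([](int n) { return n * n; });

        // Iterating even_squares yields 4, 16, 36.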

    There was an evening session to discuss the future of text handling in C++. There was general agreement that it’s desirable to have a text handling library that has notions of code units, code points, and grapheme clusters; for many everyday text processing algorithms (like toupper), operating at the level of grapheme clusters makes the most sense. Regarding error handling, different people have different needs (safety vs. performance), and a policy-based approach to control error handling may be advisable. Some of the challenges include standard library implementations having to ship a database of Unicode character classifications, or hook into the OS’s database. The notion of whether we should have a separate character type to represent UTF-8 encoded text, or just use char for that purpose, remains contentious.

    SG 7 (Compile-Time Programming)

    SG 7, the Compile-Time Programming (previously Reflection) Study Group met for an evening session.

    An updated version of a proposed extension to the static reflection proposal to allow reflecting over functions was briefly reviewed and sent onwards for review in EWG and LEWG at future meetings.

    The rest of the evening was spent discussing an updated version of the metaclasses proposal. To recap, a metaclass defines a compile-time transformation on a class, and can be applied to a class to produce a transformed class (possibly among other things, like helper classes / functions). The discussion focused on one particular dimension of the design space here: how the transformation should be defined. Three options were given:

    1. The metaclass operates on a mutable input class, and makes changes to it to produce the transformed class. This is how it worked in the original proposal.
    2. Like #1, but the metaclass operates on an immutable input class, and builds the transformed class from the ground up as its output.
    3. Like #2, but the metaclass code operates on the “meta level”, where the representation of the input and output types is an ordinary object of type meta::type. This dispenses with most of the special syntax of the first two approaches, making the metaclass look a lot like a normal constexpr function.

    SG 7 liked the third approach the best, noting that it not only dispenses with the need for the $ syntax (which couldn’t have been the final syntax anyways, it would have needed to be something uglier), but makes the proposal more general (opening up more avenues for how and where you can invoke/apply the metaclass), and more in line with the preference the group previously expressed to have reflection facilities operate on a homogeneous value representation of the program’s entities.
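
    To make approach #3 more tangible, here is a purely hypothetical sketch; meta::type is the name used in the discussion, and every other name below is invented for illustration (none of this is a real API):

        // Hypothetical only: a metaclass as an ordinary constexpr function over a
        // value-level representation of a class.
        constexpr void interface(meta::type target, const meta::type source) {
            for (auto f : meta::member_functions_of(source)) {  // invented accessor
                meta::make_public(target, f);                   // invented mutators
                meta::make_pure_virtual(target, f);
            }
            meta::add_virtual_destructor(target);
        }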

    Discussion of other dimensions of the design space, such as what the invocation syntax for metaclasses should look like (i.e. how you apply them to a class) was deferred to future meetings.

    SG 12 (Undefined Behaviour and Vulnerabilities)

    SG 12, the Undefined Behaviour Study Group, recently had its scope expanded to also cover documenting vulnerabilities in the C++ language, and ways to avoid them.

    This latter task is a joint effort between SG 12 and WG 23, a sister committee of the C++ Standards Committee that deals with vulnerabilities in programming languages in general. WG 23 produces a language-agnostic document that catalogues vulnerabilities without being specific to a language, and then language-specific annexes for a number of programming languages. For the last couple of meetings, WG 23 has been collaborating with our SG 12 to produce a C++ annex; the two groups met for that purpose for two days during this meeting. The C++ annex is at a pretty early stage, but over time it has the potential to grow to be a comprehensive document outlining many interesting types of vulnerabilities that C++ programmers can run into, and how to avoid them.

    SG 12 also had a meeting of its own, where it looked at a proposal to make certain low-level code patterns that are widely used but technically have undefined behaviour, have defined behaviour instead. This proposal was reviewed favourably.

    C++ Stability and Velocity

    On Friday evening, there was a session to discuss the stability and velocity of C++.

    One of the focuses of the session was reviewing the committee’s policy on deprecating and removing old features that are known to be broken or that have been superseded by better alternatives. Several language features (e.g. dynamic exception specifications) and library facilities (e.g. std::auto_ptr) have been deprecated and removed in this way.

    One of the library facilities that were removed in C++17 was the deprecated “binders” (std::bind1st and std::bind2nd). These have been superseded by the C++11 std::bind, but, unlike, say, auto_ptr, they aren’t problematic or dangerous in any way. It was argued that the committee should not deprecate features like that, because it causes unnecessary code churn and maintenance cost for codebases whose lifetime and update cycle is very long (on the order of decades); embedded software such as an elevator control system was brought up as a specific example.

    While some sympathized with this viewpoint, the general consensus was that, to be able to evolve at the speed needed to satisfy the majority of its users, C++ does need to be able to shed old “cruft” like this over time. Implementations often do a good job of maintaining conformance modes with older standard versions (and even “escape hatches” to allow old features that have been removed to be used together with new features that have since been added), thus allowing users to continue using removed features for quite a while in practice. (Putting bind1st and bind2nd specifically back into C++20 was polled, but didn’t have consensus.)

    The other focus of the session was the more general tension between the two pressures of stability and velocity that C++ faces as it evolves. It was argued that there is a sense in which the two are at odds with each other, and the committee needs to take a clearer stance on which is the more important goal. Two examples of cases where backwards compatibility constraints have been a drag on the language that were brought up were the keywords used for coroutines (co_await, co_yield, etc. – wouldn’t it have been nice to just be able to claim await and yield instead?), and the const-correctness issue with std::function which still remains to be fixed. A poll on which of stability or velocity is more important for the future direction of C++ revealed a wide array of positions, with somewhat of a preference for velocity.


    Conclusion

    This was a productive meeting, whose highlights included the Modules TS making good progress towards its publication; C++20 continuing to take shape as the draft standard gained the consistent comparisons feature among many other smaller ones; and range views/adaptors being standardized for the Ranges TS.

    The next meeting of the Committee will be in Jacksonville, Florida, the week of March 12th, 2018. It, too, should be an exciting meeting, as design discussion of Concepts resumes (with the future of AFTs possibly being settled), and the Modules TS is hopefully finalized (if that doesn’t already happen between meetings). Stay tuned for my report!

    Other Trip Reports

    Others have written reports about this meeting as well. Some that I’ve come across include Herb Sutter’s and Bryce Lelbach’s. I encourage you to check them out!


    The Mozilla Blog: Firefox Private Browsing vs. Chrome Incognito: Which is Faster?

    Mon, 20/11/2017 - 15:00

    With the launch of Firefox Quantum, Mozilla decided to team up with Disconnect Inc. to compare page load times between desktop versions of Chrome’s Incognito mode and Firefox Quantum’s Private Browsing.

    Firefox Quantum is the fastest version of Firefox we’ve ever made. It is twice as fast as Firefox 52 and often faster than the latest version of Chrome in head to head page load comparisons. By using key performance benchmarks, we were able to optimize Firefox to eliminate unnecessary delays and give our users a great browsing experience.

    Most browser performance benchmarks focus on the use of a regular browsing mode. But what about Private Browsing? Given that Private Browsing use is so common, we wanted to see how Firefox’s Private Browsing compared with Chrome’s Incognito when it came to page load time (the time between a click and the page being fully loaded on the screen).

    Spoiler Alert…. Firefox Quantum’s Private Browsing is fast…. really fast.

    Why would Private Browsing performance be any different?

    Websites have the ability to load content and run scripts from a variety of sources. Some of these scripts include trackers. Trackers are used for a variety of reasons including everything from website analytics to tracking website engagement for the purposes of targeted advertising. The use of trackers on websites is very common. Unfortunately trackers can delay the completion of page loads while the browser waits for tracking scripts to respond.

    In 2015, Firefox became the only browser to include Tracking Protection enabled by default in Private Browsing mode. Tracking Protection, as the name implies, blocks resources from loading if the URL being loaded is on a list of known trackers as defined by Disconnect’s Tracking Protection list. This list is a balanced approach to blocking and does not include sites that obey Do Not Track (as defined in the EFF guidelines). While the feature is meant to help keep users from being tracked when they have explicitly opted to use Private Browsing, the side effect is a much faster browsing experience on websites which attempt to load content from URLs on the tracking list. A previous Firefox study in 2015 showed a 44% reduction in median page load time on top news websites.

    Since Firefox Quantum is the fastest version of Firefox yet, we thought it would be interesting to compare page load times between Firefox Quantum’s Private Browsing (which includes Tracking Protection), and Chrome’s Incognito mode which does not include a tracking protection feature.

    Study Methodology

    The study was conducted by Disconnect, the organization behind the domain lists used to power Tracking Protection. Page load times for the top 200 news websites as ranked by were measured using Firefox Quantum (v57.0b10, a v57 beta) in both default and Private Browsing modes, and the most recent Chrome version (v61.0.3163.100) available at the time of testing, also in default and Incognito modes. News sites were tested because these sites tend to have the most trackers.

    Each of the news websites was loaded 10 times. In order for the test to measure comparable timings and to be reproducible by others, load times were measured using the PerformanceTiming API for both Firefox and Chrome for each page load. In particular, the total load time is considered as the difference between PerformanceTiming.loadEventEnd and PerformanceTiming.navigationStart. The tests were controlled through an automated script.

    All rounds of testing were conducted on a new MacBook Pro (13’’ MacBook Pro 2017, 3.1GHz i5, 16GB memory, OSX 10.13). We tested on a fast network connection, with the MacBook Pro connected to a Webpass 100Mbps connection over WiFi (802.11ac, 867Mbit/s). For a deep dive into the methodology, check out our Mozilla Hacks post.


    Across the top 200 news websites tested, the average page load time for Firefox Quantum’s Private Browsing was 3.2 seconds, compared to Chrome’s Incognito mode, which took an average of 7.7 seconds to load a page on this fast connection. This means that, on average, Firefox Quantum’s Private Browsing loads pages 2.4x faster than Chrome in Incognito mode.

    On average, Firefox Quantum’s Private Browsing loads pages 2.4x faster than Chrome in Incognito mode

    Comparing the average load times for Chrome also shows that Incognito mode alone does not bring any speed improvements. It is Tracking Protection that makes the difference, as can be seen from the results for Firefox Quantum.

    Another way to look at this data is by looking at the time that is acceptable to users for pages to be loaded. A third party study by SOASTA Inc. recently found that an average session load time of 6 seconds already leads to a 70% user bounce rate. Therefore, it makes sense to put our measurements in the context of looking at the share of pages per browser that took longer than 6 seconds to load.

    95% of page loads met the 6 second or faster threshold using Firefox Quantum Private Browsing with Tracking Protection

    95% of page loads met the 6 second or faster threshold using Firefox Quantum Private Browsing with Tracking Protection enabled, while only 70% of page loads made the cut on Chrome, leaving nearly a third of the news sites unable to load within that time frame.

    What’s next?

    While the speed improvements in Firefox Quantum will vary depending on the website, overall users can expect that Private Browsing in Firefox will be faster than Chrome’s Incognito mode right out of the box.

    In fact, due to these findings, we wanted users to be able to benefit from the increased speed and privacy outside of Private Browsing mode. With Firefox Quantum, users now have the ability to enable Tracking Protection in Firefox at any time.

    Interested? Then try Private Browsing for yourself!

    If you’d like to take it up a notch and enable Tracking Protection every time you use Firefox, then download Firefox Quantum, open Preferences. Choose Privacy & Security and scroll down until you find the Tracking Protection section. Alternatively, simply search for “Tracking Protection” in the Find in Preferences field. Enable Tracking Protection “Always” and you are set to enjoy both improved speed and privacy whenever you use Firefox Quantum.

    When enabling it, please keep in mind that Tracking Protection may block social “like” buttons, commenting tools and some cross-site video content.

    Tracking Protection in the new Firefox browser

    If Tracking Protection is a feature that you’ve commonly used or that you will want to use more regularly, give Firefox Quantum a try to experience how fast it is!

    Disconnect Inc. and Mozilla partnered up in 2015 to power Firefox’s Tracking Protection giving you control over the data that third parties receive from you online. The blocklist is based on a list of known trackers as defined by Disconnect’s Tracking Protection list. As a follow-up, we asked ourselves if Firefox’s Private Browsing mode with Tracking Protection might also offer speed benefits.

    Contributors: Peter Dolanjski & Dominik Strohmeier – Mozilla, Casey Oppenheim & Eason Goodale – Disconnect Inc.

    The post Firefox Private Browsing vs. Chrome Incognito: Which is Faster? appeared first on The Mozilla Blog.

    Categorieën: Mozilla-nl planet

    Mozilla VR Blog: Enabling the Social 3D Web

    ma, 20/11/2017 - 09:00
    Enabling the Social 3D Web

    As hinted in our recent announcement of our Mixed Reality program we’d like to share more on our efforts to help accelerate the arrival of the Social 3D Web.

    Mixed Reality is going to provide a wide variety of applications that will change the way we live and work. One which stands to be the most transformative is the ability to naturally communicate with another person through the Internet. Unlike traditional online communication tools like text, voice, and video, the promise of Mixed Reality is that you can be present with other people, much like real life, and engage with them in a more interactive way. You can make eye contact, nod your head, high five, dance, and even play catch with anyone around the world, regardless of where you are!


    Mozilla’s mission is to ensure the Internet remains a global public resource. As part of this mission, fostering this new and transformative form of Internet-based communication is something we care deeply about. We believe that the web is the platform that will provide the best user experience for Mixed Reality-based communication. It will ensure that people can connect and collaborate in this new medium openly, across devices and ecosystems, free from walled gardens, all while staying in control of their identity. Meeting with others around the world in Mixed Reality should be as easy as sharing a link, and creating a virtual space to spend time in should be as easy as building your first website.

    To help realize this vision, we have formed a dedicated team focused on Social Mixed Reality. Today, we’ll be sharing our vision and roadmap for accelerating the arrival of Social MR experiences to the web.

    WebVR has come a long way since its very first steps in 2014. The WebVR and A-Frame communities have produced amazing experiences, but multi-user social experiences are still few and far between. Without a shared set of libraries and APIs, Social MR experiences on the web are often inconsistent, with limited support for identity and avatars, and lack basic tools to allow users to find, share, and experience content together.

    To address this, we are investing in areas that we believe will help fill in some of the missing pieces of Social MR experiences on the web. To start, we will be delivering open source components and services focused on making it possible for A-Frame developers to deliver rich and compelling Social MR experiences with a few lines of code. In addition, we will be building experimental Social MR products of our own in order to showcase the technology and to provide motivating examples.

    Although we plan to share some initial demos in the near future, our work has advanced enough that we wanted to share our github repositories and invite the community to join the conversation, provide feedback, or even begin actively contributing.

    We’ll be focused on the following areas:

    Avatars + Identity

    We want to enable A-Frame (and eventually, other frameworks and game engines) to easily add real-time, natural, human communication to their experiences. This includes efficient networking of voice and avatars, consistent locomotion controls, and mechanisms for self-expression and customization with an identity that you control. We will also be exploring ways for creators to publish and distribute custom avatars and accessories to users of Social MR applications.
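    As a purely illustrative sketch (plain A-Frame, not Mozilla’s actual avatar library), a component along these lines could mirror the local camera pose onto an avatar entity; a networking layer would then serialize and broadcast those transforms to peers:

    // Hypothetical example: keep an avatar entity glued to the local camera pose.
    AFRAME.registerComponent('follow-camera', {
      init: function () {
        // Find the scene's active camera entity.
        this.cameraEl = this.el.sceneEl.querySelector('[camera]');
      },
      tick: function () {
        if (!this.cameraEl) { return; }
        // Copy head position and orientation onto this entity each frame;
        // a Social MR networking layer would broadcast these values.
        this.el.object3D.position.copy(this.cameraEl.object3D.position);
        this.el.object3D.quaternion.copy(this.cameraEl.object3D.quaternion);
      }
    });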


    Within a Social MR experience, you’ll often want to find your friends or meet new people, while having the controls you need to ensure your comfort and safety. We are aiming to bring traditional text and voice-based communication, social networking, and cross-app interactions like messaging to Mixed Reality A-Frame apps. This includes traditional features like friending, messaging, and blocking, as well as Mixed Reality-specific features like personal space management.


    Once you are able to be with other people in Mixed Reality, you’ll want to interact with shared 3D objects. You should be able to throw a frisbee to one another, play cards, or even create a sculpture together. Also, objects in the world should be where you left them when you come back later. We’ll make it easy for Social MR apps to support the live manipulation, networking, and persistence of physics-enabled 3D objects.


    There are some things you always want to be able to do in a social setting. In real life, no matter where you are, you can always pull out your phone to snap a photo or send a message. Similarly, we want to provide components and tools that are useful in — and across — all Social MR experiences. Tied together via your identity, these components could allow drawing in 3D, navigating to a web page, sharing a video, or taking a selfie in any app.

    Search + Discovery

    The MR Web is full of disconnected experiences. How do you find new apps to try, or share one with friends, or even join an app your friends are currently using? How do you meet new people? We’ll improve the discovery and sharing of MR content, and give users ways to meet new and interesting people, all through the web within Mixed Reality. For example, matchmaking for apps that need groups of multiple players, a friends list showing who is online and in which app, or a content feed highlighting the best Social MR experiences on the web.

    How to get involved

    If you are interested in helping bring Social MR to the web, first and foremost please join the conversation with us in the #social channel on the WebVR Slack.

    Additionally, feel free to browse our Github repositories and open an issue if you have questions or feedback. We are just getting started, so there may be quite a bit of churn over the coming months, but we’re excited to share what we’ve done so far.

    • mr-social-client - A prototype of a social experience, where we’ll be testing out features and services as we build them.
    • janus-plugin-sfu - A Selective Forwarding Unit plugin for the Janus WebRTC gateway that allows centralized WebRTC message processing, to reduce upstream bandwidth and (eventually) provide authoritative networking services.
    • reticulum - A Phoenix networking and application server that will be the entry point to most of the services we provide.
    • mr-ops - Our infrastructure as code, operations, and deployment scripts.
    • socialmr - A container for tracking Github issues we’re working on.

    We’ll also be contributing back to other existing projects. For example, we recently contributed positional audio support and are working on a Janus networking adapter for the awesome networked-aframe project by Hayden Lee.


    How long has Mozilla been working on this?

    We’ve been exploring this area for over a year, but the formation of a dedicated team (and the repos published above) kicked off over the last several weeks. Although this is a relatively new body of work, now that we have put together a roadmap and have some initial efforts in progress, we wanted to share it with the community.

    Will you support 3rd party game engines like Unity and Unreal Engine?

    This is something we will definitely be exploring down the road. Our near-term goals are focused on developing services for the web and libraries for A-Frame and three.js Social MR applications, but we hope to make these services and libraries more generally available to other platforms. We would be interested in your feedback on this!

    Will you be proposing new browser APIs?

    As we progress, we may find it makes sense to incorporate some of the functionality we’ve built into the browsers directly. At that point we would propose a solution (similar to how we recently proposed the WebXR API to address extending WebVR to encompass AR concepts) to solicit feedback from developers and other browser vendors.

    Categorieën: Mozilla-nl planet

    Giorgio Maone: The Week is Not Over Yet

    za, 18/11/2017 - 21:43

    I apologize for not providing a constant information feed about NoScript 10's impending release, but I've got no press office or social media staff working for me: when I say "we" about NoScript, I mean the great community of volunteers helping with user support (and especially the wonderful moderators of the NoScript forum).

    By the way, as most but not all users know, there's no "NoScript development team" either: I'm the only developer, and yesterday I also had to temporarily suspend my NoScript 10 final rush, being forced to release two emergency 5.x versions (5.1.6 and 5.1.7) to cope with Firefox 58 compatibility breakages (yes, in case you didn't notice, "Classic" NoScript 5 still works on Firefox 58 Developer Edition with some tricks, even though Firefox 52 ESR is still the best "no surprises" option).

    Anyway, here's my update: the week, at least in Italy, finishes on Sunday night, there's no "disaster recovery" going on, and NoScript 10's delay on Firefox 57's release is still going to be measured in days, not weeks.

    Back to work now, and thank you again for your patience and support :)

    Categorieën: Mozilla-nl planet

    Mark Côté: Phabricator and Lando November Update

    vr, 17/11/2017 - 16:36

    With work on Phabricator–BMO integration wrapping up, the development team’s focus has switched to the new automatic-landing service that will work with Phabricator. The new system is called “Lando” and functions somewhat similarly to MozReview–Autoland, with the biggest difference being that it is a standalone web application, not tightly integrated with Phabricator. This gives us much more flexibility and allows us to develop more quickly, since working within extension systems is often painful for anything nontrivial.

    Lando is split between two services: the landing engine, “lando-api”, which transforms Phabricator revisions into a format suitable for the existing autoland service (called the “transplant server”), and the web interface, “lando-ui”, which displays information about the revisions to land and kicks off jobs. We split these services partly for security reasons and partly so that we could later have other interfaces to lando, such as command-line tools.

    When I last posted an update I included an early screenshot of lando-ui. Since then, we have done some user testing of our prototypes to get early feedback. Using a great article, “Test Your Prototypes: How to Gather Feedback and Maximise Learning”, as a guide, we took our prototype to some interested future users. Refraining from explaining anything about the interface and providing only some context on how a user would get to the application, we encouraged them to think out loud, explaining what the data means to them and what actions they imagine the buttons and widgets would perform. After each session, we used the feedback to update our prototypes.

    These sessions proved immensely useful. The feedback on our third prototype was much more positive than on our first prototype. We started out with an interface that made sense to us but was confusing to someone from outside the project, and we ended with one that was clear and intuitive to our users.

    For comparison, this is what we started with:

    And here is where we ended:

    A partial implementation of the third prototype, with a few more small tweaks raised during the last feedback session, is currently online. There are some duplicated elements there just to show the various states; this redundant data will of course be removed as we start filling in the template with real data from Phabricator.

    Phabricator remains in a pre-release phase, though we have some people now using it for mozilla-central reviews. Our team continues to use it daily, as does the NSS team. Our implementation has been very stable, but we are making a few changes to our original design to ensure it stays rock solid. Lando was scheduled for delivery in October, but due to a few different delays, including being one person down for a while and not wanting to launch a new tool during the flurry of the Firefox 57 launch, we’re now looking at a January launch date. We should have a working minimal version ready for Austin, where we have scheduled a training session for Phabricator and a Lando demo.

    Categorieën: Mozilla-nl planet

    Mozilla Security Blog: November 2017 CA Communication

    do, 16/11/2017 - 21:46

    Mozilla has sent a CA Communication to inform Certificate Authorities (CAs) who have root certificates included in Mozilla’s program about Mozilla’s expectations regarding version 2.5 of Mozilla’s Root Store Policy, annual CA updates, and actions the CAs need to take. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 8 action items:

    1. Review version 2.5 of Mozilla’s Root Store Policy, and update the CA’s CP/CPS documents as needed to become fully compliant.
    2. Confirm understanding that non-technically-constrained intermediate certificates must be disclosed in the Common CA Database (CCADB) within one week of creation, and of new requirements for technical constraints on intermediate certificates issuing S/MIME certificates.
    3. Confirm understanding that annual updates (audits, CP, CPS, test websites) are to be provided via Audit Cases in the CCADB.
    4. Confirm understanding that audit statements that are not in English and do not contain all of the required information will be rejected by Mozilla, and may result in the CA’s root certificate(s) being removed from our program.
    5. Perform a BR Self Assessment and send it to Mozilla. This self assessment must cover the CA Hierarchies (and all of the corresponding CP/CPS documents) that chain up to their CA’s root certificates that are included in Mozilla’s root store and enabled for server authentication (Websites trust bit).
    6. Provide a tested email address for the CA’s Problem Reporting Mechanism.
    7. Follow new developments and effective dates for Certification Authority Authorization (CAA).
    8. Check issuance of certs to .tg domains between October 25 and November 11, 2017.

    The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

    With this CA Communication, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

    Mozilla Security Team

    The post November 2017 CA Communication appeared first on Mozilla Security Blog.

    Categorieën: Mozilla-nl planet

    Aaron Klotz: Legacy Firefox Extensions and "Userspace"

    do, 16/11/2017 - 21:30

    This week’s release of Firefox Quantum has prompted all kinds of feedback, both positive and negative. That is not surprising to anybody – any software that has a large number of users is going to be a topic for discussion, especially when the release in question is undoubtedly a watershed.

    While I have previously blogged about the transition to WebExtensions, now that we have actually passed through the cutoff for legacy extensions, I have decided to add some new commentary on the subject.

    One analogy that has been used in the discussion of the extension ecosystem is that of kernelspace and userspace. The crux of the analogy is that Gecko is equivalent to an operating system kernel, and thus extensions are the user-mode programs that run atop that kernel. The argument then follows that Mozilla’s deprecation and removal of legacy extension capabilities is akin to “breaking” userspace. [Some people who say this are using the same tone as Linus does whenever he eviscerates Linux developers who break userspace, which is neither productive nor welcomed by anyone, but I digress.] Unfortunately, that analogy simply does not map to the legacy extension model.

    Legacy Extensions as Kernel Modules

    The most significant problem with the userspace analogy is that legacy extensions effectively meld with Gecko and become part of Gecko itself. If we accept the premise that Gecko is like a monolithic OS kernel, then we must also accept that the analogical equivalent of loading arbitrary code into that kernel, is the kernel module. Such components are loaded into the kernel and effectively become part of it. Their code runs with full privileges. They break whenever significant changes are made to the kernel itself.

    Sound familiar?

    Legacy extensions were akin to kernel modules. When there is no abstraction, there can be no such thing as userspace. This is precisely the problem that WebExtensions solves!

    Building Out a Legacy API

    Maybe somebody out there is thinking, “well what if you took all the APIs that legacy extensions used, turned that into a ‘userspace,’ and then just left that part alone?”

    Which APIs? Where do we draw the line? Do we check the code coverage for every legacy addon in AMO and use that to determine what to include?

    Remember, there was no abstraction; installed legacy addons are fused to Gecko. If we pledge not to touch anything that legacy addons might touch, then we cannot touch anything at all.

    Where do we go from here? Freeze an old version of Gecko and host an entire copy of it inside web content? Compile it to WebAssembly? [Oh God, what have I done?]

    If that’s not a maintenance burden, I don’t know what is!

    A Kernel Analogy for WebExtensions

    Another problem with the legacy-extensions-as-userspace analogy is that it leaves awkward room for web content, whose API is abstract and well-defined. I do not think that it is appropriate to consider web content to be equivalent to a sandboxed application, as sandboxed applications use the same (albeit restricted) API as normal applications. I would suggest that the presence of WebExtensions gives us a better kernel analogy:

    • Gecko is the kernel;
    • WebExtensions are privileged user applications;
    • Web content runs as unprivileged user applications.
    In Conclusion

    Declaring that legacy extensions are userspace does not make them so. The way that the technology actually worked defies the abstract model that the analogy attempts to impose upon it. On the other hand, we can use the failure of that analogy to explain why WebExtensions are important and construct an extension ecosystem that does fit with that analogy.

    Categorieën: Mozilla-nl planet


    Joel Maher: Measuring the noise in Performance tests

    do, 16/11/2017 - 21:20

    Often I hear about our talos results: why are they so noisy? What is noise in this context? By noise we are referring to a larger stddev in the results we track; here is an example:


    With the large spread of values posted regularly for this series, it is hard to track improvements or regressions unless they are large or very obvious.

    Knowing the definition of noise, there are a few questions that we often need to answer:

    • Developers working on new tests- what is the level of noise, how to reduce it, what is acceptable
    • Over time noise changes- this causes false alerts, often not related to code changes or easily discovered via infra changes
    • New hardware we are considering- is this hardware going to post reliable data for us?

    What I care about is the last point: we are working on replacing the hardware we run performance tests on, from 7-year-old machines to new ones!  Typically when running tests on a new configuration, we want to make sure it is reliably producing results.  For our system, we look for all green:


    This is really promising: if we could have all our tests this “green”, developers would be happy.  The catch here is that these are performance tests: are the results we collect and post to graphs useful?  Another way to ask this: are the results noisy?

    Answering this is hard: first we have to know how noisy things were prior to the test.  As mentioned 2 weeks ago, Talos collects 624 metrics that we track for every push.  That would be a lot of graphs and a lot of calculating.  One method to do this is to push to try with a single build and collect many data points for each test.  You can see that in the image showing the all-green results.

    One method to see the noise is to look at compare view.  This is the view that we use when comparing one push to another when we have multiple data points.  This typically highlights the changes that are easy to detect with our t-test for alert generation.  If we look at the above referenced push and compare it to itself, we have:



    Here you can see that for a11y, linux64 has a ±5.27 stddev.  You can see some metrics are higher and others are lower.  What if we added up all the stddev numbers that exist: what would we have?  In fact, if we treat this as a sum of squares to calculate a variance, we can generate a single number, in this case 64.48!  That is the noise for that specific run.
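    As a rough sketch (mine, not necessarily the exact Perfherder implementation), one reading of that calculation combines the per-metric stddevs root-sum-square style:

    // Combine per-metric stddev values into a single noise figure:
    // sum the squares (a variance-like total), then take the square root.
    function noiseMetric(stddevs) {
      const sumOfSquares = stddevs.reduce((sum, s) => sum + s * s, 0);
      return Math.sqrt(sumOfSquares);
    }

    // Fed the full set of per-metric stddevs from the compare view above
    // (of which 5.27 is just one), this would yield the 64.48 figure.
    noiseMetric([5.27 /* , ...the rest of the per-metric stddevs... */]);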

    Now if we are bringing up a new hardware platform, we can collect a series of data points on the old hardware and repeat this on the new hardware, now we can compare data between the two:


    What is interesting here is that we can see side by side the differences in noise as well as the improvements and regressions.  What about the variance?  I wanted to track that and did, but realized I needed to track the variance by platform, as each platform could be different.  In bug 1416347, I set out to add a Noise Metric to the compare view.  This is on treeherder staging, and will probably be in production next week.  Here is what you will see:


    Here we see that the old hardware has a noise of 30.83 and the new hardware a noise of 64.48.  There are still a lot of small details to iron out, but as we work on getting new hardware for linux64, windows7, and windows10, we now have a simpler method for measuring the stability of our data.




    Categorieën: Mozilla-nl planet

    David Humphrey: Service Workers and a Promise to Catch

    do, 16/11/2017 - 21:16

    I love Service Workers. I've written previously about my work to use them in Thimble. They've allowed us to support all of the web's dynamic loading mechanisms for images, scripts, etc., which wasn't possible when we only used Blob URLs.

    But as much as I love them, they add a whole new layer of pain when you're trying to debug problems. Every web developer has dealt with the frustration of a long debug session, baffled when code changes won't get picked up in the browser, only to realize they are running on cached resources. Now add another ultra powerful cache layer, and an even more nuanced debug environment, and you have the same great taste with twice the calories.

    We've been getting reports from some users lately that Thimble has been doing odd things in some cases. One of thing things we do with a Service Worker is to simulate a web server, and load web resources out of the browser filesystem and/or editor. Instead of seeing their pages in the preview window, they instead get an odd 404 that looks like it comes from S3.

    Naturally, none of us working on the code can recreate this problem. However, today, a user was also kind enough to include a screenshot that included their browser console:

    And here, finally, is the answer! Our Service Worker has failed to register, which means requests for resources are hitting the network directly instead of going through the Service Worker and Cache Storage. I've already got a patch up that should fix this, but while I wait, I wanted to say something to you about how you can avoid this mess.

    First, let's start with the canonical Service Worker registration code one finds on the web:

    if ('serviceWorker' in navigator) {
      // Register a service worker hosted at the root of the
      // site using the default scope.
      navigator.serviceWorker.register('/sw.js').then(function(registration) {
        console.log('Service worker registration succeeded:', registration);
      }).catch(function(error) {
        console.log('Service worker registration failed:', error);
      });
    } else {
      console.log('Service workers are not supported.');
    }

    Here, after checking if serviceWorker is defined in the current browser, we attempt (I use the word intentionally) to register the script at /sw.js as a Service Worker. This returns a Promise, which we then() do something with after it completes. Also, there's an obligatory catch().

    I want to say something about that catch(). Of course we know, I know, that you need to deal with errors. However, errors come in all different shapes and sizes, and when you're only anticipating one kind, you can get surprised by rarer, and more deadly varieties.

    You might, for example, find that you have a syntax error in /sw.js, which causes registration to fail. And if you do, it's the kind of error you're going to discover quickly, because it will break instantly on your dev machine. There's also the issue that certain browsers don't (yet) support Service Workers. However, our initial if ('serviceWorker' in navigator) {...} check should deal with that.

    So having dealt with incompatible browsers, and incompatible code, it's tempting to conclude that you're basically done here, and leave a console.log() in your catch(), like so many abandoned lighthouses, still there for tourists to take pictures, but never used by mariners.

    Until you crash. Or more correctly, until a user crashes, and your app won't work. In which case you begin your investigation: "Browser? OS? Versions?" You replicate their environment, and can't make it happen locally. What else could be wrong?

    I took my grief to Ben Kelly, who is one of the people behind Firefox's Service Worker implementation. He in turn pointed me at Bug 1336364, which finally shed light on my problem.

    We run our app on two origins: one which manages the user credentials, and talks to our servers; the other for the editor, which allows for arbitrary user-written JS to be executed. We don't want the latter accessing the cookies, session, etc. of the former. Our Service Worker is thus being loaded in an iframe on our second domain.

    Or, it's being loaded sometimes. The user might have set their browser's privacy settings to block 3rd party cookies, which is really a proxy for "block all cookie-like storage from 3rd parties," and that includes Service Workers. When this happens, our app continues to load, minus the Service Worker (failing to register with a DOM security exception), which various parts of the app expect to be there.

    In our case, the solution is to add an automatic failover for the case that Service Workers are supported but not available. Doing so means having more than a console.log() in our catch(e) block, which is what I'd suggest you do when you try to register() your Service Workers.
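    A minimal sketch of what that might look like; fallbackToNetwork() is a hypothetical stand-in for whatever degraded, Service-Worker-free mode your app supports:

    function fallbackToNetwork() {
      // Hypothetical: reconfigure the app to load resources directly
      // from the network instead of through the Service Worker.
    }

    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js').then(function(registration) {
        console.log('Service worker registration succeeded:', registration);
      }).catch(function(error) {
        // A SecurityError here can mean third-party storage is blocked,
        // even though the browser supports Service Workers.
        console.error('Service worker registration failed:', error);
        fallbackToNetwork();
      });
    } else {
      fallbackToNetwork();
    }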

    This is one of those things that makes lots of sense when you know about it, but until you've been burned by it, you might not take it seriously. It's an easy one to get surprised by, since different browsers behave differently here, and testing for it means not just testing with different browsers, but also different settings per browser.

    Having been burned by it, I wanted to at least write something that might help you in your hour of need. If you're going to use Service Workers, you have to Promise to do more with what you Catch than just Log it.

    Categorieën: Mozilla-nl planet

    Air Mozilla: 11/16 Mozilla Curriculum Wksp. Fall 2017

    do, 16/11/2017 - 19:00

    11/16 Mozilla Curriculum Wksp. Fall 2017 Join us for a special, series finale “ask me anything” (AMA) episode of the Mozilla Curriculum Workshop at 1:00 PM ET, on Thursday, November 16th,...

    Categorieën: Mozilla-nl planet


    Chris H-C: Data Science is Hard: What’s in a Dashboard

    do, 16/11/2017 - 17:12
    The data is fake, don’t get excited.

    Firefox Quantum is here! Please do give it a go. We have been working really hard on it for quite some time, now. We’re very proud of what we’ve achieved.

    To show Mozillians how the release is progressing, and show off a little about what cool things we can learn from the data Telemetry collects, we’ve built a few internal dashboards. The Data Team dashboard shows new user count, uptake, usage, install success, pages visited, and session hours (as seen above, with faked data). If you visit one of our Mozilla Offices, you may see it on the big monitors in the common areas.

    The dashboard doesn’t look like much: six plots and a little writing. What’s the big deal?

    Well, doing things right involved quite a lot more than just one person whipping something together overnight:

    1. Meetings for this dashboard started on Hallowe’en, two weeks before launch. Each meeting had between eight and fourteen attendees and ran for its full half-hour allotment each time.

    2. In addition there were several one-off meetings: with Comms (internal and external) to make sure we weren’t putting our foot in our mouth, with Data ops to make sure we weren’t depending on datasets that would go down at the wrong moment, with other teams with other dashboards to make sure we weren’t stealing anyone’s thunder, and with SVPs and C-levels to make sure we had a final sign-off.

    3. Outside of meetings we spent hours and hours on dashboard design and development, query construction and review, discussion after discussion after discussion…

    4. To say nothing of all the bikeshedding.

    It’s hard to do things right. It’s hard to do even the simplest things, sometimes. But that’s the job. And Mozilla seems to be pretty good at it.

    One last plug: if you want to nudge these graphs a little higher, download and install and use and enjoy the new Firefox Quantum. And maybe encourage others to do the same?


    Categorieën: Mozilla-nl planet

    Air Mozilla: Reps Weekly Meeting Nov. 16, 2017

    do, 16/11/2017 - 17:00

    Reps Weekly Meeting Nov. 16, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

    Categorieën: Mozilla-nl planet


    Hacks.Mozilla.Org: A super-stable WebVR user experience thanks to Firefox Quantum

    do, 16/11/2017 - 16:50

    On Tuesday, Mozilla released Firefox Quantum, the 57th release of the Firefox browser since we started counting. This landmark release replaces some core browser components with newer, faster and more modern implementations. In addition, the Quantum release incorporates major optimizations from Quantum Flow, a holistic effort to modernize and improve the foundations of the Firefox web engine by identifying and removing the main sources of jank without rewriting everything from scratch, an effort my colleague Lin Clark describes as “a browser performance strike force.”

    Quantum Flow has had an important and noticeable effect on WebVR stability as you can see in the following video:

    This video shows the execution profiles of the Snowglobe demo while running Speedometer in the background to simulate the effect of multiple tabs open in a regular scenario.

    The charts at the top of the video show several gaps where the lines go flat. The width of the gap represents a period of time during which the browser could not meet the deadline imposed by the VR system. If this situation continues for long enough, the headset will try to bring the user to a safe space to prevent dizziness and other annoying conditions.

    The intermittent flashes in Firefox 55 correspond to the wide gaps in which the VR system tries to bring the user to the safe space. Notice that in Quantum the effect does not happen, and the experience is smoother and more comfortable.

    The difference is due to the fact that Quantum Flow has removed the bottlenecks interfering with the browser’s ability to send fresh images to the VR system on time.

    To understand how comprehensive optimizations affect virtual reality presentation, it is necessary to know the strict requirements of VR systems and to understand the communication infrastructure of Firefox.

    The VR frame

    In the browser, regular 3D content is displayed at 60 Hz. This implies that the web content has around 16.6 ms to simulate the world, render the scene, and send the new image to the browser compositor thread. If the web page meets the 16.6 ms deadline, the frame rate will be a constant 60 fps (frames per second) and the animations will run smoothly and jank-free.

    The picture above shows three frames, with the current frame highlighted in green. Each vertical line marks the end of the frame, the moment at which the rendered scene is shown to the user.

    VR content is displayed at 90 Hz so the rendering time for VR is reduced to 11.1 ms. In WebVR, the web content sends the rendered frames to a dedicated WebVR thread.
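    For context, a bare-bones WebVR 1.1 render loop looks roughly like this (drawScene() is a hypothetical rendering function); the headset drives the 90 Hz cadence, and everything between getFrameData() and submitFrame() must fit into the 11.1 ms budget:

    navigator.getVRDisplays().then(function(displays) {
      const vrDisplay = displays[0];
      const frameData = new VRFrameData();

      function onVRFrame() {
        // The display fires this callback at its native 90 Hz refresh rate.
        vrDisplay.requestAnimationFrame(onVRFrame);
        vrDisplay.getFrameData(frameData); // fresh head pose for this frame
        drawScene(frameData);              // hypothetical rendering function
        vrDisplay.submitFrame();           // hand the image to the VR compositor
      }

      vrDisplay.requestAnimationFrame(onVRFrame);
    });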

    More importantly, in VR we must take into account a fact we could previously ignore: the delay between when a web page starts rendering the VR scene and when the new image is displayed in the headset has a direct impact on the user’s perception.

    This happens because the camera is placed at the beginning of the frame based on the user’s head position, but the scene is displayed a little later, by which time the user has had enough time to change their head orientation. This delay is known as motion-to-photon latency and can cause dizziness and nausea.

    The effect of motion-to-photon causes reality to fall behind the user’s view.

    Fortunately, VR systems can partially fix this effect without increasing latency, by warping the rendered scene before displaying it in the headset, in a process known as reprojection.

    However, the lower the latency, the more accurate the simulation. So, to shorten this latency, the browser does not start rendering immediately after showing the last frame.

    Following the same approach as the traditional frame, motion-to-photon latency would last for the complete frame.

    Instead, it asks the VR system for a waiting period to delay rendering the scene.

    Waiting at the beginning of the frame, without changing the rendering time, shortens the motion-to-photon latency.

    As discussed below, web content and the WebVR thread run in different processes but they need to coordinate to render the scene. Before Quantum Flow, communication between processes came with the potential risk of becoming a bottleneck. In the VR frame, there are two critical communication points: one after waiting, when WebVR yields the execution to the web page for rendering the scene; and another, after rendering, when the web content sends the frame to the WebVR thread.

    An unexpected delay in either would cause the motion-to-photon latency to peak and the headset would reject the frame.

    Inter-process communication messages in Firefox

    Firefox organizes its execution into multiple processes: the parent process, which contains the browser UI and can access the system resources; the GPU process, specifically intended to communicate with the graphics card and containing the Firefox compositor and WebVR threads; and several content processes, which run the web content but cannot communicate with other parts of the system. This separation of processes enables future increases in browser security and prevents a buggy web page from crashing the entire browser.

    Parent, GPU and content processes communicate with each other using inter-process communication (IPC) messages.

    Quite often, processes need to communicate with each other. To do so, they use Inter-Process Communication (IPC) messages. An IPC message consists of three parts: 1) sending the request, 2) performing a task in the recipient, and 3) returning a result to the initiator. These messages can be synchronous or asynchronous.

    We speak of synchronous IPC when any other attempt at messaging via IPC must wait until the current communication finishes. This includes waiting to send the message, completing the task, and returning the result.

    Synchronous IPC implies long waiting times and slow queues.

    The problem with synchronous IPC is that an active tab attempting to communicate with the parent process may block delivery. This forces a wait until the result of a different communication reaches the initiator, even when the latter is a background tab (and therefore, not urgent) or the ongoing task has nothing to do with the attempted communication.

    In contrast, we speak of asynchronous IPC when sending the request, performing the task, and returning the result are independent operations. New communications don’t have to wait to be sent. Execution and result delivery can happen out-of-order and the tasks can be reprioritized dynamically.

    Although task duration and time spent on the trip remain the same, this animation is 34% shorter than the previous one. Asynchronous IPC does not avoid queues but it resolves them faster.

    One of the goals of Quantum Flow, over the course of the Firefox 55, 56 and 57 releases, has been to identify synchronous IPCs and transform them into asynchronous IPCs. Ehsan Akhgari, in his “Quantum Flow Engineering Newsletter” series, reviews this year’s Quantum Flow progress in detail.
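    To make the difference concrete, here is a toy model in JavaScript terms (send() is a hypothetical function returning a promise for one IPC round-trip); the actual Firefox IPC layer is implemented in C++, so this only illustrates the queueing behavior:

    // Synchronous style: each round-trip blocks the queue until its result returns.
    async function syncStyle(requests, send) {
      const results = [];
      for (const request of requests) {
        results.push(await send(request)); // nothing else may proceed meanwhile
      }
      return results;
    }

    // Asynchronous style: all requests go out immediately; tasks can execute
    // and results can arrive out of order, so queues drain much faster.
    function asyncStyle(requests, send) {
      return Promise.all(requests.map(send));
    }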

    Now that we’ve explored the performance risks that come with synchronous IPC, let’s revisit the two critical communications inside the VR frame: the one for yielding execution to the web page to start rendering, and the one that sends the frame to the headset; both requiring an IPC from GPU-to-content and content-to-GPU respectively.

    Risk points during the VR frame happen twice per frame, i.e., 180 times per second.

    Due to the VR frame rate, these critical moments of risk happen 180 times per second. During the early stages of Quantum Flow in Firefox 55, the high frame rates, in addition to the background activity of other open tabs, increased the probability of being delayed by an ongoing synchronous IPC request. Wait times were not uncommon. In this situation, the browser was constantly missing the deadlines imposed by the VR gear.

    After advancing Quantum Flow efforts in Firefox 56 and 57, the ongoing removal of synchronous IPC reduces the chance of being interrupted by an unexpected communication, and now the browser does not miss a deadline.

    Although Quantum Flow was not aimed specifically at improving WebVR, by removing communication bottlenecks new components can contribute effectively to the global performance gain. Without Quantum Flow, it does not matter how fast, new or modern the browser is, if new features and capabilities are blocked waiting for unrelated operations to finish.

    And thus, Firefox Quantum is not only the fastest version of Firefox for 2D content rendering, it is also the browser that will bring you the most stable and comfortable viewing experience in WebVR so far. And the best is yet to come.

    Categorieën: Mozilla-nl planet

    QMO: Firefox 58 Beta 3 Testday, November 17th

    do, 16/11/2017 - 15:36

    Hello Mozillians!

    We are happy to let you know that Friday, November 17th, we are organizing Firefox 58 Beta 3 Testday. We’ll be focusing our testing on Web Compatibility and Tabbed Browser.

    Check out the detailed instructions via this etherpad.

    No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

    Join us and help us make Firefox better!

    See you on Friday!

    Categorieën: Mozilla-nl planet

    Chris Cooper: Introducing The Developer Workflow Team

    do, 16/11/2017 - 15:12

    I’ve neglected to write about the *other* half of my team, not for any lack of desire to do so, but simply because the code sheriffing situation was taking up so much of my time. Now that the SoftVision contractors have gained the commit access required to be fully functional sheriffs, I feel that I can shift focus a bit.

    Meet the team

    The other half of my team consists of 4 Firefox build system peers:

    Justice League Unlimited

    When the group was first established, we talked a lot about what we wanted to work on, what we needed to work on, and what we should be working on. Those discussions revealed the following common themes:

    • We have a focus on developers. Everything we work on is to help developers be more productive, and go more quickly.
    • We accomplish this through tooling to support better/faster workflows.
    • Some of these improvements can also assist in automation, but that isn’t our primary focus, except where those improvements are also wins for developers, e.g. faster time to first feedback on commit.
    • We act as consultants/liaisons to many other groups that also touch the build system, e.g. Servo, WebRTC, NSS etc.

    Based on that list of themes, we’ve adopted the moniker of “Developer Workflow.” We are all build peers, yes, but to pigeon-hole ourselves as the build system group seemed short-sighted. Our unique position at the intersection of the build system, VCS, and other services meant that our scope needed to match what people expect of us anyway.

    While new to me, Developer Workflow is a logical continuation of the build system tiger team organized by David Burns in 2016. This is the same effort that yielded sea change improvements such as artifact builds and sccache.

    In many ways, I feel extremely fortunate to be following on the heels of that work. During the previous year, all the members of my team formed the working relationships they would need to be more successful going forward. All the hard work for me as their manager was already done! ;)

    What are we doing

    We had our first dedicated work week as a team last week in Mountain View. Aside from getting to know each other a little better, during the week we hashed out exactly what our team will be focused on next year, and made substantial progress towards bootstrapping those efforts.

    Next year, we’ll be tackling the following projects:

    • Finish the migration from Makefiles to moz.build files: A lot of important business logic resides in Makefiles for no good reason. As someone who has cargo-culted large portions of l10n Makefile logic during my tenure at Mozilla, I may be part of the problem.
    • Move build logic out of *.mk files: Greg recently announced his intent to remove client.mk, a foundational piece of code in the Mozilla recursive make build system that has existed since 1998. The other .mk files won’t be far behind. Porting true build logic to moz.build files and removing non-build tasks to task-based scripts will make the build system infinitely more hackable, and will allow us to pursue performance gains in many different areas. For example, decoupled tests like package tests could be run asynchronously, getting results to developers more quickly.
    • Stand up a tup build in automation: this is our big effort for the near term. A tup build is not necessarily an end goal in-and-of itself — we may very well end up on bazel or something else eventually — but since our own Mike Shal created tup, we control enough of the stack to make quick progress. It’s a means of validating the Makefile migration.
    • Move our Linux builder in automation from CentOS 6 to Debian: This would move us closer to deterministic builds, and has alignment with the Tor project, but requires that we host our own package servers, CDN, etc. This would also make it easier for developers to reproduce automation builds locally. glandium has a proof-of-concept. We hope to dig into any binary compatibility issues next year.
    • Weaning off mozharness for builds: mozharness was a good first step at putting automated build configuration information in the tree for developers. Now, that functionality could be better encapsulated elsewhere and largely hidden by mach. The ultimate goal would be to use the same workflow for developer builds and automation.
    What are we *not* doing

    It’s important to be explicit about things we won’t be tackling too, especially when it’s been unclear historically or where there might be different expectations.

    The biggest one to call out here is github integration. Many teams at Mozilla are using github for developing standalone projects or even parts of Firefox. While we’ve had some historical involvement here and will continue to consult as necessary, other teams are better positioned to drive this work.

    We are also not currently exploring moving Windows builds to WSL. This is something we experimented with in Q3 this year, but build performance is still so slow that it doesn’t warrant further action right now. We continue to follow the development of WSL and if Microsoft is able to fix filesystem performance, we may pick this back up.

    Categorieën: Mozilla-nl planet

    Daniel Glazman: BlueGriffon 3.0

    do, 16/11/2017 - 12:01

    I am insanely happy (and a bit proud too, ahem) to let you know that BlueGriffon 3.0 is now available. As I wrote earlier on this blog, implementing Responsive Design in a WYSIWYG editor that is supposed to handle all HTML documents, whatever their original source, has been a tremendous amount of work and something really painful to implement. Responsive Design in BlueGriffon is a commercial feature available to holders of a Basic or an EPUB license.

    BlueGriffon Responsive Design

    /* Enjoy! */

    Categorieën: Mozilla-nl planet

    Daniel Stenberg: curl up in Stockholm 2018

    do, 16/11/2017 - 10:25

    Welcome all curl hackers and fans to Stockholm 2018! We are hosting the curl up developers conference April 14-15 2018 in my home city in what now might really become an annual tradition. Remember Nuremberg 2017?

    All details are collected and updated on the curl up 2018 wiki page:


    curl up 2018 will be another two-day event over a weekend, and we will make an effort to keep the attendance fee at a minimum.

    Presentations by core curl contributors on how things work, what we’ve done lately, what we’re working on now, what we should work on in the future, and blue-sky visions about what curl should become when it grows up. Everyone attending is encouraged to present something. All this, mixed with lots of discussions, Q&As and socializing.

    This time, I hope you will also have the chance to arrive early and spend the Friday, or a part of it, working hands-on with other curl developers on actual programming and debugging of curl features or bugs. The curl-hacking-Friday? While not firmly organized yet, I’d like to see this become reality. Let me know if that’s something you’d be interested in participating in.

    Who should come?

    Anyone interested in curl, curl development or how to best use curl in application or solutions. Here’s your chance to learn a lot and an excellent opportunity to influence the future of curl. And to get some short glimpses of a spring-time Stockholm when going back and forth to the curl up venue!

    Sign up?

    We will open up the “ticket booth” in January/February 2018, so just keep your eyes and ears open and you won’t miss it! The general idea is to keep the fee at a minimum, ideally zero. Currently the exact price depends on how we manage to cover the remaining costs with friendly sponsors.

    Sponsor us!

    We are looking for sponsors. If your company is interested and willing to help us out, in any capacity, please contact me! Remember that our project has no funds of its own and we have no particular company backing.

    Where in Stockholm exactly?

    We will spend the curl up 2018 weekend at Goto 10, which is near both the Skanstull and Gullmarsplan subway stations.

    Action photo from curl up 2017 when Kamil Dudka spoke about Red Hat’s use of curl built with NSS.

    Categorieën: Mozilla-nl planet