
Tom Schuster: The pitfalls of self-hosting JavaScript

Mozilla planet - vr, 09/06/2017 - 23:03

Recently the SpiderMonkey team has been looking into improving ECMAScript 6 and real world performance as part of the QuantumFlow project.

While working on this we realized that self-hosting functions can have significant downsides, especially with bad type information. Apparently even the V8 team is moving away from self-hosting to writing more functions in hand-written macro assembler code.

Here is a list of things I can remember off the top of my head:

  • Self-hosted functions that always call out to C++ (native) functions that cannot be inlined in IonMonkey are probably a bad idea.
  • Self-hosted functions often have very bad type information, because they are called from many different frameworks, user code, etc. This means we absolutely need to be able to inline such functions. (e.g. bug 1364854 about Object.assign or bug 1366372 about Array.from)
  • If a self-hosted function only runs in the baseline compiler we won’t get any inlining, which means all those small function calls to ToLength or Math.max add up. We should probably look into manually inlining more, or even into using something like Facebook’s Prepack.
  • We usually only inline C++ functions called from self-hosted functions in IonMonkey under perfect conditions; if those are not met, we fall back to a slow JS-to-C++ call. (e.g. bug 1366263 about various RegExp methods)
  • Basically this all comes back to somehow making sure that even with bad type information (i.e. polymorphic types) your self-hosted JS code still reaches an acceptable level of performance (see the sketch after this list). For example, by introducing inline caching for the in operator we fixed a real-world performance issue in the Array.prototype.concat method.
  • Overall, just relying on IonMonkey inlining to save our bacon probably isn’t a good way forward.
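To make the type-information point concrete, here is a hypothetical sketch (not actual SpiderMonkey self-hosted code) of how a single polymorphic call site defeats the specialization that inlining relies on:

    // A tiny self-hosted-style helper: its speed depends on the JIT seeing
    // consistent types at the property access below.
    function SumLengths(items) {
      var total = 0;
      for (var i = 0; i < items.length; i++) {
        // Monomorphic input lets IonMonkey specialize and inline the
        // .length access; mixed shapes force a slower, generic lookup.
        total += items[i].length;
      }
      return total;
    }

    // Monomorphic: every element is a plain array.
    SumLengths([[1], [2, 3], [4]]);
    // Polymorphic: three different shapes flow into the same access.
    SumLengths([[1], "ab", { length: 5 }]);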

Jeff Walden: Not a gluten-free trail

Mozilla planet - vr, 09/06/2017 - 21:27

Sitting cool at mile 444 right now. I was aiming to be at the Sierras by June 19 or 27, but the snow course I signed up for then got canceled, so I’m in no rush. I might slow down for particularly recommended attractions, but otherwise the plan is consistent 20+-mile days.


Sam Foster: Haiku Reflections: Experiences in Reality

Mozilla planet - vr, 09/06/2017 - 20:37

Over the several months we worked on Project Haiku, one of the questions we were repeatedly asked was “Why not just make a smartphone app to do this?” Answering that gets right to the heart of what we were trying to demonstrate with Project Haiku specifically, and wanted to see more of in general in IoT/Connected Devices.

This is part of a series of posts on a project I worked on for Mozilla’s Connected Devices group. For context and an overview of the project, please see my earlier post.

The problem with navigating virtual worlds

One of IoT’s great promises is to extend the internet and the web to devices and sensors in our physical world. The flip side of this is another equally powerful idea: to bring the digital into our environment; to make it tangible and real and take up space. If you’ve lived through the emergence of the web over the last 20 years - web browsers, smart phones and tablets - that might seem like stepping backwards. Digital technology and the web specifically have broken down physical and geographical barriers to accessing information. We can communicate and share experiences across the globe with a few clicks or keystrokes. But, after 20 years, the web is still in “cyber-space”. We go to this parallel virtual universe and navigate with pointers and maps that have no reference to our analog lives and which confound our intuitive sense of place. This makes wayfinding and building mental models difficult. And without being grounded by inputs and context from our physical environment, the simultaneous existence of these two worlds remains unsettling and can cause a kind of subtle tension.

Imagined space, Hackers-style

As I write this, the display in front of me shows me content framed by a website, which is framed by my browser’s UI, which is framed by the operating system’s window manager and desktop. The display itself has its own frame - a bezel on an enclosure sitting on my desk. And these are just the literal boxes. Then there are the conceptual boxes - a page within a site, within a domain, presented by an application as one of many tabs. Sites, domains, applications, windows, homescreens, desktops, workspaces…

The flexibility this arrangement brings is truly incredible. But, for some common tasks it is also a burden. If we could collapse some of these worlds within worlds down to something simpler, direct and tangible, we could engage that ancestral part of our brains that really wants things to have three dimensions and take up space in our world. We need a way to tear off a piece of the web and pin it to the wall, make space for it on the desk, carry it with us; to give it physical presence.

Permission to uni-task

Assigning a single function to a thing - when the capability exists to be many things at once - was another source of skepticism and concern throughout Project Haiku. But in the history of invention, the pendulum swings continually between uni-tasking and multi-tasking; specialized and general. A synthesizer and an electric piano share origins and overlap in functions, but one does not supersede the other. They are different tools for distinct circumstances. In an age of ubiquitous smart phones, wrist watches still provide a function, and project status and values. There’s a pragmatism and attractive simplicity to dedicating a single task to an object we use. The problem is that as we stack functions into a single device, each new possibility requires a means of selecting which one we want. Reading or writing? Bold or italic text? Shared or private, published or deleted, for one group or broadcast to all? Each decision, each action is an interaction with a digital interface, stacked and overlaid into the same physical object that is our computer, tablet or phone. Uni-tasking devices give us an opportunity to dismantle this stack and peel away the layers.

The two ideas of single function and occupying physical space are complementary: I check the weather by looking out the window, I check the time by glancing at my wrist, the recipe I want is bookmarked in the last book on the shelf. We can create similar coordinates or landmarks for our digital interactions as well.

Our sense of place and proximity is also an important input to how we prioritize what needs doing. A sink full of dishes demands my attention - while I’m in the kitchen. But when I’m downtown, it has to wait while I attend to other matters. Similarly, a colleague raising a question can expect me to answer when I’m in the same room. But we both understand that as the distance between us changes, so does the urgency to provide an answer. When I’m at the office, work things are my priority. As I travel home, my context shifts. Expectations change as we move from place to place, and physical locations and boundaries help partition our lives. It’s true that the smart phone started as a huge convenience by un-tethering us from the desk to carry our access to information - and its access to us - with us. But, by doing so, we lost some of the ability to walk away; to step out from a conversation or leave work behind.

A concept rendering using one of the proposed form-factors for the Haiku device

Addressing these tensions became one of the goals of Project Haiku. As we talked to people about their interactions with technology in their home and in their lives, we saw again and again how poor a fit the best of today’s solutions were. What began as empowering and liberating has started to infringe on people’s freedom to choose how to spend their time.

When I’m spending time on my computer, it’s just more opportunities for it to beep at me. Every chance I get I turn it off. Typing into a box - what fun is that? You guys should come up with something… good.

This is a quote from one of our early interviews. It was a refreshing perspective and sentiments like this - as well as the moments of joy and connectedness that we saw were possible - that helped steer this project. We weren’t able to finish the story by bringing a product to market. But the process and all we learned along the way will stick with me. It is my hope that this series of posts will plant some seeds and perhaps give other future projects a small nudge towards making our technology experiences more grounded in the world we move about in.


Mike Hoye: Trimming The Roster

Mozilla planet - vr, 09/06/2017 - 20:25

This is a minor administrative note about Planet Mozilla.

In the next few weeks I’ll be doing some long-overdue maintenance and cleaning out dead feeds from Planet and the various sub-Planet blogrolls to help keep them focused and helpful.

I’m going to start by scanning existing feeds and culling any that error out every day for the next two weeks. After that I’ll go down the list of remaining feeds individually, confirm their authors’ ongoing involvement in Mozilla, and ask for tagged feeds wherever possible. “Involved in Mozilla” can mean a lot of things – the mission, the many projects, the many communities – so I’ll be happy to take a yes or no and leave it at that.

The process should be pretty painless – with a bit of luck you won’t even notice – but I thought I’d give you a heads up regardless. As usual, leave a comment or email me if you’ve got questions.


Support.Mozilla.Org: Event Report: SUMO Community Meeting – Abidjan (3-4 June 2017)

Mozilla planet - vr, 09/06/2017 - 20:04

Hey there, SUMO Nation!

You may remember Abbackar’s previous post about meetings in Ivory Coast. I am very happy to inform you that the community there is going strong and keeps supporting Mozilla’s mission. Read Abbackar’s report from the recent meeting in Abidjan below.

On the weekend of the 3rd and 4th of June, the community members of Côte d’Ivoire met in Abidjan for a SUMO Community Meetup. The event was attended by 21 people, six of whom were new contributors interested in participating in Mozilla’s mission through SUMO.

The Saturday meeting started at 9 and went on for six hours, with a small lunch break. During that time we talked about the state of SUMO and the Mozilla updates that had an influence on our community over the past months.

We also introduced new contributors to the website and the philosophy of SUMO – as well as the Respond social support tool. New contributors had a chance to see both sites in action, learn how they worked and discuss their future contributions.

After that, we had a practical session in Respond, allowing existing and new contributors to exchange knowledge and experiences.

An important fact to mention is that the computer we used for the event is a “Jerry” – a computer in a can – made from recycled materials and put together by our community members.

After the training and a session of answering questions, we ended the first day of the meetup.

Sunday started with the analysis of the 2016 balance sheet and a discussion of our community’s roadmap for 2017. We talked about ways of increasing our community engagement in SUMO in 2017. Several solutions were discussed at length, allowing us to share and assign tasks to people present at the event.

We decided to train together on a single theme each month to increase focus. We also acknowledged the cancellation of our Nouchi localization project, due to the difficulties with creating a new technical vocabulary within that language. Our localization efforts will be focused on French from now on.

The Sunday lunch took place in a great atmosphere as we shared a local dish called garba. The meeting ended with a Q&A session focused on addressing the concerns and doubts of the new contributors.

The meeting in Abidjan was a great opportunity to catch up, discuss the most recent updates, motivate existing contributors and recruit new ones for Mozilla’s mission. We ended the whole event with a family photo of all the people present.

We are all looking forward to the second session in Bouake, in the center of Côte d’Ivoire.

We are humbled and grateful for the effort and passion of the community in Ivory Coast. Thank you for your inspiring report and local leadership, Abbackar :-) Onwards and forwards, to Bouake!


Hacks.Mozilla.Org: CSS Shapes, clipping and masking – and how to use them

Mozilla planet - vr, 09/06/2017 - 17:46

The release of Firefox 54 is just around the corner and it will introduce new features into an already cool CSS property: clip-path.

clip-path is a property that allows us to clip (i.e., cut away) parts of an element. Up until now, in Firefox you could only use an SVG to clip an element:

But with Firefox 54, you will be able to use CSS shapes as well: insets, circles, ellipses and polygons!

Note: this post contains many demos, which require support for clip-path and mask. To be able to see and interact with every demo in this post, you will need Firefox 54 or higher.

Basic usage

It’s important to take into account that clip-path does not accept “images” as input, but <clipPath> elements:

See the Pen clip-path (static SVG mask) by ladybenko (@ladybenko) on CodePen.
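For reference, here is a minimal sketch of that SVG approach (illustrative markup, not the embedded demo itself):

    <svg width="0" height="0">
      <clipPath id="triangle">
        <polygon points="100,0 200,200 0,200"/>
      </clipPath>
    </svg>

    <img src="photo.jpg" style="clip-path: url(#triangle);">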

A cool thing is that these <clipPath> elements can contain SVG animations:

See the Pen clip-path (animated SVG) by ladybenko (@ladybenko) on CodePen.

However, with the upcoming Firefox release we will also have CSS shape functions at our disposal. These allow us to define shapes within our stylesheets, so there is no need for an SVG. The shape functions we have at our disposal are: circle, ellipse, inset and polygon. You can see them in action here:

See the Pen oWJBwW by ladybenko (@ladybenko) on CodePen.
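As a rough sketch (the values are illustrative, not taken from the demo), the four shape functions look like this:

    .inset   { clip-path: inset(10px 20px 30px 40px round 10px); }
    .circle  { clip-path: circle(50px at center); }
    .ellipse { clip-path: ellipse(60px 40px at 50% 50%); }
    .polygon { clip-path: polygon(50% 0%, 100% 100%, 0% 100%); }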

And not only that, but we can animate them with CSS as well. The only restrictions are that we cannot “mix” function shapes (i.e., morphing from a circle to an inset), and that when animating polygons, the polygons must preserve the same number of vertices during the whole animation.

Here’s a simple animation using a circle shape:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.

And here is another animation using polygon. Note: Even though we are restricted to preserving our set number of vertices, we can “merge” them by repeating the values. This creates the illusion of animating to a polygon with any number of sides.

See the Pen Animated clip-path (polygon) by ladybenko (@ladybenko) on CodePen.
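Here is a minimal sketch of that vertex-merging trick (illustrative values): both keyframes declare four vertices, but two of them coincide in the second, so a square appears to morph into a triangle:

    @keyframes square-to-triangle {
      from {
        clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 0% 100%);
      }
      to {
        /* still four vertices; the first two are identical */
        clip-path: polygon(50% 0%, 50% 0%, 100% 100%, 0% 100%);
      }
    }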

Note that clip-path also opens new possibilities layout-wise. The following demo uses clipping to make an image more interesting in a multi-column article:

See the Pen Layout example by ladybenko (@ladybenko) on CodePen.

Spicing things up with JavaScript

Clipping opens up cool possibilities. In the following example, clip-path has been used to isolate elements of a site – in this case, simulating a tour/tutorial:

See the Pen tour with clip-path by ladybenko (@ladybenko) on CodePen.

It’s done with JavaScript: fetch the dimensions of an element on the fly, calculate its distance with respect to a reference container, and then use that distance to update the inset shape used in the clip-path property.
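A minimal sketch of that technique (the function and selectors are hypothetical, not the demo’s actual code):

    // Clip `overlay` so that only the area over `target` stays visible,
    // producing the spotlight/tour effect described above.
    function spotlight(overlay, target) {
      const t = target.getBoundingClientRect();
      const c = overlay.getBoundingClientRect();
      const top = t.top - c.top;
      const left = t.left - c.left;
      const right = c.right - t.right;
      const bottom = c.bottom - t.bottom;
      overlay.style.clipPath =
        `inset(${top}px ${right}px ${bottom}px ${left}px)`;
    }

    // Recompute on the fly whenever the layout changes.
    window.addEventListener("resize", () =>
      spotlight(document.querySelector(".overlay"),
                document.querySelector(".current-step")));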

We can now also dynamically change the clipping according to user input, like in this example that features a “periscope” effect controlled by the mouse:

See the Pen clip-path (periscope) by ladybenko (@ladybenko) on CodePen.

clip-path or mask?

There is a similar CSS property, mask, but it is not identical to clip-path. Depending on your specific use case, you should choose one or the other. Also note that support varies across browsers, and currently Firefox is the only browser that fully supports all the mask features, so you will need to run Firefox 54 to interact with the demos below on Codepen.

Masking can use an image or a <mask> element in an SVG. clip-path, on the other hand, uses an SVG path or a CSS shape.

Masking modifies the appearance of the element it masks. For instance, here is a circular mask filled with a linear gradient:

Linear gradient mask
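A sketch of how such a mask can be built (illustrative markup): an SVG <mask> whose circle is filled with a linear gradient, where white areas show through and black areas hide:

    <svg width="0" height="0">
      <defs>
        <linearGradient id="fade">
          <stop offset="0" stop-color="white"/>
          <stop offset="1" stop-color="black"/>
        </linearGradient>
        <mask id="circle-mask">
          <circle cx="150" cy="150" r="150" fill="url(#fade)"/>
        </mask>
      </defs>
    </svg>

    <img src="photo.jpg" style="mask: url(#circle-mask);">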

And remember that you can use bitmap images as well even if they don’t have an alpha channel (i.e., transparency), by tweaking the mask-mode:

mask-mode example
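In CSS that could look like this (a sketch, with a hypothetical image name):

    .masked {
      /* pattern.jpg has no alpha channel at all... */
      mask-image: url("pattern.jpg");
      /* ...so key the mask off pixel brightness instead: bright areas
         show the element, dark areas hide it. */
      mask-mode: luminance;
    }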

The key concept of masking is that it modifies the pixels of an image, changing their values – to the point of making some of them fully transparent.

On the other hand, clipping “cuts” the element, and this includes its collision surface. Check out the following demo showing two identical pictures masked and clipped with the same cross shape. Try hovering over the pictures and see what happens. You will notice that in the masked image the collision area also contains the masked parts. In the clipped image, the collision area is only the visible part (i.e., the cross shape) of the element.

Mask vs clip comparison

Is masking then superior to clipping, or vice versa? No, they are just used for different things.

I hope this post has made you curious about clip-path. Check out the upcoming version of Firefox to try it!


Mozilla Addons Blog: Keeping Up with the Add-ons Community

Mozilla planet - vr, 09/06/2017 - 16:00

With the add-ons community spread out among multiple projects and several communication platforms, it can feel difficult to stay connected and informed.

To help bridge some of these gaps, here is a quick refresher guide on our most-used communication channels and how you can use them to stay updated about the areas you care about most.

Announcements

Announcements will continue to be posted to the Add-ons Blog and cross-posted to Discourse.

Find Documentation

MDN Web Docs has great resources for creating and publishing extensions and themes.

You can also find documentation and additional information about specific projects on the Add-ons wiki and the WebExtensions wiki.

Get Technical Help

Join a Public Meeting

Please feel welcome to join any or all of the following public meetings:

Add-ons Community Meeting (every other Tuesday at 17:00 UTC)

Join the add-ons community as we discuss current and upcoming happenings in the add-ons world. Agendas will be posted in advance to the Add-ons > Contribute category on Discourse. See the wiki for the next meeting date and call-in information.

Good First Bugs Triage (every other Tuesday at 17:00 UTC)

Come and help triage good first bugs for new contributors! See the wiki for the next meeting date and call-in information.

WebExtensions API Triage (every Tuesday at 17:30 UTC)

Join us to discuss proposals for new WebExtension APIs. Agendas are distributed in advance to the dev-addons mailing list and the Add-ons > Contribute category on Discourse. See the wiki for the next meeting date and call-in information. To request a new API, please read this first.

Be Social With Us

Get Involved

Check out the Contribute wiki for ways you can get involved.

The post Keeping Up with the Add-ons Community appeared first on Mozilla Add-ons Blog.



Andy McKay: Cleaning up intermittents

Mozilla planet - vr, 09/06/2017 - 09:00

Orange Factor robot creates bugs in Bugzilla components when it detects intermittents in the Firefox test suite. Unfortunately it never cleans up after itself. Keeping the bug count in a component manageable really helps me understand what's going on, and the Orange Factor bugs that never get closed don't help.

As I was triaging, I found a common pattern, which is basically: go look on Brasstacks and see if the failure has occurred recently. From that came a simple script that looks for intermittents and checks whether each one has occurred on Brasstacks in the last 180 days; if not, it closes the bug.

Both Brasstacks and Bugzilla have REST APIs, but a week or so ago Brasstacks went behind Mozilla internal authentication. To get around that, you need to pass the session cookie and user agent through with any requests.
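As a minimal sketch of that idea (the endpoint, parameters and environment variable names here are illustrative; the real script linked below differs):

    import os
    import requests

    session = requests.Session()
    # Brasstacks sits behind Mozilla auth now, so replay a logged-in
    # session cookie and the matching user agent on every request.
    session.headers["User-Agent"] = os.environ["BRASSTACKS_USER_AGENT"]
    session.cookies.set("session", os.environ["BRASSTACKS_SESSION"])

    resp = session.get(
        "https://brasstacks.mozilla.com/orangefactor/api/query",
        params={"bug": 1366263, "days": 180},
    )
    resp.raise_for_status()
    if not resp.json().get("results"):
        print("No occurrences in 180 days; close it via the Bugzilla API.")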

The resulting script is on Github and closes out a couple of bugs for us each week.

For this script to work, you need a bunch of environment variables: the Brasstacks session, the Brasstacks user agent, the Bugzilla API key and the Bugzilla API token. But this script is written for me, not for your project; you'll probably want to do something different anyway.


Ehsan Akhgari: Quantum Flow Engineering Newsletter #12

Mozilla planet - vr, 09/06/2017 - 08:39

It has been a few weeks since I have given an update about our progress on reducing the number of slow synchronous IPC messages that we send across our processes.  This isn’t because there hasn’t been a lot to talk about; quite the contrary, so much great work has happened here that for a while I decided it was better to highlight other ongoing work instead.  But now, as the development cycle of Firefox 55 draws to a close, it’s time to have another look at where we stand on this issue.

I’ve prepared a new Sync IPC Analysis for today, including data from both JS- and C++-initiated sync IPCs.  The first bit of unfortunate news is that the historical data in the spreadsheet is lost, because the server hosting the data had a few hiccups and Google Spreadsheets seems to really not like that.  The second bit of unfortunate news is that our hope that disabling the non-multiprocess-compatible add-ons by default in Nightly would reduce some of the noise in this data doesn’t seem to have panned out.  The data still shows a lot of synchronous IPC triggered from JS as before, and the lion’s share of it is messages that are clearly coming from add-ons, judging from their names.  My guess about why is that Nightly users have probably turned these add-ons back on manually.  So we will have to live with the noise in the data for now (this is an issue that we have to struggle with when dealing with a lot of telemetry data, unfortunately; here is another recent example that wasted some time and energy).

This time I won’t give out a percentage-based breakdown, because now that many of these bugs have been fixed, the impact of really commonly occurring IPC messages, such as the one we have for document.cookie, makes the earlier method of exploring the data pointless (you can explore the pie chart to get a quick sense of why; I’ll just say that message alone is now 55% of the chart, and that plus the second one together form 75% of the data).  This is a great problem to have, of course; it means that we’re now starting to get to the “long tail” part of this issue.

The current top offenders, besides the mentioned bug (which, by the way, is still seeing great progress!), are add-on/browser CPOW messages, two graphics initialization messages that we send at content process startup, NotifyIMEFocus, which is in the process of being fixed, and window.open(), which I’ve spent weeks on but have yet to fix all of our tests for before I can land my changes (I’ve also temporarily given up on it, looking for something that isn’t this bug to work on for a little while!).  Besides those, if you look at the dependency list of the tracker bug, there are many other bugs that are very close to being fixed.  Firefox 55 is going to be much better from this perspective, and I hope future releases will improve on that!

The other effort that is moving ahead quite fast is optimizing for Speedometer V2.  See the chart of our progress on AreWeFastYet.com:

Last week, our score on this chart was about 84.  Now we are at about 91.  Not bad for a week’s worth of work!  If you’re curious to follow along, see our tracker bug.  Also, Speedometer is a very JS-heavy benchmark, so a lot of the bugs that are filed and fixed for it happen inside SpiderMonkey, so watching the SpiderMonkey-specific tracker bug is probably a good idea as well.

It’s time for a short performance story!  This one is about technical debt.  I’ve looked at many performance bugs over the past few months of the Quantum Flow project, and in many cases the solution has turned out to be just deleting the slow code, that’s it!  It turns out that as code in a large code base ages, a lot of it stops serving any purpose, but nobody discovers this because it’s impractical to audit every single line of code with scrutiny.  Some of this unnecessary code is bound to have severe performance issues, and when it does, your software ends up carrying that cruft for years!  Here are a few examples: a function call taking 2.7 seconds on a cold startup doing something that became unnecessary once we dropped support for Windows XP and Vista, some migration code that was doing synchronous IO during all startups to migrate users of Firefox 34 and older to a newer version, and an outdated telemetry probe that turned out to no longer be in use, scheduling many unnecessary timers and causing unneeded jank.

I’ve been thinking about what to do about these issues.  The first step is to fix them, which is what we are busy doing now, but finding these issues typically requires some work, and it would be nice if we had a systematic way of dealing with some of them.  For example, wouldn’t it be nice if we had a MINIMUM_WINDOWS macro that controlled all Windows-specific code in the tree?  In the case of my earlier example, the original code would have checked that macro against the minimum version (7 or higher), and when we bumped MINIMUM_WINDOWS up to 7 along with bumping our release requirements, such code would turn itself into preprocessor waste (hurray!).  A rough sketch of the idea follows below.  Of course, the hard part is finding all the code that needs to abide by this macro, and the harder part is enforcing it consistently going forward!  Some of the other issues aren’t possible to deal with this way, so we need to work on getting better at detecting them.  Not sure, definitely some food for thought!
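Here is a minimal, hypothetical sketch of that macro idea (the macro name comes from the paragraph above; the function and values are made up for illustration):

    #include <cstdio>

    // Hypothetical tree-wide constant, bumped when release requirements change.
    #define MINIMUM_WINDOWS 6  // 6 ~ Vista; bump to 7 once XP/Vista are dropped

    static void InitLegacyCompatLayer() {
      // Imagine the 2.7-second cold-startup call from the example above here.
      std::printf("setting up XP/Vista compatibility\n");
    }

    int main() {
    #if MINIMUM_WINDOWS < 7
      // Once MINIMUM_WINDOWS is bumped to 7, this block compiles away to
      // nothing, and grepping for "MINIMUM_WINDOWS < 7" finds every candidate
      // for outright deletion.
      InitLegacyCompatLayer();
    #endif
      return 0;
    }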

I’ll stop here, and move on to acknowledge the great work of all of you who helped make Firefox faster this past week!  As per usual, apologies to those who I’m forgetting to mention here:


Justin Dolske: Photon Engineering Newsletter #5

Mozilla planet - vr, 09/06/2017 - 01:11

Time for a solid update #5! So far the Photon project appears to be well on track — our work is scoped to what we think is achievable for Firefox 57, and we’re generally fixing bugs at a good rate.

Search Box?

If you’ve been paying attention to any of the Photon mockups and design specs floating around, you may have noticed the conspicuous absence of the search box in the toolbar (i.e. there’s just a location field and buttons). What’s up with that?

(Image: mockup of the Photon toolbar on Windows 10)

For some time now, we’ve been working on improving searching directly from the location field. You can already search from there by simply entering your search term, see search suggestions as you type, and click the icons of other search engines to do a one-off search with them instead of your default. [The one-off search feature has been in Nightly for a while, and will start shipping in Firefox 55.] The location bar now can do everything the search box can, and more. So at this point the search box is a vestigial leftover from how browsers worked 10+ years ago, and we’d like to remove it to reclaim precious UI space. Today, no other major browser ships with both a location field and search box.

That said, we’re being careful about understanding the impact of removing the search box, since it’s been such a long-standing feature. We’re currently running a series of user studies to make sure we understand how users search, and that the unified search/location bar meets their needs. And since Mozilla works closely with our search engine partners, we also want to make sure any changes to how users search is not a surprise.

Finally, I should note that this is about removing the search box as the default for _new_ Firefox users. Photon won’t be removing the search box entirely, you’ll still be able to add it back through Customize Mode if you so desire. (Please put down your pitchforks and torches. Thank you.) We’re still discussing what to do for existing users… There’s a trade-off between providing a fresh, clean, and modern experience as part of the upgrade to Photon (especially for users who haven’t been using the search box), and removing a UI element that some people have come to expect and use.

Recent Changes

Menus/structure:
  • Hamburger panel is now feature complete!
    • An exit/quit menu item was added (except on macOS, as the native menubar handles this).
    • A restyled zoom control was added.
    • One small feature-not-yet-complete: the library subview is reusing the same content as the library panel, and so won’t be complete until the library panel itself is complete.
    • A number of smaller bugs and regressions fixed.
  • Initial version of the new Library panel has landed
    • It still needs a lot of work, items are missing, and the styling isn’t done. Landing this early allows us to unblock work in other areas of Photon (notably animations) that need to interact with this button.
    • We haven’t placed it into the toolbar by default yet. Until we do so, if you want to play with it you’ll need to manually add it from Customization mode.
  • We’re getting ready to enable the Photon structure pref by default on Nightly, and are just fixing the last tests so that everything is green on our CI infrastructure. Soon!
Animation:
  • Landed a patch that allows us to move more of our animations to the compositor and off of the main thread. Previously, this optimization was only allowed when the item’s width was narrower than the window; now it’s based on available area. (Our animations are using very wide sprite sheet “filmstrips” which required this; see the sketch after this list, and more about this in a future update.)
  • Work continues on animations for downloads toolbar button, stop/reload button, and page loading indicator. The first two have gone up for review, and unfortunately we found some issues with the plan for the downloads button which requires further work.
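As a rough sketch of the filmstrip idea mentioned above (illustrative numbers, not Firefox’s actual styling): a wide sprite sheet slides behind a small clipped viewport, and because only transform animates, the compositor can run it off the main thread.

    .icon {
      width: 20px;
      height: 20px;
      overflow: hidden;  /* the viewport showing one frame at a time */
    }
    .icon > .filmstrip {
      width: 200px;      /* 10 frames, 20px each */
      height: 20px;
      background: url("sprite.png");
      animation: play 0.5s steps(9) forwards;
    }
    @keyframes play {
      to { transform: translateX(-180px); }  /* jump frame by frame */
    }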
Preferences:
  • Finalized spec for the preferences reorg version 2. The team will now begin implementation of the changes.
  • The team is working on improving search highlighting in about:preferences to include sub-dialogs and fixing some highlight/tooltip glitches.
  • The spec for Search behavior is also being finalized.
Visual redesign:

Onboarding:
  • Enabled the basic onboarding overlay on about:newtab and about:home. Now you can see a little fox icon on the top-left corner on about:newtab and about:home on Nightly! (Here’s the full spec for what will eventually be implemented. We’re working on getting the first version ready for Firefox 56.)
  • Finished creating the message architecture so that the Auto-migration code can talk with Activity Stream
Performance:

That’s all for this week!



Michael Kelly: Q is Scary

Mozilla planet - do, 08/06/2017 - 21:41

q is the hands-down winner of my "Libraries I'm Terrified Of" award. It's a Python library for outputting debugging information while running a program.

On the surface, everything seems fine. It logs everything to /tmp/q (configurable), which you can watch with tail -f. The basic form of q is passing it a variable:

    import q

    foo = 7
    q(foo)

Take a good long look at that code sample, and then answer me this: What is the type of q?

If you said "callable module", you are right. Also, that is not a thing that exists in Python.

Also, check out the output in /tmp/q:

    0.0s <module>: foo=7

It knows the variable name. It also knows that it's being called at the module level; if we were in a function, <module> would be replaced with the name of the function.

You can also divide (/) or bitwise OR (|) values with q to log them as well. And you can decorate a function with it to trace the arguments and return value. It also has a method, q.d(), that starts an interactive session.
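Putting those together, a quick sketch of the invocation styles described above (foo and double are placeholder names):

    import q

    foo = 42
    q | foo               # logs foo and passes it through
    bar = q / (foo + 1)   # same idea, with division

    @q                    # as a decorator: traces arguments and return value
    def double(x):
        return x * 2

    double(3)
    q.d()                 # starts an interactive session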

And it does all this in under 400 lines, the majority of which is either a docstring or code to format the output.

Spooky Spooky

How in the Hell

So first, let's get this callable module stuff out of the way. Here's the last two lines in q.py:

    # Install the Q() object in sys.modules so that "import q" gives a callable q.
    sys.modules['q'] = Q()

Turns out sys.modules is a dictionary with all the loaded modules, and you can just stuff it with whatever nonsense you like.

The Q class itself is super-fun. Check out the declaration:

    # When we insert Q() into sys.modules, all the globals become None, so we
    # have to keep everything we use inside the Q class.
    class Q(object):
        __doc__ = __doc__  # from the module's __doc__ above

        import ast
        import code
        import inspect
        import os
        import pydoc
        import sys
        import random
        import re
        import time
        import functools

"When we insert Q() into sys.modules, all the globals become None"

What? Why?! I mean I can see how that's not an issue for modules, which are usually the only things inside sys.modules, but still. I tried chasing this down, but the entire sys module is written in C, and that ain't my business.

Most of the other bits inside Q are straightforward by comparison; a few helpers for outputting stuff cleanly, overrides for __truediv__ and __or__ for those weird operator versions of logging, etc. If you've never heard of callable types [1] before, that's the reason why an instance of this class can be both called as a function and treated as a value.

So what's __call__ do?

Ghost Magic

    def __call__(self, *args):
        """If invoked as a decorator on a function, adds tracing output to
        the function; otherwise immediately prints out the arguments."""
        info = self.inspect.getframeinfo(self.sys._getframe(1), context=9)
        # ... snip ...

Welcome to the inspect module. Turns out, Python has a built-in module that lets you get all sorts of fun info about objects, classes, etc. It also lets you get info about stack frames, which store the state of each subroutine in the chain of subroutine calls that led to running the code that's currently executing.

Here, q is using a CPython-specific function sys._getframe to get a frame object for the code that called q, and then using inspect to get info about that code.

    # info.index is the index of the line containing the end of the call
    # expression, so this gets a few lines up to the end of the expression.
    lines = ['']
    if info.code_context:
        lines = info.code_context[:info.index + 1]

    # If we see "@q" on a single line, behave like a trace decorator.
    for line in lines:
        if line.strip() in ('@q', '@q()') and args:
            return self.trace(args[0])

...and then it just does a text search of the source code to figure out if it was called as a function or as a decorator. Because it can't just guess by the type of the argument being passed (you might want to log a function object), and it can't just return a callable that can be used as a decorator either.

trace is pretty normal, whatever that means. It just logs the intercepted arguments and return value / raised exception.

    # Otherwise, search for the beginning of the call expression; once it
    # parses, use the expressions in the call to label the debugging
    # output.
    for i in range(1, len(lines) + 1):
        labels = self.get_call_exprs(''.join(lines[-i:]).replace('\n', ''))
        if labels:
            break
    self.show(info.function, args, labels)
    return args and args[0]

The last bit pulls out labels from the source code; this is how q knows the name of the variable that you pass in. I'm not going to go line-by-line through get_call_exprs, but it uses the ast module to parse the function call into an Abstract Syntax Tree, and walks through that to find the variable names.

It goes without saying that you should never do any of this. Ever. Nothing is sacred when it comes to debugging, though, and q is incredibly useful when you're having trouble getting your program to print anything out sanely.

Also, if you're ever bored on a nice summer evening, check out the list of modules in the Python standard library. It's got everything.

  [1] Check out this page and search for "Callable Types" and/or __call__.


Mike Hoye: A Security Question

Mozilla planet - do, 08/06/2017 - 17:06

To my shame, I don’t have a certificate for my blog yet, but as I was flipping through some referer logs I realized that I don’t understand something about HTTPS.

I was looking into the fact that sometimes – about 1% of the time – I see non-S HTTP referers from Twitter’s t.co URL shortener, which I assume means that somebody’s getting man-in-the-middled somehow, and there’s not much I can do about it. But then I realized the implications of my not having a cert.

My understanding of how this works, per RFC7231 is that:

A user agent MUST NOT send a Referer header field in an unsecured HTTP request if the referring page was received with a secure protocol.

Per the W3C as well:

Requests from TLS-protected clients to non- potentially trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.

So, if that’s true and I have no certificate on my site, then in theory I should never see any HTTPS entries in my referer logs? Right?

Except: I do. All the time, from every browser vendor, feed reader or type of device, and if my logs are full of this then I bet yours are too.

What am I not understanding here? It’s not possible, there is just no way for me to believe that it’s two thousand and seventeen and I’m the only person who’s ever noticed this. I have to be missing something.

What is it?

FAST UPDATE: My colleagues refer me to this piece of the puzzle I hadn’t been aware of, and Francois Marier’s longer post on the subject. Thanks, everyone! That explains it.
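In short (my own summary): pages can declare a referrer policy that overrides the RFC default above, so an HTTPS page can still announce itself to plain-HTTP sites. For example:

    <!-- Sends the full URL as a Referer even to plain-HTTP destinations: -->
    <meta name="referrer" content="unsafe-url">
    <!-- Or, less leaky, send just the scheme/host/port: -->
    <meta name="referrer" content="origin">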

SECOND UPDATE: Well, it turns out it doesn’t completely explain it. Digging into the data and filtering out anything referred via Twitter, Google or Facebook, I’m left with two broad buckets. The first is almost entirely made up of feed readers; it turns out that most, and maybe almost all, feed aggregators do the wrong thing here. I’m going to have to look into that, because it’s possible I can solve this problem at the root.

The second is one really persistent person using Firefox 15. Who are you, guy? Why don’t you upgrade? Can I help? Email me if I can help.


Air Mozilla: Mozilla Science Lab June 2017 Bi-Monthly Community Call

Mozilla planet - do, 08/06/2017 - 17:00

Mozilla Science Lab Bi-monthly Community Call


Hacks.Mozilla.Org: Cross-browser extensions, available now in Firefox

Mozilla planet - do, 08/06/2017 - 16:54

We’re modernizing the way developers build extensions for Firefox! We call the new APIs WebExtensions, because they’re written using the technologies of the Web: HTML, CSS, and JavaScript. And just like the technologies of the Web, you can write one codebase that works in multiple places.

WebExtensions APIs are inspired by the existing Google Chrome extension APIs, and are supported by Opera, Firefox, and Microsoft Edge. We’re working to standardize these existing APIs as well as proposing new ones! Our goal is to make extensions as easy to share between browsers as the pages they browse, and powerful enough to let people customize their browsers to match their needs.
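As a taste, here is a minimal sketch of such an extension (the names are illustrative):

    manifest.json:

    {
      "manifest_version": 2,
      "name": "Hello WebExtensions",
      "version": "1.0",
      "browser_action": { "default_title": "Say hello" },
      "background": { "scripts": ["background.js"] }
    }

    background.js:

    // The same code loads in Firefox, Chrome, Opera and Edge; Firefox also
    // provides a promise-based "browser" namespace alongside the
    // callback-based "chrome" one.
    chrome.browserAction.onClicked.addListener(function () {
      console.log("Hello from a cross-browser extension!");
    });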

Want to know more?

  • Build WebExtensions
  • Port an existing Chrome extension

Air Mozilla: Reps Weekly Meeting Jun. 08, 2017

Mozilla planet - do, 08/06/2017 - 16:00

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


The Mozilla Blog: Increasing Momentum Around Tech Policy

Mozilla planet - do, 08/06/2017 - 14:30
Mozilla’s new tech policy fellowship brings together leading experts to advance Internet health around the world

 

Strong government policies and leadership are key to making the Internet a global public resource that is open and accessible to all.

To advance this work from the front lines, some of the world’s experts on these issues joined government service. These dedicated public servants have made major progress in recent years on issues like net neutrality, open data and the digital economy.

But as governments transition and government leaders move on, we risk losing momentum or even backsliding on progress made. To sustain that momentum and invest in those leaders, today the Mozilla Foundation officially launches a new Tech Policy Fellowship. The program is designed to give individuals with deep expertise in government and Internet policy the support and structure they need to continue their Internet health work.

The fellows, who hail from around the globe, will spend the next year working independently on a range of tech policy issues. They will collaborate closely with Mozilla’s policy and advocacy teams, as well as the broader Mozilla network and other key organizations in tech policy. Each fellow will bring their expertise to important topics currently at issue in the United States and around the world.

For example:

Fellow Gigi Sohn brings nearly 30 years of experience, most recently at the Federal Communications Commission (FCC), dedicated to defending and preserving fundamental competition and innovation policies for broadband Internet access. At a time when we are moving closer to a closed Internet in the United States, her expertise is more valuable than ever.

Fellow Alan Davidson will draw on his extensive professional history working to advance a free and open digital economy to support his work on education and advocacy strategies to combat Internet policy risks.

With the wave of data collection and use fast growing in government and the private sector, fellow Linet Kwamboka will analyze East African government practices for the collection, handling and publishing of data. She will develop contextual best practices for data governance and management.

Meet the initial cohort of the Tech Policy Fellows here and below, and keep an eye on the Tech Policy Fellowship website for ways to collaborate in this work.

 

Our Mozilla Tech Policy Fellows

 

Alan Davidson | @abdavdson

Alan will work to produce a census of major Internet policy risks and will engage in advocacy and educational strategy to minimize those risks. Alan is also a Fellow at New America in Washington, D.C. Until January 2017, he served as the first Director of Digital Economy at the U.S. Department of Commerce and a Senior Advisor to the Secretary of Commerce. Prior to joining the department, Davidson was the director of the Open Technology Institute at New America. Earlier, Davidson opened Google’s Washington policy office in 2005 and led the company’s public policy and government relations efforts in North and South America. He was previously Associate Director of the Center for Democracy and Technology. Alan has a bachelor’s degree in mathematics and computer science and a master’s degree in technology and policy from the Massachusetts Institute of Technology (MIT). He is a graduate of Yale Law School.

 

Credit: New America

Amina Fazlullah

Amina Fazlullah will work to promote policies that support broadband connectivity in rural and vulnerable communities in the United States. Amina joins the fellowship from her most recent role as Policy Advisor to the National Digital Inclusion Alliance, where she led efforts to develop policies that support broadband deployment, digital inclusion, and digital equity efforts across the United States. Amina has worked on a broad range of Internet policy issues including Universal Service, consumer protection, antitrust, net neutrality, spectrum policy and children’s online privacy. She has testified before Congress, the Federal Communications Commission, the Department of Commerce and Federal Trade Commission. Amina was formerly the Benton Foundation’s Director of Policy in Washington, D.C., where she worked to further government policies to address communication needs of vulnerable communities. Before that, Amina worked with the U.S. Public Interest Research Group, for the Honorable Chief Judge James M. Rosenbaum of the U.S. District Court of Minnesota and at the Federal Communications Commission. She is a graduate of the University of Minnesota Law School and Pennsylvania State University.

 

Camille Fischer | @camfisch

Camille will be working to promote individual rights to privacy, security and free speech on the Internet. In the last year of the Obama Administration, Camille led the National Economic Council’s approach to consumers’ economic and civil rights on the Internet and in emerging technologies. She represented consumers’ voices in discussions with other federal agencies regarding law enforcement access to data, including encryption and international law enforcement agreements. She has run commercial privacy and security campaigns, like the BuySecure Initiative to increase consumers’ financial security, and also worked to promote an economic voice within national security policy and to advocate for due process protections within surveillance and digital access reform. Before entering the government as a Presidential Management Fellow, Camille graduated from Georgetown University Law Center where she wrote state legislation for the privacy-protective commercial use of facial recognition technology. Camille is also an amateur photographer in D.C.

 

Caroline Holland

Caroline will be working to promote a healthy internet by exploring current competition issues related to the Internet ecosystem. Caroline served most recently as Chief Counsel for Competition Policy and Intergovernmental Relations at the U.S. Department of Justice Antitrust Division. In that role, she was involved in several high-profile matters while overseeing the Division’s competition policy and advocacy efforts, interagency policy initiatives, and congressional relations. Caroline previously served as Chief Counsel and Staff Director of the Senate Antitrust Subcommittee where she advised the committee chairmen on a wide variety of competition issues related to telecommunications, technology and intellectual property. Before taking on this role, she was a counsel on the Senate Judiciary Committee and an attorney in private practice focusing on public policy and regulatory work. Caroline holds a J.D. from Georgetown University Law Center and a B.A. in Public Policy from Trinity College in Hartford, Connecticut. Between college and law school, Caroline served in the Antitrust Division as an honors paralegal and as Clerk of the Senate Antitrust Subcommittee.

 

Linet Kwamboka | @linetdata

Linet will work on understanding the policies that guide data collection and dissemination in East Africa (Kenya, Uganda, Tanzania and Rwanda). Through this, she aims to publish policy recommendations on existing policies, proposed policy amendments and a report outlining her findings. Linet is the Founder and CEO of DataScience LTD, which builds information systems to generate and use data to discover intelligent insights about people, products and services for resource allocation and decision making. She was previously the Kenya Open Data Initiative Project Coordinator for the Government of Kenya at the Kenya ICT Authority. Linet is also a director of the World Data Lab–Africa, working to make data personal, tangible and actionable to help citizens make better informed choices about their lives. She also consults with the UNDP in the Strengthening Electoral Processes in Kenya Project, offering support to the Independent Electoral Boundaries Commission in information systems and technology. She has worked at the World Bank as a GIS and Technology Consultant and was a Software Engineering Fellow at Carnegie Mellon University, Pittsburgh. Her background is in computer science, data analysis and Geographical Information Systems. Linet is a recognized unsung hero by the American Embassy in Kenya in her efforts to encourage more women into technology and computing, has been a finalist in the Bloomberg award of global open data champions and is a member of the Open Data Institute Global Open Data Leaders’ Network.

 

Terah Lyons | @terahlyons

Terah will work on advancing policy and governance around the future of machine intelligence, with a specific focus on coordination in international governance of AI. Her work targets questions related to the responsible development and deployment of AI and machine learning, including how society can minimize the risks of AI while maximizing its benefits, and what AI development and advanced automation means for humankind across cultural and political boundaries. Terah is a former Policy Advisor to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy (OSTP). She most recently led a policy portfolio in the Obama Administration focused on machine intelligence, including AI, robotics, and intelligent transportation systems. In her work at OSTP, Terah helped establish and direct the White House Future of Artificial Intelligence Initiative, oversaw robotics policy and regulatory matters, led the Administration’s work from the White House on civil and commercial unmanned aircraft systems/drone integration into the U.S. airspace system, and advised on Federal automated vehicles policy. She also advised on issues related to diversity and inclusion in the technology industry and entrepreneurial ecosystem. Prior to her work at the White House, Terah was a Fellow with the Harvard School of Engineering and Applied Sciences based in Cape Town, South Africa. She is a graduate of Harvard University, where she currently sits on the Board of Directors of the Harvard Alumni Association.

 

Marilia Monteiro

Marilia will be analyzing consumer protection and competition policy to contribute to the development of sustainable public policies and innovation. From 2013-15, she was Policy Manager at the Brazilian Ministry of Justice’s Consumer Office, coordinating public policies for consumer protection in digital markets and law enforcement actions targeting ISPs and Internet applications. She has researched the intersection between innovation technologies and society in different areas: current democratic innovations in Latin America regarding e-participation at the Wissenschaftszentrum Berlin für Sozialforschung and the development of public policies on health privacy and data protection at the “Privacy Brazil” project with the Internet Lab in partnership with Ford Foundation in Brazil. She is a board member at Coding Rights, a Brazilian-born, women-led, think-and-do tank, and is active in Internet Governance fora. Marilia holds a Master in Public Policy from the Hertie School of Governance in Berlin focusing on policy analysis and a bachelor’s in Law from Fundação Getulio Vargas School of Law in Rio de Janeiro, and specialises in digital rights.

 

Jason Schultz | @lawgeek

Jason will analyze the impacts and effects of new technologies such as artificial intelligence/machine learning and the Internet of Things through the lenses of consumer protection, civil liberties, innovation, and competition. His research aims to help policymakers navigate these important legal concerns while still allowing for open innovation and for competition to thrive. Jason is a Professor of Clinical Law, Director of NYU’s Technology Law & Policy Clinic, and Co-Director of the Engelberg Center on Innovation Law & Policy. His clinical projects, research, and writing primarily focus on the ongoing struggles to balance traditional areas of law such as intellectual property, consumer protection, and privacy with the public interest in free expression, access to knowledge, civil rights, and innovation in light of new technologies and the challenges they pose. During the 2016-2017 academic year, Jason was on leave at the White House Office of Science and Technology Policy, where he served as Senior Advisor on Innovation and Intellectual Property to former U.S. Chief Technology Officer Megan Smith. With Aaron Perzanowski, he is the author of The End of Ownership: Personal Property in the Digital Economy (MIT Press 2016), which argues for retaining consumer property rights in a marketplace that increasingly threatens them. Prior to joining NYU, Jason was an Assistant Clinical Professor of Law and Director of the Samuelson Law, Technology & Public Policy Clinic at the UC Berkeley School of Law (Boalt Hall). Before joining Boalt Hall, he was a Senior Staff Attorney at the Electronic Frontier Foundation and before that practiced intellectual property law at the firm of Fish & Richardson, PC. He also served as a clerk to the Honorable D. Lowell Jensen of the Northern District of California. He is a member of the American Law Institute.

 

Gigi Sohn | @gigibsohn

Gigi will be working to promote an open Internet in the United States. She is one of the nation’s leading public advocates for open, affordable, and democratic communications networks. Gigi is also a Distinguished Fellow at the Georgetown Law Institute for Technology Law & Policy and an Open Society Foundations Leadership in Government Fellow. For nearly 30 years, Gigi has worked across the United States to defend and preserve the fundamental competition and innovation policies that have made broadband Internet access more ubiquitous, competitive, affordable, open, and protective of user privacy. Most recently, Gigi was Counselor to the former Chairman of the U.S. Federal Communications Commission, Tom Wheeler, whom she advised on a wide range of Internet, telecommunications, and media issues. Gigi was named by the Daily Dot in 2015 as one of the “Heroes Who Saved the Internet” in recognition of her role in the FCC’s adoption of the strongest-ever net neutrality rules. Gigi co-founded and served as CEO of Public Knowledge, the leading communications policy advocacy organization. She was previously a Project Specialist in the Ford Foundation’s Media, Arts and Culture unit and Executive Director of the Media Access Project, the first public interest law firm in the communications space. Gigi holds a B.S. in Broadcasting and Film, Summa Cum Laude, from the Boston University College of Communication and a J.D. from the University of Pennsylvania Law School.

 

Cori Zarek | @corizarek

Cori is the Senior Fellow leading the Tech Policy Fellows team and serving as a liaison with the Mozilla Foundation. Her work as a fellow will focus on the intersection of tech policy and transparency. Before joining Mozilla, Cori was Deputy U.S. Chief Technology Officer at the White House, where she led the team’s work to build a more digital, open, and collaborative government. Cori also coordinated U.S. involvement with the global Open Government Partnership, a 75-country initiative driving greater transparency and accountability around the world. Previously, she was an attorney at the U.S. National Archives, working on open government and freedom of information policy. Before joining the U.S. government, Cori was the Freedom of Information Director at The Reporters Committee for Freedom of the Press, where she assisted journalists with legal issues, and she also practiced for a Washington law firm. Cori received her B.A. from the University of Iowa, where she was editor of the student-run newspaper, The Daily Iowan, and her J.D. from the University of Iowa, where she wrote for the Iowa Law Review and The Des Moines Register. She was inducted into the Freedom of Information Hall of Fame in 2016. Cori is also the President of the D.C. Open Government Coalition and teaches a media law class at American University.

The post Increasing Momentum Around Tech Policy appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

About:Community: Firefox 54 new contributors

Mozilla planet - do, 08/06/2017 - 05:06

With the release of Firefox 54, we are pleased to welcome the 36 developers who contributed their first code change to Firefox in this release, 33 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.18

Mozilla planet - do, 08/06/2017 - 02:00

The Rust team is happy to announce the latest version of Rust, 1.18.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.18 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.18.0 on GitHub.

What’s in 1.18.0 stable

As usual, Rust 1.18.0 is a collection of improvements, cleanups, and new features.

One of the largest changes is a long time coming: core team members Carol Nichols and Steve Klabnik have been writing a new edition of “The Rust Programming Language”, the official book about Rust. It’s being written openly on GitHub, and has over a hundred contributors in total. This release includes the first draft of the second edition in our online documentation. 19 out of 20 chapters have a draft; the draft of chapter 20 will land in Rust 1.19. When the book is done, a print version will be made available through No Starch Press, if you’d like a paper copy. We’re still working with the editors at No Starch to improve the text, but we wanted to start getting a wider audience now.

The new edition is a complete re-write from the ground up, using the last two years of knowledge we’ve gained from teaching people Rust. You’ll find brand-new explanations for a lot of Rust’s core concepts, new projects to build, and all kinds of other good stuff. Please check it out and let us know what you think!

As for the language itself, an old feature has learned some new tricks: the pub keyword has been expanded a bit. Experienced Rustaceans will know that items are private by default in Rust, and you can use the pub keyword to make them public. In Rust 1.18.0, pub has gained a new form:

pub(crate) fn bar() {}

The part inside the parentheses is a ‘restriction’, which refines how the item is made public. Using the crate keyword, as in the example above, means that bar is public to the entire crate, but not outside of it. This makes it easier to declare APIs that are “public to your crate” but not exposed to your users. This was possible with the existing module system, but often very awkward.

You can also specify a path, like this:

pub(in a::b::c) fn foo() {}

This means “usable within the hierarchy of a::b::c, but not elsewhere.” This feature was defined in RFC 1422 and is documented in the reference.
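To make the two forms concrete, here is a minimal sketch; the module layout and function names are invented for illustration, and the bare a::b path follows the 2015-edition syntax used above (later editions spell it crate::a::b):

mod a {
    pub mod b {
        pub mod c {
            // Usable anywhere under a::b, but not outside that subtree.
            pub(in a::b) fn helper() -> u32 { 40 }

            // Usable anywhere in this crate, but never by downstream crates.
            pub(crate) fn compute() -> u32 { helper() + 2 }
        }
    }
}

fn main() {
    // Fine: we are inside the same crate.
    println!("{}", a::b::c::compute());
    // a::b::c::helper() would not compile here, because helper is
    // only visible within the a::b hierarchy.
}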

For our Windows users, Rust 1.18.0 has a new attribute, #![windows_subsystem]. It works like this:

#![windows_subsystem = "console"]
#![windows_subsystem = "windows"]

These control the /SUBSYSTEM flag in the linker. For now, only "console" and "windows" are supported.

When is this useful? In the simplest terms, if you’re developing a graphical application and do not specify "windows", a console window will flash up when your application starts. With the attribute set to "windows", it won’t.
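For a graphical program, the whole opt-in is one line at the top of the crate root. This is a minimal sketch; the attribute should have no effect when compiling for non-Windows targets:

#![windows_subsystem = "windows"]

fn main() {
    // A real application would create its window here. With the
    // attribute above, launching the program on Windows no longer
    // flashes up a console window.
}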

Finally, Rust’s tuples, enum variant fields, and structs (without #[repr]) have always had an unspecified layout. We’ve turned on automatic field reordering, which can result in smaller sizes by reducing padding. Consider a struct like this:

struct Suboptimal(u8, u16, u8);

In previous versions of Rust on the x86_64 platform, this struct would have a size of six bytes. But looking at the source, you’d expect it to have four. The extra two bytes come from padding: given that we have a u16 here, it should be aligned to two bytes, but in this case it’s at offset one. To move it to offset two, a byte of padding is placed after the first u8. To give the whole struct a proper alignment, another byte is added after the second u8 as well, giving us 1 + 1 (padding) + 2 + 1 + 1 (padding) = 6 bytes.

But what if our struct looked like this?

struct Optimal(u8, u8, u16);

This struct is properly aligned; the u16 lies on a two byte boundary, and so does the entire struct. No padding is needed. This gives us 1 + 1 + 2 = 4 bytes.

When designing Rust, we left the details of memory layout undefined for just this reason. Because we didn’t commit to a particular layout, we can make improvements to it, such as in this case, where the compiler can automatically reorder Suboptimal’s fields into Optimal’s layout. And if you check the sizes of Suboptimal and Optimal on Rust 1.18.0, you’ll see that they both have a size of four bytes.

We’ve been planning this change for a while; previous versions of Rust included this optimization on the nightly channel, but some people wrote unsafe code that assumed the exact details of the representation. We rolled it back while we fixed all instances of this that we knew about, but if you find that some code breaks due to this, please let us know so we can help fix it! Structs used for FFI can be given the #[repr(C)] annotation, which prevents reordering and guarantees a C-compatible field layout.
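A quick way to see the effect is to print the sizes with std::mem::size_of. This sketch assumes an x86_64 target, matching the numbers above; CCompatible is a hypothetical name showing the #[repr(C)] opt-out:

use std::mem::size_of;

struct Suboptimal(u8, u16, u8);
struct Optimal(u8, u8, u16);

// #[repr(C)] keeps the declared field order, so the padding returns.
#[repr(C)]
struct CCompatible(u8, u16, u8);

fn main() {
    println!("{}", size_of::<Suboptimal>());  // 4 on Rust 1.18: fields reordered
    println!("{}", size_of::<Optimal>());     // 4: already optimally laid out
    println!("{}", size_of::<CCompatible>()); // 6: declared order preserved
}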

We’ve been planning to move rustdoc to a CommonMark-compliant Markdown parser for a long time now. However, just switching over can introduce regressions where the CommonMark spec differs from our existing parser, Hoedown. As part of the transition plan, a new flag has been added to rustdoc, --enable-commonmark, which uses the new parser instead of the old one. Please give it a try! As far as we know, both parsers produce identical results, but we’d be interested to hear if you find a scenario where the rendered results differ!
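Trying the new parser is a one-line change to an ordinary rustdoc invocation; src/lib.rs below is just a placeholder path:

$ rustdoc --enable-commonmark src/lib.rs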

Finally, compiling rustc itself is now 15%-20% faster. Each commit message in this PR goes over the details; there were some inefficiencies, and now they’ve been cleaned up.

See the detailed release notes for more.

Library stabilizations

Seven new APIs were stabilized in this release.

See the detailed release notes for more.

Cargo features

Cargo has added support for the Pijul VCS, which is written in Rust. cargo new my-awesome-project --vcs=pijul will get you going!

To supplement the --all flag, Cargo now has several new flags such as --bins, --examples, --tests, and --benches, which will let you build all programs of that type.
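As a sketch of how these flags combine with everyday commands (the targets built are whatever your Cargo.toml defines):

$ cargo build --examples       # build every example target
$ cargo build --bins --tests   # build all binaries and all test targets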

Finally, Cargo now supports Haiku and Android!

See the detailed release notes for more.

Contributors to 1.18.0

Many people came together to create Rust 1.18. We couldn’t have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

Air Mozilla: Bugzilla Project Meeting, 07 Jun 2017

Mozilla planet - wo, 07/06/2017 - 22:00

Bugzilla Project Meeting: the Bugzilla Project developers’ meeting.

Categorieën: Mozilla-nl planet
