
Mozilla Nederland
The Dutch Mozilla community

Mozilla Open Design Blog: Zilla Slab: A common language through a shared font

Mozilla planet - Mon, 10/07/2017 - 13:00

How do the rules change when you design in the open? That is a question I asked myself many times as I prepared to join the Mozilla brand team back in November 2016. Having never worked in an open source company, I knew that I would need to prepare myself for a mental shift.

Brand design systems are often managed tightly by internal creative teams, with strict guidelines. The guidelines are shared, like canons, with outside agencies. At Mozilla, we pride ourselves on our relationship with a passionate community of volunteers who outnumber our employees 10 to 1. With this community, strict control doesn’t work, but we still need to provide a design system that includes their views and results in better designs. Here’s a conversation discussing the process with Yuliya Gorlovetsky, Mozilla’s Associate Creative Director, and font expert Peter Biľak of Typotheque, who helped with the font design.

Tools not rules

Logo refinement from Johnson Banks and Fontsmith.

Yuliya: When I joined the Mozilla brand team, the new identity was close to completion. The concept relied heavily on typography, and Johnson Banks had chosen a slab serif for the logo. Slab serifs are among the less common and less well-known serifs. They are often bolder, sharper, and have a strong personality. We saw an opportunity for Mozilla to build on this by creating a custom font to support the new identity system – a distinct typeface for the identity refresh that we planned to open source for all to use.

Jordan Gushwa, a Creative Lead at Mozilla Foundation, and I both agreed that adding an elegant yet flexible slab serif would help us refine the brand. It would allow us, when the opportunity presented itself, to design experiences that lead with typography and written messages, akin to those you might find in lifestyle magazines or design publications. Mozilla is investing in content creation, as it becomes increasingly important to tell the stories of how we stay safe and connected on the web.

To develop the Zilla Slab, we worked with Peter Biľak and Nikola Djurek from Typotheque. We respected their work and valued their expertise in multi-language support. I remember that Peter was surprised during our first call when I said we were interested in Slab Serifs that would be at home in top notch magazines or web publications. He happily added those options to our font explorations.

Peter: Slab serif typefaces, sometimes called Egyptians, are usually angular and robust in construction. They hint at a technological underpinning, with aesthetics that are characteristic of the early 19th century Industrial Revolution.

When Yuliya called us at Typotheque, the decision had already been made to use a Slab serif typeface. I wanted to understand the rationale and possibly consider alternatives. Quickly I came to understand that this was a well-informed decision, looking for less explored areas of typeface categorisation.

Yuliya: Peter pulled together multiple options of baseline slab serifs from his catalog and showed us how they could evolve to support our logo design needs. Looking for a font that could flex from display to body copy, we quickly settled on Tesla as our base font. Tesla had a good balance of original details but also the evenness that would support a variety of subject matter.

 

Getting into the details

At the start, the main focus was on the logotype. Peter kept us accountable by illustrating the impact that the decisions we made for the logotype letters would have on the full font family. When showing different logo options, he would extract the letters from the logo and show how the different design decisions would show up in other letters in the typeface.

This way we could consider, side by side, not only the logo but the full typeface we would have. It was a hard line to walk. You want to just focus on the logo, because that is the combination of letters that will appear time and time again, but since we wanted the logo to connect to the font, we had to find a place of compromise. We needed to make decisions that equally supported both the logo and the font. Most of the exploration played out in the three main letters: m, z, and a. It’s amazing how many people will ask if they can stop coming to meetings when you talk through 10 different ‘a’ options.

Peter: We looked at various Slab Serif models: earlier models proposed by Johnson Banks, which were typical heavy geometric Slabs, and then Sans serif typefaces that we considered turning into Slabs. From our collection we looked at Irma Text Slab, Charlie, Lumin and Zico, before settling on Tesla Slab as a starting point. Yuliya was intrigued by models that exhibited unusual traits: shifted axes of contrast of thick and thin strokes, based on cursive writing rather than geometric construction, or even an asymmetric serif structure. The lowercase ‘a’ is a more complex letter that provides clues to how other letters may look in a full exploration of the logotype. We wanted to bring an angled stroke to the top of the ‘a’, mimicking the slashes of the internet protocol. The other letters would then need to follow the same construction principles.

Yuliya: We were able to narrow it down to two design directions:

  1. A dressed-down, simplified slab font with a more geometric “a”. Geometric fonts are created from basic shapes, such as straight, monolinear lines and circular shapes. They lack ornamentation and rarely appear with serifs. The “a” we considered here was rounder, more upright, and had no serif at the end.
  2. A more serifed font with a more humanist “a” and a very pronounced serif. Humanist serifs are the very first kind of Roman typeface. The letters were originally drawn with a pen held at a consistent angle, creating a consistent visual rhythm. The “a” we considered here had a very pronounced two-level serif, tilted at a slight angle to respond to the “/” that came before it.

 

Peter: Most of the Slab typefaces are static and geometric in construction. “Static” refers here to the axis of contrast. When the axis is 0 degrees, typefaces are usually described as static. When there is an angle, they become ‘dynamic’. We experimented with injecting more humanistic values into the traditional Slab model, with the help of more calligraphic stroke terminations, which not only improve legibility but also create a more flowing rhythm of letter shapes.

Yuliya: After much debate and 35 rounds of review, we narrowed the directions down to two top choices. We then guerrilla tested them: we showed the two directions to some folks within Mozilla, and to some colleagues and friends outside of Mozilla, and asked which one they gravitated towards and why. We also asked what emotions the different directions inspired in them. These conversations gave us the insight to proceed with the more humanist version, but to simplify the serif on the “a”. The result was an overall simplification of the font, which we lovingly call Zilla Slab. We think that Zilla Slab is a casual and contemporary slab serif that still has a good amount of quirk.

We launched the logo in the middle of January and applied it to just a few assets: web headers, signage around the office and some print applications. After the launch, Peter and his team continued to work with us through the details of the full font family. Peter regularly shared the progress, which we in turn shared with many designers and fellow type enthusiasts across Mozilla. Turns out a lot of folks across Mozilla self-identify as type nerds!

Sophisticated italics

Italics came next, and they are graceful and sharp. As a humanist slab, the Zilla Slab italics are closer to true italics and add a softness to the overall typography system. There are moments when I’m reviewing work and I have to pause and stare at the curved slant detail in the v, w, x, and y of the italic letters.

Peter: Italics are generally not used for longer texts.  The function of italics is to emphasise short passages, so they offer more space for expression. Since we aimed to make Zilla Slab a more affable typeface with humanistic elements, the Italics offered an opportunity to go fully in that direction. We based the Italic not on a slanted Roman, but on a cursive broad-nib writing style. Diagonals in italic often break the rhythm of writing, so we introduced curved diagonals that work well with cursive italic by maintaining a smoother flow.

 

Building in the highlight effect

Yuliya: Johnson Banks originally modeled the highlight effect for the identity system on the functional act of highlighting a piece of type with your mouse on a screen or within software, or code inspector.

From the original guideline from Johnson Banks. The red rectangle is the size of the colon rectangle in the logo.

 

We continued to expand the identity system and ask people internally for feedback. As we put it to use, we quickly discovered that typing out the words and manually drawing out the highlight box to contain those letters not only took a lot of effort, but also created visual inconsistencies. Sizing the letters and the highlight box separately and having them come together in the same way time and time again requires a lot of math and visual tuning. Following our idea of focusing on tools that enable people rather than rules that restrict, we asked Typotheque to create a true highlight version of Zilla Slab. The highlight weight would show the counter shapes between letters rather than the letterforms themselves. These additional weights of Zilla Slab make it easy for anyone to contribute to the brand identity without being limited by design software or design knowledge. Building the logic and rules into the font makes it a seamless part of the system.

Peter: Crafting a wordmark or a logo is usually a different process from developing a functional typeface, as wordmarks can create more context-specific design and brand solutions, which may not work well in a font. With Zilla Slab, we worked on the wordmark and the typeface at the same time, and had to anticipate how the wordmark features could be translated to other glyphs not present in the original wordmark. This possibly extends to other writing scripts.

Typing “Mozilla” in Zilla Slab Highlight Bold

 

Yuliya: The final touch in making the font a completely functional tool for the brand was asking Peter whether Zilla Slab could be automated so that the Mozilla logo could be typed out using the bold highlight weight of Zilla Slab. This would allow browsers and native applications to turn “Mozilla” automatically into “moz://a” in specific cases. It would free people from needing to place and attach a static image version of the logo, and free us from managing those files. A logo for an open source company, typed out in its open source font.

Peter: Since the Zilla font is used to create the new Mozilla logo, we included an OpenType substitution feature, triggered by the Ligature function, that replaces the ‘ill’ letters with ‘://’. This should only replace the ‘ill’ when intended: not in a word like ‘illustration’, but always in ‘mozilla’ or ‘Mozilla’, for example.
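For web content, a ligature-based substitution like this can be toggled from CSS. A minimal sketch, assuming the substitution is registered as a standard ligature (the class names and that assumption are illustrative, not details confirmed above):

/* opt in: let "Mozilla" render as "moz://a" when set in the highlight weight */
.wordmark {
  font-family: "Zilla Slab Highlight", serif;
  font-variant-ligatures: common-ligatures;
}

/* opt out: keep the literal letters */
.plain {
  font-variant-ligatures: no-common-ligatures;
}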

 

Looking Ahead

Design work from the design team. Special attribution to Patrick Crawford.

 

Over five months of development, as Peter’s team worked through the font details, our design team worked in parallel to test Zilla Slab in a broad range of design applications. It has proven to be as unique and flexible as we had hoped it would be. Zilla Slab has taken its place as the unifying component of our design system. After all, it’s through typography that we see language, and at Mozilla we all have a lot to say.

The roll out of Zilla Slab has also helped to unite our different design teams and give all of our Mozilla contributors a shared visual voice. We are launching Zilla Slab with support for 70 European Latin based languages, and we can’t wait to continue our work with Typotheque and localize it to additional alphabets.

This is one of our first shared tools within our identity system. We will keep adding more tools, writing about them, and designing these tools in the open with you.

Download Zilla Slab on Github or Google Fonts.

The post Zilla Slab: A common language through a shared font appeared first on Mozilla Open Design.


Mozilla VR Blog: Easily customized environments using the Aframe-Environment-Component

Mozilla planet - Mon, 10/07/2017 - 12:23

Get a fresh new environment for your A-Frame demos and experiments with the aframe-environment component!

Just include the aframe-environment-component.min.js script in your HTML file, add an <a-entity environment></a-entity> to your <a-scene>, and voilà!


<html>
  <head>
    <script src="path/to/aframe.js"></script>
    <script src="path/to/aframe-environment-component.js"></script>
  </head>
  <body>
    <a-scene>
      <a-entity environment></a-entity>
    </a-scene>
  </body>
</html>

The component generates a new environment with presets for lights and geometry. These presets can be easily customized by using the inspector (ctrl + alt + i) and tweaking the individual values until you find the look you like. Presets are combinations of property values that define a particular style; they are a starting point that you can later customize:

<a-entity environment="preset: goldmine; sunPosition: 1 5 -2; groundColor: #742"></a-entity>

You can view and try all the presets from the aframe-environment-component example page.

And of course, the component is fully customizable without a preset:

<a-entity environment="skyType: gradient; skyColor: #1d7444; horizonColor: #7ae0e0; groundTexture: checkerboard; groundColor: #523c60; groundColor2: #544264; dressing: cubes; dressingAmount: 15; dressingColor: #7c5c45"></a-entity>

TIP: If you are using the inspector and are happy with the look of your environment, open your browser's dev tools (F12) and copy the latest parameters from the console.

Customizing your environment

The environment component defines four different aspects of the scene: lighting, sky, ground terrain and dressing objects.

Lighting and mood

The lighting in your scene is easily adjusted by changing the sunPosition property. Scene objects will subtly receive a bounce light from the ground, and the color of the fog will also change to match the sky color at the horizon.


To fully control the lighting of the scene, you can disable the environment lights with lighting: none, and you can set lighting: point if you want a point light instead of a distant light for the sun.

Add realism to your scene by adding shadows: toggle on the shadow parameter and add the shadow component to objects that should cast shadows onto the ground. Learn more about A-Frame shadows.
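A minimal sketch of that setup (the box and its position are invented for illustration; the environment shadow parameter and A-Frame's shadow component are the pieces described above):

<a-scene>
  <!-- environment with its shadow parameter toggled on -->
  <a-entity environment="preset: goldmine; shadow: true"></a-entity>
  <!-- an object that should cast a shadow onto the ground -->
  <a-box position="0 1 -3" color="#b74" shadow="cast: true"></a-box>
</a-scene>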

Sky and atmosphere

The 200m-radius sky dome can have a basic color, a top-down gradient, or a realistic atmospheric look by using skyType: atmosphere. Lowering the sun near or below the horizon will give you a starry night sky.
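For example, two alternative configurations (the sunPosition values are just an illustration of "high" versus "below the horizon"):

<!-- daytime atmosphere -->
<a-entity environment="skyType: atmosphere; sunPosition: 1 1 -1"></a-entity>

<!-- sun below the horizon: starry night sky -->
<a-entity environment="skyType: atmosphere; sunPosition: 0 -0.05 -1"></a-entity>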

Ground terrain

The ground is a flat subdivided plane that can be deformed into various terrain patterns like hills, canyons, or spikes. Its appearance can also be customized via its texture and colors.

The center play area where the player is initially positioned is always flat, so nobody will get buried ;)

The grid property will add a grid texture to the ground and can be adjusted to different colors and patterns.
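A combined sketch (the specific ground and grid values are assumptions about the parameter vocabulary; the color properties appear in the earlier full example):

<a-entity environment="ground: canyon; groundColor: #445; groundColor2: #465; grid: cross; gridColor: #6cc"></a-entity>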

Dressing objects

A sky and ground with nothing more could be a little too simple sometimes. The environment component includes many families of objects that can be used to spice up your scene, including cubes, pyramids, towers, mushrooms and more. Among other parameters, you can adjust their variation using dressingVariance, or the ratio of objects that will be inside or outside the play area with dressingOnPlayArea.
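For instance (the dressing style name and parameter values are illustrative):

<a-entity environment="dressing: mushrooms; dressingAmount: 50; dressingColor: #897; dressingVariance: 2 2 2; dressingOnPlayArea: 0.2"></a-entity>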

All dressing objects share the same material and are all merged in one single geometry for better performance.


Further customization

To see the full list of parameters of the component, check out GitHub's aframe-environment-component repository.

Help make this component better

We could use your help!

  • File github issues
  • Create a new preset
  • Share your presets! So anyone can copy/paste and even try live
  • Create new dressing geometries
  • Create new procedural textures
  • Create new ground types
  • Create new grid styles

Feel free to send a pull request to the repository!

Performance considerations

The main idea of this component is to have a complete and visually interesting environment by including a single JavaScript file, with no extra includes or dependencies. This requires that assets be included in the JavaScript or (in most cases) generated procedurally. Despite the computing time and increased file size, both options are normally faster than requesting and waiting for additional textures or model files.

Apart from dressingAmount, the parameters make little difference in terms of performance.


Christian Heilmann: Debugging JavaScript – console.loggerheads?

Mozilla planet - Sat, 08/07/2017 - 19:35

The last two days I ran a poll on Twitter asking people what they use to debug JavaScript.

  • console.log() which means you debug in your editor and add and remove debugging steps there
  • watches which means you instruct the (browser) developer tools to log automatically when changes happen
  • debugger; which means you debug in your editor but jump into the (browser) developer tools
  • breakpoints which means you debug in your (browser) developer tools

The reason was that, having worked with editors and developer tools in browsers, I was curious how much each is used. I also wanted to challenge my own impression of being a terrible developer for not using the great tools we have to the fullest. Frankly, I feel overwhelmed with the offerings and choices we have, and I felt that I was pretty much set in my ways of developing.

Developer tools for the web have advanced in leaps and bounds in the last few years, and a lot of browser makers' effort goes into them. They are seen as a sign of how important the browser is. The overall impression is that when you get the inner circle of technically savvy people excited about your product, the others will follow. Furthermore, making it easier to build for your browser and giving developers insights into what is going on should lead to better products running in your browser.

I love the offerings we have in browser developer tools these days, but I don’t quite find myself using all the functionality. Turns out, I am not alone:

The results of 3970 votes in my survey were overwhelmingly in favour of console.log() as a debugging mechanism.

Twitter poll results: 67% console, 2% watches, 15% debugger and 16% breakpoints.

Both the Twitter poll and its correlating Facebook post had some interesting reactions.

  • As with any too simple poll about programming, a lot of them argued with the questioning and rightfully pointed out that people use a combination of all of them.
  • There was also a lot of questioning why alert() wasn’t an option as this is even easier than console().
  • There was quite some confusion about debugger; – seems it isn’t that common
  • There was only a small amount of trolling – thanks.
  • There were also quite a few mentions of how tests and test-driven development make debugging unimportant.

There is no doubt that TDD and tests make for less surprises and are good development practice, but this wasn’t quite the question here. I also happily discard the numerous mentions of “I don’t make mistakes”. I was pretty happy to have had only one mention of document.write() although you do still see it a lot in JavaScript introduction courses.

What this shows me is a few things I’ve encountered myself doing:

  • Developers who’ve been developing in a browser world have largely been conditioned to use simple editors, not IDEs. We’ve been conditioned to use a simple alert() or console.log() in our code to find out that something went wrong. In a lot of cases, this is “good enough”
  • With browser developer tools becoming more sophisticated, we use breakpoints and step-by-step debugging when there are more baffling things to figure out. After all, console.log() doesn’t scale when you need to track various changes. It is, however, not our first go-to. This is still adding something in our code, rather than moving away from the editor to the debugger
  • I sincerely hope that most of the demands for alert() were in a joking fashion. Alert had its merits, as it halted the execution of JavaScript in a browser. But all it gives you is a string, and a display of [object Object] is not the most helpful (the short snippet after this list shows the difference).
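To make the comparison concrete, here is a tiny made-up example of the three approaches side by side:

// debugging the same object three ways (illustrative only)
var user = { name: 'Ada', visits: 3 };

alert(user);        // halts the page, but only shows "[object Object]"
console.log(user);  // logs the full object, but you have to find and remove this line again later
debugger;           // pauses here while dev tools are open, with the whole scope inspectable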
Why aren’t we using breakpoint debugging?

There should not be any question that breakpoint debugging is vastly superior to simply writing values into the console from our code:

  • You get proper inspection of the whole state and environment instead of one value
  • You get all the other insights proper debuggers give you like memory consumption, performance and so on
  • It is a cleaner way of development. All that goes in your code is what is needed for execution. You don’t mix debugging and functionality. A stray console.log() can give out information that could be used as an attack vector. A forgotten alert() is a terrible experience for our end users. A forgotten “debugger;” or breakpoint is a lot less likely to happen as it does pause execution of our code. I also remember that in the past, console.log() in loops had quite a performance impact on our code.

Developers who are used to an IDE to create their work are much more likely to know their way around breakpoint debugging and use it instead of extra code. I’ve been encountering a lot of people in my job who would never touch a console.log() or an alert() since I started working at Microsoft. As one response to the poll rightfully pointed out, it is simpler:

It's even longer to write console.log than to put a breakpoint...

— Chen Eshchar (@cheneshchar) July 6, 2017

So, why do we then keep using console logging in our code rather than the far superior way of debugging that our browser tooling gives us?

I think it boils down to a few things:

  • Convenience and conditioning – we’ve been doing this for years, and it is easy. We don’t need to change and we feel familiar with this kind of back and forth between editor and browser
  • Staying in one context – we write our code in our editors, and we spent a lot of time customising and understanding that one. We don’t want to spend the same amount of work on learning debuggers when logging is good enough
  • Inconvenience of differences in implementation – whilst most debuggers work the same there are differences in their interfaces. It feels taxing to start finding your way around these.
  • Simplicity and low barrier of entry – the web became the big success it is by being independent of platform and development environment. It is simple to show a person how to use a text editor and debug by putting console.log() statements in your JavaScript. We don’t want to overwhelm new developers by overloading them with debugger information or tell them that they need a certain debugging environment to start developing for the web.

The latter is the big one that stops people embracing the concept of more sophisticated debugging workflows. Developers who are used to starting with IDEs are much more used to breakpoint debugging. The reason is that it is built into their development tools rather than requiring a switch of context. The downside of IDEs is that they have a high barrier to entry. They are much more complex tools than text editors, many are expensive and above all they are huge. It is not fun to download a few gigabytes for each update, and frankly for some developers it is not even possible.

How I started embracing breakpoint debugging

One thing that made it much easier for me to embrace breakpoint debugging is switching to Visual Studio Code as my main editor. It is still a light-weight editor and not a full IDE (Visual Studio, Android Studio and XCode are also on my machine, but I dread using them as my main development tool) but it has in-built breakpoint debugging. That way I have the convenience of staying in my editor and I get the insights right where I code.
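As an illustration, debugging a Node.js script with VS Code's built-in debugger needs only a small launch configuration (a sketch of .vscode/launch.json; the name and program path are hypothetical):

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug app",
      "program": "${workspaceFolder}/app.js"
    }
  ]
}

Set a breakpoint in the editor gutter, press F5, and execution pauses right inside the editor with the full scope inspectable.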

For a node.js environment, you can see this in action in this video:

Are hackable editors, linters and headless browsers the answer?

I get the feeling that this is the future and it is great that we have tools like Electron that allow us to write light-weight, hackable and extensible editors in TypeScript or just plain JavaScript. Whilst in the past the editors we use were black arts for web developers, we can now actively take part in adding features to them.

I’m even more a fan of linters in editors. I like that Word tells me I wrote terrible grammar by showing me squiggly green or red underlines. I like that an editor flags up problems with your code whilst you code it. It seems a better way to teach than having people make mistakes, load the results in a browser and then see what went wrong in the browser tools. It is true that it is a good way to get accustomed to using those and – let’s be honest – our work is much more debugging than coding. But by teaching new developers about environments that tell them things are wrong before they even save them we might turn this ratio around.

I’m looking forward to more movement in the editor space and I love that we are able to run code in a browser and get results back without having to switch the user context to that browser. There’s a lot of good things happening and I want to embrace them more.

We build more complex products these days – for good or for worse. It may be time to reconsider our development practices and – more importantly – how we condition newcomers when we tell them to work like we learned it.


Mozilla Localization (L10N): New L10n Report: July Edition

Mozilla planet - Sat, 08/07/2017 - 02:08

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers:

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community and locales added
  • azz (Sierra Puebla Nahuatl): it was onboarded in recent months and already made a lot of progress.
  • be (Belarusian): when we sadly had to drop Belarusian from our desktop and mobile builds due to inactivity, new contributors immediately contacted us to revive localization work. A big shout-out to this newly formed community!
  • tg (Tajik): successfully localized their first project, Thimble!
New content and projects

What's new or coming up in Firefox desktop

Big changes are coming to Firefox 57, with some of them sneaking up even in 56 (the version currently localized in Nightly, until August 7). The new feature with the most significant impact on localization is the new Onboarding experience: it consists of both an Onboarding overlay (a tour displayed by clicking the fox icon in the New tab) and Onboarding notifications, displayed at the bottom of the New tab.

If you haven’t seen them yet (we always make sure to tweet the link), we strongly suggest reading the latest news about Photon in the Photon Engineering Newsletter (here’s the latest #8).

On a side note, you should be using Nightly for your daily browsing; it’s exciting (changes every day!) and fundamental to ensuring the quality of your localization.

There is a bug on file to stop localizing about:networking, given how small the target is (users debugging network issues) and how obscure some of these strings are.

A final reminder: The deadline to update Firefox Beta is July 25. Remember that Firefox Beta should mainly be used to fix small issues, since new translations added directly to Beta need to be manually added to the Nightly project in your localization tools.

What’s new or coming up in Test Pilot

The new set of experiments, originally planned for July 5, has been moved to July 25. Also make sure to read this mail on dev-l10n if you have issues testing the website on the dev server.

What’s new or coming up in mobile
  • Mobile (both Android and iOS projects) is going to align with the visual changes coming up on desktop by getting a revamped UI thanks to Photon. Check it out!
    • Firefox for Android Photon meta-bug.
    • Please note that due to the current focus on desktop with Firefox 57, Firefox for Android work is slower than usual. Expect to see more and more Photon updates on Nightly builds though as time passes by.
  • We recently launched Focus for Android v1 with 52 languages! Have you tried it out yet? Reviews speak for themselves. Expect a new release soon, and with that, more locales (and of course, more features and improvements to the app)!
  • Mobile experiments are on the rise. The success of Focus is paving the way to many other great mobile projects. Stay tuned on our mailing list because there just may be cool stuff arriving very soon!
What’s new or coming up in web projects
  • With the new look and feel, a new set of Firefox pages and a unified footer were released for l10n. Make sure to localize firefox/shared.lang before localizing the new pages. Try to complete these new pages before the deadline, or the pages will redirect to English on August 15.
  • Monthly snippets have expanded to more locales. Last month, we launched the first set in RTL locales: ar, fa, he, and ur. The team is considering creating regional specific snippets.
  • A set of Internet Health pages were launched. Some recent updates were made to fit the new look and layout. Many communities have participated in localizing some or all pages.
  • The newly updated Community Participation Guideline is now localized in six languages: de, es, fr, hi-IN, pt-BR, and zh-TW. Thanks to the impacted communities for reviewing the document before publishing.
  • Expect more updates of existing pages in the coming months so the look and feel are consistent between pages.
What’s new or coming up in Foundation projects
  • Thimble, the educational code editor, got a makeover and useful new features, which all got localized in more than 20 locales.
  • The fundraising campaign will start ramping up earlier than November this year, so it’s a great idea to make sure the project is up-to-date for your locale, if it isn’t already.
  • The EU Copyright campaign is in slow mode over the summer while MEPs are on holiday, but we will rock again full speed in September before committees are voting
  • We will launch an Internet of Things survey over the summer to get a better understanding of what people know about IoT.
Newly published localizer facing documentation

Events
  • Next l10n workshop will be in Asuncion, Paraguay (August)
  • Berlin l10n workshop is coming up in September!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Opportunities

Accomplishments

Some numbers

Friends of the Lion

Image by Elio Qoshi

  • Shout-out to all Mozilla RTL communities who have been doing a great job at filing and fixing bugs on mobile – as well as providing much needed feedback and insights during these past few months! Tomer, Reza, ItielMaN, Umer, Amir, Manel, Yaron, Abdelrahman, Yaseen – just to name a few. Thank you!
  • Thanks to Jean-Bernard and Adam for jumping in to translate the new Firefox pages in French.
  • Thanks to Nihad of the Bosnian community for leading the effort in localizing the mozilla.org site that is now in production.
  • A big thank you to Duy Thanh for his effort in rebuilding the Vietnamese community and his ongoing localization contribution.
  • The kab (Kabyle) community started a while back; their engagement is impressive across all products and projects.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Image by Slip Premier, Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)


Smokey Ardisson: iOSify Bookmarklets and adding bookmarklets on iOS

Mozilla planet - Sat, 08/07/2017 - 01:14

I don’t do much web browsing on iOS (I just can’t handle the small screen), but I do visit a number of sites regularly, some of which do stupid things that make them annoying or hard to use, especially in Mobile Safari. On the desktop, it’s usually easy to “fix” these annoyances with a bit of JavaScript or CSS or a quick trip to the Document Inspector, but none of these are readily available on iOS. Fortunately, however, bookmarklets still work in Mobile Safari.

Unfortunately, though, adding bookmarklets to Mobile Safari is cumbersome at best. Unless you sync all of your bookmarks from the desktop, it’s almost impossible to add a bookmarklet to Mobile Safari unless the bookmarklet’s author has done some work for you. On the desktop, you’d typically just drag the in-page bookmarklet link to your bookmarks toolbar and be done, or control-/right-click on the in-page bookmarklet link and make a new bookmark using the context menu. One step, so simple a two-year-old could do it. The general process of adding a bookmarklet to Mobile Safari goes like this:

  1. Bookmark a page, any page, in order to add a bookmark
  2. Manually edit the aforementioned bookmark’s URL to make it a bookmarklet, i.e. by pasting the bookmarklet’s code
  3. Fun! :P

To make things slightly easier, Digital Inspiration has a collection of common bookmarklets that you can bookmark directly and then edit back into functioning bookmarklets.1 It’s still two steps, but step 2 becomes much simpler (probably a five-year-old could do it). This is great if Digital Inspiration has the bookmarklet you want (or if the bookmarklet’s author has included an “iOS-friendly” link on the page), but what if you want to add Alisdair McDiarmid’s Kill Sticky Headers bookmarklet?

To solve that problem, I wrote “iOSify Bookmarklets”—a quick-and-dirty sort-of “meta-bookmarklet” to turn any standard in-page bookmarklet link into a Mobile Safari-friendly bookmarkable link.

Once you add iOSify Bookmarklets to Mobile Safari (more on that below), you tap it in your bookmarks to convert the in-page bookmarklet link into a tappable link, tap the link to “load” it, bookmark the resulting page, and then edit the URL of the new bookmark to “unlock” the bookmarklet.

Say you’re visiting http://example.com/foo and it has a bookmarklet, bar, that you want to add to Mobile Safari.

  1. Open your Mobile Safari bookmarks and tap iOSify Bookmarklets. (The page appears unchanged afterwards, but iOSify Bookmarklets did some work for you.)
  2. Tap the in-page link to the bookmarklet (bar) you want to add to Mobile Safari. N.B. It may appear again that nothing happens, but if you tap the location bar and finger-scrub, you should see the page’s URL has been changed to include the code for the “bar” bookmarklet after a ?bookmarklet# string.
  3. Tap the Share icon in Mobile Safari’s bottom bar and add the current page as a bookmark; you can’t edit the URL at this point, so just choose Done.
  4. Tap the Bookmarks icon, then Edit, then the bookmark you just added. Edit the URL and delete everything before the javascript: in the bookmark’s URL. (Tap Done when finished, and Done again to exit editing mode.)

The “bar” bookmarklet is now ready for use on any page on the web.

Here’s an iOS-friendly bookmarkable version of iOSify Bookmarklets (tap this link, then start at step 3 above to add this to Mobile Safari): iOSify Bookmarklets

The code, for those who like that sort of thing:
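A rough sketch of the approach described above, for illustration only (this is not the author's original code): it rewrites every in-page javascript: link so that tapping it reloads the page with the bookmarklet code appended after ?bookmarklet#, ready to be bookmarked and trimmed.

javascript:(function () {
  /* illustrative sketch, not the original iOSify Bookmarklets source */
  var links = document.querySelectorAll('a[href^="javascript:"]');
  for (var i = 0; i < links.length; i++) {
    var code = links[i].getAttribute('href');
    links[i].setAttribute('href', location.origin + location.pathname + '?bookmarklet#' + code);
  }
})();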

I hope this is helpful to someone out there :-)

        

1 For the curious, Digital Inspiration uses query strings and fragments in the URL in order to include the bookmarklet code in the page URL you bookmark, and iOSify Bookmarklets borrows this method. ↩︎


93% Of Websites Fail To Meet Security Standard - Hashed Out by The SSL Store™ (registration) (blog)

News collected via Google - Fri, 07/07/2017 - 16:26

April King, a security engineer at Mozilla, found that the vast majority of the world's largest sites – 93.45% to be exact – are not implementing many modern security technologies which provide secure connections to their users and protect them from ...


Mozilla VR Blog: Link Traversal and Portals in A-Frame

Mozilla planet - Fri, 07/07/2017 - 01:28

Demo for the impatient (it requires controllers: Oculus, HTC Vive, Daydream or GearVR)

A-Frame 0.6.0 and Firefox Nightly now support navigating across pages while in VR mode. WebVR has finally earned the Web badge. The Web gets its name from the structure of interconnected content that uses the link as the glue. Until now, the VR Web was fragmented and had to be consumed in bite-size pieces, since the VR session was not preserved on page navigation. In the first iteration of the WebVR API we focused on displaying pixels and meeting the performance requirements on a single page. Thanks to the efforts of Daosheng Mu and Kip, Firefox Nightly now also ships the mechanism that enables a seamless transition between VR sites.


Link traversal in VR relies on the onvrdisplayactivate event. It is fired on the window object on page load if the preceding site was presenting content in the headset.

To enter VR mode for the first time, the user is expected to explicitly trigger VR mode with an action like a mouse click or a keyboard shortcut, to prevent sites from taking control of the headset inadvertently. Once VR is engaged, subsequent page transitions can present content in the headset without further user intervention. A page can be granted permission to enter VR automatically by simply attaching an event handler:

window.addEventListener('vrdisplayactivate', function (evt) {
  /* A page can now start presenting in the headset */
  vrDisplay.requestPresent([{ source: myCanvas }]).then(function () { ... });
});

Links in A-Frame

A-Frame 0.6.0 ships with a link component and an a-link primitive. The link component can be configured in several ways:

<a-entity link="href: index.html; title: My Home; image: #homeThumb"></a-entity>

Property             Description
href                 URL where the link points to
title                Text displayed on the link. href is used if not defined
on                   Event that triggers link traversal
image                360 panorama used as scene preview in the portal
color                Background color of the portal
highlighted          true if the link is highlighted
highlightedColor     Color used to highlight the link
visualAspectEnabled  Enable/disable the visual aspect if you want to implement your own

The a-link primitive provides a compact interface that feels like the traditional <a> tag that we are all used to.

<a-link href="index.html" image="#thumbHome" title="my home"></a-link>

The image property points to the <img> element that will be used as a background of the portal and the title is the text displayed on the link.
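Putting the pieces together, a small sketch of a scene with one portal (the file name and position are placeholders):

<a-scene>
  <a-assets>
    <!-- equirectangular preview image, referenced by id -->
    <img id="thumbHome" src="home-preview.jpg">
  </a-assets>
  <a-link href="index.html" image="#thumbHome" title="my home" position="0 1.6 -2"></a-link>
</a-scene>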

The UX of VR links

Using the wonderful art of arturitu, both the A-Frame component and primitive come with a first interpretation on how links could be visually represented in VR. It is a starting point for an exciting conversation that will develop in the next years.

Our first approach addresses several problems we identified:

Links visual aspect should be consistent.

So users can quickly identify the interconnected experiences at a glance. Thanks to kip's shader wisdom we chose a portal representation that gives each link a distinct look representative of the referenced content. A-Frame provides a built in screenshot component to easily generate the 360 panoramas necessary for the portal preview.


Links should be useful at any distance.

Portals help discriminate between links in the proximity but the information becomes less useful in the distance. From far away, portals alone can be difficult to spot because they might blend with the scene background or become hard to see at wide angles. To solve the problem, we made links fade into a solid fuchsia circle with a white border that grows in thickness with the distance so all the links have a consistent look (colors are configurable). Portals will also face the camera to avoid wide angles that reduce the visible surface.


Links should provide information about the referenced experience.

One can use the surrounding space to contextualize and give a hint where the link will point to. In addition, links themselves display either the title or the url that they refer to. This provides additional information to the user about the linked content.


There should be a convenient way to explore the visible links.

While portals are a good way to preview an experience it can be hard to explore the available options if the user has to move around the scene to inspect the links one by one. We developed a peek feature that allows the user to point to any link and quickly zoom into the preview without having to move.


Next steps

One of the limitations of the current API is that a Web Developer needs to manually point to the thumbnails that the links use to render the portals. We want to explore a way, via meta tags, a web manifest or other conventions, for a site to provide the thumbnail for third-party links to consume. This way a Web Developer has more control over how her website will be represented in other pages.

Another rough edge is what happens when, after navigation, a page takes a long time to load or ends up in an error. There's no way at the moment to inform the user of those scenarios while keeping the VR headset on. We want to explore ways for the browser itself to intervene in VR mode and keep the user properly informed at each step when leaving, loading and finally navigating to a new piece of VR content.

Conclusion

With in-VR page navigation we're now one step closer to materializing the Open Metaverse on top of the existing Web infrastructure. We hope you find our link proposal inspiring and that it sparks a good conversation. We don't really know what links will look like in the future: doors, inter-dimensional portals or exit burritos... We cannot wait to see what you come up with. All the code and demos are already available as part of the 0.6.0 version of A-Frame. Happy hacking!



Air Mozilla: Reps Weekly Meeting Jul. 06, 2017

Mozilla planet - Thu, 06/07/2017 - 18:00

Reps Weekly Meeting Jul. 06, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


The Mozilla Blog: New Research: Is an Ad-Supported Internet Feasible in Emerging Markets?

Mozilla planet - Thu, 06/07/2017 - 16:56
Fresh research conducted by Caribou Digital and funded by Mozilla explores digital advertising models in the Global South — whether they can succeed, and what that means for users, businesses, and the health of the Internet

Since the Internet’s earliest days, advertising has been the linchpin of the digital economy, supporting businesses from online journalism to social networking. Indeed, two of the five largest companies in the world — Facebook and Google — earn almost all of their revenue through digital advertising.

As the Internet reaches new users in India, Kenya, and elsewhere across the Global South, this model is following close behind. But is the digital advertising model that has evolved in developed economies sustainable in emerging economies? And if it’s not: What does it mean for the billions of users who are counting on the Internet to unlock new pathways to education, economic growth, and innovation?

Publishers see drastically less revenue per user in these regions, partly because low-income populations are less valuable to advertisers, and partly because constraints on the user experience — low-quality hardware, unreliable network coverage, and a dearth of local content — fundamentally limit how people engage with digital content and services.

As a result, users in emerging markets will have fewer choices, as local content providers and digital businesses will struggle to earn enough from their home markets to compete with the global platforms.

Today, we’re publishing “Paying Attention to the Poor: Digital Advertising in Emerging Markets.”

It’s fresh research conducted by Caribou Digital and funded by Mozilla that explores the barriers traditional digital advertising models face in emerging economies; the consequent impact on users, businesses, and the health of the Internet; and what new models are emerging.  

In summary:

Ad revenue-wise, there is an order-of-magnitude difference between users in developed economies and users in the Global South.

  • Facebook earns a quarterly ARPU (average revenue per user) of $1.41 in Africa and Latin America, and $2.07 in Asia-Pacific — an order of magnitude less than the $19.81 it earns in the U.S. and Canada
  • As a result, just over half of Facebook’s total global revenue comes from only 12% of its users

The high cost of data in emerging markets is one driver of ad blocking

  • Due to prohibitive data costs and slower network speeds, many Internet users in emerging markets use proxy browsers, such as UC Browser or Opera Mini, which reduce data consumption and also block ads
  • One report by PageFair claims over 309 million users around the world used mobile ad blockers in 2016 — with 89 million hailing from India and 28 million hailing from Indonesia

A dearth of user data — or, the “personal data gap” — presents another challenge to advertisers.

  • In developed economies, data profiling and ad targeting has been a boon to advertisers. But in the Global South, people have much smaller digital footprints
  • Limited online shopping, a glut of open-source Android devices, and a tendency toward multiple, fragmented social media accounts dilutes the value of personal data to advertisers

Limited advertising revenue in emerging markets challenges local innovation and competition.

  • Publishers and developers follow the money. As a result, content is targeted to, and localized for, developed markets like the U.S. or Japan — even producers in emerging markets will ignore their domestic market in favor of more lucrative ones
  • Large companies like Facebook have the resources to subsidize forays into unprofitable markets; smaller companies do not. As a result, the reigning giants become further entrenched

A lack of local content can have deeply negative implications.

  • Availability of local content is a key demand-side driver for increasing Internet access for marginalized populations, and localized media can foster inclusion and support democratic institutions
  • But without viable economic models for supporting this content, opportunity is squandered. Presently, the majority of digital content — including user-generated content such as Wikipedia — is in English

The outlook for digital advertising-supported businesses in emerging markets is bleak.

  • Low monetization rates will continue to limit the types of Internet businesses that can flourish in the Global South
  • To succeed, businesses in the Global South have to build more strategically, working toward profitability (and not user growth) from the very beginning

These constraints demand new business model innovations for an Internet ecosystem that is evolving differently in the Global South

  • “Sponsored data” or “incentivized action” models which offer free data in return for engagement with an advertiser’s content are one approach to mitigating the access and affordability constraint
  • Transactional revenue models, such as those seen in digital financial services, will play an increasingly important role as payments infrastructure matures

You can read the full report here.

In the coming weeks and months, Mozilla and Caribou Digital will share our findings with allies across the Internet health space — the network of NGOs, institutions, and individuals who are working toward a more healthy web. We hope our learnings will help unlock innovative solutions that balance commercial success with openness and freedom online.

The post New Research: Is an Ad-Supported Internet Feasible in Emerging Markets? appeared first on The Mozilla Blog.



Robert O'Callahan: Bay Area Progress Report

Mozilla planet - Thu, 06/07/2017 - 08:15

I'm in the middle of a three-week work trip to the USA.

Monday last week I met with a couple of the people behind the Julia programming language, who have been using and contributing to rr. We had good discussions and it's good to put faces to names.

Monday night through to Friday night I was honoured to be a guest at Mozilla's all-hands meeting in San Francisco. I had a really great time catching up with a lot of old friends. I was pleased to find more Mozilla developers using rr than I had known about or expected; they're mostly very happy with the tool and some of them have been using esoteric features like chaos mode with success. We had a talk about rr and I demoed some of the new stuff Kyle and I have been working on, and talked about which directions might be most useful to Mozilla.

Saturday through Monday I went camping in Yosemite National Park with some friends. We camped in the valley on Saturday night, then on Sunday hiked down from Tioga Road (near the Lukens Lake trailhead) along Yosemite Creek to north of Yosemite Falls and camped there overnight. The next day we went up Eagle Peak for a great view over Yosemite Valley, then hiked down past the Falls back to the valley. It's a beautiful place and rather unlike any New Zealand tramping I've done — hot, dry, various sorts of unusual animals, and ever-present reminders about bears. There were a huge number of people in the valley for the holiday weekend!

Tuesday was a bit more relaxed. Since it was the 4th of July, I spent the afternoon with friends playing games: Bang! and Lords of Waterdeep, two of my favourites.

Today I visited Brendan and his team at Brave to talk about rr. On Friday I'll give a talk at Stanford. On Monday I'll be up in Seattle giving a talk at the University of Washington, then on Tuesday I'll be visiting Amazon to talk about the prospects for rr in the cloud. Then on Wednesday through Friday I'll be attending Usenix ATC in Santa Clara and giving yet another rr talk! On Saturday I'll finally go home.

I really enjoy talking to people about my work, and learning more about people's debugging needs, and I really really enjoy spending time with my American friends, but I do miss my family a lot.

Categorieën: Mozilla-nl planet

Nicholas Nethercote: How we made compiler warnings fatal in Firefox

Mozilla planet - do, 06/07/2017 - 02:11

Compiler warnings are mostly good: they identify real problems, and when false positives do occur they are usually easy to work around. However, if they’re not fatal, they tend to be ignored and build up. (See bug 187528 for an idea of how far that build-up can go.)

One way to prevent the build-up is to make them fatal, so they become errors. But won’t that cause problems? Not if you’re careful. Here’s how we did it for Firefox.

  • Choose with some care which warnings end up fatal. Don’t be afraid to modify your choices as time goes on.
  • Introduce a mechanism for enabling fatal warnings on a per-directory basis. Mozilla’s custom build system used to have a directive called FAIL_ON_WARNINGS for this purpose.
  • Set things up so that fatal warnings are off by default, but enabled on continuous integration (CI). This means the primary coverage is via CI. You don’t want fatal warnings on by default because they cause problems for developers who use non-standard compilers (e.g. pre-release versions with new warning classes). Developers who use the same compilers as CI can still turn them on locally if they want.
  • Allow per-file exceptions for particular kinds of warnings, because there are occasionally warnings you just want to ignore.
  • Fix warnings one directory at a time, and turn on fatal warnings for that directory as soon as it’s warning-free.
  • Invert the sense of the per-directory mechanism once you’ve converted more than half of the directories. For Mozilla code we now have the ALLOW_COMPILER_WARNINGS directive. It’s almost exclusively used for directories containing third-party code that is not under our control (a sketch of how this looks in moz.build follows this list).
  • Gradually expand the coverage of which compilers you have fatal warnings for. Mozilla code now does this for GCC, clang, and MSVC.
  • Congratulations! You now have fatal warnings enabled everywhere that is practical.
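
To make the per-directory switch concrete, here is a rough sketch in moz.build’s Python syntax. It illustrates the idea rather than the exact Firefox configuration: ALLOW_COMPILER_WARNINGS is the directive mentioned above, while the specific compiler flag is just an example for a GCC/clang build.

    # In a directory of third-party code we do not control, opt out of
    # fatal warnings entirely:
    ALLOW_COMPILER_WARNINGS = True

    # In a directory of our own code, fatal warnings stay on, but a single
    # warning class can be demoted if it cannot reasonably be fixed
    # (GCC/clang flag shown; MSVC would need its own equivalent):
    CXXFLAGS += ['-Wno-error=deprecated-declarations']

In practice the two settings would live in different directories’ moz.build files; the point is that both the blanket opt-out and the narrower per-warning exception stay visible and reviewable in the build configuration.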

With a setup like this, it’s possible for a patch to compile on a developer’s machine but fail to compile on CI. But that’s just one of many ways in which full CI runs may fail when local runs don’t. So it’s not as bad as it seems.

Also, before upgrading the compilers on CI you need to address any new warnings, by fixing them or suppressing them or filing a compiler bug report. But this isn’t a bad thing.

It took a long time for Firefox to reach this stage, but I think it was worth the effort. Thank you to Chris Peterson and all the others who helped with this.

Categorieën: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: Kuma Report, June 2017

Mozilla planet - do, 06/07/2017 - 02:00

Here’s what happened in June in Kuma, the engine of MDN Web Docs:

  • Shipped the New Design to Beta Testers
  • Added KumaScript macro tests
  • Continued MDN data projects
  • Shipped tweaks and fixes

Here’s the plan for July:

  • Continue the redesign
  • Experiment with on-site interactive examples
  • Update localization of macros
  • Ship the sample database
Done in June

Shipped the New Design to Beta Testers

This month, we revealed some long-planned changes. First, MDN is focusing on web docs, which includes changing our identity from “Mozilla Developer Network” to “MDN Web Docs”. Second, we’re shipping a new design to beta users, to reflect Mozilla’s new brand identity as well as the MDN Web Docs brand.

Stephanie Hobson did a tremendous amount of work over 26 Kuma PRs and 2 KumaScript PRs to launch a beta of the updated design on wiki pages. A lot of dead code has been removed, and non-beta users continue to get the current design. Schalk Neethling reviewed the PRs as fast as they were created, including checking the rendering in supported browsers. Our beta users have provided a lot of feedback and found some bugs, which Stephanie has been triaging, tracking, and fixing.

This work continues in July, with an update to the homepage and other pages. When we’ve completed the redesign, we’ll ship the update to all users. If you want to see it early, opt-in as a beta tester.

Added KumaScript Macro Tests

Macros used to be tested manually, in production. After moving the macros to GitHub, they were still tested manually, but in the development environment. In June, Ryan Johnson added an automated testing framework, and tests for five macros, in PR 204. This allows us to mock the Kuma APIs needed for rendering, and to test macros in different locales and situations. This will help us refine and refactor macros in the future.

Continued MDN Data Projects

The MDN data projects were very busy in June, with 48 browser-compat-data PRs and 10 data PRs merged. MDN “writers” Florian Scholz, wbamberg, and Eric Shepherd have been converting MDN browser compatibility tables to JSON data, refining the schema and writing documentation. This is already becoming a community project, with almost half of the PRs coming from contributors such as Andy McKay (1 PR), Dominik Moritz (2 PRs), Roman Dvornov (6 PRs), Ng Yik Phang (2 PRs), and Sebastian Noack (16 PRs!).

The tools and processes are evolving as well, to keep up with the activity. browser-compat-data gained a linter in PR 240. The mdn-browser-compat-data npm package was bumped to version 0.0.2, and then 0.0.3. mdn-data was released as 1.0.0. We’re loading browser-compat-data from its npm package in production, and we hope to start loading the mdn-data package the same way soon.

We’re happy with the progress on the data projects. There’s a lot of work remaining to convert the data on MDN, and also a lot of work to automate the process so that changes are reflected in production as quickly as possible.

Shipped Tweaks and Fixes

Here are some other highlights from the 44 merged Kuma PRs in June:

Here are some other highlights from the 31 merged KumaScript PRs in June:

Planned for July

Continue the Redesign

Some of the styling for the article pages is shared across other pages, but there is more work to do to complete the redesign. Up next is the homepage, which will change to reflect our new focus on documenting the open web. Other pages will need further work to make the site consistent. When we and the beta testers are mostly happy, we’ll ship the design to all MDN visitors, and then remove the old design code.

Experiment with On-site Interactive Examples

We’re preparing some interactive examples, so that MDN readers can learn by adjusting the code without leaving the site. We’re still working out the details of serving these examples at production scale, so we’re limiting the July release to beta users. You can follow the work at mdn/interactive-examples.

Update Localization of Macros

Currently, KumaScript macros use in-macro localization strings and utility functions like getLocalString to localize output for three to five languages. Meanwhile, user interface strings in Kuma are translated in Pontoon into 57 languages. We’d like to use a similar workflow for strings in macros, and will get started on this process in July.
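
For comparison, Kuma’s UI strings go through Django’s standard gettext-based workflow, which is the pattern the macro strings would ideally move toward: mark strings in code, extract them to .po files, and let Pontoon supply the translations. A minimal sketch (the string and function here are illustrative, not actual Kuma code):

    from django.utils.translation import ugettext as _

    def sidebar_heading():
        # Marked strings are extracted into .po files, translated in
        # Pontoon, and looked up at render time for the active locale.
        return _('Related Topics')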

Ship the Sample Database

The Sample Database has been promised every month since October 2016, and has slipped every month. We almost had to break the tradition, but we can say again it will ship next month. PR 4248, adding the scrape_document command, shipped in June. The final code, PR 4076, is in review, and should be merged in July.

Categorieën: Mozilla-nl planet

Karl Dubost: Some random thoughts in San Francisco

Mozilla planet - do, 06/07/2017 - 01:09

We had a working business week for Mozilla All Hands. The big bi-annual meeting of the full company to work, discuss, cooperate and make wonderful things.

These are random notes, not work-related, about thoughts accumulated during the San Francisco week. What we notice in spaces and people is primarily ourselves. It's often the start of self-introspection more than anything else.

Meeting with my thoughts
  • crossing the border went smoothly. I was feeling good. No mobile device. Blank laptop.
  • Mission street areas from the bus. A bus of wealthy geeks watching the poor, the abandoned, the drifter from up there behind a window. It takes a blaze and spirit to start a revolution. The spirit is drowned inside the body.
  • The insecure feeling of the tourist areas, when you never know if and when a driver will plow into the crowd or someone will draw a gun.
  • Chinatown is a peaceful and calming environment for me. Something familiar, something I relate to.
  • The charm of the hills of San Francisco is a beautiful pain to my legs. The effort and the view are a gift.
  • Brands using pseudo-political messages to sell more stuff. Nauseous.
  • Exhausted.
  • Sleeping.
  • Plastic surgery is a thing.
  • Cable cars packed with tourists. Do locals still take the cable cars? Private buses to go to Silicon Valley. Dedicated transportation for tourists… What a broken world.
  • Noisy reception, rooms full of people too loud.
  • The emptiness of ads.
  • Long discussion with a friend walking across the San Francisco streets about everything, about nothing, about simple things. On the road, we share.
  • From Coit Tower, the foggy, dark Golden Gate vaporized into the horizon.
  • Sirens of fire trucks are frequent
  • A black old man clapped my hand when crossing and wished me a good day. I replied "you too". I don't know if he heard me.
  • I hear the cable cars tongtong tongtong from the hotel window.
  • Things we hear in SF "Oh crap, can someone stash my kinder eggs?"
  • Industrial buildings and hipster shops. The world of rich people is leveled.
  • Freezing wind.
  • Wonderful broth of a Pho Bo
  • A woman shouting "bitch" multiple times at the window of a Carl's restaurant.
  • The feeling of meeting too many people.
  • The feeling of meeting too few people.
  • A perfume shop clerk, an old and elegant woman, talked to me in English, then Japanese, then French. Smile for the day.
  • Missing the cafe latte barista from Whistler All Hands.
  • The USA and too-big hotel rooms. A ridiculous use of space at a time when everything should be counted and saved.
  • Written culture of street signs.
  • Walking from the hotel to the bye bye event, a moment for long life discussions.
  • American college kids doing bbq. A certain image of USA.
  • Long streets without any shops.
  • overheard: "Let's start the day with a bloody mary" (at the airport at 8am)

Otsukare!

Categorieën: Mozilla-nl planet

Ricky Rosario: SUMO Development Update 2012.1

Mozilla planet - wo, 05/07/2017 - 23:31
SUMO Dev goes agile

Inspired by the MDN Dev team, the SUMO Dev team decided to try an agile-style planning process in 2012.

To be fair, we have always been pretty agile, but perhaps we were more on the cowboy side than the waterfall side. We planned our big features for the quarter and worked towards that. Along the way, we picked up (or were thrown) lots of other bugs based on the hot issue of the day or week, contributor requests, scratching our own itch, etc. These bugs ended up taking time away from the major features we set as goals and, in some cases, ended up delaying them. This new process should help us become more predictable.

Starting out by copying what MDN has been doing for some time now, we are doing two-week sprints. We will continue to push out new code weekly for now, so it is kind of weird in that each sprint has two milestones within it. We will continue to name the milestones by the date of the push (i.e., "2012-01-24" for today's push) and we are naming sprints as YEAR.sprint_number (i.e., "2012.1" was our first sprint). We hope to be doing continuous deployment soon. At that point we will only have to track one milestone (the sprint) at a time. For more details on our process, check out our Support/SUMOdev Sprints wiki page.

2012.1 sprint

We just pushed the second half of our first sprint to production. Some data:

  • Closed Stories: 26
  • Closed Points: 34
  • Developer Days: 36
  • Velocity: 0.94 pts/day

Our major focus of this sprint was getting our Elastic Search implementation (we are in the process of switching from Sphinx) to the point where we can index and start rolling it out to users. After today's push, we will find out whether this is working properly. *fingers crossed* (UPDATE: we did hit an issue with the indexing.)

Other stuff we landed:

  • Initial support for the apps marketplace. Basically, a landing page and a question workflow that integrates with Zendesk for 1:1 help.
  • KPI (Key Performance Indicator) Dashboard. We landed the first chart, which displays the % of solved questions (it has a math bug that will get fixed in the next push).
  • Some minor UI fixes and improvements.
2012.2 sprint

We are currently halfway through our second sprint. Our main goals with this sprint are to get Elastic Search out to 15% of our users and to add a bunch of new metrics charts to the KPI Dashboard.
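
A percentage rollout like this is typically driven by a feature flag. The sketch below uses django-waffle's percentage flags to show the shape of it; the flag name and the two search helpers are illustrative assumptions, not kitsune's actual code.

    import waffle

    def _search_with_elasticsearch(query):
        ...  # new Elastic Search code path

    def _search_with_sphinx(query):
        ...  # existing Sphinx code path

    def run_search(request, query):
        # A waffle Flag configured with "Percent: 15" activates for roughly
        # 15% of visitors, and a cookie keeps each visitor's bucket sticky.
        if waffle.flag_is_active(request, 'elasticsearch'):
            return _search_with_elasticsearch(query)
        return _search_with_sphinx(query)

Dialing the rollout up (15%, 50%, and eventually 100%) then becomes a flag change rather than a code deploy.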

In my opinion, this new planning process is going well so far. The product team has better insight into what the dev team is up to day to day, and the dev team has a better sense of what the short-term priorities are. Probably the most awesome thing about it is that we are collecting lots of great data. The part I have liked the least so far has been the actual planning sessions; I end up pretty tired after those. I think it just takes a little getting used to, and it is only 1-2 hours every two weeks.

:-)

Categorieën: Mozilla-nl planet

Ricky Rosario: SUMO Development: 2012.3 and 2012.4 Update

Mozilla planet - wo, 05/07/2017 - 23:31

Oops, I procrastinated and forgot to post an update for 2012.3, and now we are done with 2012.4 too.

2012.3 sprint
  • Closed Stories: 26
  • Closed Points: 37 (3 aren't used in the velocity calculation as they were fixed by James and Kadir - Thanks!)
  • Developer Days: 28
  • Velocity: 1.21 pts/day

The 2012.3 sprint went very well. We accomplished most of the goals we set out to do. We rolled out Elastic Search to 50% of our users and had it going for several days. We fixed some of the blocker bugs and came up with a plan for reindexing without downtime. Everything was great until we decided to add some timers to the search view in order to compare the Elastic Search and Sphinx code paths. As soon as we saw some data, we decided to shut down Elastic Search. Basically, the ES path was taking about 4x more time than the Sphinx path. Yikes! We got on that right away and started looking for improvements.
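
A timer of the kind described here can be as simple as the sketch below; the helper and the log message are illustrative, not kitsune's actual instrumentation.

    import logging
    import time

    log = logging.getLogger(__name__)

    def timed(label, fn, *args, **kwargs):
        # Run fn and log how long it took, in milliseconds.
        start = time.time()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.time() - start) * 1000
            log.info('search timing [%s]: %.1f ms', label, elapsed_ms)

Wrapping each code path (Elastic Search and Sphinx) in a helper like this is enough to surface a gap like the 4x difference mentioned above.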

On the KPI Dashboard side, we landed 4 new charts as well as some other enhancements. The new charts show metrics for:

  • Search click-through rate
  • Number of active contributors to the English KB
  • Number of active contributors to the non-English KB
  • Number of active forum contributors

We did miss the goal of adding a chart for active Army of Awesome contributors, as it turned out to be more complicated than we initially thought. So that slipped to 2012.4.

2012.4 sprint
  • Closed Stories: 20
  • Closed Points: 24
  • Developer Days: 19
  • Velocity: 1.26 pts/day

The 2012.4 sprint was sad. It was the first sprint without ErikRose :-(. We initially planned to have TimW help us part time, but he ended up getting too busy with his other projects. We did miss some of our initial goals, but we did as well as we could.

The good news is that we improved the search performance with ES a bunch. It still isn't on par with Sphinx, but it is good enough that we went back to using it for 50% of users. We have plans to make it faster, but for now it looks like the click-through rates on results are already higher than what we get with Sphinx. That makes us very happy :-D.

We added two new KPI dashboard charts: daily unique visitors and active Army of Awesome contributors. We also landed new themes for the new Aurora community discussion forums.

2012.5 sprint

This week we started working on the 2012.5 sprint. Our goals are:

  • Elastic Search: refactor search view to make it easier to do ES-specific changes.
  • Elastic Search: improve search view performance (get us closer to Sphinx).
  • Hide unanswered questions that are over 3 months old. They don't add any value, so there is no reason to show them to anybody or have them indexed by Google and other search engines (a minimal sketch of the cutoff query follows this list).
  • Branding and styling updates for Marketplace pages
  • KPI Dashboard: l10n chart
  • KPI Dashboard: Combine solved and responded charts
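
As noted in the "hide unanswered questions" goal above, the rule boils down to a simple date cutoff. A minimal sketch with the Django ORM (the app, model, and field names are assumptions, not kitsune's actual schema):

    from datetime import datetime, timedelta

    from questions.models import Question  # assumed app and model name

    def hide_old_unanswered_questions():
        # "Over 3 months old" approximated as 90 days with no answers.
        cutoff = datetime.utcnow() - timedelta(days=90)
        return (Question.objects
                .filter(num_answers=0, created__lt=cutoff)
                .update(is_hidden=True))

The same filter can then be used to keep these questions out of the search index and out of sitemaps, so crawlers stop picking them up.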

We are really hoping to be ready to start dialing up the Elastic Search flag to 100% by the time we are done with this sprint.

Categorieën: Mozilla-nl planet

Ricky Rosario: SUMO Development: 2012.2 Update

Mozilla planet - wo, 05/07/2017 - 23:31

Yesterday we shipped the second half of the 2012.2 sprint. We ended up accomplishing most of our goals:

  • [Elastic Search] Perform full index in prod - DONE
  • [Elastic Search] Roll out to 15% of users - DONE
  • Add more metrics to KPI dashboard - INCOMPLETE (We landed 3 out of the 4 new graphs we wanted).

Not too bad. In addition to this, we made other nice improvements to the site:

Great progress for two weeks of work! Some data from the sprint:

  • Closed Stories: 30
  • Closed Points: 38
  • Developer Days: 35
  • Velocity: 1.08 pts/day
Onward to 2012.3

We are now a little over halfway into the 2012.3 sprint. Our goals are to roll out Elastic Search to 50% of users, be ready to roll out to 100% (fix all blockers), and add 5 new metrics to the KPI dashboard. So far so good, although we keep finding new issues as we continue to roll out Elastic Search to more users. That deserves its own blog post, though.

Categorieën: Mozilla-nl planet

Ricky Rosario: Joined the Mozilla Web Team

Mozilla planet - wo, 05/07/2017 - 23:31

After 3 great years at Razorfish, I decided to move on and joined Mozilla 2 weeks ago. I will be working remotely, but I spent the first week in Mountain View doing new hire orientation, setting up my shiny new MBP i7, setting up development environments for zamboni (the new add-ons site) and kitsune (the new support site), and fixing some easy bugs to start getting familiar with the codebase.

So far, I am loving it. Some of my initial observations:

  • My coworkers are super smart and awesome.
  • The main communication channel is through IRC (even when people are sitting nearby in the office). This works out great for the remote peeps like myself.
  • We use git/GitHub for our branch -> work on bug/feature -> review -> commit workflow. I am loving the process, and GitHub helps a ton with its UI for commenting on code.
  • Continuous Integration is the nuts.
  • Automated functional testing ^^.
  • Writing open source software full-time, and getting paid? Unreal!

I am working on SUMO (support.mozilla.com). It is currently going through a rewrite from Tiki Wiki to Django (the kitsune project). Working full time with Django is like a dream come true for me (a very nerdy dream :).

Anyway, it is very exciting to work for Mozilla serving over 400 million Firefox users. I am looking forward to this new chapter in my career!

Categorieën: Mozilla-nl planet

Ricky Rosario: dotjs: My first Firefox Add-on

Mozilla planet - wo, 05/07/2017 - 23:31

Inspired by defunkt's dotjs Chrome extension, I finally decided to play with the new add-on SDK to port the concept to Firefox. dotjs executes JavaScript files in ~/.js based on their filename and the domain of the site you are visiting. For example, if you navigate to http://www.twitter.com, dotjs will execute ~/.js/twitter.com.js. It also loads jQuery, so you can use jQuery in your scripts even if the site doesn't use jQuery (it is loaded with .noConflict so it doesn't interfere with any existing jQuery on the page).
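
The mapping from page to script is just the hostname (minus a leading "www.", going by the example above) plus a .js extension. A sketch of that lookup, mirroring the behaviour described here rather than the add-on's actual code:

    import os
    from urllib.parse import urlparse

    def script_for(url):
        # http://www.twitter.com -> ~/.js/twitter.com.js (with ~ expanded)
        host = urlparse(url).hostname or ''
        if host.startswith('www.'):
            host = host[len('www.'):]
        return os.path.expanduser(os.path.join('~/.js', host + '.js'))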

You can get the add-on for Firefox 4 on AMO and it doesn't require a browser restart (woot!). The code is on github. Feedback and patches welcome!

Categorieën: Mozilla-nl planet

Ricky Rosario: support.mozilla.org (SUMO) +dev in 2013

Mozilla planet - wo, 05/07/2017 - 23:31

This is my first and last blog post for 2013!

Whewww, 2013 has been another splendid year for SUMO and the SUMO/INPUT Engineering team. We did lose our manager, James Socol, early in the year (and missed him a ton), and I took over the managerial duties for the team, but the core dev team stayed intact.

Some metrics

Here are some metrics about what our platform, team, and community were up to in 2013:

  • Page views: 502,812,271
  • Visits: 255,122,331
  • Unique visits: 190,633,959
  • Questions asked: 33,482
  • Questions replied to: 31,746 (94.8%)
  • Questions solved: 9,048 (27%)
  • Replies to questions: 119,440
  • Support Forum contributors:
    1+ answers: 8,723
    2+ answers: 3,436
    3+ answers: 1,764
    5+ answers: 742
    10+ answers: 247
    25+ answers: 97
    50+ answers: 63
    100+ answers: 42
    250+ answers: 22
    500+ answers: 17
    1000+ answers: 11
    2500+ answers: 7
    5000+ answers: 3
    10000+ answers: 1 (20,057 answers by cor-el)
  • Army of Awesome tweets handled: 46,030
  • Army of Awesome contributors: 911
  • Knowledge Base (KB) Revisions: 16,561
    en-US KB Revisions: 2,975
    L10n KB Revisions: 13,586
  • Locales with activity: 55
  • en-US KB Contributors: 165
  • L10n KB Contributors: 607
  • KB Helpful votes: 4,214,528 (72.6%)
  • KB Unhelpful votes: 1,587,416 (27.4%)
More metrics

Willkg wrote a blog post that contains a lot more metrics specific to our development (bugs filed, bugs resolved, commits, major projects, etc.). Go check it out!

I wanted to highlight a few things he mentioned:

In 2011, we had 19 people who contributed code changes.
In 2012, we had 23 people.
In 2013, we had 32 people.

YAY!

Like 2011 and 2012, we resolved more bugs than we created in 2013. That's three years in a row! I've never seen that happen on a project I work on.

WOOT!

Input also had a great year in 2013. Check out willkg's blog post about it.

Onward

2013 was a great year for the SUMO platform. We fine-tuned the KB information architecture work we began in 2012 and simplified all of the landing pages (home, product, topic). In 2014, I am hoping we can make the Support Forum as awesome as the KB is today.

In addition to making the KB awesomer... The Support Forums now support more locales than just English. We now send HTML and localized emails! We added Open Badges! We switched to YouTube for videos. We improved search for locales. We made deployments better. We implemented Persona (not enabled yet). We implemented escalation of questions to the helpdesk. We added lots of new and improved dashboards and tools for contributors and community managers. At the same time, we made lots of backend and infrastructure improvements that make the site more stable and resilient and our code more awesome.

As a testament to the awesomeness of the platform, new products have come to us asking to use SUMO as their support platform. We are now the support site for Webmaker and will be adding Open Badges and Thunderbird early in 2014.

Thanks to the amazing awesome splendid dev team, the SUMO staff and the community for an awesome 2013!

Categorieën: Mozilla-nl planet
