
Frédéric Wang: Review of Igalia's Web Platform activities (H1 2017)

Mozilla planet - to, 07/09/2017 - 00:00
Introduction

For many years Igalia has been committed to and dedicated efforts to the improvement of the Web Platform in all open-source Web Engines (Chromium, WebKit, Servo, Gecko) and JavaScript implementations (V8, SpiderMonkey, ChakraCore, JSC). We have been working on the implementation and standardization of some important technologies (CSS Grid/Flexbox, ECMAScript, WebRTC, WebVR, ARIA, MathML, etc.). This blog post contains a review of these activities performed during the first half (and a bit more) of 2017.

Projects

CSS

A few years ago Bloomberg and Igalia started a collaboration to implement a new layout model for the Web Platform. Bloomberg had complex layout requirements and what the Web provided was not enough and caused performance issues. CSS Grid Layout seemed to be the right choice, a feature that would provide such complex designs with more flexibility than the currently available methods.

We’ve been implementing CSS Grid Layout in Blink and WebKit, initially behind flags as an experimental feature. This year, after some coordination effort to ensure interoperability (talking to the different parties involved, such as browser vendors, the CSS Working Group and the web authors community), it has been shipped by default in Chrome 58 and Safari 10.1. This is a huge step for layout on the web, and modern websites will benefit from this new model and enjoy all the features provided by the CSS Grid Layout spec.

Since CSS Grid Layout shares the same alignment properties as the CSS Flexible Box feature, a new spec has been defined to generalize alignment for all the layout models. We started implementing this new spec as part of our work on Grid, with Grid being the first layout model to support it.

Finally, we worked on other minor CSS features in Blink such as caret-color or :focus-within and also several interoperability issues related to Editing and Selection.

MathML

MathML is a W3C recommendation to represent mathematical formulae that has been included in many other standards such as ISO/IEC, HTML5, ebook and office formats. There are many tools available to handle it, including various assistive technologies as well as generators from the popular LaTeX typesetting system.

After the improvements we performed in WebKit’s MathML implementation, we have regularly been in contact with Google to see how we can implement MathML in Chromium. Early this year, we had several meetings with Google’s layout team to discuss this in further detail. We agreed that MathML is an important feature to consider for users and that the right approach would be to rely on the new LayoutNG model currently being implemented. We created a prototype for a small LayoutNG-based MathML implementation as a proof of concept and as a basis for future technical discussions. We are going to follow up on this after the end of Q3, once Chromium’s layout team has made more progress on LayoutNG.

Servo

Servo is Mozilla’s next-generation web content engine based on Rust, a language that guarantees memory safety. Servo relies on a Rust project called WebRender which replaces the typical rasterizer and compositor duo in the web browser stack. WebRender makes extensive use of GPU batching to achieve very exciting performance improvements in common web pages. Mozilla has decided to make WebRender part of the Quantum Render project.

We’ve had the opportunity to collaborate with Mozilla for a few years now, focusing on the graphics stack. Our work has focused on bringing full support for CSS stacking and clipping to WebRender, so that it will be available in both Servo and Gecko. This has involved creating a data structure similar to what WebKit calls the “scroll tree” in WebRender. The scroll tree divides the scene into independently scrolled elements, clipped elements, and various transformation spaces defined by CSS transforms. The tree allows WebRender to handle page interaction independently of page layout, allowing maximum performance and responsiveness.

WebRTC

WebRTC is a collection of communications protocols and APIs that enable real-time communication over peer-to-peer connections. Typical use cases include video conferencing, file transfer, chat, or desktop sharing. Igalia has been working on the WebRTC implementation in WebKit and this development is currently sponsored by Metrological.

This year we have continued the implementation effort in WebKit for the WebKitGTK and WebKit WPE ports, as well as the maintenance of two test servers for WebRTC: Ericsson’s p2p and Google’s apprtc. Finally, a lot of progress has been made to add support for Jitsi using the existing OpenWebRTC backend.

Since OpenWebRTC is no longer an active project and libwebrtc is gaining traction in both Blink and the WebRTC implementation of WebKit for Apple software, we are taking the first steps to replace the original OpenWebRTC-based WebRTC implementation in WebKitGTK with a new one based on libwebrtc. Hopefully, this way we will share more code between platforms and provide more robust WebRTC support for end users. GStreamer integration in this new implementation is an issue we will have to study, as it’s not built into libwebrtc. libwebrtc offers many services, but not every WebRTC implementation uses all of them. This seems to be the case for the Apple WebRTC implementation, and it may become our case too if we need tighter integration with GStreamer or hardware decoding.

WebVR

WebVR is an API that provides support for virtual reality devices in Web engines. Implementations and devices are currently being actively developed by browser vendors, and it looks like it is going to be a huge thing. Igalia has started to investigate this topic to see how we can join that effort. This year, we have been in discussions with Mozilla, Google and Apple to see how we could help with the implementation of WebVR on Linux. We decided to start experimenting with an implementation within WebKitGTK. We announced our intention on the webkit-dev mailing list and got encouraging feedback from Apple and the WebKit community.

ARIA

ARIA defines a way to make Web content and Web applications more accessible to people with disabilities. Igalia strengthened its ongoing commitment to the W3C: Joanmarie Diggs joined Richard Schwerdtfeger as a co-Chair of the W3C’s ARIA working group, and became editor of the Core Accessibility API Mappings, [Digital Publishing Accessibility API Mappings](https://w3c.github.io/aria/dpub-aam/dpub-aam.html), and Accessible Name and Description: Computation and API Mappings specifications. Her main focus over the past six months has been to get ARIA 1.1 transitioned to Proposed Recommendation through a combination of implementation and bugfixing in WebKit and Gecko, creation of automated testing tools to verify platform accessibility API exposure on GNU/Linux and macOS, and working with fellow Working Group members to ensure the platform mappings stated in the various “AAM” specs are complete and accurate. We will provide more information about these activities after ARIA 1.1 and the related AAM specs are further along on their respective REC tracks.

Web Platform Predictability for WebKit

The AMP Project has recently sponsored Igalia to improve WebKit’s implementation of the Web platform. We have worked on many issues, the main ones being:

  • Frame sandboxing: Implementing sandbox values to allow trusted third-party resources to open unsandboxed popups or restrict unsafe operations of malicious ones (see the sketch after this list).
  • Frame scrolling on iOS: Addressing issues with scrollable nodes; trying to move to a more standard and interoperable approach with scrollable iframes.
  • Root scroller: Finding a solution to the old interoperability issue about how to scroll the main frame; considering a new rootScroller API.
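
To make the frame-sandboxing item above more concrete, here is a minimal sketch of how an embedder can grant a trusted frame the ability to open unsandboxed popups using the standard sandbox attribute. The URL and the exact keyword combination are illustrative assumptions, not taken from the AMP or WebKit work:

// Hypothetical embedder code: grant a trusted third-party frame permission to
// open popups that do NOT inherit the sandbox restrictions.
const frame = document.createElement('iframe');
frame.src = 'https://trusted-widget.example.com/embed.html'; // made-up URL
frame.setAttribute(
  'sandbox',
  'allow-scripts allow-popups allow-popups-to-escape-sandbox'
);
document.body.appendChild(frame);
// Without "allow-popups-to-escape-sandbox", any window the frame opens would
// be subject to the same sandbox as the frame itself.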

This project aligns with Web Platform Predictability which aims at making the Web more predictable for developers by improving interoperability, ensuring version compatibility and reducing footguns. It has been a good opportunity to collaborate with Google and Apple on improving the Web. You can find further details in this blog post.

JavaScript

Igalia has been involved in design, standardization and implementation of several JavaScript features in collaboration with Bloomberg and Mozilla.

In implementation, Bloomberg has been sponsoring implementation of modern JavaScript features in V8, SpiderMonkey, JSC and ChakraCore, in collaboration with the open source community:

  • Implementation of many ES6 features in V8, such as generators, destructuring binding and arrow functions
  • Async/await and async iterators and generators in V8 and some work in JSC
  • Optimizing SpiderMonkey generators
  • Ongoing implementation of BigInt in SpiderMonkey and class field declarations in JSC

On the design/standardization side, Igalia is active in TC39 with Bloomberg’s support.

In partnership with Mozilla, Igalia has been involved in various JavaScript standard library features for internationalization: specification work, implementation in V8, code reviews in other JavaScript engines, and work with the underlying ICU library.

Other activities

Preparation of Web Engines Hackfest 2017

Igalia has been organizing and hosting the Web Engines Hackfest since 2009. This event, run in an unconference format, has been a great opportunity for Web engine developers to meet, discuss and work together on the web platform and on web engines in general. We announced the 2017 edition and many developers have already confirmed their attendance. We would like to thank our sponsors for supporting this event and we are looking forward to seeing you in October!

Coding Experience

Emilio Cobos has completed his Coding Experience program on the implementation of web standards. He has been working on the implementation of “display: contents” in Blink, but some work is pending due to unresolved CSS WG issues. He also started the corresponding work in WebKit, but that implementation is still very partial. It has been a pleasure to mentor a skilled hacker like Emilio and we wish him the best for his future projects!

New Igalians

During this semester we have been glad to welcome new Igalians who will help us pursue Web Platform developments:

  • Daniel Ehrenberg joined Igalia in January. He is an active contributor to the V8 JavaScript engine and has been representing Igalia at the ECMAScript TC39 meetings.
  • Alicia Boya joined Igalia in March. She has experience in many areas of computing, including web development, computer graphics, networks, security, and performance-minded software design, which we believe will be valuable for our Web Platform activities.
  • Ms2ger joined Igalia in July. He is a well-known hacker in the Mozilla community and has wide experience in both Gecko and Servo. He has notably worked on the DOM implementation and web platform test automation.

Conclusion

Igalia has been involved in a wide range of Web Platform technologies, ranging from JavaScript and layout engines to accessibility and multimedia features. Efforts have been made in all parts of the process:

  • Participation in standardization bodies (W3C, TC39).
  • Elaboration of conformance tests (web-platform-tests, test262).
  • Implementation and bug fixes in all open source web engines.
  • Discussion with users, browser vendors and other companies.

Although some of this work has been sponsored by Google or Mozilla, it is important to highlight how external companies (other than browser vendors) can make good contributions to the Web Platform and play an important role in its evolution. Alan Stearns already pointed out the responsibility of Web Platform users in the evolution of CSS, while Rachel Andrew emphasized how any company or web author can effectively contribute to the W3C in many ways.

As mentioned in this blog post, Bloomberg is an important contributor to several open source projects and has been a key player in the development of CSS Grid Layout and JavaScript. Similarly, Metrological’s support has been instrumental for the implementation of WebRTC in WebKit. We believe others could follow their example and we are looking forward to seeing more companies sponsoring Web Platform developments!

Mozilla Open Policy & Advocacy Blog: Making Privacy More Transparent

Mozilla planet - wo, 06/09/2017 - 23:07

How do you make complex privacy information easily accessible and understandable to users?  At Mozilla, we’ve been thinking through this for the past several months from different perspectives: user experience, product management, content strategy, legal, and privacy.  In Firefox 56 (which releases on September 26), we’re trying a new idea, and we’d love your feedback.

Many companies, including Mozilla, present a Privacy Notice to users prior to product installation.  You’ll find a link to the Firefox Privacy Notice prominently displayed under the Firefox download button on our websites.

Our testing showed that less than 1% of users clicked the link to view the “Firefox Privacy Notice” before downloading Firefox.  Another source of privacy information in Firefox is a notification bar displayed within the first minute of a new installation.  We call this the “Privacy Info Bar.”

User testing showed this was a confusing experience for many users, who often just ignored it. Users who did click the button ended up in the advanced settings of Firefox. Once there, some people made unintentional changes that impacted browser performance without understanding the consequences. And because this confusing experience occurred within the first few minutes of using a brand new browser, it took away from the primary purpose of installing a new browser: to navigate the web.

We know that many Firefox users care deeply about privacy, and we wanted to find a way to increase engagement with our privacy practices.  So we went back to the drawing board to provide users with more meaningful interactions. And after further discovery and iteration, our solution, which we’re implementing in Firefox 56, is a combination of several product and experience changes.  Here are our improvements:

  1. Displaying the Privacy Notice as the second tab of Firefox for all new installs;
  2. Reformatting and improving the Firefox Privacy Notice; and
  3. Improving the language in the preferences menu.

We reformatted the Privacy Notice to make it more obvious what data Firefox uses and sends to Mozilla and others.  Not everyone uses the same features or cares about the same things, so we layered the notice with high-level data topics and expanders to let you dig into details based on your interest.  All of this is now on the second tab of Firefox after a new installation, so it’s much more accessible and user-friendly.  The Privacy Info Bar became redundant with these changes, so we removed it.

We also improved the language in the Firefox preferences menu to make data collection and choices clearer to users, and we used the same data terms in the preferences menu and privacy notice that our engineers use internally for data collection in Firefox.

These are just a few changes we made recently, but we are continuously seeking innovative ways to make the privacy and data aspects of our products more transparent.  Internally at Mozilla, data and privacy are topics we discuss constantly.  We challenge our engineers and partners to find alternative approaches to solving difficult problems with less data.  We have review processes to ensure the end-result benefits from different perspectives.  And we always consider issues from the user perspective so that privacy controls are easy to find and data practices are clear and understandable.

You can join the conversation on GitHub, or comment on our governance mailing list.

Special thanks to Michelle Heubusch, Peter Dolanjski, Tina Hsieh, Elvin Lee, and Brian Smith for their invaluable contributions to our revised privacy notice structure.

The post Making Privacy More Transparent appeared first on Open Policy & Advocacy.

Mozilla Future Releases Blog: It’s your data, we’re just living in it

Mozilla planet - wo, 06/09/2017 - 23:06

Let me ask you a question: How often do you think about your Firefox data? I think about your Firefox data every day, like it’s my job. Because it is.  As the head of data science for Firefox, I manage a team of data scientists who contribute to the development of Firefox by shaping the direction of product strategy through the interpretation of the data we collect.  Being a data scientist at Mozilla means that I aim to ensure that Firefox users have meaningful choices when it comes to participating in our data collection efforts, without sacrificing our ability to collect useful, high-quality data that is essential to making smarter product decisions.

To achieve this balance, I’ve been working with colleagues across the organization to simplify and clarify our data collection practices and policies. Our goal is that this will make it easier for you to decide if and when you share data with us.  Recently, you may have seen some updates about planned changes to the data we collect, how we collect it, and how we share the data we collect. These pieces are part of a larger strategy to align our data collection practices with a set of guiding principles that inform how we work with and communicate about data we collect.

The direct impact is that we have made changes to the systems we use to collect data from Firefox, and we have updated the data collection preferences as a result. Firefox clients no longer employ two different data collection systems (Firefox Health Report and opt-in Telemetry). Although one was on by default and the other was opt-in, as a practical matter there was no real difference in the type of data being collected by the two channels in release. Because of that, we now rely upon a single system called Unified Telemetry, which combines aspects of both systems into a single data collection platform, and as a result we no longer have separate preferences as we did for the old systems.

If you are a long-time Firefox user and you previously allowed us to collect FHR data but you refrained from opting into extended telemetry, we will continue to collect the same type of technical and interaction information using Unified Telemetry. We have scaled back all other data collection to either pre-release or in situ opt-in, so you will continue to have choices and control over how Firefox collects your data.

Four Pillars of Our Data Collection Strategy

There are four key areas that we focused on when we decided to adjust our data preferences settings. For Firefox, this means that any time we collect data, we want to ensure that the proposal for data collection meets our criteria for:

  • Necessity
  • Transparency
  • Accountability
  • Privacy

Necessity

We don’t collect data “just because we can” or “just because it would be interesting to measure”.  Anyone on the Firefox team who requests data has to be able to answer questions like:

  • Is the data collection necessary for Firefox to function properly? For example, the automatic update check must be sent in order to keep Firefox up to date.
  • Is data collection needed to make a feature of Firefox work well? For example, we need to collect data to make our search suggestion feature work.
  • Is it necessary to take a measurement from Firefox users?  Could we learn what we need from measuring users on a pre-release version of Firefox?
  • Is it necessary to get data from all users, or is it sufficient to collect data from a smaller sample?

Transparency

Transparency at Mozilla means that we publicly share details about what data we collect and ensure that we can answer questions openly about our related decision-making.

Requests for data collection start with a publicly available bug on Bugzilla. New data collection requests generally follow this process: people indicate that they would like to collect some data according to some specification, they flag a data steward (an employee who is trained to check that requests have publicly documented their intentions and needs) for review, and only those requests that pass review are implemented.

Most simple requests, like new Telemetry probes or experimental tests, are approved within the context of a single bug. We check that every simple request includes enough detail to answer a standard set of questions determining the necessity and accountability of the proposed measurements. Here’s an example of a simple request for new telemetry-based data collection.

More complex requests, like those that call for a new data collection mechanism or require changes to the privacy notice, will require more extensive review than a simple request.  Typically, data stewards or requesters themselves will escalate requests to this level of review when it is clear that a simple review is insufficient.  This review can involve some or all of the following:

  • Privacy analysis: Feedback from the mozilla.dev.privacy mailing list and/or privacy experts within and outside of Mozilla to discuss the feature and its privacy impact.
  • Policy compliance review: An assessment from the Mozilla data compliance team to determine if the request matches the Mozilla data compliance policies and documents.
  • Legal review: An assessment from Mozilla’s legal team, which is necessary for any changes to the privacy policies/notices.

Accountability

Our process includes a set of controls that hold us accountable for our data collection. We take the precaution of ensuring that there is a person listed who is responsible for following the approved specification resulting from data review, such as designing and implementing the code as well as analyzing and reporting the data received.  Data stewards check to make sure that basic questions about the intent behind and implementation of the data we collect can be answered, and that the proposed collection is within the boundaries of a given data category type in terms of defaults available.  These controls allow for us to feel more confident about our ability to explain and justify to our users why we have decided to start collecting specific data.

Privacy

We can collect many types of data from your use of Firefox, but we don’t consider them equal. We consider some types of data to be more benign (like what version of Firefox you are using) than others (like the websites you visit). We’ve devised a four-tier system to group data in clear categories from less sensitive to highly sensitive, which you can review here in more detail. Since we developed this four-tier approach, we’ve worked to align this language with our Privacy Policy and with the user settings for Privacy in Firefox. (You can read more about the legal team’s efforts in a post by my colleagues at Legal and Compliance.)

What does this mean for you?

We hope it means a lot and not much at the same time.  At Firefox, we have long worked to respect your privacy, and we hope this new strategy gives you a clearer understanding of what data we collect and why it’s important to us.  We also want to reassure you that we haven’t dramatically changed what we collect by default.  So while you may not often think about the data you share with Mozilla, we hope that when you do, you feel better informed and more in control.

The post It’s your data, we’re just living in it appeared first on Future Releases.

Air Mozilla: Bugzilla Project Meeting, 06 Sep 2017

Mozilla planet - wo, 06/09/2017 - 22:00

Bugzilla Project Meeting The Bugzilla Project Developers meeting.

Air Mozilla: The Joy of Coding - Episode 111

Mozilla planet - wo, 06/09/2017 - 19:00

The Joy of Coding - Episode 111 mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting September 6, 2017

Mozilla planet - wo, 06/09/2017 - 18:00

Weekly SUMO Community Meeting September 6, 2017 This is the SUMO weekly call

Hacks.Mozilla.Org: I built something with A-Frame in 2 days (and you can too)

Mozilla planet - wo, 06/09/2017 - 17:03

A few months ago, I had the opportunity to try out several WebVR experiences for the first time, and I was blown away by the possibilities. Using just a headset and my Firefox browser, I was able to play games, explore worlds, paint, create music and so much more. All through the open web. I was hooked.

A short while later, I was introduced to A-Frame, a web framework for building virtual reality experiences. The “Hello World” demo is a mere 15 lines of code. This blew my mind. Building an experience in Virtual Reality seems like a task reserved for super developers, or that guy from Mr. Robot. After glancing through the A-Frame documentation, I realized that anyone with a little front-end experience can create something for Virtual Reality…even me – a marketing guy who likes to build websites in his spare time.

My team had an upcoming presentation to give. Normally we would create yet another slide deck. This time, however, I decided to give A-Frame a shot, and use Virtual Reality to tell our story and demo our work.

Within two days I was able to teach myself how to build this (slightly modified for sharing purposes). You can view the GitHub repo here.

The result was a presentation that was fun and unique. People were far more engaged in Virtual Reality than they would have been watching us flip through slides on a screen.

This isn’t a “how-to get started with A-Frame” post (there are plenty of great resources for that). I did, however, find solutions for a few “gotchas” that I’ll share below.

Walking through walls

One of the first snags I ran into was that the camera would pass through objects and walls. After some research, I came across a-frame-extras. It includes an add-on called “kinematic-body” that helped solve this issue for me.

Controls

A-Frame Extras also has helpers for controls. It gave me an easy way to implement controls for keyboard, mouse, touchscreen, etc.

Generating rooms

It didn’t take me long to figure out how to create and position walls to make a room. I didn’t just want a room though. I wanted multiple rooms and hallways. Manually creating them would take forever. During my research I came across this post, where the author created a maze using an array of numbers. This inspired me to generate my own map using a similar method:

const map = { "data": [ 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 4, 4, 4, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 4, 0, 0, 0, 4, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 4, 0, 0, 0, 4, 4, 4, 1, 0, 8, 0, 0, 0, 0, 0, 1, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 4, 4, 4, 4, 4, 4, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0 ], "height":19, "width":19 }

0 = no walls
1 – 4 = walls with various textures
8 = user start position
9 = log position to console

This would allow me to try different layouts, start at different spots around the map, and quickly get coordinates for positioning items and rooms (you’ll see why this is useful below). You can view the rest of the code here.
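
The full project code is linked above; purely as an illustration of the idea, here is a minimal sketch of how a map array like this could be walked to place walls. The cell size, wall height, texture IDs and camera-rig selector are assumptions made up for the example, not values from the original project:

// Hypothetical sketch: turn the flat map array into A-Frame wall entities.
const CELL = 2.5; // assumed grid cell size in metres
const scene = document.querySelector('a-scene');

map.data.forEach((cell, i) => {
  const x = i % map.width;
  const z = Math.floor(i / map.width);

  if (cell >= 1 && cell <= 4) {
    // Values 1-4 are walls with different textures.
    const wall = document.createElement('a-box');
    wall.setAttribute('width', CELL);
    wall.setAttribute('depth', CELL);
    wall.setAttribute('height', 3);
    wall.setAttribute('position', `${x * CELL} 1.5 ${z * CELL}`);
    wall.setAttribute('material', `src: #wall-texture-${cell}`); // assumed texture ids
    scene.appendChild(wall);
  } else if (cell === 8) {
    // Value 8 marks the user start position.
    document.querySelector('#rig').setAttribute('position', `${x * CELL} 0 ${z * CELL}`);
  } else if (cell === 9) {
    // Value 9 just logs its coordinates, handy when positioning items and rooms.
    console.log(`marker at ${x * CELL}, ${z * CELL}`);
  }
});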

Duplicating rooms

Once I created a room, I wanted to recreate a variation of this room at different locations around the map. This is where I learned to embrace the <a-entity> object. When you use <a-entity> as a container, it allows entities inside the container to be positioned relative to that parent entity object. I found this post about relative positioning to be helpful in understanding the concept. This allowed me to duplicate the code for a room and simply provide new position coordinates for the parent entity.
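
As a rough sketch of that relative-positioning idea (the entity names, sizes and positions here are made up, not taken from the original demo), a room can be built once and stamped out at whatever map coordinates you like by moving only the parent entity:

// Hypothetical sketch: one room template, placed at two different map positions.
// The child walls keep the same local coordinates; only the parent moves.
function makeRoom(position) {
  const room = document.createElement('a-entity');
  room.setAttribute('position', position); // e.g. taken from the map coordinates
  ['0 1.5 -2', '0 1.5 2'].forEach((localPos) => {
    const wall = document.createElement('a-box');
    wall.setAttribute('position', localPos); // relative to the parent room entity
    wall.setAttribute('width', 4);
    wall.setAttribute('height', 3);
    wall.setAttribute('depth', 0.1);
    room.appendChild(wall);
  });
  return room;
}

const scene = document.querySelector('a-scene');
scene.appendChild(makeRoom('0 0 0'));
scene.appendChild(makeRoom('12.5 0 -7.5'));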

Conclusion

I have no doubt that there are better and more efficient ways to create something like this, but the fact that a novice like myself was able to build something in just a couple of days speaks volumes to the power of A-Frame and WebVR. The A-Frame community also deserves a lot of credit. I found libraries, code examples, and blog posts for almost every issue and question I had.

Now is the perfect time to get started with WebVR and A-Frame, especially now that it’s supported for anyone using the latest version of Firefox on Windows. Check out the website, join the community, and start building.

Daniel Stenberg: curl author activity illustrated

Mozilla planet - wo, 06/09/2017 - 15:31

At the time of each commit, check how many unique authors had a change committed within the previous 120, 90, 60, 30 and 7 days. Run the script on the curl git repository and then plot a graph of the data, ranging from 2010 until today. This covers just under 10,000 commits.

(click for the full resolution version)

git-authors-active.pl is the little stand-alone script I wrote and used for this – it should work fine for any git repository. I then made the graph from that data using LibreOffice.
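
Daniel's actual tool is the Perl script linked above; purely as a sketch of the same counting idea (not his code), here is how the rolling unique-author counts could be computed from a list of commits in JavaScript:

// Sketch: for each commit, count unique authors seen in the preceding N days.
// `commits` is assumed to be [{ date: Date, author: string }, ...],
// e.g. parsed from `git log --format='%aI;%aN'`, oldest first.
const WINDOWS_DAYS = [120, 90, 60, 30, 7];
const DAY_MS = 24 * 60 * 60 * 1000;

function authorActivity(commits) {
  return commits.map(({ date }) => {
    const row = { date };
    for (const days of WINDOWS_DAYS) {
      const since = date.getTime() - days * DAY_MS;
      const authors = new Set(
        commits
          .filter((c) => c.date.getTime() > since && c.date.getTime() <= date.getTime())
          .map((c) => c.author)
      );
      row[`last${days}d`] = authors.size;
    }
    return row;
  });
}

A real implementation would keep a sliding window rather than re-scanning all commits for every data point, but this is enough to reproduce the shape of the graph.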

The Mozilla Blog: Mozilla and the Washington Post Are Reinventing Online Comments

Mozilla planet - wo, 06/09/2017 - 15:00
To engage readers, build community, and strengthen journalism, Mozilla’s open-source commenting platform will be integrated across washingtonpost.com this summer

 

Digital journalism has revolutionized how we engage with the news, from the lightning speed at which it’s delivered to different formats on offer.

But the comments section beneath that journalism? It’s… broken. Trolls, harassment, enmity and abuse undermine meaningful discussion and push many people away. Many major newsrooms are removing their comments. Many new sites are launching without them.

Instead, newsrooms are directing interaction and engagement to social media. As a result, tools are limited, giant corporations control the data, and news organizations cannot build a direct relationship with their audience. 

At Mozilla, we’re not giving up on online comments. We believe that engaging readers and building community around the news strengthens not just journalism, but also open society. We believe comments are a fundamental part of the decentralized web.

Mozilla has been researching, testing, and building software in this area since 2015. Today, our work is taking a huge step forward as the Washington Post integrates Talk — Mozilla’s open-source commenting platform — across washingtonpost.com.

Talk is currently deployed across the Washington Post’s Politics, Business, and The Switch (technology) sections, and will roll out to more sections in the coming weeks.

Talk is open-source commenting software developed by Mozilla.

What is Talk?

Talk is developed by The Coral Project, a Mozilla creation that builds open-source tools to make digital journalism more inclusive and more engaging, both for audience members and journalists. Starting this summer, Talk will also be integrated across Fairfax Media’s websites in Australia, including the Sydney Morning Herald and The Age. One of The Coral Project’s other tools, Ask, is currently being used by 13 newsrooms, including the Miami Herald, Univision, and PBS Frontline.

“Trust in journalism relies on building effective relationships with your audience,” says Andrew Losowsky, project lead of The Coral Project. “Talk rethinks how moderation, comment display and conversation can function on news websites. It encourages more meaningful interactions between journalists and the people they serve.”

“Talk is informed by a huge amount of research into online communities,” Losowsky adds. “We’ve commissioned academic studies and held workshops around the world to find out what works, and also published guides to help newsrooms change their strategies. We’ve interviewed more than 300 people from 150 newsrooms in 30 countries, talking to frequent commenters, people who never comment, and even trolls. We’ve learned how to turn comments — which have so much potential — into a productive space for everyone.”

“Commenters and comment viewers are among the most loyal readers The Washington Post has,” said Greg Barber, The Post’s director of newsroom product. “Through our work with Mozilla, The New York Times, and the Knight Foundation in The Coral Project, we’ve invested in a set of tools that will help us better serve them, powering fruitful discussion and debate for years to come.”

The Coral Project was created thanks to a generous grant from the Knight Foundation and is currently funded by the Democracy Fund, the Rita Allen Foundation, and Mozilla. It also offers hosting and consulting services for newsrooms who need support in running their software.

Here’s what makes Talk different

It’s filled with features that improve interactions, including functions that show the best comments first, ignore specific users, find great commenters, give badges to staff members, filter out unreliable flaggers, and offer a range of audience reactions.

You own your data. Unlike the most popular systems, every organization using Talk runs its own version of the software, and keeps its own data. Talk doesn’t contain any tracking, or digital surveillance. This is great for journalistic integrity, good for privacy, and important for the internet.

It’s fast. Talk is small — about 300kb — and lightweight. Only a small number of comments initially load, to keep the page load low. New comments and reactions update instantaneously.

It’s flexible. Talk uses a plugin architecture, so each newsroom can make their comments act in a different way. Plugins can be written by third parties — the Washington Post has already written and open sourced several — and applied within the embed code, in order to change the functionality for particularly difficult topics.

It’s easy to moderate. Based on feedback from moderators at 12 different companies, we’ve created a simple moderation system with keyboard shortcuts and a feature-rich configuration.

It’s great for technologists. Talk is fully extensible with a RESTful and Graph API, and a plugin architecture that includes webhooks. The CSS is also fully customizable.

It’s 100% free. The code is public and available for you to download and run. And if you want us to help you host or integrate Talk into your site, we offer paid services that support the project.

Learn more about The Coral Project.

The post Mozilla and the Washington Post Are Reinventing Online Comments appeared first on The Mozilla Blog.

Ehsan Akhgari: Identifying regressions when working on bugs

Mozilla planet - wo, 06/09/2017 - 06:14

Many of the patches that we write are fixes to things that have broken as a result of a change, often known as regressions.  An important aspect of a high quality release is for us to be able to identify and fix as many of these regressions as we can for each release, and this requires collaboration between people who file bugs, those who triage them, people who fix them, and of course the release managers.

As engineers, one of the things we can do when fixing a bug is to double check whether the bug was introduced as a result of a recent code change, and based on that decide whether the fix needs to be backported to older branches.  In this post, I’m going to talk about how I usually do this.

Identifying the source of a bug

Sometimes it is clear from the description of a bug that the bug didn’t exist previously, as is often the case for severely broken scenarios. It’s a good practice to always ask yourself “did this use to work?” If the answer is yes and you have a way to reproduce the bug, then we need to figure out what code change broke things in the first place. We have a really helpful tool called mozregression which allows you to bisect the history of the codebase to find the offending code change. There is documentation available for using the tool, but to summarize: it walks you back in the history using a binary search algorithm to allow you to relatively quickly find the specific code change that introduced a regression. This tool handles downloading the old Firefox versions and running them for you — all you need to do is try to reproduce the bug in each Firefox instance it opens up and then tell the tool whether that version was good or bad. Mozregression also handles the creation of new profiles for each build so that you don’t have to worry about downgrades when you go from one build to another, or the impact of leaving the profile in a dirty state while testing during bisection. All you need to have at hand to start using it is knowing which version of Firefox shows a bug, and which version doesn’t. (When I don’t know when a bug first appeared, I usually use the --launch command to run a super old version and test whether the problem exists there, in order to find a good base version; it doesn’t matter much how wide the regression range you start with is.)

But sometimes there are no specific steps to reproduce available, or it may not be obvious that a bug is a regression. In those cases, when you have a fix for the bug at hand, it would be nice to take a few extra minutes to look at the code that you have modified in the fix to see where the code changes are coming from. This can be done using a blame/annotate source code tool. These tools show which changeset each line of code comes from. Depending on your favorite revision control system (hg or git) and your favorite source code editor/IDE (Vim, Emacs, Eclipse, VS Code, Sublime, etc.) you can find any number of plugins and many online tutorials on how to view annotated source code. An easy way, and the one that I use most of the time these days, is looking at annotations through Searchfox. But you may also want to familiarize yourself with a plugin that works with your workflow to assist you in navigating the history of the code.

For example, when you hover the left-hand sidebar (I’ve sometimes heard this referred to as “the gutter”) when viewing a file in Searchfox, you will see an info box like this:

Searchfox history tooltip box

There are a few helpful links here for following the history of the code.  “Show earliest version with this line” is useful to look at the revision of this file which introduced the line you hovered.  “Show latest version without this line” is helpful to look at the parent revision of this file with respect to the aforementioned changeset.  Using these two links you can navigate backwards in history to get to the point in time when a part of the code was first introduced.  Often times you need to walk several steps back in the history before you get to the correct revision since code can be moved around, re-indented, modified in slight ways that don’t matter to you at the time, etc.

Once you get to the final version of the file that introduced the code in question, you will see something like this at the top of the page:

Searchfox changeset header

This shows you the changeset SHA1 for the problematic code in question (I picked a completely arbitrary changeset here which as far as I’m aware has not caused any regressions for demonstration purposes!)  By clicking on the changeset identifier (bc62a859 in this example), and following the “hg” link from there, you can get to a table showing you which release milestone it got landed in:

hg.mozilla.org changeset info section

This information is also available on the bug (which you can get to by clicking the link with the bug number next to “bugs” – 1358447 in this example):

Information about what release a bug got fixed in

Marking the bug as a regression

Once you identify the version of Firefox that first had the broken code checked into it, it is helpful to communicate this information to the release management team.  Doing this involves a simple three-step process:

  • Add the regression keyword to the bug if it doesn’t have it already.
  • Based on the version of Firefox that was first impacted by the bug found in the previous step, under Firefox Tracking Flags, set the Tracking flag for the respective versions to “?”.
  • Similarly, set the Status flag for the affected versions to “affected”, and for the older version(s) to “unaffected”. Bugzilla will help you fill out a comment describing to the release management team why this tracking is needed.

Marking a bug as a regression

Cameron Kaiser: TenFourFox FPR3b1 available

Mozilla planet - wo, 06/09/2017 - 06:10
TenFourFox Feature Parity Release 3 beta 1 is now available (hashes, downloads, release notes). This release has two major updates: first, a whole lot more of the browser has AltiVec in it. All of the relevant call sites that use the slower OS X memory search function memchr() were either converted to the VMX-accelerated version that we introduced for JavaScript in FPR2, or, if the code simply checks to see if a character is there but doesn't need to know where it is, our VMX-accelerated haschr(). This occurs in lots of places, including event handling, font validation and even the network stack; not all of them are hot, but all of them benefit.

The second major change is additional JavaScript ES6 and ES7 compatibility. It's not sufficient for us simply to copy over later versions of JavaScript from browser versions between 45 and 52; besides the fact they may require platform support we don't implement, they don't have our JIT, our PowerPC-specific optimizations and our later updates which would need to be backported and merged, they don't have our accumulated security patches, and they don't have some of the legacy features we need to continue to support 45-era add-ons (particularly legacy generator and array comprehensions). This hybrid engine maintains backwards compatibility but has expanded syntax that fixes some issues with Dropbox (though we still have a couple other glitches to smoke out), Amazon Music and Beta for Pnut, and probably a number of other sites. There is a limit to how much I can cram into the engine and there is a very large frontend refactor around Fx51 which will probably not be easily backported, but there should be more improvements I can squeeze in.

There was also supposed to be a new feature for delaying video decoding until the rest of the page had loaded to improve throughput on YouTube and some other sites, but YouTube introduced its new site design while I was testing the feature, and unfortunately the "lazy loading" technique they appear to be using now means the browser cannot deterministically compute when video will start competing with layout for resources. I'm thinking of a way to retool this but it will not be an enabled part of FPR3. One idea is to forge dropped frames into MSE's statistics early so it shifts to a lower quality stream for a period of time as a "fast start;" another might be to decouple the media state machine from the decoder more completely. I haven't decided how I will attack this problem yet.

In miscellaneous changes, even after desperate begging Imgur would not fix their site sniffer to stop giving us a mobile version using the default TenFourFox user agent (this never used to happen until recently, btw), even if just an image by itself were requested. I got sick of being strung along by their tech support tickets, so this version just doesn't send any user agent to any Imgur site, unless you explicitly select something other than the default. Take that, Imgur. The reason I decided to do this for Imgur specifically is because their mobile site actually causes bugs in TenFourFox due to a hard CoreGraphics limit I can't seem to get around, so serving us the mobile site inappropriately is actually detrimental as opposed to merely annoying. Other miscellaneous changes include some widget tune-ups, more removal of 10.7+ specific code, and responsiveness tweaks to the context menu and awesome bar.

Last but not least, this release has a speculative fix for long-running issue 72 where 10.5 systems could end up with a frozen top menu bar after cycling repeatedly through pop-up menus. You'll notice this does not appear in the source commits yet because I intend to back it out immediately if it doesn't fix the problem (it has a small performance impact even on 10.4 where this issue does not occur). If you are affected by this issue and the optimized build doesn't fix your problem, please report logging to the Github issue from the debug version when the issue triggers. If it does fix it, however, I will commit the patch to the public repo and it will become a part of the widget library.

Other than that, look for the final release on or about September 26. Post questions, concerns and feedback in the comments.

Emma Humphries: Triage Summary 2017-09-05

Mozilla planet - wo, 06/09/2017 - 03:24

It's the weekly report on the state of triage in Firefox-related components.

Marking Bugs for the Firefox 57 Release

The bug that will overshadow all the hard work we put into the Firefox 57 release has probably already been filed. We need to find it and fix it. If you think a bug might affect users in the 57 release, please set the correct affected Firefox versions in the bug, and then set the tracking-request flag to request tracking of the bug by the release management team.

Poll Result

It’s not a large sample, but by an 8-1-1 margin, you said you’d like a logged-in BMO home page like: https://fitzgen.github.io/bugzilla-todos/.

Now, the catch. We don’t have staff to work on this, but I’ve filed a bug, https://bugzilla.mozilla.org/show_bug.cgi?id=1397063 to do this work. If you’d like undying gratitude, and some WONTFIX swag, grab this bug.

Hotspots

The components with the most untriaged bugs remain the JavaScript Engine and Build Config.

Rank   Component                       Last Week   This Week
-----  ------------------------------  ----------  ----------
1      Core: JavaScript Engine         477         472
2      Core: Build Config              459         455
3      Firefox for Android: General    408         415
4      Firefox: General                254         258
5      Core: General                   241         235
6      Core: JavaScript: GC            180         175
7      Core: XPCOM                     171         172
8      Core: Networking                159         167
       All Components                  8,822       8,962

Please make sure you’ve made it clear what, if anything, will happen with these bugs.

Not sure how to triage? Read https://wiki.mozilla.org/Bugmasters/Process/Triage.

Next Release

Version                            56      56     57     57      57      57
---------------------------------  ------  -----  -----  ------  ------  ------
Date                               7/31    8/7    8/14   8/21    8/21    9/5
Untriaged this Cycle               4,479   479    835    1,196   1,481   1,785
Unassigned Untriaged this Cycle    3,674   356    634    968     1,266   1,477
Affected this Release              139     125    123    119     42      83
Enhancements                       103     3      5      11      17      15
Orphaned P1s                       192     196    191    183     18      23
Stalled P1s                        179     157    152    155     13      23

What should we do with these bugs? Bulk close them? Make them into P3s? Bugs without decisions add noise to our system, cause despair in those trying to triage bugs, and leave the community wondering if we listen to them.

Methods and Definitions

In this report I talk about bugs in Core, Firefox, Firefox for Android, Firefox for iOS, and Toolkit which are unresolved, not filed from Treeherder using the intermittent-bug-filer account*, and have no pending needinfos.

By triaged, I mean a bug has been marked as P1 (work on now), P2 (work on next), P3 (backlog), or P5 (will not work on but will accept a patch).

https://wiki.mozilla.org/Bugmasters#Triage_Process

A triage decision is not the same as a release decision (status and tracking flags.)

https://mozilla.github.io/triage-report/#report

Untriaged Bugs in Current Cycle

Bugs filed since the start of the Firefox 57 release cycle (August 2nd, 2017) which do not have a triage decision.

https://mzl.la/2wzJxLP

Recommendation: review bugs you are responsible for (https://bugzilla.mozilla.org/page.cgi?id=triage_owners.html) and make a triage decision, or RESOLVE.

Untriaged Bugs in Current Cycle Affecting Next Release

Bugs marked status_firefox56 = affected and untriaged.

https://mzl.la/2wzjHaH

Enhancements in Release Cycle

Bugs filed in the release cycle which are enhancement requests, severity = enhancement, and untriaged.

https://mzl.la/2wzCBy8

Recommendation: product managers should review and mark as P3, P5, or RESOLVE as WONTFIX.

High Priority Bugs without Owners

Bugs with a priority of P1, which do not have an assignee, have not been modified in the past two weeks, and do not have pending needinfos.

https://mzl.la/2sJxPbK

Recommendation: review priorities and assign bugs, re-prioritize to P2, P3, P5, or RESOLVE.

Inactive High Priority Bugs

There are 159 bugs with a priority of P1 which have an assignee, but have not been modified in the past two weeks.

https://mzl.la/2u2poMJ

Recommendation: review assignments, determine if the priority should be changed to P2, P3, P5 or RESOLVE.

Stale Bugs

Bugs in need of review by triage owners. Updated weekly.

https://mzl.la/2wNyONP

* New intermittents are filed as P5s, and we are still cleaning up bugs after this change. See https://bugzilla.mozilla.org/show_bug.cgi?id=1381587, https://bugzilla.mozilla.org/show_bug.cgi?id=1381960, and https://bugzilla.mozilla.org/show_bug.cgi?id=1383923

If you have questions or enhancements you want to see in this report, please reply to me here, on IRC, or Slack and thank you for reading.



Mozilla Marketing Engineering & Ops Blog: Kuma Report, August 2017

Mozilla planet - wo, 06/09/2017 - 02:00

Here’s what happened in August in Kuma, the engine of MDN Web Docs:

  • Launched beta of interactive examples
  • Continued work on the AWS migration
  • Prepared for KumaScript translations
  • Refined the Browser Compat Data schema
  • Shipped tweaks and fixes

Here’s the plan for September:

  • Establish maintenance mode in AWS
Done in August

Launched beta of interactive examples

On August 29, we launched the interactive examples. We’re starting by showing them to 50% of anonymous users, to measure the difference in site speed. You can also visit the new pages directly. See the interactive editors in beta post on Discourse for more details. We’re collecting feedback with a short survey. See the “take our survey” link below the new interactive example.

We’ve already gotten several rounds of feedback, by showing early iterations to Mozilla staff and to the Brigade, who helped with the MDN redesign. Schalk, Stephanie, Kadir, and Will Bamberg added user interviews to our process. They recruited web developers to try out the new feature, watched how they used it, and adjusted the design based on the feedback.

One of the challenging issues was avoiding a scrollbar when the <iframe> for the interactive example was smaller than the content. The scrollbar broke the layout and made interaction clumsy. We tried several rounds of manual iframe sizing before implementing dynamic sizing, using postMessage to send the desired size from the client <iframe> to the MDN page. (PR 4361).
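
The PR linked above has the real implementation; the general pattern looks roughly like the following sketch, in which the message format, origins and iframe selector are assumptions for illustration rather than the actual MDN code:

// Inside the interactive-example <iframe>: report the rendered height to the
// embedding page.
window.parent.postMessage(
  { type: 'resize', height: document.documentElement.scrollHeight },
  'https://developer.mozilla.org' // assumed embedder origin
);

// On the MDN page: listen for the message and size the iframe to fit,
// so no scrollbar is needed.
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://interactive-examples.mdn.mozilla.net') return; // assumed origin
  if (event.data && event.data.type === 'resize') {
    document.querySelector('iframe.interactive').style.height = `${event.data.height}px`;
  }
});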

Another change from user testing is that we began with deployment to S3 behind a CDN, rather than waiting until after beta testing. Thanks to Dave Parfitt for quickly implementing this (PR 149).

It will take a while to complete the beta testing and launch these six pages to all users. Until then, we still have the live samples. Stephanie Hobson recently improved these by opening them in a new tab, rather than replacing the MDN reference page. (PR 4391).

Continued work on the AWS migration

We’re continuing the work to rehost MDN in AWS, using Kubernetes. We track the AWS migration work in a GitHub project, and we’re getting close to production data tests.

In our current datacenter, we use Apache to serve the website (using mod_wsgi). We’re not using Apache in AWS, and in August we updated Kuma to take on more of Apache’s duties, such as serving files from MDN’s distant past (PR 4365 from Ryan Johnson) and handling old redirects (PR 4231 from Dave Parfitt).

We are currently using MySQL with a custom collation utf8_distinct_ci. The collation determines how text is sorted, and if two strings are considered to be equal. MySQL includes several collations, but they didn’t allow the behavior we wanted for tags. We wanted to allow both “Reference” and the French “Référence”, but not allow the lower-case variants “reference” and “référence”. The custom collation allowed us to do this while still using our tagging library django-taggit. However, we can’t use a custom collation in AWS’s RDS database service. The compromise was to programmatically rename tags (they are now “Reference” and “Référence (2)”), and switch to the standard utf8_general_ci collation, which still prevents the lowercase variants (PR 4376 by John Whitlock). After the AWS migration, we will revisit tags, and see how to best support the desired features.

Prepared for KumaScript translations

There was some preparatory work toward translating KumaScript strings in Pontoon, but nothing has shipped yet. The locale files have been moved from the Kuma repository to a new repository, mozilla-l10n/mdn-l10n. The Docker image for KumaScript now includes the locale files. Finally, KumaScript now lives at mdn/kumascript, in the mdn GitHub organization.

There are additional tasks planned, to use 3rd-party libraries to load translation files, apply translations, and to extract localizable strings. However, AWS will be the priority for the rest of September, so we are not planning on taking the next steps until October.

Refined the Browser Compat Data schema

Florian Scholz and wbamberg have finished a long project to update the Browser Compatibility Data schema. This included a script to migrate the data (BCD PR 304), and a unified {{compat}} macro suitable for compatibility tables across the site (KumaScript PR 272). The new schema is used in release 0.0.4 of mdn-browser-compat-data.

The goal is to convert all the compatibility data on MDN to the BCD format. Florian is on track to convert the JavaScript data in September. Jean-Yves Perrier has made good progress on migrating HTML compatibility data with 7 merged PRs, starting with PR 279.

Shipped Tweaks and Fixes

There were many PRs merged in August:

Many of these were from external contributors, including several first-time contributions. Here are some of the highlights:

Planned for September

Work will continue to migrate to Browser Compat Data, and to fix issues with the redesign and the new interactive examples.

Establish Maintenance Mode in AWS

In September, we plan to prepare a maintenance mode deployment in AWS, and send it some production traffic. This will allow us to model the resources needed when the production environment is hosted in AWS. It will also keep MDN data available while the production database is transferred, when we finalize the transition.

Mark Banner: Bookmarks changes in Firefox Nightly – Testing Wanted

Mozilla planet - ti, 05/09/2017 - 23:08

There’s been a project going on for several years to move Firefox’s bookmarks processing off the main thread so that it happens in the background, to help reduce possible jerkiness when dealing with bookmarks.

One part of this (Places Transactions) has been enabled in Nightly for about 4-5 weeks. We think we’ve fixed all the regressions it caused, and we’d now like more people to test it.

So, when you’re checking out the improved performance of Firefox Nightly, please also keep an eye on Bookmarks – if you’re moving/copying/editing them, and you undo/redo actions, check that everything behaves as it should.

If it doesn’t, please file a bug with steps to reproduce and we’ll take a look. If you can also test with the preference browser.places.useAsyncTransactions set to false (after a restart), to see whether it happens with the old-style transactions as well, that will help us find the issue.

We know there are still some performance issues with the new async transactions; we’re currently working on them and hope to land fixes soon.


Christian Heilmann: Reasons to attend and/or speak at Reasons.to

Mozilla planet - ti, 05/09/2017 - 23:08

I am currently on the train back to London after attending the first two days of Reasons.to in Brighton, England. I need to pick up the mail that has accumulated in my London flat before heading back to Berlin and Seattle in a day – otherwise nothing would have kept me from seeing this conference through to the end.

Reasons.to stage sign

I don’t want to go. Reasons.to is an amazing experience. Let me start by listing the reasons why you should be part of it as an attendee or as a presenter. I will write up a more detailed report on why this year was amazing for me personally later.

Why reasons.to is a great experience for attendees:

Reasons.to is a conference about creative makers who use technology as a tool. It is not a conference about hard-core technical topics, nor is it limited to creating the next app or web site. It is a celebration of creativity and being human about it. If you enjoy Beyond Tellerrand, this is also very much for you. That’s not by accident – the organisers of both shows are long-term friends and help each other find talent and get the right people together.

As such, it demands more of both the presenters and the audience. There are no recordings of the talks, and there is no way to look up later what happened. It is all about the here and now and about everyone at the event making it a memorable experience.

Over and over, the organisers remind the audience to use the time to mingle and network rather than worrying about asking the presenters for more details. There is no Q&A, but there is ample time in the breaks to ask in person instead. Don’t worry – presenters are coached that this is something to expect at this event, and they have all agreed to it.

There is no food catering – you’re asked to find people to join and go out for breaks, lunches and dinners instead. This is a great opportunity to organize yourselves and even for shy people to leave with a group and have a good excuse to get a bit out of their shell.

This is an event about getting to know and learning about each other. As such, it has no need to advertise itself as an inclusive, safe space for people – it just is one. You meet people from all kinds of backgrounds, families arrive with children, and all the people involved in putting on the show know each other.

There are no blatant sponsored talks or holy wars about “framework vs. library” or “technology x vs. technology y”. There is no grandstanding about “here is what I did and it will revolutionise our little world”. There is no “I know this doesn’t work yet, but it will be what you need to use, or else you’ll be outdated and doing it wrong”. And most importantly, there are no “this is my showreel, am I not amazing” presentations, which are sadly often what “creative” events end up having.

The organisers do a thorough job of finding presenters who are more than safe bets to sell tickets or cover the newest hotness. Instead, they work hard to find people who have done amazing things and aren’t necessarily that well known, but deserve to be.

If anything, there is a very refreshing feeling of meeting people whose work you may know from advertising, on trains, TV or big billboards. And realizing that these are humans and humble about their outrageous achievements. And ready to share their experiences and techniques creating them – warts and all.

The organisers have a keen eye on spotting talent that is amazing but not quite ready to tell the world about it and then making them feel welcome and excited about sharing their story. All the presenters are incredibly successful in what they do, yet none of them are slick and perfect in telling their story. On the contrary, it is very human to see the excitement and how afraid some of these amazing people are in showing you how they work.

Reasons.to is not an event where you will leave with a lot of new and immediately applicable technical knowledge. You will leave, however, with a feeling that even the most talented people are having the same worries as you. And that there is more to you if you just stop stalling and allow yourself to be more creative. And damn the consequences.

Why reasons.to is a great idea for presenters

As a presenter, I found this conference to be incredibly relaxed. It is an entity, a happening that is complete in itself without being elitist.

Not having video recordings and having a very low-traffic social media backchannel might be bad for your outside visibility, and it makes it harder to show your manager the impact you had. But it makes for a much less stressful environment to present in. Your job is to inspire and engage with the audience at the event, not to deliver a great, reusable video recording or to deal with people on social media judging you without having seen you perform or knowing the context in which you said something.

You have a chance to be yourself. A chance to not only deliver knowledge but share how you came by it and what you did wrong without having to worry about disappointing an audience eager for hard facts. You can be much more vulnerable and human here than at other – more competitive – events.

You need to be ready to be available though. And to spend some extra time in getting to know the other presenters, share tips and details with the audience and to not be a performer that drops in, does the show and moves on. This event is a great opportunity not only to show what you did and want people to try, but it is also a great event to stay at and take in every other talk. Not to compare, but to just learn about people like you but with vastly different backgrounds and approaches.

There is no place for ego at this event. That’s a great thing as it also means that you don’t need to be the perfect presenter. Instead you’re expected to share your excitement and be ready to show mistakes you made. As you would with a group of friends. This is refreshing and a great opportunity for people who have something to show and share but aren’t quite sure if the stage is theirs to command.


Support.Mozilla.Org: Army of Awesome’s Retirement and Mozilla Social Support’s Participation Outreach

Mozilla planet - ti, 05/09/2017 - 19:27

Twitter is a social network used by millions of people around the world for many reasons – and this includes helping Firefox users when they need it. If you have a Twitter account, like to help people, like to share your knowledge, and want to be a Firefox Social Support member – join us!

We aim to have the engine that has been powering Army of Awesome (AoA) officially disabled before the end of 2017. To continue with the incredible work that has been accomplished for several years, we plan a new approach to supporting users on Twitter using TweetDeck.

TweetDeck is a web tool from Twitter that lets you post to your timeline and manage your user profile within the social network, and it offers several additional features and filters that improve the general experience.

By applying filters in TweetDeck, you can see Firefox users’ comments, questions, and problems. With tools that are simple yet surprisingly powerful, we can offer quality support to users right where they are.

If you are interested, please take a look at the project’s guidelines, which take careful note of the successes and failures of past programs. Once you are filled with the amazingness of the guidelines, fill out this form with your email and we will send you more information about everything you need to know about the program’s shared purpose. After completing the form, you can start configuring TweetDeck to display the issues to be answered and the users to be helped.

We are sure this will be an incredible experience for all of you who are passionate about Mozilla and Twitter – and we can hardly wait to see the great results of your actions!


Air Mozilla: Webdev Beer and Tell: September 2017, 05 Sep 2017

Mozilla planet - ti, 05/09/2017 - 19:00

September 2017 – Once a month, web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

