
Mozilla Nederland: The Dutch Mozilla community

The Mozilla Blog: A Copyright Vote That Could Change the EU’s Internet

Mozilla planet - Mon, 11/09/2017 - 09:01
On October 10, EU lawmakers will vote on a dangerous proposal to change copyright law. Mozilla is urging EU citizens to demand better reforms.

 

On October 10, the European Parliament Committee on Legal Affairs (JURI) will vote on a proposal to change EU copyright law.

The outcome could sabotage freedom and openness online. It could make filtering and blocking online content far more routine, affecting the hundreds of millions of EU citizens who use the internet every day.

Dysfunctional copyright reform is threatening Europe’s internet

Why Copyright Reform Matters

The EU’s current copyright legal framework is woefully outdated. It’s a framework created when the postcard, and not the iPhone, was a reigning communication method.

But the EU’s proposal to reform this framework is in many ways a step backward. Titled “Directive on Copyright in the Digital Single Market,” this backward proposal is up for an initial vote on October 10 and a final vote in December.

“Many aspects of the proposal and some amendments put forward in the Parliament are dysfunctional and borderline absurd,” says Raegan MacDonald, Mozilla’s Senior EU Policy Manager. “The proposal would make filtering and blocking of online content the norm, effectively undermining innovation, competition and freedom of expression.”

Under the proposal:

  • If the most dangerous amendments pass, everything you put on the internet will be filtered, and even blocked. It doesn’t even need to be commercial — some proposals are so broad that even photos you upload for friends and family would be included.

 

  • Linking to and accessing information online is also at stake: extending copyright to cover news snippets will restrict our ability to learn from a diverse selection of sources. Sharing and accessing news online would become more difficult through the so-called “neighbouring right” for press publishers.

 

  • The proposal would remove crucial protections for intermediaries, and would force most online platforms to monitor all content you post — like Wikipedia, eBay, software repositories on Github, or DeviantArt submissions.

 

  • Only scientific research institutions would be allowed to mine text and datasets. This means countless other beneficiaries — including librarians, journalists, advocacy groups, and independent scientists — would not be able to make use of mining software to understand large data sets, putting Europe at a competitive disadvantage in the world.

Mozilla’s Role

In the weeks before the vote, Mozilla is urging EU citizens to phone their lawmakers and demand better reform. Our website and call tool — changecopyright.org — makes it simple to contact Members of European Parliament (MEPs).

This isn’t the first time Mozilla has demanded common-sense copyright reform for the internet age. Earlier this year, Mozilla and more than 100,000 EU citizens dropped tens of millions of digital flyers on European landmarks in protest. And in 2016, we collected more than 100,000 signatures calling for reform.

Well-balanced, flexible, and creativity-friendly copyright reform is essential to a healthy internet. Agree? Visit changecopyright.org and take a stand.

Note: This blog has been updated to include a link to the reform proposal.

The post A Copyright Vote That Could Change the EU’s Internet appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Tantek Çelik: My First #Marathon @theSFMarathon

Mozilla planet - Mon, 11/09/2017 - 07:30

I started writing this San Francisco Marathon race write-up the evening after the race, braindumping so many thoughts and feelings from various points of the race, the memories pouring forth not in any particular order. Over the next month I sat down for a half hour here, an hour there, and with the help of a race map and my Strava record, closed my eyes, recalled my memories one mile at a time, focusing on where I ran and what it felt like. I wrote paragraphs for each, integrating those visual memories with my post-race braindump.

Seven weeks after the race itself, here’s how it went.

Morning of the race

We woke up so early that morning of July 23rd — I cannot remember exactly when. So early that maybe I blocked it out. My friend Zoe had crashed on my couch the night before — she decided just a month before to race with me in solidarity.

Having laid out our race kits the night before, we quickly got ready. I messaged my friend & neighbor Michele who came over and called a Lyft for us. The driver took us within a couple of blocks of Harrison & Spear street, and we ran the rest of the way. After a quick pitstop we walked out to the Embarcadero to find our starting waves.

I only have a few timestamps from the photos I took before the race.

The Embarcadero and Bay Bridge still lit with lights at dawn

05:32. The Bay Bridge lights with a bit of dawn’s orange glow peeking above the East Bay mountains.

We sent Michele on her way to wave 3, and Zoe & I found our way around the chain link fences to join the crowd in wave 5.

Tantek, a police officer in full uniform & cap, and Zoe lined up in wave 5 of the San Francisco Marathon 2017

05:47. We found a police officer in the middle of the crowd, or rather he found us. He had seen our November Project tagged gear and shouted out “Hey November Project! I used to run that in Boston!” We shared our experiences running the steps at Harvard Stadium. Glancing down we noticed he had a proper race bib and everything. He was doing the whole race in his full uniform, including shoes.

Tantek and Zoe in wave 5 waiting for the start of the San Francisco Marathon

05:52. We took a dawn photo of us in the darkness with the Bay Bridge behind us. Zoe calls this the “When we were young and naïve” shot.

We did a little warm up run (#no_strava) back & forth on the Embarcadero until just minutes before our wave start time.

Sunrise behind the Bay Bridge

05:57. I noticed the clouds were now glowing orange, reflections glistening on the bay, and took another photo. Wave 5 got a proper sunrise send-off.

As we were getting ready to start, Zoe told me to feel free to run ahead, that she didn’t want to slow me down.

In the weeks leading up to the race, all my runner friends had told me: enjoy yourself, don’t focus on time. So that’s what I told Zoe: Don’t worry about time, we’re going to support each other and enjoy ourselves. She quietly nodded. We were ready.

The San Francisco Marathon 2017 Full Marathon Course Map

Map of the 2017 full marathon course from the official website.

Mile 1

06:02. We started with the crowd. That was the last timestamp I remember.

We ran at an easy sub-10-minute/mile pace up The Embarcadero; you could see the colors of everything change by the minute as the sun rose.

It took deliberate concentration to keep a steady pace and not let the excitement get to me. I focused on keeping an even breath, an even pace.

That first mile went by in mere moments. I remember feeling ready, happy, and grateful to be running with a friend. With all that energy and enthusiasm from the crowd, it felt effortless. More like gliding than running. Then poof, there went the Mile 1 sign.

Mile 2

It was like gliding until the cobblestones in Fisherman’s Wharf. The street narrowed and they were hard to avoid. The cobblestones made running awkward and slowed us down. Noted for future races.

Mile 3

Running into Aquatic park was a relief.

Yes this is NPSF Mondays home territory, look out. We got excited and picked up the pace. Latent Monday morning competitive memories of so many burnout sprints on the sand. Turned the cove corner and ran up the ramp to the end of Van Ness.

Hey who just shouted my name from the sidelines? It was Lindsay Bolt at a med station. Stopped for a lightning hug and sprinted back to catch Zoe.

Back on asphalt, made it to the Fort Mason climb. Let’s do this.

Time to stretch those hill climbing legs. Strava segment PR NBD, even with a 1 min walk at the top at our 5/1 run/walk pace. Picked up the run just in time for...

Those smiles, that beard. Yes none other than perhaps two of the most positive people in NPSF, Tony & Holly! This race just keeps getting better and better.

These two are so good at sharing and beaming their joy out into the world, it lifts you off the ground. Seriously. I felt like I was running on air, flying.

Fort Mason downhill, more NPSF Mondays home territory. Glanced at my watch to see low-6min/mile pace (!!!). I know I’m supposed to be taking it easy, but it felt like less work to lean forward and flow with gravity’s downhill pull, rather than resist.

Mile 4

Slight veer to the right then left crossing the mile 3 marker to the Marina flats which brought me back to my sustainable ~10min/mile pace.

Somehow it got really crowded. We had caught up to a slower group and had to slalom back and forth to get through them. It was hard to keep a consistent pace. Slowed to about a 10-11min/mile.

Just as we emerged from the slow cluster, the path narrowed and sent us on a zig towards the beach away from Mason street. Then left, another left, and right back onto Mason street after the mile 4 marker.

What was the point of this momentum-killing jut out towards the bay? They couldn’t figure out some other place to put that distance? Really hope they fix it next year.

Mile 5

The long and fairly straight stretch of Mason street was a nice relief. Though it was at this point that I first felt like I had to pee. I figured I could probably ignore it for a bit, especially with the momentum we had picked up.

I should note that Zoe and I had been run/walking 5min/1min intervals this entire time, maybe fudging them a bit to overlap with the water stations so we could walk at each one. We grabbed a cup of water every time. One cup only.

So it was with the station before the mile 5 marker. That station was particularly well placed, right before one of the biggest hills in the course.

Mile 6

We flew by the mile 5 marker and started the uphill grind towards the bridge. I just ran this hill 3 weeks ago. Piece of cake I thought.

Practicing hills for a race course is a huge confidence booster, because nearly everyone else slows down, even slowing to a walk, because hills seem to intrinsically evoke fear in runners, likely mostly fear of the unknowns. How long is this hill? Am I going to run out of energy/steam/breath trying to run up it? Am I going to tire myself out? Practicing a hill removes such mysteries and you know just how long you’ll have to push how hard to summit it how fast. Then you can run uphill with confidence, knowing full well how much energy it will take to get to the top.

Despite all that, hills are still the hardest thing for me. Zoe quickly outpaced me and pulled ahead. I kept her in sight.

We kept a nice 5/1 run/walk pace. And while running up the hill, I glanced at my heart monitor to pace myself and keep it just under 150bpm.

Now for the bridge. Did I mention the view running up to the bridge? I did not, because there was almost no view of the bridge, just a blanket of fog in the Marina.

On the bridge we could see maybe a few hundred meters in front of us, and just the base of the towers. @karlthefog was out stronger than I’ve seen in any SF Marathon of the past four years. And I was quite grateful because I’d forgotten to put on sunscreen.

Mile 7

That blanket of fog also meant nearly no views, which meant nearly no one stopping to selfie in the middle of the bridge. This was the smoothest I have ever seen a race run over the Golden Gate Bridge.

The initial uphill on the bridge went by faster than I ever remember. As the road flattened approaching the halfway point, it started to feel like it was downhill. I couldn’t tell if that was an illusion from excitement or actually gravity.

Sometime after the midpoint, as the bridge cables started to rise once again, I finally saw my first NP racer wearing a November Project tagged shirt coming the other way. He was a tall guy that I did not recognize, likely visiting from another city. We shouted “NP” and high fived as we passed. Smack.

Mile 8

As we crossed the bridge into Marin, the fog thinned to reveal sunlit hills in front of us. Pretty easy loop around the North Vista Point parking lot, biggest challenge was dodging everyone stopping for gu. It was nice to get a bit of sunshine.

We looped back onto the bridge with just enough momentum to keep up a little speed, with the North tower in sight.

Mile 9

The Golden Gate Bridge felt even faster on the way back, and it actually felt good to run back into the fog. Sunglasses off.

We picked up even more speed as the grade flattened, eventually becoming a downhill as we approached the South Tower. That mile felt particularly fast.

Mile 10

Launching into the tenth mile with quite a bit of momentum, I kept us running a bit longer than the five minutes of our 5/1 run/walk, flying around the turns until the bottom of the downhill right turn onto Lincoln Boulevard.

I didn’t know it at the time, but I had just set PRs for the Strava segments across the bridge, having run it significantly faster than any practice runs.

Flying run turned into fast walk, we shuffled up the Lincoln climb at a good clip, which felt less steep than ever before.

Fast walked right up to the aid station, our run/walk timing had worked out well. After we downed a cup of water each and started running again, we both related that a quick bathroom stop would be a good idea, and agreed to take a pee-break at the next set of porta-potties.

Mile 11

One more run/walk up to the top of the Lincoln hill. Been here many times, whether running the first half of the SF Marathon, or coming the other direction in the Rock & Roll SF half, or running racing friends up to the top. Again it felt less steep than before.

All those Friday NPSF hillsforbreakfast sessions followed by Saturday mornings with SFRC running trails in the Marin headlands had prepared me to keep pushing even after 10 miles. Zoe pulled ahead, stronger on the uphills.

We knew going in that we had different strengths, she was faster up the hills and I was faster down them, so we encouraged each other to go faster when we could, figuring we would sync-up on the flats.

Having reached the end of our 1 minute walk as we crested the hill, we picked up our run, I leaned forward and let gravity pull me through. Zooming down the hill faster than I’d expected, by the time I walked through the water stop at the bottom I had lost sight of Zoe. I kept walking and looking but couldn’t see her.

Apparently I had missed the porta-potties by the aid station, she had not, and had stopped as we had agreed.

Mile 12

Crossing mile marker 11, I turned around and started walking backwards, hoping to see Zoe. A few people looked at me like I was nuts but I didn’t care, I was walking uphill backwards nearly as fast as some were shuffling forwards. And I knew from experience that walking backwards works your muscles very differently, so I looked at it as a kind of super active-recovery.

After walking nearly a half mile backwards I finally spotted Zoe running / fast walking to catch-up; I think she spotted me first.

Just after we sync’d back up, and switched back to walking, a swing-dancing friend of mine who I had not seen in years spotted me and cheered us on at 27th & Clement!

We finally got to the top of the Richmond hill (at Anza street I think), and could see Golden Gate Park downhill in front of us.

Mile 12 was my slowest mile of the race, just after my fastest (mile 11). We picked up the pace once more.

Mile 13

We sped into the park, and slowed once we hit the uphill approaching the aid station there. I remember this point in the course very clearly from last year’s first half. At that point last year my knees were unhappy and I was struggling to finish. This year was a different story. Yes I felt the hill, however, my joints felt solid. Ankles, knees, hips all good. A little bit of soreness in my left hip flexor but nothing unmanageable.

However this hill did not feel easy like the others. Not sure if that was due to being tired or something else.

Making a note to practice this hill in particular if (when) I plan to next run the first half of the SF Marathon (maybe next year).

Speaking of which, just after the aid station is where they divide the first half and full marathon runners. At the JFK intersection, the half runners turn left with a bit more uphill toward their last sprint to the finish, and the marathoners turn right, downhill towards the beach.

I have lost count of the number of times I have run down JFK to the beach, in races like Bay to Breakers, and Sunday training runs in Golden Gate Park. Zoe & I in particular have run this route more times than I can remember. This was super familiar territory and very easy for us to get into a comfortable groove and just go.

Mile 14

As we flew past the mile 13 marker, we high-fived (as we did at every mile marker we passed together), and I told Z hey we’re basically halfway done, we totally got this!

This part of JFK is always so enjoyable — a sweeping curving downhill with broad green meadows and a couple of lakes.

I saw the aid station at Spreckels Lake and gave Z a heads-up that I needed to take a quick pit stop.

Ran back into the fray and while I knew we were passing the bison on our right, I don’t actually remember looking over to see any. I think we were too focused on the road in front of us.

Mile 15

The mile 14 marker seemed to come up even quicker, maybe because we briefly stopped just a half mile or so before. Seeing that “14” had a huge impact on me, a number I had never before run up to in any race.

I remembered from the course map that we were approaching where the second half marathoners were going to start.

We turned left toward MLK drive, right by the second half start, and there was no sign of the second half marathoners.

My dad was running the second half, originally in wave 9, and we had thoughts of somehow trying to cross paths during our races. Not only was he long gone, but he had ended up starting in wave 5, and the second half overall started 15 minutes earlier than expected. Regardless I knew there was very little chance of catching him since all the second half runners were long gone.

MLK drive is a bit of a long uphill slog and we naturally slowed down a bit. It finally started to feel like “work” to get to the mile 15 marker.

Mile 16

Right after the mile 15 marker we zigged left then right onto the forgettably named Middle drive, which I had not run in quite a while, if ever. I vaguely remembered rollerblading on it many years ago.

The pavement was a bit rougher, and the slow uphill slog continued. I decided I would chew half of one of my caffeinated cherry Nuun energy tablets at the next aid station, swallowing it with water.

The half tablet started to fizz as I chewed it so I was happy to wash it down. The fizziness felt a bit odd in my stomach. So far in the race I had had zero stomach problems or weirdnesses, so this was maybe not the greatest idea. Yeah, that thing about don’t change your fuel on raceday, that. I was mostly ok, but I think the fizziness threw me off.

I wasn’t really enjoying this part of the race, despite it being in Golden Gate park. I wasn’t hating it either. It just felt kind of meh.

Mile 17

Crossing the mile 16 marker and high-fiving I remember thinking, only ten-ish miles left, that doesn’t seem so bad. Turning right back onto JFK felt good though, finally we were back in familiar territory.

Then I remembered we still had to run up and around Stow lake. When I saw the course map I remember looking forward to that, but at this point I felt done with hills and was no longer looking forward to it.

After we turned right and started running up towards Stow Lake, I decided to walk and wait to sync up with Z, which was good timing it turns out. My friend Michele (who started a couple of waves before us) was just finishing Stow Lake and on her way down that same street.

She expressed that she wasn’t feeling too good, I told her she looked great and she smiled. We hugged, she told me and Zoe that it was only about 15 minutes to go around the lake and come back down, which made it feel more doable.

Still, it continued to feel like “work”. As we ran past the back (South) side of the lake, it was nice to have a bit of downhill, especially down to the next mile marker.

Mile 18

Crossing the mile 17 marker I turned to Zoe and told her hey, less than ten miles left! Single digits! She managed a smile. We kept pushing up and around the lake.

The backside of the lake felt easier since I knew the downhill to JFK was coming up. Picked up speed again, and then walked once I reached JFK, waiting for Zoe to catch back up.

We could see the first half marathoners finishing to our left, and I had flashbacks to how I felt finishing the first half last year. I was feeling a lot better this year at mile 17+ than last year at mile 13+, and I actually felt pretty good last year. That was a huge confidence boost.

As they got their finishers medals, we had an uphill to climb toward the de Young museum tower. This was really the last major hill. Once we crested it and could see the mile 18 marker, knowing it was mostly downhill made it feel like we didn’t have that far to go.

Mile 19

More familiar territory on JFK. Another aid station as we passed the outdoor "roller rink" on the left. The sun finally started to break through the clouds & fog, and we could see blue skies ahead.

I chatted with Z a bit as we passed the Conservatory of Flowers, about how we have done this run so many times, and how it was mostly downhill from here.

Up ahead I heard a couple of people shouting my name and then saw the sign.

A “faster than fiber optics” sign cheering at mile 19 in the San Francisco Marathon

Photo by Amanda Blauvelt. Tim & Amanda surprised me with a sign at the edge of Golden Gate Park! (you can see me in the orange on the left).

I couldn’t help laughing. Ran up and hugged them both. Background: Last year Amanda ran the SF Marathon (her first full), and I conspired with her best friend from out of town to have her show up and surprise Amanda at around mile 10 by jumping in and running with her. The turnabout surprise was quite appreciated.

In my eager run up to Tim & Amanda, I somehow lost Zoe.

First I paused and looked around, looked ahead to see if she had run past me and did not see her. Looked behind me to see if she was approaching and also did not see her.

I picked up the pace figuring she may have run past me when I saw Tim and Amanda, or I would figure it out later. (After the race Tim told me they saw Zoe moments after I had left).

The race looped back into Golden Gate park for a bit.

Mile 20

Passing the mile 19 marker, the course took us under a bridge, up to and across Stanyan street onto Haight street, the last noticeable uphill.

This was serious home territory for me, having run up Haight street to the market near Ashbury more times than I can remember.

Tantek running on Haight street just after crossing Ashbury.

Photo by mom. I saw my mom cheering at the intersection of Haight & Ashbury, and positioned myself clear of other runners because I knew she was taking photos. Then I went to her, hugged her, told her I love her, and asked where dad was. An hour ahead of me. No way I’m going to catch him before the finish.

I could see the mile 20 marker, but just as I was passing Buena Vista park on my right, I heard another familiar voice cheering me on. Turning to look I immediately recognized my friend Leah who helped get me into running in the first place, by encouraging me to start with very short distances.

She asked if I wanted company because she had to get in a 6-7 mile run herself and I said sure! Leah asked if I wanted to run quietly, or for her to talk or listen, and I said I was happy to listen to her talk about anything and appreciated the company.

I told her about how I’d lost Zoe earlier. Leah put Zoe’s info into the SF Marathon app on her phone to track Zoe’s progress to see if we could find her as we ran.

We were crushing it down the hill to Divisadero literally passing everyone else around us (downhills are my jam), and she was surprised at how well I looked and sounded so far into the race, at this point farther than I’d ever run before.

Mile 21

As we flew by the mile 20 marker, I remember thinking wow 20 miles and I feel great. I felt like I could just keep running on Clif blocks and Nuun electrolytes for hours. It was an amazing feeling of strength and confidence.

I realized I was doing something I thought I would never do, but more than that, it felt sustainable. I felt unstoppable.

My hip flexors were both a bit sore now, but at least they were evenly sore, which helped both balance things out, and then forget about them. My knees were just a tiny bit sore now, but again, about the same on both sides.

Just as we reached Scott street, they started redirecting racers up Scott to Waller. One more tiny uphill block, I remember complaining and then thinking just gotta push through. Up to Waller street then again a slight eastward downhill.

Once again picking up speed, I really started to enjoy all the cheering from folks who had come out of their houses to cheer us on. There was a family with kids offering small cups of water and snacks to the runners.

As we approached the last block before Buchanan street, I could hear a house on the North side blasting the Chariots of Fire theme song on huge speakers. Louder than the music I was listening to. Brilliant for that last Waller street block, which happened to be uphill. Of course it was a boost.

Making the right turn to run down Buchanan street, we only made it a block before they redirected us eastward down Hermann street to the Market street crossing and veering right onto Guerrero.

Running these familiar streets felt so easy and comfortable.

Once again we picked up speed running downhill, barely slowing down to pick up two cups of Nuun at the aid station before the mile 21 marker.

Mile 22

We kept running South on Guerrero until the course turned East again at 16th street.

16th street in the Mission is a bit of a mess. Lots of smells, from various things in the street to the questionable oily meats spewing clouds of smoke from questionable grills. I think this was my least favorite stretch of the race. Literally disgusting.

The smells didn’t clear until about Folsom street. Still relatively flat, I knew we had a climb coming up to Bryant street, so I was mentally ready for it.

Just before we reached Bryant street, they redirected us South one block onto 17th street.

Still no sign of Zoe. With all these race route switches I was worried that we had been switched different ways, and would have difficulty finding each other.

The racer tracking app was also fairly inaccurate. In several places it showed Zoe as being literally right by us, or just ahead or just behind when she was nowhere to be seen.

Mile 23

Slow climb up to Potrero. It’s not very enticing running there. Mostly industrial. Still felt familiar enough, we just pressed on, occasionally looking for Zoe.

Leah kept up a nice friendly distracting dialog that helped this fairly unremarkable part of the course go by quicker than it otherwise would have.

Another aid station, more Nuun. I started to feel I wasn’t absorbing fluids as fast as I had been earlier. Something also felt a bit off about my stomach. Not sure if it was the fizzing from the cherry Nuun tablet I had chewed on. Or the smells of 16th street.

I only sipped half a cup of Nuun and tossed the rest.

We were almost at 280, turned briefly down Mississippi street for a block, then over on Mariposa to cross underneath 280, and I could see the mile 23 marker just on the other side.

Mile 24

Downhill to Indiana street so we flew right by the marker.

Twenty-three miles done. Just a little over 5km left.

Made a hard right onto Indiana street where it flattened out once more. We had entered the industrial backwaters of the Dogpatch.

Still run/walking at about a 5 to 1 split, but I was starting to slowly feel more tired. No “wall” like I have often heard about. I wondered if the feeling was really physical, or just mental.

Maybe it was just the street and the few memories I had associated with it. Some just two years old, some older. Nothing remarkable. Maybe this was my chance to update my memories of Indiana street.

The sun was shining, and I was running. Over 23 miles into my first marathon and I still felt fine. There were scant few out cheering on this stretch. But I knew the @Nov_Project_SF cheerstation wasn’t far.

The sound of two people shouting my name brought my attention back to my surroundings. My friends @Nov_Project Ava and Tara had run backwards along the course from the cheerstation!

They checked in with me, asked how I was doing. I was able to actually answer while running which was a good sign. They ran with me a bit and then sprinted ahead a few blocks to just past the next corner.

Turning onto 22nd street, I grabbed another half cup of Nuun. At this point I did not feel like eating anything, my stomach had an odd half-full not-hungry feeling. I sipped the Nuun and tossed the cup.

There were Ava & Tara again, cheering me on, like a personal traveling cheersquad. So grateful. I’m convinced smiling helps you go faster, and especially when friends are cheering you on. They sprinted on ahead again and I lost sight of them.

Finally the turn onto 3rd street. There is something very powerful about feeling like you are finally heading directly towards the finish.

It was getting warmer, and the sweat was making it harder to see. This is the point where I was glad I had brought my sunglasses with me, despite the thick clouds this morning. No clouds remained. Just clear blue skies.

Kept going through Dogpatch and China Basin, really not the most attractive places to run. Except once again I saw Ava & Tara up ahead at 20th street, and they cheered us through the corner, and then disappeared again.

Just one block East on 20th and then North again onto Illinois street. I could see the next marker.

Mile 25

Just over a couple of miles left. Slight right swerve onto Terry A Francois Boulevard, and I could see and hear the very excited Lululemon Cheerstation waving their signs, shouting, and cheering on all of us runners.

Then perhaps the second best part of the race. Actually maybe tied for best with finishing.

I saw brightly colored neon shirts up ahead and heard a roar. (I’m having trouble even writing this four weeks later without tearing up.)

The November Project San Francisco cheerstation. What a sight even from a distance.

My friend Caity Rogo ran towards me & Leah, and I had this thought like I should be surprised to see her but I couldn’t remember why.

Leah and Tantek running with Caity beside them right before the NPSF cheergang at the San Francisco Marathon 2017

Photo by Kirstie Polentz. I do not remember what I said to Caity. Later I would remember that just the day before she was away running a Ragnar relay race! Somehow she had made it back in time to cheer.

At this point my cognitive bandwidth was starting to drop. I had just enough to focus on the race, and pay attention to the amazing friends cheering and accompanying me.

Tantek running through the November Project cheergang

Photo by Lillian Lauer. So many high fives. So many folks pacing me. I think there were hugs? It was kind of a blur. I asked and found out Zoe was about 2 min ahead of me, so I picked up the pace in an attempt to catch up to her.

Tantek with Nuun cup walking next to Henri asking him how he is doing during the San Francisco Marathon 2017

Photo by Kirstie Polentz. I remember Henry Romeo asking me what I wanted from the next water station, running ahead, bringing me a Nuun electrolyte cup, and keeping me company for a bit.

After snapping a few photos, my pal Krissi ran with me despite a recent calf injury, grinning with infectious joy and confidence. She ran me past the mile 25 marker, checking to make sure I was ok and asking how I was feeling.

As good as I thought I was feeling before, the cheer station was a massive boost.

Mile 26

Found Zoe again! Or rather she saw me. She was walking slowly or had stopped and was looking for me.

Having reconnected, I checked in with her: how was everything feeling? We kept up our run/walk, with still a bit over a mile left.

Apparently there was a ballgame on at AT&T Park. I couldn’t help but feel a sharp contrast between the sports fans on one side of the race barrier and the runners on the other. Each of us was doing our own thing. A few sports fans cheered us on and reached across to give out high fives, which we gladly accepted.

Finally we made it around the ballpark and out to the Embarcadero, our home stretch. Half mile or so to go.

We were all tired, with various body parts aching, and yet did our best to keep up a decent pace.

Leah peeled off at mile 26, shouting encouragements for us to push hard to the finish.

Finish

Past the mile 26 marker we curved a little to the left and could see the finish just a few blocks in front of us.

I talked Zoe into keeping up a regular pace as we approached the finish line. Checking to make sure she was good and still smiling, I picked up the pace with whatever energy I had, just to see how many people I could pass in the last 400 meters.

I actually saw people slowing down, which felt like an enticement to go even faster. I sprinted the last 100m as fast as I could, passing someone with just feet to go to the finish. Maybe a silly bit of competitiveness, but it’s always felt right to push hard to a finish, using any motivation at hand.

5:35:59.

I kept walking and got my finishers medal.

Zoe and Tantek at the finish of the San Francisco Marathon wearing their medals

Turning around I found Zoe. We had someone take our photo. We had done it. Marathon finishers!

We kept walking and found my dad. We picked up waters & fruit cups and saw my mom & youngest sister on the other side of the barriers.

Tantek and Zoe stretching after finishing the San Francisco Marathon

Photo by Ayşan Çelik. We stopped to stretch our legs and take more photos.

We found more @Nov_Project friends. I stopped by the Nuun booth and kept refilling my cup, and Steve gave me a big hug too.

I was a little sore in parts, but nothing was actually hurting. No blood, no limping, no pain. Just a blister on one left toe, and one on my right heel that had already popped. Slight chafing on my right ankle where my shoe had rubbed against it.

I felt better than after most of my past half marathon races. Something was different.

Whether it was all the weekly hours of intense Vinyasa yoga practice, from the March through May yoga teacher training @YogaFlowSF and since, or the months of double-gang workouts @Nov_Project_SF (5:30 & 6:30 every Wednesday morning), or doing nearly all my long runs on Marin trails Saturday mornings hosted by @SFRunCo in Mill Valley, setting new monthly meters climbing records leading up to the race, I was stronger than ever before, physically and mentally. Something had changed.

I had just finished my first marathon, and I felt fine.

Tantek wearing the San Francisco Marathon 52 Club hoodie, finisher medal, and 40 for 40 medal.

I waited til I got home to finally put on my San Francisco Marathon “52 Club” hoodie (for having run the first half last year, and the second half the year before that), with the medals of course.

As much as all the training prepared me as an individual, the experience would not have been the same without the incredible support: from fellow @Nov_Project runners, from my family (even just knowing my dad was ahead of me running the second half), from Leah and other friends who jumped in and ran alongside, and especially from starting & finishing with my pal Zoe, encouraging each other along the way.

Grateful for having the potential, the opportunity to train, and all the community, friends, and family support. Yes it took a lot of personal determination and hard work, but it was all the support that made the difference. And yes, we enjoyed ourselves.

(Thanks to Michele, Zoe, Krissi, and Lillian for reviewing drafts of this post, proofreading, feedback, and corrections! Most photos above were posted previously and link to their permalinks. The few new to this post are also on Instagram.)

Categorieën: Mozilla-nl planet

Cameron Kaiser: Irma's silver lining: text is suddenly cool again

Mozilla planet - ma, 11/09/2017 - 01:24
In Gopherspace (proxy link if your browser doesn't support it), plain text with low bandwidth was always cool, and that's how we underground denizens roll. But as our thoughts and prayers go to the residents of the Caribbean and Florida peninsula being lashed by Hurricane Irma, our obey-your-media-thought-overlords newspapers and websites are suddenly realizing that when the crap hits the oscillating storm system, low-bandwidth text is still a pretty good idea.

Introducing text-only CNN. Yes, it's really from CNN. Yes, they really did it. It loads lickety-split in any browser, including TenFourFox and Classilla. And if you're hunkered down in the storm cellar and the radio's playing static and all you can get is an iffy 2G signal from some half-damaged cell tower miles away, this might be your best bet to stay current.

Not to be outdone, there's a Thin National Public Radio site too, though it only seems to have quick summaries instead of full articles.

I hope CNN keeps this running after Irma has passed because we really do need less crap overhead on the web, and in any disaster where communications are impacted, low-bandwidth solutions are the best way to communicate the most information to the most people. Meanwhile, please donate to the American Red Cross and the Salvation Army (or your relief charity of choice) to help the victims of Hurricanes Harvey and Irma today.

Categorieën: Mozilla-nl planet

Andy McKay: My third Gran Fondo

Mozilla planet - zo, 10/09/2017 - 09:00

Yesterday was my third Gran Fondo; the last was in 2016.

Last year was a bit of an odd year, I knew what to face, yet I struggled. I was planning on correcting that this year.

The most important part of the Fondo is the months and months of training beforehand. This year that went well. Up to this point I've been on the bike for 243 hours, covering 5,050km over 198 bike rides. I only ended up doing Mt Seymour 3 times, but rides with Steed around North and West Vancouver gave me some extra hill practice.

I managed to lose 20lbs over the training, but gained a lot of muscle mass, especially in my legs. I also did the challenge route of the Ride to Conquer Cancer with some awesome Mozilla friends. The weekend before, I did the same route 3 times, and on the last day I hit a pile of personal records.

Two equipment changes also helped. I had a computer to tell me how fast I was going (yeah, should have had one earlier) and I moved from Mountain Bike pedals over to Shimano road pedals.

So, knowing what I was facing, I had a slightly different plan, focusing on my nemesis, the last hour of the ride. To do that I focused on:

  • Drafting on the flats where I can
  • Taking energy gels every hour to replenish electrolytes
  • Not charging up every hill
  • Going for a faster cadence in a lower gear
  • Saving the energy for the last half (same as last year)

As the day arrived a new challenge appeared. It was raining. Pretty much the entire bloody way.

The first part felt good; I knew what time I would have to arrive at each rest stop to beat my last time. I made it to the first stop 13 mins ahead of schedule, but then made it to the next stop only about 10 mins ahead of schedule. Then the sticky piece of plastic with the times on it flew off.

At this point I was getting anxious, I seemed to be slowing down. All I could remember was the time I needed to be at the last rest stop. Then came the hills.

The differences here were: the rain was keeping me cool so I wasn't dehydrating (the energy gels helped too), I knew my pace, and I had energy in my legs. Over the last 20 km I floored it (well, comparatively for me), whereas in previous years I just fell apart. The whole second half of the race was personal records.

The result? I ended up crossing at 4h 44m. That's 17 minutes faster than a younger version of me.

Today, my knees, wrists and other parts of my body all hurt and I skipped the Steed ride. But other than that I'm feeling not too bad.

Also, I signed up for the Fondo next year. I'm going to get below 4hr 30min next year.

Categorieën: Mozilla-nl planet

QMO: Firefox Developer Edition 56 Beta 12, September 15th

Mozilla planet - vr, 08/09/2017 - 17:43

Hello Mozillians!

We are happy to let you know that on Friday, September 15th, we are organizing the Firefox Developer Edition 56 Beta 12 Testday. We’ll be focusing our testing on the following new features: Preferences Search, CSS Grid Inspector Layout View, and Form Autofill.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Meta 2 AR Headset with Firefox

Mozilla planet - vr, 08/09/2017 - 17:00

One of the biggest challenges in developing immersive WebVR experiences today is that immersion takes you away from your developer tools. With Meta’s new augmented reality headset, you can work on and experience WebVR content today without ever taking a headset on or off, or connecting developer tools to a remote device. Our friends at Meta have just released their Meta 2 developer kit and it works right out of the box with the latest 64-bit Firefox for Windows.

The Meta 2 is a tethered augmented reality headset with six degrees of freedom (6DOF). Unlike existing 3D mobile experiences like Google Cardboard, the Meta 2 can track both your orientation (three degrees of freedom) and your position (another three degrees). This means that not only can you look at 3D content, you can also move towards and around it. (3+3 = 6DOF).

In the video above, talented Mozilla engineer Kip Gilbert is editing the NYC Snowglobe demo with the A-Frame inspector on his desktop. After he edits the project, he just lifts his head up to see the rendered 3D scene in the air in front of him.  Haven’t tried A-Frame yet? It’s the easiest way for web developers to build interactive 3D apps on the web. Best of all, Kip didn’t have to rewrite the snowglobe demo to support AR. It just works! Meta’s transparent visor combined with Firefox enables this kind of seamless 3D development.

The Meta 2 is stereoscopic and also has a 90-degree field of view, creating a more immersive experience on par with a traditional VR headset. However, because of the see-through visor, you are not isolated from the real world. The Meta 2 attaches to your existing desktop or laptop computer, letting you work at your desk without obstructing your view, then just look up to see virtual windows and objects floating around you.

In this next video, Kip is browsing a Sketchfab gallery. When he sees a model he likes he can simply look up to see the model live in his office. Thanks to the translucent visor optics, anything colored black in the original 3D scene automatically becomes transparent in the Meta 2 headset.

Meta 2 is designed for engineers and other professionals who need to both work at a computer and interact with high performance visualizations like building schematics or a detailed 3D model of a new airplane. Because the Meta 2 is tethered it can use the powerful GPU in your desktop or laptop computer to render high definition 3D content.

Currently, the Meta team has released Steam VR support and is working to add support for hands as controllers. We will be working with the Meta engineers to transform their native hand gestures into JavaScript events that you can interact with in code. This will let you build fully interactive high performance 3D apps right from the comfort of your desktop browser. We are also using this platform to help us develop and test proposed extensions for AR devices to the existing WebVR specification.

You can get your own Meta 2 developer kit and headset on the Meta website. WebVR is supported in the latest release version of Firefox for Windows, with other platforms coming soon.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Last chance to migrate your legacy user data

Mozilla planet - vr, 08/09/2017 - 13:33

If you are working on transitioning your add-on to use the WebExtensions API, you have until about mid-October (a month before Firefox 57 lands, to allow time for testing and migrating) to port your legacy user data using an Embedded WebExtension.

This is an important step in giving your users a smooth transition because they can retain their custom settings and preferences when they update to your WebExtensions version. After Firefox 57 reaches the release channel on November 13, you will no longer be able to port your legacy data.

If you release your WebExtensions version after Firefox 57 ships, your add-on will be enabled again for your users, and they will still keep their settings, but only if you ported the data beforehand: WebExtensions APIs cannot read legacy user settings, and legacy add-ons are disabled in Firefox 57. In other words, even if your WebExtensions version won’t be ready until after Firefox 57, you should still publish an Embedded WebExtension before Firefox 57 in order to retain user data.

When updating to your new version, we encourage you to adopt these best practices to ensure a smooth transition for your users.
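One common pattern for porting settings with an Embedded WebExtension is for the WebExtension side to request the legacy preferences over the messaging channel and persist them with browser.storage.local. The sketch below assumes a hypothetical message type ('get-legacy-settings') and hypothetical preference names; treat it as an outline under those assumptions, not a drop-in implementation.

```javascript
// Hedged sketch of the WebExtension half of an Embedded WebExtension.
// The message type and the legacy preference names are hypothetical;
// adapt them to your add-on.

// Map legacy preference names onto the new storage schema,
// falling back to sensible defaults when a preference is missing.
function normalizeLegacySettings(legacy) {
  return {
    enabled: legacy.extensions_myaddon_enabled !== false,
    color: legacy.extensions_myaddon_color || 'blue',
  };
}

// In the browser, ask the legacy side for its settings and store them.
// (Guarded so the sketch can also run outside an extension context.)
if (typeof browser !== 'undefined') {
  browser.runtime.sendMessage({ type: 'get-legacy-settings' })
    .then((legacy) => browser.storage.local.set(normalizeLegacySettings(legacy)));
}
```

On the legacy side, the bootstrap code would listen for this message via the API object returned from the embedded extension's startup() and reply with the stored preferences.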

The post Last chance to migrate your legacy user data appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Mozilla Releases Version 2.5 of Root Store Policy

Mozilla planet - vr, 08/09/2017 - 00:07

Recently, Mozilla released version 2.5 of our Root Store Policy, which continues our efforts to improve standards and reinforce public trust in the security of the Web. We are grateful to all those in the security and Certificate Authority (CA) communities who contributed constructively to the discussions surrounding the new provisions.

The changes of greatest note in version 2.5 of our Root Store Policy are as follows:

  • CAs are required to follow industry best practice for securing their networks, for example by conforming to the CA/Browser Forum’s Network Security Guidelines or a successor document.
  • CAs are required to use only those methods of domain ownership validation which are specifically documented in the CA/Browser Forum’s Baseline Requirements version 1.4.1.
  • Additional requirements were added for intermediate certificates that are used to sign certificates for S/MIME. In particular, such intermediate certificates must be name constrained in order to be considered technically-constrained and exempt from being audited and disclosed on the Common CA Database.
  • Clarified that point-in-time audit statements do not replace the required period-of-time assessments. Mozilla continues to require full-surveillance period-of-time audits that must be conducted annually, and successive audit periods must be contiguous.
  • Clarified the information that must be provided in each audit statement, including the distinguished name and SHA-256 fingerprint for each root and intermediate certificate in scope of the audit.
  • CAs are required to follow and be aware of discussions in the mozilla.dev.security.policy forum, where Mozilla’s root program is coordinated, although they are not required to participate.
  • CAs are required at all times to operate in accordance with the applicable Certificate Policy (CP) and Certificate Practice Statement (CPS) documents, which must be reviewed and updated at least once every year.
  • Our policy on root certificates being transferred from one organization or location to another has been updated and included in the main policy. Trust is not transferable; Mozilla will not automatically trust the purchaser of a root certificate to the level it trusted the previous owner.

The differences between versions 2.5 and 2.4.1 may be viewed on Github. (Version 2.4.1 contained exactly the same normative requirements as version 2.4 but was completely reorganized.)

As always, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Mozilla Security Team

The post Mozilla Releases Version 2.5 of Root Store Policy appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Ehsan Akhgari: Quantum Flow Engineering Newsletter #23

Mozilla planet - do, 07/09/2017 - 23:28

As was announced earlier today, Firefox 57 will be merged to the Beta channel on September 21, which is two weeks from today.  That wraps up the long development cycle that has gone on for about a year now.  We had a lot of ambitious plans when we started this effort, and a significant part of what we set out to deliver has either already shipped or landed on Nightly.  It is now a good time to focus on making sure that what we have which isn’t shipped yet is as high quality as possible, by ensuring the crash rates are low, regressions are triaged and fixed in time, and remaining rough edges are smoothed out before the release.  There are still a lot of ongoing projects in flight, and of course many open Quantum Flow bugs that have already been triaged but aren’t fixed yet.  I’ll write more about what we plan to do about those later.

Let’s now have a quick look at where we are in the battle against the synchronous IPC performance bottlenecks.  The TL;DR is: things are looking great, and we have solved most of this issue by now!  This work has happened in 50+ bugs over the course of the past 8 months.  The current telemetry data shows a very different picture of the synchronous IPC messages between our processes compared to how things looked back when we started.  These graphs show where things are now on the C++ side and on the JS side.  For comparison, you can see the latest graphs I posted about four weeks ago.

Sync IPC Analysis (2017-09-07)JS Sync IPC Analysis (2017-09-07)

Things are looking a lot different now compared to a month ago.  On the C++ side, the highest item on the list now is PAPZCTreeManager::Msg_ReceiveMouseInputEvent, which is an IPC message with a mean time of 0.6ms, so not all that bad.  The reason this appears as #1 on the list is that it occurs a lot.  This is followed by PBrowser::Msg_SyncMessage and PBrowser::Msg_RpcMessage, which are the C++ versions of JS initiated synchronous IPCs, followed by PDocAccessible::Msg_SyncTextChangeEvent which is a super rare IPC message.  After that we have PContent::Msg_ClassifyLocal, which will probably be helped by a fix landed two days ago, followed by PCompositorBridge::Msg_FlushRendering (with a mean time of 2.2ms), PAPZCTreeManager::Msg_ReceiveScrollWheelInputEvent (with a mean time of 0.6ms), PAPZCTreeManager::Msg_ReceiveKeyboardInputEvent (with a mean time of 1.3ms) and PAPZCTreeManager::Msg_ProcessUnhandledEvent (with a mean time of 2.8ms).

On the JavaScript side, the messages in the top ten list that are coming from Firefox are contextmenu, Findbar:Keypress, RemoteLogins:findRecipes (which was recently almost completely fixed), Addons:Event:Run (which is a shim for legacy extensions which we will remove later but has no impact on our release population as of Firefox 57) and WebRequest:ShouldLoad (which was recently fixed).

As you’ll note, there is still the long tail of these messages to go through and keep on top of, and we need to keep watching our telemetry data to make sure that we catch other existing synchronous IPC messages if they turn into performance problems.  But I think at this point we can safely call the large umbrella effort under Quantum Flow to address this aspect of the performance problems we’ve had in Firefox done!  This couldn’t have been done in such a short amount of time without the help of so many people in digging through each one of these bugs, analyzing them, figuring out how to rework the code to avoid the need for the synchronous messaging between our processes, helping with reviews, etc.  I’d like to thank everyone who helped us get to this point.

In other exciting performance news, Stylo is now our default CSS engine and is riding the trains.  It’s hard to capture the impact of this project in terms of Talos improvements only, but we had some nonetheless!  Hopefully all the remaining issues will be addressed in time, to make Stylo part of the Firefox 57 release.  A big congrats to the Stylo team for hitting this milestone.

With this, I’d like to take a moment to thank the hard work of everyone who helped make Firefox faster during the past week.  I hope I’m not forgetting any names.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Tell your users what to expect in your WebExtensions version

Mozilla planet - do, 07/09/2017 - 23:15

The migration to WebExtensions APIs is picking up steam, with thousands of compatible add-ons now available on addons.mozilla.org (AMO). To ensure a good experience for the growing number of users whose legacy add-ons have been updated to WebExtensions versions, we’re encouraging developers to adopt the following best practices.

(If your new version has the same features and settings as your legacy version, your users should get a seamless transition once you update your listing, and you can safely ignore the rest of this post.)

If your new version has different features, is missing legacy features, or requires additional steps to recover custom settings, please do one or both of the following.

Update your AMO listing description

If your new version did not migrate with all of its legacy features intact, or has different features, please let your users know in the “About this Add-on” section of your listing.

If your add-on is losing some of its legacy features, let your users know if it’s because they aren’t possible with the WebExtensions API, or if you are waiting on bug fixes or new APIs to land before you can provide them. Include links to those bugs, and feel free to send people to the forum to ask about the status of bug fixes and new APIs.

Retaining your users’ settings after upgrade makes for a much better experience, and there’s still time to do it using Embedded WebExtensions. But if this is not possible for you and there is a way to recover them after upgrade, please include instructions on how to do that, and refer to them in the Version notes. Otherwise, let your users know which settings and preferences cannot be recovered.

Add an announcement with your update

If your new version is vastly different from your legacy version, consider showing a new tab to your users when they first get the update. It can be the same information you provide in your listing, but it will be more noticeable if your users don’t have to go to your listing page to see it. Be sure to show it only on the first update so it doesn’t annoy your users.

To do this, you can use the runtime.onInstalled API which can tell you when an update or install occurs:

function update(details) {
  if (details.reason === 'install' || details.reason === 'update') {
    browser.tabs.create({url: 'update-notes.html'});
  }
}

browser.runtime.onInstalled.addListener(update);

This will open the page update-notes.html in the extension when the install or update occurs.

For greater control, the runtime.onInstalled event also lets you know when the user updated and what their previous version was so you can tailor your release notes.
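To sketch what that tailoring might look like: the helper below decides which notes page (if any) to open, based on details.reason and details.previousVersion. The major-version comparison, the hard-coded current version, and the page name are illustrative assumptions, not part of the API.

```javascript
// Hypothetical helper: decide which release-notes page (if any) to
// open, given the `details` object passed to runtime.onInstalled.
function pickReleaseNotesPage(details) {
  if (details.reason !== 'update') {
    return null; // fresh installs get no notes in this sketch
  }
  // details.previousVersion lets you tailor notes, e.g. major jumps only.
  const prevMajor = parseInt(details.previousVersion.split('.')[0], 10);
  const currMajor = 2; // assume the new release is 2.x for this sketch
  return prevMajor < currMajor ? 'update-notes.html' : null;
}

// Wire it up to the real event (guarded so the sketch runs standalone).
if (typeof browser !== 'undefined') {
  browser.runtime.onInstalled.addListener((details) => {
    const page = pickReleaseNotesPage(details);
    if (page) {
      browser.tabs.create({ url: page });
    }
  });
}
```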

Thank you

A big thanks to all the developers who have put in the effort to migrate to the WebExtensions API. We are here to support you, so please reach out if you need help.

The post Tell your users what to expect in your WebExtensions version appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Georg Fritzsche: Recording new Telemetry from add-ons

Mozilla planet - do, 07/09/2017 - 22:08

One of the successes for Firefox Telemetry has been the introduction of standardized data types: histograms and scalars.

They are well defined and allow teams to autonomously add new instrumentation. As they are listed in machine-readable files, our data pipeline can support them automatically and new probes just start showing up in different tools. A definition like this enables views like this.

Measurements Dashboard for max_concurrent_tab_count.

This works great when shipping probes in the Firefox core code, going through our normal release and testing channels, which takes a few weeks.

Going faster

However, often we want to ship code faster using add-ons: this may mean running experiments through Test Pilot and SHIELD or deploying Firefox features through system add-ons.

When adding new instrumentation in add-ons, there are two options:

  • Instrumenting the code in Firefox core code, then waiting a few weeks until it is in release.
  • Implementing a custom ping and submitting it through Telemetry, requiring additional client and pipeline work.

Neither are satisfactory; there is significant manual effort for running simple experiments and adding features.

Filling the gap

This is one of the main pain-points coming up for adding new data collection, so over the last months we were planning how to solve this.

As the scope of an end-to-end solution is rather large, we are currently focused on getting the support built into Firefox first. This can enable some use-cases right away. We can then later add better and automated integration in our data pipeline and tooling.

The basic idea is to use the existing Telemetry APIs and seamlessly allow them to record data from new probes as well. To enable this, we will extend the API with registration of new probes from add-ons at runtime.

The recorded data will be submitted with the main ping, but in a separate bucket to tell them apart.

What we have now

We now support add-on registration of events from Firefox 56 on. We expect event recording to mostly be used with experiments, so it made sense to start here.

With this new addition, events can be registered at runtime by Mozilla add-ons instead of using a registry file like Events.yaml.

When starting, add-ons call nsITelemetry.registerEvents() with information on the events they want to record:

Services.telemetry.registerEvents("myAddon.ui", {
  "click": {
    methods: ["click"],
    objects: ["redButton", "blueButton"],
  }
});

Now, events can be recorded using the normal Telemetry API:

Services.telemetry.recordEvent("myAddon.ui", "click", "redButton");

This event will be submitted with the next main ping in the "dynamic" process section. We can inspect them through about:telemetry.

On the pipeline side, the events are available in the events table in Redash. Custom analysis can access them in the main pings under payload/processes/dynamic/events.
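As a rough sketch of what that access looks like, here is a hand-built fragment of a main ping and a small helper that pulls out one category's events. The ping object is illustrative and heavily trimmed, though the [timestamp, category, method, object, value, extra] event shape matches what Telemetry records.

```javascript
// Hand-built, heavily trimmed main ping for illustration only.
const mainPing = {
  payload: {
    processes: {
      dynamic: {
        events: [
          // [msSinceProcessStart, category, method, object, value, extra]
          [12345, 'myAddon.ui', 'click', 'redButton', null, null],
        ],
      },
    },
  },
};

// Pull out just the events recorded under one category.
function eventsForCategory(ping, category) {
  const events =
    (((ping.payload || {}).processes || {}).dynamic || {}).events || [];
  return events.filter((event) => event[1] === category);
}
```

A Redash query would do the equivalent against the events table; this just shows the shape of the data as it arrives in the ping.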

The larger plan

As mentioned, this is the first step of a larger project that consists of multiple high-level pieces. Not all of them are feasible in the short-term, so we intend to work towards them iteratively.

The main driving goals here are:

  1. Make it easy to submit new Telemetry probes from Mozilla add-ons.
  2. New Telemetry probes from add-ons are easily accessible, with minimal manual work.
  3. Uphold our standards for data quality and data review.
  4. Add-on probes should be discoverable from one central place.

This larger project then breaks down into roughly these main pieces:

Phase 1: Client work.

This is currently happening in Q3 & Q4 2017. We are focusing on adding & extending Firefox Telemetry APIs to register & record new probes.

Events are supported in Firefox 56, scalars will follow in 57 or 58, then histograms on a later train. The add-on probe data is sent out with the main ping.

Phase 2: Add-on tooling work.

To enable pipeline automation and data documentation, we want to define a variant of the standard registry formats (like Scalars.yaml). By providing utilities we can make it easier for add-on authors to integrate them.

Phase 3: Pipeline work.

We want to pull the probe registry information from add-ons together in one place, then make it available publicly. This will enable automation of data jobs, data discovery and other use-cases. From there we can work on integrating this data into our main datasets and tools.

The later phases are not set in stone yet, so please reach out if you see gaps or overlap with other projects.

Questions?

As always, if you want to reach out or have questions:

Recording new Telemetry from add-ons was originally published in Georg Fritzsche on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Mozilla Localization (L10N): L10n Report: September Edition

Mozilla planet - do, 07/09/2017 - 21:30

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

In the past weeks we’ve added several languages to Pontoon, in particular from the Mozilla Nativo project:

  • Mixteco Yucuhiti (meh)
  • Mixtepec Mixtec (mix)
  • Quechua Chanka (quy)
  • Quichua (qvi)

We’ve also started localizing Firefox in Interlingua (ia), while Shuar (jiv) will be added soon for Firefox for Android.

New content and projects What’s new or coming up in Firefox desktop

A few deadlines are approaching:

  • September 13 is the last day to make changes to Beta projects.
  • September 20 is merge day, and all strings move from Central to Beta. There are currently a few discussions about moving this date, but nothing has been decided yet. We’ll communicate through all channels if anything changes.

Photon in Nightly is almost ready for Firefox 57; only a few small changes need to land for the onboarding experience. Please make sure to test your localization on clean profiles, and ask your friends to test and report bugs like mistranslations, strings not displayed completely in the interface, etc.

What’s new or coming up in Test Pilot

Firefox Send holds the record for the highest number of localizations in the Test Pilot family (tied with SnoozeTabs), with 38 languages completely translated.

For those interested in more technical details, Pontoon is now committing localizations for the Test Pilot project in a l10n branch. This also means that the DEV server URL has changed. Note that the link is also available in the project resources in Pontoon.

What’s new or coming up in mobile
  • Have you noticed that Photon is slowly but surely arriving on Firefox for Android Nightly version? The app is getting a visual refresh and things are looking bright and shiny! There’s a new onboarding experience, icons are different, the awesome bar has never been this awesome, tabs have a new look… and the whole experience is much smoother already! Come check it out.
  • Zapoteco and Belarusian are going to make it to release with the upcoming Firefox for Android 56 release.
What’s new or coming up in web projects
  • Mozilla.org:
    • This past month, we continued the trend of replacing old pages with new ones, with a new layout and color scheme. Several new pages are in the works for September; some are customized for certain markets, and others will have two versions for market testing.
    • Thanks to all the communities that have completed the new Firefox pages released for localization in late June. The pages will be moved to the new location at Firefox/…, replacing the obsolete pages.
    • Germany is the focus market, with a few more customized pages than other locales.
    • New pages on mobile topics are expected in September and early October. Check the web dashboard and email communications for pending projects.
  • Snippets: We will have a series of snippet campaigns starting in early September, targeting users of many Mozilla products.
  • MOSS: The landing page was made available in Hindi in time for the partnership announcement and press release on August 31.
  • Legal: The Firefox Privacy Notice will be rewritten. Once localization is complete in a few locales, we will invite communities to review it.
What’s new or coming up in Foundation projects
  • Our call tool at changecopyright.org is live! Many thanks to everyone who participated in the localization of this campaign, let’s call some MEPs!
  • The IoT survey has been published, and adding new languages plus snippets made a huge difference. You can learn more in the accomplishments section below.
What’s new or coming up in Pontoon
  • Check out the brand new Pontoon Tools Firefox extension, which you can install from AMO! It brings notifications from Pontoon directly to your Firefox, but that’s just the beginning. It also shows you your team’s statistics and allows you to search for strings straight from Mozilla.org and SUMO. A huge shout out to its creator Michal Stanke, a long time Pontoon user and contributor!
  • We changed the review process by introducing the ability to reject suggestions instead of deleting them. Each suggestion can now be approved, unreviewed or rejected. This finally makes it easy to list all suggestions needing review, using the newly introduced Unreviewed Suggestions filter. To make the filter usable out of the box, existing suggestions were automatically rejected if an approved translation was available that had been approved after the suggestion was submitted. The final step in making unreviewed suggestions truly discoverable is to show them in dashboards. Thanks to Adrian, who only joined the Pontoon team in July and has already managed to contribute this important patch!
  • The Pontoon homepage now redirects you to the team page you contribute to most. You can also pick a different team page or the default Pontoon homepage in your user settings. Thanks to Jarosław for the patch!
  • Editable team info is here! If you have manager permission, you can now edit the content of the Info tab on your team page:

  • Most teams use this place to give some basic information to newcomers. Thanks to Axel, who started the effort of implementing this feature, and Emin, who took over!
  • The notification popup (opened by clicking the bell icon) is no longer limited to unread notifications. It now displays the latest 7 notifications, both read and unread. If there are more than 7 unread notifications, all of them are displayed.
  • Sync with version control systems is now 10 times faster and uses 12 times less computing power. Average sync time dropped from around 20 minutes to less than 2.
  • For teams that localize all projects in Pontoon, we no longer pull Machinery suggestions from Transvision, because they are already included in Pontoon’s internal translation memory. This has a positive impact on Machinery performance and overall string navigation performance. Transvision is still enabled for the following locales: da, de, es-ES, it, ja, nl, pl.
  • Thanks to Michal Vašíček, the Pontoon logo now looks much better on HiDPI displays.
  • Background issues have been fixed on in-context pages with a transparent background like the Thimble feature page.
  • What’s coming up next? We’re working on making searching and filtering of strings faster, which will also allow for loading, searching and filtering of strings across projects. We’re also improving the experience of localizing FTL files, adding support for using Microsoft Terminology during the translation process and adding API support.
Newly published localizer facing documentation
  • Community Marketing Kit: showcases ways to leverage existing marketing content, use approved graphic assets, and utilize social channels to market Mozilla products in your language.
  • AMO: details the product development cycle that impacts localization. The AMO frontend will be revamped in Q4, and the documentation will be updated accordingly.
  • Snippets: illustrates the process for creating locale-relevant snippets, or launching snippets in languages not on the default snippet locale list.
  • SUMO: covers the process of localizing the product, which is different from localizing the articles.
Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

 

Accomplishments

We would like to share some good results

Responses by country (not locale), for the 32,000 responses to the privacy survey run by the Advocacy team back in March, localized in French and German:

It was good, but now let’s compare it with the responses by country for our IoT survey How connected are you?, which received over 190,000 responses! We can see that the survey performed better in France, Germany and Italy than in the US. Spanish is underrepresented because it’s spread across several countries, but we expect participation to be similar. These major differences are explained by the fact that we added support for three more languages and promoted the survey with snippets in Firefox. This will give us far more diverse results, so thanks for your hard work everyone! It also helped get new people subscribed to our newsletter, which is really important for our advocacy activities, to fuel a movement for a healthier Internet.
The survey results might also be reused by scientists and included in the next edition of the Internet Health Report. How cool is that? Stay tuned for the results.

 

Friends of the Lion

Image by Elio Qoshi

  • The Kabyle (kab) community organized Kab Mozilla Days on August 18-19 in Algeria, discussing localization, the Mozilla mission, open source, and the promotion of indigenous languages.
  • The Triqui (trs) community has made significant progress since the Asunción workshop; Triqui is now officially supported on mozilla.org. Congratulations!!
  • Wolof (wo): Congrats to Ibra and Ibra (!), who have been keeping up with Firefox for Android work. They have now been added to multi-locale builds, which means they reach release at the same time as Firefox 57! Congrats guys!
  • Eduardo (eo): thanks for catching the mistake in a statement that appeared on mozilla.org. The paragraph has since been corrected, published and localized.
  • Manuel (azz) from Spain and Misael (trs) from Mexico met for the first time at the l10n workshop in Asunción, Paraguay. They bonded instantly! Misael will introduce his friends who are native speakers of Highland Puebla Nahuatl, the language Manuel is working on all by himself. He can’t wait to be connected with these professionals, to collaborate, and promote the language through Mozilla products.

 

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

 

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

 

Categorieën: Mozilla-nl planet

David Teller: Binary AST - Motivations and Design Decisions - Part 1

Mozilla planet - do, 07/09/2017 - 21:16
By Kannan Vijayan, Mike Hoye “The key to making programs fast is to make them do practically nothing.” - Mike Haertel, creator of GNU Grep. Binary AST - “Binary Abstract Syntax Tree” - is Mozilla’s proposal for specifying a binary-encoded syntax for JS with the intent of allowing browsers and other JS-executing environments to parse and load code as much as 80% faster than standard minified JS.
Categorieën: Mozilla-nl planet

Mark Côté: Decisions, decisions, decisions: Driving change at Mozilla

Mozilla planet - do, 07/09/2017 - 20:15

As the manager responsible for driving the decision and process behind the move to Phabricator at Mozilla, I’ve been asked to write about my recent experiences, including how this decision was implemented, what worked well, and what I might have done differently. I also have a few thoughts about decision making both generally and at Mozilla specifically.

Please note that these thoughts reflect only my personal opinions. They are not a pronouncement of how decision making is or will be done at Mozilla, although I hope that my account and ideas will be useful as we continue to define and shape processes, many of which are still raw years after we became an organization with more than a thousand employees, not to mention the vast number of volunteers.

Mozilla has used Bugzilla as both an issue tracker and a code-review tool since its inception almost twenty years ago. Bugzilla was arguably the first freely available web-powered issue tracker, but since then, many new apps in that space have appeared, both free/open-source and proprietary. A few years ago, Mozilla experimented with a new code-review solution, named (boringly) “MozReview”, which was built around Review Board, a third-party application. However, Mozilla never fully adopted MozReview, leading to code review being split between two tools, a confusing situation for seasoned and new contributors alike.

There were many reasons that MozReview didn’t completely catch on, some of which I’ve mentioned in previous blog and newsgroup posts. One major factor was the absence of a concrete, well-communicated, and, dare I say, enforced decision. The project was started by a small number of people, without a clearly defined scope, no consultations, no real dedicated resources, and no backing from upper management and leadership. In short, it was a recipe for failure, particularly considering how difficult change is even in a perfect world.

Having recognized this failure last year, and with the urging of some people at the director level and above, my team and I embarked on a process to replace both MozReview and the code-review functionality in Bugzilla with a single tool and process. Our scope was made clear: we wanted the tool that offered the best mechanics for code-review at Mozilla specifically. Other bits of automation, such as “push-to-review” support and automatic landings, while providing many benefits, were to be considered separately. This division of concerns helped us focus our efforts and make decisions clearer.

Our first step in the process was to hold a consultation. We deliberately involved only a small number of senior engineers and engineering directors. Past proposals for change have faltered on wide public consultation: by their very nature, you will get virtually every opinion imaginable on how a tool or process should be implemented, which often leads to arguments that are rarely settled, and even when “won” are still dominated by the loudest voices—indeed, the quieter voices rarely even participate for fear of being shouted down. Whereas some more proactive moderation may help, using a representative sample of engineers and managers results in a more civil, focussed, and productive discussion.

I would, however, change one aspect of this process: the people involved in the consultation should be more clearly defined, and not an ad-hoc group. Ideally we would have various advisory groups that would provide input on engineering processes. Without such people clearly identified, there will always be lingering questions as to the authority of the decision makers. There is, however, still much value in also having a public consultation, which I’ll get to shortly.

There is another aspect of this consultation process which was not clearly considered early on: what is the honest range of solutions we are considering? There has been a movement across Mozilla, which I fully support, to maximize the impact of our work. For my team, and many others, this means a careful tradeoff of custom, in-house development and third-party applications. We can use entirely custom solutions, we can integrate a few external apps with custom infrastructure, or we can use a full third-party suite. Due to the size and complexity of Firefox engineering, the latter is effectively impossible (also the topic for a series of posts). Due to the size of engineering-tools groups at Mozilla, the first is often ineffective.

Thus, we really already knew that code-review was a very likely candidate for a third-party solution, integrated into our existing processes and tools. Some thorough research into existing solutions would have further tightened the project’s scope, especially given Mozilla’s particular requirements, such as Mercurial support, which are in part due to a combination of scale and history. In the end, there are few realistic solutions. One is Review Board, which we used in MozReview. Admittedly we introduced confusion into the app by tying it too closely to some process-automation concepts, but it also had some design choices that were too much of a departure from traditional Firefox engineering processes.

The other obvious choice was Phabricator. We had considered it some years ago, in fact as part of the MozReview project. MozReview was developed as a monolithic solution with a review tool at its core, so the fact that Phabricator is written in PHP, a language without much presence at Mozilla today, was seen as a pretty big problem. Our new approach, though, in which the code-review tool is seen as just one component of a pipeline, means that we limit customizations largely to integration with the rest of the system. Thus the choice of technology is much less important.

The fact that Phabricator was virtually a default choice should have been more clearly communicated both during the consultation process and in the early announcements. Regardless, we believe it is in fact a very solid choice, and that our efforts are truly best spent solving the problems unique to Mozilla, of which code review is not.

To sum up, small-scale consultations are more effective than open brainstorming, but it’s important to really pay attention to scope and constraints to make the process as effective and empowering as possible.

Lest the above seem otherwise, open consultation does provide an important part of the process, not in conceiving the initial solution but in vetting it. The decision makers cannot be “the community”, at least, not without a very clear process. It certainly can’t be the result of a discussion on a newsgroup. More on this later.

Identifying the decision maker is a problem that Mozilla has been wrestling with for years. Mitchell has previously pointed out that we have a dual system of authority: the module system and a management hierarchy. Decisions around tooling are even less clear, given that the relevant modules are either nonexistent or sweepingly defined. Thus in the absence of other options, it seemed that this should be a decision made by upper management, ultimately the Senior Director of Engineering Operations, Laura Thomson. My role was to define the scope of the change and drive it forward.

Of course since this decision affects every developer working on Firefox, we needed the support of Firefox engineering management. This has been another issue at Mozilla; the directorship was often concerned with the technical aspects of the Firefox product, but there was little input from them on the direction of the many supporting areas, including build, version control, and tooling. Happily I found out that this problem has been rectified. The current directors were more than happy to engage with Laura and me, backing our decision as well as providing some insights into how we could most effectively communicate it.

One suggestion they had was to set up a small hosted test instance and give accounts to a handful of senior engineers. The purpose of this was to both give them a heads up before the general announcement and to determine if there were any major problems with the tool that we might have missed. We got a bit of feedback, but nothing we weren’t already generally aware of.

At this point we were ready for our announcement. It’s worth pointing out again that this decision had effectively already been made, barring any major issues. That might seem disingenuous to some, but it’s worth reiterating two major points: (a) a decision like this, really, any nontrivial decision, can’t be effectively made by a large group of people, and (b) we did have to be honestly open to the idea that we might have missed some big ramification of this decision and be prepared to rethink parts, or maybe even all, of the plan.

This last piece is worth a bit more discussion. Our preparation for the general announcement included several things: a clear understanding of why we believe this change to be necessary and desirable, a list of concerns we anticipated but did not believe were blockers, and a list of areas that we were less clear on that could use some more input. By sorting out our thoughts in this way, we could stay on message. We were able to address the anticipated concerns but not get drawn into a long discussion. Again this can seem dismissive, but if nothing new is brought into the discussion, then there is no benefit to debating it. It is of course important to show that we understand such concerns, but it is equally important to demonstrate that we have considered them and do not see them as critical problems. However, we must also admit when we do not yet have a concrete answer to a problem, along with why we don’t think it needs an answer at this point—for example, how we will archive past reviews performed in MozReview. We were open to input on these issues, but also did not want to get sidetracked at this time.

All of this was greatly aided by having some members of Firefox and Mozilla leadership provide input into the exact wording of the announcement. I was also lucky to have lots of great input from Mardi Douglass, this area (internal communications) being her specialty. Although no amount of wordsmithing will ensure a smooth process, the end result was a much clearer explanation of the problem and the reasons behind our specific solution.

Indeed, there were some negative reactions to this announcement, although I have to admit that they were fewer than I had feared there would be. We endeavoured to keep the discussion focussed, employing the above approach. There were a few objections we hadn’t fully considered, and we publicly admitted so and tweaked our plans accordingly. None of the issues raised were deemed to be show-stoppers.

There were also a very small number of messages that crossed a line of civility. This line is difficult to determine, although we have often been too lenient in the past, alienating employees and volunteers alike. We drew the line in this discussion at posts that were disrespectful, in particular those that brought little of value while questioning our motives, abilities, and/or intentions. Mozilla has been getting better at policing discussions for toxic behaviour, and I was glad to see a couple people, notably Mike Hoye, step in when things took a turn for the worse.

There is also a point in which a conversation can start to go in circles, and in the discussion around Phabricator (in fact in response to a progress update a few months after the initial announcement) this ended up being around the authority of the decision makers, that is, Laura and myself. At this point I requested that a Firefox engineering director, in this case Joe Hildebrand, get involved and explain his perspective and voice his support for the project. I wish I didn’t have to, but I did feel it was necessary to establish a certain amount of credibility by showing that Firefox leadership was both involved with and behind this decision.

Although disheartening, it is also not surprising that the issue of authority came up, since as I mentioned above, decision making has been a very nebulous topic at Mozilla. There is a tendency to invoke terms like “open” and “transparent” without in any way defining them, evoking an expectation that everyone shares an understanding of how we ought to make decisions, or even how we used to make decisions in some long-ago time in Mozilla’s history. I strongly believe we need to lay out a decision-making framework that values openness and transparency but also sets clear expectations of how these concepts fit into the overall process. The most egregious argument along these lines that I’ve heard is that we are a “consensus-based organization”. Even if consensus were possible in a decision that affects literally hundreds, if not thousands, of people, we are demonstrably not consensus-driven by having both module and management systems. We do ourselves a disservice by invoking consensus when trying to drive change at Mozilla.

On a final note, I thought it was quite interesting that the topic of decision making, in the sense of product design, came up in the recent CNET article on Firefox 57. To quote Chris Beard, “If you try to make everyone happy, you’re not making anyone happy. Large organizations with hundreds of millions of users get defensive and try to keep everybody happy. Ultimately you end up with a mediocre product and experience.” I would in fact extend that to trying to make all Mozillians happy with our internal tools and processes. It’s a scary responsibility to drive innovative change at Mozilla, to see where we could have a greater impact and to know that there will be resistance, but if Mozilla can do it in its consumer products, we have no excuse for not also doing so internally.

Categorieën: Mozilla-nl planet

Mozilla Open Innovation Team: Being Open by Design

Mozilla planet - do, 07/09/2017 - 14:55
“We were born as a radically open, radically participatory organization, unbound to traditional corporate structure. We played a role in bringing the ‘open’ movement into mainstream consciousness.”

Mitchell Baker, Executive Chairwoman of Mozilla

“If external sources of innovation can reliably produce breakthrough and functional and novel ideas, a company has to find ways to bring those to market. They have to have programs that allow them to systematically work with those sources, invest in those programs.”

Karim Lakhani, Member of Mozilla’s Board of Directors

Mozilla’s origins are in the open source movement, and the concept of ‘working in the open’ has always been key to our identity. It’s embedded in our vision for the open Web, and in how we build products and operate as an organization. Mozilla relies upon open, collaborative practices — foremost open source co-development — to bring external knowledge and contributions into many of our products, technologies, and operations.

However, the landscape of open has changed dramatically in the past years. There are over a thousand open source software projects in the world, and even open source hardware is now a widespread phenomenon. Companies once considered unlikely to work with open source projects have opened up key properties, such as Microsoft opening .NET and Visual Studio to drive adoption and make them more competitive products. Companies with a longer history in open source continue to apply it strategically: Google has open-sourced enough of TensorFlow to help influence the future of AI development, while continuing to crowdsource a huge corpus of machine learning data through the use of its products. More importantly, beyond these practices, there are now numerous methods for crowdsourcing ideas and expertise, and a worldwide movement around open innovation.

All this means: there’s much out there to learn from — even (or especially) for a pioneer of the open.

Turning the Mental Model into a Strategic Lever

There are many conceptions of Open Innovation in the industry. Mozilla takes a broad definition: the blurring of an organisation’s boundaries to take advantage of the knowledge, diversity, perspectives and ideas that exist beyond its borders. This requires several related things:

  • Being willing to search for ideas outside the organisation: Identify channels to create opportunities and systematically engage with a wide range of external resources.
  • Being willing and capable of acting upon those ideas: Integrating these external resources and ideas into the organisation’s own capabilities.

Mozilla’s Open Innovation Team has been formed to help implement a broad set of open and collaborative practices in our products and technologies. The main guiding principle of the team’s efforts is to foster “openness by design”, rather than by default. The latter is more of a mental model — strong, but abstract, broad and absolute. Often enough, “openness by default” reflects an absence of strategic intent: without clarity on why you’re doing something, or what the intended outcomes are, your commitment to openness is likely to diminish over time. In comparison, “open by design” for us means developing an understanding of how our products and technologies deliver value within an ecosystem, and intentionally designing how we work with external collaborators and contributors, both at the individual and organizational level, for the greatest impact and mutual value.

As part of our ongoing work we partnered with the Copenhagen Institute for Interaction Design (CIID) for a research project looking at how other companies and industries are leveraging open practices. The project reviewed a range of collaborative methods, including but also beyond open source.

Open Practices — uhm, means?

We define open practices as the ways an organization programmatically works with external communities — from hobbyists to professionals — to share knowledge, intellectual property, work, or influence in order to shape a market toward a specific business goal. Although many of these methods are not new, technology has often made them particularly attractive or useful (e.g. crowdsourcing at scale). Some are made possible only through technology (e.g. user telemetry). Used thoughtfully, open practices can simultaneously build vibrant communities and provide competitive advantage.

Together with CIID we identified a wealth of companies and organizations from which we finally picked seven current innovators in “open” to learn from. We tried to avoid examples where community participation was mainly a marketing tactic. Instead we focused on those in which community collaboration was fundamental to the business model. Many of these organisations also share similarities to Mozilla as a mission-driven organisation.

In a series of blog posts we will share insights into how the different companies deliberately apply open practices across their product and technology development. We will also introduce a framework for open practices that we co-developed with CIID, structuring different methods of collaboration and interaction across organisational boundaries, which serves as a way to stimulate our thinking.

We hope that lessons learned from open and participatory practices in the technology sector are applicable across industries and that the framework and case studies will be useful to other organisations as they evaluate and implement open and participatory strategies.

If you’d like to learn more in the meantime, share thoughts or news about your projects, please reach out to the Mozilla Open Innovation team at openinnovation@mozilla.com.

Being Open by Design was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Robert O'Callahan: rr 5.0 Released

Mozilla planet - do, 07/09/2017 - 06:39

I've released rr 5.0. Kyle convinced me that trace stability and portability were worth a major version bump.

Release notes:

  • Introduction of Cap'n Proto to stabilize the trace format. Recordings created in this rr release should be replayable in any future rr release. This is a plan, not a promise, since we don't know what might happen in the future, but I'm hopeful.
  • New rr pack command makes recordings self-contained.
  • Packed recordings from one machine can be replayed on a different machine by trapping CPUID instructions when supported on the replay machine. We don't have much experience with this yet but so far so good.
  • Brotli compression for smaller traces and lower recording overhead.
  • Improvements to the rr command-line arguments to ease debugger/IDE integration. rr replay now accepts a -- argument; all following arguments are passed to the debugger verbatim. Also, the bare rr command is now smarter about choosing a default subcommand; if the following argument is a directory, the default subcommand is replay, otherwise it is record.
  • Performance improvements, especially for pathological cases with lots of switching to and from the rr supervisor process.
  • Syscall support expanded.
  • Many bugs fixed.
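The new command-line behaviour is easiest to see in a short session. This is an illustrative sketch only: the binary name, flags passed to the app, and trace directory are made-up examples, not output from a real run.

```shell
# "rr ./app" defaults to the record subcommand, because ./app is not a directory.
rr ./app --some-app-flag

# Passing a directory defaults to replay instead.
rr ~/.local/share/rr/app-0

# During replay, everything after "--" goes to the debugger (gdb) verbatim.
rr replay -- -ex 'break main' -ex 'continue'

# Make the latest recording self-contained, so it can be copied to
# another machine and replayed there (with CPUID trapping if supported).
rr pack
```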

Enjoy!

Categorieën: Mozilla-nl planet

Frédéric Wang: Review of Igalia's Web Platform activities (H1 2017)

Mozilla planet - do, 07/09/2017 - 00:00
Introduction

For many years Igalia has been committed to, and dedicated efforts to, the improvement of the Web Platform in all open-source Web Engines (Chromium, WebKit, Servo, Gecko) and JavaScript implementations (V8, SpiderMonkey, ChakraCore, JSC). We have been working on the implementation and standardization of some important technologies (CSS Grid/Flexbox, ECMAScript, WebRTC, WebVR, ARIA, MathML, etc.). This blog post contains a review of these activities performed during the first half (and a bit more) of 2017.

Projects

CSS

A few years ago Bloomberg and Igalia started a collaboration to implement a new layout model for the Web Platform. Bloomberg had complex layout requirements, and what the Web provided was not enough and caused performance issues. CSS Grid Layout seemed to be the right choice: a feature that would provide such complex designs with more flexibility than the methods available at the time.

We’ve been implementing CSS Grid Layout in Blink and WebKit, initially behind some flags as an experimental feature. This year, after some coordination effort to ensure interoperability (talking to the different parties involved like browser vendors, the CSS Working Group and the web authors community), it has been shipped by default in Chrome 58 and Safari 10.1. This is a huge step for the layout on the web, and modern websites will benefit from this new model and enjoy all the features provided by CSS Grid Layout spec.

Since CSS Grid Layout shares the same alignment properties as the CSS Flexible Box feature, a new spec has been defined to generalize alignment for all the layout models. We started implementing this new spec as part of our work on Grid, with Grid being the first layout model to support it.

Finally, we worked on other minor CSS features in Blink such as caret-color or :focus-within and also several interoperability issues related to Editing and Selection.

MathML

MathML is a W3C recommendation to represent mathematical formulae that has been included in many other standards such as ISO/IEC, HTML5, ebook and office formats. There are many tools available to handle it, including various assistive technologies as well as generators from the popular LaTeX typesetting system.

After the improvements we made to WebKit’s MathML implementation, we have been in regular contact with Google to see how we can implement MathML in Chromium. Early this year, we had several meetings with Google’s layout team to discuss this in further detail. We agreed that MathML is an important feature to consider for users and that the right approach would be to rely on the new LayoutNG model currently being implemented. We created a prototype of a small LayoutNG-based MathML implementation as a proof of concept and as a basis for future technical discussions. We will follow up on this after the end of Q3, once Chromium’s layout team has made more progress on LayoutNG.

Servo

Servo is Mozilla’s next-generation web content engine based on Rust, a language that guarantees memory safety. Servo relies on a Rust project called WebRender which replaces the typical rasterizer and compositor duo in the web browser stack. WebRender makes extensive use of GPU batching to achieve very exciting performance improvements in common web pages. Mozilla has decided to make WebRender part of the Quantum Render project.

We’ve had the opportunity to collaborate with Mozilla for a few years now, working on the graphics stack. Our recent work has focused on bringing full support for CSS stacking and clipping to WebRender, so that it will be available in both Servo and Gecko. This has involved creating a data structure in WebRender similar to what WebKit calls the “scroll tree”. The scroll tree divides the scene into independently scrolled elements, clipped elements, and the various transformation spaces defined by CSS transforms. The tree allows WebRender to handle page interaction independently of page layout, enabling maximum performance and responsiveness.
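
As a rough illustration (this is not WebRender’s actual API; the names and structure below are invented for exposition), a scroll tree can be sketched as nodes that each own a scroll offset and an optional clip, so that scrolling mutates only an offset and never touches layout:

```javascript
// Hypothetical sketch of a "scroll tree" node. Each node owns a scroll
// offset and an optional clip rect; children are positioned relative to it.
class ScrollNode {
  constructor({ clip = null, scrollOffset = { x: 0, y: 0 } } = {}) {
    this.clip = clip;               // { x, y, width, height } or null
    this.scrollOffset = scrollOffset;
    this.children = [];
  }
  addChild(node) { this.children.push(node); return node; }
  // Scrolling mutates only this node's offset; layout is untouched,
  // which is what keeps page interaction responsive.
  scrollBy(dx, dy) { this.scrollOffset.x += dx; this.scrollOffset.y += dy; }
  // Map a point from this node's space into its scrolled content space.
  toContentSpace(point) {
    return { x: point.x + this.scrollOffset.x, y: point.y + this.scrollOffset.y };
  }
}

const root = new ScrollNode();
const scroller = root.addChild(
  new ScrollNode({ clip: { x: 0, y: 0, width: 800, height: 600 } })
);
scroller.scrollBy(0, 120);
console.log(scroller.toContentSpace({ x: 10, y: 10 })); // { x: 10, y: 130 }
```

The design point is the separation of concerns: hit testing and scrolling walk this tree on their own, without re-running layout.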

WebRTC

WebRTC is a collection of communications protocols and APIs that enable real-time communication over peer-to-peer connections. Typical use cases include video conferencing, file transfer, chat, or desktop sharing. Igalia has been working on the WebRTC implementation in WebKit and this development is currently sponsored by Metrological.

This year we have continued the implementation effort in WebKit for the WebKitGTK and WebKit WPE ports, as well as the maintenance of two test servers for WebRTC: Ericsson’s p2p and Google’s apprtc. Finally, a lot of progress has been made on adding support for Jitsi using the existing OpenWebRTC backend.

Since OpenWebRTC is no longer actively developed, and libwebrtc is gaining traction in both Blink and the WebRTC implementation of WebKit for Apple software, we are taking the first steps to replace the original OpenWebRTC-based WebRTC implementation in WebKitGTK with a new one based on libwebrtc. Hopefully, this way we will share more code between platforms and provide more robust WebRTC support for end users. GStreamer integration in this new implementation is an issue we will have to study, as it is not built into libwebrtc. libwebrtc offers many services, but not every WebRTC implementation uses all of them. This seems to be the case for Apple’s WebRTC implementation, and it may become our case too if we need tighter integration with GStreamer or hardware decoding.

WebVR

WebVR is an API that provides support for virtual reality devices in web engines. Implementations and devices are under active development by browser vendors, and WebVR looks like it is going to be a big deal. Igalia has started to investigate this topic to see how we can join the effort. This year, we have been in discussions with Mozilla, Google and Apple about how we could help implement WebVR on Linux. We decided to start experimenting with an implementation within WebKitGTK. We announced our intention on the webkit-dev mailing list and got encouraging feedback from Apple and the WebKit community.

ARIA

ARIA defines a way to make web content and web applications more accessible to people with disabilities. Igalia strengthened its ongoing commitment to the W3C: Joanmarie Diggs joined Richard Schwerdtfeger as a co-chair of the W3C’s ARIA Working Group, and became editor of the Core Accessibility API Mappings, Digital Publishing Accessibility API Mappings, and Accessible Name and Description: Computation and API Mappings specifications. Her main focus over the past six months has been getting ARIA 1.1 transitioned to Proposed Recommendation through a combination of implementation and bug fixing in WebKit and Gecko, creation of automated testing tools to verify platform accessibility API exposure on GNU/Linux and macOS, and working with fellow Working Group members to ensure the platform mappings stated in the various “AAM” specs are complete and accurate. We will provide more information about these activities after ARIA 1.1 and the related AAM specs are further along on their respective REC tracks.

Web Platform Predictability for WebKit

The AMP Project has recently sponsored Igalia to improve WebKit’s implementation of the Web platform. We have worked on many issues, the main ones being:

  • Frame sandboxing: Implementing sandbox values to allow trusted third-party resources to open unsandboxed popups or restrict unsafe operations of malicious ones.
  • Frame scrolling on iOS: Addressing issues with scrollable nodes; trying to move to a more standard and interoperable approach with scrollable iframes.
  • Root scroller: Finding a solution to the old interoperability issue about how to scroll the main frame; considering a new rootScroller API.

This project aligns with Web Platform Predictability which aims at making the Web more predictable for developers by improving interoperability, ensuring version compatibility and reducing footguns. It has been a good opportunity to collaborate with Google and Apple on improving the Web. You can find further details in this blog post.

JavaScript

Igalia has been involved in design, standardization and implementation of several JavaScript features in collaboration with Bloomberg and Mozilla.

On the implementation side, Bloomberg has been sponsoring work on modern JavaScript features in V8, SpiderMonkey, JSC and ChakraCore, in collaboration with the open-source community:

  • Implementation of many ES6 features in V8, such as generators, destructuring binding and arrow functions
  • Async/await and async iterators and generators in V8 and some work in JSC
  • Optimizing SpiderMonkey generators
  • Ongoing implementation of BigInt in SpiderMonkey and class field declarations in JSC
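
The features listed above are now standard ECMAScript and available in modern engines. A small illustration of a few of them (generators, destructuring, arrow functions, and async generators):

```javascript
// Generator: lazily yields an infinite sequence.
function* fibonacci() {
  let [a, b] = [0, 1];          // destructuring binding
  while (true) {
    yield a;
    [a, b] = [b, a + b];
  }
}

// Arrow function that consumes the first n values of any iterable.
const take = (iterable, n) => {
  const out = [];
  for (const value of iterable) {
    if (out.length === n) break;
    out.push(value);
  }
  return out;
};

console.log(take(fibonacci(), 6)); // [0, 1, 1, 2, 3, 5]

// Async generator: values are produced asynchronously and consumed
// with for await...of.
async function* countdown(from) {
  for (let i = from; i > 0; i--) yield i;
}
(async () => {
  const seen = [];
  for await (const n of countdown(3)) seen.push(n);
  console.log(seen); // [3, 2, 1]
})();
```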

On the design/standardization side, Igalia is active in TC39 with Bloomberg’s support.

In partnership with Mozilla, Igalia has been involved in various JavaScript standard library features for internationalization: specification work, implementation in V8, code reviews in other JavaScript engines, and work with the underlying ICU library.
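
This internationalization work centers on the ECMAScript Internationalization API (ECMA-402), the `Intl` object, whose locale-aware formatting is typically backed by ICU data in the engine. For example:

```javascript
// Locale-aware number formatting.
const number = new Intl.NumberFormat('en-US').format(1234567.89);
console.log(number); // "1,234,567.89"

// Locale-aware date formatting.
const date = new Intl.DateTimeFormat('en-US', {
  year: 'numeric', month: 'long', day: 'numeric', timeZone: 'UTC'
}).format(new Date(Date.UTC(2017, 8, 6)));
console.log(date); // "September 6, 2017"

// Locale-aware string comparison (collation).
const collator = new Intl.Collator('en');
console.log(['b', 'a', 'c'].sort(collator.compare)); // ["a", "b", "c"]
```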

Other activities

Preparation of Web Engines Hackfest 2017

Igalia has been organizing and hosting the Web Engines Hackfest since 2009. This event, run in an unconference format, has been a great opportunity for web engine developers to meet, discuss and work together on the web platform and on web engines in general. We have announced the 2017 edition and many developers have already confirmed their attendance. We would like to thank our sponsors for supporting this event and we look forward to seeing you in October!

Coding Experience

Emilio Cobos has completed his Coding Experience program on the implementation of web standards. He has been working on the implementation of “display: contents” in Blink, but some work is pending due to unresolved CSS WG issues. He also started the corresponding work in WebKit, but that implementation is still very partial. It has been a pleasure to mentor a skilled hacker like Emilio and we wish him the best in his future projects!

New Igalians

During this semester we have been glad to welcome new Igalians who will help us pursue our Web Platform work:

  • Daniel Ehrenberg joined Igalia in January. He is an active contributor to the V8 JavaScript engine and has been representing Igalia at the ECMAScript TC39 meetings.
  • Alicia Boya joined Igalia in March. She has experience in many areas of computing, including web development, computer graphics, networking, security, and performance-oriented software design, which we believe will be valuable for our Web Platform activities.
  • Ms2ger joined Igalia in July. He is a well-known hacker in the Mozilla community with wide experience in both Gecko and Servo. He has notably worked on the DOM implementation and on web platform test automation.

Conclusion

Igalia has been involved in a wide range of Web Platform technologies, from JavaScript and layout engines to accessibility and multimedia features. Efforts have been made in all parts of the process:

  • Participation in standardization bodies (W3C, TC39).
  • Elaboration of conformance tests (web-platform-tests, test262).
  • Implementation and bug fixes in all open source web engines.
  • Discussion with users, browser vendors and other companies.

Although some of this work has been sponsored by Google or Mozilla, it is important to highlight that external companies (other than browser vendors) can make valuable contributions to the Web Platform and play an important role in its evolution. Alan Stearns has already pointed out the responsibility of Web Platform users in the evolution of CSS, while Rachel Andrew has emphasized how any company or web author can effectively contribute to the W3C in many ways.

As mentioned in this blog post, Bloomberg is an important contributor to several open-source projects and has been a key player in the development of CSS Grid Layout and JavaScript. Similarly, Metrological’s support has been instrumental for the implementation of WebRTC in WebKit. We believe others could follow their example, and we look forward to seeing more companies sponsor Web Platform development!

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Making Privacy More Transparent

Mozilla planet - wo, 06/09/2017 - 23:07

How do you make complex privacy information easily accessible and understandable to users?  At Mozilla, we’ve been thinking through this for the past several months from different perspectives: user experience, product management, content strategy, legal, and privacy.  In Firefox 56 (which releases on September 26), we’re trying a new idea, and we’d love your feedback.

Many companies, including Mozilla, present a Privacy Notice to users prior to product installation.  You’ll find a link to the Firefox Privacy Notice prominently displayed under the Firefox download button on our websites.

Our testing showed that less than 1% of users clicked the link to view the “Firefox Privacy Notice” before downloading Firefox.  Another source of privacy information in Firefox is a notification bar displayed within the first minute of a new installation.  We call this the “Privacy Info Bar.”

User testing showed this was a confusing experience for many users, who often just ignored it.  Users who did click the button ended up in the advanced settings of Firefox.  Once there, some people made unintentional changes that impacted browser performance without understanding the consequences.  And because this confusing experience occurred within the first few minutes of using a brand-new browser, it distracted from the primary purpose of installing a new browser: to navigate the web.

We know that many Firefox users care deeply about privacy, and we wanted to find a way to increase engagement with our privacy practices.  So we went back to the drawing board to provide users with more meaningful interactions. And after further discovery and iteration, our solution, which we’re implementing in Firefox 56, is a combination of several product and experience changes.  Here are our improvements:

  1. Displaying the Privacy Notice as the second tab of Firefox for all new installs;
  2. Reformatting and improving the Firefox Privacy Notice; and
  3. Improving the language in the preferences menu.

We reformatted the Privacy Notice to make it more obvious what data Firefox uses and sends to Mozilla and others.  Not everyone uses the same features or cares about the same things, so we layered the notice with high-level data topics and expanders to let you dig into details based on your interest.  All of this is now on the second tab of Firefox after a new installation, so it’s much more accessible and user-friendly.  The Privacy Info Bar became redundant with these changes, so we removed it.

We also improved the language in the Firefox preferences menu to make data collection and choices clearer to users.  And we aligned the data terms in the preferences menu and privacy notice with those our engineers use internally for data collection in Firefox.

These are just a few changes we made recently, but we are continuously seeking innovative ways to make the privacy and data aspects of our products more transparent.  Internally at Mozilla, data and privacy are topics we discuss constantly.  We challenge our engineers and partners to find alternative approaches to solving difficult problems with less data.  We have review processes to ensure the end-result benefits from different perspectives.  And we always consider issues from the user perspective so that privacy controls are easy to find and data practices are clear and understandable.

You can join the conversation on GitHub, or comment on our governance mailing list.

Special thanks to Michelle Heubusch, Peter Dolanjski, Tina Hsieh, Elvin Lee, and Brian Smith for their invaluable contributions to our revised privacy notice structure.

The post Making Privacy More Transparent appeared first on Open Policy & Advocacy.


Mozilla Future Releases Blog: It’s your data, we’re just living in it

Mozilla planet - wo, 06/09/2017 - 23:06

Let me ask you a question: How often do you think about your Firefox data? I think about your Firefox data every day, like it’s my job. Because it is.  As the head of data science for Firefox, I manage a team of data scientists who contribute to the development of Firefox by shaping the direction of product strategy through the interpretation of the data we collect.  Being a data scientist at Mozilla means that I aim to ensure that Firefox users have meaningful choices when it comes to participating in our data collection efforts, without sacrificing our ability to collect useful, high-quality data that is essential to making smarter product decisions.

To achieve this balance, I’ve been working with colleagues across the organization to simplify and clarify our data collection practices and policies. Our goal is that this will make it easier for you to decide if and when you share data with us.  Recently, you may have seen some updates about planned changes to the data we collect, how we collect it, and how we share the data we collect. These pieces are part of a larger strategy to align our data collection practices with a set of guiding principles that inform how we work with and communicate about data we collect.

The direct impact is that we have made changes to the systems we use to collect data from Firefox, and we have updated the data collection preferences as a result.  Firefox clients no longer employ two different data collection systems (Firefox Health Report and opt-in Telemetry).  Although one was on by default and the other was opt-in, as a practical matter there was no real difference in the type of data collected by the two channels in release.  Because of that, we now rely on a single system called Unified Telemetry, which combines aspects of both into a single data collection platform; as a result, there are no longer separate preferences, as there were for the old systems.

If you are a long-time Firefox user and you previously allowed us to collect FHR data but you refrained from opting into extended telemetry, we will continue to collect the same type of technical and interaction information using Unified Telemetry. We have scaled back all other data collection to either pre-release or in situ opt-in, so you will continue to have choices and control over how Firefox collects your data.

Four Pillars of Our Data Collection Strategy

There are four key areas that we focused on when we decided to adjust our data preferences settings.  For Firefox, it means that any time we collect data, we wanted to ensure that the proposal for data collection met our criteria for:

  • Necessity
  • Transparency
  • Accountability
  • Privacy

Necessity

We don’t collect data “just because we can” or “just because it would be interesting to measure”.  Anyone on the Firefox team who requests data has to be able to answer questions like:

  • Is the data collection necessary for Firefox to function properly? For example, the automatic update check must be sent in order to keep Firefox up to date.
  • Is data collection needed to make a feature of Firefox work well? For example, we need to collect data to make our search suggestion feature work.
  • Is it necessary to take a measurement from Firefox users?  Could we learn what we need from measuring users on a pre-release version of Firefox?
  • Is it necessary to get data from all users, or is it sufficient to collect data from a smaller sample?

Transparency

Transparency at Mozilla means that we publicly share details about what data we collect and ensure that we can answer questions openly about our related decision-making.

Requests for data collection start with a publicly available bug on Bugzilla. New data collection requests generally follow this process: people indicate that they would like to collect some data according to a specification, they flag a data steward (an employee trained to check that requests have publicly documented their intentions and needs) for review, and only requests that pass review are implemented.

Most simple requests, like new Telemetry probes or experimental tests, are approved within the context of a single bug.  We check that every simple request includes enough detail to answer a standard set of questions determining the necessity and accountability of the proposed measurements.  Here’s an example of a simple request for new telemetry-based data collection.

More complex requests, like those that call for a new data collection mechanism or require changes to the privacy notice, will require more extensive review than a simple request.  Typically, data stewards or requesters themselves will escalate requests to this level of review when it is clear that a simple review is insufficient.  This review can involve some or all of the following:

  • Privacy analysis: Feedback from the mozilla.dev.privacy mailing list and/or privacy experts within and outside of Mozilla to discuss the feature and its privacy impact.
  • Policy compliance review: An assessment from the Mozilla data compliance team to determine if the request matches the Mozilla data compliance policies and documents.
  • Legal review: An assessment from Mozilla’s legal team, which is necessary for any changes to the privacy policies/notices.

Accountability

Our process includes a set of controls that hold us accountable for our data collection.  We ensure that a person is listed as responsible for following the approved specification resulting from data review, covering the design and implementation of the code as well as the analysis and reporting of the data received.  Data stewards check that basic questions about the intent behind, and implementation of, the data we collect can be answered, and that the proposed collection stays within the boundaries of a given data category type in terms of the defaults available.  These controls allow us to be more confident in our ability to explain and justify to our users why we have decided to start collecting specific data.

Privacy

We can collect many types of data from your use of Firefox, but we don’t consider them all equal.  We consider some types of data more benign (like what version of Firefox you are using) than others (like the websites you visit).  We’ve devised a four-tier system that groups data into clear categories, from less sensitive to highly sensitive, which you can review here in more detail.  Since we developed this four-tier approach, we’ve worked to align its language with our Privacy Policy and with the user settings for privacy in Firefox.  (You can read more about the legal team’s efforts in a post by my colleagues at Legal and Compliance.)
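
As a loose sketch of the idea (the tier names and the default-collection rule below are illustrative paraphrases, not the authoritative definitions, which live in Mozilla's data collection documentation), a four-tier classification could be modeled like this:

```javascript
// Hypothetical sketch of a four-tier data sensitivity classification,
// ordered from least to most sensitive. Names are illustrative only.
const DataTier = Object.freeze({
  TECHNICAL: 1,        // e.g. Firefox version, crash counts
  INTERACTION: 2,      // e.g. which UI elements were used
  WEB_ACTIVITY: 3,     // e.g. sites visited
  HIGHLY_SENSITIVE: 4, // e.g. data that could identify a person
});

// In this sketch, only the two least sensitive tiers may be collected
// by default; anything above requires explicit opt-in.
function defaultAllowed(tier) {
  return tier <= DataTier.INTERACTION;
}

console.log(defaultAllowed(DataTier.TECHNICAL));    // true
console.log(defaultAllowed(DataTier.WEB_ACTIVITY)); // false
```

The value of an explicit ordering like this is that every collection request can be checked mechanically against the defaults its tier permits.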

What does this mean for you?

We hope it means a lot and not much at the same time.  At Firefox, we have long worked to respect your privacy, and we hope this new strategy gives you a clearer understanding of what data we collect and why it’s important to us.  We also want to reassure you that we haven’t dramatically changed what we collect by default.  So while you may not often think about the data you share with Mozilla, we hope that when you do, you feel better informed and more in control.

The post It’s your data, we’re just living in it appeared first on Future Releases.


Air Mozilla: Bugzilla Project Meeting, 06 Sep 2017

Mozilla planet - wo, 06/09/2017 - 22:00

Bugzilla Project Meeting: the Bugzilla Project developers meeting.

