Mozilla Nederland

The Dutch Mozilla community

Aaron Klotz: Coming Around Full Circle

Mozilla planet - Mon, 30/09/2019 - 18:30

One thing about me that most Mozillians don’t know is that, when I first applied to work at MoCo, I had applied to work on the mobile platform. When all was said and done, it was decided at the time that I would be a better fit for an opening on Taras Glek’s platform performance team.

My first day at Mozilla was October 15, 2012 – I will be celebrating my seventh anniversary at MoCo in just a couple short weeks! Some people with similar tenures have suggested to me that we are now “old guard,” but I’m not sure that I feel that way! Anyway, I digress.

The platform performance team eventually evolved into a desktop-focused performance team by late 2013. By the end of 2015 I had decided that it was time for a change, and by March 2016 I had moved over to work for Jim Mathies, focusing on Gecko integration with Windows. I ended up spending the next twenty or so months helping the accessibility team port their Windows implementation over to multiprocess.

Once Firefox Quantum 57 hit the streets, I scoped out and provided technical leadership for the InjectEject project, whose objective was to tackle some of the root problems with DLL injection that were causing us grief in Windows-land.

I am proud to say that, over the past three years on Jim’s team, I have done the best work of my career. I’d like to thank Brad Lassey (now at Google) for his willingness to bring me over to his group, as well as Jim, and David Bolter (a11y manager at the time) for their confidence in me. As somebody who had spent most of his adult life having no confidence in his work whatsoever, their willingness to entrust me with taking on those risks and responsibilities made an enormous difference in my self-esteem and my professional life.

Over the course of H1 2019, I began to feel restless again. I knew it was time for another change. What I did not expect was that the agent of that change would be James Willcox, aka Snorp. In Whistler, Snorp planted the seed in my head that I might want to come over to work with him on GeckoView, within the mobile group which David was now managing.

The timing seemed perfect, so I made the decision to move to GeckoView. I had to finish tying up some loose ends with InjectEject, so all the various stakeholders agreed that I’d move over at the end of Q3 2019.

Which brings me to this week, when I officially join the GeckoView team, working for Emily Toop. I find it somewhat amusing that I am now joining the team that evolved from the team that I had originally applied for back in 2012. I have truly come full circle in my career at Mozilla!

So, what’s next?

  • I have a couple of InjectEject bugs that are pretty much finished, but just need some polish and code reviews before landing.

  • For the next month or two at least, I am going to continue to meet weekly with Jim to assist with the transition as he ramps up new staff on the project.

  • I still plan to be the module owner for the Firefox Launcher Process and the MSCOM library; however, most day-to-day work will be done by others going forward.

  • I will continue to serve as the mozglue peer in charge of the DLL blocklist and DLL interceptor, with the same caveat.

Switching over to Android from Windows does not mean that I am leaving my Windows experience at the door; I would like to continue to be a resource on that front, so I would encourage people to continue to ask me for advice.

On the other hand, I am very much looking forward to stepping back into the mobile space. My first crack at mobile was as an intern back in 2003, when I was working with some code that had to run on PalmOS 3.0! I have not touched Android since I shipped a couple of utility apps back in 2011, so I am looking forward to learning more about what has changed. I am also looking forward to learning more about native development on Android, which is something that I never really had a chance to try.

As they used to say on Monty Python’s Flying Circus, “And now for something completely different!”


Julien Vehent: Beyond The Security Team

Mozilla planet - Mon, 30/09/2019 - 14:55

This is a keynote I gave to DevSecCon Seattle in September 2019. The recording of that keynote should be available soon.


Good morning everyone, and thank you for joining us on this second day of DevSecCon. My name is Julien Vehent. I run the Firefox Operations Security team at Mozilla, where I lead a team that secures the backend services and infrastructure of Firefox. I’m also the author of Securing DevOps.


This story starts a few months ago, when I am sitting in our mid-year review with management. We’re reviewing past and future projects, looking at where the dozen or so people in my group spend their time, when my boss notes that my team is underinvested in infrastructure security. It’s not a criticism. He just wonders if that’s ok. I have to take a moment to think through the state of our infrastructure. I mentally go through the projects the operations teams have going on, and list the security audits and incidents of the past few months.


I pull up our security metrics and give the main dashboard a quick glance before answering that, yes, I think reducing our investment in infrastructure security makes sense right now. We can free up those resources to work on other areas that need help.

Infrastructure security is probably where security teams all over the industry spend the majority of their time. It’s certainly where, in the pre-cloud era, they used to spend most of their time.


Up until recently, this was true for my group as well. But after years of working closely with ops on hardening our AWS accounts, improving logging, integrating security testing in deployments, secrets management, instance updates, and so on, we have reached the point where things are pretty darn good. Instead of implementing new infrastructure security controls, we spend most of our time making sure the controls that exist don’t regress.

The infrastructure certainly does continue to evolve, but operations teams have matured to the point of becoming their own security teams. In most cases, they know best how to protect their environments. We continue to help, of course. We’re not far away. We talk daily. We have their back during security incidents and for the occasional security review. We also use our metrics to call out areas of improvement. But that’s not a massive amount of work compared to our investment from previous years.

I have advocated for some time now that operations teams make the best security teams, and every interaction that I have with the ops of the Firefox organization confirms that impression. They know security just as well as any security engineer would, and in most operational domains, they are the security experts. Effectively, security has gone beyond the security team.

So what I want to discuss here today is how we turned our organization’s culture from centralizing security to one where ownership is distributed, and each team owns security for their areas. I’d say it took us a little more than three years to get there, but let me start by going back a lot further than that.


It didn’t use to be this way

I’m French. I grew up in cold and rainy Normandy. It’s not unlike Seattle in many ways. I studied in the Loire Valley and started my career in Paris, back in the mid-2000s. I started out working in banks, as a security engineer in the web and Minitel division of a French bank. If you don’t know what a Minitel is, you’re seriously missing out. But that’s a story for another time.

So I was working in suit and tie at a bank in Paris, and the stereotypes were true: we were taking long lunches, occasionally drinking wine and napping during soporific afternoon meetings, eating lots of cheese, and running out of things to do in our 8 or 9 weeks of vacation. Those were the days. And when it came to security, we were the supreme authority of the land. A select few that all engineers feared; our words could make projects live or die.


This is how a deployment worked back then. This was pre-DevOps, when deployment could take three weeks and everyone was fine with it. An engineering group would kick off a project, carefully plan it, come up with an elegant and performant design, and spend weeks of engineering building the thing. And don’t get me wrong, the engineering teams were absolutely top-notch. Best of the best. 100% French quality that the rest of the world envies to this day. Then they would try to deploy their newborn and, “WAIT, what the hell is this?” asks the security team, who just discovered the project.

This is where things usually went sideways for engineering. Security would freak out at the new project, complain that it wasn’t consulted, delay production deploys, use its massive influence to make last minute changes and integrate complex controls deep into the new solution. Eventually, it would realize this new system isn’t all that risky after all, it would write a security report about it, and engineering would be allowed to deploy it to production.


In those environments, trust wasn’t even an option. Security decisions were made by security engineers and that was that. Developers, operators and architects of all levels were expected to refer every security topic to the security team, from asking for permission to open a firewall rule, to picking a hash algorithm for their app. No one dared bypass us. We had so much authority we didn’t hesitate to go up against multi-million dollar projects. In a heavily regulated industry, no one wants a written record of the security team raising the alarm on their projects.

On the first day of the conference, we heard Tanya talk about the need to shift left, and I think this little overly-dramatic story is a good example of why it’s so important that we do so. Shifting left means moving our security work closer to the design phases of engineering. It means being part of the early cycles of the SDLC. It means removing the security surprise from the equation. You’ve probably heard everyone mention something along those lines in recent years. I’m probably preaching to the choir here, but I thought it may be useful to remind those of us who haven’t lived through these broken security models why it’s so important we don’t go back to them.


And consolidating security decisions into the hands of security people has two major downsides.

First, it slows projects down dramatically. We talked yesterday about the one-security-engineer-per-100-developers ratio. Routing all security topics through the security team creates a major bottleneck that delays work and generates frustration. If you’ve worked in organizations that require security reviews before every push to production, you’ve experienced this frustration. Backlogs end up being unmanageable, and review quality drops significantly.

Secondly, non-security teams feel exempt from having to account for security and spend little time worrying about how vulnerable their code is to attacks, or how permeable their perimeter is. Why should they spend time on security, when the organization clearly signals that it isn’t their problem to worry about?

We knew back then this model wasn’t sustainable, but there was little incentive to change it. Security teams didn’t want to give up the power they had accumulated over the years. They wanted more power, because security was never good enough and our society as we knew it was going to end in an Armageddon of buffer overflows should the sysadmins disable SELinux in production to allow for rapid release cycles.

Getting closer to devs & ops

Something that I should mention at this point is I’m an odd security engineer. What I love doing is actually building and shipping software, web applications and internet services in particular. I’ve been doing that for much longer than I’ve been doing security. At some point in my career, I was running a marketing website affiliated with “Who wants to be a millionaire”.


Yeah, that TV game that airs every day at lunchtime around the country. They would air a question on the show that people had to answer on our website to win points they could use to buy stuff. I gotta say, this was a proud moment of my career. I felt a strong sense of making the world a better place then. It was powerful.

Anyways, this was a tiny operation, from a web perspective. When I joined, we had one Java webapp directly connected to the internet, and one Oracle database. I upgraded that to an HAProxy load balancer, three app servers, and a beefier Oracle database. It was all duct tape and cut corners, but it worked, and it made money. And it would crash every day. I remember watching the real-time metrics out of HAProxy when they would air the question, and traffic would spike to 2,000 requests per second until the site fell down. Inevitably, every day, the site would break and there was nothing I could do about it. But business was happy, because before we crashed, we had made money.

The point I’m trying to make here is not that I’m a shitty sysadmin who used to run a site that broke every day. It’s that I understand things don’t need to be perfect to serve their purpose. Unless you work at a nuclear plant or a hospital, it’s often more important to serve the business than to have perfect security.


A little more than four years ago, I joined a small operations team focused on building cloud services for Firefox. They were adopting all the new stuff: immutable systems, fully automated deployments controlled by Jenkins pipelines, autoscaling groups, load balancing based on application health, and so on. It was an exciting greenfield where none of my security training applied and everything had to be reevaluated from scratch. A challenge, if I ever had one. The chance to shape security in a completely different way.


So I joined the cloud operations team as a security engineer. The only security engineer. In the middle of a dozen or so hardened ops who were running stuff for millions of Firefox users. I live in Florida. We know that swimming in gator-infested ponds is just stupid. In the same way, security engineers generally avoid getting cornered by hordes of angry sysadmins. They are the enemy, you see. They are the ones who leave mission-critical servers open to the internet. They are the ones who don’t change default passwords on network gear. They are the ones who ignore you when you ask for a system update seven times in a row. They are the enemy.

To be fair, they see us the same way. We’re Sauron, on top of our dark tower, overseeing everything. We corrupt the hearts and minds of their leaders. We add impossible requirements to time constrained projects. We generally make their lives impossible.

And here I was. A security guy. Joining an ops team.

By and large, they were nice folks, but it quickly became clear that any attempt at playing the arrogant French security guy who knows it all and dictates how things should be would be met with apathy. I had to win this team over. So I decided to pick a problem, solve it, make their lives easier, and win myself some goodwill.

I didn’t have to search for long. Literally days before I joined the team, a mistake happened and the git repository containing all the secrets got merged into the configuration management repo. It was a beautiful fuck-up, executed with brio, with the full confidence of a battle-tested engineer who had run these exact commands hundreds of times before. The culprit is a dear friend and colleague of mine whom I like to use as an example of professionalism with the youngsters, and I strongly believe that what failed then was not the human layer, but completely inadequate tooling.


We took it as a wake-up call that our process needed improvement. So as my first project, I decided to redesign secrets management. There was momentum. Existing tooling was inadequate. And it was a fun greenfield project. I started by collecting the requirements: applications needed to receive their secrets upon initialization, and in autoscaling groups that had to happen without a human taking action. This is a solved problem nowadays, known as the bootstrapping of trust, where we use the identity given to an instance to grant permissions to resources, such as the ability to download a given file from S3 or the permission to decrypt with a KMS key. At the time, those concepts were still fairly new, and as I was working through the requirements, something interesting happened.

In a typical security project, I’d gather all the requirements, pick the best possible security architecture and implement it in the cleanest possible way. I’d then spend weeks or months selling my solutions internally, relentlessly trying to convert people to my cause, until everyone agreed or caved.

But in this project, I decided to talk to my customers first. I sat down with every ops who would use the thing and spent the first few weeks of the project studying the provisioning logic and the secrets management workflow. I also looked at the state of the art, and added some features I really wanted, like clean git history and backup keys.

By the time I had reached the fourth proposal, the ops team had significantly shaped the design to fit their needs. I didn’t need to sell them on the value, because by then, they had already decided they really needed, and wanted, the tool. Mind you, I hadn’t written a single line of code yet, and I wasn’t sure I could implement everything we had talked about. There was a chance I had oversold the capabilities of the tool, but that was a risk worth taking.


It took a couple of months to get it implemented. The first version was written in ugly Python, with few tests and poor cross-platform support. But it worked, and it continues to work (after a rewrite in Go). The result of this project is the open source secrets management tool called Sops, which has been our internal standard for more than three years now. Since I started this talk, perhaps a dozen EC2 instances have autoscaled and called Sops to decrypt their secrets for provisioning.
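
For a sense of what that flow looks like in practice, here is the decryption side as a single command (the file name is just an example; Sops reads the key metadata it stores inside the encrypted file to figure out which KMS or PGP keys can unlock it):

sops --decrypt secrets.enc.yaml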

Don’t just build security tools. Build operational tools that do things securely.

Today, Sops is a popular DevOps tool inside and outside Mozilla, but more importantly, this project laid out the foundation of how security and operations would work together: strong collaboration on complex technical topics. We don’t just build security tools like we used to. We build operational tools that do things securely. This may seem like a distinction without a difference, but I found that it changes the way we think about our work, from being only a security team to being a security team that supports a business with specific operational needs. Effectively, it forces us to be embedded into operations.

I spent a couple years embedded in that operations team, working closely with devs & ops, sharing their successes and failures, and helping the team mature its security posture from the inside. I wrote Securing DevOps during those years, and I transferred a lot of what I learned from ops into the book.

I found it fascinating that, while I always had a strong opinion about security architecture and the technical direction we were taking, being down in the trenches dramatically changed my methods. I completely stopped trying to use my security title to block a project, or go against a decision. Instead, I would propose alternative solutions that were both viable and reasonable, and negotiate a middle ground with the parties involved. Now, to be fair, the team dynamic helped a lot, particularly having a manager who put security on equal footing with everything else. But being embedded with ops, effectively being an ops, greatly contributed to this culture. You see, I wasn’t just any expert in the room, I was their expert in the room, and if something went sideways, I would take just as much blame as everyone else.

There are two generally accepted models for building security teams. The centralized one, where all the security people report to a CISO who reports to the CEO, and the distributed one, where security engineers are distributed into engineering teams and a CISO sets the strategy from the top of the org. Both of these models have their pros and cons.

The centralized one generally has better security strategy, because security operates as a cohesive group reporting to one person. But its people are so far away from the engineering teams that actually do the work that it operates on incomplete data and a lot of assumptions.

The distributed model is pretty much the exact opposite. It has better information from being so close to where things happen, but its reporting chain is directly tied to the organization’s hierarchy, and the CISO may have a hard time implementing a cohesive security strategy org-wide.


The embedding model is sort of a hybrid between these two that tries to get the best of both worlds. Having a strong security organization that reports to an influential CISO is good, and having access to real-world data is also critical to making the right decisions. Security engineers should then report to the CISO but be embedded into engineering teams.

Now, "embedded" here has a strong meaning. It doesn’t just mean that security people snoop on these teams’ chatrooms and weekly meetings. It means that the engineering managers of these teams have partial control of the security engineers. They can request their time and change their priorities as needed. If a project needs an urgent review, or an incident needs handling, or a tool needs to be written, the security engineers will step in and provide support. That’s how you show the organization that you’re really 100% in, and not just a compliance group on the outskirts of the org.


Reverse embedding is also very important, and we’ve had that model in the security industry for many years: it’s called security champions. Security champions are engineers from your organization who have a special interest in security. Oftentimes, they are more senior engineers in architect roles, or deep experts who consider security to be a critical aspect of their work. They are the security team’s best friends, its partners throughout the organization. They should be treated with respect and given as much support as possible, because they’ll move mountains for you.

Security champions should have full access to the security team. No closed-door meetings they are excluded from, no backchannel discussions they can’t participate in. If you can’t trust your champions, you can’t trust anyone in your org, and that’s a pretty broken situation.

Champions must also be involved with setting the security strategy. If you’re going to adopt a framework or a platform for its security properties, make sure to consult your champions. If they are on board, they’ll help sell that decision down the engineering chain.

Avoid Making Assumptions

If you embed your security engineers into dev and ops, and open the doors of your organization to security champions, you’ll allow information to flow freely and make better decisions. This allows you to dramatically reduce the number of assumptions you have to make every day, which directly correlates with stronger security.

The first project I worked on when I joined Mozilla was called MIG, for Mozilla InvestiGator (the logo was a gator, for investigator, get it?). The problem we were trying to solve was inspecting our live systems for indicators of compromise in real time. Back in 2013, we already had too many servers to run investigations manually. The various methods we had tried all had drawbacks. The most successful of them involved running parallel ssh from a bastion host that had ssh keys and firewall rules to connect everywhere. If that sounds terrifying to you, it’s because it was. So my job was to invent a system that could securely connect to all those systems and tell us if a file with a given checksum was present, which would indicate that a malware or backdoor existed on the host.
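
The core of that kind of check is small. Here is a hypothetical sketch in Rust, using the sha2 and hex crates to compare one file against a known-bad SHA-256 digest (MIG itself is written in Go, and its real modules do far more than this):

use sha2::{Digest, Sha256};
use std::{fs, io};

// Does the file at `path` match a known indicator of compromise?
// `bad_digest_hex` is the hex-encoded SHA-256 of a known-bad file.
fn matches_ioc(path: &str, bad_digest_hex: &str) -> io::Result<bool> {
    let bytes = fs::read(path)?;
    let digest = Sha256::digest(&bytes);
    Ok(hex::encode(digest) == bad_digest_hex)
}

The hard part MIG solved was not this predicate; it was running it securely and in parallel across thousands of hosts within seconds.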

That project was cool as hell! How often do you get to spend two years implementing a completely novel agent-based system in a new language? I had a lot of fun working on MIG, and it saved our bacon a few times. But it quickly became evident that we were running the same investigations over and over. We had some fancy modules that could look for byte strings in memory, or run complex analysis on files, but we never used them. Instead, we used MIG as an inventory tool to find out which version of a package was installed on a system, or which host could connect to the internet through an outbound gateway. MIG wasn’t so much a security investigation platform as it was an inventory one, and it was addressing a critical need: the need for information.

Every security incident starts with an information-gathering phase. You need to understand exactly how much is impacted, and what the risk of propagation is, before you can work on mitigation. The inventory problem continues to be a major concern in infosec. There are too many versions of too many applications running on too many systems for anyone to keep track of them effectively. And this is even ignoring the problem shadow IT poses to organizations that haven’t modernized fast enough. As security experts, we operate in the dark most of the time, so we make assumptions.


I’ve grown to learn that assumptions are at the root of most vulnerabilities. The NYTimes wrote a fantastic article about how Boeing built a defective 737 Max, and I’ll let you guess what’s at the core of it: assumptions. In their case, it’s – and I quote the New York Times article here – “After Boeing removed one of the sensors from an automated flight system on its 737 Max, the jet’s designers and regulators still proceeded as if there would be two”. The article is truly eye-opening on how people making assumptions about various parts of the system led to the plane being unreliable, with dramatic consequences.

I’ve made too many assumptions throughout my career, and too often they proved to be entirely wrong. One of them even broke Firefox for our entire user base. Nothing good comes from making assumptions.

Assumptions lead to vulnerability. Let me say that one more time: assumptions lead to vulnerability. As security experts, our job is to identify every assumption as a potential security issue. If you only get one thing out of this talk, please let it be this: every time someone uses the word “assume” in any context, reply with “let’s see how we can remove that assumption”, or “do we have a test to confirm this?”

Nowadays, I gauge the security maturity of an organization by how many assumptions its security team is making. It’s terrifying, really. But we have a cure: data. Having better inventories, something cloud infrastructure truly helps us with, is an important step toward fixing out-of-date systems. Clearing up assumptions about how systems interconnect reduces the attack surface. Etc., etc.



Data is a formidable silver bullet for a lot of things, and it must be at the core of any decent security strategy. The more data you have, the easier it is to take risks. Good security metrics, updated daily, are what helped me answer my boss’s question and confirm that we were okay reinvesting our resources in other areas. Those dashboards are also what upper management wants to see. Not that they necessarily want to know the details of the dashboard, though sometimes they do, but they want to make sure you have that data, that you can answer questions about the organization’s security posture, and that you can make decisions based on accurate information.


Data does not come out of thin air. To get good data, you need good tests, good monitoring and good telemetry. Security teams are generally pretty good at writing tests, but they don’t always write the right tests.

One example is an AWS test from our internal framework called “frost”, which checks that EC2 instances are running on an acceptable AMI version. You can see the code for yourself; it’s open source at github.com/mozilla/frost. What’s interesting about this test is that it took a few iterations to get right. Our first attempt was just completely wrong and flagged pretty much every production instance as insecure, because we weren’t aligned with the ops team and had written the test without consulting them.

In this case, we flag an instance as insecure if it is running on an AMI that isn’t owned by us and that is older than a configured max age. This is a good test, and we can give that data directly to the ops team for action. It’s really worthwhile spending extra time making sure your tests are trustworthy, because otherwise you’re sending compliance reports no one ever reads or takes action on, and you’re pissing people off.

Setting the expectations

But data and tests only reflect how well you’re growing security awareness in your organization. They don’t, in and of themselves, mature your organization. So while it is important to spend time improving your testing tools and metrics-gathering frameworks, you should also spend time explaining to the organization what your ideal state is. You should set the expectations.



A few years ago, we were digging through our metrics to understand how we could get websites and APIs to a higher level of security. It was clear we weren’t being successful at deploying modern security techniques like Content Security Policy or HSTS. It seemed like every time we performed a risk assessment, we would get full buy-in from the dev teams, and yet those controls would never make it to the production version. We had an implementation problem.

So we tried a few things, hoping that one of them would catch on.



We first pushed the idea that every deployment pipeline would invoke ZAP right after the pre-production site was deployed. I talked about this in the book under the idea of test-driven security, using examples of invoking a container running ZAP in CircleCI. The ZAP container would scan a pre-production version of the site and output a compliance report against a web security baseline. The report was explicit in calling out missing controls, so we thought it would be easy for devs to adopt it and fix their webapps. But it didn’t take off. We tried really hard to get it included in Jenkins deployment pipelines and web application CI/CD, and yet the uptake was low, and fairly short-lived. The integrations would get disabled as soon as they became annoying (too slow, blocking deploys, etc.). The tooling just wasn’t there yet.
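
For context, the kind of pipeline step we are talking about boils down to one container invocation. This uses the OWASP-documented baseline scan image and flags, not our exact configuration, and the target URL is a placeholder:

docker run owasp/zap2docker-stable zap-baseline.py -t https://staging.example.com

The command scans the target and prints a pass/warn/fail line for each baseline control, which is exactly the kind of compliance report described above.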


But the idea of the “baseline”, this minimal set of security controls we wanted every website and API to implement, was fairly successful. So we turned it into a checklist in markdown format that could easily be copied into GitHub issues.

This one worked out beautifully. We would create the checklist in the target repository while running the risk assessment, and devs would go through it as part of their pre-launch check. Over time, we added dozens of items to the checklist, from new controls like checking for out-of-date dependencies, to traps we wanted devs to avoid, like proxying requests to the AWS instance metadata endpoint. The checklist got big, and many items don’t apply to most projects, but we just cross those off and let the devs focus on the stuff that matters. They seem to like it.

And something else interesting happened: project managers started tracking completion of the checklist as part of their own pre-launch checklist. We would see security checklist completeness mentioned in readiness meetings with directors. It got taken seriously, and as a result, every single website we launched over the last couple of years implements Content Security Policy, HSTS, SameSite cookies, and so on.



The checklist isn’t the only thing that helped improve adoption. Giving developers self-service security tools is also very important. And the approach of giving letter grades has an interesting psychological effect on engineering teams, because no one wants to ship a production site that gets an F or even a C on a publicly accessible security assessment tool. Everyone wants an A+. And guess what? Following the checklist does give you an A+. That makes the story pretty straightforward: follow the checklist, and you’ll get your nice A+ on the Observatory.


  • Clear expectations

  • Checklist

  • Self-assessment

  • Profit

This particular model may not work exactly for your organization. The power dynamics and internal politics may be different. But the general rule still applies: if you want security to be adopted in products that ship, set the expectations early and clearly. Don’t give them vague rules like “only use encryption algorithms that provide more than 128 bits of security”. No one knows what that means. Instead, give them information they can directly translate into code, like a content security policy they can copy, paste, and tweak, or a logging library they can import into their apps that spits out the right format from the get-go. Set the expectations, and make them easy to follow. Devs and ops are too busy to jump through hoops.
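
For instance, a copy-and-paste starting point for a site that serves all of its own assets could be as simple as the policy below (a generic strict policy, not the exact snippet we hand out):

Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'; connect-src 'self'

Developers can paste that in, watch the browser console for violations, and loosen one directive at a time. That is a lot more actionable than “128 bits of security”.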

Once you’ve set the expectations, give them checklists and the tools to self-assess. Don’t make your people ask you every time they need a check of their webapp; it bothers them as much as it bothers you. Instead, give them security tools that are fully self-service. Give them a chance to be their own security team, and to make you obsolete. Clear expectations and self-service security tools are how you build up adoption.

Not having to say “no”

There is another security-team anti-pattern I’d like to address: the stereotypical “no” team. The team that operates in organizations where engineers keep bringing up projects it feels it has to shut down because of how risky they are. Those security people are usually not a happy bunch. You rarely see them smile. They complain a lot. They look way older than they really are. Maybe they took up drinking.

See, I’m a happy person. I enjoy my work, and I enjoy the people I work with, and I don’t want to end up like that. So I set a personal goal of pretty much never having to say no.

I have a little one at home. She’s a little over a year old and just started walking. She’s having a blast, really, and it’s a sight to see her run around the house, a huge smile on her face, using her newly acquired skills. For her mother and me, it’s just plain terrifying. Every step she takes is a potential disaster. Every piece of furniture in the house that isn’t covered in foam and soft blankets is a threat. Every pot, jar, broom, cat or dog is a weapon she could get her hands on at any moment. The threat modeling of a parent is simple: “EVERYTHING IS DANGEROUS, DO NOT LET THIS CHILD OUT OF SIGHT, NO YOU CAN’T CLIMB THE STAIRS ON YOUR OWN, DON’T LICK THAT KNIFE!”

The safe approach here should be simple: cover the little devil in bubble wrap, and don’t let her leave the safe space of her playpen. There, problem solved. Now if she could just stop growing...

And by the way, you CAN buy bubble wrap baby suits. It’s a thing. For those of you with kids, you may want to look into it.

There is a famous quote from one of my personal heroes: Rear Admiral Grace Hopper. She invented compilers, back when computers were barely a thing. She used to hand out nanoseconds to military officers to explain how long messages took to travel over the wire. A nanosecond here is a small piece of wire, about 30cm long (that’s almost a foot, for all you Americans out there), that represents the maximum distance light or electricity can travel in a billionth of a second. When an admiral would ask her why it took so damn long to send a message via satellite, she’d point out that between here and the satellite there are a large number of nanoseconds.

Anyway, Admiral Hopper once said “ships are safe in harbor, but that’s not what ships are for”. If you only remember two things from this talk, add that one to the list.

As a dad, it is literally my job to paternalize my kid. A lot of security teams feel the same way about their daily job. Let me argue here that this is completely the wrong approach. It’s not your job to paternalize your organization. The people you work with are adults capable of making rational decisions, and when they decide to ignore a risk, they are also making a rational decision. You may disagree with it, and that’s fine, but you shouldn’t presume that you know better than everyone else involved.

What our job as security professionals really is, is to help the organization make informed decisions. We need to surface the risks, explain the threats, perhaps reshape the design of a project to better address some concerns. And when everything is said and done, the organization can decide for itself how much risk it is willing to accept.

At Mozilla, instead of saying no, we run risk assessments and threat models together as a team, then we make sure everyone agrees on the assessment and thinks it’s an appropriate amount of risk to take. The security team may have concerns over a specific feature, only to realize during the assessment that those concerns aren’t really that high. Or perhaps they are, and the engineering team didn’t realize it until now, and is perfectly willing to modify or remove that feature entirely. And sometimes the project simply is risky by nature, but it’s only being rolled out to a small number of users while being tested.

The point of risk assessments and threat modeling isn’t only to identify the risks. It’s to help the organization make informed decisions about those risks. A security team that simply says “no” doesn’t solve anyone’s problems. Instead, build yourself an assessment framework that can evaluate new projects and features, and that can get people to take calculated risks based on the environment the business operates in.

We call this framework the “Rapid Risk Assessment”, or RRA. Guillaume Destuynder and I introduced this framework at Mozilla back in 2013, and over the last six years we have run hundreds, if not thousands, of RRAs with everyone in the organization. I still haven’t heard anyone call the exercise a waste of time. RRAs are a bit different from standard risk assessments. They are short, 30 minutes to one hour, and focused on small components. It’s more of a security and threat discussion than a typical matrix-based risk framework, and I think this is why people actually enjoy them. One particular team even told me once that they were looking forward to running the RRA on their new project. How cool is that?

Having a risk assessment framework is nice, but you can also get started without one. In the panel yesterday, Zane Lackey told the story of introducing risk assessments at Etsy by joining engineering meetings and simply asking the devs, “How would you attack this app?” This works; I’ve asked similar questions many times. Guillaume’s favorite is “what would happen should the database leak on Twitter?” Devs and ops are much better at threat modeling than they often realize, and you can see the wheels spinning in their brains when you ask this type of question. Try it out, it’s actually fun!
Being Strategic

By this point in the talk, I hope that I’ve convinced you security is owned well beyond the security team, and you might be tempted to think that perhaps we could get rid of all those pesky security people altogether. I’ll be honest: that’s the end game for me. The digital world will adopt perfect security. People will carefully consider their actions and take the right amount of risk when appropriate. They will communicate effectively and remove all assumptions from their work. They will treat each other with respect during security incidents and collaborate effectively toward a resolution. And I’ll be serving fresh mojitos at a tiki bar over there by the beach.

Not a good deal for security conferences really. Sorry Snyk, this may have been a bad investment. But I have a feeling they’ll be doing fine for a while. Until those blessed days come, we’ve got work to do.

A mature security team may not need to hold the hands of its organization anymore, but it still has one job: it must be strategic. It must foresee threats long before they impact the organization, and it must plan defenses long before the organization has to adopt them. It’s easy for security teams to get caught up in the tactical day-to-day, going from review to incident to review to incident, but doing so does not effectively improve anything. The real gratification, the one that no one else gets to see but your own team, is seeing every other organization but your own battle a vulnerability you’re not exposed to because of strategic choices you made long ago.



Let me give you an example: I’m a strong supporter of putting our admin panels behind VPN. Yes, VPNs. Remember them? Those old dusty tunnels that we used to love in the 2000s, until zero trust became all the rage and every vendor on the planet wanted you to replace everything with their version of it. Well, guess what: we do zero trust, but we also put our most sensitive admin panels behind VPN. The reason for it is simply defense in depth. We know authentication layers fail regularly, we have tons of data to prove it, and we don’t trust them as the only layer of defense.

Do developers and operators complain about it? Yes, absolutely, almost all the time. But they also understand our perspective, and trust us to make the right call. It all ties together: you can’t make strategic decisions for your organization if your own people don’t trust you.

So what else should you be strategic about? It really depends on where you’re operating. Are you global or local? What population do you serve? Can your services be abused for malicious purposes?

For example, imagine you’re running a web business selling birthday cards. You could decide to automatically close every account that’s created or accessed from specific parts of the world. It would be a drastic stance to take, but perhaps the loss of business is cheaper than the security cost. It’s a strategic decision to make, and it’s the role of a mature security team to help its leadership make it by informing on the risks.

I personally like the idea that each component of your environment should be allowed to blow up without impacting the rest of the infrastructure. I don’t like over-centralization. This model is nice when you work with teams that have varying degrees of maturity, because you can let them do their thing without worrying about dragging everyone down to the lowest common denominator. In practice, it means we don’t put everything in the same AWS account, we don’t use single sign-on for certain things that we consider too sensitive, and so on. The point is to make strategic decisions that make sense for your organization.

Whatever decisions you make, spend time documenting them, and don’t forget to have your champions review and influence them. Build the security strategy together, so there is no confusion that this is a group effort the entire organization has bought into. It will make implementing it a whole lot easier.

So in closing, I’d like to leave you with this: a security team must help the organization make strategic security decisions. To do so, it must be trusted. To be trusted, it needs to have data, avoid making assumptions, set clear expectations, and avoid saying no. And above all, it must be embedded across the organization.

To go beyond the security team, get the security team closer to your organization.

Thank you.

Mozilla Reps Community: Rep of the Month – August 2019

Mozilla planet - Mon, 30/09/2019 - 12:10

Please join us in congratulating Yamama Shakaa, our Rep of the Month for August 2019!

Yamama is from Nablus, Palestine. She is a teacher and has become a very active Mozillian: she joined the Reps program in November 2018 and is also part of the Mozilla Tech Speakers program. She keeps contributing deeply to the program as a member of Reps Resources.


She also contributes a lot to WebVR, A-Frame, and Common Voice. Like many teachers around the world, she inspires many people, especially schoolgirls in her region, by teaching them how to solve problems through lines of code.

Congratulations and keep rocking the open web! 🎉


Hacks.Mozilla.Org: WebHint in Firefox DevTools: Improve Compatibility, Accessibility and more

Mozilla planet - Mon, 30/09/2019 - 09:43

Creating experiences that look and work great across different browsers is one of the biggest challenges on the web. It is also the most rewarding part, as it gets your app to as many users as possible. On the other hand, cross-browser compatibility is also the web’s biggest frustration. Testing in legacy browsers late in the development process can reveal that a feature you spent hours on is broken, even requiring rewrites to fix.

What if the tools in your primary development browser could warn you sooner? Thanks to Webhint in Firefox DevTools, we can do exactly that, and more.

The Webhint engine

Webhint provides feedback about your site’s compatibility, performance, security, and accessibility to guide improvements. A key benefit is integration across the development cycle — while you author in VS Code, test in CI/CD automation, or benchmark sites in the online scanner. Having Webhint available in DevTools adds in-page context and inspection capabilities.

Firefox playing Chromium-frisbee with Narwhal Nelli, for real!

Firefox DevTools was happy to collaborate with the Webhint team, which just released version 1.0 of their extension. With the recommendations that the DevTools panel provides, developers on any browser (there is also a Chrome extension) can spend less time looking up cross-browser compatibility tables like caniuse or MDN. The cross-browser guidance for CSS and HTML, a core part of the 1.0 release, is also one of the first projects to apply MDN’s browser-compat-data to code to detect compatibility issues.

The foundation to build on

The hints are not rules written in stone. In fact, the hint engine is extensible by design so developers can capture their own expertise and best practices for their projects. We also have plans to tweak the heuristics behind recommendations, especially for new ground like compatibility, based on your feedback. We are also working to integrate recommendations further into DevTools. Everything should be at your fingertips when you need it.

Wrapping up

Install Webhint for Firefox, Chrome or Edge (Chromium) and run it against your old and new projects. Find out how you could further optimize compatibility, security, accessibility, and speed. We hope it will help you to make your site work for as many users as possible.

The post WebHint in Firefox DevTools: Improve Compatibility, Accessibility and more appeared first on Mozilla Hacks - the Web developer blog.


The Rust Programming Language Blog: Security advisory for Cargo

Mozilla planet - Mon, 30/09/2019 - 02:00

Note: This is a cross-post of the official security advisory. The official post contains a signed version with our PGP key, as well.

The Rust team was recently notified of a security concern when using older versions of Cargo to build crates which use the package rename feature added in newer versions of Cargo. If you're using Rust 1.26.0, released on 2018-05-10, or later, you're not affected.

The CVE for this vulnerability is CVE-2019-16760.

Overview

Cargo can be configured through Cargo.toml and the [dependencies] section to depend on different crates, such as those from crates.io. There are multiple ways to configure how you depend on crates as well, for example if you depend on serde and enable the derive feature it would look like:

serde = { version = "1.0", features = ['derive'] }

Rust 1.31.0 introduced a new feature of Cargo where one of the optional keys you can specify in this map is package, a way to rename a crate locally. For example if you preferred to use serde1 locally instead of serde, you could write:

serde1 = { version = "1.0", features = ['derive'], package = "serde" }

It's the addition of the package key that causes Cargo to compile the crate differently. This feature was first implemented in Rust 1.26.0, but it was unstable at the time. For Rust 1.25.0 and prior, however, Cargo would ignore the package key and interpret the dependency line as if it were:

serde1 = { version = "1.0", features = ['derive'] }

This means when compiled with Rust 1.25.0 and prior then it would attempt to download the serde1 crate. A malicious user could squat the serde1 name on crates.io to look like serde 1.0.0 but instead act maliciously when built.

In summary, usage of the package key to rename dependencies in Cargo.toml is ignored in Rust 1.25.0 and prior. When Rust 1.25.0 or prior is used, Cargo will ignore package and download the wrong dependency, which could be squatted on crates.io to be a malicious package. This not only affects manifests that you write locally yourself, but also manifests published to crates.io. If you published a crate, for example, that depends on serde1 to crates.io, then users who depend on you may also be vulnerable if they use Rust 1.25.0 and prior.

Affected Versions

Rust 1.0.0 through Rust 1.25.0 is affected by this advisory because Cargo will ignore the package key in manifests. Rust 1.26.0 through Rust 1.30.0 are not affected and typically will emit an error because the package key is unstable. Rust 1.31.0 and after are not affected because Cargo understands the package key.

In terms of Cargo versions, this affects Cargo up through Cargo 0.26.0. All future versions of Cargo are unaffected.

Mitigations

We strongly recommend that users of the affected versions update their compiler to the latest available one. Preventing this issue from happening requires updating your compiler to either Rust 1.26.0 or newer.

We will not be issuing a patch release for Rust versions prior to 1.26.0. Users of Rust 1.19.0 to Rust 1.25.0 can instead apply the provided patches to mitigate the issue.

An audit of existing crates published to crates.io using the package key has been performed and there is no evidence that this vulnerability has been exploited in the wild. Our audit only covers the crates currently published on crates.io: if you notice crates exploiting this vulnerability in the future please don't hesitate to email security@rust-lang.org in accordance with our security policy.

Timeline of events

  • Wed, Sep 18, 2019 at 13:54 UTC - Bug reported to security@rust-lang.org
  • Wed, Sep 18, 2019 at 15:35 UTC - Response confirming the report
  • Wed, Sep 18, 2019 - Cargo, Core, and crates.io teams confer on how best to handle this
  • Thu, Sep 19, 2019 - Confirmed plan of action with Elichai and continued to audit existing crates
  • Mon, Sep 23, 2019 - Advisory drafted, patches developed, audit completed
  • Mon, Sep 30, 2019 - Advisory published, security list informed of this issue

Acknowledgments

Thanks to Elichai Turkel, who found this bug and reported it to us in accordance with our security policy.


The Rust Programming Language Blog: Async-await hits beta!

Mozilla planet - Mon, 30/09/2019 - 02:00

Big news! As of this writing, syntactic support for async-await is available in the Rust beta channel! It will be available in the 1.39 release, which is expected to be released on November 7th, 2019. Once async-await hits stable, that will mark the culmination of a multi-year effort to enable efficient and ergonomic asynchronous I/O in Rust. It will not, however, mark the end of the road: there is still more work to do, both in terms of polish (some of the error messages we get today are, um, not great) and in terms of feature set (async fn in traits, anyone?).

(If you're not familiar with what async-await is, don't despair! There's a primer and other details later on in this post!)

Async-await support in the ecosystem growing rapidly

But async-await has never been the entire story. To make good use of async-await, you also need strong libraries and a vibrant ecosystem. Fortunately, you've got a lot of good choices, and they keep getting better.

Restructuring Async I/O in the Rust org

Now that async-await is approaching stable, we are taking the opportunity to make some changes to our Rust team structure. The current structure includes two working groups: the Async Foundations WG, focused on building up core language support, and the Async Ecosystem WG, focused on supporting the development of the ecosystem.

In light of all the activity going on in the ecosystem, however, there is not as much need for the Async Ecosystem WG, and as such we've decided to spin it down. We'll be deprecating the rustasync github org. The areweasyncyet.rs and arewewebyet.org websites will move to the main rust-lang org, but the fate of the other projects will be decided by the people who built them. A few will likely be deprecated, and the remainder will move out to be maintained independently.

The Async Foundations WG, meanwhile, will continue, but with a shift in focus. Now that async-await is en route to stabilization, the focus will be on polish, such as improving diagnostics, fixing smaller bugs, and improving documentation such as the async book. Once progress is made on that, we'll be considering what features to implement next.

(An aside: this is the first time that we've ever opted to spin down a working group, and we realized that we don't have a formal policy for that. We've created an issue with the governance working group to look into that for the future.)

Async-await: a quick primer

So, what is async-await? Async-await is a way to write functions that can "pause", return control to the runtime, and then pick up from where they left off. Typically those pauses are to wait for I/O, but there can be any number of uses.

You may be familiar with async-await from other languages, such as JavaScript or C#. Rust's version of the feature is similar, but with a few key differences.

To use async-await, you start by writing async fn instead of fn:

async fn first_function() -> u32 { .. }

Unlike a regular function, calling an async fn doesn't do anything to start -- instead, it returns a Future. This is a suspended computation that is waiting to be executed. To actually execute the future, you have to use the .await operator:

async fn another_function() {
    // Create the future:
    let future = first_function();

    // Await the future, which will execute it (and suspend
    // this function if we encounter a need to wait for I/O):
    let result: u32 = future.await;
    ...
}

This example shows the first difference between Rust and other languages: we write future.await instead of await future. This syntax integrates better with Rust's ? operator for propagating errors (which, after all, are very common in I/O). One can simply write future.await? to await the result of a future and propagate errors. It also has the advantage of making method chaining painless.
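
As a quick illustration, here is a minimal sketch of how the postfix syntax composes with ? and method chaining (fetch_string is a made-up stand-in for a real fallible I/O operation, not part of any library):

use std::io;

// A made-up async function standing in for some fallible I/O operation:
async fn fetch_string() -> io::Result<String> {
    Ok(String::from("  hello  \n"))
}

async fn caller() -> io::Result<()> {
    // .await? awaits the future, propagates any error, and the result
    // can be chained into further method calls without extra parentheses:
    let line = fetch_string().await?.trim().to_owned();
    println!("{}", line);
    Ok(())
}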

Zero-cost futures

The other difference between Rust futures and futures in other languages is that they are based on a "poll" model, which makes them zero cost. In other languages, invoking an async function immediately creates a future and schedules it for execution: awaiting the future isn't really necessary for it to execute. But this implies some overhead for each future that is created.

In contrast, in Rust, calling an async function does not do any scheduling in and of itself, which means that we can compose a complex nest of futures without incurring a per-future cost. As an end-user, though, the main thing you'll notice is that futures feel "lazy": they don't do anything until you await them.
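
A small sketch of this laziness (function names are illustrative):

async fn say_hello() {
    println!("hello!");
}

async fn demo() {
    // Nothing is printed here -- the future is created but not scheduled:
    let future = say_hello();

    // Only now does the body of say_hello() run and print "hello!":
    future.await;
}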

If you'd like a closer look at how futures work under the hood, take a look at the executor section of the async book, or watch the excellent talk that withoutboats gave at Rust LATAM 2019 on the topic.

Summary

In summary, if you've an interest in using Async I/O in Rust, this is a very exciting time! With async-await syntax hitting stable in November, it's going to be easier than ever to write futures (in particular, if you tried using the combinator-based futures in the past, you'll find async-await integrates much better with Rust's borrowing system). Moreover, there are now a number of great runtimes and other libraries available in the ecosystem to work with. So get out there and build stuff!

(Oh, yeah, and please file bugs when you hit confusing or surprising problems, so we can improve the user experience!)


William Lachance: Metrics Graphics: Stepping back for a while

Mozilla planet - to, 26/09/2019 - 22:33

Just a note that I’ve decided to step back from metrics graphics maintenance for the time being, which means that the project is essentially unowned. This has sort of been the case for a while, but I figured I should probably make it official.

If you follow the link to the metrics graphics repository, you’ll note that the version has been bumped to “3.0-alpha3”. I was this close to making one last new release this afternoon but decided I didn’t want to potentially break existing users who were fine using the last “official” version (v3.0 bumps the version of d3 used to “5”, among other breaking changes). I’d encourage people who want to continue using the library to make a fork and publish a copy under their user or organization name on npm.


The Mozilla Blog: Firefox and Tactical Tech Bring The Glass Room to San Francisco

Mozilla planet - to, 26/09/2019 - 18:00

After welcoming more than 30,000 visitors in Berlin, New York, and London, The Glass Room is coming to San Francisco on October 16, 2019.

From the tech boom to techlash, our favorite technologies have become intertwined with our daily lives. As technology is embedded in everything from dating to driving and from the environment to elections, our desire for convenience has given way to trade-offs for our privacy, security, and wellbeing. 

The Glass Room, curated by Tactical Tech and produced by Firefox, is a place to explore how technology and data are shaping our perceptions, experiences, and understanding of the world. The most connected generation in history is also the most exposed, as people’s privacy becomes the fuel for technology’s incredible growth. What’s gained and lost — and who decides — are explored at the Glass Room.

The Glass Room is in a 28,000 square-foot former retail store, located at 838 Market Street, across from Westfield San Francisco Centre, in the heart of the Union Square Retail District. It will be open to the public from October 16th through November 3rd. The location is intentional, meant to entice shoppers into the store and help them leave better equipped to make informed choices about technology and how it impacts their personal data, privacy, and security.

The Glass Room is a pop-up store with a twist, presenting more than 50 provocative tech products in an unexpected environment. This installment arrives in San Francisco to turn a mirror on Silicon Valley, to the people who make our technologies and those who are affected by its impact on society. “The biggest change since we launched The Glass Room in New York in 2016 and in London in 2017 is that the overall mood of tech users and consumers has shifted,” says Stephanie Hankey, of Tactical Tech. “People are starting to question how things work, how it affects them and what they can do about it. The Glass Room is a great way for users of technology to engage on a deeper level and make more informed choices. Each piece in The Glass Room tells a different story about data and technology, so there is something for everyone to connect with.”

This interactive public exhibit also includes a Data Detox Bar, where a team of in-house experts called Ingeniuses will dispense practical tips, tricks, and advice. There will also be a program of talks and workshops to foster debate, discussion, and solution-finding.

“We build the family of Firefox products to help people take charge of their data online, and give them control with features and tools that put privacy first,” says Mary Ellen Muckerman, Vice President of Brand Engagement. “We know it’s our job to help people understand what’s happening behind the scenes of the technology they love and we hope that events like The Glass Room help inform people about how to protect themselves online.”

At this turning point in the age of wider technological advancement, The Glass Room marks a moment to reflect on what our next steps should be. How do we want to shape our relationship with technology in the future?

More Details

October 16th – November 3rd 2019
838 Market Street, San Francisco
12pm–8pm daily
Free and open to the public
theglassroom.org



Wladimir Palant: PfP: Pain-free Passwords security review

Mozilla planet - to, 26/09/2019 - 14:42

This is a guest post by Jane Doe, a security professional who was asked to do a security review of PfP: Pain-free Passwords. While publishing the results under her real name would have been preferable, Jane has good reasons to avoid producing content under her own name.

I reviewed the code of the Pain-free Passwords extension. It's a stateless password manager that generates new passwords based on your master password, meaning that you don't have to back up your password database (although you can also import your old passwords, which do need backing up). For this kind of password manager, the most sensitive part is the password generation algorithm. Other potentially vulnerable components are those common to all password managers: autofill, storage and cloud sync.


Password generation

Passwords generated by a password manager have to be unpredictable. For a stateless password manager, there's another important requirement: it should be impossible to derive the master password from any of the generated passwords.

PfP satisfies both requirements. Its password derivation algorithm is basically scrypt(N=32768, r=8, p=1), which is an industry standard for key derivation. It generates a binary key based on your master password, the website address you'll use that password for, and your login there. (You can also specify a revision, in case your password has been stolen and you need a new one.) PfP then translates the binary key into text, using characters from the sets you choose. The translation algorithm is designed to include symbols from each of the character sets you choose.
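
To make the scheme concrete, here is a rough sketch of the derivation step in Rust using the scrypt crate. PfP itself is written in JavaScript, so the crate API, the salt layout, and the function names here are illustrative assumptions, not PfP's actual code:

use scrypt::{scrypt, Params};

fn derive_site_key(master: &str, site: &str, login: &str, revision: &str) -> [u8; 32] {
    // Site, login, and revision together act as the salt, so every
    // (site, login, revision) triple yields a distinct, reproducible key.
    let salt = format!("{}\0{}\0{}", site, login, revision);
    // N = 32768 corresponds to log2(N) = 15.
    let params = Params::new(15, 8, 1, 32).expect("valid scrypt parameters");
    let mut key = [0u8; 32];
    scrypt(master.as_bytes(), salt.as_bytes(), &params, &mut key)
        .expect("non-empty output buffer");
    key
}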

The resulting password is completely random for a person who doesn’t know your master password, so they cannot predict what passwords you will use for other websites. Your master password cannot be computed either, as scrypt is a one-way hash function.

The scrypt parameters (here we're talking about N=32768, r=8, p=1) are explained well in this article. It is recommended to choose parameters such that key derivation takes around 100ms on the author's computer (for cases where the key is used immediately, as in PfP). I ran the test on my computer and found that it derives a key in 158ms on average with the parameters PfP uses, which means they are secure enough.

With this setup, the only possible attack would be brute-forcing the master password, i.e. trying to reverse the one-way hash function, which is prohibitively expensive and time-consuming (we're talking about tens of years on very expensive hardware). So it's highly unlikely anyone would even try; a much more efficient method would be to install a keylogger on the victim's computer.

AutoFill

PfP does a good job of making sure a password for a specific website can only be filled in on that website. I wasn't able to trick it into autofilling the password for one website on another.

But I did discover a couple of non-security bugs. One of them caused autofill to fail in iframes in Chrome; the other caused PfP to wait forever for the submit button to appear, running code in an endless loop.

Storage

PfP uses the API browsers provide to extensions to store passwords (localStorage for the online version). It encrypts everything using AES256-GCM, with a 256-bit encryption key derived from your master password and a salt. The salt is a random 16-byte value stored together with the encrypted data; it is needed to protect you from rainbow table attacks.

Another important aspect of the encryption is the use of random initialization vectors. PfP generates 12 random bytes and stores them concatenated with the ciphertext.
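
A minimal sketch of this encrypt-and-concatenate step, using the Rust aes-gcm crate (again illustrative: PfP is JavaScript, so the crate API and names here are assumptions):

use aes_gcm::aead::{Aead, AeadCore, KeyInit, OsRng};
use aes_gcm::{Aes256Gcm, Key};

fn encrypt_entry(key_bytes: &[u8; 32], plaintext: &[u8]) -> Vec<u8> {
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(key_bytes));
    // A fresh random 12-byte initialization vector for every ciphertext:
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
    let ciphertext = cipher.encrypt(&nonce, plaintext).expect("encryption failure");
    // Store IV || ciphertext, mirroring how PfP concatenates the two:
    let mut blob = nonce.to_vec();
    blob.extend_from_slice(&ciphertext);
    blob
}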

PfP uses HMAC-SHA256 digests as database keys. The keys used to retrieve encrypted data need to be deterministic, so they cannot be random values. Plain SHA-256 hashes could be broken (the passwords could be computed from them), while running scrypt for each storage operation would be too expensive performance-wise. With a random HMAC secret that is itself stored encrypted (under a preset database key), it is impossible to recover the password info from an HMAC digest alone.
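
A sketch of such deterministic lookup keys with the Rust hmac and sha2 crates (illustrative, not PfP's actual code):

use hmac::{Hmac, Mac};
use sha2::Sha256;

fn database_key(hmac_secret: &[u8], site: &str, login: &str) -> Vec<u8> {
    let mut mac = Hmac::<Sha256>::new_from_slice(hmac_secret)
        .expect("HMAC accepts keys of any length");
    // The same (site, login) pair always maps to the same digest, but
    // without the secret the digest reveals nothing about the entry.
    mac.update(site.as_bytes());
    mac.update(b"\0");
    mac.update(login.as_bytes());
    mac.finalize().into_bytes().to_vec()
}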

As with password generation, the only way to break this encryption is brute force, which isn't viable.

Cloud sync

PfP allows storing data in Dropbox, Google Drive or remoteStorage. It uses the same encryption model for cloud storage as for local storage. (The salt value has to be the same across all synced instances, so when a new PfP instance connects to a non-empty cloud storage, it re-encrypts its local data to use the salt value saved in the cloud.)

In addition to that, PfP signs all cloud-saved data using HMAC, with another secret value that is stored encrypted in the cloud. This means that if the cloud provider changes PfP data, the extension will detect it and produce an error.
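
Verification of the signed data could look roughly like this (same caveat: an illustrative Rust sketch, not the extension's JavaScript):

use hmac::{Hmac, Mac};
use sha2::Sha256;

fn cloud_data_untampered(sign_secret: &[u8], data: &[u8], tag: &[u8]) -> bool {
    let mut mac = Hmac::<Sha256>::new_from_slice(sign_secret)
        .expect("HMAC accepts keys of any length");
    mac.update(data);
    // verify_slice uses a constant-time comparison; any modification of
    // the data (or the tag) by the cloud provider makes it fail.
    mac.verify_slice(tag).is_ok()
}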

So, there’s no way the cloud provider can tamper with stored data, as it would be easily detectable, nor it can see your passwords, as they’re encrypted. The only thing a malicious cloud provider could do is delete your files or modify them in a way that would result in sync failure.

The HTTPS protocol is used for communicating with the cloud providers. Received data is parsed using JSON.parse() or simple string manipulation functions (e.g. for parsing HTTP headers), so no remote code execution attacks are possible.

User interface

PfP code doesn’t contain any common mistakes, like listening to keyboard events from a webpage context or otherwise executing sensitive code in an untrusted environment.

I found one minor security vulnerability: external links with a target="_blank" attribute (only used in the “online” PfP version on its website). Such links should have a rel="noopener" attribute to prevent attacks where the newly opened tab redirects the original tab to a different URL. (A more detailed description can be found here.) This was fixed in PfP 2.2.2 (a web-only release, as the issue didn't affect browser extension users).


Mozilla Open Policy & Advocacy Blog: Charting a new course for tech competition

Mozilla planet - to, 26/09/2019 - 13:56

As the internet has become more and more centralized, more opportunity for anticompetitive gatekeeping behavior has arisen. Yet competition and antitrust law have struggled to keep up, and all around the world, governments are reviewing their legal frameworks to consider what they can do.

Today, Mozilla released a working paper discussing the unique characteristics of digital platforms in the context of competition and offering a new framework for future-proof competition policy for the internet. Charting a course distinct from both the status quo and pure structural reform, the paper proposes stronger single-firm conduct enforcement to capture a modern set of harmful gatekeeping behaviors by powerful firms; tougher merger review, particularly for vertical mergers, to weigh the full spectrum of potential competitive harm; and faster agency processes that can be responsive within the rapid market cycles of tech. And across all competition policy making and enforcement, the paper proposes that standards and interoperability be at the center.

The internet’s unique formula for innovation and productive disruption depends on market entry and growth, both put at risk as centralized access to data and networks becomes more and more of an insurmountable advantage. But we can see a light at the end of the silo, if legislators and competition authorities embrace their duty to internet users and modernize their legal and policy frameworks to respond to today’s challenges and to protect the core of what has made the internet such a powerful engine for socioeconomic benefit.



The Rust Programming Language Blog: Announcing Rust 1.38.0

Mozilla planet - to, 26/09/2019 - 02:00

The Rust team is happy to announce a new version of Rust, 1.38.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.38.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.38.0 stable

The highlight of this release is pipelined compilation.

Pipelined compilation

To compile a crate, the compiler doesn't need the dependencies to be fully built. Instead, it just needs their "metadata" (i.e. the list of types, dependencies, exports...). This metadata is produced early in the compilation process. Starting with Rust 1.38.0, Cargo will take advantage of this by automatically starting to build dependent crates as soon as metadata is ready.

While the change doesn't have any effect on builds for a single crate, during testing we got reports of 10-20% compilation speed increases for optimized, clean builds of some crate graphs. Other ones did not improve much, and the speedup depends on the hardware running the build, so your mileage might vary. No code changes are needed to benefit from this.

Linting some incorrect uses of mem::{uninitialized, zeroed}

As previously announced, std::mem::uninitialized is essentially impossible to use safely. Instead, MaybeUninit<T> should be used.
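
For comparison, a minimal sketch of the MaybeUninit pattern (the value 42 is purely illustrative):

use std::mem::MaybeUninit;

fn main() {
    // Reserve uninitialized space explicitly, then initialize it:
    let mut slot = MaybeUninit::<u32>::uninit();
    unsafe {
        slot.as_mut_ptr().write(42);
    }
    // Safety: the value was fully initialized just above.
    let value = unsafe { slot.assume_init() };
    assert_eq!(value, 42);
}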

We have not yet deprecated mem::uninitialized; this will be done in a future release. Starting in 1.38.0, however, rustc will provide a lint for a narrow class of incorrect initializations using mem::uninitialized or mem::zeroed.

It is undefined behavior for some types, such as &T and Box<T>, to ever contain an all-0 bit pattern, because they represent pointer-like objects that cannot be null. It is therefore an error to use mem::uninitialized or mem::zeroed to initialize one of these types, and the new lint will attempt to warn whenever either function is used to do so, whether directly or via a member of a larger struct. The check is recursive, so the following code will emit a warning:

struct Wrap<T>(T);
struct Outer(Wrap<Wrap<Wrap<Box<i32>>>>);
struct CannotBeZero {
    outer: Outer,
    foo: i32,
    bar: f32,
}

...

let bad_value: CannotBeZero = unsafe { std::mem::uninitialized() };

Astute readers may note that Rust has more types that cannot be zero, notably NonNull<T> and NonZero<T>. For now, initialization of these structs with mem::uninitialized or mem::zeroed is not linted against.

These checks do not cover all cases of unsound use of mem::uninitialized or mem::zeroed; they merely help identify code that is definitely wrong. All code should still be moved to use MaybeUninit instead.

#[deprecated] macros

The #[deprecated] attribute, first introduced in Rust 1.9.0, allows crate authors to notify their users that an item of their crate is deprecated and will be removed in a future release. Rust 1.38.0 extends the attribute, allowing it to be applied to macros as well.
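
For example, a macro can now carry a deprecation notice (the macro names here are made up):

#[deprecated(since = "1.2.0", note = "use the new_thing! macro instead")]
#[macro_export]
macro_rules! old_thing {
    () => {};
}

Invoking old_thing!() will then produce a deprecation warning at the call site, just as it would for a deprecated function.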

std::any::type_name

For debugging, it is sometimes useful to get the name of a type. For instance, in generic code, you may want to see, at run-time, what concrete types a function's type parameters have been instantiated with. This can now be done using std::any::type_name:

fn gen_value<T: Default>() -> T {
    println!("Initializing an instance of {}", std::any::type_name::<T>());
    Default::default()
}

fn main() {
    let _: i32 = gen_value();
    let _: String = gen_value();
}

This prints:

Initializing an instance of i32
Initializing an instance of alloc::string::String

Like all standard library functions intended only for debugging, the exact contents and format of the string are not guaranteed. The value returned is only a best-effort description of the type; multiple types may share the same type_name value, and the value may change in future compiler releases.

Library changes

Additionally, a number of standard library functions have been stabilized in this release; the detailed release notes list them all.

Other changes

There are other changes in the Rust 1.38 release: check out what changed in Rust, Cargo, and Clippy.

Corrections

A previous version of this post mistakenly marked Duration::div_duration_f32 and Duration::div_duration_f64 as stable. They are not yet stable.

Contributors to 1.38.0

Many people came together to create Rust 1.38.0. We couldn't have done it without all of you. Thanks!


QMO: Firefox 70 Beta 10 Testday, September 27th

Mozilla planet - wo, 25/09/2019 - 14:32

Hello Mozillians,

We are happy to let you know that on Friday, September 27th, we are organizing the Firefox 70 Beta 10 Testday. We’ll be focusing our testing on: Password Manager.

Check out the detailed instructions via this gdoc.

*Note that these events are no longer held on Etherpad docs since public.etherpad-mozilla.org was disabled.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

