
The Talospace Project: Firefox 74 on POWER

Mozilla planet - Thu, 12/03/2020 - 05:35
So far another uneventful release on ppc64le; I'm typing this blog post in Fx74. Most of what's new in this release is under the hood, and there are no OpenPOWER specific changes (I need to sit down with some of my other VMX/VSX patches and prep them for upstream). The working debug and optimized .mozconfigs are unchanged from Firefox 67.

The Rust Programming Language Blog: Announcing Rust 1.42.0

Mozilla planet - Thu, 12/03/2020 - 01:00

The Rust team is happy to announce a new version of Rust, 1.42.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.42.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.42.0 on GitHub.

What's in 1.42.0 stable

The highlights of Rust 1.42.0 include: more useful panic messages when unwrapping, subslice patterns, the deprecation of Error::description, and more. See the detailed release notes to learn about other changes not covered by this post.

Useful line numbers in Option and Result panic messages

In Rust 1.41.1, calling unwrap() on an Option::None value would produce an error message looking something like this:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /.../src/libcore/macros/mod.rs:15:40

Similarly, the line numbers in the panic messages generated by unwrap_err, expect, and expect_err, and the corresponding methods on the Result type, also refer to core internals.

In Rust 1.42.0, all eight of these functions produce panic messages that provide the line number where they were invoked. The new error messages look something like this:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/main.rs:2:5

This means that the invalid call to unwrap was on line 2 of src/main.rs.

This behavior is made possible by an annotation, #[track_caller]. This annotation is not yet available to use in stable Rust; if you are interested in using it in your own code, you can follow its progress by watching this tracking issue.
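
Here is a minimal sketch of how the annotation works; at the time of this release it requires a nightly compiler and a feature gate, and the function below is a made-up example:

#![feature(track_caller)] // unstable as of Rust 1.42

#[track_caller]
fn must_be_positive(n: i32) -> i32 {
    if n <= 0 {
        // With #[track_caller], this panic is reported at the *caller's*
        // location rather than at this line.
        panic!("expected a positive number, got {}", n);
    }
    n
}

fn main() {
    must_be_positive(-1); // the panic message points at this line
}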

Subslice patterns

In Rust 1.26, we stabilized "slice patterns," which let you match on slices. They looked like this:

fn foo(words: &[&str]) {
    match words {
        [] => println!("empty slice!"),
        [one] => println!("one element: {:?}", one),
        [one, two] => println!("two elements: {:?} {:?}", one, two),
        _ => println!("I'm not sure how many elements!"),
    }
}

This allowed you to match on slices, but was fairly limited. You had to choose the exact sizes you wished to support, and had to have a catch-all arm for sizes you didn't want to support.

In Rust 1.42, we have expanded support for matching on parts of a slice:

fn foo(words: &[&str]) {
    match words {
        ["Hello", "World", "!", ..] => println!("Hello World!"),
        ["Foo", "Bar", ..] => println!("Baz"),
        rest => println!("{:?}", rest),
    }
}

The .. is called a "rest pattern," because it matches the rest of the slice. The above example uses the rest pattern at the end of a slice, but you can also use it in other ways:

fn foo(words: &[&str]) {
    match words {
        // Ignore everything but the last element, which must be "!".
        [.., "!"] => println!("!!!"),

        // `start` is a slice of everything except the last element, which must be "z".
        [start @ .., "z"] => println!("starts with: {:?}", start),

        // `end` is a slice of everything but the first element, which must be "a".
        ["a", end @ ..] => println!("ends with: {:?}", end),

        rest => println!("{:?}", rest),
    }
}

If you're interested in learning more, we published a post on the Inside Rust blog discussing these changes as well as more improvements to pattern matching that we may bring to stable in the future! You can also read more about slice patterns in Thomas Hartmann's post.

matches!

This release of Rust stabilizes a new macro, matches!. This macro accepts an expression and a pattern, and returns true if the pattern matches the expression. In other words:

// Using a match expression:
match self.partial_cmp(other) {
    Some(Less) => true,
    _ => false,
}

// Using the `matches!` macro:
matches!(self.partial_cmp(other), Some(Less))

You can also use features like | patterns and if guards:

let foo = 'f';
assert!(matches!(foo, 'A'..='Z' | 'a'..='z'));

let bar = Some(4);
assert!(matches!(bar, Some(x) if x > 2));

use proc_macro::TokenStream; now works

In Rust 2018, we removed the need for extern crate. But procedural macros were a bit special, and so when you were writing a procedural macro, you still needed to say extern crate proc_macro;.

In this release, if you are using Cargo, you no longer need this line when working with the 2018 edition; you can use use like any other crate. Given that most projects will already have a line similar to use proc_macro::TokenStream;, this change will mean that you can delete the extern crate proc_macro; line and your code will still work. This change is small, but brings procedural macros closer to regular code.
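
As a concrete illustration, here is a minimal sketch of a function-like procedural macro on the 2018 edition; the noop macro is a made-up example, and the crate still needs proc-macro = true in its Cargo.toml:

// lib.rs of a proc-macro crate: note there is no `extern crate proc_macro;` line.
use proc_macro::TokenStream;

// A do-nothing macro that returns its input tokens unchanged.
#[proc_macro]
pub fn noop(input: TokenStream) -> TokenStream {
    input
}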

Libraries

Stabilized APIs

Other changes

There are other changes in the Rust 1.42.0 release: check out what changed in Rust, Cargo, and Clippy.

Compatibility Notes

We have two notable compatibility notes this release: a deprecation in the standard library, and a demotion of 32-bit Apple targets to Tier 3.

Error::description is deprecated

Sometimes, mistakes are made. The Error::description method is now considered to be one of those mistakes. The problem is with its type signature:

fn description(&self) -> &str

Because description returns a &str, it is not nearly as useful as we wished it would be. This means that you basically need to return the contents of an Error verbatim; if you wanted to, say, use formatting to produce a nicer description, that is impossible: you'd need to return a String. Instead, error types should implement the Display/Debug traits to provide the description of the error.
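
As a sketch of the recommended pattern (with a made-up error type): derive Debug, implement Display, and leave description to its default:

use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct ParseError {
    line: usize,
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Unlike `description`, we are free to format a useful message here.
        write!(f, "parse error on line {}", self.line)
    }
}

// No `description` needed; the deprecated default suffices.
impl Error for ParseError {}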

This API has existed since Rust 1.0. We've been working towards this goal for a long time: back in Rust 1.27, we "soft deprecated" this method. What that meant in practice was, we gave the function a default implementation. This means that users were no longer forced to implement this method when implementing the Error trait. In this release, we marked it as actually deprecated and took some steps to de-emphasize the method in Error's documentation. Due to our stability policy, description will never be removed, and so this is as far as we can go.

Downgrading 32-bit Apple targets

Apple is no longer supporting 32-bit targets, and so, neither are we. They have been downgraded to Tier 3 support by the project. For more details on this, check out this post from back in January, which covers everything in detail.

Contributors to 1.42.0

Many people came together to create Rust 1.42.0. We couldn't have done it without all of you. Thanks!


Daniel Stenberg: curl 7.69.1 – better patch than sorry

Mozilla planet - Wed, 11/03/2020 - 07:50

This release comes but 7 days since the previous and is a patch release only, hence called 7.69.1.

Numbers

the 190th release
0 changes
7 days (total: 8,027)
27 bug fixes (total: 5,938)
48 commits (total: 25,405)
0 new public libcurl function (total: 82)
0 new curl_easy_setopt() option (total: 270)
0 new curl command line option (total: 230)
19 contributors, 6 new (total: 2,133)
7 authors, 1 new (total: 772)
0 security fixes (total: 93)
0 USD paid in Bug Bounties

Unplanned patch release

Quite obviously this release was not shipped aligned with our standard 8-week cycle. The reason is that we had too many semi-serious or at least annoying bugs that were reported early on after the 7.69.0 release last week. They made me think our users would appreciate a quick follow-up that addresses them. See below for more details on some of those flaws.

How can this happen in a project that soon is 22 years old, that has thousands of tests, dozens of developers and 70+ CI jobs for every single commit?

The short answer is that we don’t have enough tests that cover enough use cases and transfer scenarios, or put another way: curl and libcurl are very capable tools that can deal with a nearly infinite number of different combinations of protocols, transfers and bytes over the wire. It is really hard to cover all cases.

Also, an old wisdom that we learned many years ago is that our code only gets properly and widely used and tested the moment we do a release, and not before. Everything can look good in pre-releases among all the involved developers, but only once the entire world gets its hands on the new release does it really get to show what it can or cannot do.

This time, a few of the changes we had landed for 7.69.0 were not good enough. We then go back, fix issues, land updates and we try again. So here comes 7.69.1 – better patch than sorry!

Bug-fixes

As the numbers above show, we managed to land an amazing number of bug-fixes in this very short time. Here are seven of the more important ones, from my point of view! Not all of them were regressions or even reported in 7.69.0, some of them were just ripe enough to get landed in this release.

unpausing HTTP/2 transfers

When I fixed the pausing and unpausing of HTTP/2 streams for 7.69.0, the fix was inadequate for several of the more advanced use cases and unfortunately we don’t have good enough tests to detect those. At least two browsers built to use libcurl for their HTTP engines reported stalled HTTP/2 transfers due to this.

I reverted the previous change and I’ve landed a different take that seems to be a more appropriate one, based on early reports.

pause: cleanups

After I had modified the curl_easy_pause function for 7.69.0, we also got reports about crashes with uses of this function.

It made me do some additional cleanups to make it more resilient to bad uses from applications, both when called without a correct handle or when it is called to just set the same pause state it is already in.

socks: connection regressions

I was so happy with my overhauled SOCKS connection code in 7.69.0 where it was made entirely non-blocking. But again it turned out that our test cases for this weren’t entirely mimicking the real world so both SOCKS4 and SOCKS5 connections where curl does the name resolving could easily break. The test cases probably worked fine there because they always resolve the host name really quickly and locally.

SOCKS4 connections are now also forced to be done over IPv4 only, as that was also something that could trigger a funny error – the protocol doesn’t support IPv6, you need to go to SOCKS5 for that!

Both version 4 and 5 of the SOCKS proxy protocol have options to allow the proxy to resolve the server name or you can have the client (curl) do it. (Somewhat described in the CURLOPT_PROXY man page.) These problems were found for the cases when curl resolves the server name.

libssh: MD5 hex comparison

For application users of the libcurl CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option, which is used to verify that curl connects to the right server, this change makes sure that the libssh backend does the right thing and acts exactly like the libssh2 backend does and how the documentation says it works…

libssh2: known hosts crash

In a recent change, libcurl will try to set a preferred method for the knownhost matching libssh2 provides when connecting to an SSH server, but the code unfortunately contained an easily triggered NULL pointer dereference that no review caught and obviously no test either!

c-ares: duphandle copies DNS servers too

curl_easy_duphandle() duplicates a libcurl easy handle and is frequently used by applications. It turns out we broke a little piece of the function back in 7.63.0 as a few DNS server options haven’t been duplicated properly since then. Fixed now!

curl_version: thread-safer

The curl_version and curl_version_info functions are now both thread-safe without the use of any global context. One issue less left for having a completely thread-safe future curl_global_init.

Schedule for next release

This was an out-of-schedule release but the plan is to stick to the established release schedule, which will have the effect that the coming release window will be one week shorter than usual and the full cycle will complete in 7 weeks instead of 8.


Robert O'Callahan: Debugging Gdb Using rr: Ptrace Emulation

Mozilla planet - Wed, 11/03/2020 - 00:20

Someone tried using rr to debug gdb and reported an rr issue because it didn't work. With some effort I was able to fix a couple of bugs and get it working for simple cases. Using improved debuggers to improve debuggers feels good!

The main problem when running gdb under rr is the need to emulate ptrace. We had the same problem when we wanted to debug rr replay under rr. In Linux a process can only have a single ptracer. rr needs to ptrace all the processes it's recording — in this case gdb and the process(es) it's debugging. Gdb needs to ptrace the process(es) it's debugging, but they can't be ptraced by both gdb and rr. rr circumvents the problem by emulating ptrace: gdb doesn't really ptrace its debuggees, as far as the kernel is concerned, but instead rr emulates gdb's ptrace calls. (I think in principle any ptrace user, e.g. gdb or strace, could support nested ptracing in this way, although it's a lot of work so I'm not surprised they don't.)

Most of the ptrace machinery that gdb needs already worked in rr, and we have quite a few ptrace tests to prove it. All I had to do to get gdb working for simple cases was to fix a couple of corner-case bugs. rr has to synthesize SIGCHLD signals sent to the emulated ptracer; these signals weren't interacting properly with sigsuspend. For some reason gdb spawns a ptraced process, then kills it with SIGKILL and waits for it to exit; that wait has to be emulated by rr because in Linux regular "wait" syscalls can only wait for a non-child process if the waiter is ptracing the target process, and under rr gdb is not really the ptracer, so the native wait doesn't work. We already had logic for that, but it wasn't working for process exits triggered by signals, so I had to rework that, which was actually pretty hard (see the rr issue for horrible details).

After I got gdb working I discovered it loads symbols very slowly under rr. Every time gdb demangles a symbol it installs (and later removes) a SIGSEGV handler to catch crashes in the demangler. This is very sad and does not inspire trust in the quality of the demangling code, especially if some of those crashes involve potentially corrupting memory writes. It is slow under rr because rr's default syscall handling path makes cheap syscalls like rt_sigaction a lot more expensive. We have the "syscall buffering" fast path for the most frequent syscalls, but supporting rt_sigaction along that path would be rather complicated, and I don't think it's worth doing at the moment, given you can work around the problem using maint set catch-demangler-crashes off. I suspect that (with KPTI especially) doing 2-3 syscalls per symbol demangle (sigprocmask is also called) hurts gdb performance even without rr, so ideally someone would fix that. Either fix the demangling code (possibly writing it in a safe language like Rust), or batch symbol demangling to avoid installing and removing a signal handler thousands of times, or move it to a child process and talk to it asynchronously over IPC — safer too!


Hacks.Mozilla.Org: Security means more with Firefox 74

Mozilla planet - Tue, 10/03/2020 - 16:13

Today sees the release of Firefox number 74. The most significant new features we’ve got for you this time are security enhancements: Feature Policy, the Cross-Origin-Resource-Policy header, and removal of TLS 1.0/1.1 support. We’ve also got some new CSS text property features, the JS optional chaining operator, and additional 2D canvas text metric features, along with the usual wealth of DevTools enhancements and bug fixes.

As always, read on for the highlights, or find the full list of additions in the following articles:

Note: In the Security enhancements section below, we detail the removal of TLS 1.0/1.1 in Firefox 74; however, we have reverted this change for an undetermined amount of time, to better enable access to critical government sites sharing COVID19 information. We are keeping the information below intact because it is still useful to give you an idea of our future intentions. (Updated Monday, 30 March.)

Security enhancements

Let’s look at the security enhancements we’ve got in 74.

Feature Policy

We’ve finally enabled Feature Policy by default. You can now use the <iframe> allow attribute and the Feature-Policy HTTP header to set feature permissions for your top level documents and IFrames. Syntax examples follow:

<iframe src="https://example.com" allow="fullscreen"></iframe>

Feature-Policy: microphone 'none'; geolocation 'none'

CORP

We’ve also enabled support for the Cross-Origin-Resource-Policy (CORP) header, which allows web sites and applications to opt in to protection against certain cross-origin requests (such as those coming from <script> and <img> elements). This can help to mitigate speculative side-channel attacks (like Spectre and Meltdown) as well as Cross-Site Script Inclusion attacks.

The available values are same-origin and same-site. same-origin only allows requests that share the same scheme, host, and port to read the relevant resource. This provides an additional level of protection beyond the web’s default same-origin policy. same-site only allows requests that share the same site.

To use CORP, set the header to one of these values, for example:

Cross-Origin-Resource-Policy: same-site

TLS 1.0/1.1 removal

Last but not least, Firefox 74 sees the removal of TLS 1.0/1.1 support, to help raise the overall level of security of the web platform. This is vital for moving the TLS ecosystem forward, and getting rid of a number of vulnerabilities that existed as a result of TLS 1.0/1.1 not being as robust as we’d really like — they’re in need of retirement.

The change was first announced in October 2018 as a shared initiative of Mozilla, Google, Microsoft, and Apple. Now in March 2020 we are all acting on our promises (with the exception of Apple, who will be making the change slightly later on).

The upshot is that you’ll need to make sure your web server supports TLS 1.2 or 1.3 going forward. Read TLS 1.0 and 1.1 Removal Update to find out how to test and update your TLS/SSL configuration. From now on, Firefox will return a Secure Connection Failed error when connecting to servers using the older TLS versions. Upgrade now, if you haven’t already!

secure connection failed error message, due to connected server using TLS 1.0 or 1.1

Note: For a couple of release cycles (and longer for Firefox ESR), the Secure Connection Failed error page will feature an override button allowing you to Enable TLS 1.0 and 1.1 in cases where a server is not yet upgraded, but you won’t be able to rely on it for too long.

To find out more about TLS 1.0/1.1 removal and the background behind it, read It’s the Boot for TLS 1.0 and TLS 1.1.

Other web platform additions

We’ve got a host of other web platform additions for you in 74.

New CSS text features

For a start, the text-underline-position property is enabled by default. This is useful for positioning underlines set on your text in certain contexts to achieve specific typographic effects.

For example, if your text is in a horizontal writing mode, you can use text-underline-position: under; to put the underline below all the descenders, which is useful for ensuring legibility with chemical and mathematical formulas, which make frequent use of subscripts.

.horizontal { text-underline-position: under; }

In text with a vertical writing-mode set, we can use values of left or right to make the underline appear to the left or right of the text as required.

.vertical { writing-mode: vertical-rl; text-underline-position: left; }

In addition, the text-underline-offset and text-decoration-thickness properties now accept percentage values, for example:

text-decoration-thickness: 10%;

For these properties, this is a percentage of 1em in the current font’s size.

Optional chaining in JavaScript

We now have the JavaScript optional chaining operator (?.) available. When you are trying to access an object deep in a chain, this allows for implicit testing of the existence of the objects higher up in the chain, avoiding errors and the need to explicitly write testing code.

let nestedProp = obj.first?.second;

New 2D canvas text metrics

The TextMetrics interface (retrieved using the CanvasRenderingContext2D.measureText() method) has been extended to contain four more properties measuring the actual bounding box — actualBoundingBoxLeft, actualBoundingBoxRight, actualBoundingBoxAscent, and actualBoundingBoxDescent.

For example:

const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
const text = ctx.measureText('Hello world');
text.width;                    // 56.08333206176758
text.actualBoundingBoxAscent;  // 8
text.actualBoundingBoxDescent; // 0
text.actualBoundingBoxLeft;    // 0
text.actualBoundingBoxRight;   // 55.733333333333334

DevTools additions

Next up, DevTools additions.

Device-like rendering in Responsive Design Mode

While Firefox for Android is being relaunched with GeckoView to be faster and more private, the DevTools need to stay ahead. Testing on mobile should be as frictionless as possible, both when using Responsive Design Mode on your desktop and on-device with Remote Debugging.

Correctness is important for Responsive Design Mode, so developers can trust the output without a device at hand. Over the past releases, we rolled out major improvements that ensure meta viewport is correctly applied with Touch Simulation. This ties in with improved device presets, which automatically enable touch simulation for mobile devices.

animated gif showing how responsive design mode now represents view meta settings better

Fun fact: The team managed to make this simulation so accurate that it has already helped to identify and fix rendering bugs for Firefox on Android.

DevTools Tip: Open Responsive Design Mode without DevTools via the tools menu or Ctrl + Shift + M on Windows/Cmd + Opt + M on macOS.

We’d love to hear about your experiences when giving your site a spin in RDM or on your Android phone with Firefox Nightly for Developers.

CSS tools that work for you

The Page Inspector’s new in-context warnings for inactive CSS rules have received a lot of positive feedback. They help you solve gnarly CSS issues while teaching you about the intricate interdependencies of CSS rules.

Since its launch, we have continued to tweak and add rules, often based on user feedback. One highlight for 74 is a new detection setting that warns you when properties depend on positioned elements – namely z-index, top, left, bottom, and right.

Firefox Page Inspector now showing inactive position-related properties such as z-index and top

Your feedback will help to further refine and expand the rules. Say hi to the team in the DevTools chat on Mozilla’s Matrix instance or follow our work via @FirefoxDevTools.

Debugging for Nested Workers

Firefox’s JavaScript Debugger team has been focused on optimizing Web Workers over the past few releases to make them easier to inspect and debug. The more developers and frameworks that use workers to move processing off the main thread, the easier it will be for browsers to prioritize running code that is fired as a result of user input actions.

Nested web workers, which allow workers to spawn and control their own worker instances, are now displayed in the Debugger:

Firefox JavaScript debugger now shows nested workers

Improved React DevTools integration

The React Developer Tools add-on is one of many developer add-ons that integrate tightly with Firefox DevTools. Thanks to the WebExtensions API, developers can create and publish add-ons for all browsers from the same codebase.

In collaboration with the React add-on maintainers, we worked to re-enable and improve the context menus in the add-on, including Go to definition. This action lets developers jump from React Components directly to their source files in the Debugger. The same functionality has already been enabled for jumping to elements in the Inspector. We want to build this out further, to make framework workflows seamless with the rest of the tools.

Early-access DevTools features in Developer Edition

Developer Edition is Firefox’s pre-release channel which gets early access to tooling and platform features. Its settings also enable more functionality for developers by default. We like to bring new features quickly to Developer Edition to gather your feedback, including the following highlights.

Instant evaluation for Console expressions

Exploring JavaScript objects, functions, and the DOM feels like magic with instant evaluation. As long as expressions typed into the Web Console are side-effect free, their results will be previewed while you type, allowing you to identify and fix errors more rapidly than before.

Async Stack Traces for Debugger & Console

Modern JavaScript code depends heavily upon stacking async/await on top of other async operations like events, promises, and timeouts. Thanks to better integration with the JavaScript engine, async execution is now captured to give a more complete picture.

Async call stacks in the Debugger let you step through events, timeouts, and promise-based function calls that are executed over time. In the Console, async stacks make it easier to find the root causes of errors.

async call stack shown in the Firefox JavaScript debugger

Sneak-peek Service Worker Debugging

This one has been in Nightly for a while, and we are more than excited to get it into your hands soon. Expect it in Firefox 76, which will become Developer Edition in 4 weeks.

The post Security means more with Firefox 74 appeared first on Mozilla Hacks - the Web developer blog.


Mozilla Addons Blog: Support for extension sideloading has ended

Mozilla planet - Tue, 10/03/2020 - 16:10

Today marks the release of Firefox 74 and as we announced last fall, developers will no longer be able to install extensions without the user taking an action. This installation method was typically done through application installers, and is commonly referred to as “sideloading.”

If you are the developer of an extension that installs itself via sideloading, please make sure that your users can install the extension from your own website or from addons.mozilla.org (AMO).

We heard several questions about how the end of sideloading support affects users and developers, so we wanted to clarify what to expect from this change:

  1. Starting with Firefox 74, users will need to take explicit action to install the extensions they want, and will be able to remove previously sideloaded extensions when they want to.
  2. Previously installed sideloaded extensions will not be uninstalled for users when they update to Firefox 74. If a user no longer wants an extension that was sideloaded, they must uninstall the extension themselves.
  3. Firefox will prevent new extensions from being sideloaded.
  4. Developers will be able to push updates to extensions that had previously been sideloaded. (If you are the developer of a sideloaded extension and you are now distributing your extension through your website or AMO, please note that you will need to update both the sideloaded .xpi and the distributed .xpi; updating one will not update the other.)

Enterprise administrators and people who distribute their own builds of Firefox (such as some Linux and Selenium distributions) will be able to continue to deploy extensions to users. Enterprise administrators can do this via policies. Additionally, Firefox Extended Support Release (ESR) will continue to support sideloading as an extension installation method.

We will continue to support self-distributed extensions. This means that developers aren’t required to list their extensions on AMO and users can install extensions from sites other than AMO. Developers just won’t be able to install extensions without the user taking an action. Users will also continue being able to manually install extensions.

We hope this helps clear up any confusion from our last post. If you’re a user who has had difficulty uninstalling sideloaded extensions in the past, we hope that you will find it much easier to remove unwanted extensions with this update.

The post Support for extension sideloading has ended appeared first on Mozilla Add-ons Blog.


This Week In Rust: This Week in Rust 329

Mozilla planet - Tue, 10/03/2020 - 05:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is plotly, a plotly.js-backed plotting library.

Thanks to Ioannis Giagkiozis for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

302 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I have no idea how to debug Rust, because in 2 years of Rust, I haven't had that type of low level bug.

papaf on hacker news

Thanks to zrk for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.


Niko Matsakis: Async Interview #7: Withoutboats

Mozilla planet - Tue, 10/03/2020 - 05:00

Hello everyone! I’m happy to be posting a transcript of my async interview with withoutboats. This particular interview took place way back on January 14th, but the intervening months have been a bit crazy and I didn’t get around to writing it up till now.

Video

You can watch the video on YouTube.

Next steps for async

Before I go into boats’ interview, I want to talk a bit about the state of async-await in Rust and what I see as the obvious next steps. I may still do a few more async interviews after this – there are tons of interesting folks I never got to speak to! – but I think it’s also past time to try and come to a consensus on an “async roadmap” for the rest of the year (and maybe some of 2021, too). The good news is that I feel like the async interviews highlighted a number of relatively clear next steps. Sometime after this post, I hope to post a blog post laying out a “rough draft” of what such a roadmap might look like.

History

withoutboats is a member of the Rust lang team. Starting around the beginning of 2018, they started looking into async-await for Rust. Everybody knew that we wanted to have some way to write a function that could suspend (await) as needed. But we were stuck on a rather fundamental problem which boats explained in the blog post “self-referential structs”. This blog post was the first in a series of posts that ultimately documented the design that became the Pin type, which describes a pointer to a value that can never be moved to another location in memory. Pin became the foundation for async functions in Rust. (If you’ve not read the blog post series, it’s highly recommended.) If you’d like to learn more about pin, boats posted a recorded stream on YouTube that explores its design in detail.

Vision for async

All along, boats has been motivated by a relatively clear vision: we should make async Rust “just as nice to use” as Rust with blocking I/O. In short, you should be able to write code much like you ever did, but turning the functions which perform I/O into async fns and then adding await here or there as needed.

Since 2018, we’ve made great progress towards the goal of “async I/O that is as easy as sync” – most notably by landing and stabilizing the async-await MVP – but we’re not there yet. There remain a number of practical obstacles that make writing code using async I/O more difficult than sync I/O. So the mission for the next few years is to identify those obstacles and dismantle them, one by one.

Next step: async destructors

One of the first obstacles that boats mentioned was extending Rust’s Drop trait to work better for async code. The Drop trait, for those who don’t know Rust, is a special trait in Rust that types can implement in order to declare a destructor (code which should run when a value goes out of scope). boats wrote a blog post that discusses the problem in more detail and proposes a solution. Since that blog post, they’ve refined the proposal in response to some feedback, though the overall shape remains the same. The basic idea is to extend the Drop trait with an optional poll_drop_ready method:

trait Drop {
    fn drop(&mut self);

    fn poll_drop_ready(
        self: Pin<&mut Self>,
        ctx: &mut Context<'_>,
    ) -> Poll<()> {
        Poll::Ready(())
    }
}

When executing an async fn, and a value goes out of scope, we will first invoke poll_drop_ready, and “await” if it returns anything other than Poll::Ready. This gives the value a chance to do async operations that may block, in preparation for the final drop. Once Poll::Ready is returned, the ordinary drop method is invoked.

This async-drop trait came up in early async interviews, and I raised Eliza’s use case with boats. Specifically, she wanted some way to offer values that are live on the stack a callback when a yield occurs and when the function is resumed, so that they can (e.g.) interact with thread-local state correctly in an async context. While distinct from async destructors, the issues are related because destructors are often used to manage thread-local values in a scoped fashion.

Adding async drop requires not only modifying the compiler but also modifying futures combinators to properly handle the new poll_drop_ready method (combinators need to propagate this poll_drop_ready to the sub-futures they contain).

Note that we wouldn’t offer any ‘guarantee’ that poll_drop_ready will run. For example, it would not run if a future is dropped without being resumed, because then there is no “async context” that can handle the awaits. However, like Drop, it would ultimately be something that types can “usually” expect to execute under ordinary circumstances.

Some of the use cases for async-drop include writers that buffer data and wish to ensure that the data is flushed out when the writer is dropped, transactional APIs, or anything that might do I/O when dropped.
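
To make the shape concrete, here is a sketch of what a buffered writer might do. Since poll_drop_ready is only a proposal, the code uses a stand-in trait of the same shape, and BufferedWriter is a made-up type:

use std::pin::Pin;
use std::task::{Context, Poll};

// Stand-in for the proposed extension to `Drop`; not a real std API.
trait ProposedAsyncDrop {
    fn poll_drop_ready(self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<()>;
}

struct BufferedWriter {
    buffer: Vec<u8>,
}

impl ProposedAsyncDrop for BufferedWriter {
    fn poll_drop_ready(self: Pin<&mut Self>, _ctx: &mut Context<'_>) -> Poll<()> {
        // `BufferedWriter` is `Unpin`, so we can take a plain `&mut`.
        let this = self.get_mut();
        if this.buffer.is_empty() {
            // Nothing left to flush; the ordinary `drop` may now run.
            Poll::Ready(())
        } else {
            // A real writer would kick off an async flush here and return
            // Poll::Pending (with a registered waker) until it completes.
            this.buffer.clear();
            Poll::Ready(())
        }
    }
}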

block_on in the std library

One very small addition that boats proposed is adding block_on to the standard library. Invoking block_on(future) would block the current thread until future has been fully executed (and then return the resulting value). This is actually something that most async I/O code would never want to do – if you want to get the value from a future, after all, you should do future.await. So why is block_on useful?

Well, block_on is basically the most minimal executor. It allows you to take async code and run it in a synchronous context with minimal fuss. It’s really convenient in examples and documentation. I would personally like it to permit writing stand-alone test cases. Those reasons alone are probably good enough justification to add it, but boats has another use in mind as well.
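
Since block_on already exists in the futures crate today, a sketch of the kind of usage in question is easy to give:

use futures::executor::block_on;

async fn answer() -> u32 {
    42
}

fn main() {
    // Block the current thread until the future completes.
    let value = block_on(answer());
    assert_eq!(value, 42);
}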

async fn main

Every Rust program ultimately begins with a main somewhere. Because main is invoked by the surrounding C library to start the program, it also tends to be a place where a certain amount of “boilerplate code” can accumulate in order to “set up” the environment for the rest of the program. This “boilerplate setup” can be particularly annoying when you’re just getting started with Rust, as the main function is often the first one you write, and it winds up working differently than the others. A similar problem affects smaller code examples.

In Rust 2018, we extended main so that it supports Result return values. This meant that you could now write main functions that use the ? operator, without having to add some kind of intermediate wrapper:

fn main() -> Result<(), std::io::Error> {
    let file = std::fs::File::create("output.txt")?;
    Ok(())
}

Unfortunately, async code today suffers from a similar papercut. If you’re writing an async project, most of your code is going to be async in nature: but the main function is always synchronous, which means you need to bridge the two somehow. Sometimes, especially for larger projects, this isn’t that big a deal, as you likely need to do some setup or configuration anyway. But for smaller examples, it’s quite a pain.

So boats would like to allow people to write an “async” main. This would then permit you to directly “await” futures from within the main function:

async fn main() {
    let x = load_data(22).await;
}

async fn load_data(port: usize) -> Data { ... }

Of course, this raises the question: since the program will ultimately run synchronously, how do we bridge from the async fn main to a synchronous main? This is where block_on comes in: at least to start, we can simply declare that the future generated by async fn main will be executed using block_on, which means it will block the main thread until main completes (exactly what we want). For simple programs and examples, this will be exactly what you want.

But most real programs will ultimately want to start some other executor to get more features. In fact, following the lead of the runtime crate, many executors already offer a procedural macro that lets you write an async main. So, for example, tokio and async-std offer attributes called #[tokio::main] and #[async_std::main] respectively, which means that if you have an async fn main program you can pick an executor just by adding the appropriate attribute:

#[tokio::main] // or #[async_std::main], etc
async fn main() {
    ..
}

I imagine that other executors offer a similar procedural macro – or if they don’t yet, they could add one. =)

(In fact, since async-std’s runtime starts implicitly in a background thread when you start using it, you could use async-std libraries without any additional setup as well.)

Overall, this seems pretty nice to me. Basically, when you write async fn main, you get Rust’s “default executor”, which presently is a very bare-bones executor suitable only for simple examples. To switch to a more full-featured executor, you simply add a #[foo::main] attribute and you’re off to the races!

(Side note #1: This isn’t something that boats and I talked about, but I wonder about adding a more general attribute, like #[async_runtime(foo)] that just desugars to a call like foo::main_wrapper(...), which is expected to do whatever setup is appropriate for the crate foo.)

(Side note #2: This also isn’t something that boats and I talked about, but I imagine that having a “native” concept of async fn main might help for some platforms where there is already a native executor. I’m thinking of things like GStreamer or perhaps iOS with Grand Central Dispatch. In short, I imagine there are environments where the notion of a “main function” isn’t really a great fit anyhow, although it’s possible I have no idea what I’m talking about.)

async-await in an embedded context

One thing we’ve not talked about very much in the interviews so far is using async-await in an embedded context. When we shipped the async-await MVP, we definitely cut a few corners, and one of those had to do with the use of thread-local storage (TLS). Currently, when you use async fn, the desugaring winds up using a private TLS variable to carry the Context about the current async task down through the stack. This isn’t necessary, it was just a quick and convenient hack that sidestepped some questions about how to pass in arguments when resuming a suspended function. For most programs, TLS works just fine, but some embedded environments don’t support it. Therefore, it makes sense to fix this bug and permit async fn to pass around its state without the use of TLS. (In fact, since boats and I talked, jonas-schievink opened PR #69033 which does exactly this, though it’s not yet landed.)

Async fns are implemented using a more general generator mechanism

You might be surprised when I say that we’ve already started fixing the TLS problem. After all, the reason we used TLS in the first place is that there were unresolved questions about how to pass in data when waking up a suspended function – and we haven’t resolved those problems. So why are we able to go ahead and use them to support TLS?

The answer is that, while the async fn feature is implemented atop a more general mechanism of suspendable functions[1], the full power of that mechanism is not exposed to end-users. So, for example, suspendable functions in the compiler permit yielding arbitrary values, but async functions always yield up (), since they only need to signal that they are blocked waiting on I/O, not transmit values. Similarly, the compiler’s internal mechanism will allow us to pass in a new Context when we wake up from a yield, and we can use that mechanism to pass in the Context argument from the future API. But this is hidden from the end-user, since that Context is never directly exposed or accessed.

In short, the suspended functions supported by the compiler are not a language feature: they are an implementation detail that is (currently) only used for async-await. This is really useful because it means we can change how they work, and it also means that we don’t have to make them support all possible use cases one might want. In this particular case, it means we don’t have to resolve some of the thorny questions about to pass in data after a yield, because we only need to use them in a very specific way.

Supporting generators (iterators) and async generators (streams)

One observation that boats raised is that people who write Async I/O code are interacting with Pin much more directly than was expected. The primary reason for this is that people are having to manually implement the Stream trait, which is basically the async version of an iterator. (We’ve talked about Stream in a number of previous async interviews.) I have also found that, in my conversations with users of async, streams come up very, very often. At the moment, consuming streams is generally fairly easy, but creating them is quite difficult. For that matter, even in synchronous Rust, manually implementing the Iterator traits is kind of annoying (although significantly easier than streams).
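
To see why, here is a hand-written stream that yields the numbers 1 through 3, against the Stream trait as it exists in the futures crate today; note the Pin showing up in the signature that implementors have to deal with:

use futures::stream::Stream; // not in std at the time of writing
use std::pin::Pin;
use std::task::{Context, Poll};

struct Counter {
    count: u32,
}

impl Stream for Counter {
    type Item = u32;

    fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<u32>> {
        // `Counter` is `Unpin`, so a plain `&mut` is safe to take.
        let this = self.get_mut();
        if this.count < 3 {
            this.count += 1;
            Poll::Ready(Some(this.count))
        } else {
            Poll::Ready(None)
        }
    }
}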

So, it would be nice if we had some way to make it easier to write iterators and streams. And, indeed, this design space has been carved out in other languages: the basic mechanism is to add a generator[2], which is some sort of function that can yield up a series of values before terminating. Obviously, if you’ve read up to this point, you can see that the “suspendable functions” we used to implement async await can also be used to support some form of generator abstractions, so a lot of the hard implementation work has been done here.

That said, supporting generator functions has been something that we’ve been shying away from. And why is that, if a lot of the implementation work is done? The answer is primarily that the design space is huge. I alluded to this earlier in talking about some of the questions around how to pass data in when resuming a suspended function.

Full generality considered too dang difficult

boats however contends that we are making our lives harder than they need to be. In short, if we narrow our focus from “create the perfect, flexible abstraction for suspended functions and coroutines” to “create something that lets you write iterators and streams”, then a lot of the thorny design problems go away. Now, under the covers, we still want to have some kind of unified form of suspended functions that can support async-await and generators, but that is a much simpler task.

In short, we would want to permit writing a gen fn (and async gen fn), which would be some function that is able to yield values and which eventually returns. Since the iterator’s next method doesn’t take any arguments, we wouldn’t need to support passing data in after yields (in the case of streams, we would pass in data, but only the Context values that are not directly exposed to users). Similarly, iterators and streams don’t produce a “final value” when they’re done, so these functions would always just return unit.
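
For contrast, here is the kind of hand-written state machine a gen fn could replace, using today's stable Iterator trait (Countdown is a made-up example):

struct Countdown {
    n: u32,
}

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.n == 0 {
            None
        } else {
            let current = self.n;
            self.n -= 1;
            Some(current)
        }
    }
}

A gen fn would let you write the same loop as straight-line code with yield statements, instead of encoding the state in a struct by hand.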

Adopting a more narrow focus wouldn’t close the door to exposing our internal mechanism as a first-class language feature at some point, but it would help us to solve urgent problems sooner, and it would also give us more experience to use when looking again at the more general task. It also means that we are adding features that make writing iterators and streams as easy as we can make it, which is a good thing[3]. (In case you can’t tell, I was sympathetic to boats’ argument.)


Extending the stdlib with some key traits

boats is in favor of adding the “big three” traits to the standard library (if you’ve been reading these interviews, these traits will be quite familiar to you by now):

  • AsyncRead
  • AsyncWrite
  • Stream

Stick to the core vision: Async and sync should be analogous

One important point: boats believes (and I agree) that we should try to maintain the principle that the async and synchronous versions of the traits should align as closely as possible. This matches the overarching design vision of minimizing the differences between “async Rust” and “sync Rust”. It also argues in favor of the approach that sfackler proposed in their interview, where we address the questions of how to handle uninitialized memory in an analogous way for both Read and AsyncRead.

We talked a bit about the finer details of that principle. For example, if we were to extend the Read trait with some kind of read_buf method (which can support an uninitialized output buffer), then this new method would have to have a default, for backwards compatibility reasons:

trait Read {
    fn read(&mut self, ...);

    fn read_buf(&mut self, buf: &mut BufMut<..>) { }
}

This is a bit unfortunate, as ideally you would only implement read_buf. For AsyncRead, since the trait doesn’t exist yet, we could switch the defaults. But boats pointed out that this carries costs too: we would forever have to explain why the two traits are different, for example. (Another option is to have both methods default to one another, so that you can implement either one, which – combined with a lint – might be the best of both worlds.)
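
Here is a sketch of that "defaults to one another" idea; it is illustrative only, with a plain &mut [u8] standing in for the proposal's uninitialized-buffer type:

use std::io;

// Not the real `std::io::Read`; an illustrative shape only.
trait Read2 {
    // Implementors override at least one of these. A lint would flag the
    // case where neither is overridden, since the defaults would
    // otherwise recurse into each other forever.
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.read_buf(buf)
    }

    fn read_buf(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        self.read(buf)
    }
}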

Generic interface for spawning

Some time back, boats wrote a post proposing global executors. This would basically be a way to add a function to the stdlib to spawn a task, which would then delegate (somehow) to whatever executor you are using. Based on the response to the post, boats now feels this is probably not a good short-term goal.

For one thing, there were a lot of unresolved questions about just what features this global executor should support. But for another, the main goal here is to enable libraries to write “executor independent” code, but it’s not clear how many libraries spawn tasks anyway – that’s usually done more at the application level. Libraries tend to instead return a future and let the application do the spawning (interestingly, one place this doesn’t work is in destructors, since they can’t return futures; supporting async drop, as discussed earlier, would help here.)

So it’d probably be better to revisit this question once we have more experience, particularly once we have the async I/O and stream traits available.

The futures crate

We discussed other possible additions to the standard library. There are a lot of “building blocks” currently in the futures library that are independent from executors and which could do well in the standard library. Some of the things that we talked about:

  • async-aware mutexes, clearly a useful building block
  • channels
    • though std channels are not the most loved, crossbeam's are generally preferred
    • interestingly, channel types do show up in public APIs from time to time, as a way to receive data, so having them in std could be particularly useful

In general, where things get more complex is whenever you have bits of code that either have to spawn tasks or which do the “core I/O”. These are the points where you need a more full-fledged reactor or runtime. But there are lots of utilities that don’t need that and which could profitably live in the std library.
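
As one example of such a building block, the futures crate already ships an async-aware channel; a small sketch of how one is used:

use futures::channel::mpsc;
use futures::stream::StreamExt;

async fn demo() {
    // A bounded channel with room for 8 in-flight messages.
    let (mut tx, mut rx) = mpsc::channel::<u32>(8);

    tx.try_send(1).expect("channel has capacity");
    tx.try_send(2).expect("channel has capacity");
    drop(tx); // closing the sender ends the stream

    // The receiver is a `Stream`, so it can be consumed with `.next().await`.
    while let Some(value) = rx.next().await {
        println!("got {}", value);
    }
}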

Where to put async things in the stdlib?

One theme that boats and I did not discuss, but which has come up when I’ve raised this question with others, is where to put async-aware traits in the std hierarchy, particularly when there are sync versions. For example, should we have std::io::Read and std::io::AsyncRead? Or would it be better to have std::io::Read and something like std::async::io::Read (obviously, async is a keyword, so this precise path may not be an option). In other words, should we combine sync/async traits into the same space, but with different names, or should we carve out a space for “async-enabled” traits and use the same names? An interesting question, and I don’t have an opinion yet.

Conclusion and some of my thoughts

I always enjoy talking with boats, and this time was no exception. I think boats raised a number of small, practical ideas that hadn’t come up before. I do think it’s important that, in addition to stabilizing fundamental building blocks like AsyncRead, we also consider improvements to the ergonomic experience with smaller changes like async fn main, and I agree with the guiding principle that boats raised of keeping async and sync code as “analogous” as possible.

Comments?

There is a thread on the Rust users forum for this series.

Footnotes
  1. In the compiler, we call these “suspendable functions” generators, but I’m avoiding that terminology for a reason. 

  2. This is why I was avoiding using the term “generator” earlier – I want to say “suspendable functions” when referring to the implementation mechanism, and “generator” when referring to the user-exposed feature. 

  3. though not one that a fully general mechanism necessarily precludes.


The Rust Programming Language Blog: The 2020 RustConf CFP is Now Open!

Mozilla planet - Tue, 10/03/2020 - 01:00

Greetings fellow Rustaceans!

The 2020 RustConf Call for Proposals is now open!

Got something to share about Rust? Want to talk about the experience of learning and using Rust? Want to dive deep into an aspect of the language? Got something different in mind? We want to hear from you! The RustConf 2020 CFP site is now up and accepting proposals.

If you may be interested in speaking but aren't quite ready to submit a proposal yet, we are here to help you. We will be holding speaker office hours regularly throughout the proposal process, after the proposal process, and up to RustConf itself on August 20 and 21, 2020. We are available to brainstorm ideas for proposals, talk through proposals, and provide support throughout the entire speaking journey. We need a variety of perspectives, interests, and experience levels for RustConf to be the best that it can be - if you have questions or want to talk through things please don't hesitate to reach out to us! Watch this blog for more details on speaker office hours - they will be posted very soon.

The RustConf CFP will be open through Monday, April 5th, 2020, hope to see your proposal soon!


Daniel Stenberg: curl ootw: –quote

Mozilla planet - Mon, 09/03/2020 - 09:41

Previous command line options of the week.

This option is called -Q in its short form, --quote in its long form. It has existed for as long as curl has existed.

Quote?

The name for this option originates from the traditional unix command ‘ftp’, as it typically has a command called exactly this: quote. The quote command for the ftp client is a way to send an exact command, as written, to the server. Very similar to what --quote does.

FTP, FTPS and SFTP

This option was originally made for, and supported only for, FTP transfers, but when we added support for FTPS it worked there too automatically.

When we subsequently added SFTP support, even such users occasionally have a need for this style of extra commands so we made curl support it there too. Although for SFTP we had to do it slightly differently as SFTP as a protocol can’t actually send commands verbatim to the server as we can with FTP(S). I’ll elaborate a bit more below.

Sending FTP commands

The FTP protocol is a command/response protocol for which curl needs to send a series of commands to the server in order to get the transfer done. Commands that log in, change working directories, set the correct transfer mode, etc.

Asking curl to access a specific ftp:// URL more or less converts into a command sequence.

The --quote option provides several different ways to insert custom FTP commands into the series of commands curl will issue. If you just specify a command to the option, it will be sent to the server before the transfer takes place – even before it changes working directory.

If you prefix the command with a minus (-), the command will instead be sent after a successful transfer.

If you prefix the command with a plus (+), the command will run immediately before the transfer after curl changed working directory.

As a second (!) prefix you can also opt to insert an asterisk (*) which then tells curl that it should continue even if this command would cause an error to get returned from the server.

The command itself is a string the user specifies, and it needs to be a correct FTP command because curl won’t even try to interpret it but will just send it as-is to the server.

FTP examples

For example, remove a file from the server after it has been successfully downloaded:

curl -O ftp://ftp.example/file -Q '-DELE file'

Issue a NOOP command after having logged in:

curl -O ftp://user:password@ftp.example/file -Q 'NOOP'

Rename a file remotely after a successful upload:

curl -T infile ftp://upload.example/dir/ -Q "-RNFR infile" -Q "-RNTO newname"

Sending SFTP commands

Despite sounding similar, SFTP is a very different protocol than FTP(S). With SFTP the access is much more low level than FTP and there’s not really a concept of command and response. Still, we’ve created a set of command for the --quote option for SFTP that lets the users sort of pretend that it works the same way.

Since there is no sending of the quote commands verbatim in the SFTP case, like curl does for FTP, the commands must instead be supported by curl and get translated into their underlying SFTP binary protocol bits.

In order to support most of the basic use cases people have reportedly used with curl and FTP over the years, curl supports the following commands for SFTP: chgrp, chmod, chown, ln, mkdir, pwd, rename, rm, rmdir and symlink.

The minus and asterisk prefixes as described above work for SFTP too (but not the plus prefix).

Example, delete a file after a successful download over SFTP:

curl -O sftp://example/file -Q '-rm file'
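
Or, picking another command from the list above (the mode and path are made up for the example), tighten the permissions of a file right after uploading it:

curl -T infile sftp://example/dir/ -Q '-chmod 0600 dir/infile'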

Rename a file on the target server after a successful upload:

curl -T infile sftp://example/dir/ -Q "-rename infile newname"

SSH backends

The SSH support in curl is powered by a third party SSH library. When you build curl, there are three different libraries to select from, and they offer a slightly varying degree of support. The libssh2 and libssh backends are pretty much feature complete and have been around for a while, whereas the wolfSSH backend is more bare bones with fewer features supported, but at a much smaller footprint.

Related options

--request changes the actual command used to invoke the transfer when listing directories with FTP.
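
To make that concrete – the host name is invented – this asks the server for a plain list of file names via NLST instead of the default LIST output:

curl --request 'NLST' ftp://ftp.example/dir/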

Categorieën: Mozilla-nl planet

Wladimir Palant: Yahoo! and AOL: Where two-factor authentication makes your account less secure

Mozilla planet - ma, 09/03/2020 - 08:43

If you are reading this, you probably know already that you are supposed to use two-factor authentication for your most important accounts. This way you make sure that nobody can take over your account merely by guessing or stealing your password, which makes an account takeover far less likely. And what could be more important than your email account that everything else ties into? So you probably know, when Yahoo! greets you like this on login – it’s only for your own safety:

Yahoo! asking for a recovery phone number on login

Yahoo! makes sure that the “Remind me later” link is small and doesn’t look like an action, so it would seem that adding a phone number is the only way out here. And why would anybody oppose adding it anyway? But here is the thing: complying reduces the security of your account considerably. This is due to the way Verizon Media (the company which acquired Yahoo! and AOL a while ago) implements account recovery. And: yes, everything I say about Yahoo! also applies to AOL accounts.

Summary of the findings

I’m not the one who discovered the issue. A Yahoo! user wrote me:

I entered my phone number to the Yahoo! login, and it asked me if I wanted to receive a verification key/access key (2fa authentication). So I did that, and typed in the access key… Surprise, I logged in ACCIDENTALLY to the Yahoo! mail of the previous owner of my current phone number!!!

I’m not even the first one to write about this issue. For example, Brian Krebs mentioned this a year ago. Yet here we still are: anybody can take over a Yahoo! or AOL account as long as they control the recovery phone number associated with it.

So if you’ve got a new phone number recently, you could check whether its previous owner has a Yahoo! or AOL account. Nothing will stop you from taking over that account. And not just that: adding a recovery phone number doesn’t necessarily require verification! So when I tested it out, I was offered access to a Yahoo! account which was associated with my phone number even though the account owner almost certainly never proved owning this number. No, I did not log into their account…

How two-factor authentication is supposed to work

The idea behind two-factor authentication is making account takeover more complicated. Instead of logging in with merely a password (something you know), you also have to demonstrate access to a device like your phone (something you have). There are a number of ways malicious actors could learn your password; if you are in the habit of reusing passwords, for example, chances are that it has been compromised in one of the numerous data breaches. So it’s a good idea to set the bar for account access higher.
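
As a concrete illustration of a stronger second factor: with an authenticator app, your device and the service share a secret, and both derive a short-lived code from it, so it’s possession of the device that gets proven. Assuming the oathtool utility is installed (the base32 secret below is a made-up example), you can compute such a code yourself:

oathtool --totp -b 'JBSWY3DPEHPK3PXP'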

The already mentioned article by Brian Krebs explains why phone numbers aren’t considered a good second factor. Not only do phone numbers change hands quite often, criminals have been hijacking them en masse via SIM swapping attacks. Still, despite sending SMS messages to a phone number being considered a weak authentication scheme, it provides some value when used in addition to querying the password.

The Yahoo! and AOL account recovery process

But that’s not how it works with Yahoo! and AOL accounts. I added a recovery phone to my Yahoo! account and enabled two-factor authentication with the same phone number (yes, you have to do it separately). So my account should have been as secure as possible.

And then I tried “recovering” my account. From a different browser. Via a Russian proxy. While still being logged into this account in my regular browser. That should have been enough for Yahoo! to notice something being odd, right?

Yahoo! form for account recovery, only asking for a phone number

Clicking “Forgot username” brought me to a form asking me for a recovery phone number or email address. I entered the phone number and received a verification code via SMS. Entered it into the web page and voilà!

Yahoo! offering me access to my account as well as some

Now it’s all very straightforward: I click on my account, set a new password and disable two-factor authentication. The session still open in my regular browser is logged out. As far as Yahoo! is concerned, somebody from Russia just took over my account using only the weak SMS-based authentication, not knowing my password or even my name. Yet Yahoo! didn’t notice anything suspicious about this and didn’t feel any need for additional checks. But wait, there is a notification sent to the recovery email address!

Yahoo! notifying me about account takeover

Hey, big thanks Yahoo! for carefully documenting the issue. But you could have shortened the bottom part as “If this wasn’t you then we are terribly sorry but the horse has already left the barn.” If somebody took over my account and changed my password, I’ll most likely not get a chance to review my email addresses and phone numbers any more.

Aren’t phone numbers verified?

Now you are probably wondering: who is that other “X Y” account? Is that my test account? No, it’s not. It’s some poor soul who somehow managed to enter my phone number as their recovery phone. Given that this phone number has many repeating digits, it’s not too surprising that somebody typed it in merely to avoid Yahoo! nagging them. The other detail is surprising however: didn’t they have to verify that they actually own this number?

Now I had to go back to Yahoo!’s nag screen:

Yahoo! asking for a recovery phone number on login

If I enter a phone number into that text field and click the small “Add” link below it, the next step will require entering a verification code that I receive via SMS. However, if I click the much bigger and more obvious “Add email or mobile no.” button, it will bring me to another page where I can enter my phone number. And there the phone number will be added immediately, with the remark “Not verified” and a suggestion to verify it later. Yet the missing verification won’t prevent this phone number from being used in account recovery.

With the second flow being the more obvious one, I suspect that a large portion of Yahoo! and AOL users never verified that they actually own the phone number they set as their recovery phone. They might have made a typo, or they might have simply invented a number. These accounts can be compromised by the rightful owner of that number at any time, and Verizon Media will just let them.

What does Verizon Media think about that?

Do the developers at Verizon Media realize that their trade-off is tilted way too much towards convenience and sacrifices security as a result? One would think so, at the very least after a big name like Brian Krebs wrote about this issue. Then again, having dealt with the bureaucratic monstrosity that is Yahoo! even before they got acquired, I was willing to give them the benefit of the doubt.

Of course, there is no easy way of reaching the right people at Yahoo!. Their own documentation suggests reporting issues via their HackerOne bug bounty program, and I gave it a try. Despite my explicitly stating that the point was making the right team at Verizon aware of the issue, my report was immediately closed as a duplicate by HackerOne staff. The other report (filed last summer) was also closed by HackerOne staff, stating that exploitation potential wasn’t proven. There is no indication that either report ever made it to the people responsible.

So it seems that the only way of getting a reaction from Verizon Media is by asking publicly and having as many people as possible chime in. Google and Microsoft make account recovery complicated for a reason, the weakest factor is not enough there. So Verizon Media, why don’t you? Do you care so little about security?

Categorieën: Mozilla-nl planet

Allen Wirfs-Brock: Teaser—JavaScript: The First 20 Years

Mozilla planet - ma, 09/03/2020 - 00:16

Our HOPL paper is done—all 190 pages of it. The preprint will be posted this week.  In the meantime, here’s a little teaser.

JavaScript: The First 20 Years
By Allen Wirfs-Brock and Brendan Eich

Introduction

In 2020, the World Wide Web is ubiquitous with over a billion websites accessible from billions of Web-connected devices. Each of those devices runs a Web browser or similar program which is able to process and display pages from those sites. The majority of those pages embed or load source code written in the JavaScript programming language. In 2020, JavaScript is arguably the world’s most broadly deployed programming language. According to a Stack Overflow [2018] survey it is used by 71.5% of professional developers, making it the world’s most widely used programming language.

This paper primarily tells the story of the creation, design, and evolution of the JavaScript language over the period of 1995–2015. But the story is not only about the technical details of the language. It is also the story of how people and organizations competed and collaborated to shape the JavaScript language which dominates the Web of 2020.

This is a long and complicated story. To make it more approachable, this paper is divided into four major parts—each of which covers a major phase of JavaScript’s development and evolution. Between each of the parts there is a short interlude that provides context on how software developers were reacting to and using JavaScript.

In 1995, the Web and Web browsers were new technologies bursting onto the world, and Netscape Communications Corporation was leading Web browser development. JavaScript was initially designed and implemented in May 1995 at Netscape by Brendan Eich, one of the authors of this paper. It was intended to be a simple, easy to use, dynamic language that enabled snippets of code to be included in the definitions of Web pages. The code snippets were interpreted by a browser as it rendered the page, enabling the page to dynamically customize its presentation and respond to user interactions.

Part 1, The Origins of JavaScript, is about the creation and early evolution of JavaScript. It examines the motivations and trade-offs that went into the development of the first version of the JavaScript language at Netscape. Because of its name, JavaScript is often confused with the Java programming language. Part 1 explains the process of naming the language, the envisioned relationship between the two languages, and what happened instead. It includes an overview of the original features of the language and the design decisions that motivated them. Part 1 also traces the early evolution of the language through its first few years at Netscape and other companies.

A cornerstone of the Web is that it is based upon non-proprietary open technologies. Anybody should be able to create a Web page that can be hosted by a variety of Web servers from different vendors and accessed by a variety of browsers. A common specification facilitates interoperability among independent implementations. From its earliest days it was understood that JavaScript would need some form of standard specification. Within its first year Web developers were encountering interoperability issues between Netscape’s JavaScript and Microsoft’s reverse-engineered implementation. In 1996, the standardization process for JavaScript was begun under the auspices of the Ecma International standards organization. The first official standard specification for the language was issued in 1997 under the name “ECMAScript”.

Two additional revised and enhanced editions, largely based upon Netscape’s evolution of the language, were issued by the end of 1999.

Part 2, Creating a Standard, examines how the JavaScript standardization effort was initiated, how the specifications were created, who contributed to the effort, and how decisions were made.

By the year 2000, JavaScript was widely used on the Web but Netscape was in rapid decline and Eich had moved on to other projects. Who would lead the evolution of JavaScript into the future? In the absence of either a corporate or individual “Benevolent Dictator for Life,” the responsibility for evolving JavaScript fell upon the ECMAScript standards committee. This transfer of design responsibility did not go smoothly. There was a decade-long period of false starts, standardization hiatuses, and misdirected efforts as the ECMAScript committee tried to find its own path forward evolving the language. All the while, actual usage of JavaScript rapidly grew, often using implementation-specific extensions. This created a huge legacy of unmaintained JavaScript-dependent Web pages and revealed new interoperability issues. Web developers began to create complex client-side JavaScript Web applications and were asking for standardized language enhancements to support them.

Part 3, Failed Reformations, examines the unsuccessful attempts to revise the language, the resulting turmoil within the standards committee, and how that turmoil was ultimately resolved.

In 2008 the standards committee restored harmonious operations and was able to create a modestly enhanced edition of the standard that was published in 2009.  With that success, the standards committee was finally ready to successfully undertake the task of compatibly modernizing the language. Over the course of seven years the committee developed major enhancements to the language and its specification. The result, known as ECMAScript 2015, is the foundation for the ongoing evolution of JavaScript. After completion of the 2015 release, the committee again modified its processes to enable faster incremental releases and now regularly completes revisions on a yearly schedule.

Part 4, Modernizing JavaScript, is the story of the people and processes that were used to create both the 2009 and 2015 editions of the ECMAScript standard. It covers the goals for each edition and how they addressed evolving needs of the JavaScript development community. This part examines the significant foundational changes made to the language in each edition and important new features that were added to the language.

Wherever possible, the source materials for this paper are contemporaneous primary documents. Fortunately, these exist in abundance. The authors have ensured that nearly all of the primary documents are freely and easily accessible on the Web from reliable archives using URLs included in the references. The primary document sources were supplemented with interviews and personal communications with some of the people who were directly involved in the story. Both authors were significant participants in many events covered by this paper. Their recollections are treated similarly to those of the third-party informants.

The complete twenty-year story of JavaScript is long and so is this paper. It involves hundreds of distinct events and dozens of individuals and organizations. Appendices A through E are provided to help the reader navigate these details. Appendices A and B provide annotated lists of the people and organizations that appear in the story. Appendix C is a glossary that includes terms which are unique to JavaScript or used with meanings that may be different from common usage within the computing community in 2020 or whose meaning might change or become unfamiliar for future readers. The first use within this paper of a glossary term is usually italicized and marked with a “g” superscript. Appendix D defines abbreviations that a reader will encounter. Appendix E contains four detailed timelines of events, one for each of the four parts of the paper.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Meet the women who man social media brand accounts

Mozilla planet - zo, 08/03/2020 - 14:00

Being online can be an overwhelming experience, especially when it comes to misinformation, toxicity and inequality. Many of us have the option to disconnect and retreat from it all when … Read more

The post Meet the women who man social media brand accounts appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR20 available

Mozilla planet - zo, 08/03/2020 - 03:16
TenFourFox Feature Parity Release 20 final is now available for testing (downloads, hashes, release notes). This version is the same as the beta except for one more tweak to fix the preferences for those who prefer to suppress Reader mode. Assuming no issues, it will go live Monday evening Pacific as usual.

I have some ideas for FPR21, including further updates to Reader mode, AltiVec acceleration for GCM (improving TLS throughput) and backporting later improvements to 0RTT, but because of a higher than usual workload it is possible development may be stalled and the next release will simply be an SPR. More on that once I get a better idea of the timeframes necessary.

Categorieën: Mozilla-nl planet

Giorgio Maone: A cross-browser code library for security/privacy extensions. Interested?

Mozilla planet - vr, 06/03/2020 - 23:16
The problem

Google's "Manifest V3" ongoing API changes are severely hampering browser extensions in their ability to block unwanted content and to enforce additional security policies, threatening the usefulness, if not the very existence, of many popular privacy and security tools. uBlock's developer made it clear that this will cause him to cease supporting Chromium-based browsers. The EFF (which develops extensions such as HTTPS Everywhere and Privacy Badger) also publicly criticized Google's decisions, questioning both their consequences and their motivations.

NoScript is gravely affected too, although its position is not as dire as others': in fact, I finished porting it to Chromium-based browsers at the beginning of 2019, when Manifest V3 had already been announced. Therefore, in the late stages of that project and beyond, I've spent considerable time researching and experimenting with alternate techniques, mostly based on standardized Web Platform APIs and thus unaffected by Manifest V3, which allow implementing comparable NoScript functionality, albeit at the price of added complexity and/or performance costs. Furthermore, Mozilla developers have stated that, even though staying as compatible as possible with the Chrome extensions API is a goal of theirs, they do not plan to follow Google in those choices which are most disruptive for content blockers (such as the deprecation of blocking webRequest).

While this means that the future of NoScript is relatively safe, on Firefox and the Tor Browser at least, the browser extensions APIs and capabilities are going to diverge even more: developing and maintaining a cross-browser extension, especially if privacy and/or security focused, will become a complexity nightmare, and sometimes an impossible puzzle: unsurprisingly, many developers are ready to throw in the towel.

What would I do?

NoScript Commons Library

The alternate content interception/blocking/filtering techniques I've experimented with, and am still researching, in order to overcome the severe limitations imposed by Manifest V3 are, in their current form, best described as "a bunch of hacks": they're hardly maintainable, and even less so reusable by the many projects facing similar hurdles. What I'd like to do is refine, restructure and organize them into an open source NoScript Commons Library. It will provide an abstraction layer on top of the common functionality needed to implement in-browser security and privacy software tools.

The primary client of the library will be obviously NoScript itself, refactored to decouple its core high-level features from their browser-dependent low-level implementation details, becoming easier to isolate and manage. But this library will also be freely available (under the General Public License) in a public code repository which any developer can reuse as it is or improve/fork/customize according to their needs, and hopefully contribute back to.

What do I hope?

Some of the desired outcomes:

  • By refactoring its browser-dependent "hacks" into a Commons Library, NoScript manages to keep its recently achieved cross-browser compatibility while minimizing the cross-browser maintenance burden and the functionality loss coming from Manifest V3, and mitigating the risk of bugs, regressions and security flaws caused by platform-specific behaviors and unmanageable divergent code paths.
  • Other browser extensions in the same privacy/security space as NoScript are offered similar advantages by a toolbox of cross-browser APIs and reusable code, specific to their application domain. This can also motivate their developers (among the most competent people in this field) to scrutinize, review and improve this code, leading to a less buggy, safer and overall healthier privacy and security browser extensions ecosystem.
  • Clearly documenting and benchmarking the unavoidable differences between browser-specific implementations helps users make informed choices based on realistic expectations, and pressures browser vendors into providing better support (either natively or through enhanced APIs) for the extensions-provided features which couldn't be optimized for their product. This will clearly outline, in a measurable way, the difference in commitment to a thriving ecosystem of in-browser security/privacy solutions between Mozilla and other browser vendors, keeping them accountable.
  • Preserving a range of safe browsing options, beyond Firefox-based clients, increases the diversity in the "safe browsing" ecosystem, making web-based attacks significantly more difficult and costly than they are in a Firefox-based Tor Browser mono-culture.
I want you!

Are you an extensions developer, or otherwise interested in in-browser privacy/security tools? I'd be very grateful to know your thoughts, and especially:

  1. Do you think this idea is useful / worth pursuing?
  2. What kind of features would you like to see supported? For instance, content interception and contextual blocking, filtering, visual objects replacement (placeholders), missing behavior replacement (script "surrogates"), user interaction control (UI security)...
  3. Would you be OK with an API and documentation style similar to what we have for Firefox's WebExtensions?
  4. How likely would you be to use such a library (either for an existing or for a new project), and/or to contribute to it?

Many thanks in advance for your feedback!

Categorieën: Mozilla-nl planet

Mike Hoye: Brace For Impact

Mozilla planet - vr, 06/03/2020 - 17:24

I don’t spend a lot of time in here patting myself on the back, but today you can indulge me.

In the last few weeks it was a ghost town, and that felt like a victory. From a few days after we’d switched it on to Monday, I could count the number of human users on any of our major channels on one hand. By the end, apart from one last hurrah the hour before shutdown, there was nobody there but bots talking to other bots. Everyone – the company, the community, everyone – had already voted with their feet.

About three weeks ago, after spending most of a month shaking out some bugs and getting comfortable in our new space, we turned on federation, connecting Mozilla to the rest of the Matrix ecosystem. Last Monday we decommissioned IRC.Mozilla.org for good, closing the book on a 22-year-long chapter of Mozilla’s history as we started a new one in our new home on Matrix.

I was given this job early last year but the post that earned it, I’m guessing, was from late 2018:

I’ve mentioned before that I think it’s a mistake to think of federation as a feature of distributed systems, rather than as consequence of computational scarcity. But more importantly, I believe that federated infrastructure – that is, a focus on distributed and resilient services – is a poor substitute for an accountable infrastructure that prioritizes a distributed and healthy community. […] That’s the other part of federated systems we don’t talk about much – how much the burden of safety shifts to the individual.

Some inside baseball here, but if you’re wondering: that’s why I pushed back on the idea of federation from the beginning, for all the invective that earned me. That’s why I refused to include it as a requirement and held the line on that for the entire process. On classically-federated systems, distributed access and non-accountable administration mean that the burden of personal safety falls entirely on the individual. That’s not a unique artifact of federated systems, of course – Slack doesn’t think you should be permitted to protect yourself either, and they’re happy to wave vaguely in the direction of some hypothetical HR department and pretend that keeps their hands clean, as just one example of many – but it’s structurally true of old-school federated systems of all stripes. And bluntly, I refuse to let us end up in a place where asking somebody to participate in the Mozilla project is no different from asking them to walk home at night alone.

And yet here we are, opting into the Fediverse. It’s not because I’ve changed my mind.

One of the strongest selling points of Matrix is the combination of powerful moderation and safety tooling that hosting organizations can operate, with robust tools for personal self-defense available in parallel. Critically, these aren’t half-assed tools that have been grafted on as an afterthought; they’re first-class features, robust enough that we can not only deploy them with confidence, but can reasonably be held accountable by our colleagues and community for their use. In short, we can now have safe, accountable infrastructure that complements, rather than comes at the cost of, individual user agency.

That’s not the best thing, though, and I’m here to tell you about my favorite Matrix feature that nobody knows about: Federated auto-updating blocklist sharing.

If you decide you trust somebody else’s decisions, at some other organization – their judgment calls about who is and is not welcome there – those decisions can be immediately and automatically reflected in your own. When a site you trust drops the hammer on some bad actor, that ban can be adopted almost immediately by your site and your community as well. You don’t have to have ever seen that person or have whatever got them banned hit you in the eyes. You don’t even need to know they exist. All you need to do is decide you trust that other site’s judgment and magically someone persona non grata on their site is precisely that grata on yours.

Another way to say that is: among people or communities who trust each other in these decisions, an act of self-defense becomes, seamlessly and invisibly, an act of collective defense. No more everyone needing to fight their own fights alone forever, no more getting isolated and picked off one at a time, weakest first; shields-up means shields-up for everyone. Effective, practical defensive solidarity; it’s the most important new idea I’ve seen in social software in years. Every federated system out there should build out its own version, and it’s very clear to me, at least, that this is going to be the table stakes of a federated future very soon.
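
For the technically curious, the mechanism is refreshingly mundane: a shared ban list is just a Matrix room, and each recommendation in it is an ordinary state event, as proposed in MSC2313 and implemented by tools like Mjolnir. As a rough sketch – the homeserver, room ID, access token and banned account are all invented here, the exact event type has shifted across drafts of the proposal, and a real request would also need the room ID URL-encoded – publishing one such rule can look like this:

curl -X PUT 'https://matrix.example/_matrix/client/r0/rooms/!policies:example.org/state/m.policy.rule.user/rule-1' -H 'Authorization: Bearer MY_TOKEN' -H 'Content-Type: application/json' -d '{"entity": "@spammer:badplace.example", "recommendation": "m.ban", "reason": "spam"}'

Servers and rooms that subscribe to that policy room can then act on the m.ban recommendation automatically, which is exactly the propagation described above.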

So I feel pretty good about where we’ve ended up, and where we’re going.

In the long term, I see that as the future of Mozilla’s responsibility to the Web; not here merely to protect the Web, not merely to defend your freedom to participate in the Web, but to mount a positive defense of people’s opportunities to participate. And on the other side of that coin, to build accountable tools, systems and communities that promise not only freedom from arbitrary harassment, but even freedom from the possibility of that harassment.

I’ve got a graph here that’s pointing up and to the right, and it’s got nothing to do with scraping fractions of pennies out of rageclicks and misery; just people making a choice to go somewhere better, safer and happier. Maybe, just maybe, we can salvage this whole internet thing. Maybe all is not yet lost, and the future is not yet written.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Getting Closer on Dot Org?

Mozilla planet - vr, 06/03/2020 - 14:30

Over the past few months, we’ve raised concerns about the Internet Society’s plan to sell the non-profit Public Interest Registry (PIR) to Ethos Capital. Given the important role of dot org in providing a platform for free and open speech for non-profits around the world, we believe this deal deserves close scrutiny.

In our last post on this issue, we urged ICANN to take a closer look at the dot org sale. And we called on Ethos and the Internet Society to move beyond promises of accountability by posting a clear stewardship charter for public comment. As we said in our last post:

One can imagine a charter that provides the council with broad scope, meaningful independence, and practical authority to ensure PIR continues to serve the public benefit. One that guarantees Ethos and PIR will keep their promises regarding price increases, and steer any additional revenue from higher prices back into the dot org ecosystem. One that enshrines quality service and strong rights safeguards for all dot orgs. And one that helps ensure these protections are durable, accounting for the possibility of a future resale.

On February 21, Ethos and ISOC posted two proposals that address many concerns Mozilla and others have raised. The proposals include: 1. a charter for a stewardship council, including sections on free expression and personal data; and 2. an amendment to the contract between PIR and ICANN (aka a Public Interest Commitment), touching on price increases and the durability of the stewardship council. Ethos and ISOC also announced a public engagement process to gather input on these proposals.

These new proposals get a number of things right, but they also leave us with some open questions.

What do they get right? First, the proposed charter gives the stewardship council the veto power over any changes that PIR might want to make to the freedom of expression and personal data rules governing dot org domains. These are two of the most critical issues that we’d urged Ethos and ISOC to get specific about. Second, the proposed Public Interest Commitment provides ICANN with forward-looking oversight over dot org. It also codifies both the existence of the stewardship council and a price cap for dot org domains. We’d suggested a modification to the contract between PIR and ICANN in one of our posts. It was encouraging to see this suggestion taken on board.

Yet questions remain about whether the proposals will truly provide the level of accountability the dot org community deserves.

The biggest question: with PIR having the right to make the initial appointments and to veto future members, will the stewardship council really be independent? The fact that the council alone can nominate future members provides some level of independence, but that independence could be compromised by the fact that PIR will make all the initial nominations and that its board holds veto authority over appointments. While it makes sense for PIR to have a significant role, the veto power should be cabined in some way. For example, the charter might call for the stewardship council to propose a larger slate of candidates and give the PIR board an optional veto to narrow that slate down to the number of positions to fill, should it so choose. And, to address the first challenge, maybe a long-standing and trusted international non-profit could nominate the initial council instead of PIR?

There is also a question about whether the council will have enough power to truly provide oversight of dot org freedom of expression and personal data policies. The charter requires a supermajority — five out of seven members — for council vetoes of PIR freedom of expression and data policy changes. Why not make it a simple majority?

There are a number of online meetings happening over the next week during ICANN 67, including a meeting of the Governmental Advisory Committee. Our hope is that these meetings will provide an opportunity for the ICANN community to raise questions about the dot org sale, including questions like these. We also hope that the public consultation process that Ethos and ISOC are running over the coming week will generate useful ideas on how these questions might be answered.

Mozilla will continue to keep an eye on the sale as this unfolds, with a particular eye to ensuring that, if the deal goes ahead, the stewardship council has the independence and authority needed to protect the dot org community.

The post Getting Closer on Dot Org? appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Pagina's