Mozilla Nederland: The Dutch Mozilla community

Planet Mozilla - https://planet.mozilla.org/
Updated: 1 week 1 day ago

Will Kahn-Greene: Socorro Engineering: Year in Review 2019

Mon, 06/01/2020 - 16:00
Summary

Last year at about this time, I wrote a year in review blog post. Since I only worked on Socorro at the time, it was all about Socorro. In 2019, that changed, so this blog post covers the efforts of two people across a bunch of projects.

2019 was pretty crazy. We accomplished a lot, but picking up a bunch of new projects really threw a wrench into ongoing work.

This year in review covers highlights, some numbers, and some things I took away.

Here's the list of projects we worked on over the year:

Read more… (13 min remaining to read)


Mozilla VR Blog: Mozilla Announces Deal to Bring Firefox Reality to Pico Devices

Mon, 06/01/2020 - 14:59
Mozilla Announces Deal to Bring Firefox Reality to Pico Devices

For more than a year, we at Mozilla have been working to build a browser that showcases the best of what you love about browsing, tailor-made for Virtual Reality.

Now we are teaming up with Pico Interactive to bring Firefox Reality to its latest VR headset, the Neo 2 – an all-in-one (AIO) device with 6 degrees of freedom (DoF) head and controller tracking that delivers key VR solutions to businesses. Pico’s Neo 2 line includes two headsets: the Neo 2 Standard and the Neo 2 Eye featuring eye tracking and foveated rendering. Firefox Reality will also be released and shipped with previous Pico headset models.


This means anytime someone opens a Pico device, they’ll be greeted with the speed, privacy, and great features of Firefox Reality.

Firefox Reality includes the ability to sign in with your Firefox Account, enabling you to send tabs and sync your history and bookmarks, making great content easily discoverable. There's also a curated section of top VR content, so there's always something fresh to enjoy.

“We are pleased to be partnered with Pico to bring Firefox Reality to their users, especially the opportunity to reach more people through their large Enterprise audience,” says Andre Vrignaud, Head of Mixed Reality Platform Strategy at Mozilla. “We look forward to integrating Hubs by Mozilla to bring fully immersive collaboration to business.”

As part of Firefox Reality, we are also bringing Hubs by Mozilla to all Pico devices. In Hubs, users can easily collaborate online around virtual objects, spaces, and tasks - all without leaving the headset.

The virtual spaces created in Hubs can be used similarly to a private video conference room to meet up with your coworkers and share documents and photos, but with added support for all of your key 3D assets. You can fully brand the environment and avatars for your business, and with web-based access the meetings are just a link away, supported on any modern web browser.

Firefox Reality will be available on Pico VR headsets later in Q1 2020. Stay tuned to our mixed reality blog and Twitter account for more details.


Ryan Harter: Syncthing

Sun, 05/01/2020 - 09:00

I did a lot of reading and exploring over my holiday break. One of the things I'm most excited about is finding Syncthing. If you haven't seen it yet, take a look. It's like an open-source, decentralized Dropbox.

It works everywhere, which for me means Linux and Android. Google Drive …


Ryan Harter: Syncthing and Open Source Data Collection

Sun, 05/01/2020 - 09:00

I don't see many open source packages collecting telemetry, so when Syncthing asked me to opt in to telemetry I was intrigued.

I see a lot of similarities between how Syncthing and Firefox collect data. Both collect daily pings and make it easy to view the data you're submitting (in Firefox …


Cameron Kaiser: TenFourFox FPR18 available (and the classic MacOS hits Y2K20)

Sun, 05/01/2020 - 05:38
TenFourFox Feature Parity Release 18 final is now available for testing (downloads, hashes, release notes). There are no changes from the beta other than updates to the usual certs and such. As usual, assuming no late-breaking critical bugs, it will become final Monday evening Pacific time.

Meanwhile, happy new year: classic Mac systems prior to Mac OS 9 are now hit by the Y2K20 bug, where you cannot manually use the Date and Time Control Panel to set the clock to years beyond 2019 (see also Apple Technote TN1049). This does not affect any version of Mac OS 9, nor Classic on OS X, and even affected versions of the classic Mac OS can still maintain the correct date until February 6, 2040 at 6:28:15 AM, when the unsigned 32-bit date overflows. If you need to set the date on an older system or 68K Mac, you can either use a CDEV like Network Time, which lets you sync to a network time source or a local server if you have one configured (as I do), or you can use Rob Braun's SetDate, which allows you to manually enter a date or time through the entire supported range (and even supports System 6).
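As a quick sanity check on that rollover date (my own aside, not from the post): the classic Mac OS clock counts seconds from January 1, 1904 in an unsigned 32-bit integer, so the last representable moment is that epoch plus u32::MAX seconds. A few lines of Rust with the chrono crate confirm it:

    // Sketch: derive the classic Mac OS clock rollover from its epoch.
    // Assumes chrono = "0.4" as a dependency.
    use chrono::{Duration, NaiveDate};

    fn main() {
        let epoch = NaiveDate::from_ymd(1904, 1, 1).and_hms(0, 0, 0);
        let last = epoch + Duration::seconds(u32::MAX as i64);
        println!("{}", last); // prints 2040-02-06 06:28:15
    }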

One other note is that all HFS+ volumes regardless of operating system version have the same year 2040 limit on dates -- that includes Intel Macs using HFS+ filesystems. You have 20 years to think about how you want to fix this (during which you should replace the PRAM batteries in your classic Macs, too).


Daniel Stenberg: Restored complete curl changelog

Sat, 04/01/2020 - 09:25

For a long time, the curl changelog on the web site showed the history of changes in the curl project all the way back to curl 6.0, released on September 13, 1999. Older changes were not displayed.

The reason for this was always basically laziness. The page in its current form was initially created back in 2001, and then I just went back a little in history and filled it up with a set of previous releases. Since we don't have pre-1999 code in our git tree (because of a sloppy CVS import), everything before 1999 takes a bit of manual procedure to extract, so we left it like that.

Until now.

I decided to once and for all fix this oversight and make sure that we get a complete changelog from the first curl release all the way up until today. The first curl release was called 4.0 and was shipped on March 20, 1998.

Before 6.0 we weren’t doing very careful release notes and they were very chatty. I got the CHANGES file from the curl 6.0 tarball and converted them over to the style of the current changelog.

Notes on the restoration work

The versions noted as “beta” releases in the old changelog are not counted or mentioned as real releases.

For the released versions between 4.0 and 4.9 there are no release dates recorded, so I've "estimated" the release dates based on the knowledge that we did them fairly regularly and that they probably were rather spread out over that 200-day time span. They won't be exact, but close enough.

Complete!

The complete changelog is now showing on the site, and in the process I realized that I have at some point made a mistake and miscounted the total number of curl releases. Off-by-one, actually. The official count now says that the next release will become the 188th.

As a bonus from this work, the “releaselog” page is now complete and shows details for all curl releases ever. (Also, note that we provide all that info in a CSV file too if you feel like playing with the data.)

There’s a little caveat on the updated vulnerability information there: when we note how far vulnerabilities go, we have made it a habit to sometimes mark the first vulnerable version as “6.0” if the bad code exists in the first ever git imported code – simply because going back further and checking isn’t easy and usually isn’t worth the effort because that old versions are not used anymore.

Therefore, we will not have accurate vulnerability information for versions before 6.0. The vulnerability table will only show versions back to 6.0 for that reason.

Many bug-fixes

With the complete data, we also get complete numbers. Since the birth of curl until version 7.67.0 we have fixed exactly 5,664 bugs shipped in releases, and there were exactly 7,901 days between the 4.0 and 7.67.0 releases.


Daniel Stenberg: curl receives 10K USD donation

Fri, 03/01/2020 - 14:38

The largest ever single-shot monetary donation to the curl project just happened when indeed.com graciously boosted our economy with 10,000 USD. (It happened before the new year but as I was away then I haven’t had the chance to blog about it until now.)

curl remains a small project with no major financial backing, with no umbrella organization (*) and no major company sponsorships.

Indeed’s FOSS fund

At Indeed they run this awesome fund for donating to projects they use. See Duane O’Brien’s FOSDEM 2019 talk about it.

How to donate to curl

curl is not a legal, registered organization or company or anything that can actually hold on to assets such as money. In any country.

What we do have however, is a “collective” over at Open Collective. Skip over there to make monetary donations. Over there you also get a complete look into previous donations with full transparency as to what funds we have and spend in the project.

Money donated to us will only be spent on project related activities.

Other ways to donate to the project are of course to donate time and effort. Allow your employees to help out, or spend your own time writing code, fixing bugs or extending the documentation. Every little bit helps and will be appreciated!

curl sponsors

curl is held upright and pushed forward much thanks to the continuous financial support from champion companies. The primary curl sponsors are Haxx, wolfSSL, Fastly and Teamviewer.

The curl project’s use of donated money

We currently have two primary expenses in the project that aren’t already covered by sponsors:

The curl bug bounty. We've already discussed internally that, going forward, we should try to raise the amounts we hand out as rewards for the flaws reported to us. We started out carefully since we didn't want to drain the funds immediately, but time has shown that we haven't received that many reports and the funds are growing. This means we will raise the reward levels to encourage researchers to dig deeper.

The annual curl up developers conference. I’d like us to sponsor top contributors’ and possibly student developers’ travels to enable a larger attendance – and a social development team dinner! The next curl up will take place in Berlin in May 2020.

(*) = curl has previously applied for membership in both Software Freedom Conservancy and Linux Foundation as they seemed like suitable stewards, but the first couldn’t accept us due to work load and the latter didn’t even bother to respond. It’s not a big bother, just reality.


Karl Dubost: Week notes - 2020 w01 - worklog - First week

Fri, 03/01/2020 - 09:00

After 10+ days of holidays, the first morning is going through the pile of bugs and emails. I had cleaned my desk before leaving for holidays on December 21. So starting this morning was like a fresh breeze. I'm on diagnosis rotation. Let's discover the effect of holidays on the pile. I'm pleasantly surprised. Ah I see! Ksenia did the hard work. Cool.

Diagnosis Bang!

The webcompat-bot was not only suspended by GitHub, but so were all the issues it had filed in the repo. That means hours of work just gone. We probably need to prepare for this happening again, and we need a reliable backup of all issues (and comments), and probably events, labels, etc.

Basically we need a static version of the issues we have been working on.
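A minimal sketch of the kind of backup tool we'd need (hypothetical, not something we run today), hitting GitHub's REST API from Rust with reqwest's blocking client; pagination, comments, events and labels are all omitted for brevity:

    // Fetch one page of issues from the webcompat repo and keep a raw
    // JSON snapshot on disk. Assumes reqwest with the "blocking" feature.
    use std::fs;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = reqwest::blocking::Client::new();
        let body = client
            .get("https://api.github.com/repos/webcompat/web-bugs/issues?state=all&per_page=100")
            .header("User-Agent", "issue-backup-sketch") // GitHub requires a User-Agent
            .send()?
            .text()?;
        fs::write("issues-page-1.json", body)?;
        Ok(())
    }

A real backup would loop over the paginated results and pull each issue's comments and events too.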

In the meantime we are keeping track of events in a detailed incident report (non-public) and sharing some information in public. We also deployed a landing page for webcompat.com/issues/new.

Once we know what this was about, I will describe things in a bit more detail.

Reading Thoughts
  • 2020 pony wish: a container tab profile, where I can deactivate/activate some addons and allow only specific domains

Otsukare!


The Rust Programming Language Blog: Reducing support for 32-bit Apple targets

Fri, 03/01/2020 - 01:00

The Rust team regrets to announce that Rust 1.41.0 (to be released on January 30th, 2020) will be the last release with the current level of support for 32-bit Apple targets. Starting from Rust 1.42.0, those targets will be demoted to Tier 3.

The decision was made in RFC 2837, and was accepted by the compiler and release teams. This post explains what the change means, why we did it, and how your project is affected.

What’s a support tier?

The Rust compiler can build code targeting a lot of platforms (also called “targets”), but the team doesn't have the resources or manpower to provide the same level of support and testing for each of them. To make our commitments clear, we follow a tiered support policy (currently being formalized and revised in RFC 2803), explaining what we guarantee:

  • Tier 1 targets can be downloaded through rustup and are fully tested during the project’s automated builds. A bug or a regression affecting one of these targets is usually prioritized more than bugs only affecting platforms in other tiers.

  • Tier 2 targets can also be downloaded through rustup, but our automated builds don’t execute the test suite for them. While we guarantee a standard library build (and for some of them a full compiler build) will be available, we don’t ensure it will actually work without bugs (or even work at all).

  • Tier 3 targets are not available for download through rustup, and are ignored during our automated builds. You can still build their standard library for cross-compiling (or the full compiler in some cases) from source on your own, but you might encounter build errors, bugs, or missing features.

Which targets are affected?

The main target affected by this change is 32-bit macOS (i686-apple-darwin), which will be demoted from Tier 1 to Tier 3. This will affect both using the compiler on 32-bit Mac hardware, and cross-compiling 32-bit macOS binaries from any other platform.

Additionally, the following 32-bit iOS targets will be demoted from Tier 2 to Tier 3:

  • armv7-apple-ios
  • armv7s-apple-ios
  • i386-apple-ios

We will continue to provide the current level of support for all 64-bit Apple targets.

Why are those targets being demoted?

Apple dropped support for running 32-bit binaries starting from macOS 10.15 and iOS 11. They also prevented all developers from cross-compiling 32-bit programs and apps starting from Xcode 10 (the platform’s IDE, containing the SDKs).

Due to those decisions from Apple, the targets are no longer useful to our users, and their choice to prevent cross-compiling makes it hard for the project to continue supporting the 32-bit platform in the long term.

How will this affect my project?

If you don’t build 32-bit Apple binaries this change won’t affect you at all.

If you still need to build them, you’ll be able to continue using Rust 1.41.0 without issues. As usual the Rust project will provide critical bugfixes and security patches until the next stable version is released (on March 12th, 2020), and we plan to keep the release available for download for the foreseeable future (as we do with all the releases shipped so far).

The code implementing the targets won't be removed from the compiler codebase, so you'll also be able to build future releases from source on your own (keeping in mind they might have bugs or be broken, as that code will be completely untested).

What about the nightly channel?

We will demote the targets on the nightly channel soon, but we don't have an exact date for when that will happen. We recommend pinning a nightly version beforehand though, to prevent rustup toolchain install from failing once we apply the demotion.

To pin a nightly version you need to use "nightly" followed by the day the nightly was released, as the toolchain name. For example, to install the nightly released on December 1st, 2019 and to use it you can run:

rustup toolchain install nightly-2019-12-01

# Default to this nightly system-wide...
rustup default nightly-2019-12-01

# ...or use this nightly for a single build
cargo +nightly-2019-12-01 build

About:Community: Firefox 72 new contributors

Thu, 02/01/2020 - 23:59

With the release of Firefox 72, we are pleased to welcome the 36 developers who contributed their first code change to Firefox in this release, 28 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:


Mozilla Open Policy & Advocacy Blog: Bringing California’s privacy law to all Firefox users in 2020

Tue, 31/12/2019 - 16:47

2019 saw a spike of activity to protect online privacy as governments around the globe grappled with new revelations of data breaches and privacy violations. While much of the privacy action came from outside the U.S., such as the passage of Kenya’s data protection law and Europe’s enforcement of its GDPR privacy regulation, California represented a bright spot for American privacy.

Amidst gridlock in Congress over federal privacy rules, California marched forward with its landmark privacy law, the California Consumer Privacy Act (CCPA), which goes into effect on January 1, 2020. Mozilla has long been a supporter of data privacy laws that empower people — including CCPA. In fact, we were one of the few companies to endorse CCPA back in 2018 when it was before the California legislature.

The California Consumer Privacy Act (CCPA) expands the rights of Californians over their data – and provides avenues for the Attorney General to investigate and enforce those rights, as well as allowing Californians to sue. Californians now have the right to know what personal information is being collected, to access it, to update and correct it, to delete it, to know who their data is being shared with, and to opt-out of the sale of their data.

Much of what the CCPA requires companies to do moving forward is in line with how Firefox already operates and handles data. We’ve long believed that your data is not our data, and that privacy online is fundamental. Nonetheless, we are taking steps to go above and beyond what’s expected in CCPA.

Here’s how we are bringing CCPA to life for Firefox users.

CCPA rights for everyone.

When Europe passed its GDPR privacy law we made sure that all users, whether located in the EU or not, were afforded the same rights under the law.  As a company that believes privacy is fundamental to the online experience, we felt that everyone should benefit from the rights laid out in GDPR. That is why our new settings and privacy notice applied to all of our users.

With the passage and implementation of CCPA, we will do the same. Changes we are making in the browser will apply to every Firefox user, not just those in California.

Deleting your data.

One of CCPA's key new provisions is its expanded definition of "personal data." This expanded definition allows users to request that companies delete their user-specific data.

As a rule, Firefox already collects very little of your data. In fact, most of what we receive is to help us improve the performance and security of Firefox. We call this telemetry data. This telemetry doesn't tell us about the websites you visit or searches you do; we just know general information, like how many tabs a Firefox user had open and how long their session was. We don't collect telemetry about private browsing mode, and we've always given people easy options to disable telemetry in Firefox. And because we've long believed that data should not be stored forever, we have strict limits on how long we keep telemetry data.

We’ve decided to go the extra mile and expand user deletion rights to include deleting this telemetry data stored in our systems. To date, the industry has not typically considered telemetry data “personal data” because it isn’t identifiable to a specific person, but we feel strongly that taking this step is the right one for people and the ecosystem.

In line with the work we’ve done this year to make privacy easier and more accessible to our users, the deletion control will be built into Firefox and will begin rolling out in the next version of the browser on January 7. This setting will provide users a way to request deletion for desktop telemetry directly from Firefox – and a way for us, at Mozilla, to perform that deletion.

For Firefox, privacy is not optional. We don’t think people should have to choose between the technology they love and their privacy. We think you should have both. That’s why we are taking these steps to bring additional protection to all our users under CCPA. And why we will continue to press in 2020 – through the products we build and the policies we advocate – for an Internet that gives people the privacy and security they deserve.

The post Bringing California’s privacy law to all Firefox users in 2020 appeared first on Open Policy & Advocacy.


Mozilla VR Blog: Happy New Year from Hubs!

Tue, 31/12/2019 - 15:00
Happy New Year from Hubs!

As we wrap up 2019, The Hubs team says thank you to the Mozilla Mixed Reality Community for an incredible year! We’ve been looking back and we’re excited about the key milestones that we’ve hit in our mission to make private social VR readily available to the general public. At the core of what we’re doing, our team is exploring the ways that spatial computing and shared environments can improve the ways that we connect and collaborate, and thanks to the feedback and participation of our users and community as a whole, we got to spend a lot of time this year working on new features and experiments.

Early in the year, we wanted to dive into our hypothesis that social 3D spaces could integrate into our existing platforms and tools that the team was regularly using. We launched the Hubs Discord Bot back in April, which bridged chat between the two platforms and added an optional authentication layer to restrict access to rooms created with the bot to users in a given server. Since launching the Discord bot, we’ve learned more about the behaviors and frameworks that enable healthy community development and management, and we released a series of new features that supported multiple moderators, configurable room permissions, closing rooms, and more.

One of our goals for this year was to empower users to more easily personalize their Hubs experiences by making it easy to create custom content. This work kicked off with making Spoke available as a hosted web application, so creators no longer had to download a separate application to build scenes for Hubs. We followed with new features that improved how avatars could be created, shared, remixed, and discovered, and we wrapped up the year by releasing several pre-configured asset kits for building unique environments, starting with the Spoke Architecture Kit release that also included a number of ease-of-use feature updates.

We’ve also just had a lot of fun connecting with users and growing our team and community, and we’ve learned a lot about what we’re working on and how to improve Hubs for different use cases. When we joined Twitter, we got to start interacting with a lot more of you on a regular basis and we’ve loved seeing how you’ve been using Hubs when you share your own content with us! The number of new scenes, avatars, and even public events that have been shared within our community gets us even more excited for what we think 2020 can bring.

As we look ahead into the next year, we’ll be sharing a big update in January and go in-depth with work we’ve been doing to make Hubs a more versatile platform. If you want to follow along with our roadmap, you can keep an eye on the work we have planned on GitHub and follow us on Twitter @ByHubs. Happy 2020!


The Firefox Frontier: New Year, New Rights: What to know about California’s new privacy law

Tue, 31/12/2019 - 08:59

The California Consumer Privacy Act (CCPA) expands the rights of Californians over their data. Starting in 2020, Californians have the right to know what personal information is being collected, access … Read more

The post New Year, New Rights: What to know about California’s new privacy law appeared first on The Firefox Frontier.


This Week In Rust: This Week in Rust 319

Tue, 31/12/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is attohttpc, a tiny synchronous HTTP client library.

Thanks to Matěj Laitl for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

184 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

No tracking issues or PRs are currently in final comment period.

New RFCs

Upcoming Events

Asia Pacific
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust has multiple unique paradigms that don't even exist in other languages, such as lifetimes and compile-time-tracked "exclusive access". But instead of endorsing them from the beginning, as @mbrubeck's Rust: a unique perspective does, the Rust book tries to show a language that is "like other languages, but with (magical) compile-time checks". When the truth is that Rust's strength lies in non-unsafe Rust being less expressive than languages like C or C++.

I think that Rust should start with the statement: "Welcome to a language that by being less expressive forces you to use constructs that are guaranteed at compile-time to be sound. But don't worry; after some time you will get used to the coding patterns that are allowed, and will then almost not notice the hindered expressiveness, only the enhanced zero-cost safety that will let you hack without fear."

  • It doesn't sound bad imho, and is at least honest w.r.t. the struggles that someone refusing to shift their way of coding / mental coding patterns may encounter.

Daniel H-M on rust-users

Thanks to Tom Phinney for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.


Emily Dunham: Toy hypercube construction

Mon, 30/12/2019 - 09:00
Toy hypercube construction

I think hypercubes are neat, so I tried to make one out of string to play with. In the process, I discovered that there are surprisingly many ways to fail to trace every edge of a drawing of a hypercube exactly once with a single continuous line.

This puzzle felt like the sort of problem that some nerd had probably solved before, so I searched the web and discovered that the shape I was trying to configure the string into is called an Eulerian cycle.

I learned that any graph in which every vertex attaches to an even number of edges has such a cycle, which is useful for my craft project because the euler cycle is literally the path that the string needs to take to make a model of the object represented by the graph.
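As an aside (mine, not part of the original post), Hierholzer's algorithm is one standard way to find such a cycle. A minimal Rust sketch, using a small square graph rather than the hypercube as the example input:

    // Hierholzer's algorithm: walk unused edges, backtracking when stuck.
    // Assumes a connected undirected graph where every vertex has even degree.
    fn euler_cycle(n: usize, edges: &[(usize, usize)]) -> Vec<usize> {
        // adjacency list of (neighbor, edge index), so each edge is used once
        let mut adj = vec![Vec::new(); n];
        for (i, &(u, v)) in edges.iter().enumerate() {
            adj[u].push((v, i));
            adj[v].push((u, i));
        }
        let mut used = vec![false; edges.len()];
        let mut stack = vec![0usize]; // start at an arbitrary vertex
        let mut cycle = Vec::new();
        while let Some(&v) = stack.last() {
            match adj[v].iter().position(|&(_, e)| !used[e]) {
                Some(pos) => {
                    let (to, e) = adj[v][pos];
                    used[e] = true;
                    stack.push(to); // extend the current trail
                }
                None => {
                    cycle.push(v); // dead end: emit vertex and backtrack
                    stack.pop();
                }
            }
        }
        cycle // visits every edge exactly once
    }

    fn main() {
        // a square: four vertices, every vertex has degree 2
        let edges = [(0, 1), (1, 2), (2, 3), (3, 0)];
        println!("{:?}", euler_cycle(4, &edges)); // e.g. [0, 3, 2, 1, 0]
    }

The printed vertex sequence is exactly the order in which the string visits the vertices.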

Mathematical materials

To construct a toy hypercube or any other graph, you need the graph. To make it from a single piece of string, every vertex should have an even number of edges.

Knowing the number of edges in the graph will be useful later, when marking the string.

Physical materials

For the edges of the toy, I wanted something that’s a bit flexible but can sort of stand up on its own. I found that cotton clothesline rope worked well: it’s easy to mark, easy to pin vertex numbers onto, and sturdy but still flexible. I realized after completing the construction that it would have been clever to string items like beads onto the edges to make the toy prettier and identify which edge is which.

For the vertices, I pierced jump rings through the rope, then soldered them shut, to create flexible attachment points. This worked better than a previous prototype in which I used flimsier string and made the vertices from beads.

Vertices could be knotted, glued, sewn, or safety pinned. A bookbinding awl came in handy for making holes in the rope for the rings to go through.

Mathematical construction

First, I drew the graph of the shape I was trying to make – in this case, a hypercube. I counted its edges per vertex, 4. I made sure to draw each vertex with spots to write numbers in, half as many numbers as there are edges, because each time the string passes through the vertex it makes 2 edges. So in this case, every vertex needs room to write 2 numbers on it.

Here’s the graph I started with. I drew the edges in a lighter color so I could see which had already been visited when drawing in the euler cycle.

[Image: ../../../_images/one1.jpg]

Then I started from an arbitrary vertex and drew in the line. Any algorithm for finding euler paths will suffice to draw the line. The important part of tracing the line on the graph is to mark each vertex it encounters, sequentially. So the vertex I start at is 1, the first vertex I visit is 2, and so forth.

Since the euler path visits every vertex of my particular hypercube twice, every vertex will have 2 numbers (the one I started at will have 3) when I finish the math puzzle. These pairs of numbers are what tell me which part of the string to attach to which other part.

Here’s what my graph looked like once I found an euler cycle in it and numbered the vertices that the cycle visited:

[Image: ../../../_images/two1.jpg]

Physical construction

Since my graph has 32 edges, I made 33 evenly spaced marks on the string. I used an index card to measure them because that seemed like an ok size, but in retrospect it would have been fine if I’d made it smaller.

[Image: ../../../_images/three1.jpg]

I then numbered each mark in sequence, from 1 to 33. I numbered them by writing the numbers on slips of paper and pinning the papers to the rope, but if I was using a ribbon or larger rope, the numbers could have been written directly on it. If you’re doing this at home, you could mark the numbers on masking tape on the rope just as well.

[Image: ../../../_images/four1.jpg]

The really tedious step is applying the vertices. I just went through the graph, one vertex at a time, and attached the right points on the string together for it.

The first vertex had numbers 1, 25, and 33 on it for the euler cycle I drew and numbered on the graph, so I attached the string’s points 1, 25, and 33 together with a jump ring. The next vertex on the drawing had the numbers 2 and 18 on it, so I pierced together the points on the string that were labeled 2 and 18.

I don’t think it matters what order the vertices are assembled in, as long as the process ultimately results in all the vertices on the graph being represented by rings affixing the corresponding points on the string together.

I also soldered the rings shut, because after all that work I don’t want them falling out.

[Image: ../../../_images/five1.jpg]

That’s all there is to it!

[Image: ../../../_images/seven1.jpg]

I’m going to have to find a faster way to apply the vertices before attempting a 6D hypercube. An ideal vertex would allow all edges to rotate and reposition themselves freely, but failing that, a lighter weight string and crimp fasteners large enough to hold 6 pieces of that string might do the trick.

The finished toy is not much to look at, but quite amusing to try to flatten into 3-space.

[Image: ../../../_images/six1.jpg]

Cameron Kaiser: And now for something completely different: The dawning of the Age of Apple Aquarius

Mon, 30/12/2019 - 08:59
An interesting document has turned up at the Internet Archive: the specification to the Scorpius CPU, the originally intended RISC successor to the 68K Macintosh.

In 1986 the 68K processor line was still going strong but showing its age, and a contingent of Apple management (famously led by then-Mac division head Jean-Louis Gassée and engineer Sam Holland) successfully persuaded then-CEO John Sculley that Apple should be master of its own fate with its own CPU. RISC was just emerging at that time, with the original MIPS R2000 CPU appearing around 1985, and was clearly where the market was going (arguably it still is, since virtually all major desktop and mobile processors are load-store at the hardware level today, even Intel); thus was the Aquarius project born. Indeed, Sculley's faith in the initiative was so great that he allocated a staff of fifty and even authorized a $15 million Cray supercomputer, which was smoothed over with investors by claiming it was for modeling Apple hardware (which, in a roundabout and overly optimistic way, it was).

Holland was placed in charge of the project and set about designing the CPU for Aquarius. The processor's proposed feature set was highly ambitious, including four cores and SIMD (vector) support with inter-processor communication features. Holland's specification was called Scorpius; the initial implementation of the Scorpius design was to be christened Antares. This initial specification is what was posted at the Internet Archive, dated around 1988.

Despite Sculley and Gassée's support, Aquarius was controversial at Apple from the very beginning: it required a substantial R&D investment, cash which Apple could ill afford to fritter away at the time, and even if the cash were there, many within the company did not believe Apple had sufficient technical chops to get the CPU to silicon. Holland's complex specification worried senior management further, as it required solving various technical problems that even large, highly experienced chip design companies of the time would have found difficult.

With only a proposal and no actual hardware by 1988, Sculley became impatient, and Holland was replaced by Al Alcorn. Alcorn was a legend in the industry by this time, best known for his work at Atari, where he designed Pong and was involved in the development of the Atari 400 and the ill-fated "holographic" Atari Cosmos. After leaving Atari in 1981, he consulted for various companies and was brought in by Apple as outside expertise to try to rescue Aquarius. Alcorn pitched the question to microprocessor expert Hugh Martin, who studied the specification and promptly pronounced it "ridiculous" to both Alcorn and Sculley. On this advice Sculley scuttled Aquarius in 1989 and hired Martin to design a computer instead using an existing CPU. Martin's assignment became the similarly ill-fated Jaguar project, which competed poorly with another simultaneous project led by veteran engineer Jack McHenry called Cognac. Cognac, unlike Jaguar and Aquarius, actually produced working hardware. The "RISC LC" that the Cognac team built, originally a heavily modified Macintosh LC with a Motorola 88100 CPU running Mac OS, became the direct ancestor of the Power Macintosh. The Cray supercomputer, now idle, eventually went to the industrial design group for case modeling until it was dismantled.

Now that we have an actual specification to read, how might this have compared to the PowerPC 601? Scorpius defined a big-endian 32-bit RISC chip addressing up to 4GB of RAM with four cores, which the technical specification refers to as processing units, or PUs. Each core shares instruction and data caches with the others and communicates over a 5x4 crossbar network, and because all cores on a CPU must execute within the same address space, are probably best considered most similar to modern hardware threads (such as the 32 threads on the SMT-4 eight core POWER9 I'm typing this on). An individual core has 16 32-bit general purpose registers (GPRs) and seven special purpose registers (SPRs), plus eight global SPRs common to the entire CPU, though there is no floating-point unit in the specification we see here. Like ARM, and unlike PowerPC and modern Power ISA, the link register (which saves return addresses) is a regular GPR and code can jump directly to an address in any register. However, despite having a 32-bit addressing space and 32-bit registers, Scorpius uses a fixed-size 16-bit instruction word. Typical of early RISC designs and still maintained in modern MIPS CPUs, it also has a branch delay slot, where the instruction following a branch (even if the branch is taken) is always executed. Besides the standard cache control instructions, there are also special instructions for a core to broadcast to other cores, and the four PUs could be directed to work on data in tandem to yield SIMD vector-like operations (such as what you would see with AltiVec and SSE). Holland's design even envisioned an "inter-processor bus" (IPB) connecting up to 16 CPUs, each with their own local memory, something not unlike what we would call a non-uniform memory access (NUMA) design today.

The 16-bit instruction size greatly limits the breadth of available instructions compared to PowerPC's 32-bit instructions, but that would certainly be within the "letter" spirit of RISC. It also makes the code possibly more dense than PowerPC, though the limited amount of bits available for displacements and immediate values requires the use of a second prefix register and potentially multiple instructions which dampens this advantage somewhat. The use of multiple PUs in tandem for SIMD-like operations is analogous to AltiVec and rather more flexible, though the use of bespoke hardware support in later SIMD designs like the G4 is probably higher performance. The lack of a floating-point unit was probably not a major issue in 1986 but wasn't very forward-looking as every 601 shipped with an FPU standard from the factory; on the other hand, the NUMA IPB was very adventurous and certainly more advanced than multiprocessor PowerPC designs, something that wasn't even really possible until the 604 (or not without a lot of hacks, as in the case of the 603-based BeBox).

It's ultimately an academic exercise, of course, because this specification was effectively just a wish list whereas the 601 actually existed, though not for several more years. Plus, the first Power Macs, being descendants of the compatibility-oriented RISC LC, could still run 68K Mac software; while the specification doesn't say, Aquarius' radical differences from its ancestor suggests a completely isolated architecture intended for a totally new computer. Were Antares-based systems to actually emerge, it is quite possible that they would have eclipsed the Mac as a new and different machine, and in that alternate future I'd probably be writing a droll and informative article about the lost RISC Mac prototype instead.


Cameron Kaiser: TenFourFox FPR18b1 available

Tue, 24/12/2019 - 07:37
TenFourFox Feature Parity Release 18 beta 1 is now available (downloads, hashes, release notes). As promised, the biggest change in this release is to TenFourFox's Reader mode. Reader mode uses Mozilla Readability to display a stripped-down version of the page with (hopefully) the salient content, just the salient content, and no crap or cruft. This has obvious advantages for old systems like ours because Reader mode pages are smaller and substantially simpler, don't run JavaScript, and help to wallpaper over various DOM and layout deficiencies our older patched-up Firefox 45 underpinnings are starting to show a bit more.

In FPR18, Reader mode has two main changes: first, it is updated to the same release used in current versions of Firefox (I rewrote the glue module in TenFourFox so that current releases could be used unmodified, which helps maintainability), and second, Reader mode is now allowed on most web pages instead of only on ones Readability thinks it can render. By avoiding a page scan this makes the browser a teensy bit faster, but it also means that edge-case web pages that could still usefully display in Reader mode now can do so. When Reader mode can be enabled, a little "open book" icon appears in the address bar. Click that and it will turn orange and the page will switch to Reader mode. Click it again to return to the prior version of the page. Certain sites don't work well with this approach and are automatically filtered; we use the same list as Firefox. If you want the old method where the browser would scan the page first before offering reader mode, switch tenfourfox.reader.force-enable to false and reload the tab, and please mention what it was doing inappropriately so it can be investigated.

Reader mode isn't seamless, and in fairness wasn't designed to be. The most noticeable discontinuity is if you click a link within a Reader mode page, it renders that link in the regular browser (requiring you to re-enter Reader mode if you want to stay there), which kind of sucks for multipage documents. I'm considering a tweak to it such that you stay in Reader mode in a tab until you exit it but I don't know how well this would work and it would certainly alter the functionality of many pages. Post your thoughts in the comments. I might consider something like this for FPR19.

Besides the usual security updates, FPR18 also makes a minor compatibility fix to the browser and improves the comprehensiveness of removing browser data for privacy reasons. More work needs to be done on this because of currently missing APIs, but this first pass in FPR18 is a safe and easy improvement. As this is the first "fast four week" release, it will become live January 6.

I also wrote up another quickie script for those of you exploring TenFourFox's AppleScript support. Although Old Reddit appears to work just dandy with TenFourFox, the current React-based New Reddit is a basket case: it's slow, it uses newer JavaScript support that TenFourFox only allows incompletely, and its DOM is hard for extensions to navigate. If you're stuck on New Reddit and you can't read the comments because the "VIEW ENTIRE CONVERSATION" button doesn't work because React and if you work on React I hate you, you can now download Reddit Moar Comments. When the script is run, if a Reddit comments thread is in the current tab, it acts as if the View Entire Conversation button had been clicked and expands the thread. If you're like me, put the Scripts menu in the menu bar (using the AppleScript Utility), have a TenFourFox folder in your Scripts, and put this script in it so it's just a menu drop-down or two away. Don't forget to try the other possibly useful scripts in that folder, or see if you can write your own.

Merry Christmas to those of you who celebrate it, and a happy holiday to all.


Armen Zambrano: Reducing Treeherder's time-to-deploy

Mon, 23/12/2019 - 19:34
Reducing Treeherder’s time-to-deploy

Up until September, we had been using code merges from the master branch to the production branch to trigger production deployments.

A merge to production would trigger a few automatic steps:

  1. The code would get tested in the Travis CI (10 minutes or more)
  2. Upon success the code would be built by Heroku (a few minutes)
  3. Upon success a Heroku release would happen (less than a minute)
[Figure: What steps happen before new code is deployed]

If a regression was found on production we would either `git revert` a change out of all the merged changes OR use Heroku's rollback feature to return to the last known working state (without using Git).

Using `git revert` to get us back into a good state would be very slow since it would take 15–20 minutes to run through Travis, a Heroku build and a Heroku release.

On the other hand, Heroku's rollback feature is an immediate step, as it skips steps 1 and 2. Rolling back is possible because a previous build of a commit is still available and only the release step is needed.

The procedural change I proposed was to use Heroku's promotion feature (similar to Heroku's rollback feature). This reuses a build from the staging app for the production app. The promotion process is a one-click button event that only executes the release step, since steps 1 and 2 have already run on the staging app. Promotions take less than a minute to go live.
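For reference, the same promotion can also be triggered from the Heroku CLI instead of the dashboard button (the app name below is illustrative, not necessarily the real one):

    heroku pipelines:promote -a treeherder-stage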

[Figure: How a Heroku build on stage is reused for production]

This change made day-to-day deployments a less involved process, since every deployment now takes less than a minute. I've been quite satisfied with the change, since there's much less waiting around to validate a deployment.


Marco Zehe: Happy Chanukka

Mon, 23/12/2019 - 13:00

Wishing all of my readers who celebrate it, a very happy Chanukka!

This year, Chanukka started at sundown on December 22 and runs through December 30, coinciding with Christmas. And as it so happens, the Muslim mayor of London kicked off the Chanukka celebrations from a Christmas tree last night. In a world where divisive thoughts and actions are becoming more prominent again, endangering the free and open societies of some Western countries, these connecting events are more important than ever.

Welcome to london…where the Muslim mayor of London kicks off the Jewish festival of Chanukah metre from a giant Christmas tree on one of the most famous sites in the world @JLC_uk @JewishLondon @ChabadUK @JewishNewsUK @sadiqkhan pic.twitter.com/5V8sUtaeE5

— Justin Cohen (@CohenJust) December 22, 2019

Let me close by sharing with you a musical wish for a happy Chanukka from one of my favorite bands, the Canadian group Walk Off The Earth, featuring Scott Helman.


Niko Matsakis: Async Interview #3: Carl Lerche

Mon, 23/12/2019 - 06:00

Hello! For the latest async interview, I spoke with Carl Lerche (carllerche). Among many other crates [1], Carl is perhaps best known as one of the key authors behind tokio and mio. These two crates are quite widely used through the async ecosystem. Carl and I spoke on December 3rd.

Video

You can watch the video on YouTube. I’ve also embedded a copy here for your convenience:

Background: the mio crate

One of the first things we talked about was a kind of overview of the layers of the “tokio-based async stack”.

We started with the mio crate. mio is meant to be the "lightest possible" non-blocking I/O layer for Rust. It basically exposes the "epoll" interface that is widely used on Linux. Windows uses a fundamentally different model, so in that case there is a kind of compatibility layer, and hence the performance isn't quite as good, but it's still pretty decent. mio "does the best it can", as Carl put it.

The tokio crate builds on mio. It wraps the epoll interface and exposes it via the Future abstraction from std. It also offers other things that people commonly need, such as timers.

Finally, building atop tokio you find tower, which exposes a "request-response" abstraction called Service. tower is similar to things like finagle or rack. This is then used by libraries like hyper and tonic, which implement protocol servers (http for hyper, gRPC for tonic). These protocol servers internally use the tower abstractions as well, so you can tell hyper to execute any Service.

One challenge is that it is not yet clear how to adapt tower’s Service trait to std::Future. It would really benefit from support of async functions in traits, in particular, which is difficult for a lot of reasons. The current plan is to adopt Pin and to require boxing and dyn Future values if you wish to use the async fn sugar. (Which seems like a good starting place, -ed.)
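To make the shape of that plan concrete, here's a rough sketch (my own illustration, not tower's actual definition) of a Service trait whose async bodies are boxed:

    // A tower-like request/response abstraction using boxed futures.
    use std::future::Future;
    use std::pin::Pin;

    type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

    trait Service<Request> {
        type Response;
        type Error;
        fn call(&mut self, req: Request) -> BoxFuture<'static, Result<Self::Response, Self::Error>>;
    }

    // An implementation can still use async block syntax,
    // at the cost of one allocation per call.
    struct Echo;

    impl Service<String> for Echo {
        type Response = String;
        type Error = std::convert::Infallible;

        fn call(&mut self, req: String) -> BoxFuture<'static, Result<String, Self::Error>> {
            Box::pin(async move { Ok(req) })
        }
    }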

Returning to the overall async stack, atop protocol servers like hyper, you find web frameworks, such as warp – and (finally) within those you have middleware and the actual applications.

How independent are these various layers?

I was curious to understand how "interconnected" these various crates were. After all, while tokio is widely used, there are a number of different executors out there, targeting different platforms (e.g., Fuchsia) as well as different trade-offs (e.g., async-std). I'm really interested to get a better understanding of what we can do to help the various layers described above operate independently, so that people can mix-and-match.

To that end, I asked Carl what it would take to use (say) Warp on Fuchsia. The answer was that “in principle” the point of Tower is to create just such a decoupling, but in practice it might not be so easy.

One of the big changes in the upcoming tokio 0.2 crate, in fact, has been to combine and merge a lot of tokio into one crate. Previously, the components were more decoupled, but people rarely took advantage of that. Therefore, tokio 0.2 combined a lot of components and made the experience of using them together more streamlined, although it is still possible to use components in a more “standalone” fashion.

In general, to make tokio work, you need some form of "driver thread". Typically this is done by spawning a background thread, but you can also skip that and run the driver yourself.

The original tokio design had a static global that contained this driver information, but this had a number of issues in practice: the driver sometimes started unexpectedly, it could be hard to configure, and it didn't work great for embedded environments. Therefore, the new system has switched to an explicit launch, though the procedural macros #[tokio::main] and #[tokio::test] provide sugar if you prefer.

What should we do next? Stabilize stream.

Next we discussed which concrete actions made sense next. Carl felt that an obvious next step would be to stabilize the Stream trait. As you may recall, cramertj and I discussed the Stream trait in quite a lot of detail – in short, the existing design for Stream is “detached”, meaning that it must yield up ownership of each item it produces, much like an Iterator. It would be nice to figure out the story for “attached” streams that can re-use internal buffers, which are a very common use case, especially before we create syntactic sugar.
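For reference, the "detached" Stream trait under discussion has roughly this shape (a sketch matching futures-core 0.3, not a stabilized std API):

    use std::pin::Pin;
    use std::task::{Context, Poll};

    trait Stream {
        type Item;

        // Like Iterator::next, but may return Poll::Pending while waiting;
        // each yielded Item is owned, which is what makes the stream "detached".
        fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
    }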

Carl’s motivation for a stable Stream is in part that he would like to issue a stable tokio release, ideally in Q3 of 2020, and Stream would be a part of that. If there is no Stream trait in the standard libary, that complicates things.

One thing we didn’t discuss, but which I personally would like to understand better, is what sort of libraries and infrastructure might benefit from a stabilized Stream. For example, “data libraries” like hyper mostly want a trait like AsyncRead to be stabilized.

About async read

Next we discussed the AsyncRead trait a little, though not in great depth. If you’ve been following the latest discussion, you’ll have seen that there is a tokio proposal to modify the AsyncRead traits used within tokio. There are two main goals here:

  • to make it safe to pass an uninitialized memory buffer to read
  • to better support vectorizing writes

However, there isn’t a clear consensus on the thread (at least not the last time I checked) on the best alternative design. The PR itself proposes changing from a &mut [u8] buffer (for writing the output into) to a dyn trait value, but there are other options. Carl for example proposed using a concrete wrapper struct instead, and adding methods to test for vectorization support (since outer layers may wish to adopt different strategies based on whether vectorization works).

One of the arguments in favor of the current design from the futures crate is that it maps very cleanly to the Read trait from the stdlib (cramertj advanced this argument, for example). Carl felt that the trait is already quite different (notably, it uses Pin) and that these more "analogous" interfaces could be made with defaulted helper methods instead. Further, he felt that async applications tend to prize performance more highly than synchronous ones, so the cost of needlessly initializing buffers matters more and support for uninitialized memory is correspondingly more important.
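To illustrate the wrapper-struct alternative Carl mentioned, here's a rough sketch (names and details are my own, not from the proposal) of a buffer that tracks how much of its possibly-uninitialized memory has been filled:

    use std::mem::MaybeUninit;

    // Wraps possibly-uninitialized storage plus cursors recording how much
    // has been filled by a reader and how much is known to be initialized.
    pub struct ReadBuf<'a> {
        buf: &'a mut [MaybeUninit<u8>],
        filled: usize,      // bytes written by the reader
        initialized: usize, // bytes known initialized (>= filled)
    }

    impl<'a> ReadBuf<'a> {
        pub fn filled(&self) -> &[u8] {
            // Safety: the first `filled` bytes are guaranteed initialized.
            unsafe { std::slice::from_raw_parts(self.buf.as_ptr() as *const u8, self.filled) }
        }
    }

Methods probing for vectorization support, as Carl suggested, could hang off a similar concrete type.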

About async destructors and other utilities

We discussed async destructors. Carl felt that they would be a valuable thing to add for sure. He felt that the "general design" proposed by boats would be reasonable, although he thought there might be a bit of a duplication issue if you have both an async drop and a sync drop. A possible solution would be to have a prepare_to_drop async method that gives the object time to do async preparations, and then to always run the sync drop afterwards.
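A sketch of that idea (entirely hypothetical; nothing like this exists in std), using a boxed future to sidestep the async-fn-in-traits problem mentioned earlier:

    use std::future::Future;
    use std::pin::Pin;

    trait PrepareToDrop {
        // Runs before the ordinary synchronous Drop, giving the value a
        // chance to flush buffers, send close frames, and so on; the sync
        // Drop then always runs afterwards, avoiding duplicated cleanup.
        fn prepare_to_drop(&mut self) -> Pin<Box<dyn Future<Output = ()> + '_>>;
    }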

We also discussed a few utility methods like select!, and Carl mentioned that a lot of the ecosystem is currently using things like proc-macro-hack to support these, so perhaps a good thing to focus on would be improving procedural macro support so that it can handle expression-level macros more cleanly.

Footnotes

[1] I think loom looks particularly cool.

