
Hacks.Mozilla.Org: Hopping on Firefox 91

Mozilla planet - Tue, 10/08/2021 - 17:04

August is already here, which means so is Firefox 91! This release adds a Scottish locale and, if the ‘increased contrast’ setting is checked on macOS, automatically enables High Contrast mode.

Private browsing windows have an HTTPS-first policy and will automatically attempt to make all connections to websites secure. Connections will fall back to HTTP if the website does not support HTTPS.

For developers, Firefox 91 supports the Visual Viewport API and adds more options to the Intl.DateTimeFormat object.

This blog post provides merely a set of highlights; for all the details, check out the full Firefox 91 release notes.

Visual Viewport API

Implemented back in Firefox 63, the Visual Viewport API was kept behind the dom.visualviewport.enabled pref in desktop releases. It is now enabled by default, meaning the API is supported in all major browsers.

There are two viewports on the mobile web, the layout viewport and the visual viewport. The layout viewport covers all the elements on a page and the visual viewport represents what is actually visible on screen. If a keyboard appears on screen, the visual viewport dimensions will shrink, but the layout viewport will remain the same.

This API gives you information about the size, offset and scale of the visual viewport and allows you to listen for resize and scroll events. You access it via the visualViewport property of the window interface.

In this simple example, we listen for the resize event and hide an element in the layout when a user zooms in, so as not to clutter the interface.

const elToHide = document.getElementById('to-hide');
var viewport = window.visualViewport;

function resizeHandler() {
  if (viewport.scale > 1.3)
    elToHide.style.display = "none";
  else
    elToHide.style.display = "block";
}

window.visualViewport.addEventListener('resize', resizeHandler);

New formats for Intl.DateTimeFormat

A couple of updates to the Intl.DateTimeFormat object include new timeZoneName options for formatting how a time zone is displayed. These include the localized GMT formats shortOffset and longOffset, and the generic non-location formats shortGeneric and longGeneric. The code below shows all the different options for timeZoneName and their formats.

var date = Date.UTC(2021, 11, 17, 3, 0, 42);
const timezoneNames = ['short', 'long', 'shortOffset', 'longOffset', 'shortGeneric', 'longGeneric'];

for (const zoneName of timezoneNames) {
  var formatter = new Intl.DateTimeFormat('en-US', {
    timeZone: 'America/Los_Angeles',
    timeZoneName: zoneName,
  });
  console.log(zoneName + ": " + formatter.format(date));
}

// expected output:
// > "short: 12/16/2021, PST"
// > "long: 12/16/2021, Pacific Standard Time"
// > "shortOffset: 12/16/2021, GMT-8"
// > "longOffset: 12/16/2021, GMT-08:00"
// > "shortGeneric: 12/16/2021, PT"
// > "longGeneric: 12/16/2021, Pacific Time"

You can now format date ranges as well with the new formatRange() and formatRangeToParts() methods. The former returns a localized and formatted string for the range between two Date objects:

const options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };
const startDate = new Date(Date.UTC(2007, 0, 10, 10, 0, 0));
const endDate = new Date(Date.UTC(2008, 0, 10, 11, 0, 0));
const dateTimeFormat = new Intl.DateTimeFormat('en', options);
console.log(dateTimeFormat.formatRange(startDate, endDate));
// expected output: Wednesday, January 10, 2007 – Thursday, January 10, 2008

And the latter returns an array containing the locale-specific parts of a date range:

const startDate = new Date(Date.UTC(2007, 0, 10, 10, 0, 0)); // > 'Wed, 10 Jan 2007 10:00:00 GMT'
const endDate = new Date(Date.UTC(2007, 0, 10, 11, 0, 0));   // > 'Wed, 10 Jan 2007 11:00:00 GMT'
const dateTimeFormat = new Intl.DateTimeFormat('en', { hour: 'numeric', minute: 'numeric' });
const parts = dateTimeFormat.formatRangeToParts(startDate, endDate);

for (const part of parts) {
  console.log(part);
}

// expected output (in GMT timezone):
// Object { type: "hour", value: "2", source: "startRange" }
// Object { type: "literal", value: ":", source: "startRange" }
// Object { type: "minute", value: "00", source: "startRange" }
// Object { type: "literal", value: " – ", source: "shared" }
// Object { type: "hour", value: "3", source: "endRange" }
// Object { type: "literal", value: ":", source: "endRange" }
// Object { type: "minute", value: "00", source: "endRange" }
// Object { type: "literal", value: " ", source: "shared" }
// Object { type: "dayPeriod", value: "AM", source: "shared" }

Securing the Gamepad API

There have been a few updates to the Gamepad API to fall in line with the spec. It is now only available in secure contexts (HTTPS) and is protected by Feature Policy: gamepad. If access to gamepads is disallowed, calls to Navigator.getGamepads() will throw an error and the gamepadconnected and gamepaddisconnected events will not fire.
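
For instance, a page can guard against these stricter requirements with a defensive pattern along these lines (an illustrative sketch based on the behavior described above, not code from the release notes):

window.addEventListener('gamepadconnected', (event) => {
  console.log('Gamepad connected:', event.gamepad.id);
});

function pollGamepads() {
  try {
    // Throws if gamepad access is blocked by the "gamepad" feature policy
    // or the page is not served over HTTPS.
    return navigator.getGamepads();
  } catch (e) {
    console.warn('Gamepad access unavailable:', e);
    return [];
  }
}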


The post Hopping on Firefox 91 appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Mozilla Security Blog: Firefox 91 Introduces Enhanced Cookie Clearing

Mozilla planet - Tue, 10/08/2021 - 14:55

We are pleased to announce a new, major privacy enhancement to Firefox’s cookie handling that lets you fully erase your browser history for any website. In today’s new version of Firefox, Strict Mode lets you easily delete all cookies and supercookies that were stored on your computer by a website or by any trackers embedded in it.

Building on Total Cookie Protection, Firefox 91’s new approach to deleting cookies prevents hidden privacy violations and makes it easy for you to see which websites are storing information on your computer.

When you decide to tell Firefox to forget about a website, Firefox will automatically throw away all cookies, supercookies and other data stored in that website’s “cookie jar”. This “Enhanced Cookie Clearing” makes it easy to delete all traces of a website in your browser without the possibility of sneaky third-party cookies sticking around.

What data websites are storing in your browser

Browsing the web leaves data behind in your browser. A site may set cookies to keep you logged in, or store preferences in your browser. There are also less obvious kinds of site data, such as caches that improve performance, or offline data which allows web applications to work without an internet connection. Firefox itself also stores data safely on your computer about sites you have visited, including your browsing history or site-specific settings and permissions.

Firefox allows you to clear all cookies and other site data for individual websites. Data clearing can be used to hide your identity from a site by deleting all data that is accessible to the site. In addition, it can be used to wipe any trace of having visited the site from your browsing history.

Why clearing this data can be difficult

To make matters more complicated, the websites that you visit can embed content, such as images, videos and scripts, from other websites. This “cross-site” content can also read and write cookies and other site data.

Let’s say you have visited facebook.com, comfypants.com and mealkit.com. All of these sites store data in Firefox and leave traces on your computer. This data includes typical storage like cookies and localStorage, but also site settings and cached data, such as the HTTP cache. Additionally, comfypants.com and mealkit.com embed a like button from facebook.com.

Firefox Strict Mode includes Total Cookie Protection, where the cookies and data stored by each website on your computer are confined to a separate cookie jar. In Firefox 91, Enhanced Cookie Clearing lets you delete all the cookies and data for any website by emptying that cookie jar. Illustration: Megan Newell and Michael Ham.

Embedded third-party resources complicate data clearing. Before Enhanced Cookie Clearing, Firefox cleared data only for the domain that was specified by the user. That meant that if you were to clear storage for comfypants.com, Firefox deleted the storage of comfypants.com and left the storage of any sites embedded on it (facebook.com) behind. Keeping the embedded storage of facebook.com meant that it could identify and track you again the next time you visited comfypants.com.

How Enhanced Cookie Clearing solves this problem

Total Cookie Protection, built into Firefox, makes sure that facebook.com can’t use cookies to track you across websites. It does this by partitioning data storage into one cookie jar per website, rather than using one big jar for all of facebook.com’s storage. With Enhanced Cookie Clearing, if you clear site data for comfypants.com, the entire cookie jar is emptied, including any data facebook.com set while embedded in comfypants.com.

Now, if you click on Settings > Privacy and Security > Cookies and Site Data > Manage Data, Firefox no longer shows individual domains that store data. Instead, Firefox lists a cookie jar for each website you have visited. That means you can easily recognize and remove all data a website has stored on your computer, without having to worry about leftover data from third parties embedded in that website. Here is how it looks:

In Firefox’s Privacy and Security Settings, you can manage cookies and other site data stored on your computer. In Firefox 91 ETP Strict Mode, Enhanced Cookie Clearing ensures that all data for any site you choose has been completely removed.

How to Enable Enhanced Cookie Clearing

In order for Enhanced Cookie Clearing to work, you need to have Strict Tracking Protection enabled. Once enabled, Enhanced Cookie Clearing will be used whenever you clear data for specific websites, for example when using “Clear cookies and site data” in the identity panel (lock icon) or in the Firefox preferences. Find out how to clear site data in Firefox.

If you not only want to remove a site’s cookies and caches, but want to delete it from history along with any data Firefox has stored about it, you can use the “Forget About This Site” option in the History menu:

Firefox’s History menu lets you clear all history from your computer of any site you have visited. Starting in Firefox 91 in ETP Strict Mode, Enhanced Cookie Clearing ensures that third-party cookies that were stored when you visited that site are deleted as well.

Thank you

We would like to thank the many people at Mozilla who helped and supported the development and deployment of Enhanced Cookie Clearing, including Steven Englehardt, Stefan Zabka, Tim Huang, Prangya Basu, Michael Ham, Mei Loo, Alice Fleischmann, Tanvi Vyas, Ethan Tseng, Mikal Lewis, and Selena Deckelmann.


The post Firefox 91 Introduces Enhanced Cookie Clearing appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Mozilla Security Blog: Firefox 91 introduces HTTPS by Default in Private Browsing

Mozilla planet - Tue, 10/08/2021 - 09:28


We are excited to announce that, starting in Firefox 91, Private Browsing Windows will favor secure connections to the web by default. For every website you visit, Firefox will automatically establish a secure, encrypted connection over HTTPS whenever possible.

What is the difference between HTTP and HTTPS?

The Hypertext Transfer Protocol (HTTP) is a key protocol through which web browsers and websites communicate. However, data transferred by the traditional HTTP protocol is unprotected and transferred in clear text, such that attackers are able to view, steal, or even tamper with the transmitted data. The introduction of HTTP over TLS (HTTPS) fixed this privacy and security shortcoming by allowing the creation of secure, encrypted connections between your browser and the websites that support it.

In the early days of the web, the use of HTTP was dominant. But, since the introduction of its secure successor HTTPS, and further with the availability of free, simple website certificates, the large majority of websites now support HTTPS. While there remain many websites that don’t use HTTPS by default, a large fraction of those sites do support the optional use of HTTPS. In such cases, Firefox Private Browsing Windows now automatically opt into HTTPS for the best available security and privacy.

How HTTPS by Default works

Firefox’s new HTTPS by Default policy in Private Browsing Windows represents a major improvement in the way the browser handles insecure web page addresses. As illustrated in the Figure below, whenever you enter an insecure (HTTP) URL in Firefox’s address bar, or you click on an insecure link on a web page, Firefox will now first try to establish a secure, encrypted HTTPS connection to the website. In the cases where the website does not support HTTPS, Firefox will automatically fall back and establish a connection using the legacy HTTP protocol instead:

If you enter an insecure URL in the Firefox address bar, or if you click an insecure link on a web page, Firefox Private Browsing Windows checks if the destination website supports HTTPS. If YES: Firefox upgrades the connection and establishes a secure, encrypted HTTPS connection. If NO: Firefox falls back to using an insecure HTTP connection.

(Note that this new HTTPS by Default policy in Firefox Private Browsing Windows is not directly applied to the loading of in-page components like images, styles, or scripts in the website you are visiting; it only ensures that the page itself is loaded securely if possible. However, loading a page over HTTPS will, in the majority of cases, also cause those in-page components to load over HTTPS.)

We expect that HTTPS by Default will expand beyond Private Windows in the coming months. Stay tuned for more updates!

It’s Automatic!

As a Firefox user, you can benefit from the additionally provided security mechanism as soon as your Firefox auto-updates to version 91 and you start browsing in a Private Browsing Window. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

Thank you

We are thankful for the support of our colleagues at Mozilla including Neha Kochar, Andrew Overholt, Joe Walker, Selena Deckelmann, Mikal Lewis, Gijs Kruitbosch, Andrew Halberstadt and everyone who is passionate about building the web we want: free, independent and secure!

The post Firefox 91 introduces HTTPS by Default in Private Browsing appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Firefox Add-on Reviews: Find that font! I must have that font!

Mozilla planet - Mon, 09/08/2021 - 23:51

You’re probably a digital designer or work in some publishing capacity (otherwise it would be pretty strange to have a fascination with fonts); and you appreciate the aesthetic power of exceptional typography. 

So what do you do when you encounter a wonderful font in the wild that you might want to use in your own design work? Well, if you have a font finder browser extension you can learn all about it within a couple mouse clicks. Here are some of our favorite font discovery extensions…

Font Finder (revived)

Striking a balance between simple functionality and nuanced features, Font Finder (revived) delivers about everything you’d want in a font inspector. 

The extension provides three main functions:

  • Typography analysis. Font Finder reveals all relevant typographical characteristics like color, spacing, alignment, and of course font name. 
  • Copy information. Any portion of the font analysis can be copied to a clipboard so you can easily paste it anywhere. 
  • Inline editing. All font characteristics (e.g. color, size, type) on an active element can be changed right there on the page.
WhatFont

If you just want to know the name of any font you find and not much else, WhatFont is the ideal tool. 

See an interesting font? Just click the WhatFont toolbar button and mouseover any text on the page to see its font. If you want a bit more info, click the text and a pop-up will show font size, color, and family. 

Just mouseover a font and WhatFont will display the goods.

Fonts Ninja

With a few distinct features, Fonts Ninja is great if you’re doing a lot of font finding and organizing. 

The extension really shines when you encounter a page loaded with a bunch of different fonts you want to learn about. Click the toolbar button and Fonts Ninja will analyze the entire page and display info for every single font found. Then, when you mouseover text on the page you’ll see which font it is and its CSS properties. 

Fonts Ninja has a unique Bookmarks feature that lets you save your favorite fonts in simple fashion.

We hope these extensions help in your search for amazing fonts! Explore more visual customization extensions on addons.mozilla.org.

Categories: Mozilla-nl planet

Spidermonkey Development Blog: TC39 meeting, July 13-16 2021

Mozilla planet - Mon, 09/08/2021 - 09:30

In this meeting, the Realms proposal finally moved forward to stage 3. It will take the form of what is now called “isolated realms”, which does not allow direct object access across the realm boundary (something you can do with iframes). To address this, a new proposal titled getOriginals is being put forward.
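
To make the boundary concrete, here is a rough sketch of the isolated API as shaped at the time (the proposal was later renamed ShadowRealm); only primitives and callables cross the boundary:

const realm = new Realm();

// Code runs against a fresh global; only primitives and wrapped
// functions can cross back out — bare objects cannot.
const double = realm.evaluate('(x) => x * 2');
double(21); // 42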

Beyond that, the ergonomic brand checks proposal moved to stage 4 and will be published in the next specification. Intl.Enumeration also finally moved to stage 3 and implementers have started working on it.

A feature that developers can look forward to experimenting with soon is Array find-from-last. This will enable programmers to easily search for an element from the end of a collection, rather than needing to first reverse the collection to do this search.
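
As a quick sketch of the proposed methods (names per the proposal; shown for illustration):

const numbers = [1, 2, 3, 4, 5];

// Search backwards without reversing or copying the array first.
numbers.findLast((n) => n % 2 === 0);      // 4
numbers.findLastIndex((n) => n % 2 === 0); // 3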

Keep an eye on…
  • Realms
  • Import assertions
  • Module fragments
Normative Spec Changes

Remove “Designed to be subclassable” note.
  • Notes
  • Proposal
  • Slides
  • Summary: Unrelated to the “remove subclassable proposal” – this PR seeks to remove confusing notes about the “subclassability” of classes such as “Boolean”, where such a note makes no sense.
  • Impact on SM: No change
  • Outcome: Consensus.
Restricting callables to only be able to return normal and throw completions
  • Notes
  • Proposal
  • Slides
  • Summary: This proposal tightens the specification language around the return value of callables. Prior to this change, it would be possible for a spec-compliant implementation to have functions return with a completion type “break”. This doesn’t make much sense and is fixed here.
  • Impact on SM: No change
  • Outcome: Consensus.
Proposals Seeking Advancement to Stage 4

Ergonomic Brand Checks
  • Notes
  • Proposal
  • PR
  • Spec
  • Summary: Provides an ergonomic way to check the presence of a private field when one of its methods is called (see the sketch after this list).
  • Impact on SM: Already shipping.
  • Outcome: Advanced to stage 4.
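
A minimal sketch of the feature, using the in operator on a private name (illustrative only):

class Counter {
  #count = 0;

  static isCounter(obj) {
    // True only for objects that actually carry the #count private field,
    // with no risk of a TypeError from touching a missing field.
    return #count in obj;
  }
}

Counter.isCounter(new Counter()); // true
Counter.isCounter({});            // false
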
Proposals Seeking Advancement to Stage 3

Array Find From Last
  • Notes
  • Proposal Link
  • Slides
  • Summary: Proposal for .findLast() and .findLastIndex() methods on arrays.
  • Impact on SM: In progress
  • Outcome: Advanced to stage 3
Intl Enumeration API
  • Notes
  • Proposal Link
  • Slides
  • Summary: Intl enumeration allows inspecting what is available on the Intl API (see the sketch after this list).
  • Impact on SM: In progress
  • Outcome: Advanced to stage 3.
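
The API surface is small; the proposal exposes a single method, Intl.supportedValuesOf (a brief sketch, with example output abbreviated):

// Enumerate identifiers the implementation actually supports.
Intl.supportedValuesOf('calendar');  // e.g. ["buddhist", "chinese", ..., "gregory", ...]
Intl.supportedValuesOf('currency');  // e.g. ["AED", "AFN", ..., "USD", ...]
Intl.supportedValuesOf('timeZone');  // e.g. ["Africa/Abidjan", ..., "UTC"]
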
Realms for stage 3
  • Notes Day 1
  • Notes Day 3
  • Proposal Link
  • Slides
  • Summary: The Realms proposal exposes a new global without a document for use by JS programmers; think iframes without the document. The new proposed API is “isolated realms”, which does not allow passing bare object specifiers between realms. This is an improvement from the browser architecture perspective, but it is less ergonomic. That issue was called out in the previous meeting; in this meeting it was resolved by splitting the added functionality out into its own proposal, getOriginals. Realms advanced to stage 3. getOriginals advanced to stage 1.
  • Impact on SM: Needs implementation, must not ship until the name “Isolated Realms” has been resolved.
  • Outcome: Realms advanced to stage 3. GetOriginals advanced to stage 1.
Stage 3 Updates

Intl.NumberFormat v3
  • Notes
  • Proposal Link
  • Slides
  • Summary: A batch of internationalization features for number formatting. This update focused on changes to grouping enums, rounding and precision options, and negative sign display (a brief sketch follows this list).
  • Impact on SM: In progress
  • Outcome: Advanced to stage 3.
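
As a rough sketch of two of the additions (option names per the proposal; output shown for illustration):

// signDisplay: "negative" shows a sign only for negative, non-zero values.
new Intl.NumberFormat('en', { signDisplay: 'negative' }).format(-1); // "-1"
new Intl.NumberFormat('en', { signDisplay: 'negative' }).format(-0); // "0"

// useGrouping: "min2" groups only when the leading group has at least two digits.
new Intl.NumberFormat('en', { useGrouping: 'min2' }).format(1234);   // "1234"
new Intl.NumberFormat('en', { useGrouping: 'min2' }).format(12345);  // "12,345"
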
Extend TimeZoneName Option Proposal
  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further options for the TimeZoneName option in Intl.DateTimeFormat, allowing for greater accuracy in representing different time zones. No major changes since last presentation.
  • Impact on SM: Implemented
Intl Locale update
  • Notes
  • Proposal Link
  • Slides
  • Summary: An API to expose locale information, such as week data (first day of a week, weekend start, weekend end), hour cycle, measurement system, commonly used calendar, etc. There was a request regarding excluding standard and search from Intl.Locale.prototype.collations, which was retrospectively agreed to.
  • Impact on SM: In progress
Intl DisplayNames
  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further coverage to the existing Intl.DisplayNames API. No significant changes since last presentation. There has been progress in implementation.
  • Impact on SM: In progress
Import Assertions update
  • Notes
  • Proposal Link
  • Slides
  • Summary: The Import Assertions proposal adds an inline syntax for module import statements to pass on more information alongside the module specifier. The initial application for such assertions will be to support additional types of modules in a common way across JavaScript environments, starting with JSON modules. The syntax looks like the following:

    import json from "./foo.json" assert { type: "json" };

    The update focused on the question of “what do we do when we have an assertion that isn’t recognized?”. Currently if a host sees a module type assertion that they don’t recognize they can choose what to do. From our perspective it would be better to restrict this somehow – for now the champions will not change the specification.

  • Impact on SM: Implementation in Progress
Object.hasOwn (Accessible Object hasOwnProperty)
  • Notes
  • Proposal Link
  • Slides
  • Summary: Checking an object for a property is, at the moment, rather unintuitive and error-prone. This proposal introduces a more ergonomic wrapper around a common pattern involving Object.prototype.hasOwnProperty, which allows the following:

    let hasOwnProperty = Object.prototype.hasOwnProperty
    if (hasOwnProperty.call(object, "foo")) {
      console.log("has property foo")
    }

    to be written as:

    if (Object.hasOwn(object, "foo")) {
      console.log("has property foo")
    }

    No significant changes since the last update.

  • Impact on SM: Implemented
Proposals Seeking Advancement to Stage 2

Array filtering
  • Notes
  • Proposal Link
  • Slides
  • Summary: This proposal was two proposals bundled together. It introduces a .filterReject method, an alias for a negated filter such as [1, 2, 3].filter(x => !(x > 2)), which returns all of the elements less than or equal to 2. This did not move forward. A second proposal, groupBy, groups elements by a condition: for example, [1, 2, 3].groupBy(x => x > 2) would return { false: [1, 2], true: [3] } (see the reduce-based sketch after this list). GroupBy advanced to stage 1 as a separate proposal.
  • Impact on SM: No change yet.
  • Outcome: FilterOut did not advance. GroupBy is treated as its own proposal and is now stage 1.
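
To make the grouping semantics concrete, here is an equivalent written with a plain reduce (illustrative only; the final API shape was still in flux):

// Equivalent of the proposed [1, 2, 3].groupBy(x => x > 2)
const grouped = [1, 2, 3].reduce((groups, x) => {
  const key = String(x > 2);
  (groups[key] = groups[key] || []).push(x);
  return groups;
}, {});
// grouped is { false: [1, 2], true: [3] }
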
Stage 2 Updates

Decorators update
  • Notes
  • Proposal Link
  • Slides
  • Summary: The decorators proposal had a champion switch, but they are now happy with the current semantics of the proposal and are seeking stage 3 proposal reviewers. Decorators are functions called on classes, class elements, or other JavaScript syntax forms during definition. They have 3 capabilities: to replace the value being decorated, to associate metadata with a given value being decorated, or provide access to that decorated value. Our concerns with the proposal were related to possible performance issues arising from the proposal. These were addressed in the last iteration, and we are looking forward to rereading the spec.
  • Impact on SM: Needs review.
Proposals Seeking Advancement to Stage 1

ArrayBuffer to/from Base64
  • Notes
  • Proposal Link
  • Slides
  • Summary: Transforms an ArrayBuffer to and from Base64. Base64 is the de facto standard way to represent arbitrary binary data as ASCII. JavaScript has ArrayBuffers (and other wrapping types) to work with binary data, but no built-in mechanism to encode that data as Base64, nor to take Base64’d data and produce a corresponding ArrayBuffer. Peter Hoddie from Moddable raised concerns about this being out of scope, but did not block stage 1.
  • Impact on SM: No change yet.
  • Outcome: Advanced to stage 1.
Stage 1 Updates

Module fragments current direction
  • Notes day 2
  • Notes day 3
  • Proposal Link
  • Slides
  • Summary: The Module fragments proposal allows multiple modules to be written in the same file. The issue was raised that this proposal should be closer in terms of syntax to module blocks, and this change achieved consensus. The primary changes are:

    • Module fragments are named by identifiers, not strings, so they are declared like module foo { export let x = 1 }
    • Import statements can load a module fragment with syntax like import { x } from foo; here too the specifier is an identifier.
    • Import statements which import from a module fragment can work on anything which was declared by a top-level module fragment declaration in the same module, or one which was imported from another module. There’s a link-time data structure representing the subset of the lexical scope which is the statically visible module fragments.
    • When a declared module fragment is referenced as a variable, in a normal expression context, it evaluates to a module block (one per time when it was evaluated, so the same one is reused for module fragments declared at the top level). It appears as a const declaration (so the link-time and run-time semantics always correspond).
    • Module fragments are only visible from outside the module by importing the containing module, and here, only if they are explicitly exported. They have no particular URL (note related issue: Portability concerns of non-string specifiers #10)
    • Module fragment declarations can appear anywhere a statement can, e.g., eval, nested blocks, etc. (but they can only have a static import against them if they are at the top level of a module). In contexts which are not the top level of a module, module fragments are just useful for their runtime behavior, as a nice way of declaring a module block.

    This achieved consensus and the proposal had support overall.

  • Impact on SM: No change yet.
Categories: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR32 SPR3 available

Mozilla planet - Sun, 08/08/2021 - 00:42
TenFourFox Feature Parity Release 32 Security Parity Release 3 "32.3" is available for testing (downloads, hashes). There are, once again, no changes to the release notes and nothing notable regarding the security patches in this release. Assuming no major problems, FPR32.3 will go live Monday evening Pacific time as usual. FPR32.4 will appear on September 7 and the final official build FPR32.5 on October 5.
Categories: Mozilla-nl planet

Firefox Add-on Reviews: How to use a temp mail extension for spam and security protection

Mozilla planet - Sat, 07/08/2021 - 01:55

One of the most common methods malicious hackers use to break into their victims’ computer systems is tricking them into clicking dangerous links within an email. It’s been popular with cyber criminals for decades because it’s so simple yet consistently effective. Just make the email appear like it’s from a trusted source and include a compelling link that, once clicked, is like opening the front door of your home to a thief. 

Temp mail (i.e. temporary email) is a tremendous way to combat this classic cyber scam. Temp mail creates disposable email accounts for you to use for non-personal/business situations, like registering with websites or online services when you don’t want them knowing your actual email, because the more your actual email is in circulation, the greater its chances of falling into the hands of malicious actors. 

Beyond security protection, temp mail is also great for filtering spam. Consider how many daily emails you receive from different social media sites, services, etc.—trying to pull you back into their orbit. Certainly your inbox has seen better days? 

So clear the inbox clutter and better protect yourself against cybercrime by using a temp mail browser extension…

Temp Mail – Disposable Temporary Mail

Just click the Temp Mail – Disposable Temporary Mail toolbar button to create a temp mail address and access other extension features. 

Temp Mail – Disposable Temporary Mail is free to use and, once installed, always available wherever you and your browser go on the web. Your Temp Mail email accounts will remain active until you delete them, so just how “temporary” they are is entirely up to you (also note that whenever you delete a Temp Mail account, other personal details like your IP address will be wiped away as well). 

To be clear, you can operate temp mail just like you would any other email account—you’re free to send and receive messages at will. 

The Temp Mail service will be right there whenever you need it.

Firefox Relay

Mozilla has developed a temp mail service designed for Firefox users called Firefox Relay. It lets you create anonymous email aliases that will forward messages on to your actual, personal email addresses. 

Relay will keep track of all the aliases you’ve created and they’ll remain active until you delete them. Do note, however, that Relay does not allow you to reply to messages anonymously, though that feature is in the works and will hopefully roll out soon. 

If curious, here’s more information about Firefox Relay.

Just click the Firefox Relay button in the email form fields to automatically generate your new alias.

Ew, Mail!

There are no distinct features of Ew, Mail! that you won’t find in Temp Mail – Disposable Temporary Mail or Firefox Relay, but it’s worth including here because it may be the most lightweight of the three. 

Whenever you encounter a need for temp mail, just place your mouse cursor in the address field and right-click to pull up an option to create a temp mail address. Simple as that. 

We hope one of these handy temp mail extensions will give you more security—and less spam. Feel free to explore more great privacy extensions on addons.mozilla.org.

Categories: Mozilla-nl planet

Cameron Kaiser: And now for something completely different: Australia needs to cut the crap with expats

Mozilla planet - Thu, 05/08/2021 - 23:33
I'm going to be very tightly focused in this post, because there are tons of politics swirling around COVID-19 (and anyone who knows my actual line of work will know my opinions about it); any comments about masks, vaccines, etc. will be swiftly removed. Normally I don't discuss non-technical topics here, but this is a situation that personally affects me and this is my blog, so there. I want to talk specifically about the newly announced policy that Australians normally resident overseas will now require an exemption to leave the country.

(via twitter)

I am an Australian-American dual citizen (via my mother, who is Australian, but is resident in the United States), and my wife of five years is Australian. She is legitimately a resident of Australia because she was completing her master's degree there and had to teach in the Australian system to get an unrestricted credential. All this happened when the borders closed. Anyone normally resident in Australia must obtain an exemption to leave the country and cite good cause, except to a handful of countries like New Zealand (who only makes the perfectly reasonable requirement that its residents have a spot in quarantine for when they return).

It was already difficult to exit Australia before, which is why, for the six weeks that I've gotten to see my wife since January 2020, it was me traveling to Australia. Here again many thanks to Air New Zealand, who were very understanding on rescheduling (twice) and even let us keep our Star Alliance Gold status even though we weren't flying much, I did my two weeks of quarantine, got my two negative tests, and was released into the hinterlands of regional New South Wales to visit that side of the family. Upon return to Sydney Airport, it was a simple matter to leave the country, since it was already obvious in the immigration records that I don't normally reside in it.

(The nearly abandoned International Terminal in Sydney when I left.)

Now, there is the distinct possibility that if I can land a ticket to visit my wife, and if I can get space in hotel quarantine (at A$3000, plus greatly inflated airfares), despite being fully vaccinated, I may not be able to leave. Trying to get my credentials approved in Australia has been hung up for months so I wouldn't be able to have a job there in my current employ, and with my father currently on chemo, if he were to take a turn for the worse there are plenty of horror stories of Australians being unable to see terminally ill family members due to refused exemptions (or, adding insult to injury, being approved when they actually died).

I realize as (technically) an expat there isn't much of a constituency to join, but even given we're in the middle of a pandemic this crap has to stop. Restricting entries is heavyhanded, but understandable. Reminding those exiting that they're responsible for hotel or camp quarantine upon return is onerous (and should be reexamined at minimum for those who have indeed gotten the jab), but defensible. Preventing Australian citizens from leaving altogether, especially those with family, is unconscionable and the arbitrary nature of the exemption process is a foul joke.

If Premier Palaszczuk can strike a pose at the International Olympic Committee and Prime Minister Morrison can go gallivanting with randos in English pubs, those of us who are vaccinated and following the law should have that same freedom. I should be able to visit my wife and she should be able to visit me.

Categories: Mozilla-nl planet

Data@Mozilla: This Week in Glean: Building a Mobile Acquisition Dashboard in Looker

Mozilla planet - Thu, 05/08/2021 - 20:26

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

As part of the DUET (Data User Engagement Team) working group, some of my day-to-day work involves building dashboards for visualizing user engagement aspects of the Firefox product. At Mozilla, we recently decided to use Looker to create dashboards and interactive views on our datasets. It’s a new system to learn but provides a flexible model for exploring data. In this post, I’ll walk through the development of several mobile acquisition funnels built in Looker.

The most familiar form of engagement modeling is probably through funnel analysis — measuring engagement by capturing a cohort of users as they flow through various acquisition channels into the product. Typically, you’d visualize the flow as a Sankey or funnel plot, counting retained users at every step. The chart can help build intuition about bottlenecks or the performance of campaigns.

Mozilla owns a few mobile products; there is Firefox for Android, Firefox for iOS, and then Firefox Focus on both operating systems (also known as Klar in certain regions). We use Glean to instrument these products. The foremost benefit of Glean is that it encapsulates many best practices from years of instrumenting browsers; as such, all of the tables that capture anonymized behavior activity are consistent across the products. One valuable idea from this setup is that writing a query for a single product should allow it to extend to others without too much extra work. In addition, we pull in data from both the Google Play Store and Apple App Store to analyze the acquisition numbers. Looker allows us to take advantage of similar schemas with the ability to templatize queries.

ETL Pipeline

The pipeline brings all of the data into BigQuery so it can be referenced in a derived table within Looker.

  1. App Store data is exported into a table in BigQuery.
  2. Glean data flows into the org_mozilla_firefox.baseline table.
  3. A derived org_mozilla_firefox.baseline_clients_first_seen table is created from the baseline table. An org_mozilla_firefox.baseline_clients_daily table is created that references the first seen table.
  4. A Looker explore references the baseline_clients_daily table in a parameterized SQL query, alongside data from the Google Play Store.
  5. A dashboard references the explore to communicate important statistics at first glance, alongside configurable parameters.
Peculiarities of Data Sources

Before jumping off into implementing a dashboard, it’s essential to discuss the quality of the data sources. For one, Mozilla and the app stores count users differently, which leads to subtle inconsistencies.

For example, there is no way for Mozilla to tie a Glean client back to the Play Store installation event in the Play Store. Each Glean client is assigned a new identifier for each device, whereas the Play Store only counts new installs by account (which may have several devices). We can’t track a single user across this boundary, and instead have to rely on the relative proportions over time. There are even more complications when trying to compare numbers between Android and iOS. Whereas the Play Store may show the number of accounts that have visited a page, the Apple App Store shows the total number of page visits instead. Apple also only reports users that have opted into data collection, which under-represents the total number of users.

These differences can be confusing to people who are not intimately familiar with the peculiarities of these different systems. Therefore, an essential part of putting together this view is documenting and educating the dashboard users to understand the data better.

Building a Looker Dashboard

There are three components to building a Looker dashboard: a view, an explore, and a dashboard. These files are written in a markup called LookML. In this project, we consider three files:

  • mobile_android_country.view.lkml
    • Contains the templated SQL query for preprocessing the data, parameters for the query, and a specification of available metrics and dimensions.
  • mobile_android_country.explore.lkml
    • Contains joins across views, and any aggregate tables suggested by Looker.
  • mobile_android_country.dashboard.lkml
    • Generated dashboard configuration for purposes of version-control.
View

The view is the bulk of data modeling work. Here, there are a few fields that are particularly important to keep in mind. First, there is a derived table alongside parameters, dimensions, and measures.

The derived table section allows us to specify the shape of the data that is visible to Looker. We can either refer to a table or view directly from a supported database (e.g., BigQuery) or write a query against that database. Looker will automatically re-run the derived table as necessary. We can also template the query in the view for a dynamic view into the data.

derived_table: {
  sql: with period as (SELECT ...),
      play_store_retained as (
          SELECT
          Date AS submission_date,
          COALESCE(IF(country = "Other", null, country), "OTHER") as country,
          SUM(Store_Listing_visitors) AS first_time_visitor_count,
          SUM(Installers) AS first_time_installs
          FROM
            `moz-fx-data-marketing-prod.google_play_store.Retained_installers_country_v1`
          CROSS JOIN
            period
          WHERE
            Date between start_date and end_date
            AND Package_name IN ('org.mozilla.{% parameter.app_id %}')
          GROUP BY 1, 2
      ),
      ...
      ;;
}

Above is the derived table section for the Android query. Here, we’re looking at the play_store_retained statement inside the common table expression (CTE). Inside of this SQL block, we have access to everything available to BigQuery in addition to view parameters.

# Allow swapping between various applications in the dataset
parameter: app_id {
  description: "The name of the application in the `org.mozilla` namespace."
  type:  unquoted
  default_value: "fenix"
  allowed_value: {
    value: "firefox"
  }
  allowed_value: {
    value: "firefox_beta"
  }
  allowed_value: {
    value:  "fenix"
  }
  allowed_value: {
    value: "focus"
  }
  allowed_value: {
    value: "klar"
  }
}

View parameters trigger updates to the view when changed. These are referenced using the liquid templating syntax:

AND Package_name IN ('org.mozilla.{% parameter.app_id %}')

For Looker to be aware of the shape of the final query result, we must define dimensions and metrics corresponding to columns in the result. Here is the final statement in the CTE from above:

SELECT
    submission_date,
    country,
    max(play_store_updated) AS play_store_updated,
    max(latest_date) AS latest_date,
    sum(first_time_visitor_count) AS first_time_visitor_count,
    ...
    sum(activated) AS activated
FROM play_store_retained
FULL JOIN play_store_installs
USING (submission_date, country)
FULL JOIN last_seen
USING (submission_date, country)
CROSS JOIN period
WHERE submission_date BETWEEN start_date AND end_date
GROUP BY 1, 2
ORDER BY 1, 2


Generally, in an aggregate query like this, the grouping columns will become dimensions while the aggregate values become metrics. A dimension is a column that we can filter or drill down into to get a different slice of the data model:

dimension: country {
  description: "The country code of the aggregates. The set is limited by those reported in the play store."
  type: string
  sql: ${TABLE}.country ;;
}

Note that we can refer to the derived table using the ${TABLE} variable (not unlike interpolating a variable in a bash script).

A measure is a column that represents a metric. This value is typically dependent on the dimensions.

measure: first_time_visitor_count {
  description: "The number of first time visitors to the play store."
  type: sum
  sql: ${TABLE}.first_time_visitor_count ;;
}

We must ensure that all dimensions and columns are declared to make them available to explores. Looker provides a few ways to create these fields automatically. For example, if you create a view directly from a table, Looker can autogenerate these from the schema. Likewise, the SQL editor has options to generate a view file directly. Whatever the method may be, some manual modification will be necessary to build a clean data model for use.

Explore

One of the more compelling features of Looker is the ability for folks to drill down into data models without the need to write SQL. They provide an interface where the dimensions and measures can be manipulated and plotted in an easy-to-use graphical interface. To do this, we need to declare which view to use. Often, just declaring the explore is sufficient:

include: "../views/*.view.lkml"

explore: mobile_android_country {
}

We include the view from a location relative to the explore file. Then we name an explore that shares the same name as the view. Once committed, the explore is available from a drop-down menu in the main UI.

The explore can join multiple views and provide default parameters. In this project, we utilize a country view that we can use to group countries into various buckets. For example, we may have a group for North American countries, another for European countries, and so forth.

explore: mobile_android_country {
  join: country_buckets {
    type: inner
    relationship: many_to_one
    sql_on:  ${country_buckets.code} = ${mobile_android_country.country} ;;
  }
  always_filter: {
    filters: [
      country_buckets.bucket: "Overall"
    ]
  }
}

Finally, the explore is also the place where Looker will materialize certain portions of the view. Materialization is only relevant when copying the materialized segments from the exported dashboard code. An example of what this looks like follows:

aggregate_table: rollup__submission_date__0 {
  query: {
    dimensions: [
      # "app_id" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # app_id,
      # "country_buckets.bucket" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # country_buckets.bucket,
      # "history_days" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # history_days,
      submission_date
    ]
    measures: [activated, event_installs, first_seen, first_time_visitor_count]
    filters: [
      # "country_buckets.bucket" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      country_buckets.bucket: "tier-1",
      # "mobile_android_country.app_id" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      mobile_android_country.app_id: "firefox",
      # "mobile_android_country.history_days" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      mobile_android_country.history_days: "7"
    ]
  }  # Please specify a datagroup_trigger or sql_trigger_value
  # See https://looker.com/docs/r/lookml/types/aggregate_table/materialization
  materialization: {
    sql_trigger_value: SELECT CURRENT_DATE();;
  }
}

Dashboard

Looker provides the tooling to build interactive dashboards that are more than the sum of their parts. Often, the purpose is to present easily digestible information that has been vetted and reviewed by peers. To build a dashboard, you start by adding charts and tables from various explores. Looker provides widgets for filters and for markdown text used to annotate charts. It’s an intuitive process that can nevertheless be somewhat tedious, depending on how complex the information you’re trying to present is.

Once you’ve built the dashboard, Looker provides a button to get a YAML representation to check into version control. The configuration file contains all the relevant information for constructing the dashboard and could even be written by hand with enough patience.

Strengths and Weaknesses of Looker

Now that I’ve gone through building a dashboard end-to-end, here are a few points summarizing my experience and the takeaways from putting together this dashboard.

Parameterized queries allow flexibility across similar tables

I worked with Glean-instrumented data in another project by parameterizing SQL queries using Jinja2 and running queries multiple times. Looker effectively brings this process closer to runtime and allows the ETL and visualization to live on the same platform. I’m impressed by how well it works in practice. The combination of consistent data models in bigquery-etl (e.g. clients_first_seen) and the ability to parameterize based on app-id was surprisingly straightforward. The dashboard can switch between Firefox for Android and Focus for Android without a hitch, even though they are two separate products with two separate datasets in BigQuery.

I can envision many places where we may not want to precompute all results ahead of time but instead compute just a subset of columns or dates on demand. The costs of precomputing and materializing data are non-negligible, especially for large, expensive queries that are viewed once in a blue moon or dimensions that fall in the long tail. Templating and parameters provide a great way to build these into the data model without having to resort to manually written SQL.

LookML in version control allows room for software engineering best practices

While Looker appeals to the non-technical crowd, it also affords many conveniences for the data practitioners who are familiar with the software development practices.

Changes to LookML files are version controlled (e.g., git). Being able to create branches and work on multiple features in parallel has been handy at times. It’s relieving to have the ability to make changes in my instance of the Looker files when trying out something new without having to lose my place. In addition, the ability to configure LookML views, explores, and dashboards in code allow for the process of creating new dashboards to incorporate many best practices like code review.

In addition, it’s nice to be able to use a real editor for mass revision. I was able to create a new dashboard for iOS data that paralleled the Android dashboard by copying over files, modifying the SQL in the view, and making a few edits to the dashboard code directly.

Workflow management is clunky for deploying new dashboards

While there are many upsides to having LookML explores and dashboards in code, there are several pain points while working with the Looker interface.

In particular, the workflow for editing a Dashboard goes something like this. First, you copy the dashboard into a personal folder that you can edit. Next, you make whatever modifications to that dashboard using the UI. Afterward, you export the result and copy-paste it into the dashboard code. While not ideal, this prevents the Dashboard from going out of sync from the one that you’re editing directly (since there won’t be conflicts between the UI and the code in version control). However, it would be nice if it were possible to edit the dashboard directly instead of making a copy with Looker performing any conflict resolution internally.

There have been moments where I’ve had to fight with the git interface built into Looker’s development mode. Reverting a commit on a particular branch or dealing with merge conflicts can be an absolute nightmare. If you do happen to pull the project into a local environment, you aren’t able to validate your changes there (you’ll need to push, pull into Looker, and then validate and fix anything). Finally, the code-formatting option is stuck behind a keyboard shortcut that the browser already uses.

Conclusion: Iterating on Feedback

Simply building a dashboard is not enough to demonstrate that it has value. It’s important to gather feedback from peers and stakeholders to determine the best path forward. Some things benefit from having a concrete implementation, though; there are differences between different platforms and inconsistencies in the data that may only appear after putting together an initial draft of a project.

While hitting goals of making data across app stores and our user populations visible, the funnel dashboard has room for improvement. Having this dashboard located in Looker makes the process of iterating that much easier, though. In addition, the feedback cycle of changing the query to seeing the results is relatively low and is easy to roll back. The tool is promising, and I look forward to seeing how it transforms the data landscape at Mozilla.

Resources
Categories: Mozilla-nl planet

Mozilla Addons Blog: Thank you, Recommended Extensions Community Board!

Mozilla planet - Thu, 05/08/2021 - 20:05

Given the broad visibility of Recommended extensions across addons.mozilla.org (AMO), the Firefox Add-ons Manager, and other places we promote extensions, we believe our curatorial process should include a wide range of perspectives from our global community of contributors. That’s why we have the Recommended Extensions Advisory Board—an ongoing project that involves a rotating group of contributors to help identify and evaluate new extension candidates for the program.

Our most recent community board just completed their six-month project and I’d like to take a moment to thank Sylvain Giroux, Jyotsna Gupta, Chandan Baba, Juraj Mäsiar, and Pranjal Vyas for sharing their time, passion, and knowledge of extensions. Their insights helped usher a wave of new extensions into the Recommended program, including really compelling content like I Don’t Care About Cookies (A+ cookie manager), Tab Stash (highly original take on tab management), Custom Scrollbars (neon colored scrollbar? Yes please!), PocketTube (great way to organize a bunch of YouTube subscriptions), and many more. 

On behalf of the entire Add-ons staff, thank you one and all!

Now we’ll turn our attention to forming the next community board for another six-month project dedicated to evaluating new Recommended candidates. If you have a passion for browser extensions and you think you could make an impact contributing your insights to our curatorial process, we’d love to hear from you by Monday, 30 August. Just drop us an email at amo-featured [at] mozilla.org along with a brief note letting us know a bit about your experience with extensions—whether as a developer, user, or both—and why you’d like to participate on the next Recommended Extensions Community Advisory Board.

The post Thank you, Recommended Extensions Community Board! appeared first on Mozilla Add-ons Community Blog.

Categories: Mozilla-nl planet

Mozilla Performance Blog: Performance in progress

Mozilla planet - Thu, 05/08/2021 - 18:34

In the last six months, the Firefox performance team has implemented changes to improve startup, responsiveness, security (Fission), and web standards support.

Startup and perceived startup

Doug Thayer and Emma Malysz implemented work to improve the perceived startup of Firefox on Windows using a concept called the skeleton UI. Users on Windows may click the Firefox icon and not get visual feedback that Firefox is starting within the time frame they expect. So they click the icon again. And again. And then their screen looks like this.

The reason that startup takes a long time is that many things need to happen before Firefox starts.

As part of startup, we need to start the JS engine and load the profile to get the size and position of the window. We also need to load a large library called XUL.dll, which takes a lot of time to read from disk, especially if your computer is slow.

So what does the skeleton UI change? Basically, after the icon is clicked, we immediately show a window to indicate that Firefox is starting.

The final version of the skeleton UI looks at the user’s past sessions and creates a window with the theme, window dimensions, toolbar content and positions. You can see what it looks like in this video, where the right hand side starts up with the skeleton UI in place. These changes are now available in Firefox 92 beta and riding the trains to release!

Photo by Saffu on Unsplash

In other impactful work to address startup, last summer, Keefer Rourke, an intern on the performance team wrote a simplified API for file IO called IOUtils for use with privileged JavaScript. Emma Malysz and Barret Rennie, along with contributors migrated the existing startup code to IOUtils to improve startup performance.
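
For a feel of the interface, privileged code can read and write files with promise-returning calls along these lines (a rough sketch; exact method names and availability depend on the Firefox version):

// Privileged (chrome) JavaScript only — not available to web content.
const path = PathUtils.join(PathUtils.profileDir, "example.txt");

await IOUtils.writeUTF8(path, "hello");     // write a UTF-8 text file
const text = await IOUtils.readUTF8(path);  // read it back
const found = await IOUtils.exists(path);   // check for existence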

Responsiveness

Previously, when you encountered a page with a script that ran over a certain timing threshold, you would see a warning message that looked as follows:

For many people, this warning showed up too often, the cause was unclear and the options or next steps were confusing.

Doug Thayer and Emma Malysz embarked on work in early 2021 to reduce the proportion of users who experience the slow script warning. The solution that was implemented changed the user experience so the warning would only show if a user interacted with a hung page. They also added code to blame the page that’s causing the hang, and removed the confusing “wait” button.

The result is a 50% reduction in slow script notification submissions!

Vsync

Sean Feng implemented changes to align user interactions more strictly with when the next frame is going to be presented on the screen. This makes Firefox feel more responsive by making sure a frame always contains the result of all pending user interactions. For mobile devices, Sean also landed code to coalesce more touchmove events so they are generated more efficiently.

The impact of Sean’s work, plus Matt Woodrow’s vsync work, is reflected in the graph above. To read more about other responsiveness changes in Firefox, Bas Schouten’s blog post provides more details.

Security (Fission)

Fission is site isolation in Firefox. If you want to learn more, read this detailed and thorough blog post by Anny Gakhokidze and Neha Kochar about the implementation and rollout of Fission in Firefox.

Sean Feng and Randell Jesup landed changes to improve process switches related to NSS initialization and HTTP accept setup in process preallocation for Fission. There are improvements on several pages on Windows (~9% for Google search, 5% for Bing, around 3-4% for Gmail, 2-3% for Microsoft); this should cut process-switch times by 6-8 ms, perhaps as much as 10 ms. Previously, we were seeing 20-40 ms of time attributable to switching processes.


Web standards

The Performance Event Timing API was enabled in Firefox 89 by Sean Feng on all platforms. This API provides web page authors with insights into the latency of certain events triggered by user interactions, which is a prerequisite for Web Vitals. To learn more, read bug 1667836 (Prototype PerformanceEventTiming), the announcement, and the specification.
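As a quick illustration of the standard API (not Firefox-specific code), a page can observe event-timing entries with a PerformanceObserver; the 16 ms threshold below is just an example value:

// Log slow input events using the Performance Event Timing API.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each entry is a PerformanceEventTiming: duration covers the time from
    // the hardware event to the next paint after its handlers ran.
    console.log(`${entry.name}: ${entry.duration}ms`);
  }
});
observer.observe({ type: "event", durationThreshold: 16, buffered: true });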

Tooling

The performance team would like to thank everyone who contributed to this work:

Markus Jaritz, Eric Smyth, Adam Gashlin, Molly Howell, Chris Martin, Jim Mathies, Aaron Klotz, Florian Quèze, Gijs Kruitbosch, Mike Conley, Markus Stange, Emma Malysz, Doug Thayer, Denis Palmerio, Sean Feng, Andrew Creskey, Barret Rennie, Benjamin De Kosnik, Bas Schouten, Marc Leclair and Mike Comella. A special thanks to Doug Thayer for the artwork to display the changes in the skeleton UI and slow script work!

Categorieën: Mozilla-nl planet

Firefox Add-on Reviews: Read EPUB e-books right in your browser

Mozilla planet - wo, 04/08/2021 - 22:15

For many online readers you simply can’t beat the convenience and clarity of reading e-books in EPUB form (i.e. “electronic publication”). EPUB literature adjusts nicely to any screen size or device, but if you want to read EPUBs in your browser, you’ll need an extension to open their distinct files. Here are a few extensions to help turn your browser into an awesome digital bookshelf. 

EPUBReader

Extremely popular and easy to use, EPUBReader can take care of all your e-reading needs in one extension. 

Whenever you encounter a website that offers EPUB, the extension automatically loads the ebook for you. 

Access features by clicking EPUBReader’s toolbar icon, which launches a hub for all your EPUB activity. Here you’ll find all of your saved EPUB files (plus a portal for discovering new, free ebooks) and can manage layout settings like text font, size, colors, backgrounds, and more. 

<figcaption>The Adventures of Captain Hatteras in EPUBReader.</figcaption>

EPUBReader also works very well in tandem with…

Read Aloud: text to speech voice reader

Think of Read Aloud: text to speech voice reader as an audio version of a traditional text-based e-reader. Sit back and let it read the web to you. 

Key features:

  • 40+ languages 
  • Male/female voice options
  • Adjust the pitch and reading speed of any voice
  • PDF support 
<figcaption>Story time with Read Aloud.</figcaption>

EpubPress – read the web offline

EpubPress lets you easily download and organize web pages into a “book” for offline reading. Use it to compile an actual long-read book, or use it to save news articles and other short-form reading lists.  

It’s very intuitive to operate. Once you have all the pages you want to collate open in separate tabs, just order them how you want them to appear in your book. Ads and other distracting widgets are automatically removed from your saved pages. 

<figcaption>EpubPress conveniently turns individual web pages into easy-to-read offline e-books. </figcaption>

We hope these extensions bring you great browser reading joy! Explore more reading extensions on addons.mozilla.org.

Categorieën: Mozilla-nl planet

Jeff Klukas: Deduplication: Where Apache Beam Fits In

Mozilla planet - wo, 04/08/2021 - 22:00

Summary of a talk delivered at Apache Beam Digital Summit on August 4, 2021.

Title slide

This session will start with a brief overview of the problem of duplicate records and the different options available for handling them. We’ll then explore two concrete approaches to deduplication within a Beam streaming pipeline implemented in Mozilla’s open source codebase for ingesting telemetry data from Firefox clients.

We’ll compare the robustness, performance, and operational experience of using the deduplication built in to PubsubIO vs. storing IDs in an external Redis cluster and why Mozilla switched from one approach to the other.

Finally, we’ll compare streaming deduplication to a much stronger end-to-end guarantee that Mozilla achieves via nightly scheduled queries to serve historical analysis use cases.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Advancing advertising transparency in the US Congress

Mozilla planet - wo, 04/08/2021 - 18:46

At Mozilla we believe that greater transparency into the online advertising ecosystem can empower individuals, safeguard advertisers’ interests, and address systemic harms. Lawmakers around the world are stepping up to help realize that vision, and in this post we’re weighing in with some preliminary reflections on a newly-proposed ad transparency bill in the United States Congress: the Social Media DATA Act.

The bill – put forward by Congresswoman Lori Trahan of Massachusetts – mandates that very large platforms create and maintain online ‘ad libraries’ that would be accessible to academic researchers. The bill also seeks to advance the policy discourse around transparency of platform systems beyond advertising (e.g. content moderation practices; recommender systems; etc), by directing the Federal Trade Commission to develop best-practice guidelines and policy recommendations on general data access.

We’re pleased to see that the bill has many welcome features that mirror Mozilla’s public policy approach to ad transparency:

  • Clarity: The bill spells out precisely what kind of data should be made available, and includes many overlaps with Mozilla’s best practice guidance for ad archive APIs. This approach provides clarity for companies that need to populate the ad archives, and a clear legal footing for researchers who wish to avail themselves of those archives.
  • Asymmetric rules: The ad transparency provisions would only be applicable to very large platforms with 100 million monthly active users. This narrow scoping ensures the measures only apply to the online services for whom they are most relevant and where the greatest public interest risks lie.
  • A big picture approach: The bill recognizes that questions of transparency in the platform ecosystem go beyond simply advertising, but that more work is required to define what meaningful transparency regimes should look like for things like recommender systems and automated content moderation systems. It provides the basis for that work to ramp up.

Yet while this bill has many positives, it is not without its shortcomings. Specifically:

  • Access: Only researchers with academic affiliations will be able to benefit from the transparency provisions. We believe that academic affiliation should not be the sole determinant of who gets to benefit from ad archive access. Data journalists, unaffiliated public interest researchers, and certain civil society organizations can also be crucial watchdogs.
  • Influencer ads: This bill does not specifically address risks associated with some of the novel forms of paid online influence. For instance, our recent research into influencer political advertising on TikTok has underscored that this emergent phenomenon needs to be given consideration in ad transparency and accountability discussions.
  • Privacy concerns: Under this bill, ad archives would include data related to the targeting and audience of specific advertisements. If targeting parameters for highly micro-targeted ads are disclosed, this data could be used to identify specific recipients and pose a significant data protection risk.

Fortunately, these shortcomings are not insurmountable, and we already have some ideas for how they could be addressed if and when the bill proceeds to mark-up. In that regard, we look forward to working with Congresswoman Trahan and the broader policy community to fine-tune the bill and improve it.

We’ve long-believed that transparency is a crucial prerequisite for accountability in the online ecosystem. This bill signals an encouraging advancement in the policy discourse.


The post Advancing advertising transparency in the US Congress appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: How MDN’s autocomplete search works

Mozilla planet - di, 03/08/2021 - 17:49

Last month, Gregor Weber and I added an autocomplete search to MDN Web Docs that allows you to quickly jump straight to the document you’re looking for by typing parts of the document title. This is the story about how that’s implemented. If you stick around to the end, I’ll share an “easter egg” feature that, once you’ve learned it, will make you look really cool at dinner parties. Or, perhaps you just want to navigate MDN faster than mere mortals.

MDN's autocomplete search in action

In its simplest form, the input field has an onkeypress event listener that filters through a complete list of every single document title (per locale). At the time of writing, there are 11,690 different document titles (and their URLs) for English US. You can see a preview by opening https://developer.mozilla.org/en-US/search-index.json. Yes, it’s huge, but it’s not too huge to load all into memory. After all, together with the code that does the searching, it’s only loaded when the user has indicated intent to type something. And speaking of size, because the file is compressed with Brotli, the file is only 144KB over the network.

Implementation details

By default, the only JavaScript code that’s loaded is a small shim that watches for onmouseover and onfocus for the search <input> field. There’s also an event listener on the whole document that looks for a certain keystroke. Pressing / at any point acts the same as if you had used your mouse cursor to put focus into the <input> field. As soon as focus is triggered, the first thing it does is download two JavaScript bundles which turn the <input> field into something much more advanced. In its simplest (pseudo) form, here’s how it works:

<input type="search" name="q" onfocus="startAutocomplete()" onmouseover="startAutocomplete()" placeholder="Site search..." value="q"> let started = false; function startAutocomplete() { if (started) { return false; } const script = document.createElement("script"); script.src = "/static/js/autocomplete.js"; document.head.appendChild(script); }

Then it loads /static/js/autocomplete.js which is where the real magic happens. Let’s dig deeper with the pseudo code:

(async function() {
  const response = await fetch('/en-US/search-index.json');
  const documents = await response.json();

  const inputValue = document.querySelector('input[type="search"]').value;

  const flex = FlexSearch.create();
  documents.forEach(({ title }, i) => {
    flex.add(i, title);
  });

  const indexResults = flex.search(inputValue);
  const foundDocuments = indexResults.map((index) => documents[index]);
  displayFoundResults(foundDocuments.slice(0, 10));
})();

As you can probably see, this is an oversimplification of how it actually works, but it’s not yet time to dig into the details. The next step is to display the matches. We use (TypeScript) React to do this, but the following pseudo code is easier to follow:

function displayFoundResults(documents) {
  const container = document.createElement("ul");
  documents.forEach(({ url, title }) => {
    const row = document.createElement("li");
    const link = document.createElement("a");
    link.href = url;
    link.textContent = title;
    row.appendChild(link);
    container.appendChild(row);
  });
  document.querySelector("#search").appendChild(container);
}

Then with some CSS, we display this as an overlay just beneath the <input> field. We highlight each title according to the inputValue, and various keystroke event handlers take care of highlighting the relevant row when you navigate up and down.
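For example, the substring highlighting could be implemented along these lines (a simplified sketch; the real implementation is built with React):

// Wrap the matched portion of a title in a <mark> element.
function highlightTitle(title, inputValue) {
  const index = title.toLowerCase().indexOf(inputValue.toLowerCase());
  if (index === -1) {
    return document.createTextNode(title);
  }
  const fragment = document.createDocumentFragment();
  fragment.append(title.slice(0, index));
  const mark = document.createElement("mark");
  mark.textContent = title.slice(index, index + inputValue.length);
  fragment.append(mark, title.slice(index + inputValue.length));
  return fragment;
}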

Ok, let’s dig deeper into the implementation details

We create the FlexSearch index just once and re-use it for every new keystroke. Because the user might type more while waiting for the network, the code is reactive: it executes the actual search once all the JavaScript and the JSON XHR have arrived.
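In sketch form (hypothetical helper names), that once-only setup might look like this: the index is built behind a promise that every keystroke awaits, so typing before the network has finished simply queues up behind it.

let indexPromise = null;

function getIndex() {
  // Build the FlexSearch index once; every subsequent keystroke re-uses it.
  if (!indexPromise) {
    indexPromise = fetch("/en-US/search-index.json")
      .then((response) => response.json())
      .then((documents) => {
        const flex = FlexSearch.create();
        documents.forEach(({ title }, i) => flex.add(i, title));
        return { flex, documents };
      });
  }
  return indexPromise;
}

async function search(inputValue) {
  const { flex, documents } = await getIndex();
  return flex.search(inputValue).map((index) => documents[index]);
}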

Before we dig into what this FlexSearch is, let’s talk about how the display actually works. For that we use a React library called downshift which handles all the interactions, displays, and makes sure the displayed search results are accessible. downshift is a mature library that handles a myriad of challenges with building a widget like that, especially the aspects of making it accessible.

So, what is this FlexSearch library? It’s another third-party library that makes sure that searching on titles is done with natural language in mind. It describes itself as the “Web’s fastest and most memory-flexible full-text search library with zero dependencies”, and it is a lot more performant and accurate than attempting to simply look for one string in a long list of other strings.

Deciding which result to show first

In fairness, if the user types foreac, it’s not that hard to reduce a list of 10,000+ document titles down to only those that contain foreac in the title; the harder part is deciding which result to show first. The way we implement that relies on pageview stats. We record, for every single MDN URL, how many pageviews it gets as a form of determining “popularity”. The documents that most people decide to arrive on are most probably what the user was searching for.

Our build process, which generates the search-index.json file, knows each URL’s number of pageviews. We actually don’t care about absolute numbers, but what we do care about is the relative differences. For example, we know that Array.prototype.forEach() (that’s one of the document titles) is a more popular page than TypedArray.prototype.forEach(), so we leverage that and sort the entries in search-index.json accordingly. Now, with FlexSearch doing the reduction, we use the “natural order” of the array as the trick that tries to give users the document they were probably looking for. It’s actually the same technique we use for Elasticsearch in our full site-search. More about that in: How MDN’s site-search works.
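At build time that sorting step could look roughly like this (a sketch; the data shapes and pageview numbers are made up):

// Hypothetical build-time step: order the index by popularity, then drop the counts.
const entries = [
  { title: "TypedArray.prototype.forEach()", url: "/en-US/docs/...", pageviews: 50000 },
  { title: "Array.prototype.forEach()", url: "/en-US/docs/...", pageviews: 900000 },
];
entries.sort((a, b) => b.pageviews - a.pageviews);

// Only title and url end up in search-index.json; the "natural order"
// of the array now encodes relative popularity.
const searchIndex = entries.map(({ title, url }) => ({ title, url }));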

The easter egg: How to search by URL

Actually, it’s not a whimsical easter egg, but a feature that came from the fact that this autocomplete needs to work for our content creators. You see, when you work on the content in MDN you start a local “preview server” which is a complete copy of all documents, all running locally as a static site under http://localhost:5000. There, you don’t want to rely on a server to do searches. Content authors need to quickly move between documents, which is much of the reason the autocomplete search is done entirely in the client.

Commonly implemented in tools like the VSCode and Atom IDEs, you can do “fuzzy searches” to find and open files simply by typing portions of the file path. For example, searching for whmlemvo should find the file files/web/html/element/video. You can do that with MDN’s autocomplete search too. The way you do it is by typing / as the first input character.
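This style of matching can be approximated with a simple subsequence check: every character of the query has to appear in the candidate path in order, but not necessarily adjacently. A sketch (not the actual implementation):

// Returns true if every character of query appears in candidate, in order.
function fuzzyMatch(query, candidate) {
  let i = 0;
  for (const char of candidate) {
    if (char === query[i]) {
      i++;
    }
    if (i === query.length) {
      return true;
    }
  }
  return false;
}

fuzzyMatch("whmlemvo", "files/web/html/element/video"); // true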

Activate "fuzzy search" on MDN

It makes it really quick to jump straight to a document if you know its URL but don’t want to spell it out exactly.
In fact, there’s another way to navigate and that is to first press / anywhere when browsing MDN, which activates the autocomplete search. Then you type / again, and you’re off to the races!

How to get really deep into the implementation details

The code for all of this is in the Yari repo which is the project that builds and previews all of the MDN content. To find the exact code, click into the client/src/search.tsx source code and you’ll find all the code for lazy-loading, searching, preloading, and displaying autocomplete searches.

The post How MDN’s autocomplete search works appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: The push for GATs stabilization

Mozilla planet - di, 03/08/2021 - 02:00
The push for GATs stabilization

Where to start, where to start...

Let's begin by saying: this is a very exciting post. Some people reading this will be overwhelmingly thrilled; some will have no idea what GATs (generic associated types) are; others might be in disbelief. The RFC for this feature did get opened in April of 2016 (and merged about a year and a half later). In fact, this RFC even predates const generics (an MVP of which was recently stabilized). Don't let this fool you though: it is a powerful feature, and the reactions to the tracking issue on GitHub should give you an idea of its popularity (it is the most upvoted issue on the Rust repository).

If you're not familiar with GATs, they allow you to define type, lifetime, or const generics on associated types. Like so:

trait Foo {
    type Bar<'a>;
}

Now, this may seem underwhelming, but I'll go into more detail later as to why this really is a powerful feature.

But for now: what exactly is happening? Well, nearly four years after its RFC was merged, the generic_associated_types feature is no longer "incomplete."

crickets chirping

Wait...that's it?? Well, yes! I'll go into a bit of detail later in this blog post as to why this is a big deal. But, long story short, a good number of changes have had to be made to the compiler to get GATs to work. And, while there are still a few small remaining diagnostics issues, the feature is finally in a state where we feel comfortable making it no longer "incomplete".

So, what does that mean? Well, all it really means is that when you use this feature on nightly, you'll no longer get the "generic_associated_types is incomplete" warning. However, the real reason this is a big deal: we want to stabilize this feature. But we need your help. We need you to test this feature, to file issues for any bugs you find or for potential diagnostic improvements. Also, we'd love for you to just tell us about some interesting patterns that GATs enable over on Zulip!

Without making promises that we aren't 100% sure we can keep, we have high hopes we can stabilize this feature within the next couple months. But, we want to make sure we aren't missing glaringly obvious bugs or flaws. We want this to be a smooth stabilization.

Okay. Phew. That's the main point of this post and the most exciting news. But as I said before, I think it's also reasonable for me to explain what this feature is, what you can do with it, and some of the background and how we got here.

So what are GATs?

Note: this will only be a brief overview. The RFC contains many more details.

GATs (generic associated types) were originally proposed in RFC 1598. As said before, they allow you to define type, lifetime, or const generics on associated types. If you're familiar with languages that have "higher-kinded types", then you could call GATs type constructors on traits. Perhaps the easiest way for you to get a sense of how you might use GATs is to jump into an example.

Here is a popular use case: a LendingIterator (formerly known as a StreamingIterator):

trait LendingIterator {
    type Item<'a> where Self: 'a;

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

Let's go through one implementation of this, a hypothetical <[T]>::windows_mut, which allows for iterating through overlapping mutable windows on a slice. If you were to try to implement this with Iterator today like

struct WindowsMut<'t, T> {
    slice: &'t mut [T],
    start: usize,
    window_size: usize,
}

impl<'t, T> Iterator for WindowsMut<'t, T> {
    type Item = &'t mut [T];

    fn next<'a>(&'a mut self) -> Option<Self::Item> {
        let retval = self.slice[self.start..].get_mut(..self.window_size)?;
        self.start += 1;
        Some(retval)
    }
}

then you would get an error.

error[E0495]: cannot infer an appropriate lifetime for lifetime parameter in function call due to conflicting requirements
 --> src/lib.rs:9:22
  |
9 |         let retval = self.slice[self.start..].get_mut(..self.window_size)?;
  |                      ^^^^^^^^^^^^^^^^^^^^^^^^
  |
note: first, the lifetime cannot outlive the lifetime `'a` as defined on the method body at 8:13...
 --> src/lib.rs:8:13
  |
8 |     fn next<'a>(&'a mut self) -> Option<Self::Item> {
  |             ^^
note: ...so that reference does not outlive borrowed content
 --> src/lib.rs:9:22
  |
9 |         let retval = self.slice[self.start..].get_mut(..self.window_size)?;
  |                      ^^^^^^^^^^
note: but, the lifetime must be valid for the lifetime `'t` as defined on the impl at 6:6...
 --> src/lib.rs:6:6
  |
6 | impl<'t, T: 't> Iterator for WindowsMut<'t, T> {
  |      ^^

Put succinctly, this error is essentially telling us that in order for us to be able to return a reference to self.slice, it must live as long as 'a, which would require a 'a: 't bound (which we can't provide). Without this, we could call next while already holding a reference to the slice, creating overlapping mutable references. However, it does compile fine if you were to implement this using the LendingIterator trait from before:

impl<'t, T> LendingIterator for WindowsMut<'t, T> {
    type Item<'a> where Self: 'a = &'a mut [T];

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>> {
        let retval = self.slice[self.start..].get_mut(..self.window_size)?;
        self.start += 1;
        Some(retval)
    }
}

As an aside, there's one thing to note about this trait and impl that you might be curious about: the where Self: 'a clause on Item. Briefly, this allows us to use &'a mut [T]; without this where clause, someone could try to return Self::Item<'static> and extend the lifetime of the slice. We understand that this is a point of confusion sometimes and are considering potential alternatives, such as always assuming this bound or implying it based on usage within the trait (see this issue). We definitely would love to hear about your use cases here, particularly when assuming this bound would be a hindrance.

As another example, imagine you wanted a struct to be generic over a pointer to a specific type. You might write the following code:

trait PointerFamily {
    type Pointer<T>: Deref<Target = T>;

    fn new<T>(value: T) -> Self::Pointer<T>;
}

struct ArcFamily;
struct RcFamily;

impl PointerFamily for ArcFamily {
    type Pointer<T> = Arc<T>;
    ...
}

impl PointerFamily for RcFamily {
    type Pointer<T> = Rc<T>;
    ...
}

struct MyStruct<P: PointerFamily> {
    pointer: P::Pointer<String>,
}

We won't go in-depth on the details here, but this example is nice in that it not only highlights the use of types in GATs, but also shows that you can still use the trait bounds that you already can use on associated types.

These two examples only scratch the surface of the patterns that GATs support. If you find any that seem particularly interesting or clever, we would love to hear about them over on Zulip!

Why has it taken so long to implement this?

So what has caused us to have taken nearly four years to get to the point that we are now? Well, it's hard to put into words how much the existing trait solver has had to change and adapt; but, consider this: for a while, it was thought that to support GATs, we would have to transition rustc to use Chalk, a potential future trait solver that uses logical predicates to solve trait goals (though, while some progress has been made, it's still very experimental even now).

For reference, here are some various implementation additions and changes that have been made that have furthered GAT support in some way or another:

  • Parsing GATs in AST (#45904)
  • Resolving lifetimes in GATs (#46706)
  • Initial trait solver work to support lifetimes (#67160)
  • Validating projection bounds (and making changes that allow type and const GATs) (#72788)
  • Separating projection bounds and predicates (#73905)
  • Allowing GATs in trait paths (#79554)
  • Partially replace leak check with universes (#65232)
  • Move leak check to later in trait solving (#72493)
  • Replacing bound vars in GATs with placeholders when projecting (#86993)

And to further emphasize the work above: many of these PRs are large and have considerable design work behind them. There are also several smaller PRs along the way. But, we made it. And I just want to congratulate everyone who's put effort into this one way or another. You rock.

What limitations are there currently?

Ok, so now comes the part that nobody likes hearing about: the limitations. Fortunately, in this case, there's really only one GAT limitation: traits with GATs are not object safe. This means you won't be able to do something like

fn takes_iter(_: &mut dyn for<'a> LendingIterator<Item<'a> = &'a i32>) {}

The biggest reason for this decision is that there's still a bit of design and implementation work to actually make this usable. And while this is a nice feature, adding this in the future would be a backward-compatible change. We feel that it's better to get most of GATs stabilized and then come back and try to tackle this later than to block GATs for even longer. Also, GATs without object safety are still very powerful, so we don't lose much by deferring this.

As was mentioned earlier in this post, there are still a couple remaining diagnostics issues. If you do find bugs though, please file issues!

Categorieën: Mozilla-nl planet

Wladimir Palant: Data exfiltration in Keepa Price Tracker

Mozilla planet - ma, 02/08/2021 - 14:46

As readers of this blog might remember, shopping assistants aren’t exactly known for their respect of your privacy. They will typically use their privileged access to your browser in order to extract data. For them, this ability is a competitive advantage. You pay for a free product with a privacy hazard.

Usually, the vendor will claim to anonymize all data, a claim that can rarely be verified. Even if the anonymization actually happens, it’s really hard to do this right. If anonymization can be reversed and the data falls into the wrong hands, this can have severe consequences for a person’s life.

Meat grinder with the Keepa logo on its side is working on the Amazon logo, producing lots of prices and stars

<figcaption>Image credits: Keepa, palomaironique, Nikon1803</figcaption>

Today we will take a closer look at a browser extension called “Keepa – Amazon Price Tracker” which is used by at least two million users across different browsers. The extension is being brought out by a German company and the privacy policy is refreshingly short and concise, suggesting that no unexpected data collection is going on. The reality however is: not only will this extension extract data from your Amazon sessions, it will even use your bandwidth to load various Amazon pages in the background.

The server communication

The Keepa extension keeps a persistent WebSocket connection open to its server dyn.keepa.com. The connection parameters include your unique user identifier, stored both in the extension and as a cookie on keepa.com. As a result, this identifier will survive both clearing browsing data and reinstalling the extension; you’d have to do both for it to be cleared. If you choose to register on keepa.com, this identifier will also be tied to your user name and email address.

Looking at the messages being exchanged, you’ll see that these are binary data. But they aren’t encrypted; it’s merely deflate-compressed JSON data.

Developer tools showing binary messages being exchanged

You can see the original message contents by copying the message as a Base64 string, then running the following code in the context of the extension’s background page:

pako.inflate(atob("eAGrViouSSwpLVayMjSw0FFQylOyMjesBQBQGwZU"), {to: "string"});

This will display the initial message sent by the server:

{ "status": 108, "n": 71 } What does Keepa learn about your browsing?

Whenever I open an Amazon product page, a message like the following is sent to the Keepa server:

{ "payload": [null], "scrapedData": { "tld": "de" }, "ratings": [{ "rating": "4,3", "ratingCount": "2.924", "asin": "B0719M4YZB" }], "key": "f1", "domainId": 3 }

This tells the server that I am using Amazon Germany (the value 3 in domainId stands for .de, 1 would have been .com). It also indicates the product I viewed (asin field) and how it was rated by Amazon users. Depending on the product, additional data like the sales rank might be present here. Also, the page scraping rules are determined by the server and can change any time to collect more sensitive data.

A similar message is sent when an Amazon search is performed. The only difference here is that ratings array contains multiple entries, one for each article in your search results. While the search string itself isn’t being transmitted (not with the current scraping rules at least), from the search results it’s trivial to deduce what you searched for.

Extension getting active on its own

That’s not the end of it however. The extension will also regularly receive instructions like the following from the server (shortened for clarity):

{ "key": "o1", "url": "https://www.amazon.de/gp/aod/ajax/ref=aod_page_2?asin=B074DDJFTH&…", "isAjax": true, "httpMethod": 0, "domainId": 3, "timeout": 8000, "scrapeFilters": [{ "sellerName": { "name": "sellerName", "selector": "#aod-offer-soldBy div.a-col-right > a:first-child", "altSelector": "#aod-offer-soldBy .a-col-right span:first-child", "attribute": "text", "reGroup": 0, "multiple": false, "optional": true, "isListSelector": false, "parentList": "offers", "keepBR": false }, "rating": { "name": "rating", "selector": "#aod-offer-seller-rating", "attribute": "text", "regExp": "(\\d{1,3})\\s?%", "reGroup": 1, "multiple": false, "optional": true, "isListSelector": false, "parentList": "offers", "keepBR": false }, … }], "l": [{ "path": ["chrome", "webRequest", "onBeforeSendHeaders", "addListener"], "index": 1, "a": { "urls": ["<all_urls>"], "types": ["main_frame", "sub_frame", "stylesheet", "script", …] }, "b": ["requestHeaders", "blocking", "extraHeaders"] }, …, null], "block": "(https?:)?\\/\\/.*?(\\.gif|\\.jpg|\\.png|\\.woff2?|\\.css|adsystem\\.)\\??" }

The address https://www.amazon.de/gp/aod/ajax/ref=aod_page_2?asin=B074DDJFTH belongs to an air compressor, not a product I’ve ever looked at but one that Keepa is apparently interested in. The extension will now attempt to extract data from this page despite me not navigating to it. Because the isAjax flag is set here, this address is loaded via XMLHttpRequest, after which the response text is put into a frame of the extension’s background page. If the isAjax flag weren’t set, this page would be loaded directly into another frame.

The scrapeFilters key sets the rules to be used for analyzing the page. This will extract ratings, prices, availability and any other information via CSS selectors and regular expressions. Here Keepa is also interested in the seller’s name; elsewhere it looks for shipping information and security tokens. There is also functionality here to read out the contents of the Amazon cart, but I didn’t look too closely at that.

The l key is also interesting. It tells the extension’s background page to call a particular method with the given parameters; here the chrome.webRequest.onBeforeSendHeaders.addListener method is being called. The index key determines which of the predefined listeners should be used. The purpose of the predefined listeners seems to be removing some security headers as well as making sure headers like Cookie are set correctly.
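The dispatch mechanism this describes can be pictured with a short sketch (my reconstruction, not Keepa’s actual code): walk the path against the global object, keep the receiver for the final call, and pass the server-supplied arguments through.

function dispatch(instruction, predefinedListeners) {
  // Resolve e.g. ["chrome", "webRequest", "onBeforeSendHeaders", "addListener"]
  // to a callable, remembering the receiver so the call is correctly bound.
  let receiver = globalThis;
  let method = globalThis;
  for (const name of instruction.path) {
    receiver = method;
    method = receiver[name];
  }
  // index picks one of the extension's predefined listener functions;
  // a and b are passed along as extra arguments from the server.
  method.call(receiver, predefinedListeners[instruction.index], instruction.a, instruction.b);
}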

The server’s effective privileges

Let’s take a closer look at the privileges granted to the Keepa server here; these aren’t entirely obvious. Loading pages in the background isn’t meant to happen within the user’s usual session; there is some special cookie handling meant to produce a separate session for scraping only. This doesn’t appear to always work reliably, and I am fairly certain that the server can make pages load in the usual Amazon session, rendering it capable of impersonating the user towards Amazon. As the server can also extract arbitrary data, it is for example entirely possible to add a shipping address to the user’s Amazon account and to place an order that will be shipped there.

The l key is also worth a second look. At first the impact here seems limited by the fact that the first parameter will always be a function, one out of a few possible functions. But the server could use that functionality to call eval.call(function(){}, "alert(1)") in the context of the extension’s background page and execute arbitrary JavaScript code. Luckily, this call doesn’t succeed thanks to the extension’s default Content Security Policy.

But there are more possible calls, and some of these succeed. For example, the server could tell the extension to call chrome.tabs.executeScript.call(function(){}, {code: "alert(1)"}). This will execute arbitrary JavaScript code in the current tab if the extension has access to it (meaning any Amazon website). It would also be possible to specify a tab identifier in order to inject JavaScript into background tabs: chrome.tabs.executeScript.call(function(){}, 12, {code: "alert(1)"}). For this the server doesn’t need to know which tabs are open: tab identifiers are sequential, so it’s possible to find valid tab identifiers simply by trying out potential candidates.

Privacy policy

Certainly, a browser extension collecting all this data will have a privacy policy to explain how this data is used? Here is the privacy policy of the German-based Keepa GmbH in full:

You can use all of our services without providing any personal information. However, if you do so we will not sell or trade your personal information under any circumstance. Setting up a tracking request on our site implies that you’d like us to contact you via the contact information you provided us. We will do our best to only do so if useful and necessary - we hate spam as much as you do. If you login/register using Social-Login or OpenID we will only save the username and/or email address of the provided data. Should you choose to subscribe to one of our fee-based subscriptions we will share your email and billing address with the chosen payment provider - solely for the purpose of payment related communication and authentication. You can delete all your information by deleting your account through the settings.

This doesn’t sound right. Despite being linked under “Privacy practices” in the Chrome Web Store, it appears to apply only to the Keepa website, not to any of the extension functionality. The privacy policy on the Mozilla Add-ons site is more specific despite also being remarkably short (formatting of the original preserved):

You can use this add-on without providing any personal information. If you do opt to share contact information, we will only use it to provide you updates relevant to your tracking requests. Under no circumstances will your personal information be made available to a third party. This add-on does not collect any personal data beyond the contact information provided by you.

Whenever you visit an Amazon product page the ASIN (Amazon Standard Identification Number) of that product is used to load its price history graph from Keepa.com. We do not log such requests.

The extension creates required functional cookies containing a session and your settings on Keepa.com, which is required for session management (storing settings and accessing your Keepa.com account, if you create one). No other (tracking, advertising) cookies are created.

This refers to some pieces of the Keepa functionality but once again completely omits the data collection outlined here. It’s reassuring to know that they don’t log product identifiers when showing product history, but they don’t need to when their extension sends far more detailed data to the server on another channel. This makes the first sentence, formatted as bold text, a clear lie. Unless of course you don’t consider the information collected here personal. I’m not a lawyer; maybe in the legal sense it isn’t.

I’m fairly certain however that this privacy policy doesn’t meet the legal requirements of the GDPR. To be compliant it would need to mention the data being collected, explain the legal grounds for doing so, how it is being used, how long it is being kept and who it is shared with.

That said, this isn’t the only regulation violated by Keepa. As a German company, they are obliged to publish a legal note (in German: Impressum) on their website so that visitors can immediately recognize the party responsible. Keepa hides both this information and the privacy policy in a submenu (one has to click “Information” first) under the misleading name “Disclaimer.” The legal requirements are for both pages to be reachable with one click, and the link title needs to be unambiguous.

Conclusions

Keepa extension is equipped to collect any information about your Amazon visits. Currently it will collect information about the products you look at and the ones you search for, all that tied to a unique and persistent user identifier. Even without you choosing to register on the Keepa website, there is considerable potential for the collected data to be deanonymized.

Some sloppy programming had the (likely unintended) consequence of making the server even more powerful, essentially granting it full control over any Amazon page you visit. Luckily, the extension’s privileges don’t give it access to any websites beyond Amazon.

The company behind the extension fails to comply with its legal obligations. The privacy policy is misleading in claiming that no personal data is being collected. It fails to explain how the data is being used and who it is shared with. There are certainly companies interested in buying detailed online shopping profiles, and a usable privacy policy needs to at least exclude the possibility of the data being sold.

Categorieën: Mozilla-nl planet

Cameron Kaiser: And now for something completely different: "Upgrading" your Quad G5 LCS

Mozilla planet - za, 31/07/2021 - 06:30
One of the most consistently popular old posts on this blog is our discussion on long-life computing and how to extend the working, arguably even useful, life of your Power Mac. However, what I think gives it particular continued traction is it has a section on how to swap out the liquid cooling system of the Quad G5, obviously the most powerful Power Macintosh ever made and one of the only two G5 systems I believe worth using (the other being the dual-processor 2.3GHz, as it is aircooled). LCSes are finicky beasts under the best of conditions and certain liquid-cooled models of the G5 line have notoriously bad reputations for leakage. My parents' dual 2.5GHz, for example, succumbed to a leak and it ended up being a rather ugly postmortem.

The Quad G5 is one of the better ones in this regard and most of the ones that would have suffered early deaths already have, but it still requires service due to evaporative losses and sediment, and any Quad on its original processors is by now almost certainly a windtunnel under load. An ailing LCS, even an intact one, runs the real risk of an unexpected shutdown if the CPU it can no longer cool effectively ends up exceeding its internal thermal limits; you'll see a red OVERTEMP light illuminate on the logic board when this is imminent, followed by a CHECKSTOP. Like an automotive radiator it is possible to open the LCS up and flush the coolant (and potentially service the pumps), but this is not a trivial process. Additionally, those instructions are for the single-pump Delphi version 1 assembly, which is the more reliable of the two; the less reliable double-pump Cooligy version 2 assemblies are even harder to work on.

Unfortunately our current employment situation requires I downsize, so I've been starting on consolidating or finding homes for excess spare systems. I had several spare Quad G5 systems in storage in various states, all version 2 Cooligy LCSes, but the only LCS assemblies I have in stock (and the LCS in my original Quad G5) are version 1. These LCSes were bought Apple Certified Refurbished, so they were known to be in good condition and ready to go; as the spare Quads were all on their original marginal LCSes and processors, I figured I would simply "upgrade" the best-condition v2 G5 with a v1 assembly. The G5 service manual doesn't say anything about this, though it has nothing in it indicating that they aren't interchangeable, or that they need different logic boards or ROMs, and now having done it I can attest that it "just works." So here's a few things to watch out for.

Both the v1 and the v2 assemblies have multiple sets of screws: four "captive" (not really) float plate screws, six processor mount screws, four terminal assembly screws (all of which require a 3mm flathead hex driver), and four captive ballheads (4mm ballhead hex). Here's the v1, again:

And here's the v2. Compare and contrast. The float plate screws differ between the two versions, and despite the manual calling them "captive" can be inadvertently removed. If your replacement v1 doesn't have float plate screws in it, as mine didn't, the system will not boot unless they are installed (along with the terminal assembly screws, which are integral portions of the CPU power connections). I had to steal them from a dead G5 core module that I fortunately happen to have kept.

Once installed, the grey inlet frame used in the v2 doesn't grip the v1:

The frame is not a necessary part. You can leave it out as the front fan module and clear deflector are sufficient to direct airflow. However, if you have a spare v1 inlet frame, you can install that; the mounting is the same.

The fan and pump connector cable is also the same between v1 and v2, though you may need to move the cable around a bit to get the halves to connect if it was in a wacky location.

Now run thermal calibration, and enjoy your renewed Apple PowerPC tank.

Categorieën: Mozilla-nl planet

Firefox Add-on Reviews: Supercharge your productivity with a browser extension

Mozilla planet - vr, 30/07/2021 - 20:05

With more work and education happening online (and at home) you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right browser extension can give you an edge in the art of efficiency. 

I need help saving and organizing a lot of web content

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research. 

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension’s toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams. 

On your Gyazo homepage you can easily browse and sort everything you’ve clipped; and organize everything into shareable topics or collections.

<figcaption>With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.</figcaption>

Evernote Web Clipper

Similar to Gyazo, Evernote Web Clipper offers a kindred feature set—clip, save, and share web content—albeit with some nice user interface distinctions. 

Evernote places emphasis on making it easy to annotate images and articles for collaborative purposes. It also has a strong internal search feature, allowing you to search for specific words or phrases that might appear across scattered groupings of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages. 

Focus! Focus! Focus!

Anti-distraction extensions can be a major boon for online workers and students… 

Block Site 

Do you struggle avoiding certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits. 

Just list the websites you want to avoid for specified periods of time (certain hours of the day or some days entirely, etc.) and Block Site won’t let you access them until you’re out of the focus zone. There’s also a fun redirection feature where you’re automatically redirected to a more productive website anytime you try to visit a time waster. 

<figcaption>Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site.</figcaption>

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features. 

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities—like blocking just portions of websites (e.g. you can’t access the YouTube homepage but you can see video pages), setting restrictions on predetermined days (e.g. no Twitter on weekends), or delaying access to certain websites by 60 seconds to give you time to reconsider that potentially productivity-killing decision. 

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals. 

The premise is simple: it assumes everyone’s productive attention span is limited, so break up your work into manageable “tomato” chunks. Let’s say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it’s break time (which is also time customizable). It’s a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Tranquility Reader

Imagine a world wide web where everything but the words is stripped away—no more distracting images, ads, tempting links to related stories, nothing—just the words you’re there to read. That’s Tranquility Reader. 

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later reading, customizable font size and colors, add annotations to saved pages, and more. 

We hope some of these great extensions will give your productivity a serious boost! Fact is there are a vast number of extensions out there that could possibly help your productivity—everything from ways to organize tons of open tabs to translation tools to bookmark managers and more. 

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: New tagging feature for add-ons on AMO

Mozilla planet - do, 29/07/2021 - 18:00

There are multiple ways to find great add-ons on addons.mozilla.org (AMO). You can browse the content featured on the homepage, use the top navigation to drill down into add-on types and categories, or search for specific add-ons or functionality. Now, we’re adding another layer of classification and opportunities for discovery by bringing back a feature called tags.

We introduced tagging long ago, but ended up discontinuing it because the way we implemented it wasn’t as useful as we thought. Part of the problem was that it was too open-ended, and anyone could tag any add-on however they wanted. This led to spamming, over-tagging, and general inconsistencies that made it hard for users to get helpful results.

Now we’re bringing tags back, but in a different form. Instead of free-form tags, we’ll provide a set of predefined tags that developers can pick from. We’re starting with a small set of tags based on what we’ve noticed users looking for, so it’s possible many add-ons don’t match any of them. We will expand the list of tags if this feature performs well.

The tags will be displayed on the listing page of the add-on. We also plan to display tagged add-ons in the AMO homepage.

Example of a tag shelf in the AMO homepage

We’re only just starting to roll this feature out, so we might be making some changes to it as we learn more about how it’s used. For now, add-on developers should visit the Developer Hub and set any relevant tags for their add-ons. Any tags that had been set prior to July 22, 2021 were removed when the feature was retooled.

The post New tagging feature for add-ons on AMO appeared first on Mozilla Add-ons Community Blog.

Categorieën: Mozilla-nl planet
