Mozilla Nederland: the Dutch Mozilla community

The Firefox Frontier: Data detox: Four things you can do today to protect your computer

Mozilla planet - do, 13/02/2020 - 17:00

From the abacus to the iPad, computers have been a part of the human experience for longer than we think. So much so that we forget the vast amounts of … Read more

The post Data detox: Four things you can do today to protect your computer appeared first on The Firefox Frontier.

Categories: Mozilla-nl planet

Daniel Stenberg: curl is 8000 days old

Mozilla planet - do, 13/02/2020 - 07:46

Another pointless number that happens to be round and look nice so I feel a need to highlight it.

When curl was born WiFi didn’t exist yet. Smartphones and tablets weren’t invented. Other things that didn’t exist include YouTube, Facebook, Twitter, Instagram, Firefox, Chrome, Spotify, Google search, Wikipedia, Windows 98 or emojis.

curl was born in a different time, but also in the beginning of the explosion of the web and Internet Protocols. Just before the big growth wave.

In 1996, when I started working on the precursor to curl, there were around 250,000 web sites (sources vary slightly).

In 1998, when curl shipped, the number of sites was already around 2,400,000: roughly ten times as many in just those two years.

In early 2020, the number of web sites is around 1,700,000,000 to 2,000,000,000 (depending on who provides the stats). The number of web sites has thus grown at least 70,000% over curl’s 8000 days of life, and perhaps as much as 8,000 times what it was when I first started working with HTTP clients.

One of the oldest still available snapshots of the curl web site is from the end of 1998, when curl was just a little over 6 months old. On that page we can read the following:

That “massive popularity” looks charming and possibly a bit naive today. The number of monthly curl downloads has also possibly grown by 8,000 times or so – estimates only, as most users download curl from places other than our web site these days. Even more users get it installed as part of their OS or bundled with something else.

Thank you for flying curl.

(This day occurs only a little over a month before curl turns 22, there will be much more navel-gazing then, I promise.)

Image by Annie Spratt from Pixabay


The Firefox Frontier: What watching “You” on Netflix taught us about privacy

Mozilla planet - wo, 12/02/2020 - 19:26

We’re not sure if we can consider “You” a guilty pleasure considering how many people have binged every episode (over 43 million), but it certainly ranks up there right next … Read more

The post What watching “You” on Netflix taught us about privacy appeared first on The Firefox Frontier.


Henri Sivonen: IME Smoke Testing

Mozilla planet - wo, 12/02/2020 - 14:08

In early 2019, I found myself in a situation where I needed to check that I hadn’t broken IME integration code. Later in 2019, I needed to do it again, and now I’m testing this again in 2020, so I’m writing this down.

This is “Did I break things?” smoke testing advice for software developers who don’t themselves use an IME daily or for IME users who need to also test other IMEs that they don’t themselves use regularly. This is not a guide to building things with IME APIs. Also, obviously, writing one word with each IME isn’t the same level of testing as actually writing a lot of text using an IME daily. Once you’ve checked that things aren’t totally broken, you should get your code tested by actual daily users of various IMEs.

What’s an IME?

IME stands for Input Method Editor, which is an old Windows term. However, these days IME is colloquially used regardless of operating system. An IME is a piece of software that transforms user-generated input events (mostly keyboard events, but some IMEs allow some auxiliary pointing device interaction) into text in a manner more complex than a mere keyboard layout. Basically, if the relationship between the keys that a user presses on a hardware keyboard and the text that ends up in an application’s text buffer is more complex than when writing French, an IME is in use.

Notably, this is a matter of complexity of the mapping from input events into text in memory. This is not a matter of complexity of the mapping from memory to display. In particular, the mapping from keys to Unicode scalar values for e.g. Arabic is less complex than for French even though the display is more complex.

What’s a Keyboard Layout Then?

The above definition is incomplete without defining the capabilities of a keyboard layout, so this digression to keyboard layouts seems necessary for completeness. On the basic level, a keyboard layout provides a mapping from key codes to Unicode scalar values with modifier keys like shift and alt gr (option on Mac) taken into account.

On the basic level, if the modifiers shift and alt gr (option on Mac) are allowed (whether more modifiers are technically possible is outside the scope of this article), the non-modifier keys get four possible mappings to Unicode scalar values: no modifier, with shift, with alt gr/option, and with both shift and alt gr/option. On Windows (since XP) and Gtk, the mapping indeed is to Unicode scalar values rather than to UTF-16 code units. On macOS, key strokes can generate Unicode strings, but those strings are typically single-character strings.

The previous sentences were qualified by “on the basic level”. There is an added complication: dead keys. A dead key represents an accent such that pressing the key does not yet produce the accent, but the next key press produces an accented letter. Note that this is not just about swapping the key press order of a base character and a combining accent. By convention, the output is a single precomposed Unicode scalar value as opposed to the output being the base character followed by a combining accent.

For example, to type ô on a French AZERTY keyboard, you first press the key whose US QWERTY keycap says [, which is a dead key for circumflex in French AZERTY, and then you press the key o (the same key as on US QWERTY). The output after pressing the second key is U+00F4 LATIN SMALL LETTER O WITH CIRCUMFLEX (i.e. precomposed in Normalization Form C).
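The precomposition convention can be checked in code. This is a sketch using Python’s standard unicodedata module (my own illustration, not anything the dead-key machinery actually runs): normalizing the decomposed base-plus-accent sequence to Normalization Form C yields the same single scalar value that the dead-key sequence produces.

```python
import unicodedata

# What the French AZERTY dead-key sequence produces: one precomposed scalar.
dead_key_output = "\u00f4"  # LATIN SMALL LETTER O WITH CIRCUMFLEX

# The logically equivalent decomposed sequence: base letter + combining accent.
decomposed = "o\u0302"  # o + COMBINING CIRCUMFLEX ACCENT

# Normalization Form C maps the decomposed sequence to the precomposed character.
assert unicodedata.normalize("NFC", decomposed) == dead_key_output
```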

For historical reasons, Win32 and Gtk don’t treat the dead key mechanism as an IME even though logically it’s a tiny IME. On macOS, however, dead keys act like a tiny IME that’s driven by the declarative keyboard layout data as opposed to being bespoke for each language whose keyboard layouts use dead keys.

Keyboard layouts are a sufficient abstraction for most scripts, including ones whose display is considered complex.

How to Activate Them

On Windows, macOS, and Fedora, IMEs are activated just like keyboard layouts. Fedora and macOS install the IMEs by default, and you add an IME to the input source selector the way you add a keyboard layout.

On macOS, this is done in the “Input Sources” tab of the “Keyboard” pane of “System Preferences”. There is no distinction in the selector between keyboard layouts and IMEs. All the ones with color icons are keyboard layouts, but the ones with grayscale icons can be keyboard layouts or IMEs.

On Fedora, this is done in the “Input Sources” section of the “Region & Language” pane of “Settings”. IMEs are distinguished from keyboard layouts by a two-gear icon to the right of their name.

On Ubuntu, the selector is otherwise the same as on Fedora, but IMEs don’t show up unless the corresponding language has been added first via the “Manage Installed Languages” button that is below the “Input Sources” section. Note that adding a language can install a set of fonts that are relevant to the language, so the combination of languages you have added is fingerprintable from the Web. Note that thanks to a Gnome bug, some input sources may be hidden unless a related glibc locale has been generated (even if not taken into use) on the system, so you should run sudo locale-gen zh_TW.UTF-8 first. (You can find this and other details on PinyinJoe.com.)

On Windows 10, both IMEs and keyboard layouts become available via adding a language to the system. This is done via the Add a language button in the “Region & Language” pane of “Windows Settings”. In the process, uncheck the box that offers to set the newly-added language as your Windows display language unless you want the UI language to change! As in the case of Ubuntu, adding a language to Windows may install additional language-relevant fonts, which makes the combination of languages you’ve added fingerprintable from the Web.

Beyond Fedora and Ubuntu (and, presumably, distros based on them), chances are that you are going to have a bad time with other distros. I timed out trying to figure out how to enable any IME on a U.S. English openSUSE installation. However, installing openSUSE in Japanese enabled a Japanese IBus-based IME, so I don’t expect openSUSE to provide any testing insight beyond Fedora and Ubuntu. Debian (Debian 9 at least) leaves setting up an IME as an exercise to the user even if the user installs the system selecting an IME-requiring language!

Firefox telemetry shows that most Linux IME users use the IBus IME framework (all Nightly Linux IME users use IBus), but some use fcitx. I timed out trying to figure out how one would end up with an fcitx configuration by default, so I didn’t test fcitx. Later, I read that at least at some point Ubuntu Kylin defaulted to fcitx, but I didn’t verify this. (It seems to me that at this point, avoiding IBus is like avoiding PulseAudio and systemd.)

Note that Windows won’t let you add more than one Traditional Chinese regional variant and one Simplified Chinese regional variant at a time. I don’t know what effect the region has in practice beyond which IME is offered as the default. Notably, choosing Hong Kong doesn’t reveal a Jyutping (phonetic Cantonese) IME. To enable non-default IMEs for Traditional Chinese and Simplified Chinese, click the language in the “Region & Language” pane, click “Options”, and click the “Add a keyboard” button.

Windows 10 simplified the input source selection UI. Especially if you install IMEs to Windows by other means or if you want to trigger Hanja conversion by mouse, you need to enable the legacy Language bar feature to get the old more complex input source selection UI. To enable the old Language bar, go to the “Typing” subpane of the “Devices” pane in “Windows Settings”, click “Advanced keyboard settings” and check the box “Use the desktop language bar when it’s available”.

Note that even though on all systems you switch between keyboard layouts and IMEs the same way (Gnome and macOS have a menu in the top right area of the menu bar and Windows in the bottom right area of the task bar), typically the CJK IMEs internally have a mode switch between a mode called either “English” or “Direct Input” (acting like a QWERTY keyboard layout) and the language that the IME is for. The “English” / “Direct Input” mode is typically denoted either by “A” or “英” in the UI, so that’s not the mode you want, but that’s the piece of UI to look for and click. The Chinese mode is typically denoted as “中”, the main (Hiragana plus Kanji) Japanese mode as “あ”, and the Korean mode as “한”. (In the context of the Microsoft Korean IME, the “漢” button is for Hanja conversion, so that’s not the button needed for this step.)

IMEs

As far as I can tell, IMEs fall into three categories: ones that address a character repertoire that is too large to fit into the keyboard layout abstraction, ones that move display-time complexity to input time, and ones that are a matter of preference.

Moving Display-Time Complexity to Input Time

Hangul, the Korean script, has alphabet/syllabary duality. Logically, the script consists of alphabetic letters called jamo. However, the letters of each syllable are grouped into a block that occupies a square the size of a Chinese character.

The jamo level of Hangul fits into a keyboard layout. The possible syllables don’t. For modern Hangul, given a word (separated by spaces in present-day Korean) consisting of a valid jamo sequence, the grouping into syllables is unambiguous.

If one considers how e.g. Brahmic scripts are rendered using contextual rendering-time glyph selection (shaping), one might expect a similar mechanism to work for modern Hangul. However, due to the relative timelines of IME and shaping technology development as well as wishing to fit archaic Hangul into the same general approach as modern Hangul in text storage, the syllable grouping is made explicit in text storage.

It is then the job of a Hangul IME to group jamo produced by a keystroke per alphabetic unit into explicitly-stored syllables. Since the grouping is unambiguous for modern Hangul, there’s no need for UI for explicitly guiding the grouping.

Testing Hangul

Let’s try typing 서울 (Seoul; this word has a syllable that ends in a vowel sound and another syllable that starts with a vowel sound, which is interesting as we’ll see below).

To figure out which key corresponds to which jamo, refer to a picture of the most common layout (known as 두벌식 / Dubeolsik / 2-Bulsik / 2-set Korean) on Wikimedia Commons.

US QWERTY keys typed (jamo keys typed) → output:

t (ㅅ) → ㅅ. Not yet a valid syllable.
tj (ㅅㅓ) → 서. First syllable is complete. Or is it?
tjd (ㅅㅓㅇ) → 성. The first jamo of the second syllable is a plausible third jamo for the first syllable but isn’t a valid syllable alone.
tjdn (ㅅㅓㅇㅜ) → 서우. The fourth jamo isn’t valid unless the third jamo becomes the first jamo of the second syllable.
tjdnf (ㅅㅓㅇㅜㄹ) → 서울. The second syllable is complete.

Each modern syllable starts with one consonant followed by one vowel, optionally followed by one or two consonants. Here ㅇ as the first consonant is a silent placeholder for when there is phonetically no leading consonant, and jamo that are visually (not phonetically) double consonants (e.g. ㅆ is visually a double form of ㅅ, typed by pressing shift while pressing ㅅ) are analyzed as a single consonant for the purpose of the rule, keystrokes, and Unicode. As seen above, after entering one consonant and one vowel, the syllable is both plausibly complete and incomplete. A following consonant can become the third jamo in the syllable. However, if a vowel follows that consonant, the two have to form a new syllable.
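For modern Hangul, the grouping is not only unambiguous but mechanically simple: a precomposed syllable is a single arithmetic function of its jamo indices. This Python sketch (an illustration of the Unicode Hangul syllable composition formula, not any IME’s actual code) composes the two syllables of 서울 from the jamo typed above.

```python
# Unicode Hangul syllable composition: a precomposed syllable is
#   U+AC00 + (lead_index * 21 + vowel_index) * 28 + tail_index
# with the index orders defined by the Unicode Hangul jamo tables.
LEADS = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
TAILS = "ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ"  # tail_index 0 = no tail

def compose(lead, vowel, tail=None):
    """Compose one modern Hangul syllable from compatibility jamo."""
    tail_index = 0 if tail is None else TAILS.index(tail) + 1
    return chr(0xAC00 + (LEADS.index(lead) * 21 + VOWELS.index(vowel)) * 28 + tail_index)

assert compose("ㅅ", "ㅓ") == "서"        # keystrokes tj
assert compose("ㅇ", "ㅜ", "ㄹ") == "울"  # keystrokes dnf
```

Since the formula is injective, a Hangul IME never needs a candidate menu for modern syllables; it only needs the rules from the paragraph above to decide when a consonant belongs to the current syllable or starts the next one.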

Microsoft also ships an Old Hangul IME for writing archaic Hangul. I’m not covering it here.

More Characters than Fit on a Keyboard

The issue that there are more characters than fit on a keyboard arises with the Han, Yi, and Ge’ez scripts.

Testing Ge’ez

Microsoft bundles Ge’ez-script IMEs for Amharic and Tigrinya. Fedora and Ubuntu appear to ship only an Amharic IME, but a Web search suggests it might be suitable for also writing Tigrinya. macOS does not appear to come with Ge’ez text input support, but e.g. SIL has a third-party IME (which I didn’t test) that supports macOS and that supports more Ge’ez-script languages than the Windows and Fedora-bundled IMEs.

The Ge’ez script encodes a consonant, which may be a glottal stop, and a vowel as one character such that the dominant shape of the character denotes the consonant and the shape then acquires smaller changes to denote the vowel. The consonants fit on a keyboard and so do the vowels. They map roughly to QWERTY keys with similar Latin-script phonetic values. To type a character, you either type a vowel, which generates a character with a glottal stop as the consonant, or you type the consonant and, if the default vowel isn’t the right one, also a vowel.

Let’s try typing አማርኛ, the name of the Amharic language in the language itself. Wikipedia says the romanization is Amarəñña. Then it’s easy to guess the keys.

US QWERTY keys typed → output:

a → አ
am → አም
ama → አማ
amar → አማር
amarN → አማርኝ
amarNa → አማርኛ

This particular example works the same on Gnome and Windows, but there appear to be some differences for some of the key mappings.

So how is this different from Hangul? In both cases, you type alphabetic keystrokes and get characters in the text buffer that group those keystrokes. In Hangul, the jamo have standalone notation, the jamo appear graphically identifiably in the clusters, and the clusters can be decomposed into their component jamo within Unicode. The Ge’ez characters do not decompose in Unicode or graphically even though they can be considered to decompose phonetically and in terms of keystrokes.
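The difference in decomposability is visible in the Unicode character data itself. A sketch with Python’s unicodedata module (my own illustration): a Hangul syllable canonically decomposes back into its jamo, while a Ge’ez character has no canonical decomposition at all.

```python
import unicodedata

# A modern Hangul syllable decomposes back into conjoining jamo under NFD:
# 서 (U+C11C) → choseong ᄉ (U+1109) + jungseong ᅥ (U+1165).
assert unicodedata.normalize("NFD", "\uc11c") == "\u1109\u1165"

# A Ge'ez character such as አ (U+12A0) has no canonical decomposition,
# even though it decomposes phonetically and in terms of keystrokes.
assert unicodedata.decomposition("\u12a0") == ""
```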

Also, compared to Hangul, it seems to me that Ge’ez could be handled by the dead key abstraction but gets an IME on Windows and Gnome in order to get visual feedback of the key that acts like the dead key, since those systems don’t provide visual feedback of dead keys. It seems to me that Amharic input could work as a keyboard layout on macOS.

Testing Yi

Windows and Fedora come with a Yi-script IME for the Nuosu language. It appears that Ubuntu and macOS don’t.

Let’s try typing ꆈꌠ.

To write Yi syllables, you type in the standard romanization, which is unambiguous. You can look this up in the documentation of SIL Keyman. Gnome doesn’t require you to press space between syllables. Windows and Keyman do, so on Gnome, you type nuosu and on Windows nuo su (trailing space).

Han-Script IMEs

Han-script IMEs split into two main categories: ones that are based on the shape of each character and ones that map phonetic notation to Han characters using a dictionary. In the Japanese and Korean contexts, Han input is of the latter type. In the Chinese contexts, the most common input methods are as follows:

              Shape      Phonetic
Traditional:  Cangjie    Bopomofo / Zhuyin
Simplified:   Wubi 86    Pinyin

These four, plus Quick / Sucheng, which is a simplified version of Cangjie, are the ones that Windows, Mac, and major Linux distros (and Android Gboard) have in common. There are also others, both bundled with a specific OS and available as third-party products. The phonetic methods may be configurable to flip the traditional/simplified output expectation relative to the table above. Check your IME settings if the phonetic method tests below give you simplified form when expecting traditional or vice versa.

For testing these, let’s use the word for a Han character, 漢字 in traditional form and 汉字 in simplified form, which contains a character that is obviously different in traditional and simplified forms and a character that does not have a separate simplified form.

Testing Cangjie

Cangjie (called ChangJie on Windows) assigns a unique key stroke sequence for each supported character. The sequence is based on assigning a radical (Cangjie-specific radical; not the same as KangXi radicals) to each key and decomposing the characters into radicals. On Windows, if you wish to input Hong Kong-specific characters or other characters that were not part of code page 950 (Big5 without the HKSCS extension), you need to check some boxes in the IME settings.

Let’s try typing 漢字.

You can use Wiktionary lookup for the individual characters to figure out the Cangjie keystrokes in terms of Cangjie key caps and QWERTY key caps (漢: 水廿中人 / etlo, 字: 十弓木 / jnd). macOS comes with a nice palette that you can also use to do these lookups (available from the input method selector menu when Cangjie is active). The space bar ends a character without producing a space in the output.

US QWERTY keys typed: etlo jnd (trailing space)
Cangjie keys typed: 水廿中人 十弓木 (trailing space)
Output: 漢字

Testing Quick / Sucheng

The Quick (as it’s called on Windows, Linux, and Android) or Sucheng (as it’s called on macOS) method involves typing the first and last keystroke of the Cangjie sequence for the desired character and then choosing from a menu.

US QWERTY keys typed: eo8jd2
Cangjie keys typed: 水人8十木2
Output: 漢字

The digits refer to the position of the candidate character in the popup menu and depend on the implementation, on your personalized frecency, and potentially on context. In my case, the first character was the eighth in the popup and the second one was the second in the popup.

Testing Bopomofo / Zhuyin

Bopomofo, also called Zhuyin, is a phonetic notation primarily for Mandarin whose characters are derived from Han characters. An IME of the same name (whether the name is Bopomofo or Zhuyin depends on the operating system) takes phonetic (in terms of Mandarin pronunciation) input as Bopomofo, which fits on a keyboard, and produces Han characters based on a dictionary lookup.

Let’s try typing 漢字 again.

This time, the Wiktionary lookup needs to be by the whole word. We find that the Bopomofo form is ㄏㄢˋ ㄗˋ. To figure out how these map to keys, let’s again look at a picture on Wikimedia Commons. A syllable ends with a tone or a space when there is no tone. Here ˋ is the tone, so we omit the space that Wiktionary includes after the first ˋ.

US QWERTY keys typed: c04y4 (return)
Bopomofo keys typed: ㄏㄢˋㄗˋ (return)
Output: 漢字

In this case, the dictionary lookup probably offers just one candidate, so we don’t need to use the down arrow key to choose a candidate and we can just commit the word using return.

Testing Wubi 86

Like Cangjie, Wubi 86 assigns a key stroke sequence to each supported character according to a radical-based decomposition, but Wubi limits the character sequence to up to four key strokes so that if the decomposition would result in more than four key strokes only the first three and the last one are used. Space ends composition for a given character.
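The length rule itself is trivial to state in code. Here is a toy Python sketch of just the truncation rule (the long key sequence below is hypothetical, not from a real Wubi table):

```python
def wubi_truncate(strokes: str) -> str:
    """Apply Wubi 86's length rule to a full radical-decomposition key sequence:
    at most four keystrokes; longer sequences keep the first three and the last."""
    if len(strokes) <= 4:
        return strokes
    return strokes[:3] + strokes[-1]

assert wubi_truncate("ic") == "ic"         # 汉 needs only two keys
assert wubi_truncate("abcdefg") == "abcg"  # hypothetical long decomposition
```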

Let’s try typing 汉字.

There’s a Web site with Wubi lookup tables, which you can search using DuckDuckGo by entering the character and site:wubi.free.fr as the search terms. The key sequence is in the title tooltip. From there, we learn: 汉: ic, 字: pb.

US QWERTY keys typed: ic pb (trailing space)
Output: 汉字

Testing Pinyin

Pinyin is a romanization system for Mandarin. In display form, it uses diacritics to indicate tone, but for IME use, you type without the diacritics.

Let’s try typing 汉字 again.

Wiktionary says that the pinyin form is hànzì.

US QWERTY keys typed: hanzi1
Output: 汉字

In this case, the number key is not a tone but the position of the choices offered, which depends on what you’ve written previously (your personalized frecency). In this case, the IME offered what I wanted as the first choice, so I pressed the number key 1.

Despite Microsoft already shipping a Pinyin IME with Windows, it appears that publishing a Pinyin IME for Windows is a thing that search engine companies operating in China do. I gather these compete with Microsoft on dictionary coverage.

Testing DaYi

DaYi is conceptually similar to Wubi but for Traditional Chinese. It is bundled with Windows. Visually, it looks neglected by Microsoft in the transition to Windows 10 and looks like it is likely based on the same framework as the Chinese Array IME. I recall seeing a glitch (I forgot what exactly) that was specific to these two IMEs, which along with the neglected appearance made me suspect that these two IMEs might exercise the IME APIs in a different way from the other Chinese IMEs. (I didn’t verify this suspicion using logging or a debugger.) For this reason, it seems prudent to test this IME on Windows.

There’s a list of the input codes on GitHub.

US QWERTY keys typed: xv mg (trailing space)
Output: 漢字

Testing Hanja Conversion

Korean desktop IMEs provide a way of converting a word written in Hangul into Hanja (Han-script characters in Korean context) using a dictionary. However, since the usage is rare, unlike with phonetic Chinese IMEs or Japanese IMEs, you need to explicitly invoke the conversion. If you take a look at present-day Korean text, chances are that you’ll find only Hangul, and if you occasionally find Han-script characters, they are either shorthand in newspaper headlines (e.g. 美 for the United States or 北 for North Korea) or restatements in parentheses. Google’s Gboard for Android doesn’t even have the Hanja conversion feature. On the other hand, the Korean IME on macOS can be configured to generate restatements in parentheses instead of just converting a word from Hangul to Hanja.

Invoking this feature varies by system. On Windows, the key that on a US QWERTY keyboard is the right ctrl key triggers Hangul to Hanja conversion, but on Gnome, you might have to press F9 instead, and on Mac option-return. On Windows and Mac, the lookup is on a per-word basis, so the cursor should be at the end of a word when invoking the conversion. On Gnome, the conversion is on a per-syllable basis.

Let’s try typing 漢字. Note that these are the same Unicode characters as in the Traditional Chinese case, but the second glyph looks different (the tiny stroke at the very top is vertical) in fonts meant for Korean or Japanese (below) compared to Chinese (Simplified or Traditional).

Wiktionary says the Hangul form is 한자, which, according to the previously-mentioned layout picture, is gkswk in terms of QWERTY keys.

On Gnome, you type ㅎㅏㄴ (gks), then press F9, then press arrow down until you find 漢, then press return, then type ㅈㅏ (wk), then press F9, and then press arrow down until you find 字, and then press return.

On Mac, you type ㅎㅏㄴㅈㅏ (gkswk), then press option-return, then press down arrow until you find 漢字, and then press return.

On Windows, you type ㅎㅏㄴㅈㅏ (gkswk), then press what on non-Korean keyboards is the ctrl key on the right side of the keyboard, then press down arrow until you find 漢字, and then press return.

An alternative way on Windows, which is worth testing, since it involves a unique UI gesture among IMEs (clicking some UI outside of the IME popup), is that, with the Windows legacy Language bar enabled, you type ㅎㅏㄴㅈㅏ (gkswk), then click the button labeled “漢” in the language bar, and then click 漢字 in the popup that shows up.

(Aside: Note that after typing ㅎㅏㄴㅈ but before typing the final ㅏ, the composition string shows 핝, a cluster that has two trailing consonants.)

Hiragana to Kanji Conversion

Japanese IMEs convert Hiragana text to Kanji (Han-script characters in Japanese context). For example, if you’ve written the Japanese reading for 漢字 as かんじ (U.S. QWERTY key strokes tyd[), the IME offers to convert it into 漢字 by dictionary lookup as was previously seen in the Bopomofo, Pinyin, and Hangul-to-Hanja cases. Hiragana fits into a keyboard layout and you can configure Japanese IMEs such that each keystroke produces a Hiragana base character or a voicing mark directly with voicing marks immediately combining with their bases. Here じ is one Unicode scalar value produced by two key strokes: し, QWERTY d, and ゛, U.S. QWERTY left square bracket. Note that in legacy half-width Katakana the voicing marks remain as distinct characters in the text buffer. However, a Hiragana keyboard layout is not the default or, as I understand it, the popular configuration. (It is, though, the way Apple’s Ainu IME works out of the box.) Which brings us to the next category of IMEs.
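As an aside, the relationship between し, the voicing mark, and じ described above can be checked with Unicode normalization. A Python sketch (my own illustration; the IME does this combining itself, so the application only ever sees the combined character), also showing the half-width Katakana case where the mark stays separate:

```python
import unicodedata

# し (U+3057) followed by the combining voiced sound mark (U+3099)
# composes to じ (U+3058) under Normalization Form C.
assert unicodedata.normalize("NFC", "\u3057\u3099") == "\u3058"

# The half-width Katakana voiced sound mark (U+FF9E) is an ordinary spacing
# character with no canonical composition, so シﾞ stays two characters
# in the text buffer even after NFC.
assert unicodedata.normalize("NFC", "\uff7c\uff9e") == "\uff7c\uff9e"
```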

IMEs as a Matter of Preference

There are cases where the keyboard layout abstraction is logically sufficient for a given script, but an IME is used nonetheless. The most notable ones are the step of getting from keystrokes to Hiragana in Japanese IMEs and writing Vietnamese using the Telex spelling. As I understand it, these are cases where the user community as a whole has a very strong preference towards IME rather than keyboard layout. Despite keyboard layouts being logically sufficient, IMEs exist for languages of India, but my understanding is that there isn’t a user community-wide IME preference.

Notably, as far as operating system vendors are concerned, there is a cut-off around Vietnamese. All vendors appear to agree that IMEs for Chinese (both traditional and simplified), Japanese, and Korean are the must-haves. Operating system vendors other than Microsoft also bundle a Vietnamese Telex IME. For languages used in India, what operating system vendors bundle does not appear to have a clear pattern or norm, and macOS lacks bundled Ge’ez and Yi IMEs. (There also appears to be a documentation cut-off such that it’s easy enough to find information in English about the CJK IMEs and the Vietnamese Telex IME and hard to find information in English about how to operate the rest.)

Testing Japanese

Despite Hiragana fitting into a keyboard layout in principle, the default configuration of Japanese IMEs is that in addition to the Hiragana to Kanji conversion layer (in the previous category), there is a Romaji to Hiragana conversion layer (in this category). Romaji means writing Japanese with Latin letters. As with Pinyin, the standards (of which there are multiple) use diacritical marks. IMEs, however, use ASCII-only notation. This gist was the best online reference I found, but the IME romanization is documented more fully in Ken Lunde’s book CJKV Information Processing (pages 304–306 in the second edition), and it appears to differ from the romanization standards in ways that are more than a matter of just omitting the diacritics from any one of the standards. On Ubuntu and Windows, the conversion table is viewable and editable in the settings of the Japanese IME.

Let’s try typing 漢字.

Wiktionary tells us that the romaji form is kanji.

US QWERTY keys typed → output:

k → k
ka → か
kan → かn
kanj → かんj
kanji → かんじ
kanji followed by a trailing space or return (depending on system, potentially after down arrows) → 漢字

Although Japanese IMEs have (on desktop) one dominant design, there are multiple implementations. Notably, despite using the same IME API (IBus), Ubuntu and Fedora ship different implementations, so it’s probably prudent to test both. Google ships a proprietary but gratis IME for Japanese for Windows. (As I understand it, the code is Open Source and shipped also on Ubuntu, but the dictionary is proprietary, and the dictionary is what distinguishes the IME from the one that Microsoft bundles with Windows.)

There’s a proprietary product called ATOK that appears to be popular. (As I understand it, the distinguishing feature of ATOK is that you can have cloud sync for your personal dictionary and word frecency across multiple computers and operating systems.) The ATOK code base is very old, and it shows in technically interesting ways. In particular, it uses some non-Unicode Windows APIs, and it has a character palette that isn’t a palette for Windows window-management purposes. This means that the character palette takes the place of the application window as the Windows active window, and the character palette ends up sending text input to an application window that isn’t active for window-management purposes, even though conceptually the active window is the window that receives text input. This can lead to interesting effects.

Japanese IMEs also support after-the-fact conversion actions similar to Hanja conversion in the Korean case. (Thanks to Masayuki Nakano for pointing these out to me.) Since these actions involve the IME querying the app for text that’s already there, these actions can expose bugs that normal composition operations don’t.

Immediately after committing a word, you can uncommit it by pressing ctrl-backspace (cmd-backspace on Mac).

More generally, you can request reconversion for the word around the text insertion point (i.e. the IME figures out what constitutes a word around the text insertion point) or for an explicit text selection. On Mac this appears to be triggered by both ctrl-shift-r and option-shift-r. I don’t know if there’s a subtle difference between the two. On Windows and Linux, this operation is by default bound to a key that doesn’t exist on non-Japanese keyboards, so to test with a non-Japanese keyboard you need to go into the IME preferences, look for a reconversion action bound to the “Henkan” key, and change the key binding to something else.

Testing Vietnamese

When Vietnamese tones are treated as separate key strokes, Vietnamese fits into a keyboard layout, and Vietnam indeed has a keyboard layout standard like this. The Vietnamese keyboard layout is unusual, however, in the sense that the stream of text that it produces is unnormalized in Unicode terms. Unlike in French, where accented characters for which there is no dedicated key are produced using dead keys by typing the accent first, Vietnamese tones are typed after the base character and this produces Unicode combining mark scalar values without IME post-processing in contrast to Hiragana voicing marks. However, some of the Vietnamese base characters are precomposed characters for Unicode purposes. Hence, the text stream produced by the standard keyboard layout is neither in Normalization Form C (because the tones are decomposed) nor in Normalization form D (because the bases are precomposed).
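A Python sketch makes this mixed state concrete: the exact string that the standard layout produces for Việt matches neither normalization form.

```python
import unicodedata

# What the standard Vietnamese keyboard layout produces for "Việt":
# precomposed ê (U+00EA) followed by a combining dot below (U+0323).
typed = "Vi\u00ea\u0323t"

nfc = unicodedata.normalize("NFC", typed)  # fully precomposed ệ (U+1EC7)
nfd = unicodedata.normalize("NFD", typed)  # e + dot below + circumflex

# The typed stream is neither NFC (tone is decomposed)
# nor NFD (base is precomposed).
assert typed != nfc and typed != nfd
assert nfc == "Vi\u1ec7t"
```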

An alternative, which I understand to be much more popular than the standard keyboard layout, is typing Vietnamese using Telex spelling and having an IME convert the ASCII-only Telex spelling to the official spelling (in Unicode Normalization Form C). This is analogous to if German was written such that the user typed two letters oe to have an IME convert them into ö.

As noted, Microsoft does not ship a Telex IME for Vietnamese and only ships the standard keyboard layout. For Windows, you need to download and install a third-party Telex IME called UniKey. (Be sure not to download it with malware added from sites other than the linked official site.) Unfortunately, UniKey does not integrate with the Windows system input method switcher and instead has its own on/off UI.

In contrast, Google’s Gboard for Android only has the Telex IME for Vietnamese and doesn’t ship the standard keyboard layout. Ubuntu, Fedora, and macOS ship both.

Let’s try typing Tiếng Việt (the name of the Vietnamese language in Vietnamese).

The rules are on Wikipedia.

US QWERTY keys typed | Output
Tieengs Vieetj | Tiếng Việt

As with modern Hangul, the rules are unambiguous, so there is no UI for explicitly guiding the transformation.

Testing Languages of India

As noted, languages used in India each fit into a keyboard layout (InScript is a family of federal government keyboard layouts, but Tamil Nadu also has a different state government standard for Tamil: Tamil 99) and there appears to be no established cross-platform practice of which languages also get an IME as an alternative. The way the IMEs work is that you type in some phonetic romanization. On Windows and Gnome, there appear to be strict rules as with Vietnamese Telex input, so that there’s no need for additional UI beyond the keystrokes themselves. I failed to figure out what those rules are, though! On Mac and Android Gboard, the behavior is closer to the way a Pinyin IME works: The input is approximate and you choose from options. It also seems to me that for a given system, the reverse Latin transliteration IMEs for the languages of India are all built on the same framework, so it seems that it’s sufficient to test one per system. I tested Bengali on Windows, Hindi on macOS, and Tamil on Fedora.

For Windows, the IMEs don’t install the same way as CJK IMEs. Rather, you need to download and install them separately. The downloads for Windows 8 in the Indic Input 3 section on Microsoft’s site work on Windows 10. However, you need to have the legacy Language bar enabled to switch to the IMEs. To test Bengali, I added the language Bangla (India) to Windows the usual way and then installed Bengali Indic Input 3. Subsequently, I was able to switch to the IME using the Language bar.

For Tamil and Bengali, I failed to figure out how to type the name of the language itself, but it turns out that the word for “father” in both languages was easy enough to figure out how to type.

Language | System | Input | Output
Bengali | Windows | pitaa | পিতা
Tamil | Fedora | appaa | அப்பா
Hindi | macOS | hindee, right arrow, right arrow, return | हिन्दी

Edge Cases

IMEs have the notion of a composition string. This is the part of the text that is rendered by the application but that still isn’t “done” and that the IME still expects subsequent keystrokes to change. Typically, the composition string is underlined, though for Hangul the convention appears to be highlighting the whole glyph of the current syllable. At some point, the composition gets committed. In addition to the primary commit action, a space keystroke, return keystroke, number key keystroke, or typing the vowel of the next Hangul syllable, the composition may need to be committed in response to the user clicking something that blurs the text field that had an uncommitted composition in it. Obviously, this is an opportunity for bugs.

Therefore, it’s worthwhile to test that the composition gets committed if the next action is something that makes use of the string, such as pressing the submit button on a form. With IMEs that have a popup, you should try this with the popup open. With Korean and Vietnamese IMEs, you should test syllables that could still gain a trailing consonant in Korean or a tone in Vietnamese.

Also, you should try opening menus from the menubar both by mouse and by keyboard with an unfinished composition open.

If the application supports arbitrary geometric transformations (scale, rotate, translate) of content, as a Web browser does, it’s a good idea to test that IMEs that show a popup end up showing the popup in the right place when the text insertion point is within transformed content.

Is Testing All These Really Necessary?

Is it really worthwhile to test all of these IMEs? Which ones exercise the IME APIs so similarly that they are mutually equivalent for purposes of testing API interaction?

I don’t know, and before trying them all, I couldn’t have guessed what I ended up seeing. In particular, the needs of Korean IMEs appear simpler than the needs of Japanese and Chinese IMEs, so I expected Korean IMEs to expose fewer bugs. Yet, I saw bugs that initially appeared Korean-specific (a couple in Firefox and one, on Ubuntu, in the IME). Part of this is due to Hanja conversion being an after-the-fact operation rather than part of writing the word initially. Later I learned that it was possible to reproduce a bug that looked Hanja conversion-specific by requesting undo or reconversion from Japanese IMEs. In retrospect, it seems to me that testing OS-bundled Japanese on each OS (including the undo and reconversion features), DaYi on Windows, and peripheral palette features of ATOK on Windows would have given enough coverage for my needs in 2019, but chances are your set of bugs is revealed by a different minimal set of IMEs. Testing the ATOK character palette is not something that one thinks of as a matter of superficially trying every IME: the ATOK character palette issue was reported by a user. Also, in the case of reconversion bugs, I was able to think of the required actions on my own in the Korean case but needed advice on what to try in the Japanese case.

Even though the list looks long, iterating through each of the system-bundled IMEs on Windows 10, macOS, and Ubuntu plus checking the IMEs on Fedora that differ from those provided by Ubuntu doesn’t really take that much time with the hints of what to try listed above, so I encourage you to go through the whole list. You could sink a lot of time into installing third-party IMEs though. Still, it’s probably a good idea to test at least UniKey for Vietnamese on Windows.

Categorieën: Mozilla-nl planet

Karl Dubost: Week notes - 2020 w06 - worklog - Finishing anonymous reporting

Mozilla planet - wo, 12/02/2020 - 07:00
Monday

I came back home yesterday at noon from the Berlin All Hands. Today will probably be tough because of jet lag. Almost all the Japanese travelers from Narita Airport to home were wearing a mask (as was I). The coronavirus is in the mind: "deaths at 361 and confirmed infections in China at 17,238".

Cleaning up emails. And let's restart coding for issue #3140 (PR #3167). Last week, I discussed with Mike whether I should rebase the messy commits so we have a cleaner version. On one hand, a rebase would create a clean history with commits by specific sections, but the history of my commits also documents the thought process. For now I think I will keep the "messy informative" commits.

Unit tests dependency

When unit tests pass locally but fail on CircleCI, it smells like a dependency on the environment. Indeed, here the list of labels returned behaved differently on CircleCI because locally, in my dev environment, I had a populated data/topsites.db. This issue would not have been detected if my local topsites.db had been empty like on CircleCI. A unit test that depends on external variability is bad. For now I decided to mock the priority in the test, and I opened an issue to create a more controlled environment.
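As a generic illustration of the fix (not the actual webcompat.com code; get_topsites and get_priority are hypothetical stand-ins), mocking pins the external data so the test behaves the same locally and on CI:

```python
import unittest
from unittest import mock

# Hypothetical stand-ins for the real code: get_priority consults an
# external topsites database, so its result varies with the environment.
def get_topsites():
    raise RuntimeError("would read data/topsites.db")

def get_priority(domain):
    return "priority" if domain in get_topsites() else "normal"

class TestPriority(unittest.TestCase):
    def test_priority_is_deterministic(self):
        # Pin the external data so the test behaves identically on a
        # developer machine and on CircleCI.
        with mock.patch(__name__ + ".get_topsites", return_value={"example.com"}):
            self.assertEqual(get_priority("example.com"), "priority")
            self.assertEqual(get_priority("other.net"), "normal")
```

With the lookup patched, the test no longer cares whether a local database happens to be populated.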

Tuesday

Finishing the big pull request for the new workflow. I usually don't like to create huge pull requests. I prefer a series of smaller ones, tied to specific issues. The circumstances pushed me to do that. But I think it's an interesting lesson: how much do we bend our guidelines and process rules in an emergency situation?

23:00 to midnight, we had a webcompat team video-meeting.

Wednesday

Code review today for Kate's code on fetching the labels for the repos. She used GraphQL, which is cool. Small pieces of code create opportunities to explore new ways of doing things.

I wonder if there is a client with a pythonic API for GraphQL, the same way that SQLAlchemy provides one for SQL.

I need to

Thursday

Big pull request… big bug. Big enough that it would create an error 500 on a certain path of the workflow. So more tests, and fixing the issue in a new pull request.

Restarting diagnosis too because the curve is going up. We are +50 above our minimum of January 2020. You can definitely help.

Friday

Such a big pull request obviously created more bugs. Anonymous reporting is activated with a flag, and our flag didn't work as expected. So that was strange.

environment variables are always strings

After investigating a bit: we were using an environment variable for the activation, with the value True or False set in the bash environment.

ANONYMOUS_REPORTING = True

and importing it in python with

ANONYMOUS_REPORTING = os.environ.get('ANONYMOUS_REPORTING') or False

then later on in the code it would be simple enough to do:

if ANONYMOUS_REPORTING: # do something clever here

Two mistakes here. Assuming that:

  1. True in bash will carry the same meaning as True once read in Python.
  2. True is a boolean. It is not (and never was): it is a string, which means that ANONYMOUS_REPORTING will always be True (in the Python boolean sense), because any non-empty string is truthy.
>>> not ''
True
>>> not 'foo'
False

So to make it more explicit, we switched to ON or OFF values

ANONYMOUS_REPORTING = ON
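One way to guard against this class of bug is to parse the variable explicitly. A minimal sketch (the env_flag helper is mine, not the actual webcompat.com code):

```python
import os

def env_flag(name, default=False):
    # Only an explicit, case-insensitive "ON" counts as true. An unset
    # variable returns the default, and any other string (including the
    # misleading "True" or "False") is treated as off.
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().upper() == "ON"

os.environ["ANONYMOUS_REPORTING"] = "ON"
print(env_flag("ANONYMOUS_REPORTING"))  # True
```

This makes the truthiness decision explicit instead of relying on Python's "non-empty string is true" behavior.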

In the process of doing this, we inadvertently ran (bad timing) into a regression, because some Flask modules had been adjusted to a different version of Werkzeug. So it is time for an upgrade.

So we are almost there but not yet.

By the end of this week, the epidemic has claimed close to 1,000 coronavirus deaths. This is getting really serious.

Otsukare!

Categorieën: Mozilla-nl planet

The Talospace Project: Firefox 73 on POWER

Mozilla planet - wo, 12/02/2020 - 06:18
... seems to just work. New in this release is better dev tools and additional CSS features. This release includes the fix for certain extensions that regressed in Fx71, and so far seems to be working fine on this Talos II. The debug and optimized mozconfigs I'm using are, as before, unchanged from Firefox 67.
Categorieën: Mozilla-nl planet

Mozilla Addons Blog: FAQ for extension support in new Firefox for Android

Mozilla planet - di, 11/02/2020 - 18:31
There are a lot of Firefox applications on the Google Play store. Which one is the new Firefox for Android?

The new Firefox for Android experience is currently available for early testing on the Firefox Preview Nightly and Firefox Preview production channels.

In February 2020, we will change which Firefox applications remain available in the Play store. Once we’ve completed this transition, Firefox Preview Nightly will no longer be available. New feature development will take place on what is currently Firefox Preview.

We encourage users who are eager to make use of extensions to stay on Firefox Preview. This will ensure you continue to receive updates while still being among the first to see new developments.

Which version supports add-ons?

Support for one extension, uBlock Origin, has been enabled for Firefox Preview Nightly. Every two weeks, the code for Firefox Preview Nightly gets migrated to the production release of Firefox Preview. Users of Firefox Preview should be able to install uBlock Origin by mid-February 2020.

We expect to start transferring the code from the production release of Firefox Preview to the Firefox for Android Beta channel during the week of February 17.

I’m using one of the supported channels but I haven’t been able to install an extension yet. Why?

We are rolling out the new Firefox for Android experience to our users in small increments to test for bugs and other unexpected surprises. Don’t worry — you should receive an update that will enable extension support soon!

Can I install extensions from addons.mozilla.org to Firefox for Android?

No, in the near term you will need to install extensions from the Add-ons Manager on the new Firefox for Android. For the time being, you will not be able to install extensions directly from addons.mozilla.org.

What add-ons are supported on the new Firefox for Android?

Currently, uBlock Origin is the only supported extension for the new Firefox for Android. We are working on building support for other extensions in our Recommended Extensions program.

Will more add-ons be supported in the future?

We want to ensure that the first add-ons supported in the new Firefox for Android provide an exceptional, secure mobile experience to our users. To this end, we are prioritizing Recommended Extensions that cover common mobile use cases and that are optimized for different screen sizes. For these reasons, it’s possible that not all the add-ons you have previously installed in Firefox for Android will be supported in the near future.

Will add-ons not part of the Recommended Extensions program ever be supported on the new Firefox for Android?

We would like to expand our support to other add-ons. At this time, we don’t have details on enabling support for extensions not part of the Recommended Extensions program in the new Firefox for Android. Please follow the Add-ons Blog for future updates.

What is GeckoView?

GeckoView is Mozilla’s mobile browser engine. It takes Gecko, the engine that powers the desktop version of Firefox, and packages it as a reusable Android library. Rebuilding our Firefox for Android browser with GeckoView means we can leverage our Firefox expertise in creating safe and robust online experiences for mobile.

What’s happening to add-ons during the migration?

Support for uBlock Origin will be migrated for users currently on Firefox Nightly, Firefox Beta, and Firefox Production. All other add-ons will be disabled for now.

The post FAQ for extension support in new Firefox for Android appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 73 is upon us

Mozilla planet - di, 11/02/2020 - 16:52

Another month, another new browser release! Today we’ve released Firefox 73, with useful additions that include CSS and JavaScript updates, and numerous DevTools improvements.

Read on for the highlights. To find the full list of additions, check out the following links:

Note: Until recently, this post mentioned the new form method requestSubmit() being enabled in Firefox 73. It has come to light that requestSubmit() is in fact currently behind a flag, and targeted for release in Firefox 75. Apologies for the error. (Updated Friday, 14 February.)

Web platform language features

Our latest Firefox offers a fair share of new web platform additions; let’s review the highlights now.

We’ve added to CSS logical properties, with overscroll-behavior-block and overscroll-behavior-inline.

These new properties provide a logical alternative to overscroll-behavior-x and overscroll-behavior-y, which allow you to control the browser’s behavior when the boundary of a scrolling area is reached.

The yearName and relatedYear fields are now available in the DateTimeFormat.prototype.formatToParts() method. This enables useful formatting options for CJK (Chinese, Japanese, Korean) calendars.

DevTools updates

There are several interesting DevTools updates in this release. Upcoming features can be previewed now in Firefox DevEdition.

We continually survey DevTools users for input, often from our @FirefoxDevTools Twitter account. Many useful updates come about as a result. For example, thanks to your feedback on one of those surveys, it is now possible to copy cleaner CSS snippets out of the Inspector’s Changes panel. The + and - signs in the output are no longer part of the copied text.

Solid & Fast

The DevTools engineering work for this release focused on pushing performance forward. We made the process of collecting fast-firing requests in the Network panel a lot more lightweight, which made the UI snappier. In the same vein, large source-mapped scripts now load much, much faster in the Debugger and cause less strain on the Console as well.

Loading the right sources in the Debugger is not straightforward when the DevTools are opened on a loaded page. In fact, modern browsers are too good at purging original files when they are parsed, rendered, or executed, and no longer needed. Firefox 73 makes script loading a lot more reliable and ensures you get the right file to debug.

Smarter Console

Console script authoring and logging gained some quality of life improvements. To date, CORS network errors have been shown as warnings, making them too easy to overlook when resources could not load. Now they are correctly reported as errors, not warnings, to give them the visibility they deserve.

Variables declared in the expression will now be included in the autocomplete. This change makes it easier to author longer snippets in the multi-line editor. Furthermore, the DevTools setting for auto-closing brackets is now working in the Console as well, bringing you closer to the experience of authoring in an IDE.

Did you know that console logs can be styled using backgrounds? For even more variety, you can add images, using data-uris. This feature is now working in Firefox, so don’t hesitate to get creative. For example, we tried this in one of our Fetch examples:

console.log('There has been a problem with your fetch operation: %c' + e.message, 'color: red; padding: 2px 2px 2px 20px; background: yellow 3px no-repeat url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAwAAAAMCAYAAABWdVznAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAApUlEQVQoz5WSwQ3DIBAE50wEEkWkABdBT+bhNqwoldBHJF58kzryIp+zgwiK5JX2w+2xdwugMMZ4IAIZeCszELX2hYhcgQIkEQnOOe+c8yISgAQU1Rw3F2BdlmWig56tQNmdIpA68Qbcu6akWrJat7gp27EDkCdgttY+uoaX8oBq5gsDiMgToNY6Kv+OZIzxfZT7SP+W3oZLj2JtHUaxnnu4s1/jA4NbNZ3AI9YEAAAAAElFTkSuQmCC);');

And got the following result:

styled console message with yellow highlighter effect

We’d like to thank Firefox DevTools contributor Edward Billington for the data-uri support!

We now show function arguments by default. We believe this makes logging JavaScript functions a bit more intuitive.

And finally for this section, when you perform a text or regex search in the Console, you can negate a search item by prefixing it with ‘-’ (i.e. return results not including this term).

WebSocket Inspector improvements

The WebSocket inspector that shipped in Firefox 71 now nicely prints WAMP-formatted messages (in JSON, MsgPack, and CBOR flavors).

a screencapture showing WAMP MessagePack in the WebSocket Inspector

You won’t needlessly wait for updates, as the Inspector now also indicates when a WebSocket connection is closed.

A big thanks to contributor Tobias Oberstein for implementing the WAMP support, and to saihemanth9019 for the WebSocket closed indicator!

New (power-)user features

We wanted to mention a couple of nice power user Preferences features dropping in Firefox 73.

First of all, the General tab in Preferences now has a Zoom tool. You can use this feature to set the magnification level applied to all pages you load. You can also specify whether all page contents should be enlarged, or only text. We know this is a hugely popular feature because of the number of extensions that offer this functionality. Selective zoom as a native feature is a huge boon to users.

The DNS over HTTPS control in the Network Settings tab includes a new provider option, NextDNS. Previously, Cloudflare was the only available option.

The post Firefox 73 is upon us appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Mozilla Mornings on the EU Digital Services Act: Making responsibility a reality

Mozilla planet - di, 11/02/2020 - 16:17

On 3 March, Mozilla will host the next installment of Mozilla Mornings – our regular breakfast series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

In 2020 Mozilla Mornings is adopting a thematic focus, starting with a three-part series on the upcoming Digital Services Act. This first event on 3 March will focus on how content regulation laws and norms are shifting from mere liability frameworks to more comprehensive responsibility ones, and our panelists will discuss how the DSA should fit within this trend.

Speakers
Prabhat Agarwal
Acting Head of Unit, E-Commerce and Platforms
European Commission, DG CNECT
Karen Melchior MEP
Renew Europe

Siada El-Ramly
Director-General, EDiMA

Owen Bennett
EU Internet Policy Manager, Mozilla

Moderated by Jennifer Baker
EU Tech Journalist

Logistical information

3 March 2020, 08:30-10:30
The Office cafe, Rue d’Arlon 80, Brussels 1040

Register your attendance here

The post Mozilla Mornings on the EU Digital Services Act: Making responsibility a reality appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 325

Mozilla planet - di, 11/02/2020 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community News & Blog Posts Crate of the Week

This week's crate is argh, a small opinionated argument parsing library for Rust.

Thanks to Vikrant for the suggestions!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

261 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs Tracking Issues & PRs New RFCs

No new RFCs were proposed this week.

Upcoming Events Asia Pacific Europe North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

This week we have two (related) quotes:

Even with just basic optimization, Rust was able to outperform the hyper hand-tuned Go version. This is a huge testament to how easy it is to write efficient programs with Rust compared to the deep dive we had to do with Go.

[..] After a bit of profiling and performance optimizations, we were able to beat Go on every single performance metric . Latency, CPU, and memory were all better in the Rust version.

Jesse Howarth on the Discord blog

The consistency angle really shouldn’t be overlooked. Performance is nice, but slow and consistent can still be planned for much more easily than inconsistent.

That was the big aha moment about Rust for me when I pushed out my first project using the language. Being nervous about it I had added way too much instrumentation so that I could know how every bit of it was responding to real traffic. But as soon as I started seeing the data, I was convinced that my instrumentation code was broken. The graphs I was seeing were just so...boring. Straight lines everywhere, no variation...after 24hrs, the slowest response (not P99...literally P100) was within 75ms of the fastest response.

/u/tablair commenting on /r/rust

Thanks to Jules Kerssemakers and Stephan Sokolow for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.

Categorieën: Mozilla-nl planet

Niko Matsakis: Async Interview #6: Eliza Weisman

Mozilla planet - di, 11/02/2020 - 06:00

Hello! For the latest async interview, I spoke with Eliza Weisman (hawkw, mycoliza on twitter). Eliza first came to my attention as the author of the tracing crate, which is a nifty crate for doing application level tracing. However, she is also a core maintainer of tokio, and she works at Buoyant on the linkerd system. linkerd is one of a small set of large applications that were build using 0.1 futures – i.e., before async-await. This range of experience gives Eliza an interesting “overview” perspective on async-await and Rust more generally.

Video

You can watch the video on YouTube. I’ve also embedded a copy here for your convenience:

The days before question mark

Since I didn’t know Eliza as well, we started out talking a bit about her background. She has been using Rust for 5 years, and I was amused by how she characterized the state of Rust when she got started: pre-“question mark” Rust. Indeed, the introduction of the ? operator does feel like one of those “turning points” in the history of Rust, and I’m quite sure that async-await will feel similarly (at least for some applications).

One interesting observation that Eliza made is that it feels like Rust has reached the point where there is nothing critically missing. This isn’t to say there aren’t things that need to be improved, but that the number of “rough edges” has dramatically decreased. I think this is true, and we should be proud of it – though we also shouldn’t relax too much. =) Getting to learn Rust is still a significant hurdle and there are still a number of things that are much harder than they need to be.

One interesting corollary of this is that a number of the things that most affect Eliza when writing Async I/O code are not specific to async I/O. Rather, they are more general features or requirements that apply to a lot of different things.

Tokio’s needs

We talked some about what tokio needs from async Rust. As Eliza said, many of the main points already came up in my conversation with Carl:

  • async functions in traits would be great, but they’re hard
  • stabilizing streams, async read, and async write would be great
Communicating stability

One thing we spent a fair while discussing is how to best communicate our stability story. This goes beyond “semver”. semver tells you when a breaking change has been made, of course, but it doesn’t tell you whether a breaking change will be made in the future – or how long we plan to do backports, and the like.

The easiest way for us to communicate stability is to move things to the std library. That is a clear signal that breaking changes will never be made.

But there is room for us to set “intermediate” levels of stability. One thing that might help is to make a public stability policy for crates like futures. For example, we could declare that the futures crate will maintain compatibility with the current Stream trait for the next year, or two years.

These kind of timelines would be helpful: for example, tokio plans to maintain a stable interface for the next 5 years, and so if they want to expose traits from the futures crate, they would want a guarantee that those traits would be supported during that period (and ideally that futures would not release a semver-incompatible version of those traits).

Depending on community crates

When we talk about interoperability, we are often talking about core traits like Future, Stream, and AsyncRead. But as we move up the stack, there are other things where having a defined standard could be really useful. My go-to example for this is the http crate, which defines a number of types for things like HTTP error codes. The types are important because they are likely to find their way into the “public interface” of libraries like hyper, as well as frameworks and the like. I would like to see a world where web applications can easily be moved between frameworks or across HTTP implementations, but that would be made easier if there is an agreed-upon standard for representing the details of an HTTP request. Maybe the http crate is that already, or can become that – in any case, I’m not sure if the stdlib is the right place for such a thing, or at least not for some time. It’s something to think about. (I do suspect that it might be useful to move such crates to the Rust org? But we’d have to have a good story around maintenance.) Anyway, I’m getting beyond what was in the interview I think.

Tracing

We talked a fair amount about the tracing library. Tracing is one of those libraries that can do a large number of things, so it’s kind of hard to concisely summarize what it does. In short, it is a set of crates for collecting scoped, structured, and contextual diagnostic information in Rust programs. One of the simplest use cases is to collect logging information, but it can also be used for things like profiling and any number of other tasks.

I myself started to become interested in tracing as a possible tool to help with debugging and analyzing programs like rustc and chalk, where the “chain” that leads to a bug can often be quite complex and involve numerous parts of the compiler. Right now I tend to just dump gigabytes of logs into files and traverse them with grep. In so doing, I lose all kinds of information (like hierarchical information about what happens during what) that would make my life easier. I’d love a tool that lets me, for example, track “all the logs that pertain to a particular function” while also making it easy to find the context in which a particular log occurred.

The tracing library got its start as a structured replacement for various hacky layers atop the log crate that were in use for debugging linkerd. As with many async applications, debugging a linkerd session involves correlating a lot of events that may be taking place at distinct times – or even on distinct machines – but are still part of one conceptual “thread” of control.

tracing is actually a “front-end” built atop the “tracing-core” crate. tracing-core is a minimal crate that just stores a thread-local containing the current “event subscriber” (which processes the tracing events in some way). You don’t interact with tracing-core directly, but it’s important to the overall design, as we’ll see in a bit.

The tracing front-end contains a bunch of macros, rather like the debug! and info! you may be used to from the log crate (and indeed there are crates that let you use those debug! logs directly). The major one is the span! macro, which lets you declare that a task is happening. It works by putting a “placeholder” on the stack: when that placeholder is dropped, the task is done:

let s: Span = span!(...); // create a span `s`
let _guard = s.enter();   // enter `s`, so that subsequent events take place "in" `s`
let t: Span = span!(...); // create a *subspan* of `s` called `t`
...

Under the hood, all of these macros forward to the “subscriber” we mentioned earlier. So it might receive events like “we entered this span” or “this log was generated”.

The idea is that events that happen inside of a span inherit the context of that span. So, to jump back to my compiler example, I might use a span to indicate which function is currently being type-checked, which would then be associated with any events that took place.
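The mechanics can be sketched with a much-simplified, hypothetical model (none of this is tracing's real API): a thread-local stack of span names, a guard that pushes on enter and pops on drop, and events that capture whatever spans are currently on the stack.

```rust
use std::cell::RefCell;

// Hypothetical, much-simplified model of tracing's "current span" idea:
// a thread-local stack of span names; events capture whatever is on it.
thread_local! {
    static SPAN_STACK: RefCell<Vec<String>> = RefCell::new(Vec::new());
}

struct SpanGuard;

impl SpanGuard {
    fn enter(name: &str) -> SpanGuard {
        SPAN_STACK.with(|s| s.borrow_mut().push(name.to_string()));
        SpanGuard
    }
}

impl Drop for SpanGuard {
    // Leaving the span pops it from the stack -- this is the RAII part.
    fn drop(&mut self) {
        SPAN_STACK.with(|s| {
            s.borrow_mut().pop();
        });
    }
}

// An "event" is decorated with the full path of enclosing spans.
fn event(msg: &str) -> String {
    SPAN_STACK.with(|s| format!("[{}] {}", s.borrow().join(" > "), msg))
}

fn main() {
    let _g = SpanGuard::enter("typeck");
    let line1 = event("start");
    {
        let _g2 = SpanGuard::enter("fn foo");
        // Events inside the nested span inherit the whole context.
        assert_eq!(event("unify types"), "[typeck > fn foo] unify types");
    }
    assert_eq!(line1, "[typeck] start");
    println!("{}", event("done"));
}
```

In the compiler example above, an event emitted while type-checking a particular function would automatically carry that function's span, which is exactly the "all logs that pertain to a particular function" grouping.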

There are many different possible kinds of subscribers. A subscriber might, for example, dump things out in real time, or it might just collect events and log them later. Crates like tracing-timing record inter-event timing and make histograms and flamegraphs.
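The core of inter-event timing can be sketched crudely (with synthetic timestamps and a hand-rolled power-of-two histogram; the real tracing-timing crate hooks into subscribers and uses HdrHistogram, so this is only an illustration of the idea):

```rust
// Bucket the gaps between consecutive event timestamps (in ms) into a
// power-of-two histogram: (bucket lower bound, count) pairs.
fn histogram(timestamps_ms: &[u64]) -> Vec<(u64, usize)> {
    let mut buckets = std::collections::BTreeMap::new();
    for pair in timestamps_ms.windows(2) {
        let gap = pair[1] - pair[0];
        // Largest power of two <= gap (a zero gap stays in bucket 0).
        let bucket = if gap == 0 {
            0
        } else {
            1u64 << (63 - gap.leading_zeros())
        };
        *buckets.entry(bucket).or_insert(0) += 1;
    }
    buckets.into_iter().collect()
}

fn main() {
    // Events at t = 0, 5, 9, 40 ms -> gaps of 5, 4 and 31 ms.
    let h = histogram(&[0, 5, 9, 40]);
    // Gaps 5 and 4 land in the [4, 8) bucket; 31 lands in [16, 32).
    assert_eq!(h, vec![(4, 2), (16, 1)]);
    println!("{:?}", h);
}
```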

Integrating tracing with other libraries

It seems clear that tracing would work best if it is integrated with other libraries. I believe it is already integrated into tokio, but one could also imagine integrating tracing with rayon, which distributes tasks across worker threads to run in parallel. The goal there would be that we “link” the tasks so that events which occur in a parallel task inherit the context/span information from the task which spawned them, even though they’re running on another thread.

The idea here is not only that Rayon can link up your application events, but that Rayon can add its own debugging information using tracing in a non-obtrusive way. In the ‘bad old days’, tokio used to have a bunch of debug! logs that would let you monitor what was going on – but these logs were often confusing and really targeting internal tokio developers.

With the tracing crate, the goal is that libraries can enrich the user’s diagnostics. For example, the hyper library might add metadata about the set of headers in a request, and tokio might add information about which thread-pool is in use. This information is all “attached” to your actual application logs, which have to do with your business logic. Ideally, you can ignore them most of the time, but if that sort of data becomes relevant – e.g., maybe you are confused about why a header doesn’t seem to be being detected by your appserver – you can dig in and get the full details.

Integrating tracing with other logging systems

Eliza emphasized that she would really like to see more interoperability amongst tracing libraries. The current tracing crate, for example, can be easily made to emit log records, making it interoperable with the log crate (there is also a “logger” that implements the tracing interface).

Having a distinct tracing-core crate means it is possible for there to be multiple facades built on tracing, potentially operating in quite different ways, which all share the same underlying “subscriber” infrastructure. (rayon uses the same trick; the rayon-core crate defines the underlying scheduler, so that multiple versions of the rayon ParallelIterator traits can co-exist without having multiple global schedulers.) Eliza mentioned that – in her ideal world – there’d be some alternative front-end that is so good it replaces the tracing crate altogether, so she no longer has to maintain the macros. =)

RAII and async fn doesn’t always play well

There is one feature request for async-await that arises from the tracing library. I mentioned that tracing uses a guard to track the “current span”:

let s: Span = span!(...); // create a span `s`
let _guard = s.enter();   // enter `s`, so that subsequent events take place "in" `s`
...

The way this works is that the guard returned by s.enter() adds some info into the thread-local state and, when it is dropped, that info is withdrawn. Any logs that occur while the _guard is still live are then decorated with this extra span information. The problem is that this mechanism doesn’t work with async-await.

As explained in the tracing README, the problem is that if an async fn yields at an await point, it is removed from the current thread and suspended. It will later be resumed, but potentially on another thread altogether. However, the _guard variable is not notified of these events, so (a) the thread-local info remains set on the original thread, where it may no longer belong, and (b) the destructor which goes to remove the info will run on the wrong thread.
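The failure mode is easy to reproduce with plain threads standing in for an executor that moves a task between workers: context stored in a thread-local simply isn't there on the thread where the work resumes. A hypothetical sketch (not tracing's actual types):

```rust
use std::cell::RefCell;
use std::thread;

// Hypothetical reproduction of the problem: a guard-style API stores
// context in a thread-local, but when the "task" resumes on another
// thread (as an async executor may arrange), that context is missing.
thread_local! {
    static CURRENT_SPAN: RefCell<Option<String>> = RefCell::new(None);
}

fn current_span() -> Option<String> {
    CURRENT_SPAN.with(|c| c.borrow().clone())
}

fn main() {
    // "Enter" a span on the current thread.
    CURRENT_SPAN.with(|c| *c.borrow_mut() = Some("request 42".into()));
    assert_eq!(current_span(), Some("request 42".to_string()));

    // Simulate the executor moving the rest of the task elsewhere:
    // the span information set above is simply not visible there.
    let seen_elsewhere = thread::spawn(|| current_span()).join().unwrap();
    assert_eq!(seen_elsewhere, None);
}
```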

One way to solve this would be to have some sort of callback that _guard can receive to indicate that it is being yielded, along with another callback for when an async fn resumes. This would probably wind up being optional methods of the Drop trait. This is basically another feature request to making RAII work well in an async environment (in addition to the existing problems with async drop that boats described here).

Priorities as a linkerd hacker

I asked Eliza to think for a second about what priorities she would set for the Rust org while wearing her “linkerd hacker” hat – in other words, when acting not as a library designer, but as the author of an application that relies on async I/O. Most of the feedback here though had more to do with general Rust features than async-await specifically.

Eliza pointed out that linkerd hasn’t yet fully upgraded to use async-await, and that the vast majority of pain points she’s encountered thus far stem from having to use the older futures model, which didn’t integrate well with Rust borrows.

The other main pain point is the compilation time costs imposed by the deep trait hierarchies created by tower’s service and layer traits. She mentioned hitting a type error that was so long it actually crashed her terminal. I’ve heard of others hitting similar problems with this sort of setup. I’m not sure yet how this is best addressed.

Another major feature request would be to put more work into procedural macros, especially in expression position. Right now proc-macro-hack is the tool of choice but – as the name suggests – it doesn’t seem ideal.

The other major point is that support for cargo feature flags in tooling is pretty minimal. It’s very easy to have code with feature flags that “accidentally” works – i.e., I depend on feature flag X, but I don’t specify it; it just gets enabled via some other dependency of mine. This also makes testing of feature flags hard. rustdoc integration could be better. All true, all challenging. =)

Comments?

There is a thread on the Rust users forum for this series.

Categorieën: Mozilla-nl planet

The Firefox Frontier: The 7 best things about the new Firefox browser for Android

Mozilla planet - ma, 10/02/2020 - 22:44

The biggest ever update to Firefox browser for Android is on its way. Later this spring, everyone using the Firefox browser on their Android phones and tablets will get the … Read more

The post The 7 best things about the new Firefox browser for Android appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Extensions in Firefox 73

Mozilla planet - ma, 10/02/2020 - 18:00

As promised, the update on changes in Firefox 73 is short: There is a new sidebarAction.toggle API that will allow you to open and close the sidebar. It requires being called from a user action, such as a context menu or click handler. The sidebar toggle was brought to you by Mélanie Chauvel. Thanks for your contribution, Mélanie!

On the backend, we fixed a bug that caused tabs.onCreated and tabs.onUpdated events to be fired out-of-order.

We have also added more documentation on changing preferences for managing settings values with experimental WebExtensions APIs. As a quick note, you will need to set the preference extensions.experiments.enabled to true to enable experimental WebExtensions APIs starting with Firefox 74.

That’s all there is to see for Firefox 73. We’ll be back in a few weeks to highlight changes in Firefox 74.

The post Extensions in Firefox 73 appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Daniel Stenberg: curl ootw: –keepalive-time

Mozilla planet - ma, 10/02/2020 - 15:54

(previously blogged about options are listed here.)

This option is named --keepalive-time even if the title above ruins the double-dash (thanks for that WordPress!). This command line option was introduced in curl 7.18.0 back in early 2008. There’s no short version of it.

The option takes a numerical argument; number of seconds.

What’s implied in the option name and not spelled out is that the particular thing you ask to keep alive is a TCP connection. When the keepalive feature is not used, TCP connections typically don’t send anything at all if no data is transmitted.

Idle TCP connections

Silent TCP connections typically cause the two primary issues:

  1. Middle-boxes that track connections, such as your typical NAT boxes (home WiFi routers most notoriously) will consider silent connections “dead” after a certain period of time and drop all knowledge about them, leading to the connection not functioning when the client (or server) later wants to resume operation of it.
  2. Neither side of the connection will notice when the network between them breaks, as it takes actual traffic to do so. This is of course also a feature, because there’s no need to be alarmed by a breakage if there’s no traffic as it might be fine again when it eventually gets used again.

TCP stacks then typically implement a low-level feature where they can send a “ping” frame over the connection if it has been idle for a certain amount of time. This is the keepalive packet.

--keepalive-time <seconds> therefore sets the interval. After this many seconds of “silence” on the connection, there will be a keepalive packet sent. The packet is totally invisible to the applications on both sides but will maintain the connection through NATs better and if the connection is broken, this packet will make curl detect it.

Keepalive is not always enough

To complicate issues even further, there are also devices out there that will still close down connections if they only send TCP keepalive packets and no data for certain period. Several protocols on top of TCP have their own keepalive alternatives (sometimes called ping) for this and other reasons.

This aggressive style of closing connections without actual TCP traffic typically hurts long-running FTP transfers. This is because FTP sets up two connections for a transfer, but the first one is the “control connection” and while a transfer is being delivered on the “data connection”, nothing happens over the first one. This can then result in the control connection being “dead” by the time the data transfer completes!

Default

The default keepalive time is 60 seconds. You can also disable keepalive completely with the --no-keepalive option.

The default time has been selected to be fairly low because many NAT routers out there in the wild are fairly aggressive and close idle connections after as little as two minutes (120 seconds).

For what protocols

This works for all TCP-based protocols, which is what most protocols curl speaks use. The only exception right now is TFTP. (See also QUIC below.)

Example

Change the interval to 3 minutes:

curl --keepalive-time 180 https://example.com/

Related options

A related functionality is the --speed-limit and --speed-time options that will cancel a transfer if the transfer speed drops below a given speed for a certain time. Or just the --max-time that sets a global timeout for an entire operation.

QUIC?

Soon we will see QUIC getting used instead of TCP for some protocols: HTTP/3 being the first in line for that. We will have to see what exactly we do with this option when QUIC starts to get used and what the proper mapping and behavior shall be.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR19 available

Mozilla planet - ma, 10/02/2020 - 05:36
Due to a busy work schedule and $REALLIFE, TenFourFox Feature Parity Release 19 final is just now available for testing (downloads, hashes, release notes). This version is the same as the beta except for a couple URL bar tweaks I meant to land and the outstanding security updates. If all goes well, it will go live tomorrow Pacific time in the evening.

Since the new NSS is sticking nicely, FPR20 will probably be an attempt at enabling TLS 1.3, and just in time, too.

Categorieën: Mozilla-nl planet

About:Community: Firefox 73 new contributors

Mozilla planet - zo, 09/02/2020 - 14:07

With the release of Firefox 73, we are pleased to welcome the 19 developers who contributed their first code change to Firefox in this release, 18 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categorieën: Mozilla-nl planet

Daniel Stenberg: Rockbox services transition

Mozilla planet - za, 08/02/2020 - 13:25

Remember Rockbox? It is a free software firmware replacement for mp3 players. I co-founded the project back in 2001 together with Björn and Linus. I officially left the project back in 2014.

The project is still alive today, even though, of course, many of us can hardly remember the concept of a separate portable music player and can’t figure out why that’s a good idea when we carry around a powerful phone all day anyway that can do the job – better.

Already when the project took off, we at Haxx hosted the web site and related services. Heck, if you don’t run your own server to add fun toy projects to, then what kind of lame hacker are you?

None of us at Haxx participates in the project any longer, and we haven’t done so for several years. We host the web site, we run the mailing lists, we take care of the DNS, etc.

Most of the time it’s no biggie. The server hosts a bunch of other things anyway for other projects, so what are a few extra services after all?

Then there are times when things stop working or when we get a renewed bot attack or web crawler abuse against the site, and we get reminded that here we are, more than eighteen years later, hosting things and doing work for a project we don’t care much for anymore.

It doesn’t seem right anymore. We’re pulling the plug on all services for Rockbox that occasionally gives us work and annoyances. We’re offering to keep hosting DNS and the mailing lists – but if active project members rather do those too, feel free. It never was a life-time offer and the time has come for us.

If people still care for the project, it is much better if those people will also care for these things for the project’s sake. And today there are more options than ever for an open source project to get hosting, bug tracking, CI systems etc setup for free with quality. There’s no need for us ex-Rockboxers to keep doing this job that we don’t want to do.

I created a wiki page to detail The Transition. We will close down the specified services on January 1st 2021 but I strongly urge existing Rockboxers to get the transition going as soon as possible.

I’ve also announced this on the rockbox-dev mailing list, and I’ve mentioned it in the Rockbox IRC.

Categorieën: Mozilla-nl planet

Mozilla VR Blog: Visual Development in Hello WebXR!

Mozilla planet - do, 06/02/2020 - 18:31
Visual Development in Hello WebXR!

This is a post that tries to cover many aspects of the visual design of our recently released demo Hello WebXR! (more information in the introductory post), targeting those who can create basic 3D scenes but want to find more tricks and more ways to build things, or simply are curious about how the demo was made visually. Therefore this is not intended to be a detailed tutorial or a dogmatic guide, but just a write-up of our decisions. End of the disclaimer :)

Here it comes a mash-up of many different topics presented in a brief way:

  • Concept
  • Pipeline
  • Special Shaders and Effects
  • Performance
  • Sound Room
  • Vertigo Room
  • Conclusion
Concept


From the beginning, our idea was to make a simple, down-paced, easy to use experience that gathered many different interactions and mini-experiences, introduced VR newcomers to the medium, and also showcased the recently released WebXR API. It would run on almost any VR device, but our main target device was the Oculus Quest, so we thought that some mini-experiences could share the same physical space, while others would have to be moved to a different scene (room), either for performance reasons or due to their own nature.

We started by gathering references and making concept art, to figure out how the "main hall" would look like:

(Image: Assorted images taken from the web and Sketchfab)

Then, we used Blender to start sketching the hall and test it on VR to see how it feels. It should have to be welcoming and nice, and kind of neutral to be suitable for all audiences.

(Image: Look how many pedestals and doors for experiences we initially planned to add :_D)

Pipeline

3D models were exported to glTF format (Blender now comes with an exporter, and three.js provides a loader), and for textures PNG was used almost all the time, although at a late stage in the development of the demo all textures were manually optimized to drastically reduce the size of the assets. Some textures were preserved in PNG (handles transparency), others were converted to JPG, and the bigger ones were converted to BASIS using the basisu command line program. Ada Rose Cannon’s article introducing the format and how to use it is a great read for those interested.

glTF files were exported without materials, since they were created manually by code and assigned to the specific objects at load time to make sure we had the exact material we wanted and that we could also tweak easily.

In general, the pipeline was pretty traditional and simple. Textures were painted or tweaked using Photoshop. Meshes and lightmaps were created using Blender and exported to glTF and PNG.

For creating the lightmap UVs, and before unwrapping, carefully picked edges were marked as seams and then the objects were unwrapped using the default unwrapper, in the majority of cases. Finally, UVs were optimized with UVPackMaster 2 PRO.

Draco compression was also used in the case of the photogrammetry object, which reduced the size of the asset from 1.41MB to 683KB, less than half.

Special Shaders and Effects

Some custom shaders were created for achieving special effects:

Beam shader

This was achieved by offsetting the texture along one axis and rendering it in additive mode.


The texture is a simple gradient. Since it is rendered in additive mode, black turns transparent (does not add), and dark blue adds blue without saturating to white.
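Per pixel, additive blending just sums the source over the destination (clamped), which is why black contributes nothing. A rough sketch of that arithmetic (not the demo's actual shader code, and written in Rust rather than GLSL purely for illustration):

```rust
// Rough per-pixel model of additive blending: out = min(dst + src, 1.0).
// A black source texel (0,0,0) leaves the destination untouched, which
// is why black regions of the gradient texture read as "transparent".
fn add_blend(dst: [f32; 3], src: [f32; 3]) -> [f32; 3] {
    [
        (dst[0] + src[0]).min(1.0),
        (dst[1] + src[1]).min(1.0),
        (dst[2] + src[2]).min(1.0),
    ]
}

fn main() {
    let background = [0.25, 0.5, 0.5];
    // Black adds nothing...
    assert_eq!(add_blend(background, [0.0, 0.0, 0.0]), background);
    // ...while dark blue adds blue without saturating to white.
    assert_eq!(add_blend(background, [0.0, 0.0, 0.25]), [0.25, 0.5, 0.75]);
}
```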


And the ray target is a curved mesh. The top cylinder and the bottom disk are seamlessly joined, but their faces and UVs go in opposite directions.

Door shader

This is for the star field effect in the doors. The inward feeling is achieved by pushing the mesh from the center, and scaling it in Z when it is hovered by the controller’s ray.


This is the texture that is rendered in the shader using polar coordinates and added to a base blue color that changes in time.

Panorama ball shader

Used in the deformation (in shape and color) of the panorama balls.


The halo effect is just a special texture summed to the landscape thumbnail, which is previously modified by shifting the red channel to the left and the blue channel to the right.

Zoom shader

Used in the zoom effect for the paintings, showing only a portion of the texture and also a white circular halo. The geometry is a simple plane, and the shader gets the UV coordinates of the raycast intersection to calculate the amount of texture to show in the zoom.
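The UV arithmetic behind such a zoom is straightforward: sample a window of size 1/zoom centered on the hit point, clamped so it stays on the texture. A hypothetical sketch (the function name, parameters and the 2x factor are made up for illustration, not taken from the demo's shader):

```rust
// Given the UV of the raycast hit and a zoom factor, map an output UV
// back into the source texture: sample a window of size 1/zoom centered
// (and clamped) on the hit point.
fn zoom_uv(hit: (f32, f32), uv: (f32, f32), zoom: f32) -> (f32, f32) {
    let half = 0.5 / zoom;
    // Keep the window inside [0,1] so we never sample off the painting.
    let cx = hit.0.clamp(half, 1.0 - half);
    let cy = hit.1.clamp(half, 1.0 - half);
    (cx - half + uv.0 / zoom, cy - half + uv.1 / zoom)
}

fn main() {
    // At 2x zoom around the center, the sampled window is [0.25, 0.75].
    assert_eq!(zoom_uv((0.5, 0.5), (0.0, 0.0), 2.0), (0.25, 0.25));
    assert_eq!(zoom_uv((0.5, 0.5), (1.0, 1.0), 2.0), (0.75, 0.75));
    // Near a corner the window is clamped so it stays on the texture.
    assert_eq!(zoom_uv((0.0, 0.0), (0.0, 0.0), 2.0), (0.0, 0.0));
}
```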

SDF Text shader

Text rendering was done using the Troika library, which turned out to be quite handy because it is able to render SDF text using only a URL pointing to a TTF file, without having to generate a texture.

Performance

Oculus Quest is a device with mobile performance, and that requires a special approach when dealing with polygon count, complexity of materials and textures; different from what you could do for desktop or high end devices. We wanted the demo to perform smoothly and be indistinguishable from native or desktop apps, and these are some of the techniques and decisions we took to achieve that:

  • We didn't want a low-poly style, but something neat and smooth. However, polygon count was reduced to the minimum within that style.
  • Meshes were merged whenever it was possible. All static objects that could share the same material were merged and exported as a single mesh.
  • Materials were simplified, reduced and reused. Almost all elements in the scene have a constant (unlit) material, and only two directional lights (sun and fill) are used in the scene for lighting the controllers. PBR materials were not used. Since constant materials cannot be lit, lightmaps must be precalculated to give the feeling of lighting. Lightmaps have two main advantages:

    - Lighting quality can be superior to real time lighting, since the render is done “offline”. This is done beforehand, without any time constraint. This allows us to do full global illumination with path tracing in Blender, simulating light close to real life.

    - Since no light calculations are done realtime, constant shading is the one that has the best performance: it just applies a texture to the model and nothing else.

    However, lightmaps also have two main disadvantages:

    - It is easy to get big, noticeable pixels or pixel noise in the texture when applied to the model (due to the insufficient resolution of the texture or to the lack of smoothness or detail in the render). This was solved by using 2048x2048 textures, rendered with an insane amount of samples (10,000 in our case since we didn’t have CUDA or Denoising available at that moment). 4096px textures were initially used and tested in Firefox Mixed Reality, but Oculus Browser did not seem to be able to handle them so we switched to 2048, reducing texture quality a bit but improving load time along the way.

    - You cannot change the lighting dynamically, it must be static. This was not really an issue for us, since we did not need any dynamic lighting.
(Image: Hall, Vertigo and Angel lightmaps, respectively.)

Sound Room

(Image: Sketches for the visual hints in the sound room)

Each sound in the sound room is accompanied by a visual hint. These little animations are simple meshes animated using regular keyframes on position/rotation/scale transforms.

(Image: Blender setup for the sound room)

Vertigo Room

The first idea for the vertigo room was to build a low-poly but convincing city and put the user on top of a skyscraper, which took some days of Blender work.


We tried this in VR, and to our surprise it did not produce vertigo! We tested different alternatives and modifications to the initial design without success. Apparently, you need more than just lifting the user to 500m to produce vertigo. Texture scale is crucial for this and we made sure the textures were at a correct scale, but more is needed. Vertigo is about being in a situation of risk, and some factors in this scene did not make you feel unsafe. Our bet is that the position and scale of the other buildings compared to the player's situation made them feel less present, less physical, less tangible. Also, unrealistic lighting and texturing may have contributed to the lack of vertigo.

So we started another scene for the vertigo, focusing on the position and scale of the buildings, simplifying the texture to a simple checkerboard, and adding the user in a really unsafe situation.


The scene is composed of only two meshes: the buildings and the teleport door. Since the range of movement of the user in this scene is very limited, we could remove all the sides of the buildings that face away from the center of the scene. The material is a constant one with a repeated checkerboard texture to give a sense of scale, and a lightmap texture that provides lighting and volume.


Conclusion

Things that did not go very well:

  • We didn’t use the right hardware to render lightmaps, so it took 11 hours to render, which did not help us iterate quickly.
  • Wasted a week refining the first version of the vertigo room without testing properly if the vertigo effect worked or not. We were overconfident about it.
  • We had a tricky bug with Troika SDF text library on Oculus Browser for many days, which was finally solved thanks to its author.
  • There is something obscure in the mipmapping of BASIS textures and the Quest. The level of mipmap chosen is always lower than it should, so textures look lower quality. This is noticeable when getting closer to the paintings from a distance, for example. We played with basisu parameters, but it was not of much help.
  • There are still many improvements we can make to the pipeline to speed up content creation.

Things we like how it turned out:

  • Visually it turned out quite clean and pleasing to the eye, without looking cheap despite using simple materials and reduced textures.
  • The effort we put into merging meshes and simplifying materials was worth it, performance wise, the demo is very solid. Although we did not test while developing on lower end devices, we loved seeing that it runs smoothly on 3dof devices like Oculus Go and phones, and on all browsers.
  • Despite some initial friction, new formats and technologies like BASIS or Draco work well and bring real improvements. If all textures were JPG or PNG, loading and starting times would be many times longer.

We uploaded the Blender files to the Hello WebXR repository.

If you want to know the specifics of something, do not hesitate to contact me at @feiss or the whole team at @mozillareality.

Thanks for reading!

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Brrrlin 2020: a SUMO journal from All Hands

Mozilla planet - do, 06/02/2020 - 16:48

Hello, SUMO Nation!

Berlin 2020 has been my first All Hands and I am still experiencing the excitement the whole week gave me.

Contributors picture

The intensity an event of this scale is able to build is slightly overwhelming (I suppose all the introverts reading this can easily get me), but the gratification and insights everyone of us has taken home are priceless.

The week started last Monday, on January 27th, when everyone landed in Berlin from all over the world. An amazing group of contributors, plus every colleague I had always only seen on a small screen, was there, in front of me, flesh and bones. I was both excited and scared by the number of people that suddenly were inhabiting the corridors of our conference/dorm/workspace.

The schedule for the SUMO team and SUMO contributors was a little tight, but we managed to make it work: Kiki and I decided to share our meetings between the days and I am happy about how we balanced the work/life energy.

On Tuesday we opened the week by having a conversation over the past, the current state and the future of SUMO. The community meeting was a really good way to break the ice, the whole SUMO team was there and gave updates from the leadership, products, as well as the platform team.  This meeting was necessary also to lay down the foundations for the priorities of the week and develop an open conversation.

On Wednesday, Kiki and I were fully in the game. We decided to have two parallel sessions: one regarding the Forum and Social support and one focusing on the KB localization. The smaller groups were both really vibrant and lively. We highlighted pain points, things that are working and issues that we as community managers could focus more on at this time. In the afternoon, we had a face to face meeting between the community and the Respond Tool team. It was a feedback-based discussion on features and bugs.

Thursday was ON FIRE. In the morning we had the pleasure to host Vesta Zare, the Product Manager of Fenix, and we had a session focusing on Firefox Preview and its next steps. Vesta was thrilled to meet the SUMO community, excited to share information, and happy to answer questions. After the session, we had a 2-hour-long brainstorming workshop organized by Kiki and me for the community to help us build a priority pipeline for the Community plan we have been working on in the last few months. The session was long but incredibly helpful and everyone who participated was active and rich in insights. The day was still running at a fast pace and the platform team had an Ask-Me-Anything session with the contributors. Madalina and Tasos were great and they both set real expectations while leaving the community open doors to get involved.

On Friday the community members were free to follow their own schedule, while the SUMO team had a few final meetings to attend. The week closed with one of the most incredible parties I have ever experienced, which was a great opportunity to collect the last bits of feedback and rekindle the friendly connections we had lost along the way of this really busy week.

Here is a recollection of the pain points we got from the meetings with contributors:

  • On-boarding new contributors: retainment is low for many reasons (time, skillset, etc.)
  • Contributors’ tools, first and foremost, Kitsune, need attention.
  • The bus factor is still very much real.
  • The community needs Forum, Social and Respond Tool analysis:
    • Which questions are being skipped and not answered?
    • Device coverage from contributors.
  • What about the non-EN locales on the community events?
  • Localization quality and integrity are at risk.
  • Language level of the KB is too technical and does not reach every audience.

We have also highlighted the many successes that we have from last year:

  • The add-on apocalypse
  • The 7 SUMO Sprints (Fx 65-71)
  • The 36 community meetings
  • More than 300 articles localized in every language
  • One cool add-on (SUMO Live Helper) (Thanks to Jhonatas, Wesley, and Danny!)
  • The Respond tool campaign

As you’ve probably heard before, we’re currently working with an external agency called Context Partners on the community strategy project. The result from that collaboration is a set of recommendations on 3 areas that we managed to discuss during the all hands.

Recommendations

Obviously, we wouldn’t be able to do all of them, so we need your help.

Which recommendation do you believe would provide the greatest benefit to the SUMO community? 

Is there a recommendation you would make that is missing from this list?

Your input would be very valuable to us, since the community is all about you. We will take all of your feedback with us to be discussed in our final meeting with the Context Partners team in Toronto in mid-February. We’d appreciate any additional feedback we can gather before the end of next week (02/14/2020).

Please read carefully and think about the questions above. Kiki and I have opened a Discourse post and a Contributor Forum thread to collect feedback on this. You can also reach out directly to us with your questions or feedback.

I feel lucky to be part of this amazing community and to work alongside passionate and lively people I can look up to every day. Remember that SUMO is made by you, and you should be proud to identify yourself as part of this incredible group of people who honestly enjoy helping others.

As a celebration of the All Hands and the SUMO community, I would like to share the poem that Seburo kindly shared with us:

It is now over six months since Mozilla convened last,
and All Hands is now coming up so fast.
From whatever country, nation or state they currently be in,
Many MoCo and MoFo staff, interns and contributors are converging on Berlin.
Twenty Nineteen was a busy year,
Much is going on with Firefox Voice, so I hear.
The new Fenix is closer to release,
the GeckoView team’s efforts will not cease.
MoFo is riding high after an amazing and emotional MozFest,
For advice on how to make the web better, they are the best.
I hope that the gift guide was well read,
Next up is putting concerns about AI to bed…?
Please don’t forget contributors who are supporting the mission from wide and far,
Writing code, building communities and looking to Mozilla’s north star.
The SUMO team worked very hard during the add-on apocalypse,
And will not stop helping users with useful advice and tips.
I guess I should end with an attempt at a witty one liner.
So here it is.
For one week in January 2020,
Mozillianer sind Berliner.

Thank you for being part of SUMO,

See you soon!

Giulia


Mozilla Addons Blog: uBlock Origin available soon in new Firefox for Android Nightly

Mozilla planet - do, 06/02/2020 - 16:43

Last fall, we announced our intention to support add-ons in Mozilla’s reinvented Firefox for Android browser. This new, high-performance browser for Android has been rebuilt from the ground up using GeckoView, Mozilla’s mobile browser engine and has been available for early testing as Firefox Preview. A few weeks ago, Firefox Preview moved into the Firefox for Android Nightly pre-release channel, starting a new chapter of the Firefox experience on Android.

In the next few weeks, uBlock Origin will be the first add-on to become available in the new Firefox for Android. It is currently available on Firefox Preview Nightly and will soon be available on Firefox for Android Nightly. As one of the most popular extensions in our Recommended Extensions program, uBlock Origin helps millions of users gain control of their web experience by blocking intrusive ads and improving page load times.

As GeckoView builds more support for WebExtensions APIs, we will continue to enable other Recommended Extensions to work in the new Firefox for Android.

We want to ensure that any add-on supported in the new Firefox for Android provides an exceptional, secure mobile experience to our users. To this end, we are prioritizing Recommended Extensions that are optimized for different screen sizes and cover common mobile use cases. For these reasons, it’s possible that not all the add-ons you have previously installed in Firefox for Android will be supported in the near future. When an add-on you previously installed becomes supported, we will notify you.

When we have more information about how we plan to support add-ons in Firefox for Android beyond our near-term goals, we will post them on this blog. We hope you stay tuned!

The post uBlock Origin available soon in new Firefox for Android Nightly appeared first on Mozilla Add-ons Blog.

