
Jan-Erik Rediger: "Edit this file on GitHub"

Mozilla planet - Thu, 06/02/2020 - 16:38

At work I help maintain two large documentation books:

  • dtmo
  • the Glean SDK book

Back in 2018 I migrated dtmo from gitbook to mdbook (see the pull request). mdbook is maintained by the Rust project and hosts the Rust book as well as a multitude of other community projects. It provides everything we need, plus a way to extend it with some small things; I blogged about ToC and mermaid support before.

During the Mozilla All Hands last week my colleague Mike casually asked why we don't have links to quickly edit the documentation. When someone discovers a mistake or inaccuracy in the book, the current process involves finding the repository of the book, then finding the right file, editing that file (through the GitHub UI or by cloning the repository), pushing the changes, opening a pull request, waiting for review, and finally getting it merged and deployed.

I immediately set out to build this feature.

I present to you: mdbook-open-on-gh

It's another preprocessor for mdbook that adds a link to the edit dialog on GitHub (if your book is actually hosted on GitHub). This is how it looks:

Screenshot of a Glean SDK book site showing the "Edit this file on GitHub" link

It's already deployed on dtmo and the Glean SDK book and simplifies the workflow to: click the link, edit the file on GitHub, commit and open a PR, get a review and merge it to deploy.

If you want to use this preprocessor, install it:

cargo install mdbook-open-on-gh

Add it as a preprocessor to your book.toml:

[preprocessor.open-on-gh]
command = "mdbook-open-on-gh"
renderer = ["html"]

Add a repository URL to use as a base in your book.toml:

[output.html]
git-repository-url = "https://github.com/mozilla/glean"

To style the footer add a custom CSS file for your HTML output:

[output.html]
additional-css = ["open-in.css"]

And in open-in.css, style the <footer> element, or directly target the element ID open-on-gh:

footer {
    font-size: 0.8em;
    text-align: center;
    border-top: 1px solid black;
    padding: 5px 0;
}

This code block shrinks the text size, center-aligns it under the rest of the content and adds a small horizontal bar above the text to separate it from the page content.

Finally, build your book as normal:

mdbook build path/to/book

Hacks.Mozilla.Org: It’s the Boot for TLS 1.0 and TLS 1.1

Mozilla planet - Thu, 06/02/2020 - 15:35
Coming to a Firefox near you in March

The Transport Layer Security (TLS) protocol is the de facto means for establishing security on the Web. The protocol has a long and colourful history, starting with its inception as the Secure Sockets Layer (SSL) protocol in the early 1990s, right up until the recent release of the jazzier (read faster and safer) TLS 1.3. The need for a new version of the protocol was born out of a desire to improve efficiency and to remedy the flaws and weaknesses present in earlier versions, specifically in TLS 1.0 and TLS 1.1. See the BEAST, CRIME and POODLE attacks, for example.

With limited support for newer, more robust cryptographic primitives and cipher suites, it doesn’t look good for TLS 1.0 and TLS 1.1. With the safer TLS 1.2 and TLS 1.3 at our disposal to adequately protect web traffic, it’s time to move the TLS ecosystem into a new era, namely one which doesn’t support weak versions of TLS by default. This has been the abiding sentiment of browser vendors – Mozilla, Google, Apple and Microsoft have committed to disabling TLS 1.0 and TLS 1.1 as default options for secure connections. In other words, browser clients will aim to establish a connection using TLS 1.2 or higher. For more on the rationale behind this decision, see our earlier blog post on the subject.

What does this look like in Firefox?

We deployed this in Firefox Nightly, the experimental version of our browser, towards the end of 2019. It is now also available in Firefox Beta 73. In Firefox, this means that the minimum TLS version allowable by default is TLS 1.2. This has been executed in code by setting security.tls.version.min=3, a preference indicating the minimum TLS version supported. Previously, this value was set to 1. If you’re connecting to sites that support TLS 1.2 and up, you shouldn’t notice any connection errors caused by TLS version mismatches.
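For reference, this is what that preference looks like in a profile's user.js file (shown purely for illustration; Firefox ships this default, so you normally don't need to set it yourself):

user_pref("security.tls.version.min", 3); // 1 = TLS 1.0 (the old default), 3 = TLS 1.2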

What if a site only supports lower versions of TLS?

In cases where only lower versions of TLS are supported, i.e., when the more secure TLS 1.2 and TLS 1.3 versions cannot be negotiated, we allow for a fallback to TLS 1.0 or TLS 1.1 via an override button. As a Firefox user, if you find yourself in this position, you’ll see this:

screenshot showing "Secure Connection Failed" message that allows user to override the TLS 1.0 and 1.1 deprecation

As a user, you will have to actively initiate this override. But the override button offers you a choice. You can, of course, choose not to connect to sites that don’t offer you the best possible security.

This isn’t ideal for website operators. We would like to encourage operators to upgrade their servers so as to offer users a secure experience on the Web. We announced our plans regarding TLS 1.0 and TLS 1.1 deprecation over a year ago, in October 2018, and now the time has come to make this change. Let’s work together to move the TLS ecosystem forward.
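If you operate a server and want to check which TLS versions it still negotiates, one quick probe is an OpenSSL client handshake (assuming the openssl command is available; replace the host name with your own):

openssl s_client -connect example.com:443 -tls1_1

If that handshake succeeds, the server still accepts TLS 1.1; swapping the flag for -tls1 or -tls1_2 tests the other versions.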

Deprecation timeline

We plan to monitor telemetry over two Firefox Beta cycles, and then we’re going to let this change ride to Firefox Release. So, expect Firefox 74 to offer TLS 1.2 as its minimum version for secure connections when it ships on 10 March 2020. We plan to keep the override button for now; the telemetry we’re collecting will tell us more about how often this button is used. These results will then inform our decision regarding when to remove the button entirely. It’s unlikely that the button will stick around for long. We’re committed to completely eradicating weak versions of TLS because at Mozilla we believe that user security should not be treated as optional.

Again, we would like to stress the importance of upgrading web servers over the coming months, as we bid farewell to TLS 1.0 and TLS 1.1. R.I.P., you’ve served us well.



Mozilla Security Blog: Multi-Account Containers Add-on Sync Feature

Mozilla planet - Thu, 06/02/2020 - 15:26

Image of the Multi-Account Containers Sync On-boarding Screen

The Multi-Account Containers Add-on will now sync your container configuration and site assignments.

Firefox Multi-Account Containers allows users to separate their online identities into different tab types called Containers. Each Container has its own separate storage and cookies, so browsing activity in one Container is not accessible to websites in other Containers. This privacy feature allows users to assign sites to only open in a specific Container. For instance, they can set their shopping websites to always open in a Shopping Container, which keeps advertising tracking data from those websites separate from their Work Container. Users can also use Containers for separate areas of their life, like work and personal email, separating email accounts from the same provider so they don’t have to log in and out of each account. For more information about how to use the Containers add-on, visit the Mozilla support page.

The new sync feature keeps Multi-Account Containers aligned across different computers. The add-on carries over Container names, colors, icons, and site assignments to any other machines signed in to the same Firefox account.

If you have allowed automatic updates of the add-on, your extension should update on its own. The first time you click the Multi-Account Container icon after the update, an on-boarding panel will allow you to activate sync.

In order to use this feature, you will need to be signed in to a Firefox account in Firefox.



Robert Kaiser: FOSDEM, and All Those 20's

Mozilla planet - Thu, 06/02/2020 - 14:02
I've been meaning to blog again for some time, and just looked in disbelief at the date of my last post. Yes, I'm still around. I hope I get to write more often in the future.

Ludo just posted his thoughts on FOSDEM, which I also attended last weekend as a volunteer for Mozilla. I have been attending this conference since 2002, when it first went by that exact name, and since then have AFAIK only missed the 2010 edition, giving talks in the Mozilla dev room almost every year - though funnily enough, in two of the three years I've been a member of the Mozilla Tech Speakers program, my talks were not accepted into that room, while I made it in all the years before. That says more about how many speakers are interested in getting into this room nowadays; in the past there were probably fewer submissions in total. So, this year I helped out in Sunday's Mozilla developer room by managing the crowd entering/leaving at the door(s), similar to what I did in the last few years, and given that we had fewer volunteers this year, I also helped out at the Mozilla booth on Saturday. Unfortunately, being busy volunteering on both days meant that I did not catch any talks at all at the conference (I hear there were some good ones, esp. in our dev room), but I had a number of good hallway and booth conversations with various people, esp. within the Mozilla community - be it with friends I had not seen for a while, new interesting people within and outside of Mozilla, or conversations clearing up lingering questions.

(pictures by Rabimba & Bob Chao)

Now, this was the 20th conference by the FOSDEM team (their first one went by "OSDEM", before they added the "F" in 2002), and the number 20 is coming up for me all over the place - not just that it works double duty in the current year's number 2020, but even in the months before, I started my run of 20-year anniversaries in terms of my Mozilla contributions: first bug reported in May, first contribution contact in December, first German-language Mozilla suite release on January 1, and it will continue with the 20th anniversaries of my first patches to shared code this summer - see 'My Web Story' post from 2013 for more details. So, being part of an Open-Source project with more than 20 years of history, celebrating a number of 20th anniversaries in that community, I see that number popping up quite a bit nowadays. Around the turn of the century/millennium, a lot of change happened, for me personally but all around as well. Since then, it has been a whirlwind, and change is the one constant that really stayed with me and has become almost a good friend. A lot of changes are going on in the Mozilla community right now as well, and after a bit of a slump and trying to find my new place in this community (since I switched back from staff to volunteer in 2016), I'm definitely excited again to try and help build this next chapter of the future with my fellow Mozillians.

There's so much more going around in my mind, but for now I'll leave it at that: in past times, when I was invited as a volunteer or staff member, the Mozilla Summits and All-hands were events that energized me and gave me motivation to push forward on making Mozilla better. This year, FOSDEM, with my volunteering and the conversations I had, did the same job. Let's build a better Internet and a better Mozilla community!

Ludovic Hirlimann: Fosdem turns 20

Mozilla planet - Thu, 06/02/2020 - 12:59

I've been attending Fosdem since 2004, when I was involved with Camino. I got enticed to come by a post by Tristan. That particular year I got enrolled by Gerv to check a few Mac things. I met Patrick, who was working on Enigmail, and we became friends. I was hooked - and have since only missed Fosdem 2015. Over the years I gave talks, met new people, made friends. 3 years ago I became a volunteer, by accident, and ran the PGP key signing party. I enjoyed being a volunteer; it was fun and gave me an orange T-shirt to grow my collection. So the year after, I signed up on volunteers.fosdem.org to help clean up on the Sunday evening. It was also my first time attending the Fosdem fringe (CentOS dojo and Configuration Management Camp).
2020 was very special: I helped organize the Mozilla dev room (thank you, Anthony and Jean-Yves, for letting me be part of that). A few things happened that reduced the number of volunteers this year. I managed the room and had the pleasure of introducing speakers and talks. It went very smoothly because Robert was directing people to the door where space was available in the auditorium. My two favorites were:

Once the talks were over, I attended the last two talks in the main auditorium, Janson: the closing talk and the one remembering 20 years of the conference. The latter was a hell of a great show - thanks, MarquisdeGeek (the video is not ready yet).


Tantek Çelik: Local First, Undo Redo, JS-Optional, Create Edit Publish

Mozilla planet - Thu, 06/02/2020 - 10:11

For a while I have been brainstorming designs for a user experience (UX) to create, edit, and publish notes and other types of posts that is fully undoable (like Gmail’s "Undo Send", yet generalized to all user actions) and redoable, works local first, and lastly, uses progressive enhancement to work without scripts in the extreme fallback case of not being installed and scripts not loading.

I’d like to be able to construct an entire post on a mobile device, like a photo post with caption, people tags, location tag etc. all locally, offline, without any need to access a network.

This is like how old email applications used to work. You could be completely offline, open your email application (there was no need to login to it!), create a message, add attachments, edit it etc., click "Send" and then forget about it. Eventually it synced to the network but you didn’t worry or care about when that step would happen, you just knew it would eventually work without you having to tend to it or watch it.

I want to approach this from a user-experience-first design perspective, rather than a bottom-up protocol/technology/backend-first perspective. For one, I don’t know if any existing protocols actually have the necessary features to support such a UX.

Micropub has a lot of what’s needed, and I won’t know what else is needed until I build the user flows I want, and then use those to drive any necessary Micropub feature additions. I absolutely do not want to limit my UX by what an existing protocol can or cannot do (essentially the software-design version of the tail-wagging-the-dog problem).

Local First

I wrote up a brief stub article on the IndieWeb wiki on local first. I see local first as an essential aspect of an authoring experience that is maximally responsive to user input, and avoids any and all unnecessary ties to other services.

I want a 100% local first offline capable creating / editing / posting workflow which then “auto-syncs” once the network shows up. The presence / absence of internet access should not affect user flow at all. Network presence or absence should only be a status indicator (e.g. whether / how much a post has been sent to the internet or not, any edits / updates etc.). It should never block any user actions. I’ll say it again for emphasis:

The absence or presence of network access must not block any user actions. Ever. Any changes should be effective locally immediately, with zero data loss.

Nearly no one actually builds apps like that today. Even typical mobile “native” apps fail without network access (a few counter-examples are the iOS built-in Notes & Photos apps, as well as the independent maps.me 100% offline mapping program). Some “offline first” apps get close. But even those, especially on mobile, fail in both predictable (like requiring logging into a website or network service, just to edit a local text document) and strange ways.
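To make that concrete, here is a minimal sketch of the write path in browser JavaScript; the storage key and queue are illustrative, and localStorage stands in for whatever local store an app would actually use:

var syncQueue = [];

function saveEdit(id, post) {
  // The change is effective locally, immediately - no network involved.
  localStorage.setItem('post:' + id, JSON.stringify(post));
  // Remember that this edit still needs to reach the server.
  syncQueue.push(id);
}

window.addEventListener('online', function () {
  // Drain syncQueue here; anything that fails just stays queued.
});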

Full Undo Redo

Every such user action should be undoable and redoable, again, without waiting for the network (it’s reasonable to apply some time limits for some actions, e.g. Gmail Undo can be configured to work for 30 seconds). Now imagine that for any user action, especially any user action that creates, edits, or deletes content or any aspects thereof (like name, tags, location etc.).
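The classic shape for this is a pair of stacks of invertible actions. A minimal sketch, assuming each action object supplies its own do/undo and leaving out Gmail-style time limits:

var undoStack = [];
var redoStack = [];

function perform(action) {
  action.do();
  undoStack.push(action);
  redoStack.length = 0; // a brand-new action invalidates any redo history
}

function undo() {
  var action = undoStack.pop();
  if (action) { action.undo(); redoStack.push(action); }
}

function redo() {
  var action = redoStack.pop();
  if (action) { action.do(); undoStack.push(action); }
}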

JS-Optional

In the case where a web application has not yet been installed, I also want it to be 100% capable without depending on loading any external scripts. This JS-Optional approach is more broadly known as progressive enhancement, which does require that you have at least some connection, enough for a browser to submit form requests and retrieve static HTML (and preferably though not required, static CSS and image files too).

Once you are connected and are running at least a Service Worker for the site, local first requires execution of some scripts, though even then dependencies on any external scripts should be minimized and preferably eliminated.
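A minimal sketch of that layering, assuming the site serves a worker script at /sw.js (the file name is illustrative); the page must already work without this code, via static HTML and real form submissions:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(function () {
    // Only from here on do local-first reads/writes and background sync apply.
  });
}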

Incremental Progress

I believe aspects of this experience can be built and deployed incrementally, iterating over time until the full system is built.

I’ve got a handful of paper sketches of local-first undoable/redoable user flows. I have a Service Worker deployed on my site that allows offline browsing of previously visited pages, and I have a form submission user interface that handles part of publishing. There are many more aspects to design & implement. As I think of them I’m capturing them in part of my “Working On” list on the IndieWeb Wiki, iteratively reprioritizing and making incremental progress at IndieWebCamps.

One building block at a time, collaboratively.

Thanks to J. Gregory McVerry, Grant Richmond, and Kevin Marks for reviewing drafts of this post.


Mike Hommey: Announcing git-cinnabar 0.5.4

Mozilla planet - Thu, 06/02/2020 - 01:16
Please partake in the git-cinnabar survey.

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on GitHub.
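Once installed, cloning a Mercurial repository uses git's normal commands with an hg:: prefix on the URL; for example (repository URL illustrative):

git clone hg::https://hg.mozilla.org/mozilla-unified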

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.3?
  • Windows helper is dynamically linked against libcurl again. Static linkage was causing more problems than it was fixing.
  • Fix clonebundles support to ignore stream=v2 bundles.
  • Ignore graft cinnabarclones when not grafting.
  • Fixed a corner case where git cinnabar fsck would not skip files it was meant to skip and failed as a result.

Karl Dubost: Week notes - 2020 w05 - worklog - Mozilla All Hands Berlin

Mozilla planet - Wed, 05/02/2020 - 08:55
Monday

Left home between 5:30 AM and 6:00 AM, all geared up for the coronavirus outbreak, just in case. Japan is not yet heavily affected: 3 cases at the time of this writing, all involving Chinese tourists traveling in Japan. Taking the train to Narita Airport, then the plane to Brussels, and finally Brussels to Berlin.

Mask on face

Funny story about the mask. In Japan and many parts of Asia, wearing a mask is fairly common. You do it mainly for 3 reasons:

  1. You do not want to spread your cold to others
  2. You do not want to catch a cold when it's high season (public transportation is crowded)
  3. You do not want to show your face without makeup, or for other personal reasons.

My mask went unnoticed by everyone until I arrived at… Brussels airport. First, I was the only one wearing one. Second, people were giving me an inquisitive look. I think some were worried.

I then walked from the Berlin airport to the hotel. It's something I do from time to time to better understand the fabric of a city and how it changes from its periphery to its core. Walking is the best pace for thinking. It was only a 1 hour 45 minutes walk, and it was night - the first time for me doing it at night. The exit of Berlin airport on foot is not complicated, but the mood is a bit spooky for about 2 km, with the motorway on one side and the forest on the other.

I had a quick dinner with Kate, whom I bumped into at the registration desk. Then I didn't try to catch up with anyone and went directly to bed, after 22 hours of travel.

Tuesday

Japan now has 6 cases of coronavirus, including one person infected through local transmission: a bus driver who drove a Chinese tour group around.

This is a very special All Hands, happening just two weeks after the layoffs, so this was on everyone's mind and in everyone's conversations. The catharsis may help, and we need to rebuild our forces and hopes, and tackle some of the issues of the Web. When I think about Mozilla as a place to work, I still think it is one of the best places to work on open web issues.

Open Web and Power Harassment

I had written something about the controversial tweet of the week, but I decided to give it a separate article because it was getting long. It will be published soon on this blog.

Search engine indexer

During one of our discussions, someone wondered if Mozilla should enter the search space by providing a search engine.

Mozilla doesn't need to create a new search engine, but there is definitely a place for a privacy-focused search engine. Currently there are solutions for people looking for information, such as DuckDuckGo. But DuckDuckGo mostly reuses the indexes of other search engines, plus a couple of other sources, to give results; it does not have its own full indexer. So as a site owner, if you are blocking most of the search engines because you do not want your website to appear in an environment full of ads, there is no way for you to provide your information to those users/readers.

I would love to see an on-demand indexer that would build an index for sites which don't want to be on, for example, Google Search or Bing, but would still be willing to be on DuckDuckGo. Another consequence of this would be leverage to improve the indexing of the information with better structured content, as the mission would not be to index the whole web, but to catalog the sites of those who request it. I even wonder if site owners would be ready to pay for this as a service. What would be the cost of running such a thing? Would DuckDuckGo be interested?

Should Mozilla help with this? Create a deal with DuckDuckGo?

"Time Machine" for the browser or personal WebArchive.

I want to be able to browse the Web while keeping the history of some of the sites I visited - basically, keep a record of some targeted websites. When I go to, let's say, the NY Times homepage, I want to be able to save this page. If I go back a couple of days later, the browser would remember I already visited this site, and I would be able to access the previous instances as layers of timestamped information from my previous visits. A kind of personal web archive.

Wednesday

The Mozilla AI plenary session was a good wake-up call. A couple of speakers gave examples on the policies and the instrumentalization of AI to maximize profits (and not knowledge). I wish these talks had been video-recorded for public consumption.

All Hands are a great opportunity to re-think what we are and what we do. I started to take notes on my laptop about the webcompat agencies.

Thursday

Breakfast discussions around service workers, Web standards and balance of powers.

A walk, in the evening, along some stretches of the wall: graffiti and holes in a structure which was once a divider in a city. The first time I came to Berlin was in 1988, during a school trip (I was 19 at the time). We traveled by train from France to Berlin, crossing East Germany in a fenced corridor until we reached Berlin. Then we visited different parts of West Berlin during the trip. We had a special day though: we crossed Checkpoint Charlie, and for one afternoon we stayed in East Berlin. The contrast with the luxurious west part of the city was stunning. The quietness of the streets we could see from the bus was stunning. A few Trabants here and there. I think I remember that we went to visit the Pergamon museum, which was in East Berlin at that time. It was not imaginable that the wall would fall one year later.

Friday

Friday was the concept-map day, where we drew what webcompat activity means for the Web and Mozilla. It was pretty cool to see everyone pitching in. I will redraw it properly later, but here is an incomplete draft.

I published the meeting minutes we had during the week. These often don't capture all the little brainstorming sessions here and there: the breakfast discussions that matter, or meeting someone else from Mozilla in person for the first time and having the opportunity to thank them directly for something they did in the past year that helped you a lot.

Also, for everyone landing here through Twitter or because they follow the Mozilla Planet feed: just publish your simple work thoughts, code, and the cool things you are working on.

It's not about being outstanding to your peers or appealing to your boss; it's about helping people who are less knowledgeable than you. We all learn from people who had the grace to share their own knowledge and help us grow as a person or as a better professional. You can do exactly the same for people who are not at the same level as you.

Tomorrow: the long flight back from Berlin to Tokyo (via Zurich).

Otsukare!


Kim Moir: AMA: I’m a (senior) staff engineer panel

Mozilla planet - Tue, 04/02/2020 - 23:12

Last week, my colleague Chenxia Liu and I arranged a panel at our Berlin all-hands meeting called AMA: I’m a (senior) staff engineer. Our goal for this panel was to provide a Q&A session where staff and senior staff engineers could share their stories: what a typical day in that role looks like, how their careers progressed to that level, and their advice for others interested in the role.

Tiergarten in Berlin at dusk. Or the next bend in the road of your career.

Everyone company’s career ladder for individual contributors is different.  At Mozilla, the change for senior engineer to staff engineer is the progression where the role changes to be substantially more self-directed. You aren’t just landing code to address issues identified by your manager or peers.  Your role is to determine what problems the team should focus on.  What value will solving these problems bring to the business?  How can you elevate the work of your team from a technical perspective? How can you level the skills of early career engineers on your team? As a result, the promotion to staff engineer requires promotion paperwork to be approved by higher level of management than the individual’s direct manager.

Ahead of the panel, we reached out to five staff or senior staff engineers and asked them to participate. We reached out to people from several geographies and domains of expertise within the company, and also different demographics. The day before the panel, Chenxia arranged a lunch with the panellists so we could share the logistics of the panel and the proposed initial questions, and allow the panellists to get to know each other a bit before the session. We also shared a doc in a company-wide channel where attendees could add questions before the session.

The day of the session, we didn’t know what to expect. How many people would show up? Was this a topic people were interested in? Our modestly sized room quickly filled up to the point where people were sitting on the floor and there wasn’t enough space for everyone to enter.

During the panel, we started out with some proposed questions and then moved on to audience questions.

  1.  Panellist introductions
    1. Briefly describe your role, where you work (org), where you live, what a typical day looks like
  2. How is your job fundamentally different from being a senior software engineer?
    1. Some examples of the class of problems
    2. How to deal with challenges
  3. What’s the role of mentorship, both mentoring and being mentored?
  4. Can you describe the difference between mentorship and sponsorship and how that contributed to your success?
  5. What was the most important skill that you developed in your previous role that you use in your current role?
  6. What is the process of promotion from your perspective?
  7. What is the actual process for promotion from a manager’s perspective?
  8. What advice would you give around leading a team? How does this change based on the seniority of the team?


I think this was a really successful panel and thought I’d share this approach with others in case it could be useful at your organization. As engineering managers, there is a power differential when we talk to the people on our teams about their career progression. It is the role of the manager to coach their reports to grow their careers, but that is often limited to the manager’s own previous experience. Learning from the experiences of other staff and senior staff engineers is really valuable, as there are many paths to grow your career. In fact, you don’t even need a panel. One thing I suggest for people who want to move into a new role is to not just talk to your manager, but to talk to a peer on your team or another team who is in that role, to learn more about their experience.


We received really positive feedback on the panel and hope to do it again. From Emily Toop, an engineering manager at Mozilla:

I’d like to call out Chenxia and Kim for doing an amazing job organizing the “I’m a (senior) staff engineer, AMA” event yesterday. It was excellent from multiple angles, but from a diversity perspective it was especially fantastic. Along with giving insightful and intelligent answers, I felt that backgrounds and demographics of the five panellists helped to make almost everyone in that room feel as if they were represented in some way, and that there was a path to success for someone who looked like them. Well done.


Daniel Stenberg: curl ootw: -U for proxy credentials

Mozilla planet - Tue, 04/02/2020 - 07:38

(older options of the week)

-U, --proxy-user

The short version of this option uses the uppercase letter ‘U’. It is important since the lower case letter ‘u’ is used for another option. The longer form of the option is spelled --proxy-user.

This command option existed already in the first ever curl release!

The man page‘s first paragraph describes this option as:

Specify the user name and password to use for proxy authentication.

Proxy

This option is for using a proxy. So let’s first briefly look at what a proxy is.

A proxy is a “middle man” in the communication between a client (curl) and a server (the one that holds the contents you want to download or will receive the content you want to upload).

The client communicates via this proxy to reach the server. When a proxy is used, the server communicates only with the proxy and the client also only communicates with the proxy:

curl <===> proxy <===> server

There exist several different types of proxies, and a proxy can require authentication before it allows a client to use it.

Proxy authentication

Sometimes the proxy you want or need to use requires authentication, meaning that you need to provide your credentials to the proxy in order to be allowed to use it. The -U option is used to set the name and password used when authenticating with the proxy (separated by a colon).

You need to know this name and password, curl can’t figure them out – unless you’re on Windows and your curl is built with SSPI support as then it can magically use the current user’s credentials if you provide blank credentials in the option: -U :.

Security

Providing passwords in command lines is a bit icky. If you write it in a script, someone else might see the script and figure them out.

If the proxy communication is done in clear text (for example over HTTP) some authentication methods (for example Basic) will transmit the credentials in clear text across the network to the proxy, possibly readable by others.

Command line options may also appear in process listing so other users on the system can see them there – although curl will attempt to blank them out from ps outputs if the system supports it (Linux does).

Needs other options too

A typical command line that uses -U also sets at least which proxy to use, with the -x option taking a URL that specifies the type of proxy, the proxy host name and the port number the proxy runs on.

If the proxy is an HTTP or HTTPS type, you might also need to specify which type of authentication you want to use, for example with --proxy-anyauth to let curl figure it out by itself.

If you know what HTTP auth method the proxy uses, you can also explicitly enable that directly on the command line with the correct option. Like for example --proxy-basic or --proxy-digest.

SOCKS proxies

curl also supports SOCKS proxies, which are a different type than HTTP or HTTPS proxies. When you use a SOCKS proxy, you need to tell curl that, either with the correct prefix in the -x argument or with one of the --socks* options.
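For example, going through a SOCKS5 proxy that requires credentials could look like this (host name and port illustrative):

curl -x socks5://proxy.example.com:1080 -U user:password https://example.com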

Example command line

curl -x http://proxy.example.com:8080 -U user:password https://example.com

See Also

The corresponding option for sending credentials to a server instead of proxy uses the lowercase version: -u / --user.


Daniel Stenberg: HTTP/3 for everyone

Mozilla planet - Sun, 02/02/2020 - 23:06

FOSDEM 2020 is over for this time and I had an awesome time in Brussels once again.

Stickers

I brought a huge collection of stickers this year and I kept going back to the wolfSSL stand to refill the stash and it kept being emptied almost as fast. Hundreds of curl stickers were given away! The photo on the right shows my “sticker bag” as it looked before I left Sweden.

Lesson for next year: bring a larger amount of stickers! If you missed out on curl stickers, get in touch and I’ll do my best to satisfy your needs.

The talk

“HTTP/3 for everyone” was my single talk this FOSDEM. Just two days before the talk, I landed updated commits in curl’s git master branch for doing HTTP/3 up-to-date with the latest draft (-25). Very timely and I got to update the slide mentioning this.

As I talked about HTTP/3 already last year in the Mozilla devroom, I also made sure to go through the slides I used then, to compare and make sure I wouldn’t do too much of the same talk. But lots of things have changed and most of the content is updated and different this time around. Last year, literally hundreds of people were lining up outside wanting to get into the room when the doors were closed. This year, I talked in the room Janson, which features 1415 seats - the biggest one on campus. It was packed full!

It is kind of an adrenaline rush to stand in front of such a wall of people. At one point in my talk I paused for a brief moment, and I felt I could almost hear the complete silence as a huge number of attentive faces captured what I had to say.

(The audience, photographed by Sidsel Jensen, who had to sit in the stairs… Additional photos by Mirza Krak and Wolfgang Gassler.)

I got a lot of positive feedback on the presentation. I also think my decision to not even try to take questions in the big room was correct, and I ended up talking and discussing details behind the scenes for a good while after my talk was done. Really fun!

The video

The video is also available from the FOSDEM site in webm and mp4 formats.

The slides

If you want the slides only, run over to slideshare and view them.


The Firefox Frontier: Tracking Diaries with Melanie Ehrenkranz

Mozilla planet - Fri, 31/01/2020 - 15:09

In Tracking Diaries, we invited people from all walks of life to share how they spent a day online while using Firefox’s privacy protections to keep count of the trackers …



The Rust Programming Language Blog: The 2020 Rust Event Lineup

Mozilla planet - Fri, 31/01/2020 - 01:00

A new decade has started, and we are excited about the Rust conferences coming up. Each conference is an opportunity to learn about Rust, share your knowledge, and to have a good time with your fellow Rustaceans. Read on to learn more about the events we know about so far.

FOSDEM
February 2nd, 2020

FOSDEM stands for the Free and Open Source Developers European Meeting. At this event software developers around the world will meet up, share ideas and collaborate. FOSDEM will be hosting a Rust devroom workshop that aims to present the features and possibilities offered by Rust, as well as some of the many exciting tools and projects in its ecosystem.

Located in Brussels, Belgium

RustFest Netherlands
Q2, 2020

The RustFest Netherlands team are working hard behind the scenes on getting everything ready. We hope to tell you more soon so keep an eye on the RustFest blog and follow us on Twitter!

Located in the Netherlands

Rust+GNOME Hackfest
April 29th to May 3rd, 2020

The goal of the Rust+GNOME hackfest is to improve the interactions between Rust and the GNOME libraries. During this hackfest, we will be improving the interoperability between Rust and GNOME, improving the support of GNOME libraries in Rust, and exploring solutions to create GObject APIs from Rust.

Located in Montréal, Quebec

Rust LATAM
May 22nd-23rd, 2020

Where Rust meets Latin America! Rust Latam is Latin America's leading event for and by the Rust community. Two days of interactive sessions, hands-on activities and engaging talks to bring the community together. Schedule to be announced at this link.

Located in Mexico City, Mexico

Oxidize
July, 2020

The Oxidize conference is about learning and improving your programming skills with embedded systems and IoT in Rust. The conference plans on having one day of guided workshops for developers looking to start or improve their embedded Rust skills, one day of talks by community members, and a two-day development session focused on hardware and embedded subjects in Rust. The exact dates are still to be announced.

Located in Berlin, Germany

RustConf
August 20th-21st, 2020

The official RustConf will be taking place in Portland, Oregon, USA. Last year's conference was amazing, and we are excited to see what happens next. See the website and Twitter for updates as the event date approaches!

Located in Oregon, USA

Rusty Days
Fall, 2020

Rusty Days is a new conference located in Wroclaw, Poland. Rustaceans of all skill levels are welcome. The conference is still being planned. Check out the information on their site and Twitter as we get closer to fall.

Located in Wroclaw, Poland

RustLab
October 16th-17th, 2020

RustLab 2020 is a 2-day conference with talks and workshops. The date is set, but the talks are still being planned. We expect to learn more details as we get closer to the date of the conference.

Located in Florence, Italy

For the most up-to-date information on events, visit timetill.rs. For meetups, and other events see the calendar.


Rumbling Edge - Thunderbird: 2015-05-26 Calendar builds

Thunderbird - Wed, 27/05/2015 - 10:26

Common (excluding Website bugs)-specific: (23)

  • Fixed: 735253 – JavaScript Error: “TypeError: calendar is null” {file: “chrome://calendar/content/calendar-task-editing.js” line: 102}
  • Fixed: 768207 – Make the cache checkbox default-on in the new calendar dialog
  • Fixed: 1049591 – Fix lots of strict warnings
  • Fixed: 1086573 – Lightning and Thunderbird disagree about timezone support in ics files
  • Fixed: 1099592 – Make JS callers of ios.newChannel call ios.newChannel2 in calendar/
  • Fixed: 1149423 – Add Windows timezone names to list of aliases
  • Fixed: 1151011 – Calendar events show up on wrong day when printing
  • Fixed: 1151440 – Choose a color not responsive when creating a New calendar in Lightning 4.0b1
  • Fixed: 1153327 – Run compare-locales with merging for Lightning
  • Fixed: 1156015 – Email scheduling fails for recipients with URN id
  • Fixed: 1158036 – Support sendMailTo for URN type attendees
  • Fixed: 1159447 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_extract.js
  • Fixed: 1159638 – Getter fails in calender-migration-dialog on first run after installation
  • Fixed: 1159682 – Provide a more appropriate “learn more” page on integrated Lightning firstrun
  • Fixed: 1159698 – Opt-out dialog has a button for “disable”, but actually the addon is removed
  • Fixed: 1160728 – Unbreak Lightning 4.0b4 beta builds
  • Fixed: 1162300 – TEST-UNEXPECTED-FAIL | xpcshell-libical.ini:calendar/test/unit/test_alarm.js | xpcshell return code: 0
  • Fixed: 1163306 – Re-enable libical tests and disable ical.js in nightly builds when binary compatibility is back
  • Fixed: 1165002 – Lightning broken, tries to load libical backend although “calendar.icaljs” defaults to “true”
  • Fixed: 1165315 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug759324.js | xpcshell return code: 1 | ###!!! ASSERTION: Deprecated, use NewChannelFromURI2 providing loadInfo arguments!
  • Fixed: 1165497 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_alarmservice.js | xpcshell return code: -11
  • Fixed: 1165726 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/testBasicFunctionality.js | testBasicFunctionality.js::testSmokeTest
  • Fixed: 1165728 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug494140.js | xpcshell return code: -11

Sunbird will no longer be actively developed by the Calendar team.

Windows builds: Official Windows

Linux builds: Official Linux (i686), Official Linux (x86_64)

Mac builds: Official Mac


Rumbling Edge - Thunderbird: 2015-05-26 Thunderbird comm-central builds

Thunderbird - Wed, 27/05/2015 - 10:25

Thunderbird-specific: (54)

  • Fixed: 401779 – Integrate Lightning Into Thunderbird by Default and Ship Thunderbird with Lightning Enabled
  • Fixed: 717292 – Spell check language setting for subject and body not synchronized, but temporarily appears so when changing language and depending on focus (confusing ux)
  • Fixed: 914225 – Support hotfix add-on in Thunderbird
  • Fixed: 1025547 – newmailaccount/jquery.tmpl.js, line 123: reference to undefined property def[1]
  • Fixed: 1088975 – Answering mail with sendername containing encoded special chars and comma creates two “To”-entries
  • Fixed: 1101237 – Remove distribution directory during install
  • Fixed: 1109178 – Thunderbird OAuth implementation does not work with Evernote
  • Fixed: 1110166 – Port |Bug 1102219 – Rename String.prototype.contains to String.prototype.includes| to comm-central
  • Fixed: 1113097 – Fix misuse of fixIterator
  • Fixed: 1130854 – Package Lightning with Thunderbird
  • Fixed: 1131997 – Adapt for Debugger Server code for changes in bug 1059308
  • Fixed: 1135291 – Update chat log entries added to Gloda since bug 955292 to use relative paths
  • Fixed: 1135588 – New conversations get indexed twice by gloda, leading to duplicate search results
  • Fixed: 1138154 – Plugins default to “always activate” in Thunderbird
  • Fixed: 1142879 – [meta] track Mozilla-central (Core) issues that we want to have fixed in TB38
  • Fixed: 1146698 – Chat Messages added to logs just before shutdown may not be indexed by gloda
  • Fixed: 1148330 – Font indicator doesn’t update when cursor is placed in text where core returns sans-serif (Windows). Serif and monospace don’t work (Linux).
  • Fixed: 1148512 – TEST-UNEXPECTED-FAIL | mailnews/imap/test/unit/test_dod.js | xpcshell return code: 0||1 | streamMessages – [streamMessages : 94] false == true | application crashed [@ mozalloc_abort(char const * const)]
  • Fixed: 1149059 – splitter in compose window can be resized down to completely obscure composition area
  • Fixed: 1151206 – Using a theme hides minimize, maximize and close button in composer window [Mac]
  • Fixed: 1151475 – Remove use of expression closures in mail/
  • Fixed: 1152299 – [autoconfig] Cosmetic changes for WEB.DE config
  • Fixed: 1152706 – Upgrade to Correspondents column (combined To/From column) too agressive
  • Fixed: 1152796 – chrome://messenger/content/folderDisplay.js, line 697: TypeError: this._savedColumnStates.correspondentCol is undefined
  • Fixed: 1152926 – New mail sound preview doesn’t work for default system sound on Mac OS X
  • Fixed: 1154737 – Permafail: TEST-UNEXPECTED-FAIL | toolkit/components/telemetry/tests/unit/test_TelemetryPing.js | xpcshell return code: 0
  • Fixed: 1154747 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/session-store/test-session-store.js | test-session-store.js::test_message_pane_height_persistence
  • Fixed: 1156669 – Trash folder duplication while using IMAP with localized TB
  • Fixed: 1157236 – In-content dialogs: Port bug 1043612, bug 1148923 and bug 1141031 to TB
  • Fixed: 1157649 – TEST-UNEXPECTED-FAIL | dom/push/test/xpcshell/test_clearAll_successful.js (and most other push tests)
  • Fixed: 1158824 – Port bug 138009 to fix packaging errors | Missing file(s): bin/defaults/autoconfig/platform.js
  • Fixed: 1159448 – Thunderbird ignores proxy settings on POP3S protocol
  • Fixed: 1159627 – resource:///modules/dbViewWrapper.js, line 560: SyntaxError: unreachable code after return statement
  • Fixed: 1159630 – components/glautocomp.js, line 155: SyntaxError: unreachable code after return statement
  • Fixed: 1159676 – mailnews/mime/jsmime/test/test_custom_headers.js | run_next_test 0 – TypeError: _gRunningTest is undefined at /builds/slave/test/build/tests/xpcshell/head.js:1435 (and other jsmime tests)
  • Fixed: 1159688 – After switching/changing the window layout, dragging the splitter between threadpane and messagepane can create gray/grey area/space (misplaced notificationbox)
  • Fixed: 1159815 – Take bug 1154791 “Inline spell checker loses red underlines after a backspace is used – take two” in Thunderbird 38
  • Fixed: 1159817 – Take “Bug 1100966 – Inline spell checker loses red underlines after a backspace is used” in Thunderbird 38
  • Fixed: 1159834 – Consider taking “Bug 756984 – Changing location in editor doesn’t preserve the font when returning to end of text/line” in Thunderbird 38
  • Fixed: 1159923 – Take bug 1140105 “Can’t query for a specific font face when the selection is collapsed” in TB 38
  • Fixed: 1160105 – Fix strict mode warnings in protovis-r2.6-modded.js
  • Fixed: 1160106 – “Searching…” spinner at the bottom of gloda search results never goes away
  • Fixed: 1160114 – Strict mode warnings on faceted search
  • Fixed: 1160805 – Missing Windows and Linux nightly builds, build step set props: previous_buildid fails
  • Fixed: 1161162 – “Join Chat” doesn’t focus the newly joined MUC
  • Fixed: 1162396 – Take bug 1140617 “Pasting an image loses the composition style” in TB38
  • Fixed: 1163086 – Take bug 967494 “changing spellcheck language in one composition window affects all open and new compositions” in TB38
  • Fixed: 1163299 – “TypeError: getBrowser(…) is null” in contentAreaClick with Lightning installed and started in calendar view
  • Fixed: 1163343 – Incorrectly formatted error message “sending failed”
  • Fixed: 1164415 – Error in comment for imapEnterServerPasswordPrompt
  • Fixed: 1164658 – TypeError: Cc[‘@mozilla.org/weave/service;1’] is undefined at resource://gre/modules/FxAccountsWebChannel.jsm:227
  • Fixed: 1164707 – missing toolkit_perfmonitoring.xpt in aurora builds
  • Fixed: 1165152 – Take bug 1154894 in TB 38 branch: Disable test_plugin_default_state.js so Thunderbird can ship with plugins disabled by default
  • Fixed: 1165320 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/notification/test-notification.js

MailNews Core-specific: (30)

  • Fixed: 610533 – crash [@ nsMsgDatabase::GetSearchResultsTable(char const*, int, nsIMdbTable**)] with virtual folder
  • Fixed: 745664 – Rename Address book aaa to aaa_test, delete another address book bbb, and renamed address book aaa_test will lose its name and appear deleted after restart (dataloss! involving localized names)
  • Fixed: 777770 – get rid of nsVoidArray from /mailnews
  • Fixed: 786141 – Use nsIFile.exists() instead of stat to check the existence of the file
  • Fixed: 1069790 – Email addresses with parenthesis are not pretty-printed anymore
  • Fixed: 1072611 – Ctrl+P not working from Composition’s Print Preview window
  • Fixed: 1099587 – Make JS callers of ios.newChannel call ios.newChannel2 in mail/ and mailnews/
  • Fixed: 1130248 – |To: “foo@example.com” <foo@example.com>| becomes |”foo@example.comfoo”@example.com| when I compose mail to it
  • Fixed: 1138220 – some headers are not not properly capitalized
  • Fixed: 1141446 – Behaviour of malformed rfc2047 encoded From message header inconsistent
  • Fixed: 1143569 – User-agent error when posting to NNTP due to RFC5536 violation of Tb (user-agent header is folded just after user-agent:, “user-agent:[CRLF][SP]Mozilla…”)
  • Fixed: 1144693 – Disable libnotify usage on Linux by default for new-mail notifications (doesn’t always work after bug 858919)
  • Fixed: 1149320 – fix compile warnings in mailnews/extensions/
  • Fixed: 1150891 – Port package-manifest.in changes from Bug 1115495 – Part 2: PAC generator for browsing and system wide proxy
  • Fixed: 1151782 – Inputting 29th Feb as a birthday in the addressbook contact replaces it with 1st Mar.
  • Fixed: 1152364 – crash in Address Book via nsAbBSDirectory::GetChildNodes nsCOMArrayEnumerator::operator new(unsigned int, nsCOMArray_base const&)
  • Fixed: 1152989 – Account Manager Extensions broken in Thunderbird 37/38
  • Fixed: 1154521 – jsmime fails on long references header and e-mail gets sent and stored in Sent without headers
  • Fixed: 1155491 – Support autoconfig and manual config of gmail IMAP OAuth2 authentication
  • Fixed: 1155952 – Nesting level does not match indentation
  • Fixed: 1156691 – GUI “Edit filters”: Conditions/actions (for specfic accounts) not visible
  • Fixed: 1156777 – nsParseMailbox.cpp:505:55: error: ‘do_QueryObject’ was not declared in this scope
  • Fixed: 1158501 – Port bug 1039866 (metro code removal) and bug 1085557 (addition of socorro symbol upload API)
  • Fixed: 1158751 – Port NO_JS_MANIFEST changes | mozbuild.frontend.reader.SandboxValidationError: calendar/base/backend/icaljs/moz.build
  • Fixed: 1159255 – Build error: MSVC_ENABLE_PGO = True is not permitted to be used in mailnews/intl/moz.build
  • Fixed: 1159626 – chrome://messenger/content/accountUtils.js, line 455: SyntaxError: unreachable code after return statement
  • Fixed: 1160647 – Port |Bug 1159972 – Remove the fallible version of PL_DHashTableInit()| to comm-central
  • Fixed: 1163347 – Don’t require scope in ispdb config for OAuth2
  • Fixed: 1165737 – Fix usage of NS_LITERAL_CSTRING in mailnews, port Bug 1155963 to comm-central
  • Fixed: 1166842 – Re-enable binary extensions for comm-central

Windows builds: Official Windows, Official Windows installer

Linux builds: Official Linux (i686), Official Linux (x86_64)

Mac builds: Official Mac


Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - Thu, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that both I would be reading aloud as the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox Gecko engine on an Android Linux kernel, where all the apps, including the system UI, are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.


Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.
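As a rough sketch of that startup decision (the storage key, the "default view" test and the target element here are illustrative, not the app's actual code):

var cachedHtml = localStorage.getItem('html-cache');
var isDefaultStartup = !window.location.hash; // not compose, not a notification deep-link
if (cachedHtml && isDefaultStartup) {
  // Synchronous: the cached inbox HTML shows up without waiting on any async read.
  document.body.innerHTML = cachedHtml;
}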

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
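As a rough sketch of what that looks like in practice (assuming localforage is already loaded; the key and value are illustrative):

// Same get/set shape as localStorage, but asynchronous and backed by
// IndexedDB where the browser supports it.
localforage.setItem('lastSyncedFolder', 'inbox').then(function () {
  return localforage.getItem('lastSyncedFolder');
}).then(function (value) {
  console.log('restored:', value); // logs 'inbox'
});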

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately, the standards still assume that the content is right there in the HTML.  That works well for document-based web pages or server-powered web apps where the contents of the page are baked in, but less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

"DOMContentLoaded" fires when the document has been fully parsed and all scripts not tagged as "async" have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheets to load.

"load" fires when the document has been fully loaded; stylesheets, images, everything.

But neither of these has anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the "load" event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded "flash of unstyled content" problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition (AMD) loaders, specifically the alameda loader and the r.js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard CommonJS loader semantics used by node.js and io.js are the first line you see here: load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But wait, there’s more!  Loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.
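For illustration, a stripped-down plugin following that contract might look like the sketch below.  This is not our actual tmpl plugin; the localize() helper is a hypothetical stand-in for the localization library:

// A loader plugin is just a module exporting a load() function.
define(['text'], function (text) {
  return {
    load: function (name, parentRequire, onload, config) {
      // Delegate to the 'text' plugin to fetch the raw HTML...
      text.load(name, parentRequire, function (htmlString) {
        var frag = document.createRange().createContextualFragment(htmlString);
        onload(localize(frag)); // ...then hand back a localized fragment.
      }, config);
    }
  };
});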

So when our optimizer runs, it bundles up the core modules we use, plus the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r.js optimizer then locates these dependencies, and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single JavaScript file to load, with no extra HTML file dependencies or other loads.
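A hedged sketch of what an r.js build profile for that layer could look like; the file layout, module ids, and option choices here are illustrative, not the app's real build config:

({
  baseUrl: 'js',
  name: 'cards/message_list',    // the card and its dependency graph
  include: ['alameda'],          // bundle the loader itself
  stubModules: ['text', 'tmpl'], // inline loaded HTML, drop the plugin code
  out: 'built/message_list.js',
  optimize: 'none'               // assume minification happens elsewhere
})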

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
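In sketch form (the registry shape and card names are illustrative):

// After a card is shown, warm up the cards it can navigate to.
var preloadTargets = {
  message_list: ['cards/compose', 'cards/message_reader']
};

function onCardDisplayed(cardName) {
  (preloadTargets[cardName] || []).forEach(function (moduleId) {
    require([moduleId]); // async load; the result sits in the module cache
  });
}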

But we don’t go overboard with preloading in the frontend because we still haven’t loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
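A minimal sketch of that split, using a plain Worker and a hypothetical js/backend.js file:

// Page side: start the back-end off the main thread.
var backend = new Worker('js/backend.js');
backend.onmessage = function (evt) {
  console.log('back-end says:', evt.data);
};
backend.postMessage({ cmd: 'syncFolder', folder: 'inbox' });

// Worker side (js/backend.js) would mirror it:
// self.onmessage = function (evt) {
//   if (evt.data.cmd === 'syncFolder') {
//     // ...do the slow, emaily work here...
//     self.postMessage({ cmd: 'syncComplete', folder: evt.data.folder });
//   }
// };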

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.
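As a sketch of that windowing math (the row height is illustrative, the screen-count constants mirror the prose above, and scroll direction is ignored for simplicity):

var ROW_HEIGHT = 60; // px per message summary node, illustrative
var SCREENS_AHEAD = 3, SCREENS_BEHIND = 3;

// One big, explicitly sized div stands in for every message in the folder.
function sizeScrollRegion(container, messageCount) {
  container.style.height = (messageCount * ROW_HEIGHT) + 'px';
}

// Which message indices need real DOM nodes right now?
function visibleRange(scrollTop, viewHeight, messageCount) {
  var first = Math.floor((scrollTop - SCREENS_BEHIND * viewHeight) / ROW_HEIGHT);
  var last = Math.ceil((scrollTop + (1 + SCREENS_AHEAD) * viewHeight) / ROW_HEIGHT);
  return {
    first: Math.max(0, first),
    last: Math.min(messageCount - 1, last)
  };
}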

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
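The detach-update-reattach dance, sketched (reusing ROW_HEIGHT from the sketch above; updateMessageSummary() is a hypothetical helper that refreshes the node's subject, snippet, and so on):

function repositionMessageNode(scrollRegion, node, index) {
  scrollRegion.removeChild(node);      // detach, so APZ sees a "fresh" node
  node.style.top = (index * ROW_HEIGHT) + 'px';
  updateMessageSummary(node, index);   // hypothetical content refresh
  scrollRegion.appendChild(node);      // static position: one shared layer
}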

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1. Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2. Use developer tools.  APZ is tricky to reason about and even the developers who write the async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome that let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)


Joshua Cranmer: Breaking news

Thunderbird - wo, 01/04/2015 - 09:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."


Joshua Cranmer: Why email is hard, part 8: why email security failed

Thunderbird - di, 13/01/2015 - 05:38
This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others fail?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriad cryptography-enthusiast mailing lists, which often like to propose new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I see it as unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped on many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to take other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: lost private keys and key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser-known one is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 messages in a single folder—would still prefer not to have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.


Joshua Cranmer: A unified history for comm-central

Thunderbird - za, 10/01/2015 - 18:55
Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step, and probably the hardest, is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of Mozilla's CVS tree available, but I noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy on git.mozilla.org after I wrote this post. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, as I've done my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash
set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204
# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a
# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are simple directory-wide changes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with @ that they are used to appearing in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
  hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
    | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to in this case (we're cloning the only head). At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')
hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.


Kent James: Thunderbird Summit in Toronto to Plan a Viable Future

Thunderbird - wo, 15/10/2014 - 06:17

On Wednesday, October 15 through Saturday, October 18, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 17 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of the Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time), using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

