
Mozilla Nederland
The Dutch Mozilla community

News and Updates about Mozilla
Updated: 2 weeks 3 days ago

Mozilla Announces Call for Entries for the 2nd Annual Rise25 Awards in Dublin, Ireland

Fri, 29/03/2024 - 17:02

Haven’t filled out the nomination form yet? You’re in luck. We are extending the deadline for nominations for 2024’s Rise25 cohort until 5PM PT Friday, April 12th. You can nominate someone you know who is making an impact with AI (or yourself) below:

www.mozilla.org/rise25/nominate/

On the heels of Mozilla’s Rise25 Awards in Berlin last year, we’re excited to announce that we’ll be returning once again with a special celebration that will take place in Dublin, Ireland later this year.

The 2nd Annual Rise25 Awards will feature familiar categories, but with an emphasis on trustworthy AI. We will be honoring 25 people who are leading that next wave of AI — who are using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity. 

2023 was indeed the year of AI, and as more people adopt it, we know it is a technology that will continue to impact our culture and society, act as a catalyst for innovation and creation, and be a medium to engage people from all walks of life in conversations thanks to its growing ubiquity in our everyday lives.

We know we cannot do this alone: At Mozilla, we believe the most groundbreaking innovations emerge when people from diverse backgrounds unite to collaborate and openly trade ideas. 

So if you know someone who you think should be celebrated, we want to hear from you.

Five winners from each of the five categories below will be selected to make up our 2024 Rise25 cohort: 

Advocates: Guiding AI towards a responsible future

These are the policymakers, activists, and thinkers ensuring AI is developed ethically, inclusively, and transparently. This category also includes those who are adept at translating complex AI concepts for the broader public — including journalists, content creators, and cultural commentators. They champion digital rights and accessible AI, striving to make AI a force for societal good.

Builders: Developing AI through ethical innovation

They are the architects of trustworthy AI, including engineers and data scientists dedicated to developing AI’s open-source language infrastructure. They focus on technical proficiency and responsible and ethical construction. Their work ensures AI is secure, accessible, and reliable, aiming to create tools that empower and advance society. 

Artists: Reimagining AI’s creative potential

They transcend traditional AI applications, like synthesizing visuals or using large language models. Their projects, whether interactive websites, films, or digital media, challenge our perceptions and demonstrate how AI can amplify and empower human creativity. Their work provokes thought and offers fresh perspectives on the intersection of AI and art.

Entrepreneurs: Fueling AI’s evolution with visionary ventures

These daring individuals are transforming imaginative ideas into reality. They’re crafting businesses and solutions with AI to meet societal needs, improve everyday life and forge new technological paths. They embody innovation, steering startups and projects with a commitment to ethical standards, inclusiveness and enhancing human welfare through technology.

Change Agents: Cultivating inclusive AI

They are challengers who lead the way in diversifying AI, bringing varied community voices into tech. They focus on inclusivity in AI development, ensuring technology serves and represents everyone, especially those historically excluded from the tech narrative. They are community leaders, corporate leaders, activists and outside-the-box thinkers finding ways to amplify the impacts of AI for marginalized communities. Their work fosters an AI environment of equality and empowerment.

This year’s awards build upon the success of last year’s programming and community event in Berlin, which brought to life what a future trustworthy Internet could look like. Last year’s event crowned trailblazers and visionaries across five distinct categories: Builders, Activists, Artists, Creators, and Advocates. (Psst! Stay tuned as we unveil their inspiring stories in a video series airing across Mozilla channels throughout the year, leading up to the 2nd Annual Rise25 Awards.)

So join us as we honor the innovators, advocates, entrepreneurs, and communities who are working to build a happier, healthier web. Click here to submit your nomination today.

The post Mozilla Announces Call for Entries for the 2nd Annual Rise25 Awards in Dublin, Ireland appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Readouts from the Columbia Convening on Openness and AI

Wed, 27/03/2024 - 22:50

On February 29, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI. These individuals — spanning prominent open source AI startups and companies, non-profit AI labs, and civil society organizations — focused on exploring what “open” should mean in the AI era. We previously wrote about the convening, why it was important, and who we brought together.

Today, we are publishing two readouts from the convening. 

The first is a technical memorandum that outlines three different approaches to openness in AI, and highlights different components and spectrums of openness. It includes an extensive appendix that outlines key components in the AI stack, and describes how more openness in each component can help advance system and societal goals. Finally, it outlines open questions that would be worthy of future exploration, digging deeper into the specifics of openness and AI. This memorandum will be helpful for technical leaders and practitioners who are shaping the future of AI, so that they can better incorporate principles of openness to make their own AI systems more effective for their goals and more beneficial for society. 

The second is a policy memorandum that outlines how and why policymakers should support openness in AI. It outlines the societal benefits from openness in AI, provides a higher-level overview of how different parts of the AI stack contribute to different opportunities and risks, and lays out a series of recommendations about how policymakers can advance openness in AI. This memorandum will be helpful for policymakers, especially those who are grappling with the details of policy interventions related to openness in AI.

In the coming weeks, we will also be publishing a longer document that goes into greater detail about the dimensions of openness in AI. This will help advance our broader work with partners and allies to tackle complex and important topics around openness, competition, and accountability in AI. We will continue to keep mozilla.org/research/cc updated with materials stemming from the Columbia Convening on Openness and AI.

The post Readouts from the Columbia Convening on Openness and AI appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

6 takeaways from The Washington Post Futurist Tech Summit in D.C.

Mon, 25/03/2024 - 15:00

Journalists from The Washington Post, U.S. policymakers and influential business leaders gathered for a day of engaging discussions about technology on March 21 in the nation’s capital.

Mozilla sponsored “The Futurist Summit: The New Age of Tech,” an event focused on addressing the wide range of promise and risks associated with emerging technologies — the largest of them being Artificial Intelligence (AI). It featured interviews moderated by journalists from The Post, as well as interactive sessions about tech for audience members in attendance at the paper’s office in Washington D.C.

Missed the event? Here are six takeaways from it that you should know about:

1. How OpenAI is preparing for the election.

The 2024 U.S. presidential election is one of the biggest topics of discussion involving the emergence and dangers of AI this year. It’s no secret that AI has incredible power to create, influence and manipulate voters with misinformation and fake media content (video, photos, audio) that can unfairly sway voters.

OpenAI, one of the biggest AI organizations, stressed the importance of providing transparency for its users to ensure its tools aren’t being used in those negative ways to mislead the public.

“It’s four billion people voting, and that is really unprecedented, and we’re very, very cognizant of that,” OpenAI VP of Global Affairs Anna Makanju said. “And obviously, it’s one of the things that we work — to ensure that our tools are not used to deceive people and to mislead people.”

Makanju reiterated that concerns about AI and the election operate at a very large scale, and that OpenAI is focused on engaging with other companies to hammer out transparency in the 2024 race.

“This is like a whole of society issue,” Makanju said. “So that’s why we have engaged with other companies in this space as well. As you may have seen in the Munich Security Conference, we announced the Tech Accord, where we’re going to collaborate with social media companies and other companies that generate AI content, because there’s the issue of generation of AI content and the issue of distribution, and they’re quite different. So, for us, we really focus on things like transparency. … We of course have lots of teams investigating abuse of our systems or circumvention of the use case guidelines that are intended to prevent this kind of work. So, there are many teams at OpenAI working to ensure that these tools aren’t used for election interference.”

And OpenAI will be in the spotlight even more as the election inches closer. According to a report from Business Insider, OpenAI is preparing to launch GPT-5 this summer, which will reportedly eclipse the abilities of the ChatGPT chatbot.

The futurist summit focused on the wide range of promise and risks associated with emerging technologies

2. Policymakers address the potential TikTok ban.

The House overwhelmingly voted 352-65 on March 13 to pass a measure that gives ByteDance, the parent company of TikTok, a decision: Sell the social media platform or face a nationwide ban on all U.S. devices.

One of the top lawmakers on the Senate Intelligence Committee, Sen. Mark Warner (D-Va.), addressed the national security concerns around TikTok on a panel moderated by political reporter Leigh Ann Caldwell alongside Sen. Todd Young (R-Ind.).

“There is something uniquely challenging about TikTok because ultimately if this information is turned over to the Chinese espionage services that could be then potentially used for nefarious purposes, that’s not a good thing for America’s long-term national security interests,” Warner said. “End of the day, all we want is it could be an American company, it could be a British company, it could be a Brazilian company. It just needs not to be from one of the nation states, China being one of the four, that are actually named in American law as adversarial nations.”

Young chimed in shortly after Warner: “Though I have not authored a bill on this particular topic, I’ve been deeply involved, for several years running now, in this effort to harden ourselves against a country, China, that has weaponized our economic interdependence in various ways.”

The measure now heads to the Senate, which is not scheduled to vote on it soon.

3. Deep Media AI is fighting against fake media content.

AI to fight against AI? Yes, it’s possible!

AI being able to alter how we perceive reality through deepfakes — in other words, synthetic media — is another danger of the emerging technology. Deep Media AI founder Rijul Gupta is countering that AI problem with AI of his own.

In a video demonstration alongside tech columnist Geoffrey Fowler, Gupta showcased how Deep Media AI scans and detects deepfakes in photos, videos and audio files to combat the issue.

For example, Deep Media AI can determine if a photo is fake by looking at wrinkles, reflections and things humans typically don’t pay attention to. In the audio space, which Gupta described as “uniquely dangerous,” the technology analyzes the waves and patterns. It can detect video deepfakes by tracking motion of the face — how it moves, the shape and movement of lips — and changes in lighting.

A good sign: Audience members were asked to identify a deepfake between two video clips (one real, one AI generated by OpenAI) at the start of Gupta’s presentation. The majority of people in attendance guessed correctly. Even better: Deep Media AI detected it was fake and scored a 100/100 in its detection system. In other words, it got it right perfectly.

“Generative AI is going to be awesome; it’s going to make us all rich; it’s going to be great,” Gupta said. “But in order for that to happen, we need to make it safe. We’re part of that, but we need militaries and governments. We need buy-in from the generative AI companies. We need buy-in from the tech ecosystem. We need detectors. And we need journalists to tell us what’s real, and what’s fake from a trusted source, right? I think it’s possible. We’re here to help, but we’re not the only ones here. We’re hoping to provide solutions that people use.”

VP of Global Policy at Mozilla, Linda Griffin, interviewed by The Washington Post’s Kathleen Koch.

4. Mozilla’s push for trustworthy AI

As we continue to shift towards a world with AI that’s helpful, it’s important we involve human beings in that process as much as possible. It’s concerning if companies are making AI while only thinking about profit and not the public. That hurts public trust and faith in big tech.

This work is urgent, and Mozilla has been delivering the trustworthy AI report — first published in 2020 and given a status update in February — to aid in aligning with our vision of creating a healthy internet where openness, competition and accountability are the norms.

“We want to know what you think,” Mozilla VP of Global Policy Linda Griffin said. “We’re trying to map and guide where we think these conversations are. What is the point of AI unless more people can benefit from it more broadly? What is the point of this technology if it’s just in the hands of the handful of companies thinking about their bottom line?

“They do important and really interesting things with the technology; that’s great. But we need more; we need the public counterpoint. So, for us, trustworthy AI, it’s about accountability, transparency, and having humans in the loop thinking about people wanting to use these products and feeling safe and understanding that they have recourse if something goes wrong.”

5. AI’s ability to change rules in the NFL (yes!).

While the NFL is early in the process of incorporating AI into the game of football, the league has found ways to get the ball rolling (pun intended) on using its tools to make the game smarter and better.

One area is with health and safety, a major priority for the NFL. The league uses AI and machine learning tools on the field to grab predictive analysis to identify plays and body positions that most likely lead to players getting injured. Then, they can adjust rules and strategies accordingly, if they want.

For example, kickoffs. Concussions sustained on kickoffs dropped by 60 percent in the NFL last season, from 20 to eight. That is because kickoffs were returned less frequently after the league adjusted the rules governing kickoff returns during the previous offseason, so that a returner could signal for a fair catch no matter where the ball was kicked, and the ball would be placed on the 25-yard line. This change came after the NFL used AI tools to gather injury data on those plays.

“The insight to change that rule had come from a lot of the data we had collected with chips on the shoulder pads of our players of capturing data, using machine learning, and trying to figure out what is the safest way to play the game,” Brian Rolapp, Chief Media & Business Officer for the NFL, told media reporter Ben Strauss, “which led to an impact of rule change.”

While kickoff injuries have gone down, making this tweak to one of the most exciting plays in football is tough. So this year, the NFL is working on a compromise and exploring new ideas that can strike a balance to satisfy both safety and excitement. There will be a vote at league meetings this week in front of coaches, general managers and ownership about it.

6. Don’t forget about tech for accessibility.

With the new chapter of AI, the possibilities of investing in and creating tools for those with disabilities are endless. For those who are blind, have low vision or have trouble hearing, AI offers an entirely new slate of capabilities.

Apple has been one of the companies at the forefront of creating features for people with disabilities who use its products. For example, on iPhones, Apple has implemented live captions, sound recognition and voice control to assist users.

Sarah Herrlinger, Senior Director of Global Accessibility Policy & Initiatives at Apple, gave insight into how the tech giant decides what features to add and which ones to update. In doing so, she delivered one of the best talking points of the day.

“I think the key to that is really engagement with the communities,” Herrlinger said. “We believe very strongly in the disability mantra of, nothing about us without us, and so it starts with first off employing members of these communities within our ranks. We never build for a community. We build with them.”

Herrlinger was joined on stage by retired Judge David S. Tatel; Mike Buckley, the Chair & CEO of Be My Eyes; and Amanda Morris, disability reporter for The Post. When asked about the future of accessibility for those who are blind, Tatel shared a touching sentiment that many in the disability space resonate with.

“It’s anything that improves and enhances my independence, and enhances it seamlessly, is what I look for,” Tatel said. “That’s it. Independence, independence, independence.”


The post 6 takeaways from The Washington Post Futurist Tech Summit in D.C. appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

How AI is unfairly targeting and discriminating against Black people

Tue, 19/03/2024 - 17:00

The rise of Artificial Intelligence (AI) is here, and it’s bringing a new era of technology that is already reshaping the world. It was the story of 2023, and it isn’t going anywhere anytime soon.

While the rapid creative growth of AI is a fascinating development for our society, it’s important to remember the harms that cannot be ignored, especially those pertaining to racial bias and discrimination against African-Americans.

In recent years, there has been research revealing that AI technologies have struggled to identify images and speech patterns of nonwhite people. Black AI researchers at tech giants creating AI technology have raised concerns about its harms against the Black community. 

The concerns surrounding AI’s racial biases and harms against Black people are serious and should be a big focus as 2024 gets underway. We invited University of Michigan professor, Harvard Faculty Associate and former Mozilla Foundation Senior Fellow in Trustworthy AI, Apryl Williams, to dive into this topic further. Williams studies experiences of gender and race at the intersection of digital spaces and algorithmic technocultures, and her most recent book, “Not My Type: Automating Sexual Racism in Online Dating,” exposes how race-based discrimination is a fundamental part of the most popular and influential dating algorithms.

To start, as a professor, I’m curious to know: How aware do you think students are of the dangers of the technology they’re using? Beyond the simple things like screen time notifications they might get, and more about AI problems, misinformation, etc.?

They don’t know. I show two key documentaries in my classes every semester. I teach a class called “Critical Perspectives on the Internet.” And then I have another class that’s called “Critical AI” and in both of those classes, the students are always shook. They always tell me, “You ruin everything for me, I can never look at the world the same,” which is great. That’s my goal. I hope that they don’t look at the world the same when they leave my classes, of course. But I show them “Coded Bias” by Shalini Kantayya and when they watched that just this past semester they were like, “I can’t believe this is legal, like, how are they using facial recognition everywhere? How are they able to do these things on our phones? How do they do this? How do they do that? I can’t believe this is legal. And why don’t people talk about it?” And I’m like, “Well, people do talk about it. You guys just aren’t necessarily keyed into the places where people are talking about it.” And I think that’s one of the failings of sort of like these movements that we’re trying to build is that we’re not necessarily tapped into the kinds of places young people go to get information.

We often assume that AI machines are neutral in terms of race, but research has shown that some of them are not and can have biases against Black people. When we think about where this problem stems from, is it fair to say it begins with the tech industry’s lack of representation of people who understand and can work to address the potential harms of these technologies?

I would say, yes, that is a huge part of it. But the actual starting point is the norms of the tech industry. So we know that the tech industry was created by and large by the military-industrial complex — like the internet was a military device. And so because of that, a lot of the inequity or, like, inequality and social injustice of the time that the internet was created were baked into the structure of the internet. And then, of course, industries that spring up from the internet, right? We know that the military was using the internet for surveillance. And look now, we have in 2024, widespread surveillance of Black communities, of marginalized communities, of undocumented communities, right? So really, it’s the infrastructure of the internet that was built to support white supremacy, I would say, is the starting point. And because the infrastructure of the internet and of the tech industry was born from white supremacy, then, yes, we have these hiring practices, and not just the hiring practices, but hiring practices where, largely, they are just hiring the same kinds of people — cisgender, hetero white men. Increasingly white women, but still we’re not seeing the kinds of diversity that we should be seeing if we’re going to reach demographic parity. So we have the hiring. But then also, we have just the norms of the tech industry itself that are really built to service, I would say, the status quo; they’re not built to disrupt. They’re built to continue the norm. And if people don’t stop and think about that, then, yeah, we’re going to see the replication of all this bias because U.S. society was built on bias, right? Like it is a stratified society inherently. And because of that, we’re always going to see that stratification in the tech industry as well.

Issues of bias in AI tend to impact the people who are rarely in positions to develop the technology. How do you think we can enable AI communities to engage in the development and governance of AI to get it where it’s working toward creating systems that embrace the full spectrum of inclusion?

Yes, we should enable it. But also the tech industry, like people in these companies, need to sort of take the onus on themselves to reach out to communities in which they are going to deploy their technology, right? So if your target audience, let’s say on TikTok, is Black content creators, you need to be reaching out to Black content creators and Black communities before you launch an algorithm that targets those people. You should be having them at your headquarters. You should be doing listening sessions. You should be elevating Black voices. You should be listening to people, right? Listening to the concerns, having support teams in place, before you launch the technology, right? So instead of retroactively trying to Band-aid it when you have an oops or like a bad PR moment, you should be looking to marginalized communities as experts on what they need and how they see technology being implemented in their lives.

A lot of the issues with these technologies in relation to Black people is that they are not designed for Black people — and even the people they are designed for run into problems. It feels like this is a difficult spot for everyone involved?

Yeah, that’s an interesting question. I feel like it’s really hard for good people on the inside of tech companies to actually say, “Hey, this thing that we’re building might be generating money, but it’s not generating long-term longevity,” right? Or health for our users. And I get that — not every tech company is health oriented. They may act like they are, but they’re not, like to a lot of them, money is their bottom line. I really think it’s up to sort of like movement builders and tech industry shakers to say or to be able to create buy-in for programs, algorithms, ideas, that foster equity. But we have to be able to create buy-in for that. So that might look like, “Hey, maybe we might lose some users on this front end when we implement this new idea, but we’re going to gain a whole lot more users.” Folks of color, marginalized users, queer users, trans users, if they feel like they can trust us, and that’s worth the investment, right? So it’s really just valuing the whole person, rather than just sort of valuing the face value of the money only or what they think it is, but looking to see the potential of what would happen if people felt like their technology was actually trustworthy.

AI is rapidly growing. What are things we can add to it as it evolves, and what are things we should work to eliminate? 

I would say we need to expand our definition of safety. I think that safety should fundamentally include your mental health and well-being, and if the company that you’re using to find intimacy or to connect with friends is not actually keeping you safe as a person of color, as a trans person, as a queer person, then you can’t really have like full mental wellness if you are constantly on high alert, you’re constantly in this anxious position, you’re having to worry that your technology is exploiting you, right? So, if we’re going to have all of this buzz that I’m seeing about trust and safety, that can’t just stop at the current discourse that we’re having on trust and safety. It can’t just be about protecting privacy, protecting data, protecting white people’s privacy. That has to include reporting mechanisms for users of color when they encounter abuse. Whether that is racism or homophobia, right? Like it needs to be more inclusive. I would say that the way that we think about trust and safety and automated or algorithmic systems needs to be more inclusive. We really need to widen the definition of safety. And probably the definition of trust also.

In terms of subtracting, there are just a lot of things that we shouldn’t be doing, that we’re currently doing. Honestly, the thing that we need to subtract the most is this idea that we move fast and break things in tech culture. It’s sort of like, we are just moving for the sake of innovation. We might really need to dial back on this idea of moving for the sake of innovation, and actually think about moving towards a safer humanity for everybody, and designing with that goal in mind. We can innovate in a safe way. We might have to sacrifice speed, and I think we need to say, it’s okay to sacrifice speed in some cases.

When I started to think about the dangers of AI, I immediately remembered the situation with Robert Williams a few years ago, when he was wrongly accused by police that used AI facial recognition. There is more to it than just the strange memes and voice videos people create. What are the serious real world harms that you think of when it comes to Black people and AI that people are overlooking?

I don’t know that it’s overlooked, but I don’t think that Black people are aware of the amount of surveillance of everyday technologies. When you go to the airport, even if you’re not using Clear or other facial recognition technology at the airport for expedited security, they’re still using facial recognition technology. When you’re crossing borders, when you are even flying domestically, they’re still using that tech to look at your face. You look into the camera, they take your picture. They compare it to your ID. Like, that is facial recognition technology. I understand that that is for our national safety, but that also means that they’re collecting a lot of data on us. We don’t know what happens with that data. We don’t know if they keep it for 24 hours or if they keep it for 24 years. Are they keeping logs of what your face looks like every time you go? In 50 years, are we going to see a system that’s like “We’ve got these TSA files, and we’re able to track your aging from the time that you were 18 to the time that you’re 50, just based on your TSA data,” right? Like, we really don’t know what’s happening with the data. And that’s just one example. 

We have constant surveillance, especially in our cars. The smarter our cars get, the more they’re surveilling us. We are seeing increasing use of those systems in cars being used in police cases to see if you were paying attention. Were you talking on your phone? Were you texting and driving? Things like that. There is automation in cars that’s designed to identify people and to stop, right, to avoid hitting you. And as we know, a lot of the systems misidentify Black people as trash cans, and will instead hit them. There are so many instances where AI is part of our life, and I don’t think people realize the depth to which it really does drive our lives. And I think that’s the thing that scares me the most for people of color is that we don’t understand just how much AI is part of our everyday life. And I wish people would stop and sort of think about, yes, I get easy access to this thing, but what am I trading off to get that easy access? What does that mean for me? And what does that mean for my community? We have places like Project Blue Light, Project Green Light, where those systems are heavily surveilled in order to “protect communities.” But are those created to protect white communities at the expense of Black and brown communities? Right? That’s what we have to think about when we say that these technologies, especially surveillance technologies, are being used to protect people, who are they protecting? And who are they protecting people from? And is that idea that they’re protecting people from a certain group of people realistic? Or is that grounded in some cultural bias that we have?

Looking bigger picture this year: It’s an election year and AI will certainly be a large talking point for candidates. Regardless of who wins this fall, in what ways do you think the administration can ensure that policies and enforcement are instilled to address AI to make sure that racial and other inequities don’t continue and evolve?

They need to enforce or encourage that tech companies have the onus of transparency on them. There needs to be some kind of legislative prompting, there has to be some kind of responsibility where tech companies actually suffer consequences, legal consequences, economic consequences, when they violate trust with the public, when they extract data without telling people. There also needs to be more two-way conversations. Often tech companies will just tell you, “These are the terms of service, you have to agree with them,” and if you don’t, you opt-out, that means you can’t use the tech. There needs to be some kind of system where tech companies can say, “Okay, we’re thinking about rolling this out or updating our terms of service in this way, how does the community feel about that?” And a way that really they can be accountable to their users. I think we really just need some legislation that makes tech companies sort of put their feet to the fire in terms of them actually having responsibility to their users.

When it comes to fighting against racial biases and struggles, sometimes the most important people that can help create change and bring awareness are those not directly impacted by what’s going on — for example, a white person being an ally and protesting for a Black person. What do you think most normal people can do to influence change and bring awareness to AI challenges for Black people?

I would say, for those people who are in the know about what tech companies are doing, talk about that with your kids, right? When you’re sitting down and your kids are telling you about something that their friend posted, that’s a perfect time to be like, “Let’s talk about that technology that your friend is using or that you’re using.” Did you know that on TikTok, this happens? Did you know that on TikTok, often Black creator voices are hidden, or Black content creators are shadow-banned? Did you know what happens on Instagram? These kinds of regular conversations, that way, these kinds of tech injustices are part of the everyday vernacular for kids as they’re coming up so that they can be more aware, and also so that they can advocate for themselves and for their communities.


The post How AI is unfairly targeting and discriminating against Black people appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Dr. J. Nathan Matias on leading technology research for better digital rights

Mon, 18/03/2024 - 15:00

At Mozilla, we know we can’t create a better future alone. That is why each year we will be highlighting the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with winner Dr. J. Nathan Matias, a professor at Cornell University leading technology research to create change and impact digital rights. He leads the school’s Citizens and Technology Lab (CAT Lab) and is the co-founder of the Coalition for Independent Technology Research, a nonprofit defending the right to ethically study the impact of tech on society. We talk with Matias about his start in citizen science, his work advocating for researchers’ rights and more.

As a professor at Cornell, how would you gauge where students and Gen Z are at in terms of knowing the dangers of the internet?

As a researcher, I am very aware that my students are one narrow slice of Americans. I teach communication and technology. I teach this 500 student class and I think the students I teach hear about people’s concerns, about technology, through media, through what they see online. And they’re really curious about what if that is true and what we can do about it. That’s one of the great joys of being a professor, that I can introduce students to what we know, thanks to research and to all the advocacy and journalism, and also to what we don’t know and encourage students to help create the answers for themselves, their communities and future generations.

To kind of go a little bit even further, as a professor, what are the things that you try to instill with them, or what are core concepts that you think are really important for them to know and try to hammer down to them about the internet and the social impacts of all of these platforms?

If I’m known for one thing, it’s the idea that knowledge and power about digital technologies shouldn’t be constrained to just within the walls of the universities and tech companies. Throughout my classes and throughout my work, I actively collaborate with and engage with the general public to understand what people’s fears are to collect evidence and to inform accountability. And so, my students had the opportunity to see how that works and participate in it themselves. And I think that’s especially important, because yeah, people come to a university to learn and grow and learn from what scholars have said before, but also, if we come out of our degrees without an appreciation for the deeply held knowledge that people have outside of universities, I think that’s a missed opportunity. 

Beyond the data you collect in your field, what other types of data collection out there creates change and inspires you to continue the work that you do?

I’m often inspired by people who do environmental citizen science because many of them live in context. We all live in contexts where our lives and our health and our futures are shaped by systems and infrastructures that are invisible, and that we might not appear to have much power over, right? It could be air or water, or any number of other environmental issues. And it’s similar for our digital environments. I’m often inspired by people who do work for data collection and advocacy and science on the environment when thinking about what we could do for our digital worlds. Last summer, I spent a week with a friend traveling throughout the California Central Valley, talking to educators, activists, organizers and farmworkers and communities working to understand and use data to improve their physical environment. We spent a day with Cesar Aguirre at the Central California Justice Network. You have neighborhoods in central California that are surrounded by oil wells and people are affected by the pollution that comes out of those wells — some of them have long been abandoned and are just leaking. And it’s hard to convince people sometimes that you’re experiencing a problem and to document the problem in a way that can get things to change. Cesar talked about ways that people used air sensors and told their stories and created media and worked in their local council and at a state level to document the health impacts of these oil wells and actually get laws changed at the state level to improve safety across the state. Whenever I encounter a story like that, whether it’s people in Central California or folks documenting oil spills in Louisiana or people just around the corner from Cornell — indigenous groups advocating for safe water and water rights in Onondaga Lake — I’m inspired by the work that people have to do and do to make their concerns and experiences legible to powerful institutions to create change. Sometimes it’s through the courts, sometimes it’s through basic science that finds new solutions. Sometimes it’s mutual aid, and often at the heart of these efforts, is some creative work to collect and share data that makes a difference.

Dr. J. Nathan Matias at Mozilla’s Rise25 award ceremony in October 2023.

When it pertains to citizen science and the work that you do, what do you think is the biggest challenge you and other researchers face? And by that I mean, is it kind of the inaction of tech companies and a lot of these institutions? Or is it maybe just the very cold online climate of the world today?

It’s always hard to point to one. I think the largest one is just that we have a lot more work to do to help people realize that they can participate in documenting problems and imagining solutions. We’re so used to the idea that tech companies will take care of things for us, that when things go wrong, we might complain, but we don’t necessarily know how to organize or what to do next. And I think there’s a lot that we as people who are involved in these issues and more involved in them can do to make people aware and create pathways — and I know Mozilla has done a lot of work around awareness raising. Beyond that, we’ve kind of reached a point where I wish companies were indifferent, but the reality is that they’re actively working to hinder independent research and accountability. If you talk to anyone who’s behind the Coalition for Independent Tech Research, I think we would all say we kind of wish we didn’t have to create it, because spending years building a network to support and defend researchers when they come under attack by governments or tech companies for accountability and transparency work for actually trying to solve problems, like, that’s not how you prefer to spend your time. But, I think that on the whole, the more people realize that we can do something, and that our perspective and experience matters, and that it can be part of the solution, the better off we are with our ability to document issues and imagine a better future. And as a result, when it involves organizing in the face of opposition, the more people we’ll have on that journey.

Just looking at this year in general with so much going on, what do you think is the biggest challenge that we face this year and in the world? How do we combat it?

Here’s the one I’ve been thinking about. Wherever you live, we don’t live in a world where a person who has experienced a very real harm from a digital technology — whether it’s social media or some kind of AI system — can record that information and seek some kind of redress, or even know who to turn to, to address or fix the problem or harm. And we see this problem on so many levels, right? If someone’s worried about discrimination from an algorithm in hiring, who do you turn to? If you’re worried about the performance of your self-driving car, or you have a concern about mental health and social media this year? We haven’t had those cases in court yet. We’re seeing some efforts by governments to create standards and we’re seeing new laws proposed. But it’s still not possible, right? If you get a jar of food from the supermarket that has harmful bacteria, we kind of know what to do. There’s a way you can report it, and that problem can be solved for lots of people. But that doesn’t yet exist in these spaces. My hope for 2024 is that on whatever issue people are worried about or focused on, we’ll be able to make some progress towards knowing how to create those pathways. Whether it’s going to be work so that courts know how to make sense of evidence about digital technologies — and I think there are going to be some big debates there — whether it’s going to involve these standards conversations that are happening in Europe and the U.S., around how to report AI incidents and how to determine whether an AI system is safe or not, or safe for certain purposes and any number of other issues. Will that happen and be solved this year? No, it’s a longer term effort. But how could we possibly say that we have a tech ecosystem that respects people’s rights and treats them well and is safe if we don’t even have basic ways for people to be heard when things go wrong, whether it’s by courts or companies, or elsewhere. And so I think that’s the big question that I’m thinking about both in our citizen science work and our broader policy work at CAT Lab.

There’s also a bigger problem: so many of these apps and platforms very much depend on us having to do something, rather than them.

Absolutely. I think a lot of people have lost trust in companies to do things about those reports. Because companies have a history of ignoring them. In fact, my very first community participatory science project in this space, which started back in 2014, we pulled information from hundreds of women who faced online harassment. And we looked at the kinds of things they experienced. And then whether Twitter back then was responding to people’s reports. It revealed a bunch of systemic problems and how the company has handled it. I think we’ve reached the point where there’s some value in that reporting, and sometimes for good and sometimes those things are exploited for censorship purposes as well — people report things they disagree with to try to get it taken down. But even more deeply, those reports don’t get at the deeper systemic issues. They don’t address how to prevent problems in the first place, or how to create or how to change the underlying logics of those platforms, or how to incentivize companies differently, so that they don’t create the conditions for those problems in the first place. I think we’re all looking for what are the right entities? Some currently exist, some we’re going to have to create that will be able to take on what people experience and actually create change that matters.

We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?

I love that question because my first true encounter with Mozilla would have been in 2012 at the Mozilla festival, and I was so inspired to be surrounded by a room of people who cared about making the Internet and our digital worlds better for people. And it was such a powerful statement that Mozilla convened people. Other tech institutions have these big events where the CEO stands on a stage and tells everyone why what they’re doing is revolutionary. And Mozilla did something radically different, which was to create a community and a space for people to envision the future together. I don’t know what the tech innovations or questions are going to be 25 years from now — there will probably be some enduring ones about access and equity and inclusion and safety for whatever the technologies are. My hope is that 25 years from now, Mozilla will continue to be an organization and a movement that listens and amplifies and supports a broad and diverse community to envision that together. It’s one of the things that makes Mozilla so special, and I think is one of the things that makes it so powerful.

What is one action you think that everybody can take to make the world and their lives online better?

I think the action is to believe yourself when you notice something unusual, or have a question. And then to find other people who can corroborate and build a collective picture. Whether it’s by participating in the study at CAT Lab or something else. I have a respiratory disability, and it’s so easy to doubt your own experience and so hard to convince other people sometimes that what you’re experiencing is real. And so I think the biggest step we can take is to believe ourselves and to like, believe others when they talk about things they’ve experienced and are worried about but use that experience as the beginning of something larger, because it can be so powerful, and make such a huge difference when people believe in each other and take each other seriously.

What gives you hope about the future of our world?

So many things. I think every time I meet someone who is making things work under whatever circumstances they have — unsurprising as someone who does citizen and community science. I think about our conversations with Jasmine Walker, who is a community organizer who organizes these large spaces for Black communities online and has been doing it for ages and across many versions of technology and eras of time. And just to see the care and commitment that people have to their communities and families as it relates to technology — it could be our collaborators who are investigating hiring algorithms or communities we’ve talked to. We did a study that involved understanding the impact of smartphone design on people’s time use, and we met a bunch of people who are colorblind and advocates for accessibility. In each of those cases, there are people who care deeply about those around them and so much that they’re willing to do science to make a difference. I’m always inspired when we talk, and we find ways to support the work that they’re doing by creating evidence together that could make a difference. As scientists and researchers, we are sometimes along for the ride for just part of the journey. And so I’m always inspired when I see the commitment and dedication people have for a better world.


The post Dr. J. Nathan Matias on leading technology research for better digital rights appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet