Who's In This Podcast
Helen Todd is co-founder and CEO of Sociality Squared and the human behind Creativity Squared.
Maya Daver-Massion is the Head of Online Safety at PUBLIC and an Online Safety subject matter expert.

Ep31. A.I. & Safe Online Spaces: How Creativity & Collaboration Address the Complexities of Online Harms with Maya Daver-Massion, the Head of Online Safety at PUBLIC

Will generative artificial intelligence (GenAI) make the internet better or worse? The answer is more complicated than you might think. On this episode of Creativity Squared, online safety expert Maya Daver-Massion walks us through the landscape of online harms as well as the multi-pronged efforts to protect vulnerable internet users from malicious actors.

Maya is the Head of Online Safety at PUBLIC, a London-based digital transformation partner committed to helping the public sector turn innovative ideas into practical solutions. She is an Online Safety expert who has worked with the public and private sectors to tackle harms including online child sexual abuse and exploitation, gender-based violence, human rights violations, and more. Maya helps clients build evidence on key policy and regulatory challenges, identify targeted interventions, and design products to empower and safeguard individuals online. She’s driven by her passion to keep vulnerable populations safe online and her commitment to making the internet a safer place for all.

Maya says that continuous effort, innovation, and cooperation are the foundations of fostering safer online environments. She discusses the challenges of addressing online harms, the double-edged role of generative A.I. in that evolving battle, and what governments and tech companies are doing to mitigate these harms.

Online Harms and the Challenges of Mitigation

Online harm as a subject matter is as amorphous and diverse as the internet itself, making it difficult to define and therefore challenging to address. 

“There’s no real set definition or taxonomy of online harms, as harms really manifest in so many different ways, and for so many different communities.”

Maya Daver-Massion

Maya describes online harm broadly as any online behavior that may hurt someone physically or emotionally. There are some overarching categories that online safety advocates encounter daily, such as cyberbullying and child sexual abuse material (CSAM). The world of online harms also includes more subtle triggers that aren’t as obvious without understanding their implicit context. The wide variety of cultural values co-existing on the internet further complicates the task of articulating a useful definition of online harms.

Yet many instances of online harm are motivated by familiar forms of hatred and malice. For instance, Maya says that 80 percent of CSAM depicts females. Black women and women in publicly visible positions are also significantly more likely to be targets of intimate imagery abuse; 90 percent of deepfakes are pornographic depictions of women.

The second significant challenge for online safety advocates is collecting evidence of online harms. Maya says the issue is twofold. Those who experience online harm, depending on its type, may not feel safe or empowered enough to report it, or even to share their experience with loved ones. Additionally, the lack of a common definition of online harms makes it hard to write rules for harm-detection technology to enforce.

As somebody working at the intersection of online harms and technological innovation, the third major challenge Maya encounters is keeping up with the evolution of online harms and the tools used to create harmful material.

“Harm types are evolving in the same way that technologies are evolving. So you really have to have the finger on the pulse constantly to know what the key challenges are.”

Maya Daver-Massion

Over the past year, GenAI has opened up a whole new can of worms in the race to combat online harms. But Maya says that the technology also offers massive potential for good. 

Risks and Benefits of A.I. for Online Safety

Maya says that GenAI can both exacerbate and help mitigate online harms. 

Realistically, she says that we’re now in an environment where online harms can be scaled much more rapidly than ever before. She cites three main factors in the risk analysis of GenAI’s impact on online safety. 

First, the barrier to entry for using GenAI is almost nonexistent. While many of the most popular GenAI products have built-in guardrails to prevent misuse, open-source models are becoming more available and more powerful. Malicious actors can go from intent to execution almost instantly, without being slowed down by the need to learn Photoshop or video editing (and without the time to reconsider their plans).

The production scale possible with A.I. assistance is also enormous. Peddlers of mis- and disinformation can pump out endless content to advance their agendas at almost zero cost. By comparison, companies like Meta devote significant resources to moderating content on their platforms and still don’t catch everything.

Finally, GenAI is closing the quality gap between authentic and synthetic content. Newer versions of text-to-image generators can produce images that are indistinguishable from actual photographs, and facial and vocal cloning scams are on the rise. While tools exist to help online citizens check the authenticity of content they encounter, some A.I.-generated content is of high enough quality that a person encountering it may never think to check whether it’s real. That alone is cause for caution and skepticism in online spaces.

Maya emphasizes that these risk factors exist on top of the issues that have always troubled traditional A.I., such as regurgitating and reinforcing the biases that models learn from their training data. 

However, she remains optimistic that a combination of regulation and innovative technology will help improve online safety. 

The Rise of Online Safety Regulation and Safety Tech

For the last eight years, Australia has been the global leader in regulating digital spaces against harmful content. Australia’s online safety legislation, first enacted in 2015 and expanded in 2018 before being consolidated in the Online Safety Act 2021, focuses mainly on child safety and anti-bullying. Its interventions center on safety by design, requiring platforms to take proactive and reasonable steps to protect users, offer education about online harms, and strive for transparency in their harm prevention measures. Australia’s eSafety Commissioner is now a blueprint for other countries building out their own online safety regulatory frameworks.

Across the globe, governments are taking the risks of online harms more seriously and working on policies to protect their digital citizens. Singapore enacted its own Online Safety Act at the start of this year, focusing more on what Maya calls “codes of practice” or guidance on specific harms. 

The European Union’s rules also took effect for the largest platforms in August. The EU’s Digital Services Act takes a “notice and takedown” approach, under which platforms can be held liable if they are aware of illegal material on their sites and fail to remove it, or don’t communicate transparently about the existence of such harms.

Canada and South Africa are also working on developing their own online safety regulations. Even though global adoption of online safety regulation is piecemeal, Maya says that strengthening regulation in big online markets can help improve online safety in countries without robust regulation. 

“As these massive markets are enforcing regulation, there will be spillover effects, because ultimately, these platforms are predominantly sitting in these markets as well. So it’s not only impacting their user base, but it’s impacting their headquarters.”

Maya Daver-Massion

The United Kingdom is getting ready to start enforcing its own online safety regime, focusing mainly on illegal content such as terrorist radicalization content and CSAM. Unlike other countries, though, the UK is emphasizing an active moderation approach that relies on technology intervention, including help from artificial intelligence. 

Platforms have long used A.I. to help detect the needles of online harm in the haystack of digital content. However, the computing power available today, combined with more advanced A.I. models, can uncover more of the needles while misidentifying fewer straws of hay. Artificial intelligence can not only detect more instances of harmful content than previous algorithms; it can also help ease the mental health burden that human content moderators experience.

Maya says that A.I. content moderation solutions have to balance the greater good with individual rights. 

“There are different technologies that could be used to best intervene and block certain content. But how are you able to use those technologies, while also enabling private spaces?”

Maya Daver-Massion

Through her work, Maya gets a front-row seat to many of the technological innovations seeking to strike this balance. Although tech platforms have invested less in their trust and safety teams over the past year, startups like the ones Maya works with are trying to fill the gaps.

U.S.-based Modulate offers an A.I. product for online gaming providers called “ToxMod,” which actively mitigates toxic and harmful language in online gaming voice chat. In the U.K., a company called Unitary offers multimodal “contextual moderation,” which analyzes the totality of factors within and around a piece of content to determine whether it is explicitly harmful or harmful specifically within the context in which it’s shared.

Maya says, though, that mitigating online harms takes a village. It can’t solely be up to platforms, civil society organizations (CSOs), or A.I. tech startups to combat online harm; it needs to be a collaborative effort. She cites Bloom as an example of a successful collaboration between tech and more traditional anti-harm advocacy. Bloom is a free feature available on Bumble that offers courses and advice on building healthy relationships from the online abuse research and prevention group Chayn. Users can access resources about the signs of an abusive relationship and chat with professional counselors from Chayn.

“I think that’s a really beautiful way that you can take the resources that are already made available from civil society organizations working directly with individuals to understand their needs, and build it out on larger scale platforms, in the unfortunate event that someone needs it.”

Maya Daver-Massion

As governments and platforms hash out the best ways to keep their citizens and users safe online, Maya predicts that collaboration between policymakers, tech companies, CSOs, and educators will continue to yield the best results. 

Online Safety in Numbers

Through all the talk about governments, multilateral regulatory summits, massive tech companies, and futuristic technology, Maya wants listeners to remember that individuals still have the agency to determine their online experience. 

“You have a lot more power, just in your knowledge of being an online citizen, than you may know. And it’s about reinforcing that power through some of the resources that are already made available.”

Maya Daver-Massion

She mentions organizations like the National Center for Missing and Exploited Children, which offers resources for parents and age-appropriate learning materials for kids about staying safe online. The Internet Watch Foundation is another organization on the front lines, helping companies purge CSAM from their servers and assisting law enforcement in prosecuting those who traffic in it. U.K.-based Parent Zone developed a tool with Google called “Be Internet Legends” for seven- to 11-year-olds, which gamifies the online safety learning experience.

Ultimately, Maya acknowledges that the space of online harm can be daunting, but she reminds us that a wealth of resources and tools exists to help us feel safe and empowered. Although there’s no shortage of bad actors online, there’s also a wide and growing network of good actors ready to support those experiencing online harm.

PUBLIC – International State of Safety Tech Report: 2023

The International State of Safety Tech is a research report produced by PUBLIC and Perspective Economics on behalf of Paladin, a leading global investor in the cybersecurity and Safety Tech space. The report provides an overview of the global Safety Tech landscape, including details on emerging technologies and the threats that may challenge the sector, the policy and regulation driving it, and investment opportunities. It is the first of three reports to be produced annually.

Continue the Conversation

Thank you, Maya, for being our guest on Creativity Squared. 

This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.  

Creativity Squared is brought to you by Sociality Squared, a social media agency that understands the magic of bringing people together around what they value and love: http://socialitysquared.com.

Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.

Join Creativity Squared’s free weekly newsletter and become a premium supporter here.