Will generative artificial intelligence (GenAI) make the internet better or worse? The answer is more complicated than you might think. On this episode of Creativity Squared, Online Safety Expert, Maya Daver-Massion, walks us through the landscape of online harms as well as the multi-pronged efforts to protect vulnerable internet users from malicious actors.
Maya is the Head of Online Safety at PUBLIC, a London-based digital transformation partner committed to helping the public sector turn innovative ideas into practical solutions. She is an Online Safety expert, having worked with the public and private sector to tackle harms including online child sexual abuse and exploitation, gender-based violence, human rights violations, and more. Maya helps clients build evidence on key policy and regulatory challenges, identify targeted interventions, and design products to help empower and safeguard individuals online. She’s driven by her passion to keep vulnerable populations safe online and her commitment to making the internet a safer place for all.
Maya says that continuous effort, innovation, and cooperation are the foundations of fostering safer online environments. She discusses the challenges in addressing online harms, the double-edged role of generative A.I. in the evolving battle against online harms, and what governments and tech companies are doing to mitigate online harms in the age of generative artificial intelligence.
Online harm as a subject matter is as amorphous and diverse as the internet itself, making it difficult to define and therefore challenging to address.
Maya Daver-Massion
Maya describes online harm broadly as any behavior online which may impact or hurt someone physically or emotionally. There are some overarching categories that online safety advocates run into daily, such as cyberbullying and child sexual abuse material (CSAM). The world of online harms also includes more subtle triggers that aren't as obvious without understanding their implicit context. The wide variety of cultural values co-existing on the internet further complicates the task of articulating a useful definition of online harms.
Yet many instances of online harms are motivated by familiar forms of hatred and malice. For instance, Maya says that 80 percent of CSAM depicts girls. Women are particularly targeted with intimate image abuse, which deepfakes can exacerbate: 95 percent of deepfakes are pornographic material, the majority depicting women. Black women and women in publicly-visible positions also receive a disproportionate amount of toxic content and hate speech.
The second significant challenge for online safety advocates is collecting the evidence of online harms. Maya says the issue is two-fold. Those who experience online harm, depending on the type, may not feel safe or empowered enough to report or even share their experience with loved ones. Additionally, the lack of a common definition of online harms makes it hard to write rules for harm-detection technology to enforce.
As somebody working at the intersection of online harms and technological innovation, the third major challenge that Maya encounters is keeping up with the evolution of online harms and the tools used to create harmful material.
Over the past year, GenAI has opened up a whole new can of worms in the race to combat online harms. But Maya says that the technology also offers massive potential for good.
Maya says that GenAI can both exacerbate and help mitigate online harms.
Realistically, she says that we’re now in an environment where online harms can be scaled much more rapidly than ever before. She cites three main factors in the risk analysis of GenAI’s impact on online safety.
First of all, the barrier to entry for using GenAI is almost nothing. While many of the most popular GenAI products have built-in guardrails to prevent misuse, open source models are becoming more available and more powerful. Malicious actors can go from intent to execution quickly, without being slowed down (or given time to reconsider their plan) by the need to learn Photoshop or video editing.
The production scale possible with A.I. assistance is also enormous. Peddlers of mis- and disinformation can pump out endless content to advance their agendas at almost zero cost. By comparison, companies like Meta devote significant resources to moderating content on their platforms and still don't catch everything.
Finally, GenAI is closing the quality gap between authentic and synthetic content. Newer versions of text-to-image generators can produce images that are indiscernible from actual photographs, and facial and vocal cloning scams are on the rise as well. While there are tools available to help online citizens check the authenticity of content they encounter, some A.I.-generated content is of high enough quality that someone encountering it may never think to check whether it's real, which is cause for caution and skepticism in online spaces.
Maya emphasizes that these risk factors exist on top of the issues that have always troubled traditional A.I., such as regurgitating and reinforcing the biases that models learn from their training data.
However, she remains optimistic that a combination of regulation and innovative technology will help improve online safety.
For the last eight years, Australia has been the global leader in regulating digital spaces against harmful content. The Australian Online Safety Act, passed in 2015 and updated in 2018, focuses mainly on child safety and anti-bullying. Interventions focus on safety by design, requiring platforms to take proactive and reasonable steps to protect users, offer education about online harms, and strive for transparency in their harm prevention measures. Australia’s eSafety Commissioner is now a blueprint for other countries building out their online safety regulatory framework.
Across the globe, governments are taking the risks of online harms more seriously and working on policies to protect their digital citizens. Singapore enacted its own Online Safety Act at the start of this year, focusing more on what Maya calls “codes of practice” or guidance on specific harms.
The European Union also enacted its own online safety policies in August 2023. The EU's Digital Services Act focuses on a "notice and takedown" approach, where platforms can be held liable if they are aware of illegal materials on their site and fail to remove them and/or don't communicate transparently about the existence of such harms.
Canada and South Africa are also working on developing their own online safety regulations. Even though global adoption of online safety regulation is piecemeal, Maya says that strengthening regulation in big online markets can help improve online safety in countries without robust regulation.
The United Kingdom is getting ready to start enforcing its own online safety regime, focusing mainly on illegal content such as terrorist radicalization content and CSAM. Unlike other countries, though, the UK is emphasizing an active moderation approach that relies on technology intervention, including help from artificial intelligence.
Platforms using A.I. to help detect the needles of online harm in the haystack of digital content is nothing new. However, the computing power available today, combined with more advanced A.I. models, can uncover more of the needles while misidentifying fewer straws of hay. Modern models can not only detect more instances of harmful content than previous algorithms, they can also help ease the mental health burden that human content moderators experience.
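To make that idea concrete, here's a minimal sketch in Python of the kind of confidence-thresholded triage described above. The classifier, labels, and thresholds are hypothetical stand-ins, not any specific platform's system: only high-confidence violations are auto-removed, only the ambiguous middle is escalated to humans, and everything else passes.

```python
# Minimal sketch of confidence-thresholded moderation triage.
# The classifier, labels, and thresholds are hypothetical placeholders,
# not any specific platform's or vendor's system.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    label: str         # e.g., "harassment" or "benign"
    confidence: float  # model's confidence in the label, 0.0-1.0

def classify(content: str) -> ModerationResult:
    """Stand-in for a real ML classifier; returns a toy keyword-based score."""
    flagged_terms = {"abuse", "threat"}
    hits = sum(term in content.lower() for term in flagged_terms)
    if hits:
        return ModerationResult("harassment", min(0.5 + 0.3 * hits, 0.99))
    return ModerationResult("benign", 0.9)

def triage(content: str, remove_above: float = 0.95, review_above: float = 0.6) -> str:
    """Auto-action only high-confidence cases; humans see the ambiguous middle."""
    result = classify(content)
    if result.label != "benign" and result.confidence >= remove_above:
        return "remove"        # clear violation: no human needs to see it
    if result.label != "benign" and result.confidence >= review_above:
        return "human_review"  # uncertain: escalate to a human moderator
    return "allow"

if __name__ == "__main__":
    for post in ["have a nice day", "this is a threat of abuse"]:
        print(post, "->", triage(post))
```

The design choice worth noticing is the pair of thresholds: raising `remove_above` trades fewer false takedowns for more moderator workload, which is exactly the balance between catching more harm and shrinking human exposure to it.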
Maya says that A.I. content moderation solutions have to balance the greater good with individual rights.
Through her work, Maya gets a front row seat to many of the technological innovations seeking to strike this balance. Although tech platforms have invested fewer resources into their trust and safety teams over the past year, startups like the ones Maya works with are trying to fill the gaps.
U.S.-based Modulate offers an A.I. product for online gaming providers called "ToxMod" which actively mitigates toxic and harmful language in online gaming voice chat. In the U.K., a company called Unitary offers multimodal "contextual moderation" which analyzes the totality of factors within and around a piece of content to determine whether it is explicitly harmful or harmful specifically within the context in which it's shared. Maya says, though, that mitigating online harms takes a village. It can't solely be up to platforms, civil society organizations (CSOs), or A.I. tech startups to combat online harm; it needs to be a collaborative effort. She cites Bloom as an example of a successful collaboration between tech and more traditional anti-harm advocacy. Bloom is a free feature available on Bumble which offers courses and advice on building healthy relationships from the online abuse research and prevention group Chayn. Users can access resources about the signs of an abusive relationship and chat with professional counselors from Chayn.
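As a rough illustration of what "contextual moderation" means, here's a toy sketch with invented signals and weights (not Unitary's actual system): the same image can score as benign or harmful depending on the text overlaid on it and where it's posted.

```python
# Toy illustration of contextual moderation: the same image can be benign
# or harmful depending on surrounding signals. The signals and weights here
# are invented for illustration, not any vendor's real system.

from typing import NamedTuple

class Post(NamedTuple):
    image_label: str   # output of an image classifier, e.g., "meme_template"
    overlay_text: str  # text extracted from the image via OCR
    context: str       # where/how it was posted, e.g., "targeted_reply"

def contextual_score(post: Post) -> float:
    """Combine per-modality signals into one harm score in [0, 1]."""
    score = 0.0
    if "slur" in post.overlay_text.lower():
        score += 0.6   # text on the image carries the harm
    if post.context == "targeted_reply":
        score += 0.3   # identical content aimed at a person is worse
    if post.image_label == "violent_scene":
        score += 0.4
    return min(score, 1.0)

# The bare image scores 0.0; with hostile overlay text sent as a targeted
# reply, the identical image scores 0.9 and would be escalated.
print(contextual_score(Post("meme_template", "", "feed")))                       # 0.0
print(contextual_score(Post("meme_template", "a slur here", "targeted_reply")))  # 0.9
```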
As governments and platforms hash out the best ways to keep their citizens and users safe online, Maya predicts that collaboration between policymakers, tech companies, CSOs, and educators will continue to yield the best results.
Through all the talk about governments, multilateral regulatory summits, massive tech companies, and futuristic technology, Maya wants listeners to remember that individuals still have the agency to determine their online experience.
She mentions organizations like the National Center for Missing & Exploited Children, which offers resources for parents and age-appropriate learning materials for kids about staying safe online. The Internet Watch Foundation is another organization on the front lines, helping companies purge CSAM from their servers and assisting law enforcement in prosecuting those who traffic in CSAM. U.K.-based Parent Zone developed a tool with Google called "Be Internet Legends" for 7- to 11-year-olds which gamifies the online safety learning experience.
Ultimately, Maya acknowledges that the space of online harm can be daunting, but reminds us that a wealth of resources and tools exist to help us feel safe and empowered. Although there's no shortage of bad actors online, there's also a wide and growing network of good actors ready to support those experiencing online harm.
The International State of Safety Tech is a research piece conducted by PUBLIC and Perspective Economics on behalf of Paladin, a leading global investor in the cybersecurity and Safety Tech space. The report provides an overview of the global Safety Tech landscape, including details on the emerging technologies and threats that may challenge it, the policy and regulation driving it, and the investment opportunity. The report is the first of three which will be produced annually.
Thank you, Maya, for being our guest on Creativity Squared.
This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.
Creativity Squared is brought to you by Sociality Squared, a social media agency who understands the magic of bringing people together around what they value and love: http://socialitysquared.com.
Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.
Join Creativity Squared’s free weekly newsletter and become a premium supporter here.
TRANSCRIPT
[00:00:00] Maya Daver-Massion: What we're able to see from the safety tech landscape is a lot of innovation and a lot of creativity in trying to think through how to tackle these very specific, targeted harms. Just to conceptualize what safety tech is again, in comparison to cyber security, which I think people know a lot more: cyber security focuses on systems.

[00:00:22] Maya Daver-Massion: Safety tech focuses on humans. Think about it in that way. They kind of complement each other. They work in parallel, so to say. But really, there's a lot of bottom-line benefits to safety tech adoption that go beyond just an individual human. You know, there's a lot around brand integrity, reputational awareness, user retention, reduced churn, reducing customer acquisition costs, but really being able to creatively think through a solution to mitigate some of the harms before they even reach a human being.
[00:01:00] Helen Todd: Maya Daver-Massion is the Head of Online Safety at PUBLIC, a London-based digital transformation partner committed to helping the public sector turn innovative ideas into practical solutions. She's an online safety subject matter expert, having worked with the public and private sector to tackle harms including online child sexual abuse and exploitation, gender-based violence, human rights violations, and more.
Maya helps clients to build evidence on key policy and regulatory challenges, identify targeted interventions and design products to help empower and safeguard individuals online. Maya is driven by her passion to keep vulnerable populations safe online, and her commitment to making the internet a safer place for all.
[00:01:49] Helen Todd: Maya is a fellow optimist, who I was connected with through Ryan Shea, who's the managing director at PUBLIC and a fellow Bold community member. Today's topic is one that AI greatly impacts, and that's online harms. While it's a bit of a different direction than most shows that we do at the intersection of creativity and AI, creative solutions and emerging safety tech are crucial for building safer online spaces.
It’s important to understand both the amazing creativity that AI unlocks, and also to be aware of the potential risks from bad actors. And that’s what today’s conversation is about. Listen to understand the need for continuous effort, innovation, and cooperation to foster safer online environments.
[00:02:36] Helen Todd: Awareness and education are important, and you'll hear how online harms are defined and impact different communities. Maya discusses the role of generative AI and emerging technologies in both exacerbating and also helping solve the issues related to security and online safety. You'll also get an international view of the regulatory landscape and tips on what you can do on an individual level when it comes to building safe and creative online spaces. Discover why the future is collaborative. Enjoy.
[00:03:17] Helen Todd: Welcome to Creativity Squared. Discover how creatives are collaborating with artificial intelligence in your inbox on YouTube and on your preferred podcast platform.
Hi, I'm Helen Todd, your host, and I'm so excited to have you join the weekly conversations I'm having with amazing pioneers in this space.
[00:03:36] Helen Todd: The intention of these conversations is to ignite our collective imagination at the intersection of AI and creativity to envision a world where artists thrive.
[00:03:52] Helen Todd: Maya, it is so good to have you on Creativity Squared. Welcome to the show.
[00:03:57] Maya Daver-Massion: Thank you so much. I’m really excited to be here.
[00:03:59] Helen Todd: Excited to have you. I met Maya through Ryan, who was at a conference I went to and works at the same company, PUBLIC, and I'm super excited to have Maya on the show. Maya, for those who are meeting you for the first time, can you introduce yourself and tell us a little bit about your origin story?
[00:04:18] Maya Daver-Massion: Sure. Very happy to. Again, thanks so much for having me on the show. Really excited about our conversation today. So for everyone who's listening in, my name is Maya Daver-Massion and I'm Head of Online Safety at an organization called PUBLIC. Happy to tell you all a little bit more about PUBLIC and our work, but I'm really passionate about building safer online spaces, and I'm very privileged to do that in the work that I do on a day-to-day basis. My passion for this area comes from my origin story, really. I come from a mixed cultural background. My mother is Indian, my father is German, but I spent the majority of my life in Japan. And my grandparents on both sides also grew up in Japan, so it really feels like home away from home.
[00:05:02] Maya Daver-Massion: But due to the fact that I had so many different cultures around me, the only thing that really felt like the golden thread, the thing that tied it all together, was policy and politics, which always captivated my interest. And then I ended up in Hong Kong in 2019 to 2020 when the protests were ongoing, and my job very quickly shifted to focusing on online environments and spaces and combating issues such as hate speech and mis- and disinformation. Trying to understand them from a brand reputational standpoint got me to make a shift in my career, and now I've been dedicating my efforts to the space ever since, really focusing on empowering and protecting people online with PUBLIC.
[00:05:48] Helen Todd: Thank you for sharing that. And you're based in the UK right now, so it's great to have you join the show virtually. And one reason why I'm really excited to have you on the show is we've had one attorney talk about the different IP-related issues with AI, and it was a little bit more focused on the EU and the US.
[00:06:08] Helen Todd: But you know, the work that you're doing on online harm is so important, and you bring such a wonderful international perspective, which I'm excited to dive into. And just a note for our viewers and listeners: this episode, you know, is such an important topic to raise awareness about.

[00:06:25] Helen Todd: It's a little bit heavier than our normal content on the show, but it's, for the most part, high level and not too deep. But for anyone who might need a trigger warning for some of the content, this is your trigger warning. Unfortunately, there are bad actors on the web and in society, and AI plays a big role in that.
[00:06:43] Helen Todd: So with that little segue, why don't you tell us about PUBLIC and the work that PUBLIC does and why it's so important, and your role within PUBLIC.
[00:06:52] Maya Daver-Massion: Definitely. So PUBLIC is a digital transformation partner sitting out of London. As you mentioned, we support the public sector with turning innovative ideas into practical solutions.

[00:07:04] Maya Daver-Massion: And really what that means, in simple terms, is that I sit at the intersection of the public sector and digital innovation. We have a range of different teams at PUBLIC. We do learning and workforce transformation. We do health. A lot of different things to support the public sector, but where I sit and where I focus my attention is the team called Security and Online Safety, with a focus on online safety specifically.
[00:07:26] Maya Daver-Massion: And really, my focus there is that I work with governments, regulators, civil society organizations, and the private sector as well, to really advance the online safety space and what we call the policy-to-product loop. And what that means is that I spend the bulk of my time gathering evidence, trying to assess policy landscapes, and trying to understand what is needed based on the critical online harms that are impacting people online, but also spending a lot of time focusing on the interventions from a product landscape.

[00:08:01] Maya Daver-Massion: And I'm looking forward to speaking about that today, too. But just to say, it's a very nascent space. I would love to speak to you again in 5, 10 years and see how different the conversation is. But one of the greatest privileges that I have in my role is being able to sit at the intersection between so many different stakeholders, partners, and collaborators in this space.

[00:08:24] Maya Daver-Massion: So I really hope to be able to provide that perspective and the importance of collaboration, as well as my understanding of what artificial intelligence really means in the online safety landscape.
[00:08:36] Helen Todd: It's so good to have you here. And, you know, I think generally, most people, when they hear "deepfakes," immediately go to the negative side (well, deepfakes inherently are negative and done without consent), but online harms expand to include a lot of different things. And while some people might be aware of deepfakes because that gets a lot of coverage in the press, can you define for us, when we say online harms, what that means and what that encompasses as well?
[00:09:07] Maya Daver-Massion: Yes, definitely. And mindful of how you started this, Helen, I want to mention that I don't want to re-traumatize anyone. I'm going to be speaking to different harm types, but I won't go into any detail about them. I just want to give a brief overview of the realities that individuals are unfortunately facing online.

[00:09:26] Maya Daver-Massion: So when I speak about online harms, really this is any behavior online which may impact or hurt someone physically or emotionally. I think something that's really important to be mindful of here is that there's no real set definition or taxonomy of online harms, as harms really manifest in so many different ways and for so many different communities.
[00:09:51] Maya Daver-Massion: So this is really my interpretation and what I've seen, and another person may give you a different perspective. But the way that I wanted to break it out is: what online harms are overarchingly, how they may impact specific individuals or communities, which is based on a lot of the work that I do,

[00:10:10] Maya Daver-Massion: and then also how they can be used as tools by groups as well. So if I start from the overarching perspective, I think what's important to know about the online harms landscape is that it may feel a bit foreign, but unfortunately it's a reality for so many different people, right? So when I speak about an overarching online harm, you would probably be able to list off quite a few of them:

[00:10:33] Maya Daver-Massion: things such as cyberbullying, intimidation, harassment, and abuse, which can really be felt across a range of different individuals, platforms, and experiences online. Then moving on to specific communities, what I wanted to highlight here is the fact that there are specific harms that are exacerbated more so in specific communities.
[00:10:56] Maya Daver-Massion: So, for example, one of the key harms that is oftentimes discussed, particularly from a regulatory perspective, is the harms that are inflicted on children. There's a broad category which is widely named child sexual abuse and exploitation, within which there's a proliferation or spread of what's called child sexual abuse material, or CSAM.
[00:11:17] Maya Daver-Massion: The majority of this material is actually of girls, unfortunately. It's unfortunate that it happens as a whole, but really 80 percent of it, based on the latest reports that I've seen, is of girls. Women are also particularly targeted with online harms such as intimate image-based abuse, which can be exacerbated by the topic that you just brought up, deepfakes, of which 95 percent is pornographic material, the majority of which, again, is of women. More than that, Black women and women who are in public positions particularly receive a lot of toxic content and hate speech directed at them, and I would be remiss not to acknowledge some of the work that's done by a charity in the UK landscape called Glitch, who have been really pivotal in being able to evidence and understand the space better.
[00:12:05] Maya Daver-Massion: There are also tools, as I was mentioning, used by groups. What you see in conflict situations is the proliferation of mis- and disinformation, for example, which can be really harmful as well. But in summary, all to say, there's unfortunately quite a substantial range of online harms that are impacting individuals online today.

[00:12:26] Maya Daver-Massion: And also these harms may result in other harm types as well, which is a scary thing that we need to be cognizant of when we're thinking through how to best intervene in this landscape.
[00:12:38] Helen Todd: Thank you for going over that landscape. It is so multifaceted and really borderless when it comes to the internet.
[00:12:45] Helen Todd: What are some of the challenges, you know, from the seat that you sit in, what are the challenges in addressing these?
[00:12:51] Maya Daver-Massion: Yeah, definitely. And borderless is exactly the right term. It’s the way that I used to describe it all the time. It’s quite intimidating because it is borderless, but we have to be mindful of this fact.
[00:13:02] Maya Daver-Massion: And the way that I approach my work is I completely acknowledge that it's a very difficult space to be operating in. However, we need people to do it. And I would love to speak about some of the different work that is going on to combat these borderless issues. But in terms of the challenges, let's say there are three issues that are top of mind for me.
[00:13:20] Maya Daver-Massion: I think one of them is really the variance in understanding of what harms actually are. You know, if you speak about something like online gender-based violence, it can be interpreted by different communities and different stakeholders in different ways; even the nuances between mis- and disinformation are really hard to label out.

[00:13:41] Maya Daver-Massion: But really, the fact that there isn't one joint definition and one joint taxonomy to understand this is challenging. There's also, of course, a massive challenge in being able to evidence these harms, right? We have to think about the range of different challenges that come behind people actually being able to speak up or technology being able to detect, right?

[00:14:03] Maya Daver-Massion: So there's a challenge with ineffective reporting mechanisms. There's a challenge with normalization. There's a challenge with fear of speaking up. But there's also the challenge of, because there's no definition, how do you know what to target exactly? And then lastly, something to be mindful of is the difficulty in tracking the evolution of harms as well through the uses of technology.

[00:14:27] Maya Daver-Massion: And I think a big part of this is just acknowledging that harm types are evolving in the same way that technologies are evolving, so it's something where you really have to have a finger on the pulse, so to say, constantly, to know what the key challenges are.
[00:14:40] Helen Todd: And you had mentioned when we were speaking ahead of the interview that there’s a difference between privacy and safety.
[00:14:47] Helen Todd: And I was wondering if you could expand on that.
[00:14:49] Maya Daver-Massion: Yeah, I wouldn't say difference as much as there is a balance. And I think this comes into play particularly when you're speaking in the context of technology, because realistically speaking, it's important for privacy to be upheld, particularly for more vulnerable communities, speaking about journalists, for example, and women with public profiles.

[00:15:16] Maya Daver-Massion: But on the safety and security angle, there are different technologies that could be used to intervene and block certain content. How are you able to use those technologies while also enabling private spaces? And I think there are a lot of technologies, to be frank, that do work and operate in this space.

[00:15:38] Maya Daver-Massion: But it is a question and a constant balance that we're trying to think about and work through as we examine the space further.
[00:15:45] Helen Todd: You mentioned generative AI as one of the things exacerbating this issue. Can you expand on that, and how it's changed the game a bit when it comes to online harm and safety?
[00:15:59] Maya Daver-Massion: Yes, definitely. So I think all of the online harms that I just mentioned were really reflective of, I don't want to say the world before Gen AI, but they're reflective of research that has been conducted over the past few years, ourselves included at PUBLIC. We're all quite curious to see how Gen AI is going to change the landscape, to be honest, and I would love to speak about it both in the lens of how Gen AI may further exacerbate online harms, but also how it can support in terms of interventions and prevention. But starting off, thinking about how Gen AI has changed the landscape: realistically speaking, we're in an environment where online harms can be scaled at a much more rapid pace than ever before. That's due to three key features, I would say, and we have a blog post on this on our website, which I would be happy to share over, too.

[00:17:01] Maya Daver-Massion: But the three key features are a low technical barrier to entry, right? People are able to use it; it's so easy to access. There's also, with that, the rapid development of new content: a person is able to input something into a generic chatbot, for example, and quickly develop the specific materials that they're looking for.

[00:17:21] Maya Daver-Massion: And the third part is that the content is high quality, right? It's hard to really be able to determine what material is created by Gen AI or created by a human being, both of which may be harmful, which we need to recognize regardless. These three key challenges are on top of existing challenges with artificial intelligence, the challenges with bias which can lead to discrimination, but also new challenges that come with AI models.

[00:17:53] Maya Daver-Massion: For example, the ability for Gen AI to hallucinate, which is effectively to create its own material and then believe its own material. So really, all of these together come to the challenges at scale, right? But at the same time, this low technical barrier and this rapid development of content can really be used as a power for good, to be able to build evidence, build education, and support with product development.

[00:18:22] Maya Daver-Massion: So as much as I'm keeping a finger on the pulse in terms of what's potentially challenging or not working well, I'm definitely focusing a lot of my attention on how this could be used for good over time.
[00:18:33] Helen Todd: That's good to hear. And I'm forever the optimist when it comes to all these things.
[00:18:39] Maya Daver-Massion: We have to be, we have to be. Yeah.
[00:18:43] Helen Todd: And you mentioned a link, and I'll be sure any of the links mentioned in the show will be put in the dedicated episode blog post, with a link to that in the episode description. Well, I know yesterday a friend texted me an article about some teenagers who used AI to, you know, put their classmates' faces on all this pornographic material, and it was in New Jersey, I think.

[00:19:07] Helen Todd: But there's not really state law in New Jersey that protects against AI-generated content, which opens up all this regulatory conversation, and we mentioned that it's borderless already. But can you give us a feel for the regulatory landscape, where it's at now, and where the gaps are?
[00:19:26] Maya Daver-Massion: Yes, definitely.
[00:19:27] Maya Daver-Massion: So one of the things, again, which is a privilege about working in this space is that it's a very nascent space. And as a result, we're really working in support of the different regulators who are looking to regulate this space, and again, why I think it would be great to have a conversation in 5, 10 years to see how things have changed.

[00:19:47] Maya Daver-Massion: But let me give you a quick overview of the online safety regulatory landscape, which then links to AI in the context of solutions. It also links to other types of regulation, including regulation that's coming out in the landscape itself. So in the context of online safety regulation, really, there are quite a few countries that have begun to regulate already.
[00:20:16] Maya Daver-Massion: Australia was the first one to regulate. They came up with the concept really in 2015 and then had the iteration in 2018 for their Online Safety Act. This really is a focus on child safety and cyberbullying, but in terms of the interventions that they're looking at, it's really been focused on the principles of safety by design, education,

[00:20:38] Maya Daver-Massion: and transparency. And what's been really exciting, and I encourage listeners to take a look at, is the work by the eSafety Commissioner, who is the regulator of the Online Safety Act in Australia. They have been putting in a lot of effort and have really built their regulatory teeth over time, and have been publishing quite a few transparency reports, which are quite interesting to see as other countries start to build and reflect on this from a regulatory lens.

[00:21:05] Maya Daver-Massion: In the context of what this means from an intervention point, they're focused on driving platforms to really take proactive measures with reasonable steps, really focusing on the safety of users and following up on complaints, following the principles again of safety by design and education. Now, in terms of the countries that I'm about to speak about, they're slightly different, because they're all really emerging as we speak.
[00:21:36] Maya Daver-Massion: There's Singapore, which brought into action its Online Safety Act at the beginning of this year, really with the focus, again, of reducing exposure, but instead looking at codes of practice, so really focusing more so on guidance on specific crimes. This is then different to the EU, which is also just about to start regulating as of August 2023 and focuses on quite a wide range of online harms, but really focusing more on a notice-and-takedown approach, which means that platforms will ultimately be held liable if there is knowledge of illegal materials and they fail to remove them and fail to communicate transparently about them. I'm now getting to the U.K., which, to be honest, is very exciting, because we just reached the last stage of the bill's passage before enforcement as of the last two weeks. So we are going to have the Online Safety Act come into play soon, really focused heavily on illegal content.

[00:22:35] Maya Daver-Massion: So really looking at child safety as well as terrorism. But yeah, the UK has placed a really heavy emphasis on active moderation, so really focusing on technical interventions, oftentimes with the usage of AI, to be able to prevent specific types of content from reaching users at the onset.

[00:22:59] Maya Daver-Massion: But it's an exciting space. There's more regulation to come. Canada, South Africa, and other countries are also looking into it as well. I know that the U.S. has looked at a few different legislative measures as well, for example, the Kids Online Safety Act. But really, the theme here, more than anything, is that there's a variety of approaches, but the harm type that's really being focused on more than anything is child safety,

[00:23:25] Maya Daver-Massion: first, which I think makes sense considering that's the one thing that people are really able to agree upon. And trust me, there's a lot of things that people don't agree upon in this space, but everyone really agrees that child safety needs to be at the forefront of regulation. So that's a bit on the online safety side.
[00:23:41] Maya Daver-Massion: Do you have any questions there, or do you want me to drill into the AI lens of this a bit more, too?
[00:23:47] Helen Todd: I read your blog post ahead of this interview, and you touched on this, but one of the things that you pointed out was that there's a system-led and a harm-led approach. And I was wondering, when you say different approaches to regulation, is that some of the different approaches that you see, or is that more from an industry standpoint?
[00:24:09] Maya Daver-Massion: No, definitely, that's one of the different approaches that we see. And really, what that means is it speaks to how narrow or widespread regulation can be. I think when you're speaking about harm-specific, so for example, the UK's soon-to-be Online Safety Act is very narrow in its approach.

[00:24:28] Maya Daver-Massion: It's focusing on very specific illegal harm types, and that's its first approach. And I'm sure that we'll revisit that over time and see how it's landing, both with users but also the platforms who are going to be under regulation. Versus a systems-led approach, which is a little bit more widespread.

[00:24:47] Maya Daver-Massion: I would probably say that the EU is quite a bit more widespread, because it's not only looking at the harm types, that being child safety, for example, but it's also looking at intellectual property. It's looking at crisis management. It's a little bit more widespread and a little bit more open in terms of the specific types of organizations that it's going to be regulating.
[00:25:09] Helen Todd: And one thing I did want to mention, because you pointed out that AI can help detect online content, you know, on the backend before it gets published and whatnot, and there are a lot of articles that have come up about this too: another component to this online landscape, when you talk about moderation, is the human moderators who are looking at this harmful content and evaluating it, which AI helps alleviate some of. But, you know, the people who have to monitor this content can be traumatized by it. So it's a very multifaceted field, and I don't want to forget about that,

[00:25:47] Helen Todd: the human side of keeping the Internet safe as well, because that's something that is always top of mind when I hear these conversations. But you mentioned, segueing into what AI means in the space, so I'd love to have you expand on that some more.
[00:26:01] Maya Daver-Massion: Definitely. And just to say, I completely agree with you on that.
[00:26:05] Maya Daver-Massion: It's something that comes up in conversation quite a bit, and I have to admit, I don't personally work with human moderators, but there's a big emphasis in this landscape, which I do a lot of research on, in trying to understand how content moderation can be used to prevent users from seeing specific content, but also to prevent human moderators from seeing specific content, ensuring that for some of the most harmful things on the Internet, you reduce the risk of harm to the human moderators and protect their mental health and well-being, because it's so critical.

[00:26:41] Maya Daver-Massion: And there are countless people who are really working day in and day out to try and safeguard this incredibly complex space. So just to say, in agreement on that. And then to your question about the AI landscape: what I find really interesting about online safety is that it captures a lot of different regulatory environments all at once.

[00:27:05] Maya Daver-Massion: There's regulation around child protection, privacy, security, et cetera. And what's starting to come up, of course, is AI regulation as well. And the reason why I have a smile on my face is because it was a really big week for AI this week, for people who are listening. We're in the last week of October 2023 right now, but just some of the things that have happened recently: U.S. President Biden has put out an executive order on AI, affirming the interest in regulating the AI landscape and placing new safety requirements on it, focusing on safety tests and really looking at transparency. I think for the U.S. audience, too, if you're looking to learn more about this space, one of the newsletters and people that I follow quite actively, who I feel gives really great overviews, is someone named Casey Newton, and he does a newsletter called Platformer, which is really helpful.

[00:28:03] Maya Daver-Massion: But also what happened this week is that there's an AI summit in the UK. So Prime Minister Sunak has brought together 28 countries looking at a range of different issue areas within the AI landscape, not focusing on the regulatory landscape as much, but really thinking about the different risks and the different types of collaborative efforts that could be put into place to build out a system of best practice over time.

[00:28:31] Maya Daver-Massion: And one of the outcomes of that was a non-binding agreement to enable governments to test their models for safety risks with a range of some of the largest platforms on our planet. So those are some of the exciting things that are happening right now. And then it would be remiss to not speak about the EU's AI Act, which is one of the furthest-along regulations in the space.

[00:28:55] Maya Daver-Massion: It's really trying to, again, take a risk-based approach, and will likely be enforced in the next year. But in terms of how these all overlap with one another: ultimately, online harms can be exacerbated through AI, and they can also be prevented through AI. So what's really critical about AI regulation and how it stands with online safety is thinking about those potential benefits and harms, but really ensuring in that thought process that there aren't barriers to innovation, while also protecting individuals against risks over time.
[00:29:35] Helen Todd: It has been a very exciting week in the world. And also, you know, there's a lot happening in the world too that this conversation has a lot of implications for, which I want to acknowledge. But, you know, one thing, because there is a platform side to all of this: a lot of the big platforms like Meta have all these online trust and safety teams that have really worked

[00:29:58] Helen Todd: on these subjects. But recently, in the last couple of years, they've been reducing their online trust and safety teams, most notably Twitter, now known as X. And really, the people at these platforms, their teams have been reduced. So what is your pulse on the platform side of the equation, with this exciting regulatory side?
[00:30:20] Maya Daver-Massion: My personal belief is they kind of go hand in hand, and I'm not speaking to the unfortunate layoffs that are going on on the platform side. But what I think is that as regulation continues to delve into this space, platforms are the first ones required to take action based on those regulations.

[00:30:39] Maya Daver-Massion: As a result, you need the right people in place to be able to effectively understand, adapt, and articulate those regulations to your product teams, and ensure that policy is reflective of those regulations as well. So while I think it definitely has been a difficult year for trust and safety, I'm optimistic, as you are, too.

[00:31:01] Maya Daver-Massion: I'm an optimistic person, but I'm really optimistic that the movement of regulation, and then pending regulation, will continue to draw focus to trust and safety teams over time, and continue to work hand in hand, really, to ensure that there is adequate safeguarding, as incentivized by regulation, but as supported by platforms and driven by platforms as well.

[00:31:26] Maya Daver-Massion: And I think that's one of the interesting and exciting things about this space, too: there are so many different actors involved. It's not only about platforms and regulators. It's also about the safety tech, so the developers of these technical interventions. It's about educators.

[00:31:44] Maya Daver-Massion: It's about civil society, who are able to provide support as well. So it's quite extensive, and it requires a lot of collaboration. And I do think if you have all of those different stakeholders, with regulators and platforms at the lead of it all, we'll hopefully continue to move to safer online spaces.
[00:32:01] Helen Todd: Since you mentioned Casey Newton, I'm a huge fan of Casey Newton and his newsletter, and he also has a great podcast with Kevin Roose called Hard Fork. Besides my own podcast, it's the one that I always listen to every week in my podcast diet, I guess you could say. And when we talk about borderless tech and, you know, different countries and the EU approaching this, oftentimes the web then has, like, second-class citizens, where some, based on their country, have more protections than others. From an international lens, what does all the regulatory movement mean for people who might not be in these countries? Is there a spillover effect from the platforms, or is there still a big gap for

[00:32:47] Helen Todd: an international solution for, you know, maybe someone in a country that doesn't have strong online safety or online harms regulation?
[00:32:56] Maya Daver-Massion: Yeah, you know, it's a great question. And ultimately, I don't think we'll fully recognize the challenges with this until regulation comes into play at a further rate than it is right now.

[00:33:09] Maya Daver-Massion: As mentioned, it's really been Australia alone thus far. So with regards to the international landscape, with my optimist hat on again, what I do think is that as these massive markets are expected to regulate and are enforcing regulation, there will be spillover effects, because ultimately we have to acknowledge as well that these platforms are predominantly sitting in these markets.

So it's not only impacting their user base, it's impacting their headquarters. It's something that's going on around the people who are really conceptualizing these topics. So I do feel that for the spillover effects, there's potential, and I'm hopeful about that as well. Another thing that I'm hopeful about in this space is that there's convening between different regulatory bodies as well, to create more of a unified global understanding.

[00:34:01] Maya Daver-Massion: And what's been really great to see from that perspective is that it's the regulators who have regulated, so thinking about Australia and Singapore, for example, but also thinking about those who have not regulated yet, like South Africa, but are looking to do so. So I think with that support and collaboration and back-and-forth dialogue, we'll hopefully get to a place where any interventions that are found can be shared in an effective way.

[00:34:30] Maya Daver-Massion: And that's something that I can say over and over and over again: I think with the space that I work in, the solutions are there. We need people to talk to each other. A big part of it is that there are so many brilliant people and so many brilliant minds; we need the merge to happen. So I'm hoping regulation will help with that as well.
[00:34:49] Helen Todd: And I know one episode that we have on the show is with Andy Parsons, who heads up Adobe's Content Authenticity Initiative. And one of the things that you mentioned earlier on in the conversation is, when you see content online, knowing how much it's been

[00:35:06] Helen Todd: manipulated with AI or not. And Adobe is really leading the way with a content standard, a digital signature encrypted in the content, to help with that. And it's a standard that they're hoping to have widely adopted as, like, one tool in the toolbox to understand what's been touched by AI or not.

[00:35:27] Helen Todd: And, you know, in certain fields, you know, like journalism, it becomes much more important than in, maybe, conceptual art, for example. But you did mention different solutions and intervention points. I was wondering if you could expand on that area as well.
[00:35:44] Maya Daver-Massion: Yeah, definitely. And just on your point around Adobe, I think that’s great. You know, we need the innovation to be coming from everyone. We need to source innovative solutions, understand what’s working well and be able to share those effectively. So I’m always encouraged to hear of different intervention points, regardless of who it’s coming from, really. So I’m glad that you brought that in.
[00:36:05] Maya Daver-Massion: Thank you. With regards to some of the other interventions, the ones that we're seeing quite a bit: I focus a lot of my time and energy on the safety tech landscape, and what safety tech means is effectively technologies or solutions that facilitate safer online experiences. And that could be anything from content moderation,

[00:36:33] Maya Daver-Massion: it could be age verification, it could be a range of different things, but really focusing on the preventative aspect of online safety in particular. There's also a huge emphasis on education, and I would be happy to share some resources later on in the podcast, too, of different CSOs as well as

[00:36:56] Maya Daver-Massion: educators themselves who are focusing on this landscape and providing a lot of really helpful resources on the different types of online harms that individuals may be facing and how to best think about them, really. There's also another angle to this, which is civil society organizations, who are so resilient, being at the forefront of really supporting communities or supporting individuals who have been affected by online harms, building out toolkits, and trying to understand how to support, as an active bystander online, what that looks like, what different harm types mean, what resources are available.

[00:37:37] Maya Daver-Massion: So there are quite a lot of different conversations and interventions being built out alongside what's happening from regulators and platforms as well. And I think, really, as I was mentioning beforehand, it would be incredible to continue to see collaboration in this space. And it's an immense privilege in the work that I do to be able to sit at the intersection of all of these different intervention points and really be able to see what's working and how people can potentially continue to work together.
[00:38:07] Helen Todd: One thing I know that we talked about ahead of this interview, too, is that even outside of the online landscape, offline sex trafficking and harm to women are all too common and heartbreaking in all forms. And one thing in any of these instances is just believing women and supporting women, who are often targeted, because that is an issue.

[00:38:31] Helen Todd: So, baseline, for anyone who's listening: if a woman comes out, or anyone shares about a harm that's been caused, like, believe them, and then seek resources to help. I think that's one very important thing that I know has come up in a lot of different conversations. And I don't know, have you seen the movie?

[00:38:49] Helen Todd: It's a documentary called Another Body. I'll link to it in the description. I actually haven't seen it, but I saw a presentation on it at South by Southwest this past March, and it's a documentary that follows a gal who was deepfaked, and it follows her story. And they actually use the same technology to mask her identity within the film, showing how video- and photorealistic the technology is.

[00:39:16] Helen Todd: So that's one way of using deepfakes in a positive way to highlight the harms. But for anyone who's interested in learning more about it, it's a great documentary that really highlights the nuance and the issues and the trauma. Even though it's, like, an online harm, one of the takeaways from the presentation is that it can still be so traumatizing, even if it's not a physical harm, which is something that really stood out to me.
[00:39:41] Maya Daver-Massion: There’s also a documentary from the BBC on this. So I’ll make sure to find that and link that as well.
[00:39:47] Helen Todd: I'll be sure to link that in the description. And you mentioned safety tech, and actually, in one of our conversations, you also mentioned creativity related to safety tech. So I'd love for you to dive deeper into that with our audience and listeners, the safety tech component side of things.
[00:40:05] Maya Daver-Massion: Yeah, definitely. So as mentioned, safety tech is really focusing on technologies or solutions to facilitate safer online spaces. But I think there is a really exciting creative element to this. You know, I mean, it's obviously a really difficult space, but there are so many safety tech providers who are building out these incredibly fascinating solutions to be able to try and intervene and prevent specific harms from occurring.

[00:40:33] Maya Daver-Massion: So that could be from a harm-specific lens, that could be from a data-type-specific lens, or it could be layered in a lot of different ways. But for my work, which sometimes, I guess, from an outsider or insider view can feel less creative because we're focusing on very specific things, what we're able to see from the safety tech landscape is a lot of innovation and a lot of creativity in trying to think through how to tackle these very specific, targeted harms.

[00:41:00] Maya Daver-Massion: Just to conceptualize what safety tech is again, in comparison to cyber security, which I think people know a lot more: cyber security focuses on systems. Safety tech focuses on humans. Think about it in that way. They kind of complement each other. They work in parallel, so to say, but really, there's a lot of bottom-line benefits to safety tech adoption that go beyond just an individual human.

[00:41:29] Maya Daver-Massion: You know, there's a lot around brand integrity, reputational awareness, user retention, reduced churn, reducing customer acquisition costs, but really being able to creatively think through a solution to mitigate some of the harms before they even reach a human being. And I would love to bring up a few examples of safety tech providers who are
[00:41:52] Maya Daver-Massion: operating in the space right now, who we really admire. So one of them is Modulate. They're a U.S.-based company, and their focus really is on audio detection. So they work a lot on gaming in particular, trying to understand toxic content in audio communications between users, which they do through the use of AI. A UK-based provider that I really admire is called Unitary.

[00:42:20] Maya Daver-Massion: And what Unitary does instead is they really focus on what they call contextual moderation. So taking that for an example, if they see a specific type of online harm... let me develop this out a little bit. If you see a meme, for example, and it's just an image, if you see the image alone, you may not think much about it, right?

[00:42:45] Maya Daver-Massion: It's just an image. However, if you're able to see the text that's embedded on top of the image, or audio that's embedded, or where it sits on a platform as well, it could look very different, actually. So that's what Unitary does. They focus on what they call this contextual moderation, really taking layers of moderation all together to bring together a multimodal approach. And another safety tech provider that I wanted to call out is a U.S.-based one really focused on the end-to-end trust and safety environment and system. They're called Cinder, and they integrate across platform needs to help centralize trust and safety decision making, really taking into consideration a lot of the different data inputs that could come into a trust and safety professional's or an operations professional's mind when it comes to really safeguarding their users.
[00:43:39] Maya Daver-Massion: But there's so much creativity in this space, and we're always speaking to really interesting solutions providers who are working with AI to try and figure out how this can work, figure out how to take the burden off of users, off of platforms, off of governments, et cetera. And actually, we wrote a report on this last year called The International State of Safety Tech, and I can say that there's some more reporting on it that's due to come out in the next few months, but it really articulates this landscape as a whole and provides a lot more case studies. It is a really exciting space to see creativity, AI, and being able to mitigate against online harms.
[00:44:18] Helen Todd: Yeah. With the complex issues, we do need creative solutions. So I I love that you’re bringing up creativity and with safety tech on the show. And I love just. How public and the work that you do is so collaborative because I really do believe that the future has to be collaborative to address some of the biggest challenges that we’re facing right now.
[00:44:42] Helen Todd: You know, one of the things we mentioned is the role of platforms, and platforms have kind of had this self-regulation in interesting ways. You have Meta with their Oversight Board, and even OpenAI is developing their own, I think they're calling it a red teaming network. But can you expand on more of the interventions from platforms that you're seeing, and anything else I might have missed to ask you on the safety tech front?
[00:45:12] Maya Daver-Massion: Yes, definitely. So, from a platform perspective, of course, there are larger bodies like the Oversight Board, which are able to hold Meta, in this case, to account for some of the decisions it makes. There are also a lot of ways that stakeholders across the ecosystem, and safety tech in particular, are collaborating. So, for example, as I was explaining with the UK Online Safety Act, the regulation coming out of the UK really focuses on preventative measures and moderation, and it gives an open space for safety tech to grow and flourish, and for platforms to partner
[00:45:53] Maya Daver-Massion: with safety tech to advance the moderation tactics on their platforms. There are also ways that safety tech can support the interventions of educators or civil society organizations by layering their solutions with specific resources or tools that become available to users when a specific type of content is detected, or when specific content is about to be shared and you want to prevent that, or give someone a nudge to say, hey, maybe that's not what you want to be sharing.
[00:46:26] Maya Daver-Massion: And to close the loop on that: I really believe in the work that educators and civil society organizations are doing across the landscape, alongside platforms and regulators. An example that I always love to bring up here, and would be happy to link to, is a partnership between Bumble and a UK-based organization called Chayn to develop a product called Bloom.
And what Bloom does is provide trauma-informed resources to victims of online abuse. It's built into the platform and gives specific resources to anyone who's looking for them. I think that's a really beautiful way to take resources already made available by civil society organizations, who work directly with individuals to understand their needs, and build them out on larger-scale platforms for the unfortunate event that someone needs them. I'd be happy to share that one as well.
[00:47:21] Helen Todd: Please do. And if you don't mind sharing, because we've talked at a pretty high level about the different harms and regulation, are there some tips for our listeners and viewers? You know, we're all on our screens online all the time, probably way too much.
[00:47:45] Helen Todd: As consumers, concerned parents, and people in communities, what can we do on an individual level to be aware of this, and what can we do about it? Do you have any tips like that you can share?
[00:47:55] Maya Daver-Massion: Definitely. Look, there are so many resources available in this space, and I think in my role I need to be sharing them more often.
So again, there are lots of things I want to link here. But there really are a lot of incredible organizations already putting in the work for us, right? On an individual basis, what I would say is that this is a very daunting space. It can be very triggering, and of course you don't want it to be traumatizing or re-traumatizing in any way or form. It's also a space that isn't difficult to understand: if you're online, you can understand how a bad interaction can come about. And oftentimes, unfortunately, the reality is that people have either experienced or witnessed specific types of harm to themselves or the communities around them.
[00:48:51] Maya Daver-Massion: What I'll say is that you have a lot more power, just in your knowledge as an online citizen, than you may realize, and it's about reinforcing that power through some of the resources already available. One of those I would love to flag is NCMEC, the National Center for Missing and Exploited Children.
[00:49:16] Maya Daver-Massion: They provide resources for parents, educators, and others, organized by age. That's really helpful if you're looking for specific interventions or trying to understand what would be most useful for a middle school student versus a high school student; they have that there.
[00:49:31] Maya Daver-Massion: Similarly, the Internet Watch Foundation also focuses on child safety and provides guidance for parents online. There are also organizations focused exclusively on this area, such as Parent Zone in the UK, a civil society organization that provides guidance for parents.
[00:49:50] Maya Daver-Massion: They've also built a tool with Google, actually, specifically for children between the ages of 7 and 11, I think, called Be Internet Legends, where kids learn how to be safe online through gamification, making it exciting and interesting for them. There are also charities such as Glitch and Chayn, who I mentioned before, both of which provide a lot of resources on documenting online abuse, being an active bystander, toolkits, and just making sure that you're an informed online citizen.
[00:50:28] Maya Daver-Massion: And this really goes back to my initial point that you have a lot of power being online. Allow yourself to use these resources to give you more power, and be a responsible but also safe citizen online, whether you're sitting in the UK or the U.S., or interacting across borders as we're doing right now.
[00:50:55] Helen Todd: I'll be sure to include all of these links, and we have a lot from today's episode. We're giving everyone a lot of homework, but it's all good and important homework. And I'm so enjoying this conversation, I know we could go on and on, but unfortunately we've got to come to a close soon. We've talked about a lot going on across the ecosystem. What gives you the most hope right now?
[00:51:25] Maya Daver-Massion: I think what gives me the most hope, I'll say, is regulation, but because of the foundation underneath it.
And let me explain what I mean by that. I think regulation is the outcome of years and years of collaborative effort across the ecosystem: efforts by platforms to protect their users, efforts by civil society organizations to raise awareness, build advocacy, and campaign for user protections online, and advocacy and work by educators.
[00:52:00] Maya Daver-Massion: All of this has come together to build a system that is now being enforced through regulation. So I say regulation is exciting because I feel it's truly the outcome of all the collaborative efforts, interventions, and work that have already been put into play over the years.
[00:52:21] Maya Daver-Massion: So I’m excited to see where it takes us now and how we adapt over time because we’re going to have to continue adapting.
[00:52:27] Helen Todd: And I guess one question on the regulatory front, because it is such a nice tool for enshrining our values into law with the tech moving so fast: is the regulation that you're seeing coming out
[00:52:40] Helen Todd: flexible enough to adapt to new tech? Or is it one of those things where we'll need creative, imaginative regulation and will have to keep adapting and editing it? Is there flexibility in it, just out of curiosity?
[00:52:58] Maya Daver-Massion: It's a great point. Again, I don't know if it's unfair to say, but I always feel that regulation is inherently not the most flexible,
[00:53:06] Maya Daver-Massion: just because of the amount of time it takes to get regulation in place. However, I think regulating online spaces is so unique and so demanding that regulators will have to be flexible. You don't have a choice, right? Look at the timing: the Online Safety Bill, now the Online Safety Act, came into play
[00:53:34] Maya Daver-Massion: in the UK just as generative AI became accessible to communities at large, globally. You're going to have to be able to speak to that and adapt to that. So, to the extent that it's flexible or not, I don't think it's as flexible as we need it to be, but I'm still hopeful that we can get there.
[00:53:49] Maya Daver-Massion: And I think the continual evolution of technology will demand that regulation stays flexible. So I'm looking forward to seeing how that plays out over the next few years.
[00:54:01] Helen Todd: Same. And we'll definitely have to bring you back to do that check-in. I feel like you mentioned five to ten years;
I can't even picture five years out from right now with how fast things are moving. Well, one question I like to ask all the guests on the show: if you want our listeners and viewers to remember one thing from our conversation, or from the work that you do, what's the one thing you want them to walk away with?
[00:54:27] Maya Daver-Massion: I would really say, I know it's a daunting space, but there are a lot of tools to help you feel safe and empowered. So as much as you may feel overwhelmed and scared, and honestly, many people have been really hurt by online harms, just know that in those vulnerable moments there's an ecosystem of actors who are
[00:54:47] Maya Daver-Massion: looking to support you, and an abundance of resources that are already available and will continue to grow.
[00:54:53] Helen Todd: Thank you again. Is there anything else that you want to share with our listeners and viewers today?
[00:54:59] Maya Daver-Massion: Well, I have a lot of research coming out in the next few months, so I'll be very happy to share that, along with the links. I'm also very open to talking about this topic and connecting. If you'd like to learn more, do feel free to reach out, particularly on LinkedIn; that's where I post the majority of my research and thought pieces. I'd be very happy to engage there.
[00:55:27] Helen Todd: We'll be sure to link to all of this, and be sure to sign up for the Creativity Squared newsletter and the PUBLIC newsletter so you don't miss any of the research when it comes out, because we'll include it in our newsletter as well. Well, Maya, thank you so much, first for the amazing work that you do, because I know it's not the easiest work, and for your time and for sharing all of your insights and perspective with us. It's been so good to have you on the show. Thank you.
[00:55:56] Maya Daver-Massion: Thanks so much for having me and for making the space. I really appreciate it. And I hope that people can see the creativity and the chaos of the online safety landscape.
[00:56:07] Helen Todd: Thank you for spending some time with us today. We’re just getting started and would love your support. Subscribe to Creativity Squared on your preferred podcast platform and leave a review. It really helps. And I’d love to hear your feedback. What topics are you thinking about and want to dive into more?
[00:56:23] Helen Todd: I invite you to visit CreativitySquared. com to let me know. And while you’re there, be sure to sign up for our free weekly newsletter so you can easily stay on top of all the latest news at the intersection of AI and creativity. Because it’s so important to support artists, 10 percent of all revenue CreativitySquared generates will go to ArtsWave, a nationally recognized nonprofit that supports over 100 arts organizations.
[00:56:48] Helen Todd: Become a premium newsletter subscriber or leave a tip on the website to support this project and ArtsWave. Premium newsletter subscribers will receive NFTs of episode cover art and more extras as a thank-you for helping bring my dream to life. And a big thank you to everyone who's offered their time, energy, encouragement, and support so far.
[00:57:09] Helen Todd: I really appreciate it from the bottom of my heart. This show is produced and made possible by the team at Play Audio Agency. Until next week. Keep creating.