Do you allow the A.I. large language model (LLM) chatbots to freely scrape your content or not?
Episode 56 of Creativity Squared features two experts with unique perspectives on this question — Jon Accarrino on content protection and monetization strategies for publishers, and Pete Blackshaw on how brands can optimize their content for the future of A.I.-powered brand discovery through “Prompted Moments of Truth.”
Jon is an award-winning media executive recognized as the Local Media Association’s “Digital Innovator of the Year” for his transformative work leveraging media and emerging technologies. He’s pioneered new platforms and grown digital revenue for media leaders like NBC News, WRAL, and KSL; co-executive produced a Lionsgate horror film with Spike Lee; and won a Shorty Award for innovative HBO campaigns. A graduate of MIT’s A.I. strategy program and a GE Six Sigma Green Belt holder, Jon now guides brand development through his media technology and A.I. strategy consultancy, Ordo Digital.
Pete is Founder and CEO of BrandRank.AI, a startup that monitors brand perception across all major and emerging generative A.I. search engines. As an author, Ad Age contributor, and award-winning marketer, he’s been recognized for innovation in marketing and consumer trust. Pete’s career began at Procter & Gamble, where he co-founded the first “Interactive Marketing” team and was soon after recognized as “Interactive Marketer of the Year” by Ad Age. He also launched PlanetFeedback.com, was CMO at NM Incite, a collaborative venture between Nielsen and McKinsey, served as Global Head of Digital for Nestlé, and, most recently, led Cintrifuse as CEO. Pete holds an MBA from Harvard and continues to shape the intersection of consumer feedback, technology, and brand promise.
In this episode’s value-packed conversation, Helen, Jon and Pete discuss the intricate balance between harnessing A.I.’s potential and protecting content creators’ rights. Jon and Pete explain how LLMs are disrupting traditional search engines, the ethical implications of A.I.-generated content, and the pressing need for fair creator compensation models.
As developers race to build Artificial General Intelligence, Jon and Pete emphasize the importance of Responsible A.I., the value of human creativity, and the need for artists and brands to adapt their strategies in this new digital landscape.
Our guests offer valuable tips and insights on optimizing websites for A.I. discovery, rethinking marketing analytics, and how creators can avoid losing the value of their content to A.I. reproductions.
What’s the future of content creation and brand strategy in the age of A.I.?
So what is scraping? And why should creators, content owners, and brands be thinking about it as part of their content strategies?
Scraping is extracting data from accessible websites, converting that data into a more computer-readable format, and then indexing it within a much larger data set. Scrapers, or scrape bots, are programs that crawl through the internet, vacuuming up as much data as possible. Scrapers may be targeted, but many of them collect data indiscriminately. A.I. search tool Perplexity AI has recently been in hot water over accusations that its scraper even bypasses websites’ security measures.
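To make the "convert and index" step concrete, here is a minimal, illustrative sketch using only Python's standard library. A real scrape bot would fetch the HTML over HTTP and feed the records into a large index; the sample page and field names here are invented for illustration.

```python
from html.parser import HTMLParser

class HeadlineScraper(HTMLParser):
    """Toy scraper: pulls <h1>/<h2> text out of raw HTML, the way a
    crawler converts a page into structured, indexable records."""
    def __init__(self):
        super().__init__()
        self._in_heading = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h1", "h2"):
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading and data.strip():
            self.headlines.append(data.strip())

# In a real bot, this HTML would come from an HTTP request to a target site.
sample_page = """
<html><body>
  <h1>Q2 Earnings Preview</h1>
  <p>Analysts expect...</p>
  <h2>Key Numbers</h2>
</body></html>
"""

scraper = HeadlineScraper()
scraper.feed(sample_page)
records = [{"headline": h} for h in scraper.headlines]
print(records)
```

Run across millions of pages, records like these are what end up in the "much larger data set" that A.I. models train on.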
Pete’s neutral on scraping. He sees LLMs as a growing part of the shopper journey for consumers who want straightforward answers quickly. The critical issue for brands, he says, is understanding how people perceive them through A.I. interactions, just as they’d monitor social media interactions. Brands can then influence the LLMs’ knowledge by creating more content to be scraped. His business collects and provides data to inform how brands can best present themselves to LLMs.
As for the information creators don’t want A.I. to collect, Pete says that questions about creator compensation from A.I. companies are legitimate discussions. The marketplace is starting to take shape through content licensing deals and copyright lawsuits by major publishers, but there’s still a long road to a solution that benefits all sides.
Until then, creators who don’t implement protective measures risk seeing other people repurpose and monetize their content. The risk that A.I. poses for creators is distinct from the traditional ways that online communities might “borrow” a brand’s assets for a meme or other forms of tribute and parody. As Jon points out, those uses often increase brand recognition by driving engagement and consumer curiosity.
On the other hand, anybody can upload an entire book to an A.I. model, change some details, and republish it under their own name. The same can be done with artwork, music, audio, or any other piece of content.
So what can creators do? According to Jon, the first step is to stop using public or free LLMs to create original content. Instead, he recommends grounded A.I. models, which are trained on verifiable data and therefore less prone to hallucinating fictional responses.
For monetized content that already exists online, Jon recommends a three-part strategy to his publisher clients: secure, protect, and monetize.
What assets would a creator want to protect from a hack by a competitor? Those are the same assets that should be secured from scrapers, Jon says. Locking down content allows creators to decide what kind of factual brand information can be scraped, repurposed, and reused by A.I. versus what’s reserved for commercial use only. There are tools and data marketplaces where owners can license their content for compensation. But if that content exists elsewhere unsecured, it could end up in A.I. training data with no way to get it back out.
Jon Accarrino
Instead of a light switch, he says to think of it like a dimmer that can raise or lower content’s visibility to A.I. bots. The situation is different for each brand or creator; those with more resources may want to make more content available for LLMs than individuals or small businesses would.
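One common mechanism behind this "dimmer" is a site's robots.txt file, which names specific crawlers and the paths they may or may not fetch. Below is a minimal sketch using Python's standard-library robots.txt parser and two real A.I. crawler tokens (GPTBot is OpenAI's crawler, CCBot is Common Crawl's); the directory layout is hypothetical. Note that robots.txt is advisory, which is exactly why accusations of bots bypassing it cause controversy.

```python
import urllib.robotparser

# Hypothetical robots.txt implementing the "dimmer": A.I. crawlers may
# read public company pages but not the monetized article archive,
# while all other visitors see everything.
robots_txt = """\
User-agent: GPTBot
User-agent: CCBot
Disallow: /articles/
Allow: /about/

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("GPTBot", "/about/team.html"))      # True: brand info stays scrapable
print(parser.can_fetch("GPTBot", "/articles/scoop.html"))  # False: monetized work is blocked
```

The same file can be tuned per bot and per directory, which is what makes it a dimmer rather than an on/off switch.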
Brands that allow their content to be discovered by A.I. engines, though, have a big upside. Pete predicts that more consumers will start their shopping journey through A.I. systems over time.
Instead of asking Yelp or Google Search for the best restaurant or contractor, more customers will turn to ChatGPT or a competitor for what Pete calls the “prompted moment of truth.”
Pete is considering writing another book titled “The Answer Economy” about how A.I. is changing the way we seek and find information online. Unlike the traditional search model, which offers a list of links that users must click through, A.I. search results can answer a user’s question directly and immediately.
Pete Blackshaw
As Jon points out, those generative search results have produced plenty of mistakes as well. Google’s Generative Search and Perplexity AI have been beset with problems such as providing unsafe answers to mental health questions and repurposing news content without attribution. For Jon, Google’s embrace of generative search confirms the decline of traditional Google Search, tracing back to 2019, when ads and SEO spam became more prominent among search results.
Google and Perplexity AI have responded to A.I. search controversies with adjustments or scalebacks, but publishers that rely on click-through statistics to sell advertising still feel the impact of these companies’ no-click search experiences.
This is a problem for the information ecosystem in general, as publishers draw audiences to sell advertising to fund their publishing. The cycle breaks if publishers lose the ability to sell advertising because users are getting their answers with zero clicks, even though the information still came from the publisher. That’s why Jon says it’s critical for A.I. companies to participate in data marketplaces in good faith.
Jon Accarrino
Despite drawbacks, Pete sees himself and other users returning to A.I. search for quick, direct answers to often complex questions. As a passionate sustainability advocate, he finds that A.I. systems often surface information better than what brands offer themselves.
Asked if A.I. search results can really be trusted if the information comes directly from companies themselves, Pete acknowledges that there are some dishonest companies but argues that the majority of brands comply with marketing ethics and laws.
Pete predicts that more consumers will consult A.I. tools to make purchases, a trend he’s already joined. Considering the rapid adoption rate of A.I. by consumers, A.I.-powered search is poised to become a major funnel for brands trying to reach consumers and vice versa.
Pete Blackshaw
So how can brands and creators adapt to still reach their intended audiences through the filter of A.I. algorithms?
To prepare for the future of brand discovery, Pete says that content creators should go back to the fundamentals of digital branding.
Brands that don’t position themselves as the answer to a customer’s A.I. prompt risk putting themselves at a massive disadvantage.
Now that search engines and social media platforms are bursting with ads and SEO content of varying quality, as Jon noted, objective answers are harder to find. A.I. search offers users a solution to find good answers quickly, as Pete mentioned, but where does the A.I. turn for answers? Whatever it can find.
If you’re in charge of content for a national brand, would you want an LLM’s source of truth to come from what you say or what other people say? An LLM can only respond with what it knows. If it only knows of your brand from news articles and Wikipedia, that’s the information users will receive, for better or worse.
Even though no brand owns 100 percent of its narrative, Pete says that content creators can tip the scales in their favor by refocusing on assets that may have languished under the common marketing refrain of “meeting customers where they are” on social media.
By optimizing owned media that’s fully under its control, a brand can have an outsized impact on how it’s perceived within the larger narrative. As somebody who’s made a career in acquiring and monetizing data, Pete says that a brand’s website in particular is like its anchor in an ocean of online content. A.I. models are programmed accordingly: the algorithms treat owned media as a more reliable source of information about your brand than third-party content. As a result, the content created and controlled by a brand disproportionately influences its perception by A.I. systems.
Owned media isn’t just a website; it also includes FAQs (which Pete says are woefully underutilized by brands) and all the scientific and factual information about the brand.
Pete Blackshaw
It’s not enough to put content on a website, though. Brands that want their content to be picked up by A.I. systems have to make that content easily accessible to scrapers. Pete’s tip for displaying bot-friendly content: actually put the content on the webpage itself, rather than in a .pdf file linked from it.
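A hypothetical example of what that looks like in practice: the FAQ answer lives in the page's HTML, and can optionally be annotated with schema.org's FAQPage structured data so machines can parse the question-and-answer pairs. The brand claim below is invented purely for illustration.

```html
<!-- Bot-friendly: the answer is in the page itself, not in a linked faq.pdf -->
<section id="faq">
  <h2>Is our packaging recyclable?</h2>
  <p>Yes. As of 2024, 90 percent of our packaging is curbside-recyclable.</p>
</section>

<!-- Optional schema.org FAQPage markup exposing the same Q&A pair to machines -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Is our packaging recyclable?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. As of 2024, 90 percent of our packaging is curbside-recyclable."
    }
  }]
}
</script>
```

Text locked inside a PDF may never be extracted by a crawler; the same words in plain HTML are trivially scrapable.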
Despite the risks that A.I. poses for news outlets and publishers of every size, industry leaders and some individual creators are trying to establish new forms of compensation from A.I. service providers.
The New York Times is among a group of news publishers suing OpenAI and Microsoft for damages from alleged copyright infringement. Multiple music recording companies are pursuing the same claims against generative audio tools Suno and Udio. On the other hand, News Corp (owner of the Wall Street Journal) signed a deal this year to license its content to OpenAI. The Financial Times, Axel Springer, the Associated Press, Vox Media and The Atlantic have signed similar deals.
Pete suggests that these negotiations will set compensation precedents and benchmarks to help value other publishers’ content as the A.I. industry matures. Jon adds that the pricing model could lead to different tiers of A.I. systems. Just like HBO spends and charges more per episode than reality TV channels, A.I. dataset builders will likely offer different compensation depending on the quality of the data and what it’s being used for.
At the individual level, multiple celebrities have licensed their voice and likeness for brand promotion, A.I. intimacy services, chatbot characters, and more.
Few creators have embraced A.I.’s potential for generating new revenue quite like Grimes, though. Last year, the pop singer released a tool for fans to use a clone of her voice in music production, offering to share any resulting revenue with the co-creator. Among negotiations between content owners and users, it’s a rare example of a win-win.
Jon Accarrino
Jon says that Grimes’ model aligns with what many content creators desire: a sustainable and mutually beneficial relationship between content creators and A.I. technology.
Whatever solution arises from negotiations for content access, Jon and Pete agree that compensating creators is essential for the health of the online ecosystem. Not only is original content the primary reason that many of us visit the internet, but as A.I. systems develop, it’s in our common interest to make sure they reflect our human cultures and society as fully as possible. This will help ensure that A.I. is explainable, rather than a black box of indiscernible machine calculations based on synthetic data.
As A.I.-powered search gains popularity, brands and creators need to adapt to ensure they remain visible and relevant to their target audiences. By protecting and monetizing or optimizing their owned media for A.I. readability, creating high-quality content, and exploring new revenue models, brands can position themselves to thrive in the “answer economy.”
Pete Blackshaw
The key to success in this new landscape is to understand how A.I. scraping works and to align content strategies accordingly. By doing so, brands can tap into the power of A.I. to reach new customers, build stronger relationships with existing customers, and ultimately drive growth. By putting themselves in control of which content A.I. systems can access, creators can protect their creative identity and ensure that they benefit from any reproductions of their work.
Pete Blackshaw
The future of brand discovery, creator compensation, and the digital information marketplace is still murky. As a new paradigm takes shape, content owners should stay vigilant against unauthorized A.I. scraping and be ready to adapt to new consumer preferences.
Thank you, Jon and Pete, for joining us on this special episode of Creativity Squared.
This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.
Creativity Squared is brought to you by Sociality Squared, a social media agency that understands the magic of bringing people together around what they value and love: http://socialitysquared.com.
Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.
Join Creativity Squared’s free weekly newsletter and become a premium supporter here.
TRANSCRIPT
[00:00:00] Jon: There is so much hunger right now for data. Foundational LLMs; they’ve basically absorbed the known world at this point, and they still need more.
[00:00:12] Helen: Do you allow the AI large language model chatbots to freely scrape your content or not? Today, we have a special episode featuring two experts with unique perspectives on this question.
[00:00:25] Helen: Jon Accarrino on how content creators and media publishers can protect and get compensated for their work. And Pete Blackshaw on how brands can optimize their content for the AI chatbot answers that he calls “the prompted moment of truth”. Jon is an award-winning media executive, recognized as the LMA’s Digital Innovator of the Year for his transformative work leveraging media and emerging technologies. He has pioneered new platforms and grown digital revenue for media leaders like NBC News, WRAL and KSL, co-executive produced a Lionsgate horror film with Spike Lee, and won a Shorty Award for innovative HBO campaigns.
[00:01:12] Helen: Using his background and education from MIT’s AI strategy program and a GE Six Sigma green belt, Jon now guides brands to evolve through his media technology and AI strategy consultancy, Ordo Digital. Pete is founder and CEO of BrandRank.AI, a startup using AI to monitor brands across all major and emerging generative AI search engines.
[00:01:39] Helen: As an author, Ad Age contributor, and award winning marketer, he’s recognized for innovation in marketing and consumer trust. Pete’s career trajectory began at Procter and Gamble, where he co-founded the first interactive marketing team and shortly thereafter was recognized as Interactive Marketer of the Year by Ad Age.
[00:02:00] Helen: He also launched planetfeedback.com, was CMO at NM Incite, a collaborative venture between Nielsen and McKinsey, served as global head of digital for Nestle, and most recently led Cintrifuse as CEO. Pete holds an MBA from Harvard and continues to shape the intersection of consumer feedback, technology, and brand promise.
[00:02:25] Helen: Join this thought provoking conversation that explores the intricate balance between harnessing AI’s potential and protecting content creators rights. Jon and Pete discuss the disruption of traditional search engines by large language models, the ethical implications of AI generated content and the pressing need for fair compensation models. Against the backdrop of a race to artificial general intelligence and the far reaching consequences it could create across industries, Jon and Pete emphasize the importance of responsible AI development, the value of human creativity, and the need for artists and brands to adapt their strategies in this new digital landscape.
[00:03:12] Helen: From optimizing websites for AI discovery and rethinking analytics and revenue models to protecting content from being scraped and getting compensated through data marketplaces, this episode offers tips and insights for navigating the complex intersection of AI and content consumption in our rapidly evolving digital world. What’s the future of content creation and brand strategy in the age of AI? Listen in to find out. Enjoy.
[00:03:51] Helen: Welcome to Creativity Squared, discover how creatives are collaborating with artificial intelligence in your inbox, on YouTube, and on your preferred podcast platform. Hi, I’m Helen Todd, your host, and I’m so excited to have you join the weekly conversations I’m having with amazing pioneers in this space.
[00:04:09] Helen: The intention of these conversations is to ignite innovation and our collective imagination at the intersection of AI and creativity to envision a world where artists thrive.
[00:04:26] Helen: Jon and Pete, welcome to Creativity Squared. It is so good to have you both on the show today.
[00:04:32] Pete: Thanks for inviting us.
Jon: Thank you. I’m so excited to be on your podcast today.
[00:04:36] Helen: It’s great to have you. It’s a special episode. We usually don’t do fireside chats, but each of these gentlemen, who I have so much respect for, come at the big question of: do you let the LLMs, large language models and chatbots scrape your content, or not?
[00:04:53] Helen: It’s in the headlines of the press, people are debating it internally. So it’s very timely and very excited to have two experts in the space. But before we dive in to the discussion, I’d love to get an introduction and a little bit of your origin stories shared for our audience who might be meeting you for the first time.
[00:05:11] Helen: So Jon, I’ll kick it off to you first to introduce yourself.
[00:05:15] Jon: Hi, Helen. I’m Jon Accarrino. I am the founder of the AI and media technology consulting firm, Ordo Digital. But I got my start in the early days at NBCUniversal in the marketing department. I created all the social media accounts for NBC News, helped launch their podcasting strategy, got the news division involved in iTunes, their events at CES, their blogger lounge at the CES booth, and then I worked at a social media agency called Definition 6.
[00:05:45] Jon: I co-executive produced a horror movie with Spike Lee for Lionsgate, and then I’ve been in the local media space for the past decade or so, working with brands like KSL and WRAL. Now I advise broadcasters and AI companies on strategy, executions, and tech.
[00:06:04] Pete: I can’t compete with this. I guess my origin story on the branding side goes back to being a student at the University of California, Santa Cruz, where I co-developed the logo for the iconoclastic mascot, the banana slug, which was later featured in Pulp Fiction.
[00:06:27] Pete: And at that time I was in business school and I was so worried [that] I was going to flunk out that I ended up creating a website called SlugWeb to manage all the media requests that I was getting about my T-shirt being in Pulp Fiction and that fueled this lifelong passion for the internet and the web and all the possibilities there.
[00:06:48] Pete: And I have a little bit of a mix of big companies like P&G, Nestle, Nielsen, and working on startups. Always with a bit of an edge. I’ve always loved the consumer power side of the internet. My first company was planetfeedback.com. We empowered consumers to complain, give compliments to companies, monetize the data, sold it back to companies.
[00:07:01] Pete: Some people said that we were kind of holding them hostage. And now I’ve started this startup called BrandRank.AI, which is really, you know, the tagline is “prompting your brand truth.” And it’s all about, you know, the new forms of accountability that are emerging. I mean, you know, AI is the world’s greatest BS detector.
[00:07:19] Pete: And to some extent we’re using it to really calibrate brand claims with what the AI world believes in, whether it’s sustainability, supply chain, diversity, ethics, trust, you name it. AI is infinitely revealing of brand truth. And I’ve been having a lot of fun. I’m about a hundred days into it, but really thrilled to be here.
[00:07:40] Helen: Well, I’m so excited to have you both on the show. Jon and I, we first met at Social Media Week forever ago. And then Pete is part of the amazing AI ecosystem here in Cincinnati. And we cross paths all the time on stage and at events. And one interesting thing about all three of us is we all have backgrounds in social media marketing, content marketing, and have really seen that industry and are, you know, at the precipice of all things AI together.
[00:08:12] Helen: So to dive into the conversation, Jon.
[00:08:32] Pete: By the way, you’re being very modest. You’re my cheat sheet.
[00:08:23] Helen: Thank you. Okay. So the different perspectives, Jon, you advise on clamping down, so the LLMs can’t search your sites. Pete, you advise brands on how to optimize. So just kind of getting a lay of the land from an industry perspective or bird’s eye view, I’d love to kind of hear both of your perspectives on where we are in this moment in time from the seats that you sit.
[00:08:47] Helen: And Pete, you kind of dovetailed into that a little bit with your introduction. So I’ll let you kick us off on this one.
[00:08:53] Pete: Yeah, well, I guess I don’t really have… It’s not that I’m for or against the scraping. I mean, I think what is important is for brands to understand what’s being said about them, and whatever content is permissible ultimately, you know, works through this legal or rights process. I just think it’s really, really important to understand what is being said, similar to social media. You couldn’t afford not to know what was said on online sites. I think these issues, these questions over content and compensation, are very legitimate.
[00:09:35] Pete: I think it will, I’m certainly, you know, I expect that some type of you know, win-win agreement will be worked out with the New York Times and some of these other lawsuits. And I think that will set precedent for models where there is appropriate forms of compensation. I think it will be tricky.
[00:09:51] Pete: I think we’ll probably need some really sophisticated technology to figure out how those pieces are working. Jon is a much greater expert than I am. But in the meantime, I think there’s a huge amount of upsides for brands to you know, allow themselves to be, you know, discovered by the AI engine.
[00:10:09] Pete: One of the things that I found out, for example, is, you know, everybody’s going to these engines to learn, right? And so brands, there’s a lot of upside for brands to be part of the shopper journey. And this is a new shopper journey. This is kind of replacing search. And so brands put themselves at a massive disadvantage if they are not part of that blended story.
[00:10:28] Pete: I call it “the prompted moment of truth.” And one thing that we’ve learned, and I’ve probably done a couple thousand audits so far, is that owned media, your website, is your number one algorithmic anchor, and so what is said about you is disproportionately influenced by your owned media.
[00:10:47] Pete: And I think that’s a good thing that the LLMs did. They kind of, you know, concluded that the brands themselves probably are closest to the truth. I think where brands have failed is they are just really bad at content liquidity, at kind of making sure the relevant information gets to the LLMs. And I think for consumers, that is really important because, you know, brands are doing a lot around carbon scorecards and the like, and I think you want to make sure that information is getting to the end consumer. I’ll pause right there.
[00:11:19] Pete: I’m sure there’s an even better perspective coming up from my colleague here.
[00:11:24] Helen: Before I hand the mic to Jon, I guess I want to take one step back too, because you quit your job at Cintrifuse to start your company, BrandRank, but you’re putting a lot of eggs in the basket that user behavior on the internet is going to be totally disrupted by the LLMs,
[00:11:44] Helen: and that this is kind of the new optimizing for Google search. And instead of SEO optimization, you’re optimizing for LLMs. So can you touch on like how you see, I guess LLMs like disrupting search just from a industry perspective, I guess?
[00:11:57] Pete: Yeah, well, listen, I’m kind of thinking about writing another book and it will be called “The Answer Economy.”
[00:12:04] Pete: And I think we’re fundamentally in this world where you know, all these, you know, what do you call them, LLMs whatever, they’re just offering answers to consumers and users and that is like a really, really big deal. And they’re delivering the answers in a way that is, you could argue it’s almost restricting choice relative to Search 1.0 where you get 10 blue links, but the responses are really good.
[00:13:09] Pete: And we’re seeing this again and again in our research. I know there’s debates over hallucinations. I think that’s working itself out, but the answers are incredible. The reason why I got hooked on this is that, you know, a couple of days after ChatGPT came out, I was pursuing my passion. I’m a real sustainability nut.
[00:13:26] Pete: And so I was going in there saying, are Pampers sustainable? Is Nestle doing enough on green? What’s the carbon footprint of this or that brand? And I was just, like, blown away at the answers that I was getting. It’s like a hundred times better than what you would get through search, way better than what brands offer themselves.
[00:13:42] Pete: And I think from a consumer perspective, I was thinking, wow, this is a revolution, not only in access, but in getting really good answers that seem to be, you know, some of the LLMs are kind of almost refreshing on a daily basis. I’ve been doing a lot of searches on all the concerns about tampon safety.
[00:14:03] Pete: Cause that relates to certain clients. And you’re finding that there’s just a lot of knowledge there; the consumer benefit is huge. And to some extent, it’s kind of what the web was always supposed to be, this great knowledge broker of information. And so, yeah, I think ultimately more consumers are going to lean on this before they buy.
[00:14:24] Pete: And I’m already doing it. And the sheer ramp up is unlike anything we’ve ever seen. I mean, yes, I’m on the extreme side of geekdom on this, but just even my daughter, when I look at the number of times she’s searching ChatGPT or these other tools on a daily basis, this is absolutely a consumer shift.
[00:14:43] Helen: Thank you. Okay, Jon let’s hear your perspective.
[00:14:47] Jon: All right. So, my consulting company, I work primarily with content creators, and I do work with the AI companies as well. So I kind of have that unique perspective where I see both sides and I’m helping both sides work together. When I work with publishers or content creators, think of, like, a news organization, for example, or a photographer.
[00:14:25] Jon: If you’re creating content, everyone listening right now, if you are a content creator I’m telling you, you need to protect your content and I’ll walk through why, alright? So what is your product? What do you want to protect? Blog posts, your voice or your voiceover artist, your images, your photography. If you got hacked, what would a competitor want to steal from you?
[00:15:30] Jon: That’s what you want to protect. You need to block, block, block that from the AI bots because they will scrape it, reproduce it, reuse it, and you won’t get a dime from any of it. Alright, so I have a three part strategy that I work with clients on. Secure, protect, and then monetize. There’s ways for you to monetize your content.
[00:15:49] Jon: The AI bots will steal it for free. But there’s tools and marketplaces for you to monetize that. The strategy is similar to what iTunes did to Napster. You make it easier for the AI companies to buy your content than steal it. That’s essentially what iTunes did, right? You can always change your minds later.
[00:16:06] Jon: If you want to open up your content and let the AI bots scrape it, you can change it later, but you can’t take the flour out of the cake after it’s been baked. You can’t reverse your decision. So you need to be smart and strategic. There’s a lot of advantages to going slow. Figure out what you want to do, look at what other people are doing, but protect your content now, because you can’t later.
[00:16:32] Jon: Once it’s in the LLM, that’s permanent. You can’t take it back. You can’t take the flour out of the cake. So, what I do is, there’s different ways to secure your content and you don’t have- by the way, this isn’t a light switch. This isn’t on or off, think of this as like a dimmer solution, okay? You can raise or lower the visibility on the light that you want the AI bots to have on your content.
[00:16:57] Jon: So, a news organization, for example, let’s say you’re a blog or a news website. Maybe you want to lock down your news articles, but make your company or corporate information scrapable for the bots. So if somebody asks, who is Helen Todd, or what’s her company, for example, you want that information freely available in the AI bots like ChatGPT.
[00:17:24] Jon: You want the AI companies to have access to that information, but if your blog posts or your infographics or your research is valuable to you, and that’s your business model, then you want to protect that. You don’t want to give that away for free. So you can lock down specific directories on your website or certain file types.
[00:17:43] Jon: And then let the AI scrape what you want, but protect what you want to protect. I have tons of strategies and techniques-
[00:17:51] Pete: Can you do that while keeping the site openly accessible? So if I just want to go visit a blog post, I can, but you can still set the parameters to keep the bots out?
[00:18:03] Jon: Absolutely. That’s what I do every day. Every situation is specific to the brand. So if you’re listening right now, your company, your website, your digital products, everyone’s going to have different priorities, and it’s a custom solution for everyone.
[00:18:24] Pete: Do you think differently in the context of, like, a Perplexity?
[00:18:29] Pete: Where there’s clear attribution and, you can argue, it’s almost easier to click the link to the source. So the content provider might say, okay, that’s something, right? As opposed to the other stuff, where it’s blended to the point where you have no idea where you are in the mix, but you kind of know you’re there.
[00:18:49] Jon: Yeah. With Perplexity or Google’s AI Overviews, yes, you may be sourced and clickable from that search. But unfortunately, what’s happening at Google, and it’s kind of a sad situation in my opinion, started in 2019, when the head of Google Ads took over organic search from the guy who had led it since 1999.
[00:19:14] Jon: And if you think back to when Google started to suck, it happened-
[00:19:19] Pete: You couldn’t find anything organic.
Jon: Exactly. It happened in 2019. There’s a very detailed article called “The Man Who Killed Google Search.” That’s the title; just search for that and you’ll find it. It’s fantastic, detailed, and embarrassing.
[00:19:39] Jon: It’s an account of what happened at Google and why Google Search sucks now. What you loved about Google Search died in 2019 because of the changes they made. They put the head of Google Ads in charge of organic search, and that’s when things went downhill.
[00:20:00] Jon: And now it’s essentially all shopping and SEO spam on the main page of Google. They’ve kind of thrown in the towel and said, okay, well, our search sucks now, everyone hates it, we’ll just let AI answer people’s search queries. What Google is creating now is a zero-click experience.
[00:20:23] Jon: They’re answering the question, and poorly. That’s why the AI Overviews on Google have basically disappeared at this point. It was saying that horses have six legs, or when somebody was searching for information about depression, it was recommending the prettiest bridge to jump off to commit suicide. Ridiculous recommendations like that.
[00:20:47] Jon: So a lot of the AI Overviews have been scaled back, but it’s creating a zero-click experience. Sure, it might be answering the user’s question, but users aren’t getting the full context of that answer. They aren’t getting the full details, and they don’t have to click into the article to go read it. And so it’s killing digital publishers.
[00:21:08] Helen: Well, I guess in general, we all know that these LLMs are going to really change how we interact with the Internet and the web in general. And one of your points is that, for brands where the content is the product, you should protect it, because that will force the large language models to pay for it.
[00:21:30] Helen: Because they need so much training data, they’re kind of backed into a corner and have to play by the rules of the content creators who are pushing back. Is that accurate? And can you comment if I missed anything?
[00:21:46] Jon: First step is to secure and protect your content: lock it down. There are different ways to do that. Then you update your terms of service and your privacy policy to specifically forbid scraping without your authorization. And then, depending on the size of your organization, if you’re a big company like News Corp or the Financial Times or Business Insider or Axel Springer, et cetera-
[00:22:08] Jon: You can get OpenAI to pick up the phone when you call, and they’ll give you millions of dollars to get your content into their LLM. If your news organization isn’t big enough to get a direct conversation with OpenAI, or maybe you don’t have a popular blog-
[00:22:31] Jon: What you can do is put your content into a data marketplace. There are data marketplaces like Dappier, D-A-P-P-I-E-R, Dappier.com. There’s TollBit. There are a bunch of others where you put in your content, and then AI developers can buy it on a CPM basis to get it into their apps.
[00:22:54] Jon: So if you’re building, hypothetically, the best Cincinnati nightlife app, you want to be able to pull in a bunch of current information. This is called a RAG marketplace, RAG as in retrieval-augmented generation. The LLMs have generic information. Pete mentioned tampon health tips; that’s the kind of generic information that’s well established and everywhere on the internet. But if you want to know the top events or concerts this weekend in Cincinnati, that needs to come from current sources.
[00:23:25] Jon: Through a RAG marketplace, companies like Dappier can deliver that high-quality data into your app. You set your CPM, and what I recommend is that you look at your RPM for the page views on your website. If you’re losing that traffic to AI tools, just match your CPM in the data marketplaces to your RPM.
[00:23:47] Jon: That way, you recover the traffic lost to your website through the data marketplaces.
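[Editor's note: the RPM-to-CPM matching Jon recommends is simple arithmetic. A sketch with hypothetical numbers:]

```python
# Sketch of Jon's "match your marketplace CPM to your site RPM" rule.
# All dollar figures and traffic numbers are hypothetical illustrations.

def lost_ad_revenue(lost_pageviews: int, rpm: float) -> float:
    """Ad revenue lost when AI answers siphon off pageviews.
    RPM = revenue per 1,000 pageviews."""
    return lost_pageviews / 1000 * rpm

def marketplace_cpm(rpm: float) -> float:
    """Rule of thumb: set the data-marketplace CPM equal to the
    site's RPM, so 1,000 paid bot/API requests earn roughly what
    1,000 human pageviews would have."""
    return rpm

site_rpm = 12.50          # $ per 1,000 pageviews (hypothetical)
pageviews_lost = 40_000   # monthly pageviews lost to zero-click AI answers

print(f"Lost ad revenue: ${lost_ad_revenue(pageviews_lost, site_rpm):,.2f}")
print(f"Marketplace CPM to set: ${marketplace_cpm(site_rpm):.2f}")
```

At these example numbers, 40,000 lost pageviews at a $12.50 RPM is $500 of lost ad revenue per month, which a matching $12.50 CPM on marketplace requests would recover at equivalent volume.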
[00:23:54] Helen: And in full transparency, Jon is helping me lock down some of the content on Creativity Squared. The question going through my head, which I think a lot of people can relate to, is: I’m not a Lex Fridman-sized podcast yet, but the show is growing.
[00:24:17] Helen: So in my head it’s like, well, if I lock everything down, will I be invisible to the new trends of the internet? And I don’t want to be totally invisible. The happy medium is dimming parts of the site and protecting the most important content, which is the actual podcast content itself.
[00:24:37] Helen: And it is on YouTube, so who knows what’s happening to the video versions of this. And then keeping the other stuff open. But to follow up on that question, Pete, on your side, could you advise brands on how to actually optimize their content for AI search?
[00:24:58] Helen: I got to sit in on one presentation where you said that websites, which a lot of brands have not invested in for a long time, are so much more important now. So I was wondering if you could speak to that. And after that, we’ll dive into how smaller brands should be thinking.
[00:25:18] Helen: ’Cause the big News Corps and the P&Gs of the world can call all of these companies directly, but the middle and the little guys, how should we be thinking about this trend? But Pete, I’ll let you talk about the website first.
[00:25:33] Pete: Yeah, well, listen, again, in the business we’re in, the customers we’re working with obviously want to get their message out.
[00:25:40] Pete: They typically pay for it through paid advertising. And I’ve always believed that owned media is a really under-leveraged opportunity, and it’s disproportionately important in an AI world. I’ve often said it’s as important to understand how to market to the bots as it is to market to consumers. Websites across most of the consumer packaged goods industry, or really most major marketers, have been kind of a wasteland of innovation for about 15 years.
[00:26:11] Pete: You’ve seen very little improvement. Some of that was understandable. When I was leading digital at Nestlé in Switzerland, I kind of drank this Kool-Aid as well: hey, you have to go where the consumers are. The Facebooks and the social media platforms did a great job of convincing us to go to their pages, which we sometimes generously categorize as owned media. But the true owned media is your brand website: your FAQ, your search, everything about the brand, the science behind the brand, and the like. We’ve noticed that type of content is getting disproportionately picked up by the LLMs, and I think that’s a good thing. It gives brands some leverage to get their
[00:27:03] Pete: point across, and I do think brands are generally compliant with what they should and shouldn’t say, so generally you get a fairly accurate response. But there’s a right and a wrong way to do it. A lot of websites are impossibly illiquid, meaning they haven’t been designed in a way that makes them bot-friendly.
[00:27:22] Pete: I advise a lot of companies on this. Companies have a long way to go on green, but there are a lot of companies doing a lot of things to improve their carbon footprint that aren’t getting any credit. We know this because we’re analyzing all of them, and we’re literally able to put a quantitative score on how strong a company is on its green footprint or on responsible supply chains.
[00:27:47] Pete: And a lot of the reason why brand X may be a three in our scoring versus the four it deserves is because they put all their content in PDFs. Oftentimes, the people in charge of the green reporting aren’t really in digital marketing.
[00:28:05] Pete: They’re just like, I’ve got to check off a box, put our green stuff up. And then the bots don’t capture that. I think that’s really important. Or brands are just horrible at FAQs. FAQs are actually heavily indexed by LLMs, and I think that’s a really important development. What brands need to understand is that if they do not give the consumer what they want
[00:28:32] Pete: through their own FAQs or other means of getting information, consumers are just going to get completely spoiled by the LLMs. And those narratives are not always what the brands want. So yeah, I think it’s mission critical. I think you’re referring to the article I wrote on LinkedIn that said AI strategy starts with your boring website.
[00:28:53] Pete: And that right now is not the conversation with marketers. Everybody’s talking about better, faster, cheaper content. But they need to really think about how their existing content, properly positioned on the website, can ensure that their story breaks through these LLMs.
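[Editor's note: one concrete way to make FAQ content machine-readable, in the spirit of Pete's point about bot-friendly websites, is schema.org structured data. A minimal FAQPage JSON-LD sketch follows; the question and answer text are hypothetical.]

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Is the product packaging recyclable?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. The outer carton is recyclable paperboard; check local guidelines for the inner liner."
      }
    }
  ]
}
```

Embedded in a page via a `<script type="application/ld+json">` tag, this is plain text that crawlers and LLM bots can parse directly, unlike the sustainability PDFs Pete describes as invisible to the bots.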
[00:29:19] Helen: And one question from a consumer standpoint. The one thing that makes me a little concerned about what you said is brands controlling their full narrative. And I’m curious, because you do care so much about sustainability and greenwashing: if we’re only looking to the brand websites, and these LLMs are scraping them for how green or sustainable a brand is, how do we actually know it’s not greenwashing, from a consumer standpoint?
[00:29:50] Pete: Yeah, no, it’s a great question. To be clear, brands don’t own 100 percent of their narrative. We look at algorithmic anchors: which sources disproportionately feed into the LLMs. The websites are very much at the top, but there are others that are still very strong.
[00:30:09] Pete: If you’re typing in a corporate brand, Wikipedia is still heavily referenced, especially when you type in P&G versus Tide. And Wikipedia is a tricky one as well, because Wikipedia does a wonderful job of cataloging a brand’s history with activists. At Nestlé, I’d always say Wikipedia never forgets.
[00:30:31] Pete: It was always tricky, because the first thing you would see when you typed in Nestlé was Wikipedia, and then you’d open it up and it would remind you of all these, I’ll put it diplomatically, less favorable things. But no, what we see is a blend. I mean, you’ve seen me.
[00:30:50] Pete: I just told one of the top CMOs in the world yesterday: your brand is effectively a smoothie in the world of generative AI. It takes all these ingredients out there, puts them into a smoothie, and it may or may not taste good. If it tastes good, that’s good for purchase behavior.
[00:31:07] Pete: Oftentimes it doesn’t taste good, because you’ve got bad reviews. You’ve got activists who do a pretty good job of getting out a narrative, and they’re putting out some very legitimate counter-narratives about green scoring. You’ve got third-party credentialing that also still has high indexability, if you will, in the smoothie.
[00:31:27] Pete: So it’s a blend. But I do understand why the LLMs are giving brands a fair shake; they are, after all, the creators of the brand. And I’m absolutely convinced you cannot manipulate this the way you used to. So yes, technically brands could throw out a bunch of garbage, but I think there’s maybe a margin of 10 to 15 percent max to improve the taste of your smoothie.
[00:31:54] Pete: It has to be legitimate. I do sense the LLMs are very on guard about excessive manipulation, because they’re all in competition to be the single source of truth. I think this is going to be a headache for a lot of SEO companies that try to create content farms and game the system. And I’d say that’s good news for the consumer.
[00:32:18] Helen: There’s so much to unpack, and I have so many more questions based on everything we’ve been saying. But one thing I wanted to say, since you mentioned social media, and I mentioned this before we started recording: in some ways it feels like the early days of social media marketing all over again, when brands were really uncomfortable with having to share their brand positioning with users and user-generated content, and didn’t have all that control.
[00:32:43] Helen: And now they’re having to extend that to large language models. When I was debating this, it was like, I’m kind of uncomfortable with people just taking my content and putting it in the LLMs, but I also know the culture of the internet is going this way, so I should just embrace it. Because the consumer side of me will put in a PDF and say, give me the 20 percent to understand the 80 percent of this article.
[00:33:10] Helen: So I love the summaries helping me learn faster and get downloads. The content creator side of me doesn’t want everyone just copying and pasting, which is a workaround for all of these protections, or having my work ingested so people can engage with my content that way. But I know that’s the user behavior that’s coming.
[00:33:30] Helen: And I have this phrase, “the democratization of the consumption of content.” On social, that’s like: do you want the long-form blog? Do you want the video? Do you want the social snippets? And now it’s almost, how can people engage with your content through LLMs and shorter questions and, you know, chatbot-type things?
[00:33:50] Helen: But Jon, when I mentioned that to you, you pointed out the difference between memes and LLMs ingesting full content, and how it comes down to monetization. So I’d love to get your feedback on that.
[00:34:01] Jon: I get it. I mean, yes, I want you to protect your content. And there is a kind of happy medium, where you can put some of your content into the LLMs and it’ll be beneficial to you.
[00:34:13] Jon: So if you’re a large organization, let’s say a TV station company with multiple stations in the United States, you probably want to lock down your news content but make your corporate content, your FAQs and such, available. You do want to have some content available for people to use.
[00:34:35] Jon: And going back to what you were saying about the early days of social media and making memes: it’s one thing to create a meme for, like, Harry Potter, right? Harry Potter with a broomstick, some kind of funny meme or whatever. That benefits the Harry Potter brand and the films. It’s people talking about the films, getting engaged, watching them.
[00:34:59] Jon: But then there’s another side, where the LLMs and AI tools are so good now that you could just import all of the Harry Potter books into an LLM and tell it to create another book that doesn’t exist, or change the story so Harry and Ron have a homosexual relationship. It’ll completely change whatever you want.
[00:35:22] Jon: You can have the LLM write whatever you want that doesn’t exist, and then maybe you go sell or distribute that as fan fiction. So there’s a fine line between creating a simple meme that’s funny, that people engage with, that still kind of promotes your brand, and mass-producing or changing the whole story of the brand, or creating new products without authorization from J.K. Rowling or the studios.
[00:35:52] Jon: The AI is so good it can create videos and images and full books. It’s getting better and better, to the point where it’s scary how clever it’s getting. So it’s not just a simple meme or social media post; it’s recreating the whole canon, or taking the canon in new directions, or creating content that doesn’t exist, in mass volumes.
[00:36:16] Pete: But let me ask a question, because this is where it gets really tricky. Helen gave an example of the way she’ll take an article and try to simplify it. You’re trying to address this at an industry level, but maybe the enemy is the consumer, right? The consumer is uploading books or whatever they can to help them connect the dots, spark ideas, leap to the next level.
[00:36:45] Pete: I wouldn’t say that college kids are cheating so much as they’re just much more sophisticated about how they use content to take them to a different place. So I don’t know. The problem may be as much my daughter uploading a Harry Potter book to try to figure out the next story as the LLMs making it readily available.
[00:37:11] Pete: I mean, again, we’re all private users of this as well.
[00:37:15] Jon: And the context windows. The context windows on the public AI tools get bigger and bigger, to the point where you might be able to upload a 500-page PDF without hitting the limits. So you’re definitely right.
[00:37:31] Jon: There are holes in the strategy, where the user can just copy and paste anything into the public LLMs, which is pretty scary. By the way, if you’re using a public LLM for your own work, please stop. Don’t put HR data, marketing data, or sales strategies into ChatGPT, because ChatGPT is “free” in air quotes.
[00:38:00] Jon: It’s free, but it’s really not. It costs a lot of money to run ChatGPT. They’re spending millions of dollars a day just on electricity and water to run it. So every time you use ChatGPT, it costs them a lot of money. So why is it free? Because you are the product. They need the training data.
[00:38:24] Jon: They need the information. They’re relying on the user, like you’re saying, Pete, to put stuff into the LLM. They need that training data.
[00:38:32] Helen: And I think the workaround everyone should be mindful of is this: if you take an article, especially one behind a paywall, even on LinkedIn, which has locked down its content so it’s not scrapable, say one of your articles, Pete,
[00:38:40] Helen: and copy and paste it into an LLM or PDF it, then you as the user, even though it’s personal use and consumption, are still taking protected content and training the model against the publisher’s wishes, especially if they’ve locked it down.
[00:38:58] Helen: So that’s a consideration to have. But I do think there’s a middle ground between consumers and content creators. You mentioned fan fiction, but I think there are actually interesting new monetization models there. Look at Grimes as an example: she’s given out her voice and said, if you can take this and make music with it, I’ll share the revenue with you.
[00:39:25] Helen: And especially with blockchain and smart contracts, I think that’s really interesting: you can open up worlds for brands and creators, share the IP, and both monetize it when someone takes your story, or whatever your world is, and runs with it.
[00:39:41] Jon: What Grimes is doing is perfect. It’s a compromise, everyone’s happy, and she’s getting paid.
[00:39:48] Jon: She’s saying, here’s my voice, go create music, and if it’s a hit, we’ll split it. And that’s what content creators want, everyone from journalists to photographers to bloggers to poets. If you’re creating content, you may want to be progressive and use it with AI systems, but nobody wants their content used without getting paid for it.
[00:40:14] Pete: But what about when the connection to the artist is a lot murkier? Again, I don’t know if I’m wearing a white hat or I’m a criminal, but I’ve written like 500 songs on Suno. I know the recording industry is going after them, and I know the debate is whether they used real songs to train it or not. But where does that fall?
[00:40:42] Pete: At what point does it just become too generic? People always say, I wrote a song because I was inspired by this music, or, I admire Beyoncé, so she inspired my thinking. I don’t know if it’s any different in this type of context.
[00:40:58] Jon: It’s one thing for a human to be inspired because they love Metallica and want to go write a heavy metal song
[00:41:08] Jon: that’s inspired by Metallica. That’s one thing. It’s another thing to upload the album ...And Justice for All into an AI music tool and say, recreate this with my voice, and make the songs about these 10 topics. The AI tools are that good.
[00:41:28] Pete: That I agree with, but I was referring more to Suno. I mean, they’re going after them, right?
[00:41:33] Pete: Because you can create these incredible songs that are technically all original. Yeah, they kind of sound like other genres.
[00:41:43] Jon: But where did the training data come from? Same thing with companies like Udio.
[00:41:48] Helen: The lawsuit for that, and this came from one of the latest Hard Fork episodes, is actually just trying to figure out
[00:41:56] Helen: what trained it, because they haven’t revealed how they trained the model. I think for all of these things, it goes back to intentionality, whether it’s the end consumers or the tech behind it, but it’s really about the training data and how the model was trained. But sorry, I didn’t mean to cut you off, Jon.
[00:42:15] Jon: No, and listen, I love AI tools. I have AI companies as clients. If they want the premium content to train on, they just need to pay for it. It’s like when you go to Netflix and you see some of the generic movies you probably don’t really want to watch; no one really famous is in them.
[00:42:41] Jon: It’s probably not that good. And then there’s the premium content, like the new Beverly Hills Cop movie. You want to pay top dollar for that premium content, and Netflix does. They pay big money for it, because it’s great content that everyone wants to watch. So there’s a big difference between the generic or lower-level content and the top-tier content.
[00:43:05] Jon: And if you want to use the top-tier content, the top news articles, the political information from journalists, the hit songs, the hit movies, if you want to use that in your LLM, then you just need to pay for it. You can’t just steal it.
[00:43:21] Pete: But are we close to where the market is being made?
[00:43:25] Pete: I forget the amount, but one of the LLMs did, what, a 150 million dollar deal with News Corp? Maybe it was higher. I’m sure the New York Times one will settle somewhere, split it down the middle. And once they do that, they’ve sort of set a market value, where you could almost go to the LLMs and say, if News Corp is worth X
[00:43:45] Pete: and the New York Times is worth X, what should everybody else be worth? Are there enough legal data points to inform a model? I mean, there is a massive amount of money sloshing around with these LLMs, and I’ve got to believe that once you have the precedent of paying, you could create a standard, no?
[00:44:08] Jon: Yeah, there are two ways to approach this. You can upload your archive of content. Let’s say you’re a photographer with all these photos to provide, or you have a whole bunch of blog posts. You can do a bulk deal for your archive content.
[00:44:25] Jon: And then there’s your new content, which would be the RAG content, the current information. You can sell the current information on a CPM basis, and then maybe do a bulk deal, like you’re seeing with the Financial Times, News Corp, et cetera, where they’re selling their archive content. So there are a couple of different opportunities there.
[00:44:47] Pete: Is there real money there? I mean, take Helen’s blog, or podcast. It’s making progress, it’s finding more voice, but maybe it’s in there.
[00:44:59] Jon: There is so much hunger right now for data. The foundational LLMs have basically absorbed the known world at this point, and they still need more. In the foreground right now, we’re seeing the race for the best chatbot and so on. But zoom out to the bigger picture: we’re in a massive race to achieve AGI, Artificial General Intelligence.
[00:45:27] Jon: That’s what this is all about. That’s why platforms like ChatGPT are free right now; they need the data. Everyone is desperately trying to achieve Artificial General Intelligence. If you’re not familiar with it, AGI is basically human-level intelligence. Once some company has achieved it, that’s a huge game changer, to the point where, depending on how much money you have, you can turn on as many AGIs,
[00:45:59] Jon: as many artificial human-level intelligences, as you can afford. So what can you do? Let’s say you want to cure cancer. Okay, I’m going to turn on a million AGIs that are working 24/7 on curing cancer. They don’t need to take bathroom breaks.
[00:46:21] Jon: They don’t need to sleep. They’re brilliant scientist programs dedicated to just that. Or, you want to go hack a nuclear power plant. So it can go both ways, but the race now is for AGI. Can AGI cure our climate crisis? Can it cure diseases? Who knows? You’re already seeing it in Ukraine; that’s a battlefield right now where AI is being tested on what it can do.
[00:46:51] Jon: It gets a little scary when you see, for example, a drone where, on the remote control on your phone, you can just tap on a tank, and the drone, or a whole swarm of drones with bombs, will just go to that target, whatever it is. They don’t need any communication with humans or internet connectivity.
[00:47:09] Jon: They can just operate on their own. So that’s kind of where we are right now, with AI able to do things on its own without human interaction. And that’s coming faster than we realize.
[00:47:25] Helen: Yeah. There’s, again, so much to unpack. One thing I want to point out, since we’ve talked about synthetic media: I want to hear both of your perspectives on it, given the race for so much information, since brands are using AI to create synthetic data
[00:47:44] Helen: and the LLMs are starting to experiment with it. But one thing I want to flag for our listeners and viewers, and I’m sure everyone’s sick of hearing me mention Amy Webb, but her 2024 South by Southwest presentation, and we had Sam Jordan on the show for episode 50 to talk about this too, is that we have to be very careful with the content we see online.
[00:48:05] Helen: Not only are we seeing AI-generated fake content, but one potential harm could be whole fake events, where you can’t discern right off the bat whether something happened or didn’t. That has massive implications, especially if you think about war zones or potential war zones.
[00:48:27] Helen: So be very careful with the content you’re seeing in this day and age. And I think it also dovetails into a conversation we were having earlier about truth. In one of the talks Sam Altman did on stage with Android Jones, I think this was 2023, before ChatGPT blew up, at Burning Man, he said one of the biggest questions of our time is going to be, “Who gets to decide what training information goes into these LLMs?” Pete, one of your taglines is “the Prompted Moment of Truth,” but what does truth mean in this day and time as we try to sort it all out?
[00:49:10] Helen: And Jon, you said something really interesting on one of our prior calls, when I referenced the book The Information: whoever controls the information controls the narrative of society. Are we rewriting truth or not? So I’ll hand it over to you first, Jon, to comment and expand on that.
[00:49:32] Jon: Who owns the truth?
[00:49:34] Helen: Yeah.
[00:49:36] Jon: Yeah, well. So the AI companies-
[00:49:40] Pete: I’m waiting for like, Socrates to show up in the fourth box right now.
[00:49:46] Jon: The, like I said before, they are so hungry for data. They need it to the point where they’re creating synthetic data, which is scary. You think about the AI is creating data for the AI to train on. And what kind of loop is that creating when there’s no humans involved. And it, it’s almost like playing operator.
[00:50:09] Jon: Remember that game you would play with kids, you’d whisper in somebody’s ear and go around in a circle. And by the time it came back to you, it was always kind of manipulated and changed. And so what happens if that happens with synthetic data? And just gets looped around in the AI over and over again.
[00:50:26] Jon: And then what becomes truth? What becomes our reality? And we’re at the point now in society where if we don’t pay attention to that, there could be a lot of damage. I mean, think about early, early days, thousands of years ago. Think of the Bible, right? And before the printing press copies of the Bible were handwritten and somebody would sit down with one copy of the Bible.
[00:50:54] Jon: And then handwrite a new copy of it. That’s what they would do. And the priests, the monks, et cetera, that’s what they would do. They would handwrite copies of the Bible. And just think of the person who gave you the Bible to copy, what if they had changed something? Right. And then the next copy of the copy, that change just gets perpetuated.
[00:51:18] Jon: So it’s something to think about now. Like you said, Helen, what goes into the LLMs? And then how is that getting reproduced? If it’s bad data or a fake event, and that gets reproduced and distributed, at what point does that become true, even though it’s not true? So it’s kind of scary. Synthetic data kind of worries me. I understand that AI companies need something to train on,
[00:51:46] Jon: but I would definitely say we need human oversight on what the AI is creating for itself to train on. Humans need to be in the loop.
[00:51:56] Helen: And these AIs can’t really discern – I use the word truth generally, some listeners might push back – what’s true or not true. Like one of the Google AI Overviews responses was to the question, “how do you make the cheese stickier on your pizza?”
[00:52:10] Helen: And it answered, you know, “Use Elmer’s glue,” which is totally not accurate at all.
[00:52:18] Jon: One of my professors at MIT said something when we were talking about how you prevent AI hallucinations. He’s like, “What are you talking about? The AI is always hallucinating.
[00:52:32] Jon: It’s not conscious. It doesn’t know what it’s talking about. The way generative AI works is it’s just predicting what token comes next after the previous token.” So a lot of times we’ll get a good result, a good hallucination. If you go ask it for some basic advice, the AI will probably hallucinate a good response for you.
[00:52:56] Jon: But when it gives you a bad response, like horses have six legs or you should put glue on your pizza to make your cheese stickier, that’s a bad hallucination. But even if it told you the proper way to cook pizza, it’s still a hallucination. The AI doesn’t know what it’s talking about.
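[Editor’s aside: to make Jon’s point concrete, here is a deliberately tiny sketch of next-token prediction — a bigram counter, not a real LLM, with all names invented. Notice there is no truth-checking step anywhere: the model only knows what tends to follow what.]

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    # Count which token follows which -- this is all the toy "model" knows.
    model = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, token):
    # Pick the statistically likeliest continuation. There is no
    # fact-checking anywhere; plausibility is all there is.
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = "the cheese sticks to the pizza the cheese melts on the pizza"
model = train_bigram(corpus)
print(predict_next(model, "cheese"))
```

Whether the continuation it picks happens to be true is an accident of the training counts, which is exactly why Jon’s professor calls every output a hallucination.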
[00:53:15] Pete: I worry. I think a lot of this is, I don’t know if overblown is the right word, but I just keep hearing the same examples that folks use. I feel like I’ve heard the pizza glue thing again and again, but I just think that, you know, compared to other channels for truth, it’s just remarkable what you’re getting out of these engines.
[00:53:39] Pete: These smoothies are really good. You know, I used to be pretty addicted to Wikipedia. I know there’s debates over the editors and stuff, but it was just an incredible mishmash of truth, if you will. And every once in a while you’d find some mistake, but they had rules.
[00:53:57] Pete: They gave disproportionate credit to the New York Times. You know, if we at Nestlé tried to manipulate, they’d kind of slap our hands. And I’ve done this enough as a consumer, thousands of times. The results are fantastic, and they’re just much more credible than
[00:54:17] Pete: what you would get through other sources. I mean, what’s interesting about our model is that when we say prompting your brand truth, often the truth from the AI engines is much more truthful than what even brands volunteer. I mean, brands are constantly marketing themselves, but does the laundry detergent really work?
[00:54:37] Pete: Does it perform to X standard? Is it truly carbon neutral? The AI engines are figuring out a way of bringing a truth to the table that’s not available anywhere else. And so, yeah, we’re in an arms race to kind of improve it, but I think we’re kind of overblowing it, and I think the reason adoption is up so much is that, in fact, it is so reliable.
[00:55:07] Pete: And there are opportunities, much like the early days of eBay’s circle of trust. Did you like this response? Did you not? You’ve got to presume that’s helping a bit. So I don’t know. I think the reason this stuff is so addictive and taking off is that there are a lot of happy campers out there who feel like they’re getting a lot smarter, they’re getting what they want, and they’re doing it in a very elegant interface where they’re not bombarded with noise and ads and all sorts of other distractions.
[00:55:43] Pete: You know, it’s a revolution in simplicity. And this is something that brands have not been able to figure out, right? Because we built the whole system based on clicks and annoyance, and we’re like a hundred years away from figuring out how to make email easy. And yet the AI, the bots, have pulled off a revolution in simplicity and good information, in my view.
[00:56:09] Helen: The consumer side of me loves not having to scan through multiple search results and pages when I’m just trying to get a quick understanding; it’s so easy. I think I have two concerns. One would be if and when they start inserting ads, because it can already so easily manipulate us. Someone, I think it was Tristan Harris, said that if social media was a race to our lower feelings of fear and anger, the LLMs are a race to intimacy.
[00:56:45] Helen: And that is a, how do I want to say it, very dangerous place to play in terms of manipulating people. So I think that’s one thing. And I think I would feel much better about the LLMs if they were explainable. This is one thing we never got out of social media; everything’s a black box.
[00:57:09] Helen: you know, we can’t understand it and all of this stuff and there is explainable AI out there, but how do they come to get the answers? And most of them are black boxes now. So I would feel much better about the, I love how you said the elegance of simplicity, you know, developers, UXers go that route, but make it explainable.
[00:57:30] Helen: so we understand the logic of where it’s coming from. I think I would feel better if that were the case too.
[00:57:35] Jon: You’re absolutely right. You always want the AI to give its sources. You don’t want it to be a black box, with Chat GPT giving you an answer [where] you don’t know what it’s based on or where that information came from.
[00:57:46] Jon: You always want it to provide its sources. And then I want the AI companies to compensate the sources where they’re getting this information. Whether we see ads or other ways that they monetize it, who knows? If I had to predict the future, we’re probably going to see AI become almost like a car
[00:58:07] Jon: for families and companies, like, can you afford the high level subscription, the expensive subscription for the high end AI that can file your taxes for you and manage your family calendar and do stuff like that? Or do you have the lower level AI that’s not as sophisticated? The probably going to do a subscription models I think.
[00:58:29] Pete: That may have a lot of ads to offset the fact that it’s lower cost.
[00:58:35] Jon: A whole ‘nother expense the family’s going to have to manage.
[00:58:38] Pete: The thing that’s going to be tricky here, and again, I’ve been studying online advertising my entire career, is that we’ve just established this precedent of elegant simplicity and noise-free.
[00:58:51] Pete: And so it’s going to have to be a hell of a bargain to get consumers back to the noise factor. And I don’t know, this is going to almost be as tricky as figuring out how to compensate publishers for being harvested, right? The reality is that I think ad models are really going to change.
[00:59:08] Pete: I know Google wants it both ways. You can tell they’re really tortured there. But I don’t know if I’m ever going to go back as a consumer. And of course, since we do all this stuff through mobile, I don’t know if there is a lot of room for the ad model. So yeah, these are some really complicated topics.
[00:59:29] Pete: And so brands may realize that their only hope for advertising in this environment is just to create good products that really work. Don’t lie. Get the information out there and let smart consumers who are empowered by these incredible tools make the right choice, right? Versus trying to manipulate their preference.
[00:59:47] Pete: This is where it’s going to be really interesting. I said in Ad Age the other day that R&D is the new marketing. Brands put a lot of science behind their products, but they rarely volunteer that information. Marketers are always saying, “Oh, you should only share a few things to stay focused,” but the bots go right after the science, and that may be the way people make choices.
[01:00:12] Pete: I want the best product. So I don’t know, I think we’re just entering a whole new era of advertising. It’ll be much more subscription-based, and consumers are going to search in ways that we never imagined before.
[01:00:30] Helen: And Pete, what about discovery? Because I think that’s the one thing that kind of goes missing with the chatbots: if you have one question and one answer, how does discovery happen for brands that might be adjacent or relevant to that consumer? What are your thoughts on that?
[01:00:49] Pete: This is a big one. I was talking about this a lot this week, and the reality is, I think some of the rules of Search 1.0 still hold. Most consumers discover brands at the category and needs level, right? I’m a mom, I’m having a new kid, what diaper should I choose? I’m getting married, what brands should I be paying attention to?
[01:01:15] Pete: And I think brands are going to have to think really hard about how they become part of that narrative, right? You can easily go to all these engines and just ask at the category level: what are the best toothpaste brands? What are the best X brands? And I think brands are going to have to figure out how they win in that environment
[01:01:33] Pete: where the consumer is actually looking for expertise without referencing the brand. It’s almost like unaided discovery. And I wouldn’t be surprised if you see a lot of brands not only making their brand websites much better, but, you know, when we first started at P&G, they bought every single generic domain name you could imagine:
[01:01:56] Pete: toiletpaper.com, cleaning.com, you name it. At some point they threw them away. I’m sure that all the big companies did this, but there may be some benefit in actually thinking about how do you build a much broader set of content at the category need – at the category level. And you’d almost want the bots to kind of discover that.
[01:02:16] Pete: So at some point your brand shows up in the answer, but this is a really tricky question for brands. Huge because that may be the only way you get discovered is through the normal course of querying.
[01:02:28] Helen: And Jon, on your side, because one of my takeaways is that we don’t have proper revenue models to meet this moment. Since you’re on the publisher side, and these LLMs are eating into publishers’ search results and how they’re making money on their sites, what do you see on the revenue side of things in your conversations with media publishers?
[01:02:55] Jon: Journalism in general, and not just news articles, but all content creators. There’s so much value in the human content that is created for our society, whether it be a breaking news article, a nice poem that makes you smile, or a painting of a cat. Anyone who’s creating content is really contributing valuable information to our society, and think about how sad it would be if that all stopped, it was all AI-generated, and there was no progression from where we are now.
[01:03:30] Jon: There’s a movie called Idiocracy, a Mike Judge movie, if you haven’t seen it; I think it came out in 2006. In the future, society is dumb. Humans are dumb because they basically just stopped and let all the machines and stuff do everything for them.
[01:03:48] Jon: And so we don’t want to get to the point where creators are discouraged from creating content because the AI is just going to steal it and reuse it without them being compensated. So figuring out the revenue model is really important to keep the creators creating content. That’s where these data marketplaces like TollBit or Dappier come in; there are a bunch of others too. It’s about figuring out ways to use those data marketplaces and getting the AI companies involved.
[01:04:20] Jon: And again, I sit in the middle here. I’m working with the content creators and the AI companies. I love AI. I use it for everything. My company is a collective intelligence company. I’ve created 20 custom AI bots that help me with everything in my daily work life. But I use grounded tools. I’m not using public AI tools.
[01:04:44] Jon: I’m using grounded tools. I’m working with publishers to make sure their content gets monetized. There are structures already in place for this to work well for everyone into the future. We just need to make sure everyone’s being compensated and the creators don’t feel squashed to the point where they’re not creating blog posts or poems or photography or news articles. We want that to continue.
[01:05:11] Pete: Very well said. I would only amend that and say: if they elect to be compensated. Everybody should have a right to be compensated, but some may say, hey, we’re kind of indifferent, right? Because we’re getting the value in another way. But I think your point’s really well taken, and I hope we do get to that level. You’d think, with all the great science, if we can even create Chat GPT, there’s got to be a compensation model you can create too, right? It sounds like small potatoes compared to the bigger science.
[01:05:42] Jon: Yeah. Some websites, like Pixabay or Unsplash: photographers and image creators can put their content on those sites for free because they just want to donate it to the world. And every content creator should have that choice. That choice should not be stolen from them.
[01:05:58] Helen: I had an interesting conversation the other day with a gentleman who’s going to come on the show, on the question of how to train the bots. You could take a whole other perspective: if you look at AI as our collective consciousness, digitized, then we should all contribute to it and train it on our arts and works as an impetus of the human collective knowledge base.
[01:06:29] Helen: And I’m not opposed to that idea if we didn’t have to worry about paying bills and, you know, had universal basic income, or all the different ways that people refer to it today. But we don’t have that. So we really need artists compensated, and attribution and so on. But I thought it was an interesting thought experiment that flips the content creator perspective on its head a little bit.
[01:06:55] Pete: You know, one of the big opportunities I have been thinking about: boy, if everything Jon says is correct, how could this play out? We might revive the media, the newspaper business, right? I mean, that poor industry has been on life support, except for the huge players, and the big players are rightfully suing like crazy.
[01:07:17] Pete: But, you know, just think about how you think about when you go to the Cincinnati inquire, just like, it’s like an, it’s like an ad fiesta. It’s like, it’s almost like you can’t even get through to the [information], but just imagine if you had a model where the compensation came through the LLMs and there was enough revenue there to kind of hire another 50 reporters and really get that informed news at city hall that we used to have like 20 years ago, but you know, but yeah, there, there could be some really interesting outcomes.
[01:07:51] Pete: You should almost get on Chat GPT and play it out: if every dying newspaper got an infusion of X, how might society improve? Yeah, it could lead to some really interesting outcomes, because there are some industries where it’s kind of a tragedy that they’ve died the way they have, unless some rich benefactor decides to save them; Jeff Bezos and the Washington Post and the like.
[01:08:19] Helen: I did want to ask one question that I got from LinkedIn, because I was so excited to have you both on the show that I put it up on LinkedIn: does anyone have any questions? And Alex, who is the CEO of InfoTrust, a big data analytics company headquartered in Cincinnati, responded.
[01:08:40] Helen: We touched on the ethical implications he asked about, but he also wrote, “Another question is, what if a brand wants to change how it goes to market? If an LLM scraped an old site prior to rebranding, how will the LLM relearn?” So I’ll let each of you touch on this quickly from your perspectives. Jon, you’re nodding your head, so I’ll let you go first.
[01:09:00] Jon: Yeah. And this is why the RAG marketplaces, the data marketplaces, are so important, because what’s in the LLMs is mostly generic information. So AI companies need to make sure they have access to the fresh, current information. You’re absolutely right: maybe you’ve changed your website, done a brand identity change, or your company’s name has changed.
[01:09:24] Jon: You want that current information available out there everywhere. And so having access to the RAG marketplace through companies like Dappier or Tollbit or Data Raid, that’s important for the AIs. They want to have the latest information, especially if it’s like an AI overview where it’s supposed to be like the most current information, the AI companies need to make sure they have that current information in their system and they’re compensating.
[01:09:51] Pete: I think the LLMs do a great job of figuring that out. Again, I’m a new brand just in the process of raising money, so I’m hypersensitive about anything that’s said about my brand. I eat our own dog food and look at our daily reports on the LLMs to find out whether the latest hiccup played out the way I wanted, and I’m kind of amazed at how quickly it refreshes. It’s getting better and better.
[01:10:17] Pete: And I do think some of the newer entrants like Perplexity and Grok have really raised the bar, and I’ve noticed that Chat GPT is actually figuring it out as well. So I think it goes back to: are you smart about content liquidity, and are you making sure that you’re feeding the bots?
[01:10:34] Pete: I think they’ll eat up anything you give them. And again, that’s separate from the issue of whether there’s fair compensation. But if you want to make sure that your brand story is right, that your latest product initiative and your team are represented right, you’ve just got to feed the bots. I think that’s fairly straightforward. And then ask the question: okay, if they’re going to eat my stuff, should I get some compensation along the way?
[01:10:58] Jon: Yeah. And this is a very personal question. It’s going to be different for every company, every individual. What information do you want to freely give to the LLMs? And what do you want to essentially paywall and get compensated for?
[01:11:14] Helen: And I have a few more questions. I know we’re running a little over; I feel like we could go on for a lot longer. How are you both thinking about analytics? Because I feel like this impacts analytics too. Pete, I saw you nod your head first, so I’ll hand it over to you to answer.
[01:11:32] Pete: I think it’s a renaissance of analytics. I mean, you can’t manage what you can’t measure. And I’ve always believed the results, whether you like them or not, are infinitely revealing of brand value. From our end, I’m trying to create the Nielsen ratings of AI search results.
[01:11:48] Pete: And I’m amazed at how much you can slice and dice the data to help brands understand what’s going on. One of the things that I’m really into, that really makes CMOs lean up in their chair, is brand archetypes. It’s like: if your brand were an animal, or a cuisine, or a historic figure, who would it be?
[01:12:11] Pete: And it’s kind of amazing how AI can almost seamlessly give you another lens to view your brand. And what I like about that is it’s really motivating. If you tell a brand manager that they’ve got a two rating, they may not do anything. But if you say, “Oh no, AI believes you’re really an elephant,”
[01:12:30] Pete: and here are the metaphorical reasons, that lands. So we’re presenting data, but we’re also using visual imagery as part of the analytics suite to really motivate behavior. And there are just so many different ways to slice and dice the data. We measure pre and post, you know, when our dashboard comes out. Alex could be a customer, for example:
[01:12:51] Pete: Like, did it account for your change and did it break in the right direction? Or what was the algorithm, the anchor, like one thing I found really interesting is that a lot of, you know, there’s a lot of like small like ratings and review sites that are disproportionately showing up in results.
[01:13:13] Pete: And you kind of, and then suddenly they’re elevated in importance and the brands, like they may not care about them at all. And I’m like no. You know, Helen’s blog, Helen’s brand of the day blog is actually really impacting the marketplace. You might want to court her a little bit.
[01:13:33] Pete: And so, but that’s the kind of analytics that you can provide in this type of environment. And I think it’s a, I’m really excited that I’m early in this space. I think there’ll be a lot of players, but I think it’s going to be a really interesting opportunity, but there’s very little that you can’t quantify.
[01:13:49] Pete: You do need to bring your own brand smarts to make the data relevant. It’s like data, you know, I bring more of a CMO’s perspective on what’s valuable, what’s not. And so that’s part of the equation as well, but I’m really excited about the analytics side.
[01:14:04] Jon: Analytics are tricky because it’s completely different from what we have now for companies that are used to looking at visitors to their website. Ask your webmaster how many bots are visiting your website. Chances are it’s probably somewhere around 50 percent now, unfortunately. So 50 percent of your website’s audience are AI bots. Think of it that way. It’s getting harder to figure out who’s visiting your website and why they’re there.
[01:14:41] Jon: You’re not just creating content for human people; you’re also creating content for the bots now. And then those bots are consuming ad impressions too. So it’s getting really messy really quickly, and it’s kind of scary for a lot of publishers from that perspective. It’ll be interesting to see what happens in the next year or so. But it’s getting a lot harder to measure who’s visiting your website and why, and whether it’s a human eyeball that’s seeing those ads.
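[Editor’s aside: one rough way a webmaster might estimate Jon’s 50 percent figure is to split server logs by crawler user-agent strings. The signature list below is partial and will go stale; check each AI vendor’s published crawler documentation for the current tokens, and note that user agents can be spoofed, so this is an estimate, not a measurement.]

```python
# Known AI crawler user-agent substrings (a partial, assumption-based list).
AI_BOT_SIGNATURES = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot", "Google-Extended"]

def split_traffic(log_lines):
    # Separate likely AI-bot hits from (presumed) human hits by user agent.
    bots, humans = [], []
    for line in log_lines:
        if any(sig.lower() in line.lower() for sig in AI_BOT_SIGNATURES):
            bots.append(line)
        else:
            humans.append(line)
    return bots, humans

logs = [
    '1.2.3.4 "GET /article" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 "GET /article" "Mozilla/5.0 (Windows NT 10.0) Chrome/125.0"',
    '9.9.9.9 "GET /article" "CCBot/2.0 (https://commoncrawl.org/faq/)"',
]
bots, humans = split_traffic(logs)
print(f"{len(bots) / len(logs):.0%} of hits look like AI bots")
```

A split like this is also the starting point for Jon’s ad-impression worry: any impression counted on a line in the bots bucket was never a human eyeball.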
[01:15:17] Helen: Well, we’ll have to have you both back on in six months or a year to do a check-in and see how the cards are laying out, or how the cookie’s crumbling, whatever the analogy is.
[01:15:29] Helen: Okay. Well, I have a few more things. One, can you each plug your companies and what you do? I want to give you a chance to leave our viewers with how they can connect with you. Pete, I’ll let you go first.
[01:15:42] Pete: Yeah, sure. We help brands protect, grow, and nurture trust through AI search and discovery; it’s part reputation monitoring, part new product evaluation.
[01:16:00] Pete: And yeah, just reach out at Pete@brandrank.AI, or go to our website. It’s got a lot of information. We’ve got a couple of white papers that are out that if you’re really wanting to geek out, we’ve done a lot of analytics around sustainability and how you can use AI to kind of slice that into different buckets and then attributed to brand good actions or bad actions. And then also did one just on general implications for the search space. But yeah happy to dialogue. I love, as you can tell, I love this space, so you’ll always get a response for me. And I have no issues with pushback because I’m pretty humble about the fact that we’re still pretty early and there’s a lot to learn about the space, but I encourage everybody to jump in.
[01:16:44] Jon: I always say AI is like a tool. Like a hammer: you can use a hammer to build a house for a homeless family, or you can use it to smack your neighbor on the head. So, responsible AI use. That’s what I consult on. I work with both AI companies and content creators all around the world.
[01:17:04] Jon: And my company is Ordo Digital, O R D O digital.com. And if you want to learn how to protect your content, or if you want to learn how to use AI tools that are grounded and not going to expose your proprietary information to the internet. I recommend tools like Elvex, E L V E X or if you wanna experiment with a free tool, I would try Cohere.
[01:17:32] Jon: They have a grounded AI chatbot where you can use sensitive information like financial stuff or sales strategies and not worry about it getting retrained with the AIs. But contact me at Ordo Digital, but ordo means it’s Latin for order, in case you’re curious, but ordodigital.com
[01:17:55] Helen: And you’ve mentioned “grounded” a few times. That just means that it doesn’t touch the public AIs, right? Is that correct?
[01:18:03] Jon: It depends on the AI tool you’re using, but some of them may absorb the data that you give them, whether it’s for their own internal training purposes or it just gets loaded into the LLM. So be very careful about what sensitive information you put into a chatbot, and make sure it’s grounded.
[01:18:25] Jon: Elvex, Cohere; there are a bunch of different AI tools that are grounded. Or WordPress tools like Nota, N O T A, if you want tools to help you create headlines, article summaries, or social media posts. That’s an awesome tool for any publisher, blogger, news organization, et cetera, to use on your website.
[01:18:47] Jon: I have lots of other recommendations. We could do a whole episode about that, but yeah, just contact me. I’ll be able to point you in the right direction.
[01:18:54] Helen: And we’ll be sure to include some of these links in our dedicated blog post that goes out with this. Okay. We covered so much today. It’s a Friday.
[01:19:05] Helen: It’s almost six o’clock East coast time. So I appreciate your guys being on this call and the interview. Final thoughts. We talked a lot, and actually I’ve listened to the podcast, the interview the other day, and there was something that they did where they do the regular interview and three days later, they call back up where they both marinate on the conversation and then reflect a little bit more.
[01:19:17] Helen: And I feel like this is one of those interviews where we should check back in on how this marinated, though we’re not going to, because I’m going to get this published next week. Closing thoughts, just in general on the conversation, because we covered so much. But the main question I ask all my guests is: if you want our viewers and listeners to walk away with one thing, what’s that one thing? So whoever wants to jump in first, I’ll let you guys jump in.
[01:19:54] Jon: Well, we kind of hit on it. I would say use AI responsibly. It’s a tool; don’t use it for evil purposes. Protect your content, use grounded tools, and look into data marketplaces if you want help monetizing your lost traffic from AI Overviews. And I write a column for TVNewsCheck with lots of really actionable tips, so go there for more information.
[01:20:27] Pete: My advice is: you’re appealing to an audience of innovators and leaders who are shaping this medium, and in any type of new medium there are a lot of complicated tensions. It’s tempting to want to bear-hug a truth on either side, but these are just complicated issues.
[01:20:56] Pete: I mean, as leaders, we have to manage the very complicated tension of, you know, efficiency versus ethics. I mean, you can cheap down just about anything in the AI world, but there’s trade offs. Innovation and regulation is another big one. And I think today we’re really digging into, you know, almost like, you know, attribution slash compensation and visibility.
[01:21:23] Pete: And I think they’re just tricky issues. And I think. I think this dialogue was a good dialogue and I give you big high fives, Helen, for nurturing this. And I think we’re just going to have to work through a lot of the muck. I think the right model will emerge, but it’s only going to happen if we really, you know, try to figure out like, how do we split down the middle and and get comfortable with that tension zone, cause I don’t think that’s going to get away anytime soon.
[01:21:50] Pete: So, and I’ve always said throughout my career, being in digital leadership is about managing, not resolving tension. And never has that been more true with some of these complicated issues. So thank you for, you know, giving us a forum to work through it and let’s hope that others keep it going. And Jon, I hope you and I can maybe come back in some context to see where we are. Maybe the ultimate marination. I love the work you’re doing. I can tell you’re really trying to manage it very holistically.
[01:22:24] Jon: I’m working with both sides here. The AI companies are my clients, and so are the content creators. I’m trying to find that middle ground where everyone is happy: the content creators are still excited about creating content, and the AI developers are still excited about creating new tools that are going to benefit everyone. That’s the middle ground I’m trying to work with everyone to achieve, because if it’s one-sided on either side, everything falls apart and no one’s happy.
[01:22:58] Pete: I like that. I like that a lot.
[01:23:01] Helen: Well, I’m already looking forward to our next conversation and we haven’t finished this one yet. But it is a Friday end of day. So thank you both so much for all of your time and this rich conversation. I’m very excited to get it out into the world and really get everyone else’s feedback and perspective.
[01:23:19] Helen: And it’s definitely going to be an evolving conversation. And so we’ll have you both back on the show at some point, but thank you so much and have a wonderful weekend too.
[01:23:31] Pete: Thank you, Helen.
[01:23:32] Jon: Thanks Helen. This was fantastic.
[01:23:36] Helen: Thank you for spending some time with us today. We’re just getting started and would love your support.
[01:23:41] Helen: Subscribe to Creativity Squared on your preferred podcast platform and leave a review. It really helps and I’d love to hear your feedback. What topics are you thinking about and want to dive into more? I invite you to visit creativitysquared.com to let me know, and while you’re there, be sure to sign up for our free weekly newsletter so you can easily stay on top of all the latest news at the intersection of AI and creativity.
[01:24:05] Helen: Because it’s so important to support artists, 10 percent of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized nonprofit that supports over 100 arts organizations. Become a premium newsletter subscriber or leave a tip on the website to support this project and ArtsWave.
[01:24:23] Helen: And premium newsletter subscribers will receive NFTs of episode cover art and more extras to say thank you for helping bring my dream to life. And a big, big thank you to everyone who’s offered their time, energy, and encouragement and support so far. I really appreciate it from the bottom of my heart.
[01:24:41] Helen: This show is produced and made possible by the team at Play Audio Agency. Until next week, keep creating.