What are you really agreeing to when you sign up for ChatGPT? Will Europe continue leading the way in protecting citizens’ digital rights? And should business leaders think twice before incorporating large language models into their operations?
On the latest episode of Creativity Squared, tech attorney Derrick L. Maultsby, Jr. meets us at the intersection of A.I. and law to offer his perspective on the evolving legal landscape for creatives and businesses alike.
Derrick is a Senior Associate Attorney at Frost Brown Todd, where he is a member of the Data, Digital Assets and Privacy; Corporate; and Intellectual Property practice groups, and is one of the few Black male attorneys working in this space in the country. He specifically helps coordinate the firm’s efforts on artificial intelligence, digital assets, and Web3. Derrick has been invited to speak on various topics at the intersection of law and technology at conferences and universities across the country.
Derrick is also committed to creating a more equitable and inclusive environment within his law firm by serving on the firm’s Diversity, Equity, and Inclusion committee. In addition to being on the Regional Advisory Board of Venture for America, Derrick also serves on several community boards, including his role as Vice President of the Duquesne Black Alumni Network Board.
In 2021, Derrick was named a “Trailblazer in the Law” by the American Lawyer Magazine, and during the summer of 2022 he was named a “Lawyer on the Fast Track” by the Legal Intelligencer. Most recently, Derrick was named a “2023 Rising Star” by Super Lawyers and one of Best Lawyers’ 2023 “Ones to Watch in America.”
Episode 20 does a deep dive into the current legal landscape around generative artificial intelligence. While nothing discussed is legal advice, both creatives and business leaders alike will learn tips to safeguard their intellectual property and harness A.I. ethically to grow their businesses.
Find out why Derrick looks to consumer privacy and our European friends for regulatory framework parallels, what we’ve learned from the Zarya of the Dawn case on A.I. art copyright, and how creatives can protect their IP, including actors when it comes to scanning their likeness.
You won’t want to miss how to ethically navigate generative A.I., IP, and the law.
When ChatGPT first hit the scene, many rushed to experiment with the novel technology before reading the fine print.
Derrick cites the viral cautionary tale of a lawyer who had to admit to using ChatGPT to write a legal brief that turned out to reference court cases the A.I. completely made up. These “hallucinations” are a major chatbot issue. Copyright, Derrick says, is another issue that creatives and businesses should navigate carefully when using artificial intelligence. According to ChatGPT’s terms of service, all input is fair game for training, and OpenAI makes no guarantee that the system’s output will not contain copyrighted material. So there are potential pitfalls for creatives and businesses at both ends of a large language model.
Derrick says that content owners shouldn’t input any proprietary material that they wouldn’t want to see included in somebody else’s A.I.-generated work. At the other end, Derrick suggests that nobody should rely on A.I. output as the sole or major component of a commercial product, in case it quotes copyrighted work without attribution or simply turns out to be false. OpenAI’s terms of service state that it’s the user’s responsibility to verify that the information they receive from ChatGPT doesn’t contain copyrighted material.
Some A.I. solutions are addressing this by offering business licenses with greater data protection. With any A.I. solution, though, Derrick says that proactivity is key. He’s already assisted some firms to develop A.I. usage policies that get out ahead of how employees may encounter A.I. in the workplace. He says that an effective policy should empower employees to improve their efficiency, while protecting sensitive information.
As Congress and state legislatures ramp up to address the legal gray areas for A.I. usage in the U.S., Derrick discusses how Europe is once again leading the charge to regulate innovative new technology.
Derrick started out as a data privacy attorney during what he calls the “renaissance of consumer privacy.” In the mid-to-late 2010s, the European Union was enshrining a collection of digital rights for its citizens through the General Data Protection Regulation (GDPR).
A critical component of the Europeans’ legal framework is the idea that companies worldwide are subject to the same restrictions on collecting data from European users. So a company headquartered in a jurisdiction with less robust digital privacy laws can’t skirt the GDPR if it wants to serve its product to European users.
The GDPR inspired other countries and U.S. states to enact similar digital privacy laws. In the U.S., digital privacy regulation is a patchwork of various state laws. Multiple federal bills seeking to establish national data privacy laws have stalled in Congress.
Now the EU is again poised to influence how the world’s governments regulate the newest impacts to our digital lives. Derrick discusses the EU AI Act, which has been drafted and is currently under review by EU member states.
The EU AI Act would require a review of generative A.I. systems before commercial release, ban real-time facial recognition, prohibit A.I. systems designed to manipulate children, and establish a method for classifying the risk of A.I. systems.
And just like before, Derrick says U.S. states are following suit. New York, for example, has already passed a law prohibiting businesses from using A.I. to make hiring decisions. Derrick says he expects the pace of lawmaking to pick up as the EU solidifies its framework.
Like the GDPR and laws that followed, Derrick says that the focus of A.I. regulation should be on transparency, respecting customers, and respecting their ownership of their data. Derrick predicts that the A.I. companies striving to develop and distribute their product in an ethical way right now will be better off later when regulations arrive.
Can you copyright a work if you have an A.I. help you produce it? What will happen to your personal brand if A.I. can clone your likeness? How will A.I. skew the power balance between companies and creators?
These are issues that Derrick says he is already encountering with his clients, including models, influencers, and artists.
One of the most significant cases to come up recently is a comic book creator’s battle with the U.S. Copyright Office (USCO) to obtain rights for her work, Zarya of the Dawn. Creator Kris Kashtanova used the A.I. image generator Midjourney to illustrate the graphic novel. When she filed for copyright, she originally received full protection of the entire work. But after a secondary review, the USCO decided to rescind protection for the book’s A.I.-generated images on the grounds that there was too much “distance” between Kashtanova’s input and the output images. The USCO compared the use of A.I. image generation to hiring an illustrator, in which case the illustrator would hold the copyrights for the images, not Kashtanova.
Derrick says that this case shows how our understanding of creative ownership over A.I.-assisted works is evolving. As institutions like the USCO analyze more works, Derrick says that understanding will likely evolve further.
As A.I. matures, companies such as Adobe are spearheading efforts to trace the provenance of digital creations. Through such initiatives, Derrick speculates that creatives may gain a greater ability to prove their works are human-created and deserving of protection.
But what about when you are your creative work? Hollywood has been grappling with this question since news reports this year revealed how production studios scanned actors’ bodies for reasons not fully known.
For Derrick, the number one question is “what does the contract say?” Members of the Screen Actors Guild, for example, never had contracts that contemplated what’s possible now with synthetic A.I. avatars, the metaverse, and voice cloning.
One thing that Derrick doesn’t think will change very much is the power struggle between a performer and their employer over the rights to their likeness on-screen and off. With experience from both sides of the negotiating table, Derrick acknowledges that companies will try to establish their contractual ownership of a performer’s likeness in the broadest terms possible. That might make it easier for a company to get away with using that likeness beyond the scope of what the performer thought they permitted.
Derrick acknowledges that union actors in particular have the advantage of professional lawyers bargaining on their behalf against powerful studios, a luxury that creatives in many other professions don’t have at their disposal. Yet Derrick says he sees signs that non-union creatives are taking a page from the actors and standing up in their own ways. He cites a lawsuit against Netflix brought by participants on the reality show Love is Blind over poor working conditions. And laws may be catching up, too: Illinois recently passed a law mandating that child influencers and children of family vloggers share in viewership profits depending on how much they appear in content.
In the uncertain and rapidly changing landscape of artificial intelligence’s effects on privacy and intellectual property law, Derrick says that creatives need to be more diligent than ever. Despite power imbalances and unequal access to legal representation, Derrick encourages creatives to advocate for their ability to thrive.
The silver lining of A.I. is that everybody, not just corporations, can utilize it to learn, empower themselves, and grow. There are also tools available for creatives to help protect themselves from being taken advantage of by artificial intelligence. And if you’re looking for a lawyer to help navigate this tricky terrain, Derrick says he’s a resource for the community to provide representation or just answer questions.
Here’s a summary of the tips that Derrick shared in Episode 20:
- You own your image, your likeness, and the things you create, so don’t give those rights away to A.I. companies without understanding the risks.
- Don’t rely on A.I. output as the main component of anything you commercialize, and verify that the output doesn’t infringe on anyone else’s intellectual property.
- Read contracts, waivers, and terms of service before you sign, and ask for revisions; the worst they can say is no.
- Engage legal counsel where you can, and where you can’t, seek out resources and ask questions so you go into agreements with your eyes wide open.
Thank you, Derrick, for being our guest on Creativity Squared.
This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.
Creativity Squared is brought to you by Sociality Squared, a social media agency who understands the magic of bringing people together around what they value and love: http://socialitysquared.com.
Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.
Join Creativity Squared’s free weekly newsletter and become a premium supporter here.
TRANSCRIPT
Derrick: I think the number one thing is is that you are the owner of, you know, your image, your likeness, you are the owner of things you’re creating. So when leveraging AI, you know, solutions, make sure you do it in a way that is thoughtful and you don’t give away certain rights that you already own to these AI solution companies, you don’t compromise your art or, or your creativity by utilizing certain components from these AI solution companies in your final products.
And if you do so, you do so informed that you understand the risks of that. And then more than anything, when you know you’re dealing with these larger entities, larger companies, you understand that you own your image, you own your likeness.
Derrick: It is okay to, you know, review these contracts and ask questions and ask for revisions, when it comes to the license that you’re providing to your image and likeness and you know, the worst they can say is no, but it’s important to, I think, review it, to understand it and to go into things with your eyes wide open.
Derrick: Legal counsel, I think, is the best way to go about that. But I, you know, like I said, I understand that there are barriers to that and, uh, making sure that you can try to find resources or do your best to ask questions when, you know, of these companies and things along those lines is a really important thing to do.
Derrick: And I think it will, you know, save you heartache in the long run.
Helen: Derrick Maultsby Jr. is a Senior Associate Attorney at Frost Brown Todd, where he is a member of the Data, Digital Assets and Privacy; Corporate; and Intellectual Property practice groups, and is one of the few Black male attorneys working in this space in the country.
Helen: He specifically helps coordinate the firm’s efforts on artificial intelligence, digital assets, and Web3. Derrick has been invited to speak on various topics at the intersection of law and technology at conferences and universities across the country. Derrick is also committed to creating a more equitable and inclusive environment within his law firm by serving on the firm’s diversity, equity, and inclusion committee.
Helen: In addition to being on the regional advisory board of Venture for America, Derrick also serves on several community boards, including his role as vice president of the Duquesne Black Alumni network board. In 2021, Derrick was named a trailblazer in the law by the American Lawyer Magazine, and during the summer of 2022, he was named a lawyer on the fast track by the Legal Intelligencer.
Helen: Most recently, Derrick was named a 2023 Rising Star by Super Lawyers and a 2023 Best Lawyers Ones to Watch in America. I had the great pleasure of meeting Derrick at Black Tech Week here in Cincinnati this year and couldn’t be more excited to have him on the show to discuss all things AI and the law. In today’s episode, you’ll learn about the current legal landscape around generative AI.
Helen: While nothing discussed is legal advice, both creatives and business leaders alike will discover some tips on how to be empowered to protect your IP and leverage artificial intelligence with an ethical approach to grow your brands and businesses. Discover why Derrick looks to consumer privacy and our European friends for regulatory framework parallels, what we’ve learned from the Zarya of the Dawn opinion on the ability to copyright your AI art, how creatives can protect their IP, including actors when it comes to scanning their likeness, and how to stay ahead of the law in this fast-paced landscape.
Helen: Enjoy.
Helen: Welcome to Creativity Squared. Discover how creatives are collaborating with artificial intelligence in your inbox, on YouTube, and on your preferred podcast platform. Hi, I’m Helen Todd, your host, and I’m so excited to have you join the weekly conversations I’m having with amazing pioneers in this space.
Helen: The intention of these conversations is to ignite our collective imagination at the intersection of AI and creativity to envision a world where artists thrive.
Helen: Derrick, it is so good to have you on the show. Welcome to Creativity Squared.
Derrick: Thank you for having me, Helen. It’s good to see you again.
Helen: Yeah, Derrick and I actually met at Black Tech Week here in Cincinnati. Seems like it was yesterday, but it was a couple weeks ago. You know, what is time these days anyway?
Derrick: Yeah, completely agree. And I mean, what an incredible conference that was. It was awesome to get to meet you while I was there. Yeah. I’m already looking forward to next year.
Helen: Well, and I think one thing, uh, I’m super excited. You’re the first attorney that we’re having on the show.
And when it comes to all things AI and legal, it’s an exciting time to say the least. And I think one thing that you want to make sure to say out of the gate is that this is not legal advice. So, if you want to say that specifically in your own legal terminology, I’ll let you do that.
Derrick: Yeah, absolutely. Thank you. So, I mean, the conversation today is purely for informational and entertainment purposes. You know, nothing is to be construed or taken as legal advice, and before you do anything, you should consult a lawyer and engage a lawyer to advise you on your business plan and what you’re doing.
Derrick: And you should not cite this podcast, or what I said on this podcast, as, you know, your legal reasoning for doing something.
Helen: And Chat GPT for that matter, which we’ll get into.
Derrick: It’s really funny because, and this is maybe a great way to kick the show off, there was recently a lawyer who relied upon ChatGPT to draft a brief that he then submitted to the court, and ChatGPT made up law to support its argument that didn’t actually exist. So, when the court was checking the case law citations, they realized that they were fictional.
Derrick: They did not exist. And so that lawyer’s law license, you know, has been suspended and is pending investigation and all those types of things. So it’s so interesting. “Don’t rely on ChatGPT as-is for legal work” is exactly it.
Helen: In conversations, because, you know, I talk about my podcast a lot, uh, even people who are somewhat well versed in Chat GPT don’t realize that it hallucinates.
And I think that’s just something that, you know, if you aren’t aware of that, hallucination is a term where ChatGPT makes up stuff because it’s trained to give you what you want. It’s a prediction model. And sometimes what it thinks you want might not be based in reality. So we have a great article on creativitysquared.com that really outlines what hallucination is.
But yes, everything that it spews out, any of these generative AI chatbots... do not take it as 100% real. And I think this also opens up something interesting from an earlier discussion we were having, which is the terms of service and the burden of this. So, can you talk a little bit about OpenAI’s terms of service and what that means in relation to the authenticity or the legal aspects of what it generates?
Derrick: And I think it could be really cool to even have, like, a series of, you know, podcasts, maybe this is something I need to put together, that just pulls up different terms of service for all the different, you know, generative AI and AI solutions and picks through them and reads them line by line, picking out all the problematic things or dissecting them a little bit. That could be a lot of fun. Put that on my big board of things to do.
Derrick: But, speaking of OpenAI specifically, ’cause we don’t have time to go through all of them. One of the things that they really point out as a theme throughout their terms of use is that they are not standing by the accuracy of any of the output that, you know, you’re provided from whatever your input is.
Derrick: And so what that means is that they openly are saying that this is information that could be inaccurate, and it’s your job to verify its accuracy. So the burden is pushed back on you to ensure that anything that you utilize in that output, you verified it to, to make sure it’s accurate and true. Beyond that, when you’re talking about the input, which is, you know, sort of your prompt that you’re going to place into the solution.
Derrick: They also are giving themselves a license to be able to train their algorithm. And a piece of that is just that, because your input and others’ input is training the algorithm and, you know, OpenAI is being given a license to that input, that means the output is influenced by others’ input as well.
Derrick: And so another thing that they openly say is it’s up to you to ensure that you’re not infringing on any third party intellectual property by utilizing the output in a commercial or other business context. And they say that because they know, going back to, you know, the hallucinations problem.
Derrick: Similarly, they are at times just taking others’, you know, intellectual property and giving that as the output, right, as a major point of the output. And, you know, there are arguments of, well, is it intellectual property infringement if the person put the input in there in the first place, which then gave the license to OpenAI, and then, you know, it was able to be used as output and now it’s being given to me? There’s a whole argument there. Until we understand, you know, how the courts and how everything will play out in that realm, we won’t really know what that argument will lead to.
Derrick: So for now, it’s safe to say, you know, you shouldn’t be utilizing Chat GPT or some other generative AI solutions in a manner commercially that you’re relying upon the output as sort of your main component of the product that you’re commercializing.
Helen: Thank you. And we’ll come back to that because I think that opens up, you know, a whole can of worms and a lot of different cases that are open right now when it comes to these tools.
But taking a step back, you know, something that you had said the other day is kind of how you’re thinking about it from more of a high level and looking to some of the privacy laws. So could you kind of walk us through your thought process on that and how you’re thinking about it and how it applies to the possible outcomes of these generative AI cases?
Derrick: Yeah. So my background: I initially started out as a data privacy attorney, data privacy professional, and it was at sort of the renaissance of consumer privacy when the General Data Protection Regulation, GDPR for short, was being passed into law and enacted in the European Union. And a thing about the GDPR was that it was extraterritorial in scope, which means it didn’t matter where you were as a business.
Derrick: The consumer carried the law in their pocket, and that’s why it was such a phenomenon, because, you know, you had folks in the US that interact with EU consumers that were concerned, obviously, if the law was applicable to them. And, you know, we know now there’s a sliding scale of risk when it comes to that law. But what you saw, what the GDPR did, was it inspired domestic privacy laws here in the United States. And what we’ve seen is a state-by-state approach, sort of a patchwork system where each state has their own consumer privacy law.
Derrick: I believe that similarly, you know, generative AI and also AI is going to have a very similar approach in which we may see the EU take lead, right? Recently they’ve provided some guidance in the EU AI Act, which has been drafted and is, you know, under review and looking to be passed and enacted.
Derrick: That talks about the risks associated with AI and different types of AI, right? Because, you know, AI and different functions present different risks which will then present different regulatory structures, frameworks and requirements. And then you look at generative AI in the importance of transparency, right? So they have transparency requirements around generative AI and you need to disclose to a consumer that what they’re viewing was generated by AI, what they’re reading was generated by AI..things like that.
Derrick: And so, I think the EU AI Act is going to be something that sets a great framework similar to what the General Data Protection Regulation did, and then it’ll allow sort of a copycat mentality here in the US as different states decide to take action and put laws in place when it comes to businesses use of artificial intelligence and a consumer protection standpoint.
Derrick: I believe we talked about this before. The state of New York has already passed laws about utilizing artificial intelligence in employment decisions. And a prohibition against that, right? You can’t make an employment decision based on an artificial intelligence solution.
Derrick: Which, you know, is to stop bias from AI being involved in hiring, and other employment functions. So… You’re already starting to see that state by state patchwork, and it’s only going to, I think, increase as the EU sort of lays out their framework. I’m excited to watch it.
Derrick: I think the similarities are there, not from a law standpoint, but more from, I think, political and sort of structural standpoint and sort of just how the chips will fall.
Helen: Yeah. Thank you for, for explaining that. Thank you to our EU friends for, for leading the way. We appreciate you.
Yeah. And it kind of just seems, you know, like the law is always slow and behind, especially at how fast this technology is moving. And we’ll dive into some of the cases now, but I would encourage all of our listeners and viewers: there are already so many great AI frameworks for how to have a best-in-class ethical approach. Like, you know, follow the frameworks or create your own framework that’s as ethical and moral as possible, and don’t wait for the law, because it’s going to be behind.
Derrick: I think that’s, I think that’s great advice, because ultimately when you look at, and I use privacy just because it’s sort of far enough along that we have case law, we have the regulations out there, and we have all this stuff for folks like myself to digest, at the center of everything that has come out with privacy, you know, consumer privacy over the last 10 years,
Derrick: the key point in all of it is transparency, right? It’s transparency. It’s ethics, and it’s giving the consumer the ownership over their data. And so when you’re analyzing where regulation around artificial intelligence will go, it’s similar, right? It’s focusing on transparency. It’s focusing on, you know, are we respecting our consumers or are we trying to fool them?
Derrick: Right? If any piece of your business model is trying to fool somebody into believing that, you know, some level of AI is creating some level of human element, right? Then you’re probably going to have a business model that needs to change when regulations do finally come around generative AI and other artificial intelligence solutions.
Helen: And I’ll put links into the episode blog posts because we do have links to some ethical frameworks already on the website too. So, let’s dive into some of the different ways to think about it because you kind of mentioned the business approach of how to think about it. And then, you know, our show is also about how creatives are collaborating with artificial intelligence.
And, I think there’s a couple of different ways to look at it and different cases open now. So since you mentioned business use cases already, maybe we can start there and how businesses... well, one, they should have their own AI policy, but kind of walk us through how businesses should be thinking about artificial intelligence with their companies right now.
Derrick: Yeah. I, you know, I think it’s super important to be ready to adopt and understand that, you know, your employees or your colleagues will be engaging with these AI solutions one way or another, whether it’s in their personal life or, you know, at work. And what I’ve seen a lot of companies doing, and what we help companies navigate, is: what is an acceptable AI usage policy?
And how do we get out in front of how our employees interact with artificial intelligence, rather than wait for something to go wrong and then be reactive? And so, you know, I obviously, as a lawyer, love a proactive approach and love analyzing these issues ahead of time.
Derrick: And so, one of the things that, you know, you put in place is that AI usage policy, and in that you set sort of the parameters for what’s appropriate. And one of the huge things is obviously, and Helen, you said this yesterday or last week or whenever we talked, it was do not put confidential information or proprietary information into these solutions, right?
Derrick: Because no matter what, you are helping train these solutions. And, I do believe that there are AI solutions that are obviously starting to account for that and they are entering into B2B licenses and creating sort of closed circuit systems that don’t take any of that confidential information or IP, out of, you know, that license of that company.
Derrick: And it sort of just circulates around.. But even then, you know, I still believe you want to be as careful as possible, right? And, when you do have employees that want to go on and use Chat GPT or do something from an efficiency standpoint, and you’re a company that wants to embrace that technology..
Really setting forth for them what’s appropriate and what’s not appropriate is important. And then also making sure that they know, when something does go wrong, or if they know of a colleague that is doing something, the steps and procedures in reporting that and working through that. And so I’ve put together, honestly, a few now, and it is amazing to watch companies be proactive in thinking about that.
Derrick: But naturally you’ll have folks that, you know, will have to be reactive because they won’t have thought that far ahead and something bad will happen. And you know, we’ll cross those bridges when we get there. But, from a business standpoint, I think it’s always trying to leverage these solutions in a manner that is, you know, creating efficiency and helping you in your job title some way or another, but not relying upon it to provide the solution that you’re then providing to your customer, to your client. And obviously not, you know, giving the secret sauce over to this third party.
Derrick: who, you know, more than likely is gaining some level of a license to that information to then train their algorithm.
Helen: Yeah. And then let’s also talk about the liability issues with using the output from a commercialization standpoint, because I know within, you know, the advertising circles that I’ve been in, a lot of big brands are asking, you know, can we even use generative AI in final TV commercials, print ads, social media? And it’s kind of like, no, don’t do that. It seems like the advice is more to use it upstream in, like, the brainstorming and the storyboards, but not in the final products. And you’re shaking your head, for those who are just listening to the audio, so can you kind of walk us through your thought process and advice around this idea as well?
Derrick: Yeah, absolutely. So going back to the OpenAI example, for instance, just because that’s the one we’ve talked about and it’s very specific. One of the things in the OpenAI terms that it states very clearly is that, you know, they do not verify that the output is owned by them, right? They say that they do not own the output,
Derrick: and it is up to you to ensure that by utilizing the output, you are not infringing upon a third party’s intellectual property rights. And so, all of that output is coming from somewhere on the internet. And at the end of the day, there are elements of the output that could be an element of somebody’s protected intellectual property, whether that be, you know, a trademark, a copyright or even some other state common law IP right.
Derrick: And so because of that, you have to verify and be sure that you’re not infringing upon somebody else’s rights when you’re utilizing that output from one of these generative AI solutions in a commercial context. So, to your point, Helen, it’s definitely smart to use it for more internal purposes that helps you and is an efficiency purpose to get to your final solution rather than taking that output and utilizing it as the direct final solution.
Derrick: There’s a ton of risk in doing that, and I think that’s where, to go to I think our next point, that’s where the conversation, I think, is really key for creatives: ensuring that creatives understand that when it comes to, you know, copyright and trademark, they may not be able to gain certain rights if they’re utilizing too much output from a source like ChatGPT or DALL-E or something along those lines, because AI is not a human element and certain traditional IP protections require a human element.
Helen: Yeah, I will say, I will point out two companies where, if you use their generative text-to-image tool, that does fall under, I think, legal use and is safe, and that’s Shutterstock and, when Adobe’s Firefly comes out of beta (right now it’s in beta), Adobe. Both of these companies are kind of leading the way when it comes to ethical use of generative AI, and if you haven’t listened to the episode with Adobe’s head of the Content Authenticity Initiative on the show, I definitely recommend that. But for both of these, the artists and photographers for their stock services had the option to opt out; their licensing says that their images can be used in these generative AI tools,
and they will get compensation. Shutterstock is already currently doing compensation and Adobe’s Firefly, the company told me once it comes out of beta, there will be a compensation model. So, those are the only two companies that I know of offhand.. there might be a few more out there but if they’re not those two companies with that licensing in place, it’s really kind of the wild, wild west in that regard.
Derrick: Yeah. Yeah. And I think that’s the important part, right? That they are getting the appropriate licenses from the source in which the AI then would you know, scrape and configure the output. Right. And that is the important piece of that in some of these open source generative AI solutions..
Derrick: they don’t have the ability to obviously do that, right? And so I think, Helen, you hit it on the head. That’s the future of some of these products, right? And where they all, I think, would probably like to go, because if you can, you know, sell these licenses to use their product and they can ensure you that you’re able to commercialize the output, then that puts them, I think, in a strategic position to really have success within the market.
Derrick: So, it just goes back to the important point of have your lawyer, have your team really dig into the terms of service and terms of use for these products and the solutions that you’re utilizing, to understand what your rights are, but then also understand the rights that you’re giving over to these entities, because, you know, they do vary and they do change over time.
Derrick: So, what’s true on Monday might not be true on Friday.
Helen: There’s so many different ways that we can go, because I know that there’s so many cases right now of these companies getting sued by artists and authors and all this, but you had mentioned, actually from a creative standpoint, what can be copyrighted or not.
So, why don’t we start there and then we’ll go into some of the other cases that are open right now. Because there are some artists that are trying to get their generative AI artworks copyrighted, or even prompts copyrighted. So, kind of walk us through the big case and where things stand and how our artists and creative listeners should be thinking about this too.
Derrick: Yeah, absolutely. So, you know, I think the big case and most people listening to this probably have heard this in some way or another at this point is the sort of Zarya of the Dawn copyright / USPTO decision. And that case is interesting just because it was a comic book, right, created by a New York artist and author and utilized all images that were created by generative AI.
Derrick: And so when going to look for protection, the authorities looked at sort of what human element exists, right? So, the current stance right now is it’s a case by case analysis, which is great, right? I think that shows that the authorities that be, you know, the USPTO and the copyright office, they understand that you need to analyze the facts of the usage of generative AI in each case to really know what human element existed in the final product.
Derrick: And so in that case, the human element was the way that the images were placed and how they were organized, and then the text, which the author did create. And so the way the images were organized and the text were protectable, but the images themselves were not. And so I think that that’s just kind of a great road map for where we are now and where we’ll go when you talk about the human element aspect of things. And I believe that it will change over time, considering that they’re going to take this case-by-case analysis, and because these solutions are evolving and, to your point, Helen, you have solutions that are actually giving certain licenses and ownership and different things like that, I think you will start to see potentially more human element being able to be fought for in those applications for protection, which will impact the decisions.
Helen: And I know, like, some of the prompts have secret sauce already, you know, they’re very in-depth to get the outputs that people are wanting. And right now, prompts are not able to be protected. But there is a difference between copyright and trade secret that might be applicable here. So can you kind of walk us through the difference, or how people should be thinking about their special prompts that they’ve been using?
Derrick: Yeah. So, you know, copyright is obviously something you file. And I guess I’ll go on record and say that I am not a traditional IP attorney. I work with intellectual property. I work around intellectual property protection, but I don’t file patents. I don’t file trademarks. I don’t file copyrights. Our folks in our IP department in our firm, they do that work.
Derrick: I work alongside them. So I know enough to be dangerous, but you know that this is an area where the details are beyond me. But the big thing here is, you know, a copyright that’s something you file for, right? That is a protection that is, is given to you.
Derrick: Whereas a trade secret is actually more of a common law right. And so it’s a practice, and it’s something that you do more so than something that you file. And so, one of my favorite sort of trade secrets is the Coca-Cola recipe, which is in a vault somewhere. And they have kept that a trade secret that nobody knows.
Derrick: And because they’ve never had to file it, it’s never had to be public. So nobody knows what the recipe is. And if you can keep something private, and you can sort of keep it a trade secret, right, you might not meet all the elements of a trade secret under common law, depending on what state you’re in, but it’s a practice that can be, you know, utilized to try to keep something as private as possible.
Derrick: And so, you know, elements of trying to keep something a trade secret is compartmentalization. So not allowing people to know every aspect of your secret, under lock and key, right? So a vault, a safe, you know, want to go to blockchain, right? You can get into that. Like there’s ways you can keep that locked and secure.
Derrick: And the list goes on and on of different elements of keeping something a secret, how many people know… things along those lines. So, it is something that we talked about with prompts. Because with prompts, you obviously can start to realize certain prompts will get you certain outcomes and if you want to recreate those outcomes, keeping those prompts a secret and keeping them with people with limited amount of knowledge and being able to sort of keep it under lock and key and however you want to go about doing that
Derrick: is something that you can do and explore considering that the more traditional protections are not afforded to you at this time.
Helen: I always love a good disclaimer at the beginning about..
Derrick: It’s so hard, right? Because I’m in our intellectual property practice group because I do work with confidentiality agreements and IP assignments and, you know, bigger and broader IP protections within a company and for creatives with name, image and likeness and all that kind of stuff and contracts.
Derrick: But I’m not a member of the patent bar, right? So, I can’t file a patent. I can’t file a trademark. I don’t do any of those things. So in the legal world, it makes sense. But I always realize, like, when somebody says, “Oh, well you’re in the intellectual property practice..
Derrick: So like, how are you not an intellectual property attorney?” And you know, it’s one of those things where I think our industry jargon is so confusing and you know, we can go into a whole rant about the archaic systems of the legal world.
Helen: Well, I feel like this is actually a good segue, because you’ve mentioned, you know, the artists that you work with on name and likeness, and on the flip side of protecting artists’ AI work, or any artwork, is actually protecting the artists from being scraped and how their likeness is being used.
And there’s a lot of suits right now where you have... the latest one I just saw was, like, the New York Times is suing OpenAI for scraping their articles without permission and without licensing.
You have the writers’ strike and the actors’ strike, and one of the fears right now with the actors, especially background actors, is having their likeness scanned and then handing over all of the rights to where it could be used without compensation and all of this stuff. So let’s kind of talk a little bit on the side of how you’re thinking about these things, and then kind of how our creatives can navigate this right now in this moment in time.
Derrick: Yeah, it’s, it’s so hard because it’s ongoing, right? It’s evolving. We don’t have a lot of definitive case law and opinions that give us the framework and the guidance for exactly what the answers are. You talk about the New York Times case, and I’m sure an argument is for public record and things along those lines. I haven’t dug too deep into the case, but the outcome of that case and in the opinions will dictate what the whole industry does moving forward. Right? And folks, to your point, when you build your framework as ethically as possible, you’ll be in a great position.
Derrick: Well, guess who’s not worried about this scraping case is Shutterstock. The Shutterstock business model is, you know, focused on actually already doing it in a way that makes sure that the license is appropriate and that anybody who doesn’t want sort of their images and text scraped can opt out, right?
Derrick: So that’s, I think, just a conversation in itself in terms of positioning and watching what happens, and then adjusting your business model depending. As far as the cloning and AI scans and stuff like that, it is fascinating, and you have actors coming out and saying that they’ve been scanned and their image and likeness is now being utilized in the background of different films and things like that, that they did not appear in
in a traditional sense. And so one of the important things there is, and we don’t know sitting on this call, it’s like, what did your contract say? And I think what we do know is that, I’m positive, the old agreement that the union negotiated,
Derrick: It likely did not contemplate artificial intelligence, the metaverse, the ability to scan and clone digitally, and things like that. So I think it is something that we need to continue to monitor and watch. But I urge all creatives, I represent influencers, artists and all types of different creatives, studios and things like that.
Derrick: And one of the places we spend the most time on is the license to somebody’s image and likeness, right? There are very broad default terms that are placed in these agreements whether it’s an influencer agreement, an endorsement agreement, whatever it may be, in which these companies are trying to take the ability to utilize a specific person’s likeness and in perpetuity for almost any reason they see fit.
Derrick: Right. And so if you agree to that language, does scanning you and utilizing your image via a digital version of yourself fall contractually within that? Maybe, right? And that would be their argument: “Oh, well, you licensed this over to us, where we were allowed to do this,” and your argument would be, “that was not the contemplation and meeting of the minds that we were having in that moment.”
Derrick: So, you know, to avoid that argument, you obviously want to then limit that license as much as possible and account for these things. And I think right now, as creatives, you understand the risks and you understand the possibilities that are out there. And so, empowering yourself to look at those agreements,
Derrick: I think it’s really important, but I also know, and we, Helen, we talked about this, that there’s also a leverage aspect here and there’s a place where these companies, they obviously have the ability to get big law firms to look over their documentation and to draft these documents and everything like that..
And you as a creative may not have the same financial resources to be able to engage your own lawyers and to be able to have them spend the time to redline and negotiate out each agreement. And then there’s ultimately the leverage point of, are you able to walk away from this agreement if it’s unfair?
Derrick: At times, creatives are in a position where they need the paycheck to pay their bills and support their families and to support themselves. And if you can’t go into something with the leverage to be able to say, well, this is a, this is a non negotiable for me or I’m walking, then that also kind of puts you at a disadvantage.
Derrick: And so, I give this advice also keeping in mind the reality, which is that’s not always possible.
Helen: And I think that kind of underscores the importance of the union, too, and collectively leveraging the group in these negotiations, and how important just following what’s happening with the actors’ and writers’ strikes is. Because, you know, when it comes to leveraging power and stuff, from a collective bargaining standpoint, that’s really important.
Derrick: Yeah, I agree. And I’m going to get the state wrong. I think it’s Illinois. I think it’s Illinois... just passed a law that protects children entertainers and influencers who, you know, obviously are not able to give consent to being utilized in certain ways, so that they share in the profits.
Derrick: And so I also think what we’re going to start seeing is influencers, reality television, and others that aren’t necessarily within these unions also starting to realize that their likeness and their rights are being sort of compromised in a predatory sense, because, you know, they haven’t been afforded the same protections of a union. And so, you know, right now, the Love is Blind cast from previous seasons is suing Netflix for harsh working conditions and things along those lines. And influencers, I think, are going to start... you’re starting to see states protecting minor influencers.
Derrick: And so, we’re going to start to see the laws shift in a way to start addressing for some of these other groups that aren’t just the actors and writers.
Helen: And this is coming from someone who has cloned myself. I intentionally went in to be scanned and have my video presence cloned and my voice cloned. But where I sit, versus these background actors, is I know I own all the IP, and if I work with a brand or a partner, I’ll establish a licensing deal and have complete control over how my likeness is used, at least for people I engage with. It still can be scraped and deepfaked online just because I exist online.
Helen: But in my mind, I think one big miss, and it goes into the intentionality, the greed of Netflix and these Hollywood execs, is that with all the fears, AI can open up other revenue streams. Like, if the actors could license themselves and be five places versus one place, and do, like, an in-person gig, and get fair compensation for their likeness, you know, that changes the game as well.
So I’m very interested to see how this will play out for sure.
Derrick: Yeah, I agree.
Helen: And one thing, I know we went on a rant the other day about waivers, because I went to an event earlier this year, and I think I gave them a headache, but I didn’t want to sign their release. And I’m going to read it just because I think all waivers, which people sign so quickly, even, like, Black Tech Week where we met, you know, it’s like, by entering the building, you’re giving away your rights to be filmed and photographed.
And I just want to read this because this language, I think is all going to be outdated, and we all need to be aware of this. So, I irrevocably grant to the event host the non-exclusive right in perpetuity, throughout the universe, in all media, whether now known or hereafter devised, to photograph, reproduce, and or otherwise use my name and voice from my participation in the event.
Helen: I represent and warrant that consent by any other person, firm, corporation, or labor organization is not required to enable event hosts to use my name described herein, and that such will not violate any rights of third parties.
I mean, this was, like, a retreat or whatever, and they were so... I loved the experience, but they were not the best organizers in the world. But, like, I gave them a hard time, and they’re like, we’re just going to take photos for our social media. But I was like, what you’re asking, what I’m signing over, you could literally clone my voice and likeness based on this language now, and I’m not comfortable, because I know the technology. So that’s kind of my reaction.
Would you agree that people should be careful about waivers and this type of language now?
Derrick: Yeah, yes and no, right?
Helen: Or am I over, I’m overreacting?
Derrick: Yes and no. I think one of the things that, going back to like the privacy side of things, right? Like, when you look at a privacy policy as a consumer, the privacy policy says everything, well, it should, right?
Derrick: To be legally compliant, it should say everything they’re going to do with your data and if you read that document and you go through it, you’ll kind of be like, Oh, that’s a lot, right? They’re going to do a lot with my data. But most people don’t read it. And so to your point, in terms of.. people just sign it, most people don’t read it.
Derrick: And what a lawyer’s position obviously is, is to represent their clients. And you know, that language is positioning that client to do everything they may do with all of the photos and video that they take from the conference and utilize it in a manner that they maybe are thinking about right now, or to your point, they’re thinking about 100 years from now.
Derrick: Right? I think for folks like you, it’s very rare that you have somebody who reads a waiver and understands pieces of it that are going to be potentially meant to be harmful or inappropriate to their senses. From a practical standpoint, it’s like, well, does that maybe challenge everyone to stop just using these template waivers?
Derrick: And actually get the consent for the real thing that you’re going to actually use these, you know, these images and the video for, right? To your point, like, well, if you’re just going to take photos, why doesn’t it say that? And so I think it also is a challenge to some of these organizers who... I’ll tell you this right now.
Derrick: Like, there are a lot of people that look at lawyers as useless, right? Like, well, why do I need you for a simple waiver when I can just go get this thing offline? They then don’t know. And I’m not saying that’s what happened in this situation. Who knows if they had counsel or if they didn’t. But, you know, you don’t know what you don’t know. And so you pulled this waiver offline and you go to deploy it, and then it says all this stuff, right?
Derrick: That you don’t know what it means. And you have somebody who’s like, well, this is inappropriate for the situation. Why isn’t there just a standard photo release in this waiver rather than all this other stuff, right? And like, it’s hard because they maybe didn’t even realize that all that other stuff was in there that they didn’t know what it meant.
Derrick: Or they just asked their lawyer and their lawyer went and drafted the most comprehensive, most detailed thing, right? So it’s like, everybody involved on the side of the organizers, I think, just needs to do a better job in thinking through, like, what’s, you know, to your point, Helen, what’s ethical, what’s appropriate and what’s actually called for in this moment?
Derrick: What are we actually trying to capture via this consent and via this document? If it’s just the fact that you walked in this room and we want to post the videos and photos on our website and on our social media.. I’m sure you’re okay with that. So, just say that. And I think that’s where we’re going to have to get,
Derrick: but I think the only way we get there is for more consumers to be like you, to be honest, right? To be informed, to be educated and to question certain things. And when you have more people like you questioning the waiver, I’m sure they’ll go back to their lawyers and say, “Hey, by the way, this waiver is too intense.”
Derrick: We need it to just allow us to do these things..And then, you know, allow for the strip down photo and video release form to be what actually exists rather than everything else that was put in there.
Helen: I guess one takeaway for our listeners: read before you sign. And you know, I gave them a hard time, but you can negotiate. Like, they ended up using language that I gave them that I was comfortable with, and I know that they didn’t want to budge ’cause it was a headache for them, but they ended up doing it. So it’s like, you do have a right to decide what you sign and don’t sign as well. But definitely read anything before you sign it.
Derrick: Yeah. Well, and it goes back to the old saying, right? Like, all they can say is no. So why not ask? I think that goes back to our conversation about sort of influencers, artists, creatives, actors…when they’re going through their process, just ensuring that they ask for the revision, ask for the red line.
Derrick: The worst they can say is no. And when they say no, you gotta be okay with either moving forward or walking away, but ask, right? You’d be surprised how much companies and organizations give when you do ask, like you just said.
Helen: Yeah. I think one thing that came to mind as you were saying that too, as far as like companies just be, you know, stepping forward the most ethical way possible and most transparent.
One example that came up recently, I think it was last week, was Zoom changed their terms of service where you couldn’t opt out, and it was for free and paid users. Basically, just by using Zoom, you were allowing the company to use any of the information and data collected however it wanted, and also to train their AI machine learning tools, for which they got a ton of backlash.
Helen: And I think they’ve since reversed, but I think that’s kind of maybe a bellwether sign for companies. Like you just can’t default scrape everyone’s information to train your machines because users are kind of waking up to how their data is being used, especially within the creative space, how writers and artworks are being used to train these machines.
Derrick: Yeah. In a case like that, it sounds like the operations team, the PR team and legal team need to get in the same room together and make those decisions.
Helen: Yeah. Call Derrick.
Helen: Before we close out the interview, and Derrick, you mentioned starting a podcast. You’re welcome to join this one. You could be our legal update anytime. You’re always welcome to join the show. You mentioned a couple of practical tips for creatives on the show but maybe we could summarize and just reinforce those again for all of our listeners, like, what are some things that they can walk away with from today’s conversation?
Derrick: Yeah, absolutely. I think the number one thing is, is that you are the owner of your image, your likeness. You are the owner of your creativity and the things you’re creating. So, when leveraging AI solutions, make sure you do it in a way that is thoughtful and you don’t give away certain rights that you already own to these AI
Derrick: solution companies. You don’t compromise your art or your creativity by utilizing certain components from these AI solution companies in your final products and if you do so, you do so informed that you understand the risks of that. And then more than anything, when, you know, you’re dealing with these larger entities, larger companies, you understand that you own your image, you own your likeness.
Derrick: And it is okay to review these contracts and ask questions and ask for revisions when it comes to the license that you’re providing to your image and likeness. The worst they can say is no, but it’s important to, I think, review it, to understand it and to go into things with your eyes wide open.
Derrick: Legal counsel, I think, is the best way to go about that. But like I said, I understand that there are barriers to that, and making sure that you can try to find resources or do your best to ask questions of these companies and things along those lines is a really important thing to do
Derrick: and I think would save you heartache in the long run.
Helen: Yeah. Thank you for recapping that. And I think one thing also that I just want to mention as well, because I know a lot of artists feel maybe disempowered because these big companies have already scraped the entire internet. And there are some resources that exist now. I’ll be sure to link them in the show notes, and I know that they’re in our website resources, but there’s one tool where you can look up to see if your artwork has trained a machine and then get on a database to say, I don’t want my art to train these machine learning generative tools.
And then, what’s the other thing? Oh, OpenAI recently released code that you can put on your website that says “don’t scrape.” Although, you know, it’s a little too late in that regard because it’s already scraped everything. But I think we’re going to be seeing more of those.
Helen: So even if you feel like your artwork has already been scraped, I don’t want anyone to feel disempowered in light of these big corporations and there are some tools out there to protect your works, even right now.
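(The “don’t scrape” code Helen mentions here is typically published as crawler directives in a website’s robots.txt file. As a minimal sketch, assuming the goal is to block OpenAI’s GPTBot web crawler, the user agent OpenAI documented for this purpose, the entry looks like the following; other AI crawlers use different user-agent names, and the directive only works if the crawler chooses to honor robots.txt.)

```
# robots.txt at the root of your website
# Asks OpenAI's GPTBot crawler not to access any page on the site
User-agent: GPTBot
Disallow: /
```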
Helen: Well, Derrick, it has been so wonderful having you on the show and sharing so much of your time and your insights and tips, even though it’s not legal advice that people should be taking, but tips to think about. How can people get in touch with you?
Derrick: Yeah, absolutely. You can reach out to me on Twitter or on LinkedIn. If you search my name on either of those places, I’ll pop up. You can also look me up on my firm’s website and reach out to me. My email and office number and things like that are there, and I’m happy to be a resource and happy to connect with anybody that has any questions or comments or any further discussions.
Helen: Well, thank you so much. It has been an absolute pleasure, and I’m looking forward to seeing how these cases unfold and keeping the conversation going. And we’ll definitely have to have you back on the show to talk more about all things AI and legal down the line. So thank you again.
Derrick: Thank you. I appreciate it. Look forward to coming back sometime.
Helen: Thank you for spending some time with us today. We’re just getting started and would love your support. Subscribe to Creativity Squared on your preferred podcast platform and leave a review. It really helps. And I’d love to hear your feedback. What topics are you thinking about and want to dive into more?
I invite you to visit creativitysquared.com to let me know. And while you’re there, be sure to sign up for our free weekly newsletter so you can easily stay on top of all the latest news at the intersection of AI and creativity. Because it’s so important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized nonprofit that supports over a hundred arts organizations.
Become a premium newsletter subscriber, or leave a tip on the website to support this project and ArtsWave, and premium newsletter subscribers will receive NFTs of episode cover art and more extras to say thank you for helping bring my dream to life. And a big, big thank you to everyone who’s offered their time, energy, encouragement, and support so far. I really appreciate it from the bottom of my heart. This show is produced and made possible by the team at PLAY Audio Agency. Until next week, keep creating.