Imagine a world where your digital twin not only looks and sounds like you but can also think and create like you. A world where artificial intelligence works hand in hand with human ingenuity to push the boundaries of what’s possible.
This is the world that Natalie Monbiot, a trailblazing strategist in the realm of virtual humans and A.I. video technology, envisions — and it’s closer than you might think.
In a recent episode of the Creativity Squared podcast, Natalie shared her insights on the rapidly evolving landscape of the “virtual human economy.”
As a founding team member and head of strategy at Hour One, a company at the forefront of this revolution, Natalie has played a pivotal role in shaping the path forward.
From conceptualizing the technology’s potential to grappling with its ethical implications and bringing it to market, she’s been there every step of the way.
Natalie’s fascination with emerging technology and its potential to transform the way we live and work runs deep. With an impressive background in crafting transformative brand strategies for industry giants like BMW, Coca-Cola, and Spotify, she’s always had her finger on the pulse of innovation. But it was her work at Hour One that truly opened her eyes to the vast possibilities of virtual humans.
The concept of “virtual twins” lies at the heart of this vision. These digital doppelgangers, created using cutting-edge A.I. and video technology, are not mere avatars or cartoon characters. They are photorealistic representations of real people, imbued with the ability to speak, act, and even think on their behalf. Helen, the Creativity Squared host, has a digital clone and has spoken to Render’s Jon Tota and Dr. Jill Schiefelbein about hyperrealistic avatars on the show, too.
“Wow, so there are things I can now do, thanks to my virtual self, that I could not have even dreamed of doing before,” Natalie marvels.
The implications are staggering.
Imagine a world where your virtual twin could deliver a keynote speech in flawless Mandarin while you’re sound asleep, or represent you in a meeting halfway around the globe.
Picture a future where a renowned scientist’s digital self continues to make groundbreaking discoveries long after their physical body is gone, or where a beloved grandparent’s virtual presence can comfort and guide generations to come.
These scenarios may sound like science fiction, but according to Natalie, they’re quickly coming to fruition.
Natalie points to real-world examples, like the baby formula brand Enfamil using virtual humans in their Amazon product listings, or a futurist whose digital twin was hired as an A.I. correspondent for a news broadcaster. The virtual human economy is already taking shape, and its potential is boundless.
As with any groundbreaking technology, the rise of virtual humans raises important questions and concerns.
Will these digital entities replace human workers? Will they be used to deceive or manipulate? How can we ensure they are developed and deployed responsibly?
Natalie is quick to emphasize that virtual twins are intended to augment and enhance human capabilities, not replace them. “We wanted to tether real humans to this future of A.I.,” she explains. “We also thought about the opportunity to enable real humans to create versions of themselves that could empower them as individuals.”
One of the most exciting aspects of this empowerment is the ability to connect virtual twins to their human counterpart’s own augmented intelligence.
Imagine having a digital version of yourself that can instantly access and draw upon your entire body of knowledge and experience — every book you’ve read, every conversation you’ve recorded, every insight you’ve gleaned over a lifetime. “And also to be able to connect other intelligences,” Natalie muses. “You know, like languages — those are skills and abilities that we might not have natively.”
This augmentation could supercharge our productivity, creativity, and problem-solving abilities. It could allow us to take on challenges and opportunities that were previously unthinkable, from crafting multilingual marketing campaigns to tackling global scientific conundrums. The possibilities are endless, and they’re only just beginning to come into focus.
Of course, with great power comes great responsibility.
As virtual humans become more sophisticated and lifelike, Natalie explains that it’s crucial that we develop them with transparency, accountability, and a strong ethical framework.
Natalie is acutely aware of the potential for misuse, particularly in light of deepfake videos that have stoked fears and controversy in recent years.
She’s adamant about distinguishing Hour One’s work from these nefarious applications.
“The way that we distinguish the term deepfake from synthetic media was that deepfake was kind of the nefarious use of synthetic media,” she clarifies. “And in particular, it was the non-permissioned use of your digital likeness. It has an intent of trying to deceive.”
In contrast, Hour One operates with full transparency and consent from the individuals being digitized. Every virtual human on their platform is based on a real person who has expressly agreed to have their likeness used in this way.
What’s more, any content featuring these virtual humans must be clearly labeled as computer-generated, ensuring that viewers are never misled.
This commitment to authenticity and transparency is critical not just for Hour One, but for the entire virtual human economy.
As the technology advances and becomes more ubiquitous, we’ll need clear guidelines, standards, and regulations to ensure it’s being used responsibly and ethically. Collaboration between technologists, policymakers, ethicists, and the public will be essential to getting this right.
While the ethical considerations are significant, so too are the creative possibilities.
As virtual humans become more expressive and emotionally nuanced, they could open up entirely new avenues for storytelling, entertainment, and self-expression.
“Actually, it’s not just inconvenient for some people to be in front of the camera — a lot of people aren’t good in front of the camera, and they’re not able to express themselves in the way that they intend, in the way that they feel,” Natalie explains. “So your virtual twin will be something that you’ll be able to guide, and in a way, may be the truest expression of you.”
Imagine a world where anyone can be a movie star, a pop idol, or a public intellectual — all through the power of their virtual twin. Where a shy, introverted artist can let their digital self take center stage, performing their work in front of a global audience. Where a person with a disability can have their virtual presence navigate the world with ease, unfettered by physical barriers.
These are just a few of the possibilities Natalie envisions on the horizon.
As virtual humans become more sophisticated and expressive, Natalie suggests, they could usher in a new era of creativity, one where they are not just extensions of ourselves, but powerful tools for exploring new identities, pushing creative boundaries, and connecting with others in ways we never thought possible.
As Helen’s conversation with Natalie draws to a close, the pair agree that one thing is abundantly clear: the virtual human economy is not a distant dream, but an emerging reality. The technology is here, Helen and Natalie concur, and is evolving at a breathtaking pace — the question is not if it will transform our world, but how.
Natalie’s vision for the future is both exhilarating and sobering. On one hand, she foresees a world where capturing a virtual twin becomes as commonplace as getting a professional headshot, and where our digital selves can represent us in an ever-expanding range of contexts, from the boardroom to the newsroom to the stages of our wildest dreams.
“I think we, for years, thought like, ‘What is the ultimate expression of what we’re doing at Hour One?’ And I think having everyone with a LinkedIn profile having a virtual twin, kind of like what we were saying before about how you had a professional headshot done, and in the same breath, you captured your virtual twin because you can now just bring that to life,” Natalie reflects.
On the other hand, she recognizes the immense responsibility that comes with shaping this new frontier.
As we navigate the uncharted waters of the virtual human economy, Natalie posits, we’ll need to work together to ensure that it develops in a way that benefits everyone, not just a select few. This will require ongoing dialogue, collaboration, and a commitment to the greater good.
For those working in creative fields, the rise of virtual humans presents both challenges and opportunities, in Natalie’s view: It challenges individuals to think deeply about what it means to be creative, to be human, and to express oneself authentically in a world where the lines between the physical and the digital are blurring.
At the same time, it offers powerful new tools for bringing ideas to life and connecting with others in meaningful ways.
By experimenting with these new technologies thoughtfully and responsibly, and by pushing the boundaries of what’s possible while staying true to one’s values, Natalie believes creatives can help steer this revolution in a positive direction.
“And what does that mean?” Natalie wonders. “Like, what could you do with that virtual twin?”
The answers to these questions will shape the future in ways we can scarcely imagine. By engaging with them head-on, with curiosity, creativity, and care, Natalie believes individuals can help ensure that the virtual human economy becomes a force for good — a tool not just for augmenting one’s abilities, but for deepening one’s humanity and expanding one’s potential in ways never dreamed possible.
Thank you, Natalie, for being our guest on Creativity Squared.
This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.
Creativity Squared is brought to you by Sociality Squared, a social media agency that understands the magic of bringing people together around what they value and love: http://socialitysquared.com.
Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.
Join Creativity Squared’s free weekly newsletter and become a premium supporter here.
TRANSCRIPT
Natalie Monbiot: [00:00:00] Wow, so there are things I can now do thanks to my virtual self that I could not have even dreamed of doing before. And so even though we’re still nascent in this virtual human economy, I think it really will become an economy at large and something that will affect all of us.
Helen Todd: Natalie Monbiot, who has six virtual human twins of herself, is an emerging technology strategist and pioneer in virtual human and AI video technology. As a founding team member and head of strategy at Hour One, she has helped to create the category from the ground up from the vision to ethics to commercialization. A leading voice in the industry, she has made the case for the virtual human economy in which real people profit from putting their virtual selves to work.
Recently, she’s worked with Reid Hoffman to bring his AI virtual human twin to life. As a trusted futurist, Natalie excels at creating emerging technology roadmaps for companies like Samsung, and holds a strong track record in transformative brand strategies and growth for brands like BMW, Coca-Cola, and Spotify.
She’s received award recognition for her work, including from Fast Company’s Next Big Things in Tech and the Cannes Lions Gold. She’s also an active collaborator and advisor to startups, agency boards, and media publishers, in addition to being a frequent keynote speaker and contributor to publications like the Wall Street Journal and The Information.
Natalie’s and my paths first crossed at last year’s South by Southwest, as we were in the same session together. I couldn’t be more excited to have her on the show ahead of South by Southwest this year, where she’ll be on stage with John Gauntt, another Creativity Squared guest, for a fireside chat titled, “The Future: Stories in a Changing World.”
In today’s episode, discover the current landscape of the virtual human economy and what lies ahead, including connecting virtual twins to their human counterparts’ own augmented intelligence and instances when people prefer interacting with virtual humans over real ones. We discuss everything from terminology, ethical considerations, and photorealism to imaginative use cases and the interoperability of virtual humans – even how one brand on Amazon is putting virtual humans to work for their product listings. Join the conversation as Natalie shares her passion for virtual humans and envisions how they can augment human capabilities, not replace us.
How can you put your virtual human to work? Listen in to find out.
Enjoy.
Theme: But have you ever thought, what if this is all just a dream?
Helen Todd: Welcome to Creativity Squared. Discover how creatives are collaborating with artificial intelligence in your inbox, on YouTube, and on your preferred podcast platform. Hi, I’m Helen Todd, your host, and I’m so excited to have you join the weekly conversations I’m having with amazing pioneers in this space.
The intention of these conversations is to ignite our collective imagination at the intersection of AI and creativity to envision a world where artists thrive.
Well, Natalie, welcome to Creativity Squared. It is so good to have you on the show.
Natalie Monbiot: Thank you, Helen. It’s wonderful to be here.
Helen Todd: And I feel like the universe has been conspiring to actually have us meet, because we found out, once we were connected through John Gauntt, who was another guest on the show, that we were actually at the same session at South by Southwest, in the audience together.
So it’s so nice to have gotten to know you after being in the same room with you ahead of this interview. So, welcome again.
Natalie Monbiot: Thank you very much. And it was also funny because I think we actually met on Zoom, we’ve yet to meet in real life, but we kind of vibed off my virtual human economy post.
I was like, we should talk on LinkedIn and you’re like, we should talk. It’s like, well, actually, I think we’re talking on Monday. There are lots of things leading up to this moment.
Helen Todd: Yes. And I’m so excited to have you on the show because, as everyone who’s a listener probably knows, I’ve already cloned myself.
But you’ve been in the cloning world a lot longer and have done a lot of thought pieces on the virtual human economy, which we’re going to dive into today. So, so excited to get into that. But before we do, for all of our listeners and viewers who are meeting you for the first time, can you introduce yourself: who you are, what you do, and your origin story?
Natalie Monbiot: Absolutely. So, I’m Natalie Monbiot, and I am currently the head of strategy and a founding team member of Hour One, which is a virtual human and AI video startup that was founded as far back as 2019. So ancient history in this world, it kind of feels like.
And what brought me to that moment was that I was really steeped in the world of emerging technology and emerging media, having served in a number of leadership roles, mostly on the agency side.
I was the head of digital at Universal McCann, and I was the head of innovation and strategy as well. And at Publicis, I was the head of futures, focused on future roadmapping for brands like Samsung.
So I’ve always leaned into the future, and I’ve always been passionate about what emerging technology, and the companies that are building it, are doing, and what kind of outsized impact that could eventually have on consumer behavior and brands.
And so, having spent a lot of time in the corporate world, and, as you might detect from my accent, being originally from the UK, I moved to San Francisco, you know, just to try it out for a year about 12 years ago. And then America happened, as I think of it. I got an opportunity to work at the emerging media lab, the Interpublic Media Lab, in Los Angeles.
And that was kind of a defining moment for me, sort of the shift from strategy and communications, which is kind of the field that I was in, into really leaning into emerging technology and what all of that means for consumer behavior and brands. So that was, I guess, in a way, an origin story, in terms of realizing that my passion was really around emerging technology and what this meant.
And I felt a calling to kind of understand what that is and to be this translation layer to people at large, brands, and consumers, to help educate as to what this can mean for them: this might feel small, might feel like a toy right now, might feel irrelevant to you right now, but actually, for all of these reasons, it’s something that you should be paying attention to.
Helen Todd: Thank you. And in terms of synthetic media and the virtual human economy, 2019 was like super early, but I feel like right now it’s still the very, very early days, especially on the tech adoption curve. I think we’re still before early adopters, like in the innovators stage, but that’s really set to change.
So, kind of taking a step back, since you’ve been in the space since 2019, can you give us a landscape of the virtual human economy: where it’s been, where it’s at, and where you see it going?
Natalie Monbiot: For sure. And I should probably define what I mean by the virtual human economy, which is that real people have the opportunity today to create a virtual twin.
And in creating a virtual twin, they have the opportunity to put it to work on their behalf. And if you put your virtual twin to work on your behalf, I mean, there’s all kinds of ideas and possibilities there, but in the simplest form, you can have it literally do things for you.
And since the very earliest days, you can have it present endless amounts of content for you. So what it enables you to do is create content without having to sit in front of a camera. I know that you’ve covered this technology a few times on previous episodes, so we don’t need to dive too much into the details about how you create a virtual twin.
But the idea is that once you’ve created one, with a little bit of footage, a little bit of training data that is required at the start, you can then create endless amounts of content in your likeness. And it enables not just the ability to scale content, but also to augment your skills.
Because once you’ve been digitized, you can, to start with, end up speaking any language, you know, languages that you don’t know, and it helps you to become an uber communicator.
So I think that’s something that’s really fascinated me since the very beginning. And one of our very earliest enterprise customers, who we started talking to and working with back in 2020, was Berlitz. For them, we created eight virtual human instructors that were all based on real people, many of whom were not professional teachers in any respect. You know, one was a student, one was a gig worker, and this is sort of another type of gig work that they managed to discover, which is to become a German language teacher. And watching themselves speak German and teach German was kind of a groundbreaking moment for them.
Like, wow. So there are things I can now do thanks to my virtual self, that I could not have even dreamed of doing before. And so, even though we’re still nascent in this virtual human economy, I think it really will become an economy at large and something that will affect all of us.
Since 2019, real people have been paid a passive income for acting as their virtual selves and, you know, being featured in commercial content, whether it’s as language teachers or virtual receptionists or HR representatives or shop assistants.
And so since the very beginning, I’ve been really fascinated by this idea of this new economy and all the different directions it could take.
Helen Todd: Thank you for that background. And it just seems like the natural evolution of the photos and content that we have online anyway: instead of just an animated photo or just video, we have this new extension of media that we can use to fill in for ourselves, or stand in for ourselves if we don’t want to put on makeup or get on camera and all of that.
So it seems like before long, everyone will have their digital twins.
Natalie Monbiot: Yeah, because I think on the one hand, it seems like this very wild concept, something that might feel very alien to you, but then you just have to look at your own experience as a person that puts up an image of themselves that has been stylized to meet the needs of a particular platform, or to meet the needs of what you want to achieve in terms of how you communicate with others via that platform.
So, your LinkedIn avatar or your Instagram avatar or profile picture might be quite different, but they are designed to represent something and to say something on your behalf. And as you mentioned, increasingly these images can be animated. Well, this is kind of a natural, if a bit accelerated, continuation of that trend.
And so you can kind of see, as experiences become more and more immersive and as video in particular becomes more and more pervasive, thanks to the fact that you can generate it using AI, how this type of self-representation can become increasingly normalized.
Helen Todd: And one thing that I’d like for you to explain, and it’s been explained on the show, but in case listeners or viewers are tuning in for the first time: I know that as soon as people hear synthetic media (and there are a lot of different names for avatars: custom synthetic avatar, digital twin, AI clone, there’s a lot of different vocabulary floating out there), immediately, because of all the headlines, they go to deepfakes and the unethical use of them.
So can you, if anyone isn’t aware of that, break down the difference between deepfakes and the ethically made digital twins that we’re talking about?
Natalie Monbiot: Yeah, the nomenclature around this category has been something that we’ve thought a lot about and played around with, and it’s kind of evolved as the years have gone by.
And so I’ll just say that we at Hour One have chosen to use the terms virtual humans and virtual twins, distinguishing them from digital twin, which can be a digital twin of a space or a building.
It’s already a term that is quite known in its own kind of adjacent field. But going back to sort of 2019, the term that was really used for this type of technology was deepfake, and I feel that that term has negative connotations just kind of inherently.
But I also think that people weren’t aware of any use case or possibility of this technology that didn’t have negative connotations. So back then, when we were investigating this space, we would just call it synthetic media. Okay, so that’s like the proper term. And the way that we distinguished the term deepfake from synthetic media was that deepfake was kind of the nefarious use of synthetic media.
And in particular, it was the non-permissioned use of your digital likeness. And also it has an intent of trying to deceive, right? So deepfakes, as we had defined it then, and I still think that this is a viable working definition, is non-permissioned use of the technology that is designed to mislead.
And I think there’s only one area in which designed to mislead isn’t a bad thing, and that is around satire. But then there’s a certain level of knowledge on the user’s part that what you’re watching is satire; it’s kind of using the technology and its ability to deceive as the tool to deliver the satire.
But in all other respects, it was a pretty bad thing. So we worked to kind of demonstrate the use cases for synthetic media in a way that was ethical and in a way that could be commercial for really big and established brands. And I think, right from the get-go, we were able to actually engage some very traditional and large-scale brands based on this kind of transparency.
So, first of all, any virtual person that’s either available on our platform, kind of like stock characters, or any specific person that signed up to become a virtual person, would do it with their express consent. We would always require an agreement, which stated that we could handle the footage and use it as training data to create the virtual twins. That’s step one, in all cases.
And then, you know, the details of each agreement would depend on what the intended use and collaboration intent was with that virtual twin.
But the other component that distinguished what we were doing from deepfakes, and that encouraged brands to partner with us, was around transparency: to always respect the user’s right to know that the content they were looking at was computer-generated. And we did that by mandating that somewhere within the frame, it would be disclosed that the content being viewed was computer-generated.
It wasn’t actually delivered by a real person in the traditional way.
Helen Todd: I love that, because when I use my digital clone (I probably say custom synthetic avatar or digital clone the most), I always want to make sure that everyone knows that it’s my clone and not me. So I fully am in the camp of: the more transparency around this form of media, the better, especially since there are still a lot of fears and stuff around it. And I’m starting to clone people here in Cincinnati with a partner studio, and with Render, our Creativity Squared partner.
Oftentimes people are like, “Oh, this is great. But how do you go from novelty to use? And like, what do you actually do with a clone?” You listed off a lot of examples just a second ago, but what’s your general response when you hear that question of what do you do with a digital clone or a virtual human?
Natalie Monbiot: Yeah. It’s really interesting, because I’d say that at the same time that it was a novelty, like a total novelty back in 2019, there were also, simultaneously, examples of how this would be used for really practical purposes.
So, in the case of Berlitz, they needed to digitally transform beyond in-classroom learning. And this came up as a strategic priority well before the pandemic. And there was actually no way to film enough footage with real instructors to be able to create the body of content necessary to educate people online.
And they actually tried doing that, and it wasn’t feasible. So they had to look for technological solutions, ones where there was a real human, or a synthesized photorealistic human, that could be expressive in the way that a teacher could be, to show pronunciation and to kind of imply or infer context as well.
So we were already seeing the need for this technology to solve problems that existed for a lot of different businesses. But, you know, technology sort of happens and innovation happens in pockets. And I think that over the years, it’s now actually become more pervasive.
If you look at the learning and development industry, a vertical within a lot of enterprises, a good number of those companies are employing AI tools and virtual humans in the mix of their content development today. So it really is sort of becoming a given in some pockets, in very large organizations.
So, I think that’s really interesting. Another thing that has taken it from novelty to pervasiveness is that right now we have one of our clients, Enfamil, which is a baby formula brand. And on Amazon right now, without batting an eyelid, you’ve got videos explaining the product right in the listing, presented by virtual humans.
So it’s interesting, because things sometimes don’t move as fast as you think they will. And sometimes they move faster. Or I think maybe what it is, is that there are these moments of inflection.
And everything happens at once. I think, as we’ve seen with the whole sort of ChatGPT era, it sort of felt like everything kind of happened at once, but there was a lot of work invested in getting to that point.
But, yeah, maybe in 2020, I was presenting at retail conferences with POCs that we’d done with Amazon directly and other retailers, showing, you know, here’s a virtual human selling TVs and such.
And actually, our very first customer was a used car platform. So in the wild, back in 2019, there were virtual people selling cars. So I think it’s sort of been happening in pockets, and now there’s pervasiveness within certain verticals or horizontals across enterprises.
And we’re just going to see that roll out into more contexts and scenarios, especially as the technology evolves, meaning specifically how emotional and hyperrealistic they are and how well they can actually communicate.
And they’ll sort of rise. Right now, let’s say, they’re a little bit back-office personas, part of the workforce but more internal-facing, particularly with learning and development and training. But we’re starting to see more external-facing roles for these virtual humans because they are more presentable and more effective communicators.
And yeah, so I think we’re going to see more and more of them out in the wild in different contexts.
Helen Todd: And I know that you’re going to have a conversation with Dr. Jill Schiefelbein, who’s a partner at Render. We had her on the show because there’s really not that much research that exists on hyperrealistic avatars, and she’s done some of it.
And her initial findings that she shared on the show are that when your virtual human stands in for you to do online trainings or whatever it is, they’re still really effective, both in engaging the viewer and, from an educational standpoint, in helping the viewer retain enough information.
So I’m really excited to see more research come out in that regard too.
Natalie Monbiot: Yeah, me too. And as you know, I’m very excited to see that. I can tell you that, maybe again in 2020, we started working with an AI hiring platform that was basically conducting initial screening interviews in an asynchronous manner, taking webcam footage online, and they were using text initially.
So as a candidate, I would see on my screen a question in text, and I would respond to the camera, and that felt like a bit of an unnatural experience. It wasn’t conversational; it was sort of mixing media in a way that wasn’t really comfortable for the candidate.
And so, kind of did it, we did a test, where HR representatives were on the other end. Okay. So it’s like a live person. And then we also introduced kind of an AI, HR representative that disclosed that they were an AI and then they actually tested this using real candidates for jobs at that company.
And the findings were really interesting because, similar to Dr. Jill’s findings, people enjoyed, preferred, and felt most comfortable with the AI avatar as the interviewer. First of all, it was better than text, which was weird, awkward, and impersonal, and also reflects badly on the company conducting these interviews.
It doesn’t say anything great about their culture that they would be so cold like that. And you’d think that maybe the real HR representative would be the best experience. But actually what we found, also by looking at the videos of these experiences and comparing them, is that the HR representatives look tired.
They look really tired. Having to go through all of these screenings is really exhausting, and not necessarily the best use of a human being’s time at that first juncture. So it was really eye-opening to go, oh, wait a minute: an AI avatar can be constantly smiling, on brand, and welcoming.
And the user knows it’s a canned, pre-recorded experience anyway; it was obvious that it wasn’t a live conversation. I should clarify that with the real HR representatives, it was videos of them asking the questions.
So when it was videos of them asking questions versus a video generated with an AI avatar, with perfect lighting and just the right vibe, the AI avatar is what candidates felt most comfortable with.
Helen Todd: And that actually doesn’t surprise me at all. I think we discussed this at some point: as an interviewee, you’re also looking for facial signals from the person conducting the interview, which can be distracting or take you out of answering the question.
So I could totally see people being more comfortable with an avatar than with a human interaction, especially in that context, just because there’s so much psychology that happens in interviews themselves. So that’s really interesting.
Natalie Monbiot: And just outside of the AI avatar space, looking at common truths from other areas of technology and human-computer interaction: consider how honest people feel they can be when they’re not talking to a human, or when they feel like a human isn’t in the loop reading what they’re saying or asking. Look at Google search.
Or search in general. People are not afraid to search for what they really mean and what they really need. And I think there’s some level of comfort in talking to computers about certain topics that might feel sensitive, or where you feel like you might be judged by a real human being.
Helen Todd: I think the history of algorithms goes back to a story there. I think it was a psychologist, I forget his name, but the computer program was ELIZA, and all it did was repeat back whatever was typed to it, mirroring the user. And he let his secretary start using it.
So if you typed something like, “Oh, I’m having a bad day,” the computer would just write back, “Oh, sorry you’re having a bad day.” And within just a few minutes, the secretary, and I know that’s an outdated term now, asked him to leave because she wanted personal time with the computer.
And he was really taken aback by the power of having yourself mirrored back to you, even digitally. And then, you know, the rise of…
Natalie Monbiot: Yeah, being heard.
Helen Todd: Yes, all algorithms, you know, hearing back what we want.
Natalie Monbiot: Maybe it’s not even about that. Just the mirroring, just the feeling of being heard, will do.
Helen Todd: Yeah, we are funny creatures. Well, one thing that you had mentioned is the commercialization of these virtual humans. And I know you’ve spent a lot of time on the legal side of things: what does it mean to own your virtual likeness?
So I was wondering if you could walk viewers and listeners through that framework, because it sounded like you’ve put together your own.
Natalie Monbiot: Yeah. So since the very beginning, we were thoughtful about, first of all, how we could create these virtual twins and put them to use.
And we made a conscious choice to base these virtual people on real people, because we thought that was a really exciting opportunity for real people to play a role in this AI-infused future. How could you allow people to play a role in that future through our extended selves?
So we were thoughtful, first of all, that it should always be done with consent. That’s part of the legalese involved: getting someone’s permission to take the footage and use it as training data to create the virtual twin, and then also putting other legal parameters around that relationship and the use of that virtual twin.
So for the maybe 300-plus stock characters that have graced Hour One’s platform and been available for work to Hour One’s customers, the standard contract is that they consent to being used in Hour One’s content or content by our customers, and it sets parameters around the kinds of content they won’t show up in.
For Hour One, that’s no illegal content, obviously, no content of a sexual nature, no political content, that sort of thing. The agreement is that you won’t be seen in that type of content. And the customers we have tend to be using the platform for a lot of information-led training and learning content.
And as part of that agreement, the real people behind the virtual twins get paid based on how much their virtual twin is used in these commercial videos. We’re not talking a lot of money here; we’re talking in the hundreds or low thousands. But again, we’re at the beginning of this virtual human economy, where even the fact of it happening is really interesting: people are engaging in this way, and this kind of marketplace exists and serves all parties.
So those are the components that go into it. You can also have your virtual twin for your own personal use, and if that’s the case, that will be part of your agreement. It could be, for example, Helen, that your virtual twin is only available to you, or available to you and some colleagues as well. It’s up to you to decide who has those permissions.
We do clone quite a few CEOs. One way of looking at who should have use of and access to that virtual twin would be the PR team, your social media team, the usual people who would be making, producing, and publishing content on your behalf.
This is kind of an extension or a new way to publish content on your behalf.
Helen Todd: And just one question out of curiosity: are your stock avatars based on real humans, or are they AI-created from general face training data?
Natalie Monbiot: There’s a real person behind every single one of our avatars.
Yeah, we made that choice to keep real humans in the equation, so they can benefit from some of the upside of this new economy and this new AI-driven world.
Helen Todd: Very cool. So it’s like the new stock photography to a certain extent, at least from the photographer’s standpoint of earning passive income from royalties, just as a mental reference point.
Natalie Monbiot: Yeah, that’s a good analogy.
Helen Todd: Yeah. Well, I know that you’ve also written a paper, which you shared on LinkedIn, about why it’s important to use as real a likeness as possible for business use cases. Because in the avatar world, everyone probably already has an avatar. I was at the Apple store not too long ago, and I don’t know if everyone knows this, but on their little badges they have cartoon-y avatars to represent the person you’re talking to.
Whether it’s Apple, Meta, Snap, or gaming environments, we all have these cartoon-y or game-like avatars, but it really depends on the context. So I want you to expand on that: what’s the importance of hyperrealistic or photorealistic avatars?
Natalie Monbiot: Yeah, there’s a lot there that I’d love to touch on. The first thing would be around nomenclature again. “Avatar” is just the easiest term that means the most to most people; everyone knows it. So it’s a good catch-all in general, especially for people who are completely new to this idea of virtual twins.
But the usefulness of the term “avatar” ends right around there, once you get into the nitty-gritty, because the term really comes from the world of gaming, where you do have these more cartoonish avatars.
Of course, gaming is becoming so sophisticated, and photorealism is really embedded in gaming, but the term “avatar” often denotes a more cartoonish version of yourself, which is why we’ve generally opted for “virtual twin”: something that actually resembles you in a photorealistic way. So why is it important to have a photorealistic or hyperrealistic representation of you?
I would refine that and say it’s important in the context of work. The type of virtual representation of you should fit the context in which you hope for this avatar, this virtual twin, to represent you.
So for example, if it’s in a cartoonish world, an immersive world, you want it to look maybe fantastical; you want to play to the properties of that world. If it’s about fantasy, you want to be fantastical.
However, if we’re talking about your white-collar working virtual twin and that context of your life, if you’re on LinkedIn and have a presence there, you want to look like you. First of all, there’s a real trust aspect to that. You do a lot of Zoom calls, and you want people to know that’s the same person, some consistency. You might even meet in real life one day, who knows, and you want to carry through that consistency.
But also, on a more practical basis, on LinkedIn you’re actually not allowed to have a cartoonish avatar. They cracked down on that back in the Yuga Labs days, when people were putting their Yuga Labs avatars on LinkedIn.
So there are a lot of reasons why you’d want to use your photorealistic avatar. And, let’s be honest, you don’t necessarily need to put exactly what you look like online. At this point, we’re all too savvy: we’ve got our Zoom filters preset, our slightly enhanced, professional-looking selves, an image that can still be attributed to you, the real you.
Helen Todd: Oh, my professional photo was done with a professional photographer, a professional makeup artist, and a hairstylist. So it’s me, but a very enhanced version.
Natalie Monbiot: Totally. And your virtual twin is absolutely an extension of that you, because when you do your original capture, which by the way is often conducted in a green-screen studio, you have your hair and makeup done and you’re wearing something you feel great in.
So it’s the perfect extension of having a professional headshot done. In fact, I can see a near future in which headshot studios and photographers add a virtual twin shoot to that session, while you’ve got your hair and makeup done so nicely.
And given that your virtual twin is an extension of your headshot, an animated version of it that can represent you in augmented ways, it really does make sense to think of the two as playing a similar role.
Helen Todd: The one thing I did request after I had my virtual twin made: I thought my cheeks looked a little chubbier than normal, and I asked, “Can you just thin those?” They said, “Nope. Once the footage is done, it’s done.” Whereas with my photo, I had a retoucher touch it up. I think that capability will eventually come with the tech, but at least in the production we do for clones, whatever the video captures is it: you’ve got to look your best for the shoot, because there’s no retouching or airbrushing afterwards.
Natalie Monbiot: Yeah. It’s an interesting one, because while that isn’t specific manual retouching, we have gone through various iterations of our algorithm and played around with it. We’ve just explored why having a virtual twin that looks like you is important in the context of work, but at the same time, we do want to look like improved, glossier versions of ourselves.
We’re competing against all the glossy images of people online. So we have played around with our algorithm a little. At one point it did enhance you: it made you look a little slimmer, and your teeth definitely looked a little better. That was actually my last avatar.
That was the one before the one I just had captured, which looks more like me, which is fine. But still, it’s the great lighting and all of that at the moment of the shoot that, as you say, is really important.
Helen Todd: And so how many avatars do you actually have and how do you use yours?
Natalie Monbiot: Yeah, that’s a great question. I’ve probably had six different ones done at different times since 2019. But it’s interesting: only about a year and a half ago did I hit an inflection point where I felt good about my avatar representing me. It was the first time I actually posted my virtual twin on LinkedIn, about something I was doing or speaking a language I wanted to experiment with. I felt comfortable putting her out in the world as a representative of me, which felt like a really interesting moment.
Because up until that point, I was very freely putting virtual humans from our platform out there as examples of our work and case studies with clients. But there’s something very personal about it. Think about it: if I’m going to post photos of myself or my family or friends on Instagram, I’m very thoughtful about that. Do I feel comfortable with that? Does this feel appropriate?
And I think with my virtual twin, by the time I was comfortable posting it openly online, there were a few things at play. One was the quality: I liked the way it looked, I felt it was sufficiently expressive, I felt it was flattering, that sort of thing.
And then, culturally, it felt like a moment in time when the hype around the metaverse had just peaked. Suddenly it was: well, if the metaverse is a given, then it needs to be populated, and who’s going to populate the metaverse if not your virtual twin?
So there was this moment where virtual people and virtual worlds were suddenly normalized, and I felt like this was the moment my virtual twin could be out there on my behalf and present content for me.
That said, prior to that, I’d had my virtual twin appear in the context of presenting Hour One and demonstrating the power of the technology, but not really out there as my virtual twin, put to work on my behalf.
Helen Todd: Since you mentioned the metaverse: it’s definitely come up many times on the show. I had kind of written it off, but so many guests across different episodes have really impressed on me that it’s inevitable. And I know when we were talking earlier, the dream for our virtual humans is interoperability, especially once we’re spending more time in immersive worlds.
But you said something interesting about how video is kind of interoperable now. So I was wondering if you could expand on that point, because I thought it was a really interesting one.
Natalie Monbiot: Yeah, I think in a Web 2.0 era, you could call video interoperable in the sense that it can travel everywhere.
You can view a video file anywhere; it can be streamed on so many different platforms and in so many contexts. Video is the most pervasive medium. And I think that was one of the key choices at the founding of Hour One: that this would be a technology that enabled you to scale your content, scale your likeness, and generate content that could reach a lot of people.
So in that sense, it’s a Web 2.0 version of interoperability. But yes, I think Web 3.0 will bring a truer definition of an interoperable virtual twin or avatar: one that you can truly own, and that doesn’t need to be attached to the servers of the companies that own the technology.
Helen Todd: You definitely made me think differently about video after that conversation, so I really appreciate that perspective. Another question that I know is important for both of us: using these tools, these synthetic avatars, to augment humans and not replace them. I was wondering if you could expand on your thoughts, and Hour One’s position, around the idea of augmentation over replacement.
Natalie Monbiot: Absolutely. I think that goes back to the choice of whether to use real humans as, essentially, the training data behind the virtual humans. And the choice was that there are enough real humans in the world that we don’t need to invent new likenesses.
We also wanted to tether real humans to this future of AI, and we thought about the opportunity to enable real humans to create versions of themselves that could empower them as individuals.
I think the most obvious way individuals can be empowered through the extended skills AI can give you is the ability to speak languages you don’t know.
But what we’ve seen, and it’s been really interesting, particularly in the last twelve months with ChatGPT and the similar explosion in other generative AI technologies, is the ability to connect your virtual twin to augmented intelligences.
That could be your total body of work, which you can now plug into your virtual twin so that somebody can engage with it and get the fuller spectrum of what you’ve achieved over your lifetime.
When we’re having a conversation like this, we’re depending on recency of memory and experience. Live conversations, I think, still sit at the very pinnacle of human experience.
But there’s something to be said for a podcast interview with the fully capable version of you, one that has at its fingertips all the knowledge, all the insights, and all the breakthroughs you’ve ever come up with, readily available.
So I think it’s interesting to think about how we can have virtual versions of ourselves, augmented selves, that are the best we can possibly be, and to be able to connect other intelligences. We just talked about languages; those are skills and abilities we might not have natively and inherently.
There’s all kinds of knowledge we can now connect to our virtual selves to help us be more productive, and, who knows, to create entirely new ideas and businesses that weren’t possible before.
An example of that: we work with a futurist called Ian Beacraft, who has a virtual twin, and his virtual twin has been hired as the AI correspondent for a broadcaster. That broadcaster, by the way, is also a startup, also founded around 2019: a news broadcaster that doesn’t have any anchors, doesn’t have any studios, doesn’t even have a physical location.
That business was able to be created, and to thrive, thanks to this technology. So I think we’re going to see lots of people taking advantage of this technology in new ways, fulfilling their potential in ways that weren’t possible in the past.
Helen Todd: Natalie, you’ve given me another reason to appreciate my clone. Not only will it probably pronounce things better than I do, but once it’s trained on all the data about my own work, it will have a better memory of it than I probably have myself.
Natalie Monbiot: Let’s start that, yeah.
Helen Todd: I already see Google and Wikipedia as my external memory, so this is only going to amp that up that much more. Well, another thing too: since the show is Creativity Squared, exploring AI at the intersection of creativity, one of the things we discussed is what these digital twins, our virtual twins and virtual humans, could do for creative expression.
So I’d love to hear you kind of expand on your thoughts around this topic.
Natalie Monbiot: Yeah. I’d say that up until around now, your virtual twin has been a less expressive version of you, right? And it’s only good for certain things today, like presenting information and presenting decks: sober-to-the-camera, not super expressive stuff. Again, it’s this less expressive version of you, but one that’s capable of featuring in endless amounts of content on your behalf.
But I think where we’re going is that your virtual twin will be able to be at least as expressive as you. And in some cases, it’s not just that being in front of the camera is inconvenient: a lot of people aren’t good in front of the camera and aren’t able to express themselves the way they intend and the way they feel.
So your virtual twin will be something you can guide, and in a way, maybe the truest expression of you. I think that’s really interesting, and that’s where we’ll see a lot of creativity, in this expressiveness.
And then there’s what we’re doing and what we specialize in, our foundational models around the virtual human. We’ve already combined different GenAI tools into our platform to make it really, really easy to create content without any creative skills at all. You just need to be able to prompt decently, and we actually take care of a lot of the prompting for you too, with pre-made prompts that are the most useful for getting the kind of content you want out of the system. When you start combining all of that with the innovations we’re seeing in AI video outside of virtual twins, it’s incredible to think what kind of creativity we’re going to see.
Helen Todd: I love it, and I’m so excited to see what comes out of it too. On what you said about your virtual twin maybe being better on camera than you: one really interesting thing, which I haven’t done with my own clone yet, is personal development.
I’ve been wanting to feed it mantras and then watch someone who looks like me and sounds like me say them. There’s a phrase I repeat often: you cannot be what you cannot see. And you already see some of this personal development work in VR. I think it’s a really compelling idea to see yourself, whether it’s speaking another language, like you mentioned earlier in the interview, or whatever it may be.
So I’m really curious about personal development and how these avatars can actually help us improve ourselves too.
Natalie Monbiot: Yeah, there are various versions of the saying that all the knowledge already lies within you, right? So the question is: how do we coax it out of ourselves, and how do we essentially manifest it?
I think that’s really interesting: the idea of literally manifesting it, and seeing a role-play version of you doing the thing you’re trying to achieve in real life. Having a personal you as a coach is a really interesting concept that I’ve enjoyed thinking about.
And, yeah, it will be really interesting to see what businesses and ideas come out of that.
Helen Todd: I’m so excited. It’s such an interesting time to be alive, not only in the avatar space but in generative AI in general; there’s so much going on. And I know you and I could keep geeking out about this.
Natalie Monbiot: And we will.
Helen Todd: One thing that I always like to ask my guests: if you want our listeners and viewers to remember one thing, what is that one thing?
Natalie Monbiot: I think the main takeaway is that we are living in this virtual human economy, and while it might seem distant and irrelevant to you as a person right now, it’s worth thinking about the entire ecosystem of it.
Whether that’s actually creating your own virtual twin that you can put to work on your behalf, and what that means. What could you do with that virtual twin? Did you ever dream of becoming, I don’t know, a TV host? What would your topic have been? It used to be, “Oh, not in my lifetime.” But maybe in your lifetime, yes, you can do that.
Or think about what the virtual human economy means for the rest of the ecosystem it touches. We’ve seen celebrities being hired by giant platforms like Meta as their virtual selves, and the promise there is that they can further build their communities. Look at those incentives: platforms want to enable more engagement, celebrities and creators want to build their communities and fan bases, and fans want more access to their favorite creators and celebrities.
So if you have any role in that ecosystem today, or anything adjacent, this could really touch you, benefit you, or be something worth thinking about and exploring from different angles.
Helen Todd: What are you looking forward to most in the upcoming year, or in where this tech is going?
Are we all going to have our virtual twins, and what does that world look like? Where is it going, and what’s the ideal scenario for you?
Natalie Monbiot: Yeah. For years we’ve thought about what the ultimate expression of what we’re doing at Hour One would be, and I think it’s everyone with a LinkedIn profile having a virtual twin.
I like what we were saying before: you have a professional headshot done, and in the same breath you capture your virtual twin, because you can now bring that headshot to life. It’s your LinkedIn headshot working a lot harder for you than it was. That seemed like a pretty distant vision, but I can see us accelerating toward that point much more quickly.
And I think that that’s pretty exciting.
Helen Todd: Well, I couldn’t agree with you more. And I think any photographers with portrait studios who are listening should take note, because it’s definitely a future that is coming, and in some ways already here.
Natalie Monbiot: I would actually just say, for example, if you are a photographer who takes headshots, a practical extension of what you’re doing, based on this virtual human economy trend, would be to partner with a company that enables this. It seems like a very simple extension of what you’re already doing.
Helen Todd: Natalie, I know that we can keep geeking out forever about clones and the virtual human economy, but it has been so wonderful having you on the show today. So thank you so much for all of your time and insights and perspective.
Natalie Monbiot: Helen, this has been really fun. Thank you so much for having me on the show.
Helen Todd: Thank you for spending some time with us today. We’re just getting started and would love your support. Subscribe to Creativity Squared on your preferred podcast platform and leave a review. It really helps and I’d love to hear your feedback. What topics are you thinking about and want to dive into more?
I invite you to visit creativitysquared.com to let me know. And while you’re there, be sure to sign up for our free weekly newsletter so you can easily stay on top of all the latest news at the intersection of AI and creativity.
Because it’s so important to support artists, 10% of all revenue Creativity Squared generates goes to ArtsWave, a nationally recognized nonprofit that supports over a hundred arts organizations. Become a premium newsletter subscriber or leave a tip on the website to support this project and ArtsWave. Premium newsletter subscribers will receive NFTs of episode cover art and more extras to say thank you for helping bring my dream to life.
And a big, big thank you to everyone who’s offered their time, energy, and encouragement and support so far. I really appreciate it from the bottom of my heart.
This show is produced and made possible by the team at Play Audio Agency. Until next week, keep creating!
Theme: Just a dream, dream, AI, AI, AI