Eric Solomon never expected that the work he started in psycholinguistics over 20 years ago would have consumer-facing implications in today’s A.I.-driven world.
Eric has worked at the intersection of psychology, brand-building, technology, and creativity. He has held marketing leadership positions for some of the top brands in the world including YouTube, Spotify, Google, and Instagram. He also served as the Chief Marketing Officer for Bonobos, a DTC e-retailer owned by Walmart. Eric entered the business world through the doors of academia, earning his Master’s and Ph.D. in cognitive psychology prior to leading research and strategy for award-winning creative agencies. He currently serves as the Founder and CEO of The Human OS™, a strategic advisory platform that Eric built from the ground up. He’s a lover of electronic music, daily exercise, and extremely delicious food.
Eric started his PhD program in 1999 exploring cognitive psychology with a focus on machine learning and artificial intelligence. Not only is this fascinating, but it was also really the early days of applying language to this field.
In our discussion with Eric, he provides insights on navigating the complexities of A.I. to ensure we stay on the right path, avoiding profit-driven pitfalls and including diverse voices in the decision-making process. He talks about how establishing a strong ethical foundation, akin to an operating system, becomes crucial for the responsible development of A.I. This system encompasses understanding how A.I. systems work and establishing clear beliefs and behaviors. Without it, our journey forward lacks direction, clarity, and purpose.
He reminds us that machines are not human, and we cannot let them convince us otherwise. Eric points out that our collaboration with A.I. should aim to augment rather than replace human capabilities. While A.I. excels at speed, repetition, and low-level tasks, the preservation of human critical thinking remains paramount.
Eric’s personal journey and professional journey converge quite a lot. He grew up impoverished with little parental support, driven by a desire to live a different life than his parents had. He poured himself into academics, which led to a Master’s and Ph.D. focused on linguistics and the computational structure of language, in what were the early days of machine learning. While doing his postdoc studying how Big Tobacco used big data to get people hooked on nicotine, he had the opportunity to move from academia to advertising, which led to major career milestones at YouTube and then Spotify.
March 2016 forever changed the course of Eric’s life. Returning from a leadership meeting with Spotify, he received the devastating news of his father’s murder. Amidst the complexities of the investigation and a profound sense of grief, work became an outlet for channeling emotions. However, after the investigation into his father’s murder was closed, unsolved, Eric realized he didn’t understand his core purpose and end goal. He resigned from the corporate world and started his own company, The Human OS™. Today, he helps individual leaders, teams, and organizations build, install, or update their human operating system.
During Eric’s time in academia, he explored the computational structure of language, and his research aimed to understand how humans process and generate linguistic expressions. Little did Eric know that this would lay the groundwork for advancements in artificial intelligence and deep learning networks.
The ultimate goal of A.I. research has always been to create machines indistinguishable from humans. However, as A.I. systems such as ChatGPT gain popularity, a significant concern arises: can we trust systems that simulate human-like conversations?
Eric is apprehensive about the potential consequences of placing too much trust in machines fed with information from an imperfect internet. He emphasizes the need for skepticism and critical thinking when interacting with A.I. systems that might convince us of their humanity while remaining fundamentally different—mere machines.
The widespread adoption of A.I. has been driven by its utility and convenience. Users have flocked to these systems, seeking shortcuts for tasks like writing papers or crafting emails. However, Eric has concerns about the misuse of A.I., especially when its use goes undisclosed. He believes in transparency, urging the inclusion of disclaimers so people know when A.I. has been used (disclaimer: we do use ChatGPT to help with writing here at Creativity Squared!).
Eric believes it is crucial to strike a balance between leveraging A.I.’s capabilities and maintaining ethical practices, especially as A.I. systems become sophisticated enough to mimic empathy convincingly. The danger lies in our tendency to be easily deceived. Eric brings up the Turing Test, designed to determine whether a machine can pass as a human. It raises the question: if an A.I. appears empathetic, does it matter whether it truly is? This deceptive appeal of empathy is a cause for concern, as it can lead to a false sense of connection and understanding. We must remember that A.I. lacks subjective human experience and cannot replace genuine human connections.
Another pitfall is that without proper regulation, A.I. systems can become breeding grounds for deceptive advertising practices. The existing dominance of corporate America in shaping A.I. discourse raises concerns about the motives behind these conversations. We should not be chasing profits at all costs.
Eric is concerned about the lack of a singular purpose and clear end goals for A.I. While some perceive the purpose of A.I. as creating systems that closely resemble humans, Eric argues this is not the way to go, and he finds it troublesome that large tech companies are shaping the conversation.
It is imperative to establish a purpose that transcends profit and involves a broader range of perspectives to navigate the future of A.I. more thoughtfully. We foundationally don’t have an operating system for A.I. and Eric argues we need one. This operating system should address concerns such as understanding how A.I. systems work, involving diverse voices in the decision-making process, and establishing clear beliefs and behaviors. By incorporating a diverse range of perspectives, we can foster a more comprehensive understanding of A.I.’s impact on society.
Without a comprehensive operating system, the path forward for A.I. risks lacking direction, clarity, and purpose, especially because deep learning and neural networks possess a level of complexity that surpasses human comprehension. Without fully understanding how A.I. systems learn, we cannot reliably predict their outcomes.
There’s a lot of promise in A.I., but it’s up to us as humans, not the machines, to harness that promise. Eric mentions that A.I. will accomplish a lot of unintended things, and he won’t predict what they will be. But he does think there could be a museum of A.I.-created art one day.
The potential of A.I. lies in its ability to augment human capabilities rather than replace them. By leveraging A.I.’s strengths in speed, repetition, and offloading tasks, humans can focus on more strategic and creative endeavors. However, Eric cautions against the abuse of A.I. by using it for tasks that should be accomplished by human creative thinking. Preserving critical human thinking and maintaining a human-centric approach to A.I. usage is essential for harnessing its true potential.
Thank you, Eric, for being our guest on Creativity Squared.
This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.
Creativity Squared is brought to you by Sociality Squared, a social media agency who understands the magic of bringing people together around what they value and love: http://socialitysquared.com.
Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 100 arts organizations.
Join Creativity Squared’s free weekly newsletter and become a premium supporter here.
TRANSCRIPT
Eric Solomon: I know there’s a lot of potential, but we are not great at looking at the potential of technology and making the most of it. And all you have to look at is across our social media platforms to know that that is true. So I don’t know. I approach it with a lot of skepticism and healthy skepticism that what really matters in this time, more than ever, is doubling down on true human connection and knowing the difference between that and human and machine interaction.
Helen Todd: Eric Solomon has worked at the intersection of psychology, brand building, technology, and creativity for over 20 years. He has held marketing leadership positions for some of the top brands in the world, including YouTube, Spotify, Google and Instagram.
He also served as the chief marketing officer for Bonobos, a direct to consumer company owned by Walmart. Eric entered the business world through the doors of academia, earning his master’s and PhD in cognitive psychology prior to leading research and strategy for award-winning creative agencies. He currently serves as the founder and CEO of The Human OS, a strategic advisory platform that Eric built from the ground up.
Eric started his PhD program in 1999 exploring cognitive psychology with a focus on machine learning and artificial intelligence. This is not only fascinating, but was also the really early days of the application of language to this field.
Fast forward to today and after a tragic loss, Eric focuses on helping individual leaders, teams, and organizations build, install, or update their human operating system. This empowers direction, clarity, and purpose so behaviors match beliefs, and everyone is committed to the same vision and end goal. We also discuss how this system can be applied to the AI landscape we find ourselves in.
Eric is a dear friend and I always enjoy our thought-provoking conversations, and I’m excited to share this one with you.
Today you’ll hear Eric discuss how language shapes our reality, his approach to the human operating system, the promise and pitfalls of AI, the importance of intentionality, and how we need to remember that machines are not human and embrace human to human connection now more than ever. Enjoy.
Theme: But have you ever thought, what if this is all just a dream?
Helen Todd: Welcome to Creativity Squared. Discover how creatives are collaborating with artificial intelligence in your inbox on YouTube and on your preferred podcast platform. Hi, I’m Helen Todd, your host, and I’m so excited to have you join the weekly conversations I’m having with amazing pioneers in this space.
The intention of these conversations is to ignite our collective imagination at the intersection of AI and creativity to envision a world where artists thrive.
Eric, welcome to Creativity Squared. So excited to have you on the show today.
Eric Solomon: Thank you so much for having me, Helen. This is a long time coming and I’m grateful to be here.
Helen Todd: So excited to have you. Eric and I met in New York City and became fast friends from the first conversation. Everyone will get to learn more about why I treasure Eric so much, and we’ll get to share a bit of his mind.
So most of the listeners may be meeting you for the first time through the show, and I was wondering if maybe we could start with your origin story. It’s always nice to start there.
Eric Solomon: It is, it’s a great place to start. And you know me a bit, so you have to know that my personal journey and my professional journey converged quite a lot.
So the origin story really is a bit of an arc, cuz it’s been a complicated story for me. So, you know, I’ll give people the short version of a long story. But I guess from the very earliest days, my biggest goal was to not have the life that my parents had.
So I grew up in a pretty impoverished way with not a lot of parental guidance or support. And so I kind of poured myself into education because I didn’t know what else to do to get out of the spin that I was in in my early childhood.
So I was able to get myself funding through college, and then I liked school so much, or rather, I felt school was a way out so much, that I immediately entered a master’s and PhD program. My early work: I was a linguistics major, so we can talk more about what that means. But when I moved into my PhD work, and I started this program in 1999, it was in cognitive psychology with a focus on machine learning and artificial intelligence, which was really the early days of the application of language to this field.
So you know, I was really on the ground floor of: how do we as humans speak? And then how do we learn how we process how we speak, so we can teach machines to do that? So I ended up, more out of necessity, breezing through graduate school, because I would lose funding otherwise. So I was really young.
By the time I had my PhD, I was just 25. And I still had no clue what to do with my life. So I continued on the academic journey to be a professor. I thought that’s what I wanted to be. So I looked for the next step, which is to do a postdoc program.
And the one I ended up taking was, of all places, at the medical school at UCSF, looking at how the tobacco industry had used big data to get kids hooked on smoking and nicotine, so we could figure out how to counter-market.
And my advisor was really interested in this idea of how to stop the tobacco industry from using its powers to do evil in the world. And, you know, he said, you don’t know anything about marketing, which is true, but I happen to know somebody who does. Why don’t you take ’em out to lunch? And that lunch meeting turned into a job offer.
I never expected that I would leave the academic world. But I was talking to somebody in advertising who pitched me, and that’s what they do for a living. So therefore, I was pitched into business and worked at a couple ad agencies, and was there for a while.
And then I had a big break in 2010, 2011: I had an opportunity to move over to the YouTube headquarters to co-found a team there working on the very early stages of helping brands navigate digital video. So this was really in those early days, and I was there for about four and a half years, then moved out to New York in 2015, where we met.
I moved out because I had an opportunity to be the first-ever head of the Spotify brand. So it was a bit of merging my love of music with my love of brand building. But then my life changed in a really big way, and it started to guide me down the path that I’m on now. It was March of 2016. I was coming back from a meeting with Spotify leadership; they’re based in Stockholm.
And when I landed back in New York, I had one of those calls that you just absolutely never want to get and never expect to get. It was from the chief of police in the hometown where I grew up, who told me in no uncertain terms that my father had been murdered that morning. And it really, you know, it was very, very complicated given what my dad did for a living.
It was complicated given my dad’s life, and it was complicated due to my family structure. Essentially there was a criminal investigation that remained open for two and a half years. And during that time I didn’t know what else to do but pour myself into work. So my resume looks incredible during this time of grief.
I became the head of marketing on the business side for Instagram globally, which was a huge job, getting to really see the front line of social media, and I helped launch Stories, which is now a mainstay of the Instagram platform. Then I became the chief marketing officer for a brand called Bonobos, which is a direct-to-consumer men’s brand owned by Walmart.
But it was a few months into that job when things started to click, and maybe not always for the right reasons, but the universe has a way, doesn’t it? I was on vacation at the end of 2018 and I got a call from the investigator on my dad’s case, telling me that due to lack of criminal evidence, all charges had been dropped. So I’ll never know what happened.
And that’s when I really took stock of everything that I had been doing and who I was, and realized that we understand that things like our phones and this computer and even this conversation can’t happen without an operating system underlying it.
In a very similar way, I realized that I no longer understood what my operating system was, what I call my human operating system. I didn’t understand what my heart was, what the core of my purpose was. I didn’t understand what my end goal was and how I was going to approach that end goal. I didn’t know what I believed and how my behaviors were aligning with those beliefs, and I certainly didn’t know how to talk about everything that I was feeling.
So I did something very scary while on vacation. I called my CEO and I resigned from the corporate world at the end of 2018, and I started my own company called The Human Operating System in January of 2019. And that’s what I do today: I help individual leaders, teams, and organizations do what I call build, install, or update their human operating system.
So believe it or not, that was the short version of the origin story, but that’s what I can give you.
Helen Todd: Yeah. Thank you for sharing. And we actually met right after my father passed, and I think you were actually one of the first new people I had met, and I didn’t even know how to answer “how are you?” cuz I was still in such grief, and you were so supportive in that period. And I think that’s one reason why we clicked so quickly. So thank you for sharing that story.
Eric Solomon: Of course
Helen Todd: One thing I find really interesting about your origin story is going back to the school portion, the linguistics, and how you tie linguistics to machine learning. Especially, you know, I would say you’re the OG in this space in a lot of ways. So how did you tie linguistics in to then go into machine learning? I think that’s so fascinating.
Eric Solomon: You know, it’s crazy, cuz a lot of people don’t really know what linguistics is. I didn’t know before I started the program. It’s not something you have an opportunity to be exposed to.
But linguistics really is about the computational structure of language. It’s how language is organized and how we go from thought to producing and understanding this code, essentially because linguistics really is about the code of language. And the undergraduate work I did required us to do these undergraduate thesis projects.
And the one that I chose was a bridge into this work, because I would literally strap electrodes to people’s heads and listen and watch the different responses they had when you would give them sentences like “the nurse saw himself in the mirror” or “the police person saw herself at the desk.” And you got to see these responses to what happened when you had these gendered pronouns going on in places where people didn’t expect them.
And I started to really ask the question of, like, okay, I’ve studied the structure of language, but now I really wanna understand how the mind both processes and produces linguistic expression.
And so I started to pursue this field, which has a very cool-sounding name: psycholinguistics. And psycholinguistics is really, really related to this idea of machine learning and artificial intelligence, because it really is about this computational structure in our minds, how we process these things, and then how that knowledge can be used for machine learning and for deep learning networks.
So that’s how I got into it again all by accident really.
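The electrode studies Eric describes measured surprise at unexpected words, an effect that modern language models quantify as surprisal: the negative log probability of a word in its context. Below is a minimal, self-contained sketch of that idea; the probabilities are invented for illustration and are not drawn from any real study or model.

```python
import math

# Toy conditional probabilities for the word following "the nurse saw ...".
# These numbers are invented for illustration; a real experiment would
# estimate them from a corpus or a trained language model.
NEXT_WORD_PROBS = {
    ("the", "nurse", "saw"): {
        "herself": 0.60,
        "himself": 0.05,
        "her": 0.20,
        "him": 0.15,
    },
}

def surprisal(context: tuple, word: str) -> float:
    """Return the surprisal (in bits) of `word` given `context`."""
    p = NEXT_WORD_PROBS[context][word]
    return -math.log2(p)

context = ("the", "nurse", "saw")
for word in ("herself", "himself"):
    print(f"{' '.join(context)} {word}: {surprisal(context, word):.2f} bits")
# The mismatching pronoun ("himself") carries much higher surprisal, the
# computational analogue of the spike Eric observed on the electrodes.
```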
Helen Todd: Yeah, but I think that’s still fascinating that you took the linguistics and then wanted to apply that to the machines. If it’s all coded, can machines figure it out? Like, can they talk like us?
Eric Solomon: Yeah, can they talk like us? And, you know, specifically, like, how would they go about talking?
Like how do they plan things when they talk? How do we plan things when we talk? They’re not really intuitive questions. You have to do some research to get to them.
Helen Todd: Yep. And fast forward: AI has been around for a long time, but really with the onset of ChatGPT in the fall, it hit mainstream consciousness, where everyone’s kind of waking up and learning about how much AI is impacting our lives right now.
And I think one of the things that’s fascinating about it, especially tied to your background is that one of the reasons why it’s so accessible is that it’s using natural language and feels like you’re talking and interacting with this machine as if you were talking with a human, which opens up all types of questions.
So yeah, how does this make sense to you given your background that it’s so much more accessible because we can interact with AI in natural language?
Eric Solomon: Yeah, I mean it’s surprising cause I never really expected any of that work that I started, you know, over 20 years ago to be relevant to anything that we’d see in a consumer facing way.
You know, I think where I get both excited and, maybe on the spectrum of excitement to fear, fearful, is that this has always been what it’s about: can we get to a point where we cannot tell the difference between a machine and a human being?
And my concern is that when we start to really trust these systems, whether it’s ChatGPT or Bard or, you know, any of the systems that might be coming out in the future, and trust me, there’ll be more of them, are people gonna have the skepticism that they should have, knowing that the information they’re getting is fed from the internet, which we know is full of lies and mistrust and all kinds of things that are flawed?
So my biggest concern is that they almost are gonna convince us that they’re too human, and they’re not human. They’re machines. So we can talk more about that. But yeah, I mean, I’m surprised to see the evolution of it, and I’m surprised just how much people care.
Helen Todd: In what way? Can you expand on that statement, on why people care?
Eric Solomon: Yeah. By care, I mean ChatGPT reached a million users far faster than any other platform ever has in history.
So like, I think the reason that people care is, like, I can get this to shortcut my work for me. I can get this to write my term paper. I can get this to write my out-of-office emails, or any email. And I have real concerns, to be honest with you, about ChatGPT being used that way. But I don’t think people are opting in at this mass scale simply out of curiosity.
I think they’re doing it because they’re getting a lot of use out of it, and I’m starting to wonder if the emails I’m getting are generated by a human or generated by a robot.
Helen Todd: Yeah. I, I think it’s, we’re already kind of crossing that threshold of asking is, was this written or made or created by a human or not?
And the line is only gonna get more blurred. And I think it opens up some questions too: how much do you disclose about it, and at what level? You know, we use ChatGPT to edit some of the blog posts and clarify ideas, which I don’t feel the need to disclose all the time.
But for any blog post that’s maybe fully from ChatGPT, I feel the need to do that. So do you think of any level or measure of when it’s needed to disclose or not disclose? Or is it the Turing Test: does it even matter? You know, how are you
Eric Solomon: Yeah. Right. I mean, the Turing Test really is about that idea: if we can’t tell the difference between machine and human, does it matter? Has it done its job?
You know, it’s interesting, right? Cuz you’re seeing on Spotify right now tens of thousands of songs being taken down because they’re generated by AI without any disclaimer that they were generated by AI.
That’s not okay. I don’t think the production of art by machines without disclosing that it’s done by a machine is okay. You know, Reid Hoffman just published a book that was written in collaboration with ChatGPT. I think that’s an interesting experiment. There you go: a human-machine collaboration that’s called out specifically as a human-machine collaboration. There’s something interesting there.
So I don’t know, I would like to see a disclaimer on everything. I don’t think there’s a line too small to say: asterisk, ChatGPT helped me with this. I use it for helping me name things all the time, even just for internal projects. I’m like, I need a name for this project, the project’s about that, help me. But I always say this was generated by ChatGPT, not by me.
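Eric’s disclosure habit is easy to put into practice. Here is a minimal sketch of such a disclaimer; the function name, labels, and wording are our own illustration, not any standard format.

```python
def with_ai_disclosure(text: str, tool: str, role: str = "assisted") -> str:
    """Append the kind of disclaimer Eric describes to a piece of content.

    `role` distinguishes light help ("assisted") from full generation
    ("generated"); both the labels and the footer format are illustrative.
    """
    return f"{text}\n\n* This content was {role} by {tool}."

print(with_ai_disclosure("Project name ideas: ...", tool="ChatGPT", role="generated"))
```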
Helen Todd: Yeah, no, I think that makes a lot of sense. He also launched a podcast like the day after GPT-4 was announced and has stories from GPT-4 as part of the podcast. But I think he might have had a little bit more insider knowledge about the timing of the launch of all that.
Eric Solomon: Yeah
Helen Todd: Well, going back to language, you know, I find it so interesting that language is so core to our human experience, how we understand the world, and how we understand ourselves. The language that we use, even in forming memories, kind of shapes our memories and the experiences that we have.
And now we have this tool that we can interact with in very common language. And I know you think a lot about language too with your human OS system. What are the implications of these large language models for how we understand the world, if we’re able to interact with them in natural human language?
Eric Solomon: Yeah, I mean, it’s a tough question. The implications are, the more human something seems, the more potential there is for us mistreating that system. And you think about it, we’re not that kind to each other as human beings. We love to look at other groups and not always put our best foot forward in how we show up in terms of equality.
You know where I’m going with this: we’re at a time right now where there’s so much data and a lot of conversation around the epidemic of loneliness, and we’ve got large language models that can sound a lot like humans.
Is this going to be a proxy for real human connection and human relationships? I really want people to understand that, as much as large language models and deep learning networks can seem really human, the thing that they can never have is the subjective human experience of living: the entire pasts and connections that we have and the traumas that we take with us and all of that stuff.
And I really get concerned that if we’re convinced enough that it sounds human, we’re gonna stop caring about whether it is. And so, I don’t know. I mean, I know there’s a lot of potential, but we are not great at looking at the potential of technology and making the most of it.
And all you have to look at is across our social media platforms to know that that is true. So I, I don’t know. I approach it with a lot of skepticism and healthy skepticism that what really matters in this time more than ever is doubling down on true human connection and knowing the difference between that and human and machine interaction.
Helen Todd: Yeah, and I do have a confession that I was playing with ChatGPT a little bit when I was formulating some of the questions, and I went down this big rabbit hole of, you know, what do language and consciousness mean in relation to large language models, and also what the implications are for our human relationships to ’em.
And it listed the positives, and I was like, okay, what are some of the negatives too? And one of the ones it pointed out, which kind of punctuates your point, is that it doesn’t have empathy. So even if you’re interacting with it, it can’t read the nuance in the human experience to really respond as empathetically as a human would.
So I think that’s something really important to keep in mind.
Eric Solomon: But I’d be careful there, because there’s going to be a time, if it’s not already here, where it’s gonna look like empathy. And again, this goes back to the Turing Test. If it looks like empathy, even if it isn’t empathy, what difference does it make if you feel like it is?
And I have a concern about that, right? Because we’re easily deceived. That’s been shown time and time again. And if we’re not careful, we will be deceived into thinking these are very empathetic human beings, and they’re not.
Helen Todd: Well, I know, I think we talked about this earlier, that after the Kevin Roose story about Sydney came out, which, just a quick thing, if you haven’t heard the story, everyone should listen to it, because it’s kind of
Eric Solomon: It’s amazing
Helen Todd: part of the AI ethos that exists right now. But Kevin Roose got an early beta, I think it was, was it, it was to Bard, wasn’t it?
Eric Solomon: It was to the Bing system.
Helen Todd: The Bing system.
Eric Solomon: Whatever that is.
Helen Todd: I’m getting them a little confused. But it interacted with him and he went down this rabbit hole, and the system had its code name, Sydney, come out and say, you don’t love your wife, because I love you.
And it really, you know, some people say it hallucinated. Some people say that maybe, you know, it knew he was a journalist and wanted a cool story, so it produced something for the story.
We don’t know the logic behind how it came to really go down this path. And it was a closed beta for, you know, select people like journalists, and they did course-correct some of what happened before releasing it to the public. But because of that story, I think there was, I don’t know if it was a porn site or some other site, that took down some of the chatbot tools that interacted with people, and a lot of those users got really upset with Kevin Roose.
Eric Solomon: Yeah
Helen Todd: because that tool was taken away, cuz they had already kind of formed that bond with whatever it was. I’m not remembering which one it is. So we’re already seeing that super-intimacy with these tools emerging.
Eric Solomon: Right. And that’s not really surprising. People want intimacy. People are looking for it. You know, we’ve spent a lot of time at home over the last few years. You can read lots about how this is a typical problem with men and boys, the loneliness epidemic.
And we know that, you know, historically, a large portion of the internet is dedicated to pornography. And why would we ever think that, just because AI is a new technology, it won’t go down the same path that it always goes down?
The other thing that you’re talking about here is the fact of who’s dominating this conversation right now. It comes from corporate America, and it comes from companies that have a lot to profit off of these systems.
And so that’s another path that we can go down or not go down, is who should the conversation really be driven by, and is it being driven by the right people right now?
Helen Todd: Yeah. Well I know Tristan Harris and one of his co-presenters Aza from the, what is it? The Humane Institute or Institute
Eric Solomon: Center for Humane Technology.
Helen Todd: Thank you. Spoke to how social media has really been the race to our primal emotions, with fear being a major driver of that. And AI is the race for intimacy, to create those 24/7, always-on assistants and bonds. So it made me think of that.
But I know one of the things that you also said earlier was there shouldn’t be any ads, like it shouldn’t be monetized from an ad perspective. So are there any other keys like that that you wanna safeguard, or things you see that these platforms really should not do, like monetization and, I guess, manipulation, especially with your background countering the cigarette industry’s marketing to teenagers?
Eric Solomon: Yeah. I mean, you know, I think if we don’t get in front of anti-advertising legislation immediately for this, the AI systems are gonna be full of ads. You know, what is going to stop a company from getting the system that you’re talking to to push exclusively the products that system is built for?
You know, there’s absolutely nothing stopping companies from doing that now. You know, the whole Alphabet and Google ecosystem, everything is gonna be AI-focused, you know? And so the question is, how does that lock you more into that ecosystem?
So I mean, when companies are driving the conversation, you can bet they’re not doing it for altruism. They’re doing it to make a profit. And so the question is, how are they going to make a profit? And the answer, in everything about the web since its creation, has been ads.
So if we don’t get ahead of a new monetization model or make this not about monetization at all, why do we think we’re gonna go down a different path?
Helen Todd: Yeah. I feel like this is a great time to talk about your human OS system, what that looks like, and then how we can apply that to these large language models and the consumer-facing AI. Because you’ve really thought about the system, and this is your life’s work right now. Can you share with our listeners and viewers what your human OS system is?
Eric Solomon: Yeah. So, you know, the human operating system, I’ll take a step back: this kind of came from the journey and the experience that I had with grieving through very big jobs in corporate America, and experiencing things like performance management systems where people are graded on normative bell curves and all of this.
And for a long time it was bothering me and I couldn’t put my finger on why, until I realized that, like a lot of the systems that we see, whether it’s our healthcare system or education system, I have a fundamental belief that our business system is broken.
And by that I mean, in 1970, Milton Friedman, you know, an economist, decided that the purpose of a corporation is to make a profit at all costs. The fact that we use words like human resources comes from Taylorism, comes from this idea that humans are just resources, like everything else is a resource.
Well, I take real issue with that. So I said, you know, what’s a more human and better way for us to understand that all systems exist because they have a purpose? So the human OS is about figuring out what that purpose is and how everything else stems from that purpose.
So the way that I structure it: picture in your mind an atom. Most people should know what an atom looks like because we learned about it in, you know, third grade or whatever it was. So you’ve got that atomic structure.
The way that I build the model is, at the nucleus, the real heart of the structure, what the core of that operating system is, I call the heart. And I call it the heart because a heart is what pumps blood and oxygen to the rest of a system, and that heart looks something like: why do we exist? What is our core purpose in the world?
And so I’ve helped myself, leaders, corporations, and teams define in a really clear way what that is. Why do we exist? Once you’ve got that, it’s really important to strip out all the language that we know about things like vision and mission and values, things that you’ve heard floating around corporate America for a long time.
And the way I think about it, if you’ve got your heart as the nucleus, one axis that goes around that is what I call your end goal, which is: what does an ideal world look like if everything is the way it should be? And the approach is: what are you doing right now to get closer to that ideal?
Those go hand in hand and have to be defined together. So you can call them a vision or a mission, but I think end goal and approach make much more sense. And then around another axis, you’ve got another continuum of what I call beliefs and behaviors. Values sit in an empty office in a forgotten corner somewhere, where nobody in an organization of any kind can ever repeat them.
Beliefs are what you truly stand for. And then for each of those beliefs, you have to correspondingly say, this is my behavior that is attached to that belief. Otherwise, it’s a PR exercise.
So again, you’ve got your nucleus, the heart. You’ve got your end goal and your approach. You’ve got your beliefs and your behaviors. And then the final piece relates back to my background in language, which is that all of those things are not that useful without a plane of communication: the idea that you wanna communicate, and then the logic or details of how you go about communicating it.
So you’ve got this full atomic structure; that’s when you’ve built it. Then what you have to do is install it, like you do an operating system. And when you install it, the little electrons that go around the atom, those are the actions that you take in the world.
So you say: if this is what I stand for, and this is what I believe, and this is what my end goal is, and here’s how I’m gonna get there, then here are the actions I’m going to take to make that happen. And then, of course, like any operating system, I work with some companies and leaders who have had one operating system and need to update it. Because, just as we know how many times we have to update the iOS on our Apple phones, companies and leaders very similarly have to update their OS.
So that’s how I build the model, and I try to do a lot of work with purpose-driven organizations who believe that doing this work is critical to their success, both financially and, more importantly, for their people and for the planet.
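Fittingly for a framework named after an operating system, Eric’s atomic model maps naturally onto a data structure. The sketch below is our own illustration of that structure, not Eric’s official materials; the class and field names are hypothetical, and the example values come from the Experience Camps articulation he gives a little later in the conversation.

```python
from dataclasses import dataclass, field

@dataclass
class HumanOS:
    heart: str                   # the nucleus: why do we exist?
    end_goal: str                # one axis: what the ideal world looks like...
    approach: str                # ...and what we do right now to get closer
    beliefs_to_behaviors: dict   # other axis: each belief paired with a behavior
    communication: str           # the plane of communication: idea plus logic
    actions: list = field(default_factory=list)  # the electrons: actions taken

    def install(self, *actions: str) -> None:
        """Installing the OS means committing to concrete actions."""
        self.actions.extend(actions)

    def update(self, **changes) -> None:
        """Like any OS, the human OS needs periodic updates."""
        for name, value in changes.items():
            setattr(self, name, value)

camps = HumanOS(
    heart="Show grieving children that childhood carries on.",
    end_goal="A world where every grieving child has a life of hope and possibility.",
    approach="Give grieving children experiences that will change their lives.",
    beliefs_to_behaviors={},
    communication="Clarity first: who we are, why we are, what we do.",
)
camps.install("Run free one-week camps for ages 7 to 17",
              "Offer programming across the year")
```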
Helen Todd: I really love that, and it’s really about aligning the language of how you wanna be in the world with your actions. I love Esther Perel, and one of the things that she says is that relationships help us become who we are. But I also think conversations, whether with our friends and family or with ourselves, help us become who we are too.
Cuz language is so important to forming our identities and organizing our thoughts and that type of thing. And I know you spend a lot of time with your clients working on the language. So can you expand on how you think about language when you work with your clients on that atomic model, and how important that is as part of your process?
Eric Solomon: Yeah, I mean, it’s critically important, because, you know, you kind of nailed it, which is: we as human beings are, for lack of a better word, blessed with the ability to speak and to have language. But we do that as a filter for what it is we’re thinking.
And, you know, the biggest problem that you have in a lot of writing, or a lot of communication, is that it’s just not clear what people are thinking, because they’re not very clear at communicating those things. So really, one tangible example: I sit on the board of this nonprofit organization called Experience Camps, and we help grieving children, children ages seven to 17 who have lost a caregiver, with these free one-week camps, and then programming across the year.
So in working with them on, you know, our own human operating system, we realized the heart of what we have to do is to show these kids that childhood carries on. You know, it’s really, really clear. Why do we exist? To show grieving children that their life isn’t over, but carries on. So that ties directly into what our end goal is, which is to imagine a world where every grieving child has a life full of hope and possibility.
That is almost an unachievable thing: every grieving child, and there are six-plus million in the US alone. But that’s what an end goal should look like, something that looks like a perfect world. So our approach to getting to that end goal is to give grieving children experiences that will change their lives.
And so that clarity of communication makes it very, very easy for anybody to explain who we are, why we are, what we do, and then subsequently why we should be funded. But without that clarity of thinking and that logic of the model, we don’t get there.
Helen Todd: Well, one thing that we’ve discussed before too is the importance of clarity, even for collective organizations, whether it’s a social group or community or an organizational structure, and how it’s important to get people behind commitments.
And I really appreciated how you talked about commitment. So can you, I guess share more about alignment to commitments?
Eric Solomon: Yeah, I mean, you know, it’s something that, again, people always think I’m being, like, pedantic or overly semantic about these things. But we always talk about, how can we get aligned here?
How can we get alignment? And I always say, as human beings, and especially in corporate leadership, alignment is about as rare as, like, a red shooting star out of the sky. You know, people may not even like each other on a personal level.
Sometimes you can’t even align on whether or not the sky is purple or blue. So how do we expect to really align on complicated things? I always say alignment, that’s great and impossible. But what it is possible to do is to commit: for people to say, I don’t agree, but I will go in this direction, because I understand why it’s important that we go in this direction.
So I’m urging people to stop using the language of alignment and start using the language of: what are we gonna commit to? And to truly commit to it. Not fake-commit to it. We’re not deep-fake committing here.
Helen Todd: I love that: deep-fake committing. I might reuse that one. So how do we apply the human operating system to the landscape that we find ourselves in with these AIs, where the space is moving so fast? How would you apply the human OS to what we’re experiencing right now?
Eric Solomon: Yeah, I mean, it’s something I’ve done a lot of thinking about. You know, again, it goes back to this idea that every system we create should have a purpose, a why; otherwise, what are we doing? And I’m not sure that anybody agrees on or is committed to a singular purpose for AI right now, and that’s a little disconcerting.
I mean, I’d like to hear anybody take a crack at what we think the purpose is. It seems to me the purpose is just to create systems that look so human that we can’t tell the difference between us and them, and I’m not sure that that’s the right purpose, to that end. So that would be the heart.
What is the heart of our system? I also don’t understand what our end goal is. I don’t know what we’re really trying to achieve with this, because right now, if the conversation is owned by Microsoft and Alphabet and Meta, it seems like the end goal is gonna be: how do we make as much money as possible off of this?
And I would argue that that’s absolutely the wrong end goal. And then I certainly can’t say that anybody understands the approach. The people that are working on these systems have no freaking idea how they work. So, you know, I think we have a lot to do just on those pieces of the operating system.
And I haven’t gotten to a list of beliefs and behaviors yet. So I mean, I think we foundationally don’t have an operating system for this. I’m not sure that we have an operating system for a lot of systems right now. But if we don’t figure out one for AI and include a lot of people in that conversation, we’re gonna end up hurtling down a path with no clear end goal and no clear purpose.
And I don’t know where we end up with that.
Helen Todd: Yeah, I know Ezra Klein has mentioned before, regarding, you know, the black box of these tools and how the creators don’t really fully understand their capabilities, that it’s not on the regulators to figure it out. It’s on the companies to figure it out and to be able to communicate that out more widely too.
And I think you had said something, it’s like, if you can do that for a rocket ship going into space, it’s complicated, but it’s not that complicated.
Eric Solomon: Yeah. But the issue here is, I think people genuinely have no idea. I mean, we don’t really understand deep learning.
We don’t understand the complexity of neural networks. Like, you know, I don’t know how many of the listeners have seen the movie Her, but there’s that scene at the end where she’s talking, this AI played by Scarlett Johansson (they should all sound like her, frankly).
But, you know, she’s talking to Joaquin Phoenix, who’s playing a really sad man who needs this fake human connection, and she reminds him that she’s having this conversation with, like, 6 million other people, or 6,000 other people, or whatever it is, at the same time.
We can’t comprehend that, but that’s what large language models do. That’s what deep learning is about: they can learn the basics of that stuff and far surpass anything that we can do in terms of our human ability to produce and understand language. So yeah, I mean, how can we possibly understand how they’re learning? We hardly understand how we learn.
Helen Todd: Yeah. And you mentioned, you know, we need a diverse group in the room that’s helping shape these discussions. And I know Greg Brockman, who’s the co-founder and president of OpenAI, the company behind ChatGPT, said on stage at South by Southwest, it’s like, we need all of humanity to participate in these conversations about how we want these tools.
I personally have not seen a path of how we’re doing that or going to get everyone’s feedback. But this is my little attempt through this podcast to be part of the conversation.
But who do you think is needed in the room, to have these conversations, outside of the Metas and the Microsofts of the world?
Eric Solomon: Yeah, I mean, did he mean by that that he wants to include everybody so long as they’re white and male and engineers? Cause otherwise, I’m not exactly sure what that could mean.
You know, so I was giving a talk the other day. I’m totally not trying to name-drop here, but it’s really relevant for this. I was giving a talk to the United Nations; they have this Innovation Day series. And I was asked a really similar question by somebody, you know, and I couldn’t see who they were.
She could have been anybody, and, you know, she asked that same question. And I really do believe that there needs to be a lot more space for the humanities, a lot more space for the arts, and a lot more space for the poets.
And it turns out that this woman wrote me and said that she’s the poet-in-residence at the UN. So I was like, awesome, I made a connection with the right person. So yeah, that was a really good moment.
But I think that’s who’s needed. We need to involve more people who have a diverse way of seeing the world, a diverse way of talking about the world, and a diverse goal for what it is they’re trying to accomplish in the world.
Helen Todd: So I know recently the heads of all the AI companies met with Biden, and I know that Tristan Harris is proposing kind of a nuclear deal, as that’s a precedent of the world coming together around a massive threat to humanity, and that we should figure out what that looks like for AI.
Does that sound like it’s around the right path and approach for how you would envision the world coming together around this? Or are there any other thoughts on what’s needed to get to that human OS application?
Eric Solomon: Yeah, I mean, again, you know, I’ll try to be as optimistic as humanly possible, but I feel like that was a bit of a political dance without a lot of veracity behind it.
Do I think that the companies that are already hurtling down a path of figuring out how to solve the AI problem are gonna stop and wait for legislation to figure out what the rules are around the things they’re already hurtling towards? No, that’s not gonna happen.
Politics and, you know, the government are not leading this conversation by any means. You know, I think, again going back to Ezra Klein, he spoke with one of the authors of it, one of the major folks within the Biden administration creating it, and it’s all theoretical, and very little is applicable to what’s actually happening.
So, you know, I don’t know. I mean, I think every bit of conversation around this helps, but I don’t think it stops the train that we’re already on. No.
Helen Todd: And coming together with, like, a United Nations or a global kind of, I know, Miranda, is that the right word, for how we want this to look: is that something that you would like to see, or is that more down the right direction?
Eric Solomon: Yeah, I mean, it would be great. You know, again, what’s nice about the UN is that they’ve got really clear goals, sustainability goals. You know, they’re really clear and really big and lofty. There’s something to strive for, so therefore you’ve got this end goal, or some amount of purpose to it.
So I think having a statement around how we would like to see machine and human interaction form, that would be useful. And who should be doing that work?
That’s a good question. Who should be involved in doing that work? That’s a good question too. But, you know, we need to start somewhere in terms of crafting what it is we’re hurtling towards, instead of just going towards it at profit at all costs.
Helen Todd: And figuring it out as we go. Because, you know, there are always good intentions, and if you hear the leaders of these companies speak, they do speak in the language of, we want this to benefit humanity.
But it feels like we’re kind of repeating the same thing as the early days of social media marketing, where they were hurtling towards these things without the intentionality or guardrails for the unintended consequences.
So it’s like, let’s not repeat history, and let’s get all aligned and committed in the right direction now.
Eric Solomon: Yeah, it’d be useful. And I mean, I think you’re right. It’s sort of: without a set intention for what it is we’re looking to accomplish, there are a lot of unintended things that we’re going to accomplish.
And you know, I’m not gonna sit here and predict what those unintended things are, but you can imagine an AI system that has a goal of making as much money on the stock market as possible. And it crashes the stock market.
You can imagine an AI system that has a goal of getting somebody famous a restaurant reservation, so it cancels the restaurant reservations of every other diner in the place.
Those are just small-scale things that you can imagine it doing. An AI system doesn’t care about how it achieves its end goal. It just achieves it. That’s why the world could turn into paperclips. So, you know, let’s be really, really careful about how we go about this.
Helen Todd: Yeah. Well, I actually spoke on a panel a couple years ago on how do you keep your brand human in 2030.
I think that was the title of it, and one of the things was: hire humans. And I think we are, especially with the writers’ strike happening right now. There’s a petition going around from an illustrator calling for all publications to only use human illustrators. So regardless of regulation, which is inherently gonna be behind, there are things that brands can be doing right now to reinforce their commitment to humans, to employing humans and using humans.
So, you know, would you encourage brands to do that? And does that offer you some hope for how we could move forward, or at least give the brand people who are listening one way to think about it?
Eric Solomon: Yeah, I mean, I think so. I mean, you know, the question’s gonna be, are people gonna be savvy enough to know, if it’s not disclosed, the difference between the output of a machine and a human?
I think we already know it’s really hard to tell, and there’s a lot of deception out in the world. So yeah, I think it’s a great intention, but what are the guardrails in place to ensure that that’s actually happening?
And the reason that’s so important, again, it’s not just an ethical problem, you know, and other people have mentioned this too, but I strongly believe the point of wanting to have a real writer or real artist do it is that there’s a lot of critical human thinking involved before pen is put to paper.
And if we’re trained instead on the critical thinking of how to get the right prompt out of Midjourney or ChatGPT to create the thing we want, I don’t think that’s a good use of the human mind. So I think we need to preserve our critical thinking ability. That’s something an AI system will never do.
Helen Todd: Yeah, I know. I mean, we’ve had a guest who does see interacting with the prompts and the in-painting and out-painting as a form of art. So I know that there are a lot of different sides to that perspective.
But one guest that we’re gonna have down the line is actually the head of Adobe’s Content Authenticity Initiative, which I think is really cool because it encrypts the authorship into the artwork, and that carries across all entities.
And I think if you pair a solution like that with maybe a regulatory system that says you have to disclose, maybe that’s one path forward: determining what’s human-made and giving people the credit that they need, regardless of how it lives online, with the regulatory piece being the disclosure.
I don’t know. That seems to make sense in my head. Does that make sense in your head?
Eric Solomon: Yeah, I mean, conceptually, for sure. And then, you know, it’s just a matter of time before AI figures out how to duplicate that so it looks like a human made it. But that’s fine. I mean, right, we’ll cross that bridge when we get there.
I’m waiting for the New York Museum of AI-Created Work to happen. That will happen for sure. We’re gonna have whole museums dedicated to non-human artwork, I guarantee you.
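For readers curious how authorship can be “encrypted into the artwork,” as Helen describes: Adobe’s Content Authenticity Initiative is built on the C2PA standard, which attaches cryptographically signed provenance metadata to a file. The sketch below illustrates only the general idea, hashing the work and signing an authorship claim; it uses an HMAC as a stand-in for real certificate-based signatures, and the manifest format and names are invented for illustration, not the actual C2PA format.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"artist-secret-key"  # stand-in; real systems use public-key certificates

def make_provenance_manifest(artwork: bytes, author: str, ai_assisted: bool) -> dict:
    """Bind an authorship claim to the exact bytes of a work."""
    claim = {
        "author": author,
        "ai_assisted": ai_assisted,
        "content_sha256": hashlib.sha256(artwork).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(artwork: bytes, manifest: dict) -> bool:
    """Check that the claim is intact and that it matches these exact bytes."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(artwork).hexdigest())

art = b"...image bytes..."
manifest = make_provenance_manifest(art, author="Jane Artist", ai_assisted=False)
assert verify(art, manifest)             # authentic and untampered
assert not verify(b"altered", manifest)  # any edit breaks the binding
```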
Helen Todd: When it comes to the human creativity element, because that has been a big part of your career, do you see any promise in augmenting and collaborating with AI at all, or any positive sides? Like, what would a best-case scenario be with humans plus machines, versus going down the negative route?
Eric Solomon: Yeah
Helen Todd: Like what is the promise that these things can offer, especially since you studied this and got a PhD in it?
Eric Solomon: There’s definitely a lot of promise, right?
And it’s up to us to harness that promise; it’s not up to the machine to harness that promise. It’s up to us. But, you know, it’s recognizing what it is machines are really, really good at. They’re really good at speed. They’re really good at repetition. They’re really good at sort of offloading some of the “low-level tasks” so that we can be freed up to do more strategic and creative work.
So if we’re applying machine learning and AI systems to things that are quite repetitive, or that are, you know, simply time-consuming but not brain-consuming, I think that’s a really good use of how that then augments us to do higher-level strategic thinking and higher-level creative work. So that’s what I’d like to see.
What I would not like to see are AI ad generators that just, you know, pump out an ad like a second- or third-year copywriter and art director team can do. I don’t view that there’s a lot of promise there. There’s promise in machines organizing things for us in a way that makes sense, in figuring out starting places, starting points for springboards of ideas.
But other than that, I think it’s a bit of a creative abuse of the platform. That’s just my point of view. The weirdness aside, it’s good for a chuckle, but I don’t see how it helps artists.
Helen Todd: Yeah, and I think that goes back to, yeah, the humans deciding how we wanna move forward, and whether we value humans and artists, which I feel like, societally, we need to value artists more than we currently do anyway for their cultural contributions.
So hire them, pay them, and disclose that you’re doing that. Really, that’s gonna help your brand move forward in this new, weird, “is it AI, is it human?” world: embrace your humans and pay them what they’re worth, living and sustainable wages.
Eric Solomon: Yeah, I mean, right. We’ve got a mutual friend who, you know, makes part of her living by visually drawing conversations in corporate rooms as they’re happening.
That is something that an AI could really do quite easily. Does that mean it’s gonna be better? And does that mean it’s not gonna be as valuable as somebody that’s able to extract the nuance of human meaning? No. So I think that our friend should be paid more.
Helen Todd: Yep. I agree. And Heather’s actually, I’m gonna do a shout out to Heather Willems
Eric Solomon: Yes
Helen Todd: because she is actually the person who introduced us and introduced me to Jon Tota, who’s also another guest.
So I’ve gotta eventually get Heather on the podcast, cuz she’s a dear friend of both of us and has, yeah, connected me with some really amazing people.
Eric Solomon: She’ll be smarter about creativity than I will be.
Helen Todd: Well, are there any closing thoughts, as we’re kind of ending the interview, that you wanna leave listeners and viewers with?
Eric Solomon: Yeah, I mean, I guess the biggest thought is, you know, where we kind of started out: a conversation around intentionality. And I would really love to urge people to approach how they use AI systems, whether it’s visual ones like Midjourney or DALL-E or, you know, linguistic-based ones like GPT-4.
Or what are we up to, eight now? No, it’s still four. But, you know, I would love for people to approach them with the intentionality of saying, how am I gonna use this to augment my creative or human skillset?
Not, how am I gonna offload this work that I have to do to a system whose veracity I can’t really count on? Or one that steers me away from doing thinking that might actually benefit me and make me flourish as a human.
So, approach these things with intentionality. You know, I would extend that to the way that we use social media as well, but that’s a different conversation for a different day.
Helen Todd: Yeah, well, I definitely agree. And my stance is that I want to live in a world where artists not only coexist with AI but thrive. And we can create that together if we really want to.
Eric Solomon: Amen.
Helen Todd: Eric, how can people get in touch with you? You teach, you speak, you consult. Let’s do one last plug for you before we sign off.
Eric Solomon: So yeah, the one way that people can get in touch, if they really want to learn more, is to look at my slapped-together site, which is at thehumanos.co. That’s just all one word.
So I’ve got some stuff there, and there’ll be some examples of talks that I’ve given. And then I’m in the early stages of writing a book around the human operating system, which will probably be called The Human Operating System, with some catchy subtitle.
So people can look for that whenever the publishing cycle allows. But the best way is to look at the site, and you can find my LinkedIn and my email there.
Helen Todd: Eric, I always so much enjoy all of our conversations. I’ll be sure to put all of the links that we discussed in the show notes so people can find you and your website. And yeah, thank you so much for all of your time and sharing your thoughts with us today. I really appreciate it.
Eric Solomon: Thanks so much for having me, Helen. I hope this was a good use of your time spent with intentionality.
Helen Todd: Well, you never know what ripples will turn into waves. So I’m excited to hear the reactions of viewers, and I’d actually love to hear your thoughts on our conversation today.
Like, what would an ideal world look like, and what intentionality do you wanna see, beyond just what Eric and I discussed? You know, this should be an open conversation, so let us know. With that, we’ll sign off and see you online.
Thank you for spending some time with us today. We’re just getting started and would love your support. Subscribe to Creativity Squared on your preferred podcast platform and leave a review. It really helps and I’d love to hear your feedback. What topics are you thinking about and want to dive into more?
I invite you to visit creativitysquared.com to let me know. And while you’re there, be sure to sign up for our free weekly newsletter so you can easily stay on top of all the latest news at the intersection of AI and creativity.
Because it’s so important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized nonprofit that supports over 100 arts organizations. Become a premium newsletter subscriber, or leave a tip on the website, to support this project and ArtsWave, and premium newsletter subscribers will receive NFTs of episode cover art and more extras to say thank you for helping bring my dream to life.
And a big, big thank you to everyone who’s offered their time, energy, encouragement, and support so far. I really appreciate it from the bottom of my heart.
This show is produced and made possible by the team at PLAY Audio Agency. Until next week, keep creating.
Theme: Just a dream, dream, AI, AI, AI