As A.I. continues to shape our world in profound ways, host Helen Todd sat down with Dr. Andy Cullison, founding executive director of the Cincinnati Ethics Center and research professor at the University of Cincinnati, for a thought-provoking conversation exploring the complex moral landscape surrounding artificial intelligence, the urgent need for robust ethics education, the challenges of replicating human moral reasoning in machines, the importance of proactive governance in navigating the future of A.I., and more.
Cullison brings a wealth of expertise to the discussion.
With over 15 peer-reviewed publications to his name, Andy is a leading scholar on questions of how we can have moral knowledge. Before his current roles, he spent over seven years as Director of the Janet Prindle Institute for Ethics at DePauw University, one of the largest collegiate ethics institutes in the country.
In addition to his academic work, Andy has a passion for helping people of all ages develop crucial moral reasoning skills. He conducts workshops for K-12 students, teams, and organizations like the FBI, applying philosophical tools to advance diversity, equity, and inclusion.
Where do we draw the line with A.I. from an ethical lens — and how can you sharpen your thinking about the thorny moral challenges surrounding A.I. development and implementation?
To discover Cullison’s thoughts on these timely topics, listen to the episode or continue reading below.
As A.I. becomes increasingly ubiquitous, the need for ethics education has never been more pressing. To this point, Cullison emphasizes the importance of training practitioners, designers, and users to identify and address ethical issues in A.I. development and deployment.
“We need people to be real good at identifying what those rules should or shouldn’t be,” Cullison explains. “And that’s where moral reasoning becomes a kind of critical component of all this.”
By equipping individuals with the skills to recognize moral dilemmas and weigh competing values, Cullison elaborates, we can foster a more ethically conscious approach to A.I. innovation.
He then breaks down the key components of moral reasoning, which include identifying moral issues, generating possible answers, evaluating reasons and arguments, and engaging in civil dialogue to reach a resolution.
Another intriguing aspect of Helen’s conversation with Dr. Cullison centers on his experiments with ChatGPT’s moral reasoning capabilities.
While the A.I. was able to provide a realistic counterexample to the definition of knowledge, Cullison explains that he remains skeptical about the ability of current language models to fully grasp the nuances of human moral decision-making. “There is,” Cullison insists, “a human component to that.”
He highlights the importance of human empathy and understanding in navigating complex moral situations, acknowledging the limitations of A.I. systems that rely primarily on pattern recognition and data analysis.
Throughout the episode, Andy shares practical tips and exercises for navigating everyday ethical dilemmas. One key strategy involves identifying moral issues by paying attention to what angers or upsets people, as these emotions often signal underlying values at stake.
By peeling back the layers of frustration and engaging in thoughtful dialogue, we can better understand the moral landscape and work towards principled solutions.
These moral reasoning skills, Cullison argues, are not merely academic exercises but essential tools for navigating real-world ethical decisions, in A.I. development and in everyday life.
While the capabilities of A.I. continue to evolve at a breakneck pace, Cullison reminds listeners that the human element remains indispensable to moral reasoning.
To demonstrate this, he introduces the concept of “draw-the-line puzzles,” where clear principles for decision-making are needed to navigate complex situations. He provides the example of teasing and making fun of people, highlighting the intuitive sense of a line to be drawn between acceptable and unacceptable behavior.
“We would never be so prudish as to say you can never, ever, ever in your life make fun of people,” Cullison explains, “and we would never be so cruel to say that all jokes are fair game at all times, right?”
As he points out, people have an intuitive sense that there’s a line they don’t want to cross. By identifying obvious instances on either side of the line and examining the patterns that emerge, we can begin to articulate principled positions on thorny ethical issues.
When it comes to subscribing to particular moral frameworks, Cullison advocates for a balanced approach that considers the consequences, rights, and duties involved in a given situation. Rather than adhering dogmatically to a single theory, he likens moral reasoning to piecing together a puzzle, with each framework providing a piece of evidence for what is right or wrong.
He compares it to the television drama ‘House,’ in which Hugh Laurie’s lead character, a doctor, solves medical mysteries. “We don’t realize the degree to which doctors are very uncertain about some of their diagnoses,” Cullison clarifies. “They’re just doing the same thing… There’s this symptom in place that points to these things… this other thing that narrows it down to these possibilities.”
By weighing the various moral considerations at play and engaging in ongoing evaluation, Cullison suggests, we can navigate the complexities of ethical decision-making with greater nuance and humility.
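For readers who want to see the “weights on a scale” idea in concrete terms, here is a minimal illustrative sketch in Python. It is not Cullison’s method or any tool discussed in the episode; the framework names, numeric weights, and threshold are hypothetical placeholders, meant only to show the structure of treating each framework’s verdict as one piece of evidence rather than the final word.

```python
# Illustrative sketch only: a toy "weights on a scale" tally inspired by the
# puzzle-piece analogy above. The frameworks, weights, and verdicts are
# hypothetical placeholders, not a real ethical decision procedure.

from dataclasses import dataclass
from typing import List

@dataclass
class Consideration:
    framework: str   # e.g., "consequences", "rights", "duties of care"
    verdict: int     # +1 leans "permissible", -1 leans "impermissible"
    weight: float    # how much evidential weight this framework gets here

def weigh(considerations: List[Consideration]) -> str:
    # Sum the weighted verdicts; the sign shows which way the evidence leans,
    # and a small magnitude signals uncertainty rather than an answer.
    score = sum(c.verdict * c.weight for c in considerations)
    if abs(score) < 0.5:
        return f"uncertain (score={score:+.2f}); more dialogue needed"
    lean = "leans permissible" if score > 0 else "leans impermissible"
    return f"{lean} (score={score:+.2f})"

# Hypothetical example: weighing a single policy question.
case = [
    Consideration("consequences", +1, 0.6),
    Consideration("rights", -1, 0.8),
    Consideration("implicit agreements", -1, 0.4),
]
print(weigh(case))  # e.g., "leans impermissible (score=-0.60)"
```

Of course, the numbers are precisely the part Cullison argues stays human: deciding how heavily a given harm, right, or duty should weigh in a particular situation is a judgment no tally can make for you.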
While acknowledging the existence of some universal moral principles, such as the wrongness of murder, Cullison cautions against excessive moral certainty in the face of difficult cases. He suggests that society should cultivate greater intellectual humility and tolerance for moral disagreements, while still being willing to draw lines in the sand on certain fundamental issues.
By recognizing the limits of our moral knowledge and engaging in respectful dialogue, we can work towards a more nuanced understanding of the ethical challenges posed by A.I. and other emerging technologies.
As the conversation turns to the potential dangers of A.I., Helen and Cullison discuss the double-edged nature of this powerful technology. While A.I. holds immense promise for solving complex problems and advancing human knowledge, it also carries the risk of causing unintended harm if not developed and deployed responsibly.
With this great power, Cullison observes, comes an equally great responsibility to ensure that A.I. systems align with our deepest values and aspirations.
This sobering reality underscores the urgent need for proactive governance and diverse perspectives in shaping the future of A.I. By bringing together experts from various fields, including ethics, philosophy, computer science, and beyond, Helen and Cullison agree that we can work towards developing A.I. systems that align with our values and mitigate potential risks.
Throughout the episode, Cullison returns to the theme of moral reasoning skills as the bedrock upon which a more ethically conscious approach to A.I. must be built. He emphasizes that these capacities need to be developed in society at large, from K-12 students to corporate leaders.
The Cincinnati Ethics Center offers a range of programs and workshops aimed at cultivating these essential skills, from a high school ethics bowl to leadership development initiatives — and even a program called Ethics & Dragons, which teaches kids moral reasoning through playing Dungeons & Dragons.
Through initiatives like these, Cullison and his colleagues are working to equip the next generation with the tools they will need to navigate the moral complexities of an A.I.-shaped future. By engaging diverse communities in ongoing dialogue and reflection around these vital issues, he hopes to build a shared foundation of ethical principles and practices to guide us forward.
With these programs available in five Cincinnati-area libraries, Cullison encourages anyone interested in participating or getting involved with the Ethics Center to reach out via email at info@cincyethics.org. The Ethics Center also offers workshops on workplace moral reasoning and ethical leadership development, so he encourages organizations and companies to reach out at the same address.
As the conversation draws to a close, Cullison leaves listeners with a powerful call to action: to actively participate in shaping the ethical landscape of A.I. By developing our moral reasoning skills, engaging in respectful dialogue, and advocating for responsible innovation, we can all play a role in building a future that aligns with our deepest values and aspirations.
In an era of breakneck technological change and profound moral uncertainty, the insights and provocations offered by Dr. Andrew Cullison serve as a powerful reminder of the vital importance of ethics education and proactive governance in the age of A.I.
Thank you, Andrew, for being our guest on Creativity Squared.
This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.
Creativity Squared is brought to you by Sociality Squared, a social media agency that understands the magic of bringing people together around what they value and love: http://socialitysquared.com.
Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.
Join Creativity Squared’s free weekly newsletter and become a premium supporter here.
TRANSCRIPT
Andy: We need people to be real good at identifying what those rules should or shouldn’t be. And that’s where moral reasoning becomes a kind of critical component of all this. We need people to be trained up to, to spot those ethical issues that we don’t even know are on the horizon because AI is such a new technology, right?
Andy: And we need the practitioners to be trained up on this, right? Like. The ethicists aren’t going to be in the lab or the basement or wherever it is they do their AI development.
Helen: Dr. Andrew Cullison is the founding executive director of the Cincinnati Ethics Center and a research professor at the University of Cincinnati who specializes in moral reasoning, ethics education, and leadership development.
Helen: With over 15 peer reviewed publications to his name, Andy is a leading scholar on questions of how we can have moral knowledge. Prior to his current role, Andy spent over seven years as director of the Janet Prindle Institute for Ethics at DePauw University, one of the largest collegiate ethics institutes in the country.
Helen: I first met Andy at the 2.0 grand opening of the University of Cincinnati’s Digital Futures building. You’re in for a treat to hear Andy’s insights that offer a philosopher’s perspective on some of the most pressing issues of our time. In our conversation, Andy shares the importance of teaching ethics in an AI world, what he’s learned from experimenting with ChatGPT’s moral reasoning abilities, practical tips and moral framework exercises for navigating everyday ethical dilemmas, and more. He emphasizes the urgent need for practitioners developing AI to receive robust ethics training so they can spot potential issues early on.
Helen: Where do we draw the line with AI from an ethical lens? Listen in to sharpen your own thinking about the thorny moral challenges surrounding AI development and implementation. Enjoy.
Helen: Welcome to Creativity Squared. Discover how creatives are collaborating with artificial intelligence in your inbox, on YouTube, and on your preferred podcast platform. Hi, I’m Helen Todd, your host, and I’m so excited to have you join the weekly conversations I’m having with amazing pioneers in the space.
Helen: The intention of these conversations is to ignite our collective imagination at the intersection of AI and creativity to envision a world where artists thrive.
Helen: Andy, welcome to Creativity Squared. It is so great to have you on the show.
Andy: I’m excited to be here. Thanks for having me.
Helen: So everyone who’s meeting Andy for the first time, he’s the founding executive director of the Cincinnati Ethics Center. We met at the UC Digital Futures building grand 2.0 celebration.
Helen: And I’m so excited because we always talk about ethical AI and we have someone who is an actual expert in ethics and morality to talk about it on the show. But for those who are meeting you for the first time, can you tell us who you are, what you do, and your origin story?
Andy: Yeah. So, Andy Cullison as you said, founding executive director of the Cincinnati Ethics Center.
Andy: I was born in Cincinnati, grew up right across the river in Northern Kentucky. Went to college in Indiana, did my PhD in philosophy in upstate New York, [I] have been a philosophy professor for 20 plus years. But then I crossed over into the dark side of administration and became the executive director of an ethics institute back in Indiana for about eight years and then got hired two years ago to direct the Cincinnati Ethics Center.
Andy: But my background is in philosophy, and the technical term, or the jargony term, is epistemology. But what I really am interested in is moral or ethical reasoning, thinking about what counts as evidence for our moral or ethical beliefs.
Andy: And then I’m also very interested in the connections between being good at that stuff and effective leadership. So I also have some interest in ethics and philosophy of leadership.
Helen: And you’ve gotten to do some consulting for, like, the FBI and some interesting companies, correct?
Andy: Yeah. So, there’s a lot of data that when you are good at moral reasoning, which is one of the main learning outcomes of ethics education, you tend to be good at a lot of other things, including a lot of effective leadership traits and behaviors.
Andy: And so, where I was before, I was charged with developing a leadership development consulting service that emphasized ethics education and moral reasoning development, with the idea that if you focused on that, you would create better, more effective leaders. And a local branch of the FBI was working with us for a time many years ago, but that evolved into working with the National Academy for a while, where I was before.
Andy: Yeah, it was a lot of fun.
Helen: I remember when we were talking, goodness, I think it was like a month or two ago now, that you had a joke of like, when you’re sitting on an airplane, the quickest way to stop a conversation is to answer “What do you do?” with “I’m a philosopher.”
Andy: Yeah, there is. Philosophers at philosophy conferences will often talk about the airplane question, right?
Andy: When you get asked that and like, what do you do? And some people will just say, like, oh, I’m a mathematician. Okay. Boom. Ends the conversation. But some people will say, oh, I’m a philosopher, I teach philosophy. And then people will ask you things like, what’s your philosophy? And it’s like, well, philosophy doesn’t work that way.
Andy: You know, philosophy is about like… what philosophers do is we pick some questions that we’re really interested in and then we just do a deep dive into that question. And so it’s a funny, fun, awkward conversation that often happens on airplanes.
Helen: Well, you’ve never sat beside me on an airplane cause I would have so many questions.
Helen: So today is kind of a sitting beside each other on an airplane and everyone getting to hear our conversation of me getting to pick your brain about all things.
Andy: Right, this will be the not awkward airplane conversation.
Helen: Well, you’re also building a course on AI and ethics for UC, which I’m so excited about.
Helen: So, on the airplane question, what [are] some of the biggest questions that you have right now when it comes to AI with morals and ethics or through that morals and ethics lens?
Andy: Well, one of the fun things about thinking about ethics of AI is [that] we know what some of the ethical questions are and what some of the moral challenges are.
Andy: We can anticipate those and I’ll say something about that. But because it’s so new, it’s likely we’re going to encounter challenges that we haven’t even anticipated yet. Right? And so it’s fun to develop a course in thinking about that because it could very well be that you develop this course and then something happens in real time with like Chat GPT and you’re like oh my gosh, there’s this whole other area of worry or concern that we need to be talking about, and so it’s going to be fun that we can kind of pick topics on the fly.
Andy: But in general, I kind of would like to divide a course into at least four general categories. There may be more, but basically there’s going to be ethical issues about the development of AI, right? So questions about intellectual property, right? I mean, these large language learning models are, you know, drawing in a bunch of copyrighted information and then building on top of that, so there’s ethical issues there. There’s ethical issues about not destroying the world or, you know, developing an AI – how do we develop an AI that doesn’t do that, and where do we draw the line on what sorts of things we enable our AI to do.
Andy: So there’s design or development ethical issues, then there’s ethical issues related to the use of AI in higher ed. One of the big ones is to what degree should teachers and professors use artificial intelligence in their research, in their grading. There have been professors who have been using AI to grade essays.
Andy: Right. We often talk a lot – you hear a lot of professors talk about students doing bad things with AI, but we don’t talk about like how professors could also do bad things with AI. And of course, there’s plagiarism, academic integrity kinds of things.
Andy: There’s questions about how we even structure education in an AI world, like what’s that going to look like? How is assessment going to look? Then there are sort of broad societal questions, right? Like what are the likely impacts of AI? How is that going to be good for humanity? Potentially bad for humanity. And you sort of get away from traditional ethics and more into political philosophy.
Andy: You start asking questions like, what kinds of regulations should there be? What kinds of restrictions should there be? You know, what does society look like for better or worse? How do we try to achieve a society that doesn’t look worse in light of AI? And then the fourth category: I like to challenge my students to try and think of where there might be what I call moral marginalization.
Andy: And what I mean by moral marginalization is, there are ethical issues where everyone automatically thinks this is the right thing to do. And it’s obviously the right thing to do. And then what you realize is, if you assume that’s obviously the right thing to do, you are marginalizing a group of people in some way that you might not have realized.
Andy: So a real good example is when we try to think about designing packaging that is environmentally sustainable. There’s a really good example when Whole Foods started to pre-peel oranges and then put them in plastic packaging. And of course everyone was up in arms. They’re like, “Oh my goodness, we’re wasting plastic. Who’s too lazy to peel their own orange?”
Andy: But then you have members of the disability community being like, “I can finally eat oranges by myself, because all I have to do is open this thing and then eat the oranges, but I don’t have the physical capability [to peel them].” So that’s an example of moral marginalization where everyone is like, this is obviously what we should be doing and you’re a monster if you do this other thing.
Andy: And then you realize, actually, some people at the margins might possibly benefit. And I’m still trying to think: is there something about our visceral reaction against AI that might fit this bill? And one thing I can think of is, is there something somewhat ableist about the idea that we just shouldn’t use it at all? Like, [we] just shouldn’t use it at all because, you know, some people are privileged to have highly functioning cognitive and linguistic abilities and writing is effortless for them, right?
Andy: They just can do it without even thinking about it. But for some people, it’s a real significant challenge and it can be a barrier, right? And if there’s something that can help people do the thing that they’re not good at doing through no fault of their own, just because their cognitive equipment functions differently, you know, that should give us some pause having this visceral reaction thinking, under no circumstances should we ever use this to help us in our writing, something like that.
Helen: Yeah, and I love that AI is actually [able to] be a very accessible medium. So we have our Cincy AI meetups that take place at the UC Digital Futures building, and one member in the community is actually building a tool. I forget the name of it offhand, but it uses AI vision. So for people with different vision abilities who can’t see, it’s taking the camera and helping decipher what they can’t see. And then you also have like voice modalities. So there’s a lot of accessibility benefits to AI.
Andy: That’s really cool. I didn’t even know that project was happening. I like that.
Helen: Yeah. Oh, I’m hoping he’ll be at our next Cincy AI meetup and share more. So you’ll have to pop down for that one.
Helen: So when it comes to defining ethical AI, maybe we should start there: what do people get wrong when they think about ethical AI and just defining it?
Andy: That’s, you know, that’s a good question. It might be good to just sort of define ethics first and then define ethical AI.
Andy: When we want to be thinking about the ethics of AI, what is it we want to be doing? So, ethics is horribly ambiguous in the English language. So sometimes when people use the word ethics, they mean my own personal values, like according to my ethics, this would be wrong. And if you think that that’s what ethics is, and you talk about ethics education around AI, people are going to think, “What? You’re just trying to give me your values around AI. Well, I’ve got my values. I don’t need you to give them to me.”
Andy: And in a collegiate setting, that’s not what we mean by ethics. Another thing people mean by ethics is something like compliance, right? If you, in a corporate setting, when the ethics people come in, it’s usually the lawyers who come in and tell you, you can’t do that or else we’re going to get sued, right?
Andy: So it’s a – you’re thinking more about staying within the confines of the law, and that’s actually an important piece to the ethics of AI. We need to think, be thinking about what the compliance rules need to be [and] what the regulations need to be. That is a very important part of the ethics of AI, but there’s another thing that we mean by ethics that I also think is crucially important and crucially important when we’re thinking about the ethics of AI.
Andy: And that’s thinking about ethics as more like a set of skills or a field of study where you have a particular set of learning outcomes, which is what I like to call moral reasoning skills. And that’s what we focus on here at the Cincinnati Ethics Center. I like to pivot away from the word ethics and talk more about moral reasoning development or moral skill development.
Andy: And so we’re talking about training people to be able to identify that something is a moral issue. Once they’ve identified that moral issue or ethical issue, I use those interchangeably, training them to be able to identify all the possible answers to those moral issues that you’ve identified or those moral questions, and then be able to identify all the reasons and arguments and viewpoints and perspectives that people might have on that moral issue.
Andy: Once you have that, the third thing you want is the skill to weigh these competing values and decide what to do or think about that issue. And then finally, you want people to be trained up to be able to talk about that. How do you engage in civil, thoughtful dialogue, particularly with people whom you might disagree with, to try and reach some resolution around some ethical issue and maybe even find some common moral ground, based on the work you did exercising those first three skills?
Andy: So those are at least four of the core skills. And we need to be thinking about what the rules for AI should be for us as a society. We need people to be real good at identifying what those rules should or shouldn’t be and that’s where moral reasoning becomes a kind of critical component of all this. We need people to be trained up to spot those ethical issues that we don’t even know are on the horizon because AI is such a new technology, right?
Andy: And we need the practitioners to be trained up on this, right? Like the ethicists aren’t going to be in the lab or the basement or wherever it is they do their AI development. Wherever that is, it’s going to be, you know, it’s going to be the programmers and the engineers who are doing that work and they’re the ones who need to be in a position to be like, “Whoa, I had no idea you’d be able to do this and they need to be the ones to spot it as a moral issue, right?”
Andy: There’s a person who used to do hostage negotiations and there’s a really good article in The Atlantic. They now coach CEOs and executives on how to do business negotiations and something that stood out about what they said in that article is, “the worst negotiation to be in is the one you don’t know is a negotiation,” right, because you’re already, you know, set up for failure. And I think the same is true about moral dilemmas and moral issues. The worst moral dilemma to be at the center of is the one that you don’t even recognize as a moral dilemma, because that’s when real problems happen and so we really need engineers and practitioners to be trained up to spot those dilemmas that the ethicists aren’t going to be in the room [for] and the ship will have sailed if they’re not the ones to be able to recognize these things.
Helen: And I would argue, too, that we just need more ethicists in the conversation in general as well.
Andy: I would like that very much.
Helen: Oh, so let’s spend a little bit of this conversation talking about some of the moral reasoning and maybe some takeaways that some of the listeners and viewers can get. Like, can you walk us through some practical tips of how to approach or even identify what’s a moral issue or not a moral issue?
Andy: Yeah. We’ll do a few things. So just spotting that something might be a moral issue: when I do work with high school students, for example, or even my college students, or even some of the organizations we work with, I tend to not ask them what the ethical issues are [that] they care about. Because again, that word is ambiguous and people don’t really know what to say sometimes.
Andy: So I ask them what their pet peeves are, or what angers them or what upsets them. Because anger, being annoyed, being upset, it’s sort of – there’s a foundational moral component to that. When you’re angry about something it’s because the world is not a way that you want it to be and you think it should be that way. So you can zero in on what people’s values are pretty quickly by figuring out what upsets them, and so just being attuned to when something might upset someone, right?
Andy: That right there should have you pause and try and figure out, okay, why are they upset about this thing? What is it about that? And you can usually sort of peel back the onion layer and figure out what it is they value, right? And so, anger or frustration, that’s a good, that’s a good signal to pause and have a conversation with someone and be like, okay, something’s upsetting you about this.
Andy: Let’s peel back. What’s, what is at the core of this? And usually what’s at the core, there’s going to be some value that they have that you either don’t share or that you were not even aware that it was in place. So that’s one very simple thing. Let anger be a signal that there’s a moral issue at stake and don’t just be dismissive of people’s anger or frustration.
Andy: So that’s one. Another good thing to do, I think: a lot of ethical questions are what I call draw-the-line puzzles. So sort of, where do you draw the line? If you’ve ever been in a situation where you go to the boss and you’re like, “I need the thing.” And the boss is like, “Well, if I give you the thing, I have to give everybody the thing,” so no one gets the thing or everyone gets the thing, and then they cut budget from somewhere else, right?
Andy: But they’re either, it’s all, either everyone gets the thing, or no one gets the thing. And if you’ve ever been in that situation, or if your viewers or listeners have ever been in that situation, you’re probably really frustrated, because you know there’s got to be a principled way to draw the line, where I get the thing, and there’s a good reason why some other people don’t get the thing, right?
Andy: That’s a draw-the-line puzzle. And a lot of AI questions are going to be like that. Like, where do we draw the line on, you know, using other people’s property to develop this? Where do we draw the line on permissible use cases? And with draw-the-line puzzles, there are some simple things you can do to try and make headway.
Andy: You tend to start with obvious instances where it’s okay, obvious instances where it’s not okay, and then you can start to spot patterns. Do you want me to give you a concrete example of how you might use this in practice on something not related?
Helen: Oh, yeah, I would love that. Please. Yeah.
Andy: One of the ways I like to introduce this is on the ethics of teasing people and making fun of people because we all have strong views that like, sometimes it’s okay, sometimes it’s not okay. And we would never be so prudish as to say you can never ever, ever in your life make fun of people.
Andy: And we would never be so cruel to say that all jokes are fair game at all times, right? So that’s a good one where people have an intuitive sense that there’s a line to be drawn. So when you ask people, when I do, and I do workshops on this, I’ve actually done workshops with the FBI on this. When you ask people, what are some obvious cases of it being okay?
Andy: They almost always list specific examples of them and their family, or them and their friends, right? It’s in close, in the context of close relationships. They also tend to list cases where someone is screwing up in some way, and it’s okay to call them out on it, right? Like they’re always late to practice and it’s their fault, so you call them late Larry or something like that, right?
Andy: So those are two cases. And then when you ask them to give obviously-not-okay instances, they tend to give specific examples of what I call immutable or unchangeable traits, right? So race, gender, sexual orientation, ability status, and also things where there’s nothing wrong with being that way.
Andy: It wouldn’t be, it wouldn’t be appropriate to call someone out for having that trait. And so you’ve got these specific concrete examples that everyone can agree on, then you start to spot patterns and then you can start to generate a principle that you can use to articulate your position on where you draw the line.
Andy: So what you can say is, well, in general, when it’s in the context of, you know, teasing family and friends. Or when you’re calling someone out in a way that you think is appropriate and you’re not targeting immutable traits in some way, shape or form where there’s nothing wrong with being that way, then that gives you pretty good reason to think it’s probably okay, but if it’s outside any of that it should give you pause and you at least need to think about it more before you engage in it.
Andy: So that’s an example of how you can quickly figure out for yourself where you might draw the line. That’s one example. Another thing that is helpful is just to educate yourself about all the different things that people might think are morally relevant in a situation. Some people are very consequentialist and utilitarian in their thinking.
Andy: They think the only thing that matters is consequences, right? But most of us think that sometimes the consequences might go really well, but it would still be wrong to do, because there are other important competing values in place.
Andy: Like, so another thing that people tend to think are important are certain sets of fundamental rights. So educating yourself about what people think are the fundamental rights and which ones you think are worth accommodating or at least trying to compromise on and which ones you think are maybe not as important, but you sort of got to start to think about how do you rank and prioritize these kinds of rights?
Andy: Other things that people think are morally important. There are philosophers and ethicists who think that one of the things we ought to do is, this is like a care ethics tradition, we ought to prioritize duties of care. That’s meeting the needs of people who can’t easily meet their own needs.
Andy: And this is how we justify things like, you know, the Americans with Disabilities Act and things of that nature, right? There are going to be people who say, “Ooh, we could get more good for more people if we spent this money [that’s] being spent on accessibility for other reasons.” Like, yeah, if you’re a pure consequentialist, that might make sense.
Andy: But if you think duties of care are important, right, prioritizing the needs of people who can’t easily meet their own needs, then that gives you some additional reasons to think even if we could get slightly better consequences using the money this way, you know, we ought to invest in meeting the needs. Another thing that people think are important are sort of social agreements that might not be obvious to you that are in play. There are all sorts of what I like to call implicit or hypothetical agreements that no one ever says out loud, but we tend to think we can hold people to account for.
Andy: So, an example I like to use is… if I were to walk into, let’s say, Helen, you are a server at a restaurant and I walk into the restaurant and you say, “Hello, would you like to sit down?” And I’m like, “I would love to sit down.” And then you say, “Would you like something to drink?”
Andy: And I was like, “I would love something to drink. I’m really thirsty.” And you say, “Well, would you like a burger to go with your drink?” And I was like “I’d love, you know, a burger to go with my drink.” Keep in mind, you haven’t even handed me a menu yet or anything. You’re just, you know, casually conversating, and you hand me all this stuff and I get up to leave and you’re like, “Whoa, you haven’t paid.”
Andy: And you, I think, could rightfully be like, well, no. But like, if you grew up in America and you’re in your forties, you’ve got to know what the rules of the game are. No one has to, we don’t have to sign a contract for you to pay me for a burger when you walk into a restaurant. That’s an example of an implicit agreement.
Andy: And there’s a lot of things like that, that people sort of justifiably think are in play. So like this, one of my pet peeves is when you call someone out for something and they say, “Well, I never agreed to that.” They strike me as that person in the restaurant pretending that they don’t know that you’re supposed to pay for food when you walk into a restaurant.
Andy: So, that’s another thing that’s, I think, worth keeping in mind. So there’s all these different frameworks or perspectives that I think are worth taking time to educate yourself about, and then, when you’re thinking about ethics of AI, there’s questions you can ask yourself that might help you spot ethical issues that you might not have picked up on before, or come up with reasons for a particular position on why you think AI ought to go in a certain direction, by drawing on these kinds of frameworks or these ethical theories.
Helen: Thank you for sharing that. When I lived in New York City, a friend of mine, Spencer Greenberg, who has a great podcast called Clearer Thinking, would throw these parties that were kind of social experiments. And one that he threw was actually how to constructively argue with someone who disagrees with you.
Helen: And it was really interesting. We filled out a questionnaire of things like, do you support abortion? Yes or no. And then you’d get paired with someone who disagreed with you and then you’d go through the framework that he gave and it was repeating back to the person their stance to make sure that you understood it.
Helen: And then you’d go through an exchange to see if it was a values difference or an information difference. If your information was incorrect or not. And to get to a conclusion of why do we disagree? Is it values or different information or incorrect information? And it was very enlightening. I know I walked away and still think about that.
Helen: But you mentioned that you also work with, you know, in your course or in your leadership training, how to argue with people who disagree with you. Is there something else that you would add to that? Because outside of AI there are so many things in the world happening right now where this could be extremely relevant, too.
Helen: So I’d be curious, what you would add to that conversation?
Andy: Absolutely. In fact, there’s a method that I use in some of these workshops (again, I’ve done it with the FBI, but I also do it with K-12 students), and it’s a way to engage in dialogue with people where you don’t even have to figure out what they think.
Andy: And so I call this sort of “third personing” the conversation. So, what you do is you have everybody write down on a piece of paper a couple of reasons, and I say reasons that someone might think, that someone might have. And you don’t even have to think it’s a good reason. You just know it’s a reason that some people might have for thinking the answer is yes, it’s wrong. And then you write a couple of reasons down for thinking that someone might think the answer is no, you know, it’s okay or something like that. And then you just go around and you share and you do it in this very third person way. What I like about this, and I also encourage this when you want to do moral dialogue in a classroom, right?
Andy: The goal of doing ethics in the classroom or in the boardroom is to get all those reasons on the table and then people can go away and mull over it for a while and sort of think about what they think about, you know, how they would prioritize their values, but to have an opportunity for everybody to just collaboratively get all those reasons out on the table.
Andy: The nice thing about this is if you are the conservative in the room surrounded by liberal colleagues, you might not want to say, well, I think this, right. And you’re going to, you’re going to feel silenced and it’ll go the other way too. If you’re the liberal in the boardroom, surrounded by conservative colleagues, you might not want to out yourself as the one who has the liberal take on that, right?
Andy: And you might feel silenced. But if everyone’s given an opportunity to sort of distance themselves from those reasons, then your liberal reasons get out there or your conservative reasons get out there and you can even, and if people are like, oh, gosh, why would anyone think this is true? Let’s say it’s the conservative position and you’re the conservative and someone’s like, why would anyone think that?
Andy: You could say, well, look actually people do have some good reasons for why they think that. And here are some of those things. For all they know, you’re just the thoughtful liberal who has thought more carefully about the conservative position. And I can flip it too, right? Like if it’s the, if it’s some liberally type position and everyone’s like, why would anyone think that?
Andy: And you’re the liberal, you could say, well, you know, actually there are these other good reasons for all they know, you’re the thoughtful conservative on this one who’s thought more about [it]. So it’s an easy way to get your voice and your views on the table. And in a way that feels a little bit more safe, there’s a little bit more psychological safety there.
Andy: Eventually, you do want a team to get to the point where we actually know what everyone’s values are and where they’re coming from, but I think, as an initial team building kind of activity, starting out by doing it in this third person way gets people a little bit more comfortable, and it gets those reasons on the table. Reasons that people might not have heard before and then let them go off and decide what to think.
Andy: But I think the main goal of dialogue is let’s make sure everyone understands the full breadth of the moral landscape, they can go make their own decisions, and that’s an important part of dialogue that I think people miss. Most people think dialogue is about winning an argument or winning a debate or convincing everybody that your side is right, and there’s more to dialogue, and there’s more we can benefit from dialogue, and that’s a good way to realize those benefits.
Helen: Yeah, I love that. So thank you for sharing and hopefully it will inspire people to try these exercises and of course maybe hire you if they need help at the workplace. When we spoke before this interview you mentioned about six different moral frameworks and that you don’t subscribe to one specifically.
Helen: Can you kind of walk us through like your approach to moral reasoning with the frameworks that do exist?
Andy: Yeah. So when I talk about frameworks or theories, those are rooted in a philosophical tradition that goes back to people who think that’s the ultimate framework. That’s the only one that matters.
Andy: So I’ll just as an example, when I talk about, we need to be thinking about the consequences and how much harm is caused, how much pleasure is caused. You know, there’s a philosophical theory called utilitarianism that thinks that’s the only thing that matters, right? That ultimately all right and wrong action can be explained in terms of the consequences.
Andy: And a lot of the history of philosophy is a bunch of people being like, no, that’s not the only thing that matters. There are these other things like rights, for example, and then some people think all right and wrong action can be explained in terms of violating or not violating rights, right? That’s sort of their idea.
Andy: And so I don’t know which of those philosophers is right or correct, right? Like I’m not, I wouldn’t call myself a consequentialist. I wouldn’t call myself a pure rights theorist. I just, I don’t know, right? I don’t know which of those I would say is true. And maybe they’re all wrong. Maybe they’re all false, right?
Andy: I don’t know that either. And so there’s a kind of frustrating thing that like when I work with students, they’re like, well, if I don’t know if the consequentialists are right, like how do I do ethics? Or if I don’t know, like, I got to know which of these guys or girls is right in order to make headway.
Andy: And I’m like, I don’t think you do. Because the way they arrived at those theories, the way like John Stuart Mill decided he was going to be utilitarian. He said, “Well, I look at all the examples of good action, and lo and behold, they tend to have the best consequences, and I look at all the bad actions, and lo and behold, they tend to not have the best consequences.”
Andy: So maybe that’s what, maybe that’s what’s fundamental, right? He spotted a pattern, and then he just went extreme with it and that’s the way all the other theories worked. And so what I tell my students is, you know, even if you don’t think consequentialism is true, given that a really smart person did an analysis of looking at all the right actions and all the wrong actions and recognizing this pattern, you don’t have to think he’s right, that that’s all there is to morality, but if something doesn’t have the best consequences you should treat that as a little bit of evidence that it’s wrong, right?
Andy: And if it has really good consequences, you should treat it as a little bit of evidence that it’s the right thing to do, and I think of it like weights on a scale, right? So does it have really good consequences? All right, a weight goes on this side. Does it violate someone’s rights? Okay, then a weight goes on this side, right? And it’s not as clear that it’s wrong. How fundamental do we think that right is, right?
Andy: Maybe you get another weight on that side if it’s a really fundamental right. And you sort of can go through all these different things and what you’re sort of piecing together, it’s like pieces of a puzzle. If all of these different theories would point in the direction of thinking that it’s right, that gives you some bit of evidence and it’s uncertain.
Andy: You’re not going to be 100% certain that this is the right thing to do, but I compare it to House MD, the doctor who’s trying to diagnose a disease, right? Like, you know doctors are… We don’t realize the degree to which doctors are very uncertain about some of their diagnoses, right? They’re just, they’re doing the same thing.
Andy: They’re like, “Well, there’s this symptom in place that points to these things. There’s this other thing in place that narrows it down to these possibilities. There’s this third thing,” right? And they say, “It’s probably one of these two things. Why don’t we give you medicine for the first one and see what happens?”
Andy: I mean, have you ever been like sort of a guinea pig with your own primary care physician where they’re like, “It’s probably one of these. Let’s give you this and see if it works,” right? There’s a lot of uncertainty in reasoning about medicine. And I think that’s a good analogy to think about reasoning in the moral domain as well.
Helen: Well, one thing that came up when I was just reading a little bit ahead of our interview is, like, are there undeniable, this-is-right-and-wrong truths when it comes to these things, or is everything just collective, for the algorithm that you just shared, where we all kind of agree that this is morally correct? Where do you land on that?
Andy: That’s a good question. So I do think there are some things that are just fundamentally wrong. And I do think there are some things that are fundamentally right. And I think there’s a lot of agreement about what those things are. So there’s a really good story that gets used in ethics classes to talk about moral disagreement.
Andy: When the Romans first encountered, I think it was the Colossians, the Romans had this practice of burning their dead at the time. I think it was the Colossians who had this practice of eating their dead at the time. And the Romans were like, “You do what? You eat them?” And the Colossians are like, “Yeah, what do you do?”
Andy: And they’re like, “Well, we burn them.” And they’re like, “You burn your dead? Are you insane?” You talk about moral [versus] factual things, or moral [versus] non-moral disagreements. There’s a kind of explanation there: if you think the afterlife is up there and you think it’s a spirit that goes up there, right?
Andy: What happens when you burn things? Right, smoke, it rises, right? So you might think this is how we get, you know, our ancestors up there. If you think that really there’s no afterlife up there, but there’s like a life force that your group has and the life force has to stay within the group, right? Well, what do you do?
Andy: Are there certain important essences and you need to keep your ancestors within the family? Well, a good way to keep the ancestors within the family is… literally put them within the family, right? That’s sort of how it goes, but the idea here is, that’s a weird extreme example, but the point of that example is…
Helen: That was not on my bingo card of what was going to be discussed.
Andy: Well, the point of the story is, with the Romans and the Colossians (and again, I’m not sure it was the Colossians, I may be getting the example wrong, but the point is still illustrative), they both agreed you should do whatever best honors your dead, and every culture will probably agree on that, right?
Andy: You should do whatever best honors the dead, right? And what they disagree on is sort of like what’s going to best honor the dead. But that’s a principle that I think is probably almost universal, right? Same thing for things like murder, right? Every culture has a notion that like murder is wrong. I think there’s one society we found where it’s like the purge year round and like it’s just totally fine.
Andy: But for the most part, almost all cultures will agree. So I do think that there are some things that are safe to regard as bedrock. But I do also think, I do think we need to respect that, you know, beyond that, it gets real murky, real quick, and it’s not always going to be easy to figure out what the right thing to do is, and I think we need to be a little bit more intellectually humble than we are in our society and not be as morally certain about some of the more difficult cases. So, I’m kind of giving you a weaselly answer, which is I do think there are some definite right and wrong answers, but I also respect the idea that for some things we may not come to universal agreement and, you know, there are going to be certain things that we might need to be a little bit more tolerant of. But then that’s a where-do-you-draw-the-line question, right?
Andy: Like what kinds of moral disagreements are we going to be tolerant of and be like “To each his own, to each her own?” And what kinds of moral disagreements are like, “No, we got to draw a line in the sand here.” That’s [the] kind of thing where as a society, even if we can’t be certain that we’re right, we’re going to treat you like you’re on the wrong side of history here. You’re on the wrong side of the line.
Helen: Well, I can see how challenging it can be with our current, I don’t know, social structure, just in the sense that there’s not much room for nuance, especially as we’re consuming so much content on our screens, and everyone seems to have an opinion about everything or feel the need to have an opinion that, yeah, there’s a, it seems like a loss of that dialogue that you mentioned that’s so important and that we really need to have more respectful dialogue.
Andy: You’re absolutely right, and this is where that moral reasoning piece comes in, and being able to identify all those different reasons. I’m amazed at the degree to which people will think, you know, it’s just obvious that this is the right thing to do and you’re a monster if you think otherwise, right?
Andy: It’s just, how could you possibly do X? Well, you know, part of the explanation for that is there’s some kind of framework you’re latching on to and some set of options you’re latching on to, and you’re just not aware of some other element of the situation that might be going on. So there’s all kinds of interesting examples of this.
Andy: Do you want me to give you an example, or should we…?
Helen: Yeah, sure. I’d love to.
Andy: You can cut it out if you don’t like it.
Andy: So, you know, if you wanted to… Let’s say you didn’t want to eat meat for moral reasons right? Like you don’t want to cause unnecessary pain and suffering in animals.
Andy: And so of course I’m going to switch to a plant based diet. But with the way modern agriculture works, you have these big combines that go through fields and just grind up things and harvest things and they’ve done studies on, you know, they’ll sample like the vole, rat, mice population in a field before the combine goes through and then they’ll sample it after.
Andy: And then they try to estimate, like, how many mice, or voles, or other kinds of animals were, like, ground up in the combines. And the size of an animal shouldn’t make a difference. Rats and mice are, you know, very intelligent, sentient creatures capable of pain and suffering and things like that. And so there’s some reason to think that, like, you know, [in terms of the] amount of animal suffering per pound of protein you might get out of these two processes, it’s a little less clear that the vegetarian diet is going to result in less animal suffering. Now, I have a lot of vegetarian friends.
Andy: And so I have to say this, because they’ll get mad if I don’t, which is, you know, maybe those mice are like running out of the field. They’re just real smart. Like kind of like Secret of NIMH. You remember that old animated show from the eighties? Like the, oh, there’s this old animated show from the eighties where the mice, the, this thing’s coming through, that’s going to destroy them.
Andy: And they all just duck out. They jet, right? So, it could be that those animals are just saving themselves and you’re really not causing a lot of animal suffering. There’s reason to think that study is flawed. I only introduce it as an example where what might seem obvious is at least not so obvious, even my vegetarian friends are going to say, “Look, you’re right. We do need to see, you know, are we causing more animal suffering from a plant based diet or not?”
Andy: I happen to think my vegetarian friends are probably right, that it’s probably mice getting out of dodge rather than getting ground up. But that’s just an example where you can be really confident that, if you care about this thing, the only course of action is this, and then a little bit of digging reveals that’s not necessarily the case.
Helen: Yeah, I’m a plant-forward eater, and it’s very hard in the US, the more that you dig into companies and how our food is produced, to have any ethical high ground, unless you’re, like, literally making your own food, because it’s so jumbled the deeper that you dig. Oh, one thing that you were saying as you were kind of explaining, it sounded like an algorithm to me, of applying the different moral frameworks and weighing the different components differently, that it could be a really interesting custom GPT of putting it in there and then asking ChatGPT, or your custom GPT, to give it out.
Helen: But I was curious, ’cause you mentioned an example when we were talking before the interview about Gettier and ChatGPT-4 actually being really good. So what has been your experience with ChatGPT giving out good answers related to some of these questions?
Andy: So, someone had said, ChatGPT is going to replace philosophers. I was like, “What? No…”
Helen: Chat GPT is like replacing everyone. So everyone’s getting in that bucket.
Andy: Yeah. So, ChatGPT, this was 3.5. I asked it… I was toying with the idea of doing a video series where I would like sort of, you know, Andy does philosophy with ChatGPT, and just sort of see what happens and just pretend like it’s a dialogue partner. So I just treated it like I was talking to someone, and so I gave it a definition of knowledge. A common definition of knowledge is a true belief with justification or good reasons, right?
Andy: Like if you just guess the lotto numbers, you don’t know what they are even if you get it right, but if someone tipped you off, that this is what they were going to be and that it was rigged and then you guessed the lottery, you had evidence that it was going to be that, right? So that’s sort of the idea.
Andy: So there’s this famous counterexample to that, where people can end up with true beliefs and really good evidence, but we would say they don’t know it, right? So one very simple example is, if I’m looking at a very realistic, we just recently went to Los Angeles and went to Madame Tussauds wax museum, the celebrity wax museum.
Andy: Now, let’s say I go in there and I don’t realize it’s a wax museum, right? And I remember thinking [that] the Johnny Depp wax figure looked like a real human being from about 10 feet away, right? So imagine I don’t know it’s a wax figure, right? And I’m like, “Oh my gosh, Johnny Depp is in this building.”
Andy: I don’t know it’s a wax museum, so I have really good reason to think Johnny Depp is here. And let’s suppose Johnny Depp is actually on the other side of the wall. The real Johnny Depp is there, but I don’t see him. So I have a true belief that Johnny Depp is in this building, right?
Andy: But it’s weird to say I know it, because my evidence is coming from this wax figure. I’m not connected to the real Johnny Depp. So that’s a counterexample, right? Okay, I had to tell you that to give you the backstory. So I asked ChatGPT, “Here’s the definition of knowledge. Can you give me a counterexample to it?”
Andy: And it started to give me existing Gettier cases, right? Things that you could find in the literature. And then I was like, no. I have a challenge when I teach my students this: I have them give me their own version. So I was like, “Give me your own version.” And I even gave it parameters.
Andy: Give me a version… Actually, I didn’t give it parameters at first. And so it started to try to give me an example, and it just couldn’t do it. It couldn’t get all the pieces together. But then ChatGPT-4 came out. I tried the same thing, and it gave me a Gettier example. And I was like, “I think you’re reading a bunch of literature out there.
Andy: Give me a Gettier case without, you know, drawing on that existing literature.” And it gave me a realistic Gettier case. It could also tell me which features made it one. Now, I still think it was heavily dependent on existing literature to do that, because of the way the language models are structured, but it was interesting to me. Now, it sounded like you had another idea: could ChatGPT be programmed to algorithmically work through the different frameworks, consequences, rights, care ethics?
Andy: I mean, I don’t know enough about the programming to know, but here’s why I think it couldn’t do that. One, I’ve tried to get it to do those things, and it’s very hesitant to make any kind of moral proclamation. I think that’s the guardrails. My understanding is we put in so many guardrails so that ChatGPT doesn’t teach people how to make meth or bombs or things like that, and I think they’ve made at least the existing models like ChatGPT, I’ll say, morally cautious. So with current ChatGPT, I’m not that hopeful. But could you theoretically take an uncensored version and sort of program it?
Andy: Again, not knowing a lot about the programming and with my limited layperson understanding of how the tech works, I’m still not very optimistic, because my understanding is it’s just drawing from patterns that it sees out there on the internet, and I don’t think there’s enough of that out there. There’s no paper that a philosopher has written with the formula: here’s how heavily you weigh the consequences.
Andy: Here’s how heavily you weigh the rights, and so on. And part of moral decision-making is that there’s a very human element in the moment. When I’m making a moral decision, part of what I’m basing it on is my human ability to detect how much this really upsets you, right? And how much it upsets other people, right?
Andy: Like if I’m on a team, part of what I’m doing when I try to gauge how much harm this is causing is figuring out who is really deeply impacted by it. And so there’s a real human element. In broad brushstrokes, you can abstractly weigh these kinds of things, but there is a human component.
Andy: Maybe there’s some way to have AI do it in the future, but I’m not optimistic about how you train an AI to be in a room, give it some cameras and heat sensors, and have it figure out, “Okay, who is this hurting more, Helen or Joe?” Right? I don’t know how you train an AI to do that just by reading faces.
Andy: But it’s amazing what we as humans have evolved to be able to do, to know what’s going on inside the minds of other people just based on their expressions, their body language, and things like that. Some of those elements are, I think, an important part of the moral domain, and I’m not sure how computers could do them.
Andy: But like I said, I’m a layperson here on the tech side of this space.
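To make the custom-GPT idea Helen floats above a little more concrete, here is a minimal sketch, in Python, of what “algorithmically weighing the frameworks” could look like. Every name, score, and weight in it is a hypothetical placeholder rather than anything Cullison proposes; as he notes, choosing the weights is itself the human moral judgment that no formula supplies.

```python
# A hypothetical illustration of the "weighted frameworks" idea discussed above.
# The framework names, scores, and weights are illustrative placeholders only.
from dataclasses import dataclass


@dataclass
class FrameworkScores:
    """Scores in [-1, 1] for how an option fares under each moral framework."""
    consequences: float  # estimated overall harm/benefit
    rights: float        # how well the option respects rights and duties
    care: float          # how well it attends to relationships and the vulnerable


def weighted_verdict(scores: FrameworkScores, weights: dict) -> float:
    """Combine the framework scores into one number using caller-supplied weights.

    This is the step Cullison flags as missing: no philosopher has published
    the formula, so the weights themselves encode a substantive moral judgment.
    """
    return (weights["consequences"] * scores.consequences
            + weights["rights"] * scores.rights
            + weights["care"] * scores.care)


# Two hand-scored candidate options and one possible weighting.
option_a = FrameworkScores(consequences=0.6, rights=-0.2, care=0.3)
option_b = FrameworkScores(consequences=0.2, rights=0.8, care=0.5)
weights = {"consequences": 0.4, "rights": 0.4, "care": 0.2}

print(weighted_verdict(option_a, weights))  # ≈ 0.22
print(weighted_verdict(option_b, weights))  # ≈ 0.50
```

Running the sketch only reproduces whatever priorities the caller encodes in the weights, which is exactly the human component Cullison argues cannot be automated away.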
Helen: I mean, everyone’s trying to figure out the answers to these. Even OpenAI released grants for new and novel approaches to governance to help answer some of the biggest moral questions. So it’s definitely an interesting time for how we, as a society, are going to grapple with AI’s impact. But I think you might be the first person who has given me a reason to maybe try Elon Musk’s new AI tool. What is it, Grok? Which has no guardrails. The irony of ChatGPT having too many guardrails on moral and ethics questions is striking.
Andy: What did I say that gave you a reason to put Elon Musk’s chip in your head? What did I say? Haha.
Helen: Haha. I might turn away some viewers, I know. I’m not an Elon fan, especially with how he is on Twitter. But you had said that the current large language models have too many guardrails to be able to engage, whereas Elon’s, I think, has zero to minimal guardrails.
Andy: Yeah, they do.
Helen: So, yeah, Elon’s might not have those guardrails.
Andy: Yeah. There was actually an article that recently came out whose title was something like “Welcome to the Uncensored World of AI,” and it’s precisely because we’re starting to see a kind of visceral reaction to the guardrails. Everything we want to do that is really good and exciting about AI, that could solve real problems, also has the potential to create real problems, right?
Andy: And, you know, all sorts of innovations are like this. There’s now a whole genre of TV show or movie where a scientist wrestles with the fact that this thing they’ve created could destroy the world, right? Whatever it is they designed couldn’t do really great things unless it was also capable of really awful things.
Andy: And so we’re sort of at that moment with AI where people are realizing it’s going to be real hard to have it be able to do the great things that everyone’s hopeful it can do without opening the floodgates to some pretty awful things. That’s one of the big things I think we’re going to need to be wrestling with.
Helen: And I know it’s been reported that some OpenAI employees specifically have been really interested in the biographies of the people who developed the atom bomb, and in that kind of justification: I’m working on this tool that can maybe solve cancer or whatever, but could also potentially harm all of humanity.
Helen: So it’s an interesting time for sure in that regard. Well, I feel like we could go on and on, but I think the captain just said that we’re preparing for landing for our conversation. So what’s one thing that you’d like our viewers and listeners to remember or walk away with from our conversation today?
Andy: Something I said earlier: when we think about ethics, particularly in the private sector, in the business world, we often think about compliance, rules, and regulations, and I think we need to have those in place. But something that also really needs to be in place is robust moral reasoning, or ethical reasoning, development for practitioners.
Andy: For users, for designers. You know, the emphasis can’t be everywhere all the time, right? We need more people in our society trained up to identify moral issues, to identify all these different reasons, and to have the capacity to weigh those reasons thoughtfully and talk about them, because the real problems for AI are going to happen in spaces where there’s only a handful of people, or a single research team, and that’s where the moral problem needs to be nipped in the bud right away. So we need massive amounts of moral reasoning development across all areas of our population.
Helen: I couldn’t agree more, especially since meeting you, and I’m so excited to hopefully sit in on your AI and ethics class once it’s up and running. Is there anything else that you want to make sure to include in today’s conversation?
Andy: I guess one other thing to add, on that topic of really maximizing the degree to which people have the opportunity to develop these skills and think through these problems: it can actually start very early.
Andy: Kids are very capable of starting to develop these skills, and one of our biggest programs is really a suite of programs doing work with K-12 schools in the Cincinnati area. We have our Greater Cincinnati High School Ethics Bowl, and I would love for your viewers or listeners to get involved. We’re always in need of community members to be volunteer judges, coaches, and moderators, and you don’t need to be an ethicist.
Andy: I actually think community members make some of the best judges. We also have programs with the library, including an Ethics and Dragons program, which is a Dungeons and Dragons program. It turns out that when kids play Dungeons and Dragons, they get better at moral reasoning. So we have programs in five area libraries if you’re interested in that.
Andy: Just reach out to us at CincyEthics.org, or email us at info@cincyethics.org; that goes directly to me and our program manager. It takes a village, as they say, to raise our kids well, so if you’re interested in figuring out how you want to get involved with the center or with any of these programs, please shoot us an email. I would welcome any of your viewers or listeners who want to get involved.
Andy: Oh, and if you want us to work with your organization, your company, your executive team, your leadership teams, also email us at info@cincyethics.org. We have a ton of workshops on workplace moral reasoning development and leadership development that we can do with your organization.
Helen: And I’ll be sure to include all of these links and your contact information in our dedicated blog post that will accompany this interview. Well, Andy, I’m so glad that we sat beside each other on this plane ride. I thoroughly enjoyed our conversation, so thank you so much for all of your time and for sharing your insights and perspective when it comes to AI, ethics, and moral reasoning.
Andy: Well, thank you, this was the most fun airplane ride conversation about ethics and philosophy I’ve had. So thank you.
Helen: Thank you for spending some time with us today, we’re just getting started and would love your support. Subscribe to Creativity Squared on your preferred podcast platform and leave a review. It really helps. And I’d love to hear your feedback.
Helen: What topics are you thinking about and want to dive into more? I invite you to visit CreativitySquared.com to let me know. And while you’re there, be sure to sign up for our free weekly newsletter so you can easily stay on top of all the latest news at the intersection of AI and creativity.
Helen: Because it’s so important to support artists, 10 percent of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized nonprofit that supports over a hundred arts organizations. Become a premium newsletter subscriber or leave a tip on the website to support this project and ArtsWave. Premium newsletter subscribers will receive NFTs of episode cover art and more extras to say thank you for helping bring my dream to life.
Helen: And a big, big thank you to everyone who’s offered their time, energy, encouragement, and support so far. I really appreciate it from the bottom of my heart. This show is produced and made possible by the team at Play Audio Agency. Until next week, keep creating.