Who's In This Podcast
Helen Todd is co-founder and CEO of Sociality Squared and the human behind Creativity Squared.
Dr. Andrew Cullison is the founding executive director of the Cincinnati Ethics Center and a research professor at the University of Cincinnati who specializes in moral reasoning, ethics education, and leadership development.

Ep45. A.I. & Ethics: Where Do We Draw the Line? Discover How to Navigate A.I. with Leading Moral Reasoning & Knowledge Scholar Dr. Andrew Cullison

As A.I. continues to shape our world in profound ways, host Helen Todd sat down with Dr. Andy Cullison, founding executive director of the Cincinnati Ethics Center and research professor at the University of Cincinnati, for a thought-provoking conversation. Together they explore the complex moral landscape surrounding artificial intelligence, the urgent need for robust ethics education, the challenges of replicating human moral reasoning in machines, the importance of proactive governance in navigating the future of A.I., and more.

Cullison brings a wealth of expertise to the discussion.

With over 15 peer-reviewed publications to his name, Andy spent seven years as Director of the Janet Prindle Institute for Ethics at DePauw University, one of the largest collegiate ethics institutes in the country, before taking on his current roles. His scholarly work focuses on questions about how we can have moral knowledge.

In addition to his academic work, Andy has a passion for helping people of all ages develop crucial moral reasoning skills: he conducts workshops for K-12 students, teams, and organizations like the FBI, applying philosophical tools to advance diversity, equity, and inclusion.

Where do we draw the line with A.I. from an ethical lens — and how can you sharpen your thinking about the thorny moral challenges surrounding A.I. development and implementation?

To discover Cullison’s thoughts on these timely topics, listen to the episode or continue reading below. 

Ethics and Moral Reasoning in an A.I. World

As A.I. becomes increasingly ubiquitous, the need for ethics education has never been more pressing. To this point, Cullison emphasizes the importance of training practitioners, designers, and users to identify and address ethical issues in A.I. development and deployment.

“We need people to be real good at identifying what those rules should or shouldn’t be,” Cullison explains. “And that’s where moral reasoning becomes a kind of critical component of all this.”

“We need people to be trained up to spot those ethical issues that we don’t even know are on the horizon because A.I. is such a new technology.”

Andrew Cullison

By equipping individuals with the skills to recognize moral dilemmas and weigh competing values, Cullison elaborates, we can foster a more ethically conscious approach to A.I. innovation.

He then breaks down the key components of moral reasoning, which include identifying moral issues, generating possible answers, evaluating reasons and arguments, and engaging in civil dialogue to reach a resolution.

Another intriguing aspect of Helen’s conversation with Dr. Cullison centers on his experiments with ChatGPT’s moral reasoning capabilities.

While ChatGPT was able to provide a realistic counterexample to a philosophical definition of knowledge, Cullison explains that he remains skeptical about the ability of current language models to fully grasp the nuances of human moral decision-making. “There is,” Cullison insists, “a human component to that.”

“I’m not optimistic how you train A.I., to be like, in a room, give it some cameras and heat sensors and figure out like, ‘Okay, who is this hurting more, Helen or Joe?’”

Andrew Cullison

He highlights the importance of human empathy and understanding in navigating complex moral situations, acknowledging the limitations of A.I. systems that rely primarily on pattern recognition and data analysis.

Ethical Decision-Making, Moral Frameworks, and the Limits of Moral Certainty

Throughout the episode, Andy shares practical tips and exercises for navigating everyday ethical dilemmas. One key strategy involves identifying moral issues by paying attention to what angers or upsets people, as these emotions often signal underlying values at stake.

“Anger, being annoyed, being upset…there’s a foundational moral component to that…because the world is not a way that you want it to be. And you think it should be that way.”

Andrew Cullison

By peeling back the layers of frustration and engaging in thoughtful dialogue, we can better understand the moral landscape and work towards principled solutions.

These moral reasoning skills, Cullison argues, are not merely academic exercises but essential tools for working through real-world ethical dilemmas.

While the capabilities of A.I. continue to evolve at a breakneck pace, Cullison reminds listeners that the human element remains indispensable to moral reasoning.

To demonstrate this, he introduces the concept of “draw-the-line puzzles,” where clear principles for decision-making are needed to navigate complex situations. He provides the example of teasing and making fun of people, highlighting the intuitive sense of a line to be drawn between acceptable and unacceptable behavior.

“We would never be so prudish as to say you can never, ever, ever in your life make fun of people,” Cullison explains, “and we would never be so cruel to say that all jokes are fair game at all times, right?”

As he points out, people have an intuitive sense that there’s a line they don’t want to cross. By identifying obvious instances on either side of the line and examining the patterns that emerge, we can begin to articulate principled positions on thorny ethical issues.

When it comes to subscribing to particular moral frameworks, Cullison advocates for a balanced approach that considers the consequences, rights, and duties involved in a given situation. Rather than adhering dogmatically to a single theory, he likens moral reasoning to piecing together a puzzle, with each framework providing a piece of evidence for what is right or wrong.

He compares moral reasoning to the television drama ‘House,’ in which Hugh Laurie’s lead character, a doctor, solves medical mysteries. “We don’t realize the degree to which doctors are very uncertain about some of their diagnoses,” Cullison clarifies. “They’re just doing the same thing…There’s this symptom in place that points to these things…that narrows it down to these possibilities.”

By weighing the various moral considerations at play and engaging in ongoing evaluation, Cullison suggests, we can navigate the complexities of ethical decision-making with greater nuance and humility.

While acknowledging the existence of some universal moral principles, such as the wrongness of murder, Cullison cautions against excessive moral certainty in the face of difficult cases. He suggests that society should cultivate greater intellectual humility and tolerance for moral disagreements, while still being willing to draw lines in the sand on certain fundamental issues.

“I do think there are some things that are safe to regard as bedrock. But I do also think…we need to respect that, you know, beyond that, it gets real murky real quick, and it’s not always going to be easy to figure out what the right thing to do is.”

Andrew Cullison

By recognizing the limits of our moral knowledge and engaging in respectful dialogue, we can work towards a more nuanced understanding of the ethical challenges posed by A.I. and other emerging technologies.

Navigating the Double-Edged Sword of A.I. to Build a More Ethical Future

As the conversation turns to the potential dangers of A.I., Helen and Cullison discuss the double-edged nature of this powerful technology. While A.I. holds immense promise for solving complex problems and advancing human knowledge, it also carries the risk of causing unintended harm if not developed and deployed responsibly.

With this great power comes an equally great responsibility to ensure that A.I. systems align with our deepest values and aspirations. As Cullison pointedly observes:

“We’re sort of at that moment with A.I. where people are realizing…it’s going to be real hard to have it be able to do the great things that everyone’s hopeful it can do without opening the floodgates to some pretty awful things.”

Andrew Cullison

This sobering reality underscores the urgent need for proactive governance and diverse perspectives in shaping the future of A.I. Helen and Cullison agree that by bringing together experts from various fields, including ethics, philosophy, computer science, and beyond, we can work towards developing A.I. systems that align with our values and mitigate potential risks.

Throughout the episode, Cullison returns to the theme of moral reasoning skills as the bedrock upon which a more ethically conscious approach to A.I. must be built. He emphasizes that by developing these capacities in society at large, from K-12 students to corporate leaders, we can foster a culture of responsible innovation.

“We need massive amounts of…moral reasoning development across all areas of our population.”

Andrew Cullison

The Cincinnati Ethics Center offers a range of programs and workshops aimed at cultivating these essential skills, from a high school ethics bowl to leadership development initiatives — and even a program called Ethics & Dragons, which teaches kids moral reasoning through playing Dungeons & Dragons.

Through initiatives like these, Cullison and his colleagues are working to equip the next generation with the tools they will need to navigate the moral complexities of an A.I.-shaped future. By engaging diverse communities in ongoing dialogue and reflection around these vital issues, he hopes to build a shared foundation of ethical principles and practices to guide us forward.

With these programs available in five Cincinnati-area libraries, Cullison encourages anyone interested in participating or getting involved with the Ethics Center to reach out via email at info@cincyethics.org. The Ethics Center also offers workshops on workplace moral reasoning and ethical leadership development, and he encourages organizations and companies to reach out via the same address.

As the conversation draws to a close, Cullison leaves listeners with a powerful call to action: to actively participate in shaping the ethical landscape of A.I. By developing our moral reasoning skills, engaging in respectful dialogue, and advocating for responsible innovation, we can all play a role in building a future that aligns with our deepest values and aspirations.

“The real problems for A.I. are going to happen in spaces where there’s only a handful of people or a single research team. And that’s where the moral problem needs to be nipped in the bud right away.”

Andrew Cullison

In an era of breakneck technological change and profound moral uncertainty, the insights and provocations offered by Dr. Andrew Cullison serve as a powerful reminder of the vital importance of ethics education and proactive governance in the age of A.I.

Links Mentioned in this Podcast

Continue the Conversation

Thank you, Andrew, for being our guest on Creativity Squared.

This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.  

Creativity Squared is brought to you by Sociality Squared, a social media agency that understands the magic of bringing people together around what they value and love: http://socialitysquared.com.

Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.

Join Creativity Squared’s free weekly newsletter and become a premium supporter here.