Who's In This Podcast
Helen Todd is co-founder and CEO of Sociality Squared and the human behind Creativity Squared.
Professor Kelly Cohen, Ph.D. is the Brian H. Rowe Endowed Chair at the Department of Aerospace Engineering and Engineering Mechanics at the University of Cincinnati.


Ep65. Responsible A.I. & Innovation: What You Need to Know About Responsible and Explainable A.I. from UC Digital Futures’ World-Renowned Researcher Dr. Kelly Cohen

Creativity Squared is keeping it close to home on this week’s episode with a special guest: University of Cincinnati (UC) professor, luminary A.I. researcher, and pioneer of the Responsible A.I. movement, Dr. Kelly Cohen.

Dr. Cohen is the Brian H. Rowe Endowed Chair in Aerospace Engineering and Professor of Biomedical Engineering at UC. With over three decades of experience in fuzzy logic systems research, Dr. Cohen has helped advance the field of Explainable A.I. for use cases such as autonomous robotics and personalized medicine. Much of his work is funded by grants from critical government and military agencies, including the National Institutes of Health (NIH), the U.S. Department of Defense, and NASA.

He currently serves as President of the North American Fuzzy Information Processing Society. He co-founded Cincinnati-based Genexia, which offers A.I. solutions for detecting cardiovascular disease (the number one killer of women worldwide) during routine mammograms. Dr. Cohen is also the Chief Scientific Advisor for BrandRank.ai, the marketing intelligence service founded by recent Creativity Squared guest Pete Blackshaw, which helps brands understand and adapt how they’re represented by A.I. search engines and chatbots.

Although this is Dr. Cohen’s first appearance on the show, we’ve had the privilege of calling him a friend since 2023. He’s an active member of a growing community of A.I. enthusiasts in the Cincinnati region, serving as an organizing partner of CincyAI for Humans, Ohio’s largest A.I. meetup, hosted monthly by Creativity Squared’s Helen Todd and her co-host Kendra Ramirez at the University of Cincinnati’s Digital Futures building. The new facility is home to several interdisciplinary and globally recognized research labs, including Dr. Cohen’s. As A.I. rapidly integrates into every aspect of our lives, Dr. Cohen’s focus on Responsible A.I. development couldn’t be more timely or critical.

In today’s conversation, you’ll hear Dr. Cohen’s insights on the current state of A.I., the importance of Responsible A.I. development, and his vision for a future where A.I. enhances human capabilities without compromising ethics. We discuss his work in testing and verifying A.I. systems, the exciting developments coming out of UC, and why he believes that Responsible A.I. development doesn’t have to come at the expense of innovation.

Dr. Cohen’s enthusiasm for Responsible A.I. and his commitment to making the world a better place through technology is truly infectious. Join us as we explore the intersection of A.I., ethics, and human potential with one of the field’s most passionate and knowledgeable voices.

The Principles of Responsible A.I. 

Dr. Cohen says that the Golden Rule of doing unto others as you would have them do to you is at the core of Responsible A.I. development. 

Breaking it down further, he says that A.I. developers should try to minimize bias in their models so diverse identities are represented fairly. That could involve everything from curating training data more carefully to redesigning user interfaces for broader appeal and accessibility. 
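
To make the idea of measuring bias concrete, here’s a minimal sketch in Python of one common fairness check, the demographic parity gap, which compares how often a model produces positive outcomes for different groups. The loan-approval scenario, group labels, and numbers below are all hypothetical:

```python
from collections import defaultdict

def demographic_parity_gap(groups, predictions):
    """Largest gap in positive-prediction rates across groups.

    groups: group label per example (hypothetical demographic categories)
    predictions: model output per example (1 = positive, 0 = negative)
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical example: a model approving loans for two groups
gap, rates = demographic_parity_gap(
    ["a", "a", "a", "b", "b", "b"],  # made-up group labels
    [1, 1, 0, 1, 0, 0],              # made-up model decisions
)
print(rates)  # approx. {'a': 0.667, 'b': 0.333}: the model favors group 'a'
print(gap)    # approx. 0.333; closer to zero means more equal treatment
```

A gap near zero doesn’t prove a model is fair, but a large one is a quick signal that the training data or model design deserves a closer look.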

Responsible A.I.’s behavior also has to be explainable, accountable, and ethical. That means if an A.I. makes an unexpected decision or provides an incorrect answer, the developer, or ideally even the user, should be able to figure out why it happened and how to fix it.

“Yes, we can develop systems that have remarkable capabilities while being explainable, transparent, and ethical. It may take more time, more effort to certify. People who are greedy want quick answers, but there is an alternative, and that alternative doesn’t come in the way of our success.”

Dr. Kelly Cohen
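
Dr. Cohen’s own specialty, fuzzy logic, shows what explainability can look like in practice: decisions come from human-readable rules rather than opaque weights. Here’s a minimal, hypothetical sketch of fuzzy inference, an illustration of the general technique rather than Dr. Cohen’s actual models, where a braking decision can always be traced back to which rules fired and how strongly:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaks at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def brake_strength(distance_m):
    """Toy fuzzy controller: how hard to brake given obstacle distance.

    Every rule is human-readable, so any output can be traced back to
    which rules fired and how strongly -- the heart of explainability.
    """
    near = tri(distance_m, -1, 0, 30)   # peaks at 0 m, fades out by 30 m
    far = tri(distance_m, 10, 50, 51)   # ramps up from 10 m, peaks at 50 m
    rules = [
        ("IF distance is near THEN brake hard (1.0)", near, 1.0),
        ("IF distance is far THEN brake gently (0.2)", far, 0.2),
    ]
    total = sum(strength for _, strength, _ in rules)
    if total == 0:
        return 0.0  # no rule applies outside the modeled range
    for text, strength, _ in rules:     # the built-in explanation
        print(f"{text}: fired at {strength:.2f}")
    return sum(strength * out for _, strength, out in rules) / total

print(f"brake = {brake_strength(20):.2f}")  # moderate braking at 20 m
```

Because the output is a weighted blend of named rules, an unexpected decision can be diagnosed rule by rule, which is precisely the property that black-box models lack.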

Sustainability is another key aspect of Responsible A.I. solutions. It takes a lot of energy to train and run A.I. models, which are only getting more numerous, more powerful, and more widely used as time goes on. ChatGPT received an average of 13 million daily visitors in January 2023. Today that number is closer to 60 million per day. 

Addressing issues like A.I.-generated misinformation, indiscriminate data collection, and sexually explicit nonconsensual deepfakes, Dr. Cohen says Responsible A.I. should respect the privacy of users and non-users alike. 

Ultimately, he says that the best way to build Responsible A.I. is to make that the goal before writing a single line of code. As a professor, Dr. Cohen likes to start off his courses by having students record a video about what Responsible A.I. means to them, why they’re pursuing an A.I. career, and the effect they’d like their work to have on the world. 

Dr. Cohen brings these principles to life outside his research and teaching duties as well. After narrowly surviving a heart attack in 2018, caused by genetic factors he had been unaware of, Dr. Cohen helped build an A.I. model that can analyze mammograms to detect signs of heart disease. He’s also Chief Scientific Advisor for BrandRank.ai, which leverages Responsible A.I. and game theory to hold brands accountable for claims around sustainability and customer satisfaction.

How Cincinnati Became a Hub For Responsible A.I. 

“Responsible A.I.” and “Explainable A.I.” may seem like new terms invented amid the recent rise of ethically questionable generative A.I. tools. According to Google Trends, interest in these terms before 2018 was practically zero. 

That’s not the case for Dr. Cohen, though, who completed his Ph.D. focusing on A.I. in 1994 before joining UC in 2007. Since then, he’s been a driving force behind UC’s evolution into a national hub for Explainable A.I. research. Today, he leads one of UC’s largest research teams with three permanent staff and 16 graduate students. He’s authored or co-authored close to 200 peer-reviewed publications, and he’s participated in more than 100 grant-funded projects, totaling hundreds of thousands of dollars in federal, state, and nonprofit investment.

But Dr. Cohen says that the best reflection of his program’s success is the success of his students. One of them, Dr. Nick Ernest, developed a fully explainable fuzzy logic A.I. model that reliably outperformed human fighter pilots in aerial combat simulations. His company, Psibernetix, was acquired by Thales, a French avionics company that has since established a footprint in Ohio and employs five of Dr. Cohen’s former students.

Dr. Cohen’s lab is just one piece of UC’s efforts to foster a collaborative flywheel of Responsible A.I. development through relationships with public and private partners. UC’s Digital Futures building also houses 20 different labs, including the Future Mobility Design Lab run by Professor Alejandro Lozano Robledo, who appeared on Creativity Squared Episode 38. Alejandro and Dr. Cohen are currently collaborating on a grant application to develop A.I. solutions for NASA operations.

“There’s this wide range of things that originally I was not supposed to do when I was hired into the department [of] aerospace engineering. But now I work with a wide range of things, and that’s what makes it exciting for me, my students, my capability and the industry around me.”

Dr. Kelly Cohen

Dr. Cohen says that spaces like the Digital Futures building are critical for cultivating innovative ideas with fellow experts in other disciplines. As a result, he says 2024 has been one of the most productive years of his career in terms of publications, awards, and grant projects secured. His other work in collaboration with his neighbors includes designing digital twins to improve worker safety, exploring how A.I. can improve airports’ cybersecurity, and leveraging A.I. to sift through medical records for insights on treating bipolar disorder and other serious mental health conditions.

The Need For Accountability in the A.I. Industry

Ensuring accountability for every decision a model makes is the most critical aspect of designing Responsible A.I. systems, according to Dr. Cohen. Since it’s nearly impossible to account for every potential scenario, especially in use cases as risky as flying an aircraft, Responsible A.I. systems must have a framework for making decisions when things don’t go as expected.

These principles of Responsible and Explainable A.I. contrast with many of the most popular closed-source foundation A.I. models. Some critics refer to models like ChatGPT as “black boxes” because even their developers often can’t pinpoint the exact cause of an A.I. hallucination.

As Dr. Cohen points out, the issues associated with unexplainable A.I. have led to several embarrassing and costly mistakes for companies and individuals that rushed to cut costs with unreliable A.I. solutions. 

“People now realize [A.I.’s] potential, but there is a lot of potential. We can augment human capability, be more efficient, more effective. We can do things we never dreamed of. We can solve some of the major problems that we face as humankind, but the question is … where do you separate the hype from the true potential and understand the gaps between where we are today and what we can do in the future?”

Dr. Kelly Cohen

Air Canada gave the world a cautionary tale this year when its customer service chatbot promised a refund to a grieving customer, only for a human agent to later deny the refund, citing a policy that the chatbot never mentioned. When the customer sued Air Canada for the $800 refund, the airline tried arguing that it wasn’t responsible for the chatbot’s mistakes. The court disagreed, ordering Air Canada to issue the refund because the company hadn’t warned customers that its chatbot might provide incorrect answers.

Multiple lawyers have also gotten burned over the past few years by relying too much on ChatGPT’s assistance in drafting court documents. A young Colorado lawyer lost his job after filing a motion referencing nonexistent case law that ChatGPT fabricated out of thin air. ChatGPT was also the prime suspect in a California eviction case where the landlord’s attorney filed a motion containing fictitious case law.

Dr. Cohen attributes these technological growing pains to human greed, A.I. hype, and the fact that generative A.I. developers haven’t figured out how to completely prevent hallucinations. According to a recent study, even the best A.I. models only produce hallucination-free answers 35% of the time.

Dr. Cohen points to regulation as one of the solutions, highlighting the AI Act recently passed by the European Union. The Act makes A.I. developers accountable for their products by establishing a duty to minimize potential harms caused by artificial intelligence.  

Testing and Verifying A.I. Systems

Contrary to the Silicon Valley mantra of “move fast and break things,” more testing is better when it comes to Responsible A.I. development. 

That’s especially true in Dr. Cohen’s primary field of aerospace engineering, where even the smallest malfunction can quickly result in catastrophic failure. To certify A.I. systems for these high-stakes settings, Dr. Cohen says his team relies on three complementary approaches.

The first approach involves formal design methods, which serve as the initial defense against potential safety violations. By applying formal methods early in the design process, Dr. Cohen and his team can prevent unforced errors, ensure safety across all use cases, and verify the integrity of complex codebases that often run into millions of lines. This proactive approach allows them to catch and rectify potential issues before they evolve into critical problems.
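
Formal methods range from theorem proving to model checking. As a rough illustration of the latter (a toy example, not the team’s actual tooling), an explicit-state model checker exhaustively explores every state a system model can reach and confirms that none of them violates the safety property:

```python
from collections import deque

def verify(initial, transitions, is_unsafe):
    """Exhaustive reachability check: visit every state the system can
    reach from the initial state and confirm none violates safety."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if is_unsafe(state):
            return False, state          # counterexample found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None                    # every reachable state is safe

# Toy model: altitude changes by -1, 0, or +1 each step; the controller
# is modeled as refusing commands that would go below 0 or above 10.
def transitions(alt):
    return [alt + d for d in (-1, 0, 1) if 0 <= alt + d <= 10]

ok, bad = verify(5, transitions, lambda alt: alt < 0)
print("verified safe" if ok else f"unsafe state reachable: {bad}")
```

Because the check covers every reachable state rather than a sample of test runs, a passing result is a proof about the model, which is what distinguishes formal verification from ordinary testing.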

The second approach Dr. Cohen utilizes is runtime assurance. He likens this method to the behavior of a turtle. Imagine a turtle moving as quickly as it can towards the seashore. When it encounters danger, it immediately retreats into its shell, prioritizing safety over speed. Similarly, the runtime assurance systems work with two parallel modes: one for high performance and another for safety monitoring. There’s constant vigilance for safety violations, with the ability to switch to a “safety shell” when needed. The system also incorporates hybrid modes to safely transition back to high performance. This approach enables the systems to adapt to unforeseen circumstances, whether they’re environmental changes or internal malfunctions.
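
In software terms, this turtle-like pattern is often realized as a simplex-style architecture: a monitor checks each command from the high-performance controller and hands control to a simple, verified fallback whenever the safety envelope would be violated. Here’s a minimal sketch with assumed names, bounds, and one-step dynamics:

```python
def high_performance_controller(state):
    """Complex, possibly learned controller: capable but unverified."""
    return state["command"]

def safety_controller(state):
    """Simple, verified fallback: the 'shell' the system retreats into."""
    return 0.0  # e.g., hold position

def violates_envelope(state, command):
    """Monitor: would executing this command leave the safe envelope?"""
    predicted = state["position"] + command   # one-step prediction
    return not (0.0 <= predicted <= 100.0)    # assumed safe bounds

def runtime_assured_step(state):
    """Run the fast controller, but fall back to safety when needed.
    (Hybrid logic for easing back to high performance is omitted.)"""
    command = high_performance_controller(state)
    if violates_envelope(state, command):
        return safety_controller(state)  # retreat into the safety shell
    return command                       # stay in high-performance mode

print(runtime_assured_step({"position": 50.0, "command": 5.0}))  # 5.0
print(runtime_assured_step({"position": 99.0, "command": 5.0}))  # 0.0
```

The design choice is that only the small monitor and fallback need to be fully verified; the complex controller can remain a black box because it can never act outside the envelope the monitor enforces.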

The third approach, which Dr. Cohen initially developed, is called stochastic robustness. This method is based on Monte Carlo simulations and involves an extensive analysis of potential scenarios. The team lists all possible uncertainties, both internal and external, and assigns probability distributions to these uncertainties. They then run millions of simulations to test system performance, including analyses of extreme cases that are unlikely but possible. This comprehensive method helps quantify risks and ensure that the systems can handle even rare, high-impact events.
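
As a simplified illustration of the idea (with made-up dynamics and distributions), a stochastic robustness study samples every uncertain parameter from its assigned distribution, runs the simulation, and estimates the probability of failure:

```python
import random

def simulate(wind_gust, sensor_bias):
    """Stand-in for a full flight simulation: returns peak tracking error."""
    return abs(0.4 * wind_gust + 1.5 * sensor_bias)

def estimate_failure_probability(n_runs=1_000_000, max_error=10.0, seed=0):
    """Monte Carlo: sample each uncertainty from its distribution,
    simulate, and count how often the allowed error is exceeded."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        wind_gust = rng.gauss(0.0, 5.0)       # assumed wind distribution
        sensor_bias = rng.uniform(-2.0, 2.0)  # assumed sensor-error range
        if simulate(wind_gust, sensor_bias) > max_error:
            failures += 1
    return failures / n_runs

print(f"estimated probability of failure: {estimate_failure_probability():.5f}")
```

The rare failures surfaced this way are exactly the extreme-but-possible cases described above; certification then hinges on driving that estimated probability below an acceptable threshold.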

By combining these three approaches – formal methods, runtime assurance, and stochastic robustness – Dr. Cohen and his team create a comprehensive safety framework. Each method has its strengths, and together they provide a robust solution for certifying A.I. systems in critical applications. While they’ve successfully applied these techniques to spacecraft and aircraft, Dr. Cohen notes that the challenge intensifies when considering applications like self-driving cars. The complexity of ground-level environments presents unique hurdles, but they’re working diligently to adapt their methods to these scenarios.

Dr. Cohen emphasizes that their ultimate goal is to bring the risks associated with A.I. in critical systems down to an acceptable threshold. Only then can they confidently certify these systems for real-world use, knowing they’ve done everything in their power to ensure their safety and reliability. Through this multi-faceted approach, Dr. Cohen and his team are at the forefront of making A.I. systems not just powerful, but also trustworthy and safe for critical applications.

Conclusion

Dr. Cohen believes that, while A.I. is undoubtedly the way forward, there is a critical need to approach its development and implementation responsibly. He stresses that progress and responsibility are not mutually exclusive; rather, they should go hand in hand.

“A.I. is the future. We need to strive to make the future a better world, and in order to use A.I. in the best possible way, we need to be responsible about it, and we can do both at the same time. The price to pay for rushing ahead and not being careful, that price is a bit too high for us as a society.”

Dr. Kelly Cohen

He likens the ethical considerations around A.I. development to the Hippocratic Oath taken by medical professionals. He suggests that A.I. practitioners should adopt a similar ethos, using all available techniques with the primary goal of improving lives and enhancing the quality of care provided. In other words, with great power comes great responsibility. 

By adopting this mindset, Dr. Cohen says that the A.I. community can strive to create technologies that not only push the boundaries of what’s possible but also contribute positively to society as a whole. As we move forward, Dr. Cohen’s message serves as a reminder that our ultimate goal should be to harness the power of A.I. to create a more equitable and ethical world for all.

Links Mentioned in this Podcast

Continue the Conversation

Thank you, Kelly, for joining us on this special episode of Creativity Squared. 

This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.  

Creativity Squared is brought to you by Sociality Squared, a social media agency that understands the magic of bringing people together around what they value and love: http://socialitysquared.com.

Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.

Join Creativity Squared’s free weekly newsletter and become a premium supporter here.