Creativity Squared is keeping it close to home on this week’s episode with a special guest, University of Cincinnati (UC) Professor, luminary A.I. researcher and a pioneer of the Responsible A.I. movement, Dr. Kelly Cohen.
Dr. Cohen is the Brian H. Rowe Endowed Chair in Aerospace Engineering and Professor of Biomedical Engineering at UC. With over three decades of experience in Fuzzy Logic systems research, Dr. Cohen has helped advance the field of Explainable A.I. for use cases such as autonomous robotics and personalized medicine. The importance of his work is reflected in the fact that much of it is funded by grants from critical government and military agencies including the National Institutes of Health (NIH), the U.S. Department of Defense, and NASA.
He currently serves as President of the North American Fuzzy Information Processing Society. He co-founded Cincinnati-based Genexia, which offers A.I. solutions for detecting cardiovascular disease (the number one killer of women worldwide) during routine mammograms. Dr. Cohen is also the Chief Scientific Advisor for BrandRank.ai, the marketing intelligence service founded by recent Creativity Squared guest Pete Blackshaw, which helps brands understand and adapt how they’re represented by A.I. search engines and chatbots.
Although this is Dr. Cohen’s first appearance on the show, we’ve had the privilege of calling him a friend since 2023. He’s an active member of a growing community of A.I. enthusiasts in the Cincinnati region, serving as organizing partner with Cincy AI for Humans, Ohio’s largest A.I. meetup co-hosted monthly by Creativity Squared’s Helen Todd and Kendra Ramirez at the University of Cincinnati’s Digital Futures building. The new facility is home to several interdisciplinary and globally recognized research labs, including Dr. Cohen’s. As A.I. rapidly integrates into every aspect of our lives, Dr. Cohen’s focus on Responsible A.I. development couldn’t be more timely or critical.
In today’s conversation, you’ll hear Dr. Cohen’s insights on the current state of A.I., the importance of Responsible A.I. development, and his vision for a future where A.I. enhances human capabilities without compromising ethics. We discuss his work in testing and verifying A.I. systems, the exciting developments coming out of UC, and why he believes that Responsible A.I. development doesn’t have to come at the expense of innovation.
Dr. Cohen’s enthusiasm for Responsible A.I. and his commitment to making the world a better place through technology are truly infectious. Join us as we explore the intersection of A.I., ethics, and human potential with one of the field’s most passionate and knowledgeable voices.
Dr. Cohen says that the Golden Rule of doing unto others as you would have them do to you is at the core of Responsible A.I. development.
Breaking it down further, he says that A.I. developers should try to minimize bias in their models so diverse identities are represented fairly. That could involve everything from curating training data more carefully to redesigning user interfaces for broader appeal and accessibility.
Responsible A.I.’s behavior also has to be explainable, accountable, and ethical. That means if an A.I. makes an unexpected decision or provides an incorrect answer, the developer or ideally even the user themself should be able to figure out why it happened and how to fix it.
Sustainability is another key aspect of Responsible A.I. solutions. It takes a lot of energy to train and run A.I. models, which are only getting more numerous, more powerful, and more widely used as time goes on. ChatGPT received an average of 13 million daily visitors in January 2023. Today that number is closer to 60 million per day.
Addressing issues like A.I.-generated misinformation, indiscriminate data collection, and sexually explicit nonconsensual deepfakes, Dr. Cohen says Responsible A.I. should respect the privacy of users and non-users alike.
Ultimately, he says that the best way to build Responsible A.I. is to make that the goal before writing a single line of code. As a professor, Dr. Cohen likes to start off his courses by having students record a video about what Responsible A.I. means to them, why they’re pursuing an A.I. career, and the effect they’d like their work to have on the world.
Dr. Cohen brings these principles to life outside his research and teaching duties as well. After narrowly surviving a heart attack in 2018 caused by genetic factors he was never aware of, Dr. Cohen helped build an A.I. model which can analyze mammograms to detect heart disease symptoms. He’s also Chief Scientific Advisor for BrandRank.ai, which leverages Responsible A.I. and game theory to hold brands accountable for their claims around sustainability and customer satisfaction, for example.
“Responsible A.I.” and “Explainable A.I.” may seem like new terms invented amid the recent rise of ethically questionable generative A.I. tools. According to Google Trends, interest in these terms before 2018 was practically zero.
That’s not the case for Dr. Cohen, though, who began his PhD focused on A.I. in 1994 before joining UC in 2007. Since then, he’s been a driving force behind UC’s evolution into a national hub for explainable A.I. research. Today, he leads one of UC’s largest research teams, with three permanent staff and 16 graduate students. He’s authored or co-authored close to 200 peer-reviewed publications, and he’s participated in more than 100 grant-funded projects, totaling hundreds of thousands of dollars in federal, state, and nonprofit investment.
But Dr. Cohen says that the best reflection of his program’s success is the success of his students. One of them, Dr. Nick Ernest, developed a fully explainable fuzzy logic A.I. model that reliably outperformed human fighter pilots in aerial combat simulations. His company, Psibernetix, was acquired by Thales, a French avionics company that has since established a footprint in Ohio and employs five of Dr. Cohen’s former students.
Dr. Cohen’s lab is just one piece of UC’s efforts to foster a collaborative flywheel of Responsible A.I. development through relationships with public and private partners. UC’s Digital Futures building also houses 20 different labs, including the Future Mobility Design Lab run by Professor Alejandro Lozano Robledo who appeared on Creativity Squared Episode 38. Alejandro and Dr. Cohen are currently collaborating on a grant application to develop A.I. solutions for NASA operations.
Dr. Cohen says that spaces like the Digital Futures building are critical for cultivating innovative ideas with fellow experts in other disciplines. As a result, he says 2024 has been one of the most productive years of his career in terms of publications, awards, and securing grant projects. Some of his other work in collaboration with his neighbors includes projects like designing digital twins for improving worker safety, exploring how A.I. can improve airports’ cybersecurity, and leveraging A.I. to sift through medical records for insights on treating bipolar disorder and other serious mental health conditions.
Ensuring there’s accountability for the model’s every decision is the most critical aspect of designing Responsible A.I. systems, according to Dr. Cohen. Since it’s nearly impossible to account for every potential scenario, especially in use cases as risky as flying an aircraft, Responsible A.I. systems must have a framework for making decisions when things don’t go as expected.
These principles of Responsible and Explainable A.I. contrast with many of the most popular closed-source foundation A.I. models. Some critics refer to models like ChatGPT as “black boxes” because even their developers often can’t pinpoint the exact cause of an A.I. hallucination.
As Dr. Cohen points out, the issues associated with unexplainable A.I. have led to several embarrassing and costly mistakes for companies and individuals that rushed to cut costs with unreliable A.I. solutions.
Air Canada gave the world a cautionary tale this year when its customer service chatbot promised a refund to a grieving customer, only for a human agent to later deny the refund citing a policy that the chatbot never mentioned. When the customer sued Air Canada for the $800 refund, the airline tried arguing that it wasn’t responsible for the chatbot’s mistakes. The court disagreed, ordering Air Canada to issue the refund because the company hadn’t warned customers that their chatbot might provide incorrect answers.
Multiple lawyers have also gotten burned over the past few years by relying too much on ChatGPT’s assistance in drafting court documents. A young Colorado lawyer lost his job after filing a motion referencing nonexistent case law that ChatGPT fabricated out of thin air. ChatGPT was also the prime suspect in a California eviction case where the landlord’s attorney filed a motion containing fictitious case law.
Dr. Cohen attributes these technological growing pains to human greed, A.I. hype, and the fact that generative A.I. developers haven’t figured out how to completely prevent hallucinations. According to a recent study, even the best A.I. models only produce hallucination-free answers 35% of the time.
Dr. Cohen points to regulation as one of the solutions, highlighting the AI Act recently passed by the European Union. The Act makes A.I. developers accountable for their products by establishing a duty to minimize potential harms caused by artificial intelligence.
Contrary to the Silicon Valley mantra of “move fast and break things,” more testing is better when it comes to Responsible A.I. development.
That’s especially true in his primary field of aerospace engineering, where even the smallest malfunction can quickly result in catastrophic failure. Dr. Cohen says his team relies on three complementary approaches to test and certify safety-critical A.I. systems.
The first approach involves formal design methods, which serve as the initial defense against potential safety violations. By applying formal methods early in the design process, Dr. Cohen and his team can prevent unforced errors, ensure safety across all intended use cases, and verify the integrity of complex codebases that often run into millions of lines. This proactive approach allows them to catch and rectify potential issues before they evolve into critical problems.
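To give a flavor of what a formal check can look like, here’s a minimal, hypothetical sketch using the open-source Z3 solver in Python. The toy pitch-rate controller, its gain, and its actuator limit are illustrative assumptions rather than anything from Dr. Cohen’s lab; the point is that the safety property is proven for every input in the operating envelope rather than merely sampled.

```python
# A hypothetical formal-methods-style check with the Z3 SMT solver:
# can any input inside the operating envelope push a simple clamped
# proportional controller past its actuator limit?
from z3 import Real, Solver, If, And, Not, unsat

err = Real("err")    # altitude error fed to the controller (m) -- assumed
cmd = Real("cmd")    # commanded pitch rate (deg/s) -- assumed

K = 0.1              # assumed proportional gain
LIMIT = 15           # assumed actuator limit (deg/s)

s = Solver()
raw = K * err
# Controller definition: cmd = clamp(K * err, -LIMIT, LIMIT)
s.add(cmd == If(raw > LIMIT, LIMIT, If(raw < -LIMIT, -LIMIT, raw)))
# Operating envelope the system is designed for
s.add(And(err >= -1000, err <= 1000))
# Negate the safety property and ask the solver for a counterexample
s.add(Not(And(cmd >= -LIMIT, cmd <= LIMIT)))

if s.check() == unsat:
    print("Safety property holds for every input in the envelope.")
else:
    print("Counterexample found:", s.model())
```

Because the negated property comes back unsatisfiable, no input in the envelope can violate the limit, which is the kind of guarantee that sampling alone can’t provide.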
The second approach Dr. Cohen utilizes is runtime assurance. He likens this method to the behavior of a turtle. Imagine a turtle moving as quickly as it can towards the seashore. When it encounters danger, it immediately retreats into its shell, prioritizing safety over speed. Similarly, the runtime assurance systems work with two parallel modes: one for high performance and another for safety monitoring. There’s constant vigilance for safety violations, with the ability to switch to a “safety shell” when needed. The system also incorporates hybrid modes to safely transition back to high performance. This approach enables the systems to adapt to unforeseen circumstances, whether they’re environmental changes or internal malfunctions.
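As a rough illustration of the turtle analogy, here’s a minimal sketch of a runtime-assurance switch for a hypothetical one-dimensional vehicle. The controllers, distances, and speeds are invented for demonstration and aren’t drawn from Dr. Cohen’s systems; the key idea is that the safety monitor runs alongside the performance mode and always wins when margins shrink.

```python
# A toy runtime-assurance ("turtle") switch: all thresholds, speeds, and the
# scenario itself are illustrative assumptions.
SAFE_DIST = 5.0        # hard safety limit: never get closer than this (m)
MONITOR_MARGIN = 3.0   # retreat into the "shell" this far before the limit

def high_performance_controller(dist):
    """Aggressive mode: move toward the goal as fast as possible."""
    return 2.0  # commanded speed, m/s

def safety_shell_controller(dist):
    """Shell mode: stop and prioritize safety over speed."""
    return 0.0

def recovery_controller(dist):
    """Hybrid mode: creep forward while safety margins are rebuilt."""
    return 0.5

def select_command(dist, in_shell):
    """The runtime-assurance switch: the safety monitor runs alongside the
    performance mode and takes precedence whenever margins shrink."""
    if dist <= SAFE_DIST + MONITOR_MARGIN:
        return safety_shell_controller(dist), True       # retreat into the shell
    if in_shell and dist <= SAFE_DIST + 2 * MONITOR_MARGIN:
        return recovery_controller(dist), True           # careful transition back
    return high_performance_controller(dist), False      # full performance

# Tiny simulation: an obstacle drifts toward the vehicle, triggering the switch.
dist, in_shell, dt = 20.0, False, 0.5
for step in range(20):
    speed, in_shell = select_command(dist, in_shell)
    dist -= (speed + 0.3) * dt   # the obstacle also closes in at 0.3 m/s
    print(f"t={step * dt:4.1f}s  dist={dist:5.1f}m  mode={'shell' if in_shell else 'performance'}")
```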
The third approach, which Dr. Cohen initially developed, is called stochastic robustness. This method is based on Monte Carlo simulations and involves an extensive analysis of potential scenarios. The team lists all possible uncertainties, both internal and external, and assigns probability distributions to these uncertainties. They then run millions of simulations to test system performance, including analyses of extreme cases that are unlikely but possible. This comprehensive method helps quantify risks and ensure that the systems can handle even rare, high-impact events.
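A back-of-the-envelope version of that idea might look like the following Monte Carlo sketch, where the toy landing model, the uncertainty distributions, and the certification threshold are all stand-in assumptions rather than the team’s actual digital twin.

```python
# A toy stochastic-robustness study: sample internal and external
# uncertainties, run a simplified "digital twin," and estimate the
# probability of a safety violation. All numbers are assumptions.
import random

N_RUNS = 100_000                # number of sampled scenarios
HARD_LANDING_SPEED = 2.0        # touchdown speed above this counts as a violation (m/s)
CERTIFICATION_THRESHOLD = 1e-3  # maximum acceptable probability of violation

def simulate_landing(wind_gust, sensor_bias, actuator_lag):
    """Stand-in for a digital-twin run: returns touchdown speed given the
    sampled uncertainties."""
    nominal = 1.0
    return nominal + 0.05 * abs(wind_gust) + 3.0 * abs(sensor_bias) + 2.0 * actuator_lag

violations = 0
for _ in range(N_RUNS):
    # Assign a probability distribution to each uncertainty, then sample it.
    wind_gust = random.gauss(0.0, 4.0)       # external: gusts in m/s
    sensor_bias = random.gauss(0.0, 0.05)    # internal: altimeter bias in m/s
    actuator_lag = random.uniform(0.0, 0.2)  # internal: actuator delay in s
    if simulate_landing(wind_gust, sensor_bias, actuator_lag) > HARD_LANDING_SPEED:
        violations += 1

risk = violations / N_RUNS
print(f"Estimated probability of a hard landing: {risk:.5f}")
print("Certifiable" if risk <= CERTIFICATION_THRESHOLD else "Back to the drawing board")
```

If the estimated violation probability stays under the chosen threshold, the design can move toward certification; otherwise it goes back to the drawing board, which mirrors the workflow Dr. Cohen describes later in the conversation.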
By combining these three approaches – formal methods, runtime assurance, and stochastic robustness – Dr. Cohen and his team create a comprehensive safety framework. Each method has its strengths, and together they provide a robust solution for certifying AI systems in critical applications. While they’ve successfully applied these techniques to spacecraft and aircraft, Dr. Cohen notes that the challenge intensifies when considering applications like self-driving cars. The complexity of ground-level environments presents unique hurdles, but they’re working diligently to adapt their methods to these scenarios.
Dr. Cohen emphasizes that their ultimate goal is to bring the risks associated with A.I. in critical systems down to an acceptable threshold. Only then can they confidently certify these systems for real-world use, knowing they’ve done everything in their power to ensure their safety and reliability. Through this multi-faceted approach, Dr. Cohen and his team are at the forefront of making A.I. systems not just powerful, but also trustworthy and safe for critical applications.
Dr. Cohen believes that, while A.I. is undoubtedly the way forward, there is a critical need to approach its development and implementation responsibly. He stresses that progress and responsibility are not mutually exclusive; rather, they should go hand in hand.
He likens the ethical considerations around A.I. development to the Hippocratic Oath taken by medical professionals. He suggests that A.I. practitioners should adopt a similar ethos, using all available techniques with the primary goal of improving lives and enhancing the quality of care provided. In other words, with great power comes great responsibility.
By adopting this mindset, Dr. Cohen says that the A.I. community can strive to create technologies that not only push the boundaries of what’s possible but also contribute positively to society as a whole. As we move forward, Dr. Cohen’s message serves as a reminder that our ultimate goal should be to harness the power of A.I. to create a more equitable and ethical world for all.
Thank you, Kelly, for joining us on this special episode of Creativity Squared.
This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.
Creativity Squared is brought to you by Sociality Squared, a social media agency who understands the magic of bringing people together around what they value and love: http://socialitysquared.com.
Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.
Join Creativity Squared’s free weekly newsletter and become a premium supporter here.
TRANSCRIPT
[00:00:00] Dr. Kelly Cohen: We can do things we never dreamed of doing, right? We can solve some of the major problems that we face as humankind. But the question is the technology mature? Is there too much of hype? And where do you separate the hype from the true potential and understand the gaps between where we are today and what we can do in the future with AI?
[00:00:24] Helen: From aerospace engineering to being a leading voice of responsible AI, meet Dr. Kelly Cohen. Dr. Cohen is the Brian H. Rowe Endowed Chair in Aerospace Engineering and Professor of Biomedical Engineering at the University of Cincinnati.
[00:00:41] Helen: With over three decades of experience in fuzzy logic systems research, Dr. Cohen has advanced explainable AI from autonomous robotics to personalized medicine. His work has earned him the presidency of the North American Fuzzy Information Processing Society and grants from NSF, NIH, USAF, and NASA. He’s also co-founded Genexia, and serves as chief scientific advisor for BrandRank.AI, bridging academia and industry in the pursuit of ethical transparent AI systems.
[00:01:18] Helen: I first met Dr. Cohen at the Digital Futures 2.0 Grand Reopening. And he not only became a fast friend, but also our organizing partner for Cincy AI for Humans, the largest AI meetup in all of Ohio that I co-host with Kendra Ramirez. Cincy AI takes place every month at the University of Cincinnati’s Digital Futures building, which is an interdisciplinary research facility, home to globally recognized leading research in responsible AI, including Dr. Cohen’s lab.
[00:01:52] Helen: As AI rapidly integrates into every aspect of our lives, Dr. Cohen’s focus on responsible development couldn’t be more timely or critical. In today’s conversation, you’ll hear Dr. Cohen’s insights on the current state of AI, the importance of responsible AI development, and his vision for a future where AI enhances human capabilities without compromising ethics.
[00:02:18] Helen: We discuss his work in testing and verifying AI systems, the exciting developments happening at the University of Cincinnati, and why he believes that being responsible with AI doesn’t mean sacrificing innovation. Dr. Cohen’s enthusiasm for responsible AI and his commitment to making the world a better place through technology is truly infectious.
[00:02:40] Helen: Join us as we explore the intersection of AI, ethics, and human potential with one of the field’s most passionate and knowledgeable voices. Enjoy.
[00:02:59] Helen: Welcome to Creativity Squared. Discover how creatives are collaborating with artificial intelligence in your inbox, on YouTube, and on your preferred podcast platform. Hi, I’m Helen Todd, your host, and I’m so excited to have you join the weekly conversations I’m having with amazing pioneers in the space.
[00:03:17] Helen: The intention of these conversations is to ignite our collective imagination at the intersection of AI and creativity to envision a world where artists thrive.
[00:03:35] Helen: Kelly, welcome to Creativity Squared. It is so good to have you on the show.
[00:03:41] Dr. Kelly Cohen: It’s a pleasure being here. Thank you for inviting me in.
[00:03:44] Helen: Well, it’s a long time coming. I first met Kelly at the UC Digital Futures 2.0 Grand Reopening, or Grand- however it was called. And ever since then, we became fast friends.
[00:03:58] Helen: Kelly is our organizing partner for Cincy AI for Humans, and is just a national treasure when it comes to responsible AI and all of the work that he does, which we’ll dive more into, in this episode. But Kelly, first for the people who are meeting you for the first time, can you do a quick introduction and tell us a little bit about who you are, what you do and your origin story?
[00:04:20] Dr. Kelly Cohen: Hi, Kelly Cohen. I’m a professor at the University of Cincinnati. I’m also the Brian Rowe Endowed Chair in Aerospace Engineering. I have a secondary appointment in biomedical engineering. And I’m the current president of the North American Fuzzy Information Processing Society. Now that’s a mouthful.
[00:04:41] Dr. Kelly Cohen: But, fuzzy logic is one of the ways in which one can provide services, capabilities in the area of artificial intelligence. My journey into AI was quite accidental. I won an award, my previous employer, where, as a reward for the good work that I did, they told me to pick any lab, any topic, any professor in the country and, go ahead and study.
[00:05:09] Dr. Kelly Cohen: That was in Israel. And so, I went back to my alma mater, Technion in Haifa, and I decided to conduct my studies and my Ph.D. in the area of AI, artificial intelligence. That was 30 years ago. It started off in 1994, and, since then I’ve been developing and testing AI systems, so it’s been 30 years now.
[00:05:36] Dr. Kelly Cohen: And then when I joined the University of Cincinnati back in 2007, I started off in the area of my expertise and created, capabilities, started attracting students. And since then, I’ve been focusing on explainable, trustworthy, transparent AI. Now, why is that important? How does that lead to being responsible?
[00:06:02] Dr. Kelly Cohen: The first thing about responsible AI is, can you be accountable for your actions? Can you take responsibility for that? Can you ensure that the outcome is what you had desired, which is doing something positive and solving a problem in the best possible manner and not saying, Oops, I’m sorry, I hallucinated.
[00:06:24] Dr. Kelly Cohen: And so, that ability to also understand and quantify risks due to uncertainties in the environment, things you didn’t take care, or would usually account for when you design a system, as well as risks associated with a combination of internal malfunctions. So you want to know that under these uncertain conditions, can you still yield results which are robust enough?
[00:06:56] Dr. Kelly Cohen: So, if you are shutting down, can you give a warning to shut down? Can you ensure that the shutting down process is done in a manner in which nobody’s hurt. No harm is done. And you also resolve things in a manner. So since I’ve been at the university, I’ve got contracts in the area of responsible AI from the government, Department of Defense, Department of Homeland Security, NASA, federal agencies, NSF, NIH and I have a relatively large team, three staff and 16 graduate students. It’s one of the larger teams in my college and, I take a lot of pride in the work I do with my students, we’ve had several successes. Back in 2016, one of my students graduated, Dr. Nick Ernest. He took his PhD in garage operation, literally, with his parents.
[00:07:58] Dr. Kelly Cohen: He beat a pilot in combat simulation under contract with the Air Force Research Labs, and he could then sell his company Psibernetix to Thales for a decent amount. Now, Thales is a large company, 22 billion a year, 90,000 workers, headquartered in the Paris, France area. And why would a company like Thales, one of the top 10 aerospace and defense manufacturers in the world, come here to Cincinnati of all places?
[00:08:27] Dr. Kelly Cohen: They don’t have a footprint in Ohio. They’re not looking for business in Ohio, it’s just that what we provided was very good and so unique that you don’t have them in Silicon Valley and all those awesome, highly ranked universities on the East Coast, and that, in a funny way, gave me more confidence than what we do bring value.
[00:08:49] Dr. Kelly Cohen: So I learned here from my students to be brave. To be able to say, yeah, we’re doing something which is, if you’re looking for transparent, understandable, ethical AI, this is the place to be. And so, I’m so happy that Thales has decided to set up shop in Cincinnati. They’re five folk, all of them are my grad students.
[00:09:10] Dr. Kelly Cohen: And they are now coming over to the Digital Futures. Now, when I first met you, Helen, it was, I had Dr. Nick Ernest on stage, if you remember. You know, the best way in which I present myself is through the success of my students. Say, look, you know, this guy, he’s doing explainable AI, that is what was the result of the work he did with me.
[00:09:31] Dr. Kelly Cohen: Of course, the hard work, the innovation comes from him, but I ignited the fire under the seat of his pants, which helped propel him to this area. I’m the one that initiated or implanted the ideas in his brain about explainable fuzzy AI. So, given the success that I’ve had with him and, you know, the last six years back to back, my students, who learned fuzzy from me, won the best PhD thesis award in the North American Fuzzy Information Processing Society.
[00:10:09] Dr. Kelly Cohen: And the success of my students is what made it, made the board pick me unanimously as a president of the organization. So getting the support of my community, my professional community, looking at the success of my students. That’s what makes me tick. And I just want to do more of the same.
[00:10:33] Helen: Well, we love seeing all of the great success coming out of, the University of Cincinnati. And for those who are listening, who don’t know, I think entrepreneur.com or entrepreneur magazine actually did a study or presented a study where, they predicted more unicorns will come out of the University of Cincinnati than any other school in the United States and we definitely give you a lot of credit for that, Kelly.
[00:11:00] Helen: So you’ve been in the industry and doing research for three decades now. From the seat that you sit, can you kind of give us a snapshot of the AI industry? Like clearly it’s been around for a long time and then Open AI kind of opened a can of worms with gen-AI.
[00:11:18] Helen: But when it comes to responsible AI and the quick, you know, everything’s moving super fast these days, can you kind of give us a snapshot of how you see the industry and the gaps and the opportunities as well?
[00:11:33] Dr. Kelly Cohen: There has been, you know, I’ve been following AI now for 30 years. My career goes back 39 years. Until I embarked on AI 30 years back.
[00:11:43] Dr. Kelly Cohen: What I’ve seen is we’ve gone in AI through some winters where funding was low, the belief was low, and now we are on a huge trajectory upwards, with AI given the, as you mentioned, generative AI and Open AI coming up and then, the flurry of products, people now realize the potential.
[00:12:12] Dr. Kelly Cohen: There is a lot of potential. We can do so many things. We can, improve, augment human capability. We can be more efficient. We can be more effective. We can do things we never dreamed of doing. Right? We can solve some of the major problems that we face as humankind. But the question is the technology mature?
[00:12:33] Dr. Kelly Cohen: Is there too much of hype? And where do you separate the hype from the true potential? And understand the gaps between where we are today and what we can do in the future with AI. So, my reflection based on my ability to develop and test AI systems in a responsible manner is that there are many low hanging fruit.
[00:12:59] Dr. Kelly Cohen: Things that today, I can resolve. On the other hand, there is also a lot of hype happening and having to understand the difference is not very trivial. Having to understand the difference because of the oversale. Now, how do we see the difference? We look at results. We look at what is happening around us, the last couple of years with gen-AI, right?
[00:13:26] Dr. Kelly Cohen: And so we ask ourselves as these chatbots that the idea is to replace a human, right? Are they ready? So, the examples we’ve seen is, with Canada Air making a big, you know, mess with the refund policy they have for their customers. They’re not ready. Why? Because chatbots based on gen-AI hallucinate. Hallucinate means they provide you with BS answers.
[00:13:56] Helen: And for our listeners who might not know that story, can, you share what happened with the Air Canada chat bot and how they got into trouble with it?
[00:14:04] Dr. Kelly Cohen: Yeah, so, there was a traveler, he had a bereavement in the family. He needed to figure out a refund. So he goes to the Air Canada website, where there’s a chat bot that greets him, you know, very good in conversation.
[00:14:21] Dr. Kelly Cohen: “How can I help you? What can I do?” And so he asked for a policy. Okay. What do I need to do in order to get a refund? Because, you know, I have an unexpected, change in plans. So he got a set of instructions that he had to follow. He took a picture of it with his cell phone of those instructions that he got from the chatbot.
[00:14:43] Dr. Kelly Cohen: After he addressed the emergency with his family, he wanted a refund. And so he follows the instructions that were given to him to the T. But what happens? No refund. So he goes and asks [Air] Canada, “what happened? I did just what he told me to do.” And the response was that he was misled by the chatbot and then in some obscure website that somehow the chatbot hadn’t referred to, there is another ruling in their playbook that goes against that policy.
[00:15:18] Dr. Kelly Cohen: And as a result, that is the true policy and not what the chatbot said. “So no refund for you today.” Like in that Seinfeld episode, “no soup for you today, right?” And so this poor chap, what does he do? He goes and files a case. He takes Air Canada to court. Well, Air Canada’s chatbot doesn’t have a disclaimer saying, “hey, from time to time, I’m going to mislead you intentionally.
[00:15:42] Dr. Kelly Cohen: Otherwise, I’m going to hallucinate. Don’t listen to me.” No, that’s no such disclaimer. And so, you cannot try to save money, remove the human in the loop in responding to your customer and then mislead them, right? And so that’s an example of immature technology coupled with greed. So the problem we have with AI is that people are greedy.
[00:16:13] Dr. Kelly Cohen: That’s nothing to do with AI. That’s a, you know, age old problem that people are greedy. Whenever they feel they can make a buck, why not? Another example is what happened to these Manhattan lawyers where they tried to save on paralegals who, you know, if you watch all those legal, TV shows like I do, you have that dark room, no windows, paralegals working until midnight with under some neon lights, trying to figure out those cases with a big library full of books, you know?
[00:16:41] Dr. Kelly Cohen: And so, you have a situation where there was this Colombian airline and a bunch of these lawyers from Manhattan came in and depended on chatbot. They said they would be cost effective, no paralegals, nobody needs to check anything. Why? Chatbot would give me great answers. And then they went ahead and took that information from the gen-AI, Chat GPT, and they presented it in court.
[00:17:09] Dr. Kelly Cohen: The lawyer, sorry, the judge, Judge Castel, he decided to, well, double check them because, you know, you want to make sure that those cases presented, well, there is a precedent for that, a legal precedent, not in some imaginary world, but in the real world. And what do you find?
[00:17:30] Dr. Kelly Cohen: I think he found out that those cases were completely fictitious. Never ever existed. And so what happens? You’re lying to a judge. You’re lying to win a case. The judge kicked them out, made this famous. It appeared in many newspapers. He said, “look, it’s okay to use AI, but you are accountable for it. You, the lawyers, not Chat GPT.
[00:17:52] Dr. Kelly Cohen: I’m not going to fine them, but I’m going to fine you because in the end you decided to trust that.” And present the case based on what he got from them. Chat GPT openly says, “Hey, you know, from time to time, I can give you errors. I don’t give you a warranty on what I’m telling you.” So they’ve forewarned you, but because of the hype and because of the interaction, the interface is so very convincing.
[00:18:16] Dr. Kelly Cohen: That, “hey, this is true.” So we believe it. We want to believe it. Why? Because we’re greedy and this is a simple way out. And so that’s another example. And there’ve been many other examples of rollouts, Gemini’s… Google’s Gemini, right? Where you have folk dressed up in, weird backgrounds because they try to force the diversity aspect in a very, in a manner which is very unbecoming.
[00:18:44] Dr. Kelly Cohen: You have had cases where Professor Turley was, there was a claim that, a Washington Post article, where he joined a trip to Alaska and he, inappropriately sexually harassed one of his students, right? Now, something like that, kills your career completely. If there is a finding that tells you that you’ve been, that there’s a Washington Post article, because we believe the Washington Post, they’re the guys who, you know, did the, All The President’s Men.
[00:19:13] Dr. Kelly Cohen: You remember that film? Dustin Hoffman and Robert Redford, they’re famous for uncovering such stories. Now, if there’s such an article, it must be correct because they have a good brand. We believe that brand. But it so happens that article that was associated with the Washington Post never existed.
[00:19:33] Dr. Kelly Cohen: Professor Turley, poor chap, he never went to Alaska. He never worked in that school. And so it is easy for him to refute all those claims. Simply, there’s no such article in the Washington Post, but the damage is done. And it shows you how, you know, in the past, we worried about fake news, which was intentional. Now this is unintentionally filtering in and coming in.
[00:19:56] Dr. Kelly Cohen: So it makes it even more difficult. So these are examples in which we know that there is hype. There are ways to go about it, but people are trying to use RAG (retrieval-augmented generation), to use other approaches to cut down on the number of hallucinations, but bottom line is foundationally, in the architecture of deep learning, which is the foundation of gen-AI, there is a problem in which you keep getting hallucinations and errors.
[00:20:24] Dr. Kelly Cohen: Now you can reduce them. There are several gen-AI products that are better than others, but you don’t have even a 3 percent hallucination rate in a study that was done in November 2023. Chat GPT still has 17 million hallucinations a month.
[00:20:43] Dr. Kelly Cohen: Now, some of those could be very… and so the question is now, who’s minding the store? Europe, they have the EU Act passed in March of this year, where they’ve categorized all AI into four bins. The unacceptable risk, the high risk, the medium risk, the low risk. Unacceptable risk is where it goes against the norms of society.
[00:21:04] Dr. Kelly Cohen: It causes huge impact. And they have huge fines associated with all these categories if you don’t adhere to them. So if a US AI company wants to do business in Europe, with the EU, and they don’t adhere to this, they’d be out of business. The fines are very heavy, in the millions of Euros. And so, you can see that there is a move now.
[00:21:23] Dr. Kelly Cohen: That act was passed by 540 to what, 50. So you had a ratio of about 10 to 1 in passing that act. People want to do well for society. So what is the act basically saying? Use AI, but act responsibly. And if you don’t act responsibly, it’s like, if you run a red light today, even though you have a license to drive a car, you’re still not allowed to run a red light.
[00:21:48] Dr. Kelly Cohen: So the fact like, if I do an equivalence to that, I have a license to operate an AI system, but I’m not allowed to cause harm. Running a red light is causing harm. I might hit somebody. I might cause an accident. I might cause danger to other folk and also to the people in my car who opted out to drive with me as a driver.
[00:22:08] Dr. Kelly Cohen: And so, as we have regulations on the norms and the behaviors of our citizens, okay? If you steal something that belongs to somebody else, you have to, well, answer for that. You’ve got to be accountable for that today. You’re a person. But if you’re an AI that steals somebody’s privacy, there’s no regulation for that.
[00:22:28] Helen: You know, one, I guess criticism of that, a lot coming from our friends on the coast, is that responsible AI and even more so regulation stifles innovation and lets others have a competitive advantage. How would you respond to people who take that stance?
[00:22:48] Dr. Kelly Cohen: The West Coast feels that regulation comes in the way of success.
[00:22:53] Dr. Kelly Cohen: Give you a few examples with self driving cars. That’s considered to be high risk according to the Europeans. Why? Because it can cause harm to people. So, without certifying the quality of their AI, self driving taxis were introduced without regulation. Why? Because regulation comes in the way of advancement into the city of San Francisco.
[00:23:16] Dr. Kelly Cohen: No regulations, no certification. It’s GM. GM is a trustworthy company. They got a lot of money. They can help us. Let’s give them permission. Well, if you do a test with rats at the University of Cincinnati, I have to go through an IRB. Why? Because even if I end up killing the rats, I have to do it following a certain protocol.
[00:23:36] Dr. Kelly Cohen: But these folk who are anti regulation, they say, “okay, let’s unleash our AI on our unsuspecting citizens without any process.” And you know, it ended up dragging a lady 20 feet under the car. And there was nothing in between to stop that from happening. And at the end, folks asked GM what happened?
[00:24:00] Dr. Kelly Cohen: “Oh, sorry. Flaw in my AI.” Okay? Now, according to Missy Cummings, who worked for a while on Department Of Transportation’s safety to do with self driving cars, one of the big problems GM’s Cruise taxis, Waymo of Google, and also Tesla’s self driving cars have is that the foundation they’re based on, deep learning, is brittle, and it’s impossible to cover each and every scenario.
[00:24:36] Dr. Kelly Cohen: And there’s so many new scenarios that come. We as people learn how to adapt. We have certain principles. Do not hurt people. Do not cause harm. So they’ve got all these hypothetical situations they love to play with. “Oh, on the left you’ve got a bunch of kids, on the right you have your grandmother, whom would you hurt?”
[00:24:55] Dr. Kelly Cohen: Crap. In this case, there was no other scenario. There was just, do I have to drag an old lady or not? And they decided, yes, we should drag the old lady. Now, where does that make sense? So, okay, that is supposed to advance you? They figured out that the technology is not mature. The license of GM to operate Cruise taxis was revoked.
[00:25:19] Dr. Kelly Cohen: And then you go back to the drawing board. They lost close to a billion dollars in that acquisition of Cruise taxis. They fired the entire leadership of Cruise because they found they were lying to GM. There was no proper auditing or quality control on the product. And the question is no regulations, but is that exactly advancing us to be able to use our citizens in a way in which they’re worse than rats?
[00:25:49] Dr. Kelly Cohen: And I’m glad that Ohio is not as advanced as Silicon Valley and that we haven’t unleashed such harmful AI on our citizens here. And I hope we don’t do until we come up with a testing process that can certify the safety of such systems. So, that is the way I have to answer. I know the Europeans have got it right.
[00:26:17] Dr. Kelly Cohen: And I feel that we should learn from mistakes made when we come up with our overindulgence and greed and with large companies, just because under the guise of advancement, they try to sell us immature technology. You have to be responsible to ensure, and there should be somebody certifying you, some NIST policy.
[00:26:48] Dr. Kelly Cohen: So today, the advancement in AI is so very vast and fast that the policy makers are not keeping up. And those that permit this to happen in our society, they’re ill advised. So it is groups such as ours and events such as what we did yesterday, where we talk openly and we advise our policymakers that there is a different approach.
[00:27:18] Dr. Kelly Cohen: It is important to be cautious. So there are some applications that are not harmful to, you know, in the use of AI. So you have to assess the risks. You have to be able to differentiate between what is mature and low risk. That’s good. Let’s advance that. There shouldn’t be a blanket on all of AI, but we should assess every AI to categorize them to assess the risks.
[00:27:43] Dr. Kelly Cohen: And that’s what I’m doing within the CAIC. I lead a team to do risk evaluation of AI. And we’re coming up with a series of tests. But the whole idea is, does this cause harm? In the area of health, you don’t want a diagnostic to have a false positive, which is a lot of anxiety. You go through a great deal of anxiety because you’re told that you have a disease which you don’t have.
[00:28:11] Dr. Kelly Cohen: Worse still is a false negative. A false negative is a death sentence. That means, you know, for example, you go for a mammogram, you have cancer, but you get the all clear signal and because you don’t initiate your treatment early. Now, don’t you think we need to test this? Well, I hope [the] FDA does [some] kind of testing, to be able to figure out is this good enough to be utilized in the market and don’t listen to Silicon Valley and say, Oh, FDA is coming in the way of using AI for health, right?
[00:28:53] Dr. Kelly Cohen: Like everything else, we need to ensure that the interests of our public [is] the most important decision making factor when we decide on regulations. So I believe we can advance AI, and I’ve been working, trying to advance AI all these three decades, without having to look at our fellow citizens and say they are collateral damage because that’s basically what’s happening.
[00:29:27] Dr. Kelly Cohen: Citizens in certain cities where the policy makers don’t know better, a collateral damage to the advancement of AI, of greedy companies trying to market their immature technology.
[00:29:41] Helen: You mentioned the CAIC, for our listeners and viewers, that’s the Cincinnati AI Catalyst. Kelly was actually just named a board member.
[00:29:51] Helen: I’m also a member. Yesterday’s event that we did was actually all volunteer based, a workshop for our local governments here for the eight counties in the Cincinnati region to make sure that no one gets left behind and especially our local governments and Kelly, your presentation was on the eight principles of responsible AI.
[00:30:11] Helen: So I’d love for you to kind of share, kind of at a high level, what is responsible AI? Because we hear those words a lot. Responsible, ethical AI; how do you define responsible AI?
[00:30:22] Dr. Kelly Cohen: So, for responsible AI, I go back to the golden rule. Don’t do anything unto others that you wouldn’t want them to do to you.
[00:30:28] Dr. Kelly Cohen: Right? We’ve seen this principle play out in the major religions. We see this principle as one of the cornerstones of ethics and based on this main foundation, now we can translate that into AI. So you want to make sure that there are no biases. We are fair. We allow for the minorities, the underrepresented populations, you want to give them an equitable opportunity here, okay?
[00:31:11] Dr. Kelly Cohen: That is the norm of society, do what is right. Today, if you look at it, cars are designed around men. Many of the treatments, apps to do with heart attacks, focus on men’s symptoms. Okay. Is that fair? 52, 53 percent of the population are women. Why do you want to leave them behind? Right? Also AI has a tendency of learning from history.
[00:31:42] Dr. Kelly Cohen: Is our history perfect? No, it isn’t. We’ve had a bad history when it comes to discrimination. So if we’re going to learn from and use the data from that period, obviously that would contaminate our decision making into the future. So, fairness. That’s an important principle. Then we are looking at, explainability, accountability, ethical behavior.
[00:32:13] Dr. Kelly Cohen: It’s, you know, can I explain? Can I see what I’m doing? Can I take responsibility for that? Because when we have a transparent system, it enables me to debug it. It enables me to pinpoint where I have gone wrong. And so having to have that demand, and I have been showing over the past 30 years that, yes, we can develop systems that have remarkable capabilities while being explainable, while being transparent, while being ethical. It is just that it may take some more time, more effort to certify, but, you know, those people who are greedy want quick answers, want other products. But there is an alternative and that alternative doesn’t come in the way of our success, it is just having to ensure that safety comes first, that doing the right thing comes first.
[00:33:12] Dr. Kelly Cohen: Okay? Sustainability; today, AI systems need a lot of energy, need a lot of power, okay? To train an AI model. And it is expected that by 2030, about 3 to 4 percent of all the energy that we have, electrical power, would be focused on training AI models. That’s a huge price to pay. What are we getting in return as a society, okay?
[00:33:41] Dr. Kelly Cohen: So can we create AI solutions that need less energy, that are more sustainable, that have a smaller carbon footprint? The answer is yes. There are alternatives, but if people are focused on shortcuts, on how do I become an AI genius and come up with a new startup within 24 hours, then, well, you’re paying a big price for that on sustainability, okay?
[00:34:12] Dr. Kelly Cohen: The foundation of those principles is: do no harm. And if you are able to understand and internalize that, you look around and you see that there are ways to go about doing the right thing. And that is what the focus needs to be on. And it is said in the Bible, follow the golden rule.
[00:34:44] Dr. Kelly Cohen: Everything else is commentary. I think that’s true here as well. So that’s what we need to do. [This is] the foundation’s principle. I don’t want my grandmother to be run over by a car. So let me not run over your grandmother. I don’t want my 12 year old girl to feature in a porn film without permission just because today there is a possibility of taking her pictures from social media and putting them in a very harmful manner that could lead to her wanting to kill herself.
[00:35:15] Dr. Kelly Cohen: Is that fair? Is that good? Utilizing AI to manipulate the behaviors in society. Is that okay? Privacy! Today, you know, just because you’re on social media, just because you want to do something, you know, there’s a case where a woman’s face from Facebook was used to promote sexual enhancement products, and she could not sue anybody.
[00:35:48] Dr. Kelly Cohen: That was it. Just because you put your face on social media, does that mean anybody has the right to abuse that information, steal who you are, your identity? And so there are these concerns about what harms can be done to the person and you want to make sure that you test your system to be doing what is good for the society.
[00:36:13] Dr. Kelly Cohen: And then who needs to worry about what is good for the society. We as voters, we as the public in a democracy get to influence who leads us. And we’ve got to look at their policy. And is that policy working towards the betterment of society? And that’s what we need to do. So we have to go out to the rooftops and shout at the top of our voices:
[00:36:39] Dr. Kelly Cohen: “Guys, pay attention. This is important.” If we believe in a better world, then we need to be the vehicles of change. As an academic who teaches, develops AI, a responsible ethical AI, I see it as my duty to be able to do that. And I’m so pleased that when I have that competition of what is responsible AI, you picked that up.
[00:37:05] Dr. Kelly Cohen: And you helped us choose the best video clip. You remember that? That was good. And then you shared that with your community. So when I teach my classes, I don’t only teach the how to write a software program in Python, how to do this, how to do that. I also teach them why is what we’re doing important.
[00:37:27] Dr. Kelly Cohen: How do I see the very, very big picture? How is it to use AI responsibly? So I dedicate four lessons and one assignment, which is graded, which is creating that YouTube video on what is responsible AI, as my first assignment, because I want the students, before we get down to the details of the how, it is important to say, why am I doing AI? And what will it lead to in the world? Right now, some people across the coasts do not have the same approach to teaching AI.
[00:37:55] Helen: Or developing their AI. Like, why are we developing AI? I think the “why” should be at the beginning of almost all of these conversations.
[00:38:06] Dr. Kelly Cohen: And there are alternatives. So I survey the alternatives, survey the good, the bad and the ugly. Right?
[00:38:12] Dr. Kelly Cohen: And there is a lot of good to be had. So I’m not anti-AI at all. That’s my career, is developing products, okay? And I am contributing as an entrepreneur as well to a few solutions. You mentioned Genexia, BrandRank.AI. Genexia is about using mammograms, but not just for cancer detection, but also to be able to use that same mammogram that captures breast arterial calcification and then correlate that with what one would find in an angiogram, that is cardiovascular risk assessment.
[00:38:49] Dr. Kelly Cohen: And so by being able to make that correlation, you get two tests for the price of one. You go, take the trouble, get a mammogram, go through that very uncomfortable test. And then you get assessment for cancer, as well as an indicator whether your risk for cardiovascular health is low, medium or high. And if your risk is high, then you can do something about it.
[00:39:16] Dr. Kelly Cohen: There are about 500,000 women who die every year in the United States from cardiovascular disease. We believe, together with, you know, cardiovascular health specialist in Cleveland, Professor that nobody needs to die from a heart attack. You can reduce that number significantly if there is warning provided.
[00:39:44] Dr. Kelly Cohen: That is diet, exercise, low stress, and a few medications. And then you can even reverse the problems one has. It’s unlike cancer, which is far more difficult. We don’t have proper cures for that, but heart disease can be regressed, stopped, avoided. And so what we want to do with Genexia is to provide that possibility to them.
[00:40:11] Dr. Kelly Cohen: And, I’m a heart attack survivor from 2018. UC was formed in 1819. And so the eighteens and the nineteenths were all mixed up with me. So while we were celebrating our, you know, 200th birthday, I was just recovering from a heart attack. And there my wife detected something wrong. I had a hundred percent blockage in my widowmaker.
[00:40:33] Dr. Kelly Cohen: She called 911 against my better judgment. As I was thinking, she was just being hysterical, but she was right. There was something wrong and I was lucky. There’s a genetic predisposition that I have that made me very sensitive to something I never got tested [for], but the chance that I got allowed me to say, hey, you know, within my capability as AI, if I can bring about a difference, if I can save people the way I was saved, the chances of, by the way, somebody surviving a widowmaker heart attack outside of the hospital, which what happened to me is about 11%.
[00:41:09] Dr. Kelly Cohen: So I’m really, really lucky. And not everybody has this good fortune. And so I want to share that with society. Part of what I’m doing being responsible, is a result of this awakening where I see things from a different light. I see things as wanting to be positive. So it’s not just the talk that I do, it’s also having to make a difference.
[00:41:30] Dr. Kelly Cohen: In BrandRank.AI, which is led by Pete Blackshaw, you know, they started off, they kick started the whole journey with you, Helen and Kendra, you know, at the Cincy AI for Humans in March. So I was their first other hire as chief scientific advisor. And what I do is I use my responsible AI together with game theory to hold brands accountable and to show them whether they’re living up to their promise.
[00:42:04] Dr. Kelly Cohen: So people can come up with claims. Oh, we are sustainable. We have a low carbon footprint. We’re doing all these awesome things. But then do they really give you good customer support or they’re like Canada Air that cuts corners and lies to the customer and misleads them, right? That’s not good for your brand.
[00:42:23] Dr. Kelly Cohen: How often do people really take and internalize all the complaints they get? So what we’re trying to do is bring about something positive to society by looking at AI to provide us with information on making brands better, more responsible, more ethical. And having to help them reach the goals that they’ve set for themselves.
[00:42:49] Helen: We’ve had Pete on the show to talk about his optimization for brands in the chat bot. So anyone who’s listening, who missed that episode, it’s a really good one. And we’re so lucky to have you in Cincinnati and these startups in our burgeoning AI ecosystem here. One thing I do want to talk a little bit about what’s happening in Cincinnati, but before we do that, you mentioned that you work on testing and verifying AI systems so that the outcomes are responsible and minimizing hallucinations.
[00:43:22] Helen: Our audience, at least I’m not so technical. Can you kind of break down what’s needed for robust testing and like what you’re working on to, to verify that these systems are, are responsible and safe? Like how many, I don’t know, dimensions or tests do you have to run to ensure that AI is safe? Just kind of give us a glimpse into your research and what you’re working on in that regard.
[00:43:52] Dr. Kelly Cohen: So most of my work is done on safety critical and time critical systems where intervention of AI can make a difference and save lives. But you have to test and validate that. Thus far, we don’t trust generative AI solutions there, but we use three basic approaches. One is based on formal methods during the process of design, where we look into potential violations, violations that can cause safety issues.
[00:44:23] Dr. Kelly Cohen: Okay? And so we make sure of the design, the fundamental software, because today, many systems have got hundreds of thousands or millions of lines of code, and sometimes you can have a conflict there. And so you want to make sure that in all of the use cases, that where you’re going to apply your system to, that your logic is secure.
[00:44:47] Dr. Kelly Cohen: So formal methods is an approach that helps us do that. There was the funding that I received from Air Force Research Labs, one of my PhD students, who’s also in Thales right now, Dr. Timane. Another approach is based on runtime assurance. Runtime assurance looks at safety the way a turtle would. So imagine a turtle moving as fast as a turtle can across to the seashore, and so that’s its performance mode, high performance from a turtle’s point of view, okay.
[00:45:21] Dr. Kelly Cohen: It’s dashing. And then all of a sudden, what happens to a turtle if it encounters danger? Or an intruder? It gets into its shell, right? And so what takes precedence for a turtle? Safety. Safety first. It gets into the shell. So it postpones its high performance dash for a while, right? So runtime assurance is similar.
[00:45:44] Dr. Kelly Cohen: We have two modes working in parallel. One is always on the lookout for a safety violation. And the other is its high performance mode. And they work in parallel. Now what happens in reality? In reality, I could be faced with something like technology [that] was not developed for environmental conditions, new wind conditions, and also malfunctions.
[00:46:08] Dr. Kelly Cohen: Internal malfunctions or the combinations of malfunctions I may not have ever imagined, okay? So you have the malfunctions and you have environmental, external stimuli that push you off your training mode. And so that could bring you towards some safety violation. “Oh, I’m going to collide with somebody.
[00:46:28] Dr. Kelly Cohen: Oh, there is something happening.” So you want, at that point, to get into a shell. That is, be careful. And then we’ve looked into hybrids between the safety mode and the high performance mode to get you back to where you want to be without having to violate any safety rules. So this is the second policy. The third one is something I’ve originally started working on called stochastic robustness.
[00:46:56] Dr. Kelly Cohen: It’s based on Monte Carlo simulations, so we come up with a list of all possible uncertainties that can have an impact, internal and external, and we assume some sort of a probability distribution function. So take for example, weather, climate, winds. So we know that winds can fluctuate between zero wind to a maximum amount.
[00:47:20] Dr. Kelly Cohen: And what is the likelihood that those extreme cases would happen? And can your system adhere to that extreme case, right? And why is that important to look into extreme cases? You know, that nuclear facility in Japan that faced an extreme typhoon, they call it in Japan, right? Like a hurricane. And, well, there were safety issues, nuclear radiation, they couldn’t address it.
[00:47:49] Dr. Kelly Cohen: It was a national disaster. Many people died. And so that’s an extreme case, but it is not unlikely to ever happen. So what we do is we list all the possible situations and permutations that can happen. And then we run each of them one at a time. Or the one of a time could include interactions between malfunctions.
[00:48:14] Dr. Kelly Cohen: We could run a few million of these cases based on a digital twin, based on a simulation, okay? And so the three approaches, formal methods, the turtle safety violation called runtime assurance, and the Monte Carlo simulations, which are the foundation of stochastic robustness, provide us with a quantification of the performance, safety, and cost associated with your system, given all the uncertainties that may happen, even if the uncertainty is not very likely.
[00:48:44] Dr. Kelly Cohen: Now, we don’t relate to uncertainties like what happens if the sun comes down on me? Well, the whole world will be destroyed, but the likeliness of that is not something we want to consider. Okay? What happens if a big meteor hits the earth and we all die like the dinosaurs did? We’re not talking about those extreme cases, but we’re looking at the history of winds, history of weather, history of malfunctions to look into permutations and combinations and that in itself is a handful, right?
[00:49:14] Dr. Kelly Cohen: And so utilizing these approaches, we can quantify risks. And then if the risks are under a certain threshold, okay, I can use that to certify my approach, certify the system. Or if the risk is above that threshold, I have to go back to the drawing board and say I perhaps need to specify a better sensor system, specify better actuators, come up with a new design philosophy, better algorithms, because what I have doesn’t work well.
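As a rough illustration of the stochastic robustness workflow he describes, the sketch below samples hypothetical uncertainties, runs each sample through a stand-in for the digital twin, and compares the estimated violation rate to an acceptance threshold. The distributions, toy simulator, and threshold are all invented for illustration.

```python
# Stochastic robustness, sketched: sample many combinations of uncertainties,
# simulate each one, and estimate the probability of a safety violation.
import random

def simulate_mission(wind, sensor_bias, actuator_ok):
    # Placeholder for a digital-twin run: returns True if the run stays safe.
    margin = 5.0 - 0.3 * wind - 2.0 * abs(sensor_bias) - (0.0 if actuator_ok else 3.0)
    return margin > 0.0

def estimate_risk(num_runs=100_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(num_runs):
        wind = rng.uniform(0.0, 15.0)       # winds between calm and an assumed maximum
        sensor_bias = rng.gauss(0.0, 0.5)   # assumed sensor-error distribution
        actuator_ok = rng.random() > 0.01   # rare actuator malfunction
        if not simulate_mission(wind, sensor_bias, actuator_ok):
            failures += 1
    return failures / num_runs

risk = estimate_risk()
THRESHOLD = 1e-3  # illustrative acceptable probability of a safety violation
print(f"estimated risk of violation: {risk:.4%}")
print("certifiable" if risk < THRESHOLD else "back to the drawing board")
```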
[00:49:45] Dr. Kelly Cohen: Now, what is quite unique about these three techniques is that one group of researchers applies all three. So think of it as a triple-layered cake that we provide our customers. And we tell them, look, you know, each of these approaches has its strengths, and each has areas it doesn’t cover as well.
[00:50:06] Dr. Kelly Cohen: So if you want mathematical rigor, it’s the formal methods. If you want to make sure that you’ve addressed uncertainties and quantified performance and safety under those uncertainties, then it’s the Monte Carlo simulations. And if, after you finish the design with those first two methods, a new scenario comes up in real time where doctrine doesn’t matter, where everything you’ve prepared for and learned and trained for doesn’t matter because it’s completely new,
[00:50:34] Dr. Kelly Cohen: then you have the runtime assurance, where you’re focusing on violations of safety. So utilizing all three methods brings you down to an acceptable threshold of risk. And it is that threshold of risk that allows you to certify your safety-critical systems. Now, we’ve been doing this for spacecraft and for aircraft.
[00:50:57] Dr. Kelly Cohen: In the area of self-driving cars, it’s far more complex. It’s funny that it’s simpler in space, or simpler in the three-dimensional world of aircraft, but that’s because there’s more clutter, more things happening on the ground, which we as humans adapt to quite well. We’ve progressed to a point where, once we understand the situation, if we are cautious drivers, chances are that the risks are under a certain threshold. And so that’s what we’re trying to do.
[00:51:26] Helen: It sounds like Google and all these big companies should reach out to you before releasing Gemini, so you can certify their AI products. You mentioned, well, it’s come up in conversation, the University of Cincinnati, and you’re in the UC Digital Futures building, which is where we have our monthly Cincy AI for Humans meetup.
[00:51:55] Helen: And it’s part of the hundred-million-dollar investment into the Cincinnati Innovation District. Can you tell us about UC and what our listeners and viewers might not realize in terms of what’s happening there, on the innovation and AI and responsible AI front? Because I think, even here in Cincinnati, a lot of people don’t realize what’s happening right in our backyard.
[00:52:23] Helen: So yeah, I’ll hand over the mic for you to share what’s happening at the University of Cincinnati.
[00:52:30] Dr. Kelly Cohen: So I’ve been in Digital Futures, which is this new complex developed as part of the Cincinnati Innovation District, one of the three Innovation Districts; the others are in Cleveland and Columbus.
[00:52:44] Dr. Kelly Cohen: And in the Digital Futures complex, I’m one of about 20 labs that are operating. We had to compete across the university to get access and space in the building, so I’m part of the first cohort that competed and was given space. I left all of my real estate. I had been playing Monopoly for about 15 years before that on main campus: getting a lab here, getting an office there, getting space there.
[00:53:08] Dr. Kelly Cohen: I gave all of that up to come here to Digital Futures, to be able to focus and be co-located with my students. The other big advantage of Digital Futures is that we have folks here from the College of Medicine, from DAAP, from Arts and Sciences, and being co-located nurtures collaboration.
[00:53:28] Dr. Kelly Cohen: So I have new collaborations with teams from DAAP, for example, one of the folks you’ve interviewed, Alejandro. Right. He’s in the Digital Futures transportation hub. He’s from Design, the College of Design, another college. I would never have worked with him otherwise. But right now, the two of us, I’m the PI, he’s my co-PI, and we won a NASA Phase II contract with a company in Philadelphia on AI and the safety of operations, to assist NASA with their work, right?
[00:54:00] Dr. Kelly Cohen: Now, this whole combination would never have come about. In the past, you know, the faculty, we had a tendency to live in these ivory towers, and never the twain shall meet, right? There’s no bridge between the towers, and as we progress in our careers, these towers get taller and taller, and all we want to do is reach to the clouds, but we don’t want to see and build these bridges between the groups.
[00:54:22] Dr. Kelly Cohen: So the intention of Digital Futures is to reach out. And recently I’ve worked with Sam Anand, who’s also on the ground floor; when you enter, you may notice his additive manufacturing lab. We’re working with them on digital twins for worker safety. We just won $1.3 million.
[00:54:42] Dr. Kelly Cohen: Across from where I am, Professor Umar has a lab, and together we were one of just two universities that won $11.5 million from the Space Force, looking at robotics and AI in a project called ICESat; that’s in-space maintenance and assembly of spacecraft. I’ve been working with Dr. Manish Kumar, and with Richard Harknett and his cybersecurity group, on trying to use AI to make airports more cyber secure.
[00:55:17] Dr. Kelly Cohen: So there is this opportunity to reach out, and that’s where there is this huge synergy. What we’re trying to do is get out of our comfort zones and utilize our capabilities. So I’ve been working with the College of Medicine on bipolar disorder treatment, on how to utilize information in electronic health records in order for us to suggest psychiatric care for people coming into the emergency room with post-traumatic stress disorder.
[00:55:49] Dr. Kelly Cohen: I’m working with companies on Alzheimer’s and the onset of dementia. So there is this wide range of things that I originally was not supposed to do when I was hired into the Department of Aerospace Engineering. But right now I work on a wide range of things, and that’s what makes it exciting for me, my students, my capability, and the industry around me.
[00:56:12] Dr. Kelly Cohen: So I work with several industries today, and my footprint has gotten much larger being here, because it allows me to meet up with fantastic people like yourself, Helen, and, you know, your co-host Kendra, and to work with Cincy AI for Humans. And there are some success stories of budding relationships that never would have happened had it not been for the presence that you have in our building.
[00:56:38] Dr. Kelly Cohen: And so reaching out to you has brought these new opportunities that never would have come about had I stayed on main campus. Coming out here allows me to serve the community better, and allows me to improve the learning experience and the opportunities I provide my students. And altogether, the University of Cincinnati does better.
[00:57:02] Dr. Kelly Cohen: So this has been the most productive year in my 17 years at UC, if I look at the metrics: number of publications, number of submissions for awards, and also the number of awards I have won. And a lot of that is because of my ability to come out here. And so I thank the governor for investing in the Cincinnati Innovation District.
[00:57:28] Dr. Kelly Cohen: I thank the vice president for research for being very supportive, and also for helping by paying 75 percent of the salary for two of my staff, and for giving us all these opportunities to have a more fulfilling career and to do the best we can.
[00:57:46] Helen: I love that. And, you know, we have Cincy AI for Humans to connect and bring people together, and hearing stories where relationships, business relationships, come out of that just makes me so happy. I know we’re getting short on time, and I know we could probably go on and on for hours, but I do have kind of a random question.
[00:58:08] Helen: I’m always fascinated with Einstein. And one of the things I read or came across is that he thought a lot about the question of what it would be like to ride a light wave, and just pondering that question got him to arrive at the theory of relativity. A mutual friend of ours, John Salisbury, who’s also an amazing AI leader here in the Cincinnati region, shared that when you were working on your aerospace engineering research, you thought of yourself as a particle.
[00:58:34] Helen: And it made me think of the Einstein quote. And I thought that was just so fascinating. So I was wondering if you could just share a little insight into your mind and how you, I guess, thought about that, and just in general, how you approached your research questions.
[00:58:52] Dr. Kelly Cohen: This was during the time I was working at the Air Force Academy, in the Department of Aeronautics, and we were looking at how to tame vortices. Vortices are a phenomenon in nature; they’re the result of an instability. You have vortices on Jupiter: you can see that big spot, it’s a vortex. It’s not something solid, it’s winds moving around.
[00:59:14] Dr. Kelly Cohen: A tornado is a vortex, so we have these vortices. Birds create vortices; flapping birds, insects, they create vortices and then manipulate them. And that’s how they get far better performance than we do with our aeronautics. So the project was all about controlling vortices. How do you control them?
[00:59:38] Dr. Kelly Cohen: There is information that we need to collect from the vortices, and then we introduce a secondary disturbance, with the right frequency and the right amount of amplitude, into the flow to interact with the vortex in order to control it. So the question is, you have here a physics phenomenon and information theory.
[01:00:00] Dr. Kelly Cohen: The information question is: what information do I need to collect from the vortices in order to feed it back? There was a vast amount of work done on very complex models and theories that come from optimal control. They used AI and neural nets. And for me, it was: if I were Mother Nature, what would I do, right?
[01:00:18] Dr. Kelly Cohen: And then what comes to mind? The simplest answer is the best. So I made the premise that these vortices were coupled to their frequency, and I had a good target: the least possible information is the frequency of the fundamental cause of the vortex. And I could use that frequency to feed back a disturbance with the right shape.
[01:00:44] Dr. Kelly Cohen: So, using that principle of finding the simplest possible information, I could solve a very complex problem using a very, very simple approach, because it was all about asking: as Mother Nature, how would I solve the problem? And so it was about the information propagating downstream and upstream: how were those particles talking to one another, right?
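To give a flavor of that “least possible information” feedback idea, here is a small sketch that estimates the dominant frequency of a synthetic sensor trace and builds a forcing signal locked to it. The sampling rate, signal, and amplitude are invented for illustration; this is not the actual flow-control scheme from that work.

```python
# Estimate the fundamental frequency of a fluctuating signal and feed back a
# small disturbance at that frequency. Everything here is a toy example.
import numpy as np

fs = 1000.0                                  # assumed sampling rate in Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
sensor = np.sin(2 * np.pi * 37.0 * t) + 0.3 * np.random.randn(t.size)  # fake trace

# The only information extracted: the dominant frequency of the fluctuation.
spectrum = np.abs(np.fft.rfft(sensor))
freqs = np.fft.rfftfreq(sensor.size, d=1.0 / fs)
fundamental = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Feed back a small disturbance locked to that frequency.
forcing = 0.1 * np.sin(2 * np.pi * fundamental * t)
print(f"estimated fundamental frequency: {fundamental:.1f} Hz")
```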
[01:01:10] Dr. Kelly Cohen: And then I made an assumption. Now, you cannot come up with a mathematical approach that starts off with Newton’s laws and then arrives at this solution. You have to speculate, by putting on a hat called “Mother Nature,” as to what the solution would look like, take that approach, apply it, and then see what happens.
[01:01:35] Dr. Kelly Cohen: And so it’s having to use insights. It’s having to use heuristics, and also coming up with interesting solutions. Now, another aspect of that work was: where do I place my sensors in this flow? How do I come up with an optimal placement? And I used a heuristic. You know, there’s a game where you have kids bobbing up and down on a blanket.
[01:02:02] Dr. Kelly Cohen: And you see that behavior, that three-dimensional behavior. So I looked at that and I said, where do I want to place my sensors? On the tallest and the shortest kids, because that would give me a good observation of what’s happening. So I used that approach of bobbing kids to figure out where to place my sensors.
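A minimal sketch of that “bobbing kids” heuristic, assuming we have one snapshot of an oscillating field over candidate locations: place sensors where the excursions are most extreme. The field below is a made-up stand-in for real flow data.

```python
# Pick sensor locations at the largest upward and downward excursions of a
# fluctuating field (the "tallest" and "shortest" kids). Toy data only.
import numpy as np

x = np.linspace(0.0, 1.0, 200)               # candidate sensor locations
field = np.sin(3 * np.pi * x) * np.exp(-x)   # one snapshot of the fluctuating field

tallest = x[np.argmax(field)]    # largest upward excursion ("tallest kid")
shortest = x[np.argmin(field)]   # largest downward excursion ("shortest kid")
print(f"place sensors near x = {tallest:.2f} and x = {shortest:.2f}")
```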
[01:02:16] Dr. Kelly Cohen: I was challenged by a professor at MIT; she’d come up with a computational approach to see if she could do better than my little bobbing kids. And after a very thorough analysis of the same problem I had worked on, she reached essentially the same solutions, but she used all these resources, computational and everything.
[01:02:42] Dr. Kelly Cohen: And this guy does it with heuristics, but the answer is very much the same, right? Because you ask yourself: Mother Nature is the mother of all optimizers, so how would she behave, right? And so these are two little anecdotes, both published, that I’m very proud of. They highlight my intuition, my instinct, about the way nature would play itself out. And that approach to sensor placement has since been used underwater and in a variety of other applications that I couldn’t even have dreamed of. But, you know, the premise is very similar to my gut feeling.
[01:03:25] Helen: I love that. Thank you for sharing. And I think it also just highlights, you know, how important intuition is to the human experience and how far AI has to go; who knows on the AGI front, but it will probably never match that intuition you just described.
[01:03:45] Helen: So thank you for sharing that. Well, if you want our viewers and listeners to remember one thing from today’s conversation or about the work that you do, what’s that one thing that you’d like them to walk away with, Kelly?
[01:03:59] Dr. Kelly Cohen: The one thing is, there’s a lot of AI around. AI is the future. We need to strive to make the future a better world.
[01:04:08] Dr. Kelly Cohen: And in order to use AI in the best possible way, we need to be responsible about it. And we can do both at the same time. The price to pay for rushing ahead and not being careful, that price is a bit too high for us as a society. We as academics and the community that has embraced responsible AI have to continue doing what we’ve been doing in the past year.
[01:04:33] Dr. Kelly Cohen: We’ve got to continue letting people know that there is another way. There is a way in which we can build better products, and we can do that responsibly. Isn’t that what a doctor’s oath is? “Hey, I’m going to use all the techniques in the world, but my whole goal is to improve lives, to improve the quality of care I’m able to provide.”
[01:04:55] Dr. Kelly Cohen: So there’s that Hippocratic oath. We need to talk about what oath we would take as AI practitioners. What oath? And the oath should be to make the world a better, more ethical, fairer place where we can all live.
[01:05:13] Helen: That is such a great note to end the show on. Thank you so much for all of your time today.
[01:05:19] Helen: And just in general, for everything that you do for the Cincinnati community and for responsible AI, not just only for this community, but for the nation and the world too, you’re doing incredibly important and timely work. And, it is, greatly appreciated. And you always bring so much enthusiasm and are always so generous with your time. So thank you so much, Kelly.
[01:05:43] Dr. Kelly Cohen: Thank you, Helen.
[01:05:46] Helen: Thank you for spending some time with us today. We’re just getting started and would love your support. Subscribe to Creativity Squared on your preferred podcast platform and leave a review. It really helps. And I’d love to hear your feedback. What topics are you thinking about and want to dive into more?
[01:05:58] Helen: I invite you to visit CreativitySquared.com to let me know. And while you’re there, be sure to sign up for our free weekly newsletter so you can easily stay on top of all the latest news at the intersection of AI and creativity. Because it’s so important to support artists, 10 percent of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized nonprofit that supports over 100 arts organizations.
[01:06:23] Helen: Become a premium newsletter subscriber or leave a tip on the website to support this project and ArtsWave. And premium newsletter subscribers will receive NFTs of episode cover art and more extras to say thank you for helping bring my dream to life. And a big, big thank you to everyone who’s offered their time, energy, and encouragement and support so far.
[01:06:44] Helen: I really appreciate it from the bottom of my heart. This show is produced and made possible by the team at Play Audio Agency. Until next week, keep creating.