Who's In This Podcast
Helen Todd is co-founder and CEO of Sociality Squared and the human behind Creativity Squared.
Rob Richardson is the CEO and Founder of Disrupt Art and the Founder of Disruption Now Media, a platform that includes MidwestCon, where Rob heads programming, and the Disruption Now Podcast, which he hosts.


Ep25. Don’t Trust A.I. — Verify: Explore Explainable A.I., Policy Innovation, and A.I. for Good with MidwestCon & Disrupt Art Founder Rob Richardson

How do we build trust in a technology that nobody fully understands? On the latest episode of Creativity Squared, our multi-hyphenate guest discusses the need for transparency around A.I. from his perspectives as an attorney, academic leader, political candidate, innovation advocate, and entrepreneur. 

“I care deeply about technology and its implementation, making sure that it’s done in an inclusive way. Because what we build, and why we’re building it has never been more important. I believe technology is actually not neutral. It depends on who’s building it and the purpose behind the people and institutions that are building things. That’s why I think the greatest thing that people can do who want to create a better world is to be part of building in this new ecosystem of opportunity.”

Rob Richardson


Disrupt Art helps brands enhance loyalty programs and events, transforming them into dynamic platforms that elevate brand equity and unlock invaluable customer insights. Brands unlock the ability to gamify interactions using digital collectibles and A.I., ensuring every customer’s experience is not just memorable but meaningful. Disrupt Art envisions a world where customers aren’t just consumers but co-creators and stakeholders in the process.

Rob is head of programming for MidwestCon, a one-of-a-kind conference and cultural experience where policy meets innovation, creators ignite change, and tech fuels social impact. He also hosts the Disruption Now Podcast where he has conversations with disruptors focused on impact.

Rob was the youngest Trustee ever appointed Chairman of the Board at the University of Cincinnati. During his chairmanship, he helped lead the University’s innovation agenda, establishing the 1819 Innovation Hub, where industry and talent collaborate to spark groundbreaking ideas. He created a leadership development and academic preparedness program for high school students. He also championed reforms to the University’s police policies.

Beyond business and academia, Rob’s also been involved in politics. He ran for public office as the Democratic nominee for Ohio treasurer in 2018. He received over 2 million votes running on the platform of inclusive innovation for Ohio. Rob has appeared on MSNBC, America this Week, and is a regular contributor to Roland Martin Unfiltered.

In today’s episode, discover how A.I. can be a force for good and why explainable artificial intelligence is crucial in today’s tech landscape. You’ll hear about the trends Rob is paying attention to when it comes to web3, blockchain, and artificial intelligence and how he’s thinking about policy innovation, trust, and transparency. You’ll also learn more about the vibrant tech scene here in Cincinnati and how the University of Cincinnati is at the forefront of innovation in explainable artificial intelligence. 

Cincinnati’s Growing Tech Community

Cincinnati has always been home to innovators: The Cincinnati Reds (known as the Red Stockings back then) was the first professional baseball team in the United States, the University of Cincinnati (UC) was the first municipal university established by a U.S. city, and now that same university is helping to lead a culture of tech innovation in the Queen City.  

As a board member at UC, Rob has been involved firsthand in some of the efforts to advance tech innovation in the city and broader region. 

He mentions Dr. Kelly Cohen, the Brian H. Rowe Endowed Chair in aerospace engineering and Director of the A.I. Bio Lab at Digital Futures, who is applying fuzzy logic to develop explainable and responsible A.I., with the goal of helping A.I. reach its full economic potential by ensuring its trustworthiness. UC faculty are also researching A.I. applications in higher education, advanced manufacturing, environmental services, healthcare, and renewable energy. 

“I believe the University of Cincinnati is going to be one of the key institutions really leading when it comes to A.I., when it comes to blockchain, and doing so in a way that is transparent, and that actually can help. Because everyone’s very excited about A.I., but I think we also need to be responsible about how these things are being employed.”

Rob Richardson

Rob also plays a key role in bringing critical tech conversations to the Cincinnati area through MidwestCon. Rob says that their goal with MidwestCon is to make it the South by Southwest of the Midwest. Last year’s lineup included speakers from the realms of public policy, web3, A.I., finance, fashion, advertising, and social impact. 

Mitigating Bias and Building Trust with Explainable A.I.

Rob is an advocate for explainable artificial intelligence, a technology that largely does not yet exist. 

All of the popular A.I. chatbot models available right now are essentially black boxes. User input is processed through a series of opaque functions that nobody, not even the developers behind the technology, can fully explain. We understand the concepts undergirding the process – transformers, neural networks, etc. – but the actual path from input A to output B remains elusive. 

Rob says that this paradigm presents a significant risk of bias infecting artificial intelligence because “technology is actually not neutral”; rather, it depends on who is building the A.I. and for what purpose.

That’s why he says we also need to make sure that the people driving innovation represent diverse perspectives and experiences. But Rob says that the inclusion of diverse perspectives needs to go beyond Diversity, Equity, and Inclusion (DEI) corporate vanity projects. Rather, companies have to see diversity and inclusion for what they are: a fundamental component of business success. 

Rob says that we should think about training A.I. like we think about raising children and striving to provide a well-rounded worldview.

“The more you teach a child bad habits and bad exposure, and when they become an adult, it’s very hard to change that. It’s the same thing with A.I. models and algorithms. And this is why it’s so important to think about the data, and how we input it and how we monitor it, and how we actually have transparency behind it.”

Rob Richardson

Rob cites an example from 2016, when Microsoft released an experimental A.I. chatbot called Tay on Twitter, encouraging users to engage in casual and playful conversation with the bot. Within 24 hours, Tay had transformed from a neutral machine into a conduit for the most vile misogyny, anti-semitism, and racism that Twitter had to offer (which is saying a lot given the dumpster fire that Twitter, now X, can be!). 

Certainly A.I. has come a long way since then, with some guardrails in place on all of the major chatbots to mitigate harmful outputs. However, A.I. has been used for a long time in more serious applications such as the battlefield, where consequences could be much higher and where the public is inherently unable to scrutinize those decisions.  

UC Department of Aerospace Engineering alum Javier Viaña, Ph.D., is among those working to develop explainable A.I., which provides not only accurate predictions but also human-understandable justifications of its results. At the MIT Kavli Institute for Astrophysics and Space Research, Javier is developing deep neural networks to study planetary movement. 

Rob says that research like Javier’s and the quest for explainable A.I. is a crucial step to achieve the best outcomes for everyone. 

“I think the most important trend is to actually build more trust. And if we don’t build more trust, for all of our prosperity, it can end up really backfiring on us. If we can’t trust what’s being put in front of us, if we can’t trust any institution to be transparent about how our data is being used, it creates dysfunction within society.”

Rob Richardson

For Rob, A.I. for good means that we understand why we’re building A.I. and builders are transparent about how it’s being built, and how it will be used to better society beyond just making profit. 

Don’t Just Trust — Verify: Web3’s Applications in Combating A.I. Deception

It wasn’t long ago that Web3 was the greatest new thing in tech. Bitcoin’s massive rally in 2021, combined with the economic uncertainty introduced by the pandemic, resulted in an explosion of developers building trustless systems for almost every application imaginable. While the bubble seems to have burst for the NFT market, Rob says that it’s wrong to think that Web3 is dead. 

In fact, he sees blockchain as an effective counterbalance to the new tools of deception enabled by A.I. technology. 

“I see blockchain and A.I. being integrated together. And what I want to get away from is people get so lost in trends that they just say, ‘it’s only A.I. and Web3 is over.’ That’s just a fundamental misunderstanding of what’s happening. A.I. is another application of how we’re going to use the next iteration of commerce on the internet.”

Rob Richardson

Much like the way that cryptocurrency tokens such as Bitcoin enable users to instantly transfer funds without a bank acting as a middleman, blockchain can provide the infrastructure for verifying the authenticity of images, videos, and even personhood in the age of artificial intelligence. 

That’s the thinking behind projects such as Worldcoin, the blockchain application co-founded by OpenAI CEO Sam Altman. The mission of Worldcoin, according to Altman, is to solve the problem of digital identity that Altman ironically helped create. The technology works by scanning your eyeball and assigning you a unique digital ID that can’t be transferred or stolen. (Note: there are A LOT of concerns with Worldcoin, but it’s worth noting as an interesting proposal that’s trying to solve digital identity outside of governments.) 

In the creative world, The Content Authenticity Initiative (CAI) is leveraging blockchain in multiple ways with the goal of creating a system to verify the authenticity and provenance of images distributed across digital spaces (here at Creativity Squared, we fully support the CAI’s efforts!). 

Earlier this year, the European Union adopted the world’s first comprehensive package of laws to regulate crypto assets. Rob says that the U.S. has a lot of catching up to do in order to protect users from bad actors.

Policy Innovation and A.I.

Rob sees parallels between the harms caused by social media and the risks that we’re facing with the rise of generative artificial intelligence. He points to the 2016 presidential election as the critical juncture where many started to see how machine learning algorithms distort the public dialogue by boosting the most inflammatory (and engaging) content. Now that A.I. can generate deceptive content on an industrial scale, Rob says that we need to start thinking critically about proactive regulation rather than waiting for a crisis and reactively rushing to impose half-baked regulation. 

As a former political candidate and current entrepreneur, Rob recognizes that regulation needs to balance public safety concerns with the economic and national security benefits of unstifled innovation. He says that good regulation can drive innovation by establishing clear rules of the road.

“What people fail to understand is that, without a stable government, innovation fails. Otherwise, Afghanistan would be killing innovation, because they have no government. Done right, a stable government promotes innovation, not the other way around. Innovation does not promote a stable government.”

Rob Richardson

Rob says that achieving effective policy will require effort from both tech companies and public officials. Companies have a duty to build transparent systems that can be understood and scrutinized by policymakers, while policymakers have a duty to engage in good faith and try to understand the technology they’re regulating. 

But what should a regulation regime look like? Self-regulation? A civil enforcement watchdog group? 

Historically, self-governance has succeeded in certain industries. Hollywood and video gaming are two examples where companies collaborated to establish their own guidelines around age-appropriate content, rather than cede that authority to government institutions. Such a regime can balance First Amendment considerations, corporate profit motives, and protecting children from inappropriate content. 

However, Rob questions whether self-governance is sufficient to protect public interests against A.I. risks. Returning to the social media comparison, we’ve seen how self-governance can fall short with Meta’s controversial, “independent” Oversight Board. 

Critics have lambasted the Oversight Board as a toothless “PR stunt” meant to rehabilitate the company’s image following criticism about misinformation campaigns that have proliferated on Facebook.  

Rob is also skeptical about the idea of a watchdog or civil enforcement solution, like the Securities and Exchange Commission which oversees the stock market, or the Food and Drug Administration. Both agencies receive criticism for perceived deference to the companies they’re meant to regulate. Critics point to the fact that budgets for both agencies rely largely on fees paid by companies subject to regulation. 

“Watchdogs are good, but generally, they probably get funded by the people that they’re watching. Which becomes part of the problem, right? And so, where you’re going to need policies, you’re going to need somebody that is at least somewhat removed from the entities that have to be monitored.”

Rob Richardson

Regardless of how government and industry decide to regulate generative A.I., Rob says that citizens and users cannot afford to take a back seat. 

Our Individual and Collective Responsibilities in Adapting to A.I.

Rob says that A.I. is similar to computing and the internet in the sense that they are “general purpose” innovations. There are plenty of people who don’t use social media or cryptocurrency, but there are very few people who don’t use the internet if they have access to it. Similarly, Rob sees GenAI becoming an unavoidable and inevitable part of our lives, as A.I. is already embedded into all facets of our lives through the tech we use every day. 

While we must ensure that our institutions, laws, and corporate interests are moving in the same direction toward building transparency, Rob says we all have an individual duty to prepare ourselves for the paradigm shift as well. 

“I want technology to help bring us together to create more opportunities for folks. But that requires us to be informed advocates and citizens and to not just accept what’s in front of us, be it from Google, or be it from the government. We have to be willing to disrupt and define our own path.”

Rob Richardson

Listen to the episode for more of Rob’s thoughts on A.I. regulation, protecting the creator economy, the importance of strong government institutions, and using artificial intelligence to augment human ability. 

Continue the Conversation

Thank you, Rob, for being our guest on Creativity Squared. 

This show is produced and made possible by the team at PLAY Audio Agency: https://playaudioagency.com.  

Creativity Squared is brought to you by Sociality Squared, a social media agency that understands the magic of bringing people together around what they value and love: http://socialitysquared.com.

Because it’s important to support artists, 10% of all revenue Creativity Squared generates will go to ArtsWave, a nationally recognized non-profit that supports over 150 arts organizations, projects, and independent artists.

Join Creativity Squared’s free weekly newsletter and become a premium supporter here.