Truth in the Balance: The Critical Role of Media Literacy Education in the Age of Algorithms & GenAI
Over the past seven years, social media and the engagement economy have created the conditions for the “post-truth” era in the global information ecosystem, where truth is a popularity contest enabled and amplified by algorithms. Artificial intelligence now threatens to break the system completely by letting anyone fabricate evidence of anything to further their agenda.
Despite the harms that society has already endured, digital citizens are unprepared for the onslaught of disinformation that experts expect to come. Now more than ever, academic institutions and governments need to prioritize media literacy education to protect societies from chaos.
Disinformation is Pervasive, and We’re Unprepared for More
Propagators of mis- and disinformation employed generative artificial intelligence to sow division, discredit political opponents, or manipulate public opinion in at least 16 countries over the past year, according to Freedom House’s 2023 annual Freedom on the Net report.
Manipulated audio of President Joe Biden appearing to make transphobic remarks circulated on social media. Donald Trump and Dr. Anthony Fauci were likewise targeted by manipulated images that appeared to show the two embracing. In Venezuela earlier this year, state media outlets distributed deepfaked videos of nonexistent anchors spreading propaganda on a fabricated news channel.
These are only a few glaring examples of how A.I. is deteriorating the quality of the global information ecosystem, but the problem runs far deeper than headline cases. In a survey of 10,000 British and American adults, half of respondents said they encounter false or misleading content weekly. Even Freedom House acknowledges that its report likely undercounts instances of A.I.-generated mis- and disinformation campaigns.
Almost 60 percent of Americans think that A.I. will increase the spread of misinformation during the 2024 U.S. presidential election, according to new polling by The Associated Press.
Efforts such as Adobe’s Content Authenticity Initiative (CAI) aim to eliminate the guesswork in identifying images that are generated or manipulated by artificial intelligence. In episode seven of Creativity Squared, CAI’s Senior Director, Andy Parsons, discusses the importance of making sure content authenticity is verifiable and the technology they’re developing to achieve that.
Hardware solutions for image authenticity are coming to the market as well. This week’s episode of Creativity Squared features the Vice President of Marketing at Leica Camera North America (Leica is also a member of the Content Authenticity Initiative), Kiran Karnani, who discusses the iconic brand’s newest offering: the world’s first camera that automatically assigns content credentials for every photo it captures.
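The core idea behind content credentials can be sketched in a few lines. The example below is a deliberately simplified illustration, not the actual C2PA/CAI specification: a real content credential is a cryptographically signed manifest embedded in the file, whereas this sketch records only an unsigned SHA-256 digest to show why any edit to the image becomes detectable. The `creator` field and function names are invented for illustration.

```python
import hashlib

def attach_credentials(image_bytes: bytes, creator: str) -> dict:
    """Build a simplified 'content credential' manifest for an image.

    Real C2PA manifests are cryptographically signed by the capture
    device or editing tool; this sketch omits the signature and keeps
    only a tamper-evident digest of the image bytes.
    """
    return {
        "creator": creator,
        "digest": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_credentials(image_bytes: bytes, manifest: dict) -> bool:
    """Check whether the image still matches its recorded digest."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["digest"]

photo = b"...raw sensor data..."
manifest = attach_credentials(photo, "example-camera")

print(verify_credentials(photo, manifest))         # untouched image: True
print(verify_credentials(photo + b"!", manifest))  # any edit: False
```

In the real system, the signature (rather than a bare hash) is what lets a viewer confirm not just that the image is unmodified, but who or what produced the credential in the first place.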
However, the rapid pace of A.I. development and ease of access means that trying to stem the tide of harmful A.I.-generated content may forever be a game of whack-a-mole.
At the other end of the information superhighway, media consumers need to be better equipped to evaluate the content they encounter. A Poynter survey found that 75 percent of respondents were not confident they could identify online misinformation. In fact, almost two-thirds of American adults received no media literacy training at any point in their schooling, according to a 2022 study by the nonprofit Media Literacy Now.
Today’s youth are not necessarily better prepared. In a 2016 study, the Stanford History Education Group determined that college-age and younger “digital natives” often could not distinguish ads from news articles or identify clear signs of political bias.
The study’s authors said, “Overall, young people’s ability to reason about the information on the internet can be summed up in one word: bleak.”
Even today, only three U.S. states mandate media literacy education in K-12 schooling, even as teens increasingly name “other people on TikTok and Instagram” as their primary news sources. A nationwide survey of 18 to 24-year-old Canadians this year found that 84 percent were not confident they could distinguish true from false content on social media, and 73 percent said they follow at least one social media influencer who has expressed anti-science views.
Countering the Content Confidence Crisis
Public policy, academia, and nonprofit groups are developing answers for our crisis of confidence in online content.
The News Literacy Project (NLP) is a nonpartisan nonprofit focused on increasing citizens’ ability to determine the credibility of news and other information and to recognize the standards of fact-based journalism, so they know what to trust, share, and act on. The project offers resources for educators, as well as webinars and a podcast dedicated to news literacy.
NLP and other groups like Media Literacy Now (MLN), a grassroots educational policy advocacy group, are mobilizing communities to push their policymakers for mandatory media literacy education. According to MLN’s 2022 Media Literacy Policy Report, bipartisan cooperation is growing around the need to do more.
Last month, Governor Gavin Newsom signed a bill to make California the fourth U.S. state to mandate K-12 media literacy training. However, there’s been little momentum in Congress for a bill introduced last year to establish federal funding for media literacy programs in local schools.
Companies, nonprofit groups, and think tanks are trying to pick up the slack by developing media literacy curricula for schools to implement for free.
Adobe’s Content Authenticity Initiative offers media literacy curricula for college, high school, and middle school through the company’s Education Exchange. Course topics include “The Basics of Media Literacy” for middle schoolers and “Visual Literacy” for higher ed students.
Each curriculum includes a foundational unit as well as lessons for use in social studies, the arts, and English & language arts (ELA), with media literacy lessons and themes integrated throughout all components.
The MIT Center for Advanced Virtuality (CAV) offers a free online mini-course, “Media Literacy in the Age of Deepfakes,” which provides tips and case studies for thinking critically about misinformation, along with a brief history of how our media landscape arrived here. The third and final module uses CAV’s multi-platform, immersive project, In Event of Moon Disaster, to explore the implications of deepfake technology by imagining an alternate history in which (an A.I.-generated clone of) Richard Nixon addresses the nation following the failure of the Apollo 11 mission.
Regulation and Education: Fighting Disinformation Holistically
It’s not enough for schools to teach students how to use technology without also teaching the risks. However, funding determines focus. Surveys show that adults are not inherently more or less capable of discerning mis- and disinformation than young people, so time and effort have to go into training teachers to prepare students for a radically different media environment than earlier generations encountered.
To accomplish that fairly and equitably, Congress needs to heed the call of advocacy groups and dedicate resources to steeling Americans against the harms of malicious disinformation.
While ongoing efforts to regulate the cutting edge of the A.I. industry are worthwhile, a holistic approach to fortifying the social fabric should also encourage a culture of reasonable skepticism across all age groups.
If leaders continue to neglect the need to strengthen media literacy, no amount of regulation will prevent future instances of A.I.-enabled mass deception and social unrest.