
How Artificial Intelligence Contributes to Misinformation During an Election Year

September 30, 2024

Are you a California voter who uses social media or search engines when doing election research? You’re not alone. According to a recent study by UC Berkeley’s Institute of Governmental Studies, 39% of California’s registered voters use Google and other search engines to learn about elections, and 32% use social media. However, the majority of California voters who use social media for election-related news are also distrustful of the content they see.

The Rise of AI-Driven Disinformation

Three major factors contribute to the rise of Artificial Intelligence-driven mis- and disinformation.

As California’s newsrooms continue to shrink, news deserts form, and in the absence of reliable, trusted journalism, misinformation spreads quickly.

Generative AI technology has increased in quality and accessibility in the last year, which means misleading content can be generated quickly and by more people, fueling distrust in the news and even in our government.

Major social media platforms have not been held accountable for implementing and enforcing effective policies against misinformation, disinformation, and misleading AI-generated content (in part because they profit from our clicks). That leaves us to rely on our own media literacy skills to recognize and stop the spread of misleading content online.

Here are three ways to put media literacy into practice in the digital world.

Stop before sharing

It’s happened to the best of us: a post online looks almost too good (or bad) to be true, and we share it with our friends only to learn it was a fake news article or an AI-generated image made to capture our attention. Stopping before we share or engage with news posts online gives us time to analyze the content and determine whether it comes from a credible source. As generative AI becomes more accessible, misleading and clickbait content becomes easier and faster to create.

Because the technology is still fairly new, AI-generated images often contain errors that help us identify them. For example, AI-generated photos of people may include extra fingers, extra teeth, oddly placed jewelry or accessories, or an unnaturally smooth complexion.

Identifying AI-generated or misleading news articles can be tricky when they are specifically designed to imitate trusted news sources, and it may require a little digging after you’ve analyzed them. But it is worth the effort.

It’s important to note that, like all media, generative AI reflects human bias, allowing users to generate images that align with their values while being devoid of historical or factual accuracy. An analysis by Bloomberg found that Stable Diffusion, a generative AI text-to-image model, exacerbated racial and gender stereotypes when prompted to generate images of people working different jobs and of people who had committed crimes. The analysis found that the model overrepresented lighter skin tones in images of people working higher-wage jobs, and darker skin tones in images of low-wage workers and criminals. Such AI-generated images are incredibly harmful: they perpetuate stereotypes and strip the voice from marginalized communities, who become subjects of someone else’s agenda without consent.

When it becomes difficult to tell the truth from AI-generated fiction, the first step of applying media literacy is to stop before sharing.

Verify the source

When coming across any online news content, it’s important to ask: Who created this content? When was it published? And where did it come from? Identifying the content’s source is a critical step in assessing the trustworthiness of the information being shared, and its intention.

Misleading content online is often designed to look like traditional news articles. “Pink slime” websites are propaganda sites that imitate news publications but often publish partisan, unethically sourced, and false information, including content generated by AI. With the loss of local journalism and the closure of newsrooms across the state, pink slime websites now outnumber trusted news sites. According to data collected by NewsGuard, an internet tool that rates news content to identify trusted sources, 1,265 “pink slime” websites were identified online in June 2024, outnumbering the 1,213 daily newspaper websites operating in the United States.

Support Journalism

To help combat the spread of misleading and fake news content, it’s important that we support our local news outlets and journalists. By supporting our local news, following and sharing trusted news sites online, and telling our friends and family when the content they share is false or misleading, we’re investing in our ability to access credible information now and in the future.

Media and local newsrooms are critical to helping us stay informed about our communities and hold our local, state, and national elected officials accountable. But it is important to remember that good journalism isn’t free: if we don’t use it, we will lose it. By frequenting and sharing trusted sources of information, we can stay informed about elections, the policies and budgets that affect our everyday lives, and how our elected officials perform while in office. And we’ll be telling advertisers, social media platforms, and leaders alike that we value our newsrooms and our access to credible sources.

Protect your vote this election by practicing media literacy online and rejecting misleading outside influences.