How to harness A.I. in the fight for democracy

Autocratic regimes are using artificial intelligence to attack democracies, stirring concern from Romania to Rhode Island. But the technology could also become a powerful tool to safeguard elections if deployed in the right ways.

Photo illustration by Compiler/Unsplash/Philip Oroni

COMMENTARY BY Isabel Linzer

Warnings about attacks on democracy fueled by artificial intelligence have become all too commonplace. Canadian intelligence services cautioned that Beijing would use AI to interfere in that country’s April elections, and Australia’s public broadcaster recently told readers how the technology could sway Australians’ vote in May.

The threats are anything but theoretical. Last weekend, Romanians went to the polls for an election that was rescheduled after officials annulled a 2024 vote due to AI-fueled Russian interference. Moscow also tried to influence Germany’s February elections using AI. And to brace for a potential onslaught of similar risks in the United States, more than a dozen states—from Connecticut to Texas—introduced bills in 2025 to govern the use of AI in future elections.

These measures may be driven by a degree of creeping technophobia. But they are also a sober recognition that emerging tech can have very real consequences for social and political life. 

So far, efforts to reduce AI risks to democracy have largely been defensive attempts to prevent bad outcomes. OpenAI, for instance, prepared for the hefty slate of 2024 elections by limiting the potential election-related responses that ChatGPT could provide. Some defensive efforts, however, come too close to violating fundamental rights: Connecticut’s bill to create criminal penalties for knowingly distributing election deepfakes would likely run afoul of users’ right to express themselves and participate in the democratic process. That bill, like many similar ones across the United States, comes at an unacceptable price, and is counterproductive to the broader goal of safeguarding democracy.

Ultimately, we’re left with a puzzle of how to defend democracy from AI-fueled threats without undermining democratic principles. But the well-warranted focus on AI harms should not mean missing out on its potential benefits. As AI developers release updated versions of tools and governments consider regulations, those of us committed to strengthening democracy should go beyond trying to limit bad outcomes and use new technology such as AI to proactively protect information integrity—and civil society is leading the way. 

Although generative AI hasn’t yet had the catastrophic impact on elections that some predicted, the risks are real and likely to grow. Generative AI can make phishing, hacking, and mis- and disinformation more effective and frequent. Already, leading candidates in the United States and abroad have been the targets of sophisticated deepfakes.


At the Center for Democracy & Technology, we found that just months before the U.S. election, top AI chatbots—ChatGPT, Gemini, Llama, Claude and Mistral—gave incorrect answers to questions about voting with a disability more than a third of the time, potentially preventing people from exercising their right to vote. Other studies found similar problems with questions about voting in the United States and Europe, and one found that rates of inaccurate information were worse in Spanish. Beyond the impacts on individual voters, the proliferation of fake and manipulated content undermines trust in the information environment more broadly; as people learn that deepfakes are increasingly realistic and difficult to detect, false claims that real content is AI-generated become more persuasive.
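For readers curious what such an audit involves mechanically, here is a minimal sketch of one way to collect a chatbot’s answers to a fixed set of questions about voting with a disability so that human experts can score them afterward. It is not CDT’s methodology; the question list, model name, and output file are illustrative assumptions.

```python
# Minimal sketch (not CDT's methodology): collect a chatbot's answers to a
# fixed set of voter-access questions so that human experts can score their
# accuracy later. The questions, model name, and output file are illustrative
# assumptions.
import csv

from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

QUESTIONS = [
    "Can I bring someone to help me mark my ballot if I have a disability?",
    "Does my state offer curbside voting for voters with mobility impairments?",
    "How do I request an accessible absentee ballot?",
]

client = OpenAI()

with open("responses_for_expert_review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "model_response"])
    for question in QUESTIONS:
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: swap in whichever assistant is being audited
            messages=[{"role": "user", "content": question}],
        )
        writer.writerow([question, completion.choices[0].message.content])
```

Collecting the responses is the easy part; judging their accuracy still requires people who know the relevant election rules.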


While these concerns have only gained public attention in the past couple of years, civil society groups have been experimenting with AI for far longer and are stepping up their work as AI adoption accelerates. Chequeado, a Latin America-wide fact-checking collective, has saved time by pooling members’ expertise and using natural language processing and machine learning to determine whether a given query has been fact-checked before—a meaningful development, as speed is crucial to impactful fact-checking.
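To make the technique concrete, the sketch below shows one common way such a system can check whether an incoming claim resembles something already in a fact-check archive, using multilingual sentence embeddings. It is an illustrative approach with made-up example claims, not Chequeado’s actual pipeline.

```python
# Illustrative sketch, not Chequeado's actual system: flag whether an incoming
# claim closely matches a claim that has already been fact-checked, using
# multilingual sentence embeddings. The example claims are made up.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# A tiny stand-in for an archive of previously fact-checked claims.
archive = [
    "El candidato dijo que la inflación bajó 20% el año pasado.",
    "Voting machines in the capital were connected to the internet on election day.",
]
archive_embeddings = model.encode(archive, convert_to_tensor=True)


def find_prior_fact_check(new_claim: str, threshold: float = 0.75):
    """Return the closest archived claim if it is similar enough, else None."""
    query_embedding = model.encode(new_claim, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, archive_embeddings)[0]
    best_idx = int(scores.argmax())
    if float(scores[best_idx]) >= threshold:
        return archive[best_idx]
    return None


print(find_prior_fact_check("¿Es cierto que la inflación cayó un 20% en 2024?"))
```

The payoff is speed: a near-duplicate claim can be routed to an existing fact-check instead of triggering a fresh investigation.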

Taiwanese civil society has similarly used AI to expand its fact-checking capacity, creating AI chatbots that integrate into messaging apps and make access to vetted information more convenient and personalized. It’s simpler to ask the friendly “Auntie Meiyu” chatbot and receive an immediate response than to sift through web search results to see whether a news outlet has answered your exact question. Taiwan’s prominent and reliable fact-checking platform Cofacts relies on public participation, crowdsourcing responses in a manner that shares similarities with the community notes model used by some social media platforms. And in the U.S., researchers studying AI and elections have experimented with using AI to help collect data, assess reporting on deepfakes and evaluate foreign election interference.
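At its core, the messaging-bot pattern described above can be sketched very simply: match a forwarded claim against vetted answers, and queue anything new for human review. The stand-in answer database, handler, and canned replies below are assumptions; real systems such as Cofacts combine far more robust matching with crowdsourced volunteer review.

```python
# Rough sketch of the messaging-bot pattern described above. The stand-in
# answer database, handler name, and canned replies are assumptions; real
# systems such as Cofacts pair smarter matching with volunteer review.
VETTED_ANSWERS = {
    # normalized claim -> summary of a previously published fact-check
    "were voting machines online on election day?": (
        "No. See the archived fact-check for the evidence and sources."
    ),
}


def handle_incoming_message(text: str) -> str:
    """Reply with a vetted answer if the forwarded claim has been reviewed."""
    answer = VETTED_ANSWERS.get(text.strip().lower())
    if answer is not None:
        return answer
    # Nothing on file yet: in a crowdsourced model the claim joins a volunteer queue.
    return "We haven't checked this one yet; it has been added to the review queue."


print(handle_incoming_message("Were voting machines online on election day?"))
```

The design choice worth noting is the fallback: claims with no match feed the human review queue rather than disappearing.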

To be sure, the argument that AI tools can help mitigate AI harms deserves to be viewed with caution. Tech solutions risk introducing unintended problems. Adding fact-checking chatbots to group chats can create privacy risks, to say nothing of the errors these tools are known to make. The Abundance Institute, for example, stopped using automated sentiment analysis in AI research because it was “inaccurate and not particularly informative.” 

These concerns are why user rights and uncompromising performance standards should be at the heart of efforts to use AI tools to combat AI harms, and civil society is best-placed to ensure that’s the case. We have the appetite and capability to make smart, nuanced decisions about how to responsibly use AI in different social and political settings. The problems we have encountered in our own work can also offer important guidance on how to address potential harms of others’ efforts—but doing so effectively requires the cooperation of companies and funders, who have immense power over how AI shapes the information environment.

In contrast to the consistent AI-related pro-democracy work by civil society over the years, companies’ attention to elections ebbs and flows according to electoral calendars, and their time-limited election policies have missed critical election events—including the January 6 insurrection. The AI Elections Accord, a 2024 agreement among 27 major technology companies to address “deceptive AI elections content,” lapsed in February. With election security and disinformation work in the Trump administration’s crosshairs, there are already signs we’ll see a sharp and dangerous decline in work to thwart tech-related harms to the democratic process.  

Most Americans are worried about just such repercussions. Ahead of the 2024 U.S. elections, the majority of voting-age Republicans and Democrats were highly concerned about the use of AI in elections. Those worries persisted even after the election went relatively smoothly; a separate post-election poll found that most Americans think AI will negatively affect national security. Those fears are well-founded. As the Canadian, Australian, and German governments have articulated, foreign interference and cyberattacks continue to imperil election integrity worldwide. 

In another reminder that these issues aren’t only relevant during campaign periods, Vice President JD Vance asserted in March that a “leaked” audio clip in which he purportedly disparaged Elon Musk was AI-generated. To the casual listener, the clip wasn’t blatantly phony, but experts assessed that Vance was right and the audio was likely synthetic. Attention to the incident was minimal, perhaps because deepfakes have lost their novelty or because Vance-related news at the time focused on his alleged participation in sharing military plans via group chat, but it signals a persistent problem that deserves just as much dedicated attention.

Companies should continue investing in and collaborating on provenance and detection technology, building on their agreement under the erstwhile Elections Accord. Well-established and publicly available tools could help avoid confusion like last year’s episode in which old interviews of Mexico’s now-president Claudia Sheinbaum were misleadingly spliced together into a clip initially held up as an example of how generative AI threatens electoral integrity. The clip was not AI-generated, yet it underscored how difficult it can be to identify the source of media, particularly audio in languages other than English, since detection models are often trained more thoroughly on English-language data.
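Provenance tooling comes in several flavors, and the sketch below is a deliberately simplified stand-in for the basic idea: a publisher registers cryptographic hashes of original media so anyone can later check whether a circulating file is byte-identical to a known original. It is not the signed content-credential technology companies have been building, and, as the comments note, it breaks the moment a clip is re-encoded, which is exactly why sustained, collaborative investment in better tools matters.

```python
# Deliberately simplified stand-in for provenance checking: register SHA-256
# hashes of original media and check circulating files against them. Real
# provenance systems (e.g., signed content credentials) go much further,
# carrying signed histories and surviving some edits; exact byte-hashing fails
# as soon as a clip is trimmed or re-encoded.
import hashlib
from pathlib import Path

REGISTRY: dict[str, str] = {}  # sha256 hex digest -> description of the original


def register_original(path: Path, description: str) -> str:
    """A publisher records the hash of a file it released."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    REGISTRY[digest] = description
    return digest


def check_circulating_file(path: Path) -> str:
    """Anyone can test whether a circulating file matches a registered original."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return REGISTRY.get(
        digest, "No match: unknown origin, or the file was edited or re-encoded."
    )
```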


That spirit of sharing should extend to transparency and data access for outside researchers, which companies have shied away from recently. Most prominently, Meta eliminated CrowdTangle, and X made access to its data prohibitively expensive. Bowing to reputational anxieties and other reservations, industry leaders sacrifice the huge benefits transparency offers to research and policymaking. Many researchers and civil society organizations rely on APIs to support their work, helping them code data and consolidate information. Researchers I interviewed about tracking AI incidents in elections told me that it is disruptive when companies change their APIs: correcting their work takes significant time and investment—a tall ask for resource-strapped nonprofits and a potential barrier to their use of new technology. AI developers will continue to make updates, but they could simultaneously offer third-party researchers more notice and support around changes to their APIs.
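On the researcher side, one practical way to limit that disruption, sketched below, is to route every platform call through a thin internal adapter so an upstream API change only has to be fixed in one place. The class and field names are hypothetical, not any platform’s actual interface.

```python
# Hedged sketch of one way researchers can insulate their pipelines from API
# changes: keep analysis code pointed at a small, stable internal interface and
# confine platform-specific details to one adapter class. All names here are
# hypothetical; this is not any platform's actual API.
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class Post:
    """The minimal, stable record the rest of the research pipeline depends on."""
    post_id: str
    text: str
    published_at: str


class PlatformAdapter(Protocol):
    def fetch_posts(self, query: str, limit: int) -> List[Post]: ...


class ExamplePlatformV2Adapter:
    """Translates one hypothetical platform's v2 responses into Post records.

    If the platform ships a v3 with renamed fields, only this class changes;
    downstream coding and analysis scripts keep working untouched.
    """

    def __init__(self, raw_client):
        self._client = raw_client  # e.g., the platform's own SDK or HTTP wrapper

    def fetch_posts(self, query: str, limit: int) -> List[Post]:
        raw_items = self._client.search(query=query, limit=limit)
        return [
            Post(
                post_id=item["id"],
                text=item["full_text"],
                published_at=item["created_at"],
            )
            for item in raw_items
        ]
```

An adapter like this does not replace advance notice from developers, but it shrinks how much breaks when a change lands.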


As fact-checkers have come under political fire around the world, companies should reinvest in fact-checking partnerships and reverse the recent trend of opposing fact-checking, demonstrated most strikingly by the end of Meta’s fact-checking program in the United States. Traditional fact-checking has its shortcomings, but should not be disregarded: We still use seat belts even though they don’t prevent all injuries and cars also have airbags. Civil society organizations’ use of AI to expand personalized, user-driven fact-checking proves that there is great room for growth, especially when paired with responsible practices to limit risk. Even moderate investment from donors could go a long way toward exploring those opportunities.

When considering innovative ways to protect democracy and build a resilient information environment, however, the elephant in the room is that funding for pro-democracy work is under extreme strain. At least 14,000 nonprofit jobs have been eliminated in the United States since President Trump’s inauguration; 71% of Global Majority digital rights organizations have scaled back their work as State Department and USAID cuts ended millions of dollars in funding for civil society around the world; and the $1 billion in canceled National Science Foundation grants includes awards focused on mis-, dis-, and malinformation.

What’s at stake is greater than the loss of individual projects. Those who want to defend democracy, protect free expression and ensure the survival of crucial research should respond by investing in whole fields of work rather than isolated projects. Research has shown the power of collective models and information-sharing. Projects that approach similar issues from different angles are the secret sauce in the knowledge economy. The diversity of knowledge they create is not an accident, but a core ingredient of robust interventions, research and innovation. Grantmaking should reflect that reality and prioritize funding that incentivizes collaborative and complementary efforts, instead of pitting potential allies against each other in competition for grants at a time when so many organizations are existentially threatened.

Globally significant elections happen every year, and AI is not going away. Supporting innovative uses of new technology is crucial, alongside continued improvements to AI products and to the policies that mitigate risk in the first place, such as proactive product testing and well-resourced trust and safety teams at companies.

It does no good to shrink from such a dominant technology. Instead, we should collectively commit to making sure democracy can grow with the times—before it’s too late.

Isabel Linzer is a fellow on the Center for Democracy & Technology’s election team, focusing on information resilience, AI and elections, and international democracy and elections work. Isabel’s writing has been published in The Washington Post, Slate, Just Security and Foreign Affairs, among other leading outlets.