L21 | One region, all voices

Artificial intelligence in electoral campaigns: How and for what

Artificial intelligence is redefining electoral campaigns: it can either strengthen democracy or become its major threat.

The emergence of artificial intelligence (AI) in the electoral ecosystem has accelerated dynamics we already knew—administrative automation, message segmentation, network monitoring—and created entirely new ones, such as the mass production of hyper-realistic deepfakes and coordinated disinformation campaigns. For electoral bodies and civil society, the challenge is not to decide whether AI “enters” elections, but to confront its malicious use—when some actors seek to discredit authorities or tilt results—and, in parallel, to harness its benefits to better manage elections and strengthen informational integrity. A recent report by the International Institute for Democracy and Electoral Assistance (International IDEA) summarizes this agenda well: AI opens opportunities across the electoral cycle but also demands response plans, human oversight, and transparency frameworks to avoid eroding public trust.

The dark side: deepfakes, bots, and fabricated narratives

In Latin America, one of the most recent cases is Argentina. In May 2025, deepfakes circulated attributing false messages to figures such as former president Mauricio Macri and Buenos Aires Province Governor Axel Kicillof, right in the middle of the campaign. Journalistic fact-checking documented the reach and intentionality of these pieces, even disseminated during the “electoral silence,” with the obvious intent of influencing the vote and sowing confusion. Macri himself publicly denounced the videos, and AI incident databases recorded the episode as an attempt at informational manipulation on election day.

Ecuador experienced something similar but in a more disturbing format: fake “newscasts” generated with AI that imitated the graphics, tone, and presenters of real outlets. The result was a simulation of journalistic authority placed at the service of misleading content. Media monitoring and fact-checks by outlets like DW and France 24 described these pieces—even with fake “anchors”—as a reminder that technology not only distorts candidates’ words but also impersonates news brands to “parasitize” their credibility.

In the United States, the novelty last year was not a video but an audio recording: a robocall with a cloned voice of Joe Biden urged people not to vote in the New Hampshire primaries. The Federal Communications Commission (FCC) declared automated calls with AI-generated voices illegal and later imposed a multimillion-dollar fine on the consultant behind the operation, as well as sanctions on a telephone company that transmitted the calls. This is an example of how regulation, investigation, and sanctions can deter malicious use of AI in real electoral processes.

Bolivia, during its 2025 electoral cycle, showed the other side of the same coin: deepfakes, invented polls, and coordinated attacks, with local fact-checkers reporting hundreds of misleading pieces since the beginning of the year. The first round, held in August, was marked by fabricated narratives and viral synthetic content, forcing verification organizations and the press to redouble efforts to slow the snowball effect of disinformation. The conclusion is clear: the marginal cost of producing “plausible” falsehoods has dropped, while the cost of verifying them remains high and the corrective reach of each debunk is limited.

Beyond the region, the same reality emerges: Moldova, a former Soviet republic, offers an open-air laboratory of interference. Ahead of its elections, deepfakes targeted President Maia Sandu, alongside coordinated campaigns linked to pro-Russian networks. European observatories and specialized media reported manipulated videos aimed at eroding the credibility of pro-European leadership, a tactic already familiar in the region. The lesson here is geopolitical: AI amplifies transnational influence operations that exceed the response capacity of any isolated electoral authority.

The virtuous side: logistics, registries, document review, and countering disinformation

The same technology that cheapens deception can—if well designed and governed—improve electoral administration. In Buenos Aires Province, electoral authorities used AI to relocate hundreds of thousands of voters to bring polling stations closer and balance school capacity. The change sparked debate—as any massive modification of the operational registry would—but illustrates a legitimate use: algorithms to optimize election-day logistics and, potentially, reduce travel times and overcrowding at polling sites. The key is to communicate in advance, audit criteria, and maintain effective channels for complaints.

Peru, meanwhile, introduced EleccIA, a tool by the National Jury of Elections that applies natural language processing to review government plans and candidate files. The promise is to drastically reduce reading times and more consistently identify omissions or inconsistencies. This is a typical back-office use of AI: less glamorous but more impactful in processes that currently consume weeks of human effort and, when automated under supervision, can free up capacity for oversight and dispute resolution.

At the same time, authorities can use AI to respond—not just react—to disinformation. Dashboards that combine network analysis with anomaly detection, semantic search engines to locate rising rumors, and digital forensic labs to label audiovisual manipulations are already part of today’s information integrity toolkit. Comparative experience and guidelines from entities such as the U.S. Election Assistance Commission, as well as research centers and think tanks, converge on good practices: rapid-response protocols, partnerships with platforms to tag synthetic content, and literacy strategies targeting vulnerable groups.

What works when everything speeds up

In hyper-fast environments, what makes the difference is not slogans for or against technology but institutional capacity:

  • Governance and traceability. If an agency uses AI to clean registries, assign polling places, or prioritize audits, it must be able to explain why and how: criteria, training data, bias assessments, human controls. Ex-ante explainability is the best insurance against ex-post suspicion. The IDEA report stresses human oversight and audit procedures; it’s not a technical detail but a democratic guarantee.
  • Communication windows. Citizens are more receptive to technology when they understand its purpose and limits. Logistical changes, such as those in Buenos Aires, require educational campaigns, polling place simulators, and quick complaint channels.
  • Rules and consequences. The U.S. case shows that when concrete harm occurs—the robocall with Biden’s voice—the regulatory and punitive response can be swift, sending deterrent signals.
  • Verification ecosystems. No authority alone can keep up with synthetic manipulation. Networks of fact-checkers, universities, observatories, and platforms must coordinate to debunk rumors and share technical deepfake signatures that can be reused region-wide. The cases of Ecuador and Bolivia show that distributed fact-checking shortens the exposure time of falsehoods.

What lies ahead: AI as an ally of integrity

Looking at the map—Argentina and Ecuador with viral deepfakes, the United States with exemplary sanctions, Bolivia with coordinated campaigns of fabricated content, Moldova under siege from pro-Russian networks—it is clear that AI is not an electoral accessory. It is a structural determinant of trust. But it is also an opportunity to professionalize administration: more efficient resource allocation, faster document review, smarter monitoring of public conversation. The question is not whether AI “works,” but for what and under what rules.

For electoral bodies and civil society, the reasonable path combines three vectors: strategic use (automating bottlenecks with explicit human supervision), informational defense (detecting and neutralizing coordinated synthetic content before it matures), and public transparency (explaining every algorithmic decision with accessible language and audit channels). This is the bridge between a technology that can be a weapon and a tool that, well-governed, enhances democratic outcomes.

*Machine translation, proofread by Ricardo Aceves.

Author

Executive Director of Transparencia Electoral. Degree in International Relations from Universidad Central de Venezuela (UCV). Candidate for a Master's Degree in Electoral Studies at Universidad Nacional de San Martín (UNSAM / Argentina).
