“The nightmare of schizophrenia is not knowing what is true,” Dr. Rosen tells John Nash’s wife in the film A Beautiful Mind, directed by Ron Howard. The line describes one of the most distressing symptoms of the illness and, used here in a metaphorical rather than clinical sense, offers a starting point for thinking about the very real complexity of twenty-first-century society. A visual culture of dizzying circulation on social media, where images and videos that appear real but are entirely false abound, fuels powerful dynamics of disinformation.
If we were dealing only with inaccurate data, the problem would be manageable. But when fabricated content is injected into conflicts between countries—such as Israel–Palestine or Russia–Ukraine—and designed to provoke fear, rejection, stigmatization, or division, the situation changes scale. This is no longer merely a matter of informational error, but of devices that shape emotions, identities, and political positions. At that point, it is worth pausing to ask to what extent generative artificial intelligence (AI) is reconfiguring culture, everyday life, and the very idea of shared truth.

In Latin America, the boundary between the true and the false is no longer a futuristic dilemma. Brazil introduced rules against deepfakes in its 2024 local elections while manipulated content was circulating; in Mexico, that same year’s electoral cycle was marked by debates over synthetic audios and videos; and the corporate world has already faced fraud attempts using cloned voices and faces. The result is a more uncertain daily life, in which verification often arrives too late and damage spreads quickly: emotions circulate first, and only later—if at all—does evidence appear.
In this context, we propose reading the phenomenon as a kind of AI-induced “social schizophrenia”: a misalignment between what we see and hear and what we can believe, with large-scale political, economic, and cultural effects.
We use the term “social schizophrenia”—again, in a metaphorical and non-clinical sense—to describe the collective desynchronization between perception and trust that occurs when technical systems generate, at low cost and large scale, plausible representations of events that never happened. At least three mechanisms contribute to this.
First, sensory mimicry: AI systems produce voices, faces, and scenes that "pass" as real and are consumed at the speed of the feed, dramatically shrinking the window for doubt. Second, the attention economy: platforms optimize for exposure and engagement, and synthetic pieces, with their drama and novelty, hold a competitive edge in the contest for clicks, reactions, and advertising. Third, institutional asymmetry: the capacity to verify, regulate, and sanction always lags behind the capacity to generate and distribute content.
In the region, agencies and electoral courts have begun to respond, but the incentives of the informational ecosystem—political, commercial, and media—continue to reward virality over democratic care. The consequence is an erosion of interpersonal and institutional trust, with uneven impacts: certain population groups are preferential targets of fabrications and hate campaigns, while media outlets face a double challenge of sustainability and verification. Reversing this misalignment requires public policies, responsible technologies, and media and visual literacy that rebalance the relationship between seeing, believing, and acting.
To understand how we came to trust images so deeply—so much so that AI can unsettle our perception—it is useful to revisit some ideas from Visual Studies. From this perspective, we can interrogate the relationship between image, gaze, and reality, and better understand the terrain on which AI-created or AI-manipulated images now circulate.
Gérard Wajcman, in L’oeil absolu (2010), argues that we inhabit a society that idolizes the image. We expect reality to fully submit to our gaze, leaving nothing hidden. Hence the ubiquity of cameras in phones, computers, and surveillance devices. Under the premise that whatever lacks an image easily becomes rumor, we end up trusting images as if they guaranteed, by themselves, the truth of what they show—even while knowing they can be manipulated. First it was photography, then digital editing software; now images produced or transformed by AI. In all these cases, an almost automatic trust in the visible persists.
Wajcman illustrates this compulsion to see everything with the example of a group of scientists at Hiroshima University who made frogs transparent in order to observe their organs without opening their bodies. Beyond what might appear as a scientific advance, the gesture reveals a desire to anticipate any threat—such as the growth of malignant cells—through an external eye that produces constant information. This logic helps explain why we now normalize technologies that promise to monitor, predict, and control the future through images and visual data.
Guy Debord, meanwhile, argued in The Society of the Spectacle (1967) that in modern societies social life is organized as an immense accumulation of spectacles. What was once lived directly is displaced into representation, into staging. The spectacle is not simply a collection of images, but a form of social relationship mediated by those images and by the devices that disseminate them. The economy and power, Debord notes, legitimize themselves through this visual regime that shapes how we perceive the world and ourselves.
In this context, visuality is increasingly associated with surveillance, voyeurism, and spectacle, and ever less with critical reflection. If we once spoke of an "inquisitorial eye," today we face one full of doubts: a gaze saturated by the proliferation of images manipulated or generated by AI, for which distinguishing true from false, reality from fiction, has become extremely difficult.
The “social schizophrenia” of our time does not consist only in doubting what we see, but in the fact that this doubt erodes already fragile bonds in societies marked by inequality, violence, and weak institutions, as so many in Latin America are. Allowing generative AI to operate without counterweights in this context risks deepening confusion and polarization. Putting it at the service of democracy, by contrast, implies strengthening verification systems, promoting visual and media literacy from school onward, and opening public debate about the legitimate uses of synthetic images. Ultimately, what is at stake is not only the status of visual truth, but the possibility of sustaining a minimum of shared trust that makes life in common livable.
*Machine translation, proofread by Ricardo Aceves.