The opportunities and challenges that artificial intelligence (AI) poses for nearly every area of human activity are being debated worldwide, and Latin American countries are no exception. In Chile, a bill has been introduced to regulate AI and promote its ethical and responsible development; Colombia’s Constitutional Court has ruled, among other things, that AI cannot replace a judge; and Brazil’s new AI Plan foresees a $4.1 billion investment through 2028 to develop its own technological infrastructure.
In Argentina, AI is also gaining prominence in political discussions. In May, President Javier Milei traveled to the United States and met with some of the world’s most powerful tech entrepreneurs. Upon his return, the president declared that he wants to make Argentina the fourth global AI hub, alongside China, the United States, and Europe. Argentina’s advantages, according to him, include skilled human resources, available energy, and the low temperatures needed to cool large data centers.
At the same time, the Argentine Ministry of Security announced the creation of an AI Unit aimed at “the prevention, detection, investigation, and prosecution of crime.” The new AI Unit’s tasks would include monitoring social networks, websites, and other applications, processing real-time security camera footage through facial recognition, and analyzing historical crime data using machine-learning algorithms.
Against the grain?
According to the Ministry of Security’s resolution, many countries at the forefront of integrating these technologies are already using AI for these purposes. What the resolution failed to mention is that these same functions are being banned in some of the very countries it refers to. The AI law recently approved by the European Union, for example, prohibits the use of real-time biometric identification systems in public spaces for security purposes, except in specific, well-defined contexts.
In Argentina, the National Chamber of Deputies organized a Regional Summit of Parliamentarians on AI in June to “create appropriate regulatory frameworks and promote responsible innovation while protecting fundamental rights and values.” The summit served as a prelude to various AI-related bills that began to be discussed in August.
The fact that the debate on AI’s development and its societal impact has reached the parliamentary level could be good news, as it brings the issue into the realm of democratic deliberation. What is concerning is that the debate is being pushed forward hastily and superficially by a small group of leaders and public officials, without broader public engagement or the negotiation and consensus-building that such a significant issue requires.
Limitations of the libertarian vision
As in many other areas, the Argentine government argues that creating a favorable environment for AI development in the country requires avoiding any form of state intervention. However, the idea that regulation hinders innovation, particularly in digital technologies, is overly simplistic. Today, most experts agree that broad, well-designed, and timely regulation incentivizes innovation, partly because it provides greater certainty about the rules of the game.
If we accept that some form of regulation is necessary, the question is no longer whether to regulate but how to regulate a technology that is far more complex (and much less abstract) than often portrayed. After an initial phase in which governments preferred to leave regulation in the hands of tech companies, most countries now recognize that self-regulation does not work without some form of state intervention.
Decades of an unregulated digital environment have allowed a few companies to accumulate unprecedented power. We live in a world where the combined market capitalization of the leading tech giants exceeds the GDP of some of the wealthiest countries and even entire continents. This is economic power based on the growing extraction, accumulation, and processing of vast amounts of data. It is also political power, stemming from the increasing importance of the virtual world as a space where public issues are resolved. This power allows figures like Elon Musk, for example, to behave on the international stage like de facto heads of state.
The advancement of AI in all areas of our lives is not, as many claim, inevitable, nor something to simply “adapt” to. How this technology develops and is deployed depends on the direction we choose to take. That direction can be set by the boards of a few companies dominating the global market, or by citizens, their political representatives, and democratic institutions, channeling the voices of civil society, the concerns of the media, journalists, and academic researchers, and the interests of the private sector, among other key stakeholders.
Creating a favorable environment for AI could mean either offering comparative advantages so that large tech companies find an ideal space to expand their operations, or developing the public policies and governance frameworks needed to steer technological development toward practices that serve the common good. Thinking about AI within the framework of our democracies ultimately means fostering an open, inclusive, and pluralistic debate about technology and how we want it to shape our society.
Author
Researcher at the Chair of Artificial Intelligence and Democracy of the European University Institute. Holds a master’s degree in Transnational Governance from the School of Transnational Governance in Florence, Italy.