Artificial Intelligence: Should we risk losing control of our civilization?

The latest leap in artificial intelligence (AI) is often captured by the label "generative" AI. Software is no longer limited to conventional algorithms, that is, logical procedures that reach predetermined results through sequential and selective structures and cycles of repetition. It now incorporates a "generative" leap: a learning capacity that enables it to produce novel responses or instructions beyond those foreseen by its programmers.

How far can this decoupling from explicit programming go? Futuristic conjectures abound. What is certain is that, thanks to this innovation, AI-enabled devices (powered by massive advances in memory and data-processing speed) can make remarkable contributions in countless fields.

At the same time, generative AI brings with it serious threats of various kinds. Numerous studies anticipate the progressive replacement of human labor across the most diverse sectors by automated systems.

Other harms, however, would affect the general population in a more diffuse and imperceptible manner. AI, and generative AI even more so, intervenes without people voluntarily resorting to it, which allows them to be manipulated for commercial ends as well as for social control.

In a recent open letter entitled "Pause Giant AI Experiments: An Open Letter," renowned researchers were blunt: "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening (…) recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." Hence, the signatories go so far as to ask questions as pressing as: "Should we risk loss of control of our civilization?"

Faced with such uncertainty, the letter calls for measures to ensure the safety, transparency, robustness, and reliability of AI systems. It also proposes that all AI laboratories pause, for at least six months, the training of systems more powerful than GPT-4.

In this context, it is worth asking: would a six-month suspension be enough to stop the ferocious race corporations have been waging? And what values or altruistic purposes would ultimately prevail?

It would be difficult for such values or goals to prevail when many researchers show signs of technocratic messianism in their own work. In his book "Infocracy," Byung-Chul Han recalls that Alex Pentland, former director of the Human Dynamics Lab at the Massachusetts Institute of Technology (MIT), wrote: "If we had a 'divine eye', a global vision, we could achieve a true understanding of how society works and take measures to solve our problems."

The EU points the way to multilateral regulation

Given the risks associated with AI, an international regulatory order seems necessary, though difficult to achieve amid the dizzying pace of technological change. Hence the warnings, but also the cautious approach, of the proposal for regulation conceived by the European Parliament and the Council of the European Union "laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts of the Union". The Parliament adopted its position on June 14, 2023, and negotiations with the Member States are now open.

By classifying AI systems according to different levels of risk and providing for various exceptions and transitional arrangements, the EU initiative is seen as a germinal test of future multilateral regulation on the subject.

One of the main aspects considered by the EU, and one that deserves special attention from Latin American governments, is the global functioning of the digital economy, which now rides on generative AI. As is well known, from content production to content delivery, programs operate in networks, running and updating themselves and querying databases through cross-border data flows usually managed in the cloud. A small number of transnational corporations (big tech) tend to concentrate the capital these businesses generate.

Consequently, the incipient EU regulation takes due note of the possible extraterritorial location of providers: it will also apply to providers and users of AI systems established in a third country when the system's output is used in the EU.

Similarly, developing countries should protect users in their territories from operations that industrialized countries have prohibited or restricted for providers of such technologies. But how can this be done?

Until a multilateral or plurilateral arrangement is formalized, a transitional stage seems to be opening up. During it, national or sub-regional regulations adopted by Latin American countries should aim at agreements with the governments of industrialized countries, in the form of mutual recognition agreements on conformity-assessment procedures for AI operations deemed high-risk.

This would block cross-border operations seeking to introduce generative AI software into Latin American markets when its level of risk is considered unacceptable in the countries of origin.

Regulating is not enough

It is already possible to imagine how the threshold of our psychic defenses could be crossed by coupling mental content with digital devices, the digitalization of the human mind (mind uploading). In the same vein, the new generation of robotics seeks to integrate, via sensors, the behavior of the people with whom devices interact, as in experiments by companies such as Embodied and Alphabet (Google's parent company).

Without presuming to probe the near future, we should review what has been happening for some years now. Digital logic aims to align cognitive functions with the operation of digital devices. In other words, the minds of billions of people have already been "digitized," curtailing Socratic and dialectical reasoning and, by extension, critical thinking.

It is not new that the training aimed at instilling digital logic is sponsored by the same big tech companies in their capacity as content providers. Their objective is to promote mass consumption through programs conceived on virtual platforms that impose dichotomous options: acceptance or rejection.

Electronic communications are a form of speech, and in this sense the asymmetrical relationship between the user, pressured to define himself before each option the device presents, is particularly relevant. Computer security expert Patrick Wardle states in this regard: "I always think of phones as our digital soul."

While greater emphasis on educational initiatives is advisable, it is no longer a matter of promoting digital literacy, which device and content providers already handle efficiently. Rather, educational policies should foster the development of Socratic and dialectical reasoning and, by extension, critical thinking.

At this point, Latin American governments face a challenge that they should not shirk.

*Translated from Spanish by Micaela Machado Rodrigues
