
Would “Community Notes” be a real solution to guarantee freedom of expression?

Whether because of the lack of transparency about how this model actually works, or because of the available data, it is difficult to assume that "Community Notes" are more democratic than professional fact-checking.

We have barely started the year 2025 and we already have a crucial issue on the global agenda. Meta's CEO, Mark Zuckerberg, recently announced that the company will eliminate fact-checking from its platforms and adopt a moderation system based on so-called "Community Notes", already in use on Elon Musk's X.

The announcement used unusually bellicose language, with the clear contours of a worldview rooted in the most radical and polarized end of the political spectrum, and it signaled where one of the tech giants stands in the battle over the regulation of social networks.

Thus begins another chapter of a dispute that divides the world and centers on freedom of expression, one of the most classic rights, debated since the seventeenth century.

It is in this context that the debate stays hot: the resistance of digital platforms to external controls (such as legislation) is well known, as is the great conceptual controversy over the meaning of freedom of expression, which has become a mantra of the far right. In that reading, freedom of expression is something that cannot be limited, and it is placed at the core of the life that, according to these spokespeople, individuals must live.

With a little more rationality, however, one realizes that this whole ideologically tainted discussion lacks the most basic analysis: after all, would "Community Notes" really solve the problems countries face in managing freedom of expression online? Or are they merely a "siren song" that lets platforms shirk their moderation duties while society as a whole grows even more divided?

To approach this question, it is necessary to understand how the new Meta policy will work. It has not yet been publicly disclosed, and a recent response to the Brazilian Attorney General's Office suggests there is not even a clearly constructed policy on how it will be adopted. All Meta has stated is that, for now, the changes will affect only the United States, so the model can be tested there. Given the lack of transparency Meta has shown in recent times, there is no certainty that this will really be the case.

Therefore, we will begin with a brief analysis of the "Community Notes" as applied on X, since in theory they will be the basis for Meta's model.

According to X's own information and research on the subject, anyone with a valid phone number and an X account more than six months old can sign up for "Community Notes". One might assume it would be limited to paying, verified profiles, but it is more general than that. Acceptance, however, is not automatic, and it was not possible to obtain more information about this admission process.

Once accepted, the person can suggest notes on content they believe contains disinformation or fake news, but the note is not yet made public. Once a note is suggested, it is sent to other people in the community, who assess its content and vote on whether it is relevant. An important point: the algorithm decides who reviews the note, since the sample of reviewers, like the votes, is supposed to be diverse. If there is no consensus on the note, it is not published. The algorithm also supposedly checks whether the reviewer sample is truly diverse, but X provides no information on how this actually happens. Meanwhile, the note remains unpublished while the misleading or false content continues to circulate normally.
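The publication rule described above can be sketched in a few lines. This is a hypothetical, highly simplified illustration: X's real ranking algorithm (a form of "bridging" based on matrix factorization) is more complex and not fully public, and the group labels and thresholds below are assumptions, not X's actual parameters.

```python
def should_publish(votes):
    """Decide whether a suggested note goes public.

    votes: list of (reviewer_group, rated_helpful) pairs, where
    reviewer_group is a stand-in for the "diverse perspectives"
    the algorithm is said to require.
    """
    # Groups in which at least one reviewer rated the note helpful.
    helpful_groups = {group for group, helpful in votes if helpful}
    unhelpful = sum(1 for _, helpful in votes if not helpful)
    # Publish only with agreement across at least two distinct groups
    # and few dissenting votes (the 20% threshold is an assumption).
    return len(helpful_groups) >= 2 and unhelpful <= len(votes) * 0.2

# Cross-group agreement -> the note is published.
print(should_publish([("A", True), ("B", True), ("A", True), ("B", True)]))  # True
# Agreement only within one group -> no consensus, note stays hidden.
print(should_publish([("A", True), ("A", True), ("B", False), ("C", False)]))  # False
```

The key design point mirrored here is that a simple majority is not enough: without agreement across groups the note never appears, which is exactly why, as the surveys below show, most suggested notes are never published.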

According to two 2024 surveys, by the Center for Countering Digital Hate and The Washington Post, the proposal is inspiring because it bets on a large arena of public debate, but it does not work in practice, especially at key moments such as elections.

The first survey found that suggested notes that actually corrected false and misleading claims about U.S. elections went unpublished in 209 of a sample of 283 publications deemed misleading, or 74%.

In the second survey, the data are even more troubling. Even when a note is published, the process usually takes more than 11 hours, enough time for the questioned content to circulate freely among millions of people. Moreover, only 7.4% of the notes proposed in 2024 about U.S. elections were made public, a share that fell further in October, to just 5.7%.

But let's think about the community itself. According to these studies, volunteers have little incentive to keep moderating, further reducing the chances of reaching the consensus the algorithm requires when voting on a suggested note. Because the notes are merely suggestions, there may be a bias toward certain political positions, unbalancing which content gets more closely "monitored". On the other hand, for those who are there in a spirit of collaboration, it can be frustrating that content denying the effectiveness of a vaccine, for example, circulates normally while the suggested note debunking it, supported by reliable sources, is locked in an intense dispute over votes.

In addition to demanding significant dedication, since voluntary content moderation can be intense, in the end the feeling of contributing nothing can also be real. In research conducted by Lupa Agency in partnership with Lagom Data, of 16,800 note suggestions in Portuguese made in November 2023, only 1,352 were published in the X feed. In other words, 92% of all "Community Notes" in Portuguese never reached users: 89% were still pending evaluation and had not been shown, and 3% had already been rejected, leaving only 8% visible.
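The percentages quoted can be checked directly against the raw counts reported by Lupa and Lagom Data:

```python
# Recomputing the Lupa/Lagom figures (November 2023 sample of
# Portuguese-language note suggestions on X).
total = 16_800      # notes suggested
published = 1_352   # notes actually shown in the X feed

print(round(published / total * 100, 1))        # 8.0  -> share made visible
print(round((1 - published / total) * 100, 1))  # 92.0 -> share never shown
```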

Whether because of the lack of transparency about how this model actually works, or because of the data described above, it is difficult to assume that "Community Notes" are more democratic than professional fact-checking, where people are dedicated exclusively to the task and the content-rating guidelines are clearer and more accessible. Community notes may improve, but until then I would not bet on a miracle solution to the fight over the truth that circulates on social networks.


*Machine translation proofread by Janaína da Silva.

Author


PhD and Master in Legal and Political Sciences, University of Salamanca, Spain. Professor of Constitutional, Electoral and Human Rights Law at several institutions in Brazil and Latin America. General Coordinator of the organization Transparência Eleitoral Brasil.

