Invisible trap: LLMs may be fuelling a vicious circle of misinformation

Rodrigo Pereira, CEO of A3Data, a consulting firm specializing in data and artificial intelligence, on the risks posed by the advance of LLMs if companies don't take precautions.

In recent years, we have witnessed a significant transformation in the way we search for and consume information. Large language models (LLMs) are becoming increasingly widespread, progressively replacing traditional search engines such as Google.

With fast, natural-language and seemingly reliable responses, these models are becoming the first choice for many ordinary citizens. But are we aware of the risks built into this shift?

According to a recent paper written by researchers at Stanford University, the University of Southern California, Carnegie Mellon University and the Allen Institute for AI, LLMs such as GPT and LLaMA-2 are often reluctant to express uncertainty, even when their answers are incorrect: about 47% of the answers the models gave with high confidence were wrong.

In addition, the research addresses the issue of bias in models and in human annotation. During Reinforcement Learning from Human Feedback (RLHF), language models are trained to optimize their responses based on human feedback. However, this process can amplify certain biases present in the training data or in the feedback itself.
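To make that mechanism concrete, the sketch below is a toy illustration (not taken from the paper): it trains a minimal reward model on pairwise preferences, as in an RLHF pipeline, using entirely hypothetical features and a hypothetical annotator who tends to favour assertive wording.

```python
# Toy sketch: a linear reward model fit on pairwise preferences.
# Features, data and the annotator's behaviour are all hypothetical,
# chosen only to show how a systematic preference for assertive
# wording can leak into the learned reward.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate answer is described by two toy features:
# [assertiveness, factual_accuracy], both in [0, 1].
def sample_pair():
    a = rng.uniform(0, 1, size=2)   # answer A
    b = rng.uniform(0, 1, size=2)   # answer B
    return a, b

# Hypothetical annotator: prefers the more assertive answer 80% of the
# time, regardless of which answer is actually more accurate.
def annotator_prefers_a(a, b):
    if rng.uniform() < 0.8:
        return a[0] > b[0]          # judge on assertiveness
    return a[1] > b[1]              # judge on accuracy

# Linear reward model r(x) = w . x, fit with the Bradley-Terry pairwise
# loss commonly used for preference data.
w = np.zeros(2)
lr = 0.1
for _ in range(5000):
    a, b = sample_pair()
    label = 1.0 if annotator_prefers_a(a, b) else 0.0
    p_a = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))   # P(A preferred)
    grad = (p_a - label) * (a - b)                 # gradient of the log-loss
    w -= lr * grad

print("learned reward weights [assertiveness, accuracy]:", w)
# With biased preferences, the assertiveness weight dominates: the reward
# model, and any policy optimized against it, is pushed toward confident-
# sounding answers rather than accurate or well-calibrated ones.
```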

Among the biases that must be taken into account are those related to gender and race. When annotators provide feedback shaped by these stereotypes, or reward answers that avoid expressing uncertainty in contexts involving minorities, models end up perpetuating and amplifying those human perspectives.

Another worrying bias is annotators' preference for answers that sound more assertive, even when the underlying information is uncertain. This leads models to avoid expressing doubt to the user, creating a false impression of solid knowledge when, in fact, the answer may be wrong.

For example, a categorical statement about a country's capital might be preferred by annotators even if the model was uncertain, resulting in a potentially incorrect but confidently presented answer.

These biases are concerning because they shape the way responses are generated and perceived by users. When combined with the excessive trust that users tend to place in the answers of LLMs, these biases can lead to the spread of distorted information and the consolidation of social biases.

We are, therefore, facing a possible vicious circle. As more people turn to LLMs for information, overreliance on these models can amplify the spread of misinformation.

In this sense, the process of aligning models with human feedback (RLHF) may be exacerbating the problem, rewarding assertive responses and underestimating the importance of expressing uncertainty. Not only does this perpetuate misinformation, but it can also reinforce prejudices and social biases, creating a feedback loop that intensifies over time.

To prevent this vicious cycle from taking hold, action is needed on several fronts. The first is transparency in the tools themselves: LLMs should be designed to express uncertainty clearly and in context, allowing users to gauge how reliable the information provided is. Training should also draw on a more diverse range of feedback, to mitigate the biases introduced by a limited subset of users or annotators.
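As one sketch of what that transparency could look like in practice, the snippet below assumes the serving layer can expose per-token log-probabilities for a generated answer (many model APIs can) and turns them into a clearly labelled confidence note; the scoring rule, threshold and wording are illustrative choices, not an established standard.

```python
# Minimal sketch: attach an explicit uncertainty note to an answer
# instead of returning it bare. The confidence proxy and threshold
# below are illustrative assumptions, not a standard.
import math

def confidence_score(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability: a crude proxy for model confidence."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def present_answer(answer: str, token_logprobs: list[float],
                   threshold: float = 0.7) -> str:
    """Return the answer together with an explicit confidence statement."""
    score = confidence_score(token_logprobs)
    if score < threshold:
        return (f"{answer}\n\n(Note: the model's confidence in this answer is "
                f"low, about {score:.0%}. Please verify it independently.)")
    return f"{answer}\n\n(Model confidence: about {score:.0%}.)"

# Hypothetical usage, with log-probabilities returned by the serving layer:
print(present_answer("The capital of Australia is Canberra.",
                     [-0.05, -0.2, -0.6, -0.9, -0.1]))
```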

It is also important to educate users and raise awareness about the limits and potential of AI, encouraging a more critical, questioning approach. Finally, regulatory bodies and the industry itself should develop regulations and standards to ensure that AI models are used ethically and safely, minimizing the risk of large-scale misinformation.

We are at a pivotal point in the history of human-AI interaction. In this context, the massive dissemination of language models without due care can lead us to a dangerous cycle of misinformation and reinforcement of biases.

That is why we must act now to ensure that technology serves to empower society with correct and balanced information, not to spread uncertainty and prejudice. In the information age, true wisdom lies not in seeking the quickest answers, but in questioning and understanding the uncertainties that come with them.
