Fri. Dec 8th, 2023

Researchers at the Oxford Internet Institute are raising concerns about the tendency of the Large Language Models (LLMs) used in chatbots to hallucinate. These models can generate false content and present it as accurate, posing a direct threat to science and scientific truth.

The paper, published in Nature Human Behaviour, highlights that LLMs are designed to produce helpful and convincing responses without any guarantee that those responses are accurate or aligned with fact. LLMs are currently treated as knowledge sources and used to generate information in response to questions or prompts, yet the data they are trained on may not be factually correct.

One reason for this is that LLMs are often trained on online sources, which can contain false statements, opinions, and other inaccurate material. Because LLMs are designed as helpful, human-sounding agents, users tend to trust them as they would a human source of information. This can lead users to believe that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.

The researchers urge caution in treating LLMs as a source of knowledge and suggest instead using them as “zero-shot translators”: the user provides the model with the relevant data and asks it to transform that data into a conclusion or into code, rather than relying on the model’s own stored knowledge. Used this way, it becomes far easier to verify that the output is factually correct and consistent with the provided input.
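As a concrete illustration of that workflow, here is a minimal Python sketch. The call_llm helper, the dataset, the prompt wording, and the verification checks are all illustrative assumptions standing in for a real chat API and a real task; they are not details from the paper.

```python
# Minimal sketch of the "zero-shot translator" pattern: instead of
# querying the model as a knowledge source, supply the trusted data
# yourself and ask only for a transformation of it.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call.

    Returns a canned response so the sketch runs end to end; replace
    the body with your LLM provider's client call.
    """
    return ("sample_C has the highest value, exceeding the mean of the "
            "other samples by 0.45.")

# Trusted input supplied by the user, not recalled by the model.
measurements = {"sample_A": 4.2, "sample_B": 3.9, "sample_C": 4.5}

prompt = (
    "Using ONLY the data below, state which sample has the highest value "
    "and by how much it exceeds the mean of the other samples. "
    "Do not draw on any outside knowledge.\n\n"
    f"Data: {measurements}"
)

answer = call_llm(prompt)

# Because the source data is known, the claims in the output can be
# checked directly against it rather than taken on trust.
best = max(measurements, key=measurements.get)
others = [v for k, v in measurements.items() if k != best]
margin = round(measurements[best] - sum(others) / len(others), 2)

assert best in answer, "model named the wrong sample"
assert str(margin) in answer, "model reported the wrong margin"
print(answer)
```

The point of the sketch is the last few lines: when the model only transforms data the user already holds, its output can be verified mechanically against that input, which is not possible when the model is asked to recall facts on its own.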

While LLMs will undoubtedly assist with scientific workflows, it is crucial that the scientific community use them responsibly and maintain clear expectations of how they can contribute. Scientists should not rely solely on AI models for research, but rather use them as tools to enhance our understanding of complex problems.

Overall, while LLMs show great potential for enhancing communication between humans and machines, they must be used with caution and with an understanding of their limitations. The scientific community should prioritize information accuracy in all areas of research while leveraging these tools for their intended purpose.

By Editor
