Team teaches AI models to spot misleading scientific reporting

Artificial intelligence isn't always a reliable source of information: large language models (LLMs) like Llama and ChatGPT can be prone to "hallucinating" and inventing bogus facts. But what if AI could be used to detect mistaken or distorted claims, and help people find their way more confidently through a sea of potential distortions online and elsewhere?

from Tech Xplore - electronic gadgets, technology advances and research news https://ift.tt/9FdsL6x
