Fighting fake 'facts' with two little words: A new technique to ground a large language model's answers in reality
Asking ChatGPT for answers comes with a risk: it may offer you entirely made-up "facts" that sound legitimate, as a New York lawyer recently discovered. Despite being trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information, known as hallucinations.
from Tech Xplore - electronic gadgets, technology advances and research news https://ift.tt/nKTrysc