Large language models pose risk to science with false answers, says study

Large language models (LLMs) pose a direct threat to science because of so-called "hallucinations" (untruthful responses) and should be restricted to protect scientific truth, argues a new paper from leading artificial intelligence researchers at the Oxford Internet Institute.

From Tech Xplore (electronic gadgets, technology advances and research news): https://ift.tt/pD1aziN
