Large language models make human-like reasoning mistakes, researchers find

Large language models (LLMs) can solve abstract reasoning tasks, but they are susceptible to many of the same kinds of mistakes that humans make. Andrew Lampinen, Ishita Dasgupta, and colleagues tested state-of-the-art LLMs and humans on three kinds of reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task.

from Tech Xplore https://ift.tt/My5G68e
