Study finds bias in language models against non-binary users
What happens when the technology meant to protect marginalized voices ends up silencing them? Rebecca Dorn, a research assistant at USC Viterbi's Information Sciences Institute (ISI), has uncovered how large language models (LLMs) used to moderate online content are failing queer communities by misinterpreting their language.
from Tech Xplore - electronic gadgets, technology advances and research news https://ift.tt/kTKjO0g