Stanford Researchers Find ‘Alarmingly High’ Failures In Legal AI

(January 12, 2024, 1:29 PM EST) -- STANFORD, Calif. — Large language models (LLMs) demonstrated an “alarmingly high” error rate when used in the legal field, failing to identify the core holdings of judicial opinions at least 75% of the time and producing hallucinations at rates of up to 88% when asked specific legal questions, Stanford University researchers said Jan. 11....
