AI hallucinations
-
Oh, goody!
An enduring problem with today’s generative artificial intelligence (AI) tools, like ChatGPT, is that they often confidently assert false information. Computer scientists call this behavior “hallucination,” and it’s a key barrier to AI’s usefulness.
Hallucinations have led to some embarrassing public slip-ups. In February, Air Canada was forced by a tribunal to honor a discount that its customer-support chatbot had mistakenly offered to a passenger. In May, Google was forced to make changes to its new "AI Overviews" search feature after the bot told some users that it was safe to eat rocks. And last June, two lawyers were fined $5,000 by a U.S. judge after one of them admitted he had used ChatGPT to help write a court filing. He came clean after it emerged that the chatbot had added fake citations to the submission, pointing to cases that never existed.
But in good news for lazy lawyers, lumbering search giants, and errant airlines, at least some types of AI hallucinations could soon be a thing of the past. New research, published Wednesday in the peer-reviewed scientific journal Nature, describes a new method for detecting when an AI tool is likely to be hallucinating. The method can distinguish correct from incorrect AI-generated answers about 79% of the time, roughly 10 percentage points better than other leading methods. Although the method addresses only one of the several causes of AI hallucinations, and requires about 10 times more computing power than a standard chatbot conversation, the results could pave the way for more reliable AI systems in the near future.
https://time.com/6989928/ai-artificial-intelligence-hallucinations-prevent/?utm_placement=newsletter
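The article doesn't spell out how the new method works, but that "10 times more computing power" line is a clue: detectors in this family generally sample the model several times on the same question and measure how consistently the answers agree in meaning, with inconsistency treated as a warning sign. Here's a minimal, illustrative sketch of that general idea in Python; the clustering rule, the sample count, and the naive equality check are my assumptions for the demo, not the paper's actual recipe (which, as I understand it, uses a language model to judge whether two answers mean the same thing).

import math

def semantic_consistency_entropy(answers, same_meaning):
    """Group answers that mean the same thing, then compute the entropy of
    the group sizes. High entropy => the model keeps changing its story =>
    higher risk that any single answer is a hallucination."""
    clusters = []  # each cluster is a list of answers judged equivalent
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

def naive_same_meaning(a, b):
    # Toy equivalence check: exact text match after lowercasing. A real
    # detector would use an entailment model here, so that differently
    # worded but equivalent answers fall into the same cluster.
    return a.strip().lower() == b.strip().lower()

# Imagine these came from asking the chatbot the same question five times,
# which is where the extra compute cost comes from.
samples = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",   # an entailment check would merge this with the first
    "Paris is the capital of France.",
    "Lyon is the capital of France.",
    "Paris is the capital of France.",
]
score = semantic_consistency_entropy(samples, naive_same_meaning)
print(f"consistency entropy = {score:.2f}")  # higher score => less consistent => flag as risky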
-
Last fall, I had my students do an AI activity in class, and this was one of the things I really drilled into them: false information from AI is really hard to detect because it seems legit. Often it's plausible, which is part of it, but more than that, it's because the language is so well-constructed, i.e., there are no grammatical mistakes, the word choice is not only good but natural, the response fits the query quite well, etc. Oh, and one other thing: AI output in response to a query is almost always pretty long. Rather than one or two sentences, you often get several paragraphs, and the sheer amount of text can sometimes be overwhelming.
And because of all that, the user has a hard time approaching the output with an appropriate degree of suspicion, which ends up making them gullible and too easily convinced. And if your goal is to not be fooled by AI, there's a catch-22 embedded there: the most reliable way to detect an AI hallucination is when the topic and content are things you already know well, but if you already know, you're not going to be asking AI in the first place.
BTW, here's the activity we did: in the latter half of the semester, I had students work in groups to put together a set of essay-type questions based on material covered in the first half of the semester. The idea was that this was all information and ideas the students knew very well, because we'd spent the previous 8-10 weeks talking about it. After crafting the questions and creating rubrics with the kinds of info/content they'd want to see in the answers, each group posed two of its questions to the AI and then evaluated and scored the answers.
Students were very critical of the quality of the answers. There weren't a lot of flat-out wrong answers, but the AI scored low across the board for things like lack of depth, missing the main point of the questions, and so on. The students pretty much felt that the AI answers seemed like something coming from someone who didn't really know anything about the subject and was just trying to fake it by throwing out a lot of commonly known tidbits.
I won't teach that class again until Spring 2025, so it will be interesting to see 1) whether this kind of exercise is still relevant, and 2) how well the AI performs if we do it.
-
No idea if ChatGPT or some other generative AI is involved.
A student in America asked an artificial intelligence program to help with her homework. In response, the app told her, "Please die." The eerie incident happened when 29-year-old Sumedha Reddy of Michigan sought help from Google's Gemini chatbot, a large language model (LLM), the New York Post reported.
The program verbally abused her, calling her a “stain on the universe.” Reddy told CBS News that she got scared and started panicking. “I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time to be honest,” she said.
The assignment Reddy was working on involved identifying and solving challenges that adults face as they age. The program blurted out words that hit the student hard and were akin to bullying.