
The other day, I was at a meeting where we were warned about falling into the trap of believing everything AI tells us. AI frequently “hallucinates”, we were told. Indeed, artificial intelligence can make things up. But guess what: so can human beings. The notion that AI hallucinates “all the time” is itself a made-up “fact”, generated by people who do not trust a machine. They would rather rely on what other people tell them, even though people lie and cheat.
People concerned about AI claim to have evidence that platforms like ChatGPT produce false information. They tell me they have seen this highlighted on social media. Wow. Sometimes when I ask people how they know that AI is making things up, they just say “everyone knows”. Personally, I prefer evidence to relying on “everyone”.
Several researchers are testing AI services to measure their hallucination rates. One of these studies puts the hallucination rate of the top AI models at 0.7%. Humans, on the other hand, hallucinate at a rate of 7%, according to research at the University of Wisconsin-La Crosse. In other words, the people in your office are ten times more likely to make things up than ChatGPT. Who do you trust now?
However, despite the facts, people are increasingly distrustful of artificial intelligence. This was shown this week in research from Georgia State University, which found that the public believes those working on AI are the least prudent scientists of all. People have negative views of scientists generally, and until recently, climate scientists were the most negatively perceived. Now they have been displaced by people working on AI, who sit at the bottom of the pile in terms of public perception. It is all tied up with the belief that AI scientists could be producing something with significant unintended consequences.
That view is confirmed in the recent book “Smart Until It’s Dumb”, which reveals stories from “inside” the world of AI software engineering. It turns out that the humans working on AI have done some hallucinating themselves. The book suggests inadequate controls, and even possible cover-ups. It does not paint a positive picture of the world of artificial intelligence companies.
Perhaps the concerns raised in the book are the kind of issues that many people worry about. Indeed, a study by Statista shows that the majority of people are not excited about the potential of AI. In the USA, for instance, only 22% of people view AI positively. The UK does not even feature in the league table.
Even in the academic world, I still find people concerned about artificial intelligence, worried that it will put them out of a job, or desperately clinging to the world they were used to in the 1980s. They seem oblivious to the fact that the power of AI is increasing rapidly. One study says it is doubling every three months or so, which amounts to roughly a sixteenfold increase in a year. Another suggests that the amount of training material used by AI programs is growing fivefold each year.
Here’s the problem. People who are reluctant to engage with AI and wary of its potential face a future that could exclude them. Indeed, the boss of Amazon sent a message to staff this week saying they had to embrace AI. He said, “We will need fewer people doing some of the jobs that are being done today.” You have probably heard the modern adage that “you will not lose your job to AI, but to someone who knows how to use AI”.
Yet, I know, you will still have feelings of distrust and concern about AI. So, it is fortunate that Japanese psychologists have come up with an understanding of our relationship with AI, based on “Attachment Theory”. It turns out that our reluctance to use AI is possibly down to the lack of emotional support it provides. We see it in human terms, yet it does not quite respond as effectively as a human. That can create what is called “insecure attachment”, which lies at the heart of a lack of trust. To get the best out of something like ChatGPT, you really need to see it as a reliable friend. And the data show that, on the whole, it is much more reliable than the parents you became attached to, who lied to you throughout your childhood. “No, the injection won’t hurt,” they said.
People lie, but AI is much less likely to do so. So it is not time to give up on AI. Just judge it in the same evidence-based way you interpret information from your colleagues, and you will find it is actually rather good.
Now, all you have to do is guess whether I made any of this up.