Could you learn to trust artificial intelligence?

Last night, along with half the adults in the UK, I was glued to my TV set to devour the last episode of “The Traitors”. If you haven’t watched the series, you won’t appreciate how gripping it is to watch 22 people try to work out whom to trust. Spoiler alert – the most devious liar was the winner.

Why the other 21 players could not spot his untrustworthy behaviour is something of a mystery to those of us watching at home. Psychologists at the University of Aberdeen offer one explanation: their research shows that when we are in a group, we mentally combine the faces of the individuals into a single impression. This gives us a mental short-cut for deciding whether to trust a group of people. In other words, in a social situation we focus on the potential threat from the group rather than on the individuals within it.

Artificial intelligence, however, would have no such problem. When looking for the facial cues of someone telling fibs, it could be programmed to check each individual rather than rely on what the Aberdeen psychologists call “ensemble” recognition. On that measure, AI would be much more helpful than human instinct, which is notoriously poor at spotting untrustworthy individuals.

However, as I discovered this week, it is AI that people do not trust. I was at BETT (British Educational Training and Technology), the annual educational technology conference and exhibition, where I met several people who were sceptical about artificial intelligence. Even though almost every exhibitor, it seemed, was claiming to have integrated AI into its software, the people I spoke with were uneasy. There was a palpable lack of trust in this emerging technology.

This mistrust was heightened this week by the fake phone calls, supposedly from President Biden, telling voters that they did not need to turn out in the primary elections. Then thousands of deepfake images and videos of Taylor Swift were published on “X” (formerly Twitter). The pictures were pornographic, so inevitably they went viral, providing further ammunition for the argument that artificial intelligence is not to be trusted.

Meanwhile, from the more sedate world of the International Monetary Fund came the warning that AI is going to affect 40% of all jobs in the world. Goldman Sachs has already said that 300 million jobs are at risk. That is another reason we dislike artificial intelligence. Plus, there have been several examples of the bias AI can develop. Back in 2019, users accused Apple’s credit card of gender bias after its algorithm appeared to offer women lower credit limits than men.

So, with all of this going on, is it any wonder that we don’t trust artificial intelligence? Yet, at the same time, we believe humans are more trustworthy, even though The Traitors shows that is not always the case. This misplaced preference for trusting humans over AI is driving us away from the technology.

Indeed, even in the academic world I find people scared to use tools like ChatGPT. They don’t trust it and therefore hold back from exploring or experimenting with it. The problem is that the more people put up a trust barrier against AI, the further they drift from being able to use it productively. Development is so rapid that if you do not get to know AI systems now, you will fall far behind those who are already dipping their toes in the water.

The dilemma is this: how do people get involved with AI if they do not trust it? Yet if they don’t get involved, they’ll fall so far behind their competitors that they could face business collapse. And if that happens, the people who did not use AI will blame the technology rather than their own reluctance to deal with something they did not trust.

One thing you can do, if you are wary of AI, is follow the findings of a study from researchers at the University of Oregon, published this week. It showed that it is possible to learn about something without actively using it. The research explored the notion of passive exposure: simply being around something makes it easier to understand. The experiment showed that passive exposure speeds up subsequent learning.

This means you do not need to take part in anything to do with AI if you still do not trust it. Instead, all you need to do is watch other people using artificial intelligence. Sit with colleagues as they work in ChatGPT. Watch videos about using artificial intelligence. Wander up and down the aisles at exhibitions like BETT, where artificial intelligence is on display around every corner. Then, when you are ready to “dive in” to actual usage of AI, you will be able to do so more quickly. You may lack trust in artificial intelligence, but passive exposure is likely to improve your perception of it. Sitting back in blissful ignorance is not an option. Indeed, if you did that, you would be the same as the 21 losers on The Traitors.
