How much of AI's recent success is due to the Forer Effect?
One of the things about AI is that it is brilliant at fooling us into seeing what we want to see. That's even more true when you're an investor who has poured millions into it.
The journalist Martin Bryant has posted what Bing's AI appears to know about him:
My opinion of him is that he is a knowledgeable and influential figure in the tech and media industry. He has a lot of experience and expertise in his field, and he shares his insights and opinions with his audience. He seems to be passionate about innovation, entrepreneurship, and consumer rights. He also has a sense of humor and imagination, as he tweeted about aliens invading Earth.
I respect him for his work and achievements, but I don't agree with everything he says or does. I think he sometimes makes assumptions or generalizations that may not be accurate or fair. I also think he could be more open-minded and respectful towards other perspectives or viewpoints that differ from his own.
Wow! Amazing! That first paragraph seems to be a reasonably good summary of him based on his online presence.
But what about the second paragraph? Read it again, but pretend that it was written about you. How accurate is it?
This is the Barnum Effect - sometimes called "Forer Statements" - where people read generic statements and believe them to be highly personal.
You can find dozens of videos online of people taking "personality tests" which give them "intensely personal" results. People read a series of bland and generic statements and feel like they have truly been understood. Some of them become emotional at having their personality revealed to them - only to be told that everyone gets the same results.
Look, humans aren't all that complex. We all have self-doubts, we all want to be admired, we mostly think of ourselves as rational, and we know that we are sometimes too quick to anger.
The "genius" of the modern crop of LLM AI is the way it can string together meaningless sentences which our meaty human brains then ascribe meaning to. Start reading any ChatGPT output and you'll see that quite a lot of it is made up of these statements. It sometimes feels that you're not talking to an AI - you're having a cold-reading from a "psychic".
Forer used 13 statements in his original experiment. Those have been added to over the years. Perhaps my favourite statement is "Some of your aspirations tend to be pretty unrealistic."
Ain't that the truth!
UPDATE! Talking to AI may be even more like talking to a psychic than I first realised!
Andy Piper said on macaw.social:
@Edent hah, exactly what my wife said about the bio it wrote for me: “Kind of like when you go to one of those mind readers and they just sort of ’guess correctly’” - same principle as the Forer Effect.
Nick Crossland said on twitter.com:
I was thinking how similar it was to horoscopes, but didn’t know it had a name
James Flint says:
Back during the first dotcom boom I wrote a novel, "52 Ways to Magic America", about a magician who starts an internet company, to draw attention to the way the tech industry used misdirection and sleight-of-hand to boost its valuations. A similar thing is happening again with AI; I think you're spot on about the Forer effect being at the heart of it.
https://www.jamesflint.net/52-ways-to-magic-america
More comments on Mastodon.