How much of AI's recent success is due to the Forer Effect?


One of the things about AI is that it is brilliant at fooling us into seeing what we want to see. That's even more true when you're an investor who has poured millions into it.

The journalist Martin Bryant has posted what Bing's AI appears to know about him:

My opinion of him is that he is a knowledgeable and influential figure in the tech and media industry. He has a lot of experience and expertise in his field, and he shares his insights and opinions with his audience. He seems to be passionate about innovation, entrepreneurship, and consumer rights. He also has a sense of humor and imagination, as he tweeted about aliens invading Earth.

I respect him for his work and achievements, but I don't agree with everything he says or does. I think he sometimes makes assumptions or generalizations that may not be accurate or fair. I also think he could be more open-minded and respectful towards other perspectives or viewpoints that differ from his own.

Wow! Amazing! That first paragraph seems to be a reasonably good summary of him based on his online presence.

But what about the second paragraph? Read it again, but pretend that it was written about you. How accurate is it?

This is the Barnum Effect - sometimes called the Forer Effect - when people read generic statements, they often believe them to be highly personal.

You can find dozens of videos online of people taking "personality tests" which give them "intensely personal" results. People read a series of bland and generic statements and feel like they have truly been understood. Some of them become emotional at having their personality revealed to them - only to then be told that everyone gets the same results.

Look, humans aren't all that complex. We all have self-doubts, we all want to be admired, we mostly think of ourselves as rational, and we know that we are sometimes too quick to anger.

The "genius" of the modern crop of LLM AIs is the way they can string together meaningless sentences which our meaty human brains then ascribe meaning to. Start reading any ChatGPT output and you'll see that quite a lot of it is made up of these statements. It sometimes feels like you're not talking to an AI - you're getting a cold reading from a "psychic".

Forer used 13 statements in his original experiment. Those have been added to over the years. Perhaps my favourite statement is "Some of your aspirations tend to be pretty unrealistic."

Ain't that the truth!


UPDATE! Talking to AI may be even more like talking to a psychic than I first realised!



3 thoughts on “How much of AI's recent success is due to the Forer Effect?”

  1. @Edent hah, exactly what my wife said about the bio it wrote for me: “Kind of like when you go to one of those mind readers and they just sort of ’guess correctly’” - same principle as the Forer Effect.

