Why do people focus on AI's failures?
I saw a prominent AI proponent asking why people always focus on the things that AI gets wrong. AI works so well, he asserted, that it was churlish and childish to focus on a few minor mistakes.
Which reminds me of an experience I had a few years ago. I was in a rural pub and got chatting to one of the locals. We were getting on great, so I asked him what his name was. "You know," he said, "I've built houses for everyone in this village, but do they call me John the Builder? No! I repaired everyone's cars when they broke down, but do they call me John the Mechanic? No! I was the one who helped design the new school, but do they call me John the Architect? No!"
He took a sip of beer, looked rueful, and sighed. "But you shag one sheep..."
What else is there to say? The intern who files most things perfectly but has, more than once, tipped an entire cup of coffee into the filing cabinet is going to be remembered as "that klutzy intern we had to fire."
Should we forgive and rehabilitate people? Sure, some of them. But if someone has repeatedly failed - sometimes in catastrophic ways - it's OK to discard them.
In my experience with various LLMs, they're getting better at imitating human writing, but show no signs of improving when it comes to reasoning. Their accuracy is demonstrably as poor as it has ever been. Even my Alexa gets things wrong as often as right.
Anyway, I asked ChatGPT what it thought of the joke:
The punchline relies on the juxtaposition between the man's numerous, significant positive contributions to his community and the singular negative action that tarnishes his reputation. It illustrates how a single indiscretion can disproportionately impact how a person is perceived, despite their otherwise commendable actions.
Even a stopped clock is right twice a day.
The "why don't you focus on the positives" argument always reminds me of this xkcd https://xkcd.com/937/
Dragon Cotterill says:
@edent says:
Matt Terenzio says:
If AI were sold as a tool and not as The Answer to the Ultimate Question of Life, the Universe, and Everything, this would not happen. It's curious, therefore, that the proponent doesn't ask AI to resolve his doubt, isn't it?
Do LLMs struggle with these spelling questions? Or is this just a particularly compact example of how they're not to be relied on for anything?
This is fascinating. I have used AI quite a bit and I have never seen an error like this. You can't even talk it out of the error, which I'm normally able to do. It simply cannot grasp that there are 3 R's in raspberry, even if you walk it through it. This is its own class of error, one I have never seen before.
Johannes Rexx says:
More comments on Mastodon.