GitHub's Copilot lies about its own documentation. So why would I trust it with my code?
In the early part of the 20th Century, there was a fad for "Radium". The magical, radioactive substance that glowed in the dark. The market had decided that Radium was The Next Big Thing and tried to shove it into every product. There were radioactive toys, radioactive medicines, radioactive chocolate bars, and a hundred other products.
In the early part of the 21st Century, there was a fad for "AI". The magical, Artificial Intelligence which provided all the answers. The market had decided that AI was The Next Big Thing and tried to shove it into every product. You can probably see where this is going, right?
I don't particularly mind companies experimenting with AI. It's good to explore a problem and see if it fits a user's needs. But the current crop are just so shit it makes me wonder whether anyone tested them.
GitHub has forced its new Copilot button onto every page. The first thing I asked it was whether it could be turned off.
It pointed me to this page: https://docs.github.com/en/copilot/getting-started-with-github-copilot/disabling-github-copilot
Except - and I hate to be a pedant - that link 404s. There's nothing there. It doesn't exist. It is made up.
This AI, which I am supposed to trust with my code, doesn't even understand itself.
This isn't an AI Mirror Test. This isn't me trying to find out if the large-language model is conscious, aware, or has a soul. I'm not asking it for complex reasoning, or asking it to make an æsthetic judgement.
This is a basic functionality test.
Is the computer able to accurately provide information about itself?
That's it. That's all I want. In my first interaction with Copilot, it lied to me about itself. Why would I trust it again?
You can leave feedback for GitHub about this problem. I'm sure a human will answer you.
Tuxilio says:
@blog I wouldn't use Copilot (and GitHub itself) anyway, if only because it steals code and disregards licenses:
https://nogithub.codeberg.page/
Jim Cummins says:
And “I haven’t had my coffee yet this morning” just doesn’t seem believable coming from AI.
news.ycombinator.com said on news.ycombinator.com:
GitHub's Copilot lies about its documentation. Why would I trust it with my code | Hacker News
Sylvester Tremmel said on social.heise.de:
@Edent I experience something like this way too often, when I try some hyped AI tool: Can't google some info? Some AI-powered search engine provides it instantly — only for me to realise that it's hallucinated and the provided source does not contain the alleged info. Need to rephrase a text for simpler language? Some text-rewriting AI-tool does that and cuts out 75% of the information. And yet people seem happy with that search engine and that writing tool. Maybe I'm too dumb 🤷
Alex White said on chirp.enworld.org:
@Edent It’s odd that it’s so awful for some things, but can be really useful for some other things.
I wanted to update a .NET 3 project to .NET 6. I asked it how to do it, and then each time I got an error message I asked it how to correct that error message, and I ended up with the conversion successfully done in tens of minutes rather than days.
Would I let it loose on writing something from scratch? No way! But for simple-ish tasks that I’m unfamiliar with, it has been useful to me.
Alex White said on chirp.enworld.org:
@Edent
In terms of the bigger picture analogy you draw with Radium, though - I hate that it is being crowbarred into nearly every software package under the sun. Canva has recently announced a price increase to pay for their ‘AI’ stuff (whether you use it or not, it seems).
Canva says its AI features are worth the 300 percent price increase
More comments on Mastodon.