GitHub's Copilot lies about its own documentation. So why would I trust it with my code?


In the early part of the 20th century, there was a fad for "Radium" - the magical, radioactive substance that glowed in the dark. The market had decided that Radium was The Next Big Thing and tried to shove it into every product. There were radioactive toys, radioactive medicines, radioactive chocolate bars, and a hundred other products.

The results weren't pretty.

In the early part of the 21st century, there was a fad for "AI" - the magical artificial intelligence that provided all the answers. The market had decided that AI was The Next Big Thing and tried to shove it into every product. You can probably see where this is going, right?

I don't particularly mind companies experimenting with AI. It's good to explore a problem and see whether a new tool fits users' needs. But the current crop are just so shit it makes me wonder whether anyone tested them.

GitHub has forced its new Copilot button onto every page. The first thing I asked it was whether it could be turned off.

[Screenshot: me asking Copilot how I switch it off. Copilot responds with a link.]

It pointed me to this page: https://docs.github.com/en/copilot/getting-started-with-github-copilot/disabling-github-copilot

Except - and I hate to be a pedant - that link 404s. There's nothing there. It doesn't exist. It is made up.
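
Don't just take my word for it. Here's a minimal Python sketch - standard library only - which checks the status of that link:

    import urllib.request
    import urllib.error

    # The documentation URL Copilot suggested (see above).
    url = "https://docs.github.com/en/copilot/getting-started-with-github-copilot/disabling-github-copilot"

    try:
        with urllib.request.urlopen(url) as response:
            print(response.status)  # 200 would mean the page exists.
    except urllib.error.HTTPError as error:
        print(error.code)  # At the time of writing, this prints 404.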

This AI, which I am supposed to trust with my code, doesn't even understand itself.

This isn't an AI Mirror Test. This isn't me trying to find out if the large language model is conscious, aware, or has a soul. I'm not asking it for complex reasoning, or asking it to make an æsthetic judgement.

This is a basic functionality test.

Is the computer able to accurately provide information about itself?

That's it. That's all I want. In my first interaction with Copilot, it lied to me about itself. Why would I trust it again?

You can leave feedback for GitHub about this problem. I'm sure a human will answer you.


6 thoughts on “GitHub's Copilot lies about its own documentation. So why would I trust it with my code?”

  1. Jim Cummins says:

    And “I haven’t had my coffee yet this morning” just doesn’t seem believable coming from AI.

  2. said on social.heise.de:

@Edent I experience something like this way too often when I try some hyped AI tool: Can't google some info? Some AI-powered search engine provides it instantly - only for me to realise that it's hallucinated and the provided source does not contain the alleged info. Need to rephrase a text for simpler language? Some text-rewriting AI tool does that and cuts out 75% of the information. And yet people seem happy with that search engine and that writing tool. Maybe I'm too dumb 🤷

  3. said on chirp.enworld.org:

    @Edent It’s odd that it’s so awful for some things, but can be really useful for some other things.

    I wanted to update a .net3 project to .net6. I asked it how to do it and, each time I got an error message, asked it how to correct that error. I ended up with the conversion successfully done in tens of minutes rather than days.

    Would I let it loose on writing something from scratch? No way! But for simple-ish tasks that I’m unfamiliar with, it has been useful to me.

