Demonstrating an LLM using children


There are many improvisational games which are great for improving creativity, helping a team bond, or simply having a lot of fun. But there's one which is perfect for demonstrating how things like ChatGPT work.

The "Once. Upon. A. Time." game requires two or more people with a basic grasp of English. Even a small child can play. The way it works is very simple.

The first person says "Once..."

The second person says "Once upon..."

The next person says "Once upon a..."

The next person says "Once upon a time..."

The next person says... Now, this is where things get interesting! Perhaps they'll say "Once upon a time there...", or they might mix things up and go "Once upon a time I...", and some people will go straight to the action with "Once upon a time dragons..."!

From here, the game progresses. The next player comes up with the next word in the story.

"Once upon a time there was..." or "Once upon a time I stole..." or "Once upon a time dragons attacked..." and so on.

The game usually ends when there are no more obvious routes through the story. Play the game a few times with a friend - what tropes and biases do you notice? Are there some common dead-ends? How does it feel when the person before you plays the perfect word? What negative reinforcement techniques can you apply? Where does the story go if you've all just finished watching a Disney Princess marathon? What early choices dominate the rest of the story?

The game is subtly different to how ChatGPT works. For a start, the game uses multiple models - every player is one. Your idea of a good story is going to be different to that of the person sat next to you. They may have read more books about unicorns than about dragons, which will change their mental model.
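
To make that concrete, here's another hedged sketch using the same toy word-pair idea - both reading lists are made up, but they show how two players trained on different stories genuinely disagree about what comes next.

    # Two "players" with different (made-up) reading histories, built with
    # the same word-pair counting as the sketch above.
    from collections import defaultdict, Counter

    def train(stories):
        """Count, for one player, which word tends to follow which."""
        follows = defaultdict(Counter)
        for story in stories:
            words = story.split()
            for current, nxt in zip(words, words[1:]):
                follows[current][nxt] += 1
        return follows

    dragon_fan = train([
        "once upon a time dragons attacked the castle",
        "once upon a time dragons hoarded gold",
    ])
    unicorn_fan = train([
        "once upon a time unicorns danced in the forest",
        "once upon a time unicorns granted a wish",
    ])

    # Ask each player to continue "once upon a time..."
    print(dragon_fan["time"].most_common(1))   # [('dragons', 2)]
    print(unicorn_fan["time"].most_common(1))  # [('unicorns', 2)]

Each player keeps their own counts, which is exactly why the story changes depending on who is sat next to you.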

Additionally, some people may be adversarial - they introduce non-sequiturs or other words which attempt to derail the story.

Finally, there is a resource issue. Training massive models on mountains of data may be more or less expensive than raising a bunch of children and buying them books.

But that's (very) roughly how this new breed of Automatic Improvisers works.



One thought on “Demonstrating an LLM using children”


    I love this analogy, and I think I like "Automatic Improvisers" -- my only hesitation being around potential denigration of human improvisers.

    I wonder if perhaps your analogy holds up better than you suggest in your last few paragraphs. Because these systems are trained on a huge corpus produced by millions, if not billions, of people, do they encode different vying biases (which the "heat" randomness allows to seep through), or are they a more gestalt average? I suspect the former, but with heavily-built rails from their creators to go down one particular bias pathway.


