GPT-2 and Understanding

If I quoted to you: “In a hole in the ground there lived a hobbit. Not a nasty, dirty, wet hole, filled with the ends of worms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it was a hobbit-hole, and that means comfort.” – I think you would recognize it. I certainly hope so; The Hobbit is a classic.

But if I followed it up with: “It was to the hobbits that we now turn our attention: to their joy and relief that a thing like a hobbit-hole, in any shape or form, could be constructed. Indeed, that all could be assembled so easily, in such perfection.”

Would that ring false? I don’t remember by heart the two lines after “and that means comfort,” and I’ve read The Hobbit about fifty times. I suspect you don’t remember them either.

If you know this blog well, you may have guessed the reveal. You may be thinking “it couldn’t be.” Those last two lines were constructed in their entirety by an artificial intelligence. Absolutely no human assistance was provided beyond the prompt of the original quote. If told to continue, the AI will ramble at length about hobbit senses and the use of the imagination:

Here is the hobbit story. Before we can understand what the story is really about, we must first understand the nature of imagination. If we take a close look at the hobbits in our stories, they must be quite unreflective of how much we all actually see. It is not true that this blind spot has prevented the development of imagination. If the hobbits were all really blind, it would be hard to tell between them what we really think and see. We must first recognize that while all have the most complete visual field, they are certainly not united in imagination. To think about the characters in the story is to imagine every detail of the image that has been presented. To look at the image that is being said is to imagine every detail of the image that is not being said; every bit of the painting or tapestry that is being painted or woven is to be considered, without even thinking of how many more dots, or what color they are.

In this chapter I will consider how our imaginations, or at least our perception, is organized in terms of our perceptions, or our ideas, of space. As has been illustrated in my essay, Space the Thing, space is not quite the same thing as time. Space and time have distinct, independent functions, the function of a spatial field for the hobbit is to allow him an adequate opportunity to focus his view in the direction he wants to go. We will take time, since that is the spatial field we have selected, as our reference material.

The hobbit’s use of his perception of space depends upon his ability to see, and perceive, the space around him for his perception. His mind is not able to see space independently of the space around him, not without the assistance of the other part of his mind, that is, the hobbit’s perceptual resources. These visual resources are the parts of his visual system that have received information from space: vision, hearing, skin color, and body posture. These may well make a difference in our understanding of his visual experience. A hobbit does not see space because he can’t perceive it: he must look at it.

And yes, it is a bit weird – a little clunky, a little tangential, occasionally straying into outright nonsense – but on the whole it’s shockingly coherent. I will say again, this was not written by a human, nor edited by one. And yet the neural network seems to have some limited conception of how imagination works, how space and time work, how perception works.

This is the breakthrough known as GPT-2, announced by OpenAI in February 2019. It was a really big deal in the AI world, although the mainstream news almost exclusively covered the fact that OpenAI decided not to release the full model. OpenAI worried the model would be used maliciously, chiefly to churn out spam and fake news, and it’s a terribly good headline to say that an AI is too dangerous to release to the public.

And it’s true, the big model is under lock and key. That whole bit about the hobbits? That was written by the toy, medium-sized model they released for testing. Thanks to the generous work of one Adam King, you too can play with it easily.

What we have here is not general intelligence, not by a long shot. If pressed to write for more than about a page, the AI goes off on strange tangents that disintegrate into nonsense. Sometimes it writes nonsense straight from the start. Sometimes it gets confused and just says the letter a 27 times in a row. It can’t count much better than a two-year-old, and it has the short-term memory of an Alzheimer’s patient.

It’s still a big deal. By training a model purely to guess which word comes next in a sentence, OpenAI produced one that could answer questions, summarize stories, attempt translation between languages, and perhaps most importantly, talk with what at least on the surface seems to be some degree of understanding. At the moment, we can’t say it’s anything more than the appearance of understanding, and I’m not trying to imply it’s the real deal. Heck, we don’t even know what the real deal is, not even for humans! What “understanding” actually boils down to is an open question. And yet, what we have here is an absolutely mind-boggling accomplishment – and I hope you’re proud of the human race.
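To make “guess which word comes next” concrete, here is a deliberately tiny sketch of the same objective. GPT-2 is a large transformer network trained on millions of web pages; this toy stands in a bigram counter that just tallies which word tends to follow which, then generates greedily. The corpus, function names, and greedy decoding here are all my own illustration, not anything from OpenAI’s actual code.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, how often each candidate next word follows it."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, start, length=10):
    """Emit the most frequent continuation, one word at a time.

    GPT-2 instead samples from a probability distribution computed by a
    neural network over its whole context window, not just the last word.
    """
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word never appeared mid-sentence
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = ("in a hole in the ground there lived a hobbit "
          "it was a hobbit hole and that means comfort")
model = train_bigram_model(corpus)
print(generate(model, "a", length=4))
```

With one word of context the output degrades into loops and non sequiturs almost immediately, which is exactly the gap GPT-2’s enormous network and long context window close: same task, vastly more capable guesser.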
