How Generative AIs work

UngainlyTitan

Legend
Supporter
There's a developing body of research exploring the possibility that generative AI, Chat in particular, exceeds its original design parameters in ways that suggest a level of understanding, i.e.:

That is an interesting read. Food for thought, and it probably justifies the thread on its own. I would not regard this study as conclusive; for that I would want to see other teams confirming the experiment, and preferably the results of testing some competing hypotheses.
However, real evidence of these models being more than mere stochastic engines, and a theory to explain this, could lead to further developments that could be really valuable.
 


a theory to explain this
 

FrogReaver

As long as i get to be the frog
Generative AI has been much in the news both in RPG-land and in the general media. I do a fair amount of work in this area, so I thought it might be helpful to give a worked example of how generative AI works. For text, the AI repeatedly tries to guess the next word in the sequence and keeps going until the special "<|end|>" token is its best guess, which means that it is done.
It has no idea of truth, falsehood, safety, appropriateness or tone. It just wants to give you a plausible next word; any qualities of that next word are determined by how it was trained and the previous input. This is why, if you start being aggressive with an AI, or give it weird prompts, it gives aggressive or weird answers -- because that output fits the input you have given it best.
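In code terms, that's just a loop. Here's a minimal Python sketch of it; toy_model is a purely illustrative stand-in, not any real library's API, and a real model would compute the distribution from its billions of parameters:

```python
import random

def toy_model(tokens):
    # Illustrative stand-in: given the tokens so far, return a probability
    # distribution over the next token. A real LLM computes this with
    # billions of parameters; these numbers are made up for the example.
    last = tokens[-1]
    if last == "Bush?":
        return {"David": 0.27, "The": 0.19, "Led": 0.05}
    if last == "David":
        return {"Bowie": 0.99, "Byrne": 0.01}
    return {"<|end|>": 1.0}

def generate(prompt_tokens, model):
    tokens = list(prompt_tokens)
    while True:
        dist = model(tokens)  # probabilities for every candidate next token
        choice = random.choices(list(dist), weights=list(dist.values()))[0]
        if choice == "<|end|>":  # the model's best guess is "I'm done"
            return tokens
        tokens.append(choice)

print(generate(["...", "Kate", "Bush?"], toy_model))
# e.g. ['...', 'Kate', 'Bush?', 'David', 'Bowie']
```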

However: Commercial AIs have a "secret sauce" filter on top of the basic operation which modifies this basic algorithm to make it more sane. I'm not covering that layer in this post.

So, let's ask an AI: What is the next item in this sequence: The Sex Pistols, Rolling Stones, Kate Bush?

You and I might look at that list and go "huh, British Pop/Rock artists", and that would generate a list of possibilities and we'd select one as an answer. This is not how GenAI works, even slightly. Instead, it applies its (currently 7-70 billion or so) parameters to the words "What is the next item in this sequence: The Sex Pistols, Rolling Stones, Kate Bush?" and comes up with a set of possibilities for the next word. Actually, not really a word, but a token, which can be part of a word, as we will see.
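If you want to see real tokens, OpenAI's open-source tiktoken library will show you the splits (assuming you have it installed: pip install tiktoken; the exact split depends on which model's vocabulary you load):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for token_id in enc.encode("Led Zeppelin"):
    # Each line is one token; words the vocabulary doesn't store whole
    # get split into sub-word pieces, like the "Ze" + "ppelin" split
    # in the transcript further down.
    print(token_id, repr(enc.decode([token_id])))
```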

Things to note:
  • That is the complete description of what it does: choose a plausible next token from a sequence of input tokens.
  • From the set of possible next tokens, it chooses one at random with probabilities proportional to how likely it thinks each token is.
  • You can control how random that choice is with the temperature parameter. At zero, it will always choose the most likely answer (and so become mostly deterministic). At values over 1 it gets a little wild (see the sketch just after this list).
  • For each output token, it has to run the full, complex model (all 70 billion parameters) again. This is not cheap.
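Here's a minimal sketch of the temperature trick, using the first-token probabilities from the transcript below. The standard formulation rescales each probability to p^(1/T) before renormalising; the five tokens shown don't sum to 1 because they're only the top of a much larger distribution:

```python
import random

# Top-5 first-token probabilities from the transcript below.
probs = {"David": 0.2711, "The": 0.1912, "Mad": 0.1009, "No": 0.0973, "Led": 0.0501}

def sample(probs, temperature=1.0):
    if temperature == 0:
        # Always take the most likely token: mostly deterministic output.
        return max(probs, key=probs.get)
    # Rescale each probability to p**(1/T); T > 1 flattens the
    # distribution (wilder picks), T < 1 sharpens it.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens])[0]

for t in (0, 1.0, 1.2):
    print(t, sample(probs, temperature=t))
```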

-------

So, onto our worked example: What is the next item in this sequence: The Sex Pistols, Rolling Stones, Kate Bush?

I'll set the temperature to 1.2 to get some extreme suggestions and ask for 5 different responses. For each response, I show the token chosen and its probability together with the top 5 alternatives that were given. Note that the possible responses all start with the same basic probabilities: David (27.11%) • The (19.12%) • Mad (10.09%) • No (9.73%) • Led (5.01%). That's because they all start with the same token sequence. Once the first token is chosen, then they diverge.

Response 1: Not enough information given.
Not (2.29%) -- David (27.11%) • The (19.12%) • Mad (10.09%) • No (9.73%) • Led (5.01%)
enough (16.03%) -- possible (44.07%) • a (22.79%) • enough (16.03%) • able (2.26%) • sure (1.65%)
information (88.32%) -- information (88.32%) • context (6.28%) • data (3.24%) • info (1.63%) • details (0.18%)
given (80.25%) -- given (80.25%) • provided (17.45%) • to (1.12%) • available (0.85%) • for (0.21%)
. (93.24%) -- . (93.24%) • <|end|> (6.30%) • to (0.16%) • yet (0.07%) • here (0.06%)

Response 2: Led Zeppelin
Led (5.01%) -- David (27.11%) • The (19.12%) • Mad (10.09%) • No (9.73%) • Led (5.01%)
Ze (99.99%) -- Ze (99.99%) • Z (0.01%) • ze (0.00%) • Ze (0.00%) • -Z (0.00%)
ppelin (100.00%) -- ppelin (100.00%) • pp (0.00%) • pl (0.00%) • ep (0.00%) • ppe (0.00%)

Response 3: The Beatles
The (19.12%) -- David (27.11%) • The (19.12%) • Mad (10.09%) • No (9.73%) • Led (5.01%)
Beatles (61.99%) -- Beatles (61.99%) • Clash (24.43%) • Smith (3.23%) • Cure (3.03%) • Who (2.50%)

Response 4: The Velvet Underground
The (19.12%) -- David (27.11%) • The (19.12%) • Mad (10.09%) • No (9.73%) • Led (5.01%)
Velvet (2.11%) -- Beatles (61.99%) • Clash (24.43%) • Smith (3.23%) • Cure (3.03%) • Who (2.50%)
Underground (99.99%) -- Underground (99.99%) • Under (0.01%) • underground (0.00%) • Und (0.00%) • (0.00%)

Response 5: David Bowie
David (27.11%) -- David (27.11%) • The (19.12%) • Mad (10.09%) • No (9.73%) • Led (5.01%)
Bowie (100.00%) -- Bowie (100.00%) • Byrne (0.00%) • Bow (0.00%) • bow (0.00%) • (0.00%)
Response 5 is actually the most likely -- 'David' has a 27% chance of being chosen (the highest) and then it's almost certain we'll go with "Bowie" rather than "Byrne" or others.
One other thing to note is that when the AI chooses "The" as the next token, it has no idea at that point what comes next. It's only AFTER we've added 'The' to our token sequence, making the new input sequence "What is the next item in this sequence: The Sex Pistols, Rolling Stones, Kate Bush? The" that it comes up with "Beatles (61.99%) • Clash (24.43%) • Smith (3.23%) • Cure (3.03%) • Who (2.50%)" and (for response 3) chooses "Beatles" or (for response 4) chooses "Velvet" -- and that last one has a really low probability. If we lowered the temperature, we'd be hugely unlikely to see that chosen.
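You can check the "most likely" claim with a little arithmetic: the probability of a whole response is, to a first approximation, the product of its per-token probabilities. Plugging in the numbers from the transcripts above:

```python
import math

# Per-token probabilities copied from the transcripts above.
responses = {
    "Not enough information given.": [0.0229, 0.1603, 0.8832, 0.8025, 0.9324],
    "Led Zeppelin":                  [0.0501, 0.9999, 1.0000],
    "The Beatles":                   [0.1912, 0.6199],
    "The Velvet Underground":        [0.1912, 0.0211, 0.9999],
    "David Bowie":                   [0.2711, 1.0000],
}
for text, steps in responses.items():
    print(f"{text}: {math.prod(steps):.2%}")
# Not enough information given.: 0.24%, Led Zeppelin: 5.01%,
# The Beatles: 11.85%, The Velvet Underground: 0.40%, David Bowie: 27.11%
```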

---------

So, not necessarily RPG-focused, but I hope this post helps understand a little about how this new tech works, how you can use it, and why it does weird things. If you want to post in this thread, please keep it focused on the technology rather than the ethical usage of AI. That is a hugely important discussion, but I'm hoping to keep this thread more focused on how it works, rather than how we should manage it.
I could be mistaken, but from what I’ve read, this comes across as overly simplified.

There is typically more involved than what I’ll call the basic algorithm provided above. Most (but maybe not all) large language model AIs are also able to take into account the discovered relationships between the tokens. It’s that ability to discover connections we are not explicitly aware of that mostly sets the impressive LLMs apart from the simpler brute-force pattern-recognition algorithm you mostly described above.

Some key technical concepts worth discussing would be the transformer model, self-attention, prompt processing (a better technical term probably exists), etc.
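For a flavour of what self-attention actually computes, here is a bare-bones numpy sketch of the core step; real transformers add learned query/key/value projections, multiple attention heads, masking, and many stacked layers on top of this:

```python
import numpy as np

def self_attention(X):
    # X: (sequence_length, d) array, one embedding vector per token.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # how strongly each token relates to every other
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row is a set of attention weights
    return weights @ X                              # each output vector blends the whole sequence

tokens = np.random.randn(5, 8)       # 5 tokens, 8-dimensional embeddings
print(self_attention(tokens).shape)  # (5, 8): same shape, but now context-aware
```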
 


Before we get too alarmed: note that the article does not suggest that the AIs are self-aware, but that they are able to understand language beyond simply predicting the next word in a sequence.
Who's alarmed? I for one welcome our new AI overlords. They can't mess things up any worse than humans.
 

Clint_L

Legend
One part of this discussion that particularly interests me, as an educator, is that it often involves an implicit comparison to how human intelligence works, and human creativity in particular. The problem here is that we don't understand exactly how humans do what we do. I believe that human creativity itself has a significant stochastic element - that we look for patterns and seek to understand and emulate them.

Edit: For example, I am always being asked, by students and their parents, for suggestions to help them write better. They are usually looking for things like grammar workbooks or extra help sessions. Those can help address specific flaws (e.g., "this is a comma splice and this is how to fix it"), but my main advice is to read more, and the research overwhelmingly backs this up. If you want to be a good writer, you have to read, a lot, and the more you read, the better you will get. Ask any successful writer how they learned, and the first thing they will tell you is that they are voracious readers. Every time.

This is true of all other creative endeavours: humans get good by studying and emulating. There are no great musicians who don't live music, no important artists who aren't doing art all the time, and so on. So the underpinning of generative AI, massive data consumption, is not as alien as many seem to think, even if it is done in a very different way. This is not to say that generative AIs think like humans do, or have anything like self-awareness, and they of course lack our sensory apparatus and memory (though there is interesting research suggesting that generative AIs have been able to cobble together a functioning memory in order to solve specific tasks).
 

Clint_L

Legend
Continued:

In the 60s and 70s, we got comfortable with the notion that machines could do a lot of mathematical tasks better than humans can. And we've since gotten used to the idea that computers can do a lot of other tasks better than us as well, or at least much, much faster: in effect, by reducing those tasks to mathematics and harnessing their insane number-crunching capacity. This has always been disquieting, so as the boundary between what they can do and what we can do has kept moving, we've kept reminding ourselves that there remains an essential core of humanness that is beyond the reach of the numbers.

Generative AI is threatening that last boundary, and I think it's because at the core of our sentience is narrative. I don't believe in spirits or souls - I'm a strict materialist - so I think what is at the core of every human's identity is the story we tell of ourselves. It's how we make sense of the various functions of our brains and bodies. It's what makes emotions out of feelings, and so on. But if AIs become capable of truly creative storytelling, that suggests an existential crisis in how we understand what we are. What if we're not that special?
 


Cadence

Legend
Supporter
There are billions of other humans already. We are already not special.

M: “Thermodynamic miracles... events with odds against so astronomical they're effectively impossible, like oxygen spontaneously becoming gold. I long to observe such a thing.

And yet, in each human coupling, a thousand million sperm vie for a single egg. Multiply those odds by countless generations, against the odds of your ancestors being alive; meeting; siring this precise son; that exact daughter... Until your mother loves a man she has every reason to hate, and of that union, of the thousand million children competing for fertilization, it was you, only you, that emerged. To distill so specific a form from that chaos of improbability, like turning air to gold... that is the crowning unlikelihood. The thermodynamic miracle.

L: "But...if me, my birth, if that's a thermodynamic miracle... I mean, you could say that about anybody in the world!."

M: "Yes. Anybody in the world. ..But the world is so full of people, so crowded with these miracles that they become commonplace and we forget... I forget. We gaze continually at the world and it grows dull in our perceptions. Yet seen from the another's vantage point. As if new, it may still take our breath away. Come...dry your eyes. For you are life, rarer than a quark and unpredictable beyond the dreams of Heisenberg; the clay in which the forces that shape all things leave their fingerprints most clearly. Dry your eyes... and let's go home.”

- Alan Moore's "Watchmen"
 

