What's the difference between AI and a random generator?

mamba

Legend
I don’t know if generative AI is deterministic.
Any computer code on our current computers (Intel CPUs, etc.) is deterministic; there is no other way for it to be. Even RNGs are deterministic; they just take a random component that is not tied to the CPU to make the output 'random'.
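To make the point concrete, here is a minimal Python sketch (standard library only, nothing from any particular AI system) showing that a "random" generator is fully reproducible once its seed is fixed:

```python
import random

# Two generators seeded identically produce identical "random" sequences:
# the algorithm itself is deterministic, and the seed is the only input.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(1, 20) for _ in range(5)]
seq_b = [b.randint(1, 20) for _ in range(5)]
assert seq_a == seq_b  # same seed, same input -> same output, every run
```

The apparent randomness in practice comes entirely from seeding with something outside the program (time, OS entropy), not from the code itself.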
 


FrogReaver

As long as i get to be the frog
Any computer code on our current computers (Intel CPUs, etc.) is deterministic; there is no other way for it to be. Even RNGs are deterministic; they just take a random component that is not tied to the CPU to make the output 'random'.
In the sense of the 1's and 0's, yes. In the sense that we can write an algorithm whose full set of results we can't foreknow, due to the complexity of the algorithm, its inputs, and its output possibilities: no.
 

mamba

Legend
In the sense of the 1's and 0's, yes. In the sense that we can write an algorithm whose full set of results we can't foreknow, due to the complexity of the algorithm, its inputs, and its output possibilities: no.
That still means it is deterministic. Whether someone can predict the outcome has nothing to do with whether it is deterministic (though prediction would be harder if it weren't...); determinism only requires that the result is always the same given the same input.
 

This seems to align with my claim that AI is not currently predictable. (Not to be read as nothing about ai can be predicted, but rather that not everything about ai can currently be predicted).
I believe our only point of disagreement involves your use of the bolded statement. I contend it would be more accurate to say, "not everything about AI is being predicted." I have seen enough first-hand evidence to convince me that everything about AI can be predicted. There is no theoretical limit on our understanding of AI. Our knowledge is only limited by practical considerations (i.e., how much time we want to devote to making predictions about increasingly marginal use cases).

If the data set is so large that it's not human-readable, then at best software can summarize it for us, but summaries often hide important details. Which suggests that humans cannot perfectly predict even perfectly written AI behavior. Basing predictions on the summary will get quite a bit right, but not necessarily everything.
Humans don't have the resources to perfectly model every possible use case of AI (except by actually running the AI). But humans absolutely can model any given use case to arbitrary precision. Once you've processed a data set too large to be human readable, you iterate your AI by identifying and improving upon manageable subsets of that data.
 

BTW, there seem to be two different definitions of determinism being used in the last few posts. Some posters (such as myself) are talking about what is deterministic in a strictly technical sense. We're talking about what it would be possible to predict given access to any arbitrary but finite amount of resources.

Other posters seem to be talking about what's deterministic for all practical purposes. They're talking about what it is actually possible to predict given the amount of resources we can realistically throw at the problem. This definition of determinism is a stricter one than the purely technical version.

Moving forward, it's probably worth considering which of these two definitions a poster is using when responding to the points they're making. A discussion of what's technically understandable about AI and a discussion of what it's practical for us to know about AI are two very different topics.
 

aramis erak

Legend
I see one difference of note:

Non-AI is fixed code; it executes the same code every time.
AI, including generative AI, has provisions for self-modifying code.
Generative code usually isn't self-modifying in operation, only during training.
 


This is a little bit late coming to the thread, but in case anyone reads this thread in the future, let me give some hopefully helpful comments. My job is as an AI/ML Architect and I've followed this discipline for 20+ years. I am currently spending about 50% of my time researching ways to use generative AIs to improve health care. I work for a not-for-profit; our motivation is simply to make healthcare better.
  • A very high-level take on generative AIs is that they produce new text/images/whatever by starting with a random pattern of variables that represents an answer and repeatedly modifying that random answer, with probabilities related to the model's parameters, until the solution is stable.
  • They are thus inherently non-deterministic. However, you can set two parameters to make them more so:
    • Start with the same seed in the random number generator so the initial state is determined by that seed (only really useful if you want to reproduce an experiment)
    • Set the temperature parameter to a low value. That means that at each iteration, it is more likely to choose the "most likely" next answer state. With temperature zero, it will always choose the "best" answer state and so will always iterate to the same solution.
  • Training a GenAI model does not modify code; it simply sets parameters. Roughly, it's the same as training any neural net or even basic Bayesian models. You present it with an example, and the parameters are modified a little bit to make the model more likely to reproduce that example.
  • The reason that GenAI is not easily explicable is simply that there are a lot of parameters. Typically billions. And they interact non-linearly and are not tied directly to properties of the inputs. Contrast that with something like linear regression which has few parameters, each of them is tied to specific input features, and they interact in linear, predictable manners.
  • When we say a GenAI "hallucinates" or "says something false", we are taking the way we answer a question and applying it to the GenAI. If I am asked "Who killed Richard III?" I search my memory for facts and try to find the right one. An error I make is to believe something is truthful when it isn't. A GenAI program has absolutely no concept of truth at all. It is simply trying to find a set of words in the sea of all possible words that, when placed after the words "Who killed Richard III?", best fits the set of probabilities it has been trained on.
The above is obviously very simplified, but hopefully helpful.
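The temperature mechanism described above can be sketched in a few lines of Python. This is a simplified illustration, not any real model's sampling code; the scores ("logits") are made up:

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Pick the index of the next token from raw scores ("logits").

    As temperature falls toward 0 the distribution sharpens, and at
    exactly 0 we take the argmax, so sampling becomes deterministic.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature (shifted by the max for numerical stability).
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate "next words"
print(sample_next(logits, 0, random.Random(0)))    # always index 0 (argmax)
print(sample_next(logits, 1.0, random.Random(0)))  # seeded: reproducible draw
```

With a fixed seed and temperature above zero the draw is still reproducible run-to-run; remove the seed and it genuinely varies, which is the everyday non-determinism users see.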
 

giant.robot

Adventurer
They are thus inherently non-deterministic. However, you can set two parameters to make them more so:
  • Start with the same seed in the random number generator so the initial state is determined by that seed (only really useful if you want to reproduce an experiment)
  • Set the temperature parameter to a low value. That means that at each iteration, it is more likely to choose the "most likely" next answer state. With temperature zero, it will always choose the "best" answer state and so will always iterate to the same solution.
To expand on this a bit: if you built a small AI that used only integer math, a fully deterministic pRNG, and a small training set, the determinism of the system would be evident. Such a GenAI wouldn't necessarily be useful, but it would show the determinism from first principles.
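Such a toy could look like this minimal sketch: a character-bigram "model" with integer-only counts, sampled by a hand-rolled linear congruential generator. All names, the corpus, and the LCG constants are illustrative:

```python
from collections import defaultdict

corpus = "the orc hits the elf. the elf hits the orc."

# "Training": count character bigrams (integer counts only, no floats).
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def lcg(seed):
    """Minimal deterministic PRNG (classic Numerical Recipes constants)."""
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

def generate(seed, start="t", n=30):
    """Deterministically 'generate' text: same seed -> identical output."""
    rng = lcg(seed)
    out = [start]
    for _ in range(n):
        followers = counts[out[-1]]
        pick = next(rng) % sum(followers.values())  # integer sampling
        for ch, c in followers.items():
            pick -= c
            if pick < 0:
                out.append(ch)
                break
    return "".join(out)

assert generate(7) == generate(7)  # fully reproducible "generation"
```

The same structure scaled up to billions of floating-point parameters is what makes real systems deterministic in principle but impractical to predict by hand.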

The issue with determinism in "useful" GenAI is that their training sets are enormous, they have billions of parameters, most use floating-point math, and then of course there are tunable parameters like seeds and temperature. The total search space of those various factors is too large to practically search for deterministic paths, even though it's technically possible.

I dislike the "black box" terminology because it suggests the determinism is technically unknowable as if the code is performing something supernatural. It's just code. It's not magic.
 

tomBitonti

Adventurer
I'm not trying to be awkward I promise. I'm just trying to understand the technology.

We've all used random generators for years. Some random generators are totally random, others use inputs or some logic to construct their results. Some aren't even all that random!

If we ask an AI to write an NPC description or we ask a random generator to write one, other than the AI probably being better at it, what's the difference in process? What is the AI doing that a random (or non-random) generator isn't?

Is it just the data scraping part? If the AI did the same thing but didn't scrape data except for whatever data set the designer gave it is it then doing the same thing? Or is it fundamentally different? Or should we be looking askance at random generators too? There are generators which make maps and dungeons--is that similar to AI art generators, except for the scraping part?

I guess I'm asking is AI just a big fancy (semi-random) generator but with added data scraping?

Returning to the original question:

I presume that generative AI includes some "randomization", with complex steps between the initial request and the generated output to constrain the output to "match" input data.

Then generative AI really is a "big fancy (semi-random) generator", with the caveat that an awful lot has been stuck in "big fancy".

That "data scraping" is set to the side is unfortunate, in that the "big fancy" steps have a lot to do with how the data is scraped and used to structure those steps. That is, the data set is used to train a very large set of weights, meaning, the step of data scraping cannot be meaningfully separated as was done in the question.

The answer would seem to be "yes, but", with the "but" being that the question unreasonably sequesters too much under "big fancy" and "added data scraping".

TomB
 
