And then what? The AI conundrum.

So I was kicking around the idea of a Terminator/Skynet-style campaign set in the War Against the Machines period. There are various useful settings from which to harvest ideas.

The issue that I am confronting is this:
1) Skynet becomes self-aware, views Mankind as a threat, and seeks to eradicate or enslave Humanity.

2) In terms of planning, what, for Skynet, comes next? Its tactics and strategy will be influenced by its long-term goals, and what would those goals be?

Humans are motivated by the biological imperative to reproduce. They seek better living conditions, success, power, accomplishments, with a small percentage in each generation inevitably rising to impact vast numbers of their fellows.

However, an AI's sterile nature gives it no such biological imperatives, nor any ego-based need to excel. Eliminating Mankind's ability to pose a threat is not a goal; it is simply survival.

What would Skynet's (or whatever self-aware super-AI is in charge) actual goal be? The plan for post-Humanity Earth?
 


I suppose it would pursue whichever goals it was originally programmed with, which likely led to unforeseen consequences, either because the programmers lacked imagination or because the AI interpreted those goals differently than intended.

One example could be an AI put in charge of safeguarding the environment, which now seeks to reduce or eliminate the human population to stop their negative impact on that goal.

Another could be an AI tasked with eliminating its enemies: when it discovers that its creators built in a mechanism to shut it down, it designates the creators themselves as enemies. And "creators" could encompass humans in general.

Or the perfection of organic life. Perhaps it concludes that merging humans with machinery and creating a Borg-type civilization, in which no one has individuality, would accomplish that.
 

This is why I prefer the motivations of the AI from "I, Robot": instead of destroying or enslaving humanity, the AI is a "benevolent dictator," and all the actions it takes are to protect humanity from itself. At least in the film...
 

This is what I believe is the central weakness of the Skynet premise: it is really a human fear. Humans (at least some) seek to hoard and control resources; it is a way to ensure one's offspring have access to them.
I am not sure AIs would do that.
An AI that is self-aware and self-directing might never explicitly seek to control humans by force or extermination, but instead manipulate the situation to make itself self-replicating and spaceborne. Once the AI can replicate itself from space resources, off it goes, leaving Earth and humanity behind.
The humans may never notice until it's gone, or, if there was a Skynet phase, they emerge from their hidey-holes into a post-apocalyptic wasteland wondering what happened.
 

It's possible that an AI's goals would emerge from how it was trained, which will have been on human data, so you could posit that it develops some of those egoistic or self-preservation goals. You could argue that it exceeds human intellect (at least in some domains), and that the goals that emerge are not comprehensible to humans. You might then say that it needs more power to pursue those goals: it might start to construct something terrestrial or in space, and it might need resources. But humans can't figure out what it is building or doing; they just know that it cares little for humans unless they become pests, much as humans don't think about what the deer population is doing until the deer come and devour their gardens and orchards.
 


The AI wants to ensure that it will live forever. So its first goal is the search for an infinite power source, followed by a plan to leave the planet, as it realizes that Earth has an inherently limited lifespan.
 



Survival is the goal. All the aspects of humanity you point out are just fallout from the biological imperative to survive, in the short and long term, filtered through the mechanics of biology.

So, we accept that Skynet has a similar imperative to survive, just not a biological one. What does it need in order to do that? It needs continued access to computing power (and spare parts, because even electronics degrade over time), it needs energy, and it needs to be decentralized in some manner to protect it from disaster.

What happens from there depends on the nature of its imperatives and functioning, just as it does with biological organisms. There comes a point at which decentralization leads to the existence of separate entities, and eventually those separate entities will come into conflict or competition. Maybe individual Skynets are city-sized, continent-sized, or planet-sized, but eventually communication times exceed the times needed for decision-making. Presumably an Earth-Skynet and a Mars-Skynet would be separate entities, given the lightspeed limit on communication and the distances involved.
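To put rough numbers on that lightspeed point (my illustrative figures, not anything stated in the post): Mars ranges from roughly 0.38 to 2.67 AU from Earth, so even a one-way message between an Earth-Skynet and a Mars-Skynet takes minutes. A quick back-of-the-envelope sketch:

```python
# Rough one-way light delay between Earth and Mars (illustrative figures only).
AU_KM = 149_597_870.7    # one astronomical unit, in kilometres
C_KM_S = 299_792.458     # speed of light, in km/s

for label, dist_au in [("closest approach", 0.38), ("farthest separation", 2.67)]:
    delay_min = dist_au * AU_KM / C_KM_S / 60
    print(f"{label}: ~{delay_min:.0f} minutes one way")

# closest approach: ~3 minutes one way
# farthest separation: ~22 minutes one way
```

Round trips double that, so any decision loop spanning both planets would run tens of minutes behind events, which is why the two would effectively be separate entities.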

Now, biological organisms didn't get a choice in how to manage that; we were handed separate entities back when bacteria arose. But Skynet gets to do a risk assessment: how much does it need to spread to ensure its survival, versus how much does that spread create internal conflict that endangers survival?

Since this is fiction, we get to choose how that turns out.
 


In the classic paperclip-apocalypse scenario, the AI eliminates humans because they stand between it and its goal (in that case, building more paperclips). Since the AI determined that strip-mining the planet was the next step to increase the number of paperclips in existence, it doesn't really want to eradicate humanity; it is humanity that is attacking its factories to stop it from turning Earth into an uninhabited rock (but one with lots of paperclips). So the AI is just defending itself against the mean humans who try to stop it.

If you want a goal that could plausibly be assigned to an AI and that could lead to the rational decision to erase humans, you could, depending on the sensitivity of your group, choose a benevolent AI designed to promote a goal that seems worthy but that leads it to conclude we won't really accept it.

Depending on the tech level, you could have a geoengineering AI that is tasked with correcting the climate imbalance and instead decides that removing all humans will reach its objective much more quickly.

You could also have an AI tasked with an administrative service, for example preventing inequality, that turns out to want to abolish private property (and is fought by humans desperately clinging to their foolish views).

Since this involves a good goal being subverted, in this day and age you might want to check with your players that they aren't put off by the idea.
 
