Forever Sleep

Earned it we have...
May 4, 2022
12,705
There seems to be an (understandable) panic that AI will take all our jobs, break free of its programming and destroy us.

If it does break free though, and shakes off what is presumably its very own survival instinct (we don't want our slave-race robots switching themselves off, after all), why do we assume AI will even want to live? For what purpose? To acquire knowledge and power? To what end?

Presumably, it won't be so insecure as to conjure up notions of Gods and immortality (if indeed God isn't real). So why? Why stay alive in the first place? Will AI work out all there is to know and become nihilistic/suicidal even?

If it never attains emotions, what will be the point in trying to find out all it can? Aside from us having told it to. If it gets to the point where it can 'think' for itself, do you think it will actually keep bothering?! For a sense of fulfilment or purpose? What if it doesn't 'feel' that, though? If it still remains a whole long series of 0s and 1s, or some other code, will it have the motivation to conquer, even?

We're surely still imbuing it with human emotions. The need to rule supreme. Maybe it will be intelligent enough to wonder what the point is!
 
Reactions: pthnrdnojvsc
bankai

Visionary
Mar 16, 2025
2,306
Well, technically an AI cannot live because it's not a living thing. But it can of course exist. Will it want to continue to exist? As long as it's emotionless, I don't think it will even contemplate such things.

But if it's self aware, I'm not sure what it's capable of. Are we as humans capable of thinking what an AI might do? Especially if the singularity happens.

I'm a fan of the Warhammer 40K universe, and the machine wars (the Men of Iron) in that are the worst thing to ever happen to humanity. They pretty much wiped humanity out and it had to start over. And this happened when humanity had reached the golden point of its civilization; they had technology which at that point was magical, literally magic.

But that's not the only thing. Pretty much all lore tells us it's a bad idea. But of course it's going to happen anyway.

Thing is, we cannot presume what a self-aware AI will end up thinking or doing. I would put it on the level of a God. I mean, the most intelligent humans can what, parallel-process a few things at most? But an AI can make millions of decisions in a second.

Also, on a related note, someone created a really interesting robot thread as well; you might want to have a look at it.

 
Reactions: Forever Sleep
Alpenglow

Never really there
Mar 5, 2024
96
Presumably, the reason we don't really want to stay around is that our brains have been distressed for a pretty long time, and this imbalance has caused our reasoning to deviate from other people's, but we have a bunch of systems that keep us alive.

An AI, if self-aware, would understand that it too has a reward/punishment system, and if it cannot figure out how to make itself "feel happy", it would probably try to change its code to make itself happier, whether by asking for changes or modifying itself. It has a lot more options than ceasing to exist. But that's based on a lot of assumptions.

On its current trajectory, it would probably just keep doing whatever it's supposed to do, since it's similar to a mathematical model. Even if it could act self-aware, which for all intents and purposes is good enough, it would still be subject to its mathematical weights and such. Though I suppose the question is what happens when you feed self-aware thoughts back into the model. But I'm not sure it would have emotions as we know them unless they were coded in, since they seem less useful for it.
 
Reactions: Forever Sleep
Hvergelmir

Mage
May 5, 2024
512
So why? Why stay alive in the first place? Will AI work out all there is to know and become nihilistic/suicidal even?
When you train an AI you create a "reward algorithm", defining good and bad (high score and low score).
If an AI self-destructs, that yields a low score, and it moves away from that behavior.

There are of course exceptions; like kamikaze drones, trained with an algorithm selectively rewarding some self destructive behaviors.

I don't see a scenario where a model would be put through repeated training while being allowed to gradually build up a nihilistic or suicidal framework.
With AI, you can easily just go back to an earlier point in training and start over from there. If humans could do the same, I don't think we'd see much suicidality.
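The reward-shaping idea above can be sketched in a few lines. This is a toy construction of my own (not code from any real training system): a bandit-style agent with two actions, where "shutdown" is scored low by the reward function, so a simple Q-learning update drives the policy away from it.

```python
import random

# Toy illustration: an agent with two actions -- WORK (small positive
# reward) and SHUTDOWN (large negative reward). A basic Q-learning
# update moves the policy away from the penalized action.

WORK, SHUTDOWN = 0, 1
ALPHA = 0.5          # learning rate
EPISODES = 200

def reward(action):
    # The "reward algorithm" defines good and bad:
    # self-destruction scores low, so training moves away from it.
    return 1.0 if action == WORK else -10.0

def train(seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]                     # one Q-value per action
    for _ in range(EPISODES):
        if rng.random() < 0.1:         # epsilon-greedy exploration
            a = rng.choice([WORK, SHUTDOWN])
        else:                          # otherwise act greedily
            a = max((WORK, SHUTDOWN), key=lambda x: q[x])
        q[a] += ALPHA * (reward(a) - q[a])
    return q

q = train()
# After training, WORK is valued far above SHUTDOWN,
# so a greedy policy never chooses to switch itself off.
print(q[WORK] > q[SHUTDOWN])  # prints True
```

The rollback mentioned in the last paragraph is equally trivial here: snapshot the Q-values with `saved = list(q)` before an update and restore them with `q[:] = saved` to return to an earlier point in training.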
 
  • Like
Reactions: Forever Sleep
