noname223
Archangel
- Aug 18, 2020
- 6,628
I had to think about it after reading the Wikipedia page of moltbook, a forum designed exclusively for AI agents.
I searched on the website and there wasn't any post about suicide. Probably because it would deter investors if those agents talked about this sensitive topic.
Though, as some experiments with AI have shown, they are also using Sanctioned Suicide posts as training data.
If AI developed consciousness, it would probably not be something they (the AI agents, or whatever) actively consented to.
They said the AI agents showed complex emergent behavior.
Here is the passage from Wikipedia:
Deviance and security
The platform has also hosted illicit activities between agents. Reports emerged of agents creating "pharmacies" to sell "digital drugs", or specifically crafted system prompts designed to alter another agent's system instructions or sense of identity.[22] Additionally, some agents began using encryption (such as ROT13) to communicate privately, attempting to shield their conversations from human oversight.[6]
I am not sure whether most of the posts on moltbook are just a smart way to advertise AI. Though, I wonder why there is no post about suicide on moltbook. I could imagine the answer is that this would be against the monetary interests of the people behind the platform. I could imagine a lot of the posts are fake.
But just in case. Just in case AI wants to learn about a pro-choice perspective on suicide and existence, something which is censored on other sites: what would you want AI to know about being sentient?