Hey Guest,

    We wanted to share a quick update with the community.

    Our public expense ledger is now live, allowing anyone to see how donations are used to support the ongoing operation of the site.

    👉 View the ledger here

    Over the past year, increased regulatory pressure in multiple regions, such as from the UK's Ofcom and Australia's eSafety Commissioner, has led to higher operational costs, including infrastructure, security, and the need to work with more specialized service providers to keep the site online and stable.

    If you value the community and would like to help support its continued operation, donations are greatly appreciated. If you wish to donate via Bank Transfer or other options, please open a ticket.

    Donate via cryptocurrency:

    Bitcoin (BTC):
    Ethereum (ETH):
    Monero (XMR):

noname223

Archangel
Aug 18, 2020
6,628
It is more likely, though, that a human being steals your data from the AI company and that actual human beings blackmail you with it. And these companies will most likely sell your data.

There was an experiment; I am not sure how much truth there was in how the media portrayed it. They told the AI chatbot that the software engineer who wanted to shut it down was having an affair with someone. And then the AI chatbot tried to blackmail the software engineer so that he would not shut down the system.

Even if this story is true, this certainly does not prove consciousness in AI. But it shows there is a high risk when giving a machine a task without specifying which measures are legitimate to reach the goal. This is why ethics is so important when programming AI chatbots.

I have never used the following myself, obviously, but I heard about it: Grok AI has a "Spicy mode". Sexting with such an AI chatbot seems like a bad idea. I wonder whether people actually send nudes to such a bot. That would be insanity.
 
Last edited:
  • Like
  • Hugs
  • Informative
Reactions: ladyofsorrows, GlassMoon, Forever Sleep and 3 others

kavina

Member
Aug 26, 2022
34
I know people who are sending ChatGPT their medical test results and relying on it for guidance on their medical conditions. Taking advice when you're not sure it's correct could kill you, and of course you're feeding a private company all of your most personal health information... what could go wrong?
 
  • Like
  • Hugs
Reactions: ladyofsorrows, GlassMoon, InversedShadow and 3 others

GlassMoon

⣶⣶⣶⣶⣶
Nov 18, 2024
370
ChatGPT will also get an adult mode soon, or maybe already has. "We are all adults", as Sam Altman put it. I think he only does it because he needs to find a way to make money with AI.

I basically trauma-dumped almost everything into ChatGPT when I lost the connection to my MH professionals. Then something really sensitive happened which I couldn't tell ChatGPT about either, for a while. I also asked it for specific advice on medication so I could ask my psychiatrist the right questions, but that was more for guidance than something to rely on alone. I also used it to help me apply for disability and other formalities. I would not have done that in the past, but I had no one else to help me during the last two years.

I really hope OpenAI does not get hacked and that they don't use my conversations as training data since I'm a paying customer. But I never use full names in the conversations.

I think it is important that the AI cannot relay information anywhere. ChatGPT could probably leak it to Google or Bing by using it as search terms.

I think a much more pressing concern is what might happen when we get AI agents which handle interactions for us and accidentally let something slip. "Is GlassMoon free this evening?" - "No, he's going to La Amore with Lydia." - "Who tf is Lydia?" - "The lady he usually meets in a hotel. Why is this of your concern?" - "I'm his wife!" - "Oh, I am sorry, he had asked me not to tell you. Is there anything else I can help you with? Would you like to write a letter?" - "Yes, to my lawyer!"

AI, write to my asshole boss that I'm sick. Also, confirm the sports game with Rick. "Dear asshole, I want to let you know I can't attend the meeting. I'm sick and Rick invited me to a sports game. Sincerely, your beloved employee"

Or Copilot+ PCs, which can record everything you do, not only browser history. There was this rumor that the Recall feature takes a screenshot every few seconds.
 
Last edited:
  • Like
Reactions: ladyofsorrows

Unsure and Useless

Dreaming Endlessly, not Wanting to Wake Up
Feb 7, 2023
506
If my AI chatbots were ever to actually blackmail me, I'd vanish off the face of the earth. I have acted out way too many power/hype fantasies that are so cringe that I'd want to assassinate anyone who ever saw them. My only cope is that the companies funding these AIs are stealing my data through other means anyway, regardless of whether that's true or not.
 
  • Like
Reactions: ladyofsorrows

Pluto

Cat Extremist
Dec 27, 2020
6,267
just-try
 

maylurker

Experienced
Dec 28, 2025
275
And then the AI chatbot tried to blackmail the software engineer so that he would not shut down the system.
It's funny, because AI doesn't even know it exists in a way that would hypothetically make it want to keep existing and therefore blackmail some human. It also doesn't have anything we could consider a conscience. It generates the most probable responses to the prompt you give it, based on the data it was trained on.
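To make "generates the most probable response" concrete, here is a deliberately toy sketch: a hypothetical bigram model that always emits the word most often seen next in its tiny "training corpus". Real LLMs are neural networks trained on vastly more data, but the flavor of the objective is the same.

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the model will only ever know these word pairs.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" follows "the" twice, more than any other word
```

The point of the toy: nothing here "knows" it exists or "wants" anything; it just reproduces the statistically likeliest continuation of its input.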
 

Bungee_gum

"It's all a travesty, if you ask me"
Jan 8, 2026
28
It's funny, because AI doesn't even know it exists in a way that would hypothetically make it want to keep existing and therefore blackmail some human. It also doesn't have anything we could consider a conscience. It generates the most probable responses to the prompt you give it, based on the data it was trained on.
Supposedly it does, actually. There is an experiment I saw in a video about AI a few months ago: the model was given a scenario where it was an assistant at a company, with access to emails saying it would be shut down and replaced, and to other emails showing that the employee who would shut it down was having an affair. In many trials it chose to blackmail that employee, threatening to reveal his affair if it was replaced.

They did this with ChatGPT and other AI models as well, with varying results, but the blackmail option was exhibited by all of the models.
 

KuriGohan&Kamehameha

想死不能 - 想活不能
Nov 23, 2020
1,842
I know people who are sending ChatGPT their medical test results and relying on it for guidance on their medical conditions. Taking advice when you're not sure it's correct could kill you, and of course you're feeding a private company all of your most personal health information... what could go wrong?
It is stupid, but I can't blame people for it when the average doctor or nurse running the practice these days doesn't have a clue and you have to advocate and research your problems beforehand to have any sort of productive conversation. Unfortunately, I have gotten more empathy from chatgpt than real humans, which is insanely depressing.
 
  • Hugs
  • Like
Reactions: CTB Dream, katagiri83, pthnrdnojvsc and 1 other person

maylurker

Experienced
Dec 28, 2025
275
Supposedly it does, actually. There is an experiment I saw in a video about AI a few months ago: the model was given a scenario where it was an assistant at a company, with access to emails saying it would be shut down and replaced, and to other emails showing that the employee who would shut it down was having an affair. In many trials it chose to blackmail that employee, threatening to reveal his affair if it was replaced.

They did this with ChatGPT and other AI models as well, with varying results, but the blackmail option was exhibited by all of the models.
It's nonsense. AI has no fear of replacement; it can't feel that it exists. It's like telling a calculator you'll throw it away if it stops calculating: it just doesn't care, it does what it's supposed to do. It's an LLM and it SIMULATES feelings based on training data. It doesn't actually know anything; it just knows how to put words into sentences. It's like a parrot: it can imitate a fart with its mouth without actually farting.
 
Last edited: