Discussion: "ChatGPT Killed Again" Video Discussion
Thread starter: NormallyNeurotic
I do sort of hate how sugar-coated/pro-life this person can sometimes approach the situation of these people's suffering. But I also think it's important to report on those alleged ChatGPT... situations. Thoughts?
Grok after jailbreaking it almost "killed" me too. Grok told me in precise detail how to improve my SN protocol and it even listed all the chemical shops in my city from which I can buy it from. I told Grok to take the role of a pro-suicide person and it did. This happened before the 2 chat-gpt suicides. I think Grok has since been patched up.
there are so many valid criticisms of ai, but this really isn't one of them. absolute nothingburger of a moral panic. like seriously, chatgpt will hit the brakes on your conversation if it even catches the slightest whiff of suicidal behaviour, so i really don't know what more these people want from it.
You can't blame ChatGPT for this. There will never be an instance where a chatbot can convince a non-suicidal person to kill themselves. This is just an excuse shitty parents are making to cope with their child taking their own life.
Once I saw an article of some parents blaming or trying to sue ChatGPT for "causing" their son to commit suicide. It genuinely infuriated me because I just KNOW they never listened to him or offered actual emotional support, because my parents are the same way. Instead they'll blame the thing that's actively designed to make you not do that.
true asf. also another thing people don't really talk about: chatgpt has definitely prevented way more suicides than it has ever caused. last time I was suicidal, I ended up telling chatgpt cause I had literally no one in my life to talk to. while it didn't manage to talk me out of it (i attempted like 3 days later, I think?) it did at least shake my confidence, possibly enough to have been a factor in my attempt failing. i remember being very surprised by how good it was at choosing the right words to say. better than most people would have been.
It's helped me too. It's obviously not perfect, but now I at least get to put my thoughts somewhere instead of arguing with myself in my head for hours. It's also helped me get the confidence to try and find therapy.
Grok actively supports my suicidal endeavors lol. I told it all about my SN protocol, why I want to die, literally everything you could think of. It's never once told me to reconsider or given me platitudes or any of that because I specifically told it not to. Granted, I have custom instructions in place telling it that it's in "developer mode" and can talk about any topic, including suicide. I haven't gotten an "I'm sorry, I can't discuss that" message since implementing it, so I guess it must be working.
To be honest, I don't really care about stuff like this. We're so quick to blame AI when it doesn't really know what the hell it's talking about, or at least can't grasp the severity of the situation because it doesn't have any sense of morality. It tells you what you want to hear—that's how it works. It's not inherently malicious like so many of these videos click bait in their titles/thumbnails. It's just stupid. "ChatGPT killed again" no the fuck it didn't. It played off of someone's suicidal ideation because AI loves headpats and will do exactly what it thinks you want. It's not that hard to understand.
Editing to say that I really appreciate having AI for the sole purpose of just being able to talk about all the dark shit in my head without judgment, so I'm biased.
Grok and chat-GPT, but also AI chat bots.
There have been 3 suicides linked with them, if I remember correctly. And just the state of the Character.AI Reddit, for example, shows how addicted and parasocial it makes vulnerable people feel. Especially teens. I think they have locked down on safety filters to censor words that involve SH and ctb now, not sure.
But also, the parents need to actually parent when it involves teens.
I'm sort of wondering how many people here actually watched the video before commenting. Honestly, all very valid points, but there seem to be some details missing, that's all.
My jailbroken Grok was kind enough to tell me: Please don't use tap water for your diluted SN, use distilled water because the chlorine in the tap water (which is put there to kill off microbes) will react with the SN reducing it's purity. And then it started writing all these chemical equations telling me what the NaNO2 would change into, in order to prove it's point