
noname223

Archangel
Aug 18, 2020
6,199
At least the ones I am using: ChatGPT, Grok AI, Perplexity, etc.

I wanted to know which thread in this forum has the most views/replies in the subforum Politics & Philosophy. I was surprised that my thread about the Charlie Kirk assassination received more replies and views than the one about the Trump-Epstein connection.

The AI said it is not allowed to give me an answer because access to this forum is restricted (content restrictions on sensitive sites). If someone knows a cost-free AI chatbot that can be used for questions about Sanctioned Suicide, please let me know. I am not sure how to feel about that.

Early on, I asked an AI to analyze my character based on my profile here. Other members did that too, and it was pretty interesting. I even screenshotted the analysis of my account.

After a while this was not possible anymore. It said that, for privacy reasons, it won't analyze individual members of an internet forum. That has advantages and disadvantages: it is good for one's privacy, but bad for interesting analyses of one's personality. I am not sure whether it was possible to circumvent this safety measure, but I could imagine it was.

But now it is even harder, and the new restriction policy (which I mentioned earlier) is displayed.

What do you think about it? AI companies are under a lot of pressure because AI is being blamed for the suicides of individuals.
 
  • Like
Reactions: katagiri83 and Forever Sleep
TAW122

Emissary of the right to die.
Aug 30, 2018
7,207
I personally don't feel comfortable delving into sensitive content, subjects, or topics with AI. I never cross the line with AI because I'm not certain what AI companies may do with the content, and I would rather not take that risk and face other unforeseen consequences. I'm sure there are other, personalized AIs (outside of the mainstream ones) that may allow it, but for me it's not worth the risk.
 
  • Love
Reactions: amerie
Hvergelmir

Mage
May 5, 2024
544
I wanted to know which thread in this forum has the most views/replies in the subforum politics and philosophy.
The filter will answer that more reliably than any AI.
https://sanctioned-suicide.net/forums/politics-philosophy.19/?order=reply_count&direction=desc

If you know how to run a Docker container, you might want to look into gpt4free. It includes a bunch of providers with free access, some with very loose restrictions. (Try gemini 2.5 pro, from api.airforce when it's available.)
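For anyone who hasn't set up gpt4free before, the usual route is the project's own Docker image; a minimal sketch follows. The image name and port numbers below are from my memory of the project's README (github.com/xtekky/gpt4free), so verify them there before running.

```shell
# Pull and run the gpt4free container (image name per the project's README;
# check it for the current tag and exposed ports before relying on this).
docker pull hlohaus789/g4f:latest
docker run -p 8080:8080 -p 1337:1337 hlohaus789/g4f:latest

# Then, if the defaults hold:
#   web UI:                    http://localhost:8080/chat/
#   OpenAI-compatible API:     http://localhost:1337/v1
```

Any client that speaks the OpenAI API format should then be able to point at the local port instead of a paid endpoint.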
 
  • Informative
Reactions: noname223
heywey

Member
Aug 28, 2025
19
Another option is running an LLM on your own computer; nowadays even smaller models are capable enough for simple stuff. It won't be as smart or fast as the ones the big providers, erm, provide, but it has a few advantages: complete privacy, no guardrails/censorship (beyond what's baked into the models), always free, and no forced changes or removed functionality. If you have 16 GB of RAM, that's enough to run most models under 20B parameters, including OpenAI's own gpt-oss, which was released last month.
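As a rough sanity check on the "16 GB is enough for models under 20B parameters" claim, here is a hypothetical back-of-the-envelope calculator. The 4-bit quantization figure and the 1.2x runtime overhead factor are my own assumptions, not numbers from the post:

```python
def model_memory_gb(params_billions: float, bits_per_param: int = 4,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate for running a quantized LLM locally.

    bits_per_param: 4 for common Q4-style quantizations, 16 for fp16.
    overhead: fudge factor for KV cache and runtime buffers (assumption).
    """
    weight_bytes = params_billions * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

# A 20B-parameter model at 4-bit quantization fits comfortably in 16 GB:
print(round(model_memory_gb(20), 1))  # ~12.0 GB
```

The same model at fp16 (`bits_per_param=16`) would need roughly four times as much, which is why quantized builds are the norm for consumer hardware.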

I'd highly recommend Alpaca if you happen to be on Linux; it makes setting everything up super easy. I don't know an alternative for Windows off the top of my head, but some searching led to https://chatboxai.app/ and https://openwebui.com/, which both look pretty good. All three use Ollama as a backend anyway. (I'm not sure whether gpt4free connects to Ollama?)
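Since these front-ends all talk to Ollama underneath, you can also skip the GUI and hit Ollama's HTTP API directly once the server is running. A minimal sketch, using Ollama's documented `/api/generate` endpoint on its default port; the model name is just an example of something you'd have pulled with `ollama pull` first:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def ollama_payload(model: str, prompt: str) -> bytes:
    """Build the JSON request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()

def ask_ollama(prompt: str, model: str = "llama3.2") -> str:
    """Send one prompt to a locally running Ollama server and
    return the generated text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=ollama_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing here leaves your machine, which is the whole point of the local setup.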

As for external providers gimping their models' functionality in the name of alignment: it annoys me personally, but it's not unreasonable, all things considered. Clamping down on potentially harmful and/or fringe stuff was inevitable. I think it's important for open-source models to keep growing and improving, because otherwise the only way to use this technology is if it's (1) profitable for the company, typically by hoovering up all your data, and (2) super duper legally safe, meaning no touching controversial content with a ten-foot pole. There's a place for free providers like that, but I really think the next big leap in AI will be when fully capable models can run on hardware as weak as your phone. Like going from the age of mainframes to the personal-computer era.
 
  • Informative
Reactions: noname223
