noname223

Archangel
Aug 18, 2020
6,628
I had a good day. After handing in the complaint about my therapist, I am relieved. I visited my friends and it felt really good to spend time with them. We discussed stocks, gold, and silver; I just talked like a waterfall and they listened to me. But we also discussed AI. My friends hate AI and its impact on society, humankind, and their own personal futures.

Now, to the core question: will SaSu one day have an AI chatbot? It seems unlikely, because the primary reason this forum is run is not financial. I think so, at least. I think the current AI models are not efficient at all. All-in-one models are the wet dream of big companies. But it would be way smarter to train small models for a specific purpose, for example, training a model locally on the data of one's own family business. This could actually be a business case.
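To illustrate, a minimal sketch of what such local training could look like, assuming the Hugging Face libraries; the model ("distilgpt2" as a small stand-in) and the data file name are placeholders, not a recommendation:

```python
# Rough sketch: fine-tune a small causal LM on local business documents.
# "distilgpt2" and "business_docs.txt" are placeholder choices.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# One local text file, one snippet per line; nothing leaves the machine.
dataset = load_dataset("text", data_files={"train": "business_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-model",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("local-model")  # the specialized model stays in-house
```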

I don't make assumptions about the motives of the people who are responsible for this forum. Maybe the people in charge will change. Maybe one could compare it in some way with assisted suicide organizations. The assisted suicide organizations don't want to be held accountable, and this is why they invented the Sarco, for example. There are even concepts where the Sarco is combined with AI in order to help determine whether someone is of sound mind to make this decision. And because an AI makes this decision, they say there are fewer human mistakes. And they hope to avoid legal responsibility when things go wrong.

If AI moderated this forum, who would be responsible? There would be fewer humans who have to make important decisions. Tech optimists would say this could offer help to the ones who need it the most. But it is more likely this would be highly unethical. And AI is not ready to make life-and-death decisions.

At the same time, we don't know how AI will develop. Maybe in a couple of years, in a dystopian future, we will let AI write a summary of our feelings, despair, and suicidality, and we will only nod it through. We become so passive that even venting on a suicide forum becomes a chore. And there might be doubts whether the person we are talking to online isn't a bot. Thankfully, this forum seems to be bot-free when it comes to discussions; I have wondered why in the past.

In a world where AI agents become mainstream, the question of accountability needs to be asked. Especially on a suicide forum, that's delicate. It is unlikely SaSu will have an AI bot in the near future. But I could imagine that on mainstream forums, AI chatbots might replace many human moderators. I think using a chatbot on SaSu would be highly problematic, and this is why I think it won't be implemented in the near future.

But AI might change the Internet fundamentally. And SaSu remains a unique place thus far, where some capitalistic incentive structures are not implemented yet. I doubt there will ever be ads on this website. But the more the nature of interactions on the Internet changes, the greater the chance that SaSu might eventually change too. And there might even be a demand for that, in a dystopian world which doesn't seem totally unrealistic anymore.

What do you think?
 
Last edited:
  • Like
Reactions: Forever Sleep and katagiri83

Forever Sleep

Earned it we have...
May 4, 2022
14,528
I imagine an AI would struggle here, given that we're in a very grey area, crossing over into a lot of taboo subjects. How well is it trained to understand nuance? Would it know when to lock a thread, for example?

That said, I was arguing for AI robot police (if they could get them to work) the other day, so that would be even more responsibility.

I suppose with law, though, it feels like there is right and wrong. Then it's up to the courts to figure out the nuance side of it. Here, we're talking about perspectives, none of which a robot will understand.

Surely, AI is programmed to be pro-life though. Wouldn't it just be offering up helplines all over the place? The poor thing would likely go into a meltdown on the first day.
 
Pluto

Cat Extremist
Dec 27, 2020
6,267
[image]
 
  • Like
  • Yay!
Reactions: Happy Cat and hurb
hurb

Member
Jan 22, 2026
47
people have probably made their own personal AI that's simply not monitored. the closest u can get to a marketable SaSu AI is Grok, if u know how to prompt it
 
  • Like
Reactions: Xi-Xi

Hvergelmir

Warlock
May 5, 2024
725
If AI moderated this forum who would be responsible? [...] AI is not ready to make life and death decisions.
Frankly, neither are unpaid volunteer moderators.
If we want a free Internet, users have to be responsible for themselves.

I think AI will come into play eventually, but it's hard enough to have LLMs consistently reflect mainstream values. I don't think human moderation can be completely cut out anytime soon.
But it would be way smarter to train small models for a specific purpose, for example, training a model locally on the data of one's own family business. This could actually be a business case.
...until you want help with taxes, or to compare with the competition, or to move production to Asia. The scope can easily expand to encompass huge and complicated areas.
From my experience with smaller support bots, traditional indexed documentation and FAQs often do much better. A bot desperately trying to make up for missing information is just frustrating and sometimes misleading.
Where LLMs really shine is when you have huge amounts of data that can't be easily indexed.
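To make that concrete, here is a toy sketch of the indexed-FAQ approach: plain TF-IDF retrieval over a handful of made-up entries, no generation at all, so when nothing matches it says so instead of inventing an answer:

```python
# Toy FAQ "index": TF-IDF retrieval, no generative model involved.
# The entries are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "How do I delete my account?": "Open a support ticket and ask for deletion.",
    "Where do I change my avatar?": "Account settings -> Profile -> Avatar.",
}

questions = list(faq)
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(questions)  # built once, reused for every query

def answer(query: str, threshold: float = 0.2) -> str:
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "No matching FAQ entry."  # admit the gap instead of guessing
    return faq[questions[best]]

print(answer("i forgot my password"))
```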
How well is it trained to understand nuance? Would it know when to lock a thread for example?
How well are the human moderators trained? Do they know when to lock a thread?

A "better" moderator with more robust training, could reduce conflict, boost retention, and raise overall satisfaction. I think AI could potentially do that.
I think that it would alienate the very community this place was meant to serve, though.

The requirements are both fuzzy and ever-changing. I don't think AI lacks capability as much as we lack a rigid definition of what we really want. We tend instead to rely on constant adjustments, compromises, and negotiations.
 
  • Like
Reactions: Forever Sleep
pax420

Someone in my head but it's not me
Jan 19, 2026
31
It's really simple. I don't care how advanced AI gets; I won't gamble against a computer and I won't let one get inside my head. Humans only, thank you.
Sorry, not trolling, but I had to check the spelling. Billy Idol said it best: "les yeux sans visage" ("eyes without a face").
 
Last edited:

noname223

Archangel
Aug 18, 2020
6,628
...until you want help with taxes, or to compare with the competition, or to move production to Asia. The scope can easily expand to encompass huge and complicated areas.
From my experience with smaller support bots, traditional indexed documentation and FAQs often do much better. A bot desperately trying to make up for missing information is just frustrating and sometimes misleading.
Where LLMs really shine is when you have huge amounts of data that can't be easily indexed.
I think no exclusion is needed; small and big LLMs can both exist at the same time.
And a specific purpose for an AI is also a matter of definition. There can be a large subset of tasks for the LLM, but it does not need to be able to cover every niche subject when a large LLM might be better suited for that.

AI that works locally or with open-source technology has its own advantages. It is also better for the protection of sensitive data. I don't deny that a lot of data helps LLMs shine, but specializing for a business case often requires adapting to the needs of customers. And sometimes that's better done with small models than with an all-in-one solution where there is, for example, a lack of transparency or of privacy rights.
 

Hvergelmir

Warlock
May 5, 2024
725
AI that works locally or with open source technology has its own advantages.
I love open source: GPT-OSS, DeepSeek, etc.
I would not be surprised if I end up running a local model in the future. But for now, we're mostly talking about models like Llama 8B, and with such small models I've seen no real use case.

Part of the problem is that freeform prompting gives the appearance of them being general-purpose, and that they handle things they don't know so poorly.
To me it appears that a small model can, almost every time, be replaced by a better solution: traditional algorithms, etc.
An exception might be spell checking and grammar, code completion, and things along those lines, where the model presents suggestions rather than talking with you.

It's a topic I try to follow closely, and I'm curious to hear about successful use cases, especially for local 8B models and smaller: things that can run on a consumer GPU.
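For context, this is the kind of consumer-GPU setup I mean, a minimal sketch with llama-cpp-python; the GGUF file name is a placeholder for whatever quantized model you have locally:

```python
# Minimal sketch: run a small quantized model locally via llama-cpp-python.
# The model file name is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,       # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Suggest a spelling fix: 'recieve'"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```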
 
  • Informative
Reactions: noname223
