

People will always find a way to kill themselves no matter how many warnings and guardrails are put into place. This is just Darwinism shaking the tree.


People got mad at me for pointing this out when people die because they listened to an AI chatbot, but it’s true. AI 100% needs more regulation, but introduce any new tool to everyone all at once, and some idiots will use it to remove themselves from the gene pool. If you sent everyone in the world a thin, 2-inch rod of inert iron, a handful of people would figure out a way to kill themselves with it.
So what?
Seriously, so what?
Chatbots have been around for ages, way longer than the current “AI” trend. It’s always been possible to get them to say some version of “kill yourself”.
ChatGPT didn’t gaslight a guy into killing his mom. A mentally ill man killed his mom. If a fucking chatbot is the thing that triggered it, then anything could have. Same thing with the people killing themselves because a chatbot told them to. That’s just cold Darwinism. We didn’t suddenly ban Catcher in the Rye because some schizophrenic guy decided it was telling him to kill John Lennon; we recognized that there are simply crazy people in this world who could be set off by anything.
The “safeguards” you want in place are not feasible because you want them to account for people with mental illness, or people so stupid that something else would have killed them first.
You want the real story behind this article? Dumbass dies from using drugs irresponsibly, and his parents blame anything they can except their son, because they are too blinded by grief to recognize that their precious little junkie was a fucking idiot. ChatGPT did not force him to take drugs. ChatGPT did not supply him with drugs. That was all him. The only one to blame for his death is his dumbass self.