From Futurism: “For the better part of a year, we’ve watched — and reported — in horror as more and more stories emerge about AI chatbots leading people to self-harm, delusions, hospitalization, arrest, and suicide.
As the loved ones of the people impacted by these dangerous bots rally for change to prevent such harm from happening to anyone else, the companies that run these AIs have been slow to implement safeguards — and OpenAI, whose ChatGPT has been repeatedly implicated in what experts are now calling “AI psychosis,” has until recently done little more than offer copy-pasted promises.
In a new blog post admitting certain failures amid its users’ mental health crises, OpenAI also quietly disclosed that it’s now scanning users’ messages for certain types of harmful content, escalating particularly worrying content to human staff for review — and, in some cases, reporting it to the cops.”

The post OpenAI Says It’s Scanning Users’ ChatGPT Conversations and Reporting Content to the Police appeared first on Mad In America.
IPAK-EDU is grateful to Mad In America as this piece was originally published there and is included in this news feed with mutual agreement.