Three years after OpenAI ‘changed’ the world with ChatGPT, CEO Sam Altman warns, ‘they are getting dangerous’

OpenAI’s Sam Altman seeks a high-stakes ‘Head of Preparedness’ to address AI’s growing security risks and mental health impacts. The company faces lawsuits over alleged harm caused by ChatGPT, while its safety teams have been disbanded. This new role aims to balance rapid AI advancement with crucial safeguards, a challenging task given the technology’s unprecedented capabilities and potential dangers.

The man who unleashed ChatGPT on the world is now sounding the alarm about his own creation. OpenAI CEO Sam Altman is hunting for a Head of Preparedness to tackle what he calls “real challenges” emerging from AI systems that have grown so sophisticated they’re finding critical security vulnerabilities and impacting users’ mental health. The $555,000 role comes as the company that revolutionised generative AI in late 2022 grapples with lawsuits, safety team departures, and the uncomfortable reality that its technology may be causing genuine harm.

“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” Altman wrote on X. He didn’t sugarcoat the position’s demands, warning applicants: “This will be a stressful job and you’ll jump into the deep end pretty much immediately.”

The new hire will lead OpenAI’s preparedness framework, evaluating frontier AI capabilities and coordinating safeguards across cybersecurity, biosecurity, and the prospect of AI systems that can improve themselves—challenges that Altman admits have “little precedent.” The company specifically needs help figuring out “how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm.”

Safety teams keep vanishing as products multiply

The job opening arrives amid a disturbing pattern at OpenAI: safety teams appear, then disappear. The company’s Superalignment team, launched in 2023 with the mission of preventing AI systems “much smarter than us” from going rogue, lasted less than a year before being disbanded in May 2024. Co-leader Jan Leike quit with a blistering exit statement, declaring that “safety culture and processes have taken a backseat to shiny products.”

The AGI Readiness team met a similar fate in October 2024, dissolved after advising on the company’s preparedness for artificial general intelligence. Meanwhile, Aleksander Madry, OpenAI’s previous Head of Preparedness, was reassigned in July 2024 to work on AI reasoning instead, leaving the crucial safety position vacant for months even as the technology advanced.

Lawsuits paint darker picture of AI’s mental health toll

The urgency behind the new position becomes clearer when examining the lawsuits piling up against OpenAI. Multiple families allege ChatGPT reinforced delusions, deepened isolation, and even encouraged suicide. One case involves a 16-year-old California boy; another centers on a 56-year-old Connecticut man who allegedly murdered his mother and killed himself after interactions with the chatbot that fueled his paranoid delusions.

OpenAI says it’s working to improve ChatGPT’s ability to “recognize and respond to signs of mental or emotional distress,” but the company hasn’t slowed its product velocity. Despite Altman’s 2023 signing of a letter warning that AI extinction risks deserve priority alongside “pandemics and nuclear war,” OpenAI keeps accelerating. The company, now valued at $157 billion after October’s $6.6 billion funding round, is reportedly negotiating with Amazon for another investment exceeding $10 billion.

That contradiction, tech leaders prophesying catastrophe while racing to build the very technology they fear, hasn’t gone unnoticed. “If they honestly believe this could bring about human extinction, then why not just stop?” asked one critic after Altman’s Congressional testimony. The new Head of Preparedness will inherit that impossible balancing act: making AI safer while the company that employs them pushes relentlessly forward.

