Jan Leike, a key OpenAI researcher who resigned earlier this week following the departure of cofounder Ilya Sutskever, posted on X Friday morning that “safety culture and processes have taken a backseat to shiny products” at the company.
Leike’s statements came after Wired reported that OpenAI had disbanded the team dedicated to addressing long-term AI risks (called the “Superalignment team”) altogether. Leike had been running the Superalignment team, which formed last July to “solve the core technical challenges” in implementing safety protocols as OpenAI developed AI that can reason like a human.
The original idea behind OpenAI was to provide its models openly to the public, hence the organization’s name, but its models have since become proprietary…