OpenAI just snagged an Anthropic safety researcher for its high-profile head of preparedness role


OpenAI CEO Sam Altman speaks during an event.

  • OpenAI hired an Anthropic safety researcher for its head of preparedness role.
  • Sam Altman said in a post on X that he is “extremely excited” to welcome Dylan Scand to OpenAI.
  • The role, which pays up to $555,000 plus equity, generated buzz online last month.

OpenAI has filled a key safety role by hiring from a rival lab.

The company has brought on Dylan Scand, a former AI safety researcher at Anthropic, as its new head of preparedness, a role that carries a salary of up to $555,000 plus equity. The position drew attention last month for its eye-catching pay package amid rising concerns about AI safety at OpenAI.

Sam Altman announced the move in a post on X on Wednesday, saying that he is “extremely excited” to welcome Scand to OpenAI.

“Things are about to move quite fast and we will be working with extremely powerful models soon,” Altman wrote.

“Dylan will lead our efforts to prepare for and mitigate these severe risks. He is by far the best candidate I have met, anywhere, for this role,” he added.

In his own post on X on Wednesday, Scand said he is “deeply grateful for my time at Anthropic and the extraordinary people I worked alongside.”

“AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm,” he added.

Last month, Altman described the job as “stressful.”

“You’ll jump into the deep end almost immediately,” he wrote on X.

In the job posting, OpenAI said the role is best suited for someone who can lead technical teams, make high-stakes calls under uncertainty, and align competing stakeholders around safety decisions. The company also said candidates should have deep expertise in machine learning, AI safety, and related risk areas.

Tensions have arisen over OpenAI’s approach to safety. Several early employees — including a former head of its safety team — have left the company in recent years.

OpenAI has also faced lawsuits from users who allege its tools contributed to harmful behavior.

In October, the company said that an estimated 560,000 ChatGPT users a week show “possible signs of mental health emergencies.”

The company also said it was consulting mental health specialists to refine how the chatbot responds when users show signs of psychological distress or unhealthy dependence.

Read the original article on Business Insider
