- OpenAI CEO Sam Altman gave one of his first public responses Thursday to the open letter calling for a pause on AI development.
- Speaking at MIT, Altman said safety is important to OpenAI and that the letter “lacked technical nuance.”
- The letter called for a six-month pause on AI development and was signed by tech leaders like Elon Musk.
In one of his few public responses to the open letter that Elon Musk and hundreds of others in the tech industry signed over concerns about AI, OpenAI CEO Sam Altman said it “lacked technical nuance about where we need the pause.”
The letter, released last month, called for at least a six-month pause on development of artificial intelligence models “more advanced” than OpenAI’s GPT-4, the latest model behind the popular chatbot ChatGPT.
“There’s parts of the thrust that I really agree with,” Altman said during an MIT event Thursday. “We spent more than six months after we finished training GPT-4 before we released it, so taking the time to really study the safety of the model … to really try to understand what’s going on and mitigate as much as you can is important.”
Altman noted that an earlier version of the letter claimed OpenAI was already training GPT-5. “We are not, won’t for some time, so in that sense it was sort of silly,” he said.
Altman has tweeted about the letter a few times and retweeted his cofounder Greg Brockman, but has otherwise largely stayed quiet about it.
The letter has drawn criticism elsewhere in the industry. Tech leaders like Bill Gates and Google CEO Sundar Pichai have called a pause impractical and nearly impossible to enforce without government involvement.
Others, like LinkedIn cofounder Reid Hoffman, have suggested Musk signed the letter out of jealousy. Musk left OpenAI in 2018, and Hoffman said Musk and others want a pause so they can catch up and offer their own AI products. Insider’s Kali Hays reported this week that Musk has purchased thousands of graphics processing units to power an AI project, possibly at Twitter.
Altman also said OpenAI is developing additions to GPT-4 that present safety issues the company will need to address. Generally, “as capabilities get more and more serious, the safety bar’s gotta increase,” he said.
The OpenAI CEO also said his company will continue to be as honest as possible about its AI developments. Because he believes those developments will impact everyone, he said, as many people as possible should be involved in testing and learning about them.
“We believe that engaging everyone in the discussion, putting these systems out into the world, deeply imperfect though they are in their current state, so that people get to experience them, think about them, understand the upsides and the downsides, it’s worth the tradeoff even though we do tend to embarrass ourselves in public and have to change our minds with new data frequently,” he said.