A safety switch for artificial intelligence

AI Safety Switch: OpenAI Panic

AI is spreading at a pace that is virtually uncontrolled and potentially dangerous to all of us. California wants a safety switch for that.

ChatGPT provider OpenAI says no

Safety protocols are nothing new in the IT industry, but for tools with the global reach of ChatGPT, they seem almost obvious. That is the thinking behind SB 1047, a bill that would regulate the AI market in California.

A particularly contentious provision is a mechanism that would allow artificial intelligence servers to be shut down in justified situations — a kind of emergency stop switch. OpenAI, however, openly criticizes the entire bill, not just individual aspects of it.

Former OpenAI employees are surprised by the company’s approach

In a letter to California Gov. Gavin Newsom and other lawmakers, seen by Politico, two former OpenAI employees criticize the company’s opposition. William Saunders and Daniel Kokotajlo wrote:

We joined OpenAI because we wanted to ensure the security of the incredibly powerful AI systems it develops. But we left OpenAI because we lost confidence that the company would develop its AI systems safely, fairly, and responsibly (…).

Developing leading AI models without appropriate precautions carries a foreseeable risk of catastrophic harm to society.

As the authors of the letter rightly point out, Sam Altman, CEO of OpenAI, has repeatedly and publicly supported the concept of AI regulation, including during testimony before Congress.

What does OpenAI say about this?

In a statement to Business Insider, an OpenAI spokesperson dismissed the criticism as overblown, saying the company is eager to discuss SB 1047 and have a thoughtful debate: “We strongly disagree with the misinterpretation of our position on SB 1047.”
