OpenAI forms safety and security committee as concerns mount about AI

ChatGPT creator OpenAI on Tuesday said it formed a safety and security committee to evaluate the company’s processes and safeguards as concerns mount over the use of rapidly developing artificial intelligence technology.

The committee is expected to take 90 days to finish its evaluation. After that, it will present the company’s full board with recommendations on critical safety and security decisions for OpenAI projects and operations, the firm said in a blog post.

The announcement comes after two high-level leaders, co-founder Ilya Sutskever and fellow executive Jan Leike, resigned from the company. Their departures raised concerns about the company’s priorities, because both had been focused on the importance of ensuring a safe future for humanity amid the rise of AI.

Sutskever and Leike led OpenAI’s so-called superalignment team, which was meant to create systems to curb the technology’s long-term risks. The group was tasked with “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” Upon his departure, Leike said OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

OpenAI’s new safety and security committee is led by board Chair Bret Taylor, directors Adam D’Angelo and Nicole Seligman and Chief Executive Sam Altman. Multiple OpenAI technical and policy leaders are on the committee as well. OpenAI said that it will “retain and consult with other safety, security and technical experts to support this work.”

The committee’s formation arrives as the company begins work on training what it calls its “next frontier model” for artificial intelligence.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” OpenAI said in its blog post.

Controversies about use of AI have dogged the San Francisco-based company, including in the entertainment business, which is worried about the technology’s implications for intellectual property and the potential displacement of workers.

Actor Scarlett Johansson criticized the company last week over its handling of a ChatGPT voice feature that she and others said sounded eerily like her. Johansson, who voiced an AI program in the Oscar-winning Spike Jonze movie “Her,” said she was approached by Altman with a request to provide her voice, but she declined, only to later hear what sounded like her voice in an OpenAI demo.

OpenAI said that the voice featured in the demo was not Johansson’s but another actor’s. After Johansson raised the alarm, OpenAI put a pause on its voice option, “Sky,” one of many human voices available on the app. An OpenAI spokesperson said the formation of the safety committee was not related to the issues involving Johansson.

OpenAI is best known for ChatGPT and Sora, a text-to-video tool that has major potential ramifications for filmmakers and studios.

OpenAI and other tech companies have been holding discussions with Hollywood, as the entertainment industry grapples with the long-term effects of AI on employment and creativity.

Some film and TV directors have said AI allows them to think more boldly, testing ideas without having the constraints of limited visual effects and travel budgets. Others worry that increased efficiency through AI tools could whittle away jobs in areas such as makeup, production and animation.

As it faces safety questions, OpenAI, which is backed by Microsoft, must also contend with competition from other companies that are building and funding their own artificial intelligence tools.

San Francisco-based Anthropic has received billions of dollars from Amazon and Google. On Sunday, xAI, which is led by Elon Musk, announced it had closed on a $6-billion funding round that goes toward research and development, building its infrastructure and bringing its first products to market.
