Top AI CEOs, experts raise 'risk of extinction' from AI

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read

May 31, 2023 09:59 am | Updated 07:07 pm IST - STOCKHOLM/LONDON

Top artificial intelligence (AI) executives, including OpenAI CEO Sam Altman, on Tuesday joined experts and professors in raising the "risk of extinction from AI", urging policymakers to treat it on par with the risks posed by pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS).


Besides Mr. Altman, the signatories included the CEOs of AI firms DeepMind and Anthropic, as well as executives from Microsoft and Google.

Also among them were Geoffrey Hinton and Yoshua Bengio — two of the three so-called "godfathers of AI" who received the Turing Award for their work on deep learning — and professors from institutions ranging from Harvard to China's Tsinghua University.

Meta singled out

A statement from CAIS singled out Meta, where the third godfather of AI, Yann LeCun, works, for not signing the letter.

"We asked many Meta employees to sign," said CAIS director Dan Hendrycks. Meta did not immediately respond to requests for comment.

The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden where politicians are expected to talk about regulating AI.

Elon Musk and a group of AI experts and industry executives had been the first to cite such risks to society, in an open letter in April.

"We've extended an invitation (to Musk), and hopefully he’ll sign it this week," Mr. Hendrycks said.

Fear of misuse

Recent developments in AI have produced tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but they have also sparked fears that the technology could enable privacy violations, power misinformation campaigns, and create problems with "smart machines" that think for themselves.

The warning comes two months after the nonprofit Future of Life Institute (FLI) issued a similar open letter, signed by Mr. Musk and hundreds more, demanding an urgent pause in advanced AI research, citing risks to humanity.

"Our letter mainstreamed pausing, this mainstreams extinction," said FLI president Max Tegmark, who also signed the more recent letter. "Now a constructive open conversation can finally start."

AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change.

Last week, Mr. Altman called the draft EU AI Act, the first major effort to regulate AI, over-regulation and threatened to leave Europe. He reversed his stance within days after criticism from politicians.

Mr. Altman has become the face of AI since his company's ChatGPT chatbot took the world by storm. European Commission President Ursula von der Leyen will meet Mr. Altman on Thursday, and EU industry chief Thierry Breton will meet him in San Francisco next month.
