
A group of business leaders and academics cautioned on Tuesday that artificial intelligence technology poses “the risk of extinction” and that world leaders should take action to lessen it.
Mitigating the risks from AI should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war,” according to a one-sentence statement signed by dozens of experts, including Sam Altman, whose company OpenAI created the ChatGPT bot.
With its ability to produce essays, poetry, and conversations from the briefest of prompts, ChatGPT shot to fame late last year, inspiring billions of dollars in investment in the industry.
However, skeptics and industry insiders have sounded the alarm over everything from biased algorithms to the potential for significant job losses as AI-powered automation permeates daily life.
The most recent warning, which was posted on the website of the US-based nonprofit Center for AI Safety, provided no specifics regarding the existential threat that artificial intelligence might represent.
However, a number of the signatories, including Geoffrey Hinton, who developed some of the technology underlying AI systems and is widely regarded as one of the godfathers of the field, have issued similar warnings before.
Their main concern has been the prospect of “artificial general intelligence” (AGI), a loosely defined term for machines capable of performing a wide range of tasks and writing their own software.
Researchers worry that if humans lost control of such systems, the consequences for the species could be catastrophic.
The latest warning comes two months after billionaire Elon Musk and others called for a pause in the development of such technology until it could be shown to be safe; it was signed by dozens of academics as well as experts from firms including Google and Microsoft.