Technology

Experts Warn of AI’s “Risk of Extinction” and Call for Global Prioritization

Source: Reuters

A group of industry leaders and experts emphasized the urgent need for global leaders to address the potential risks of artificial intelligence (AI) and to work toward mitigating the technology’s “risk of extinction.” Dozens of specialists, including Sam Altman, chief executive of OpenAI, the company behind the ChatGPT bot, signed a concise statement asserting that AI risks should be treated as a global priority on par with other large-scale societal threats such as pandemics and nuclear war.

ChatGPT gained widespread attention over the past year for its impressive ability to generate essays, poems, and conversations from minimal prompts. The program’s success spurred a surge of investment in the field, but it also drew concern from critics and insiders. Common worries include chatbots spreading disinformation online, biased algorithms generating racist content, and AI-driven automation causing significant job displacement.

The recent statement, hosted on the website of the US-based nonprofit Center for AI Safety, was intended to open a discussion on the dangers of the technology. Several signatories, including Geoffrey Hinton, a foundational figure in AI development, have voiced similar concerns before. Their chief worry is the emergence of artificial general intelligence (AGI): machines capable of performing a wide range of tasks and programming themselves. The fear is that humans would lose control of such superintelligent machines, with catastrophic consequences for humanity and the planet.

The statement drew support from numerous academics and specialists at prominent companies such as Google and Microsoft, both at the forefront of AI development. It comes two months after Tesla CEO Elon Musk and hundreds of others signed an open letter calling for a pause in AI development until its safety could be assured. Musk’s letter, however, was criticized for its exaggerated predictions of societal collapse, which some saw as echoing the talking points of AI enthusiasts.

Critics, including US academic Emily Bender, co-author of influential papers critical of AI, have condemned AI firms for their reluctance to disclose the sources of their data and how it is processed, an opacity often referred to as the “black box” problem. The concern is that algorithms trained on biased or discriminatory material will perpetuate racism, sexism, or political bias.

Altman, currently on a global tour to take part in the AI debate, has himself hinted at the global threat posed by the technology his company develops. He stressed that if AI were to go wrong, no protective measures would suffice. Yet he defended OpenAI’s decision not to publish its source data, arguing that critics mainly want to know whether the models exhibit bias. What matters, he said, is how a model performs on tests of racial bias, and he claimed the latest iteration is “surprisingly non-biased.”