A.I.-created corridor of code in The Matrix

Hundreds of Tech Experts (and Also Elon Musk) Call For Pause and Risk Assessment on A.I. Development

When Elon Musk, Apple co-founder Steve Wozniak, and over 500 tech experts sign an open letter calling for an immediate pause in artificial intelligence (A.I.) development, you know the issue is more serious than a case of watching too many sci-fi films. While A.I. taking over the world is often dismissed as something that happens only in fiction, the fears expressed in the letter come close to describing just such a scenario. It might not be as dramatic as Ultron or Skynet taking over, but there are genuine concerns that unfettered A.I. development could result in A.I. gradually replacing humans.


What seemed to especially motivate the penning and signing of the letter was the recent development and early release of OpenAI’s Multimodal Large Language Model (MLLM), GPT-4. An MLLM is an A.I. tool that can interpret multiple modes of input, such as text, images, video, and audio. It can also perform natural language processing (NLP), meaning it can understand and generate language much as a human does. GPT-4 is currently available in limited form through the paid ChatGPT Plus service, and it has already raised a variety of controversies.

The controversies range from students using ChatGPT to cheat, to bias detected in its responses, to harm arising from its incorrect answers, to its potential to begin replacing humans in certain professions. Now that OpenAI is reportedly forging ahead with the development of an upgrade, GPT-5, tech experts are sounding the alarm.

Tech experts (and Elon Musk) sign open letter to pause and assess A.I. development

The open letter was published by the Future of Life Institute, a non-profit devoted to guarding against global catastrophic risks, including the hypothetical threat of A.I. The letter has garnered 1,324 signatures as of this writing, including from hundreds of tech experts and even OpenAI co-founder Musk. It argued that A.I. development has grown increasingly unregulated, with the release of GPT-4 setting off a dangerous A.I. race. The letter also posed a few rhetorical questions highlighting the potential dangers of A.I. It reads:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

It went on to argue that the development of more powerful A.I. systems shouldn’t move forward until there is certainty that they will be beneficial and that “their risks will be manageable.” To that end, the letter called for a six-month pause on the development of any A.I. more advanced than GPT-4. During the pause, it recommended implementing safety protocols, regulatory authorities, a stronger auditing process, and liability for harm caused by A.I.

The letter captured the beliefs of A.I.’s more ardent skeptics. Some signers believe that even GPT-4 is already very dangerous and shows signs of artificial general intelligence: the ability to learn and interpret the world on par with a human. Other, less alarmed critics disputed this idea, stating that GPT-4 doesn’t come close to matching the human mind. So while many critics of A.I. agree on the need for regulation and acknowledgment of potential risks, some believe the ideas outlined in the letter are an exaggeration.

Ultimately, many tech experts are uniting around the belief that the dangers of A.I. are significant enough to warrant an immediate pause in development and reform of the industry. It remains to be seen whether A.I. labs and independent A.I. experts will heed the call or whether, as the letter suggested, governments will have to step in and impose the pause.

(featured image: Warner Bros.)


Author
Rachel Ulatowski
Rachel Ulatowski is a Staff Writer for The Mary Sue, who frequently covers DC, Marvel, Star Wars, literature, and celebrity news. She has over three years of experience in the digital media and entertainment industry, and her works can also be found on Screen Rant, JustWatch, and Tell-Tale TV. She enjoys running, reading, snarking on YouTube personalities, and working on her future novel when she's not writing professionally. You can find more of her writing on Twitter at @RachelUlatowski.