When Push Comes to Shove in the Race for AI Systems

October 17, 2023
By Melanie Stern

In an open letter dated March 22, Elon Musk and Steve Wozniak were among the technology leaders calling for a moratorium of at least six months on the further development and release of artificial intelligence (AI) systems.

As a follow-up, Max Tegmark (president of the Future of Life Institute, which issued the open letter) and other technology experts shared their thoughts and concerns for businesses, governments, communities and the human race during a Reuters webinar in September.

A Subversive Truth

While much of the chatter within corporate boardrooms and on social media suggests an “all-in” AI mentality, there are prominent outliers. “I was hoping for a mainstream conversation about making a pause,” Tegmark said. “That’s not what technology companies are saying.”

An immense amount of pent-up anxiety exists among experts and policymakers, he stated, many of whom question whether more powerful AI should go full steam ahead. “Many didn’t feel safe talking about it for fear of being mocked as scare-mongering Luddites,” Tegmark said.

The letter’s request for a six-month pause did not go far enough, according to Stuart Russell, a computer science professor at the University of California, Berkeley, who felt that an “indefinite pause” was more appropriate. “Let’s look at this in the same manner as sandwiches,” he said. The safety criteria applied to sandwiches — how they are prepared and whether they are safe to consume — should also be applied to AI, he added.

When policymakers are slow to respond or fail to respond altogether, Russell said, “I would send them a simulated email discussion between an alien civilization that writes to humanity saying, ‘We’re arriving soon, be prepared, be warned.’ And humanity responds by saying, ‘Humanity is currently out of the office. We’ll get back to you when we return.’ ”

Russell indicated that although the pause letter did not meet its initial objective, it did generate “a discussion at the government level, which means it had a much greater effect” — bringing humanity back to the office, so to speak.

“Large-scale risks of AI systems are now taken more seriously by a larger group of policymakers, changing the public narrative as well,” said Gaia Dempsey, CEO of Metaculus. She said she is cautiously optimistic about the steps we are collectively taking to think through such issues.

What History Has Shown

Pausing a risky technology has worked in the past, Tegmark said: “In the early ’70s, the consensus was that it was too dangerous to move forward with human cloning, for example. China decided to stay away from developing that technology.” A recent poll indicated that a majority (56 percent) of Americans want AI systems regulated, he said.

“What matters is how long it will take to construct AI at that level and guarantee its safety for humanity,” Russell said. Because AI capability has dramatically advanced in the last few years, he added, it’s a race humanity is losing. “The same people who thought these advancements would take 20 years now believe it will happen in five years,” he said.

AI technology CEOs are genuinely well-meaning people, Tegmark added. “We haven’t seen experts warning about their own technology since the days of Oppenheimer,” he said. Discussions of voluntary self-regulation are just as ridiculous as the tobacco industry promising to self-regulate, he said: “The leaders signing the pause letter, warning of an extinction-level event, was really a cry for help.”

AI Systems Without Pause

Multiple scenarios surrounding the potential effects of superintelligent AI integration involve the regulatory landscape, the technology and its self-improvement capabilities, as well as effects on the economy, labor and society, Dempsey said. The biggest risks, she said, include an inability to use AI systems correctly and the potential for their use in support of terrorism.

Questions to consider on an AI pause, Dempsey said, include: Are we transitioning into a good future with AI? And are we increasing our defensive capabilities at the same rate as our offensive ones? The accelerating pace of AI development gives just cause for concern. “Since the release of GPT-4, the timeline has shortened to 2032, adding to the levels of risk mitigation involved,” she said. Dempsey estimated the current probability of a negative impact at 50 percent.

It’s not hyperbole to say we should worry about human extinction or disempowerment by 2032, Tegmark said. According to research from Metaculus, AI superintelligence will be reached less than one year after artificial general intelligence (AGI) is achieved, “enabling a superior civilization which we will have built ourselves. It’s part of why the same CEOs who approved the AI pause signed a follow-up statement two months later,” he said, admitting their fear of what could come after an AGI release.

Benefits Versus Safety

Russell addressed the controversy around the AI narrative in search of balance between innovation and safeguards. “You can have benefits only if you have safety. The nuclear power industry is a perfect example of this. There were corners cut on safety, which (limited) the industry,” he said. Russell cited the 1986 Chernobyl nuclear disaster as a model for the scale of potential catastrophe from AI and similar technologies.

Dempsey said, “The incentives are incredibly strong to create systems that you can delegate more and more tasks to, stringing together a set of capabilities, increasing productivity and wealth. This is AI’s tailwind, and it’s driving investments.”

Tegmark said the issue with AI is whether safety mechanisms are adopted fast enough. “Technology has been progressing much faster than governance in this space, and we need a little pause so that governance can catch up,” he said.

Wide-Scale Risk Mitigation Practices

Russell suggested a more stringent approach. “Put a ban on the impersonation of human beings,” he said. “We have a fundamental right to know if we are interacting with an AI system or a real human being.” Applying specific red lines to AI deployment risk assessments, he stated, will bring any questionable practices to the surface.

“We’re getting to a point where the rubber needs to meet the road,” Dempsey said. Policy conversations, she said, should move beyond a broad focus on innovation and bring in more technical detail. “It’s not the devil in the details, but what got us in the details,” she said.

Referencing the medical field and its reliance on clinical trials, Tegmark said the technology industry should call for a similar pause on innovation to ensure safety standards are met. “The pause should last as long as needed until safety assurances are met, mitigating risks,” he said.

The United Kingdom’s AI Safety Summit, set to take place in November, will present an opportunity to gauge C-suite, policymaker and public sentiment about the progression of AGI. Russell hopes the summit will bring: (1) an international regulatory process, (2) international research collaboration on AI safety and (3) consensus on specific regulations.

What the EU AI Act provides, Tegmark said, “is a great first step, but it is not enough. The original verbiage contained a loophole for ChatGPT, as the regulation would not apply to that technology. We need red lines, similar to the biotech industry, and to show that the benefits outweigh the risks.”

The pause asked for in March, and yet to be realized, is necessary for the “establishment of safety standards, regulations and ongoing oversight,” Dempsey said.

(Image credit: Getty Images/Laurence Dutton)

About the Author

Melanie Stern is Manager, Communications at Institute for Supply Management®.