Calls for a six-month pause on the training of AI systems more powerful than GPT-4


Wednesday, April 26, 2023

The AI industry has responded to an open letter from the Future of Life Institute that calls for a six-month pause on the training of AI systems more powerful than GPT-4. The letter was signed by AI academics and key tech industry figures, including Turing Award winner Yoshua Bengio, UC Berkeley computer science professor Stuart Russell, Apple co-founder Steve Wozniak and Twitter CEO Elon Musk.

“This pause should be public and verifiable, and include all key actors,” the letter says. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Written in the shadow of recent hype about the capabilities of GPT-4–powered AI agents like OpenAI’s ChatGPT, concerns cited by the experts in the letter include AI “[flooding] information channels with propaganda and untruth,” “[automation] of all jobs, including the fulfilling ones,” and “[risking] loss of control of our civilization.”

“Such decisions must not be delegated to unelected tech leaders,” the letter says. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

‘Sparks of AGI’ cited

Writing in The Observer, UC Berkeley professor Stuart Russell (one of the letter’s signatories) argues that “the core problem is that neither OpenAI nor anyone else has any real idea how GPT-4 works.”

“Reasonable people might suggest that it’s irresponsible to deploy on a global scale a system that operates according to unknown internal principles, shows ‘sparks of AGI’ [artificial general intelligence] and may or may not be pursuing its own internal goals,” Russell wrote, referring to a provocatively titled Microsoft paper that argues GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an AGI system.”

Russell further pointed out that OpenAI’s own tests showed GPT-4 could deliberately lie to a human to pass a CAPTCHA test designed to block bots.

The basic idea of the proposed moratorium is that systems shouldn’t be released until developers can show they don’t present undue risk, according to Russell.

Biggest networks MIA

Cerebras CEO Andrew Feldman told EE Times it’s difficult to read the letter without considering the signatories’ interests and questioning their motivations.

“The letter calls for a moratorium on models above a certain size…. It may well be a good idea, but it’s hard to parse out self-interest,” Feldman said. “The people with the biggest networks aren’t signing the letter, it’s the people who don’t have the biggest networks who are most worried. Their worries are reasonable, but it has the unfortunate appearance of those who are not on the cutting edge saying, ‘Hey, let’s have a ceasefire while we move all our supplies to the frontline’.”

Feldman added that people need to decide whether AI applications like ChatGPT fall into the category of something that should be regulated or not: whether it’s more like the airplane, which requires the FAA to regulate it, or closer to books and information, which should not be restricted.

“These are bad analogies, because there aren’t good ones,” he said. “But we have to decide as a society which bucket this falls into. That decision ought to be made in the open—not via letter—and ought to include those who have the capabilities to be the biggest and those who have profound worries about the impact.”

While he considers OpenAI’s team to be “extraordinary scientists, and deeply thoughtful,” he cautioned that leaving companies to regulate their own products is also the wrong course of action.

“A cynical view of the purpose of the letter is: These are really, really smart people. They knew the letter wasn’t going to do anything, except maybe start a conversation and initiate regulation,” he said.

Open letter on AI in response to ChatGPT

Google Brain co-founder Andrew Ng, currently an adjunct professor at Stanford University, and Turing Award winner Yann LeCun, currently head of AI at Meta, hosted a live online discussion in response. Neither signed the letter nor supports an R&D pause, but both are ultimately in favor of appropriate regulation.

“Calling for a delay in R&D smacks of a new wave of obscurantism, essentially—why slow down the progress of knowledge and science?” LeCun said. “Then there is the question of products. I’m all for regulating products that get in the hands of people—I don’t see the point of regulating R&D, I don’t think that serves any purpose, other than reducing the knowledge that we could use to actually make technology better and safer.”

LeCun compared the letter to the Catholic Church’s reaction to the invention of the printing press: While the Church was right that the technology did “destroy society” and led to hundreds of years of religious wars, it also enabled modern science, rationalism and democracy.

“What we need to do when a new technology is put in place like this is make sure the benefits, the positive effects, are maximized and the negative ones are minimized,” he said. “But that doesn’t necessarily [mean] stopping it.”

One widespread analogy Ng in particular had a problem with is comparing a potential six-month moratorium on large language model development to the 1975 Asilomar conference on recombinant DNA. That conference famously put in place containment mechanisms to guard against the potential spread of an escaped virus.

“It’s not a great analogy, in my opinion,” he said. “The reason I find it troubling to make an analogy between the Asilomar conference and what happens in AI, [is]… I don’t see any realistic risk of AI escape, unlike the escape of infectious diseases. AI escape would imply that not only do we get to AGI, which will take decades, but also that AGI is so wily and so smart that it outsmarts all of these billions of people that don’t want AI to harm us or kill us. That’s just an implausible scenario for decades, maybe centuries, or maybe even longer.”

LeCun speculated about the motivations of the signatories. While some are genuinely worried about an AGI being turned on and eliminating humanity at short notice, more reasonable people think there are harms and dangers that need to be dealt with.

“Until we have some sort of blueprint for a system that has at least a chance of reaching human intelligence, discussions on how to properly make them safe is, I think, premature, because how can you design seat belts for a car if the car doesn’t exist?” he said. “Some of those questions are premature, and I think a bit of the panic toward that future is misguided.”

Ng said that one of the biggest problems with a six-month pause is that it would not be implementable.

“I feel like some things are implementable, so for example, proposing that we do more to research AI safely, maybe more transparency, auditing, let’s have more [National Science Foundation] or other public funding for the basic research on AI—those will be constructive proposals,” he said. “The only thing worse than [asking AI labs to slow down] would be if government steps in to pass legislations to pause AI, which would be really terrible innovation policy. I can’t imagine it being a good idea for government to pass laws to slow down progress of technology that even the government [doesn’t] fully understand.”

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
