A raft of technology leaders and top AI researchers have signed an open letter warning that AI labs are locked in an out-of-control race to develop and deploy ‘digital minds’. Citing OpenAI’s recent statement that “at some point it may be important to get an independent review,” the signatories comment: “That point is now.” The letter calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.
The letter, published by the non-profit Future of Life Institute, includes among its signatories Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, and Elon Musk, CEO of SpaceX, Tesla and Twitter. It says: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Other signatories to the letter include well-known AI researchers Gary Marcus, Stuart Russell and Yoshua Bengio. The letter continues: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”
OpenAI’s recent statement regarding artificial general intelligence said: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” The signatories to the letter observe: “We agree. That point is now.”
The signatories call for an immediate pause of at least six months on the training of AI systems more powerful than GPT-4, commenting: “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
The letter goes on to say that AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, the letter says, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems: “These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”
It concludes: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
The open letter was published on the same day that the UK Government released its AI white paper to guide the use of artificial intelligence in the UK and drive “responsible innovation.”
caroline@legaltechnology.com
Yeah right. A call from all OpenAI’s competitors for them to be given six months to catch up. Lots of fearmongering and broad calls for something to be done, but no specifics on how this might be achieved. I’m not saying that we don’t need some form of oversight and transparency, but this is a naked PR exercise from Elon Musk et al. Nothing more.