
Some of the biggest names in tech, including Elon Musk, have signed an open letter calling on the world’s top artificial intelligence labs to pause the training of AI systems more powerful than GPT-4 for six months. The letter warns that recent advances in AI present significant risks to society and humanity.
The letter follows the public release of OpenAI’s GPT-4, one of the most advanced AI systems to date. That leap in capability has led experts to revise their predictions for the arrival of artificial general intelligence (AGI) – an AI surpassing human cognitive abilities. As the AI arms race accelerates, many fear that society is moving dangerously fast, with potentially catastrophic consequences.
The letter, hosted by the Future of Life Institute, highlights concerns about AI’s rapid development, stressing that AI labs are racing to build increasingly powerful digital minds that even their creators cannot fully understand, predict, or control. According to the letter, this unchecked progress represents a potential turning point in history, one that requires far more careful planning and management than is currently in place.
More than 1,000 signatories have lent their support, including renowned figures like Apple co-founder Steve Wozniak, author Yuval Noah Harari, and AI researchers behind major machine learning breakthroughs. While the letter has garnered widespread attention, no OpenAI employees have signed it, and CEO Sam Altman’s name briefly appeared on the list before being removed, sparking a short-lived controversy. Other notable signatories include Emad Mostaque, CEO of Stability AI, and Tristan Harris, executive director of the Center for Humane Technology.
In the letter, the signatories pose profound questions: Should we allow machines to flood our information channels with propaganda, or automate away all jobs, including meaningful ones? Should we risk creating nonhuman minds that could eventually surpass, obsolete, or replace us entirely? The signatories argue that such critical decisions should not be left solely to tech leaders but require broader, more democratic oversight.
To address these concerns, the letter urges AI labs to pause development for six months and use the time to work with independent experts on rigorous safety protocols for advanced AI. These protocols would ensure that AI systems are safe, predictable, and controllable, reducing the risk of unforeseen outcomes. Importantly, the letter clarifies that it is not calling for a halt to all AI development, but rather a step back from the race to build ever-larger, unpredictable models.
Gary Marcus, an AI expert and one of the signatories, said he felt a moral responsibility to speak out: “There are serious risks, and corporate AI responsibility seems to have lost its way just when humanity needs it most.” He stressed the urgency of slowing development to avoid dangerous outcomes.
Simeon Campos, CEO of the AI safety startup SaferAI, echoed these concerns, noting that scaling systems without fully understanding their capabilities is reckless. “What we’re doing right now is scaling systems to unprecedented levels without knowing exactly how they work or what they’re capable of,” he said. “We need to slow down so society can adapt.”
The letter concludes on a note of hope, citing previous instances in which society has paused the development of technologies with potentially catastrophic consequences. “We can do so here,” the letter says. “Let’s enjoy a long AI summer, not rush unprepared into a fall.”
As AI continues to shape our future, these voices are urging the world’s most influential labs to reconsider the pace at which they are advancing toward an uncertain and potentially dangerous future.