A Predicted Descent Into a New Era
Mo Gawdat, the former chief business officer at Google X, has issued a stark warning about the societal challenges he believes will emerge from the rapid adoption of artificial intelligence. In a recent interview on the “Diary of a CEO” podcast, Gawdat predicted that the world will enter a 12- to 15-year period of instability starting in 2027. This period, he said, will not be defined by AI seizing control in a science fiction scenario, but rather by how humans use the technology to amplify existing weaknesses and systemic problems. According to Gawdat, freedoms, social bonds, accountability, and our shared understanding of reality are all at risk in this coming phase of transformation.
He emphasized that the problem is not inherent to AI itself but lies in human decision-making. “There is absolutely nothing wrong with AI,” Gawdat said. “There is a lot wrong with the value set of humanity at the age of the rise of the machines.” In his view, the technology will act as a magnifying glass for existing flaws — intensifying divisions, enabling exploitation, and allowing bad actors to operate with greater efficiency. These changes, he believes, will not occur in the distant future but are already visible in early form through political manipulation, economic disruption, and shifts in workplace dynamics.
Gawdat admitted that he once held a far more optimistic outlook on AI’s role in society, envisioning it as a force for positive transformation. However, he said the unprecedented speed of AI development has convinced him that a near-term dystopia is now unavoidable. While he insists that humanity could still steer AI toward more beneficial outcomes, he doubts society currently has the collective awareness or political will to do so. This lack of preparation, combined with the relentless pace of AI adoption, is what he believes will usher in the most challenging period in recent technological history.
From Utopian Promise to Capitalist Reality
Artificial intelligence was initially heralded as a tool that could enhance productivity and improve quality of life. By automating repetitive tasks, AI promised to give workers more time for creative, strategic, and personally fulfilling activities while maintaining or even increasing productivity levels. Early proponents envisioned a world where automation could help reduce stress and allow people to focus on work that truly matters. Yet according to Gawdat, this ideal has been overtaken by the reality of profit-driven implementation, with capitalism steering AI’s development toward maximizing efficiency and shareholder returns rather than improving the human experience.
Instead of reducing workloads, AI adoption has in many cases led to staff reductions, hiring freezes, or an intensification of demands on existing employees. The technology’s ability to replace human labor in certain sectors has made it a cost-saving tool rather than a workload-sharing one. Gawdat argues that this outcome is not accidental but an inevitable consequence of applying new technologies in a system where economic gain is prioritized over social benefit. “All technology ever created magnifies existing human abilities and values, and the biggest value set of humanity currently is capitalism,” he said.
This pattern is not unique to AI. Previous technological innovations have followed a similar trajectory, their utopian promise overshadowed by unintended or negative consequences. Social media, for example, was promoted as a way to foster human connection but has also been linked to rising rates of loneliness, political polarization, and mental health issues. Likewise, mobile phones were once marketed as tools that would give people more free time, yet for many, they have become a constant tether to work and digital distraction. Gawdat sees AI as the latest chapter in this recurring cycle, in which well-intentioned tools are reshaped by the systems into which they are introduced.
Amplifying Risks and the Path Forward
Beyond economic and social disruption, Gawdat warns that AI will accelerate harmful activities that exploit the technology’s capabilities. He points to the rapid rise of AI-generated deepfake content, including nonconsensual sexual imagery, as evidence that these risks are already here. In the military sphere, AI is being incorporated into autonomous weapons systems designed to increase lethality, raising concerns about an arms race in automated warfare. Financial crimes are also on the rise, with a report from blockchain intelligence firm TRM Labs showing that AI-powered cryptocurrency scams have grown by 456% in the past year. Gawdat argues that these developments illustrate how AI can serve as a powerful enabler of “the evil that man can do.”
The threat is not limited to the private sector or bad actors outside government. AI-powered surveillance systems are now deployed at a massive scale in several countries, with China’s public monitoring infrastructure often cited as one of the most advanced examples. In the United States, government agencies have begun using AI to monitor social media accounts of immigrants and travelers seeking entry. Gawdat warns that in societies with concentrated political or corporate power, such tools can be used to consolidate control and limit freedoms. The combination of sophisticated surveillance technology and centralized authority, he says, is a critical risk factor for the years ahead.
Still, Gawdat acknowledges that AI has the potential to be a force for good. Breakthroughs in medicine, pharmaceutical research, and scientific discovery demonstrate its capacity to address some of humanity’s most pressing challenges. He believes a more balanced and beneficial use of AI is possible, but only if policymakers focus on regulating its application rather than its underlying design. “You cannot regulate the design of a hammer so that it can drive nails but not kill anyone, but you can criminalize the killing of a human by a hammer,” he said. For Gawdat, the central question is whether societies will take the necessary steps to limit harmful uses of AI before the 12- to 15-year dystopian period he predicts fully unfolds.