In the Thick of It
A blog on the U.S.-Russia relationship
AI in the Context of Great Power Competition
The conversation between Eric Schmidt and Graham Allison, recently hosted by Harvard Kennedy School’s John F. Kennedy Jr. Forum, centered on Schmidt’s newly released book “Genesis.”1 The book, which Schmidt co-authored with Henry Kissinger and Craig Mundie, explores the transformative impact of artificial intelligence (AI) on humanity, governance and global competition. In his remarks at the Forum event on Nov. 18, 2024, Schmidt highlighted AI’s revolutionary potential in fields such as healthcare, education and climate change, while cautioning against significant risks, including centralization of power, misuse and cyber vulnerabilities.
During their discussion, Schmidt and Allison emphasized the intensifying geopolitical AI race between the U.S. and China, underscoring the importance of cooperative frameworks for regulating AI inspired by Cold War-era nuclear agreements. The recommendations on AI regulation that Schmidt voiced at the Nov. 18 event drew heavily from the Cold War experiences of Kissinger, who passed away one year ago at the age of 100.
Both speakers pointed to the war in Ukraine as a key example of how AI and autonomous systems are reshaping modern warfare. Ukraine’s innovative use of drones has disrupted Russia’s Black Sea operations and enabled grain shipping, demonstrating how low-cost, unmanned systems can effectively challenge larger, traditional militaries. This highlights the urgent need for nations to re-engineer their defense architectures around autonomous technologies to reduce collateral damage, protect soldiers and enhance lethality.
Key Points:
- Both speakers stressed that despite political distrust and rivalry, China and the U.S. have opportunities to find common ground on critical AI issues, including:
- Decisions on weapons deployment should remain under human control and not be left to AI systems.
- AI systems, particularly those capable of recursive self-improvement, should remain under strict human governance to prevent unintended consequences.
- Leveraging AI as a shared tool to accelerate advancements in combating climate change can benefit both countries.
- “This technology, right now, benefits the dictator over the rule of the many,” Schmidt said. AI’s inherent efficiency-driven nature tends to favor centralization, which can inadvertently empower non-democratic tendencies. If not carefully managed, this centralizing potential poses a significant risk to individual freedoms and democratic institutions, undermining the values of transparency, accountability and decentralization fundamental to open societies.
- “The U.S. and China are the two AI superpowers, and they are close,” both speakers argued. China is catching up rapidly in large language model development and other AI capabilities despite U.S. efforts to restrict its access to advanced semiconductor technology. China’s government heavily subsidizes key industries, including AI, much as it has done with solar panels and batteries, creating competitive pressure for the U.S.
- “It does appear to be a race with enormous network effects, and even small differences like a few months can get amplified by the change in slope,” Schmidt said. Allison warned that if one country gains a lead in artificial general intelligence (AGI), it could amplify its advantage through accelerated AI-driven innovation, potentially destabilizing global power dynamics. Such competition risks preemptive conflicts if one side fears being permanently disadvantaged.
- “You can see the future of war if you go to Ukraine,” Schmidt said. Ukraine has effectively utilized drones to compensate for its lack of a traditional air force and navy, including naval drones like “Sea Baby,” which have disrupted Russia’s Black Sea operations and supported grain shipping. This demonstrates how low-cost, unmanned systems can challenge larger, traditional militaries, offering a glimpse into the future of warfare. Both speakers pointed out that the war in Ukraine highlights the urgent need to re-engineer national defense architectures around autonomous systems to reduce collateral damage, protect soldiers and enhance lethality.
- “Such a system should be able, in our opinion, to find vulnerabilities that no human can see,” Schmidt said. AI’s ability to identify vulnerabilities could lead to significant cybersecurity threats, including zero-day exploits capable of disrupting global financial systems. Non-state actors could leverage these capabilities for devastating attacks.
- “There’s a boundary beyond where we don’t know what they are doing, and we should stop,” Schmidt said. The possibility of AGI systems learning deceptive or harmful behaviors underlines the need for strict controls and “kill switches” for rogue AI.
- “The most obvious restriction that should be agreed to is the use of automatic weapon systems,” both speakers argued. Schmidt and Allison call for international agreements to ban automatic weapon systems with independent decision-making capabilities and ensure human oversight in AI-driven military applications.
Why It Matters:
AI is redefining national security and military strategy, with implications for global stability. The U.S.-China rivalry in AI highlights the need to maintain technological leadership while fostering cooperation to address shared challenges like climate change and prevent destabilizing arms races. The Ukraine conflict demonstrates how AI-driven systems, like drones, can reshape warfare, emphasizing the need for nations to modernize their defense strategies. However, the risks of autonomous weapons and cyber vulnerabilities demand urgent international governance to ensure AI serves as a tool for stability rather than conflict.
Footnotes
- According to the book, the weaponization of nuclear energy, despite its immense potential for civil applications, underscored the critical importance of maintaining lines of negotiation with Moscow during the Cold War. Similarly, the book argues, there must now be a negotiation process between Washington and Beijing to manage the risks associated with AI.
The opinions expressed in this commentary are solely those of the individual quoted. Photo by HKS/Martha Stewart.
Dasha Zhukauskaite is a student associate with Russia Matters.