For several years now, the United States has been locked in an intensifying race with China to develop advanced artificial intelligence. Given the far-reaching consequences of AI for national security and defense, as well as for the economy, the stakes are high. But it is often hard to tell who is winning. Many answers focus on performance: which AI models exceed others in speed, reasoning, and accuracy. By those benchmarks, the United States has a clear, if not commanding, lead, enabled by the presence of world-class engineers, billions of dollars in data center investments, and export controls on the most advanced computing chips. That focus on performance at the frontier is why the release, in January, of a powerful new model, known as R1, by the Chinese company DeepSeek drove headlines and crashed markets around the world. DeepSeek’s success seemed to suggest that the U.S. advantage was not as comfortable as many had thought.
Yet focusing only on the technological frontier obscures the true nature of the race. Raw performance matters, but second-best models can offer significant value to users, especially if they are, like DeepSeek’s, cheap, open sourced, and widely used. The real lesson of DeepSeek’s success is that AI competition is not simply about which country develops the most advanced models but also about which can adopt them faster across its economy and government. Military planners like to say that “amateurs talk tactics; professionals talk logistics.” In AI, amateurs talk benchmarks; professionals talk adoption.
To preserve the U.S. lead in AI, then, the U.S. government needs to supercharge the adoption of AI across the military, federal agencies, and the wider economy. To get there, it should set rules of the road that focus on transparency and choice while boosting trust and enabling the development of cloud infrastructure and new sources of energy. It should also help the industry export American AI products to the rest of the world to support U.S. companies, entrench democratic values, and forestall Chinese technological dominance. Only by winning the adoption race can the United States reap the true economic and military benefits of AI.
THE COMING WAVE
Talk of military AI often sparks fears of killer robots unleashing indiscriminate damage with no accountability for their actions. Yet in the military as elsewhere, most AI adoption will come through human-machine cooperation, in which AI tools accelerate and improve existing workflows. For the U.S. military, autonomous weapons—as opposed to uncrewed systems remotely piloted by humans—still require an accountable human to make the decision to use force.
In the national security context, this will primarily mean improving how, and how fast, militaries and intelligence agencies can use data and make decisions. AI will enable better threat detection, giving humans more time to react; let militaries conduct more detailed and realistic planning exercises; shorten crisis response times; and streamline essential back-end processes like finance and logistics.
Militaries around the world are already developing and deploying AI tools. The war in Ukraine has given Russia the opportunity to rapidly integrate AI into a wide variety of military systems, such as increasingly autonomous weapons and systems that can translate sensor data into targeting information for human decision-makers. Iran and North Korea are both investing in AI tools to assist military operations, government surveillance, and cyber-activities. And in the United States, the Pentagon is adopting AI in a variety of applications, including operational planning, predicting when critical platforms will need maintenance, and enabling greater autonomy in weapons systems. Each of these applications illustrates the importance of adoption at scale: there is a stark difference between a tool that lets a few commanders make decisions faster and the widespread use of that tool to accelerate operational activity throughout the military.
Likewise, in the economic realm, AI’s benefits will be driven by its reach. A 2023 report by the consulting firm McKinsey estimates that widespread AI adoption could bring trillions of dollars in productivity gains to the global economy. Advances in AI are driving innovation in science, medicine, advanced manufacturing, and more. But countries will not benefit if they cannot access AI systems. Rich countries are likely to incorporate AI first, so U.S. policymakers will need to help U.S. companies export AI technology to the global South. Doing so will not only advance key development goals, but also help contain China’s global influence, as the leading alternatives to U.S. suppliers are likely to be Chinese.
Here, as elsewhere, simply having the leading models will not be enough. Even if Chinese models fall short of their U.S. competitors, DeepSeek’s success shows that low-cost open-source technology, even if it is behind the cutting edge, can still provide plenty of value to users. For many ordinary applications, such as drafting legal contracts, assisting in commercial research, and triaging customer service queries, AI adoption does not require the best-performing models; it requires good-enough solutions that can be deployed quickly and at scale. Chinese models such as DeepSeek may appeal to countries seeking cheap and effective AI tools for a wide array of ordinary uses. The United States, meanwhile, has plenty of advantages—the most advanced chips, more cloud infrastructure, better foundational models, and more useful applications—but it needs a strategy to diffuse them.
Winning the AI diffusion race will shape the future of U.S. global leadership. Beijing uses its technological investments abroad to build spheres of influence that weaken U.S. interests and magnify Chinese political pressure. If American AI systems are adopted around the world, then the values underlying those systems, including free expression, privacy, and lack of bias, will spread, too. If Chinese models win out, then censorship, surveillance, and bias are likely to be the result.
ADOPT, DON’T SHOP
Given the importance of adoption, the U.S. government cannot simply focus on preserving the lead in frontier models; it must foster lower-cost, more efficient models that can be widely deployed. For starters, that means being clear-eyed about what the United States can and can’t get from the export controls imposed by the Biden administration to bar Chinese access to advanced semiconductors and other AI technologies. Because AI is a general-purpose technology, policymakers should not expect export controls to prevent China from acquiring key AI technologies indefinitely. AI models are not like nuclear weapons, whose essential ingredients, including plutonium and uranium, are scarce enough that strict controls can prevent other countries from acquiring them. When it comes to AI, although hardware is essential, the models themselves are software, which can be easily copied and transferred. Governments will never be able to restrict computing power as tightly as nuclear material, because chips are used for so many other things. Ultimately, export controls are a limited tool. They can protect specific exquisite technologies for a limited time to help U.S. companies stay ahead of their Chinese competitors. But they cannot constrain development short of the frontier, and they will have inevitable unintended consequences, including encouraging foreign companies to find creative workarounds. The question, then, becomes what the United States does with the time it buys through export controls.
Policymakers in Washington should also design AI regulation to enable responsible technology diffusion. That will require the United States to offer a concrete alternative, one focused on transparency and choice, to more prescriptive regimes such as the EU's, which restricts AI applications that do not comply with its extensive rules. Approaches such as the EU's are responding to real risks, but they also stifle innovation, undermine efforts to make models more transparent, and encourage users to find alternative ways to access forbidden tools, especially if those tools are available as open-source applications. The DeepSeek app, for example, is available in the EU, even though it doesn't comply with privacy requirements in the EU's General Data Protection Regulation or the safety and security provisions of the EU AI Act.
A better approach is to mitigate risks while encouraging rapid adoption of trusted tools. An AI governance framework should feature a mix of voluntary and regulatory approaches that reduce the likelihood of catastrophic risks, such as AI models that enable the development of weapons of mass destruction, while encouraging voluntary improvements in security and reliability. For example, model developers who adopt government-developed risk management tools or standards could be granted reduced liability for harm caused by their systems in return. U.S. regulation should also protect intellectual property rights and ensure data privacy, actions that would both protect U.S. industry and differentiate American AI products from foreign competitors.
Such a framework will help drive the adoption of AI technologies developed in the United States. In the same way that better brakes enabled faster yet safer trains and cars, a clear, harmonized governance strategy, with transparent rules, user choice, and narrow restrictions, can foster more effective, useful AI. Tools that enhance transparency will promote trust and make consumers and businesses more willing to use AI systems. In contrast, simply restricting the use of AI tools slows innovation and creates incentives for users to work around regulatory requirements and for AI developers to seek other markets.
A regulatory framework for AI that better balances risks and tradeoffs could also persuade Middle Eastern countries, such as Saudi Arabia and the United Arab Emirates, to invest more exclusively in U.S. data centers and AI technology. This means clarity both on what they need to avoid, such as sharing certain technology with U.S. adversaries, and on what they may get, such as access to advanced AI models within their own sovereign cloud data centers. Gulf countries have already demonstrated strong interest in American AI technology; the trick will be facilitating their access while keeping the technology from spreading to China in a way that circumvents U.S. export controls. This requires enhanced processes in Gulf countries to validate that their technology companies and public-private partnerships can secure and protect U.S. technology that might otherwise flow to Chinese-controlled entities. By strengthening AI partnerships with the Gulf, Washington will encourage broader alignment by these countries with the United States and open up greater access to energy for U.S. industry. Working with the Gulf to enable trustworthy access will also help U.S. companies bring their AI services to the global South, through partnerships such as the one announced last year between Microsoft and the Emirati technology company G42 in Kenya, which aims to bring AI tools and cloud access to businesses there. If the U.S. government does not smooth the way for more of this kind of cooperation, China, whose Digital Silk Road initiative is designed to bring a suite of digital technologies such as 5G and cloud services to the developing world, will fill the gap.
To accelerate domestic adoption, the United States will need to make foundational investments in chip production, data centers, and energy. Just as cars were considerably less useful without roads, leading some World War I–era military analysts to doubt the impact of the combustion engine on warfare, AI technologies will not be able to realize their promise without new cloud environments, more accessible computing power, and usable data. The country, and the government in particular, needs sufficient computing power and energy to run AI models, along with trusted sources of chips. U.S. President Donald Trump’s recent announcement of a planned $500 billion in private sector data center investment is an important start. But news reports suggest that only $100 billion has been committed so far, and much of the work seems to have begun before the announcement.
Greater government investment could encourage additional commercial funds to enable adoption at scale. The federal government has already committed $50 billion to supporting the domestic production of semiconductors, including the leading-edge chips required for AI data centers. Those funds will help the United States produce the chips it needs to develop and deploy AI tools throughout the economy, but by themselves they will not be enough to ensure AI adoption. The government also needs to play a leading role in unlocking large-scale sources of energy to power AI data centers. Cloud service providers have invested in greater production, but the federal government will also need to expand and modernize transmission and delivery infrastructure so that an upgraded electric grid can expand local access and distribution. Getting power to energy-hungry new data centers, and broadening access at lower cost, is critical to enabling the widespread use of AI.
The government can further accelerate the use of AI across the economy by prioritizing the rapid adoption of AI by its own agencies. By directing federal dollars toward AI technologies, major agencies such as the Department of Defense can signal to companies where to invest and to capital markets which technologies are likely to be in high demand. To have the greatest effect, agencies need to explain their priorities clearly, streamline procurement, and focus on delivering specific capabilities. It would help if Congress provided regular annual appropriations rather than continuing resolutions, which generally do not allow new contracts.
Federal investment can also reassure businesses and consumers that AI tools are safe and reliable. U.S. business adoption of generative AI is lagging behind initial investor expectations, in part due to risk aversion from industry, and polls suggest that the American public is wary of AI. Compare that with China, where some polling has indicated that a majority of the public is excited about the promise of AI and both consumer and commercial adoption are moving faster. Public sector adoption would help put some of those fears to rest.
Even without a clear government strategy, U.S. companies continue to push forward the AI frontier, but it remains uncertain just how far ahead they are and for how long they can keep that lead. Competition in general-purpose technologies such as AI has always been fierce. DeepSeek's success demonstrates that a U.S. lead is far from guaranteed and that there will be many fast followers for any breakthrough. Especially because it is unclear to what extent having the most advanced models will translate into economic gains, the competition for AI leadership is likely to end up being mostly about adoption. It is the adoption of AI in the U.S. military, government, and private sector—and the ability of U.S. firms to export AI technologies to the rest of the world—that will most clearly demonstrate U.S. strength in AI. To get there, the United States needs to cut through red tape while pursuing the foundational investments, stronger energy grids, low-cost technologies, and strategic partnerships that will make the use of AI at scale possible.