American AI companies love to say that the US must win the AI arms race, or China will.
Anthropic, OpenAI, Google, Microsoft, and Meta have all invoked the threat of a Chinese victory to justify speeding ahead on AI development, seemingly no matter what. The argument is simple: Whoever pulls ahead in building the most powerful AI could be the global superpower for a long, long time. China’s authoritarian government suppresses dissent, surveils its citizens, and answers to no one. We cannot let that model win.
And to be clear — we shouldn’t. The Chinese Communist Party’s human rights abuses are real and horrific, and AI technologies like facial recognition have made them worse. We should be scared of a scenario where that becomes the norm.
But what if authoritarian rule that uses tech to surveil people in alarming ways is already becoming the norm in the US? If America is shape-shifting into the bogeyman it critiques, what happens to the case for racing ahead on AI?
This is the question everyone should be asking now that the Pentagon has blacklisted Anthropic — and embraced its rival, ChatGPT-maker OpenAI, which was more willing to accede to its demands. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. They do not have any editorial input into our content.)
The US Department of Defense is already using AI built by private companies for everything from logistics to intelligence analysis. That has included a $200 million contract with Anthropic, which makes the chatbot Claude. But after the US used Claude in its January raid in Venezuela, a dispute erupted between Anthropic and the Pentagon.
The two red lines Anthropic insisted on in its contract with the Defense Department — that its AI shouldn’t be used for mass domestic surveillance or fully autonomous weapons — protect such fundamental rights that they should have been uncontroversial. And yet the Pentagon issued an ultimatum: either Anthropic would submit to full and unfettered use of its tech, or the Pentagon would name it a supply chain risk, meaning that any external company that also works with the US military would have to swear off using Anthropic’s AI for related work.
When Anthropic didn’t back down on its requirements, Defense Secretary Pete Hegseth followed through on the latter threat — an unprecedented move, given that the designation has previously been reserved for foreign adversaries like China’s Huawei, not American companies.
As a journalist who’s spent years reporting on China’s use of AI to surveil and repress Uyghur Muslims, I found that the Pentagon’s threats reminded me of nothing so much as China’s own policy of “military-civil fusion,” which compels private tech companies to make their innovations available to the military, whether they want to or not. Wittingly or unwittingly, Hegseth seemed to be borrowing directly from Beijing’s playbook.
“The Pentagon’s threats against Anthropic copy the worst aspects of China’s military-civil fusion strategy,” Jeffrey Ding, who teaches political science at George Washington University and specializes in China’s AI ecosystem, told me. “China’s actions to force high-tech private companies into military obligations may lead to short-term technology transfer, but it undermines the trust necessary for long-term partnerships between the commercial and defense sectors.”
To be clear, America is not the same as China. After all, Anthropic was able to freely voice its opposition to the Pentagon’s demands, and the company says it’ll sue the US government over the blacklisting, which would be unthinkable for a Chinese firm in the same situation. But the US government’s embrace of authoritarian conduct is undeniable.
“Racing” to build the most powerful AI was always a dangerous game; even AI experts building these systems don’t understand how they work, and the systems often don’t behave as intended. But it’s even more dangerous to try building that powerful AI under the Trump administration, which is increasingly proving itself happy to bully American companies in order to preserve the option of using AI for mass surveillance and weapons that kill people with no human oversight.
Those still bought into the idea that the US must win the AI race at all costs should now be asking: What’s the point of the US winning if the government is going to create a China-like surveillance state anyway?
At least one of the major AI companies is not taking this question seriously.
What’s really in OpenAI’s deal with the Pentagon — and why many are now boycotting ChatGPT
OpenAI announced that it had struck a deal to deploy its AI models in the Pentagon’s classified network — just hours after the Pentagon blacklisted Anthropic.
This was extremely confusing.
Sam Altman, the CEO of OpenAI, had claimed that he shares Anthropic’s red lines: no mass surveillance of Americans and no fully autonomous weapons. Yet somehow Altman managed to cut a deal that, by his account, didn’t compromise either of them. Apparently, the Pentagon had no problem with that.
How is that possible? Why would the Pentagon agree to OpenAI’s terms if they’re really the same as Anthropic’s?
The answer is that they’re not the same. Unlike Anthropic, OpenAI acceded to a key demand of the Pentagon’s — that its AI systems can be used for “all lawful purposes.” On the face of it, that sounds innocuous: If some type of surveillance is legal, then it can’t be that bad, right?
Wrong. What many Americans don’t know is that the law just has not come close to catching up to new AI technology and what it makes possible. Currently, the law does not forbid the government from buying up your data that’s been collected by private firms. Before advanced AI, the government couldn’t do all that much with this glut of information because it was just too difficult to analyze all of it. Now, AI makes it possible to analyze data en masse — think geolocation, web browsing data, or credit card information — which could enable the government to create predictive portraits of everyone’s life. The average citizen would intuitively categorize this as “mass surveillance,” yet it technically complies with existing laws.
For Anthropic, the collection and analysis of this sort of data on Americans was a bridge too far. This was reportedly the main sticking point in its negotiations with the Pentagon.
Meanwhile, in the publicly released excerpt of OpenAI’s contract with the Pentagon, the very first sentence allows the Pentagon to use the company’s AI for “all lawful purposes.”
You might be wondering: What about all the other clauses that follow that first sentence? Do they mean your fundamental rights will be protected?
Altman and his colleagues certainly tried to give that impression. But many experts, including a University of Minnesota law professor, have pointed out that the clauses guarantee no such thing.
In fact, as several observers noted, the contract clauses call to mind what an Anthropic spokeswoman said about updated wording the company had received from the Department of Defense at a late stage in their negotiations: “New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” she said.
OpenAI did get some assurances into the contract; the company’s blog post says it’ll have the ability to build in technical guardrails to try to ensure its own red lines are respected, and it’ll have “OpenAI engineers helping the government, with cleared safety and alignment researchers in the loop.” But it’s unclear how much good that’ll do, given that the impact of technical safeguards is limited and the language doesn’t guarantee a human in the loop when it comes to autonomous weapons.
“In terms of safety guardrails for ‘high-stake decisions’ or surveillance, the existing guardrails for generative AI are deeply lacking, and it has been shown how easily compromised they are, intentionally or inadvertently,” Heidy Khlaaf, the chief AI scientist at the nonprofit AI Now Institute, told me. “It’s highly doubtful that if they cannot guard their systems against benign cases, they’d be able to do so for complex military and surveillance operations.”
What’s more, “Nothing in the contractual language released up to this point seems to provide enforceable red lines beyond having a ‘lawful purpose,’” said Samir Jain, the vice president of policy at the Center for Democracy & Technology. “Embedding OpenAI engineers does not solve the problem. Even if they are able to identify and flag a concern, at most, they might alert the company, but absent a contractual prohibition, the company could not have any right to require the Pentagon to halt the activity at issue.”
OpenAI and Anthropic did not respond to requests for comment. OpenAI later said it was amending the contract to add more protections around surveillance.
Perhaps if Altman did not already have a reputation for misleading people with vague or ambiguous language, AI watchers would be less alarmed. But he does have that reputation. When the OpenAI board tried to fire Altman in 2023, it famously said he was “not consistently candid in his communications,” which sounds like board-speak for “lying.” Others with inside knowledge of the company have likewise described duplicity.
Even Leo Gao, a research scientist employed by OpenAI, has posted publicly to similar effect.
For now, only a minuscule portion of OpenAI’s contract with the Pentagon has been made public, so we can’t say for certain what guarantees it does or doesn’t contain. And some aspects of this story remain murky. How much of the Pentagon’s decision to replace Anthropic with OpenAI came down to politics? OpenAI’s leaders have donated millions of dollars to support President Donald Trump, while Anthropic CEO Dario Amodei has refused to bankroll him or to give the Pentagon carte blanche with the company’s AI, earning him Hegseth’s dislike and Trump’s insistence that he leads “A RADICAL LEFT, WOKE COMPANY.”
While these uncertainties linger, public mood has turned against OpenAI with nearly the speed of the tech itself. A public campaign called QuitGPT launched last month and has gained immense traction since the Pentagon clash, urging those who feel betrayed by OpenAI to boycott ChatGPT. By the group’s count, over 1.5 million people have already taken action as part of the boycott.
It’s no coincidence that Anthropic’s chatbot, Claude, became the No. 1 most downloaded app in the App Store over the weekend, with users seeing it as a better alternative to ChatGPT.
Historian and bestselling author Rutger Bregman, who has studied the boycott movements of the past, was one of those who felt fired up upon seeing the QuitGPT campaign. He has since become its informal spokesperson.
“What effective boycotts have in common, in my view, is that they’re narrow, they’re targeted, and they’re easy,” Bregman told me. “I looked at the ChatGPT boycott and was like: This is exactly it! This is the first opportunity to start a massive consumer boycott in the AI era, and to send an incredibly powerful signal to the whole ecosystem, saying, ‘Behave, or you could be next.’” He suggests switching over to the chatbot of any other AI company, except Elon Musk’s Grok.
Mind you, Anthropic itself is no dove. After all, the company has a deal with Palantir, the AI software and data analytics company that is infamous for powering the operations of Immigration and Customs Enforcement (ICE). Anthropic is not opposed to all forms of mass surveillance, nor does it seem categorically opposed to using its AI to power autonomous weapons; its current refusal rests on the fact that its AI systems can’t yet be trusted to do that reliably. What’s more, it recently dropped its key promise not to release AI models above certain capability thresholds unless it could guarantee robust safety measures for them in advance. And as one of Anthropic’s own employees pointed out, the company was happy to sign a contract with the Department of Defense in the first place.
Still, many believe that if you’re going to use a chatbot, Anthropic’s Claude is morally preferable to OpenAI’s ChatGPT — especially in light of the recent clash at the Pentagon.
What else can be done to ensure AI isn’t used for mass surveillance or fully autonomous weapons?
There was a time when some AI experts urged an alternative to a US-China AI arms race: What if Americans who care about AI safety tried to coordinate with their Chinese counterparts, engaging in diplomacy that could ensure a safer future for everybody?
But that was a couple of years ago — eons, in the world of AI development. It’s rarer to hear that option floated these days.
Some experts have been calling for an international treaty. A dozen Nobel laureates backed the Global Call for AI Red Lines, which was presented at the UN General Assembly last September. But so far, a multilateral agreement hasn’t materialized.
In the meantime, another option is gaining prominence: solidarity among tech workers at the major AI companies.
An open letter titled “We Will Not Be Divided” has garnered more than 900 signatures from employees at OpenAI and Google over the past few days. Referring to the Pentagon, the letter says, “They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure.” Specifically, the letter urges OpenAI and Google leadership to “stand together” in continuing to refuse to let their AI systems be used for domestic mass surveillance or fully autonomous weapons.
Another open letter — with over 175 signatories, among them founders, executives, engineers, and investors from across the US tech industry, including OpenAI employees — urges the Department of Defense to withdraw the supply chain risk designation against Anthropic and to stop retaliating against American companies. It also urges Congress “to examine whether the use of these extraordinary authorities against an American technology company is appropriate” — a tactful way of suggesting, perhaps, that the Pentagon’s moves were an abuse of power.
Federal regulations and global treaties would be a much stronger defense against unsafe and unethical AI use than relying on the goodwill of individual technologists. But for the moment, cross-company coordination is at least a start: a way to push back against Pentagon pressure that, left unchecked, would turn America into the very thing it keeps insisting it is nothing like.