This is very easy to explain. Anthropic outlines some limitations in their terms of service. Palantir accepted those terms. The DoD did not.
OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims this is untrue.
Now given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI, I am inclined to believe there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.
Yeah, it never made sense that Sam immediately claimed they had the same constraints, yet the DoW immediately agreed with that.
From what I can see, OpenAI’s terms basically say “need to comply with the law”, which provides them with plenty of wiggle room with executive orders and whatnot.
Are you sure about that? All the information I’ve seen suggests that the DoD has been using Anthropic’s models through Palantir.
My understanding is that Anthropic requested visibility and a say into how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Each side found the other's proposal unacceptable.
Wasn’t the trigger for all this what happened with Maduro earlier this year? From what I understood, Anthropic wasn’t very happy with how their systems were being used by the DoW through Palantir, which caused this whole feud.
And why would they have an objection to that? They sold a product to a customer. They should have no business in how that customer uses their software.
> And why would they have an objection to that? They sold a product to a customer. They should have no business in how that customer uses their software.
They sold a service to a customer, contractually subject to terms they both agreed upon. How do people keep missing this? The government changed their mind after agreeing to the restrictions and tried to alter the deal with Anthropic ex post facto.
It’s a bit more complex than that, but to be fair I don’t know what they were expecting after they integrated a purpose-built model with Palantir to be deployed in high-security networks to carry out classified tasks.
I'd hate to break it to you, but companies do have a right to determine how their products are used. You were subject to that when you wrote that comment. Did you not notice that?
No, I do not think they do. If I buy a car and run somebody over on purpose, the manufacturer has no right to come take my car away, even if it were written in a contract.
If you tell the car dealership that your plan is to run someone over with the car you are buying, they 100% have the right to refuse to sell you the car.
If you tell a gun dealer you're going to kill someone when you walk out of the shop, they have a right and an obligation to refuse the sale.
Please feel free to tell me how these analogies are incorrect.
“We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees (which, I absolutely swear to you, is what literally everyone at [the Pentagon], Palantir, our political consultants, etc, assumed was the problem we were trying to solve),” Amodei reportedly wrote.
“The real reasons [the Pentagon] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” he wrote, referring to Greg Brockman, OpenAI’s president, who gave a PAC supporting Trump $25m in conjunction with his wife.
Another reason is that Sam Altman has been willing to "play ball", such as by providing high-profile (though largely meaningless) announcements that Trump likes to tout as successes. For example:
> "The Stargate AI data center project worth $500 billion, announced by US President Donald Trump in January 2025, is reportedly running into serious trouble.
> More than a year after the announcement, the joint venture between OpenAI, Oracle, and Softbank hasn't hired any staff and isn't actively developing any data centers, The Information reports, citing three people involved in the "shelved idea."
Reminds me of when they cut the camera to Zuck and he made the $600 billion deal announcement, but was hot mic'd after and said "I'm sorry I wasn't ready... I wasn't sure what number you wanted to go with". I will be extremely surprised if half of these deals actually go through.