They can choose to sell to government agencies or not. But selling to them and then trying to retain some veto power is wrong. So it sounds like we're in agreement.
Personally, I would like Western democratic powers to have the most advanced technology, but you may disagree.
I've worked in government outside the federal level. Government has a moral, and often legal, incentive to do inefficient things for a simple reason: its work needs to be safe, controlled, and deterministic.
Every US state maintains a birth registry, a death registry, and a DMV. But firewalls exist so that no live links run between these and other programs. It's inefficient, but it avoids many hazards and conflicts in regulatory and legal compliance. For example, income tax information is secret and cannot be shared outside of tax processing. Police investigatory data should not be linked to your unemployment claim. Fundamentally, those are examples of why what Palantir is doing is problematic.
With military applications it's even more fraught, because human life is in peril by design. For a professional army like the US Army, it's essential that strict discipline and rules of engagement are followed. Soldiers may find themselves in situations where people are shooting at them and they are ordered to take no action.
AI is not capable of functioning in that environment.
My point is that these are complex issues, and we are in a political environment where people seeking simple answers look to technology like AI to disconnect themselves from accountability. There's a nuance here, and a reason why Anthropic is willing to partner with Palantir on its work but hesitant to power drones dropping Hellfire missiles on people.