This completely misses the point. The problem isn't that people will behave worse, but that automating warfare changes the cost function. Technologically superior and determined enemies can still be dissuaded by inflicting significant casualties or infrastructure losses on them. A small number of casualties stiffens resolve; a large number, or a never-ending trickle of them, eventually breaks resolve or wears it away.

If military actors can reliably change outcomes by the relatively low-cost expedient of throwing in autonomous weapons platforms that cost about as much as a washing machine, they will, and they'll do it at scale, and (in the short term at least) their political backers will cheer and get off on it. In the longer run it will lead to a considerable increase in terrorism against the technologically advanced power.

Sure, people ultimately make these decisions and deploy such technologies, but so what? It's not like that can change in any way: you can't take people out of the equation, and you can't just wish away political forces by pinning the blame on select individuals. Rather than retreating into truisms, it's more important to assess the impact of this emerging force multiplier and develop countermeasures.



My point is that warfare is already automated. The things I mentioned already aren't hand-to-hand combat and don't put human soldiers' lives at risk. So however scary some new automated weapon is, it should be no worse than existing automated weapons. What makes a robot soldier worse than a cruise missile, a land mine, a bomber aircraft, or a remotely piloted drone? All of those can already be used by technologically superior enemies without incurring casualties of their own.

You mention attacks against a technologically advanced power (does an "enemy" become a "power" when it's a friend?), but obviously those powers will find ways to defend against them. Maybe it's just in the form of slightly more advanced "washing machines".

This fearful thinking seems to come from assuming no secondary advancements occur: robot soldiers suddenly become cheaply available, and nobody develops any defense against them, either political or technological.


You left out the bit where the various superpowers inevitably have to try using the shiny new technology against their rivals, because it's never been tried before so we can't be certain it's a bad idea. That's the bit that worries me the most; I'd rather do without a rerun of World War I.

> it's more important to assess the impact of this emerging force multiplier and develop countermeasures

What is there to do other than develop your own equivalent systems though?


There's a whole literature on the logic (and meta-logic) of deterrence, called power transition theory, that is worth looking into; it sheds a lot of light on the unpleasant topic of nuclear deterrence and how it works.

In a more general sense, the answer to an improved attack is not always a retaliatory attack, but sometimes a better defense that neutralizes it. Helmets can be used as weapons, but their primary purpose is to make weapons less effective and change the strategic calculus: the enemy now gets lesser results for the same effort, and either gives up or tires out and can be defeated with a smaller counterattack. In general, defense is thought to be somewhat stronger than offense, which is why surprise is so important, and technological edges tend to be negated over time.

Deeply understanding this takes a long time and a lot of study. Military science is a difficult but interesting subject, and tips over into systems theory.



