Air traffic (and ground traffic) control is not a simple problem. La Guardia has 350k aircraft operations (takeoffs and landings) every year, roughly 1,000 per day. Peak traffic is almost certainly more than one plane every minute. Runways are always in use, and the idea that some simple software will solve all the safety problems is not grounded in reality.
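Back-of-envelope, using the figures above (the 16-hour active window is my assumption, not an official number):

```python
# Rough arithmetic on La Guardia's traffic volume.
ops_per_year = 350_000
ops_per_day = ops_per_year / 365            # ~959 operations per day
assert 950 < ops_per_day < 1000

# If operations concentrate in a ~16-hour active window, even the
# *average* rate approaches one movement per minute, so peaks exceed it.
active_minutes = 16 * 60
ops_per_minute = ops_per_day / active_minutes
assert ops_per_minute > 0.9
```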
This isn’t hypothetical; this system already exists in other countries. Digital systems can confirm flight instructions from ATC with zero radio communication.
The issue is that the final approach and landing (and taxiing?) environments are probably too dynamic for that: in this particular situation one of the vehicles was responding to an emergency (fire).
In addition to huge planes, there is baggage transportation, passenger buses (to mid-field terminals), fuel pumpers, emergency vehicles, snow plows, deicers, and general maintenance vehicles (clear debris off runways).
I’m not saying we couldn’t move more into automation. What I’m saying is that doing so will not solve all of our air/ground control problems. We still have human pilots and humans driving vehicles on the ground. Switching from humans directing landings to machines might improve some things but will not solve for all (and probably not most) risks.
Literally the crash here was caused by a fire truck entering the runway.
The ATC told them to enter the runway because they were confused or distracted due to overwork.
No one here or anywhere is saying automation would solve or be able to handle everything that human operators handle, that's an argument you invented that no one is making.
People are saying automation could handle a significant portion of the routine things allowing humans to handle the more complex/finicky issues.
Even if automation could handle 10% of the most common situations it would be a huge boon. In reality it's probably closer to 50%.
There's unfortunately an alertness problem WRT automated systems.
If the reason you have the human there is to handle the unusual cases, you run the real risk that they just aren't paying attention at critical moments when they need to pay attention.
It's pretty similar to the problem with L3 autonomous driving.
Probably the sweet spot is automation which makes clear the current set of instructions on the airport which also red flags when a dangerous scenario is created. I believe that already exists, but it's software that was last written in 1995 or so.
Regardless, before any sort of new automation could be deployed, we need slack for the ATC to be able to adopt a new system. That's the biggest pressing problem. We could create the perfect software for ATC, but if the current air traffic controllers are all working overtime and doing a job designed for 3 people rather than one, they simply won't have the time to explore and understand that new system. It'll get in the way rather than solve a problem. More money is part of the solution here, but we also need a revamped ATC training program which can help to fill the current hole.
> The ATC told them to enter the runway because they were confused or distracted due to overwork.
Very possibly. It will be interesting to see what comes of the investigation.
> No one here or anywhere is saying automation would solve or be able to handle everything that human operators handle, that's an argument you invented that no one is making.
I’m asking if it would have solved even the current situation. The truck presumably saw the red light, and was asking to cross. Would traffic control have said no if more had been automated and if so, what automation would fix this? Unless we are supposing the truck would be autonomously driven and refuse to proceed when planes are landing, in which case, maybe, though that’s not really ATC automation anymore.
An automated system that could check whether a plane is about to land on a runway and show some kind of alert or red light is hardly a stretch of the imagination.
Thank you for providing your aviation knowledge to this discussion. What a classic example of tech people thinking that because they're smart, every other industry must be dumb and they can just jump in and fix it.
I also do not like this persistent tone of “everyone else is stupid; software would easily fix it” that pops up so often. Not all problems are easy to fix with some code.
To be clear, though, I don’t even have significant aviation knowledge. But this isn’t hard to learn about. That’s part of what irks me so much about this tone. It’s not just “I’m so smart” it’s “I’m so confident that you’re dumb that I don’t need to know anything about the domain you’re working in to know better than you”. Someone could ask ChatGPT why airports don’t have stoplights to stop traffic from crossing the runway and it would reveal the existence of this system.
Yes, in fact I had considered adding your same thought to my initial comment. It's not impossible that a smart tech person might be able to improve the existing systems. The problem is the arrogance of not even checking what existing systems there might be, as if obviously they'd be too backwards to have any.
Something like that. It feels a bit different because it’s less about overestimating one’s knowledge/ability and more about underestimating the complexity of domains outside one’s expertise. But yeah. Very similar.
Me too, but I don’t like referring to Dunning-Kruger ever for multiple reasons. There are perfectly good labels like cockiness, arrogance, ignorance, presumptuousness, and wrongheaded. ;)
There are many issues with DK, and the paper’s widely misunderstood. For one, the primary figure demonstrates a positive correlation between confidence and competence, so according to DK’s own paper, high confidence is not an indicator of incompetence, contrary to popular belief. The paper also measured things in a very funny way (by having participants rank themselves against other people of unknown skill), and it measured only very simple things (like basic grammar, and ability to get a joke), and it only polled Cornell undergrads (no truly incompetent people), and there were a tiny number of participants receiving extra credit (might exclude the As and Fs in the class). Many smart people have come to the conclusion that DK is a statistical artifact of the way they did their experiment, not a real cognitive bias. Some smart people have pointed out that DK is probably popular because it’s really tempting to believe - we like the idea of arrogant people getting justice. The paper also primes the reader, telling them what to believe even though the title isn’t truly supported by the data. It’s an interesting read that I think would not pass today’s publication criteria.
Agreed, but I see this in every industry. And though it's certainly arrogant on some level, I think of it in a more positive light: people are generally optimistic and want to solve problems.
My grandfather had a rule at his business for 55-ish years: we welcome your ideas and suggestions, but not for the first year. You spend that time learning our processes, decisions behind them, pain points, areas that need improvement, etc. You also spend that time doing the work and hearing from your colleagues. Then you can (hopefully) make informed suggestions. That's not possible in every situation, but I like the intent.
I meant something in-vehicle for ground vehicles: an extremely simple extrapolation of the vehicle's current velocity against the extremely predictable trajectory of a plane, instead of depending on going back and forth over the radio with a very busy, fallible human. But sure.
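A minimal sketch of the kind of in-vehicle check described above, assuming straight-line motion over a short horizon. The function names, coordinates, and the 150 m / 30 s thresholds are all invented for illustration; nothing here reflects any real avionics interface:

```python
import math

def time_to_closest_approach(p1, v1, p2, v2):
    """Time (s) at which two constant-velocity objects are closest.

    p1, p2: positions (x, y) in metres; v1, v2: velocities in m/s.
    Returns 0 if the tracks are already diverging.
    """
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    if dv2 == 0:
        return 0.0
    t = -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2
    return max(t, 0.0)

def incursion_warning(truck_p, truck_v, plane_p, plane_v,
                      radius_m=150.0, horizon_s=30.0):
    """True if the two tracks pass within radius_m inside horizon_s."""
    t = min(time_to_closest_approach(truck_p, truck_v, plane_p, plane_v),
            horizon_s)
    sep = math.dist(
        (truck_p[0] + truck_v[0] * t, truck_p[1] + truck_v[1] * t),
        (plane_p[0] + plane_v[0] * t, plane_p[1] + plane_v[1] * t),
    )
    return sep < radius_m

# Truck rolling toward a runway centerline a landing plane reaches in ~10 s:
print(incursion_warning((0, 100), (0, -10), (-700, 0), (70, 0)))  # True
```

The hard part is not this geometry, of course; it is feeding it position data that is accurate, timely, and trusted enough to brake a fire truck on.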
even my cheap car has geofencing and automatic braking
I've worked on avionics professionally and I haven't crashed any of my planes yet...
“These lights … turn red in response to traffic, providing direct, immediate alerts without the need for input from controllers”.
It will be interesting to see what the report says. Did the light system not function? Did they override it? Do they ignore it consistently?
> geofencing and automatic braking
I’m not at all sure I want emergency vehicles to be blocked like this. And if they can override then it’s no different. They didn’t roll onto the runway by accident.
> I've worked on avionics professionally and I haven't crashed any of my planes yet...
In an ideal world this would be like rail traffic, where the runway would be 'locked' (red signal) due to the landing plane, and the fire engine would have to explicitly request an override to cross the locked runway, and importantly, this process has to be _rare_. If it's something that's done 5000 times a day, it'll be normalized. Everyone involved should be aware of the dangers of traversing a 'locked' runway.
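A toy model of that rail-style interlock, with every name invented for illustration. The key property is the one described above: crossing a locked runway requires an explicit override, and every override is recorded so that normalization becomes visible:

```python
class RunwayInterlock:
    """Toy rail-style runway lock with explicit, logged overrides."""

    def __init__(self, runway):
        self.runway = runway
        self.locked_for = None   # flight currently cleared onto the runway
        self.override_log = []   # every traversal of a locked runway

    def lock(self, flight):
        if self.locked_for is not None:
            raise RuntimeError(f"{self.runway} already locked for {self.locked_for}")
        self.locked_for = flight

    def release(self):
        self.locked_for = None

    def request_crossing(self, vehicle, override=False):
        """Grant crossing freely when unlocked; locked runways need an override."""
        if self.locked_for is None:
            return True
        if override:
            self.override_log.append((vehicle, self.locked_for))
            return True
        return False

lock = RunwayInterlock("04")
lock.lock("DL123 landing")
print(lock.request_crossing("Truck 1"))                 # False: runway locked
print(lock.request_crossing("Truck 1", override=True))  # True, and logged
```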
What is _really_ needed is a replacement of the archaic narrowband analog FM radio. Where you can't listen and talk at the same time. There are probably at least several dozen accidents where the inability to communicate with an aircraft or a road vehicle was a contributing factor.
I would settle for a good digital system with an ability to issue emergency/priority calls to specific receivers. Oh, and full-duplex communication.
I'm practicing for a sport pilot license, and I really have problems with understanding other pilots and the ATC.
Not only that, if 2 people talk at once they can cancel each other out and neither can be heard by anyone else.
Much of aviation is still based on pre WWII tech and practices like this and people underestimate how slow and difficult it is to change. Many piston aircraft still run on leaded gas, for example, the last existing market for it in the US.
I'm pretty sure the amount of data isn't the problem here. Maybe it's the number of corner cases? You would still want some human-in-the-loop with a quality UI for ATC.
There are plenty of stories of ATC helping to guide pilots back to the ground after an engine failure or after a student pilot had their instructor pass out on them or something like that.
Even if most of the work is routine, you definitely still want a human in the loop.
It's worth pointing out that plenty of pilots take off and land safely at uncontrolled airports. ATC is a throughput optimization; the finite amount of airspace can have more aircraft movements if the movements are centrally coordinated. It feels like we are nearing the breaking point of this optimization, however, and it's probably worth looking for something better (or saying no to scheduling more flights).
The FAA already does issue temporary ground stops for IFR flights when ATC capacity is saturated. This acts as a limit on airlines scheduling more flights, although the feedback loops are long and not always effective. The FAA NextGen system should improve this somewhat.
>> Software routinely solves database coordination problems with millions of users per second.
A naive view that confuses the map with the territory.
In a database, you write a row and the state updates atomically. Aircraft exist in a physical world where your model lags behind reality, with noise and lossy sensors, and that world keeps moving whether your software is watching or not. A failed database transaction rolls back; a landing clearance issued against stale state does not. The hard problem in ATC is not coordination logic but physical objects with momentum, human agency, and failure modes that do not respect your consistency model.
A third runway for Heathrow was formally proposed in 2007 and is projected for completion in 2040. This is an airport so overburdened people are buying and trading slots.
This isn't a Kubernetes cluster where you can add VMs in 30 seconds.
Did it? They never got there, so did the fire at their destination get worse?
I imagine the training will consist of something like changing the comms protocol to say “runway lights are on, control. Truck 1 confirming cross runway 4D?” prior to crossing. A double check, so to speak.
Ground vehicles consistently have radio conventions that just don't fit into the aviation world. It feels like a contributor to this accident, you can hear the controller's brain skip a couple gears trying to understand the goofy word order from the truck.
Pilots and controllers speak the same language in the same order; ground vehicles just kinda say stuff.
The aviation-ized version of your proposal would be something like this:
> tower truck 1 short of 4 at delta, red status lights
But context is important. "Low-hanging fruit" doesn't mean the solution is "easy" in a vacuum, it just means this specific aspect is the easiest and/or most obvious place to start attacking a problem.
Or to stick with the language of the analogy, every fruit tree has some fruit that is lower than the others. That doesn't mean all "low-hanging fruit" is within arm's reach of the ground, some fruit just doesn't require as big of a ladder as other fruit.
This comment isn't a judgment of this specific case. I don't know enough about ATC to have any confidence in my opinion on the viability of replacing humans with software.
I disagree with you entirely. "Lowest-hanging fruit" isn't the same as "low-hanging fruit". The phrase "low-hanging fruit" does specifically mean that the solution is easy, in a vacuum - the fruits are "low", which is not relative to the other fruits or the height of the tree, but relative to the ground.
>the obvious or easy things that can be most readily done or dealt with in achieving success or making progress toward an objective
So not only can it be "obvious" rather than "easy", it is also in the context of "achieving success or making progress toward an objective". There is nothing in the definition that requires either this specific step or the overall goal to be easy.
I think you're mistaken. That whooshing sound must have been my comment flying over your head.
That was my first comment in this thread, so there was no established goal to change. My sole goal was to clarify the meaning of an idiom that the comment I was replying to was misstating.
I even included a disclaimer that "This comment isn't a judgment of this specific case", so I don't know how you could have received it as such.
One jet landing every minute, coordinating the airspace for miles around the airport, along with coordinating non-landing traffic (helicopters, small craft), while making sure these (already heavily automated) flight systems don't get confused and kill several hundred people, sounds easy to you? Along with keeping everything on time and on schedule?
You say it “…sounds like a simple problem,” and sure, if you think this is a computer problem, it sounds simple. But if all you’re getting back is indignant sputtering, that’s your cue to explain why it’s simple—explaining something simple shouldn't be hard. What do you actually know?
It takes all of two minutes of Wikipedia reading for me to understand why this isn’t simple; why it's actually extremely not simple! If you ignore the incumbency, the regulations, the training requirements, the retrofitting, the verification, the international coordination, and the existing unfathomably reliable systems built out of past tragedies, then sure, it’s "simple". But then, if you're ignoring those things, you’re not really solving the problem, are you?
If you ignore the incumbency, the regulations, the training requirements, the retrofitting, the verification, the international coordination, and the existing unfathomably reliable systems built out of past tragedies, then sure, it’s "simple".
Those are excuses and encumbrances, not reasons. If they are so important, it leads to a question: what existing automated systems can we improve by adding similar constraints?
If these are just "excuses" and not "reasons," then explain how you have determined them as such.
I would like to say, "Because knowledgeable people have explained the difference to me." But again, this has come up before, and no explanations are ever provided. Only vague, reactionary hand-waving, assuring me that humans -- presumably not the same ones who just directed a fire truck and an aircraft onto the same active runway, but humans nevertheless -- are vital for safety in ATC, because for reasons such as and therefore.
There you are doing it in order to avoid engaging with the substance of what people are saying.
There is no substance in the replies. There never is. Only unanchored FUD.
Ok. You have shared that what some say are reasons, you say are excuses. Do you want to be told you are right, or do you want to propose a valid solution? If the latter requires the former, I maintain that this is not a simple problem.
I just want what I've been asking for: someone to explain to me why, in 2026, humans still need to be involved in the real-time aspects of ATC.
"Because it's always been done that way, and that's what the regulations say," will not be accepted, at least not by me.
(Really, my question is more like why humans will still be needed in the loop in 2036. If we started automating ATC today, that's probably how long it would take to cut over to the new system.)
If you ignore the incumbency, the regulations, the training requirements, the retrofitting, the verification, the international coordination, and the existing unfathomably reliable systems built out of past tragedies, then sure, it’s "simple". But then, if you're ignoring those things, you’re not really solving the problem, are you?
You retorted.
Those are excuses and encumbrances, not reasons.
I rebutted.
Ok. You have shared that what some say are reasons, you say are excuses... I maintain that this is not a simple problem.
Which you ignored to make a new claim against a straw man.
I just want what I've been asking for: someone to explain to me why, in 2026, humans still need to be involved in the real-time aspects of ATC.
That is what is not acceptable. You cannot simply abandon your original claim because it has been plainly pointed out that it is incorrect. You were not simply asking for someone to explain why humans need to be involved in real-time aspects of ATC. That is a wholly different question! You claimed this problem was simple, and it has been explained to you why it is not. Please reason about your argument more soundly.
On the heels of tragedy, you reasoned this could've been avoided simply. We are all ears. And yet, at no point did you demonstrate any understanding of the problem containing real world constraints, and instead demand that it be explained to you how the world works and how systems are implemented.
If you want to discuss an idealized system in a vacuum, then say as much; I would find that interesting. But do not demand to be given an explanation when you do not understand—and cannot accept—why things are the way they are.
Let me summarize it like this: you may very well have the best solution in the world, but if it doesn't include a strategy for how to share it (let alone implement it), then I maintain you do not understand the problem and therefore cannot claim it is simple.
Let me summarize it like this: you may very well have the best solution in the world
I have no solution at all, for the 35th time.
This conversation is over; it's clear I'm not going to get what I asked for. If someone could answer my question, they would have by now, rather than throwing one smoke bomb after another.
Er, I sort of do think that's how it works? The ultimate rebuttal to "you can't do X" is to actually do X. Until you do that I think that ultimately the burden of proof falls on you. It can be very easy to imagine certain tasks and systems can be automated - especially when you aren't actively involved in those tasks and systems and are unfamiliar with their intricacies.
...insert specific example of currently intractable problem...
What makes the problem intractable? We can now do both voice recognition and synthesis at human levels, and any video game programmer from the 1980s can keep some objects from running into each other.
When an emergency is declared, keep the other objects in a holding pattern and give the affected object permission to land. Then roll the fire trucks. Preferably not routing both the trucks and another aircraft onto the same runway, as the humans apparently did here.
It’s not weird that you believe automated ATC is possible. The weird thing is that you insist it’s simple.
People’s lives hang in the balance of a system built of corner cases. And you trot out radiation treatment as your metaphor? As if we didn’t royally fuck that up and kill a bunch of people at first.
The 'simple' remark was in response to your wide-eyed implication that 1000 takeoffs and landings per day is somehow a challenge for modern computing systems.
You'll lose this argument sooner or later. I just hope it happens before several hundred people find out the hard way that humans no longer have any business in a control tower. With your attitude, Therac-25 would have been seen as grounds to shut down the entire field of radiotherapy.
Your “simple” springs from your assumption that the problem is easy and anyone who disagrees is dumb. This is also why you can’t hear any of the answers others have given you. You don’t want answers. You want to be “right”.
No one thinks that the difficulty with automatic ATC is that computers have trouble counting 1000 things.
One approach that has always served me well in life is when someone appears to say something that seems obviously not true (like that computers can't count to 1000), consider whether I actually have misunderstood them.
> What makes the problem intractable? We can now do both voice recognition and synthesis at human levels, and any video game programmer from the 1980s can keep some objects from running into each other.
Great point!
It must be that despite the reliability, obvious advantages, and accessibility to "any video game programmer from the 1980s", everyone else is just choosing not to do it.
Alternatively, these things are not as simple or as reliable as you, a person who has no familiarity with the problem, assumes them to be.
The only difference between an excuse and a reason is the designator's belief as to the validity of the reason provided. You have already said you do not have the expertise required to assess validity, yet here you are doing it in order to avoid engaging with the substance of what people are saying.
If these are just "excuses" and not "reasons," then explain how you have determined them as such.
> while making sure these (already heavily automated) flight systems don't get confused and kill several hundred people
Confusion is indeed a common side effect of a job done halfway.
I'm really confused at the point you're trying to make - you declared yourself not an expert in this field, while loudly declaring it's so easy to automate.
Because we've already done harder things. 1000 takeoffs and landings per day equals a trillion machine cycles between events... on the phone in your pocket. It is an extraordinary claim, requiring extraordinary proof, to say that this task isn't suitable for automation.
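For what it's worth, the arithmetic behind that claim, assuming a ~3 GHz core (a figure chosen for illustration; phone SoCs vary):

```python
# Machine cycles available between consecutive airport events,
# assuming ~1,000 operations spread over a 24-hour day.
seconds_per_day = 24 * 60 * 60
events_per_day = 1_000
seconds_between_events = seconds_per_day / events_per_day   # 86.4 s
cycles_per_second = 3e9                                     # one ~3 GHz core
cycles_between_events = seconds_between_events * cycles_per_second
# Hundreds of billions of cycles per core; a multi-core phone
# lands in the trillions.
assert cycles_between_events > 2.5e11
```

Raw cycle counts say nothing about sensing, verification, or liability, which is where the disagreement in this thread actually lives.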
Why don't you do it then? What am I missing?
I'm not qualified to do it, I didn't say I was, and in any event, I don't work for free. I'm asking for concrete reasons why it's not feasible. Spoiler: there are no reasons, only excuses.
The concrete reason your ideas won’t work is you don’t have any.
It's not my job to explain how to do it, it's your job to explain why it can't or shouldn't be done. The extraordinary claim is yours, not mine.
Remember how we installed traffic lights all over the roads and now car crashes never happen any more at intersections? Truly automation solves all problems.
Hard to respond to an argument of this quality, at least without getting flagged or worse.
I'm really confused at the point you're trying to make - you declared yourself not an expert in this field, while loudly declaring it's so easy to automate. Why don't you do it then? What am I missing?
I know this was rhetorical but the obvious answer is a complete lack of any actual ideas. “Just automate it” is a common refrain from people who don’t know how to fix the actual issues with any domain.
Remember how we installed traffic lights all over the roads and now car crashes never happen any more at intersections? Truly automation solves all problems.
> I'm asking for concrete reasons why it's not feasible. Spoiler: there are no reasons, only excuses.
It sounds like you're not asking anything at all
Just to play it out a bit, are you imagining that a pilot would be reporting a mechanical failure upon descent into busy airspace to some kind of AI voice agent, which will then orchestrate other aircraft out of the way (and not into each other) while also coaching the crippled aircraft out of the sky?
Are you imagining some vast simplification that obviates the need for such capability? Because that doesn't seem simple at all to me.
To repeatedly declare something simple to fix, but then have no idea how to fix it, and indeed to declare oneself unqualified to fix it, is kind of an astounding level of hubris.
> I'm asking for concrete reasons why it's not feasible.
The concrete reason your ideas won’t work is you don’t have any.
> Every time I've asked what's so hard about automating ATC
Why don’t you describe the hypothetical automation you believe would solve the problems then?
My hunch is that either your ideas are already implemented (like the GP post saying they need to add red lights at runway entrances, except yeah, they already have that), or they are just bad.
> indignant sputtering and patronizing hand-waving.
Preemptively insulting everyone who might respond to you certainly looks like you’re asking for a real conversation. :|
Your accusation of “patronizing hand-waving” is especially off base considering you literally proposed nothing except “automating”. Hand waving indeed.
That's because it's a political problem, and not a technical problem. It could have been done then, and it can be done now.
Just curious: how many people in this thread know what SAGE was? A $5 Arduino has more computing power than the whole SAGE network. This isn't 1958, so we don't need the 'Semi' part of 'Semi-Automatic Ground Environment' anymore.