I just don't think you can call that a real value prop if it only applies when the truck is stuck or in a few minor edge cases. There are many scenarios where self-driving may not work or may behave erratically, so if their version of teleop doesn't solve those, I'm not sure how Starsky argued they were ahead of the competition.
Additionally, I think investors backed out primarily because of risks associated with operating an autonomous fleet, not the shortcomings of the tech itself.
I feel that it covers an awful lot of them. If you cap teleop driving at 20 km/h or so (or maybe use a dynamic cap based on your rtt), that still covers all of the parking-lot scenarios, as well as many sensor-failure situations, like needing to crawl along in the right-hand lane because it's a blizzard and the radar is blind.
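To make the "dynamic cap based on your rtt" idea concrete, here's a rough sketch of one way it could work (every constant here is made up for illustration, not anything Starsky published): pick a distance budget the truck is allowed to travel between an event happening and the operator's corrective input arriving, then back the permitted speed out of that, with a hard ceiling on top.

```python
def teleop_speed_cap_kmh(rtt_s: float,
                         reaction_s: float = 1.0,
                         distance_budget_m: float = 6.0,
                         hard_cap_kmh: float = 20.0) -> float:
    """Max teleop speed such that the truck covers at most
    distance_budget_m during the total lag: network round trip
    plus the operator's reaction time. All constants illustrative."""
    lag_s = rtt_s + reaction_s
    v_ms = distance_budget_m / lag_s       # m/s that respects the budget
    return min(v_ms * 3.6, hard_cap_kmh)   # convert to km/h, apply ceiling

# Good cell link: the hard cap dominates.
print(teleop_speed_cap_kmh(0.05))   # 20.0
# Weak link with 1.5 s round trip: the cap drops sharply (~8.6 km/h).
print(teleop_speed_cap_kmh(1.5))
```

The point of the monotone relationship is exactly the one in the comment above: as connectivity degrades, the allowed speed degrades with it, so the failure mode is "truck slows to a crawl," not "truck does something dangerous at speed."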
In any case, the Forbes article specifically addresses how they modeled these things:
"Up ahead a deer jumps into the truck’s lane and hundreds of miles away a teleoperator is asked to take control of the vehicle. But they aren’t able to in time – either the deer jumped too quickly or the teleoperator wasn’t able to get situationally aware or worse yet: the cellular connectivity isn’t good enough!
Such was the situation painted to me time after time after time as CEO of Starsky Robotics, whose remote-assisted autonomous trucks were supposed to face exactly such a scenario. And yet, it was an entirely false scenario.
As I’ve written about before, safety doesn’t mean that everything always works perfectly, in fact it’s quite the opposite. To make a system safe is to intimately understand where, when, and how it will break and making sure that those failures are acceptable."
The fleet argument also confuses me; hasn't that been the Waymo/Uber pitch since forever, a centrally owned and managed fleet of autonomous vehicles for hire? Why would that be considered an especially risky direction?
> We also saw that investors really didn’t like the business model of being the operator, and that our heavy investment into safety didn’t translate for investors.
This is what Stefan said here [0]. Honestly, I keep hearing contradictory reasons for the failure. It could be that their investors had a different risk tolerance than Waymo's or Uber's.
I guess I'm confused. Sure, teleop could cover a lot of the edge cases, but if there is a fat long tail you still end up with pretty unsafe technology. The deer example is kind of a distraction, and it suggests that maybe Starsky had a problem imagining and classifying catastrophic failure events. For every deer jumping in front of the vehicle, there is a 10x more serious scenario that could lead to human fatalities.
After reading his posts I'm still confused about the reasons they failed. Can you list the reasons from high priority to low as to why they failed?