
Starsky's value prop was teleop, but that was also what cooled investors. Adding an extra 20-100 ms of latency to driving is akin to driving after two drinks, and operating a vehicle 10x larger than the others on the road doesn't make that problem smaller.
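To put the latency figure in perspective, here's a back-of-envelope sketch (the speeds and latencies are illustrative assumptions, not Starsky's numbers):

```python
# Back-of-envelope: extra distance a truck travels before a remote
# command can take effect, given teleop latency. Illustrative only.

def latency_distance_m(speed_kmh: float, latency_ms: float) -> float:
    """Distance covered (meters) during one latency interval."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return speed_ms * (latency_ms / 1000.0)

# At highway speed, even the optimistic end of the 20-100 ms range adds up:
print(round(latency_distance_m(100, 20), 2))   # 0.56 m traveled at 20 ms
print(round(latency_distance_m(100, 100), 2))  # 2.78 m traveled at 100 ms
```

A couple of meters of uncommanded travel per control event is a meaningful fraction of a lane width, which is the intuition behind the "two drinks" comparison.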

Operating large trucks is not a game VCs wanted to play.



I don't think it was ever meant to be live driving at highway speeds:

https://www.forbes.com/sites/stefanseltz-axmacher/2020/06/16...

The point was that it was an autonomous system that could ask for help, and the "help" scenarios would mostly be cases where the truck was already stopped or at very low speeds: navigating a construction zone, a transfer yard, etc. Possibly in some of these situations it wasn't even wheel-to-wheel, but rather a system of choosing between a handful of high-level courses of action for the machine to then proceed with, or helping the perception system classify an unknown object it was looking at.

I didn't sense from the postmortem articles by Stefan that safety concerns were what killed it. It was investors being disappointed that they weren't trying to build a truck without a steering wheel at all, since that was clearly where Uber, Waymo, Tesla, and others were headed (and at least at the time, external safety concerns were not seemingly impacting any of them).


I just don't think you can call that a real value prop if it only applies when the truck is stuck or in a few minor edge cases. There are many scenarios where self-driving may not work or may behave erratically, so if their version of teleop doesn't cover those, I'm not sure how Starsky argued they were ahead of the competition.

Additionally, I think investors backed out primarily because of the risks associated with operating an autonomous fleet, not the shortcomings of the tech itself.


I feel that it covers an awful lot of them. If you cap teleop driving at 20 km/h or so (or maybe use a dynamic cap based on your RTT), that still covers all of the parking-lot scenarios, as well as many sensor-failure situations, like needing to crawl along in the right-hand lane because it's a blizzard and the radar is blind.
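The dynamic-cap idea can be as simple as budgeting how far the truck may move during one round trip. A hypothetical sketch (the 20 km/h ceiling and distance budget are my assumptions, not anything Starsky published):

```python
# Hypothetical RTT-based teleop speed cap. The 20 km/h ceiling and the
# 0.5 m per-RTT distance budget are illustrative assumptions.

def teleop_speed_cap_kmh(rtt_ms: float,
                         max_cap_kmh: float = 20.0,
                         distance_budget_m: float = 0.5) -> float:
    """Cap speed so the truck moves at most distance_budget_m per RTT."""
    if rtt_ms <= 0:
        return max_cap_kmh
    speed_ms = distance_budget_m / (rtt_ms / 1000.0)  # m/s within budget
    return min(max_cap_kmh, speed_ms * 3.6)           # m/s -> km/h

print(teleop_speed_cap_kmh(50))    # good link: full 20 km/h cap
print(teleop_speed_cap_kmh(200))   # degraded link: 9.0 km/h
```

The point is that the cap degrades gracefully: a worse cellular link just means a slower crawl, not an unsafe one.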

In any case, the Forbes article specifically addresses how they modeled these things:

"Up ahead a deer jumps into the truck’s lane and hundreds of miles away a teleoperator is asked to take control of the vehicle. But they aren’t able to in time – either the deer jumped too quickly or the teleoperator wasn’t able to get situationally aware or worse yet: the cellular connectivity isn’t good enough!

Such was the situation painted to me time after time after time as CEO of Starsky Robotics, whose remote-assisted autonomous trucks were supposed to face exactly such a scenario. And yet, it was an entirely false scenario.

As I’ve written about before, safety doesn’t mean that everything always works perfectly, in fact it’s quite the opposite. To make a system safe is to intimately understand where, when, and how it will break and making sure that those failures are acceptable."

The fleet argument also confuses me; hasn't that been the Waymo/Uber pitch since forever, a centrally owned and managed fleet of autonomous vehicles for hire? Why would that be considered an especially risky direction?


> We also saw that investors really didn’t like the business model of being the operator, and that our heavy investment into safety didn’t translate for investors.

This is what Stefan said [0]. Honestly, I hear contradictory reasons for the failure. It could be that their investors had a different risk tolerance than Waymo's or Uber's.

I guess I'm confused: sure, teleop could cover a lot of the edge cases, but if there's a fat long tail you still end up with a pretty unsafe technology. The deer example is kind of a distraction, and it suggests that maybe Starsky had a problem imagining and classifying catastrophic failure events. For every deer jumping in front of the vehicle there's a scenario 10x more serious that could lead to human fatalities.

After reading his posts I'm still confused about why they failed. Can you list the reasons, from highest priority to lowest?

[0] https://medium.com/starsky-robotics-blog/the-end-of-starsky-...



