Is it correct that Comma.ai sees this the way the statement below appears to say, or am I missing something? If so, why would anyone use this product outside of a fully controlled test environment?
—
“Any user of this software shall indemnify and hold harmless comma.ai, Inc. and its directors, officers, employees, agents, stockholders, affiliates, subcontractors and customers from and against all allegations, claims, actions, suits, demands, damages, liabilities, obligations, losses, settlements, judgments, costs and expenses (including without limitation attorneys’ fees and costs) which arise out of, relate to or result from any use of this software by user. THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT. YOU ARE RESPONSIBLE FOR COMPLYING WITH LOCAL LAWS AND REGULATIONS. NO WARRANTY EXPRESSED OR IMPLIED.”
The latest commit in the repo [0] right now is "should work" (34ff295). Filtering by "bug" in the issue tracker gives:
Comma two freeze and reboot while engaged. I recently had an incident on the interstate where my comma two froze completely (while engaged) and rebooted. The video froze, Comma's steering torque turned off, then after about five seconds in this state, the device rebooted.
Zygote restarting while OP active. So for the past couple of months, after a couple of days of uptime, the comma two offroad UI will glitch out. The buttons respond with highlighting upon touch, but everything else stops working. ... This time, I left the comma two to bask in its glitched state and this ended up happening: the comma two had the spinning logo while ALSO still driving my car. In the video below, I nudge the wheel to cause ping-ponging on purpose to prove it was still steering.
Spontaneous disengagement/reboot. Cruising on expressway and OP spontaneously disengaged and the comma2 rebooted
Hard braking while following the lead car. Was following the lead car in a highway traffic jam; that car was driving without lights, which might be a reason. Braking was really hard when it stopped, almost hit it. I had a feeling that the C2 didn't see it at all.
What's more worrying is that Comma's response is often either a) declare it a hardware failure or b) basically a WONTFIX:
Comma support's response is to return/exchange the unit due to presumed hardware failure. It would be nice to know what exactly happened, but I get that you can't thoroughly investigate every anomaly. Folks at @comma, feel free to close this issue.
@Torq_boi said that it is not a model bug, but old known problem with no time to brake as lead car accelerated and braked fast. (So could INDI tuning fix that problem?)
Closing this issue since it probably was hardware failure.
If it happens a lot it's usually a hardware failure. But try running openpilot release instead of dragonpilot before drawing any conclusions.
Comma.ai is trying to do big things and I hope they succeed. No reason self-driving technology should be bundled with a car and I have little faith in auto manufacturers to deliver.
Lane assist technology exists. Look at Consumer Reports for a comprehensive review [0] (comma.ai was #1 in lane assist, above even Tesla). They are open about their mistakes, issues and tradeoffs, much more so than other companies. I don't think it's right for engineers to use this as a cudgel to beat them over the head.
> They are open about their mistakes, issues and tradeoffs, much more so than other companies. I don't think it's right for engineers to use this as a cudgel to beat them over the head.
This openness gives us consumers an opportunity to look under the hood and evaluate the tech for what it is, which is a good thing and should be applauded.
However.
I’ve used that freedom to look at their code and processes and I’ve decided that this is not software that I want my life to depend on. There are serious outstanding bugs that Comma doesn’t have the manpower to investigate fully, their default assumption seems to be that the hardware is faulty, their test suite is regularly failing, pair programming / code review is sometimes used and sometimes not, and PRs are created and merged without a description or comment. That’s just scratching the surface, without going into things like whether or not Python is a good choice for this sort of thing, or whether there’s a bug in my car’s CAN implementation that will get triggered by this and end up killing me.
All in all, I trust Volvo more than I trust Comma.ai. Maybe that trust is misplaced? For what it’s worth I don’t really trust any level 2 self driving tech.
They're completely opaque. There's no issue board; I can't even find a terms of service online. No over-the-air updates. Can't even find a technical manual or the version number of the software you're running. I assume you just have some car salesman point you to the button to press. Independent reviews put openpilot much higher than Volvo, despite Volvo having much more reach and influence in the industry.
Like I've said elsewhere, they have a very loyal user base, probably driven more miles than the others combined, lots of discussion online and are more open by a mile. They even store the raw video to train, not just feature vectors.
To me this is all signs that their product is relatively strong. And I'd be willing to bet their processes and standards are much better than big auto.
> And I'd be willing to bet their processes and standards are much better than big auto.
Comma has a neat product, but I really do not agree with comments like this. The auto industry is terrible, but they are miles ahead of Comma in the process realm.
The industry has spent a lot of time on safety standards like ASIL and AUTOSAR. They would never allow Comma to ship, because it is written in Python (a language with garbage-collection pauses, which is absurd here) and lacks any hardware protections like lockstep processors or watchdogs, if only because it literally runs on a consumer phone. A system connected to the steering and acceleration of a vehicle freezing while still in control is ridiculous.
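To make the watchdog point concrete, here is a hypothetical software sketch (not from openpilot or any automotive codebase, and only an approximation of what hardware watchdogs do): if the control loop stops "petting" the watchdog within a timeout, actuation is cut. Real automotive systems use an independent hardware watchdog that resets the MCU instead of relying on the same software stack.

```python
# Hypothetical software watchdog sketch. A real hardware watchdog is an
# independent circuit; this only illustrates the pet/timeout principle.
import threading
import time

class Watchdog:
    def __init__(self, timeout_s=0.1, on_timeout=None):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout or (lambda: None)
        self._last_pet = time.monotonic()
        self._lock = threading.Lock()
        self.tripped = False

    def pet(self):
        # Called by the control loop every tick while it is healthy.
        with self._lock:
            self._last_pet = time.monotonic()

    def check(self):
        # Called periodically from an independent thread or timer.
        with self._lock:
            stale = time.monotonic() - self._last_pet > self.timeout_s
        if stale and not self.tripped:
            self.tripped = True
            self.on_timeout()  # e.g. zero the torque command, disengage

# Usage: the checker trips if the control loop freezes.
wd = Watchdog(timeout_s=0.05, on_timeout=lambda: print("DISENGAGE"))
wd.pet()
time.sleep(0.1)   # simulate a frozen control loop
wd.check()
print(wd.tripped)  # True
```

The frozen-while-engaged bug reports above are exactly the failure mode this kind of mechanism exists to bound.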
Comma will kill someone; it is only a matter of when, not if.
This just isn't true as far as I am aware. The vehicle interfaces[0] and control loop are all in Python. It doesn't help if other components are in C if your commands to the vehicle are interrupted by GC pauses.
Even if that were true, and it were all in C, it would only be better because there is no GC; there is still a myriad of potential issues that the auto industry routinely addresses in both software and hardware that Comma either cannot or does not. Not to say the industry is perfect, but Comma is well below any sort of bar.
> No reason self-driving technology should be bundled with a car
It seems to me that there are many reasons it should be bundled, and I'll bet that in the long run all self-driving cars will be integrated systems. It's not a good place for inconsistent installations or a modding mentality--imagine multiplying Uber and Tesla's programs a thousandfold with fewer resources and less accountability.
I disagree. I think if there's a clean, uniform interface through which a device can receive video and issue commands, it could work out. Consider the fact that no automaker I've seen, apart from Tesla, has made an infotainment system equivalent to a first-generation iPad. I know it's trendy to hate on infotainment, but I just want to see a map w/ directions to where I'm going. They're a lost cause in my opinion, and the less they do on the software front, the better.
Automakers gave up on that and now include Android Auto and Apple CarPlay as a feature. So I imagine they'll eventually give up and outsource things like lane assist, which are considerably more difficult.
Autonomy is going to require a lot more than a CAN bus & some video interfaces. The specific sensors matter. The framerates matter. The field of view & mount points matter. The specific compute platform matters -- and it certainly shouldn't run on an Android or iPhone without automotive-rated components, realtime guarantees, etc.
All of these things argue strongly for a holistic vehicle design rather than an aftermarket afterthought.
That's fair. But I think for their current use case, lane keep assist, they make do with less. They're comfortable at level 2 self-driving. For anything above that you're probably right, but I would love to see big auto outsource much of it to a company dedicated to self-driving. Specialization is essential in these sensitive fields, and automakers already rely heavily on suppliers. Very few can get away with the Apple model, and even Apple relies on outside parts.
The funny thing is that w/ any other product we would scoff at a fully integrated, end-to-end solution for a product that doesn't exist. No product-market fit, no minimum viable product, no incremental products.
Yet they do the Tesla trick and present the product as fully self-driving. Disingenuous to the point of fraud: "here's a car in public traffic, with nobody at the wheel, nudge nudge [small print: it's level 2 and you get to clean up all those nasty pieces, nothing to do with us at all; you're only supposed to run it in a tightly controlled, isolated private environment]"
Or, imagine if there were standard interfaces between the motors/control system and the "brains" of self-driving.
Like how CDs and peripherals don't all need to come from the same manufacturer ('cept Apple's!), because there's an industry standard for interoperability.
Apple music peripherals are a great example of what I'm talking about. They don't use bluetooth because W1 chips (their integrated solution) blow bluetooth out of the water. Bluetooth can still exist in other devices because people don't need low latency and high bitrates, but they definitely need their cars not to crash because a standard causes hardware and software from different vendors to run into interoperability quirks. There's no agreement on the best hardware or even the type of hardware for self-driving, so there will be a lot of those quirks.
The mistake seems to be "there's a standard [and it Works more than 80/20]." There are multiple to choose from, and in versions, and each has revisions, and now Foo v2.1 doesn't want to work with Foo 2.0, except on even Mondays, and don't get me started on Foo 3.x with Bar 1.5.x!
You're handwaving away all the inherent complexity, but shoveling it off into a box labeled "standardization" doesn't make it go away.
In other words, the interoperability is very much "try to swap these components, if they work, yay, if not, try swapping them for something else until you get a combination that works." And that's for non-critical components.
> No reason self-driving technology should be bundled with a car
Aftermarket self-driving is inherently a compromise.
OEMs can design the car for their self-driving system. They can place sensors in the right places, size the actuators appropriately, and integrate vehicle dynamics data into the system.
OEMs can also properly safely test the entire system, operating as a unit, as it would operate on the road.
OEMs also leverage economies of scale to make the best technologies available at reasonable prices. Automaker net profit is on the order of 10%, and their scale is massive. You're getting a lot of bang for your buck.
Aftermarket systems are inherently a compromise. They're limited to whatever sensors can be easily mounted in the aftermarket device. The device is only mounted where convenient. The device can't take advantage of vehicle-specific sensors or actuators or vehicle dynamics data because they need to keep it generic.
And most importantly, aftermarket self-driving device manufacturers obviously aren't safety testing the device in combination with every single car on the road. They expect their users to just sort of wing it and see how it goes.
I think it's cool that Comma is working on this, but it's not going to replace OEM solutions any time soon.
> Look at consumer reports for a comprehensive review [0] (comma.ai was #1 in lane assist, above even tesla).
The report shows Tesla's system was more capable and performed better. Comma tied with 3 other manufacturers for performance.
The advantage of the Comma system was supposedly in the fact that it keeps drivers the most engaged. Kudos to Comma for that, but it's not exactly accurate to say that Comma's lane assist performance beats Tesla.
I don't think comma.ai should be faulted for being open. I have trouble finding any statements on liability on any other lane assist technology. Would love to be proven wrong with an actual policy.
Volvo sells a car with an optional feature that has hardware and software components. If you can prove that the hardware or software is “unreasonably dangerous” in design or implementation, you may have standing to sue Volvo.
Comma sells only some hardware (with some limited software?). Comma suggests, but is careful not to encourage for street use, that customers modify the hardware with open source software that makes the hardware dramatically more useful. If the hardware functions perfectly but this software proves unreasonably dangerous, you do not likely have standing to sue Comma.
I think it’s really cool that late model cars are adequately unencumbered to make experimentation like Comma possible. Aside from the sketchy commercialization, Comma seems impressive and valuable. I have seen no evidence Comma is any less safe than the auto manufacturers’ LKA products.
My beef is that Comma-the-product imposes an externality on our roads by bringing experimental driver assistance software to the mass market that is not backed by the software product’s developers (or any other entity).
> If you can prove that the hardware or software is “unreasonably dangerous” in design or implementation, you may have standing to sue Volvo
I imagine this is true for Comma as well. I don't think their little waiver and notice that it should only be used in a research setting would hold up in court. Much like "incense" was banned despite being labeled "not for human consumption".
> My beef is that Comma-the-product imposes an externality on our roads by bringing experimental driver assistance software to the mass market that is not backed by the software product’s developers (or any other entity).
I think their product is safe and I'm glad it exists out there. They claim 35 million miles driven and I have yet to see one serious accident. So I think the risk is overstated.
People like Thiel often complain about the lack of innovation. This is real innovation and could transform society. I think we should applaud the people that have the audacity to tackle hard problems that can change people's lives. The only way you'll get there is real miles driven on real roads. As a society, we should not have a zero risk attitude, otherwise nothing would ever get built.
In the technical aspect you are correct. In the legal aspect, nope. Even if (for the sake of the example) this were identical technology under the hood w/ Volvo and Comma, there's a world of difference between "I bought a stock car that is certified to be street-legal" and "I installed some aftermarket thing into my car, despite being warned that it might void the car's street-legality."
It's not just what you think, not least because your driver's license is a permit, not a right. (On the other hand, I agree that the bar requested for computer driving safety is at least an order of magnitude higher than for human driving safety, and that the elephant in the room is the risky road environment that we pretend doesn't exist for humans.)
That essentially boils down to the color of the bits. What git log says is a different bit color (technical) from what the EULA says (legal), and you can't meaningfully use the legal-colored bits in a technical context, or vice versa. In other words: if it breaks, you get to keep all the nasty pieces, not the company, because the EULA says so.
>THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT.
It is weird that one of the only things I see above the fold on the company's home page is a "Buy Now" button considering they don't actually sell "a product".
I think their distinction is that they’re selling the hardware, which is capable of controlling the car just fine. So the thing you’re paying for is delivering as promised. The software is a separate project, and you could theoretically load whatever software on the hardware you wanted. So the fact that the software is glitchy is not (in the view of the company) something you can hold them responsible for. You paid for hardware, you got hardware. What you do with it is up to you.
This is at least what I remember from a years old Wired article when the comma one was being developed.
Whether that will actually hold up in court is TBD, considering how closely coupled the software is to the company and hardware.
Sure. You can load any software you want on that, so it's actually for playing DOOM on your car's entertainment system, and it has absolutely no relation to the demos on their pages.
IANAL, but "the dog ate my steering wheel" would be a more plausible defense.
It’s because the standard product doesn’t have autopilot: they sell a driver assistance tool, and the tool they sell does not have autopilot.
The device is open, and you can flash it with their open source code from GitHub to give you a hacky autopilot. This is how they get around the legal issue.
It’s like a “we sell you a legal product. We advise you don’t flash this code on it which we are hosting on GitHub wink wink”
Reminds me of university. "No officer, we weren't selling tickets to the keg, we'd need a license to sell booze. We're only selling cups for $5, the beer is free!"
This appears to be the strategy. In the U.K. they used to sell legal highs in tablets marked as “do not eat - plant food” since technically they weren’t safe for human consumption. That’s effectively the same too!
> Yeah - I don't think this would hold up, you can't really have it both ways.
I don't think they would have any legal problems due to this. They sell it but clearly label it as experimental, for research only, and urge buyers to comply with local regulation. And the law pretty much everywhere states that the driver is responsible for driving the car and for the outcomes of any modification to the car that has not passed homologation.
Tesla is a real example that passed this test. Their marketing language brands AP as "fully self driving, some features unavailable due to local laws". The "wink wink" may be obvious for the buyer but not in the eyes of the law. Letting the car drive itself is the driver's failure, not Tesla's. Tesla can at most be held responsible for misleading advertisement and ordered not to use specific language (as it actually happened).
> "Tesla is a real example that passed this test. Their marketing language brands AP as "fully self driving, some features unavailable due to local laws"."
This isn't true, FSD has always been a 'coming soon' feature you can prepay for distinct from autopilot. Autopilot has always been advanced lane assist. "Autopilot" in planes just holds the same flight pattern and doesn't really do anything sophisticated, autopilot in Tesla is similar.
German authorities did find Tesla's claims about the AP as misleading [0]. Tesla's AP page was updated (globally) to reflect "full self driving in the future" but until recently it just said "Tesla cars come equipped with all the necessary hardware for full self driving" with a footer claiming some features are unavailable due to local laws.
Regardless, Tesla was not found guilty of anything other than misleading advertising, not of failing to build a car that drives itself. I don't see how comma.ai could be held to a higher standard.
> The court, in Munich, said: "By using the term 'autopilot' and other wording, the defendant suggests that their vehicles are technically able to drive completely autonomously." [0]
I don't agree, but I know this is a position a reasonable person could hold.
I just don't think autopilot means autonomous driving and I think that's clear to people.
If someone thinks cruise control means you can crawl into the backseat of the car, is that the fault of the car manufacturer?
I also think Germany may have a bias given their own car industry.
What comma.ai is doing I think is categorically different.
I'm okay with people hacking on their own stuff, I just don't like the "This is Alpha don't use it" when they obviously intend you to buy and use it that way.
> I also think Germany may have a bias given their own car industry.
This isn't really a fair argument to be honest. You're brushing the court's justification aside ("By using the term 'autopilot' and other wording, the defendant suggests that their vehicles are technically able to drive completely autonomously.") to focus on a weak link between the country having a strong auto industry, and the justice system banning advertisement for something Tesla does not actually deliver.
> I just don't think autopilot means autonomous driving and I think that's clear to people.
Tesla was claiming their cars have "all the necessary hardware for FSD" since at least 2016 [0]. That's an obviously misleading statement since not even expert engineers know if that hardware is enough. If anything the general consensus is that it isn't, and Musk's missed promises support this.
> If someone thinks...
When it comes to misleading advertising, the technical definition isn't very relevant. As the name suggests, it's about whether enough people are misled into believing they're buying something else. This was raised by consumer groups after realizing the paid-for promise of FSD never came. Customers shouldn't be expected to be experts in all things. So if the marketing makes it sound to regular people like the product is something other than what it actually is, then it's fair to call it "misleading".
> This comes across as them selling a product they know could fail in dangerous ways, but they don't want to be responsible for any of it.
This is just a safety precaution. Why wouldn't they put this in there? It may not hold up in court but it can't hurt. I don't think this means they "know it could fail in dangerous ways".
The safety model in comma.ai is actually quite brilliant. It can't perform any action faster than you're able to correct and disengage. To test it, they have someone drive while a malicious passenger has full access to the controls, as limited by the software. The passenger then messes with the steering and acceleration without the main driver's knowledge. The driver has to counter the actions. The torque limit is much lower than that of Tesla or other lane-keep assist tools.
If you sell someone something with a nudge-nudge, wink-wink, and they get killed using it, it absolutely hurts. You may be able to weasel out of being held accountable for it, in which case it won't hurt you, but the larger issue here is that this kind of misleading copy can lead to people making poor decisions.
You may have put it in the fine print that it's not a real product, but the whole point of nudge-nudge wink-wink is to strongly imply that it's a real product worth real money, and thus you are going out of your way to encourage people to try it and take chances with real lives.
What if I buy a cell phone holder for my car, and it distracts me and I get into an accident? What if CarPlay lags and I'm distracted and get into an accident? What if the radio plays an ambulance siren and I freak out and get into an accident? What if my sunglasses make me misread a red light?
This product does lane assist. It does a good job according to Consumer Reports [0], rated higher than all other lane assists. It doesn't detect stop signs or traffic lights or drive for you. It keeps your lane. It acts predictably and gives the driver enough time to react.
Unfortunately the liability model is messed up. I think this product is relatively tame and should be allowed to exist. And you need to pay attention. They even have inward-facing cameras to make sure you're paying attention, more than most other companies do. They do everything they can to be safe, but of course they're not stupid, and they'll put in a sweeping statement on liability.
This is really pushing forward the self-driving industry and is an incredible feat of engineering. It's much more open and transparent than every other lane keeping software, and it's being developed with a lot of thought and care from a talented engineer as opposed to some nameless faceless bureaucratic commission in Ford or some other dinosaur.
I'm not gonna debate the "appropriate level of liability."
My point has to do with what you're signalling. If a thing is alpha-level, and real humans can get killed, I wouldn't let random people buy it and use it in their cars, period.
Informed consent is deeply problematic for a product like this: Very few people have the expertise to look at the code and the hardware and properly evaluate the risks, right down to understanding which kinds of edge cases need to be very carefully avoided.
Unless you're vetting researchers and barring people who just want to save a few bucks and brag their car self-drives, you really don't know if every person who downloads the extra software really does grasp the implications of what they're consenting to.
You might grasp the implications, and so might many people in this thread, but that doesn't guarantee that everyone does. THE AUDIENCE OF HACKER NEWS IS NOT A REPRESENTATIVE SAMPLE OF SOCIETY.
And we are talking about a product to be used on open roads: In addition to informed consent from the person who downloads the software, if they get into an accident with another vehicle, pedestrian, or cyclist, did any of those people consent to share the road with someone who installed alpha software on their device?
Morally, I can't get behind a few disclaimers and a nudge-nudge, wink-wink for any kind of autonomous driving tech, even if it's "just" lane-keeping.
———
Update: But to be clear, I am in favour of people tinkering with all sorts of digital automotive tech, and we really should find a way for lone inventors or small teams to innovate without the "enterprise outfits" using regulatory capture to drown small competitors with red tape.
I'm only arguing in favour of truly informed consent, which I believe is tricky for driver assistance technology being provided to arbitrary customers.
So your main problem is the disclaimer and that it's called alpha. I provided a source that rates it the best product among all competitors, with the highest score for keeping the driver engaged. And they have more miles than any other lane assist technology. So I think it's safe. I think the "alpha" is more tongue in cheek and is not a term that means anything, really, apart from, as you say, a wink and a nod.
For the lay user, they won't read the disclaimer or understand what alpha means, or even know that it is "alpha". I'm an engineer and I probably won't ever really audit the code. I will do my research like most other people: read online reviews or testimonials like Consumer Reports.
So are you against all lane assist technology? How about auto-braking? Anti-lock brakes?
I'm against just heaving that technology out over the fence into the hands of consumers and leaving it up to consumer reports and/or individual consumers to decide if it's safe enough.
Safety is a "picking up nickels off of railway tracks" problem. A thing might work 10,000 times in a row, but then suddenly, catastrophically fail because something is different that hadn't been tested before, like dealing with a woman walking her bike across a multi-lane road.
This is not a good scenario to leave up to consumers to decide whether a thing is safe. Not even with Consumer Reports to help out with testing.
Now as to ABS, the comparison is not even close. I do not buy ABS by purchasing brakes and then flashing some ROM with code I download from the internet. ABS is covered by all sorts of regulatory frameworks around the world; it isn't simply cooked up and offered for download like it's an MP3 player skin.
Even though it's a much more mature technology, the problem with ABS is again, consumers cannot give informed consent to a disclaimer when purchasing it from some random person.
When I buy it as part of an automobile from a manufacturer that complies (I'm looking at you, VW) with regulations, I'm consenting to trusting something in a completely different way than when I download code and there's an MIT license or whatever weasel-wording somebody employs to say, "If you die, sucks to be you. If you kill someone, it's your soul that will be in torment."
Your equivocation of 1. downloading code for a safety feature from the internet that's marked "alpha" and has been tested according to whatever the author feels like testing, because it's not offered as a "product", with 2. purchasing an automobile whose ABS brakes are tested and maintained within a global safety regulatory framework...
You're entitled to whatever worldview you like, but on this point I believe our discussion ends. There is a fundamental axiomatic belief I hold that is not compatible with a fundamental axiomatic belief you hold.
I don't want to spend all day trying to explain why I believe Volvo selling a three-point harness is not the same as some random person knitting a seat belt, selling it on Etsy, and leaving it up to you and me to read the consumer reviews to decide whether it's safe enough.
You believe the free market plus informed consumers will sort all this out. I do not.
Please don't disparage Ford engineers if you don't offer any proof. There are hard-working, ethical people working there who don't want to kill people by lightheartedly pushing stuff onto the road. Just because you don't know them does not mean they are not talented.
So no guarantees whatsoever then. Because you are always responsible and are always expected to recover from anything the autopilot might ever come up with.
Teslas do fail in deadly ways. Everyone that cares to look knows this. Yet Tesla is fine with it, even while knowing that humans can't reason about safety when the car drives perfectly the other 99% of times.
> Because you are always responsible and are always expected to recover from anything the autopilot might ever come up with.
That's always been the case for any driving assistance systems that automakers offer, AFAIK. Do you object to the state of driving assistance in general or just how Tesla implements it?
Well, driver assistance systems are mostly about assistance.
Tesla, meanwhile, allows for (and gives the impression of) doing more than that, when it can't. You are expected to react within a split second at any time. Actually, just driving the car is a far simpler task than supervising something else that has unintuitive blind spots.
This is something Google discovered early on, and anyone who thinks about it realizes it: a car that mostly drives itself is way more dangerous than a car without any assistance at all.
"No ADAS system currently on the market has safety guarantees on perception or planning algorithms.
So, what must be guaranteed is the ability of the driver to easily regain full control of the vehicle at any time. In openpilot, this is done through the satisfaction of the 2 main safety principles that a Level 2 driver assistance system must have:
1. The driver must always be capable to immediately re-take manual control of the vehicle, by stepping on either pedal or by pressing the cancel button;
2. The vehicle must not alter its trajectory too quickly for the driver to safely react. This means that while the system is engaged, the actuators are constrained to operate within reasonable limits."
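Principle 2 amounts to clamping both the magnitude and the rate of change of the actuator command. A minimal sketch of such a slew-rate limiter, with made-up illustrative limits (not openpilot's actual values):

```python
# Illustrative slew-rate limiter for a steering torque command.
# The limits are invented for the example, not openpilot's real numbers.
MAX_TORQUE = 1.0           # absolute actuator limit (normalized)
MAX_DELTA_PER_TICK = 0.05  # max change per control tick

def limit_torque(desired, last_applied):
    """Return a torque command within magnitude and slew-rate limits."""
    # Slew-rate limit: move at most MAX_DELTA_PER_TICK toward the target.
    delta = max(-MAX_DELTA_PER_TICK,
                min(MAX_DELTA_PER_TICK, desired - last_applied))
    cmd = last_applied + delta
    # Magnitude limit: never exceed the absolute actuator bound.
    return max(-MAX_TORQUE, min(MAX_TORQUE, cmd))

# A sudden full-torque request ramps in small steps instead of jumping,
# giving the driver time to override:
cmd = 0.0
for _ in range(3):
    cmd = limit_torque(1.0, cmd)
print(round(cmd, 3))  # 0.15
```

The design intent is that even a worst-case (buggy or adversarial) planner output cannot jerk the wheel faster than a human hand can resist.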
They don’t guarantee it; they just provide a disclaimer. There are plenty of driver monitoring solutions they could provide but don’t. Combine that with deeply unethical promises of full self-driving coming just around the corner (feature complete by 2020, a fully autonomous road trip by 2017), and you’re left with a dangerous product and customers who overestimate its abilities.
I'm pretty sure the disclaimer does nothing. If they put out a product they need to expect people will use it, and they need to take actions to keep those people safe.
...and woe betide you if you try to get your insurance company to pay up, ever again. "Aftermarket? Cool, a get-out-of-reimbursement-free card, have a nice day!"
Sure, I'm all for that. OTOH, my experience with insurance companies is that they'll spend 10x the money and effort to deny a payment, and the contracts are heavily weighted in their favor.
I think it's just them trying to fend off the people who are looking for any excuse to sue companies. Tesla gets these lawsuits all the time, but they have a bunch of lawyers to deal with them.
The whole point is they're selling you some hardware only (a modified Android), and it's legal if you yourself modify your own car or something. You have to manually install the software and mount it physically after buying this.
They are selling this product as a dashcam for obvious reasons. The autopilot feature is an experimental feature that you have to enable yourself, at your own risk.
It's 1/10th of the price, so twice as good, then? Just kidding; my point is that Tesla is selling its system on about the same terms. Beta software for extra money, no guarantees.
Not sure that is true, you are comparing apples to oranges. Beta on Tesla is for FSD.
Autopilot = lane keep on the highway, it's as mature as lane keep is on Toyota or Honda. It's also included for no additional cost in all tesla, so it's actually 100% cheaper.
Most software developers have operated in largely unregulated domains, so there's a misunderstanding of how manufacturer responsibility works in industries like automotive. Saying "I AM NOT RESPONSIBLE FOR ANYTHING" in the automotive software space is the product liability equivalent of Michael Scott screaming "I DECLARE BANKRUPTCY" in The Office.
Well, their legal technique is a little more nuanced.
They are selling a product which is a legal and legitimate driver assistance tool which does not have autopilot.
You, as a user, can then modify the device by flashing unregulated code onto it to give it autopilot code, which is not advised by comma.ai *wink wink*
On public roads. We don't know how many they've done if you include closed track testing, but I'm willing to bet it's at least as many as they've done on public roads.
The real point is that Google isn't selling their system to consumers.
I remember the founder interviewing me for the CEO role, back when he landed the investment and publicly insulted Papa Elon for kudos and badassness. The guy was a jerk on the phone, and 10 minutes in I told him to piss off and thought to myself, "wow, who would work with this guy?" He's been at it since 2016, so I'm glad it didn't flop, but it looks like he ate his words to Papa Musk.
it's pretty clearly being sold as a devkit. would you buy a PS5 devkit and expect it to be exactly the same as the retail PS5? I don't understand the issue here
Their business has not only taken less capital, but they have also just become profitable, even while selling hardware, which I find impressive, unlike their competitors, who have either shut down or been acquired. As for their autopilot system, they self-host their deep learning training systems (not in the cloud but in-house), and their competition is literally off the road and non-existent (except for Tesla).
The consumer report on comma.ai is also very interesting and outstanding: [0]
One of the rare startups I've seen that is able to do this with less funding and still be profitable, with hardware. That's how you do it. Well done.
EDIT: So the above is not true about comma.ai? As for the report, it shows the overall results of the design of assisted driving systems, and the test results rank comma.ai 1st overall. For a startup with less capital than its competition this is very rare to see, especially with its own hardware.
Surprised how much hate there is here for this technology from people who have never tried it. I've had one for a year. It is game-changing. You can drive what you like: it supports many different cars. I can road trip for hours on autopilot and then drive my Subaru on rocky forest roads that would rip the undercarriage out of a Tesla. It does adaptive cruise control and lane centering for hours on end. Being able to relax (but monitor) for the 8+ hour drives up and down the state of California is great. Being able to adjust the radio, have a snack, or just sit with your hands in your lap while the car handles lanekeeping feels like a miracle.
There is a 'dating' period when you get it, where you and the autopilot learn each other's limitations: knowing how sharp a turn your car can make without hitting torque limits, learning to trust the ACC to slow down with traffic ahead, knowing how long you can sit in stop-and-go traffic without it disengaging (varies based on the car and its capabilities), what light conditions it finds challenging (the same ones as humans: driving into the sunset), etc. But now that I know its limits and how it notifies me when it is reaching them or is unsure, I have full confidence in it when it's working within its capabilities.
For those who think this device is going to kill them -- the safeties built in make it virtually impossible. Yes the main device (Eon) runs python code. However there is a hardware device which acts as a firewall between the Eon and the CAN bus which prevents any commands beyond a (very low) torque limit such that the car cannot take any action which a human couldn't notice and correct within 1 second. BY FAR the most dangerous part of my drives continue to be the drivers surrounding me who don't look or signal when entering my lane.
I've been using openpilot for about 6 months and this is exactly how I feel.
Even if it did do something erratic (which it hasn't so far) the torque limit means there's just no way the car's going to do anything so fast and aggressively you couldn't take control.
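The torque cap described above can be sketched in a few lines. This is not openpilot's actual safety code; the constants and the function name are made up for illustration, but panda-style firewalls apply bounds checks of roughly this shape before a steering command is allowed onto the bus:

```python
# Illustrative only: constants and names are hypothetical, not openpilot's.
MAX_TORQUE = 300   # absolute actuator cap (unitless here)
MAX_DELTA = 10     # max change allowed per control frame

def clamp_steer_torque(requested: int, last_sent: int) -> int:
    """Clamp a requested steering torque to rate and absolute limits."""
    # Rate limit: the command may only move MAX_DELTA per frame, so a
    # runaway request ramps slowly enough for a human to notice and correct.
    limited = max(last_sent - MAX_DELTA, min(last_sent + MAX_DELTA, requested))
    # Absolute limit: never exceed the cap in either direction.
    return max(-MAX_TORQUE, min(MAX_TORQUE, limited))
```

Even a maximal malicious request gets squeezed down to a small step per frame, which is the property that makes the "human can always override" argument plausible.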
Shady alert: I just clicked on one of the testimonials ("Jason S Co") and it seems apparent that it is an employee of Comma, all retweets of comma's main twitter account.
I bought my car with the express intent of getting a compatible car for Comma. My comma absolutely increased the utility of my car to me.
Been on 3 huge road trips that I wouldn't have considered without the device. 12 hours on the road isn't something I would do every day, but it's absolutely bearable with a comma.
If anything it sounds to me as if the comma provided a false sense of security. Drawing from experience driving 9 hours is already grueling. At hour 10, 11 or 12 would you have been able to take control during a failure of the system?
I certainly think so. I have built a nice intuition for the kinds of scenarios that the system doesn’t handle well. So it’s a pretty seamless transition where I assume control whenever a slow vehicle is merging or an aggressive driver is weaving through the lanes.
I recently rented a car and on a much shorter journey (90 minutes) I was struck by how much more tedious and frustrating it was to manually pilot the vehicle.
I think the thing that makes driving grueling is that you can mostly drive at a subconscious level. Staying in the lane and matching speed are mostly automatic, but occasionally you zone out for a second, or get absorbed in the audiobook. Then you start to drift to the edge of the lane or get a little too close to a decelerating car in front. Then there is a hard attention snap where you are forced back to focused attention on the driving task.
An L2 system does marvels for just smoothing out those peaks. It’s much less frequent that you have to intervene, so your attention can stay at a much more comfortable level and smoothly ramp back to high focus when a situation presents itself.
I have one. It's fairly limited so you are forced to pay attention all of the time and think about whether you need to take over. You just aren't doing the menial tasks of keeping the car centered in the lane or keeping a safe following distance.
If the system does something you don't like you just grab the wheel or touch the gas/brakes (the control messages it sends are lower priority for the car than driver inputs). It's also designed to disable itself if it sees gas or brake input.
It's really best considered as an upgrade to the stock lane keeping assist and dynamic cruise control found on most cars. Compared to the stock systems it's much better behaved (my stock Toyota system will happily drive straight across multiple lane lines in certain lighting conditions and it astounds me that such a thing could be sold), but it is a long long way from Level 4.
> you are forced to pay attention all of the time and think about whether you need to take over
The same people who cram an orange in to their Tesla's steering wheel will just download a sketchy repack of the open source software that disables the attention checks.
if you override the safety precautions, aren't you inherently taking your safety and the safety of your passengers into your own hands? I don't think you can hold that against any manufacturer.
Which car and what did you drive before? And can you quantify how much benefit you see over adaptive cruise / lane assist / etc that is standard on many cars these days?
I agree with your sentiment, just unsure what your point of reference is and how much impact the fancy AI actually has. My car (2017 model) has the features I mentioned from the factory and is great on long trips too.
Yeah the Toyota 2.0 system is pretty lousy. I rented a 2019 RAV4 Hybrid for a road trip from the Bay Area to San Diego. Lane keep assist didn’t work well, even in broad daylight. One of the blind spot monitors was broken (missed a semi that passed us), which was a shocking failure. The adaptive cruise control was good though.
I didn’t own a car before ;) I rode a bicycle and took the train. Although I would rent cars when driving up to Tahoe or Yosemite.
I would say that the lane-keep systems are pretty conservative. I tend to think of it that those systems help you steer while comma will do it for you.
There’s a huge difference between hands and feet off the steering wheel/pedals and needing to guide it every step of the way.
In particular, I think I tried a VW and a Ford’s lane assist. They would lessen the torque for turning the wheel, but wouldn’t actually make a turn by itself, which has marginal value, but significantly less.
Do you find benefit in dynamic cruise control in slow-moving traffic? Now add the lane centering logic and you don't have to apply a lot of torque on the wheel either, because the system makes the adjustments for you.
So the benefit is strictly in slow-moving traffic?
It's never occurred to me that steering wheel torque is an issue. So many of these processes -- lane centering, not bumping into the car ahead of me -- are subconscious for me at this point, but I've been driving for decades.
May I ask how good the technology is that notices when someone is distracted and forces them to attend to the road?
Yes, this has been his project for many years now - he threw a fit and fled to China in 2016 when the NHTSA sent him a Special Order requesting test data for the Comma One product, which he then cancelled and released as pseudo open-source (the ML model is still a closed black box).
Yours is the very essence of a HackerNews comment. Comma is everything a startup should be. They are not wrapping a lame business model in CRUD and living off of malinvestment. They are solving a ridiculously hard problem with a small team of very smart people. Their competition has burnt billions. Meanwhile, Comma is profitable and has a better safety model than anyone.
This is a straw-man. I did not compare Comma to any other startup, or some bizarre ideal of "what a startup should be."
I strongly disagree that Comma "have a better safety model than anyone" - their entire safety model revolves around two things, an assertion that the driver is fully responsible, and a poorly documented set of callback functions in their CAN-intercept board which, depending on the vehicle, do different things, but generally try to limit angular input and introduce an emergency override. This is smart, but the code is not documented well enough to be considered a "safety model" in any commercial sense, and the system, while it has a MISRA C linter plugged into it somewhere, is not built using safety-approved hardware or to any known safety standard. Plus, the fundamental approach on most cars involving CAN message-stuffing simply cannot be as reliable as an integrated system - in that sense, the entire model is flawed and cannot "have a better safety model than anyone."
And, Hotz explicitly and openly left the United States and cancelled his first Comma product launch, rather than respond to a letter NHTSA sent him asking questions about Comma's testing protocols and safety process. Call that what you will.
> pseudo open-source (the ML model is still a closed black box).
?
I am confused by this statement. Sure, the ML model is a black box, but that's better than being completely closed source with no model at all. It's more realistic to build the software yourself than to train your own self-driving ML system.
The most fundamental part of the system, the one which makes driving decisions, is not open. I did not say anything about whether or not this was "better" than the product being fully closed source, only that it is not truly open, and I fully believe this. "Open source autopilot" implies to me that the autopilot is open - that an end-user can inspect, audit, and attempt to understand the decisions their vehicle is making. This is not the case for Comma - rather, it is an open-source CANbus translation layer attached to a closed source autopilot.
When you say not everything is open source, I assume you mean the training code is not open source? I'm curious what you would want to learn from that. You wouldn't be able to actually train a model, since you wouldn't have access to the data.
The end-user can inspect, audit and understand the decisions their vehicle is making. All you have to do is see how the neural network behaves for different inputs. That's the correct approach, whether you have access to the training code or not.
Comma don't even say _how_ the model works! What layers are there? What learning strategies are they using? What do they do? It's literally a black box! "All you have to do is see how it behaves for different inputs" is just black box reverse engineering! Machine Learning is NOT a magic black box.
Comma have constructed a "stack" of models, just as you would connect a series of functions to make a kernel in the mathematics sense, or a series of algorithms or instructions to make a program. And that stack is entirely closed.
https://medium.com/@chengyao.shen/decoding-comma-ai-openpilo... here is an example of reverse-engineering the driving model. If Comma released this exact sort of documentation, including what ML modeling strategies they were using, what each input and output parameter affected, and how the model was trained, I could maybe consider the system open.
The models are now saved in ONNX format, which is the most readable format available. You can view the architecture of the model with a basic neural network viewer.
Again, I'm curious what you want to learn from the training code?
Chengyao's medium post is great, but it is only possible because the models, the code that runs them and the code that parses the outputs is fully open source.
My binary is saved in PE format, which is the most readable format available. You can view the architecture of the software by opening it in the basic Ghidra pseudocode decompiler. All Windows software is now "fully open source."
Chengyao's Medium post is advanced reverse-engineering work requiring a detailed knowledge of the appearance of specific ML algorithms saved in a binary format. And even with this knowledge, Chengyao was only able to _speculate_ about the behavior of the model and the desired response to certain inputs.
What would satisfy me from Comma, if they were aspiring to some kind of "open" label, would be a detailed document explaining each layer of the ML system and what its goals are - like Chengyao's Medium post, but without the need to reverse-engineer the system and attempt to infer its behavior!
Now, maybe Comma don't aspire to be truly open, in which case, that's fine - In that case, Comma is a closed model with an open-source CAN interceptor on top. So essentially, crowd-sourcing the tedious and high-liability parts (vehicle integration, driving video) while owning the valuable parts (training data and model architecture). Very cool!
What format would you rather the model be saved in? ONNX is the most cross platform and standard as far as I know, and it's also what we use internally.
It's not like a PE format which is compiled from something else higher level.
Isn't any ML model "just a bunch of weights," if you look at it right?
So, where does "modelV2" come from here, in the part that plans the lateral steering action? https://github.com/commaai/openpilot/blob/4ace476f14bb73c354... . It's a model. A video frame goes into the model, and somehow the desired path comes back out. That's the core of the driving system!
If self-driving companies really believed in their product, they would bundle car insurance that only works when self-driving is on with the product, and it would be cheaper than normal car insurance.
In fact we have the opposite, where MetroMile advertises lower premiums while conveniently occupying your OBD2 port without providing pass-through, preventing the use of Comma's product (this honestly makes me consider switching from Geico to MetroMile, so I'm not subsidizing Comma customers, but there are other issues with MetroMile...).
The Comma AI founder has explicitly stated that their endgame was the insurance business. I'm not sure if he has since changed his stance on this.
It makes sense, though, in a super surveillance-capitalism way. Offer a close-to-cost self-driving product and use the data to estimate the risk of everyone on the road (both the users of your product, but really anyone your product has eyes on).
The comma AI founder has said it's remarkably easy in the data to identify good and bad drivers, and that there is a strong bimodal distribution.
That last statement is almost more interesting than the autopilot stuff. How do they determine who's "really" a good driver in order to correlate it with whatever they're measuring that has a bimodal distribution? Is it possible to teach someone to change their driving style so that they're in the other lobe? If so do they become an actual better/worse driver?
As a motorcyclist, I figure I will eventually end up as a JIRA ticket in some "self-crashing car" system's backlog.
But then again, that is the story of society - progress slowly built up over time on the pile of irradiated, mangled, dead and destroyed bodies over the course of millennia of lessons learned.
Just hoping that they name the feature / patch after me.
I've been using this on/off since early 2019. Much better than stock honda LKAS / cruise control in terms of detecting lanes, lower speeds, etc.
The driver monitoring has gotten a lot better as well. If you are inattentive (based on your gaze) and don't respond to the prompts quickly, it plays a loud alarm and disables the system (until the car restarts) to prevent something like the videos of people sleeping in Teslas you see online.
I don't know how well the system works, but do they plug into the cruise control mechanism? How are they able to provide the automatic braking functionality?
Yes, they intercept whichever CAN bus in the target vehicle is used to send Adaptive Cruise Control and Automatic Lane Keeping messages, and send their own instead. On many target cars, they actually simply use the stock Adaptive Cruise Control and openpilot provides only the steering (ALK).
* Filtering the OEM messages to prevent them from reaching their destinations, which can be accomplished if Comma hardware is installed in the correct place in the bus to function as a two-ended "gateway" / "relay." Of course, then if the hardware or forwarding software fails, something else breaks...
* Relying on the end-user disabling the OEM feature which could conflict with Comma, and signaling an error state if they do.
* Relying on factory sequential integrity checking to overwrite factory messages by beating them in a timing race, causing the Comma messages to be "valid" and the factory messages "invalid" (no joke!).
* Just "spamming" the message so it's received more often than the OEM ones.
As you can tell, this is not a trustworthy safety product by any stretch of anyone's imagination.
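The first bullet, the gateway/relay approach, can be made concrete with a toy simulation. The arbitration ID and frame layout below are made up, and real hardware does this at the bus level rather than in Python, but the logic is just: drop the OEM steering frames, substitute your own, pass everything else through.

```python
# Toy model of a CAN gateway/relay. The ID and payloads are hypothetical.
LKAS_ID = 0x2E4  # pretend arbitration ID for the OEM steering command

def gateway_forward(frames, replacement_payload):
    """Forward frames across the relay, swapping in our steering command.

    `frames` is a list of (arbitration_id, payload) tuples as seen on the
    OEM side of the bus; the return value is what the other side receives.
    """
    out = []
    for arb_id, payload in frames:
        if arb_id == LKAS_ID:
            out.append((arb_id, replacement_payload))  # our command instead
        else:
            out.append((arb_id, payload))              # pass through untouched
    return out
```

The failure mode noted above is visible in the structure: if this relay (hardware or software) stops running, the filtered messages simply never arrive at their destination.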
CANbus relies on message identifiers to prioritize communication. Spamming messages of equal arbitration ID at the same time, as is possible in an override system without careful design consideration, produces undefined behavior, as both messages can win the arbitration process.
Furthermore, corrupting command streams by "racing" valid messages which are sequence-checked is expecting a fault state to produce valid "normal" behavior. That's a lot of trust in OEM control modules - good thing they're usually written with safety in mind!
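The arbitration point can be modeled in a few lines. On a real bus, arbitration happens bit by bit (dominant zero bits win), so the numerically lowest identifier takes the bus; two nodes transmitting the same ID both survive arbitration and then collide in the data field. A simplified sketch, with names of my own choosing:

```python
# Simplified model of CAN arbitration; real arbitration is bitwise on the wire.
def arbitrate(ids):
    """Return the winning arbitration ID, or None on a same-ID collision."""
    winner = min(ids)  # lowest ID has the most dominant (0) leading bits
    # Two nodes sending the *same* ID both "win" arbitration and then
    # collide in the data field -- the undefined case described above.
    if ids.count(winner) > 1:
        return None
    return winner
```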
For supported cars, the car's stock AEB system still works as intended
For some cars, openpilot only does lateral control (steering) and doesn't do longitudinal (gas/brake), so the car's stock dynamic cruise control is in control, which includes AEB.
Not sure I'm so keen on Python driving my car. I've looked through some of the code and I think for me to buy into the safety the quality of the code would need to be improved, well commented, and audited.
Haha, I just had a horrifying thought about troubleshooting broken virtualenv installs and deadlocked dependencies on the side of the road, with a car that refuses to start.
Plus, the actual driving model is a black box, so there's really not a lot here that's any more reassuring than any other black box lane-keep-assist system on the market, and there's been an active resistance to apply any form of rigor to the system or produce any actual test data from the Comma company for many years (with the argument being that the system is simply an augmented lane keeping system and is just a supplement to the driver etc. etc. and thus does not need to be held to any sort of standard of safety or compliance - where have we heard that one before?)
CPython includes both reference counting and a stop-the-world tracing garbage collector. You can turn off the GC -- and openpilot appears to do so[1] -- but the tradeoff is that any objects that are part of reference cycles will be leaked, and will not be deallocated until the program exits.
Anybody want to place bets on how many of these dependencies[2] have been audited to determine whether they can create cyclic references?
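The cycle-leak behavior is easy to demonstrate in standard CPython; `Node` and the function name here are just for illustration, not anything from the openpilot codebase:

```python
import gc
import weakref

class Node:
    """Placeholder object; any ordinary class instance supports weakrefs."""
    pass

def leaks_without_gc():
    """Show that reference cycles survive `del` when the cyclic GC is off."""
    gc.disable()
    a, b = Node(), Node()
    a.partner, b.partner = b, a   # create a reference cycle
    probe = weakref.ref(a)        # lets us observe whether `a` was freed
    del a, b                      # refcounts stay nonzero because of the cycle
    leaked = probe() is not None  # still alive: the cycle leaked
    gc.enable()
    gc.collect()                  # the cycle detector finds and frees it
    reclaimed = probe() is None
    return leaked, reclaimed
```

With the collector disabled, `del` alone never reclaims the pair; in a long-running daemon that is a slow memory leak from every cycle any dependency happens to create.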
It seems like the market for a product like this (augmenting older vehicles) will diminish over time as more and more vehicles come with these features plus rolling updates and more advanced sensors standard.
If this proves to be the case, I wonder what the company will do to pivot
Sell directly to manufacturers? Nearly no one (except Tesla) has anything coming in the near future. Why would anyone pay billions to develop the technology in-house if they can just pay Comma.ai to integrate a sub-$1000 thingy into their cars?
I don't think Comma would pass compliance as any control-related automotive system, much less a self-driving one, at any established automotive manufacturer in a regulated market (US, EU). The code isn't written to any commercial audit standard that I can tell, the hardware is COTS mobile phone hardware, and change management and testing (in a formal sense) seems pretty much non-existent. Most manufacturers demand compliance with standards like ISO 26262 for liability reasons. Arguments about the value of these kinds of standards aside, it's vanishingly unlikely that any mature automaker would buy Comma's product as it stands - a major or nearly complete overhaul would be required. Now, that's not to say they couldn't sell into a less-mature company or one in an unregulated market, but I don't think the opportunity for them is as great as it seems.
EDIT: On an additional reading of https://github.com/commaai/openpilot/blob/master/SAFETY.md Comma claim to be ISO 26262 compliant on the premise that there's no real risk model, because the driver is paying attention and will simply take back control. One could call this the "Elon Model" to safety compliance. I suppose it works for some manufacturers.
It does seem that starting around 2018, Comma added a C layer to their CAN intercept board (Panda) which audit-checks some safety-critical CAN messages to bounds-check the control inputs. That's... a start, but not really a detailed risk analysis model by any means.
"We have done most of the ISO26262 analysis, we're hiring someone right now to get it written up nicely and open sourced. (those interested can find the job posting) It's one of our goals for openpilot 1.0"
They have a slightly more in depth explanation of their safety model in the "Background — safety architecture" section of this post.
This is the first place I've seen this acknowledged - and, bizarrely, I read most of the PRs and commits to the `safety` part of Panda, and didn't find a single reference to the checklist in that Medium post (and, only some of the requirements seem to be implemented in most cars). It feels really late to me to be doing this, and it seems like they could use a good docs person and some, well, leadership in the project.
One thing I noticed in general was that it seems like most Comma communication is side-channeled - most commits and PRs do not have much of anything in terms of description or documentation, and code review is really sparse, it feels like there's a back-room discussion happening rather than GitHub, presumably on Discord? This makes it almost impossible to understand the safety constraints and reasoning, or to audit changes to the system.
But, it also sounds like they could very well be on the right track for 1.0, provided they hire the right person and they're able to clean things up.
>and didn't find a single reference to the checklist in that Medium post
Yeah, I always thought safety.md in the panda repo was lacking and the points from the medium post should be included. Perhaps someone should make a PR. I doubt anyone who has worked on panda code hasn't seen that medium post though.
>(and, only some of the requirements seem to be implemented in most cars)
May or may not be what you're referring to, but the majority of car brands don't support openpilot's longitudinal control and maintain the stock ACC system while openpilot just controls steering. That's why you may not see any acceleration/deceleration safety code. Some brands also have LKAS torque severely limited by the EPS firmware, which should already be ASIL D rated. Honda, for example, will get around 5 degrees of max steering at highway speeds despite what openpilot says it wants, so there's no real need to add steering safety code to the panda.
I think you would most likely see code review on merged PRs done by the community. Look up almost any PR by deanlee. Comma employees most likely do have most of their communications side-channeled, though.
Nearly all luxury brands have had lane keeping for years. Similar quality to what Tesla offers, just not marketed as broadly and typically not available on a model 3 kind of entry level. But at Tesla S price levels, all other manufacturers offer similar lane keeping and adaptive cruise capabilities.
Perhaps they are waiting for someone to buy them. Even though most of their software is open source, the crucial parts are still closed IP. If it is really better than the current LKA systems and has the potential to handle even more driving scenarios in the future, it will be an easy decision for a big auto company that needs to quickly catch up.
Have you seen recent Volvo, BMW and Mercedes systems? They all have good lane keeping and adaptive cruise / stop & go for several years now. And I'm sure there are many more that I'm not aware of.
I have seen and driven a recent BMW and the latest VW (ID.3), and OpenPilot on a Toyota RAV4 was giving me a much better experience, both in lane keeping and as a product (driver monitoring, communication with the driver, overall UI and UX). BMW and VW score by having map data integrated, which allows them to adjust speed for curves and traffic signs. However, that is not directly related to what comma is doing right now, so I am taking it out of the equation.
Moreover, I think adding map data is just a matter of time for Comma.ai, and I can imagine that adjusting speed before entering a sharp curve will be much better solved with vision, which also seems to be on Comma's roadmap.
Not sure why the downvotes, it is a legit risk for this business. Remember how one could convert analog SLR cameras to digital with some kits? It wasn't a success story...
If we are concerned with cars killing people, we should get rid of cars. This FUD around self-driving == killing people will, in the long-term, cause more deaths than the handful of sensationalized stories about self-driving deaths.
The only reason those car crashes get national attention is because they were self-driving. In every other way they are boring. Bicyclists hit by car: every day. Man killed by semi: every day. etc.
Do you think you'd see a bunch of FUD about a 2021 manually-driven Chevy mystery car getting in a minor fenderbender and equating it with the safety of the entire car industry? No.
The assertion that any self-driving == better than human is just ridiculous too. This is a website selling a self-driving devkit with a big "buy it now" sign that also has a disclaimer saying "Well, you're not really buying it; if you kill someone, we've never met."
Sure, self-driving cars that were better than humans would be good, but what we have right now is self-driving cars that are maybe better than humans in normal conditions (controlling for type of car, conditions etc), and completely break down in bad conditions and often fail unsafe in between.
>Do you think you'd see a bunch of FUD about a 2021 manually-driven Chevy mystery car getting in a minor fenderbender and equating it with the safety of the entire car industry? No.
Well of course not, because we all moved to self-driving cars in 2017 as Elon Musk told us we would.
There's a very simple reason we aren't worried if Bob down the road crashes his car: we're not binary-identical to Bob, and people haven't been systematically lying about Bob's capabilities for the last decade.
For some, it feels different because of the potential scale. Watching Falcon Heavy boosters return in perfect unison is spooky in a way watching one Falcon 9 return isn't.
Put another way, consider all the IT professionals who advise their relatives to wait a month or two before performing a major OS update. When self-driving cars are the majority of vehicles on the road, and we get our first buggy software update that results in a string of crashes, how likely will people be to update if they're even given a choice at all?
Comparing human to self-driving per-mile fatality statistics like that's the primary measure for people, while ignoring the fact that we're looking at the first mainstream manifestations of a coming type of threat modeling that the species has never before had to even consider, seems a little narrow a way to view the issue.
For insurance companies and actuaries looking to define collectivized risk, spreadsheets are the right way to consider this kind of thing. For individuals who've driven their whole lives without an accident, deciding to let emerging tech take over for you when taking the kids to visit grandma is going to be a significant transition in human history.
Consider that there are very elderly people alive today who will still never fly on an airplane because of their early safety records.
Does this mean that if I build an AI controlled gun turret to shoot people who walk across my lawn it shouldn't be news when it kills someone because people get shot by other people quite regularly?
In my opinion the application of technology, and (more importantly) the delegation of human responsibility to a computer, should be something that's part of the national conversation.
Not the same. The people wouldn’t’ve gotten shot in the first place. The idea with self driving is that it’s safer than a human, not that people will die regardless. So in your scenario, unless you’re just shooting everyone that walks by for some weird reason, and your turret is less accurate, it’s not the same.
> sensationalized stories about self-driving deaths.
I disagree that they're sensationalized. Self-driving systems are designed to improve user safety, but when they end up making mistakes that human drivers _wouldn't_ make then it's appropriate to thoroughly investigate these systems.
> is because they were self-driving.
Well, precisely. We tolerate driving accidents because we know that the mobility cars provide ends up offering far more value to the world than the occasional accident costs us.
What's the trade with self-driving cars? What is the technology enabling that is worth the possible additional risks?
> car getting in a minor fenderbender and equating it with the safety of the entire car industry?
I don't think that comparison offers anything interesting, and it ignores the long and storied history of improving automobile safety. Look into Ralph Nader's "Unsafe At Any Speed" if you really want to see people looking at "regular" accidents and deciding there was something to be fixed.
And the way you establish safety is by doing a lot of miles with trained employees and clearly established liability, not by releasing alpha software to the untrained public with complete liability disclaimers.
Citation needed. So far, they have mostly driven into static obstacles on highways, the only environment where the vendors even allow their "self-driving" systems to be used, because of the overwhelmingly favorable conditions.
Yes, self-driving systems will have fewer deaths/distance driven and therefore be statistically safer than human drivers.
But human drivers make human errors, understandable, explainable, common errors that we are used to, and therefore we underestimate their severity.
Self-driving cars will make space-alien machine-logic crazy weird and definitely not human errors, they will cause fatalities in situations where any reasonable human wouldn't, and that is much scarier than human errors, and therefore we overestimate their severity.
And that makes self-driving a very, very hard sell, it's simply not enough to be statistically better on just the numbers, you have to be psychologically better, and that's a huge hurdle.
Like I've said before, I don't have any problem if those that want to accelerate humanity's progress in this area volunteer themselves and their families to be self-driving test dummies.
I'd even support the idea of building them statues, should they meet an unfortunate hero's end.
If they can prove fewer crashes/deaths with their system than with human drivers, then perhaps they'll have something. But with only 35 million miles driven, they don't have enough data to prove much of anything.
Tesla has something like 4 billion miles driven with Autopilot. This apparently has 35 million miles. (It isn't a direct comparison, but the overall rate in the US is around 1 death per 100 million miles driven).
Anyone who uses these numbers to even imply this is safer than Tesla knows nothing about statistics. It is way too early to draw any conclusions about Comma's safety. It is also probably too early to even draw statistical conclusions about Tesla's safety.
Be my guest if you want to argue the merits of one system over another based on some technical specs or design decisions. My point is that the track records are not long enough to draw any statistical conclusions and the Tesla and Uber have killed people but Comma hasn't argument is at best wildly misleading given the large differences in miles driven.
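To put a number on how little 35 million fatality-free miles can establish, here's a back-of-the-envelope sketch using the statistical "rule of three" (if zero events are observed in n independent trials, an approximate 95% upper confidence bound on the event rate is 3/n). The mileage figures are the ones quoted in this thread; this is an illustration, not a safety analysis:

```python
# Rule of three: after observing zero events in n trials, an approximate
# 95% upper confidence bound on the per-trial event rate is 3/n.

def rule_of_three_upper_bound(n_trials: float) -> float:
    """Approximate 95% upper bound on the event rate after zero events."""
    return 3.0 / n_trials

openpilot_miles = 35e6          # figure quoted in this thread
human_rate = 1 / 100e6          # ~1 US fatality per 100 million vehicle miles

upper = rule_of_three_upper_bound(openpilot_miles)
print(f"95% upper bound: 1 fatality per {1 / upper:,.0f} miles")
print(f"That bound is {upper / human_rate:.1f}x the overall human rate")
```

The bound works out to roughly one fatality per 11.7 million miles, about 8.6 times the overall human rate, i.e. the data so far can't even establish parity with human drivers, let alone an improvement.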
I just started an aerospace company making planes in my garage. My company has killed fewer people than either Boeing or Airbus. Does that mean I make safer planes than either of those two companies?
A sentence can be technically correct while also being actively misleading.
This seems to deliberately ignore the pretty reasonable argument being made.
If you can't address the argument the other person makes, it seems like the more reasonable thing to do is acknowledge that fact, rather than just ignore their point entirely.
Tesla has had “driver monitoring” in that you had to apply some torque to the wheel, while the comma does gaze detection to make sure you aren’t watching a movie or sleeping.
Tesla has cameras in the cabin for all cars now, but has never stated that they are going to enable them for driver monitoring purposes. (But their code has flags like driver_eyes_up and drivers_eye_down, so at least it's on their mind...)
I don't think it has yet. It's also one of the safer assist systems because it observes the driver's eyes to check they are looking where they are going, unlike, say, Tesla.
So how much do these things actually cost? The title makes it seem like it's $999, one-time. But the website refers to $24/car/month. And there also appears to be a $200 harness fee at installation. Are there other fees?
I might be interested in a system like this, but I would only use this infrequently. If it costs $1,200 + tax up-front and about $300/yr in perpetuity, that seems like much less of a good deal.
The $24/month is optional if you want to remote login via cell service into your device. You can do certain things like take a picture remotely, and in the future might be able to lock/unlock doors.
So, it's $1200 without that stuff. George is strictly against the SaaS revenue model, although for software that is continually improving, it seems reasonable to me.
Yes, this is usually leveraging the LKA feature. Depending on the manufacturer/car the LKA generally has a torque limit (to allow the driver to override the system by hand, and prevent wrist injuries from steering input) and an angle limit.
No cars that I'm aware of allow the steering motor to be actuated over the OBD port; the CAN bus carrying the steering/LKA messages sits behind a diagnostic gateway that doesn't pass them. You need to tap into this bus, which in the Comma product is accomplished by the connector at the rearview mirror (usually used for the stock lane-keeping assist camera).
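As a rough illustration of the torque-limit point above, here's a toy sketch of packing a steering command with the limit enforced. The field layout, limit value, counter, and checksum are all invented for illustration; real LKA frames are manufacturer-specific:

```python
# Toy sketch: clamp a requested steering torque to a safety limit before
# packing it into a CAN-style payload. All values here are hypothetical.
import struct

MAX_TORQUE = 300  # made-up limit in raw units; keeps the wheel overridable by hand

def pack_lka_frame(torque: int, counter: int) -> bytes:
    """Clamp the torque request and pack a toy 4-byte payload with a checksum."""
    torque = max(-MAX_TORQUE, min(MAX_TORQUE, torque))  # enforce the torque limit
    payload = struct.pack(">hB", torque, counter & 0xF)  # signed torque + 4-bit counter
    checksum = sum(payload) & 0xFF                       # toy additive checksum
    return payload + bytes([checksum])

frame = pack_lka_frame(torque=1000, counter=3)  # request far above the limit
torque, counter = struct.unpack(">hB", frame[:3])
print(torque)  # clamped to 300
```

The clamp is the interesting part: no matter what the software upstream asks for, the command on the bus never exceeds what a driver can physically override.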
Most modern cars have something like lane assist yes, which moves the steering wheel for you. Many have automatic parking which also moves the wheel. So yeah it's built in anyway.
Since the early 2000's most cars have been using a drive-by-wire system for steering. They aren't exposed on OBD2 though, that's just for legacy emissions. Most are on a CAN bus but there isn't some big standard to control them all--it's very much a manufacturer by manufacturer thing right now.
This is only partially true - the only cars which used "real" drive-by-wire (wheel -> computer -> steering actuator) were a few Infiniti vehicles, and even then they had a clutch which could re-engage a physical steering column.
Rather, a few cars in the early 2000s and many since the early 2010s or so offer electric power steering assist: the steering wheel is still very much connected by a physical steering column to the steering rack, and normal steering input is purely physical - there's just also an electric motor attached to the rack to provide the usual power steering boost. And, that power steering assist can be controlled over a non-diagnostic CAN bus to facilitate LKA.
I've seen a few videos of it. It really is just a half step up from active lane keeping. It will proactively keep you centered and manage your speed with no interaction, whereas normal lane keeping only kicks in if you are about to drift out of the lane.
A "half step up" doesn't really do it justice. Most current LKA + Adaptive Cruise Control on cars will only keep you in the lane for a few seconds before requiring human engagement. The Comma Openpilot has already driven intervention free for hours.
Yeah I realize I am a bit reserved in my description, and it is incredible that a phone can be used to do what it's doing. I really like this product and I'm glad somebody is making a self upgradeable module you can put in a car and get these features, as opposed to them being trapped in a model year's trim level and irreversibly obsolete like other modern car electronics.
My main goal was to provide a reference to its capabilities without invoking "self driving".
There are a lot of drives available on youtube. https://www.youtube.com/watch?v=22y-LaiDkrM
I love it for freeway driving. I don't really use it for city driving though. If you would find cruise control annoying on some road, openpilot probably wouldn't help you.
Only if you want to look like a drunk driver. I wish it worked like that, but all it does is jerk back to the center of the lane about half the time and then complain that "steering is required".
Yes I really like the Honda adaptive cruise control, especially in bumper to bumper traffic. Does a fantastic job, saves a lot of driver fatigue and other than perhaps not being as smooth and fuel efficient as I could be, works exactly how you'd hope.
The LKA on the other hand works only about half the time even in perfect conditions; when it does work it doesn't do that good a job, and you have to provide steering input every 10 seconds anyway.
This is my experience on a 2020 Honda CRV so I assume this is among the best you can expect from Honda.
I honestly did not know this product existed and am very tempted, because an upgraded version of the adaptive cruise control would be very tempting. I've found that monitoring that the car is doing the right thing is a lot less taxing than actually driving, but feels equally safe.
Exactly. I love the adaptive cruise (although I wish it would go below 25mph) and it makes long highway drives so much less tiring. If I also didn't have to worry about minor steering adjustments, it would be even better.
My adaptive cruise control does work from 0-25mph and it is great, especially for bumper to bumper. You only have to watch out for cars aggressively merging in front of you (which the distance sensor doesn't always pick up), but otherwise the car does everything. It doesn't do pure stop and go, but the car rarely ever fully stops, and once it does you only have to press one button to get going again.
I understand this is marketed as a "dev kit". And I understand the cost benefit for shipping a COTS OnePlus phone to provide the camera and UI. But I wouldn't buy one of these due to how much windshield visibility is blocked by the device and the wire dangling from the headliner.
It would be better to have the camera and wiring hidden away next to the rear view mirror similar to how other driver assist cameras are packaged, with a CAN connected processing box in the glovebox (so audible chimes can be made), and infotainment screen integration for the UI (perhaps as an Android or CarPlay app).
Last time I checked, they have a stripped down Android that would run a custom Termux and a single app with UI and different threads. Some of the services in Python, some of them using C, all communicated using capnp over zeromq
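For a feel of that architecture, here's a deliberately simplified stand-in for the inter-service messaging pattern described above. It swaps json and an in-process queue for the Cap'n Proto + ZeroMQ stack the comment mentions, and the topic and field names are made up:

```python
# Toy pub/sub bus: one publisher, many subscribers per topic.
# Stand-in for capnp-over-zeromq; serialization and topics are illustrative only.
import json
import queue

class Bus:
    """Minimal topic-based pub/sub: each subscriber gets its own queue."""
    def __init__(self):
        self.subs = {}  # topic -> list of subscriber queues

    def subscribe(self, topic):
        q = queue.Queue()
        self.subs.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, msg):
        data = json.dumps(msg)  # stand-in for Cap'n Proto serialization
        for q in self.subs.get(topic, []):
            q.put(data)

bus = Bus()
controls = bus.subscribe("carState")  # e.g. a controls daemon listening in
bus.publish("carState", {"vEgo": 29.1, "steeringTorque": 12})
print(json.loads(controls.get_nowait()))
```

The appeal of this shape is that Python and C services only need to agree on the schema and the socket, not on each other's internals.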
Adding the self driving option to a Tesla model 3 or S is up to what, like $3000 now? (on top of whatever extra tech package you have to add too) This is pretty competitively priced. A lot of folks spend more than that just on upgrading the stereo in their car.
Highway Autopilot is free and included on all current Teslas.
For an additional 10k you can do a software upgrade to FSD, which does things like lane changes, autopark, and summon. Someday, it will also hopefully do this for city streets (15 public users are on a beta of it right now).
But yah, you would never buy this on a Tesla, as the car includes a better version of it for free on all cars.
after watching a lot of his live coding content, i'm not sure i'd trust him or anyone he would consider a good hire to write code to drive my car for me
so let me get this straight, this is a single camera, single point of failure device that you mount under your rear view mirror and trust your life with? ok...
Full functionality requires both lane keep assist for steering and active cruise control for gas/brake. It uses data from car systems, but the model is closed so I'm not quite sure how the data is used.
The website does list what cars are officially supported. People have gone a lot further to get it on older cars, make unofficial forks, or even add radar or swap EPS.
It doesn't give you a self driving car, but I found it's significantly better than stock systems for highway driving.
Not sure that's the right question to ask. It takes two hours to get from San Francisco to Auburn, and it's highly unusual to have snow there.
Setting aside that I80 is generally cleared of snow very quickly, two hours of lane assist seems nice before tackling the remainder. And in most cases, it's more like "use this all the way to Truckee and then drive the last 30 mins to Squaw Valley".
I use my Nissan LKA all the way to maybe Colfax. Beyond that I'm skeptical any modern system can tolerate the relatively tight turns and varying road conditions even in the summer. Still, getting 2+ hours with LKA is really helpful.
Long trips means spending a lot of time boringly driving on the highway with clear weather. I do not care for help in the more difficult first 15 min or last 15 min. I care about having help during the many hours in between.
That's interesting. I'm the complete opposite. Highway driving is the least taxing -- I can do it subconsciously. It's the more difficult first/last 15 mins of stop and go that I want automated. I couldn't care less about the highway in clear weather. I might even enjoy that bit occasionally.
I thought the same until I tried a good L2 system. ACC and LKA are pure value adds. Lower the number of moments where your focus snaps back in as you are meandering out of the lane or creeping up on the person in front of you.
I generally agree, but heading north on I-5 from LA to SF is bumper to bumper traffic constantly swinging between 50 MPH and 75 MPH. Normal cruise control is worthless here because you're never going the same speed long enough to be a net benefit. I would absolutely love adaptive cruise control plus semi-automatic steering in that one scenario.
Stick me out in the middle of nowhere and I'm perfectly content driving for hundreds of miles straight. I don't find myself in that too often now, though.
This is purely anecdotal, but I've found driving like a "grandma" makes the drive completely stress free even in bumper to bumper traffic on the freeway. Just pick the center lane and set the cruise control to just around the average speed of that lane. Sure, I might arrive 5 mins later and annoy some impatient people, but it does the trick most days. Hopefully more people start doing this and we can reduce phantom traffic jams.
That's absolutely the sane approach: shoot for the middle of the pack. It still never fails that someone races past me and cuts in, then forgets that their car also has cruise control and sucks at manually maintaining a constant speed.
As someone else here wrote: "until I tried a good L2 system. ACC and LKA are pure value adds. Lower the number of moments where your focus snaps back in as you are meandering out of the lane or creeping up on the person in front of you."
It's cool tech, but I think Comma's quickly running out of room for commercial success here because a single camera can only take you so far. With GM rolling out Supercruise onto the new Bolts next year, it's hard to see how these guys last for more than another couple of years. Best case for them is to get acquired by a desperate laggard, but that's not really a great outcome if you're an engineer at Comma.
is the "retrofit" strategy living in the past? living in 2021 it seems a bad choice to buy gasoline cars. Most new cars coming out will have some kind of driving assist (L2 autopilot).
It's the opposite. I already own a compatible gas car. Instead of wasting resources on a new car, I can just retrofit the one I already own.
Also, I desperately want an electric car. But I need a minivan because (post-pandemic) I'm often driving six people around who are elderly or children and can't climb into an SUV.
There is no such electric van. This is the only way I can get "autopilot" in a van.
Too bad they don't support Mazda officially... because by default the auto-braking on my CX-5 is way too sensitive for my driving habits... and it comes to a complete stop as soon as it can even if the danger is out of the way.... I'm thinking about disabling it because I almost got rear-ended once.
What an insufferable use of lowercase. I can't even identify the products in some text later on:
```
Your first three months of comma prime are free with the purchase of a comma two.
```
Later...
```
The comma two and openpilot are currently compatible with dozens of cars with new models being added regularly. See if your car is compatible or check out our complete list of compatible cars.
```
Why do first words of a sentence still get capitalization but not the actual proper nouns?
—
“Any user of this software shall indemnify and hold harmless comma.ai, Inc. and its directors, officers, employees, agents, stockholders, affiliates, subcontractors and customers from and against all allegations, claims, actions, suits, demands, damages, liabilities, obligations, losses, settlements, judgments, costs and expenses (including without limitation attorneys’ fees and costs) which arise out of, relate to or result from any use of this software by user. THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT. YOU ARE RESPONSIBLE FOR COMPLYING WITH LOCAL LAWS AND REGULATIONS. NO WARRANTY EXPRESSED OR IMPLIED.”
SOURCE: https://github.com/commaai/openpilot/blob/devel/README.md#su...
—
EDIT: Here are the terms of use too, which appear to align with the legal clause above:
https://my.comma.ai/terms