My advice to the company BOD & shareholders: have him take Udacity's self-driving-car online course; after the first project, he'll clearly understand how limited CV-only lane detection can be. To Uber's ex-CEO's point: LIDAR is the SAUCE.
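For anyone who hasn't done it, that first project is essentially the classic pipeline: grayscale, blur, Canny edges, a region-of-interest mask, and a Hough transform. A minimal sketch, assuming OpenCV (the thresholds and region are illustrative, not the actual course starter code):

    # Classic CV-only lane finding: every stage is hand-tuned.
    import cv2
    import numpy as np

    def detect_lane_lines(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)

        # Keep only a trapezoid roughly covering the road ahead.
        h, w = edges.shape
        mask = np.zeros_like(edges)
        roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                         (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
        cv2.fillPoly(mask, roi, 255)
        masked = cv2.bitwise_and(edges, mask)

        # Fit straight line segments to the surviving edge pixels.
        return cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=100)

Every constant in there is hand-tuned, and the whole thing falls over on glare, rain, faded paint, shadows, or a curving road. That fragility is exactly the limitation I mean.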
Musk's central point is that if humans can drive with two cameras (our eyes), so can a machine. And he's right. Why couldn't we build something that does just as well as the visual cortex?
Humans have superior intellect, even if specific performance is lower. We have experience. We learned to drive in the particular area where we generally operate our cars, with all its idiosyncrasies. Then consider eye contact, nonverbal communication, bias, and personal investment in the outcome, all of which computers are incapable of. It's not as simple as better sensors and reaction time.
But isn't the whole point of autonomous cars that humans are pretty shitty drivers? If I could augment my vision with a 360º setup of cameras and LIDAR you better believe I would!
I'd say people are actually pretty good drivers. I'm more interested in autonomous cars for the time savings than for the potential safety improvements.
Tens of thousands of people are killed every year[1] in the United States alone; humans are awful at driving. If autonomous vehicles can make driving as safe as flying, it will be like curing breast cancer in terms of lives saved.
Our eyes are a lot better than cameras in a lot of ways. Eyes have better dynamic range, better sensitivity in low light, extremely high resolution in the center, and an extremely wide field of view. The nerve cells in our retina are also wired up to do a lot of processing in real time at extremely high resolution, e.g. motion/zoom/edge detection.
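To make that concrete: the motion detection our retina does "in hardware" has to be recomputed in software for every camera frame. A toy frame-differencing sketch (my own illustration, assuming OpenCV and a webcam at index 0; a real perception stack does far more than this):

    import cv2

    cap = cv2.VideoCapture(0)
    _, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pixels that changed since the last frame count as "motion".
        diff = cv2.absdiff(gray, prev)
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        prev = gray
        cv2.imshow("motion", motion)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()

That runs at whatever frame rate the CPU can manage; the retina does the equivalent in parallel, essentially for free.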
And that's not even taking into account that we actually understand what we see and can reason about unexpected input and react accordingly.
From https://www.tesla.com/autopilot: "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver"
Why do we keep referring to something that we understand should require human supervision as "auto"? Stop the marketing buzzfeed and let's be real.
I'm not sure whether it's intentionally been worded that way, but that sentence makes only a statement about hardware, not software. So technically, it's correct that the hardware is more capable than that of a human ("sees" and "hears" more), but it's the software that's not up to par.
That's like "Made with 100% local organic chicken" is only pointing out that the organic chicken is local, unlike the non-organic chicken whose provenance is not guaranteed.
It would be great to live in the marketing people's world, where everyone can parse weasel words that well. That would solve the fake news problem overnight.
People still use "waterproof" and "water-resistant" interchangeably; they don't get the difference. Same with HW/SW: they won't know the difference. They read this web page and think they are buying a self-driving car.
And if the coma was induced by a remote software upgrade? Stretching the analogy a bit, but a lot of people hype Tesla's remote updates without considering how often remote updates tend to brick our devices.
That sentence is just saying that the _hardware_ (not the software) is sufficient for "full self-driving capability". The current _software_ doesn't support that.
The point being that in the future the car _could_ get "full self-driving capability" via a software update. In contrast, a car that doesn't have the necessary hardware will never be fully self-driving even if we do develop the necessary software to accomplish that in the future.
And yet it is quasi-criminal (from an ethical point of view) that they have worded it that way, for two reasons:
a. When you buy a car, why should you even care about the HW/SW distinction? More importantly, do you keep that distinction in mind at all times? And are advertisements usually worded that way, stating that the car could maybe become self-driving one day (without even stating the "maybe" explicitly, just implying it with tricks)?
b. It is extremely dubious that the car even has the necessary hardware to become a fully autonomous vehicle. We will see, but I don't believe it much, and more importantly, competitors and people familiar with the field don't believe it much either...
People clearly misunderstand what Tesla Autopilot is, but ultimately this is not their fault. It is Tesla's fault. The people operating the car can NOT be considered perfect, flawless robots. Yet Tesla's model treats them exactly like that and rejects all responsibility, without even acknowledging the terrible mistake of treating them that way. We should respond the way we do when a pilot error happens in an airplane: change the system so that the human makes fewer mistakes (especially when the human is required for safe operation, which is the case here). But Tesla is doing the complete opposite, by misleading buyers and drivers in the first place.
Tesla should be forced by the regulators to stop their shit: stop the misleading and dangerous advertising; stop the uncontrolled Autosteer experiment.
A.) Pretty sure that statement was made to assuage fears that people would be purchasing an expensive asset that rapidly depreciates in value, only to witness it becoming obsolete in a matter of years because its hardware doesn't meet the requirements necessary for full self-driving functionality. Companies like Tesla tout over-the-air patching as a bonus to their product offering. Such a thing is useless if the hardware can't support the new software.
I think I actually sort of disagree with your reasoning, in precisely the opposite direction. Specifically, you state the following: "The people operating the car can NOT be considered perfect, flawless robots."
I agree with that statement 100%. People are not perfect robots with perfect driving skills. Far from it. Automotive accidents are a major cause of death in the United States.
What I disagree with is your takeaway. Your takeaway is that Tesla knows people aren't perfect drivers, so it is irresponsible to sell them a device with a feature (Autopilot) that they will use incorrectly. Well, if that isn't the definition of a car, I don't know what is. Cars in and of themselves are dangerous, and it takes perhaps 5 minutes of city driving to see someone doing something irresponsible with their vehicle. This is why driving and the automotive industry are so heavily (and rightly) regulated.
The knowledge that people are not safe drivers is, to me, a strong argument in favor of Autopilot and similar features. I suspect, as many people do, that Autopilot doesn't compare favorably to a professional driver who is actively engaged in the activity of driving. But this isn't how people drive. To me, the best argument in favor of Autopilot is - and I realize this sounds sort of bad - that as imperfect as it may be, its use only needs to result in fewer accidents, injuries, and deaths than the human drivers who would otherwise be driving unassisted.
Wow! I'm glad you pointed that out. It was subtle enough I didn't catch it. But perhaps we should consider this type of wording a fallacy, because with that level of weasel-wording, almost anything is possible! The catch is that it presupposes a non-existent piece of information, the software. And we don't know if that software will ever - or can ever - exist.
Misleading examples of the genre:
My cell phone has the right hardware to cure cancer! I just don't have the right app.
The dumbest student in my class has a good enough brain to verify the Higgs boson. He just doesn't know how.
This mill and pile of steel can make the safest bridge in the world. It just hasn't been designed yet.
Your shopping cart full of food could be used to make a healthy, delicious meal. All you need is a recipe that no one knows.
Baby, I can satisfy your needs up and down as well as any other person. I just have to... well... learn how to satisfy your needs!
All depends on how likely you think it is that self-driving car tech will become good enough for consumer use within the next several years.
If we were well on the way to completing a cure for cancer that uses a certain type of cell phone hardware, maybe that first statement wouldn't sound so ridiculous.
Yes, but of course the only thing that matters is whether or not the car can do it. That it requires hardware and software is important to techies but a non-issue to regular drivers. They buy cars, not 'hardware and software'.
And if by some chance it turns out that more hardware is required after all, they'll try to shoehorn the functionality into the available package, if only to save some money but also to avoid looking bad from a PR point of view. That this compromises safety is a given: you can't know today exactly what it will take to produce this feature until you've actually done so, and there is a fair chance it will in fact require more sensors, a faster CPU, more memory, or a special-purpose co-processor.
I agree that putting that statement at the top of the /autopilot page may insinuate that that's what Autopilot is, but the statement describes the hardware on the cars rather than the software. I think it's intended to be read as "if you buy a new Tesla, you'll be able to add the Full Self-Driving Capability with a future software update; no hardware updates required." It could be made clearer, though.
People will differ about whether the statement is worded clearly enough, but it is a bizarre thing to put at the very top of the page. It is completely aspirational, and there is no factual basis for it either. No company has yet achieved full self-driving capability, so how can Tesla claim their current vehicles have all the hardware necessary? Even if future software updates do eventually get good enough, what if the computational hardware isn't adequate to run that software (e.g. older iPhones becoming increasingly untenable to use with each iOS update)?
The autopilot page needs to start with a description of what autopilot is, and then farther down the page, the point about not having to buy new hardware for "full" self driving could be made. This probably still needs a disclaimer that that is the belief of the company, not a proven concept.
But that's not going to happen, because Tesla wants to deceive some people into believing that autopilot is "full self driving" so they will buy the car.
Odersky's course is phenomenal. Highly recommended, and it shows how much attention and craftsmanship went into designing Scala. Bonus: Martin speaks like Arnold, and it's very enjoyable to have a "Terminator" voice teaching you complicated material.
Interestingly, you took the same path I did with Scala and ML. My criticism of these courses is that some of the projects and content can be too easy to get right, skimming the surface in areas that need more time to grok. Lately I've moved to Udacity, where I can find more in-depth projects and discussions with virtual classmates. The price is steep, but you pay for what you're getting.
Swift feels very immature in many areas. For example, Swift's generics fall apart outside the most ordinary cases. Today, you still can't restrict protocol conformance (an inheritance clause) to a type-constrained extension. That's a big issue with arrays. A struct wrapper is the go-to workaround for many things where Swift generics break down, and it's neither the best nor the most elegant way to resolve those limitations. Many hopes rest on Swift 3.0, but I doubt they'll fix those.