So many "but what if this and that and this..." & "and yeah let's see if it can handle X & Y"
This is the iPhone 1 of self-driving cars!
That's akin to saying Apple should have waited to release their phone until iPhone 7 "because of this & that & this..."
Don't we have to start somewhere??
Isn't there supposed to be a big user base here that understands it's an evolutionary process - we build the plane before we build the rocket before we shoot people into space?
Obviously the perfect self-driving car is still some way off, but I for one am thrilled this race is on!
I laughed out loud when I googled "tsla" after watching the video and the top headline in the news section of the google results was "Analyst doubts the new move by Tesla Motors."
Clicking on the article for humor's value, I continued to read:
> However, Edmunds.com, Inc. analyst Jessica Caldwell questions the value of purchasing a self-driving car before regulations catch up. Caldwell said that, meanwhile, competitors could introduce better solutions, potentially making Tesla’s hardware “obsolete almost as soon as it’s activated for prime time.”
It's just hilarious the contortions of logic people will go to in order to put Musk down.
Having the equipment ready and in action first is somehow a disadvantage by this argument, and now you're better off being later to market.
These same people have written that Tesla will be left in the dust as its competitors beat it to the market because it can't keep up with their manufacturing, and thus being first to market is only an advantage if you're not Tesla.
No surprise since Edmunds makes money off of everyone except Tesla (no advertising, no dealers, no comparison shopping that converts to lead gen).
"How does Edmunds make money?
Edmunds sells advertising to marketers who have contextually relevant messages for site visitors. Also, car-shoppers who visit Edmunds have the opportunity to request price quotes from dealers and providers of insurance, financing and extended warranties. Edmunds is paid by the automakers, dealers and other service providers for the lead referrals."
http://www.edmunds.com/about/faqs.html
Or, consider that Edmunds observes car resale values and is predicting that whatever Tesla puts in cars today will not maintain its resale value (relative to other vehicles in its class) tomorrow, since this aspect of the market is going to move rapidly before it "gets it right".
The resale value may be less than a comparable "normal" car's, and there's the risk that some government regulation makes all the technology in a Tesla today obsolete or illegal to use in the future.
(and I believe Tesla quietly killed their promise to artificially prop up resale values by footing the bill themselves a while back)
Some people naturally won't care about resale value - of course, others will.
My anecdote: I discovered Edmunds through their review of the first Tesla sedan. For me, a technically inclined but nevertheless technically naive reader, it was the best car review I've ever read.
> It's just hilarious the contortions of logic people will go to in order to put Musk down.
As much as I anticipate more cool stuff from Tesla, one could say the same about people's eagerness to find everything Tesla does "super innovative" and "revolutionary" while the same features have existed in other cars for quite some time.
I actually laughed out loud when I saw the headline of this submission, "Tesla released a video of a car driving itself", as the number one entry on HN - sorry, but this speaks volumes about HN's "neutrality" regarding Tesla.
Especially since this is trying to put a positive spin on Tesla having to announce that all their newly shipped cars will now come without those features that already existed in other cars for quite some time, and that anyone who purchases a car with them will not actually have those features until they're reimplemented and patched back in.
Well those roundabouts do give them an unfair advantage. All joking aside, that video was of a research vehicle. The article above is for a car you can buy right now.
Wonder how long it will take for someone to jail break that hardware and hack up some v0.0.1alpha open source self-driving software for it. Not saying it would be wise, but it seems like the kind of thing people traditionally do for "locked up hardware, with software-enable coming in the future" (see gaming consoles, routers, etc).
>> jail break that hardware and hack up some v0.0.1alpha open source self-driving software for it.
That "someone" would have to be a world leading AI research team, to "hack" something that would be a few decades ahead of the current state of the art. But alright.
Not really. The state of the art will get you a self driving car, just not a very safe one. Think more 1995 CMU Navlab and less anything that would ever be approved or marketed to the public. Self-driving car technology is 20 years old. Self-driving car technology I would be willing to trust my life to is... well -2 to -5 years old at best.
Sure. But that's not the kind of thing an IoT hacker would consider a success. Someone might be content, however, with making a mod for a Tesla that can e.g. follow "complex" paths of bright orange cones in a parking lot, test it there without being in the car themselves, and put it on GitHub for the bragging rights of having made a cool AI+systems project. The problem is then someone might see that "cool hack", think it is more than it really is, and kill themselves turning it on while on the highway...
It's not just self-driving cars. It's features like autonomous parking, lane assistance, cruise control that automatically keeps your distance to the car in front of you, and the like. Stuff like that has been repeatedly hailed by HN'ers as revolutionary when Tesla introduced it, neglecting that other companies had those features for years.
Whenever Tesla is the topic, it's almost guaranteed every negative comment on HN will be downvoted (I'm seeing that right now with my comment above - it's constantly gaining and losing karma).
Besides that, the self-driving Tesla isn't news. That is the joke here. It's a press release sitting in a frenzy at HN's #1 spot.
I've read comparisons of Tesla's system vs. others. They aren't even in the same ballpark. Car and Driver has one with the Infiniti, Mercedes, Tesla, and BMW.
They drove 50 miles on a fairly challenging road and recorded the number of times they had to grab the steering wheel. The BMW and Mercedes needed intervention twice as often as the Tesla, the Infiniti about three times as often.
Of the second-place BMW: "City streets also foil this equipment; occasionally the BMW lost the trail on clearly marked straight sections of pavement for no obvious reason at all."
Of the Tesla (with driving hardware from two revisions ago, and software at least one revision ago): "the Tesla Model S locks onto the path ahead with a cruise missile’s determination and your hands resting on your lap", "The Tesla’s Autosteer performance can be distinguished from our other contenders by two words: no wobbling", "This system rises well above parlor-trick status to beg your use in daily driving", "to Tesla’s credit, this is the only car capable of hands-free lane changes", and "but by tallying only 29 interruptions in 50 miles, Tesla’s Autopilot app lives in a class of one."
So do other companies have lane following and related features? Yes. Do they do it as well as Tesla? Not from what I've seen.
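To put those numbers together: working from the quoted 29 interruptions in 50 miles, and treating the "twice"/"three times" multipliers as rough paraphrases rather than Car and Driver's exact counts, the per-mile rates come out something like this:

```python
# Rough arithmetic behind the comparison cited above. Tesla's 29
# interruptions in 50 miles comes from the quote; the other cars'
# counts are approximations derived from the stated multipliers.
TEST_MILES = 50
interventions = {
    "Tesla": 29,
    "BMW": 29 * 2,       # "twice as often" (approx.)
    "Mercedes": 29 * 2,  # "twice as often" (approx.)
    "Infiniti": 29 * 3,  # "about 3 times" (approx.)
}
for car, n in sorted(interventions.items(), key=lambda kv: kv[1]):
    print(f"{car}: ~{n} interventions, one every {TEST_MILES / n:.2f} miles")
```

So even the best system here still needed a human hand roughly every mile and a half - "class of one" is relative.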
True. In a way this is kind of analogous to how Apple 'fanboys'/users behave when new features are introduced in iPhones but have already been present in Android for many years prior to that.
Can't speak for him, but I haven't. And I know others that haven't. Doing so only reflects an insecurity in your own opinion. I downvote for ad-hominems, or blatantly (easily verifiable) false claims.
Assuming that others downvote over differing opinions does paint you in the light of doing that yourself, though...
The downvote button isn't a disagree or dislike button. If I disagree (and care enough), I use the other button, reply, to try to understand their point of view and try to get them to understand mine.
The forum founders have stated that the downvote button is for disagree/dislike. If you think a comment is very bad, there's a 'flag' function if you click on the comment's timestamp (if you have the requisite karma).
My understanding was that downvoting was for comments which do not contribute to the conversation.
I'm both surprised and sad to hear that the downvote button's intended purpose is disagreement. This seems like it would discourage people from posting alternative points of view if their view is the minority.
As much as I tried looking for it on my own, I couldn't find one, so would you mind posting a source as to where the founders stated the purpose of voting?
That... kind of makes me want to visit this site much less.
I often upvote both sides of a polite to-and-fro between two parties with differing opinions, if the discussion is interesting, since that, I feel, adds to the quality of the whole page. I frown upon learning that it is encouraged to downvote if you disagree. Isn't that the equivalent of childishly sticking your fingers in your ears and hoping the other opinion goes away?
I usually try to argue with them; that's what a discussion is about, an exchange of arguments and a test of their validity. Everything else is just bubbling oneself into another ideology.
I think the interesting point here is that the hardware is available on all Teslas. The functionality shown in the video could be available on everyone's Tesla by installing an update. What they showed may be similar to what's been demonstrated by other companies in the past, but the potential for broad availability is novel.
Some would argue that gathering training data is beneficial to users in the long term, and the more aggressively auto manufacturers go about it, the better the cars will be for us users.
Eh, I don't think it's HN being fond of Tesla. I think it's just that Tesla (rather, Elon Musk) is very good at grabbing headlines and people are sometimes not as well informed about the state-of-the-art as they wish they were.
Same thing happens with other companies of course. Er, Apple, in particular. If you ask most people "what was the first smartphone?", they will probably say "the first iPhone"... even though there were a few earlier devices that used the same tech and would have been called "smartphones" today (by the same token as the original iPhone). They just never caught on in the same way as the iPhone did.
It's marketing, innit. I don't think you can win against it.
> They just never caught on in the same way as the iPhone did. It's marketing, innit. I don't think you can win against it.
The iPhone is synonymous with smartphone technology because its debut was a quantum leap ahead of the competition at the time (Blackberry, Palm, etc.). Google was working on a phone that used a physical keyboard until they saw the iPhone announcement and then quickly modified Android to ape iPhone/iOS.
I'm really sick of people trying to downplay Apple's innovations as nothing more than slick packaging and marketing. It's a stupid meme that refuses to die.
As far as I know, there were two new things that came along with the iPhone: the multitouch capacitive screen, and 3G data plans that were actually worth using because Apple threw their weight behind negotiating with the carriers. A few generations later, the iPhone App Store was a huge achievement too. All of those were huge, but there are also a lot of other innovations associated with the iPhone that were actually around years before.
> one could say the same about akin-ness of people to find everything that Tesla does "super innovative" and "revolutionary" while the same features already existed in other cars for quite some time.
Exactly. I would go as far as saying it's simply infotainment.
Here is a test that I would use to determine the value of this: if it weren't infotainment, it probably wouldn't need to be set to a soundtrack and would stand on its own without music. The music creates an emotion and helps stave off the boredom.
I found this impressive because it seems to be working without LIDAR and performing very well. I may be wrong, but most other contemporary cars that drive this well use expensive laser sensors.
Lidar sounds great, but 250 meters x 360 degrees without a tower on top of your car sounds cost prohibitive for now.
Cameras are cheap and I suspect the 12 cameras have overlap to help with depth perception and maybe even fault tolerances. I suspect as soon as LIDAR becomes a better solution for the price (if that happens) that Tesla will use them. Or maybe a hybrid of both so they can use the strengths of both as needed.
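On the overlap point: Tesla hasn't published how (or whether) it fuses overlapping cameras for depth, but the textbook mechanism is stereo disparity - two cameras a known distance apart see the same point shifted horizontally by an amount inversely proportional to its distance. A minimal sketch with made-up numbers (not Tesla's actual specs):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo model: depth = focal_length * baseline / disparity.

    focal_px:     camera focal length, in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal shift of the same point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point is at infinity or was not matched")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 1000 px focal length, cameras 30 cm apart,
# a feature shifted 15 px between the two images sits 20 m away.
print(stereo_depth(1000, 0.3, 15))  # 20.0
```

The same formula shows why overlap helps at range: depth error grows as disparity shrinks, so wider baselines (more camera separation) buy you accuracy further out.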
I hear you, but this move is >actually< super innovative and revolutionary, and has potential to catapult Tesla's self driving team way ahead of the pack.
If we're continuing the iPhone metaphor from the GP, note that Apple is almost always late to a market segment. When the iPhone was released, Windows CE had owned the smartphone segment for years. The iPod was hardly the first hard drive mp3 player. Apple Watch wasn't the first smart watch. Etc etc etc. Apple makes good products, not innovative ones. It is not at all unthinkable that someone else will release a better self driving car after Tesla, and win marketshare.
Continuing that path I'd say: Tesla gets software right. Way better than what I've seen in other cars.
But what stops that iPhone metaphor for me is design, materials, and finish. Apple products look and feel like luxury, whereas Tesla only achieves that on the software side - and in your pocket.
At the risk of sounding like a nationalistic asshole, Teslas look very American - i.e., a mix of cheap and tacky. Put a 5 series bmw next to it - two different leagues really. I want to root for Tesla, and I'd buy one if I could park it somewhere where I could charge it, but I don't think they're doing great in the looks department, especially not to Europeans.
Clearly you never used a Windows Mobile device if you can even think of comparing the experience of those awful devices with a far, far superior one.
The iPhone is the first smartphone as we know them today; denying it is simply lying.
Source: I owned Windows CE / Windows Mobile devices since as early as 2001, if I recall correctly.
The second generation iPhone was the first as you know them today. The first was a glorified iPod Touch with a cellular radio stack. In fact, aside from the better touch-friendly screen, the iPhone was a step backwards from Windows CE/Smartphone/Mobile devices because on those I could run applications.
Source: I owned Windows CE/Smartphone/Mobile devices starting with the Samsung i600 and was actively using a Treo 700w when the iPhone was released.
The iPhone was the first smartphone without a physical keyboard, allowing for a much more functional screen in a convenient form factor, right? I recall it seeming like a non-trivial difference at the time, and clearly a game-changing one.
Some models of the Tungsten didn't have a keyboard. Most Palms didn't; the advantage of the Treo was that it did, so you didn't have to learn the Palm handwriting system. The screens weren't sensitive enough for on-screen keyboards.
Ya, I should have phrased my original comment differently. The iPhone was the first phone that effectively made the on-screen keyboard unnecessary. Past devices did without one, but not without a significant loss of functionality.
The Palm Treo/Tungsten had a keyboard but also a good screen, and were far more functional than the first iPhones. Not as flashy, and the software was much more expensive, though.
> Having the equipment ready and in action first...
That seems to be Ms. Caldwell's point: Without the software or regulations in place, it's impossible to know if the "equipment is ready." I don't disagree with her on this one.
The strategy Musk has clearly been using for a long time now is to put as much hardware and software into the field as possible, often without customers even knowing it's there; to gather vast amounts of data from it; and to gradually force it into widespread availability and acceptance, all while letting the system learn and improve itself, which is a necessary step.
He's doing what he does best: forcing a hard change on an established system.
Autonomous driving not allowed? Well, we'll give them "Autopilot"-- make sure to keep your hands on the wheel! Nope, it's not autonomous, it's Autopilot! It just happens to have all the hardware needed to be fully autonomous, and Autopilot will "gradually gain new capabilities." Questionable? Sure. Best way to make an extremely hard industry transition? Almost definitely.
It would be a chicken and egg problem if he didn't do this. Can't test the system extensively beyond the lab without regulations, can't have a system that doesn't meet regulations because it hasn't been tested enough yet. He's just sidestepping this dilemma.
It's an extremely bold commitment on Tesla's part. The available details are a bit ambiguous, but it appears they're putting $8k worth of hardware into every new car though Tesla owners need only pay for it if they want it enabled.
All this in spite of tremendous uncertainty not just with regards to the regulatory environment but to the not yet fully developed capabilities of their own software.
Someone needs to pay for the R+D before the hardware can exist, so I don't see anything wrong with considering R+D as part of the cost of the hardware (its "worth" if you will). The alternative is only considering the raw materials as part of the cost of building something.
Right, but the parent comment was questioning the wisdom of installing the hardware in every car. My point is that the pricing model is that people using the software pay for it. The cost of putting the hardware in the cars is probably low, and Tesla probably benefits from the data gathered by that hardware anyway.
Tl;dr: I suspect Tesla knows what they're doing with the pricing of this thing.
> Thousands of cars out there collecting data is a huge advantage over groups working with a handful of prototypes.
I'm not sure this is true. At least for Google, it seems like the main bottleneck to progress is engineering time to fix the problems that arise. The problem is not a lack of vehicles driving and finding problems.
I say this because Google has made very little effort to expand their fleet of testing vehicles. The last big expansion, from 28 to 48 vehicles, was in Sept. 2015. Since then they've expanded from 48 to 58, but it doesn't seem to be a priority. [1]
I think it's just that it isn't an option. The data from 100 cars isn't much more useful than the data from 50. That doesn't mean the data from 10,000 cars isn't much more useful than the data from 50.
It just isn't reasonable for Google to build and manage a fleet of that size for a development program. It would cost Google half a billion dollars, but for Tesla it's just a bit of cost they roll into the product.
Tesla's approach is different from Google's. Tesla is gathering hi-res location data from both human-driven and Autopilot-driven cars to build an accurate map of every lane of every highway.
Sure. But having Autopilot on a significant % of the roads in the USA (and Norway, by the sounds of it) gives Tesla an advantage. After all, you can't learn from a road you haven't been on.
In the choice between putting the hardware on all cars vs only on a few prototypes, the R&D is a sunk cost that shouldn't influence the decision and only the pure incremental cost of the hardware matters.
I don't think the iPhone had iOS 10 ;). Apple is a pretty good analogy for what Tesla's HW/SW plans seem to be: "reasonable" backwards software support (at least compared to the "none" offered otherwise) while iterating hardware.
Yeah, well... it's not really autonomous. It's a slightly better cruise control + assisted lane change. In fact the "Autopilot" marketing term makes it sound much more like fully autonomous driving than it actually is.
>> The strategy Musk has clearly been using for a long time now (redacted for brevity)
Elon Musk does what he does best, alright: make billions. The rest is just educated guesses at best.
At worst, any attempt to "explain" his marketing strategy as some sort of passionate drive to improve the technology, or similar, is just so much cold reading for the benefit of a commercial company's bottom line.
> Having the equipment ready and in action first is somehow a disadvantage by this argument, and now you're better off being later to market.
So I think it might be, from an investment standpoint; just think of TiVo: they made the DVR, but once others saw what they were doing there were plenty of other DVR choices, and TiVo didn't win the category.
Tesla might be forging ahead and blazing the trail for others, who will actually capitalize on it.
I think the Tesla brand is sexy enough that they'll maintain their niche (as TiVo has to some extent); I'm just not sure if that will translate into investment returns.
It's not a horrible comparison, but the barriers to entry for an electric car (plus an entire charging network) are much higher than for a DVR. First-mover advantage digs a bigger moat.
The stakes are higher and the margins are smaller. First mover advantage either digs a deeper moat or a trap for yourself. (Vercingetorix vs. Julius Caesar at Alesia.)
TiVo lost because cable/sat went digital, and it made a lot more sense to integrate the DVR into the digital receiver than it did to have a separate box that required using a CableCARD. I'm pretty sure most people don't even know what a CableCARD is.
When TiVo was as simple as plugging in the analog coax signal from your cable or antenna, it made a lot of sense. But once the signal from the cable company needed digital decoding, it hurt them, despite TiVo having a better UI.
> It's just hilarious the contortions of logic people will go to in order to put Musk down.
It's probably because there is a substantial gap between what Musk spins, and reality.
Take Tesla 3 pre-orders, for instance. Tesla spins 400,000 pre-orders for a car that you might not receive in the next three years as the most amazing innovation in automobile manufacturing.
Historically, there have been a few other companies that had hundreds of thousands of pre-orders, where you would end up waiting years for your car to be built and delivered to you. Their names were Lada, Volga, etc. Soviet auto companies had the exact same problem Tesla has - inability to meet demand.
> Historically, there have been a few other companies that had hundreds of thousands of pre-orders, where you would end up waiting years for your car to be built and delivered to you. Their names were Lada, Volga, etc. Soviet auto companies had the exact same problem Tesla has - inability to meet demand.
Because genuine competition in the Soviet Union was not allowed. Don't want to wait for Mr. Musk's Magical Mystery Car? Then buy a damn Honda Civic or something. A huge pre-order list is notable when it occurs in a context where people have many other options not involving a long wait. It proves the existence of real demand, not "demand-because-of-a-lack-of-alternatives". Proving this demand is important for a small company which is trying to scale up.
The reality is that they are the first serious electric car company. A tiny enterprise that took on the old Detroit giants and is winning.
When you see people putting down hard-earned cash for something three years in the future, they are not being tricked; they are investing in disruption. He's such a terrible orator you couldn't describe him as a charlatan. It's his actions and people's wallets that speak the loudest.
I'm not at all implying that the money will go poof. What I do mean is that inability to meet demand is actually an incredibly serious problem for a car manufacturer - one that's more closely associated with failing Soviet enterprises, regardless of spin to the contrary. Having three years of cars in your backlog isn't winning. Waiting three years for your car to arrive isn't revolutionary.
Meanwhile, Detroit produces more cars than that in two weeks.
Tesla might win eventually, but let's not get ahead of ourselves.
That's a pretty big difference to me. The possibility that the Model 3 could save my family and me from an accident over its 10-year lifetime is pretty important to me.
How can there be an "inability to meet demand" for a product that hasn't been released yet? Oh right, there can't.
Once it is released, then we can talk about the rate of manufacturing vs. backlog, but it is pure silliness to argue against a product because lots of people want it.
>A tiny enterprise that took on the old Detroit giants and is winning
In what way are they "winning" against Detroit? Certainly not in revenues, profits, miles driven or cars produced.
Yes, they "own" one segment": high end electric vehicles. No one there is close. But Detroit owns many others. Why does the one that Tesla dominates carry so much weight?
Tesla's market cap is $30bn, more than Renault, Mazda, Chrysler or Subaru (https://news.ycombinator.com/item?id=9055885). If they execute well on the Gigafactory, they may apply a multiplier to this market cap, reaching above Toyota ($200bn), even with a tenth or a hundredth of their production. Why would it be valued more highly, per car produced? Because of its future potential.
> Why does the segment that Tesla dominates carry so much weight?
The stock market is adjusting for a <10% market share of ICE cars in 2050. It's a Schrödinger situation: if the <10% hypothesis is wrong, then we're all dead (more precisely: half of us are living under giant cataclysms, the Gulf Stream has stopped, the coastal cities are flooded, and our economy is so disorganized that we can't plan where the money will be), so markets don't need to account for that situation. So all the Detroit/Toyota industries are slowly shrinking in comparison to Tesla, unless they adapt. It may be hard to imagine if you live in the USA, where people rely so much on cars for their lifestyle, but our civilization will get rid of petrol cars within our lifespan. This is also the genius of Musk: he saw a version of the future that is obvious but that few other American citizens could grasp, and he executed on it. Hence the importance of travelling the world and meeting many people when we want to innovate.
By the way, maybe a sweeping change is brewing that the petrol generations can't foresee. We could imagine things like an international agreement on global warming suddenly reducing the worldwide production of cars. It seems impossible today, but educated people may have good reasons to believe something like this will happen, in which case Tesla wins it all.
I'm not sure how having a disproportionately high market cap indicates "beating" anyone. It's the whim of the markets.
>If they execute well on the Gigafactory, they may apply a multiplier on this market cap, reaching above Toyota ($200bn)
Based on what, exactly? This is Silicon Valley math. Who on earth is going to pay Toyota prices for a company that, you yourself say, might only have a fraction of the production? What sort of investment would that be?
I know this is taboo around here, but one day high growth slows, companies become "established", and it's about how much money they make and return to shareholders. If Tesla is making as many cars as Toyota and as much profit as Toyota, then they'll be priced like Toyota. Why would it be any other way?
>So all Detroit/Toyota industries are slowly shrinking in comparison to Tesla
There were over 70 million cars sold in 2015. Tesla delivered about 50 thousand, for about a 0.07% market share. Are you surprised that they are able to grow relative to specific players in a massive industry?
> If Tesla is making as many cars as Toyota and as much profit is Toyota, then they'll be priced like Toyota
But maybe Toyota will return less profit per car? Ah, you've already accounted for that. There are other variables: if production is equal today but Toyota is expected to halve its production by 2040, the Toyota share will get lower and lower until it's half of Tesla's. Other variables include the health of their distribution system (a vendor with no car-dealer lock-in will save a lot on sales), customer trust, innovation methods (e.g. Toyota's methods only led to the invention of the Prius, whereas Tesla's methods led to a fully electric car with a lot of range, so it's a predictor that Tesla's innovation scheme will produce better things than Toyota's in the future), marketing methods (they spend little on advertising and are able to have thousands of viewers watch their keynotes without pushing ads in the newspapers), a legacy of employees that might be hard to retrain, or a forecast that huge R&D expenses will be necessary to stay relevant in the market of 2040. It's not "a whim of the market"; it's that Toyota is shrunk by default in 2050 whereas Tesla is alive by default in 2050.
> Tesla delivered about 50,000, for about a 0.07% market share. Are you surprised that they are able to grow relative to specific players in a massive industry?
...and Tesla plans to deliver 500,000 in 2018, for about 0.7% of the market. Beyond that, I personally think they'll become even more popular (because of legal advances, both against pollution and in favor of self-driving cars, and because of social status, geeks, comfort, and market-specific innovations like the air filter that will be appealing in China, etc.), and that means 7% market share in 2022, etc. So I'd buy Tesla shares at a higher price than Toyota's, because I'd trust them more to deliver good dividends in the long term.
Note: I'm neither a financial advisor nor an owner of Tesla shares.
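For what it's worth, the market-share arithmetic in this subthread checks out; a quick sanity check against the ~70 million 2015 figure quoted above:

```python
# Sanity-checking the market-share figures quoted in the thread.
WORLD_CAR_SALES_2015 = 70_000_000  # "over 70 million cars sold in 2015"

def share_pct(deliveries: int, market: int = WORLD_CAR_SALES_2015) -> float:
    """Deliveries as a percentage of total market sales."""
    return 100 * deliveries / market

print(f"{share_pct(50_000):.2f}%")   # 2015 deliveries -> ~0.07%
print(f"{share_pct(500_000):.2f}%")  # planned 2018 deliveries -> ~0.71%
```

The jump from 0.07% to 7% of the world market by 2022, though, is the speculative part; the arithmetic only covers the first step.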
Not high end EV — EV period. Why so much weight? Because it's literally the only direction left for land based vehicles to go (the puns write themselves!) in the next century at least?
> Take Tesla 3 pre-orders, for instance. Tesla spins 400,000 pre-orders for a car that you might not receive in the next three years as the most amazing innovation in automobile manufacturing.
Because it kind of is when we're talking about an electric car. Tesla's competitors can't come up with those numbers for EVs combined (even with hybrids). GM is expected to make 30,000 Bolts for next year, for instance (and it may or may not sell all of them).
That said, I'm skeptical the Autopilot 2.0 hardware is a "Level 5 autonomous system". I think a good rule of thumb would be to subtract about "one level" from what Musk promises, as he always tends to be a little overoptimistic, even if what he creates still ends up being better than the competition.
Take the first Autopilot, which Musk said is "Level 2". It's probably more like a real Level 1. It only works in very specific situations, and even when those are met (a highway) it can still fail, because Tesla may have accounted for US highways but not European ones, or other road quirks. See the recent Autobahn accident where the Autopilot couldn't properly identify yellow lines as opposed to white ones.
So yeah, I expect this to be more like Level 4 ... three years from now (as ready for mainstream use, not just a demo). I don't think we'll see true Level 5 until the 2020s. I think there are still many unexpected things Musk and his engineers aren't foreseeing right now.
> Because it kind of is when we're talking about an electric car. Tesla's competitors can't come up with those numbers for EVs combined (even with hybrids).
Check your sources: we're at well over a million EVs/PHEVs[0] combined nowadays, and the Model S is a bit over 10% of that. It's been a while since I last saw an EV parking spot with chargers that was empty.
Now that I checked it, the Leaf alone is past those numbers for the Model 3.[1]
Level 5 is unachievable and unnecessary. There will always be some small number of locales that confound the system. But if you can't use autonomy a few times per year, who cares?
When we get to level 4 the game is over, it's just an adoption curve at that point.
I don't know how "unnecessary" it is if we're talking about snow-covered roads, country roads, roads that haven't been painted in a decade or longer, or autonomous cars without a steering wheel.
Under those conditions, it's pretty much Level 5 or bust. But yes, that doesn't mean Level 4 cars won't start getting adopted, especially if the autonomous systems are just another "feature" on a regular car (hopefully EVs in all or most cases).
Having to turn off self-driving features will be like losing cell service. It's not what you paid for, it can be mildly inconvenient, but you get over it and on with your day.
I've seen some sources that the original Volkswagen Beetle had upwards of 150,000 pre-orders. But then WWII happened and none of them were delivered to civilians until the company was resurrected after the war.
That's ridiculous. My 2+ year old Mercedes self-drives on the freeway quite nicely. They do require that I keep my hands on the wheel, but other than that, it's doing the driving.
Other manufacturers aren't simply responding to Elon.
I've read comparisons between the Mercedes and the Tesla. It wasn't even close, even though the comparison used hardware from two versions ago (the single camera) and at least one major software revision ago. Car and Driver has the details.
It is on the list, and notice I said "mass market" vehicles. No other manufacturer, even today, has as many autonomous cars on the road as them. And the ones that do exist are not as advanced as Tesla's.
These other car companies have had so much more time and resources, and they have not pushed the way Tesla has.
Because Tesla needs to make a lot of money, very quickly. The other makers are already big, and can afford to play it safe. Tesla has to keep risking everything in silly marketing tricks.
I'm very grateful for misunderstandings like this. They caused Tesla and several tech stocks to be severely undervalued in the market. At $40, I knew Tesla was worth $140. And sure enough, it started to go up suddenly, mostly because of media coverage.
Traditional car investors look at certain tried-and-proven market indicators to judge stocks. When new tech comes along, they don't know what to think, so they grasp at straws, as in this case.
The real value in bringing self-driving tech here is to provide cool features to the buyer, not necessarily to bring the company's stock up. If investors don't know how to place a value on this, I can only shrug.
I look at Tesla as the exception to a lot of rules. I agree with you here: launching hardware that adds value not only for users but also for Tesla's data teams puts them so far ahead that the analyst's comment seemed silly.
That said, if some company had manufactured Bluetooth hardware last year, way in advance of the release and full spec, I would argue that it was a pretty bad move. Clearly this is not a perfect analogy, but it should illustrate why this can seem counterintuitive. Building in the cost of beta tech that doesn't fully work and isn't legal yet is a huge gamble. I just wouldn't personally bet against Tesla, and for the reasons I outlined, it is likely the right call.
I cannot judge for myself having not read them yet.
The only people who recommend them are people I do not respect (rich assholes, or people who think they should be rich assholes), or people I do not know mixed in with people insulting them.
There's a lot of cartoonish, hubristic crap in Rand's books, and she only really has one philosophical point to make: that self-reliance and being unafraid of naysayers is good. Which is true.
Basing your entire philosophy on a cartoon is a mistake, but bashing an author when you haven't read any of their work is a worse one.
Her one good book, in my opinion, is "We the Living" - oh, and Anthem was an interesting novella; I think it inspired Tom Disch's 334.
Rand was a mundane, myopic, self-righteous ideologue with the philosophical depth of a puddle and the macroeconomic wisdom of a 6-year-old with a 20-dollar bill.
Randian politics are a comic strip masquerading as a political movement.
Worthless except as a study in sociology of those people who hold it up as a worldview.
OK, I will try, given that I am merely a poor asshole.
Classic literature (and Atlas Shrugged without any doubt belongs to the so-called modern classics), much like classical music, is a complex and highly artificial phenomenon. The word for a piece of art, after all, is artifact.
In the case of classical music there are absolutely no objective criteria for judging symphonies. It is possible to compare pieces of music, and lots of pseudo-intellectual assholes do it for a living, but it amounts to nothing but juggling of metaphorical jargon. Nevertheless, one can almost always pick out fragments of this or that piece of classical music that one likes for some not very articulable reasons. Bach, arguably, composed the most such sources of beautiful fragments, Mozart is second, etc.
These notions can be applied to literature. Like a symphony, a big novel or even a small poem cannot be judged by a single criterion, but we may find some parts of it delightful and beautiful. The question, as with classical music, is who the reader (or listener) is.
In the case of Atlas Shrugged, there are more arguably beautiful parts, fragments, even whole passages, than one can count (the majoring in physics and philosophy is one of them), which appeal to the mind of a modestly educated reader. The best characters she created were, ironically, not the central, heroic, positive ones, but all the crooks and weaklings, mediocrities and minor idiots. This, perhaps, is why Rand is hated so much.
As a student of philosophy, Rand got most of the philosophic and economic aspects right. Indeed, her Hegelian professor is a true masterpiece, her bureaucrats are convincing, James Taggart is remarkable. Her main characters, though, are overloaded with virtues.
A certain naivety of the plot - the miraculous alloy and especially the perpetuum mobile, for which the book has been criticized so often by idiots - is really a mere nuance. The real story is about focus and persistence, accepting challenges, adapting and evolving and never giving up - all postulates of a sound moral philosophy, which goes back to the ancient Greeks. No one takes the plot of Atlas Shrugged seriously.
She is also right that the motivations which drive us should be simple and pure (the Greeks again), grounded in our human nature (shaped by evolution, constrained by biology), not in some abstract nonsense, dogmas, fleeting fashions, or even social norms. Rigid social norms always reflect the ugliness of urban societies. While her main characters are rather angular and clumsy, the driving forces behind them are clearly recognizable and proper. Let's not blame Rand for this angularity.
I could go on for hours, but I think you get the idea. Good literature requires a "good" reader to appreciate what is behind the words in a sentence.
"but all the crooks and weaklings, mediocrities and minor idiots. This, perhaps, is why Rand is hated so much."
You may well be right - many might read Atlas and empathise with the antagonists, and dislike Rand's treatment of them. Certainly many self-proclaimed conservatives disagree with her philosophy, because it doesn't agree with protecting their rusting vested interests at the expense of innovation and the taxpayer.
An awful lot of people who like Rand are assholes, however. The average Randroid is more representative of James Taggart than of Roark: bitter and small, scathing of those who disagree with their narrow opinion.
Unfortunately this is really common on HN nowadays.
What I find really interesting about Tesla (and SpaceX and HyperLoop) criticism is that people think it will fail for what it doesn't do, instead of focusing on what it does do. What they don't realize is that all products in history that got hugely successful didn't do a ton of stuff initially. They just did some things really well.
Early washing machines were noisy as hell and got unbalanced all the time. But who cares? They washed clothes and everyone wanted one.
Early TVs were massive things with fussy screens and only in B&W.
The first iPhone was extremely slow and only had EDGE.
But rather than focus on what all of those things don't do, you only need to realize they do something really well, and that's why they are a huge success.
Even if the Tesla can't drive itself too well in heavy snow, or if the range is crap in really, really cold weather - WHO CARES? There are still tens of millions of people on earth that will lap them up because of the list of things they do so well.
Products don't fail or succeed because of what they don't do. They fail or succeed because of what they do do.
To be fair, being enthusiastic and positive about the future isn't that useful of a comment. There is only one way to say 'Elon's vision is right and it will happen,' but there are countless, interesting, ways to point out how it might fail. So after the first few excited comments come in, often the only novel thing to contribute is a counter point.
This doesn't mean that the majority of people aren't optimistic about the future, just that their opinion has already been expressed.
>To be fair, being enthusiastic and positive about the future isn't that useful of a comment. .... but there are countless, interesting, ways to point out how it might fail
Sure, but there is more than one way to say it:
"I live in the mountains of Virginia and this will never work here with the snow, therefore it's crap, therefore it will fail!!!!"
OR
"It's going to be really hard for them to solve the problem of heavy snow. I wonder if they'll use <something interesting we can all learn about> or <something else>. I wonder if they've teamed up with <who knows> to research <something interesting>. I think the best approach would be x,y,z and I read the Google are using a,b,c".
etc.
I mean, we're talking about a multi-billion dollar company trying to push forward - OF COURSE there are tons of hard parts. Pointing them out is not useful - thinking about (and working on) how to solve them is what HN should be about.
"I wonder if Elon will go down path A...." "Interesting that the battery is now x, this is probably a result of Y technology they developed last year." Etc. I don't find it hard to discuss a topic in a positive light at all.
It's interesting to think about the ways your opponent might capture your pieces, but it's just as interesting and just as important to think about the ways you might capture theirs.
Time we spend bickering about whether electric cars are ready is time not spent talking about the new grid, the new architecture, the new social organization that will form around these (to me obviously) imminent technologies.
I don't think "how it could succeed" is less interesting than "how it could fail". And if I expect something to succeed, frankly the failure speculation starts to get a bit boring.
It's a bit personal for me because I am pursuing a high-setback business model in my startup, and so people always want to talk about the looming dangers they see. It starts to get boring, because I expect a certain number of failures, and I intend to work through them. What's interesting to me is whether the first-principles analysis is right, and how the plan looks from a Zen perspective... not whether everything is right, but whether I am applying pressure in the right place today, given the conditions.
I agree this attitude is common on HN. But is it really a recent development? Engineers are skeptical by nature. I feel like engineers are especially skeptical of marketing and "idea" people.
>Unfortunately this is really common on HN nowadays.
I see things differently. Anyone who criticized Theranos was met with the same reaction, "Everyone on HN is so negative!" But the questions were valid. Look what happened.
The same thing is occurring with Tesla. Yes, there is fantastic work being done, but many people here can't separate marketing from the real world. It's shocking, frankly.
When I looked into SolarCity after the announcement of Tesla buying them, those "Solar Bonds" did make me a bit uneasy. Their interest rate implies junk...
>Are you suggesting that Tesla is defrauding investors and regulators, as Theranos did?
I'm suggesting that there's a lack of critical analysis applied to what are, essentially, Tesla press releases. Like when it was implied that people were sexist for questioning the genius of Elizabeth Holmes, because she was changing the world and all that, according to the press releases.
Again, I have no problem with being a Tesla fan. I probably come off as a "hater" here because I like to play Devil's Advocate and it's a pretty one-sided discussion on Hacker News (expectedly). I think what they're doing is fantastic. But give some credit to other companies who are also doing good things for this space, people's safety and the planet.
My opinion might not be popular here, but when I studied computer science, there was a big discussion about the ethical aspects of our job. In particular, I had the opportunity to speak with Prof. Joseph Weizenbaum [1] a couple of times, who always reminded me that users don't know what we as programmers, technicians and scientists know about the limitations of a solution. So we have the duty to ensure that our solutions don't harm and don't fuel unreasonable expectations.
From the Wikipedia article [1] about Joseph Weizenbaum:
"His influential 1976 book Computer Power and Human Reason displays his ambivalence towards computer technology and lays out his case: while Artificial Intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom."
So, asking "but what if ..." in a responsible way should always be a big part of any innovation. Just shouting "hey, we are hackers, lets innovate and be open" isn't always responsible.
As a bonus, there's a great archive [2] about Joseph Weizenbaum with audio and film clips for anyone interested. Most of the site and the documents are in German unfortunately, but some transcripts are available in English as well.
It's not at all the same -- self-driving cars can risk the lives not only of the passengers, but of innocent bystanders. Several commenters have pointed out problems such as rain, snow, darkness, etc., but there are more fundamental problems. For example, self-driving cars based on machine learning may be susceptible to adversarial attacks that cause the car to behave unpredictably [1,2]. How will self-driving cars know how to react when things go wrong, e.g., the stop light is broken? What about construction?
I'm not saying humans react perfectly in these situations, but self-driving cars are rule-based, and have zero ability to adapt to unanticipated situations. I'm not arguing that Tesla shouldn't push forward. However, I believe that so far, Tesla has demonstrated a lack of concern for user safety. Companies like Google, Mercedes-Benz, etc. have tech at least as good as Tesla's, but have been much more cautious about deploying it.
The new system is not generally rule-based (although it does have a few rules for some things), it has integrated deep learning. It adapts to novel situations based on patterns learned from experience. The network will be trained by actual Tesla driver actions in these situations over the next months/years.
That's different from the previous auto-pilot version.
I'm fully supportive of self-driving vehicles, but arguments like this don't do the "pro-self-driving" side any favors.
"Autopilot v1" miles are also largely on freeways and conditions favorable to the driving-assist features. The 1 in 90 million is comparing to all cars and not just other cars with driving-assist. I don't think many people would argue that driving-assist is more dangerous than without. The 1 in 90 million is across far more driving conditions, many of which are more dangerous than what the Tesla Autopilot is driving in.
The comparison between the two isn't really a fair comparison and, in my opinion, is dishonest.
Edit:
In addition, picking any random car model may yield similar statistics. Total accidents involving a Honda Civic are probably a lot higher than total accidents involving a Pontiac Firebird, simply because there are more Civics on the road... Firebirds may be in fewer accidents, but my money is on the Civic being the safer car. The fair comparison is accidents per mile driven, not raw counts.
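A toy illustration of the point above, with made-up numbers (both fleets and accident counts are hypothetical): raw accident counts can mislead, while normalizing by total miles driven gives the fair comparison.

```python
# Toy illustration with made-up numbers: raw accident counts vs.
# accidents per mile driven. The bigger fleet racks up more accidents
# even if it is safer per mile.
def per_mile_rate(accidents, fleet_size, miles_per_car):
    """Accidents per million fleet miles driven."""
    total_miles = fleet_size * miles_per_car
    return accidents / total_miles * 1_000_000

# Hypothetical: many Civics, few Firebirds, same annual mileage per car.
civic_rate = per_mile_rate(accidents=9_000, fleet_size=1_000_000, miles_per_car=12_000)
firebird_rate = per_mile_rate(accidents=300, fleet_size=20_000, miles_per_car=12_000)

print(f"Civic:    {civic_rate:.2f} accidents per million miles")    # 0.75
print(f"Firebird: {firebird_rate:.2f} accidents per million miles") # 1.25
# Despite 30x more raw Civic accidents, the Civic's per-mile rate is lower.
```

The same normalization problem applies to the "1 in 90 million" Autopilot figure: the denominator matters as much as the count.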
Indeed, it took Samsung to develop that technology.*
*Joke; in fact, Israeli security services invented it in the '90s (https://en.wikipedia.org/wiki/Yahya_Ayyash). Also, there are no reported fatalities from Note 7 explosions.
The thing about their plan, though, if you read the post entirely, is that this new car with the new sensors comes with almost all features disabled.
The reality is, with machine learning, you need a lot of data, and yes, you can send a few of your own cars with employees inside like Google, but that's slow. Instead, Tesla is using their existing brand and user base to collect orders of magnitude more data, and this way, they will be able to validate their models much faster.
Again, the thing with data and algorithms is that you can test and retest them. When you have all the data, you can see the state at time X, and also have access to the data at time X+1, so you can check whether your algorithm made the right choice. Furthermore, you can actually take tricky situations and replay them thousands of times while tweaking your algorithm to do better.
So yes, while getting it right is very important, I believe their approach of collecting as much data as possible is the best way of getting there.
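The test-and-retest idea above can be sketched in a few lines. This is a minimal, hypothetical harness (the `planner` interface, frame contents, and action names are all invented for illustration, not Tesla's actual system): compare what the algorithm would do at time X against what the human actually did at X+1, and collect the disagreements for replay.

```python
# Minimal sketch of replay-based validation with hypothetical interfaces:
# compare the planner's choice on each logged frame against the recorded
# human action, and keep the mismatches for replay and tuning.
def evaluate_on_log(planner, log):
    """log: list of (sensor_frame, human_action) pairs from recorded driving."""
    disagreements = []
    for t, (frame, human_action) in enumerate(log):
        predicted = planner(frame)
        if predicted != human_action:
            disagreements.append((t, predicted, human_action))
    return disagreements

# Toy planner and log: frames carry an obstacle distance; actions are strings.
naive_planner = lambda frame: "brake" if frame["obstacle_dist"] < 10 else "keep"
log = [
    ({"obstacle_dist": 50}, "keep"),
    ({"obstacle_dist": 8},  "brake"),
    ({"obstacle_dist": 12}, "brake"),   # the human braked earlier than the planner would
]
bad_cases = evaluate_on_log(naive_planner, log)
print(bad_cases)   # the situations to replay while tweaking the planner
```

Each entry in `bad_cases` is a tricky situation that can be replayed as many times as needed against a revised planner, which is exactly why recorded fleet data is so valuable.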
Yes, self-driving cars could save lives, if they're working properly. That's literally what we're talking about. If they work as well as humans, good. If they work better, they're saving lives. If they turn into oncoming traffic like the Tesla did in the video, they can kill people.
A first-gen iPhone couldn't save people, but a first-gen self-driving car can certainly kill a lot of people if the software isn't working right.
But critically, it's (probably) impossible to get past the "good as humans" point without a huge amount of learning data. That means either a lot of time or a lot of testers.
It's interesting that the new Tesla software will be shadowing while the meatbag is driving: not just reporting back driving data, but also simulating what it would do, all the time. There are going to be a lot of situations where it would be wrong, all nicely laid out for the developers, ready to fix.
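The shadow-mode idea can be sketched as follows. This is a hypothetical structure, not Tesla's actual implementation: the autonomy policy runs silently alongside the human driver, its action is computed but never executed, and any divergence from the human's action is logged with its context for the developers.

```python
# Sketch of "shadow mode" with an invented interface: the policy runs
# silently next to the human driver, and divergences are logged with
# their surrounding context instead of being acted upon.
class ShadowLogger:
    def __init__(self, policy):
        self.policy = policy
        self.divergences = []

    def on_tick(self, sensor_frame, human_action):
        shadow_action = self.policy(sensor_frame)   # computed, never executed
        if shadow_action != human_action:
            self.divergences.append({
                "frame": sensor_frame,
                "shadow": shadow_action,
                "human": human_action,
            })

# Toy policy: stop at red lights, otherwise go.
policy = lambda frame: "stop" if frame.get("light") == "red" else "go"
logger = ShadowLogger(policy)
logger.on_tick({"light": "green"}, "go")   # agreement: nothing logged
logger.on_tick({"light": "red"}, "go")     # disagreement: logged for review
print(len(logger.divergences))
```

Multiply this by a whole fleet and every logged divergence is a candidate bug report, with the raw sensor context already attached.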
This is an area where it's easy to be too credulous. If you want "iPhone 1 of self-driving cars", Google has been tooling around in them for a while now. They do a great job under ideal conditions, on a sunny day. The future already happened!
It simply isn't enough to post a video of a car driving in light traffic on a beautiful, California afternoon. That isn't what makes the problem hard. Mess up with your video-based system in the dark, or in bad weather, or...lots of people will die.
This isn't web development. You don't risk lives with your beta release. (Which, not incidentally, is why they haven't released anything other than a video.)
Google's cars are not the iPhone 1. They still have just a few dozen, or a few hundred, who knows, and they have only driven 2 million miles so far. They are not even for sale and will not be any time soon. Or ever, if they don't market their own electric cars very soon or hook up with a car manufacturer.
Tesla can collect data from 5 billion miles of driving just one year after the Gigafactory is in full operation, if production reaches the planned 500,000 in 2018 and each car drives 10,000 miles annually. And the miles pile up faster and faster from there.
Here is my guess:
After a few million miles of shadow driving, i.e. a few weeks after the first few thousand new Teslas have been sold, Tesla will likely be ready for the real thing under perfect driving conditions like those Google enjoys in California. So Tesla will enable the self-driving capabilities in such geographic areas, during the day, when the weather is nice.
After a few hundred million miles of shadow driving, i.e. within half a year or so, when Tesla has sold 100,000-200,000 cars that have driven for a couple of months, Tesla will have collected enough data to drive in more difficult traffic conditions, like other American urban areas.
After a billion miles, around a year after Tesla has gotten the Gigafactory up to full speed and the first 200,000-300,000 cars have been on the streets for half a year, Tesla will probably be ready for certain bad weather conditions, like rain, light snow, light fog, and driving in the dark.
Give it another few billion miles, so still before 2020, and the streets of Istanbul or the snowy roads of Nebraska have been tamed.
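The back-of-envelope fleet arithmetic in this guess can be checked in a few lines. The production and per-car mileage figures are the comment's own assumptions, not real data, and fleet growth is modeled as simply linear:

```python
# Back-of-envelope check of the fleet-data arithmetic (the figures are
# the parent comment's assumptions: 500,000 cars/year, 10,000 miles/car/year).
CARS_PER_YEAR = 500_000
MILES_PER_CAR_YEAR = 10_000

fleet = 0
cumulative_miles = 0
for year in range(1, 4):
    fleet += CARS_PER_YEAR                      # new cars join the fleet
    cumulative_miles += fleet * MILES_PER_CAR_YEAR
    print(f"Year {year}: fleet={fleet:,}, cumulative miles={cumulative_miles:,}")
# Year 1 alone yields 5 billion miles. With a linearly growing fleet the
# cumulative total grows quadratically, not literally exponentially,
# but it is still very fast.
```

So under these assumptions the fleet logs 5 billion miles in year one and 30 billion cumulative by year three, which is the scale gap with a few-car test program that the comment is pointing at.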
Not sure why you keep talking about the Gigafactory as if it's relevant to self-driving cars. Your only point can be summarized in one sentence: they've released a sensor suite into the field, and through some combination of Big Data and magical AI, they will win. This, as much as anything else, is a religious belief.
Ultimately, this game is about the sensors. If your sensor suite is inadequate to detect every conceivable bad situation a car can be in, then it doesn't matter how much data you accumulate. One million miles of good LIDAR data is worth more than a billion miles of video, if the video can't see white objects during the daytime, or figure out on what side of the parking lot the car should be driving (ahem).
I have no idea how good or bad their sensors are, but instinct tells me they're not up to the task of all-condition driving, and this video provides no useful information either way.
Production capacity is the most important factor for data collection.
The Teslas sold in October 2018 don't need to have the same sensors as the ones sold in January 2018. As you can see, the car is designed modularly. No other mass-marketed car on the planet has any such sensors integrated yet.
It's all about big numbers. Google has spent 5 years or more driving a few cars around California. It's never going to match billions of miles driven around the globe.
EDIT: Imagine the engineers working on Google's autonomous cars today. I bet many of them are preparing their resumes for Tesla. Today they can launch an algorithm or a new sensor type, wait a few months, and then get feedback on how it worked. With Tesla, they would get that feedback in days. In which lab would you want to work if you want to take part in shaping the future? The Google lab, where you have a Lexus or two assigned to you that drive up to 25 miles per hour in California, or the Tesla lab, where you have 2,000 Teslas assigned, driving in 50 different countries?
Once again: if the sensors don't work for the problem domain, the quantity of data gathered is irrelevant.
If you mounted a forward-facing video camera on every car in the world and gathered the data for decades, you'd still be nowhere: you're missing the side and rear views. This is a thought exercise, but it demonstrates the point. If your robot car has a blind spot, all the data in the world won't fix it.
Nobody knows how good these cameras are, but every camera-based system so far has had the same critical limitations: they don't work well at night or in poor visibility.
But we know that the sensor suite must be at least as good as a human; more vision, plus other sensors. Therefore we know the sensor suite is sufficient to be as good as (and likely better than) a human.
It's not. For example, the eye has much higher dynamic range than any camera sensor available today. Try taking a picture at night that looks half as decent as it does in your head.
> but instinct tells me they're not up to the task of all-condition driving
From a sensor perspective, what does a human have that is missing from the new Tesla suite?
The sensors cover the whole spectrum of light humans can see, plus a lot more. There is also ultrasonic sonar.
Human eyes have significantly higher dynamic range than cameras though, which is important when you have scenes with both dark and brightly illuminated areas.
Once again, I am far from an expert in this field.
That was also my understanding for the general case, but I was also assuming that cameras were available that could at least approach human dynamic range.
This isn't web development. You don't risk lives with your beta release
First: Why do you always need to assume web dev is stupid, or less impactful? Imagine a bank website showing 10,000 <your currency> less in your account. I myself paid several customers a penalty/reimbursement when, because of our website, they suffered. And I am sure there are websites/apps which are even more critical.
Second: You are assuming they have done a half-assed job and are hurrying to release. If you look at the earlier thread[1], several people have explained how Tesla has hundreds of millions of kilometers of their customers' driving footage, with which they have trained the AI/ML algos. So I won't be surprised if these cars can do well enough in the dark.
Sebastian Thrun, one of the pioneers of self-driving cars, has recently taken a turn toward self-driven cars using just cameras and AI. In his own words[2], he says that we humans also have just our two eyes, so cameras should do the job well.
Well, for starters, the only thing Tesla has "released" here is a video. So there's that. I don't know if they hurried the release of the video. The real product launch is the sensor suite...and those haven't shipped in any quantity, yet.
As for AI, there's no magical "algo" that solves the problem of an all-black video frame. Having more black video frames than anyone else doesn't exactly solve that problem. See also: rain, snow and fog.
But finally, nobody said that web developers are stupid. Calm down. But it's completely fair to note that, however much money you lose for your customers with half-baked beta releases, you're still not killing them. The stakes are higher here. The expectations of proof are higher as well.
there's no magical "algo" that solves the problem of an all-black video frame
Neither do human eyes. I perhaps should have used the word night rather than dark. So basically, my understanding is that it should work at night as well.
And I'm totally calm, not angered :-). Just trying to make my point, because I don't like arguments like 'this is not rocket science' or 'this is not web development'.
> First: Why do you always need to assume web dev is stupid. Or less impactful. Imagine a bank website showing 10,000 <your currency> less in your account.
The impact of that, terrifying as it may be, is still significantly less than the impact of a self-driving-but-buggy vehicle into the side of a semi.
I was arguing against the trivialization of web dev -- "this isn't web development" -- while I agree that a self-driven car is one of the highest levels of criticality. But that said, people can suffer just as much, if not as visibly, from financial app bugs. I shudder to think of the state of people who lost a lot of money in the hacks of MtGox and other Bitcoin exchanges, for example.
There was no net impact in that accident. It replaced three deaths which would have occurred without autopilot, so it was part of a net gain in lives.
Perhaps that particular human was special and worth saving over the other two who were saved instead, but I am not aware of any reason to believe that.
Only after a particularly tortured interpretation of highway safety statistics.
If you exclude pedestrian deaths (few jaywalkers on the interstate), motorcycle deaths, deaths in conditions where Autopilot cannot be used, and deaths from collisions of lighter, less safe, older, or poorly maintained automobiles, then regular highway death statistics start looking better than Autopilot-enabled ones. [1] [2]
Again, there's a great gap between the hype of Tesla, and the reality.
AI/ML algos cannot magically guarantee safety. For example, deep nets are known to be very non-robust to adversarial examples: https://arxiv.org/abs/1312.6199. We are not yet at the point where AI/ML-based algos can be guaranteed to be safe.
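The fragility the cited paper describes can be shown with a toy model. Note the hedging: this is a plain linear classifier, not a deep net, but the mechanism (a tiny perturbation aligned with the gradient flips the decision) is the same one the fast-gradient-style attacks exploit; all numbers here are invented for illustration.

```python
# Toy demonstration of adversarial fragility on a linear classifier:
# a perturbation of at most 0.05 per input dimension, aimed along the
# weight vector, flips the sign of the classifier's score.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)                    # classifier weights
# An input that scores clearly negative (plus a little noise):
x = -0.01 * np.sign(w) + rng.normal(scale=0.001, size=100)

score = w @ x                               # decision before the attack
eps = 0.05
x_adv = x + eps * np.sign(w)                # tiny per-dimension nudge
score_adv = w @ x_adv                       # decision after the attack

print(score < 0, score_adv > 0)             # the decision flips
print(np.max(np.abs(x_adv - x)))            # yet no dimension moved more than eps
```

For an image classifier, the analogous perturbation is invisible to a human, which is exactly why this failure mode is worrying for safety-critical perception.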
I bet there is a clause that it's only usable under perfect conditions. I don't doubt there is a ton of information being sent back to Tesla for machine learning. They have the capability to send software updates to the cars. When you have hundreds of thousands of these cars driving around, they should be able to gather enough information and tweak the design to iron out the whole issue.
If I had one of these cars, I would not put 100% faith into this technology yet. But, after 5 years, who knows? I may finally be able to drink more than 1 beer at a bar with friends and have my car drive me safely home.
Which brings up an interesting point: traffic violations. What will local police do once self-driving cars drive safely?
If we assume self-driving cars will be an on-demand service, they could be offered intermittently - only on good weather, at daytime, at cities(or even routes) already mapped. Even with that limitation, it could be a pretty valuable service, if complemented by the right ride-sharing services.
And even when purchasing a car, a car that "only" drives itself when it's safe would be great, even if that's only 40% of the time, for people in the right city.
This level of negativity is typical for HN. Look at what happened when dhouston announced Dropbox: the second comment was about how any Linux user can trivially replace it for themselves:
A comment from the iPod thread: "AAPL is going down fast! - Mmmm. APPL is already down $1.00... Looks like the markets aren't looking too favorably on Apple's new forays into the digital device market." [1]
Today on Yahoo Finance: 'Tesla Falls After Announcement - Tesla Motors (TSLA) fell nearly 2% and remained in the lower parts of a six-month-old consolidation. Late Wednesday, Tesla said all of its new vehicles will be equipped with hardware that enables fully autonomous driving "at a safety level substantially greater than that of a human driver." [2]
Now go look at Theranos, and what the defenders were saying here.
Or the hundreds of other bad ideas that were criticized and failed because they were bad ideas. Picking a few big successes doesn't make an argument. Because, I agree, there are naysayers for everything. I'm glad; keeps people honest.
This is one of my favorite nuggets of internet history, also excellent because the thread ID (500) makes it easy to find, since it'll redirect from: https://forums.macrumors.com/threads/500.
That is GOLD! My favorite comment is this, which is so wrong it almost sounds sarcastic:
"I still can't believe this! All this hype for something so ridiculous! Who cares about an MP3 player? I want something new! I want them to think differently!
Why oh why would they do this?! It's so wrong! It's so stupid!"
For balance, here are other comments from the first two pages of that thread:
"Come on everyone, y'all are saying it sucks before you have even held it in your hand. I mean 5GB in a little tiny thing like that, it's amazing. I don't see anyone else making something like that. Do you?"
"No matter what Apple does there are always people who are NEVER happy. Give it a rest. It's a great idea and the first of many."
"The truth is that is really is revolutionary. 5 gigs? Where do you see 5 gigs in an Mp3 player?"
"This is not like any other MP3 player on the market, imagine being able to store several days worth of music at once! The iPod will be great for travelers, students, heck anyone who is really into music"
"This thing's too cool. It makes my Rio 500 (recently upgraded to hold 128 MB of songs) look pathetic. It's beautiful. It looks too easy to use. It has all sorts of cool features that I will never live without again. This is a home run, and y'all who keep complaining its not a $200 Newton device, buy a Visor. They can play MP3s, by the way, but they're still stuck in the 21st century compared to the iPod."
"THIS THING IS AMAZING!!!
It's not jus ANOTHER mp3 player. It's a BREAKTROUGH mp3 player!"
"with all the hype surrounding it, the iPod still came out looking quite good."
"I think it looks like a great toy. The price is steep for 5 GB, but everything else about it is great."
"iPod is a great idea. It has huge capacity for MP3s (5GB is much more than any other similarly sized mp3 toy), syncs with iTunes, and a nicely sized backlit LCD screen with good battery life (10hours). Recharges by FireWIre is also a neat thing."
In all fairness, the initial iPods were nothing to write home about and, as I recall, iTunes didn't even run on anything except Macs, then a rather niche hardware platform. Of course, iPods improved relative to the competition, but it was the iTunes Store and Apple's ability to fundamentally change how music was sold that really had the impact.
This sounds like the perspective of someone who didn't actually use early iPods and definitely didn't use Sony Discmans and Walkmans and other predecessors.
Because folks who actually used all that stuff know that the iPod was a quantum leap, even in its first few generations. 5GB of storage, MUCH better battery life than any competing device of any kind, and FireWire 400 felt fast enough back then that it was like actually being on fire.
Indeed. I had a Walkman in the '80s. I still remember not being able to walk too quickly with my Discman.
As soon as the iPod came out, I bought one for my girlfriend as a birthday present. She has never used any other system for listening to or acquiring music since. She obviously switched to the smartphone eventually, but still...
The first iPod provided unprecedented amounts of storage in a form factor that actually fit in your pocket. They accomplished this by buying out the world's initial supply of tiny hard drives for over a year and change. This kept would-be competitors at bay, making every other 5GB MP3 player look like a giant brick by comparison.
Most of the people who were shitting on the iPod never held one in their hands.
The difference between this and the iPhone 1 is that the technology is not there yet for fully autonomous cars, and it's unclear you can get there just by trying really hard and spending lots of money. Fully self-driving cars wouldn't just have to stay in their lane on the freeway; they would have to make the decisions during edge-case driving situations that a human would. That means knowing to slow down when a ball rolls across the road, reading people's hand signals, responding correctly when a police car tries to stop traffic on the freeway, etc. I'm not sure how you even begin to handle cases like that. Our driving environment was designed for humans, so you would have to make the algorithm "think like a human" and understand all our weird idiosyncrasies. So far, machine learning has had very little luck with trying to simulate human decision-making.
What Tesla has done so far with "autopilot" is a big deal, but releasing a "video of a car driving itself" is the classic marketing trick in machine learning, where you cherry-pick situations in which your system does well, and most people don't second-guess it.
What is with all these meta posts on HN recently by people who think they understand HN, and then need to tell everyone that they are not fitting into their idea of HN?
And why are you comparing iPhones to self driving cars? That's like comparing paper airplanes to commercial airliners. Honestly posts like these should be automod deleted.
That's funny, but it's not quite accurate. There's a difference between a) saying that certain viewpoints don't belong here, and b) saying that comments which say that certain viewpoints don't belong here don't belong here.
It's like there are three positions:
1. Certain viewpoints or comments should not be allowed.
2. Everything should be allowed except comments which say that certain things shouldn't be allowed (i.e. tolerant of everything except intolerance).
3. Everything should be allowed.
#1 is intolerant, #2 is an attempt at enforced tolerance (which is necessarily intolerant in itself), and #3 is actual tolerance.
It seems to me that the real problem is that meta discussions are frowned upon on HN, but there is no designated place to have them instead. So whenever someone starts one, they face the risk of censure-by-downvote.
Speaking for myself, I really want self-driving cars to happen in my lifetime but I've become incredibly wary of the credulous attitudes seen in media hype, among the general population, and within the tech community amongst people who don't understand how hard (and possibly intractable) some of these problems are.
If self-driving cars don't happen in my lifetime, it will be because companies piggybacked on the media hype too readily and shipped dangerously misleading technology to consumers before it was ready. A few CNN incidents later, et voilà, no one's talking about self-driving cars anymore. The whole thing can grind to a halt just like that. Every overly enthusiastic, precocious CS undergrad who posts in forums about how incredibly inevitable self-driving cars are, and how it's basically a solved problem and a done deal, is helping to set up self-driving cars for failure by encouraging overly optimistic expectations.
For people who care deeply about the technology, the possibility of a company like Tesla putting something out there before it's ready and potentially causing CNN incidents can be more alarming than no one putting anything out there at all. Maybe some people are being too negative and nitpicky, or maybe not, but I think that's where the negative attitudes come from. It's from people who want self-driving cars to succeed.
If we've learned anything about AI learning systems, it's that they require as much training data as possible. By putting the hardware in the cars now, Tesla can start gathering that training data on a scale that nobody else can dream of. Anybody who can't see that as a huge competitive advantage isn't well informed.
It's sort of like startups. 90% of potential world-shaking innovations fail but pretty much everything that changed the world looked risky at one time. It's important to be sensitive to both sides of the coin.
And in this case, it's even more important than just being the first from a market perspective. Being first here allows Tesla to acquire a ton of data for their ML to churn on. Data is the monopoly that makes tech companies like Facebook so valuable today.
Facebook open sources most of their infrastructure, and anyone can copy their designs because they're fully public.
> This is the iPhone 1 of self-driving cars! That's akin to saying Apple should have waited to release their phone until iPhone 7 "because of this & that & this..."
Unlike mobile phones, cars actually have the capacity to go horribly, horribly wrong. This is not something to take a "let's fix it in public beta" approach with.
> This is the iPhone 1 of self-driving cars! That's akin to saying Apple should have waited to release their phone until iPhone 7 "because of this & that & this..."
We don't know yet whether this is the iPhone 1 of self-driving cars or the Newton of self-driving cars, a decade or more ahead of its time.
MVPs are of course great and I think this will be huge, but this is a safety critical piece of kit. There is a big difference between having your phone crash and having your car (literally) crash, into something solid.
Yes and no... if a cellphone is imperfect, at most it causes inconvenience (let's ignore the Note 7 topic for now); if a car's driving is imperfect, it can easily kill a significant number of people.
We are geeks, many of us happy to be early adopters (aka testers), but try to explain this to the average middle-class Joe who sees this and thinks it has already reached the point of perfection, with no mistakes possible.
I just hope for Tesla's sake that they've delivered outstanding technology that will just work; too much is at stake. For now, respect to the engineering team behind this!
I think what people are missing here is an understanding of how these systems are tested and deployed at scale. While I have no involvement with Tesla I do have first-hand knowledge of similar programs at tier 1 automotive suppliers.
The suppliers provide (or are looking to provide) an electronics suite to car manufacturers. The car manufacturers want the system to be safe lest they be sued out of existence. One part of that will include contractual requirements for the system to have clocked n-kilometers on the highway in full (or partial) operation. For example, one project had a requirement for car(s) with full sensor data recording and partial automation enabled for 1 million kms.
The automotive suppliers will outfit a handful of, say, 2019-model-year test cars with the proposed sensors in the correct place and drive them around roads and highways in the specified conditions. Outfitting the cars can be expensive with prototype hardware, collecting the resulting data is a pain, and as a result the suppliers I'm familiar with run a (relatively) small number of cars for a lot of miles to record all that data.
The point of all this is to collect sensor data for resimulation as models are developed and trained. If an exceptional event occurs, they can modify the driving model, then "replay" the new model against all prior collected data to make sure the change doesn't do something unexpected elsewhere.
This process takes a lot of time (years) to pursue in this manner. What Tesla is doing is deploying the hardware in the field, then using the deployed systems to collect data to be used for the development of the automation platform. Instead of a couple of test mules they can use every single car they sell and let you drive it around for them while they record the results. Data collection that would take years can happen in weeks. This is a brilliant shortcut to the process and it puts them a couple years in front of the competition.
"Data collection that would take years can happen in weeks"
I think that we are talking days, not weeks, or more likely that Tesla can have multiple teams testing various hardware and algorithms at the same time: Tesla plans on producing 500k cars per year from 2018. If all of them have this equipment, and they drive a conservative 10k miles per year on average, Tesla will gather data from 5 billion miles driven after a year. The next year they will have 15 billion miles. After 6 short years that will amount to more than 100 billion miles accumulated.
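A rough back-of-the-envelope check of that arithmetic (the 500k cars/year and 10k miles/car figures are just the assumptions from above):

```python
# Cumulative fleet mileage, assuming Tesla adds 500k equipped cars
# per year and each car averages 10k miles per year.
cars_per_year = 500_000
miles_per_car = 10_000

fleet = 0
total_miles = 0
for year in range(1, 7):
    fleet += cars_per_year                # cumulative equipped fleet
    total_miles += fleet * miles_per_car  # miles driven by the whole fleet
    print(f"year {year}: {total_miles / 1e9:.0f}B miles")
# year 1: 5B miles ... year 6: 105B miles
```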
For comparison, Google's self-driving cars have driven fewer than 2 million miles so far, and unless they team up with a car manufacturer or launch their own car, their project will soon be irrelevant. I imagine this would only be accentuated as tech talent would prefer to work where their algorithms can be tested in a couple of days rather than where such a test would take years.
EDIT: And just imagine how cool it is that since their cars drive all over the globe, in all traffic situations, under most traffic rules, in all conditions, and both winter and summer, day and night at the same time, a team can actually test changes to an algorithm on cars selected for certain conditions (500 cars in Minnesota's winter weather at the same time as 500 in Australia's summer, 500 during daylight in Asia at the same time as testing 500 driving at night in Europe, 1,000 cars in rainy weather, 1,000 cars in sun, 1000 cars in fog, 1,000 cars in snow, all within hours).
I don't know if any amount of data collection and model refinement will allow self driving in ALL situations. We always see self driving demos in California on very large roads, I'd like to see one in a small crowded street of Napoli or on a crossroad in Ho-Chi Minh.
That picture doesn't do it justice during the most busy hours of the evening, where there's tons of people crossing and bicycles flying about in every direction.
Given that, I've never seen a crowded street in Napoli or a crossroad in Ho-Chi Minh, but I can imagine they are a completely different level of complexity altogether.
As noted elsewhere, the announcement was that the necessary hardware will now ship standard. Musk has said elsewhere that the software is many times harder than the hardware.
> Given that, I've never seen a crowded street in Napoli
My three tests for self-driving cars would be:
1. Drive through Milan on a Saturday afternoon, from the outside of the city, and park on the opposite side of the centre of the city.
2. Drive through London during a weekday, similar thing... cross the centre and find a parking space and park.
3. Drive through Paris during a weekend, same deal.
Those three cities are the most stressful to drive through for various reasons.
With Milan it's speed and decisiveness, small spaces and tight navigation.
With London it's the mix of traffic with a high cycling % along with a good mix of motorbikes and heavy goods vehicles as well as a phenomenal number of pedestrians who will walk out at any moment.
With Paris it's the speed of the ring-road (and the challenge of handling variable speed traffic on and near it) as well as the tight spaces on the streets and the numbers of scooter riders.
I also like the diversity of traffic signals and signs across those cities.
When I see a self-driving car manage those cities I am going to be impressed. It will exercise so much of their systems to do any one of those.
PS: I like that they are all fashion capitals... it would be a good marketing campaign to throw in Tokyo and NYC and present an accessible and fashionable angle to a set of super complex technical achievements.
Those all sound like more or less the same problem: traffic.
Why don't some of the teams impress me and try driving somewhere with black ice, strong crosswinds and blowing snow. If their systems can't drive in negative environmental conditions, they will be largely worthless in many locales.
I see this response a lot: that autonomous vehicles won't work until they can handle the situations you mentioned. That's not fair. Those are the ultimate worst/hardest situations for ANY driver, human or autonomous.
In my opinion, the fully autonomous vehicles shouldn't need to handle those situations. Leave that up to the human to navigate.
Fully autonomous should be able to handle the NORMAL day-to-day driving responsibilities. If you're going to require those situations that you mentioned, we'll never have fully autonomous vehicles.
NOTE: when I say fully-autonomous, the car will still have a steering wheel and someone sitting in the driver seat. The google car that had no steering wheel would never happen in the "real-world" only tightly controlled environments.
For me, the entire reason for being excited about these developments is the hope that we'll reach fully autonomous ("level 5" autonomy) vehicles, as that will be game-changing. No more need for driving ed or licenses, mobility for younger, older and disabled people, cheap autonomous taxis, being able to take a car home after drinking, etc.
If we never see a car that can actually drive itself without human intervention, I will be severely disappointed. All that work and hype just for some driving assistance systems?
>Those are the ultimate worst/hardest situations for ANY driver, human or autonomous.
They may be difficult conditions, but people drive in them all the time. Something like 30-40% of the US population drives in snow and wind for months at a time. They're not uncommon. What's the great utility of a car that can only perform its function in ideal circumstances, i.e. on a closed track?
> In my opinion, the fully autonomous vehicles shouldn't need to handle those situations. Leave that up to the human to navigate.
This will make driving in those conditions even more dangerous. If people start relying on self-driving cars, their driving skills will deteriorate. If the computer can't drive the car safely, a rusty driver won't do a better job.
I've experienced London and Paris but I would still swap one of them with a city in a less developed country for variety.
Say Dakar for example (2.5M people): no road signs, pedestrians on the highway, constant honking from every directions, more varied "vehicles" like horse carriage and "car rapides" (small buses).
> 3. Drive through Paris during a weekend, same deal.
Having recently returned from Paris, I was completely flabbergasted by the driving patterns. It was surreal. I am excited about Tesla and this self-driving car, I really am. If it can in fact navigate around the city of Paris on a Saturday I'm not sure how I would react. Probably wouldn't believe it.
I do rather wonder just how hard these situations really are for machines. Watching videos taken at the Arc de Triomphe actually leaves me thinking the Google car could possibly be fine already.
It knows where it is, it knows where it needs to go and they already have an "aggression" where they'll pull out a bit to signal to other drivers they're going.
The really complex part for us is that we can't easily keep track of all the things around us and have very slow reactions. The computer knows the distance of all things around it accurately and can respond in milliseconds and doesn't get flustered or angry. Or am I missing something, is there a lot of complex reasoning?
I ranked them in order of crazy. Milan is the most crazy of crazy. As are the motorways around Milan. I am pretty sure that the shared philosophy to not use the brake is that if they just go faster they can be ahead of any trouble about to occur.
Dude, compared to streets in European cities, Castro is most definitely a "very large road". The lane widths anywhere in California are massive compared to the older roads in Europe.
I'm in Rome, which isn't quite as bad, but yes compared to driving in the US it's completely different. Every day I have to deal with someone cutting me up, or doing something otherwise stupid. You need to drive quite aggressively here, and in a self-driving car I'd imagine the journey would take twice as long, as it would keep stopping to give way to avoid an accident - and drivers would definitely try to take advantage of that.
It will be a long time before Teslas can drive on dirt roads or roads covered in snow. But most daily driving happens on pavement either on the highway or in the city. Both of which it already handles quite well and will just get better.
Note the definition of urban area though: "To qualify as an urban area, the territory identified according to criteria must encompass at least 2,500 people, at least 1,500 of which reside outside institutional group quarters."
Urban really means not clearly rural. But there are a lot of "urban areas" 10s of miles outside metropolitan areas that people would consider exurban with forests and homes on significant acreage.
To clarify even more, Kansas is listed as having almost 80 Urban Areas in the 2010 census. I think many of the people in this thread would flatly classify those areas as rural.
The reason most people can handle driving on shitty dirt roads, in snow and other "uncommon" situations is because they've first logged thousands of hours of practice driving under more normal conditions. Remove that experience and routine and why would you expect them to have any idea what to do in those tricky situations where the AI fails on them?
That's not the case. Children can learn how to drive in snow or dirt roads in a matter of hours. Learning to drive on public roads has more to do with learning the rules than learning to drive a car per se.
I didn't grow up on a farm, but my grandparents were from rural Arkansas and raised me in a small Texas town. I had plenty of opportunity to learn to drive cars on back roads and had my own dirt bikes and motorcycles by age nine. (Modern day helicopter parents would boggle at the freedom we kids had in the 70's). By the time I was twelve my grandmother retired and bought a 3.5 acre plot in rural Texas.
I wasn't allowed to fell the trees but I had to use a chainsaw and cut the branches off the felled trees and then section the denuded trees into small enough rounds they were suitable for busting into firewood with an axe. I was given the option to learn to drive my grandfather's stick shift Toyota pickup truck in order to move all the wood with that, or move it by hand. It was an easy decision.
Driving a tractor with its big knobbly wheels and low gear ratios along a dirt path is trivial compared to trying to coax your Honda with worn-out tires up an icy hill or out of the mud. The two just don't compare.
Learning the rules and developing situational awareness.
But, yes, until you're talking about more serious off-road/bad road (which is challenging in a completely different way), there's nothing especially hard about driving on a private dirt road.
I did my driving education during the summer and got my license. Got my first car during the middle of winter and just learned to drive in snow (we did have a 2 hour course on a slippery track as part of the driving education). It isn't really hard. Just have proper winter tires and don't go too fast.
Also driving the shitty back roads in Finland was a lot of fun in my old Honda Civic which didn't have power steering or ABS (let alone any kind of traction control and stabilization you have in newer cars). Driving a modern car with all the modern aids is very easy on dirt. On snow some of them can hinder you though (some automatic gearboxes just don't know what to do when none of the wheels have any traction) but for the most part it is very easy too if you have winter tires.
Driving is very easy. Too easy, I think, which causes people to drive way too fast for the situation quite often.
The main issue I see with dirt roads/snow for automated cars is just seeing the road itself correctly, not controlling the car.
I don't think you're wrong, but I predict that this will be a complete non-problem: by the time people's skills have atrophied significantly, self-driving car technology will have improved to the point where it won't matter anymore.
Honestly, I kind of want to drive in those situations. (I enjoy driving for the most part.) It's exactly on long stretches of highway or during an urban commute that I want autopilot.
While I agree for roads with no other cars around, I think there are shortcuts that can be taken on roads where there are indeed other cars. If you think about what humans do in unknown or abnormal conditions, they basically follow the car in front of them. Self-driving cars could follow the same paradigm.
>> What happens when the car in front goes off a cliff on a twisty mountain road?
Humans have been known to "follow" the car in front of them when it's not appropriate. Being parked on the side of the freeway (especially on a curve) is one of the worst places to be. I recall hearing that people hit those parked cars quite often.
I was driving down a two way street, a car parked on the right was facing the wrong way, which made me think I was driving down a one way street. I saw a street light and had to use it to conclude I was in fact not on a one way street and that guy parked illegally.
Yes, small cobblestone streets in Philly? I wonder how it handles other bad/rude drivers. In the video I saw it merging onto the freeway, but sometimes you have to go over the speed limit to get in front of the car that is in the merge lane. Will it be courteous and allow others to merge properly?
The most important thing is if the car has the capability to detect when its effectiveness is lower than some threshold and smoothly hands control back to the human.
At the same time, it would beam this data back to HQ and to other cars, telling them to avoid the area of uncertainty.
>For comparison, Google's self driving cars have driven less than 2 million miles until now and unless they team up with a car manufacturer or launch their own car, their project is soon irrelevant.
Except the Google car is in the wild, collecting miles every day (as are the Uber vehicles), while Tesla's miles using this technology are all theoretical.
Closely related to what you wrote, it will also be about verifying the hardware itself, independently of any software models.
New hardware needs hours in the field, to verify failure rates and performance under real world conditions. Early deployment means Tesla's hardware will be ready for full service as soon as software is available. Autonomous driving then becomes "just a software problem", with hardware eliminated as a variable.
Of course, Tesla acts like it owns that data. Their privacy policy permits them to monitor your vehicle unless you opt out. But there's no reason you can't download that data yourself, if you can figure out how to do so.
Does anybody make an antenna-level middle box for GSM phones? Something you hook into the antenna cable as a firewall/monitor?
I would be surprised if Tesla backhauls their data over anything but a TLS connection (even if they only need GPRS connectivity). - thus making a middlebox useless.
Besides products sold to governments agencies, the only 2 options I know of to make a GSM middlebox are OpenBTS or the osmocom stack.
You'd probably have better luck getting at that data through onboard diagnostic ports or the like.
At the very best it will only put Google on par with Tesla in terms of collected data. Given that it will require Google to reverse-engineer how Tesla processes its data, it seems like it would still require quite a large investment on their part.
He opened the patents for most of the car; maybe he will do the same for the data, or even the self-driving stuff, once he's ready. After all, his goal is to get people into electric cars.
You'd think Google was screaming for more opportunities to acquire and test data, yet it had to be satisfied with a few cars and a small area shared with other companies. Tesla is making its own jackpot here.
The big difference is that Google's cars are actually driving themselves. Tesla's cars will collect a lot of data, but about human driving. The actual automated driving software will have to be tested separately.
Actually Tesla has access to two sources of large-volume real-world data:
1. When the driver has full control of the vehicle
2. When the 'autopilot' is engaged and the driver is ready to intervene if necessary
So if the AI passes the safety test on Type 1 data, Tesla can promote it to being tested on Type 2. And if it passes that safety test it can be promoted to full autonomous control.
The 'autopilot' mode effectively does for Tesla what Google's test drivers do, but for free and on a much larger scale. Seems to me Tesla have a very strong hand here.
There's a third type as well - 'Shadow Mode' where the software is running constantly but the driver is in full control.
So if there's an accident, Tesla can check to see if the autopilot would/could have avoided it. If they can turn round to lawmakers and say that "X% of accidents could be avoided if hands-off autopilot was legal" it should help speed up the regulatory side of things.
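As a toy illustration of how that "X% of accidents" statistic could be computed (everything here, including the 1.5-second warning rule, is hypothetical, not anything Tesla has described):

```python
# Hypothetical sketch of the shadow-mode statistic: for each logged
# accident, decide whether the (inactive) autopilot would have reacted
# in time, and report the fraction it could have avoided.
def avoidable_fraction(accidents, would_have_avoided):
    avoided = sum(1 for a in accidents if would_have_avoided(a))
    return avoided / len(accidents)

# Toy rule: assume the autopilot avoids any accident where it had
# more than 1.5 seconds of warning before impact.
accidents = [{"warning_s": w} for w in (0.4, 2.0, 3.1, 1.0)]
pct = avoidable_fraction(accidents, lambda a: a["warning_s"] > 1.5)
print(f"{pct:.0%} of accidents could have been avoided")  # 50%
```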
"It would have avoided this accident" (by braking, steering). It can say nothing about the future, "... but it would still have been in a collision 0.42 seconds later".
For all the collisions it would have avoided, there's another subset where it would only have "delayed" the collision. But that won't be mentioned. Because it doesn't fit the narrative.
But this way they can't tell how many accidents the hands-off autopilot could have caused. Number of accidents avoided alone is not enough to say this technology would decrease the number of accidents.
That is only partially true. They can resimulate what the automation _would_ have done given all the telemetry and video data leading up to the event.
This is why "partial automation" for the initial data collection still produces valid results. You can replay data against updated models and do "what if" testing without actually sending the car back out on the road again.
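A hypothetical sketch of that replay/"what if" loop, with toy stand-in models rather than anything resembling a real driving stack:

```python
# Hypothetical "replay" testing: run a candidate driving model against
# previously recorded sensor frames and flag any frame where its
# decision diverges from the approved baseline model.
def replay(model, frames):
    return [model(f) for f in frames]

def divergences(baseline, candidate, frames):
    a, b = replay(baseline, frames), replay(candidate, frames)
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

# Toy models: decide "brake" if an obstacle is closer than a threshold.
baseline = lambda f: "brake" if f["obstacle_m"] < 10 else "cruise"
candidate = lambda f: "brake" if f["obstacle_m"] < 15 else "cruise"

frames = [{"obstacle_m": d} for d in (5, 12, 30)]
print(divergences(baseline, candidate, frames))  # frame 1 (12 m) now triggers braking
```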
Yes, but it has to understand the difference between minor human errors (allowing more drift than autopilot) and actual important differences (braking ahead of a potential collision that the autopilot can't see).
Kind of. There are a lot of different machine learning approaches, but at least with neural networks, you essentially have a black box (the network) that you feed inputs into, and it spits out which category the input gets classified into. When learning, you know what the right category should be, so you go back through the network in reverse (backpropagation) and update the weights inside by an amount proportional to the difference between the desired output and what was actually produced.
Eventually (and with a lot of handwaving), that difference converges toward 0 and the network gives the right answer.
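A minimal, heavily simplified sketch of that update rule: one "neuron" with a single weight, nudged toward the target by an amount proportional to the error. This is a toy example, not how any production system is trained:

```python
# Single-weight "network" trained by gradient descent on a toy target.
def train(samples, lr=0.1, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = w * x              # forward pass
            error = target - out     # difference between wanted and produced
            w += lr * error * x      # update proportional to the error
    return w

# Learn y = 2x; the weight converges toward 2.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 3))  # ≈ 2.0
```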
True, but my point was more a scaling factor. Google can crank N amount of data with its small fleet, Tesla will be able to reach 10^5 N soon. If they're able to process it properly they may pass Google tests quickly.
I don't know if it's that easy. There are tons of images and text laying around, but research is focused on just a few datasets. Sometimes it's hard to make use of 100x more data.
Research is focusing on just a few datasets in that case because they need labelling.
In this case, what you need is mostly "what would most humans do?"
There would be things to refine about that (e.g. prevent speeding; analyse how humans reacted right before crashes etc. and improve on responses), but as a starting point it is immensely useful.
I would assume that Google Maps vans aren't driven in the same manner as other cars. They've got a large complicated camera system on top, so presumably they're driven very conservatively, and presumably they're also not driven particularly fast (kind of hard to capture good pictures when you're driving really fast).
Which is to say, if Google wants self-driving Google Maps vans, then collecting data using Google Maps vans makes sense. But if they want general-purpose self-driving cars, then collecting data using Google Maps vans will only give them a very narrow set of data.
Again, speaking as someone who has knowledge of the products, I can tell you that the Google Maps vans are useful for recording geography but would not be terribly portable to a new sensor suite mounted to a different car.
In the OP where I mention outfitting an example 2019 car, that car might be a current-model-year car with roughly similar vehicle dynamics to the proposed 2019 model. The only thing that they are paying particular attention to is the specific sensors in use and their placement on the vehicle. This apparently is critical to development of the driving model, the camera on the test vehicle must be in the precise position and direction and must react in the same way as the production model or the data is bordering on useless.
Now, with a full 3D pointcloud and enough sensor data you might be able to translate one recording into a lower-resolution resampling to model the production version, but I've not heard of anyone doing that. Test data is collected on the production sensor suite, no changes allowed.
But Google Maps vans cover land comprehensively, whereas Tesla's "convenience sample" is not comprehensive. However, Tesla's data then covers the relevant data more often.
Definitely two data sets that should be put together.
But they intentionally don't cover the same ground frequently, which is a key part of the Tesla data. The same intersection in many different conditions is pretty important.
I'm assuming that once a Google Maps van has covered an area, it intentionally avoids that area until a significant time later.
All miles are not created equal, though. Sure, Tesla will gather a lot of data quickly, but most of this data will be from people who use their car for their daily commute, every day at roughly the same hour. This doesn't have the same value as data from different areas/road conditions/times of day.
And should there be places, terrains, urban conditions etc. that Tesla doesn't cover, they will see it and can hire a number of chauffeurs to drive around there. Even if they hired 1,000 drivers to gather data in certain geographical areas at certain hours or weather conditions that they lacked input from it would be a drop in the ocean compared with the cost other car manufacturers face to keep up with Tesla now.
You can't just install autonomous driving systems in any combustion-engine car without changing lots of systems. Tesla is electric first. Everything is already wired, ready to be measured and in feedback loops.
There was a good Freakonomics podcast (http://bit.ly/2dQNPBh) a couple of years ago which looked at why Norway has so many Teslas. Road conditions would be pretty different to California.
Sure. I am mostly wondering about the demographics of Tesla owners. My guess would be that people who own Tesla cars tend to live in similar areas, with comparable lifestyle.
There is also the case of driving style. I doubt many Tesla owners drive like my grandmother, and I would assume that a variety of driving styles is important as well.
All I'm saying is that the claim of N million miles should be taken with a grain of salt as long as you do not know the entropy that these miles contain.
Question: Does the fact that the collection of the required volume of kms driven is performed over a shorter period of time based on a higher number of test subjects collecting the data impact the diversity and quality of the data negatively?
Said another way: Is there a downside to parallelizing the collection process across a broader horizontal population of collection vehicles over a shorter period versus the traditional 1-2 year collection period?
The only obvious downside to an outside observer is that they will find it difficult to collect 'all weather' data in substantial volume in less than approximately a single year.
From the positive side, they will collect and be able to evaluate a much /wider/ range of driving data from a far larger set. They are much more likely to run across 'better morons' both in operators of the Tesla and in operators of other cars.
Really a lot of weather patterns are analogous; for vision heavy rain isn't terribly different from a blizzard, etc. With a large enough span across the US you can get pretty much all weather conditions inside a 6 month window from mid summer to mid winter.
> ...for vision heavy rain isn't terribly different from a blizzard...
I would say that heavy rain is incredibly different from a blizzard. Snow has a much greater horizontal-to-vertical velocity, reflects more light, and flutters around. Rain mostly moves in straight lines, even in high wind. Not to mention traction is completely different on water vs. slush vs. snow vs. ice, as well as with good snow/rain tires vs. worn all-season tires.
I'm sure Tesla knows all this, but just in case you haven't driven in a blizzard before, you will need to drive very, very differently from driving in heavy rain.
I hope they know that when there's a south eastern style afternoon thunderstorm that dumps several inches of rain in 20 minutes that it is best to just pull over and wait.
Just had a flashback to a near-death accident in Louisiana on the highway at about 70 mph this way. Cloudy, no rain, pavement dry, then almost instantly a torrential downpour dumped what looked like a standing half inch of water on the road. The tires lost traction and I slid off into the median. Thankfully the median was wide and grassy, and I coasted to a stop (sideways) about a yard from a tree. If you've driven I-10 through Louisiana you know what I mean.
Not sure what an autonomous car could have done in such a situation, other than perhaps anticipate the storm and preemptively pull over for a bit before it starts raining.
Hopefully any autonomous system will be able to make the call of whether it is safe to drive itself - whether because of heavy rain, blizzard, tornado etc. Some situations are just not safe to drive a car in, regardless of who's driving.
That's only helpful if you have a fairly random distribution across the US. I very much doubt that's the case for Tesla. You'll probably have far more Teslas driving around in California, maybe even the Silicon Valley, than in say Alaska.
The question of whether the time average of a random quantity in one system over a long period equals the average over multiple systems over a short period is the question of ergodicity [1]. If the two are the same, the random process or system is called ergodic.
The answer is: If the measurement period for the average over multiple systems is long enough that each random variable has visited a representative sample of states which make up the majority of contributions to the mean value, then the system is ergodic.
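As a toy illustration of the distinction (the i.i.d. "speed" process below is ergodic by construction; all numbers are made up):

```python
import random
import statistics

random.seed(42)

def sample_speed():
    # Toy "driving speed" observation: i.i.d. Gaussian noise around 60,
    # which is trivially ergodic
    return random.gauss(60.0, 5.0)

# Time average: one car observed over many moments
time_avg = statistics.mean(sample_speed() for _ in range(100_000))

# Ensemble average: many cars each observed at a single moment
ensemble_avg = statistics.mean(sample_speed() for _ in range(100_000))

# For an ergodic process both converge to the same value (here ~60)
print(round(time_avg, 1), round(ensemble_avg, 1))
```

For a non-ergodic process (say, each car stuck in its own commute pattern) the two averages would not agree, which is exactly the worry raised above about Tesla's commuter-heavy fleet.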
Here's an interesting problem I hadn't thought about before. Teslas are expensive, and rich people drive in different places than poor people, so Teslas will have more data about places rich people go. That's another interesting data imbalance.
In the future, would Tesla use these data for other purposes? I.e., after analysing the data, it finds that my driving behaviour is risky and carries a higher chance of an accident. That could trigger higher insurance premiums, restrictions on which car I can purchase, etc.
What exactly is the problem with punishing people for risky driving? Or rather, letting them pay a corresponding premium to cover the (statistically expected) higher damages they will cause? Keep in mind that driving behaviour is something the user has full control over, in contrast to e.g. disabilities, for which of course no higher premium should be paid.
I actually think this is a good idea, and would like to hear opposing opinions.
The difference is that swinging a chainsaw in your bedroom isn't going to kill some stranger.
Driving like a moron in a 2T vehicle is.
Risky driving isn't a private problem. It's a public problem.
And the government already monitors your on road behavior and punishes you for engaging in risk. Speed cameras and traffic cops exist, in case you didn't know
Credit score is used for lending. Basically, helps to understand the repaying capacity of the person.
However, if the presence of alcohol were detected, in sweat or by other means, and that data were fed to insurance companies, they could make a well-founded decision to charge the person a higher premium.
Because drunk driving risks the lives of others on the road as well as your own. If a person cannot live within an agreed social contract without putting others' lives at risk, they have no business on public roads.
I suppose it's a similar argument as answering the question "why shouldn't sicker people pay way higher insurance premiums instead of driving up the average cost". For health insurance I think the pool of insured people to protect against black swan type problems is reasonable and ethical (it protects people with birth defects etc.). However for car insurances I feel like this argument doesn't hold as your driving behavior is very much in your control.
The interesting question is if it might actually be a smart strategic move to announce such a use of data. It's a bit counterintuitive as that would limit your customer pool. However I think there are benefits to limiting your customer pool to more "positive" cases. Let's say Tesla would announce that they'd collect data on reckless driving, warn the drivers and upon ignoring the warnings a couple of times inform insurers about it. That would probably result in less reckless drivers buying their cars which might actually not be horrible since that would also reduce the number of accidents etc.
The big issue is of course that it would violate all reasonable privacy laws I can think of (I guess they could make it opt in).
"I suppose it's a similar argument as answering the question "why shouldn't sicker people pay way higher insurance premiums instead of driving up the average cost"."
I don't think that is a good analogy ... many "sicker" people have played no role in their sickness which may not be related to behavior or decisions.
A better analogy would be: Why don't mountain climbers and scuba divers[1] pay higher insurance rates ?
I sort of think that they should, however I don't have a good answer as to what happens when an injured stunt skier is broke (and broken) ... do they get no medical care ?
I know this was just a hypothetical question but as a scuba diver I actually pay for a completely separate insurance on top of my standard health plan called DAN insurance: http://www.diversalertnetwork.org/insurance/dive/
So stuff like a helicopter evac, access to a hyperbaric chamber and the ability to call a number and speak to a doctor who specializes in diving related illness means I already do pay more for insurance than someone who isn't a diver.
The problem is that statistics isn't that accurate a branch of math. For example, drivers who drive faster are statistically more accident prone. I'd wager, though, that Lewis Hamilton driving faster is less accident prone than me driving slower, something that'll never show up in any statistics based result set.
Statistics are obviously useful, but they are a big gun that sometimes points towards feet.
I'd wager that Lewis Hamilton driving at speed would exhibit very different driving behaviour than I would.
With enough data we wouldn't need to just rely on measuring speed. Reaction times, steering behaviour and lots of other driving terms I am not familiar with can all come into play. Cross reference this with actual accidents and you could come up with a fairly accurate model of what makes a safe driver.
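As a sketch of what such a model could look like (every feature name and weight below is invented for illustration, not anything an insurer or Tesla actually uses):

```python
# Hypothetical per-driver telemetry features; in practice the weights
# would be fit against real accident data rather than hand-picked.
def risk_score(avg_speed_over_limit_kmh, hard_brakes_per_100km, mean_reaction_s):
    # Crude linear score: higher means riskier
    return (0.05 * avg_speed_over_limit_kmh
            + 0.10 * hard_brakes_per_100km
            + 0.50 * mean_reaction_s)

cautious = risk_score(2, 1, 0.8)     # drives near the limit, brakes smoothly
aggressive = risk_score(25, 12, 1.5)

print(cautious < aggressive)  # the model ranks the aggressive driver as riskier
```

The point of the thread's Lewis Hamilton objection is that speed alone is a bad proxy; a multi-feature score like this at least lets a skilled fast driver with good reaction times score lower than a slow but inattentive one.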
Let me try to rewrite what you are saying in-line with my opinion, hopefully without any loss in translation. You are saying that statistics precision is dependent on model quality, and a better model yields a precision good enough to be usable.
You are correct. My problem with that approach is twofold:
1) There is no clear definition of "good enough";
2) The balancing mechanism between improving model accuracy and simplifying the model (for predictability/market competitiveness) involves actors with very unbalanced bargaining power (insurance companies vs very small subsets of customers)
If I can state the problem in another way: When you use a very refined statistical model for insurance premiums you are basically creating dynamic pricing for insurance services. Dynamic pricing hurts competition in the market through the development of an exaggerated bargaining power by one side of the market. Consumers would be worse off. What they gain in justice of their pricing (I get lower premiums because I'm a safe driver), they lose in bargaining power (via fragmentation of the consumer market).
I would say a good enough measure would be down to accidents. With a large enough dataset I am sure there would be a strong correlation between driving quality and accidents. In that regard the desires of the insurance companies are largely in line with the general public.
I 100% agree that more dangerous drivers should pay a higher premium, but there needs to be a fair, equal, transparent way to determine what dangerous is. Between law enforcement agencies, speed cameras and red light cameras, we have established well-tested legal frameworks which determine whether a driver is dangerous or not.
What we don't have are any legal or moral frameworks for passive, always-on methods of observation. Until we do, we shouldn't be letting private companies arbitrarily determine whether a driver is dangerous or not.
I believe it has been established that red light cameras are themselves the cause of danger in a large number of circumstances, with yellow-light times being reduced to increase government revenue.
Not to mention the lack of due process.
I don't think the government should rely on private companies algorithms to determine danger or guilt.
We should not be punishing people for something they will/might cause. The system should punish people only for something they have already done.
Otherwise, we should punish you with a speeding ticket, because you will sometime in the future with some probability exceed a speed limit. Would that be ok for you?
That's essentially something which already exists (telematics boxes), and I'm not sure how they'd be able to sell the un-anonymised data to insurers. But if they could legally do so (I'm confident it's not possible in the EU) then yes, you could see the above.
The realistic scenario for me is when they show aggregate data to insurers to get lower premiums for their cars (or indeed provide the insurance themselves at low rates and use the data to get excellent reinsurance rates). That seems a very Tesla thing to do.
I have to admit, while in some ways it seems like "cheating", I feel like having this real-world data is in reality much safer than training artificially. It will include all sorts of varying areas and driver stupidity.
Why do you say cheating? Most (all? I'm not an expert) AIs are "seeded" on datasets which are generated by humans. For example, Google translate uses documents translated by experts to convert say, French into German. They use translated government documents, amongst other sources, to get this, instead of some impossible-to-program decision tree that contains every possible grammar rule.
Both real-world training and artificial data have their own benefits. For example, you can use artificial data to simulate conditions that are rare in the real world, or would be very hazardous to try and test (for example, how does the car react when it thinks another car is about to t-bone it).
The only problem is, Tesla's sensor suite is too weak to ever get to full autonomy. So once they upgrade the sensors, what good is the data from the old sensors? Perhaps it's possible to extrapolate the data somehow, to have a database of "situations" rather than actual sensor data?
And look at how many accidents that causes. We also have a highly refined system to process that input, and from the looks of it, this is a very hard problem to solve with a computer. I don't think stereoscopic vision and a few microphones can ever beat a human driver on average across all types of edge cases without first creating a good artificial general intelligence.
I think a combination of LIDAR and cameras is probably going to be required for a truly autonomous car, at least for the first decade or two.
That I would call: driving autonomously with a human driver ready to take over if something unexpected were to happen. I believe that to get to superhuman performance (which is probably what will be needed to make it legal) you will need 360 degree ranging (i.e. LIDAR). Also, it is inevitable that these sensors will become so cheap that it's a no-brainer to include them.
> The video clearly shows a fully autonomous vehicle in a range of driving conditions.
The video shows a fully autonomous vehicle in a range of (relatively) easy driving conditions.
1. There is not a single "sticky" situation in that video. Handling 99% of situations is (relatively) easy; it's the last 1% that is extremely hard.
2. It's not clear how many tries it took to record that video, but likely more than one.
3. At the end of the video, the car drives in the opposite lane of where it's supposed to. That smells of a highly-staged demo, not a general purpose solution.
In short, this video in no way proves that the sensor suite is anywhere close to being sufficient. Can that car repeat the same trip at night? In rain? Can it drive down Castro St. in Mountain View or University St. in Palo Alto at lunch time? Etc. etc.
Why aren't other manufacturers following this model? I've seen 12-year-old Mercedes with internet connections, so the infrastructure has been there for a while.
Disclaimer: I work in one of these car manufacturers' IT departments, in close contact with the "connected car" team.
The reasons are multiple.
1) Building and upgrading a car model takes them at least 2 years... in the meantime, they have time to clock the kilometers anyway.
2) They have no idea how to build that type of massive data environment. They just don't know. For example, all our "connected car" data came into one single-threaded acceptor on one single machine. No load balancing, no scaling, nothing, until August when the vacation-season congestion flooded it. So now we have 4 threads on that single machine.
I could ramble on and on about the kinds of mistakes they make. They just don't know. And this year they asked the team to reduce its budget, because that is how the car industry works: two years after kicking off a project, you have to cut its cost by 5% to 10% every year.
No one wants to have the first headline of "Massive multi-death accident involving kindergarteners caused by software glitch" attached to their brand. Particularly not German carmakers, who both emphasize how pleasant and exciting it is to manually drive their vehicles ("Fahrvergnügen", "Vorsprung durch Technik", "Freude am Fahren") and use passenger safety as a major selling point.
Not sure why you're getting down voted. Car makers have to be very careful here and the older car makers understand this. They know NBC is just chomping at the bit for bloody news to headline. Move fast and break things doesn't work for an industry that has had to prove how safe it is. Do _any_ of the down voters even know who Ralph Nader is?
It doesn't fit into the development model of car companies at all. Typically you have a 3-4 year development cycle for cars. Within that period lots of stuff happens, from getting contracts with all the suppliers, developing all the individual parts (often from scratch, if a different supplier than the previous one was chosen), integrating that into the cars and testing on the road.
After that a car is pretty much seen as completed, it will go into the factory and to the customer and probably get minor updates and fixes during the first year. However development already starts for the next model. Data and information from previous cars won't help you that much, since the systems in the new car will often be very different (as I said, most things are developed externally and from different suppliers, reuse is limited).
> One part of that will include contractual requirements for the system to have clocked n-kilometers on the highway in full (or partial) operation.
I wonder what safeguards the manufacturers have in place to determine that those tests are not gamed and are honest as presented. I am thinking of VW emissions scandal. Given the stakes it would be interesting to know what level of trust is involved vs. verify.
More cameras. Better sonars (very short range). Better radar processing, but apparently the same old single radar at bumper height. Still no windshield-height radar. No radar scanning in elevation. No LIDAR.
That's a better, but still weak sensor suite. It's probably enough for freeway driving under good conditions. It's far below Google's sensor suite. Or Volvo's.
Now they just have to write software smart enough to not plow into stationary vehicles on the shoulder. There are videos of three separate Tesla crashes where the Tesla plowed into a vehicle partially blocking a lane.
There have been several announcements of low-cost solid-state LIDAR units for automotive. Quanergy announced last year, but didn't ship.[1] Innoviz announced this year to ship in 2018.[2] Advanced Scientific Concepts can't get their costs down.[3] (They have a great unit that costs $100K; the Dragon spacecraft uses it during docking.) Those are all-solid-state devices. There are also some companies trying to use MEMS mirrors, like TV projectors.
Eventually somebody will get 3D LIDAR technology working at a low price point, but it hasn't happened yet.
It's staggering that Tesla don't use LIDAR at all.
Sure, LIDAR is more expensive, but the data quality is very stable and good. Cameras won't work well, or at all, at night, when low sun glare blinds them in the morning/evening, or in bad weather conditions like heavy rain or snow. All other self-driving cars I know of have at least a small LIDAR alongside several other types of sensors.
Lidar doesn't make sense for Tesla with the failure rate alone. They're mechanical devices, and they often have defects when produced at scale. They need to be replaced very often. This makes sense when you're doing vehicles as a service, but not as much for Tesla's use case.
Once the cost goes down, multiple LIDAR units can be mounted behind the windshield. Cars designed for automatic driving should reserve the top inch or so of the windshield for sensors, with the optical sensor suite mounted at the front edge of the roof and covered with padding matched to the headliner. (Yes, auto glass today is often infrared-opaque to reduce heat input, but the IR-opaque layer could be omitted for the top inch.) Putting cameras behind the rear view mirror is already done, but more space may be needed.
The rotating Velodyne thing Google uses is an interim research tool. In production, expect to see better design.
With enough LIDAR units, full-circle coverage can be achieved. This is already being done with cameras. The rear and side units need less power and range than the front-facing units, so they can have a smaller collecting aperture, making them easier to place. For the front-facing units, the emitter can be spread out, allowing higher power. The rules on eye safety are concerned with the maximum power coming through a 1/4" hole (human eye pupil). If the laser emitters are spread out over the width of the windshield, the maximum power per unit area is reduced.
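A rough back-of-envelope sketch of that last point (this is not an IEC 60825 eye-safety calculation; the emitter dimensions and the uniform-emission assumption are mine):

```python
import math

# A ~1/4-inch (6.35 mm) pupil, per the eye-safety rule mentioned above
PUPIL_D_M = 0.00635
pupil_area = math.pi * (PUPIL_D_M / 2) ** 2

def worst_case_power_into_pupil(total_power_w, emitter_area_m2):
    # Assume uniform emission over the emitter face; a pupil pressed up
    # against it intercepts at most the fraction of the face it covers.
    return total_power_w * min(1.0, pupil_area / emitter_area_m2)

# Same 1 W total power, two emitter geometries:
point_like = worst_case_power_into_pupil(1.0, pupil_area)   # all 1 W can enter the eye
windshield = worst_case_power_into_pupil(1.0, 1.3 * 0.02)   # 1.3 m x 2 cm strip

print(point_like, round(windshield * 1000, 2))  # 1 W vs roughly a milliwatt
```

Spreading the emitters across the windshield thus cuts the worst-case power per pupil by roughly the ratio of emitter area to pupil area, which is the intuition behind running higher total power safely.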
The CMU/Cadillac demo a few years ago managed to conceal all the sensors. It can be done. It just costs too much until one of the three companies claiming to be building low-cost automotive LIDARs Real Soon Now manages to deliver.
> ...costs too much until one of the three companies claiming to be building low-cost automotive LIDARs Real Soon Now manages to deliver.
All I see is future promises. Tesla wants to do this now, that too at mass market costs, independent of whether those promises materialize or not.
It also depends on how much you trust their R&D team. If they thought this was a weak sensor suite and something better was just around the corner, they wouldn't risk fitting their cars with inferior technology that would be obsolete in a couple of years, making all the collected data worthless. Yes, I am aware they did this to existing cars lacking the full-autonomy hardware, but I find that OK, since those cars were never promised full autonomy to begin with.
No, Tesla doesn't want to do automatic driving now. They want to hype it now. The software isn't ready yet. In their announcement, they say that the new hardware will, right now, support fewer automatic driving features than the current hardware.
Low-cost LIDAR is a problem that can be solved with money and a customer ready to buy a lot of units. So far, no car company has said "we'll take a million units a year at $100 if they meet these specs." It's lack of demand, not lack of technology. It's a risk for the LIDAR maker. There are automotive radars available better than those currently on cars, but they don't have a volume buyer yet.
You provided no evidence for the hype claim but even then you are contradicting yourself:
Statement A: Tesla is just hyping autonomous driving since no technology can deliver fully autonomous driving right now.
Statement ~A: Low-cost LIDAR is available to any high-volume customer, of which Tesla is one, given Model 3 demand. And LIDAR can deliver fully autonomous driving.
The point simply being, I have a hard time believing that even with good-looking, cheap LIDAR being a possibility, as you claim, Tesla chose to go with an inferior sensor suite, for no apparent reason.
What they're doing now is a minor retrofit to their existing crashes-into-big-solid-objects "autopilot".
The Model 3 is at least two years away. By then, the technology should be better.
Volvo already has a concealed LIDAR in their self driving car.[1] Volvo will put 100 self-driving cars in the hands of ordinary drivers in Sweden in 2017. This is self-driving by the pros. Makes Tesla look amateurish.[2]
And Volvo is way ahead on marketing self-driving.[3]
Besides being almost as expensive as the car, they mount on top and stick out like a sore thumb. This is the reason. It doesn't look cool. I'm amazed that people don't realize that.
LIDAR's high cost[1] is prohibiting it from being commercialized. This article below was dated 2015 so price might have actually gone down a bit - but still a few orders of magnitude higher than cheap ultrasonic sensors and cameras.
This is actually a common problem, not only for Tesla. We won't see LIDAR in production cars (you will certainly see it in prototype cars, as usual) until its cost is drastically reduced.
It's more than just high cost, LIDAR has issues with rain making it questionable in real world conditions. AKA, if you need some fallback, just use that fallback.
With range gating, you can do a lot with LIDAR in rain and fog.[1][2] It's possible to ignore all reflections out to a given distance, and you can adjust that distance. There's been a lot of work on this for the military. Automotive LIDAR units don't have this yet, but it could easily be added. If you have both range gating and "first and last", where you get the time of the first and last reflection, you can easily distinguish fog and rain from hard objects. The distribution of first and last times on fog and rain should look like Gaussian noise, while a hard target has near-equal first and last.
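The "first and last" discrimination described above can be sketched with a toy simulation (the numbers and the Gaussian fog model are illustrative assumptions, not real LIDAR processing):

```python
import random

random.seed(0)

def returns_from_fog(n=200):
    # Scattering off droplets: reflection times spread out like noise
    return [random.gauss(50.0, 8.0) for _ in range(n)]

def returns_from_wall(n=200, dist_ns=50.0):
    # A hard surface: nearly all energy comes back at one time
    return [random.gauss(dist_ns, 0.2) for _ in range(n)]

def looks_like_hard_target(times, max_spread_ns=2.0):
    # Hard target <=> first and last reflections nearly coincide
    return (max(times) - min(times)) < max_spread_ns

print(looks_like_hard_target(returns_from_wall()))  # True
print(looks_like_hard_target(returns_from_fog()))   # False
```

Range gating adds the complementary trick of discarding all returns closer than a chosen distance, so nearby fog can be ignored entirely while distant hard targets still register.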
This is something we'll probably see in second-generation self-driving cars. With range-gated LIDAR and millimeter radar, self-driving cars will do far better in fog and heavy rain than humans do.
All you have is two overlapping cameras with poor resolution and they let you behind the wheel, right?
This is a software and processing hardware problem only. Human beings drive OK for the most part and we don't have lidar/radar/sonar/etc.
With 8 cameras providing 360 vision and an advanced NVIDIA hardware suite, this problem is entirely solvable and this is a very reasonable solution to it.
In theory, sure. But you're effectively saying computers have to process vision as well as humans, which some experts think is comparable to solving AI. I'm not saying Tesla won't make a lot of progress, but considering how many decades of research has gone into this very problem (I worked on it at Carnegie Mellon as a graduate student, and they've been doing it since the 80's) it will likely take longer than most people like to admit.
I actually don't know a single expert who thinks we're anywhere close to having a stage 4 autonomous vehicle. Most of the people I respect are pegging it at decades instead of years.
Google is at stage 3 and getting close to stage 4 for slow (< 25MPH) vehicles. See Urmson's talk at SXSW, where he shows Google cars handling many unusual situations. (A woman in a powered wheelchair chasing a turkey with a broom is one example.) If you're willing to drive slow, more situations can be handled. It's not necessary to have as much sensor range, situations are less ambiguous, braking hard will resolve most problems, and predicting what's going to happen becomes less important.
At higher speeds, prediction becomes more of an issue. Will that car on the side street drive into the intersection, or not? At slow speeds, you can wait and see what they do. At higher speeds, you have to predict behavior. That's hard, but a machine learning problem.
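As a toy example of why prediction matters at speed (purely illustrative; real systems use learned behaviour models, not a kinematic check like this), a simple time-to-conflict estimate might look like:

```python
def will_conflict(my_speed_mps, my_dist_to_intersection_m,
                  other_speed_mps, other_dist_to_intersection_m,
                  window_s=1.5):
    # Estimate when each vehicle reaches the intersection at constant
    # speed; flag a conflict if arrival times fall in the same window.
    if my_speed_mps <= 0 or other_speed_mps <= 0:
        return False
    t_me = my_dist_to_intersection_m / my_speed_mps
    t_other = other_dist_to_intersection_m / other_speed_mps
    return abs(t_me - t_other) < window_s

print(will_conflict(25.0, 100.0, 10.0, 38.0))  # True: both arrive ~4 s out
print(will_conflict(25.0, 100.0, 2.0, 38.0))   # False: the slow car arrives much later
```

The hard part, of course, is that real drivers don't hold constant speed: the machine-learning problem is predicting whether that side-street car will actually pull out, which this constant-velocity model cannot capture.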
This is with the caveat that the weather is nice and you're in one of Google's specially mapped areas. As I understand it - and I never worked at Google and I keep waiting for someone who does to correct me on it - those maps are very difficult to scale and maintain.
The limited area/weather isn't a huge problem commercially. You just need to start offering it in a single city when possible, prove the tech and business work, and then you could get tons of money to scale to other cities.
I don't see how that solves the weather problem, unless you're limiting yourself to cities with nice weather. Even then - all cities have bad weather occasionally.
I'm not arguing against progress. I think it's great people are working on this and obviously you have to start somewhere. I'm just pointing out the fully autonomous vehicle is not a few years away.
Part of the problem, I think, is that the image is not fully used; I mean, generally these systems consist of a black-box routine that extracts interest points and then passes them on to a SLAM routine, which in turn keeps an estimate of the car state and the interest-point positions. There is no "physical" model of the world being inferred from images, and I imagine this makes things rather tricky (and is also why a LIDAR is so much more useful).
AFAIK deep-learning hasn't really brought much change to this manner of doing things - the mapping part esp.
I think you must mean that you don't know an academic expert who thinks we're close. The teams at Tesla must think it's achievable in the next few years.
Well I also used to work with the team that's now at Uber (I worked at the National Robotics Engineering Center which is where they got almost all of their initial 40 engineers), and some of the guys I used to work with went to Google as well as Tesla. Tesla has a retention problem - their top engineers regularly quit every few years. Google lost nearly their entire team working on self-driving cars because Sergey insisted on making a fully self-driving vehicle instead of rolling out incremental levels of autonomy.
If you're only reading article after article hyping the technology instead of talking to the people actually building it, of course you're going to believe the journalists instead of the engineers.
I don't know anybody actually writing the code that thinks we're close. Of course the PR teams and managers are going to hype everything up - that's their job.
Excuse my rather hand-waving explanations below but I'm not an expert in this field.
The guys you spoke to, was this before or after they went to Google/Uber? As I understand the approach at Tesla would be (a) collecting an enormous amount of real-world driving data that possibly others are not or have not done yet, (b) do as little "coding" as possible, but rather take a deep-learning approach. I.e. the "algorithm" gets better the more data you throw at it, it doesn't depend on human intelligence, but rather on how much data you collect. Similar to how Google Translate got so good (it's not good because of any linguistic model or because a team of linguists "coded" it, it's just good because of the sheer amount of data it was trained on). (c) they have enough money to throw at any hardware requirements for a platform that could train on such an amount of data.
Are the conditions above not different perhaps from what you experienced whilst working at NREC and perhaps different from the way you guys approached autonomous-driving? I'm trying to think from Elon Musk first-principles. If it was technically possible, what are all the lego blocks you need to go about building this?
That certainly seems like the idea for Tesla - collect all the data from the cars in real-world condition and use it for training. I think it's brilliant. Time will tell to see how well they can leverage that data. Having a better sensor suite would certainly make it easier though. It's a lot easier to interpret LIDAR (it basically makes a 3D map for you) than it is to interpret a camera (you have to figure out how to turn 2D into 3D).
My skepticism is in the leap towards a fully autonomous vehicle, safe for driving on real roads in harsh conditions. Very few people have attempted anything in bad weather yet, although Ford has actually done some work on it.
If we are talking about incremental improvements - sure that will happen constantly. What I'm saying is I don't believe you're going to get into a car without a driver anytime soon.
Tesla has been recording driving data, sensor data, and GPS data for years. They already have a system in place for machine learning driving data. This combined with their sensors is IMO a great basis for fully autonomous cars.
Tesla is planning to have their factory produce 500,000 cars in 2018. Even if they only meet half of that they'll still be collecting a lot of data.
I would guess most major cities and freeways will have full autonomous support by 2019.
I think the comparison to Google Translate is interesting. How much confidence do you have that a translation by Google Translate is correct? How would you compare the complexity of translating documents between various languages to the complexity of driving a car in arbitrary environments and situations? And the most interesting thing: would you trust your life to Google Translate? At least in the sense that it does not corrupt the meaning of a translated text (small errors OK)?
The way I see it panning out is that the first generation of autonomous vehicles is going to have a tough time. As more and more cars become fully autonomous, some sort of protocol will develop whereby every autonomous vehicle relays exactly what it's "about to do" to anything nearby. I.e. your car will know if the car in front / behind / on the other side of the freeway is about to do anything, eliminating what is currently seen as "unpredictable environmental factors" to a very large extent. The Internet of Things will also increase the amount of additional data being fed to the car's computer; e.g. it will know exactly when a traffic light is about to turn red because it will receive this data directly from the light.
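A minimal sketch of what such an intent-broadcast message might look like; the JSON fields, maneuver names, and the crude distance filter are all invented for illustration:

```python
import json
import time

def make_intent_message(vehicle_id, lat, lon, maneuver, eta_s):
    """Build a hypothetical V2V 'intent' broadcast as a JSON string."""
    return json.dumps({
        "id": vehicle_id,
        "pos": [lat, lon],
        "maneuver": maneuver,   # e.g. "lane_change_left", "hard_brake"
        "eta_s": eta_s,         # seconds until the maneuver starts
        "ts": time.time(),
    })

def is_relevant(msg_json, my_lat, my_lon, radius_deg=0.001):
    """Crude filter: only react to intents from cars nearby."""
    msg = json.loads(msg_json)
    dlat = msg["pos"][0] - my_lat
    dlon = msg["pos"][1] - my_lon
    return (dlat * dlat + dlon * dlon) ** 0.5 < radius_deg

msg = make_intent_message("car-42", 37.7749, -122.4194, "hard_brake", 1.5)
print(is_relevant(msg, 37.7750, -122.4195))  # car a few metres behind: True
print(is_relevant(msg, 38.0, -122.0))        # car kilometres away: False
```

A real protocol would obviously need authentication and timing guarantees, which is where most of the hard engineering would be.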
Once you reach a certain critical percentage of fully autonomous vehicles regulation will probably change to such an extent that car insurers will void your insurance if you drive it manually, or give a significant discount to your insurance premium if you don't ever put it in "manual" mode, etc.
I think the problem is really difficult right now because humans are still allowed to drive, but as that ratio goes down it becomes less of a serious problem.
That's a really cool paper. While I wouldn't say they're doing better than a human (I really think they're twisting semantics with that one), it is impressive and I hope to see a lot more progress in this area. Thank you for sharing!
Hey, the human eye is a damn fine camera and has the equivalent of a room's worth of NVidia cards in processing power behind it. It does have very poor depth perception compared to radar or lidar, a quite limited field of view, and an inability to see through fog, but let's not sell a person short.
With modern driver assistance packages, that's not true anymore. For example, Mazda's safety technology apparently gives drivers access to radars and radar coverage that Tesla simply doesn't have; see e.g. http://www.mazda.com/en/innovation/technology/safety/active_... There's also been talk of mandating active safety systems on new cars because regulators are worried about the safety of drivers relying on just their own two eyes.
> It's probably enough for freeway driving under good conditions.
It might rule out driving in fog or snow, but if the weather is OK then the sensor suite is probably enough for driving on any paved road.
I think you are underestimating how quickly Tesla's AI software can improve. The AI will always be running and checking to see if the driver does anything unexpected. For example if you brake or swerve suddenly when the AI did not expect you to do so, all the sensor data and state of the AI will be sent back to Tesla headquarters. Tesla will amass millions of these 'significant events' and be able to back-test updates to their AI logic against every single event. It will be test-driven development on steroids, allowing rapid software improvement while maintaining high quality.
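The back-testing idea described above can be sketched as a tiny harness: replay recorded override events against a candidate policy and score how often it agrees with the human. The events, policies, and thresholds below are invented for illustration:

```python
# Hypothetical "significant events": moments where the driver overrode
# the AI, pairing sensor input with what the human actually did.
recorded_events = [
    {"obstacle_dist_m": 8.0,  "human_action": "brake"},
    {"obstacle_dist_m": 55.0, "human_action": "cruise"},
    {"obstacle_dist_m": 12.0, "human_action": "brake"},
]

def policy_v1(ev):
    """Current policy: only brake when very close."""
    return "brake" if ev["obstacle_dist_m"] < 10 else "cruise"

def policy_v2(ev):
    """Candidate update: brake earlier."""
    return "brake" if ev["obstacle_dist_m"] < 20 else "cruise"

def backtest(policy, events):
    """Fraction of recorded events where the policy matches the human."""
    hits = sum(policy(ev) == ev["human_action"] for ev in events)
    return hits / len(events)

print(backtest(policy_v1, recorded_events))  # v1 misses the 12 m brake
print(backtest(policy_v2, recorded_events))  # v2 agrees on all three
```

With millions of real events instead of three, every candidate update could be scored against the entire history before shipping, which is the "test-driven development on steroids" claim in concrete form.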
"I think you're underestimating how quickly Tesla's AI software can improve"
Do you have any basis for that statement, other than that you like Elon Musk, and read an article about "big data" once?
I have actual experience in this field. The parent is right: the sensors matter more than hypothetical gains from "big data". Because right now, sensors suck. There's a reason every serious player in this game is plopping pairs of $10,000 lidar units on their test cars (and still failing to work in the rain). You can't distinguish signal from noise by collecting more noise, and until you have a decent sensor suite, "big data" is just garbage-in, garbage-out.
Reading this discussion, you'd almost forget that the reason Tesla is doing this is because its existing sensor suite was proven to be a total joke in actual use...but hey, here's to the emperor's new outfit! One hundred percent less naked than the old one!
Perhaps irq11 was downvoted because of his/her belittling tone rather than the substance of his/her argument.
It's valid and important to say, "I have experience in this field and sensor quality will always be more important than a suite of regression tests, no matter how large".
It's not valid and important to say, "You're just saying that because you like Elon Musk, and read an article about 'big data' once"
Well, yes. That's why I didn't say that. I asked if the parent had any evidence for the otherwise bald assertion of a dubious fact (other than being a fanperson).
It would be a perfectly reasonable response to my comment to say "why yes, I do, and here's my evidence (along with a great reason that I didn't provide it in my original comment!)"
It stands to reason that if the parent had a better argument than "test-driven development on steroids", then they'd have provided it. But asserting that someone's well-informed comment about sensor quality "underestimates" the power of big-data-AI-TDD-learning-rapid-innovation-foo, well, that sounds like magical thinking.
In any case, this is a meta-discussion about voting, which is definitely not okay, so I'll leave it at that.
Here's Mobileye's CTO talking in detail about their technology.[1] They use a deep neural network that recognizes cars. Just cars. Look at this point in the video.[2] There's a stationary obstacle on the left, and it doesn't recognize it. Maybe this is why Teslas, using Mobileye hardware, plowed into three different stationary vehicles at the left of a highway. They were all trucks (one was a street sweeper), and seen from the back while stationary, they may not have matched the trained model of "car".
Mobileye's algorithms are good at recognizing moving cars from most angles, and that's valuable for feeding into the model of "what's the other player doing". But it's not enough for the "where are all the obstacles" and "where is the road surface" questions. Those questions have objective answers, which can be answered with a LIDAR, and maybe millimeter-wave radar if it has the angular resolution.
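To make the "LIDAR basically makes a 3D map for you" point concrete, here is a toy sketch of how "where are all the obstacles" falls out of a point cloud almost for free: bin returns above the road surface into a 2D grid. The scan data, cell size, and height threshold are invented for illustration:

```python
def occupancy_grid(points, cell_m=1.0, min_height_m=0.3):
    """points: (x, y, z) lidar returns in metres, vehicle at origin.
    Returns the set of grid cells containing something tall enough
    to be an obstacle rather than road surface."""
    occupied = set()
    for x, y, z in points:
        if z > min_height_m:  # ignore returns from the road surface
            occupied.add((int(x // cell_m), int(y // cell_m)))
    return occupied

scan = [
    (5.2, 0.1, 0.05),   # road surface ahead
    (5.3, -0.2, 0.02),  # road surface
    (12.4, 1.8, 1.1),   # something tall: a stopped truck, perhaps
    (12.6, 1.9, 0.9),
]
print(occupancy_grid(scan))  # -> {(12, 1)}
```

No trained model of "car" is needed: the stopped truck shows up as an occupied cell regardless of whether it matches any learned category, which is exactly what a camera-plus-classifier pipeline can miss.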
> existing sensor suite was proven to be a total joke in actual use
The existing sensor suite is sufficient for the existing driving-assist functions. It was never expected to be sufficient for the predicted, future levels of autodrive.
What do human sensors cover? Primarily our eyes. Have we had sensors that provide about the same dynamic range as our human vision?
I am NOT a part of this field, so please excuse my presumption.
I believe that most/all(?) current/previous 'automated control' systems used sensors that were lower resolution than our eyes, but that also provided some indication of range data.
Do our eyes provide distance data? I think that data is calculated with quite a bit of accuracy in our brains.
Again, I'm not part of this field, so this is just speculation.
In theory, given a fast enough computer and an advanced enough algorithm, fed nothing but visual input streams no more advanced than are available with existing optics, could a car autopilot not be just as good a driver as a human?
Agree on that. I would however love to read some confirmed info on whether AI/neural nets/big data really plays a significant role in any of the current automated-driving developments, and how this fits together with safety regulations (ISO 26262, ASIL) and the required deterministic behavior.
Could you share with us what the state of the art in SLAM is? Last I looked, the backend was essentially the same old Kalman smoother, but the frontend generally still used hand-tuned features (stuff from the '90s and '00s). Big data, including all the rapid advances in object-recognition CNNs, can't really seem to do much in this case, unless there are radical new techniques that I'm not aware of.
I'd advise others on this thread, who tend to dismiss the difficulty of vision, to realize that techniques in ML and Deep-learning are very problem specific and not the holy grail of AGI they're made out to be.
Actually we also have very sophisticated direction-finding sound processing, and tactile sensors on the wheel and seat, plus proprioceptors (a trade secret that no one at Tesla has been able to reproduce). This allows for e.g. driving over snow and processing the sound of the snow as it is driven over as a hint to the rest of the driving system.
There's no reason why a microphone and an accelerometer would be hard to connect as input features. The car can make use of the same hints, and most probably make better use than even humans by learning to distinguish more informative patterns.
When we were designing our vehicle for the 2004 DARPA Grand Challenge, we considered installing a guitar pickup on the frame to sense vibration. Early plans for the Grand Challenge involved more off-road driving, and we wanted something to give us a sense of surface roughness. In practice, the off-road aspect wasn't important and we were overdesigned in that area. We had a 6-wheel drive vehicle and a windshield washer/compressed air cleaning system for the sensors, which were totally unnecessary.
Yeah, but our cameras have adaptive resolution, a large field of view, very good dynamic range + low-light performance, multi-axis control, and they are tightly integrated with a sophisticated processing system.
Google's technology was already working years ago without machine learning. If you look at the object recognition/detection benchmarks, they are already at human level, so cameras should be enough (with a good enough GPU). Five years ago deep learning wasn't working. Algorithms are getting better faster than hardware is getting cheaper.
If you go to the order page of the Model S, it says for the "Full Self-Driving Hardware":
>Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.
That's a really fucking shitty thing to do, and is gonna cause some pretty big waves IMO if they follow through with it.
The issue of not being able to modify your own stuff is already really shitty, but it's not something the average Joe can easily understand, and even less care about.
But not being able to USE your car for some things? That just screams easy headlines and it's a reason I might not get a Tesla in the future.
The most interesting thing about self-driving vehicles is that there is a legitimate reason people should never be able to modify the firmware on their vehicles. The risks would be very high.
There are also the entire liability and planned-obsolescence issues not being dealt with. Will Tesla support these cars forever? Will 10-year-old models still get software updates?
> The most interesting thing about self-driving vehicles is that there is a legitimate reason people should never be able to modify the firmware on their vehicles
People can already modify their car hardware in dangerous ways, yet no one is welding the hood shut. You may weld spikes on your car to keep prying eyes away if you wish, but when you inevitably impale someone, Ford will not be liable.
Why should software be treated differently? All Tesla has to do to avoid liability is say, "The vehicle was running aftermarket firmware installed by the owner." They won't even need to run diagnostics; they can simply look at the logs they collect.
Sorry, there is a fundamental difference with modification when you get into software, server-based logic, and machine learning.
It's a bit like when people used to modding their own hardware encounter a GPU or a RAM stick that won't fit into a PCI slot, so they "cut it to fit", and are then surprised it doesn't work.
When you modify a car physically with spikes, or to make more engine noise, or to have looser suspension, you are dealing with isolated Newtonian environments.
When you tweak software that involves massive machine learned models, you can cause absolutely unpredictable horrific consequences. There are good reasons to restrict this.
Expert witnesses exist for a reason: they will be able to answer the question of whether the firmware was factory-installed or not.
> But there is no such visibility for software
Software modification has a lot of visibility, especially since Teslas log everything; why not log when new firmware is loaded? They could also checksum post facto, but even that is not necessary: Tesla could implement something similar to Android's "fastboot oem unlock" option.
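A sketch of the checksum idea: compare a hash of the installed image against a factory manifest. This is purely illustrative; Tesla's actual firmware-attestation mechanism (if any) is not public, and all names and data below are invented:

```python
import hashlib

# Hypothetical factory manifest: a table of known-good firmware hashes
# that the manufacturer keeps (ideally signed) for each release.
factory_manifest = {
    "8.0.1": hashlib.sha256(b"official firmware image v8.0.1").hexdigest(),
}

def firmware_is_factory(version, installed_image):
    """True iff the installed image matches the factory hash for that version."""
    expected = factory_manifest.get(version)
    actual = hashlib.sha256(installed_image).hexdigest()
    return expected is not None and expected == actual

print(firmware_is_factory("8.0.1", b"official firmware image v8.0.1"))  # True
print(firmware_is_factory("8.0.1", b"aftermarket patched image"))       # False
```

In practice you would sign the manifest and the hash check would run in a boot chain the owner can't trivially patch, which is exactly what schemes like Android verified boot do.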
> Software modification has a lot of visibility, especially since Teslas log everything
What I mean by visibility is that you can't fudge it, that reality is plainly visible to all actors.
Income is not visible directly (although what one's house looks like for example is visible and is a good indication of income). If you meet a well-dressed guy in a Starbucks, you have no idea whether he's crippled with debt or whether he has so much money he doesn't know what to do with. And that's true regardless of the fact that income is regulated by the government, that banks keep records and all that.
Software is NOT visible in that sense: whether your microwave is running code that will make it explode when you type in 9999 is not something that can be known by anyone but a person with access to the device, let alone the public, who couldn't figure out how to compare two hashes anyway.
As far as any third party is concerned, Tesla could just be faking logs and no one would know. Actually, you may not see it still, so imagine that say, GM, suddenly releases an autonomous car and it crashes. Do you trust them when they tell you that the logs say the driver was at fault?
I guess the concern is that both sides will get expert witnesses with equally long CVs to make equally compelling arguments and then it's on the jury to decide which PhD they want to believe. Not sure how that helps.
Cars are already extremely risky to modify, but it is still possible (and it's even a protected consumer right). I can go out and accidentally cut the brakes on my own car while changing a tire; if it causes harm, we already have a system in place to deal with it.
I would argue that cars are not extremely risky to modify mechanically - the parts are usually large, almost everything you'd want to do on your car has youtube videos and forum threads with people doing the exact same procedure, and you have pretty immediate feedback if you really messed anything up/the car is inoperable.
I don't think that is the same for self-driving software if you were allowed to hack it. It is easy to test if your brakes work, I think it may be harder to test modified software with the huge number of interactions it must deal with.
Oh baloney. Have you any experience with current car tech? It's pretty freaking basic, all things considered. Anyone who codes enterprise applications for a living can deal with a car's ECU, including ABS, traction control, all of it. The autonomous stuff is "scary AI" but if it's not open sourced, I don't want it.
> The most interesting thing about self-driving vehicles is that there is a legitimate reason people should never be able to modify the firmware on their vehicles. The risks would be very high.
Tough. We have liability law to cover that.
You'll be screaming to allow modification when your trip is 20 minutes longer because a bunch of rich assholes paid Tesla to route all traffic around their neighborhood.
There is an "appeal to authority" fallacy here. If it's my property, I'll do as I please with it. Risks? Look at the risks we face with DRM of all kinds, and how our rights are reduced and eliminated for the sake of corporate profits.
> But not being able to USE your car for some things? That just screams easy headlines and it's a reason I might not get a Tesla in the future.
It's your car. It's Tesla's software. You don't own it, just like you don't own any of the other commercial software you use.
I can't really find fault in it; they deserve the lion's share of the revenue (not Uber, Lyft, or you) if you put the vehicle into their shared ridesharing fleet for profit.
You don't have to put it into their fleet, though. But if you do, those are the terms.
You know that every car currently has some form of software in its driving/braking system, right?
What's next?
A blender has some kind of timer software in it, so you can't use it commercially unless you are on the Blender Network?
A TV has software for programmable channels in it, so you can't use it in your bar unless the TV vendor allows it?
Software is their implementation detail, I buy the resulting whole product.
Then you will have to wait until somebody makes a product that suits your needs. Nobody forces you to buy a car where the autopilot cannot be used for commercial purposes.
Sounds like a good reason to look at changing contract law. I'm wondering what kind of precedent there has been in purchasing products but being forced to agree to a license to actually use the product. My uneducated perspective is that this hasn't been an area of concern until general-purpose computers appeared on the scene.
Edit: It's worth clarifying my opinion. I'm totally alright with software "manufacturers" leasing software to end-users (or, in the case of Tesla, leasing the software inside their rolling computers to drivers). What I take issue with is the notion that there is a "sale" when a license is involved. It's a lease, and I'd like to see the force of law involved in reflecting that fact in the marketplace. Maybe it's splitting hairs, but I think consumers need to be clearly told that they do not own virtually anything that they've purchased that contains a general-purpose computer, and that they should not expect the benefits of ownership (ability to modify, to continue using even when the "manufacturer" has decided to no longer "support" the product, ability to resell, etc) in such a transaction.
I don't really buy into the software licensing bs; it was forced on us without any choice. It is the worst kind of legal bs there is. "You paid for this but you have no rights to it." Uh, yeah, keep thinking that.
I'm pretty sure if first sale doctrine applied to software, almost nothing would change. We're not talking about getting rid of copyright, just the idea that this thing that acts almost exactly like a purchase is not 'really' a purchase.
> And if you're in the tech industry, that "legal bs" is what allows you to have an income.
So what you’re saying is that just because his salary depends upon his not understanding the arguments for this to be “legal bs”, he therefore should voluntarily choose to abstain from understanding them? Upton Sinclair would turn in his grave.
Presumably you're aware that comparing one thing to another does not imply that the GP thinks they are anywhere near equivalent. The common meaning of the word "comparable" is not "can be compared", but "roughly equivalent". You are acting as though GP considers them comparable-in-the-second-sense; GP's post actually claims them to be comparable-in-the-first-sense.
A Tesla is not forced upon you, however. If you're wary of the terms, do not buy a Tesla car.
Software has, by the way, always been this way. I've been "buying software" for twenty years, and I've only ever gotten a license with it, never the software itself with all ownership rights conferred to me.
You can use the car all you want. Go drive for Lyft or Uber. You just can't use the self-driving functionality that way. If you want your car to be making you money by driving people around automatically without you in it, you need to do so using the Tesla Network.
Honestly, this doesn't seem all that unreasonable. Self-driving functionality is presumably a piece of functionality that's heavily dependent upon lots of data that Tesla owns, dependent upon regularly getting software update, etc. Why should Tesla give its data and continued work on software upgrades to you for the purposes of using that for the benefit of one of Tesla's competitors?
Just face it. If you're buying a Tesla, you're not just buying a car, you're buying a software platform and access to lots of data and engineering work that Tesla provides on an ongoing basis. If you want just a car, then don't buy Tesla.
That said, if you are looking to buy a self-driving car and using that self-driving functionality for Uber or Lyft, good luck figuring out how. It would not surprise me if other self-driving cars had similar restrictions (though I guess if Uber sells self-driving cars to the general public then presumably you could use those with Uber, but probably not with Lyft).
It's not unprecedented to have severe restrictions on software in the EU. E.g. you can't use the Windows you legally obtained in France on two PCs simultaneously. Some vendors charge per-CPU/core licenses; some network software (e.g. certain PABXes) is even licensed by the number of connected endpoints.
That is probably because in order to install Windows on even one PC (let alone two), you have to make a copy of it. Even if an installation is (somehow) not counted as copying, two identical installations is clearly implying a copy has been made somehow. And copyright means you don’t have permission to copy the software. Therefore, you have to have permission to copy it. Another word for permission is “license”, and the license explicitly says under which conditions you are allowed to copy it.
A car, on the other hand, is not copied after purchase, so a copyright license is not relevant. A “license” (as stated above) is not a general list of conditions you have to agree on; it is simply a list of permissions for you to do what copyright law (and trademark law, etc.) would otherwise not allow you to do. No such law restricts what a buyer is allowed to do with a car after a purchase is made, unless a contract is signed. A contract can state arbitrary restrictions, but for something to be a contract, it has to fulfill a number of conditions: it must be knowingly and willingly entered into by both parties, it must not be unilateral or even, in some jurisdictions, too beneficial to only one party. Clearly your standard so-called “license agreement” does not qualify. Ideally, and usually, a contract is something on paper which both parties sign. Do you sign such a thing when “buying” a Tesla? If so, does this contract allow resale? Does the contract require the reseller to require the second-hand buyer to also sign it?
There is however a business trick to get around this "limitation" of copyright, and that is to use a "server" which the car calls home to (say once a week/month/year), or simply pulls data from on a regular basis. By restricting "access to the server", Tesla can impose any limit they want, including restrictions on the aftermarket. Owners would also not be allowed to fix their own car to skip the check or talk to a third-party server, as that would run afoul of the DMCA's anti-circumvention provisions (which are part of US copyright law).
Facebook, Google, Amazon, and every other huge company are huge because of their monopolies. Tesla would be making a massive mistake to not start walling in their garden.
Until it's tested in court, the legality of that restriction is highly questionable. (Which is not to say that you can ignore it with impunity. If you don't have the money for a multi-year lawsuit against a billionaire, you might not want to violate it.)
Presuming that such "EULAs for cars" are not legally binding, the next step will be cars that simply will not do the "forbidden" action, because their software detects and prevents it. Thence comes the next round of lawsuits...
It's different because the self-driving doesn't entirely happen within the vehicle. Tesla maintains a road database to base driving decisions off of and it's assumed you obtain that information from the network.
> Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar. The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.
So, you will likely be accessing a service to use the self-driving features. At that point, I think it's technically within Tesla's rights to dictate how that service is used.
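The whitelist logic quoted above is simple to picture as code: count safe passes per geocoded radar object, and only stop braking for it once enough cars have vetted it. The threshold and geohash keys below are invented for illustration:

```python
from collections import defaultdict

# Fleet-sourced "geocoded whitelist": a stationary radar return is only
# whitelisted (treated as a harmless overhead sign or bridge) after
# enough cars have passed it safely.
SAFE_PASSES_REQUIRED = 5
pass_counts = defaultdict(int)

def report_safe_pass(geohash):
    """A car drove past the radar object at this location without incident."""
    pass_counts[geohash] += 1

def should_brake_for(geohash):
    """Brake unless the fleet has vetted this stationary return."""
    return pass_counts[geohash] < SAFE_PASSES_REQUIRED

for _ in range(5):
    report_safe_pass("9q8yy")      # an overpass many cars clear safely
print(should_brake_for("9q8yy"))   # -> False: whitelisted
print(should_brake_for("9q8zz"))   # -> True: unvetted object, stay cautious
```

The interesting design question is the one raised downthread: a pure pass-count rule encodes whatever the fleet actually does, not what it should do, so a real system would presumably weight events and keep object classes out of the whitelist entirely.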
It might be within their rights, but still a big issue and something that IMO shouldn't be within their rights.
I can understand (and fully agree with) limiting usage that can hurt them (like you can't use their object DB in your own non-Tesla car, and you can't use it as free storage for images or something), but limiting how you USE the system in a way that it's meant to be used?
I've only ever seen this kind of stuff in shitty B2B programs, and even there it was a big enough issue to deter us. This is a big deal and could really open the floodgates to some really shitty practices.
> If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.
Slightly exaggerating, of course: this means that if a few locals routinely blast through a given stop sign, your future Tesla will happily do the same. So much for better safety through automated driving.
All those years marketing departments have preached about the "personality" of car brands; now we might finally get it: an aggregation of the driving patterns of the worst of the brand's customers (if it were strictly a whitelist algorithm, which I sure hope it is not).
Cue Stallman. This is precisely the motivation for GNU software. How long before GNU (or another proxy) has an autonomous-driving project? Part of the issue is that the dataset is as important as the software. The first step would seem to be the construction of a "GNU" hardware pack for autonomous driving, with people driving around with it mounted on their cars, gathering (GPL'd) data.
"They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."
This seems like an appropriate quote here. If we want to protect our rights we should not settle for conveniences. The software world in the '90s is a good example of what happens.
> "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."
Being able to read and modify your car's software is hardly an "essential liberty". I'm pretty sure the source of this quote had things like freedom of speech and non-oppression in mind, not copyleft.
People who sanctimoniously insist that other people are engaging in the wrong trade-off between liberty and safety and thus deserve to lose their liberty and safety are the worst enemies of liberty and safety.
One company does something to fuck over consumers and the others hold their breath for a minute to see what happens, then stampede to do it too.
It's why we have binding arbitration agreements and no right to class action lawsuits in EULAs for game consoles.
It's why we have every company around trying to extract their 30% cut by sticking themselves in as a middle man.
It's why we will end up losing the right to use our cars as we see fit. I don't buy the argument that it's a "software platform" now. The amount of software in cars has been increasing for decades, and this being a bit smarter doesn't suddenly mean Tesla should get to take this step.
They need to be stopped. It's not some Stallman-esque FOSS zealotry, it's just wanting to preserve the basic concept that I own my car.
Tesla is not even assured of getting self-driving approved in lots of jurisdictions. It's going to be a long uphill battle to get it accepted. They are aware that even well-intentioned owners and modders could get into all sorts of trouble, and that the blame would fall on Tesla and self-driving cars generally, potentially doing great damage to their prospects. They're already under scrutiny.
If you buy a self-driving Tesla in the future, you do own it. Drive it anywhere you like. But the software is under license and may not be used for certain things. Demanding that they open up their software to interfacing with who-knows-what is to demand that they introduce new security measures and functionalities, dependent on unpredictable third parties with unknown properties.
Also, there's an easy way to avoid not feeling like you own your Tesla: don't buy a Tesla. I mean, as of now you really have to go out of your way to get one - so if they're "fucking over consumers" so bad with this, why would you buy it? This isn't something they're springing on you after a purchase, they're warning you years in advance.
> Also, there's an easy way to avoid not feeling like you own your Tesla: don't buy a Tesla. I mean, as of now you really have to go out of your way to get one - so if they're "fucking over consumers" so bad with this, why would you buy it? This isn't something they're springing on you after a purchase, they're warning you years in advance.
I won't, but did you read what I posted? Allowing this to pass unchallenged will hurt everyone.
But the way you challenge it is by not buying into it. The policy is restrictive and perhaps a bit paranoid, sure, but that doesn't mean that it needs to be made illegal. Tesla can't force anyone to buy its cars and if this is such a poor value proposition for most people, then I don't see how it'll succeed. They don't have a monopoly on anything they're doing.
The last company I worked for negotiated EULAs with contract development Customers. The Customer's price varied according to the rights they requested.
It's certainly the case that most off-the-shelf EULAs aren't open to negotiation because the Customer is often inconsequentially small to the software "manufacturer". Larger Customers (particularly governments) certainly negotiate for license terms with major software "manufacturers", however.
You're right. I was thinking of EULA as just covering the license terms that end users have to accept before using software, which are not a negotiation, as opposed to special licensing terms that are negotiated (which I previously would have considered to just be "a contract"). But after perusing the wikipedia page on EULA, it does appear that specially-negotiated license terms are still a EULA, as EULA really just means a software license agreement.
Is there any commercial software that lacks an EULA? Not having one would, I think, expose the company to risks that any for-profit company is incentivized to minimize (i.e., it wouldn't make sense not to have one).
I expect, and one of the lawyers reading can probably clarify, that Tesla's liability and risk changes if you're using the car for a commercial venture with their blessing. This specifically says you can't do that, so you're in violation of the terms of service and presumably that is enough to shield them.
If Tesla isn't confident enough in their self-driving service to accept the liability then I doubt it's good enough to be approved for road-use anyway. It sounds like they're trying to set a precedent to kill competition before it starts. Hopefully other self-driving car manufacturers force their hand or kill their business.
Perhaps, but it is not uncommon, for example, for chip manufacturers to specifically say their parts cannot be used in safety systems without their express written authorization. They are certainly confident their chips work, and they sell them by the bucketful, but if someone dies as a result of using a product their chips are in, they don't want to be liable. They use those disclaimers to shift liability to the person who designed the chips in.
There is a funny example of such a disclaimer in the Java license agreement, it says:
"You acknowledge that Software is not designed, licensed or intended for use in the design, construction, operation or maintenance of any nuclear facility"
I remember reading that line a few years ago (when I was new to computers, and actually read EULAs), and wondering how in the world someone would use a music player to design nuclear weapons?
Note that they say “using a self-driving Tesla”, not “using a Tesla”. Perhaps it means that they don't want you to use the Tesla in its self-driving mode for revenue purposes but it's still fine to use the Tesla for such purposes if you drive it without using the self-driving features.
From an economic perspective there is no difference between a product that can't do X and a product that could do X but has the capability disabled or limited in some way.
Implementing logic in an ASIC, which cannot be modified, and implementing logic on an embedded processor, where the software could be modified but the feature is disabled, amount to the same thing. Both are intended to be black boxes that do one thing and nothing else.
Yet for some reason you expect to be able to modify the embedded processor.
"You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you're at work or on vacation"
That wouldn't make much sense. Unless you were negligent in some way (modifications, maintenance not done, etc.), I would expect Tesla to be the liable party. The owner is just using a Tesla feature in the way the company intended.
A significant incident will occur shortly after the network is switched on and Tesla's reaction to that incident will affirm his optimism... Or it will cause their whole business model to fail overnight. Either way.
Do you think that a) Tesla may want to enter that market on their own, or b) Tesla is worried that use in that market may sully their reputation if problems arise?
From their site: "doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year"
This does not leave much room for speculation regarding a), this is basically an announcement of intending to introduce "the Tesla Network" as a player in that market.
Then they threaten to sue you for industrial espionage; you don't own the car, only a license to use it (there was a previous HN discussion with someone who got that threatening phone call).
How interesting. So upcoming Tesla drivers temporarily won't get the fancy features.
For a time, new Tesla buyers again become early adopters. But unlike traditional early adopters, who take a trade-off (on price, or features, or polish) for being first, these adopters are promised the features when they are ready.
The nay-saying around Tesla is immense, even in these early HN comments. Obviously there's some risk here, but man. Tesla is sowing the seeds of the future.
You have to be kidding. The Tesla cheer-leading is immense. This is a competitive space within which Tesla might be a leader, but you make it sound as if everyone else around them is stagnant.
The majors are building this stuff, the large tech companies are building this stuff, and even big component providers are building this stuff. This is obvious if you look outside of the SV echo chamber.
I think it's great that they're taking the big risks needed to move the industry forward, and not letting the lawyers run the company.
But some of the risks they're taking are just completely stupid and unnecessary, such as calling their assisted cruise control "Autopilot." That's like waving the proverbial cape at the proverbial bull.
Pilot here. Airplanes have autopilot. While on autopilot you are still expected to be monitoring the flight and to take control at a moment's notice. Autopilot does not allow the pilot to take a nap or go in the back and party with the flight attendants.
Autopilot is the correct term. I'm tired of always having to cater to the lowest common denominator.
Pilot here as well. Unfortunately, it's a fight that won't be easy to win: if, in people's minds, Autopilot means self-driving, it's going to be hard to change the global perception.
When I'm using Autopilot, I feel like I'm in my airplane: always on guard, practicing recovery from an unusual attitude in flight.
There are a few times also where being an engineer has helped a lot. You can pretty much predict the situations where the car will run into issues and conflicts ahead of time, just by looking at the scenario in front of you.
It's something that the majority of population is not trained to do.
Now having said that, I used to own a BMW X5 with Adaptive Cruise Control, and that car was also doing some pretty stupid things at times, especially when cars in front were moving out of the way. The car would suddenly think the way was clear and floor the accelerator...
I have a 2016 RAV4 SE. Great car, sonar and cameras everywhere, but still very stupid driving at times. I don't mind it, because it keeps me alert to take over at a moment's notice.
I really want to hack the CAN bus and give it more smarts. What is the legality of that?
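For the curious, the read-only half of that is quite approachable: on Linux, SocketCAN exposes each CAN frame as a fixed 16-byte struct, and decoding one takes a few lines. A minimal sketch in Python (the 0x123 arbitration ID and the payload bytes are made-up illustrative values, not real RAV4 signals):

```python
import struct

# SocketCAN wire format: 32-bit ID, 1-byte data length, 3 pad bytes, 8 data bytes
CAN_FRAME = struct.Struct("<IB3x8s")

def decode_frame(raw: bytes):
    """Decode a 16-byte SocketCAN frame into (arbitration_id, payload)."""
    can_id, length, data = CAN_FRAME.unpack(raw)
    # Mask off the EFF/RTR/ERR flag bits to get the bare arbitration ID
    return can_id & 0x1FFFFFFF, data[:length]

# Hypothetical frame: ID 0x123 carrying a two-byte payload
raw = CAN_FRAME.pack(0x123, 2, bytes([0x1A, 0x2B]) + bytes(6))
can_id, payload = decode_frame(raw)
print(hex(can_id), payload.hex())  # 0x123 1a2b
```

Actually injecting frames back onto a moving car's bus is a very different matter, both legally and safety-wise; the sketch above only shows the passive decoding side.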
You are freed from continually making stupid rote inputs to match the speed of the driver in front, allowing you to apportion a bit more focus to the road further ahead.
Further, it makes the bulk of motorway driving less fatiguing. When you're not having to personally intervene to slow down when the driver in front chooses to go a few kays slower than you, it's nowhere near as irritating.
Agree with parent. Unfortunately for your profession "Airplane!" the movie ruined the perception. When I read "autopilot" I think robot guy driving airplane without any intervention meaning I can sleep from takeoff to landing.
>>Autopilot is the correct term. I'm tired of always having to cater to the lowest common denominator.
Your argument fails because the analogy between car drivers and airplane pilots breaks down on several grounds.
a. Pilots of airplanes that provide an autopilot facility have to undergo far more rigorous and stringent training than an ordinary car driver does.
b. For different types of airplane autopilot systems, pilots have to undergo different types of training (you have not mentioned what kind of autopilot you have used, but I doubt that autopilot training on one type of plane automatically certifies you to fly every autopilot-equipped plane out there, even within the same airplane category).
c. What Tesla is doing is, on the one hand, screaming in its adverts that "we are bringing autopilot to cars", while actually shipping just a driver-assistance system; and when that causes accidents, Tesla blames the customers for not understanding what an autopilot is. This is not fair, because Tesla does NOT require the rigorous and stringent training for its drivers that airplane manufacturers and airlines require. So Tesla is playing with the lives not only of its drivers but also of other road users.
You are welcome to counter-argue my points. But all this hoopla about autopilot from Tesla, and the subsequent bad publicity it is going to bring them, is not good for them. People will not be subtle then, because Tesla is not being subtle in its adverts now.
There are no specific requirements on avionics training. As long as you have been endorsed for a specific class and type of aircraft, you can fly one, even if the aircraft used for checkout had a different type of autopilot (or none at all).
Things obviously converge with more complex aircraft, as it would be quite difficult to find, let's say, two Airbus A320s with drastically different stacks, but some aircraft, like the Cessna 172 Skyhawk, have been in production since 1956, with aircraft technology making quite a lot of progress in those 60 years.
If all your training has been done on a 1956 Cessna 172, is it wise to operate a 2016 model without sufficient training? No. Legal? Yes.
> Tesla does NOT put any requirement for more rigorous and stringent training for its drivers
Beyond saying "Don't take your hands off the wheel and be prepared to take over at any moment", what kind of training would you envision them offering beyond the [insert state name] Driver's Handbook? Most states already offer free defensive driving courses as well.
I agree that Tesla is using the term "autopilot" opportunistically to say the least. Whether the term accurately represents the feature they want to promote (imho technically it does, but more on that later) is becoming less relevant.
But I'd like to bring counter-argument to your points (a) and (b): taking away the autopilot feature from both airplanes and cars, an airplane pilot still has to undergo substantially more training than a car driver does, so imho the increased training for pilots compared to vehicle drivers is largely coming from the difficulty in controlling the machine and the more deadly consequence of control-loss to the driver / pilot and to the others.
That being said, the term autopilot is being used opportunistically. There's no way Tesla will back away from the term now, so the next best thing is to warn the user, in every step of the way, that "YOU HAVE TO KEEP YOUR EYES ON THE ROAD IDIOT", to the point that if they can detect your hands not on the wheel a warning sounds.
I do believe the Tesla Autopilot and the autonomous mode in the future will save lives, what we're experiencing is the growing pain till we get there.
>>(a) and (b): taking away the autopilot feature from both airplanes and cars, an airplane pilot still has to undergo substantially more training than a car driver does, so imho the increased training for pilots compared to vehicle drivers is largely coming from the difficulty in controlling the machine and the more deadly consequence of control-loss to the driver / pilot and to the others.
But part of the training that a pilot of an "airplane with autopilot" has to undergo has to do with understanding what the autopilot can and cannot do and what the pilot has to do.
Tesla does not make it mandatory for its drivers to undergo rigorous training on this very feature that it touts so much. Then, as you have pointed out, it opportunistically advertises that very feature, and when the unsuspecting user is caught off guard, Tesla shouts, "Gotcha, you didn't read the fine print. See, it is clearly mentioned here." If the customer then says that it was mentioned among a thousand other things, and in a font that small, Tesla counters, "See, that's your problem, not ours."
If that's what they want then be it so. If that's what is making them happy then be it so.
It may help Tesla win some lawsuits. But they will clearly and surely lose big on customer trust.
>>"YOU HAVE TO KEEP YOUR EYES ON THE ROAD IDIOT"
Btw, it's not fair to call the user an "idiot" when the larger idiocy (not just idiocy but a cruel practical joke on the customers, bordering on criminal activity, as it may come at the cost of customers' lives) is committed by Tesla itself.
Just because your industry has coined a term to mean something different from its literal meaning doesn't make that the proper use of the term. "Auto" implies "automatic"; it doesn't give the average consumer any indication that manual involvement is needed. I get that pilots monitor things when planes are on autopilot. That doesn't mean it's correct for other industries.
Even in the extremely professional environment that is an airplane cockpit, where people are trained to keep watch over the autopilot, they still don't follow procedure:
Over half of pilots admit to sleeping while on autopilot.
These are people who had to take months and years of training to fly.
Meanwhile, to be a driver you need to drive for 15 minutes and do some basic manoeuvres. In some places in the world, not even that. And we hope that those people will "keep an eye on the road" while the autopilot feature is running? It sounds great in theory, but it's not going to happen. We need a system that can operate completely without human supervision, but that isn't happening any time soon either.
The problem is that most consumers, like parent poster, do not really understand what pilots do, or that an autopilot simply holds the course steady (in the simplest form, at least).
Combine ignorance with misguided self-righteousness, and we get...this mess.
I have no involvement with aviation and I thought autopilot was a perfectly good word for it. I thought it communicated well that it basically just tries to maintain what it's doing but can't stop on its own.
Curious, what is "take control at a moment's notice" in seconds? From what I've read about aviation, situations where a really fast reaction to an unusual situation, let's say under 2 seconds, is necessary don't seem very common compared to car traffic. (Excluding takeoffs and landings; even if a plane can do those on autopilot, the pilots will certainly be fully attentive.)
It seems like an airliner flying at altitude has quite large safety margins compared to a car on a highway.
I wouldn't call it larger safety margins. Many if not most of the cases where car drivers should take action are survivable if they don't; that's different for pilots in airplanes.
Pilots typically have more time to react, though. (Astronauts lie even further along this axis: in cases where astronauts can take action to save their lives, they typically have even more time, but in many cases there is no action that will save them.)
As to the "take control at a moments notice": that's where the problem lies.
The arguments in "Ironies of Automation" (https://www.ise.ncsu.edu/nsf_itr/794B/papers/Bainbridge_1983...) such as "By taking away the easy parts of his task, automation can make the difficult parts of the human operator's task more difficult" haven't lost any of their power in the 30+ years since it was published.
Because of that, I doubt the typical driver will be able to handle emergency situations, as it will require frequent training.
Tell that to a pilot who's had an autopilot failure where a servo went crazy all of a sudden. You better act quick, especially in low visibility situations, to turn it off, or pull the circuit breaker.
It might happen a bit slower than in a car, but the consequences can be much more dramatic.
How are the consequences much more dramatic while flying? I can think of plenty of scenarios while driving where you go from perfect autopilot conditions with no visible risks to death of passengers or pedestrians in <5 seconds.
Errm, I'm pretty sure the car equivalent of that would result in the vehicle swerving towards the side of a bridge at 70+ MPH, with very little chance of reacting quickly enough to turn it off. A car on the highway really doesn't have that much in the way of safety margins.
Even if the plane started falling like a rock, you would still have more time to react than the driver of a car where a tyre bursts at 80 mph and the car is now heading towards the oncoming lane or the rock face on the other side. You might literally have less than a few seconds to impact, and I imagine that if you weren't holding the steering wheel at the time, you wouldn't be able to do anything.
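To put the car-side margins in rough numbers (a back-of-the-envelope sketch; the speeds and reaction windows below are illustrative, not measured figures):

```python
# Distance covered during a driver's reaction window at highway speed.
MPH_TO_MS = 0.44704  # metres per second per mph

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Metres travelled before the driver even touches the controls."""
    return speed_mph * MPH_TO_MS * reaction_s

for secs in (1.0, 2.0, 5.0):
    print(f"{secs:.0f} s at 70 mph -> {reaction_distance_m(70, secs):.0f} m")
# 1 s at 70 mph -> 31 m
# 2 s at 70 mph -> 63 m
# 5 s at 70 mph -> 156 m
```

Even a one-second lapse at highway speed covers more than a typical lane's worth of sideways drift, which is the crux of the "margins" argument above.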
> Autopilot is the correct term. I'm tired of always having to cater to the lowest common denominator.
Ask 99 out of 100 people who aren't pilots, but who might be Tesla customers, and see if they can recite what you wrote below:
> While on autopilot you are still expected to be monitoring the flight and take control at a moment's notice. Autopilot does not allow the pilot to take a nap or go in the back and party with the flight attendants.
And how do you know all that? Because you were trained accordingly.
I'm not saying Tesla shouldn't offer the feature in their cars. I'm saying they shouldn't have named it after a feature that people think allows pilots to party with the flight attendants.
Shouldn't what matters be how many people become actual Tesla owners using autopilot without realising this? For that metric, I suspect the answer is much closer to 0.
It's not clear that the crashes have been people who misunderstood the capability of the system. To me, it seems more likely that they got lazy and put a bit too much faith into it.
I'm not saying it's not a problem, just that it's a lot more complicated than what Tesla decided to name the feature. I think the accidents that have occurred would have occurred regardless of what they named it.
You can solo fly at age 16 after ~30 hours in an aircraft, so about the same. Also, many students take drivers ed. Now commercial flying is far more involved, but so is driving big rigs.
Same in the UK. There's no mandatory number of hours lessons you need to accumulate before taking your test; if you (somehow) pass having had no lessons at all you're good to go.
The technology to realize an autopilot in a car and in a plane is very different, though; they might provide similar functionality to the pilot/driver, but in a car it's a much more complex and much less reliable system, involving machine learning etc. On this level the systems are not really comparable.
I see where you're coming from, but I think that's a calculated risk. From a PR perspective it generates buzz and signals their ambition.
If you ask anybody today, "what company is interested in self-driving cars?" Their answer is going to be Tesla. They're not gonna say Apple, or Ford or Volvo or Mercedes or Mobileye or Intel. They're gonna point straight to Tesla.
A big part of that is their "inaccurate" word choice. From a PR perspective, the name was a solid gold decision.
>From a PR perspective, the name was a solid gold decision.
I doubt it. It creates false hopes and probably hurt/hurts them when crashes happen. The Tesla Autopilot death in June was a really big story, yet crashes involving other manufacturers' assistance systems are never in the press. Maybe because of the name, maybe because those drivers use them better?
Also look at what happened recently in Germany:
1. The head of the (German equivalent of the) DMV basically sent a letter to all Tesla owners to RTFM, as it is not in fact an autopilot.
2. The Department of Transportation pushed Tesla to stop using the term "autopilot" in marketing as misleading.
The autopilot in an airplane cannot be used without a pilot paying attention, either, but it is still called an autopilot. (For example, autopilots do not avoid traffic on a collision course.)
This is the worst of both worlds--having people become complacent behind the wheel while a computer drives only to have to suddenly become engaged which you know isn't going to happen. A lot of people nod off while driving as it is. Take away the "you're going to die if you don't pay attention" part and then imagine what happens!
Of course it can be used without a pilot paying attention. But this rarely happens. If we trained drivers the way we train pilots, it wouldn't be a problem in cars, either.
Also, the comparison would be more relevant if you had as much time to correct problems that occur eight feet away from an oncoming lane of 70 MPH traffic, and as many degrees of freedom in which to do so, as you do when something goes wrong with nothing but 10,000 feet of air around you.
Air safety is an intimidating engineering problem, but it's utterly trivial compared to what it will take to build safe self-driving cars. Make no mistake, it's about time we tried... but we just need to not be stupid about it. Calling a glorified cruise control "Autopilot" is stupid.
And while the name is perhaps misleading to the common layperson, your comment that "it is not in fact an autopilot" is flat-out wrong. Perhaps instead: "it is in fact an autopilot, but you don't know what an autopilot is or how it should be used."
You're arguing semantics. The average person will see "autopilot" and experience a few "happy" drives where the car stays in its lane and has no problems. Then they will stop paying attention and die when they are asleep when it "hands over control" at 60 mph due to a failed sensor or something.
Except when they aren't - see Air France 447 for what happens when a driver has to step in after the autopilot can't cope with the situation.
Then enhance the frequency of it happening by three orders of magnitude given that Tesla drivers are generally not professional drivers, let alone trained in the quirks of an autopilot in the beta release stage...
You know that; I know that; the average Joe on the street does not, and instead has a Hollywood interpretation of the term. Tesla knowingly and cynically exploits that.
That previously would have been my answer, but I've realised that Google is interested in working on them, whilst Tesla & Uber are interested in bringing them to market.
> But unlike traditional early adopters, who take a trade-off (on price, or features, or polish) for being first
Actually, in regard to Tesla, they always stated (and were very open about it) that early adopters pay a premium to support research/growth/etc. First with the Roadster, now with the S (not sure about the X). It can be expected that the same extra features (same hardware) on later models will cost less. So the trade-off here is just that: being "first".
This really comes off as a "look over there" move to distract from the recent negative press as various government bodies take Tesla to task, not only for the lack of performance of the current system but also for its ridiculous name.
Sorry, but I seriously doubt they will have sufficient hardware installed to take it to level 4 or 5. It might be a stretch to get to level 3 with anything they can deploy today, simply because no one has demonstrated a real-world working solution for those tiers. Oh, you might be able to demo one on a track or run it on a controlled loop.
>This really comes off as a "look over there" move to distract from the recent negative press as various government bodies take Tesla to task, not only for the lack of performance of the current system but also for its ridiculous name.
So you mean there is no advanced hardware in the new cars?
They've been clearly working on AP 2.0 for a long time - this sort of hardware upgrade isn't something you roll out in a few weeks.
This capability was even pretty strongly hinted at for the Model 3.
Improving the self-driving capabilities has "almost certainly very little" to do with the high profile death caused by the use of the self-driving capabilities? Come on.
1. It is a self driving car, it is so clearly the future, I wish it existed now, it is going to be awesome (in my opinion).
2. Despite knowing about and following news about driverless cars for a while, there was something surprisingly (to me) compelling about watching the video. It's like you get a little taste of the full A to B that it can give you (door to door).
Who wants to speculate how long it will be until self-driving cars are common place in the UK? I need to know how long I have to save..!
"It is a self driving car, it is so clearly the future, I wish it existed now, it is going to be awesome (in my opinion)."
Even now, several years into the current news/development arc of self-driving cars, I still can't believe this is going to be the future.
We're really never going to get European (or Japanese) style train networks? We're really going to keep binging on roads? We, as a nation of fat slobs[1], are really never going to return to a culture of walking?
I always thought of car culture (and we absolutely live in a car culture) as some kind of 20th century aberration ... eventually, LA would get their streetcars and subways[2] back ... eventually Minneapolis and Denver would have 20, not 2 light rail lines ... eventually we'd graduate from shitty bus networks operated badly, for poor people.
Tesla cars are awesome. Successful self driving car software would be an incredible achievement. I worry that there is an arc of societal and urban development that has gone very badly awry and that this will further keep us from fixing it.
[1] That is not hyperbole.
[2] Yes, I know there is currently a tiny, minor subway in LA.
Cars changed the world. There's not really any going back from that. It works in places where the urban population is over 90% (like Japan) but not when urban population is substantially lower (like the US). How would the rural population get to the bus/train station? My grandma's house is half an hour by car to the nearest freeway, how long would she have to walk to get to public transport?
Public transport only works inside of cities, and there's a lot of infrastructure already built to support sprawling metropolitan areas where everyone has 1/4 acre of land surrounding their detached home. There's not a lot of push to tear all that up and forcibly relocate the residents to inner-city apartments.
You know the benefit of electric, self-driving cars, though? An intelligent car isn't going to run you over on your bike. Be happy about that.
I think you can go even further. Public transit is tough to do well in cities that don't have a certain level of density. They're building subways in LA, but the city just isn't designed for it, patterns of movement are too dynamic, and too much of the city is composed of single-family homes, so the last-mile problem is huge. It can only ever have a limited impact at a huge cost. They would be much better off with congestion pricing and dedicated bus lanes on the highways and arterial roads, transitioning these to self-driving when it becomes possible.
Three of the best public transit systems in the world are Berlin, Tokyo, and New York, all amongst the densest large cities in the world (Berlin is almost exclusively five-story apartment buildings). The Berlin subway is largely self-supporting without public subsidy, Tokyo's system is composed of two competing but interoperable systems, the larger of which is privately run, and New York's system was privately run until 1940. The lifestyle, weather, and zoning of the cities where it works kickstarted the construction, and there exists enough demand that they would have great public transit systems even without government initiative.
The number of people in America who live half an hour from the nearest freeway is so small as to be irrelevant. Nobody is suggesting that 0% of trips be taken by car. It's fine if your grandmother drives around. It does not matter in the big picture.
What matters is if the 82% of Americans who live in cities are forced to crawl through sprawling car-choked landscapes, or are able to live in a walkable environment. High reliance on cars prevents the latter.
By the way you were wrong about that Japan vs US comparison as well. USA has a higher proportion of urban residents than does Japan. And they have plenty of trains in Japan.
82% of Americans do not live in cities. Rather, they live in areas classified as urban (which really means not rural) by the census bureau. Here's the definition: "To qualify as an urban area, the territory identified according to criteria must encompass at least 2,500 people, at least 1,500 of which reside outside institutional group quarters."
Yes, I'm guessing that many of those urban areas are within half an hour of a freeway--though this may be less true in the West. But the vast majority of those 82% don't live in places that are remotely walkable.
Oh no, you're wrong. Public transport literally CANNOT work... after all, it didn't work in the USA. It's fundamentally impractical, and countries that make it work are performing magic.
/s obviously.
God I hate the responses on threads like this that insist on the impracticality of public transport worldwide because it doesn't work in the US. It's like mass Stockholm Syndrome or something!
Okay you can just stop that shit right now. No one said it's unfeasible everywhere just because it's unfeasible in the US. If you're sick of hearing about the US, I'm sick of every time someone says something about the US, someone else has to chime in and say "yeah but you're wrong because there are other countries". What you're arguing right now is that cars shouldn't exist in the US because public transport works somewhere else. Can you find the hypocrisy there?
Notice how I even said "it works in places like Japan"? Notice that? See it? Good. Now quit acting like you're being oppressed every time someone mentions the US. Yes, the world is a big place, but guess what? The US exists and a lot of people live there. Deal with it.
Jesus christ it's like I insulted your mother or something.
And income inequality means that people can literally afford to pay for other people to not be near them as they travel.
Busses and trains and other mass transport will return when income inequality is reduced to the point where the amount it costs people to pay for their own space is no longer affordable for them, but if you're earning 500 times the person next to you, you'll happily pay what it takes for them to disappear.
Trains are an artefact of income inequality. Someone can afford to build them and it's not you. Someone has all the leverage over whether you get a ticket to ride and at what price, and it's not you.
I think the future is self-driving public transit. I imagine minivans that perform dynamic route optimization. Instead of a bus that stops every block on a pre-determined route, use an app to input your pickup location and destination and the self-driving van picks you and a few people up from a small radius, and drops people off in a small radius. This will be faster and cheaper than both public transit and even driving yourself (no hunting for parking).
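As a toy illustration of that dispatch idea, the route optimization can be sketched as a greedy nearest-neighbor ordering of stops (the coordinates are hypothetical; a real dispatcher would also enforce pickup-before-drop-off ordering and per-rider detour limits):

```python
import math

def order_stops(van, stops):
    """Greedy nearest-neighbor ordering of pickup/drop-off points.
    This is only a sketch of the idea, not a production routing algorithm."""
    route, here, remaining = [], van, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))  # closest next stop
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

# Hypothetical coordinates (km on a flat grid)
van = (0.0, 0.0)
riders = [(5.0, 1.0), (1.0, 1.0), (2.0, 0.0)]
print(order_stops(van, riders))  # [(1.0, 1.0), (2.0, 0.0), (5.0, 1.0)]
```

Greedy ordering is far from optimal in general (the underlying problem is a vehicle-routing variant), but it captures why a shared van can serve a cluster of nearby riders with much less total driving than a fixed bus route.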
That would be great. One problem though: Self-driving tech may offer a very cheap taxi, so many users would prefer not to use shared vans.
Heck, to a certain extent we're already in that scenario, with some sharing services cheaper than cars (@ridewithvia, @ridechariot, and maybe Uber Pool and Lyft Line), and yet people prefer cars.
Unfortunately, public transit is way less profitable than selling cars to people, so self driving public transit doesn't seem like the top priority, despite having the best net benefit for society
Good point. I have always imagined this to be government subsidized just like public transit is today. I live in Seattle where we're spending billions laying light rail, and I have to wonder if it's all for naught, if self-driving public transit will make rail obsolete.
While self-driving cars will do great things for people such as yourself (and overall I fully support their development/adoption), I also share the fear that they will prevent American urban centers from making a desperately needed shift away from car dependence in favor of better public transit and urban density. I really don't see the idea of every person in the country having their own personal transportation device (or even shared self-driving cars via something like Uber) as a sustainable path for the US to continue down.
The technology for good transportation in/among major cities already exists: walking, biking, mass-transit, and high-speed rail. I wish Americans would be more open to these concepts rather than dismissing them in favor of the form of transit that has yielded nothing but horribly designed and polluted cities.
Actually, this may be a step forward towards better mass transit, if there is one. If I run an autonomous taxi service to pick up and drop off people, and I see that there is money to be made by offering to pick up and drop off multiple people at once, why wouldn't I? The economics of buses are relatively minor compared to investing in and operating trains, so this may very well be a possibility.
A huge part of the urban footprint today is set aside for parking. With self-driving cars, people will use rideshare services and banish parking lots to remote areas outside the city, freeing up more space for parks or density. This technology is also much safer for bikers and walkers. Self-driving will be a benefit for urban areas and supports more walkable cities.
The math in the following image is not going to change regardless of who is driving the car. If you want to have a city, you need to have public transport. If you want to have suburban sprawl, that's another story.
The math changes because the autonomous cars become the public transport. All the cars on the left aren't necessary because they won't sit in a lot all day; they will continue to drive and deliver passengers all day. It creates an option of a private form of public transportation, something that only sort of 'half' exists with Uber, since you obviously still have the human driver factor (who requires pay, health insurance, hours off, can be drunk on the job, etc.), which makes costs/prices go up. If you can automate the driver, rates fall drastically. If you can make the cars electric, you help the planet. It all works nicely together if everything plays out right.
The math changes for how much storage space for cars is needed, but not how much space is needed for transporting people. Changing a garage to something more useful is not exactly helping traffic jams.
Self driving cars change what roads mean. Once we get to full fledged self driving taxi culture, cars won't park. Car sharing will be common at the cheaper end. So it becomes a lot more like busses. Much smaller fleet on the road. I expect that roads will get pedestrianized, returning them to common usage, because a Google car will calmly stop if a child is kicking a ball in the road.
Trains are intrinsically unequal: they serve hub towns at best and cities only at worst; they are dirty, cramped, noisy, and lacking in privacy; and they don't go when or where you want to go. Even with Google Maps stitching journeys together out of trains and buses, you end up waiting and walking a lot. And trains are an intrinsic monopoly with a staggering up-front investment; you can't make them better with market forces. I foresee self-driving cars cannibalizing the train's market of people who just want to pay for journeys rather than the vehicle.
Same. I'm really happy that, insofar there's cars, they're good. However, having lived in walkable cities, where the car is basically a specialty tool for moving furniture or going away to the country for the weekend, the idea of it becoming even more enmeshed into daily life is a grim prospect.
There is hope. More and more young urbanites in dense cities are already eschewing car ownership for walking and on-demand rentals. Even European-style public transit can only be so convenient[1], and a fleet of self-driving Ubers would be perfect. I hope that this will mean even fewer people decide to own cars and that car ownership becomes increasingly uncommon.
[1] public transit is great for when you have a lot of time and aren't carrying a lot of things. Doing, say, your weekly groceries by bus or walking is annoying af. Source: living in a European city without owning a car.
PS: I live in the US now and not having a car is much more convenient than it was in Europe. Uber and zipcar and getaround are magic. We didn't have that in my city
As much as I agree with you on the importance of good public transport, I am hopeful that the "car culture" will evolve into an alternative to the existing public transport systems; Once we have self-driving cars running on clean energy operating in an Uber like sharing economy model, there will be no need for car ownership and the cost of a ride will be low enough for the majority of people to afford.
I understand that this is more difficult to achieve in reality than in ideas but I am hopeful. About health related issues, we will still have to look for other ways to make a healthier society.
Don't compare Europe with Japan with regards to rail usage; they really don't compare. Japan's usage is over half again that of the nearest European nation, Switzerland.
Plus, in many if not most countries, rail simply doesn't go where people need to be. In countries with a choice, cars are winning out, and with autonomous and eventually EV-powered cars it won't be a bad thing.
Train networks and subways are significantly more expensive to build, operate and maintain, even electrified ones. Roads + autonomous electric cars are hard to beat in terms of efficiency, utility and ease of operation.
Roads are indispensable and essential for last mile connectivity. No matter if trains and subways become 10 times more popular they will always exist in addition to the road infrastructure. Thus autonomous electric cars are a huge boon, its just common sense.
> Efficiency of energy use? Moving a 2000kg electric car around for 1 person is never going to be efficient.
That depends on how you define efficiency (doesn't it always?). An electric motor with a good battery, backed by a good power grid (nuclear/renewables), is about the most efficient solution to "one person going from point A to point B" in many ways. I'm in favor of public transport, but even in a European country it can let you down.
Also in the future with electric self-driving cars I'd happily not own a car (I don't now) and just rent one as and when it's needed from a pool of cars, that would take cars off the road compared to now.
Now that we have electric and self-driving cars, your assumptions of what a car is can be drastically altered.
>Space? Roads take up way more space than tracks.
Highways can now be minimised to two tracks the standard width of a vehicle, and vibration reports can tell councils where pothole repair is most needed.
>Moving a 2000kg electric car around for 1 person is never going to be efficient.
Cars are 2000kg in part to protect the driver; as crashes trend to zero, there's no reason they need to be so big.
>Roads even with autonomous cars will never have the throughput of a train-line.
Trains are rarely full. Cars can get smaller, we can mitigate traffic jams.
A single lane of highway doesn't carry more than 2000 cars per hour, and even self-driving won't increase that much. If every car has to wait for someone getting in or out because we're down to one lane, it'll get even worse.
Electric cars are 2000kg because the battery weighs a lot.
Trains get smaller too when it's not rush hour. France regularly runs one carriage trains the size of a bus.
Mitigating and smooth-flowing aren't the same thing. On my way to work I see cars backed up for miles. It doesn't matter whether they're self-driving or not when cars already move bumper to bumper.
A road, even with perfect operation, can only carry about 2000 cars per lane per hour. That is /very easy/ to beat with any other kind of technology.
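For a sense of where that ~2000 figure comes from, it's just the headway arithmetic (the capacity number itself is the commenter's claim, not mine):

```python
# Back-of-the-envelope: 2000 cars per lane per hour works out to an
# average headway of 3600 s / 2000 = 1.8 s between successive vehicles,
# roughly the following distance human drivers keep at speed.
cars_per_hour = 2000
headway_s = 3600 / cars_per_hour
print(headway_s)  # 1.8
```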
I'd like to see a self-driving car navigate London's narrow streets with cars both sides and the negotiation that goes on between drivers about who goes first.
I'm impressed with this self-driving on California's spacious streets in perfect weather. I'll be really impressed when I see a self-driving car go down a London street in pouring rain, realize it needs to allow someone to come from the other direction, and reverse and move to the left to let them through.
These are problems which are actually improved as more cars move to autonomous driving. A sensible government would mandate that manufacturers must upload anonymised data about each car in real time, which would effectively allow for cross-talk between cars. This could enable automatic remedies to the above situations (car X tells car Y "I've got more room behind me, let me reverse"), but also prevent them in the first place: car X might pull over because it knows car Y is traversing that street. It would also help to prevent traffic jams, which would be a nice orthogonal benefit.
I see this pro cross-talk argument all the time... but wouldn't that leave open a big vulnerability for bad actors to send false data between cars and, if they should choose to, control multiple vehicles and cause collisions?
I would hope that the vehicles trust their own sensors before trusting the claims of other cars. The most a car should be able to lie convincingly about is its intentions, since they're unobservable. If car A says "I'm going to let you take this left" and car B tries taking the left and then car A tries to ram them, car B should ideally take evasive action (braking or speeding up), just as it would from a human driver randomly charging an intersection.
Those are emergency situations, though. I'm talking specifically about a model for cooperative decision making between AI wherein the AI has access to data from both sets of sensors (in its most limited example), and is therefore able to make a superior decision to if it had only its own data.
Simple scenario to illustrate this: car X enters a narrow bidirectional road which is lined with cars. There is enough room only for one car to pass at a time, safely. Car Y enters from the other end. Several more cars enter behind car Y (we'll call these Y1, Y2, Y3, Y4).
One car must reverse back down the road from the midway point, in order for any cars to pass through.
Car X and its driver cannot see what is behind car Y, but for car Y to reverse it must rely on Y1, Y2, Y3, and Y4 reversing. In order for this to happen, Y4 must first reverse despite not being able to see why it needs to reverse (either through meat or tech sensors).
The optimal solution is that sensor data reviewed in the aggregate by each car leads to a cooperative decision that car X should reverse until it can pull over, and allow the other cars to pass before proceeding.
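The cooperative rule described above can be reduced to something very simple once both sides share sensor data; this is an illustrative sketch of my own, not any real protocol:

```python
# Illustrative only: with pooled sensor data, both sides can run the
# same deterministic rule and agree on who reverses, even though
# neither driver can see past the car facing them.
def who_reverses(cars_side_x, cars_side_y):
    """Return which side backs up: the one with fewer queued cars."""
    return "X" if cars_side_x <= cars_side_y else "Y"

# The scenario above: car X alone vs. car Y with Y1..Y4 behind it.
print(who_reverses(1, 5))  # X
```

The point isn't the rule itself (real systems would weigh distance to the nearest pull-over spot, etc.); it's that a shared view of the data lets both sides compute the same answer without negotiation.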
In an emergency situation it's also possible to envisage scenarios in which shared data analysis and cooperative decision making are optimal. For example, consider cars X and Y now destined for a high speed, head-on collision. The right lane of the road (car X's right, car Y's left) is clear and the left lane (car X's left, car Y's right) is a deep trench. A primitive or non collaborative AI might suggest that both cars swerve to the same direction to avoid collision, which results in a collision of similar magnitude. A better solution than swerving the two cars into each other might be to swerve one into the ditch and one into the road. The optimal solution is likely to swerve only one car and hard stop the other, knowing that the other car is going to move its direction of travel significantly. This can only be done by giving the cars the ability to make decisions collaboratively.
Those are valid concerns. For the emergency scenario, I have to wonder if it is possible to never trust another car to the point where it can induce a worse outcome than an independent estimate could produce. For instance, if the OtherCar says "Don't worry, I'll swerve into the ditch, just keep driving" and both MyCar and OtherCar keep driving, we both die. So maybe MyCar says "he said he'd swerve, but I'm going to hard-brake instead of driving", because impacting a ditch at speed x is roughly similar to getting hit while stopped at speed x, and that's the worst-case scenario for braking, whereas both cars moving at speed x toward each other is worse.
Not as tight an example as yours I'm afraid, but humans make cooperative decisions all the time where we use information and continue to be suspicious of it and hedge against lies. I think any realistic cooperative tech system where there are untrusted components needs to stay skeptical as well.
I completely understand what you mean and agree it's a valid concern.
I'm probably super naïve, but I think there's probably a way of digitally signing hardware components so that we can trust sensor data from other cars.
If cars upload their data, as well as data about other cars detected in their environment (e.g. positions of other detected cars), we could detect the suppliers of false or incorrect data by comparing data from multiple cars in each situation. It's not perfect, but could be a step in the right direction.
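A minimal sketch of the consistency check described, assuming each car reports its own position and the positions it observed for its neighbors (the tolerance and majority rule are assumptions for illustration):

```python
# Sketch: compare a car's self-reported position with the positions
# that nearby cars independently observed for it, and flag the car
# when a majority of observers disagree beyond a tolerance.
from math import hypot

def is_suspect(self_report, observations, tolerance=2.0):
    """self_report: (x, y); observations: list of (x, y) seen by
    other cars; tolerance in meters (assumed)."""
    errors = [hypot(self_report[0] - ox, self_report[1] - oy)
              for ox, oy in observations]
    disagreeing = sum(1 for e in errors if e > tolerance)
    return disagreeing > len(errors) / 2

# Honest car: observers roughly agree with its report.
print(is_suspect((0, 0), [(0.5, 0.2), (0.3, -0.1)]))      # False
# Lying car: two of three observers place it ~14 m away.
print(is_suspect((0, 0), [(10, 10), (11, 9), (0.1, 0)]))  # True
```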
> I see this pro online banking argument all the time... but wouldn't that leave open a big vulnerability for bad actors to steal data and, if they should choose to, control multiple bank accounts and cause fraud?
It's part of the growing myth that is "self driving cars".
People seem to hand-wave and make up what these supposed future self driving cars will do - and worse, they hand-wave and assert they'll be objectively better at X than humans, without any evidence to back up the assertion.
Self driving cars are made by fallible humans using fallible programming languages and constructs. They can't possibly account for every situation or scenario - but people hand-wave and say it magically will.
Sure, one day you'll be able to sleep in the back seat of your car or read a book while it precisely weaves you between traffic only to navigate you right off a cliff. Or the neighbor's kid with a laser pointer prevents your car from turning into the driveway.
> they'll be objectively better at X than humans, without any evidence to back up the assertion.
Google's road tested self driving car is already safer[1] than a human driving. Suggesting that a computer will be a more reliable processor of data and computer of maths than a human is not something which needs data to back it up. The ability of drivers is so variable and in the aggregate there are so many of them that it's almost self-evident that a self driving car which crosses a very low threshold for entry ("being road legal") will be better than a human.
> They can't possibly account for every situation or scenario - but people hand-wave and say it magically will.
Nobody is saying that they will any more than people argue that autopilot on a plane will. It's very plain to see that right now, as of this second, there is a self-driving car which is safer than a human driver. It is not yet legal to buy, but it doesn't change the fact that it's safer. It may be that a bug crops up which kills a few people. But that doesn't make it less safe, it makes the cause of death for some of the users different to "human error".
The important question then becomes - is society OK with bugs and shortcomings in software and hardware killing people? (this is based on the assumption that even driverless cars will not be perfect, some people will still die on the road)
So far, society seems to not be OK with this (as-in we'd rather a person do the killing, even if we think that killing was wrongful).
We aren't OK with autonomous robots having weapons, even though they might be objectively better at guarding prisoners, military bases, killing "bad guys" in bank robberies, etc. We freak out when a fatality occurs at an automotive plant, and those robots only pivot in place!
If society is going to agree we're all OK with a bug left by some short-sighted engineer being responsible for people's deaths - then OK. However, I wager people aren't really OK with this, most just haven't really considered this aspect yet.
A lot of the backlash against autonomous weapon systems is fed by the last 50 years of sci-fi movies showing what might happen (however unrealistic); self-driving cars are a different thing, and there isn't really an equivalence.
Sure there will be legal issues (in a crash who is responsible, the driver, the manufacturer or the programmers) but they will get resolved with time and case law.
The economic advantages of self-driving cars are huge (unless you drive for a living, but then progress is what it is). 35,000 people a year die on American roads; an order-of-magnitude improvement would save ~32,000 lives a year (and that's just accidents resulting in fatalities; many, many more people suffer life-changing injuries). This generation of drivers might not like it, but as the cars get better and better at driving themselves, the next generation will hand over more and more of the responsibilities, until a human driving a car manually on the road will look like an anachronism.
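Taking the comment's numbers at face value, the ~32,000 figure is just this:

```python
# An order-of-magnitude improvement leaves ~10% of today's fatalities.
deaths_today = 35_000
deaths_after = deaths_today / 10
print(deaths_today - deaths_after)  # 31500.0, i.e. roughly 32,000 lives/year
```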
Also people aren't ever going to be happy with a bug in hardware or software killing someone but we are currently 'happy' with allowing tens of thousands of people to die from car accidents, if the motorcar had been invented in 2000 many people would have wanted to ban it immediately.
"You want to operate a 2500KG metal box at 40mph in proximity to people!? oh hell no!"
There is no reasonable argument for preferring that more people should die as long as the agents of their deaths are the kinds of biological organisms we're used to. What we happen to be already accustomed to has no relevance in determining what we ought to do in the future, except in trivial cases where the different alternatives don't lead to widely distinct numbers of casualties.
Why make it centralized? I'm generally suspicious of "anonymized" datasets (in the abstract), as any usable data is probably enough to de-anonymize someone given a couple of other pieces of data. Forgive me if you didn't mean a centralized system, but I took "upload" and "anonymise" to suggest that.
In the case of cars, some radio comms (UHF, Wi-Fi, or even Bluetooth?) are probably sufficient, since there is no reason for a car in New York to care about the opinions of a car in San Francisco. You'd probably even see performance gains under a distributed system, since latency is effectively taken out of the equation (time of flight for local radio being effectively instantaneous).
Don't worry. It's not you, it's an anonymous person who leaves your house every morning and comes back after working in the same place you do. Nobody could ever associate that with you.
I agree, it would be interesting to see. I'd also like to see it handle driving in both a heavy snow winter, like Calgary, and also a muddy brown snow winter, like in Ottawa. I feel like for a long time self driving cars won't be able to handle unplowed streets and will have to get the human to do anything truly difficult.
One issue with self driving cars in snow climates, at least in the area I live in, is that in addition to the issues that come with snow covered roads in the winter, the painted lines are often mostly or completely faded come spring and are not fully repainted until months after the snow has melted.
I was driving on a track like this a few weeks ago, where grass and branches were sticking out onto the road, and the collision prevention assist on my car was freaking out because it was sensing an imminent collision from all of its sensors. Not to mention that there was nowhere for two vehicles to pass; if I met someone else, either I or they would have to reverse a significant distance.
The situation I'm keen to see is how self driving cars deal with single track roads with passing spaces (which are pretty common in the wilder parts of the UK, e.g. Scottish Highlands) - if there is a conflict the general rule used by locals is that the car that can most easily reverse back to a parking space does so.
NB I'm sure "proper" behaviour can be programmed - I'm just keen to see it! (And who waves if it is a self-driving car?)
> Maybe it will have technology to not go on that street until it is good to do so.
Have you driven in London? (Or other comparably busy and narrow city - even NYC at least has 'American-width' roads.)
It's not technology that's required, barring perhaps flying car technology.
If every way is busy, you can't just wait eternally, you have to communicate with other drivers as parent commenter said, maybe let a couple of people go, and then realise when you just need to go for it and 'force' someone to wait for you if they're not otherwise going to.
It will be really awesome to see a car do that safely and autonomously.
There'll come a time, perhaps, when two cars are flashing their headlights at each other to communicate, for no real reason other than "that's how human-driven cars did it in the old days"!
"Common" as in you get to see some every day and people don't bat an eyelid anymore? Or "common" as in being the majority of vehicles? My bet on the first one would be like 10-15 years; the other, 30+.
It's going to be slow until it happens in a rush, I think. There will be a tipping point where enough cars are self driving that it stops being a geeky add-on and starts being seen as a moral necessity. Roughly at the point where government statistics of manual versus self driving motor accident rates gain enough data to be comparable.
There will be a huge public to-and-fro about negligence versus freedom, with driving manual being the new driving drunk, but the insurance premiums will settle it.
Yeah, once the technology stabilizes and the legal and regulatory frameworks have been established, there will be a big wave of adoption.
I predict that many in bigger cities will give up on owning their own car, and join car sharing services.
Then you can choose between a private car just for you, or a shared ride where the car optimizes routes and picks up multiple people.
Other benefits: it's just you? take a small 2 person car. On the way to a party with some friends? Get a nice Mercedes to drive you there. Need to transport some furniture? Get a pickup.
Of course this only holds for cities and densely populated suburban regions. Or if you don't have to drive long routes on a regular basis.
I think it's going to be a war of lobbies, with "traditional" automotive companies spreading FUD, and insurance companies pressuring for faster adoption.
It's worth mentioning - which I already hinted at in another comment - that self-driving cars, awesome as this is, will be yet another area where our privacy gets compromised, same as with mobile phones. This will push governments to favour it in the long run and eventually outlaw "regular" cars altogether as too anarchic (or make using them so expensive and cumbersome that it's going to amount to the same thing).
I doubt traditional auto companies will spread FUD at first, they are gonna see this as the next shiny new thing. This is their tape -> CD moment, everyone needs to replace their hardware, it's a bonanza.
When they truly understand that it's going to turn them all into taxi makers, that owning a car is completely passé, they'll panic. That will be their DVD -> streaming moment, but it will already be too late.
>> When they truly understand that it's going to turn them all into taxi makers
I'm sure they understand that. Some are even making moves in that direction, like Ford acquiring Chariot, the shared-ride provider, and Mercedes collaborating with Via.
> Who wants to speculate how long it will be until self-driving cars are common place in the UK?
I think we can apply the Pareto principle here. The car seems to work surprisingly well in ideal conditions, but probably doesn't in less-than-ideal ones (let's see it operating in a snowstorm!). So, let's say that 80% of the work is done, leaving 20% to fix all the little edge cases that are bound to pop up in the real world.
According to Wikipedia, the first truly autonomous cars started to appear in the 1980s. Let us round that off to 30 years of development. According to the principle, that means 30 years represents 20% of the time.
Thus, 120 more years before they become available. If the ownership model persists, I'd add another 5 years to let people replace their existing vehicles to reach common status. If the shared fleet model takes hold, as many suggest it will, then that number may be reduced somewhat.
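Spelling out the (tongue-in-cheek) arithmetic from the comment's own assumptions:

```python
# If 30 years of development so far represents only 20% of the total
# time, then the total is 30 / 0.20 = 150 years, leaving 120 to go.
years_so_far = 30
fraction_done = 0.20
total_years = years_so_far / fraction_done
print(total_years - years_so_far)  # ~120 more years
```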
Sure, this can probably account for 80% of the general use cases, and surely the remaining 20% are the highly complex, low-occurrence cases.
But I think saying this took 30 years is cheating a bit. There were fits and starts and long periods of no development until breakthroughs in other fields (machine learning) occurred, which can now get us the rest of the way. Not to mention, the Pareto principle doesn't account for Moore's law.
That assumes that the remaining problems are best solved using our current machine learning techniques. It may be that it takes several additional long periods to find the breakthroughs necessary to finally solve the remaining problems. 100 years can go by quite fast.
I'm afraid it will be the same as image recognition technology. It's "easy" to get good image recognition that works in 80-90% of cases, but it still fails at the easiest edge cases (the best image recognition in the world fails at telling the difference between a zebra and a sofa in zebra print). So I think we will go through a few years of manufacturers making cars which can drive themselves in perfect weather, but we won't see a car that can drive in all conditions for at least a century.
So soon, we'll all be driven autonomously 80% of the time, and 20% of the time, some guy will drive. At that ratio, it seems on-demand transportation wins over cars, and the end result is the same - we don't drive.
> Who wants to speculate how long it will be until self-driving cars are common place in the UK?
And it should be able to automatically detect that it's in the UK and drive on the left side of the road... I'm curious - can you even get a Tesla in the UK (or Japan or Australia, etc.), and do they make a right-hand-drive model?
> Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.
Not sure what to make of this. New buyers are getting less than current owners now, but expected to get much more later?
I can't think of a precedent for this as a marketing approach in modern consumer products.
Tesla is known for doing things without precedent. They also have the ability to push major updates OTA to their vehicles, which is what enables them to give "less" while promising more. Like the release said, they'll have the hardware, it just needs to be enabled.
I'm totally just guessing, but possibly they still need training data for the new hardware configuration which they will collect from the newly sold cars. They could pay testers to drive them around for a while to generate that data or just roll it out to the public and crowd source the problem. They deliver less functionality at launch but start generating revenue earlier. Most large car companies don't mind deferring the revenue a little after incurring the initial development costs if it means delivering a complete product, but Tesla is operating on tighter margins than big auto.
Not sure why this is downvoted, Tesla won't have the MobileEye system for new models and they want to roll out their own machine vision system, which is even more important now due to the new suite of cameras. That will take some time.
They'd been working on this before Mobileye dropped them. Around a year ago, a car with this exact setup of cameras (using webcams) was spotted charging.
This is one of the most insightful comments here. Too bad it gets drowned in all the rest. Also, if you read the WSJ or other articles they don't mention this either.
Good job by Tesla PR I suppose.
>I can't think of a precedent for this as a marketing approach in modern consumer products.
Apple iOS ;) The iPhone 7 had (don't know if it still has) some camera features (autofocus- and RAW-related) disabled that were present on the 6/6s because of software limitations.
This isn't that uncommon; many times a new product comes with some features disabled or not working because the software hasn't caught up to the hardware yet, for whatever reason.
Can you provide a source? I've followed the iPhone 7 pretty closely, own one, and haven't heard of any of these missing camera features that were present on the 6S (just the headphone jack).
You're mistaken. Yes, portrait mode on the 7+ was introduced in 10.1 on the 7+ rather than at launch. But this feature was never available on the 6. It requires the two cameras in the 7+ to create a depth map and then blur the distance. Nothing to do with autofocus. There's no feature that was removed other than the headphone jack.
Well IIRC the portrait mode was absent completely from the 7+ camera app.
This is analogous to the Tesla example, new hardware no software.
The fact that the 7+ has a different portrait mode doesn't matter; the app had none. And in this case the new Tesla comes with new hardware that provides a different and arguably better "autopilot", but the software will trail the release of the hardware.
Right, but portrait mode didn't exist before iOS 10.1 on the iPhone 7+. They didn't take anything away and then give it back; they simply added that new feature after the iPhone 7+ was released. It wasn't just a standard optical focus feature that the iPhones had previously. It uses software to identify the subject and blur the background, to give the effect of a DSLR camera shooting a portrait photo at a low aperture setting.
Happens quite often. Hardware is updated but software is not ready. Similar approach happens with digital cameras, graphics cards, electronics equipment.
I wonder how much it would cost to add hardware for both Autopilot v1.0 and v2.0 for the gap models, so new customers don't have to make the sacrifice. Lack of complete Autopilot for a while is fine, but lack of emergency braking increases your life risk.
Perhaps the original iPhone falls in this category. It did less (games, apps) than the competition when it launched, but soon the same product overtook the competition when the app store and API was released.
When the original iPhone was released, there were no "current owners", because the thing wasn't available before it was released (that's kind of what 'released' means).
It seems that they expect to bring back the 1.0 features by December - it looks largely like a matter of getting the system approved - with the enhanced features coming later over time.
From 00:50 to 01:10, why is the car driving in the left lane, when the right lane is clearly not turning? It's strange to see this behaviour as someone living in Germany, where you are supposed to, by default, drive in the right lane if you are not overtaking another car or there is a traffic jam...
EDIT: also, did it turn into the wrong lane at 2:25-2:30? is this a security risk?
At 0:47, there is a car in the right lane, so it makes the right turn and immediately merges into the left lane.
Yes, you're supposed to drive in the right lane when not passing - some localities are more strict about this than others. For example, here in Michigan, there's a fine for driving in the left lane and not passing anyone for...one mile? Two miles? I forget, it's never really used. And you're supposed to "FALL" into the First Available Legal Lane. But in practice, if the second lane is open, people will jump across the first lane for the second - just watch your rearview as you accelerate and make sure the oncoming car wasn't moving over for you.
This is more aggressive driving than I would have expected from an automatic algorithm, but perfectly matches my expectations for the driver of a luxury sport car like a BMW, Audi, or Mercedes - and probably a Tesla, or a compact car driven by a young driver. I would expect a minivan, hybrid, or small commercial vehicle to wait. And I would expect a bus, garbage truck, or semi truck to just pull out and force the oncoming car to merge into the left lane.
Also, it would be a safety risk, not a security risk.
I will reply because I actually know that road. That is Sand Hill Road going toward the I-280 onramps. The car stops at the top of the hill where the Rosewood Hotel is. On the other side of the hill, the right lane is only for traffic taking the I-280 northbound onramp.
Also, the "left lane is only for passing" rule doesn't apply like that in California. Instead the rule is that you cannot be going slower than the traffic behind you or the traffic to your right. If you are, you have to move over. But if you are going faster than the traffic around you (or nobody else is around you), you can stay in the left lane as long as you like.
German living in the US here: this only applies to highways / interstates - not to city streets, which seems to be the case here.
Side note: people are not very strict about that rule here and nobody cares, mainly because you're allowed to overtake other cars on both sides - not just on the left. Isn't that dangerous? Yes, but there's a speed limit of around 65 mph (~105 km/h) almost everywhere, which makes overtaking other cars much easier / safer.
You need to differentiate between highways and local streets. On highways, the keep-right convention is common. On a city street you often can and should pick your lane based on your next turn - driving in the right lane for one block and then veering across two lanes to make a left turn at the next stop sign is no good.
> It's strange to see this behaviour as someone living in Germany, where you are supposed to, by default, drive in the right lane if you are not overtaking another car or there is a traffic jam...
That used to be the norm in the US as well, but it has broken down over the past few decades. Some studies and motorist groups blame the nationwide 55 mph speed limit imposed in the 70s due to oil shortages: slow drivers suddenly felt safe driving in left lanes, and a whole generation has learned to drive like this. [1]
There are laws in many states against impeding traffic in the left lane [2], but they aren't enforced much. You also occasionally see signs on the highway, but my issue with these signs is that they say "Slower traffic keep right" (instead of, e.g., "Stay right except when passing"). I strongly suspect that very few people view themselves as "slow drivers" -- rather, everyone tends to believe they drive the appropriate speed and everyone else is driving too slow or too fast.
In the US, you're "supposed to" drive in the right or middle lanes, using the leftmost lane only to pass. Here's a map of the state laws as of 2010. (It's only necessary on highways and state routes, I think?) http://jalopnik.com/5501615/left-lane-passing-laws-a-state-b...
However, it's not a hard and fast rule, and a lot of people don't follow it. So I have to deal with morons who drive 5mph below the speed limit, in the left lane, on state routes. Every single day. As I drive up and down a decently large hill.
RE: Wrong lane: that Tesla driveway would be confusing to even a human driver. It's definitely not a regular road, so the car didn't understand it.
> also, did it turn into the wrong lane at 2:25-2:30? is this a security risk?
Yes, it's totally a collision risk! I witnessed this happen once in a Costco parking lot: car A made a right turn into the left lane of the crossing road, and the oncoming car B, not paying attention at that moment, hit it head on.
I'm not sure we're talking about the same situation.
But: in a standard right turn at a 4-way intersection in California, where neither street is one-way, when turning right, you must turn into the rightmost lane of the cross street. In particular (barring special things like dual right-turn lanes), you can't swing wide into the second lane of the cross street. By "second lane," I don't mean the lane for opposing traffic (obviously you can't go into that lane) -- I mean the left lane of the two that are going in the direction you are turning.
If you are making a left turn, you can go into either the left lane or the right lane of the cross street.
Truly impressive. I wonder if the Model 3 will also be fitted out with all the sensors and cameras. If yes, I'll definitely get one.
As a German citizen, it really bugs me that Volkswagen is incapable of this kind of innovation. I don't see their roadmap playing out like they plan it, because Tesla might beat them to market hard. I fear German regulation will jump in (again) to help them against Tesla.
Currently, the German government gives out electric vehicle subsidies (~€5k per car), but they are limited to cars less expensive than €60k. At the moment there is very low demand for this subsidy, because everyone who goes EV wants to go Tesla.
>As a German citizen, it really bugs me that Volkswagen is incapable of this kind of innovation.
What innovation? Autonomous vehicles? Volkswagen is working with Stanford researchers and Mobileye (you know, the people who helped build Tesla Autopilot):
I mean, sure, if you don't leave the front page of HackerNews, you'd think that Tesla is literally inventing this stuff in real-time. But believe it or not there are lots of smart people and companies working on autonomous vehicles. They just don't market as well or as often as Tesla. And in some cases, yes, they're behind.
I really like the Nissan Leaf and I love the fact that Nissan is out there as a competitor. But the reason anyone buys a Leaf is that it is for sale already and that the price is only $29,000.
Otherwise it's inferior to the Tesla Model 3 in almost every aspect (not to mention the larger Teslas):
- only 107 mile range vs. 215 miles
- no super chargers or own network, relies on third party generic chargers
- less trunk space
- less acceleration
- no battery swap
- no regular software upgrades over the internet
- no self driving capabilities
- no flat screen controls
When the Tesla Model 3 is out, selling at $35,000, Nissan will have to make serious upgrades to remain a viable alternative.
1. Comparing the range of an actual 'currently being produced' vehicle to the stated range of a vehicle that's not yet in production is disingenuous in my book. Expect the Leaf (v2?) to have at least 25% more range by the time the Model 3 is being produced.
2. While 'more acceleration' sounds nice in theory, using all this power is usually pretty bad for your range.
3. While the battery swap sounds nice, in practice battery degradation hasn't been that bad according to many EV owners I've talked to. Will it actually be necessary? If you're thinking about swapping the battery for a quick 'charge', I'm not sure that's going to be available for the Model 3 (and even if it is, it's unclear what doing so will cost).
4. Some might like the flat screen, I'm personally not a very big fan. Plain old fashioned knobs and buttons are really easy to work with.
I agree with the rest of your list, but I'm pretty sure EV sales (especially for the near future, say the next ten years) are not a zero-sum game. Plenty of Leafs, Bolts, Souls, and Model 3s will be sold regardless of the fact that the Model 3 may be 'better' based on your definition.
Disclaimer: I drive a Kia Soul EV with 'only' ~100 miles real-world range.
The problem is that most of the established car companies are not thinking about the future. They have become comfortable with the amount of cash they have on hand, and they do not pursue innovation.
Currently, Tesla is the only one delivering practical innovation and success!
True. But they add the hardware only if you pay for it up front.
Tesla has the hardware in every car it sells, and will be mining super useful data with all these sensors, tweaking its software (machine learning based on real use cases) for a year before it puts it out to the public! The scale of data mining that Tesla's approach allows is near impossible for other carmakers (if they continue to sell cars both with and without the sensor hardware).
How hard do you think adding sensors to vehicles is to accomplish if the big players decide it's valuable? This is commodity hardware. And if/when they do, they will be gathering far more data than Tesla, by virtue of the massive number of vehicles on the road.
Again, Tesla may very well do it better than anyone else, but it's a bit early to talk like they've won anything here.
Daimler-Benz demonstrated autonomous driving with little human intervention over long distances in the mid 90s [1]. Other German car companies had similar programs. So they are certainly capable of doing it, not sure why they did not push it to the market. Too expensive? No demand? Too hard to tackle the last couple of percent of required human intervention?
Every Tesla will have the hardware fitted and on. The owner only pays if she wants to use the feature - upfront, or a little more to activate it later.
Other than being able to charge a premium for activating it later, why install it on all the cars? Tesla is basically collecting the sensor data of a fully autonomous vehicle while a human drives it, for at least a year. That's a lot of valuable data, gathered for just the cost of the hardware installation!
Instead of doing all the testing and tweaking once the car is street-legal for autonomous test runs, Tesla benefits from putting every piece of hardware needed for its future into cars as early as possible, just to mine this data gold.
> "The person in the driver seat is only there for legal reasons"
> Person gets out and lets car park itself
But seriously the tech is very impressive. The journey was rather simple though, and didn't cover more difficult areas (inner city driving, heavy stop start traffic, roadblocks, road accidents and so on). I hope that Tesla test these things thoroughly because they've already got one death under their belt, it won't take many more to put people off completely.
That's not really true though is it? The human miles take into account all sorts of driving conditions, while self-driving car miles are only from the safest and easiest driving conditions. Doesn't seem apples to apples to me.
Even if we ignore the apples and oranges comparison, it's still not true; I think someone calculated it'd require almost 300 million miles of autonomous driving without deaths to know the death rate was actually lower, and Tesla's first death happened rather sooner than that.
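For the curious, that figure lines up with the statistical "rule of three": if you observe zero deaths over n miles, the 95% confidence upper bound on the true fatality rate is roughly 3/n. A back-of-envelope check, assuming a human baseline of about one fatality per 94 million US vehicle miles (an approximation, not a figure from this thread):

```python
# Rule of three: if zero events are observed over an exposure of n,
# the 95% confidence upper bound on the true event rate is ~3/n.
# To credibly claim a fatality rate below the human baseline, that
# upper bound must fall below the baseline.

human_rate = 1 / 94e6  # assumed ~1 fatality per 94 million miles (US ballpark)

# Fatality-free autonomous miles needed so that 3/n < human_rate:
miles_needed = 3 / human_rate

print(f"{miles_needed / 1e6:.0f} million fatality-free miles needed")
# prints "282 million fatality-free miles needed"
```

Which is indeed "almost 300 million" - and a single death early on pushes the required mileage even higher.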
True, but humans are a known quantity. For the first time, we are letting 'something else' drive - something that thinks completely differently from us - yet we are entrusting our lives to it.
Because of our incredible image processing ability we can deal with a lot of shit that we may come across while driving. But the computer still has a lot of catching up to do in this department. Do you really want your life to end because an AI can't distinguish between a white lorry in the way and an empty road? It is an unnerving choice to have to make I think.
It's no less unnerving than going on a plane mostly piloted by computers that rockets through the sky at several hundred miles an hour. There will always be people that fear the loss of control but most people will realize that the convenience is worth giving such a thing up considering that once this technology is perfected you will ideally have a greater chance of getting struck by lightning than dying in a car accident.
It's a bit odd to compare self-driving cars (which have to constantly evaluate their surroundings, change direction, and avoid obstacles such as parents and kids) to a machine that flies through the air with zero obstacles, has a minimum of three people at the helm, each of whom logged at least 1,500 flight hours before even becoming an airline first officer.
>(inner city driving, heavy stop start traffic, roadblocks, road accidents and so on
Yeah, this. Every demo I've seen is in some suburban or sparsely populated area. As a Chicago inner-city driver, I really want to see these things handle our rush hour, especially in the snow and rain, before I start calling this stuff the future. It's a good start, but I imagine the 'hard' problems with automated driving haven't been solved yet. If they had been, these demos wouldn't all be in great weather, on sunny days, in low traffic.
How sure are they that this hardware revision is going to be what's required? I feel like at any point in time you can make an assumption about the hardware requirements, only to discover later that you could have done it with just a software update if the CPU had one more core. They'd have to be pretty sure this HW rev will meet their future demands for self-driving, right?
I assume they're reasonably confident, since they've been doing this for a while, and have ploughed more money/time into this than most others.
That, and judging by Tesla's history - if there was indeed something lacking, they might just retro-fit it for free onto people's cars. So I assume they're keen on getting this right...haha.
Um, this is what we in the tech industry do ALL the time. We can only use _existing_ technology. We can't see into the future, so we do our best to choose HW/SW based on our _current_ understanding of the problems we are solving. If they weren't "sure" based on their current understanding of the problem, then they surely wouldn't be placing this in their vehicles.
The former head of Google's self-driving car project has said that self-driving cars are decades into the future.* Even if that's too pessimistic, nobody today knows what a self-driving car will look like, what kind of algorithms it will run, and what kind of sensors it will need to get there. I'm afraid this pronouncement is another sign that Mr. Musk is taking his investors for a ride.
"How quickly can we get this into people's hands? If you read the papers, you see maybe it's three years, maybe it's thirty years. And I am here to tell you that honestly, it's a bit of both."
This statement was based on what he was aware of at the time. The most pessimistic end of that range is purely a data game, and Tesla's ability to have 100k cars collecting data for it will give it a significant advantage in accelerating its progress.
So you buy a "regular" car today which will be automagically converted to a self driving car when all the regulations and software catch up. That's pretty cool. You can buy into the future today :D
Police departments around the country are going to see a loss of revenue. No more rolling through stop signs, illegal lane changes, or speeding tickets.
Funny point! I wonder if they'll add some "police funding tax" to self-driving car sales to make up for this revenue stream, the same way many politicians want a tax to electric vehicles to make up for the revenue lost to the gas tax.
(Note: I don't really wonder this, because cops do not officially make stops in order to generate revenue, even though unofficially it's widely understood that they do.)
From the car purchase page, it seems that they are charging an additional $13,200 for the full experience (combining the add-ons Enhanced Autopilot and Full Self-Driving Capability at $7,900 and $5,300 respectively):
And the "Full Self Driving capability" comes with quite strong limitations as well:
>It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval.
>Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.
Not sure why someone would risk this money now, when they can activate it later (even if for slightly more) once they can see how the functionality turns out.
How is that different from buying software for a computer? I can install a linux distribution or buy windows. It's the same hardware, different "feature flag". Somehow that's not crazy.
This is more like your computer coming with 2 hard drives.. One active with your OS, one empty. You can at any time pay $100 and start using the other drive. Otherwise it sits empty.
We are talking about giving people hardware for free, then paying money to turn it on or not.
The video was too edited for me to have confidence. There was a moment at about 2:05 where I was interested to see how it handled the termination and merging of the lane -- but then we cut away before that happened. Or at 1:30 when there's no big sign post in the median, and then switching to left-rear camera, there we pass one. It's a nice narrative on the future, but it's far from proof of comprehensive functionality.
So what do these cars do when they hit a puddle of mud and it covers all the cameras? Will there be a new form of vandalism where someone puts scotch tape over / destroys the vehicle's cameras, and now your fancy autonomous vehicle is rendered incapacitated? Maybe this seems unlikely or ridiculous, but the dependence on cameras at points on the car that seem likely to get dirty or damaged seems like a risk to me.
You mean when a puddle of mud simultaneously covers the cameras (front, rear, side facing), radar, and ultrasonics? I guess the vehicle would slow down and pull over (the driver can always take over you know).
Will the new form of vandalism be any different than letting the air out of someone's tires, also rendering it incapacitated?
"Turn on the windshield wipers"? Last I checked there were no water jets, wipers, or other cleaning devices for the camera lenses on any Tesla. Again, maybe it seems like a small thing, but it's still something you'd have to worry about. Imagine driving down a dirt road and your autonomous car becoming disabled until you get out and clean the cameras.
Maybe this would be a problem sometime in the future, when autonomous cars don't have manual controls anymore, but if the cameras were disabled for some reason you can just take control and drive off the dirt road yourself, then head to a car wash.
I don't see this really being a problem to worry about.
Or what's the maintenance schedule for these things? Suddenly we have tons of complexity to worry about. How will this affect the used market, where repairs are usually DIY? Is the used market for autonomous cars pretty much dead because third-party mechanics will have no idea how to service these things? I imagine so. What does that do to resale value?
I've read about how hard it is to sell an aging Prius because the battery pack has to be replaced. It's apparently not easy to do, and the part is very expensive. For used-car buyers, you're better off buying a gas Civic if you want an efficient older car. ICE doesn't age like electric.
This has been a "potential problem" lobbied against advances in car technology since... the beginning of car technology. Obviously you need different (and many could argue "more difficult to learn") skills compared to mechanical repair. But that's always been the direction car repair has headed, from the beginning.
Ideally, there's a fair amount of (reliable) self-diagnosis. If you merely have to replace a camera that starts to malfunction, it might not be any more difficult than replacing a starter or an alternator (depending on construction.)
It's a little different now that you're not just replacing static parts. Sure, you can replace a complex fuel controller in a modern car, which means plugging a box into some wires, but what about something that is net-enabled, requires a subscription (or active warranty), and is constantly being updated and monitored by Tesla HQ? Where do you get a new central computer for an old Tesla? How do you get it talking to HQ again? Will you even be allowed to?
It's a bit like building a PC vs. having a smartphone, which is mostly run by the OS supplier and carrier. The former is doable, but the latter is way too locked down for the DIY crowd. The phone-ification of all things is concerning from many perspectives.
I'm assuming the system is monitored by external circuitry, so a covered camera should be easy to detect. On top of that, they're probably checking for consensus among the outputs of all overlapping cameras.
This is definitely done on start, and probably even during operation.
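For what it's worth, a consensus check like that can be dead simple. A hypothetical sketch (not Tesla's actual code) of flagging a camera whose detections diverge from an overlapping neighbor's:

```python
# Hypothetical sketch (not Tesla's actual code): two cameras with an
# overlapping field of view should report roughly the same number of
# objects in the shared region; a large, persistent mismatch suggests
# one lens is covered, dirty, or dead.

def consensus_ok(detections_a, detections_b, tolerance=1):
    """True if the two cameras' detection counts roughly agree."""
    return abs(len(detections_a) - len(detections_b)) <= tolerance

# Camera A sees 3 objects, camera B sees 2 in the overlap: acceptable.
print(consensus_ok(["car", "bike", "sign"], ["car", "bike"]))  # True

# Camera B suddenly sees nothing at all: flag it for a health check.
print(consensus_ok(["car", "bike", "sign"], []))  # False
```

A real system would of course compare positions and classes, not just counts, and integrate over time to avoid false alarms.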
Tires are likely to be damaged and are crucial (and can be vandalized) and cars run okay. The camera going out probably won't be much worse than all the times my tires went flat or that time when my tire blew out while I was going 75 down the highway.
It's not ridiculous - it points out flaws that must be addressed. There have been cases where people reported their car's system failing because of a single leaf. Throw in the fact that nice-day autonomous driving isn't proof of much: pretty much every manufacturer has systems that can do this. The difference is that they don't try to imply something more with it, and Tesla clearly is - they are still trying to deflect negative attention.
When Tesla shows me the same car driving in pouring rain, day or night, or in snow, then I will say "damn, they are truly making it work". Until then it's a cheap magic trick, and dishonest.
What is so odd is bragging about driving in the conditions where people least need the help: nice, wonderful days. Safety systems have always been aimed at providing control and reaction in the worst environments.
I had hoped to see this technology occur in my lifetime, I said to myself "I hope I live to see the day" a few years ago. Here it is in 2016, obviously its just a highly controlled demo but it has connected the dots. I'm confident the technology is there and the hardest work will be overcoming legislation and politics.
But does anyone else find this bittersweet?
I had an awesome moment of pride for what Tesla and Elon have done here. The dream is now reality.
Followed by a moment of sadness. The dream is now reality..
The legislation and politics bit has been surprisingly easy going and if anything seems to be leaning to forcing safety systems onto people rather than banning them. The bigger problem still seems to be making the systems work reliably.
All German car manufacturers are now fitting their cars with hidden passive sensors to collect data on human driving, with the intent of using this data for autonomous driving. Their main problem is the cost of transmission, i.e. they are considering buying mobile networks/towers and piggybacking on mobile traffic, then feeding the data to huge datacenters, with a projected flow of up to 2 MB/s from a single car.
It was already announced before that the hardware is included, and it was clear that it is meant to be used for autonomous driving. And as they do not have autonomous driving yet, this is indeed just hot air... how would they know the hardware suite is sufficient if there is no demonstration of it actually working?
Looking at what is not said, my interpretation is that previous Teslas will not be software-upgraded to full autonomy because the problem is harder than they previously thought.
Their LIDAR provider pulled out and now they're switching to camera-based CV and ending development of the LIDAR branch. I think they're hoping that this approach matures enough to match the capability of LIDAR in time for them to meet their market goals.
Either you're confused or I'm confused as to what you're talking about. Tesla has never used LIDAR and they actually leaned more heavily on CV until recently when they started using radar more extensively.
And actually announcing this commitment now - that this would be the final hardware - could also just be forced, as they need to sell it to all the people getting cars now with temporarily less functionality.
Elon just tweeted that a video showing the car navigating an urban environment is forthcoming. Without actual footage and info, the post is certainly a bit light on the crucial implementation details.
I wonder who decided/approved the use of a song about death and funeral procession (Paint it Black by The Rolling Stones) in a video about "driving" without hands on public roads...
> To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.
40 times the performance of a Tegra 3 is not particularly impressive.
Also, I sincerely hope that this new faster computer doesn't also run a web browser.
A separation between the running gear of the car, and its entertainment center / user interface, is absolutely vital if we don't want to see an endless stream of "hackers offed Public Figure X by driving their car into a bridge" in the news.
It boggles me that anyone even considers running things like antilock brakes, car security, and especially self-driving capability from an internet-connected device, no matter how nice and convenient it may be.
It needs to be connected to the Internet at some point to download updates. What could happen is having a second computer that all the controls are routed through, one that does some sanity checks before executing the self-driver's actions.
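Something like this, maybe - a toy sketch of that gateway idea, with entirely made-up limits (the real envelope and interfaces would be Tesla's, not these):

```python
# Toy sketch of a "gateway" computer between the (internet-connected)
# self-driving software and the actuators. All limits are made-up
# illustrative values, not anything Tesla has published.

MAX_STEER_RATE_DEG_S = 90.0  # assumed max safe steering rate
MAX_ACCEL_MS2 = 3.0          # assumed max safe acceleration
MAX_DECEL_MS2 = 8.0          # assumed max safe braking

def sanity_check(steer_rate, accel):
    """Accept a command only if it stays inside the fixed safe envelope."""
    if abs(steer_rate) > MAX_STEER_RATE_DEG_S:
        return False
    if accel > MAX_ACCEL_MS2 or accel < -MAX_DECEL_MS2:
        return False
    return True

print(sanity_check(steer_rate=30.0, accel=1.5))   # True: normal command
print(sanity_check(steer_rate=400.0, accel=0.0))  # False: violent swerve
```

The point being that the gateway is simple enough to audit and never touches the network, so even a fully compromised infotainment/autopilot computer can't command something outside the envelope.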
I think the video is sped up, and while it may be sped up for a perfectly innocent reason, not indicating which parts of the video are sped up or by how much creates at least the appearance of possible impropriety, especially in a product demonstration.
It's definitely sped up - watch all the way to the end, when the guy gets out of it.
I'm not sure it makes the car seem more capable, though. If anything, I was worried that the car seemed to be accelerating and decelerating dangerously quickly, until I realized it was sped up.
> Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware
Which makes sense, as they'll be pulling in all that new data from the sensors. I guess people won't be too disappointed owning a car that will eventually be able to be fully autonomous!
"including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control". They're shipping without features that are commonplace, probably even expected, on premium cars these days.
I was thinking of this idea the other day when I came to an intersection where a stop sign had been hit. It was now bent in a way that faced the highway that did not have to stop. I was on a highway with no stop signs or lights for miles. What would the self driving car do in that situation? For both sides of the intersection.
Then I thought about another intersection by my old house. For years the cross street had to stop for traffic on the main street. One day I went to work, and when I came home it was all of a sudden a 4-way stop. No database of stop signs could work either, unless it was updated to the minute.
So, both Nvidia and Tesla are working on self-driving cars based on the sensory data mainly from cameras mounted on the car, which are then run through X number of RNNs to generate models to operate on? While Google pursues their LIDAR-approach?
What other players are operating in this space? And what's their approach?
The LIDAR idea seems better to me. The car can build an accurate 3D model. I'm sure the artificial vision of the Tesla is good but it will probably get fooled too.
Correct me if I'm wrong, but I was under the impression that LIDAR-only doesn't work at all during heavy rain or snow. You'll probably always need some sensor fusion from a mix of optical/supersonic/radar/lidar.
Having driven under both systems, I find the US approach makes so much more sense. Less maneuvering means fewer potential accidents.
Even if undertaking is banned, you still need to check when switching lanes since people will undertake you no matter what. So there is no clear benefit.
Yes, it's how people normally drive and it's very frustrating for those of us who understand why keeping to the right is better.
My experience driving in the UK, Netherlands, France & Germany suggests that citizens of those countries are much much more conscientious about driving etiquette than us North Americans.
In my experience, American driver ed doesn't cover this much at all, at least when I learned (the late 1990s). There are a lot of people who think "I'm going exactly the speed limit" means they should hang out in the "fast" lane.
I have come to think this comes from a certain self-righteous attitude. Everyone knows the old norm of slower traffic keeping right, but people give up on it when: 1) why should I get behind that guy in the right lane who is driving slower than the limit, just so you can speed? and 2) if you want to speed, find your own way through traffic - not my problem. There's also the variant of: I'm speeding and people are getting out of my way, so why should I get out of your way just because you decided to drive even faster than me? It just encourages faster and faster driving. I live where it's very common to see anything from 10 under to 30 over the limit, which is just dangerous. The safest thing to do is to stay where you are and let the fast guy find his way through or be forced to slow down. So I do kind of agree that this is an outdated norm that doesn't apply today.
What you're taught in driver ed means nothing after a little time on the road dealing with the norms of other drivers and the inconveniences they impose on you. Nobody wants to be inconvenienced any more, even if it's the "right thing" to do.
Somewhat related: I feel like in the last 10-20 years, people absolutely refuse to miss their turn/exit. They will create a very dangerous situation, swerving across lanes and cutting people off at the last second, rather than saying "oh dang, I'll need to do a U-turn ahead."
Another example I notice is the use of a turn signal as an assertion of the right to change lanes. People actually get angry, with thoughts like "I put on my turn signal, everyone else should slam on their brakes so I can change lanes, because I need to turn now!"
In the US keeping to the right is a wisdom that is applied to highways (but even then it's more of a 'if you're driving slowly stay to the right') but isn't something people really consider on normal roads and streets.
It's pretty normal on regular non-highway streets -- especially if someone knows that they're going to need to get over to the left to make a turn in a few blocks. There's a general preference for keeping to the right on this sort of road if there's no reason not to, but it's not really a strict rule.
Will this new neural net and hardware be capable of advanced object detection?
For instance if a plastic bag or piece of cardboard rolls across the highway a human driver knows it's safe to run over without stopping. Would a system like this just see an obstacle via radar and emergency brake?
Google has been working on this problem for longer, and they have access to the largest image/video datasets in the world to train their models. I wonder how the Google and Tesla systems would compare.
What I would find more interesting is whether neural nets are even a valid way to build automotive components for self-driving cars.
Safety-critical components in most industries are currently built very heavily around the idea of determinism, so that you know exactly what will happen for a specific input and when it will happen - and, with all the accompanying process requirements, where the behavior demanded by the requirements can be found in the code. Neural networks, from my point of view (university knowledge only), seem more like black boxes, where you don't know exactly how they will behave in the end.
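One mitigation discussed in the functional-safety world is keeping the neural net out of the certified path entirely: the net proposes, and a small deterministic monitor that you *can* verify gets the final say. A toy sketch, with purely illustrative numbers:

```python
import math

# Toy sketch of a deterministic runtime monitor wrapping a black-box
# planner: whatever speed the neural net proposes, a simple auditable
# physics rule caps it so the car can always stop within the measured
# free distance ahead. The braking figure is an illustrative assumption.

GUARANTEED_BRAKE_MS2 = 6.0  # assumed worst-case braking capability

def monitored_speed(proposed_speed, free_distance):
    """Cap the proposed speed at v = sqrt(2 * a * d)."""
    stoppable = math.sqrt(2 * GUARANTEED_BRAKE_MS2 * free_distance)
    return min(proposed_speed, stoppable)

# The net proposes 30 m/s but only 20 m of free road is measured:
# the monitor deterministically overrides it down to ~15.5 m/s.
print(monitored_speed(30.0, 20.0))
```

The monitor's behavior for any input is fully specified and traceable to a requirement, even though the planner feeding it is a black box.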
I highly doubt it. They're not including Lidar, which means no high-resolution point clouds to be able to identify objects. In my opinion, this just doesn't cut it since ultrasonic is limited to relatively short range, and Radar doesn't provide the level of detail you need to really understand what is going on around the car beyond "there's some kind of object 100 feet ahead".
> Google has been working on this problem for longer and they have access to the largest image/video datasets in the world to train their models
As of August, Google's self-driving cars have 2 million miles of real on-road experience, while Tesla's Autopilot systems have 140 million. They're beaming back data to Tesla the whole time.
My understanding is that because Tesla has to send its data over the cell network, it is first heavily processed/reduced by the car's onboard computer before being sent back (in order to conserve bandwidth). So I'm not sure it's fair to compare the two companies' datasets based on miles driven alone.
"if a plastic bag or piece of cardboard rolls across the highway a human driver knows it's safe to run over without stopping." Well, usually. But frequently they don't see it at all, or freeze and hit it whatever it is, or swerve three lanes over, or jam on the brakes in front of you.
It only has to be better than humans, not perfect.
False. It has to be perfect or Tesla gets sued. A plastic bag flies in front of your car, the car brakes hard and another one behind you hits yours and gives you whiplash. Lawsuit time.
To be safely aware of its surroundings, an autonomous vehicle must have two types of sensors in each direction - this setup is not safe enough.
I would also require proof of 10 million kilometers of simulated rides with no accidents, and a third-party organization not under Tesla's control that creates some really tough, repeatable challenges, both simulated and in the real world, that a vehicle manufacturer has to pass.
Challenges should include:
- a thin wire tensioned over the street.
- the combination of super heavy rain with lightning, thick fog, and people suddenly running onto the street.
- passing a soccer field as a ball bounces across the street. The car should stop, because it can reasonably be expected that a child will run blindly into the street after the ball.
- obstacles that minimally intrude into the minimum clearance outline of the currently planned course. The car should plot an alternative course if possible, or stop. Obstacles should appear at the last possible moment, and the car should always do the right thing.
- proof that the car can always detect street boundaries, any obstacle, and especially humans. It should be 100% correct, or err on the safe side, every time. At night, in a rainstorm with super thick smog and hail. I'm not joking.
These are the minimum limits before any self-driving car should be able to drive on public roads, imho.
Because churches aren't changing velocity all the time, causing your body to move around. Even with safer automatic cars, we'll still need seat belts. Have you ever tried strapping an adult seat belt onto a 2 year old?
Why are cars changing velocity all the time? Not enough input and output on the conditions outside the car.
Self driving cars don't have to drive like humans, they have to drive like 90% of the other cars. Remove the humans and self driving cars can adopt far more sophisticated traffic patterns than the simple stop/go of most city traffic.
Fair enough. It's high time that car manufacturers worked out how to make seats/seatbelts that are adaptable to all ages. Or perhaps some sort of foldaway "wings" which turn a standard seat into a child seat.
You won't. Uber is a bit ahead of the game with the whole "Order a car" system, but car manufacturers should by now have realized that they need to be a tech and service company and not just a "goods" supplier.
Hyper-exotics like Ferrari or Aston Martin can rely on driving enthusiasts to keep their brands alive; the others have to get self-driving tech and think about building an Uber-esque service. I can imagine Uber is in a strong position to sell its software/service to the manufacturers (the "Get a ride in a Mercedes" app will be the Uber app with Mercedes branding, running on Uber's servers), for a cut of, say, 5%!
This is a sign of the utter commodification of hardware and the possibility that a majority of innovation in the future (with the exception of low-power wearables) lies in the realm of software and algorithms.
It is quite impressive, but I'll honestly have a hard time getting excited about self-driving cars until I see a demo of driving at night in a snow storm (heck, even heavy rain would be nice to see) around road construction, poor signage and faint lines on the road. Believe it or not, those kinds of conditions are fairly common in places outside of California, and until we have self-driving cars that can do really well in those conditions, this is basically just a fun demo in my opinion.
I'm really not trying to downplay the hard work and technical merit of Tesla; sped-up video and opportune edits aside, it is very cool. But I can't help but feel that it's a bit like showing off (to the world) your shiny new web app that only works in IE with ActiveX installed, only if your name is "demo user", and only when the planets are in perfect alignment - or in other words, a functional prototype by anyone else's standards. It's a great achievement, but we're certainly not "there" yet - if that's what it's trying to communicate. And yes, the "Full Self-Driving Hardware" headline certainly seems to suggest that (at least) the hardware is "there" now, and that it's only a matter of software iteration to be done.
Before you respond with the typical "but those are just nitpicky details" or "this is only v1; v2 will be able to solve those things easily", let me say this: going from this to a system that can handle challenging road conditions is not just a matter of software iteration. Since poor road conditions threaten the reliability of sensor data itself, we're talking about a problem that gets increasingly more difficult. The most sophisticated software in the world can't do anything if cameras and sensors are frozen or obstructed, and when signage and lines are lacking, the software must rely on more and more human-like levels of AI inference - not just about driving, but about the complex world in general.
When this becomes real, the next question becomes "why own the car"? What's the benefit of having it sit in a parking lot for 8 hours until I'm ready to go home. Seems like the future will become more Uber-like, where I call up rides whenever I want, and don't worry about parking, maintenance, etc....
Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency breaking, collision warning, lane holding and active cruise control
Right, so they are actually announcing that their new cars now have fewer automation capabilities. I can't keep track of all the "autopilot" hardware they have deployed to date: MobilEye, Bosch radar, their own software hacks, and now this completely new one.
Not to mention that they have sold thousands of cars with the same Autopilot brand and "fully autonomous soon" messaging that will now likely never get there.
About self-driving cars in general: I am very concerned that self-driving cars and speed limits are going to be a very annoying issue. I can see them driving way too slowly in semi-complicated situations, annoying all other drivers. There are also many places in the country where it's normal and seemingly expected to go 5-10 mph over the speed limit. Of course self-driving cars will stay under the posted speed limit. I hope that in the long run we will be able to innovate on how we deal with speed limits, especially once human-driven cars are off the road and hopefully illegal. But until then I can see lots of road rage coming from this.
I look forward to riding my bike 10mph in front of driverless cars.
This is one of the few things that excites me about driverless cars. People should be driving below the limit (and the limits should be about 10mph lower in a lot of places) for pedestrian safety. The fatality rate drops precipitously around 20mph.
Instead of mad honking drivers, the car will sniff your smartphone's Bluetooth address. Computers you go near will recognize the smartphone, know it's you, and they'll start crashing programs and slowing down the wifi...
I agree that city driving is a more complicated issue. Freeway speed limits are a much easier case, but maybe I'm just missing driving on the Autobahn.
Once autonomous cars are ubiquitous, it should be possible to increase speed limits substantially above where they are now, since the autonomous cars will have instantaneous reaction times.
Of course, but with instantaneous reaction times you can achieve the same stopping distance at higher speeds. If it takes one second for a driver to start braking when something happens (which is about the average) then a human driver going 65MPH will have the same stopping distance as an autonomous car going 78MPH.
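The arithmetic here checks out under reasonable assumptions. A minimal sketch: the 1-second reaction time comes from the comment itself, while the ~0.7 g braking deceleration is my own assumption for illustration:

```python
import math

def equal_stopping_speed(v1_mph, reaction_s=1.0, decel_g=0.7):
    """Speed at which a zero-reaction-time car matches the total
    stopping distance of a human driver going v1_mph.

    Human:      d = v1 * t_react + v1^2 / (2a)
    Autonomous: d = v2^2 / (2a)
    Equal distances => v2 = sqrt(v1^2 + 2 * a * t_react * v1)
    """
    g = 32.174                      # ft/s^2
    a = decel_g * g                 # assumed braking deceleration
    v1 = v1_mph * 5280 / 3600       # mph -> ft/s
    v2 = math.sqrt(v1**2 + 2 * a * reaction_s * v1)
    return v2 * 3600 / 5280         # ft/s -> mph

print(round(equal_stopping_speed(65), 1))  # ~78.9 mph with these assumptions
```

With a harder assumed deceleration the equivalent speed climbs a bit higher, so "about 78 MPH" is consistent with ordinary braking performance.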
So self-driving will be a standard feature of Model 3, not an option? Pretty cool if they can make it work. I'm skeptical that the computer (NVIDIA Drive PX 2 perhaps?) will have enough power to do it all without LIDAR.
Self-driving hardware will be standard on Model 3, but you'll likely still have to pay a premium for the software to activate it. This is akin to Tesla including the 75 kWh battery on the 60 kWh Model S, and then charging you extra to "activate" the battery's full capacity.
The hardware will be standard, but almost certainly you'll pay extra to enable it. (If they do it the same as similar features in the past, there will be a discount to enable it at order, or you can do it later at a slight premium.)
That'll be an interesting court case: the first time a fatality is the result of unactivated safety features on a car that's fully capable of preventing a collision.
Tesla has already said that the Model 3's safety features will come standard, though it wouldn't preclude such a lawsuit against the owner if the features weren't activated.
Also, I thought the self-closing door was a nice touch.
Interesting really - I guess it has a nice "chauffeur" feel if you can get into the car without closing the door (I wonder how it knows you are there? Or that everyone who is about to get out has got out?) and/or just roaming off.
Personally I'd be worried about someone slipping into the car before it had closed the door and stealing anything in the car.
BUT the car was driving itself in ideal conditions, with high visibility in all directions and amidst light traffic.
What I'm really hoping to see is a video of the car driving itself in more dangerous situations, such as in the middle of heavy rain or thick fog that limits visibility, or at night on a dangerous stretch of highway with lots of trailer trucks zooming by, or surrounded by tired angry drivers on a major holiday in a popular route with bumper-to-bumper traffic.
When self-driving cars can successfully navigate those and other similarly dangerous scenarios, we will know the technology is ready.
Hardware performance is not a problem for Level 5 autonomy - the software is. If Tesla insists on deploying full self-driving capability in the next couple of years, they will be litigated out of existence. We are a few decades away from autopilot to "understand" what it is doing. Right now it is just parroting the most common scenarios. This may be as good or slightly better than the average driver, but it still will result in many deaths, if deployed in hundreds of thousands of cars. Unless Tesla somehow shields itself from legal liability, it will be sued to oblivion.
I don't think so. It's not hard to make an algorithm that drives slower/extra careful when facing unusual circumstances. It's the same thing human drivers do btw.
No it isn't! I often react to bad situations not by slowing down but by speeding up, changing lanes, going on to medians--or even grass one time, you name it. Driving defensively is so much more than not tailgating the car in front of you.
So, I am buying hardware I can't use solely for the purpose of providing data to a for-profit company for free to improve its product for another generation of customers?
No, you're buying hardware you can use which has the added benefit of improving the product you're using over time using data collected from your hardware along with thousands of others'.
It's not free for tesla. The hardware cost is what tesla paid to obtain that data. The people that use the autopilot hardware effectively subsidise your car.
This advice is a holdover from the 70s and 80s when the CV joints in front wheel drive cars really were not strong enough, and dry steering would put a lot of extra stress on them, and they would wear out in no time.
The amount of "stress" they put on the tires is infinitesimal compared to taking a single corner at high speed.
I'm quite certain Tesla wouldn't be doing it if they were not sure their power steering system can handle it.
Steering while not moving, I guess. It's not really good for the tyres. However, for certain maneuvers, e.g. when parking, there isn't really a good way to avoid it.
> While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.
I suppose it was to show how much he believes in his own product, if he demonstrates that he trusts it to drive him around safely. When you see an "inventor" testing their own product it is usually good press. Given, I realize he is not the "inventor" per se (more of the "enabler"?), but still it is a show of good faith in one's own product.
Not because of hero worship.
I can't say I've ever seen a video of Elon riding in a self driving Tesla before, seems like it would be very tangible proof how much he trusts the autopilot.
The second one redirects to the first. The first contains two links to the second, below and on the right under "Next video". I clicked for quite some time until I figured out what was happening.
This is very awesome, and just one more step toward a completely automated world. Everyone's daily commute is a gold mine of mostly unused data points. There are solutions out there right now, like Waze and Google Maps, that redirect users around accidents. Can you imagine how crazy it'll be when our roads become even smarter based on individual users? For example, if people who enjoy driving faster have "logged in" to a road, then a self-aware driving car can choose lanes that avoid those dangerous users.
I wonder how they balance their development process for the algorithms with the upgraded sensors vs the code that runs with older sensors as input. Do they maintain two different teams? Back port improvements?
They have a pretty interesting description of their radar images: "...because of how strange the world looks in radar. Photons of that wavelength travel easily through fog, dust, rain and snow, but anything metallic looks like a mirror. The radar can see people, but they appear partially translucent. Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar".
So my question is, where can I find such images? Or can I buy such a radar and tinker with it myself? What wavelength are they speaking about?
The Jalopnik review of the video was pretty critical, essentially claiming that the test was done under the best possible conditions and this doesn't demonstrate that Tesla is getting any closer to automatic driving on more typical roads. (I don't know whether that's right or wrong, just thought it was an interesting analysis.)
Can someone speak to Tesla's approach of collecting real-world data, and Google's approach of "simulating" roads and conditions and running self-driving models on that (so technically their vehicles drive millions of miles on simulated roads).
Intuitively Tesla's approach makes more sense, but would love to hear someone with domain knowledge on how much of a difference it can actually make (after all, you need quality training data and Tesla may now have to navigate through significant more noise).
Not an expert, but what you describe is testing different parts of the system. Real world data tests that the sensor system is creating an accurate model of the environment, whereas simulated data ensures that the vehicle makes correct decisions for any given input model.
e.g. real world data tests that the vehicle can detect a stop sign. Simulated data tests that the vehicle stops at a stop sign.
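That split can be sketched in toy form: a perception check against recorded real-world data versus a decision check against a purely simulated scene. All names and structures below are illustrative stand-ins, not Tesla's or Google's actual systems:

```python
# Toy illustration of the two test layers described above.

def detect_stop_sign(frame):
    """Stand-in perception model: real-world test data would check
    this against labeled camera frames."""
    return "stop_sign" in frame["labels"]

def plan_action(world_model):
    """Stand-in planner: simulated scenes check that the decision
    logic is correct, given an already-perfect world model."""
    if world_model.get("stop_sign_ahead"):
        return "stop"
    return "continue"

# Layer 1: real-world data tests perception.
recorded_frame = {"labels": {"stop_sign", "pedestrian"}}
assert detect_stop_sign(recorded_frame)

# Layer 2: simulated data tests decision-making.
simulated_scene = {"stop_sign_ahead": True}
assert plan_action(simulated_scene) == "stop"
print("both layers pass")
```

The point is that simulation lets you cheaply cover rare decision scenarios, while only real sensor data can validate that the perception layer's world model is accurate in the first place.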
1) If a self-driving car is involved in an accidental death. Is the justice system equipped to effectively hold a trial where information like logs, debugging information, etc. are discussed in court to validate whether or not there is any liability on the part of the manufacturer, considering the car is driving itself?
2) What happens in the case of bugs or system-level crashes? What is it about car software that makes it "not broken" compared to the other software we write?
It was probably just a typo that the couple of people who wrote and edited the announcement didn't catch. I am doubtful of the correlation you are implying between their spelling of braking and their braking system.
I wonder, do they upload all the camera video taken during driving as grayscale low-res video over 4G, to be run through their neural net back at Tesla?
What hardware do they have in the car to process the video? The Jetson TX1 can handle up to 6 cameras or 1400 Mpix/s, but they probably use low-res output for neural-net usage.
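A back-of-envelope on the quoted TX1 figure, if that 1400 Mpix/s budget were split evenly across the eight cameras in the new suite (purely illustrative; Tesla's actual compute hardware, per-camera resolutions, and frame rates are unknown):

```python
# Hypothetical per-camera budget from the quoted Jetson TX1 figure.
total_budget_mpix_s = 1400          # quoted aggregate throughput
num_cameras = 8                     # cameras in the new sensor suite
per_camera = total_budget_mpix_s / num_cameras   # 175 Mpix/s each

fps = 30                            # assumed frame rate
max_mp_per_frame = per_camera / fps # ~5.8 MP per frame at 30 fps

print(per_camera, round(max_mp_per_frame, 1))
```

Even under these rough assumptions there's headroom for reasonably high-resolution frames per camera, which supports the idea that most processing can happen onboard rather than over the cell link.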
I wonder what drivers think of their privacy.
Yes, it will be processed live in the car. From the article:
> To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.
In the beginning, it's very likely some data will be uploaded in bulk, for Tesla to analyze, particularly while they are running the passive tests to correct their models.
Commented above too but it looks like making a video took the extra few days:
"Will post video of a Tesla navigating a complex urban environment shortly. That was what took the extra couple of days."
https://twitter.com/elonmusk/status/788902908175618049
Seems like Tesla is moving forward without much regard for safety or technical advancement. Disappointed that the Tesla couldn't back up and park in one motion. It also went too far forward before backing up; you don't need that much room. How is it going to handle itself on Market St when it finds a spot but the bus behind it has to go around?
I wonder how this will compare against geohot's comma.ai aftermarket self-driving kit that he promises to ship by end of year.
He calls his company's technology level 3, which is more like autopilot, as opposed to level 4, which is a fully autonomous self driving car e.g. Google's.
Does Tesla aim to eventually have a fully autonomous self driving car?
"The person in the driver seat is only there for legal reasons" - how do Tesla reconcile this with the "summon" feature? How can they market the summon feature and say the Tesla could find you on the other side of the country unless it has someone in the driving seat touching the steering wheel?
It's a demonstration of the technology they have, not of the technology that will be available to consumers tomorrow (the hardware is in the cars, the software isn't).
Part of getting the software deployed will be figuring out the legal situation.
Who's providing all this hardware? EIGHT surround cameras and TWELVE ultrasonic sensors: Are they building this in house too? If not, that's a lot of business to a supplier... all I could find about camera suppliers for Tesla was their former camera (tech?) supplier Mobileye.
EDIT: Ignore the first paragraph below; @dyarosia mentioned correctly they're working on their own vision system, so they probably buy generic camera units.
(IGNORE THIS: The surround cameras are likely still in collaboration with Mobileye, since in their case, Mobileye doesn't just give the camera hardware but also has a fair bit of software that makes all the cameras come together. It's not too likely Tesla's duplicated this so quickly.)
Ultrasonic sensors, though, are dirt cheap (in the few-dollars range), and they probably come from one of the many tier 1 suppliers (Bosch, Delphi, etc.) that make them. This is the technology that has powered those beeping backup distance sensors for 10+ years.
The real news here is that again, LiDAR is absent. It seems like Tesla's pretty confident they can get to full autonomous self driving without the point cloud data that LiDAR provides. Now THAT'D be impressive!
But I thought that they had gone separate ways?
""When Tesla refused to cancel its own vision development activities and plans for deployment, Mobileye discontinued hardware support for future platforms and released public statements implying that this discontinuance was motivated by safety concerns"
Good catch! I think you're right; Tesla probably now buys pretty generic cameras (from various suppliers) and then integrates them, so the cost is definitely not that much.
It's still marketing... Tesla says "Autopilot" and has a lot of small print that says "This is not self-driving!". Google has a lot of research but has so far not made any jazzy "Self-driving cars are here!" claims or videos. And this video shows a sunny day on well-defined roads and low traffic. What about rain or snow?
Tesla just announced that all their newly-manufactured cars will be shipping without the existing, industry-standard active safety and driving assistance features like adaptive cruise control, automatic emergency braking, lane assist, etc. that they previously had, and that they will release a patch to re-add them at some unspecified point. I'm not surprised that a carefully edited video of automatic driving in ideal conditions wasn't enough to head off a stock price drop.
"results in" - You are assuming that the the announcement and the downtrend are causally related because they happened at about the same time. How do you know that the stock wasn't "trying to" go down further but was prevented from doing so by this announcement? This is one of my pet peeves in financial press: They announce some daily news and a price change as if somehow one caused the other. We simply don't know if/how they are related.
Man I sure hope human-driven vehicles/internal combustion engines won't be deemed illegal in my lifetime. I still enjoy driving my motorcycle down the road, feeling the engine vibe on my fingertips, and hearing it click click rumble rumble vroom. This video made me worry.
My stance is very simple: when I can buy a car in Vancouver, BC without a driver's license I will be at the car salon door / preorder page / whatever, midnight movie release style to buy one and I won't ask about the price. Just make it happen, please.
I'm pretty sure it'll be a long time before you aren't any longer expected to take over your autonomous vehicle at any point and therefore won't need a driving license.
Some of what they describe sounds like it's going to take some real adjustment before it stops being annoying and starts being useful, namely the assumption of what you want when you get in and out.
> If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar.
Oh boy. If you get in your car, it will just assume it should start driving somewhere more or less immediately? What if you want to sit for a few minutes?
I know, I'm taking them very literally. Just saying, though.
> When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself.
Again, what if I'm unpacking things for the car, or don't want the car to go anywhere? I don't want to have to pull out my phone and tap on something to stop it rolling away, or jump in front of it or something, or open a door.
Hopefully it obeys simple voice commands directed towards it like "wait here for now."
> Oh boy. If you get in your car, it will just assume it should start driving somewhere more or less immediately? What if you want to sit for a few minutes?
You think they won't have a "Go" button on the 17" touchscreen?
> Again, what if I'm unpacking things for the car, or don't want the car to go anywhere? I don't want to have to pull out my phone and tap on something to stop it rolling away, or jump in front of it or something, or open a door.
Again—there is a 17" touchscreen right there. You don't control your car with your phone or your voice. Have you never seen a Tesla?
Another reason we want better battery life on phones. I can imagine a scenario when your car goes and parks itself and you come looking for it without phone battery. Super cool though. Love how they are challenging such a significant and resourced industry.
I'm curious to see when cities will start changing their zoning for this new reality. The most exciting to me is elimination of parking minimums - these add a lot to the cost of building anything and take up very valuable/well located space.
The cameras look monochrome from the video. Or is this just editing?
If true then I'm surprised that colour data is not used. You would have to detect a red stop light from just its position rather than it also being red.
Is this a formal model-year revision/refresh, or just a midyear 'minor revision' thing (despite being a major revision?) Are old models retrofittable? Will this hurt the resale value of existing Teslas that have the last generation hardware?
Is there an industry-standard (or governmental) safety test that these autonomous systems have to go through to evaluate their efficacy and performance in different scenarios?
The old models actually had pretty weak sensors which indirectly caused the death of that one guy who used autopilot and t-boned a trailer.
> Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range.
Previously that was just one camera mounted in front of the rear view mirror
> Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system.
Same number as before except instead of 16' of range it will be 32' of range.
> A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.
same number and position of radar sensors as before but possibly a beefed up type for inclement weather. I don't know what the prior one was capable of.
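Pulling the quoted figures together for comparison (the "old" values reflect this comment's own estimates of the first-generation hardware, not official Tesla specs):

```python
# Sensor suite comparison assembled from the figures quoted above.
# "old" values are the commenter's estimates, not official specs.
sensors = {
    "cameras":            {"old": 1,  "new": 8,  "note": "360 deg coverage, up to 250 m range"},
    "ultrasonic_sensors": {"old": 12, "new": 12, "note": "range roughly doubled, ~16 ft -> ~32 ft"},
    "forward_radar":      {"old": 1,  "new": 1,  "note": "enhanced processing for rain/fog/dust"},
}

for name, s in sensors.items():
    print(f"{name:>18}: {s['old']} -> {s['new']}  ({s['note']})")
```

The takeaway matches the thread's reading: the camera count is the big jump, while the ultrasonic and radar changes are upgrades in range and processing rather than in sensor count.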
> The old models actually had pretty weak sensors which indirectly caused the death of that one guy who used autopilot and t-boned a trailer.
Saying it was the sensors that caused the guys death is like saying it was the powerstation that generated the electricity that ran the car causing his death.
He was told, repeatedly, to keep his hands on the steering wheel and to watch the surrounding environment. He even apparently understood the limitations of the system, having recorded a number of scenarios (many of which are situations where AP shouldn't have been engaged anyway) where the car's ability to handle the situation was marginal at best.
So, him deciding to (apparently) watch a video on a tablet while driving down a public highway isn't really the fault of the AP system.
That's the "indirectly" part. He absolutely should have been paying attention and kept his hands on the wheel, but it was in the uncanny valley: it really works pretty damn well 95% of the time, until it doesn't. The sensor and the system failed there.
I thought it would be funny to edit my comment to make it look like I was right all along. On reflection, this was petty. My original comment was mistaken in just the manner described.
Can you describe what's so amazing about it? They're planning to put temporarily non-functional autonomous driving hardware on their new cars, with the intention to use it when the software is ready at some unknown point in the future.
"Hey buddy, want to by a $100k car the cops are looking for that's lacking all the cool features because Tesla could track its location if we turned it on?
I get that tech companies want self-driving cars really bad because they smell billions of dollars in "disruption" but no matter how good AI gets, I have a suspicion it won't actually do better than a decent human driver can do. It's not about processing speed, it's about experience and reflexes, which granted, not everyone has.
Let's see a self-driving car win a Formula 1 race--and even that controlled racetrack environment isn't the same as the real world! It's actually harder to drive on the typical American roadways than it is to be on a track.
And yes, I am aware that AI stuff is improving exponentially or whatever, but the more I think about this, the more I think it is mostly a pipe dream to grab headlines and be a "look over here" type distraction for the purposes of raising funding.
In terms of safety, people will still lose their lives, they will just die from different kinds of car accidents than the kinds we have now.
Human drivers who generally do a decent job still have bad days. They still get tired and distracted. They still can't see 360 degrees around them all the time.
Each new human driver also has to be trained from scratch; when we're starting out, we're dangerous, all of us. An experienced human can't download their knowledge into a new driver telepathically. But software (including AI models trained with data from millions of road miles) can be copied into new cars, no problem.
> It's actually harder to drive on the typical American roadways than it is to be on a track.
Highway driving is relatively easy to automate, compared to driving on surface streets with pedestrians, bicycles, etc. Even if self-driving only worked on the highway, it'd still free up a huge amount of time for a lot of commuters. That'd be worth it even if self-driving cars didn't improve the accident rate.
> I have a suspicion it won't actually do better than a decent human driver can do.
I have the opposite opinion. I don't think self-driving cars are going to start on ALL roads, but on specific routes that are expanded as required. Eventually all routes will be covered, but it will take time. In truth, 90% (99%?) of my driving is on the same routes, so I don't think this is going to be a major issue.
In the end, I don't think this is going to be about what you, me, or even Tesla thinks. It will be about what insurance companies think. If self-driving cars turn out to be as good as they are promised to be, having a license for driving a car yourself will get more and more expensive. As safer drivers give up driving themselves, the remaining pool of human drivers gets riskier, and insurance fees will ratchet up and up until they are prohibitively expensive.
There's a model for this type of thing... Singapore. This is the type of place that I think will use self-driving cars first. When it starts to happen, I think the transition is going to be very quick.
Computers don't have fatigue. Ever driven for 12 hours? I have. I was so exhausted, I couldn't tell how much of a safety hazard I was to everyone else. Humans fuck up like that.
Even if computers can only match humans at skill level, they will be at that skill level 100% of the time, with no distractions. (And they will probably exceed us in skill anyway.)
Car accidents don't happen because people lack skill nearly as often as because people aren't at their best for just a moment. That's why autonomous cars will be safer.
That's not how it works. Imagine you had a medical device that had a high rate of operator-error that would result in serious injury or death. Now imagine someone made a fully automatic device performing the same function, but which would kill or injure a person half the time a manual operator would. Would such a device be allowed on the market? Of course not. The first time a machine did any damage to a human it would be recalled and would disappear off the market, no matter how much better it is than a manually operated machine.
If an automatic car kills a person through software error (and I would argue that has already happened with Tesla), the public backlash will be severe, even if those errors happen much less often than human mistakes in manual cars.
I think Americans are going to get left behind in this technology. It's not just the crumbling infrastructure, it's the lack of public transport options. I expect there will be plenty of cities around the world adopting self-driving car technology long before the US does.
Surely lack of mass transit would be a boon to self-driving cars. Public transportation makes owning a car — self-driving or otherwise — less necessary.
The people who use public transport are much more likely to be users of ride-sharing services. The target market for this technology, at least at the beginning, isn't going to be suburban or rural people, but urban people, especially those with constraints such as no garage or associated services and infrastructure. Any walk around New York, Singapore, or London will demonstrate this.
These people will already be using public transport. They won't want to own a car if they don't need to; they'll want the convenience of a car without the inconvenience of owning and parking it. As more people make that choice, the costs of the alternatives will increase (except, probably, for public transport). As less infrastructure is devoted to cars that sit empty all day in car parks, that empty-car infrastructure will get more expensive. When some bright spark recognizes that the space occupied by a car park in a high-density city environment can be turned into multi-million-dollar apartments with a small private fleet of self-driving cars for the residents, there aren't many wealthy people who won't want that property and service. There will be billions to be made.
All of these reasons, and more (like the insurance thing I talked about), I think, will contribute to it overtaking traditional methods of driving quite quickly when it is introduced.
Also, self-driving cars can follow each other closely and communicate; a group of cars could move like a train along the road. Much less roadway space would be needed, bringing valuable real estate back into use.
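A rough back-of-the-envelope sketch of the road-capacity claim above; the speed, car length, and headway figures are assumptions picked purely for illustration, not measured values:

```python
# Single-lane throughput at a fixed time headway between vehicles.
# All numbers here are illustrative assumptions, not real traffic data.
def lane_capacity(speed_mps, headway_s, car_length_m=4.5):
    """Vehicles per hour a lane can carry at a given following gap."""
    spacing = speed_mps * headway_s + car_length_m  # metres per vehicle
    return 3600 * speed_mps / spacing

v = 100 / 3.6                      # 100 km/h in m/s
human   = lane_capacity(v, 2.0)    # ~2 s gap, typical human-driver advice
platoon = lane_capacity(v, 0.3)    # tight computer-controlled gap (assumed)

print(round(human))    # ≈ 1665 vehicles/hour
print(round(platoon))  # ≈ 7792 vehicles/hour
```

Even with these crude numbers, tightening the following gap from a human 2 s to a platooned 0.3 s multiplies single-lane throughput by roughly 4-5x, which is where the "less roadway space" argument comes from.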
Studies have shown[1] that most people think they're an above-average driver, so your statement isn't very convincing. This cognitive bias is called Illusory superiority (https://en.wikipedia.org/wiki/Illusory_superiority).
[1] Here's one that says 80% of people rated themselves above-average on a number of important driving characteristics - http://www.sciencedirect.com/science/article/pii/00014575869.... And it's easy to find many sources saying that this conclusion has been reproduced time and time again.
Do you have evidence that most people aren't above-average drivers? I would be pretty shocked if driving skill were normally distributed. I would guess there are few extremely good drivers (you can only do so much to improve) but significantly more extremely bad drivers, which would drag down the average.
Even if you word your survey question in terms of medians, people are bad at numbers. I don't think you can really decide if that's illusory superiority without more digging.
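A quick numeric sketch of the skew point above (the skill scores are invented purely for illustration): with a left-skewed distribution, a large majority can genuinely sit above the mean, even though at most half can ever be above the median.

```python
# Toy numbers, not real driving-skill data: a couple of terrible
# drivers drag the mean down, so most drivers sit above the mean.
skills = [1, 2, 7, 7, 7, 7, 7, 7, 8, 8]   # hypothetical skill scores

mean = sum(skills) / len(skills)           # 61 / 10 = 6.1
print(sum(s > mean for s in skills))       # 8 of 10 (80%) beat the mean

# Against the median, "above average" behaves very differently:
s = sorted(skills)
median = (s[len(s) // 2 - 1] + s[len(s) // 2]) / 2   # 7.0
print(sum(x > median for x in skills))     # only 2 drivers beat the median
```

So whether "most people think they're above-average drivers" is even paradoxical depends on which average the survey respondents have in mind.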
I don't think there are huge numbers of extremely bad drivers. People who are extremely bad drivers tend to have their licenses taken away, and thus are no longer drivers at all. In fact, I would wager that, taking the average as the mean (not the median), most people are probably below average. As previously mentioned, there's a minimum bar for driving ability, and I would also wager that most people are pretty close to the average to begin with. But there are a fair number of exceptional drivers (e.g. anyone who drives for a living is likely to be a much better driver than the average person, as are the numerous people who do a lot of recreational driving).
All that said, even if your premise were correct, it would let a simple majority be better than the mean; getting 80% of people above the mean would take a very heavy tail of terrible drivers, and by definition more than half can never be above the median.
> extremely bad drivers tend to have their license taken away, and thus are no longer drivers at all
Your optimism about rule-breakers' adherence to the rules is heartening. A lot of that long tail of bad drivers we are talking about have no issues driving without a licence...
Not to mention that "better driver than average" is a very fuzzy definition, and more than 50% probably hit at least one of the possible categories: safer? Quicker reaction time? More pleasant to other drivers? More comfortable to ride with? More technically skilled in difficult conditions? Able and willing to go fastest? Gets the fewest speeding tickets? Etc.
>Let's see a self-driving car win a Formula 1 race
Aside from the obvious point that you or I probably couldn't win an F1 race either, autonomous race cars are coming along: one beat a top driver at Thunderhill Raceway Park in Northern California last year. Not F1, but give it time. http://www.telegraph.co.uk/news/science/science-news/1141026...
I'm pretty confident AI will get better at driving than humans; we're pretty rubbish. I'm more concerned about the AI being hacked, either directly or through something like overloading a sensor.
Wow. Just wow. Amazing! Hope it changes everything forever. For a while I thought it was driving way too fast, then realized the video was just sped up.
I don't like the idea of "sharing" upholstered seats with other people. I'm very neat and tidy, and other people aren't. Other people are going to eat their chips and junk in the car and get crumbs everywhere...
> While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.
But not the software? They don't even have confidence in their current implementation?
It's not surprising considering the recent announcements by the regulators, but that's quite a step.
> But not the software? They don't even have confidence in their current implementation?
I think it's quite forward thinking for them to include the hardware when it's ready, knowing they can update the software in the future. And I'd prefer them to be conservative in rolling out the software for an automobile. Not something I'd want to see beta tested on the highway.
>Not something I'd want to see beta tested on the highway.
It's not like the previous-gen cars aren't on the highway at the moment. Or has "autopilot" been deactivated in the meanwhile?
I understand their step to add better sensors for the future (even though it seems difficult without LIDAR). But disabling autopilot for new cars with better sensors and keeping it enabled for older ones seems like a strange step.
I'm guessing the data is just too different (probably no more Mobileye sensors, for example) and not worth adapting to the older ML models (which would itself require extensive testing) when the new system is going to end up with different ML models anyway.
[edit] Tesla previously hired Jim Keller (chip designer) onto the Autopilot team. Considering the kinds of things he may be working on, I'd be surprised if the differences in either the sensors or the GPUs aren't significant.
This means they don't have confidence in running the same tried-and-tested programs on a completely new platform they just launched. Which makes complete sense; wouldn't it be irresponsible otherwise?
It gives them time to confirm a new technology and update their maps of areas with information from the new wavelengths they're just now gathering.
So the difference is basically just a few extra cameras and updated sensors. It doesn't seem like a huge step or a completely new platform, at least when looking at the components.
I don't think they ever believed the previous sensors were capable of "full autonomy". The reason they didn't include all these sensors sooner is that it would have been a premature optimization given their lack of experience, and also pretty expensive in hardware terms (including the GPUs necessary to make use of the sensors).
This company's self-driving cars are going to have serious problems because their business roadmap is all over the place. This is not just wordplay; I'm serious.
This software is probably two years before release at minimum, and I wouldn't be surprised if it was more like four or five. Would you trust it at this point?
Are the cars going to look like Google's and Uber's self driving cars, then?
I never cared that much about self driving capabilities - I like to drive myself - and I certainly don't want to shell out $35,000 for a car with what looks like a food processor or a police emergency light mounted on the rooftop.
IMHO, one of the best features of Tesla has been that they actually made EVs look like traditional cars. It might seem trivial, but many of the budding competitors still fail to do just that:
Have you seen the e-Golf? I'm pretty sure they stole that design from somewhere. Hard to believe but it happens to look like a regular... golf. I'm sure they will get in trouble with VW. /s
There is also the Kia Soul EV and Skoda Octavia Green E Line which basically are just their regular line up with the major difference being that they are EVs.
I think the Volt looks pretty good. Better than a Tesla, which is just a generic black sedan, not remarkably different from the look of a Honda, Lexus, Audi, etc.
I think the fact that it needs to have wheels and be as aerodynamic as possible while having a decent amount of interior space has more to do with it. And even without a motor, you still need a crumple zone for safety.