Nvidia and Audi aim to bring a self-driving AI car to market by 2020 (techcrunch.com)
215 points by t23 on Jan 5, 2017 | 133 comments



Tesla is also using a modified Nvidia self-driving platform for Autopilot v2, available in early 2017.

Nvidia demonstrated their platform with a large sensor suite: 10 cameras, ultrasonic sensors, and at least one LIDAR.

Whereas Tesla is using a sensor mix of radar, ultrasonic sensors, and around 7 cameras - LIDAR is probably still too expensive even for a $70k+ car (the big unit you know from Google's cars costs $70k; the smallest one costs at least $7k).

It will be interesting to learn about Audi's sensor mix, and what LIDAR product they choose.



> And if you're worried about the sensors being vulnerable to hacking—Eldada says they've engineered the sensor with seven layers of protection to make sure the system can't be tricked.

A seven layer seal? That could surely only be broken by the darkest of magics.


Let's hope each of these layers is not guarded by admin:admin credentials.


Nah, they got this - it is admin:changeme


Seven proxies perhaps?


Who is the attacker in this scenario? The car owner?


Seven horcruxes.


iptables rules that require each packet to jump through 7 chains before being allowed in.


Definitely interesting. Solid-state LIDAR is the breakthrough that makes them affordable. The current-gen big spinning LIDARs are too cumbersome in harsh real-world environments if you think of mass deployment in cars.


Velodyne is working on a $50 solid-state LiDAR.


The problem is that those solid-state LiDARs have missed shipment targets many times, and they also aren't equivalent to $60K LIDAR systems.


The smaller sensors aren't the same as the giant sensor that goes on top of the car, but ~10 of them can give you comparable coverage for self-driving purposes (assuming they work similar to other smaller LIDAR sensors such as what ibeo makes, hard to say before the product actually ships). And a few thousand dollars is easily affordable for luxury cars; think of it as a typical upgrade like consumers pay for navigation packages, stereos, leather interior, etc.

With regard to the missed shipments - I agree, that is a red flag. And when a company gives a timeline, and then that passes and they change their answer to "soon", it's concerning. Of course, it's CES right now - maybe someone will make an announcement in a few days. Last year Quanergy hyped up their sensor, but when they were pressed it turned out their demo sensor was actually not solid state.


But I struggle to name the expensive element of a LiDAR system. I suspect the previous, relatively low volumes might have contributed to much of the cost.


Well, $5 LIDAR is 'on the horizon' too[1], but I will believe it when I can hold one... but I really, really want to believe.

1: https://techcrunch.com/2016/11/15/new-lidar-package-makes-it...


>Tesla...available in early 2017.

One claim versus another, both optimistic. I'm not going to believe Audi or Tesla until I see it.


A LIDAR is available off Amazon as we speak for < $500, and that's the end-user price. I won't link because I'm not spamming, but the ASIN is B01L1T32PI. Surely it's not the bee's knees, but it's not $7K for sure.


It's from a robot vacuum cleaner. It can only scan 2D up to 6m. It's a toy compared to an automobile-grade LIDAR that can scan 3D through 360 degrees at a fast scan rate.


I love that in 2017, we can just buy stuff like this on Amazon!


> available in early 2017.

Enhanced Autopilot available mid December 2016... oh wait.


I believe they are using 'HD cloud maps', RADAR, and deep learning vision processing _instead_ of LIDAR.


The term "self-driving" in this context has no technical meaning behind it. Cruise control is also self-driving. And don't get me started on "AI". Jen-Hsun is saying that the car "was trained for four days". WTF? Mobileye has a team of 100s of annotators working 9-to-5 generating training data. It's as if this report is tailored to fool credulous readers who have a vision of HAL 9000 driving a car in their mind.


The article specifies Level 4 autonomy, which means:

The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.


On a sidenote, level 4 autonomy will be really interesting when it's all over the place. People are already "bad drivers" (in my opinion, and including myself), but not having to drive 75-90% of the time will mean we rob ourselves of the constant training that helps keep us somewhat competent at driving.

How bad will the roads be when we have thousands of people all very unsure of their driving skills driving in conditions that are dangerous to begin with? Seems quite the interesting issue.


I am at my worst as a driver when I am fatigued from the stress of attending to driving for a long time. Having an L4 system where I could step in for a few minutes after just riding for an hour, giving fresh attention to those few minutes, would make me a better driver.

The advent of cruise control has not made people incapable of using the accelerator. You occasionally meet 16-year-olds who enable it on any 35, 45 or 55 mph road when they reach the speed limit, because holding constant speed is a skill. But it's not difficult to refrain from using it for a while to brush up on the skill.


For me, it's when I'm fatigued and early in the morning. How nice would it be to get 15 minutes to drink some coffee and wake up or relax before getting back to the drive?


I once spent two hours driving through a snowstorm in Canada. There's no reason to expect that the conditions which require human attention would only last for a few minutes.


That's great. I've done the trip to Michigan Tech from lower Michigan in a few snowstorms.

But I have done far more miles on highways in good conditions where an L4 system would be helpful. The tech is still useful even if it's not a complete solution.


>> "...people all very unsure of their driving skills..."

People will still be wildly overconfident of their driving abilities.


The last two sentences seem contradictory. If the system cannot detect by itself when it is safe to be used, won't the driver also have to react when it becomes unsafe to use?


It means the system is constructed such that it won't get into a situation it cannot handle. But it does not handle every random situation a human can put it in.


You're right, it should probably be a bit more muted. Perhaps "When enabled, [significantly less] driver attention is required." The driver can't go to sleep, for example, lest they miss the alert that indicates "oh no I can't figure it out anymore." Though the car would likely be designed to timeout on driver response and execute a safe deceleration/pull-aside maneuver.
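
For illustration, a minimal sketch of what such a timeout-and-fallback loop might look like (every name and threshold here is hypothetical, not taken from any real system):

    # Hypothetical sketch of an L4 handover fallback; not any vendor's logic.
    import time

    ALERT_TIMEOUT_S = 10.0  # assumed grace period for the driver to respond

    def handle_low_confidence(car):
        """Alert the driver; if they don't take over in time,
        decelerate and pull aside."""
        car.alert_driver()
        deadline = time.monotonic() + ALERT_TIMEOUT_S
        while time.monotonic() < deadline:
            if car.driver_has_taken_over():
                return  # handover succeeded, human is driving
            time.sleep(0.05)
        car.execute_safe_stop()  # slow down, signal, pull onto the shoulder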


>> When enabled, driver attention is not required.

People need to stop saying this. It doesn't matter if Level 4 specifies this, and someone claims a certain car successfully passes the test. No car in AI infancy is going to be 100%. Drivers must absolutely be prepared to take over in under half a second at all times. That's not likely to change for 30-50 years.

I found it quite socially irresponsible for him to mention during the keynote that it would be fine for his 80+ year-old mother to continue driving once she is no longer capable of managing the vehicle on her own. It might be perfectly fine in a few decades. Not by 2020 though.


I agree with you, but that is the new standard. The NHTSA used to have a much stricter definition, but has since adopted the SAE standards.


4 days as in training time, given all the compute resources they could allocate to the problem.


Exactly. Who cares if they have trained it for only 4 days? That doesn't mean it would get twice as good (or even any better) in 8 days' training.


Wonder how many more times this can get declared before the PR value turns into a net loss.


I think that's happening already, among engineers at least.

There should be some healthy dose of inspiring "propaganda", but this has gotten out of hand - everybody claims "they have it".

I am so sick of this endless stream of lies that I'm not even going to read the article. The next thing I am going to read on the topic should be something like "X has a viable self-driving car - it's hitting the market later this year."


You'll still have to endure the definition-stretching phase. I'm going to start really paying attention when pedal-less cars are declared.


> when pedal-less cars are declared.

That won't happen till General AI. How would you do something as simple as drive the car onto a lift to change the oil without pedals and a steering wheel?

Or there is a concert and you need to drive onto the grass to park?


Solving a couple tiny problems really doesn't require human-level general intelligence. At the very worst you'd have a touchscreen with simple overrides.

People talk as if there's some infinite number of unique edge cases when driving a dang car, but there simply isn't that much you actually do with a vehicle. Real deal no-drivers-license-required automated vehicles are coming pretty soon; maybe not in 5 years, but definitely within 20.


Been hearing that line, verbatim, since I was a kid. "Cars will be self-driving long before you'd ever need to learn driving. You won't need that skill, just like we never needed horse-riding." Well, I've had a driving license for decades, but apparently the marketing slogan still manages to impress people. P.T.Barnum would be proud.


I remember hearing this as a kid. But it was mostly speculation, it would be insane to believe it to the extent that one even starts to plan for it.

We are in a completely different situation today.


My guess is people will change their expectations of what they can do with a car. Maybe it will not work on unusual terrain anymore. Maintenance procedures might also change and involve remote controls or other specialized types of control.


I have a lot of trouble envisioning that. I definitely would not want a car that I can't take off road as needed. At least half the people on the street where I live have one truck among other cars and many cars are parked in lawns. How quickly do you think perceptions about this could change?

The only thing I can think that would sell me on the idea is a really low price point, e.g. sub-$10k to get a new, driverless, road-only vehicle. But I would only want that in addition to a "real" car.


No one will have cars anymore. Especially not just parked around on a residential street. It's just so inefficient. You hail one via your app and that's it.


I doubt that. Cars are, besides being a means of transportation, also a moving trunk, and that makes them non-interchangeable to a degree.


The vast majority of car users take their vehicles at least slightly off road. I don't mean running a jeep through the Rubicon Trail, just simple common stuff like parking in a snow-covered lot at a ski resort with a human attendant pointing you where to park. How is a self-driving car ever going to handle that short of AGI?


You plan the path on a high resolution map on the center console. When decision speed is a non issue this is a quite nice interface because you can double check the entire path before the wheels turn.


"(6) It is easier to move a problem around (...) than it is to solve it."

In other words, now you need a hi-res, current map to show. Where does the data for that come from? (Oh wait, don't tell me: Magic!)


The cameras of the self-driving car, of course. I assume converting these inputs into maps that are usable enough for this purpose is much easier than driving a car in city conditions. So _if_ we get self-driving cars, we also get live-updated maps of the surrounding area within a radius of ~20m.
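
A rough sketch of that idea, assuming upstream perception already yields obstacle points in the car's reference frame (every name and parameter is illustrative, not a real mapping pipeline):

    # Fold per-frame obstacle detections into a local occupancy grid
    # around the car. Purely illustrative.
    import numpy as np

    RADIUS_M = 20.0  # the ~20m circle mentioned above
    CELL_M = 0.1     # grid resolution
    N = int(2 * RADIUS_M / CELL_M)

    grid = np.zeros((N, N), dtype=np.float32)  # occupancy log-odds

    def update(grid, obstacle_points_xy, hit=0.7, miss=-0.1):
        grid += miss  # gently decay all cells toward "free"
        for x, y in obstacle_points_xy:
            i = int((x + RADIUS_M) / CELL_M)
            j = int((y + RADIUS_M) / CELL_M)
            if 0 <= i < N and 0 <= j < N:
                grid[i, j] += hit - miss  # net effect: +hit for observed cells
        np.clip(grid, -5.0, 5.0, out=grid)  # keep log-odds bounded
        return grid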


Which can now see behind corners, underground and over terrain. Magic, I say, pure magic!


Not that people actually driving cars off the road are capable of this now. I'd often wish for a remote control when parking in constricted spaces.


They are not - but then you're back to driving, although using the touchscreen. Completely different from the original proposition "tell the car where to go, it will do the rest". You're just exposing a different control API to the human controlling the car (a.k.a. driver), that's hardly driverless.

TL;DR: "remote control" != "self-driving"


Correct - but the starting point here was already 'take this path', not 'go to that location on the grass'.

Come to think about it - even 'just park somewhere' is remarkably complicated. I don't quite understand why self-driving cars should need external high quality maps and can't make do with onboard cameras as human drivers do.

But then the parking requires either a high-quality database of valid and off-limits parking locations, or the ability of the car to read the signs on parking lots and generally be able to distinguish private from public parking.


Which is sort of my point: lots of things that seem simple to a human (but actually consist of many rules from various domains, not all of them explicit or universal) are thus handwaved away as "a few minor issues," even though every single one of them is at least a UX friction point, and in sum they're significant blockers on the path to actual driverless vehicles. Since Minsky started meddling with AI, the hype has always considered "70% there" as "done." AIchilles and the turtle, so to speak ;)


> drive the car onto a lift

There will be an app for that?

Hey, James Bond was driving from the backseat with a Nokia 9000 (?) a few movies ago.


"Driverless" is the new "flying." Also, human-like translation Real Soon Now (yes, yes, I've read that last week's article. And the last year's one. And the one before that. It's turtles all the way down to Babbage.).


I've never seen a flying car. I have driven in traffic next to, in front of, and behind Google's driverless cars, though.


Fair point, perhaps we might get there within a decade or two. (OTOH, nobody has ever seen an actual driverless car, either: those cars are "driverless unless driver intervention is needed" cars - a circular definition if I ever saw one. "90% there" is not "done", never mind the marketing)


Really?

You know that there are tens of thousands of self-driving cars on the road RIGHT NOW, that you as a consumer can buy tomorrow.

Tesla sells them, and they work.


Tesla cars are not autonomous as stated by Tesla themselves.


You press a button, take your hands off the wheel, and it drives down the highway.

That is absolutely self driving.

If half of your driving time is done by the car, that is still a massive benefit to consumers.

End-to-end autonomy is even better, of course, but I think everyone is selling short the in-between 95% self-driving.

Even TINY features, like autobraking in emergencies, have the potential to save tens of thousands of lives every year, if fully deployed in all cars.

And for jobs, highway driving is 90% of what truck drivers do. If you have highway self driving (which we DO, right now!), then there goes most truck driver jobs.


I agree with you partway. Of course it is still a massive benefit.

But fully autonomous driving is still transformative in entirely different ways that makes it hard to treat the evolutionary steps as all that exciting even though many of them probably should be.


> Of course it is still a massive benefit.

Could you ELI5 what the massive benefit is? The system requires you to be alert and focused on the road the entire time.

We know for a fact e.g. from the aviation industry that having a human sitting alert but doing nothing increases human response time and decreases the correctness rate of their actions, which is one reason why airplanes don't regularly land on autopilot, even though the tech has been there for a long time. In fact, auto-land on airliners is only used in very low visibility, low wind conditions, or for training to ensure pilots use it at least semi-annually.

The huge benefit of autopilot on airplanes and ships is that for normal operation out on the ocean or up in the sky, it's sufficiently safe that the pilot/captain can focus his attention elsewhere for long periods of time.


No it doesn't. Systems like Otto do Level 4 self-driving for trucks on highways. These are on the roads right now.

The truck drivers can just get up out of their seat and do something else. There goes 90% of truck jobs, as now you only need a driver for the first and last parts of a trip.

Other benefits: saved lives. The more highway driver cars we have, the less that humans will be driving, and the less chance that a human will make a mistake.

The Level 2 and 3 stuff on a Tesla is already a better driver than most humans.


Otto's level 4 system has not reached production, similar to all of the other level 4 systems that are currently in testing.


> The Level 2 and 3 stuff on a Tesla is already a better driver than most humans.

Ehm, how do you come to this conclusion? Statistically, Autopilot Teslas are no safer than the NHTSA average, at least so far.


Statistically, the error bar is so large that your statement, and the one you're replying to, are not accurate. We don't know because there's too little data.


Yes, I agree, to a point. However, if we suppose that Autopilot driving was several orders of magnitude safer than normal driving, the probability that the (admittedly poor) statistics at this point would show it being equal to normal driving is very low.

If I say "black swans are extremely rare in this part of the world", and you spot one the very next day, the Bayesian in you would assign a low probability to my statement being true, even though that's based on a sample size of one.


Drives down the highway, and if it decides there's something it doesn't understand, it stops driving. Which is not the same thing as "stops the vehicle" - in some cases it's actually a "f*ck it, I'm out, now you drive the car yourself".

Anything fits within "autonomous except when not autonomous" - even the crudest cruise control.


None of those are actually self-driving cars though.


It's just like battery innovations.

If those were true, our phones and notebooks would already last forever.


Random thought: the more deep learning is used in training, the less humans will be able to retroactively explain decisions; this surely has liability implications


Perhaps NNs need functionality to come up with fake rationalizations for their decisions just like the human brain does.


I'm thinking that this idea about "fake rationalization" is a bit backwards. We say that many actions happen before we are aware of it or have a rationalization for them but becoming aware of why we did something after the fact doesn't mean that "you" weren't part of the decision. Aren't your reflexes a part of "you"?


Of course, but nobody is going to say "my reflexes did it". We come up with stories, lines of argumentation, etc. Even though these are "ex post" we often manage to convince ourselves...

In Homer's time the Greeks viewed the unconscious as an external (divine) force, the most famous example being Telemachus's sneeze. In a weird way this feels more intellectually honest than the thing we do.


    > we often manage to convince ourselves
In this case, presumably the trick is to convince a jury.


NuTonomy has an autonomous driving technology based on formal logic [1]. It seems like formal logic is a better approach for retroactively explaining software decisions.

[1] http://spectrum.ieee.org/transportation/self-driving/after-m...


Google, too, uses plenty of formal logic in their autonomous OS; it's a Frankenstein of various machine learning techniques. Narrow pattern recognition alone, while powerful, is unlikely to ever be suitable for things like negotiating busy four-way stops.

There is the presumption that all you gotta do is gather enough training data and shazam!, you've got a self-driving car, but that's bollocks. There is a tremendous amount of elbow grease that goes into developing and validating an AV to six-sigma reliability. Six sigma might not even be good enough for something as safety-critical as a driverless car.


It doesn't need six-sigma, just to be significantly better than humans, and for the manufacturer to take on the liability.


In strictly rational terms it has to be better than 1 fatality for every 100,000,000 miles driven, but public opinion isn't exactly rational, so it probably needs to be even better than that.


It'll be that way at first, yeah, but once it becomes obvious how much better the AI drivers are, it will start to become irresponsible to not use it


I don't see how this is true. With a machine system you would at least have the ability to log the mathematical operations and results, which would be complex, sure.

But more complex than figuring out if another person is lying about a very nuanced and opaque decision they made?


The scope of your question is too large. Do not compare the complexity of AI to the human brain. Instead compare AI to traditional robotics. In expert systems rules are laid out and logic chains can be followed. AI is a black box of a billion random weighted nodes. The system does not care about 'why' when reducing loss.
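
To illustrate the contrast, here is a toy rule-based fragment where every conclusion carries its own trace, something a matrix of learned weights cannot give you (the rules are invented):

    # Toy expert-system fragment: each decision records the rule that
    # fired, so the "why" can be read back afterwards. Rules are invented.
    def decide(speed_mps, obstacle_dist_m):
        if obstacle_dist_m < speed_mps * 2.0:  # R1: keep a 2-second gap
            return "brake", "R1: gap below 2 seconds"
        if speed_mps < 1.0:                    # R2: nearly stopped
            return "creep", "R2: speed below walking pace"
        return "hold", "R3: default, no rule triggered"

    action, why = decide(speed_mps=20.0, obstacle_dist_m=30.0)
    print(action, why)  # brake R1: gap below 2 seconds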


Yes. This is why https://news.ycombinator.com/item?id=13111516 was on the front page last month.


I feel like this is very actively being addressed, for example by visualization of intermediate features.
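
For example, a common way to do this is a forward hook that captures a layer's activations so each channel can be rendered as an image. A minimal sketch assuming PyTorch (model and layer choice are arbitrary):

    # Capture intermediate feature maps with a forward hook (PyTorch).
    import torch
    import torchvision.models as models

    model = models.resnet18().eval()  # any conv net; weights random here
    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    model.layer1.register_forward_hook(save_activation("layer1"))

    x = torch.randn(1, 3, 224, 224)  # stand-in for a camera frame
    with torch.no_grad():
        model(x)
    feats = activations["layer1"][0]  # (channels, H, W) feature maps
    print(feats.shape)  # each channel can be plotted as a grayscale image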


Bitcoin and cryptocurrencies, AI, self-driving cars... good (and profitable) times for GPU manufacturers.

Nvidia stock (NASDAQ:NVDA) is up 4x in the last year and will probably continue climbing.


I thought all the bitcoin and crypto stuff had already moved past GPUs and on to custom ASIC units?


I know next to nothing about this but my buddy is big into Bitcoin. He runs a lot of ASIC machines for bitcoin, but he says there are a lot of reasons to dislike the ASIC compatible algorithms. He's experimenting with switching to GPUs, iirc he's mining Ethereum with them at the moment.

I can't go into much detail as it's not my knowledge, obviously, but I can say that GPUs are viable/required for some of the coins out there. Based on his word, at least.


The only crypto that has transitioned from GPU to ASIC is bitcoin, and that came after many years of GPU-based mining operations. As for the rest of the cryptos, the vast majority are still being mined with GPUs, including Ethereum.



I think that Tesla is basically saying 'F it' and releasing something like this either right now (version 2.0) or as a full version before the end of 2017.

But they have more or less been doing that the whole time. It's just that now they have more sensors and deep learning, so they are going to be autonomous a higher percentage of the time.

So I think as soon as they start rolling it out, more and more Tesla owners will commonly have 100% autonomous trips, with some exceptions for weird traffic or weather.

I think this is risky in some ways, but overall it's more ethical than delaying, because the only way to train/engineer for the exceptional situations is to get a lot of vehicles running the system and training on data. Waiting a few years means people die from human error, and you're unlikely to see massive improvements to the system that would make up for that.

One thing people will realize eventually is that we create a lot of driving situations that are structurally unsafe. For example, it is accepted to speed past pedestrians or bicyclists a few feet away on the sidewalk or bike lane. No level of AI advancement can prevent some random horrific accidents in that case. It could be as simple as a pedestrian crossing the street a little early. People are not going to tolerate AIs going 5 mph anytime a pedestrian is nearby, but that's the only way you could prevent fatalities in some situations. That is part of the 'low confidence situations' the Nvidia guy mentioned. So actually we need laws to protect autonomous tech companies in those situations, or that will delay situational deployment and lead to more human-error deaths.


I know lawyers and states need time to legalise the paperwork around driverless cars, but I feel like 2020 is simply to appease the auto companies and give them another few years to stall.


Why is everyone so damn cynical about self-driving cars?

We know this is coming, and this technology will improve the lives of so many people in the long run. Maybe it's from Nvidia and Audi, maybe Tesla, Uber, Google, that dude who launched and failed and ran away to China, who knows?

I'm excited to think about what opportunities will start to open up once humans don't need to spend 2+ hrs a day with their hands on the wheel :)


Video: https://www.youtube.com/watch?v=7jS4AuPnmyg

"Audi and NVIDIA developed an Audi Q7 piloted driving concept vehicle, which uses neural networks and end-to-end deep learning. Demo track at CES 2017 in Las Vegas."


I wonder how it compares to Tesla Vision.



Tesla made their own software:

The computer delivers more than 40 times the processing power of the previous system, running a Tesla-developed neural net for vision, sonar, and radar processing.


Every customer of Nvidia's solution "develops their own neural net" by feeding their own set of sensor data into Nvidia's training platform, then using the resulting neural net on Nvidia's hardware in the cars. Did Tesla not do that, the same way Audi did here, and Nvidia themselves did for BB8? Everyone's going to be adding some software on top of that for the UI and UX they want to offer and such. Did Tesla do more than that? Your quote doesn't actually suggest they did.


The neural net is 99% of the work - if all they share is a particular GPU model, but each have "their own" neural net made on their own training data, then you can pretty much treat it as a completely separate, different system.

The quality and performance of the system is mostly determined by the data, not the raw computing power, so it's worth comparing them as they can be very different.


The scary thing about non-ad-hoc techniques is that a deep net is a "black box" -- you really don't know how pathologies occurred, nor do you know how to fix them.

Not only that, there are _inherent_ pathologies associated with using deep nets in the first place.


DB: So, control theory is model-based and concerned with worst case. Machine learning is data based and concerned with average case. Is there a middle ground?

BR: I think there is! And I think there's an exciting opportunity here to understand how to combine robust control and reinforcement learning. Being able to build systems from data alone simplifies the engineering process, and has had several recent promising results. Guaranteeing that these systems won't behave catastrophically will enable us to actually deploy machine learning systems in a variety of applications with major impacts on our lives. It might enable safe autonomous vehicles that can navigate complex terrains. Or could assist us in diagnostics and treatments in health care. There are a lot of exciting possibilities, and that's why I'm excited about how to find a bridge between these two viewpoints.

https://www.oreilly.com/ideas/machine-learning-in-the-wild


It doesn’t have to be all or nothing.

For instance, the network might only be used to decide among a series of actions, and those actions can still have limits (such as “car cannot travel faster than X” or “Y cannot change more than 3 times per minute”, or whatever). There is still an abundance of attention put into safety, as usual for the auto industry. It isn’t just a brain hooked up to an engine that is allowed to run rampant.
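
A minimal sketch of that "network proposes, hard limits dispose" pattern (the limits and the interface are made up for illustration):

    # Clamp a network's proposed actions to fixed safety limits.
    # All limits are invented for illustration.
    MAX_STEER_RATE_DEG_S = 30.0  # a "Y cannot change more than ..." limit
    MAX_SPEED_MPS = 33.0         # "car cannot travel faster than X"

    def safe_command(nn_steer_deg, nn_speed_mps, prev_steer_deg, dt):
        # Rate-limit steering so the net cannot snap the wheel.
        max_delta = MAX_STEER_RATE_DEG_S * dt
        lo, hi = prev_steer_deg - max_delta, prev_steer_deg + max_delta
        steer = min(max(nn_steer_deg, lo), hi)
        # Hard-cap speed regardless of what the network asked for.
        speed = min(max(nn_speed_mps, 0.0), MAX_SPEED_MPS)
        return steer, speed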


Depending on the context, 'turn the wheel left two degrees' can be a non-rampant, a very rampant, or a rampage-compensating action.


"Keep the car in a straight line" or "turn right, 10°" are the kinds of operation that will always be more performant, take less engineering, and more reliable to create as an old-fashioned simple math function than by using machine learning.

One would expect the output of the neural nets to be expressed in terms of those primitives.
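
For instance, the lane-keeping primitive could be a plain proportional-derivative law over a lateral offset that a neural net merely estimates (gains are illustrative):

    # An "old-fashioned simple math function" control primitive:
    # proportional-derivative lane keeping. Gains are illustrative.
    KP, KD = 0.8, 0.3

    def steering_cmd(lateral_offset_m, offset_rate_mps):
        """Steer back toward the lane center from a measured offset.
        A net may estimate the offset; this turns the estimate into
        a wheel command deterministically and auditably."""
        return -(KP * lateral_offset_m + KD * offset_rate_mps)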


It's the exceptional cases that get you:

1. "Keep the car in a straight line" => The road is closed or there is a detour, or the lane markers aren't visible (or incorrectly marked like on the Palo Alto stretch of the 101 where there are two sets of lane markers; one new (the "actual" markers) and another old but barely faded and still very visible.

2. "Turn right, 10 degree" => With what steering radius? What if the road is bumpy or sloped and the driver has to compensate with the wheel?

etc.

These corner cases are everyday occurrences, and they make something far more sophisticated than a mere ad hoc model necessary to represent and control human-level driving behaviors with a finite set of rudimentary actions.

Actuator inputs like steering and pedals are almost always represented using continuous values rather than discrete sets for these very reasons.


As do humans (see the various cognitive biases). The question is: does the AI's black box result in a safer outcome?


And the answer is an obvious no, not even close, with the current state-of-the-art in deep nets. I could forgive an opaque model if it actually performed well.


Fun thought experiment.

An alien race landed on earth and demands to play a game of Go. We only get to play one game with them. If they win, our planet is destroyed.

Who would you trust to play for the human race if this scenario happened tomorrow? Lee Sedol or AlphaGo? Remember that we do not completely understand how AlphaGo reasons, it is still a black box to us.


Lee Sedol is also a black box, no?

Also, one shot for the survival of humanity is very different from billions of iterations of driving situations every day for the foreseeable future. A complete understanding of the system is much more valuable when you have the opportunity to iterate.

I'm not necessarily arguing for one approach; just saying that the analogy doesn't really apply to the case at hand.


> Lee Sedol is also a black box, no?

Last time I checked we can talk to Lee Sedol and ask him to explain things. We can ask him questions. We can have an intelligent conversation with him.


Human explanations for their decisions are often rationalizations after the fact. The explanations don't necessarily accurately represent how the decisions were actually made. Most decisions are made subconsciously, based on intuition and emotion. So that intelligent conversation might not have any real significance.


Ya, that was my thinking. And with the sort of hybrid, constrained neural net setup we're discussing here, you could likewise 'discuss' the constraints, the inputs, perhaps even the thought process to some extent. But like a human, it couldn't tell you the exact causal path taken to arrive at the decision.


We often don't even have appropriate language for many decision processes. See: research into those that do vs. don't have internal monologues (virtual voice, basically) when reading and thinking, and associations to creative thought.

N.B: When I was younger I had no idea that others did those and thought they were fucking with me when they were describing this.


My general point applies to human thinking in general not just Go.

Example: Would you fly on a plane designed ultimately by a human vs an impenetrable black box?

Also there is a spectrum. Let us not pretend otherwise.

1. One end: No explanations.

2. Middle: Sometimes false explanations and sometimes true explanations.

3. Other end: Always true explanations.

Are we really saying the middle is completely useless?


Yes I would fly in any large airplane which is properly certified for scheduled commercial airline service, regardless of how it was designed. The FAA has earned our trust and has a good safety record so if they tell me the design is satisfactory then I would believe them. I also wouldn't take the risk of flying in any non-certified experimental aircraft, again regardless of who or what designed it.

We have no way to determine whether an explanation is true, false, or simply a post-hoc rationalization. We like to believe that we can, but we're just fooling ourselves.


> We have no way to determine whether an explanation is true, false, or simply a post-hoc rationalization

If we have no way of determining whether something is true or false, then I can say the same thing about your own statement quoted above. I can just say it is false and go on with my life.

I sincerely hope you realize the obvious self contradiction :D

Logic 101. :)


There's no self contradiction. I never claimed that we have no way of determining whether something is true or false. I only claimed that we have no way of determining whether the explanation a person gives for how he made a decision actually matches his real mental process or motives. We can't yet install a debugger with break points and variable watches in the human mind; it's very much a black box.

Logic 201. :-)


Logic 101. :)

> I never claimed that we have no way of determining whether something is true or false

Everything can be cast as a declarative statement.

Matches(givenStatement, actualIntention).


The black box that can invent stories to give the illusion it understands its subconscious processes? How comforting.


I remember an article here on HN about somebody that trained a neural network to explain the decisions of another NN. I think it's fitting :)


Do we really need to understand the whole stack that goes into a decision?

That means we have to start with physics.


Yea the analogy doesn't hold. I was simply probing at the author's criticism (or fear) of deep learning as a flawed approach given the resulting model is pretty opaque.

To be honest, I would have felt the same way two years ago. Seeing all these recent advances in the field has slowly changed my mentality.


If emotions weren't a problem? Lee Sedol.

If adaptability is the metric, machines still haven't beaten humans.

AlphaGo was trained on some set of movements (as were the other AIs that helped train it). It is not a pure game theory construction, so there is no reason to expect it to be as good against an alien as it is against a person.


AlphaGo, for the simple reason that in that scenario any human would be absolutely bricking it and would be pretty much bound to make an unforced error as a result.


Definitely, quite scary, keeps me up at night. At least I know EXACTLY what my taxi driver is thinking....

/s


And all this because of gamers.

You're welcome ;)


And the guys with the NNs.


Audi? I'd be happy if they could build a car with a damned USB port and a touch screen.


Lately, every few weeks the trend in the news is one tech company + one car company.


... and soon the trend'll be 1 car company owned by a tech company. Wondering when the first tech companies are going to start buying car companies -- in the tried and true spirit of tech (software) eating the world.


Maybe. Fiat Chrysler is a $16.44B company; Nvidia is a $57.15B company. Any other car group I can think of is much larger than Fiat Chrysler.



