
Does anybody understand where the supervision (the target steering angles) comes from? I checked Learn to Drive.ipynb, but that seems to just read steering outputs from a file. Shouldn't there be manual "labeling" involved?



The steering angles are probably recorded automatically by the script you run while manually driving the toy vehicle. I presume that at every frame an image is captured along with the steering angle at that moment.
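
A minimal sketch of what such a logging script might look like, in Python. The camera and servo interfaces here are hypothetical placeholders, not the project's actual API:

    import csv
    import time

    # Hypothetical hardware interfaces; the real project would use its own
    # camera and servo libraries (e.g. picamera plus a PWM driver).
    from car import camera, steering_servo  # assumed names, not the project's API

    with open('driving_log.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['image_path', 'steering_angle'])
        frame = 0
        while True:
            image_path = 'frames/%06d.jpg' % frame
            camera.capture(image_path)            # save the current frame
            angle = steering_servo.read_angle()   # the human driver's steering input
            writer.writerow([image_path, angle])  # pair the image with its label
            frame += 1
            time.sleep(0.1)                       # log at roughly 10 Hz

At training time you read this log back in, which would explain why the notebook appears to just read steering outputs from a file.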

You then create a neural network that is trained on those images and their associated steering angles, and that outputs a steering-angle prediction for any input image. This is a regression problem that can be expressed in plain English as:

"First train the neural network with a collection of images and associated steering angles. After training, if I were to give the neural network a new image it has never seen before, what steering angle would the network predict?"

NVIDIA has a paper on it[1], and I wrote a blog post about similar coursework I completed as part of Udacity's Self-Driving Car Nanodegree[2].

[1] https://arxiv.org/abs/1604.07316

[2] https://towardsdatascience.com/teaching-cars-to-drive-using-...


Presumably the file is a recording of a human operating the car?




