One point I haven't seen mentioned yet: the images used for training are only 64x64! In the original Google "grasping" research, the images were 472x472, about 54 times as many pixels. I think they are looking for the minimum visual information needed for the network to learn the task. That would help in mobile applications (e.g., robotics, smartphones) where processing power is severely limited.
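For anyone checking that factor, it's just the ratio of raw pixel counts (a quick back-of-the-envelope, nothing model-specific):

    # 64x64 vs 472x472 frames: ratio of raw pixel counts
    small = 64 * 64        # 4,096 pixels
    large = 472 * 472      # 222,784 pixels
    print(large / small)   # ~54.4, the "54 times bigger" figure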
I think this is mainly a "data" challenge: how does Udacity plan to gain an upper hand here, especially against competitors like Uber and Tesla, whose data collection compounds with every new ride taken or car sold?
The goal is to create open source software for self-driving cars. That's a win if the quality is anywhere close to what Uber and Tesla have developed.
The first challenge is to create a conv net that decides how to turn the wheel to keep the car in its lane, similar to what Nvidia described on their devblog.
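For a concrete sense of what that looks like, here's a minimal Keras sketch of that kind of steering regressor. The layer stack follows my reading of the Nvidia post (their net takes 66x200 frames and outputs a single steering value); treat the exact sizes as illustrative, not gospel:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_steering_net(input_shape=(66, 200, 3)):
        # Conv feature extractor -> small MLP -> one regression output
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Rescaling(1 / 127.5, offset=-1.0),      # pixels to [-1, 1]
            layers.Conv2D(24, 5, strides=2, activation="relu"),
            layers.Conv2D(36, 5, strides=2, activation="relu"),
            layers.Conv2D(48, 5, strides=2, activation="relu"),
            layers.Conv2D(64, 3, activation="relu"),
            layers.Conv2D(64, 3, activation="relu"),
            layers.Flatten(),
            layers.Dense(100, activation="relu"),
            layers.Dense(50, activation="relu"),
            layers.Dense(10, activation="relu"),
            layers.Dense(1),                               # steering angle
        ])
        model.compile(optimizer="adam", loss="mse")        # plain MSE regression
        return model

Training is then just supervised regression on (camera frame, recorded steering angle) pairs collected from human driving.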
Earnest question: do you believe that full automation and mass displacement of workers, combined with a basic income, will lead to anything resembling a decent quality of life for most?
Because I want to believe it, but all contemporary evidence points to an outcome that includes an even more extreme stratification of wealth and masses of struggling, impoverished people.
This is correct. Those at the top, like me, will continue to develop AI and automation tools while doing our best to keep workers from "becoming smart" through low wages, minimal benefits, and the bare minimum of regulatory compliance. You wouldn't want your maid revolting against you, now would you?
Call me morally corrupt if you want, but there will always be morally corrupt people in the world whether we like it or not.
In both of those examples, the blame likely lies with the company, not the worker.
In Joe's case, any aluminum dust may stick to the broom. Solution: two separate $20 brooms, problem solved. Never mind that aluminum and iron shavings need a significant heat source to ignite, so where exactly in the factory was this corner being cut?
In Roy's case, it is a problem with the process, not with the worker. 1" drill bits get dull, and are very hard to actually break. Was the jig properly set up? Was Roy involved in setting up the process, and did he understand the tolerances on the bolt holes (including adding a go/no-go gauge if required)? These are the questions that need answering before passing the blame onto the minimum-wage employee.
I'm sorry, at the moment I don't have teaching materials in any presentable form. I tried Arduino-based projects, building parts of a 2D game in Swift+SpriteKit, fooling around with Minecraft redstone, and working through Google's Blockly Games.
Arduino seemed to click the best, because it involves a lot of working with your hands (both breadboarding and soldering) and you can "see" your creation working in a way that you just don't get with pure-software teaching. At least for the ages I was working with (8-10 y/o), blinking an LED held more joy than putting a 2D sprite into a window.
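For reference, that first project is usually just the classic blink. Normally the kids do it as the Arduino IDE's C++ Blink example; here's the same idea driven from Python via pyFirmata, assuming the board is flashed with StandardFirmata (the serial port name is a placeholder; use whatever your OS assigns):

    import time
    from pyfirmata import Arduino

    board = Arduino('/dev/ttyACM0')      # placeholder port; adjust for your setup

    while True:
        board.digital[13].write(1)       # pin 13 drives the onboard LED
        time.sleep(0.5)
        board.digital[13].write(0)
        time.sleep(0.5)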
I thought about publishing a more formal project curriculum, but there are already plenty of Arduino project books on the market, so it didn't seem worth it. Instead, I just pitch a couple of ideas to the kids, see what they want to do, and then we spend several sessions building that.