I make most of my money from time series data, use deep learning, and work with unlabeled data. Here's a recent presentation I did on some of this work, and a companion presentation I encourage people to read on how to use this effectively in production:

https://www.slideshare.net/agibsonccc/anomaly-detection-and-...

https://www.slideshare.net/pacoid/humanintheloop-a-design-pa...

For more of the basics, my book on deep learning might help as well (minimal math vs. the standard textbook):

http://shop.oreilly.com/product/0636920035343.do

While you are right that some feature engineering is needed, there's no reason DL can't be a part of your workflow.
> While you are right that some feature engineering is needed, there's no reason DL can't be a part of your workflow.
I never implied you can't use deep learning on such data, and I don't believe the earlier poster did either. What we were both, I think, referring to was the claim that it would absolve you from feature engineering, which I understand you also refute.
> For more of the basics, my book on deep learning might help as well
Congratulations on your book, I know how much hard work that is!
Disclaimer: I make money with deep learning, too... ;-)
A lot of people make money applying deep learning to images ;).
I guess what I wanted to do was add a bit of nuance: deep learning can help reduce the amount of feature engineering needed, but you still need a baseline representation, and more feature engineering doesn't hurt. I always think of deep learning in the time series context as a neat SVM kernel with some compression built in. With the right tuning, it can give you a better representation, which you can then use with clustering or whatever else you'd like.
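To make that concrete, here's a minimal sketch of the idea, using a small autoencoder as the representation learner and k-means on the learned embeddings. The synthetic series, window size, and layer sizes are illustrative assumptions, not the poster's actual pipeline:

    # Learn a compressed representation of time-series windows, then
    # cluster it. All data and sizes here are illustrative.
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    WINDOW = 32  # sliding-window length (assumed)

    # Synthetic univariate series standing in for real data.
    rng = np.random.default_rng(0)
    t = np.arange(8000)
    series = np.sin(t / 20.0) + 0.3 * rng.standard_normal(t.size)

    # Slice into overlapping windows; normalize each window.
    windows = np.stack([series[i:i + WINDOW]
                        for i in range(0, len(series) - WINDOW, 4)])
    windows = (windows - windows.mean(1, keepdims=True)) \
        / (windows.std(1, keepdims=True) + 1e-8)
    x = torch.tensor(windows, dtype=torch.float32)

    # Autoencoder: the 8-dim bottleneck is the compressed representation.
    encoder = nn.Sequential(nn.Linear(WINDOW, 16), nn.ReLU(), nn.Linear(16, 8))
    decoder = nn.Sequential(nn.ReLU(), nn.Linear(8, 16), nn.ReLU(),
                            nn.Linear(16, WINDOW))
    opt = torch.optim.Adam(list(encoder.parameters()) +
                           list(decoder.parameters()), lr=1e-3)

    for _ in range(200):  # train on reconstruction error
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(x)), x)
        loss.backward()
        opt.step()

    # Hand the learned embeddings to an ordinary clustering algorithm.
    with torch.no_grad():
        emb = encoder(x).numpy()
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)
    print("cluster sizes:", np.bincount(labels))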
I work with language, not images. There, clever feature engineering isn't just better; it's essential to get anything that is production-worthy. In fact, it will even be embedded in some expert-system process if your system needs to understand very complex relationships. AI around the corner, my ass... :-)
Agreed :). Workflow matters a lot more than the hype Sand Hill Road and Google's marketing team are perpetuating. Good on you for making it work in the real world on something outside of vision/speech!
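To give a flavor of the hand-engineered text features the language comment above alludes to, here's a toy sketch. The particular features (word and character n-grams feeding a linear model) are an illustration, not the commenter's production system:

    # Two deliberately engineered views of the text, combined into one
    # feature space; all data and feature choices are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import FeatureUnion, Pipeline

    texts = ["the service was great", "terrible, never again",
             "great product", "never buying this again"]
    labels = [1, 0, 1, 0]

    features = FeatureUnion([
        ("words", TfidfVectorizer(ngram_range=(1, 2))),   # word uni/bigrams
        ("chars", TfidfVectorizer(analyzer="char_wb",     # char n-grams,
                                  ngram_range=(3, 5))),   # robust to typos
    ])

    clf = Pipeline([("features", features), ("model", LogisticRegression())])
    clf.fit(texts, labels)
    print(clf.predict(["the product was terrible"]))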
Do you have an opinion on the fast.ai and deeplearning.ai courses?
I finally have some time to work through these, and since the deeplearning.ai series starts on December 18th, I'm wondering which one to dive into. I can't tell from the outside how they compare.
While I agree with others that more is better, if you can take only one course, I strongly recommend Andrew Ng's. While it is true that you don't need to be able to design and understand 'nets from scratch to use them, I agree with most of the brightest minds in DL that you won't get far without at least an intuition for the math behind them. Ng's course gives you exactly that: an intuition. It does an excellent job of ensuring participants understand the bare minimum needed for any kind of serious work. Learning ${your favorite framework}'s API will be a breeze if you already understand the "why".
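For a flavor of what "understanding the why" buys you, here's a toy example (mine, not from any of the courses): the gradient of a one-neuron squared loss computed by the chain rule and checked against a framework's autograd.

    # dL/dw for L = (w*x - y)^2, by hand and by autograd.
    import torch

    x = torch.tensor(2.0)
    y = torch.tensor(1.0)
    w = torch.tensor(0.3, requires_grad=True)

    loss = (w * x - y) ** 2  # forward pass
    loss.backward()          # autograd applies the chain rule for us

    # Chain rule by hand: dL/dw = 2 * (w*x - y) * x
    manual = 2 * (w.item() * x.item() - y.item()) * x.item()
    print(w.grad.item(), manual)  # both -1.6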
I would take both. deeplearning.ai focuses more on the math fundamentals, while fast.ai takes a more coding-oriented approach. fast.ai also has two classes, a beginner one and an advanced one. I personally prefer the fast.ai approach.
Add Udacity's DLF ND (Deep Learning Foundation Nanodegree) to the mix and do all three of them; they are each a bit different. Udacity's has the inventor of GANs doing lectures there, so it's pretty top-notch as well.