Hacker News
Show HN: Deep Learning in TensorFlow – The Roadmap for Study and Learning (github.com/astorfi)
146 points by irsina on Jan 26, 2019 | 20 comments



Some well-considered advice: drop TensorFlow and go with PyTorch. Spend your effort where it will make a difference: on deep learning, rather than on fighting the framework.

People just keep using TF because it was the first full-fledged Python framework for this, not because it has any technical merit anymore. In PyTorch you will make twice as much progress in half the time.


TF does have features that are still lacking in PyTorch: complex number support, for example, and better sparse matrix support. The new distributed API is much more functional than PyTorch's, allowing different levels of control rather than only a high-level one. There are surely more examples outside my use cases. So yes, PyTorch is the default, but if I need a feature it lacks, I switch back to TF.


Distributed training is not a concern for someone who's just learning the ropes, I think you will agree.


How about tf.keras though?

I'm just starting with TF and found Keras quite useful to begin with. TF itself feels like assembly language, and I'm sure it will evolve into something higher level, such as Keras, in the near future.

I have never used PyTorch though, will check it out. I would appreciate your thoughts on tf.keras vs pytorch.


Not flexible enough for research (at some point you still have to deal with the horrible TensorFlow API underneath), but good if you just want to implement or use something that already exists. Not good for models in which the graph changes dynamically. Actually "flexible" is probably not the right word. You can make it do what you want, but you will spend a lot longer and the result will likely be unreadable.

But a better question is: why bother with Keras at all, if PyTorch gives you a higher-performance, more flexible, more "Pythonic" solution? And yes, did I mention performance? PyTorch blows the socks off anything TF-based on most training and inference tasks.
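The "dynamic graph" point is the core of the flexibility argument, and it can be illustrated without either framework: a define-by-run model is just ordinary Python control flow deciding the computation per input. A minimal NumPy sketch (the layer shapes and the branching rule are invented for illustration, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two weight matrices standing in for model layers (shapes made up for the demo).
w_short = rng.standard_normal((4, 4))
w_long = rng.standard_normal((4, 4))

def forward(x):
    """Define-by-run: the 'graph' is whatever this call actually executes."""
    h = np.tanh(x @ w_short)
    # Ordinary Python control flow: extra layers only for some inputs.
    if np.abs(x).sum() > 2.0:
        for _ in range(3):
            h = np.tanh(h @ w_long)
    return h

short_path = forward(np.zeros(4))      # takes the single-layer branch
long_path = forward(np.ones(4) * 2.0)  # takes the deeper branch
print(short_path.shape, long_path.shape)
```

In static-graph TF 1.x you would have to bake the same branch into the graph up front with ops like tf.cond and tf.while_loop; in PyTorch (and in TF 2.0's eager mode) it looks like the plain Python above, just with tensors instead of arrays.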


Thanks. I admit what made me choose TF is the backing from Google, which somewhat guarantees the tool will stick around for a while, and the number of contributors to the library. Where do you see PyTorch in the near future?


I see it overtaking TF as the framework of choice for researchers and practitioners alike. I also see TF moving closer and closer API-wise to PyTorch's superior API. This is already starting in 2.0 with imperative mode, but due to the amount of legacy code already written for earlier versions, they have a massive brake on their efforts, something PyTorch (which got it more or less "right" from the beginning) does not. Finally, Google is working on Cloud TPU support for PyTorch as we speak.


Is there any way to use a PyTorch model on mobile and in a website without a server API? For me those are two good reasons to keep using TensorFlow.


https://caffe2.ai/docs/AI-Camera-demo-android.html?

Not something I've used myself, but supposedly yes.


Thanks. I always forget about Caffe2 when talking about PyTorch. I couldn't find anything for JavaScript, and mobile seems not as well supported as in TF, but they will surely improve.


Have you actually used TFLite? It's slow as molasses. Deploying on mobile is a bit of a shitshow across the board right now, from what I understand. Not all models are supported out of the box (especially with ONNX), and the ones that are supported aren't guaranteed to have acceptable performance with off-the-shelf frameworks. Documentation is very sparse as well, especially for the quantized stuff.


How do you use TF on mobile? Via Google Colab?


TF Lite or TensorFlow.js


Not disparaging the author, good on you for working on building this! I’m sure you learned a lot from just compiling everything together.

My question is, isn’t everything in this guide pretty much just a straight up copy of the actual TensorFlow docs/guides? What’s the difference?


It would be great to have some sort of roadmap depending on "what" the user is trying to do.

e.g. if I were a total noob and wanted to make an application using AI to detect whether people in a crowd were bored, I would have no idea where to start without hours of reading and researching the different fields and models, which ones work, and how they work. It would be neat if there were a tool that just asked you a few questions, then took that info and gave you a roadmap, e.g. "Feed Forward Neural Networks, Digit Classification, Image Classification w/ Inception, Object Detection with ResNet + Inception, Optimizing TensorFlow code for Servers, Deploying TensorFlow with Docker, Protecting Against Adversarial Input"

This way someone with a time sensitive project doesn't have to learn TF for 6 months before being able to accomplish what they wanted! Just something I think would be neat and also possible to add to TF World.


There are not so many types of problems for such a complex tool. Your problem usually fits into one category among classification, prediction, clustering, generation, or control. Then you have different domains: images, video, audio, text, etc. With a combination of problem type and domain you can certainly have a roadmap, but you will probably need to read papers to solve your problem if it is not something someone has done before.
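The "category × domain" idea above is essentially a lookup table. A toy sketch in Python (the categories and domains are the ones listed in the comment; the roadmap entries themselves are invented placeholders, not a real curriculum):

```python
# Toy roadmap lookup: (problem category, domain) -> suggested study path.
# The roadmap strings are invented for illustration only.
ROADMAPS = {
    ("classification", "images"): [
        "Feed-forward neural networks",
        "Convolutional networks",
        "Transfer learning with a pretrained model",
    ],
    ("generation", "text"): [
        "Word embeddings",
        "Recurrent networks / language models",
    ],
}

def roadmap(category: str, domain: str) -> list[str]:
    try:
        return ROADMAPS[(category, domain)]
    except KeyError:
        # Novel combination: as the comment says, you'll be reading papers.
        return ["No canned roadmap - survey the literature for this combination"]

print(roadmap("classification", "images"))
print(roadmap("control", "audio"))
```

A real version of the tool the parent comment imagines would need many more entries and follow-up questions, but the shape of the problem is this: a small, mostly-enumerable key space mapping to curated paths, with "read papers" as the fallback.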


What do you mean by “control”? Like control engineering? How is it used in control domain, can you explain a bit?


+1 for this one!


Thanks for the list.


This guide will be useful.



