TensorFlow Example: Fit a straight line (github.com/jostmey)
106 points by jostmey on Sept 9, 2016 | 16 comments



Can someone explain what on earth the type of 'error' is? It starts off as 0.0 but then gets +='d with expressions containing tf.Variables, so I assume it turns into some object that is basically a function? Why not write it as an actual function? Are they doing automatic differentiation for their gradient descent, or something like that?

(Grumble grumble something about dynamic and/or implicit type systems.)


OK, so looking into this a bit more: that's exactly what they're doing. In fact, TensorFlow is linked as an example on the Wikipedia page for automatic differentiation [https://en.wikipedia.org/wiki/Automatic_differentiation]

"TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them." [https://www.tensorflow.org/]

In retrospect the name "Tensor Flow" makes a lot more sense now - I had only ever seen it in the context of machine learning, and assumed it was pretty specific to that domain.
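
A minimal sketch of what that means in practice (illustrative values and a made-up loop, not the repo's actual code): after the += loop, 'error' is no longer a Python float but a node in the data flow graph (a Tensor), and TensorFlow can differentiate it symbolically.

    import tensorflow as tf

    m = tf.Variable(1.0)
    b = tf.Variable(0.0)

    error = 0.0
    for x, y in [(0.0, 1.0), (1.0, 3.0)]:
        error += (y - (m * x + b)) ** 2

    print(type(error))  # a TensorFlow Tensor (graph node), not a Python float
    grads = tf.gradients(error, [m, b])  # symbolic d(error)/dm and d(error)/db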


Odd mix of (I presume) "bare bones" and "rock bottom" in the title. Alternative might be "Rock Bones"?


I think it's a pun on the repo name, "NakedTensor"


Can someone ELI5 this example?


This example builds a really simple TensorFlow model that predicts the result of a function y = f(x), where f(x) is a straight line.

The example can predict y for any x without knowing the function itself, only by seeing some example points (points for which the model knows both x and y).

This is a "Hello World" in the area of AI, adapted for the TensorFlow library.
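
For the curious, a minimal sketch of what such a program can look like, using the 2016-era TensorFlow API; the data and variable names here are illustrative, not necessarily those in the repo:

    import tensorflow as tf

    # Example points drawn from the line y = 2*x + 1 (illustrative data)
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.0, 3.0, 5.0, 7.0]

    # Parameters of the line y = m*x + b, initialized arbitrarily
    m = tf.Variable(0.0)
    b = tf.Variable(0.0)

    # Squared error over the example points, built as a graph expression
    error = 0.0
    for x, y in zip(xs, ys):
        error += (y - (m * x + b)) ** 2

    # One gradient-descent step on the error; the gradients come from autodiff
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(error)

    init = tf.initialize_all_variables()
    with tf.Session() as session:
        session.run(init)
        for _ in range(1000):
            session.run(train_step)
        print(session.run([m, b]))  # should end up close to [2.0, 1.0]

The whole "model" is just the two variables m and b; running the training step repeatedly nudges them toward the true slope and intercept.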


I just don't see the point of all these simple introductions to some ML topic. If you want to do a good job it is hard. If you need a simple introduction and don't just dig in and persevere with the harder stuff, you probably aren't going to succeed at whatever ML task you aspire to.


I upvoted because I'm quite familiar with ML theory and like to read about the latest tweaks, etc., but I've never used a tool like TensorFlow. Papers, blog posts, etc. usually give diagrams and/or fully-fledged code snippets. Those are great for the high-level overview and the low-level nitty-gritty, respectively.

Yet minimal code examples like this let me see straight away what TensorFlow programs look like, without having to distinguish between fundamental aspects and snippet-specific ones.

My only remark would be to put a comment at the top, something like "Fit a straight line, of the form y = m*x + b".

I guessed that the "m" and "b" variables were referring to these common usages, but was wary of this assumption until line 14.

For all I know, "m" and "b" could be common parameter names for some TensorFlow config or something :)
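
Something along these lines, say (illustrative, not the repo's actual code):

    import tensorflow as tf

    # Fit a straight line, of the form y = m*x + b
    m = tf.Variable(0.0)  # slope
    b = tf.Variable(0.0)  # intercept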


Another suggestion: 'operation' would be better named 'train_step' -- it's more descriptive and eliminates the need for the comment at the end.

Also, it's best practice and common tensorflow idiom to do:

    init = tf.initialize_all_variables()
    ...
    with tf.Session() as session:
        session.run(init)
Not just for style, but because creating new graph nodes after creating the session requires tearing down and re-setting up some of the session's internal state. It's much faster to create the initialize_all_variables() op outside of the session.
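
Applied to an example like this one, the suggested structure would look roughly as follows; a sketch under those conventions, not the repo's actual code:

    import tensorflow as tf

    m = tf.Variable(0.0)
    b = tf.Variable(0.0)
    error = (3.0 - (2.0 * m + b)) ** 2  # toy loss over a single point

    # Build the whole graph, including the init op, before opening the session
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(error)
    init = tf.initialize_all_variables()

    with tf.Session() as session:
        session.run(init)  # no new graph nodes are created from here on
        for _ in range(1000):
            session.run(train_step)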


I actually ended up coding the hard stuff from scratch in matlab/julia. I wish I had seen a simple example of how to use Theano early on -- then I would have been on my way much sooner.

Learning to use TensorFlow/Theano/etc. isn't like learning another programming language or package. I think it is more like learning a new skill.


> I actually ended up coding the hard stuff from scratch in matlab/julia

I always find it easier to use and understand the higher-level tools (like TensorFlow) once I've written my own rudimentary implementation from scratch. Most of my work is done in Julia, and sometimes that code is fast enough that I don't have to use anything else.


Yes, this is why we teach math starting from Vector Calculus and Real Analysis, and start programming by having students implement the Fast Fourier Transform in Befunge. Only wussies need "introductions" and "tutorials". Just dig into the hard stuff and persevere, damnit!!


Befunge? That wouldn't vectorize very well...


Because people can be interested in how things work without having the time or inclination to study them in depth.


Reminds me of all the 'simplified monad' tutorials.


If you don't see the point of learning by example, then you should never use Stack Overflow ever again.

Btw, you know what else is learning by example? This example.



