Hacker News
Ask HN: How to Seriously Start with Machine Learning and AI
264 points by meddy on Jan 17, 2018 | hide | past | favorite | 71 comments
Hey,

For years I've been seeing tons of news about how machine learning did this or that, and today I've had enough of just reading how other people change the world with AI. I want to get into this area, scientifically understand how everything works, and make my own projects...

I'm a third-year Computer Science student who has just passed most of the required courses: object-oriented programming, a Python course, databases, mathematical statistics, algebra, etc. I really enjoy working with data, e.g. designing databases, programming backends, and so on...

Everything I know so far (Swift, Python, backend development) I have learned on my own, mostly by practicing and solving problems. Now I really want to start a serious journey into Machine Learning and AI.

But doing some small research made me realise that I don't just want to apply ready-made frameworks for, e.g., face recognition (maybe I should?). I would like to understand the topic really seriously and be able to explore this area... but here's the problem: I don't know how to start. I've got enthusiasm and some ideas for projects, but I still know almost nothing about how everything actually works.

When I was starting with programming, I read some books, watched online lectures, and bang, I started doing my own projects. How do I start in this more scientifically sophisticated area?

Are there any courses, books, or online lectures you can recommend as a starting point to understand how it all works? Unfortunately, my university doesn't offer any more interesting courses in this area... People here are fascinated by it, but nothing more concrete...

I'm still young, so why not spend time on something that seems really fascinating ;)




I'm probably the worst example of how to get into this field of work, but since I do actually work on developing and applying ML algorithms every day, I think my case might be relevant.

Firstly, my background is not in mathematics or computer science whatsoever; I'm a classically trained botanist who came at programming, computer science, and ML from a perspective of "I've got questions I want to ask and techniques I want to apply that I'm currently underprepared to answer."

Working as a technician for the USDA, I learned programming (R and Python) primarily because I needed a better way to deal with large data sets than Excel (which, prior to 5 years ago, was all I used). At some point I put my foot down and decided I would go no further until I learned to manage the data I was collecting programmatically. The data I was collecting were UAV imagery and field and spectral reference data, specifically regarding the distribution of invasive plant species in cropping systems. The central thrust of the project was to automatically detect and delineate weed species in cropping systems from low-altitude UAV collects. This eventually folded into doing a master's degree continuing to develop this project. That folded into additional projects applying ML methods to feature discrimination in a wide range of data types. Currently I work for a geospatial company, doing vegetative classification in a wide range of environments with some incredibly interesting data (sometimes).

I think you've got the issue a bit cart-before-horse. In a sense I see you as having a solution, but no problem to apply it to. The methods are ALL there, and there are plenty of other posts in this thread addressing where to learn the principles of ML. What this doesn't offer you is a reason why you should care about any of it. My recommendation would be to find something of personal interest to you in which ML may play a role.

Without a good reason to apply the techniques that everyone else here is outlining, I think it would be too challenging to keep up the level of interest and energy required to learn how to apply these concepts. Watching lectures, reading articles, and doing coursework are all very important, but they shouldn't be thought of as a replacement for having personally meaningful work to do. Meaningful work will do more to drive your interests than anything.


>I think you've got the issue a bit cart-before-horse. In a sense I see you as having a solution, but no problem to apply it to. The methods are ALL there, and there are plenty of other posts in this thread addressing where to learn the principles of ML. What this doesn't offer you is a reason why you should care about any of it. My recommendation would be to find something of personal interest to you in which ML may play a role.

This is golden advice and the only legitimate way to stick with something for the long term. OP actually needs a niche, an industry, a cause to care about... then the instruments will come naturally. The other way round is flooded already.


This reads as an amazing journey. Kudos for your pursuit of a better process.

It seems to me that so many (online) courses jump straight to applying tf/pytorch to a predefined dataset, whereas most of the work is in preparing the data. I have a personal project where I'd like to try classifying images, and I haven't had much luck finding resources on building my own training dataset.

Can you recommend any resources on assembling and collating your own dataset?


It should be noted that I deal primarily with geospatial image analysis, so there is a not insignificant amount of bias with regard to what data I'm interested in. I like using the USDA NAIP API for imagery, since I can pull imagery via GDAL directly into Python or R. I rely heavily on freely available public utility data sets (parcel-level utility data). Beyond that, and other than as a starting point, your training data is always going to be something you've invested in heavily. Good training data is 100% the game. No modeling exercise is going to go well on poor-quality training data. Currently, as a personal project, I'm trying to develop a platform for building training data for geospatial modeling. If you're interested, hit me up in a PM and I can explain it in more detail (after work).


PM OTW...


Not sure if there are PM's in HN? PatientPolly@forward.cat otherwise.


I'll second that. Without solving a real problem, it's hard to learn something. You can learn the basics, but then it's about applying them to a real problem and finding out what works and what doesn't. It's the same with a lot of CS graduates: they know a lot of fancy algorithms but have no idea when to use them and when not to.


IMO, to start with AI, it's best to begin as simply as possible, so that you understand the algorithms behind things that are accessible to tweak. If you are a gamer, think about the various AI routines that control pathfinding, resource collection, and strategy for computer players in strategy games. Concepts like BFS and DFS build into Minimax strategies and other "simple" approaches to searching decision spaces/states for optimal approaches to problem solving. This gets into the development of heuristics for approximating outcomes. All of this is a fantastic baseline for understanding more complex AI and machine learning algorithms. While it's easy to jump into ML or AI at a higher level, without a baseline understanding of the search algorithms and probability landscapes that are the underpinnings of advanced work in these fields, you'll probably never feel like you understand what is going on.
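To make the Minimax idea concrete, here's a tiny sketch of my own (not from any of the courses mentioned): minimax on a toy game of Nim, where players alternately take 1-3 stones and whoever takes the last stone wins.

```python
# Minimax on a tiny game of Nim: players alternately take 1-3 stones,
# and the player who takes the last stone wins. Illustrative toy only.

def minimax(stones, maximizing):
    """Return the game value: +1 if the maximizer wins, -1 if they lose."""
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    values = []
    for take in (1, 2, 3):
        if take <= stones:
            values.append(minimax(stones - take, not maximizing))
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Pick the move that maximizes the game value for the side to move."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, False))

# With 5 stones, the winning move is to take 1, leaving the opponent
# a multiple of 4 (a losing position in this game).
print(best_move(5))  # → 1
```

The exhaustive search here is exactly the "searching decision spaces" the comment describes; heuristics enter once the game tree is too big to search fully.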

You can find some fun tutorials here: https://www.redblobgames.com

The CS 188 "Intro to AI" class at Berkeley is excellent: http://ai.berkeley.edu/home.html

It used to be on edx.org but I think a lawsuit about accessibility required them to remove it? Perhaps you can find it in the edX archives.

Edit: looks like you can find the lecture videos and other resources on the Berkeley site: http://ai.berkeley.edu/lecture_videos.html


By far the best response. After this, one could always try the deeplearning.ai or fast.ai courses, depending on whether they prefer a top-down or bottom-up approach. Or why not do them together?


I don't know much about AI, but for ML specifically, Elements of Statistical Learning is fantastic. I find its explanations a lot easier to understand than other resources'. I recommend you skim through it to get a taste. Additionally, if you prefer lectures, ETHZ has recordings of their ML class [1].

The best way to learn the details is of course to read the original papers. This is especially true for following along with the latest developments in deep learning.

[1] https://www.ethz.ch/content/vp/en/lectures/d-infk/2017/autum...


For someone who might want a higher-level primer, Introduction to Statistical Learning is also great.

http://www-bcf.usc.edu/~gareth/ISL/


> Additionally, if you prefer lectures, ETHZ has recordings of their ML class [1].

I took that class back in 2015. I found that the lectures were sometimes hard to follow unless you already knew the concepts (which creates a bit of a catch-22). For me, the most valuable moments were when Prof. Buhmann got sidetracked by some anecdote. I would absolutely recommend it, but maybe not as a starting point.


I'm working my way through Elements now. Do you by any chance know of any lectures based specifically on it? Or solutions to the exercises? I have found it hard to find (good) solutions.


The Machine Learning course offered by Prof. Ravindran at the Indian Institute of Technology (IIT Madras) uses ESL. The course is free to enroll in, and you will have weekly assignments. https://onlinecourses.nptel.ac.in/noc18_cs26/preview


Thanks!!


I haven't looked for any lectures because I learn best from reading books. So, I don't know, sorry. I'm interested in a solution set too.


Does anyone know how Elements of Statistical Learning compares to Introduction to Statistical Learning, which is from the same authors?


ISLR is a simplified version of ESL.


I'm surprised that no one has yet mentioned Andrew Ng's Machine Learning course on Coursera and, to go a bit deeper, his Deep Learning Specialization on Coursera as well. Along with the programming assignments, it's a solid way to get your feet wet. And I definitely second the suggestion of the fast.ai courses as well.


As somebody who started learning AI/ML and was asking this exact question three months ago: I found the Andrew Ng deeplearning.ai Coursera course an amazing starting point.

It was high-level enough, and got me to understand enough, to reach a point where I could start trying to build a side project, without exposing me to the deeper math behind neural nets.

It was a great starting point without being overwhelming. Now I feel that I have the option to either go deeper if I need to, or go wider.

I find Andrew Ng to be an amazing teacher - explains things simply, clearly, and in a way that I find super easy to understand.


On that note, do any of these courses take a deep dive into the deeper mathematics of neural nets?


Yes, see: https://www.coursera.org/learn/machine-learning

It starts out really basic but gives a thorough grounding in the maths and the intuition behind it.


I think fast.ai is the more "programmer-y" do-first-learn-as-you-need approach, and Andrew Ng's is the more "math-y" learn-basics-work-your-way-up approach, and they can work well together too.


This is true, but it's more than that: the fast.ai professors (Jeremy + Rachel) have studied how people learn, and how teachers can structure courses to maximize understanding.

They believe (and have research backing them up) that the way we teach math (basic and rote concepts, building until you can understand something complex) is suboptimal. They dive into the code and get stuff done, then later bubble back up to the concepts.

For me it was bewildering at first, but if you can trust your instructor, you can trust they won't leave you stranded. (It does also require the type of student who does a lot of study on their own!)


I took this course in the Fall of 2011, before Coursera existed, when it was introduced to the public as "ML Class" (sponsored by Stanford).

I agree - it's a great introduction, and I learned a ton from it (and it answered a ton of questions I had until then).


I took this course. Andrew is a great teacher. However, I wish he worked on his public speaking a bit. He has certain speech patterns, like starting sentences with "it turns out..." and after a while it becomes extremely irritating, at least to me.

Fantastic course though.


Lol, apparently we aren’t allowed to criticize people HN likes.


In October, I quit my job to live on savings and work on AI/ML. I am not interested in developing novel AI approaches; my objective is to learn the application of machine learning to solve narrow problems.

In light of that, I believe the technology to have progressed to where one can learn how to use existing libraries to solve specific problems. Lots of businesses have specific problems, and will pay me to solve them.

The Fast.AI course (part 1 v2) is the best way to get started, IMO. The fastai library wraps up a lot of boilerplate, and gives you a simple recipe w/ conceptual understandings to achieve state of the art results (top 20% on just about any kaggle competition) in just a couple months of intensive study.


I've been doing Coursera and got through Udacity, but both are just too philosophical. (I'll probably cruise through the deep learning class too.)

Does fast.ai get into the grit of actually doing, or is it more philosophical?


Fast.ai is 100% focused on DOING stuff. Its guiding philosophy is that teaching the math before teaching you how to actually do some machine learning is backwards. The lectures start with a dozen lines of code that learn to discriminate between cats and dogs, then follow up by slowly removing layers of abstraction. It's very cool.

That said, the lectures are pretty rough around the edges compared to something like a Coursera class. And his goal is to get you to play around with things on your own, using his material as a jumping-off point. It works well for some people but not as much for others.


Very focused on getting shit done, while trying to help you build appropriate mental models of what's going on. Towards the end of the course, there's some more theory.

Multiple times, the prof said things like "the theory says this shouldn't work, but it does, so we use it" (thinking of his perspective on the practical reality surrounding the "curse of dimensionality", i.e. that it's not a curse).


They teach you how to efficiently use (two kinds of) hammers and which nails to knock in:

1. ConvNet (for images and structured data)

2. RNN (for text and sequence data).

They don't teach you more advanced stuff, like how to creatively misuse the hammers on different nails (example: tweaking a ConvNet into a causal convolution to handle sequence data).


Pattern Recognition and Machine Learning by Christopher Bishop.

Book: http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%...

Notes: It's very, very math-heavy, but if you really want to grasp the concepts and the ideas around each topic, this is one of the ways to go.

Online Lectures: https://www.youtube.com/watch?v=mbyG85GZ0PI&list=PLD63A284B7...

I like how he explains stuff and adds some context behind the math of each topic.

Alternative: https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A

A pretty good YouTube channel that keeps up with modern machine learning; he has all of his video tutorial demos on GitHub.

Hope this helps, fellow machine learners :)


Bishop's book is horrible. I've got a PhD in machine learning and yet it makes me feel stupid.

I'd go with David Barber's book every time.


I started with Andrew Ng's ML courses, which take a bottom-up approach beginning with the math. After finishing the deeplearning.ai track, I started rounding out my skillset with some data science and R programming classes, so I could be more comfortable working with unfamiliar data.

The fast.ai courses take a more top-down approach to ML, and there are plenty of good reasons for that approach. You'll start getting practice with libraries like TensorFlow right away. However, if you have a fairly strong math background and linear algebra doesn't give you nightmares, I highly recommend the Andrew Ng courses. A deep (ha!) understanding of what's going on "under the hood" in ML will help your debugging, inform your strategy, and make your code better in the long run.


I wouldn't dive in to a full machine learning course as the first step.

1. Watch short tutorials about TensorFlow on Youtube

2. Look for lectures about specific topics that sound interesting.

3. Read NIPS papers that sound interesting

4. Check out the Deep Learning textbook, but maybe don't read the whole thing ( http://www.deeplearningbook.org/ )

5. By this point you should have a very rough idea of the current and past state of machine learning, without having had to spend any money or put in any exhaustive mental effort. If it still sounds interesting and you are motivated, you can try a full online course or buy some paid books.


These were the 2 books I had when I took AI classes in college (~1998), pre-deep-learning, CNNs, etc.:

http://aima.cs.berkeley.edu

https://www.allbookstores.com/book/compare/9780201533774

They might be good for some foundational stuff. I felt they were pretty readable.

I remember having to do backprop in Excel as one assignment.


Apparently Kaggle just opened a training/education branch.

https://www.kaggle.com/learn/overview

EDIT: I also second the fast.ai suggestion(s) as well


This video got me started, it runs through building a simple neural net in C++ from scratch: https://vimeo.com/19569529

After that, I extended it to solve the handwritten-digits problem. Tweaking it to get past about 80% accuracy taught me a lot of intuition about how the learning rate and other parameters interact.
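In the same spirit, here's a minimal from-scratch net in Python/numpy (my own toy sketch, not the video's C++ code): a tiny network trained on XOR with plain gradient descent, where the learning rate is exactly the kind of knob worth tweaking.

```python
import numpy as np

# A tiny 2-8-1 sigmoid network trained on XOR with full-batch gradient
# descent. Toy sketch only; hyperparameters (lr, epochs) are the knobs
# whose interaction the comment above is talking about.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: try changing this and watching convergence
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule on the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(0, keepdims=True)

print(np.round(out).ravel())  # ideally [0. 1. 1. 0.]
```

Set `lr` too high and the loss oscillates; too low and it crawls; that trade-off is most of the intuition the parent comment is describing.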

After that going through Andrew Ng's machine learning course will fill in some of the underlying math.


This doesn't help answer your question (I don't think), but my observation...

I have the same question and I got my degree 12 years ago.

I remember my AI course in college consisted of implementing a neural net in Lisp and memorizing a ton of dense text.

It is really easy to take a TensorFlow model someone already wrote and tweak the parameters to make it work for your use case. I think that code reuse and open source are the largest advancements in AI in the last 10 years.

So a lot of companies can use AI in their products without even really knowing how it works.

Now, once you start training your own models, the hardest part to me is the vocabulary. So many tutorials and classes say "use this algorithm" or "take this code and modify it"... I want to know why I chose that algorithm, what the other choices were, and how to write that code when I don't have an instructor doing the boilerplate for me. It is very frustrating.

Hopefully some of the answers here will answer some of those questions.

As another note, a lot of advanced AI uses calculus; algebra is not enough. I don't know what college you went to or what accreditation it has, but by third year you should at least have taken Calculus II... maybe even Differential Equations. If you haven't, don't worry: 99% of programming jobs won't use them. But parts of AI will.


I'd say that to "seriously" start anything, you can take a first step by building something, whatever it is. Even if you just follow a tutorial, it gets your feet wet enough to keep going, I think.


I would highly recommend starting with course.fast.ai. Really helped me.


+1. I started with a bunch of theory courses before jumping into the fast.ai course, but I wish it had been available at the time for me to start with. The theory is a lot easier to internalize when you're working on real networks instead of toys.


I would recommend working on as many Kaggle competitions as you can. Try your best to learn to manipulate data quickly and efficiently (use Python). Choosing the correct algorithm is typically trivial and a solved problem for most typical business issues (classification of text, imagery, or audio data). At first you'll have to look at other people's code and copy/paste/edit/learn/iterate. Also, don't underestimate the complexity of creating good training, validation, and (potentially) test sets.

95% of your time will be spent massaging data. People say it so often it may sound ridiculous, but I promise you, it's not. This skill is also readily transferable to other domains.

I would start with classifying text using sklearn and Facebook's fastText.
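As a minimal sketch of that first step with sklearn (the tiny corpus and labels below are made up purely for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus: label 1 = about ML, label 0 = about cooking.
texts = [
    "train a neural network on labeled data",
    "gradient descent minimizes the loss function",
    "chop the onions and simmer the sauce",
    "bake the bread at a high temperature",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["gradient descent on the network"]))  # → [1]
print(model.predict(["simmer the sauce with onions"]))     # → [0]
```

Real work is mostly in gathering and cleaning `texts` and `labels`, which is exactly the 95% data-massaging point above.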

Then try the dogs/cats image classification challenge and get familiar with Keras and its utilities. I created a few hours of content around recognizing Bill Gates or Jeff Bezos, then trying to recognize 2 types of dog breeds. I outline the challenges of creating "good" training and validation sets.

https://www.youtube.com/watch?v=O3hffX-jC98&list=PLImyDqSBQb...

For every hour you spend looking at equations, you miss an hour expanding your skill set to manipulate data and get it into a format that can be fed into well-understood and well-maintained algorithms. Once you feel like you can produce results, go back and think about the underlying mechanics. My 2 cents from experience.


"Be fearful when others are greedy, and greedy when others are fearful" - Warren Buffett

I don't want to try to dissuade you in particular, but I think more young people should apply this principle to the question of what field to enter. I've seen dozens of "How do I get into AI/ML?" posts in the last couple of years.


If that were true, young people shouldn't get into tech at all, let alone AI, since it's overhyped at the moment.


I don't quite understand; can you elaborate on what you mean by that?


I think you should pick a project that seems reasonable and then start working on it, learning what you need to along the way.


1. Get yourself grounded in theory

Learning from Data by Yaser Abu-Mostafa, and its companion book. Very theory-heavy, but worth the trouble (this is a Caltech course).

Do all the assignments in the course with Python/NumPy/scikit-learn.

2. Choose your niche

You need to NARROW down your interests. You can start broad just to learn enough basics, but gradually narrow in on making/hacking the stuff that is most fun for you. Test the waters in computer vision, speech, NLP, and games/robotics (reinforcement learning), or other less popular fields.

E.g., start with computer vision -> basic ConvNet image classification -> encoder-decoder architectures -> end-to-end ConvNet monocular RGB-to-depth image generator.

Read papers that have published code on GitHub; this lets you reproduce results and understand how things work, so that after a while you can hack together your own models and such.


Here's the shortest introduction, created by myself ;) http://lausbert.com/2018/01/14/the-shortest-introduction-to-...


For now, I'd recommend you start the two courses on: http://fast.ai/

They will guide you through machine learning and deep learning basics without being overwhelming with math. Good luck!


> I'm a third-year Computer Science student

Make sure you take, as soon as possible, all the AI & ML courses your department offers. If prerequisites are holding you back, try to negotiate or audit. (Some of my key insights date back to graduate-level AI classes they humored me by letting me sit in on when I was a freshman.)

MOOCs are great for people who don't have that option, but you have the opportunity to ask questions, get course credit for your work, bolster your GPA, show it on your transcript, network with classmates, etc.


I'm starting the fast.ai courses as a recent stats grad looking to expand my knowledge. I've heard many good testimonials. Besides that, I think Andrew Ng's Coursera offerings (the intro to ML and the newer NN specialization) are great first steps. Personally, I have a hard time learning from videos, so I refer to my copy of Pattern Recognition and Machine Learning by Bishop.

If you want a linear algebra text, I enjoy Strang's Introduction to Linear Algebra



This is kind of a master's degree course I created for myself to gain knowledge of Machine Learning from the bottom up.

First, you need a strong mathematical base. Otherwise, you can copy-paste an algorithm or use an API, but you will not get any idea of what is happening inside. The following concepts are essential:

1) Linear Algebra (MIT https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra... )

2) Probability (Harvard https://www.youtube.com/watch?v=KbB0FjPg0mw )

Next, get a basic grasp of machine learning and a good intuition for the core concepts:

1) Andrew Ng coursera course (https://www.coursera.org/learn/machine-learning)

2) Tom Mitchell book (https://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/00...)

Both the above course and book are super easy to follow. You will get a good grasp of the basic concepts, but they lack depth. Now you should move on to more intense books and courses.

You can get more in-depth knowledge of machine learning from the following sources:

1) Nando de Freitas's machine learning course ( https://www.youtube.com/watch?v=w2OtwL5T1ow )

2) Bishop's book ( https://www.amazon.in/Pattern-Recognition-Learning-Informati... )

Bishop's book in particular is really deep and covers almost all the basic concepts.

Now for recent advances in deep learning, I will suggest two brilliant courses from Stanford:

1) Vision ( https://www.youtube.com/watch?v=NfnWJUyUJYU )

2) NLP ( https://www.youtube.com/watch?v=OQQ-W_63UgQ)

The vision course by Karpathy can be a very good introduction to deep learning. Also, the mother book of deep learning ( http://www.deeplearningbook.org/ ) is good.


Hey neel8986, I know linear algebra is very important for large-scale calculations. But how much calculus and statistics do you need for ML? Also, if you could touch on which applications of calculus and statistics are used in ML, that would be awesome :]. THANKS!


Regarding calculus, I think basic multivariable calculus is enough to start. If you need a refresher, you can look at (https://ocw.mit.edu/courses/mathematics/18-02sc-multivariabl...).

Also, the basic idea of the chain rule is important for deep learning.
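To make the chain-rule point concrete, here's a quick numpy check of my own (a toy example, not from the courses above) comparing an analytic gradient, derived via the chain rule, against a finite-difference estimate:

```python
import numpy as np

# f(w) = sigmoid(w * x + b). By the chain rule, with z = w * x + b:
# df/dw = sigmoid'(z) * x, where sigmoid'(z) = s(z) * (1 - s(z)).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, b, w = 2.0, -1.0, 0.5

z = w * x + b
s = sigmoid(z)
analytic = s * (1 - s) * x  # chain rule

# Central finite difference: (f(w + eps) - f(w - eps)) / (2 * eps)
eps = 1e-6
numeric = (sigmoid((w + eps) * x + b) - sigmoid((w - eps) * x + b)) / (2 * eps)

print(abs(analytic - numeric) < 1e-8)  # → True
```

Backpropagation is this same calculation applied layer by layer, which is why the chain rule matters so much for deep learning.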

Regarding statistics, I already mentioned the probability course, which covers most of the important statistics concepts you need. Also, some idea of hypothesis testing can be helpful.


Right on, thanks neel8986.


datasciencemasters.org was recommended by someone working on the TensorFlow team at Google. It is a sophisticated collection of courses and books (Stanford curriculum, etc.).

You'll find it's a superset of what you need; you can just scroll until you find what you need to learn. (It's roughly sorted, basics first.)


Make sure you load up on mathematics, both pure and applied. Try to find work where you'll either be immersed in statistical models or have the freedom to experiment with them.

Capital markets such as equities trading will probably be the best place to look for work experience.


> I want to join into this area and scientificly understand how it everything works

That's kind of funny, given that one of ML's biggest criticisms is that not even the field's foremost experts truly understand how it works.


For deep learning: Surprised no one's mentioned Nando de Freitas's YouTube lectures, great series! Hugo Larochelle's YouTube course is great as well!


Do you recommend using rented GPU instances, or building some kind of Nvidia rig?


I'd personally say it depends on your budget, what you want to do, and the amount of data you plan to process. It may also depend on how comfortable you are setting up your own rig, if you go that route.

For instance, last year I completed Udacity's "Self-Driving Car Engineer Nanodegree" course. Large parts of the course needed us to train neural nets (tensorflow and similar) on data we either generated or downloaded. We were provided the option to use AWS instances for training (free credits), but I opted to use my local box.

It took me about a day or so of playing around before I finally got everything working properly (CUDA, etc.) with my Nvidia 750 Ti GPU. This is a very low-end GPU, but it honestly performed quite well for the course. It could only handle a limited amount of data, and training cycles sometimes took a while to turn around (depending on the task), but it ran the resulting models quite easily (while still handling the 3D rendering tasks of the vehicle simulator).

For learning purposes, it will all depend. If you already have a machine with a decent GPU, CPU, and RAM (say, something equivalent to the 750 or better, 4 cores or more, and 8 GB of RAM), it might make sense to do things locally, if you think your skills are up to the configuration challenges. (I got my system working, but I ended up breaking Ubuntu's update system, because I'm on 14.04 LTS and had to hand-install many things to fix dependencies for Python, TensorFlow, C/C++, etc. in order to complete the course.)

However, if you are planning on processing a huge amount of data for a large model, but aren't planning on doing this constantly, then an AWS instance might be a better option, as a custom rig for this kind of thing would, I'd imagine, be a bear to spec out and configure, not to mention the cost.

I'm certainly not an expert on all of this, though...ymmv.


Start with Python. Then learn scikit-learn.
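A minimal sketch of that path, using scikit-learn's built-in iris dataset and logistic regression as a placeholder model:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a toy dataset, hold out a test set, fit, and score.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print(round(clf.score(X_test, y_test), 2))  # typically well above 0.9
```

The fit/predict/score pattern is the same across nearly all sklearn estimators, which is what makes it such a friendly second step after Python itself.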


Fast.ai is the only thing I recommend.



I'm very adamant that if you really want to be an AI researcher, it starts with the mathematics. Multivariable calculus, linear algebra, discrete math, probability, and statistics are key. The classic books and courses others have suggested are excellent starting points.

I'll also say something here that isn't established, so I may take some heat. I believe there are two main paths in AI that will eventually converge to general AI: symbolic and sub-symbolic reasoning. If you go the path of symbolic reasoning, studying functional programming, theory of computation, type theory, and natural language processing/compilers are in your future. If you go the path of sub-symbolic reasoning, you will be closer to optimization methods, neural networks, etc. It really depends on what you want to do. E.g., computer vision is all about sub-symbolic reasoning, while natural language processing is heavily about symbolic reasoning. Of course, advanced applications mix both! If you want to go after general AI, you've got to figure out how to tackle both forms of reasoning.

In the end, if you are serious about being an AI researcher, you will have to be a great computer scientist and mathematician. This is why it seems so difficult to get into: "Do I focus on messing with libraries and algorithms, or on the mathematical theory? And if I do both, how!?"

It's not easy, but just start and keep going. Others have provided great advice. One of my greatest joys in life has been coming to understand the marriage of computer science and mathematics under the banner of AI. It is really something exciting worth living for.

[Never give up!](https://www.youtube.com/watch?v=KxGRhd_iWuE)

Also, I recently gave a talk on getting into AI and my view on the state of things. It also has a resources section that may be of interest. Slides: https://docs.google.com/presentation/d/1pDZLkFTFjuZzM8lIKkuC...


I completely agree with you on knowing the math. I'm kind of emphatic about it too when doling out advice. Without it, getting good results just becomes a blind exercise in parameter tweaking. With it, you can really troubleshoot your models and know which model(s) to try next; the path to optimality becomes somewhat meaningful and reasonable.


One day, 40 years after I graduated, I "got" linear algebra.

The eigenvectors of the inverse of the covariance matrix...

Squeeze and twist it into a hypersphere!

Mahalanobis distance is a generalized z-score! Oh!
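That last observation is easy to check numerically. A quick numpy sketch of my own: in one dimension, the Mahalanobis distance reduces to the absolute z-score.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(10.0, 3.0, size=1000)  # a 1-D sample
x = 14.0

# Mahalanobis distance: sqrt((x - mu)^T Sigma^{-1} (x - mu)).
# In 1-D the covariance "matrix" is just the sample variance.
mu = data.mean()
cov = np.cov(data)  # scalar variance (ddof=1 by default)
mahalanobis = np.sqrt((x - mu) * (1.0 / cov) * (x - mu))

# z-score: (x - mu) / sigma, with the matching ddof.
z = abs(x - mu) / data.std(ddof=1)

print(np.isclose(mahalanobis, z))  # → True
```

In higher dimensions, the inverse covariance matrix is what "squeezes and twists" the data cloud into a hypersphere, exactly as the comment puts it.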


That's the ticket. It's all about building that intuition so you can be creative with mathematics. Linear algebra in particular is an area that continues to teach me more over the years.



