
I did both my BSCS and MSCS at Georgia Tech. While I have many complaints about the school, the quality of the classes is not one of them, for either the undergrad or grad programs.

That said, with a couple of notable exceptions, the graduate classes are there for PhD students as first- and second-year background material, so they have some starting points for their research. This naturally leads to a format where the semester can effectively be described as a long reading list of papers, with lectures to spur discussion on the content of each paper. I was planning on pursuing a PhD when I started my MS, so this format worked quite well for me at the time. In the years since, the grounding from those classes has given me starting points for deep dives into problems I've encountered at work[0].

It's interesting that you brought up machine learning. Charles Isbell's Intro ML class was a significant exception to the pattern I described above. In addition to high-quality, pre-prepared lectures peppered with entertaining anecdotes, it had high-quality projects that used practical tooling. It was also probably the highlight of my graduate career[1].

[0]: In particular, the material covered in my graduate systems classes has been invaluable for not reinventing the wheel for the thousandth time. The material from the couple compilers classes I took on a whim has been a huge boon when talking about software correctness. I work on the hypervisor underneath GCE. Correctness is near and dear to my heart, but performance is right there with it :)

[1]: For undergrad that dubious honor has to go to Olin Shivers, not only because of his eclectic teaching style, but also because his class completely altered the way I think about problems in computer science. In particular, my mindset shifted to one of models of computation and decomposition of problems into subproblems for which the simplest model could apply. I have an example I'd like to write up, but it's a bit long for a footnote.




Hi! I'm the director of Georgia Tech's MS in Analytics program (both on-campus and online).

GT's MS in Analytics degree is actually designed specifically for people who are going to go out and work in the analytics field -- it's not a pre-PhD degree, and our courses are targeted primarily at people who want to learn and apply analytics. We have an industry advisory board that helps us target course and program content, and we're constantly working to make sure our coursework is focused to the right cohort. We even have a required applied analytics practicum (both for on-campus and online students) where our students work on analytics projects for a wide range of companies and organizations.

Perhaps other degrees are different, but the MS Analytics is a very practice-focused degree.


I may be an educational purist, but I cringe when I hear universities boast about the "practicality" of the degrees they offer. You get a degree to prove you can learn. The courses should be heavy on theory and concepts. If you teach these well enough, your students should ideally be able to pick up whatever FOTM development stack or tool is out there and roll with it. I wish we could reverse this trend, but it seems it's just too much good PR to say "hey everyone! Come to our school and you're guaranteed to get a job!"


Certainly with the rise of sham/for-profit universities, sales pitches promoting 'practicality' now raise red flags, and deservedly so. But if the role of 'higher education' is to be a practical one (as engineering programs have always been), it only makes sense for schools to ask industry what it needs and then serve those ends, first and foremost.

In general, while theory has great value, it's more as a stepping stone to higher study than as an end unto itself. Few computing pros submit proofs among their deliverables. And devising the theta bound on a function or resolving the terms of a CSP simply don't deliver much value when working outside PhD-level R&D labs and writing peer-reviewed papers.

I believe there's a great deal of value in applied, non-PhD-track academic programs like GT's online discount offerings, especially in serving professionals and employers. I also believe it's high time universities clued in to the unmet need most of us post-academics have: help continuously re-educating ourselves as we progress through our careers. Few of us pros can return to campus, even part-time. Distance learning meets a crying need, and when it's done right and priced right (as I believe GT does), I have nothing but kudos to offer. I say, more power to GT's authors, curators, and administrators who made this possible. To all who make this greatly empowering service possible: thanks, and keep up the good work.


I disagree. The role of a university CS degree is to bridge the gap between high school student and software development professional. That's going to include some theory but a lot of hands-on experience with modern development tools. It should include a healthy amount of group work and tons of coding projects.

If you want to play around with theoretical computer science, get your PhD. College educations are too expensive not to be eminently practical.


I disagree with this sentiment based on my own experience. I did great in my BS CS at a highly ranked program, but was woefully underprepared for industry and, quite frankly, a bad software engineer. Graduates from traditional programs often leave with next to no experience with testing, version control, team structure/process, newer languages, frameworks/third-party packages, etc., and my experience in industry is that it's a roll of the dice whether your company or team is interested in teaching you or waiting for you to learn. The only people I know who graduated with those skills either had amazing mentors or were natural hackers in their spare time. If I could redesign my education, it would be 2-3 years of theory and then 1-2 years of applied liberal arts education before starting an actual career.


> Graduates from traditional programs often leave with next to no experience with testing, version control, team structure/process, newer languages, frameworks/third-party packages, etc., and my experience in industry is that it's a roll of the dice whether your company or team is interested in teaching you or waiting for you to learn.

It's a waste of time to teach industry tools at a university. It's much more valuable to be taught fundamentals. Know your fundamentals well and any new tech will be much easier to learn. It's long-term thinking - put in the investment to make sure you can change skillsets in the future.

All the things you mentioned tend to be ephemeral and change a lot within a few years. Look at the git monoculture that's sprung up in the last 5 years for example - 10 years ago it might have been reasonable to teach SVN.


And if you learned SVN, you would have had a solid base for understanding Git. Would you expect students to learn source code control in the abstract, or not at all?

You have to do programming assignments anyway. Why wouldn't you require students to learn and use the latest source code control tools while they're doing their development?

Teach students to write tests, use source code control, utilize continuous integration, etc.

Although the specific tools, languages, and approaches will evolve in the coming years - none of the above are going away soon.
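To make the first habit in that list concrete, here's roughly what a unit test alongside a typical intro assignment might look like (the assignment and the test are my own illustration, written pytest-style with plain asserts):

```python
def merge_sorted(a, b):
    """Merge two already-sorted lists -- a typical intro assignment."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def test_merge_sorted():
    # Happy path plus the edge cases students usually forget.
    assert merge_sorted([1, 3], [2, 4]) == [1, 2, 3, 4]
    assert merge_sorted([], [1]) == [1]
    assert merge_sorted([], []) == []

test_merge_sorted()
```

The point isn't the sorting; it's that writing the three asserts takes minutes and catches the empty-list bugs a print-statement check usually misses.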


Git is a very complicated piece of software. It's not intuitive, it has a famously poor CLI, and it takes time to learn.

Time spent making undergraduates use git is time that could be spent teaching them long-term, fundamental skills.


vi isn't very intuitive either, but there's no better way to learn something difficult than to learn it when you're young. I've been using vi professionally for 30 years thanks to my early GT classes. It's probably the single most valuable skill I learned there that I still use today.


I'm a vi(m) user, but I have to say - it's not a fundamental part of computing at all. It's just a very popular tool. A lot of people don't know how to use it and manage to make amazing things.

> there's no better way to learn something difficult than to learn it when you're young

Hmm, define 'young'. I'm in my late 20s and I find it easier to learn new things more than ever - including things I failed to learn in my teens and early 20s. Maybe I'm just a late bloomer, and it took me a while to "learn how to learn". But maybe I'm still young in the eyes of someone who has been using vi for 30 years (:


If you know graph theory, then you know git; all that remains is reading the man page for specific commands. An intro to graphs/graph theory is generally in the curriculum at every university CS department.

Testing etc. is usually covered in intro classes (assert libraries), or industry-style testing like JUnit is covered by a software engineering elective, typically taught in Java.


> If you know graph theory, then you know git; all that remains is reading the man page for specific commands.

Just because one of git's key abstractions is based on a kind of graph, I don't think it follows that knowing graph theory means you know git. I mean, LISP is based on a graph structure as well, but plenty of people find that confusing.
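To be fair to both sides of this argument, the abstraction itself really is small: history as a DAG of content-addressed commits. A toy sketch fits in a few lines (illustrative only; this is not how git actually stores objects, and it ignores trees, blobs, and the index entirely):

```python
import hashlib

def commit_id(message, parent_ids):
    # Content-addressing: a commit's id is a hash over its content
    # and its parents, so history can't be silently rewritten.
    data = message + "".join(parent_ids)
    return hashlib.sha1(data.encode()).hexdigest()[:8]

class Repo:
    def __init__(self):
        self.parents = {}  # commit id -> list of parent ids

    def commit(self, message, parent_ids=()):
        cid = commit_id(message, list(parent_ids))
        self.parents[cid] = list(parent_ids)
        return cid

    def ancestors(self, cid):
        """Walk the DAG following parent pointers, like `git log`."""
        seen, stack = set(), [cid]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(self.parents[c])
        return seen

repo = Repo()
a = repo.commit("initial")
b = repo.commit("feature", [a])
c = repo.commit("fix", [a])
m = repo.commit("merge", [b, c])  # a merge commit has two parents
assert repo.ancestors(m) == {a, b, c, m}
```

Knowing this model helps, but it says nothing about the index, remotes, rebasing, or the CLI's quirks, which is where most of the learning time actually goes.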


Exactly. In 2009 we were checking in Java assignments as ZIP files, validated with print statements. It wouldn't have been that much more work to structure the algorithm assignments in that class in a way more similar to industry workflows, even if those workflows are an evolving target.


Fair enough, and I should have been a bit clearer in my original post.

In my experience with the MSCS program (nearly ten years ago at this point) the core required classes were mostly well structured and would serve people well continuing onto a PhD or growing their skill set for industry. The core constituted a relatively small chunk of the overall credits required, though, and the elective courses tended to be more along the lines of what I described.

I'm glad to hear that the Analytics program has a more dedicated focus on practical matters. It might be interesting to produce a series of similar (but narrower) curricula that amount to curated collections of CS classes making up degrees in Machine Learning, Systems Programming, etc.

I personally really enjoyed my dartboard-oriented approach to class registration. I learned more than I'll ever need to know about approximation algorithms, cryptographic theory, and compilers. Even if much of what I learned there hasn't proven directly useful yet, I really enjoyed learning it for learning's sake, and I think I'd have had a hard time picking up some of those gems since. I also still have a hobby of proving problems NP-complete on demand as a bit of a parlor trick (within the limited scope of problems to which you can apply the small handful of patterns I've burned into my brain over the years :).


Can professional experience and a partially completed bachelor's degree in SE substitute for the undergraduate degree requirement?


I asked this question in a number of places a couple years ago and the answer is basically no.

I did, however, find that my undergraduate university had a great program for people with nearly complete degrees who had been away for a few years.

I'll be finishing undergrad this May and am now looking at grad schools. Feel free to contact me if you want to chat about this because it's been surprisingly hard to find info or advice in our situation.


https://www.udacity.com/georgia-tech/faq

Who can apply to the OMS CS degree program?

Admission into the OMS CS program will require a Bachelor of Science degree in computer science from an accredited institution, or a related Bachelor of Science degree with a possible need to take and pass remedial courses. Georgia Tech will handle the degree admissions process. For more information please visit the Georgia Tech program page.


I got into analytics while using the quant investment site Quantopian.

Mostly you use python numpy and scipy to analyze a large time series data set (stock market) to predict pricing while having a low correlation to the overall market movement.
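For anyone curious what "low correlation to the overall market" looks like mechanically, here's a rough numpy sketch with synthetic daily returns (the numbers and the toy strategy are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# ~1 trading year of synthetic daily market returns
market = rng.normal(0.0005, 0.01, 250)
# A hypothetical near-market-neutral strategy: tiny market exposure
# plus mostly independent return stream
strategy = 0.05 * market + rng.normal(0.0008, 0.008, 250)

cov = np.cov(strategy, market)
beta = cov[0, 1] / cov[1, 1]              # exposure to market moves
corr = np.corrcoef(strategy, market)[0, 1]  # what the contest scores
print(f"beta: {beta:.3f}, correlation: {corr:.3f}")
```

The contest-style criterion is essentially that `corr` (and `beta`) stay near zero while the strategy's own mean return stays positive.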

I had some success and won their 6 month contest, but I still feel like a bit of a hack. I'd like to move into the financial quantitative analysis industry.

Would you say this GT program would be a good stepping stone?


In some ways. There's a class called ML For Trading that's very fun and like an intro to computational trading. The professor runs a company in that space.


I'm very interested in this program.

What is the best way to get in touch with you and get the syllabus material for the courses?

I'm at rememberlenny at gmail.


Charles Isbell is still at GT? Holy cow. I think he was the teaching assistant when I was taking VAX assembly back in the late 80's when I was there. Seemed like a nice guy.

It's amazing how you don't think of someone for almost 30 years, but you read their name in a comment on HN and memories come flooding in. What do those neurons do while they're waiting to be used again?


He was an associate professor in the early '00s.


> I did both my BSCS and MSCS at Georgia Tech. While I have many complaints about the school, the quality of the classes is not one of them, for either the undergrad or grad programs.

I also did my BS and MS at GT, and while I generally share your experience, there were 3 or 4 truly disappointing classes during my MS. They didn't ruin my overall experience, but I can see how someone could happen to have more experiences like those and fewer positive ones, and come away with a very different perception of course quality.

My overall opinion of GT is mixed, but the rigor of the courses is not one of my top critiques.


I took Isbell's class as well, and perhaps here we can share our respective experiences.

In the year I did it, the class was structured as follows:

At the beginning of the semester, you'd pick two datasets.

Every two weeks, you'd apply two or so algorithms that were being covered at the time (maybe k-means and SVD, or a NN and SVM) to your chosen data sets. There would be a set of variations that you were supposed to apply to each algorithm. Typically you'd normalize or clean the data in some way. Perhaps you'd filter outliers, etc...

The result would be a set of experiments to run (2 datasets) x (2 algorithms) x (2^3 variations per algorithm). You would compile the results into a (10 page max) paper, with analysis about how the dimensions differed.

It was up to the student to figure out how to actually implement this pipeline (I used sqlite + numpy/scipy/scikit-learn; many used Matlab).
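For readers who haven't seen this sort of setup, a rough scikit-learn sketch of the experiment grid might look like the following (the datasets, the two models, and the single scaled/raw variation are my stand-ins for illustration, not the actual course spec, which used 2^3 variations per algorithm):

```python
from itertools import product

from sklearn.datasets import load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The grid: (datasets) x (algorithms) x (preprocessing variations)
datasets = {"iris": load_iris(return_X_y=True),
            "wine": load_wine(return_X_y=True)}
models = {"svm": SVC, "knn": KNeighborsClassifier}
variations = {"scaled": True, "raw": False}

results = {}
for (dname, (X, y)), (mname, Model), (vname, scaled) in product(
        datasets.items(), models.items(), variations.items()):
    est = make_pipeline(StandardScaler(), Model()) if scaled else Model()
    # 5-fold cross-validated accuracy for this cell of the grid
    results[(dname, mname, vname)] = cross_val_score(est, X, y, cv=5).mean()

for key, score in sorted(results.items()):
    print(key, round(score, 3))
```

The analysis in the paper would then be about how scores vary along each dimension of `results`, e.g. how much scaling matters per algorithm.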

On paper, this sounds like a great class - what a wonderful way to learn about how different approaches relate to each other, and how crucial the process of preparing data is to the effectiveness of the algorithm. In practice, however, this did not happen for most students I knew.

These students spent most of their time finding implementations of the algorithms and hacking at them to actually run all the experiments. They then rushed through gluing the results together through some semblance of analysis. Alumni of the class I knew said the same thing about their experience.

This analysis was read by TAs. There were, I think, 3 of them for about 100 students. We wouldn't get the papers back for weeks, long after we'd moved on to new material. When we did get them back, there was very little feedback on the content; mostly it was noted that we had submitted the work on time and had successfully performed all the required experiments.

I agree that Isbell is a joy to listen to - he is charismatic, entertaining, and I too enjoyed his anecdotes. However, I felt like you would only get something out of his lectures if you already knew what you were talking about.

When I think about the quality of the class, I think about how responsive the class is to the individual needs and progress of the student.

If you say that it's up to the student what they get out of the class, and your bar for a good class is that the content is arranged in a nice manner, then here you go: https://pe.gatech.edu/sites/pe.gatech.edu/files/agendas/CS-4... Any self-directed student can grab Mitchell and do the weekly assignments I describe above, all for free and in the comfort of their own home.


I agree that latency and detail of feedback is an enormous problem with this sort of partially-guided coursework. However, it's a generalized problem with higher education, not specific to GT, in that when implemented effectively it's one of the most valuable education experiences but difficult to scale, because it demands time-consuming supervision.

This is especially true of term project courses, where the final portion of the project to which you devote the most time and creativity is also the part for which you're likely to receive the least feedback.


>However, I felt like you would only get something out of his lectures if you already knew what you were talking about.

I disagree (having taken the course as an undergraduate, with it being my first major exposure to machine learning). Certainly if all you do is attend the lectures, you're going to miss some background knowledge, but that is true of most (if not all) university courses. You're supposed to devote 2-3 hours of outside work for each hour of lecture, meaning 6-9 hours of studying per week outside of those lectures.

Some of this is doing the projects, although some of it is personal investigation.

There are failings of his course (one of the biggest at this point is that it doesn't do any work with the state of the art now), but I think that the fact that his course caters toward people who are self-driven is not a failing.

The best way to see the goal of the course is to look at his exams. Unless they were different when you took them, they were intentionally too difficult for the allotted time, leading to low averages and incomplete work from the majority of students.

However, the course allows motivated students to make connections between concepts, with the help of the professor and the coursework. Having someone "leading you" down the right path is very helpful, much more so than a textbook alone.

I really do think there is one exam question that sums up Isbell's course perfectly: it's the one where you're asked to compare and contrast 4-5 aspects of 4 randomized optimization algorithms (RHC, GA, SA, and MIMIC) and explain situations where you'd use each, and why.

The course's goal is to build a strong intuition for the algorithms covered (sadly, at the partial expense of a theoretical understanding). Not everyone puts in the work to develop that understanding, but that's not necessarily a failure of the course.


I do agree that having materials that provide an approach to a topic is very useful, but as I mention elsewhere such materials are available for free online.

You can find the syllabus for Isbell's class and follow along. You can do the readings and programming investigations. If you like lectures, you can find many full courses on YouTube. I found Caltech's lectures (https://www.youtube.com/watch?v=eHsErlPJWUU) to be the best at presenting SVMs out there, although this was probably my third attempt at understanding them, so maybe the other resources rubbed off. They also skim over the quadratic programming details, but I get that this may be beyond the depth many people want in an intro class.

If you have to teach the material to yourself, how is your experience improved by being in the class?


>You can find the syllabus for Isbell's class and follow along

To be fair, most of Isbell's course (lectures) is also available on Udacity.

>If you have to teach the material to yourself, how is your experience improved by being in the class?

There are a couple of advantages. One of the most obvious is the lower latency of responses when you have a confusion or misunderstanding. In a lecture, you can ask a question and get an answer almost immediately. This is most useful (imo) with algorithms and mathematical concepts, because lecturers are often quick to provide insight into the interrelationships between algorithms (both in machine learning and in a more theoretical sense, like computability). There are topics that come up a lot, and instant feedback on those connections lets you spend less time stuck in a misunderstanding.

That alone is a fairly weak justification; I think the stronger one is feedback in general. Watching lectures only gets you so far. With implementations of algorithms, your feedback is often testable correctness (although my experience in DS&A suggests most people are capable of constructing incredibly incorrect models that nonetheless perform well on some inputs, even on decent autograders), but with things like machine learning algorithms and intuition about them, you can't get that. So the feedback that yes, your understanding is correct (even if that feedback is slow) is invaluable. In that regard I think online courses and MOOCs can be good, but MOOCs that don't provide feedback aren't as valuable. I've attended a lot of lectures, and I've ignored a lot of lectures. Listening to someone say something does not mean one has learned it.

I'd also note that, if I recall, the way Isbell approaches teaching the material and the way the textbook does are very different. Textbooks are (often) references. They provide information on what something is and how it works theoretically, but lecturers are very often able to provide the kinds of things that aren't (and perhaps shouldn't be) in textbooks.

If I'm reading a textbook, it's very likely that I want to know how to implement an algorithm, so I care that simulated annealing says you jump when e^(D/T) > Rand[0,1]. Whereas in a lecture, I'm likely much more interested in the idea that simulated annealing is conceptually very similar to throwing a ping-pong ball into a large, complex, non-convex plastic surface and seeing where it lands.
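For what it's worth, that acceptance rule is only a few lines of code. A rough sketch, minimizing rather than maximizing, with function names and a geometric cooling schedule that are my own choices:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Minimize f: always take improving moves; take worsening moves
    with probability exp(delta / T), where delta <= 0 and T cools."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        delta = fx - fy  # > 0 when y is an improvement
        # The textbook rule: jump when e^(delta/T) > Rand[0,1]
        if delta > 0 or rng.random() < math.exp(delta / t):
            x, fx = y, fy
        t *= cooling  # geometric cooling schedule
    return x, fx

# A bumpy 1-D landscape: plain hill climbing from x=5 gets stuck in a
# poor local minimum, but annealing's early random jumps can escape.
f = lambda x: x * x + 10 * math.sin(3 * x)
best_x, best_f = simulated_annealing(
    f, x0=5.0, neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5))
```

As temperature T shrinks, exp(delta / T) for a worsening move goes to zero, so the algorithm smoothly degrades into plain hill climbing, which is exactly the ping-pong-ball-settling intuition from the lecture.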


My criticism is precisely that feedback was lacking. The assignments were only graded on submission - there was no feedback there (likely because every student worked with different data so going in-depth would have required the grad student TAs to spend too much time per student digging in).

I don't agree that feedback during lecture is valuable or low-latency as you say - not with 100 students attending. It might work to ask a clarifying question here and there, but again - you're only in a position to take advantage of that if you're already comfortable with the material and are generally keeping up.

Books are different than lectures, sure, but I don't think there's much difference between attending a lecture with 100 students, or watching one online. Indeed many people claim the online way is better, since you can rewind and skip around, pause and lookup references, etc...


When I took it we were encouraged to use Weka for the algorithm implementations themselves. This certainly allowed me (and I'd never so much as touched machine learning prior to taking the class -- I took it on a bit of a lark that wasn't related to my research at all) to focus on understanding the behavior of the algorithms rather than worrying about hacking them together.

I'd agree that Tech has too few TAs for too many students, generally, for its graduate courses, but I don't know that other schools do a better job. A brief survey of the folks around my desk elicited howls of laughter at the notion of useful or accessible TAs in grad school.

> I agree that Isbell is a joy to listen to - he is charismatic, entertaining, and I too enjoyed his anecdotes. However, I felt like you would only get something out of his lectures if you already knew what you were talking about.

I think this assertion is, at best, too strong. A better assertion might be that his lectures depended on coming in with sufficient background.

As I said, I came into the course with no experience with machine learning at all. On the other hand, I did have a fairly strong theoretical computer science, stats, and linear algebra background. I will admit that may have made me blind to things he was simply assuming with respect to educational background that were not actually safe to assume. That said, I still refer back to his primer on information theory (http://www.cc.gatech.edu/~isbell/tutorials/InfoTheory.fm.pdf) when discussing work relying on it, so he certainly made some effort to fill in gaps as he discovered they were common.

> When I think about the quality of the class, I think about how responsive the class is to the individual needs and progress of the student.

For a graduate level course I feel a class clears this bar when it accurately and thoroughly documents the prerequisites. Now, I'm not saying Charles's class necessarily does this. As I said, I came in with a pretty strong background in what turned out to be more than sufficient, but with that background I personally felt his lectures were quite tractable, even assuming complete ignorance of ML itself.


> These students spent most of their time finding implementations of the algorithms and hacking at them to actually run all the experiments. They then rushed through gluing the results together through some semblance of analysis. Alumni of the class I knew said the same thing about their experience.

Ironically, this sounds quite a lot like much of industry.


Or unsurprisingly...


yeah, true.



