Hacker News | srconstantin's comments

This makes sense.

It's a "The way to Tara is via Holyhead" kind of thing.


Look -- as long as "data scientist" is a sexy job title, a lot of different jobs are going to claim they fall under that umbrella. I have an applied math background, and I'm fine with scientific computing, but I have much less experience with databases. I'm a very different candidate than a software engineer who took a machine learning course. Maybe in a few years we'll have more intelligent language for making those distinctions.

It shouldn't be surprising or bad news that some "data scientists" have deeper knowledge than others. We're going through a quantitative revolution -- many fields and industries are nearly untouched by statistical analysis/machine learning, and so there's a lot of low-hanging fruit in going from "nothing" to "something." Even somebody who only knows a little can add value at these margins. But, of course, that won't be true forever -- look at quantitative finance, which is very competitive and requires a lot of education, because the low-hanging fruit was picked in the 1990s.

There's room in this world for the statistician, the mathematician, the database engineer, the AI guy, the data visualization expert, the codemonkey who knows a few ML methods, etc.


First, most poor people still don't live in projects. Second, I don't know if degradation matters more than material possessions.


I'm kind of the villain of this piece -- a relatively novice coder who expects to apply for programming-heavy jobs in the future.

Here's why I think I'm not actually a villain:

1. I'm not under any illusions that people should pay me mega-bucks because I can program at all. I haven't hit my 10,000 hours yet; closer to 1000. You should hire me because I'm a mathematician who can program, not because I'm a hacker genius. Often, being an "X who can program" is much more valuable than just an X. It means you can execute your ideas yourself; and it means you have a clear concept of which ideas can be executed computationally and which cannot.

2. I couldn't just "quit programming" any more than I could quit writing or quit reading. I could resolve to, but I wouldn't last long. I've found that when something makes my brain happy, but isn't technically "my job," I'm much better off following instinct and doing it anyway, rather than forcing myself never to have any side projects.

3. You're worried I'll contribute to the volume of bad code in the world? Well, just because there are more experienced programmers than me doesn't mean I'm dishonest or stupid. I make a point to be very frank about what I can and can't do at the moment. I've found that the stuff I can do is quite valuable to people. I don't try to bluff my way into projects I'm obviously not qualified to contribute to.


I think you're much closer to a programmer than a layman who programs.

The straw man about reducing the amount of bad code in the world is garbage. Just code. If you're getting a benefit from it - or hell, if you just enjoy it - do it. You have nothing to apologize for.

The whole elitist concept of only sanctioned programmers coding is just arrogant. Even the best programmers wrote awful code before they were great. Doesn't mean they shouldn't have kept it up.


It's like trying to cut down on the amount of bad writing in the world. Sure, I guess aesthetically it'd be nice if I never had to look at any bad writing. But bad writing is part of the process of becoming a good writer, so there'd ultimately be less good writing out there.


There's a difference between optimizing for current conditions, and changing future conditions.

Optimizing for current conditions: right now, a lot of the best job applicants have degrees from top-tier schools, and if you need to hire someone today, using education as a job filter is a reasonable choice.

Changing future conditions: there are plausible arguments that universities should not have a monopoly on education and credentialing, and promoting alternatives to a college degree may be a reasonable long-term advocacy and philanthropic goal.

The real story is: what's going on with this new hedge fund? Is the focus on "conclusions that are fundamentally correct but missed by most of the world" an actual strategy or a contrarian buzzword?


Secret I believe to be true:

A good deal of experimental research functions as a cartel over resources and data. The "open access" movement in biology is not what programmers would think of as open; you can get access to many "open" data sets only by application, and you pretty much need to be an academic biologist to be granted one. Science would be much stronger if there were a norm of truly sharing data. But restricting access is in the interest of each individual researcher who wants to maintain his/her relative prestige advantage.


I think people should seriously consider that not all women want children.

Women aren't idiots. We think about the tradeoffs.


How can you know about the tradeoffs? Do you remember your past lives? Nobody can; you have to guess, and often you will guess wrong. That's why very few people have great family-planning skills in their 20s.


Violence against women and violence against men are different. Men are more likely to be the victims of violent crimes, and especially more likely to be attacked by strangers. Violence against women is more likely to be rape or domestic violence. Men are more likely to wind up in something you could describe as a "fight." Depending on context, you could say that men are in more danger than women (because most violence is man vs. man) or that women are in more danger than men (because a lot of violence against women comes from familiar figures like family members or boyfriends, and because self-defense is often harder for women).


I don't think "snake oil" is the right paradigm here. In ML/AI, lots of honest researchers are wrong; being a scientist who's wrong doesn't make you a criminal.

That said: Hawkins' principles are very different both from what the brain does and from what the state of the art in machine learning does. My impression is that HTMs attempt to be too general and assume too little about the problem.

For vision in particular, most successful computer vision algorithms (as well as what we know about the visual cortex's mechanisms) make extensive use of information related to the fact that the image is an image. That is: edges are probably more likely to be continuous than broken; locally constant curvature is more likely than not; textures and colors usually continue over the surface of an object; objects occlude other objects; etc. Brains and effective computer vision algorithms hard-code a lot of information about the nature of the problem they're solving. Hawkins wants to bypass that, and I think it's probably too ambitious an aspiration.
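To make the "hard-coded priors" point concrete, here's a toy sketch (mine, not Hawkins' method or any production detector): a Sobel filter applied to a synthetic image with a vertical edge. The kernel itself encodes the assumption that intensity changes between spatially adjacent pixels are what matter -- exactly the kind of image-specific structure a fully general learner would have to discover from scratch.

```python
import numpy as np

# A synthetic 8x8 image with a vertical edge down the middle:
# left half dark (0), right half bright (1).
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Sobel kernel for horizontal gradients -- a hand-coded prior that
# "edges" are local intensity changes between neighboring pixels.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def filter2d(image, kernel):
    """Naive 'valid'-mode 2-D cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edges = filter2d(img, sobel_x)
# The response is nonzero only in the columns straddling the edge.
# Shuffling the image's columns would destroy this structure -- which is
# precisely the spatial information a maximally general algorithm ignores.
print(edges)
```

Nothing here resembles HTM; it just illustrates how much domain knowledge even the simplest vision primitive bakes in.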

Then again, if he makes it, more power to him.

I don't think we should be prejudiced against someone who comes from the tech industry and wrote a popular book. It's certainly not "snake oil" -- it seems to be a good-faith attempt to solve an important problem. I think the odds are against it working, but that's not a moral condemnation.


Read the paper; HTMs don't seem to do better than other object recognition algorithms at recognizing shapes, especially because there are visual properties they ignore (curvature, global topological properties, etc.). Accuracy on the picture datasets is only 60-70%. What's interesting about HTM is its generality. I can't judge whether it would be good for the Grok prediction engine, but I know more about image recognition and you definitely don't want to use it for that.

