
We each get to start where the greatest minds before us ended.

Disagree. Pretty soon you'll have to spend 40 years in specialized schooling just to understand where the giants before us were headed, let alone where they ended up. Human knowledge only seems sequential, but I don't think it is. The fact that archaeologists are continually surprised by how advanced the "ancients" were shows how much knowledge humanity has lost and reinvented (or, in some cases, not).




Not if you have silicon neurons that can process information a hundred times faster (since they use electromagnetic signals, not chemical agents, to pass information)! ;-)

But I'm begging the question there...

EDIT: I sort of disagree with you. I'm in graduate school right now, and 90% of a modern mathematician's life is spent keeping up with the progress of others, particularly from the last 50 years. That is true. However, over time, we also filter out the unimportant stuff. While automorphic forms may be an important field today, so that we can bring fresh minds in to tackle the Langlands Program, once that is solved there won't be as much emphasis on it. I knew "more" calculus than Isaac Newton when I was 12, in the sense that I had learned things like Stokes' theorem that he never knew about, but then, Newton's knowledge was also much more of a mess. The stuff you learn from textbooks is not how researchers originally discovered things. Think of the analogy of a fresh startup codebase that looks horrible but becomes polished and clean after several refactorings; it may take weeks to introduce a person to the former codebase, but the latter may only need a good one-hour tutorial.


You nailed it with the code analogy: human knowledge is continually refactored and slowly but surely increased with each successive generation. To say we aren't getting smarter is absurd; we are getting both smarter and more knowledgeable at an ever-increasing rate.


You are assuming that the required knowledge can be presented in a modular way, to reduce its complexity.

Knowledge can only be simplified so much... what if the knowledge required for AI, after being made as simple as possible, is still more complex than the most gifted human being can understand?

I like to think that the ultimate nature of reality is simple and beautiful. But all I can be sure of is that the things that I can see are simple enough for me to grasp.


We've already designed (or "grown" might be a more accurate description) systems that work, most of the time, yet defy full understanding by even the most gifted humans. See the world financial system, our national electricity grid, Windows, Wikipedia, etc. We may not be able to design AI, but we are going to develop it.


We can design/grow some things that we cannot understand.

Does it follow that we can therefore design/grow all things that we cannot understand?


Well, the thing is, the code analogy works really well for things like algorithms, math, and factual knowledge. The things you are talking about that cannot be modularized are the things that require large amounts of training. However, the training algorithm itself may be modularized and encoded, thus enabling the AI to learn those things. This is the goal of machine learning.
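
A minimal sketch of that idea in Python (all names are illustrative, not from any particular library): the training loop is a small, generic, reusable module, even when the behavior it produces is not.

    # Generic gradient-descent trainer: the "modularized" training
    # algorithm. It knows nothing about what is being learned.
    def train(param, gradient, lr=0.1, steps=100):
        for _ in range(steps):
            param -= lr * gradient(param)
        return param

    # Example: learn w minimizing (w - 3)^2, whose gradient is 2*(w - 3).
    w = train(0.0, lambda w: 2 * (w - 3))
    print(w)  # converges toward 3.0

The same few lines of trainer can be pointed at wildly different objectives; only the gradient function changes.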


A training algorithm selects a hypothesis from a hypothesis space.

What happens if the hypothesis space does not include the true hypothesis?

Defining the hypothesis space is tricky - though the training/search part is also tricky :-).
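
To make the worry concrete, here is a toy sketch in Python (illustrative names, made-up data): if the hypothesis space is "lines through the origin" but the data come from a quadratic, training finds the best line in the space, yet the error can never reach zero because the true hypothesis lies outside it.

    # Hypothesis space: y = w*x. True function: y = x^2 (not in the space).
    def best_line_slope(xs, ys):
        # Closed-form least-squares slope for y = w*x.
        return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

    xs = [1, 2, 3, 4]
    ys = [x * x for x in xs]

    w = best_line_slope(xs, ys)
    residual = sum((y - w * x) ** 2 for x, y in zip(xs, ys))
    print(w, residual)  # residual stays well above 0: irreducible error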



