
This is a fantastically good point. I think things will get even more interesting once ML tools have access to more than just text, audio, and image/video information. They will be able to draw inferences that humans are generally unaware of. For example, maybe something happens in the infrared range that we're oblivious to, or maybe inferences can be drawn from how radio waves bounce around an object.

"The universe" according to most human experience misses SO much information and it will be interesting to see what happens once we have agents that can use all this extra stuff in realtime and "see" things we cannot.




As far as I know, all sensory evolution prior to this point has been driven by incremental gains in fitness within a changing environment.

True vision requires motive and an embodied self. I’m ignorant about the state of the art here, but I’m way more terrified of what these things don’t see than interested in what they could show us. It seems to me that the only human motives accessible to machines are extremely superficial and behavior-based.

Knowledge is not some disconnected map of symbols that results in easily measurable behavior; it has a deep and fundamental relation to conscious and unconscious human motivation.

I don’t see any possible way to give a machine that same set of motives without having it go through our same evolutionary and cultural history. I also strongly believe that most of our true motives sit under many protective layers of behavioral feints and tests, and that it takes voluntary connection and deep introspection to expose even a fraction of them to our conscious selves, let alone to others, let alone to a computer.

These models seem to be amazingly good at combining maps of already travelled territory. Trying to use them to create maps for territory that is new to us seems incredibly dangerous.

Am I missing something here, or is it not true that AI models operate purely on bias? What we choose to measure and train the model on seems to predetermine the outcome; it’s not actually empirical, because the model can’t evaluate whether its predictions make sense outside of that framing. At some point it’s always dependent on a human saying “success/fail”, and it seems more like an incredibly complicated kaleidoscope. Maybe these models can cause humans to see patterns we didn’t see before, but I don’t think they could actually make new discoveries on their own.


I think your point is more interesting, but the problem is how knowledge gets started from a tabula rasa. A human isn't born knowing about quantum mechanics, Christoffel symbols, or what pushforward measures are. If there were a method to learn facts from scratch as cheaply as brilliant humans do, it would be amazing. Even if you count from the elementary school years, humans still end up spending several orders of magnitude less energy.

Transformers themselves are a lot more effective than n-gram models or non-contextual word vectors. I imagine there is something out there that will be to Transformers what Transformers are to word2vec.
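
To make the "non-contextual" part concrete, here is a minimal toy sketch (my own illustration, using a made-up four-word vocabulary and randomly initialized weights, not a trained model): a word2vec-style lookup table gives "bank" the same vector in every sentence, while even a single self-attention layer already produces context-dependent vectors.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["river", "bank", "money", "loan"]
    d = 8
    table = {w: rng.normal(size=d) for w in vocab}  # static word2vec-style embeddings

    def static_embed(tokens):
        # Non-contextual: each token maps to the same vector regardless of its neighbors.
        return np.stack([table[t] for t in tokens])

    # One randomly initialized self-attention head (toy stand-in for a Transformer layer).
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

    def self_attention(x):
        # Contextual: each output row is a softmax-weighted mixture of the value vectors.
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v

    s1, s2 = ["river", "bank"], ["bank", "loan"]
    bank_static = static_embed(s1)[1], static_embed(s2)[0]
    bank_contextual = self_attention(static_embed(s1))[1], self_attention(static_embed(s2))[0]

    print(np.allclose(*bank_static))      # True: "bank" is identical in both sentences
    print(np.allclose(*bank_contextual))  # False: context changes the representation

In a real Transformer the projections are learned and stacked over many layers, but the contextual mixing is the same basic mechanism.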



