
No argument with the particles/neurons/matter approach to the subject. It is sound, and if you look at us compositionally there is nothing magic about what's going on. There is, though, something about intuition or instinctual behavior which can constantly recombine and reapply itself to the task at hand. I know many will balk at intuition, and maybe it's at best only a heuristic, but I think we need to at least unravel what it is and how it operates before we can understand what makes something qualify as human-like intelligence. Is it merely executing a process which we can put our minds into with practice, or is it demonstrating something more general and higher-level?



Well look, compared to the electrified bits of sand in my laptop, I'd strongly defend pregnancy as something vastly more "magical", if those are the terms we must use.

Organic adaptation, sensorimotor adaptation, somatosensory representation building... i.e., all those things which ooze and grow so that a piano player can play, or so that we can type here... do people think these are magic?

Well, I think it's exactly the opposite. It's a very anti-intellectual nihilism to hold that all that need be known about the world is the electromagnetic properties of silicon-based transistors.

Those who use the word "magic" in this debate are really like atheists about the moon. It all sounds very smart to deny the moon exists, but in the end, it's actually just a lack of knowledge dressed up as enlightened cynicism.

There are more things to discover in a single cell of our body than we have ever known, and may ever know. All the theories of science needed to explain its operation would exhaust every page we have ever printed. We know a fraction of what we need to know.

And each bit of that fraction reveals an entire universe of "magical" processes unreplicated by copper wires or silicon switches.


You make good points. I think it's a typical trait of the way computer scientists and programmers tend to think. Computer science has made great strides over the decades through abstraction, as well as distillation of complex systems into simpler properties that can easily be computed.

As a result of the combination of this method of thinking and the Dunning-Kruger effect, people in our field tend to apply this to the entire world, even where it doesn't fit very well, like biology, geopolitics, sociology, psychology, etc.

You see a lot of this on HN. People who seem to think they've figured out some very deep truth about another field that can be explained in one hand-waving paragraph, when really there are lots of important details they're ignoring that make their ideas trivially wrong.

Economists have a similar thing going on, I feel. Though I'm not an economist.


As an aside, both my parents are prominent economists, I myself have a degree in economics, and I have spent much of my life with a bird's-eye view of the economics profession, so I can emphatically confirm that your feeling is correct.


Economics is zoology presented in the language of physics. Economists are monkeys who've broken into the uniform closet and are now dressed as zookeepers.

I aspire, at best, to be one of the children outside the zoo laughing. I fear I might be the monkey who stole the key...


Remember always, computer science is just discrete mathematics with some automatic whiteboards. It is not science.

And that's the heart of the problem. The CSci crowd have a somewhat well-motivated inclination to treat abstractions as real objects of study; but have been severely misdirected by learning statistics without the scientific method.

This has created a monster: the abstract objects of study are just the associations statistics makes available.

You mix those two together and you have flat-out pseudoscience.


Not sure I agree in this regard. We are, after all, aiming to create a mental model which describes reproducible steps for creating general intelligence. That is, the product is ultimately going to be one set of abstractions or another.

I am not sure what more scientific a method you could propose. And we can, in this field, produce actual reproducible experiments. Really, more so than in any other field.


There's nothing to replicate. ML models are associative statistical models of historical data.

There are no experimental conditions, no causal properties, no modelled causal mechanisms, no theories at all. "Replication" means that you can reproduce an experiment designed to validate a causal hypothesis.

Fitting a function to data isn't an experiment, it's just a way of compressing the data into a more efficient representation. That's all ML is. There are no explanations here (of the data) to assess.


I don’t think that’s true either.

Take the research into LoRAs, for example. Surely the basic scientific method was followed when developing it. You can see that from the paper.

Obviously the results can be reproduced. Unlike in many other fields, reproducibility can be pretty trivial in CS.

Training a model isn't really a science, but the work that went into creating the models surely is.
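
For anyone unfamiliar, the core LoRA idea is small enough to sketch: freeze the pretrained weight matrix W and learn only a low-rank correction B·A that gets added to it. A toy illustration in NumPy (dimensions and names are made up, not from the paper or any library):

    import numpy as np

    d_out, d_in, r = 64, 128, 4                # r is the low-rank bottleneck
    W = np.random.randn(d_out, d_in) * 0.02    # frozen pretrained weight, never updated

    # LoRA factors: only these are trained during fine-tuning.
    A = np.random.randn(r, d_in) * 0.01        # small random init
    B = np.zeros((d_out, r))                   # zero init, so the model starts unchanged
    alpha = 8.0                                # scaling hyperparameter

    def forward(x):
        # Effective weight is W + (alpha / r) * B @ A, never materialized in full.
        return W @ x + (alpha / r) * (B @ (A @ x))

    x = np.random.randn(d_in)
    y = forward(x)    # identical to W @ x until B is trained away from zero

The paper's claim, testable and reproducible, is that training only A and B recovers most of the quality of full fine-tuning with a tiny fraction of the trainable parameters.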


CS isn't science, it's discrete mathematics.


All sciences are progressively more impure (i.e., applied) forms of math.


lol


Also there’s literally a causal relationship between model topology and quality of output.

This can be plainly seen when trying to get a model to replicate its input.

Some models perform better in fewer steps; some perform worse for many steps and then suddenly do much better.

How is discovering these properties of statistical models NOT science?


I do think there's an empirical study of ML models and that could be a science. Its output could include things like,

"the reason prompt Q generates A1..An is because documents D1..Dn were in the training data; these documents were created by people P1..Pn for reasons R1..Rn. The answer A1..An related to D1..Dn in so-and-so way. The quality of the answers is Q1..Qn, and derives from the properties of the documents generated by people with beliefs/knowledge/etc. K1..Kn"

This explains how the distribution of the weights produces useful output by giving the causal process that leads to training data distributions.

The relationship between the weights and the training data itself is *not* causal.

E.g., X = 0,1,2,3; Y = A,A,B,B; f(x; w) = A if x <= w else B

w = 1 because the rule x <= 1 partitions Y such that P(Y|X; w) is maximised. These are statistical and logical relationships ("partitions", "maximises").
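
To make that concrete, here is the whole "fit" as a toy script (illustrative only):

    # Toy "training": pick the threshold w that best explains the labels.
    X = [0, 1, 2, 3]
    Y = ['A', 'A', 'B', 'B']

    def f(x, w):
        return 'A' if x <= w else 'B'

    def accuracy(w):
        return sum(f(x, w) == y for x, y in zip(X, Y)) / len(X)

    # "Fitting" is just a search for the w that maximises agreement with the data.
    w = max(X, key=accuracy)
    print(w, accuracy(w))    # -> 1 1.0

Nothing in that search refers to anything extended in space or time; the relationship between w and the data is exhausted by counting agreements.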

A causal relationship relates a causal property of an object (extended in space and time) to another causal property, via a physical mechanism that reliably and necessarily brings about some effect.

So, "the heat of the boiling water cooked the carrot because heat is... the energetic motion of molecules ... and cooking is .... and so heating brings about cooking necessarily because..."

Heating, water, cooking, carrot, motion, molecules, etc. -- their relationships here are not abstract; they are concretely in space and time, causally affecting each other, and so on.


So what do you call the process of discovering those causal properties?

Was physics not actually a science until we uncovered quarks, since we weren’t sure what caused the differences in subatomic particles? (I’m not a physicist, but I hope that illustrates my point)

Keep in mind most ML papers on arXiv are just describing phenomena we find with these large statistical models. Also, there's more to CS than ML.


You're conflating the need to use physical devices to find relationships, with the character of those relationships.

I need to use my hand, a pen and paper to draw a mathematical formula. That formula (say, 2+2=4) expresses no causal relationships.

The whole field of computer science is largely concerned with abstract (typically logical) relationships between mathematical objects; or, in the case of ML, statistical ones.

Computer science has no scientific methodology for producing scientific explanations -- it isn't science. It is "science" only in the old German sense (Wissenschaft) of a systematic study.

Scientists conduct experiments in which they hold fixed some causal variables (i.e., causally efficacious physical properties) and vary others, according to an explanatory framework. They do this in order to explore the space of possible explanations.

I can think of no case in the whole field of CS where causal variables are held fixed, since there is no study of them. Computer science does not study even voltage, or silicon, or anything else as physical objects with causal properties (that is electrical engineering, physics, etc.).

Computer science ought just be called "applied discrete mathematics"


I see where you're coming from, but I think there's more to it than that, specifically with non-determinism.

So say I observe some phenomenon in a bit of software that was built to translate language: say, the ability to summarize text.

Then I dig into that software and decide to change a specific portion of it, keeping all other aspects of the software and its runtime the same, and then I notice it's no longer able to summarize text.

In that case I’ve discovered a causal relationship between the portion I changed and the phenomenon of text summarization. Even though the program was constructed, there are unknown aspects.

How is that not the process of science?

Sorry if this is just my question from earlier, rephrased, but I still don’t see how this isn’t a scientific method.
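
To make that concrete, here's a toy version of the intervention I have in mind. The three "components" and the output-shift metric are stand-ins for, say, attention heads and a summarization score; this is an illustration of the method, not of any real model:

    import numpy as np

    rng = np.random.default_rng(0)

    # A stand-in "model": three fixed components whose outputs are summed.
    components = [rng.standard_normal((4, 4)) for _ in range(3)]

    def run(x, ablate=None):
        # Sum the component outputs; if `ablate` is set, skip that one
        # component while holding everything else fixed.
        out = np.zeros(4)
        for i, W in enumerate(components):
            if i != ablate:
                out += W @ x
        return out

    x = rng.standard_normal(4)
    baseline = run(x)
    for i in range(3):
        shift = np.linalg.norm(run(x, ablate=i) - baseline)
        print(f"component {i}: output shift = {shift:.3f}")

Hold everything fixed, knock out one piece, measure the change in behaviour: that looks a lot like a controlled experiment to me.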


Intuition is a process of slight blind guesses in a system that was built/proposed by a similar organism, in a way that resembles previous systems. Once you get into things like advanced physics, advanced biology, etc., intuition evaporates. Remember SR/GR (special and general relativity)? How intuitive were those? I'd say current AI is pure intuition in this Q'-ZipQA-A' sense, because all it does is blindly guess the descent path.


Intuition is a form of pattern matching without reasoning, so kinda like an LLM.



