Hacker News | 0xBABAD00C's comments

There is research claiming the entire universe is a neural network: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7712105/


You can find theoretical physics research claiming a lot of bonkers things that almost certainly are not true, that's sort of the nature of the field.


That's the sort of blue sky research I'm glad exists.


Indra's Net is, in fact, a neural network


Frankly, stuff like this makes me more skeptical of the ML community. Remember when people thought brains were just really complicated hydraulic systems?


Vanchurin is a physics professor, but ok https://twitter.com/vanchurin?lang=en

I actually think that interdisciplinary work like this can shed light into common structures across physics, biology, neuroscience, CS, etc. If anything, I wish there were more attempts to explore the connections between these disciplines.


The author is basically a crank; it looks like they held some teaching positions in an unrelated subject at various universities, yielding the “professor” title, and originally studied/published actual research in cosmology decades back.

Might as well be skeptical of the math community because of circle squarers


> The author is basically a crank

Really not sure why you say this. Is it something personal against him? Are you against people exploring weird interdisciplinary topics?

He has tons of publications [1], including co-authorship with people like this guy: https://en.wikipedia.org/wiki/Eugene_Koonin

Koonin is certainly not a crank, and is a rather well-known interdisciplinary researcher at the intersection of biology and physics.

[1] https://scholar.google.com/citations?hl=en&user=nEEFLp0AAAAJ...


Yeah I tend to agree it feels like a little bit of everything, but mostly B. To be honest, this community survived longer than many others I've seen.


> Gradients Are Not All You Need

Sometimes you need to peek at the Hessian.

Seriously though, what is intelligence if not creative unrolling of the first few terms of the Taylor expansion?
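To make the joke concrete, here's a rough toy sketch (my own illustration, nothing from the article): a plain gradient step uses only the first-order term of the Taylor expansion of the loss, while a Newton step also "peeks at the Hessian" and, on a quadratic, lands exactly on the minimum.

    # Toy comparison of a first-order (gradient) step vs a second-order (Newton)
    # step on a quadratic loss L(w) = 0.5 * w^T A w - b^T w. Illustrative only.
    import numpy as np

    A = np.array([[3.0, 0.5],
                  [0.5, 1.0]])          # positive-definite curvature (the Hessian)
    b = np.array([1.0, -2.0])

    def grad(w):
        return A @ w - b                # first-order (gradient) information

    def hessian(w):
        return A                        # second-order information; constant for a quadratic

    w = np.zeros(2)
    lr = 0.1

    gd_step = w - lr * grad(w)                                # follow the slope
    newton_step = w - np.linalg.solve(hessian(w), grad(w))    # also use the curvature

    print("gradient step:", gd_step)
    print("newton step:  ", newton_step)  # equals the exact minimizer A^{-1} b here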


Of what?


`a_0 + a_1 x + a_2 x^2 + ...`


That's the expansion. What does it represent?


Given the context of the original statement, the Taylor series expansion represents the statistical learning process of any topic anywhere.


I think you mean loss function, not process. And I'm failing to make any connection between intelligence and polynomial approximations to loss functions.


Intelligence is rather ill-defined, which I suppose is why you are having difficulties.


Yes.

(And it goes down all the way)


No that's turtles.


Arguing against AI's "existence" or its dangers is cheap/free: if the whole thing flops, you get minor cookie points, and if it doesn't, people will have bigger problems to worry about than you or your opinion. AI is not gonna flop, and we better prepare.


You could make this argument about literally any low-probability, high-impact event.


> able to generalize language, math, and positional reasoning and mix it with the older parts of the brain for reward and training mechanisms and it can do it in real time

most brains I run into don't do that much at all, mostly just existing and adaptation-execution


It's not a strong argument to say "what could it be other than X? therefore, X"

> What could it be?

Sense of community is gone. Trust is at an all-time low. Traditional social institutions and scaffolding that held society together are gone. Families are more broken apart than ever. It's an entire regime change, and we're still operating with our old instincts/hardware, which expects to be raised in a village by a largely familiar and empathetic community. The farther we get from that model, the more we'll be stressed out (until and unless we upgrade our hardware somehow, or, more likely, die trying).

Social media is small potatoes in comparison to the changes that have happened already, and especially to those that are coming.


> Sense of community is gone. Trust is at an all-time low. Traditional social institutions and scaffolding that held society together are gone. Families are more broken apart than ever.

Granting all of that to be true (though I think it could be argued otherwise), those things started happening long before the sharp rise in depression among teen girls.


And it’s all due to the very existence of progressing technology. Dr. Skrbina in the book Technological Slavery makes the case that humanity has no control over the impacts of technology on humanity, and that technology has an emergent will that is independent of humans who create and operate it.

https://ia600300.us.archive.org/21/items/tk-Technological-Sl...


> dynamic programming only comes up when I'm studying for interviews

I worked a SWE job years ago which was heavily reliant on dynamic programming and string search/alignment/etc. It really depends.
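(If anyone is curious what that flavor of work looks like, here's a minimal sketch of the textbook building block, edit-distance DP for string alignment; purely my own illustration, not the actual codebase:)

    # Classic dynamic-programming edit distance (Levenshtein): dp[i][j] is the
    # minimum number of edits to turn a[:i] into b[:j]. Illustrative only.
    def edit_distance(a: str, b: str) -> int:
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i                    # delete everything in a[:i]
        for j in range(n + 1):
            dp[0][j] = j                    # insert everything in b[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # match / substitution
        return dp[m][n]

    print(edit_distance("kitten", "sitting"))  # -> 3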


It always depends, but I think we can safely say that it doesn’t come up in the _majority_ of jobs.


Human beings are not the top-level cognitive agents; there are egregores at higher levels of organization (related to memeplexes, meta-organisms, etc.). It's just hard to define and talk about them concretely, since our language is not well-suited to reasoning about agency above the human level. One century-old example of identifying an egregore is the meme "A spectre is haunting Europe — the spectre of communism." There's nothing supernatural about it: just as a combination of neurons, each computing its own thing, gives rise to a new agent (a human), a combination of humans, each computing their own thing, gives rise to a superhuman agent (the spectre of communism).


My opinion is that the Earth is the top level cognition with a hierarchy of lesser intelligences going all the way to the single cell. We exist in between and that is why the human condition is so depressing.


Here's the analogy, per Sean Carroll [1]:

- There seems to be a special direction "down", where things fall by default, because we live in the vicinity of an influential object in space, called Earth

- There seems to be a special direction "forward in time", where things happen by default, because we live in the vicinity of an influential event in time, called Big Bang

If we had stuck to the local Earth context and geocentrism, the objectivity of the "down" direction would have remained unquestioned. It was only when we started modeling things outside of Earth that it became clear there's no objective "down" direction, just the more general concept of gravity.

Carroll's argument is that it's the Big Bang, an extremely low-entropy configuration of the universe, that gives rise to the 2nd Law of Thermodynamics and the resulting emergence of an arrow of time in the forward direction. It's purely a statistical phenomenon at larger scales, and attributable to being "next to an influential event", according to him.

[1] https://www.youtube.com/watch?v=AZsmyTE3j9o


It's difficult to see how such a hypothesis could be tested either experimentally or observationally. Sounds more like metaphysics?

Stars shine until they run out of fuel, and the aging of a star - the depletion of its fuel, the buildup of fusion products - happens as time passes. Life on planets taps into the flow of sunlight from the star (and/or the flow of heat and chemical energy from a hot planetary core) to generate complex structures in defiance of the regular direction of entropy (not violating conservation of energy, though). So... life reverses the arrow of time?


I think that one thing to keep in mind is that we are only capable of observing the passage of time along one vector because our perception relies upon entropic biological processes.

We cannot make observations in any subset of possible universes where entropy is not present or is running backwards; ergo, those possibilities are wholly outside our direct perception.

That does not mean that causality cannot run in reverse, however, only that we cannot interact with those mechanisms in a way that would preclude our existence or observation.


But could we observe the effects? Can there be particles which have mass and therefore exhibit gravitation but which are subject to reverse entropy? We would observe these for example as an unexplainable increase in gravitational pull on observable matter without an observable source of that gravitation and without a preceding cause.

Are photons themselves straddling this entropic boundary, since they travel at the speed of light and within their own reference frame are not subject to the passage of time?


Biology has nothing to do with it. We can't construct any manner of machine that measures time going in the other direction.


Wouldn't questioning those pre-existing notions be the first step towards constructing such a machine? It sounds silly to dismiss the idea that time could go backwards because we don't have any machines that work that way, since that's typically how all ideas work before we implement them.


Maybe all that quantum nonsense we can't explain is due to some type of time travel we don't comprehend.


If only we had Tony Stark to help build some sort of a “quantum GPS”.


> It's difficult to see how such a hypothesis could be tested either experimentally or observationally. Sounds more like metaphysics?

Sure, but some claims which cannot be tested experimentally or observationally are nonetheless important. For example, the claim “it’s important that scientific hypotheses be tested experimentally or observationally.”


While I agree that not everything in science can be tested experimentally or observationally, arguably this one can:

Models created using both empirical experimentation and logical reasoning match reality much better than those based on "pure logic" (if that is even possible).


I think the real question is whether the death of the universe by cooling will happen before the arrow of time is reversed. To me it seems more like it will just slow down and then finally stop.


Or if there ever is a "big crunch" where all black holes crunch together to a critical mass, would there be another big bang and time then run backwards?


The hypothesis I have encountered is that time would reverse when the expansion of the universe peaked and started to collapse back in on itself. The next big bang would start another run of our universe. I'm guessing the randomness in quantum fluctuations would allow this next run to evolve somewhat differently from the one we are experiencing.


Well, of course not :) Entropy as the direction of time just means that chaos on the whole increases. So, creating an ordered structure, like a cellular organism, has the by-product of more chaos around it.


To add, life just speeds up the chaos by orders of magnitude, it's a perfect entropy catalyst.


"there is 10 million times more entropy in that radiation [cosmic microwave radiation] than there is in all of the mass of the universe"

https://news.berkeley.edu/2016/09/20/new-book-links-flow-of-...


But then, this is just entropy from a one-off event long ago. Meanwhile, life is an entropy-producing reaction with a strong exponential factor.

To butcher an insightful quote by (IIRC) Hamming, any positive slope will eventually make up for the y-intercept of a constant function.


It underlines the idea that the entropy-time connection is pretty shaky though.


I’d be shocked if that were the case, even only considering the Earth. The oceans and the atmosphere are full of entropy. So is the liquid outer core. And even if the mantle is not quite liquid, and even if the crust is mostly solid, these are huge in terms of volume and mass, much larger than the sliver of dirt we inhabit on top of them. So yeah, I really doubt we (collectively, all human beings) are changing the Earth’s entropy in any meaningful way.


Life reduces local entropy.


At the expense of increased global entropy, which I suppose is GP's point.


> Life on planets taps into the flow of sunlight from the star (and/or the flow of heat and chemical energy from the a hot planetary core) to generate complex structures in defiance of the regular direction of entropy

It doesn't defy the direction of entropy. Frankly, I'm not sure why you have that notion. Entropy is a measure of a system's state of equilibrium. Moving energy from a concentrated source and distributing it more equally (across life forms and other complex structures) follows the direction of entropy, since it makes energy more equalized, in contrast to the former state where energy levels are more unequal between highly concentrated areas (like stars, or the Big Bang) and areas with less energy.


Entropy only increases on average. You can decrease it locally, for example, making steel from iron ore. You generate heat, so you still get a net increase, but you could easily redefine your boundary to only include the local decrease in entropy (the steel from ore).


Yes, but like you said, that requires re-drawing your boundaries. It doesn't really defy the direction of entropy, it only appears that way from having changed scope.


Alain Aspect tested it with his delayed choice quantum eraser, didn't he? (https://en.m.wikipedia.org/wiki/Aspect%27s_experiment)

At least in France we take it as a fait accompli that causality is non-trivial and that objects are non-local at best: either you consider objects as one despite the immense distance between them, or you accept information passed between them in reverse time. Both are hard to swallow.


No, he didn’t. The “delayed choice quantum eraser” is something else - and it’s not about the flow of time either.


"Sounds more like metaphysics?"

The regular arrow of time, from the human perspective, "leaks" changes into the future. The future is constantly being changed by the past.

You can measure those changes in many ways; the 2nd law of thermodynamics is everywhere, just to make sure you see this happening across everything.

So there are probably other undiscovered underlying mechanisms in reality that are already signaling changes from the future "leaking" into the past, changing the past right in front of us without us noticing it.

A probably omnipresent mechanism for observing the future changing the past, valid across the whole light cone visible to us, is the law of probability.

If a probability distribution contains 10 million possible events, one of them is the exact thing that will happen in the future; hence the information about what happens in the future is right here, existing in the past.

Yeah, you could ask: how could nature use this information without powerful processors predicting branches trillions of times per second on Earth right now?

Well, the brain is a pattern predictor, and several parts of the brain are firmly attached to the physics of the universe, biochemically. But what if the pattern-prediction mechanisms in the brain are actually an evolutionary adaptation, just like wings for flight, and with those mechanisms the brain uses probability to "see" information from the future? Here in the "present", it changes the past (from the perspective of the future originally created by that past) if the brain "chooses now" to do something different from what originally created the future it "saw" by analyzing patterns, essentially evaluating probabilities.

So the Probability Law would be a kind of inverted 2nd Law of Thermodynamics.

It would work perfectly as a law of the universe: from the future to the past, along the reversed arrow of time, one single event created "a" future; but along the regular arrow of time, there would be a range of probable future outcomes, none of which you can say has a probability of "1" (100%) of occurring, right up to the exact moment when the future becomes the present.

And it was right there all the time, something obvious in front of everybody in the world, like electricity (which was suspected by many across thousands of years, but only "discovered" recently).

https://en.wikipedia.org/wiki/Law_of_total_probability
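(For reference, the law of total probability linked above just says that an event's probability is the weighted sum over any partition of possibilities:

    P(A) = \sum_n P(A \mid B_n) \, P(B_n)

where the B_n are mutually exclusive and cover all cases.)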


Right, entropy can decrease, just with low probability, which you can calculate using the Fluctuation Theorem: https://en.wikipedia.org/wiki/Fluctuation_theorem

Perhaps given infinite time, we will randomly get back to a low-entropy state infinitely many times, but I don't know enough about the math to say for sure.
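For a rough sense of how low that probability is, the fluctuation theorem (stated loosely, with entropy production in dimensionless units) says the odds of seeing an average entropy production of -A over a time t, versus +A, fall off exponentially:

    \frac{P(\bar{\sigma}_t = A)}{P(\bar{\sigma}_t = -A)} = e^{A t}

so macroscopic reversals are astronomically unlikely, though not strictly impossible.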


You can, but that inevitably leads to far more Boltzmann brains than things like the universe, and that's disastrous: you should then expect to be a Boltzmann brain yourself, but they're configured randomly, so if you are a Boltzmann brain you can't trust any belief you have about reality, including the maths that says you should be a Boltzmann brain.

It's basically a softer version of Russell's paradox but for cognition and reality.


Carroll did a great solo podcast about this last year: https://www.youtube.com/watch?v=B40PRvLtiec

Hard to summarize, but it's well worth it for anyone interested in this topic. There's a good bit where he ties the Sleeping Beauty thought experiment to Boltzmann brains and consciousness; it blew my mind trying to understand the whole thing.


Infinite time is too much of a “get out of jail free” card, because during infinite time anything and everything will happen by definition.


I don't have a problem with "time" meaning "away from the local Big Bang". But that does not imply that future causes the past.

Just like noticing that "down" actually means "towards the center of the local planet" did not make water suddenly flow uphill.


I don't think Carroll can take credit for the idea of the arrow of time being emergent from a low-entropy initial state.

Roger Penrose proposed that as far as I know, when Carroll was probably a high schooler.


Explaining an idea is not equivalent to taking credit for it.


> Carroll's argument

This assigns credit to Carroll.


It merely says that Carrol argued it. The idea within the argument was by no means assigned to her, but used to make/support it.


Sean Carroll is a man, and look up the possessive apostrophe while you're at it!


> Roger Penrose proposed that as far as I know, when Carroll was probably a high schooler.

No. At least as early as Arthur Eddington when Penrose was a twinkle in his dad's eye.


Thanks, I didn't know he'd made that argument, although I knew he made the connection between increasing entropy and the arrow of time.


This idea of time's relation to entropy is pretty iffy. I recommend this book https://news.berkeley.edu/2016/09/20/new-book-links-flow-of-....


who's taking "credit"


> scaling the model and having more data completely beats inductive bias

The analogy in my mind is this: "burning natural oil/gas completely beats figuring out cleaner & more sustainable energy sources"

My point is that "more data" here simply represents the mental effort that has already been exerted in the pre-AI/DL era, which we're now capitalizing on while we can. Similar to how fossil fuels represent the energy storage efforts of earlier lifeforms, which we're now capitalizing on, again while we can. It's a system way out of equilibrium, progressing while it can on resources borrowed from prior generations.

In the long run, AI agents will be less wasteful as they reach the limits of what data or energy is available on the margins to compete among themselves and to reach their goals. It's just that we haven't reached that limit yet, and the competition at this stage is about processing more data and scaling the models at any cost.


>My point is that "more data" here simply represents the mental effort that has already been exerted in pre-AI/DL era, which we're now capitalizing on while we can

Not really. It's not simply that modern architectures are not adding additional inductive biases; they are actively throwing away the inductive bias that used to be used by everyone. For example, it was taken for granted that you should use CNNs to give you translation invariance, but apparently now vision transformers can match that performance with the same amount of compute.
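To make the contrast concrete, here's a rough PyTorch sketch (my own toy illustration with made-up sizes, not taken from any paper in this thread): the conv layer gets translation equivariance for free via weight sharing, while the ViT-style front end treats spatial structure as just another thing to learn from position embeddings and data.

    # Inductive bias vs learned structure, illustrative only.
    import torch
    import torch.nn as nn

    x = torch.randn(1, 3, 224, 224)     # a dummy image batch

    # CNN: the same 3x3 kernel slides over every location -> built-in translation equivariance
    conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
    feat_cnn = conv(x)                  # (1, 64, 224, 224)

    # ViT-style front end: cut the image into 16x16 patches, embed each one, and add a
    # *learned* position embedding; any spatial bias has to come from the data.
    patch_embed = nn.Conv2d(3, 768, kernel_size=16, stride=16)      # non-overlapping patches
    tokens = patch_embed(x).flatten(2).transpose(1, 2)              # (1, 196, 768)
    pos_embed = nn.Parameter(torch.zeros(1, tokens.shape[1], 768))  # learned, not hard-coded
    tokens = tokens + pos_embed

    attn = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
    feat_vit, _ = attn(tokens, tokens, tokens)   # every patch attends to every other patch

    print(feat_cnn.shape, feat_vit.shape)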


There are big caveats here:

Vision transformers outperform CNNs in HUGE data regimes. On small datasets, CNNs still shine.

Also, if you take a CNN with modern tricks, it can be on par with vision transformers, e.g. ConvNeXt.

Transformers really dominate when you scale the amount of data to infinity


Yes, that's what I was saying: scale > inductive bias. Apparently, scaling is all you need ;)


I don't think what you're saying contradicts what I'm saying. My baseline / reference point wasn't CNNs.


Perhaps another analogy: if you train something by repeatedly telling it many different stories about the same thing day in and day out, versus mentioning something just once in passing, the system will come to know the material it has been exposed to more often much better. Replaying the event it saw only in passing in order to check it for parsimony requires more mental effort, and seems like something that requires explicitly setting aside work to do.



