
It's possible to do lossless compression with LLMs: use the LLM as a predictor, and store a correction only where the model's prediction would have been wrong. The incredible Fabrice Bellard actually implemented this idea: https://bellard.org/ts_zip/
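As a rough illustration of the predictor-as-compressor idea -- a toy sketch only, not Bellard's actual scheme (ts_zip reportedly feeds the model's per-token probabilities to an arithmetic coder) -- one can stand in a deterministic frequency-based predictor for the LLM and store each symbol's rank in the predictor's candidate ordering. A good predictor makes most ranks small, which a real scheme would then entropy-code:

```python
# Toy predictor-based lossless compression. The "model" here is a
# deterministic frequency ranking, not an LLM; both compressor and
# decompressor replay the identical predictor, so the ranks alone
# suffice to reconstruct the text exactly.

def predict_ranking(context, alphabet):
    """Rank symbols by frequency in the context so far (ties alphabetical)."""
    counts = {s: 0 for s in alphabet}
    for s in context:
        counts[s] += 1
    return sorted(alphabet, key=lambda s: (-counts[s], s))

def compress(text, alphabet):
    ranks = []
    for i, ch in enumerate(text):
        ranking = predict_ranking(text[:i], alphabet)
        ranks.append(ranking.index(ch))  # small rank = good prediction
    return ranks

def decompress(ranks, alphabet):
    out = []
    for r in ranks:
        ranking = predict_ranking(out, alphabet)
        out.append(ranking[r])
    return "".join(out)

alphabet = sorted(set("abracadabra"))
ranks = compress("abracadabra", alphabet)
assert decompress(ranks, alphabet) == "abracadabra"  # lossless roundtrip
```

The better the predictor, the more the rank stream skews toward zero, and the fewer bits an entropy coder needs for it.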



Can we do this in physics?

Use a universal function approximator to approximate the universe, seek out regions where the approximation error Err(x) exceeds a threshold, interrogate the universe there for fresh data, retrain a new universal approximator, loop, ... universe in a bottle.
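That loop is essentially active learning. A minimal sketch under entirely made-up assumptions (a hidden 1-D function as the "universe", a piecewise-linear interpolant standing in for the universal approximator, error estimated by querying interval midpoints):

```python
import math

def universe(x):
    # Hidden "ground truth" we may only query point-by-point.
    return math.sin(3 * x) + 0.5 * x

def interpolate(samples, x):
    # Piecewise-linear approximator built from the queried data.
    xs = sorted(samples)
    if x <= xs[0]:
        return samples[xs[0]]
    if x >= xs[-1]:
        return samples[xs[-1]]
    for a, b in zip(xs, xs[1:]):
        if a <= x <= b:
            t = (x - a) / (b - a)
            return (1 - t) * samples[a] + t * samples[b]

samples = {x: universe(x) for x in (0.0, 1.0, 2.0)}  # initial observations
threshold = 1e-3
for _ in range(200):  # the "loop previous" step
    xs = sorted(samples)
    worst_x, worst_err = None, 0.0
    # Estimate Err(x) at each interval midpoint by interrogating the universe.
    for a, b in zip(xs, xs[1:]):
        mid = (a + b) / 2
        err = abs(interpolate(samples, mid) - universe(mid))
        if err > worst_err:
            worst_x, worst_err = mid, err
    if worst_err < threshold:        # Err(x) below threshold everywhere checked
        break
    samples[worst_x] = universe(worst_x)  # fresh data where error was worst
```

Each pass refines the model only where it disagrees most with observation, which is the cheap part; the comment's catch is whether any such approximator can keep up with a universe that isn't a smooth 1-D function.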


You can do that sort of thing with a toy universe -- in fact Stephen Wolfram has a number of ongoing projects along broadly similar lines -- but you can't do it in our physical universe. Among other reasons, the universe is to all appearances both infinite and deeply complex, so it is incompressible: it cannot be described by anything smaller than itself, nor encapsulated in any encoding. You can make statistical statements about it -- with, e.g., Ramsey theory -- but you can never capture its totality in a way that would enable its use in computation. For another thing, toy model universes tend to be straightforwardly deterministic, which is not obviously the case for our physical universe (it may well be deterministic, but in ways that are not straightforward from our frame of reference).
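The incompressibility point echoes the standard counting argument from Kolmogorov complexity (a sketch of that argument, not anything specific to physics): descriptions shorter than n bits number 2^n - 1, one fewer than the 2^n strings of length n, so under any fixed decompressor at least one n-bit string -- and in fact most of them -- has no shorter description.

```python
# Pigeonhole count: 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 possible
# descriptions shorter than n bits, versus 2^n strings of length n.
n = 16
num_strings = 2 ** n
num_shorter_descriptions = sum(2 ** k for k in range(n))
assert num_shorter_descriptions == num_strings - 1
```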



