
Something that set Baba is You apart for me, compared to any other puzzle game I can remember playing, is that the puzzles themselves were actually "funny." Not like Portal, where you solved puzzles alongside a funny narrative, but in the actual language of the puzzles themselves there are setups which get subverted in absurd and delightful ways, often in multiple layers, as you work your way through to a solution. Playing made me feel like the math part of my brain was laughing.

The game doesn't hold your hand, and I think it took about 10 hours for me to learn enough of the puzzle language for things to get really good (and then it started to descend into frustrating fiddliness in the deep endgame), but the middle 30 hours or so were some of the most gratifying gameplay I've experienced.

It kind of felt like the anti-Witness. The Witness was fastidiously fair and so carefully constructed I can look back at it and marvel, but actually playing it was pretty formal and mirthless. Baba is You can be a little sloppy and unfair, but it's warm and fun and funny, and the actual craftsmanship of the puzzles themselves is still top tier, in its own way.


Take a relatively simple large language model like Llama 1. It has a context of 2048 tokens and each token can be one of 32,000 values. So the lookup table would need 32,000^2048 entries. That's not just impractically large, that's larger than cosmically large. There are only estimated to be about 10^80 atoms in the visible universe. So while a 32,000^2048 lookup table might be a valid concept mathematically, it's not anything you can intuit physically, and therefore not something you can say is incapable of reason.
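
To put a number on it, a back-of-the-envelope sketch (the vocab size and context length are Llama 1's published figures; the rest is just logarithms):

  import math

  vocab = 32_000     # Llama 1 vocabulary size
  context = 2048     # Llama 1 context length

  # Entries needed: one per possible input sequence, i.e. vocab**context.
  # Take logs so we never have to build the astronomically large integer.
  digits = context * math.log10(vocab)
  print(f"lookup table entries ~ 10^{digits:.0f}")   # ~ 10^9227

  # ...versus an estimated ~10^80 atoms in the visible universe.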


Practically all materials behave nonlinearly when stretched or compressed a visible amount. For certain structural applications, though, if that happens we've already failed. Linear models work really well for designing big concrete structures and certain metal structures. Sometimes we try to apply linear models to other things, but that's always kind of fishy.


LLMs are being trained on a smaller and smaller percentage of human prose. Right now it seems like code is the best source for the bulk of an LLM's diet, but it's also looking likely that synthetic math text will be even better. The structured reasoning of code and math seems to be what actually makes these big LLMs "smart." Once you've trained a smart LLM, it seems to take a relatively small amount of hand-curated human prose to fine-tune it into talking like a human. Unfortunately, this article feels like the wishful thinking of someone who is afraid of the changes LLMs are bringing and hasn't done much research.


It seems to me like archive.org and the major book publishers are sitting on a gold mine (at least up to 2022), but I haven't seen anyone else saying the same, so maybe I just don't know enough about LLMs.


Chris Hecker and Jon Blow (among others) wrote good math and physics content for Game Developer Magazine, but the one I most associate with it is Jeff Lander. He's got copies of the articles he wrote throughout the years hosted on his website here: http://www.darwin3d.com/gamedev.htm


According to Wikipedia, 43% of the sun's energy at the surface of the earth is visible light (https://en.m.wikipedia.org/wiki/Sunlight#Measurement). So just blocking the other wavelengths would help, but I'm not sure it would be a "night and day" difference, so to speak. Daylight is several orders of magnitude brighter than normal indoor lighting, so I bet you could block 80-90% of visible light, as well, and it would still look really bright out while heating the inside significantly less.
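
Rough numbers, as a sketch (the lux figures are typical ballpark values, not from the article):

  daylight_lux = 100_000   # direct sunlight, ballpark
  indoor_lux = 500         # well-lit room, ballpark

  blocked = 0.90           # block 90% of visible light
  remaining = daylight_lux * (1 - blocked)
  print(f"{remaining:.0f} lux remaining")          # 10000 lux
  print(f"{remaining / indoor_lux:.0f}x indoor")   # still ~20x indoor lighting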


You can publish an article with a title like this and probably not end up embarrassed. Room-temperature, ambient-pressure superconductors seem hard enough to find that chances are any given paper claiming to have found one will end up with a more mundane explanation. And I do think the information about the phase change of Cu2S is highly relevant, as it points at a way the original researchers may have fooled themselves.

The dismissal of the partial levitation as ferromagnetism, on the other hand, doesn't strike me as especially robust. Ferromagnetism explains the partial levitation of the tiny fragments of material generated by people trying to reproduce LK-99: very light, thin pieces of ferromagnetic material will align themselves with a magnetic field. For example, Andrew McCalip (who streamed himself attempting to reproduce the material in his rocket startup's lab) generated a partially levitating fragment and sent it in to USC, where they determined it was ferromagnetic. But bulk pieces of ferromagnetic material will just stick to magnets (or, if they are magnetized, they will stick to one side and be unstably repelled from the other).

Ferromagnetism doesn't explain the levitation demonstrated in the videos put out by the original researchers, though. Barring fraud, the most likely explanation for that kind of levitation is diamagnetism. The article mentions Derrick van Gennep recreating the partial levitation video with a chunk of pyrolytic graphite (one of the most diamagnetic materials we know of, other than superconductors), supergluing iron filings to a corner of it to anchor it to the magnet. The levitation in that video comes from diamagnetism, not ferromagnetism. But LK-99 is primarily made of lead rather than graphite, and lead is 5-10 times denser, so to levitate the same way its diamagnetism would have to be at least that much stronger than pyrolytic graphite's. The thing is, as the rest of the article points out, the supposed main constituents of LK-99 have now been extensively studied, and none of them appear to be especially diamagnetic, so something in those samples the original team recorded must be extremely diamagnetic to make up for it!
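
To put rough numbers on that scaling: a diamagnetic sample levitates when the magnetic force per unit volume, |chi| * B * (dB/dz) / mu0, beats its weight per unit volume, rho * g, so at a fixed field geometry the required |chi| grows linearly with density. A sketch with ballpark textbook values (using pure lead's density as a rough stand-in for the compound):

  # Stable diamagnetic levitation requires roughly
  #   |chi| * B * dB/dz / mu0  >=  rho * g
  # so at the same field geometry, the required |chi| scales with density.

  chi_graphite = -4.5e-4    # pyrolytic graphite (perpendicular axis), ballpark
  rho_graphite = 2260       # kg/m^3
  rho_lead = 11340          # kg/m^3, rough stand-in for a lead compound

  chi_required = chi_graphite * rho_lead / rho_graphite
  print(f"required chi ~ {chi_required:.1e}")   # ~ -2.3e-3, ~5x graphite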


>so something in those samples the original team recorded must be extremely diamagnetic to make up for it!

I wonder what would have happened if they had pushed out a paper describing anomalously high diamagnetism, skipped any mention of superconductivity, and let people speculate about whether it's a superconductor. I suppose we wouldn't be talking about it. But I hope we see some group try to replicate the diamagnetic material properties.


Have you looked into Small Step XPBD (introduced in the paper "Small Steps in Physics Simulation")? It's the most efficient nonlinear dynamics integrator/solver I've come across, and luckily one of the simplest, too! (Keeping in mind that "simplest nonlinear solver" is a relative metric.) I've been able to simulate very stiff materials like bone by updating the sim at 6000 steps/second. The exact number of steps/second you'll need will depend on both the desired spatial resolution and physical accuracy of your sails, but I wouldn't be surprised if 6000 steps/second is more than sufficient. And computers are fast enough that you could simulate quite a few large, detailed sails at that rate in realtime.
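
The outer loop is just a fixed-timestep accumulator; a minimal sketch, where substep() is a hypothetical stand-in for one XPBD integrate-and-solve pass:

  SIM_HZ = 6000              # the substep rate mentioned above
  dt = 1.0 / SIM_HZ

  def substep(dt):
      # hypothetical: one XPBD integrate + single-iteration solve
      pass

  accumulator = 0.0

  def tick(frame_dt):
      # run enough fixed-size substeps to cover one display frame
      global accumulator
      accumulator += frame_dt
      while accumulator >= dt:
          substep(dt)
          accumulator -= dt

  tick(1.0 / 60.0)   # one 60 fps display frame -> 100 substeps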


Thanks for that link, I had not seen this paper before. I will add this to the backlog of things to try!


Right, so for his nondeterministic path:

  r(t) = 0              ; t <= T
  r(t) = (1/144)(t-T)^4 ; t >= T
We can see that r''''(T) is either 0 or 1/6, depending on whether we go with the top or bottom equation. That does look like some sort of hidden state change, there.
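
For anyone who wants to check the fourth-derivative claim mechanically, a quick sympy sketch (T is the branch time from the equations above):

  import sympy as sp

  t, T = sp.symbols("t T")
  r = (t - T)**4 / 144

  # Fourth derivative of the nontrivial branch
  print(sp.diff(r, t, 4))   # 1/6
  # The r(t) = 0 branch has every derivative equal to 0, so the two
  # branches first disagree at the fourth derivative: 0 vs 1/6.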

Interestingly, the spherical dome he mentioned (which doesn't yield to this non-determinism) forces all derivatives of r(t) to be continuous ...


> That explains why XPDB moves away from "substepping" the physics

Interestingly, XPBD has moved back to substepping! The relatively recent "Small Steps in Physics Simulation" from Nvidia goes into it, but I can outline the reasoning briefly.

In a physics simulation, there are two main sources of error: the integrator and the solver. Breaking that down a bit:

The integrator is an algorithm to numerically integrate the equations of motion. Some possibly familiar integrators are Euler, Verlet, and Runge-Kutta. Euler is a simple integrator with relatively high error (the error scales linearly with timestep size). The most common version of Runge-Kutta is more complex, but its error scales with the 4th power of the timestep.
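
You can see both scaling laws numerically on a toy ODE; a minimal sketch (dx/dt = -x, whose exact solution is e^-t):

  import math

  def euler(f, x, t, dt):
      return x + dt * f(t, x)

  def rk4(f, x, t, dt):
      k1 = f(t, x)
      k2 = f(t + dt/2, x + dt/2 * k1)
      k3 = f(t + dt/2, x + dt/2 * k2)
      k4 = f(t + dt, x + dt * k3)
      return x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

  f = lambda t, x: -x            # dx/dt = -x, exact solution exp(-t)

  for step in (euler, rk4):
      for dt in (0.1, 0.05):     # halve the timestep
          x, t = 1.0, 0.0
          while t < 1.0 - 1e-9:  # integrate from t = 0 to t = 1
              x = step(f, x, t, dt)
              t += dt
          print(step.__name__, dt, abs(x - math.exp(-1.0)))

  # Halving dt roughly halves Euler's error (1st order) but cuts
  # RK4's error by ~16x (4th order).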

The solver comes into play because the most stable flavors of integrator (so-called implicit or backwards integrators) spit out a nonlinear system of equations you need to solve each physics frame. Solving a nonlinear system to high accuracy is a difficult iterative process with its own zoo of algorithms.

XPBD uses an implicit Euler-esque integrator and a simple, but relatively inefficient, Projected Gauss-Seidel solver. For most games, the linear error from the integrator is ugly but acceptable when running at 60 or even 30 frames a second. Unfortunately, for the solver, you have to spend quite a bit of time iterating to get that error low enough. The big insight from the "Small Steps" paper is that the difficulty of the nonlinear equations spat out by the integrator scales with the square of the timestep (more or less -- nonlinear analysis is complicated). So if you double your physics framerate, you only have to spend a quarter of the time per frame in the solver! It turns out the best thing to do is generally to run a single measly iteration of the solver each physics frame, and fill your performance budget by increasing your physics frames-per-second instead. This ends up reducing both integrator and solver errors at no extra cost.
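
Here's roughly what that structure looks like for a single distance constraint; a minimal sketch in the spirit of the paper, not Nvidia's actual code (the setup and compliance value are illustrative):

  import math

  # Two particles joined by one distance constraint; particle 0 is pinned.
  pos = [[0.0, 0.0], [1.0, 0.0]]
  vel = [[0.0, 0.0], [0.0, 0.0]]
  inv_mass = [0.0, 1.0]
  rest_len = 1.0
  compliance = 1e-8          # near-rigid; illustrative value
  GRAVITY = -9.8

  def substep(dt):
      alpha = compliance / (dt * dt)   # XPBD's "alpha tilde"
      prev = [p[:] for p in pos]

      # 1. integrate: predict positions from velocities + gravity
      for i in range(2):
          if inv_mass[i] > 0.0:
              vel[i][1] += GRAVITY * dt
          pos[i][0] += vel[i][0] * dt
          pos[i][1] += vel[i][1] * dt

      # 2. solve: a SINGLE Gauss-Seidel iteration, per "Small Steps"
      #    (lambda starts at 0 each substep, so dlam = -C / (w + alpha))
      dx = pos[1][0] - pos[0][0]
      dy = pos[1][1] - pos[0][1]
      dist = math.hypot(dx, dy)
      c = dist - rest_len
      w = inv_mass[0] + inv_mass[1]
      dlam = -c / (w + alpha)
      nx, ny = dx / dist, dy / dist
      pos[0][0] -= inv_mass[0] * dlam * nx
      pos[0][1] -= inv_mass[0] * dlam * ny
      pos[1][0] += inv_mass[1] * dlam * nx
      pos[1][1] += inv_mass[1] * dlam * ny

      # 3. derive new velocities from the position change
      for i in range(2):
          vel[i][0] = (pos[i][0] - prev[i][0]) / dt
          vel[i][1] = (pos[i][1] - prev[i][1]) / dt

  # many cheap substeps instead of a few heavily iterated big frames
  for _ in range(6000):
      substep(1.0 / 6000.0)
  print(pos[1])   # pendulum bob position after 1 simulated second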


Just related to this, I found a neat overview of various PBD-adjacent developments; XPBD is far from the only thing: https://doi.org/10.1002/cav.2143

