hopfog's comments

I built a multiplayer chatroom where all messages are transformed by an LLM (e.g. into pirate speak or corporate jargon):

https://impersona.chat/

I also built this incremental clicker game where you split words ad infinitum (like Infinite Craft but in reverse):

https://lantto.github.io/hypersplit/


This is what I imagined the future would be like years ago: everybody becomes harmless, because LLMs will translate and cushion, or outright censor, any problematic communication.


For a long time I've wanted to write a self-censoring browser tool that sits between my social media forms and the HTTP call that sends what I type. It was going to be rudimentary: when you hit "post" on FB/TWT/etc., some quick sentiment analysis runs and, upon detecting negative speech, prompts the user: "are you sure you want to send this?"

The idea is that you have actual triggers to remind you to be kind. Nextdoor has something like this: if you use profanity or other charged words, it will gently nudge you to remember to be kind.

(Obviously, if you know Nextdoor, this doesn't work. Lotta "random minority is scaring me by existing near my house")

But incorporating an LLM might be awesome. I am not wedded to the idea of censoring incoming speech, but I'd sure like to be nudged if I am being a problem.

There used to be a web-based tool where you could give it your Reddit username and it would analyze your posts and give you statistics and a kindness score (or something like that).

I found that regularly running that script made my enjoyment of the website go up, because it reinforced that I should be kinder online (I find this more difficult than in meatspace), and by being kinder I was far less likely to get a mean response, which lowered stress levels.

Maybe this would be a useful project to work on. A browser plugin of some kind, if Greasemonkey or something like it can use Rust-based web workers. I really don't know where browser tech is these days.
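
Even without the Rust part, a bare-bones version is maybe a dozen lines of userscript. A rough sketch, assuming a Tampermonkey-style environment and a naive word list standing in for the sentiment analysis or LLM call (the site pattern and word list are made up):

  // ==UserScript==
  // @name     post-nudge (sketch)
  // @match    https://*.example-social.com/*
  // @grant    none
  // ==/UserScript==

  // Placeholder check; a proper sentiment model or LLM call would go here.
  const NEGATIVE = ["idiot", "stupid", "hate", "shut up"];
  const looksHostile = (text) =>
    NEGATIVE.some((w) => text.toLowerCase().includes(w));

  document.addEventListener(
    "submit",
    (e) => {
      const box = e.target.querySelector("textarea");
      if (!box || !looksHostile(box.value)) return;
      if (!confirm("This reads a bit harsh. Send anyway?")) {
        e.preventDefault();            // give yourself a chance to rephrase
        e.stopImmediatePropagation();
      }
    },
    true // capture phase, so it runs before the site's own handlers
  );

Sites that post via fetch rather than a real form submit would need a different hook, but the shape of the idea is the same.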


I explored whether this could be helpful in an online dispute resolution platform. The system could detect insulting or angry messages that threaten to derail a conversation, and suggest a more neutral way of formulating them. I think it's promising!

https://arxiv.org/abs/2307.16732
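
The "suggest a more neutral formulation" step can be sketched with any chat-style LLM API; this is only an illustration, not the system from the paper (model name and prompt are placeholders):

  // Illustration only: flag heat and propose a calmer rewrite via a chat LLM.
  async function suggestNeutralRewrite(message, apiKey) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // placeholder model name
        messages: [
          {
            role: "system",
            content:
              "If the user's message is insulting or likely to derail a discussion, " +
              "reply with a more neutral rewording that keeps the substance. " +
              "Otherwise reply with the single word UNCHANGED.",
          },
          { role: "user", content: message },
        ],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }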


Next step is just re-writing on the receiving end.


Tried the multiplayer chatroom for a while and it seems fun, great idea.


This seems really cool and I'm excited to play around with it once it's up and running properly again. These types of things are my favorite applications of LLMs.

A while back I made something similar in the form of an incremental "clicker" game where you split things ad infinitum: https://lantto.github.io/hypersplit/


It is back up and running, and running strong! Enjoy. Your game is nifty; maybe it'll give me ideas. There could be an option to hold Ctrl to split branches vertically without branching down, etc... cool


Working on my game Sandustry, a mining and automation sandbox with pixel-based physics (think Noita meets Factorio).

I released the first playtest of the alpha a few days ago, which you can try directly in your browser:

https://lanttogames.itch.io/sandustry


Performance tanked once I started to dig and the screen got dark; FPS returned to normal once I was fully above ground.


Thanks for reporting! May I ask what browser and system you are on?

When going underground it starts raycasting shadows, which is heavier on the GPU, but the difference shouldn't be that big.


Chrome, Windows 10


Great article and very relevant for me since I'm building a game in JavaScript based on "falling sand" physics, which is all about simulating massive amounts of particles (think Noita meets Factorio - feel free to wishlist if you think it sounds interesting).

My custom engine is built on a very similar solution using SharedArrayBuffers but there are still many things in this article that I'm eager to try, so thanks!


The author is also working on a really impressive voxel physics engine: https://youtu.be/3259aycYcek


I'm surprised the simulation is run on the CPU! This problem has solutions that fit the GPU well, even for grid<>particle conversions.


These kinds of sims are better suited to the CPU ;-). GPUs are good for working on meshes, not really on pure particles. GPUs are, however, super good for grid-based hydro.


"gpu are good to work on meshes, not really on pure particles"

Why?

Having thousands of particles that all need the same operations applied to them in parallel screams GPU to me. It is just way harder to program a GPU than a CPU.


Collision detection is usually a tree search, and that is a very branching workload: by the time you reach the lowest nodes of the tree, your lanes will have diverged significantly and your parallelism will be reduced quite a bit. It would still be faster than the CPU, but not enough to justify the added complexity. And the fact remains that you usually want the GPU free for your nice graphics. This is why in most AAA games, physics is CPU-only.


"Collision detection is usually a tree search"

Yes, because of the very limited number of CPU cores. With a GPU you can just assign one core to one particle.

Here is a simple approach to do it with WebGPU:

https://surma.dev/things/webgpu/

It uses the very simple approach of testing every particle against EVERY other particle. Still very performant (the simulation, that is; the chosen canvas rendering is very slow).

I'm currently trying to do something like this, but optimized. With the naive approach here and Pixi instead of canvas, I get to 20,000 particles at 120 FPS on an old laptop. I am curious how far I can get with an optimized version. But yes, the danger is that the calculation and the rendering block each other. So I have to use the CPU in a smart way to limit the data being pushed to the GPU, and while I prepare the data on the CPU, the GPU can do the graphics rendering. Like I said, it is way harder to do it right this way. When the simulation behaves weirdly, debugging is a pain.
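
For reference, the all-pairs idea boils down to something like this per particle (written here as a CPU loop over flat typed arrays, the same layout you would upload to the GPU anyway; the collision response is deliberately simplified):

  // Positions/velocities as flat [x0, y0, x1, y1, ...] Float32Arrays.
  function allPairsCollide(pos, vel, radius) {
    const n = pos.length / 2;
    const minDist = 2 * radius;
    for (let i = 0; i < n; i++) {      // on the GPU: one invocation per i
      for (let j = 0; j < n; j++) {    // the O(n^2) part
        if (i === j) continue;
        const dx = pos[2 * j] - pos[2 * i];
        const dy = pos[2 * j + 1] - pos[2 * i + 1];
        const d = Math.hypot(dx, dy);
        if (d > 0 && d < minDist) {
          const push = (minDist - d) * 0.5;   // very simplified response
          vel[2 * i]     -= (dx / d) * push;
          vel[2 * i + 1] -= (dy / d) * push;
        }
      }
    }
  }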


If you use WebGPU, for your acceleration structure try the algorithm presented here in the Diligent Engine repo. This will let you avoid transferring data back and forth between CPU and GPU: https://github.com/DiligentGraphics/DiligentSamples/tree/mas...

Another reason I did it on the CPU is that with WebGL you lack certain things like atomics and groupshared memory, which you now have with WebGPU. The Diligent Engine spatial hashing requires atomics. I'm mainly using WebGL because of compatibility: iOS Safari still doesn't enable WebGPU without special feature flags that the user has to turn on.


Thanks a lot, that is very interesting! I will check it out in detail.

But currently I will likely proceed with my approach where I do transfer data back and forth between CPU and GPU, so I can make use of the CPU to do all kinds of things. My initial idea was also to keep it all on the GPU, though; I will see what works best.

And yes, I also would not recommend WebGPU currently for anything that needs to deploy soon to a wide audience. My project is intended as a long term experiment, so I can live with the limitations for now.


This is a 2D simulation with only self-collisions, and not collisions against external geometry. The author suggests a simulation time of 16ms for 14000 particles. State of the art physics engines can do several times more, on the CPU, in 3D, while colliding with complex geometry with hundreds of thousands of triangles. I understand this code is not optimized, but I'd say the workload is not really comparable enough to talk about the benefits of CPU vs GPU for this task.

The O(n^2) approach, I fear, cannot really scale to much beyond this number, and as soon as you introduce optimizations that make it less than O(n^2), you've introduced tree search or spatial caching that makes your single "core" (WG) per particle diverge.


"that make it less than O(n^2), you've introduced tree search or spatial caching that makes your single "core" (WG) per particle diverge"

Well, like I said, I try to use the CPU side to help with all that. So every particle on the GPU checks maybe the 20 particles around it for collision (and other reactions), and not all 14,000 like it does currently.

That should give a different result.

Once done with this side project, I will post my results here. Maybe you are right and it will not work out, but I think I found a working compromise.
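
Concretely, the CPU-side prep I have in mind is just a uniform grid with a cell size around one interaction radius, so each particle only gets a handful of candidates. A rough sketch (nothing engine-specific, names made up):

  // Bin particle indices into a uniform grid so each particle only has to
  // test the candidates in its own and the 8 surrounding cells.
  function buildGrid(pos, cellSize, width, height) {
    const cols = Math.ceil(width / cellSize);
    const rows = Math.ceil(height / cellSize);
    const cells = Array.from({ length: cols * rows }, () => []);
    for (let i = 0; i < pos.length / 2; i++) {
      const cx = Math.min(cols - 1, Math.max(0, Math.floor(pos[2 * i] / cellSize)));
      const cy = Math.min(rows - 1, Math.max(0, Math.floor(pos[2 * i + 1] / cellSize)));
      cells[cy * cols + cx].push(i);
    }
    return { cells, cols, rows };
  }

  function neighborCandidates(i, pos, grid, cellSize) {
    const { cells, cols, rows } = grid;
    const cx = Math.floor(pos[2 * i] / cellSize);
    const cy = Math.floor(pos[2 * i + 1] / cellSize);
    const out = [];
    for (let dy = -1; dy <= 1; dy++) {
      for (let dx = -1; dx <= 1; dx++) {
        const x = cx + dx, y = cy + dy;
        if (x < 0 || y < 0 || x >= cols || y >= rows) continue;
        out.push(...cells[y * cols + x]);  // typically ~20 indices, not 14,000
      }
    }
    return out;
  }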


Yeah, pretty much this. I've experimented with putting it on the GPU a bit, but I'd say the GPU is only about 3x faster for the particle-based work than a multithreaded & SIMD CPU implementation. Not 100x like you will see in Nvidia marketing materials, and on mobile (which this demo does run on) the GPU often ends up weaker than the CPU. Wasm SIMD is only 4-wide, whereas 8- or 16-wide is standard on most CPUs today.

But yeah, once you need to do graphics on top, that 3x pretty much goes away and is just additional frametime. I think they should work together. On my desktop stuff, I also have things like adaptive resolution and sparse grids to more fully take advantage of things that the CPU can do that are harder on GPU.

The Wasm demo is still in its early stages. The particles are just simple points. I could definitely use the GPU a bit more to do lighting and shading a smooth liquid surface.


Agree with most of the comment, just to point out (I could be misremembering) 4-wide SIMD ops that are close together often get pipelined "perfectly" onto the same vector unit that would be doing 8- or 16-wide SIMD, so the difference is often not as much as one would expect. (Still a speedup, though!)


The issue is not really parallelism of computation; the issue is locality. Usually a hydro solver needs to solve two very different problems, short- and long-range interaction, so you "split" the problem into a particle-mesh part (long range) and a particle-to-particle part (short range).

In this case there is no long-range interaction (i.e. gravity, electrodynamics), so you would go for a pure p2p implementation.

Then, in a p2p scheme, if you have very strong coupling between particles, that ensures neighbors stay neighbors (which will be the case with solids, or with very high viscosity). But in most cases you will need a rebalancing of the tree (and therefore of the memory layout) every time step. This rebalancing can in fact dominate the execution time, as the computation on a given particle usually represents just a few (order 100) flops. And this rebalancing is usually faster to do on the CPU than on the GPU. Of course you can do the rebalancing "well" on the GPU, but the effort to get a proper implementation will be huge ;-).


I'm building a JavaScript game in my own engine, which in retrospect was a big mistake considering the game relies heavily on multithreading (it's "Noita meets Factorio", so the simulation requires it). You can only share memory between threads using a SharedArrayBuffer with raw binary data, and you need special headers from the web server to even enable it. This means that I've had to write a lot of convoluted code that simply wouldn't be necessary in a non-browser environment.
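
For anyone curious, the "special headers" are the cross-origin isolation pair, and the shared memory itself is just raw bytes that both sides wrap in typed-array views. Roughly like this (worker filename and layout are only illustrative):

  // The web server must send both of these, or SharedArrayBuffer is unavailable:
  //   Cross-Origin-Opener-Policy: same-origin
  //   Cross-Origin-Embedder-Policy: require-corp

  // main.js
  const sab = new SharedArrayBuffer(1024 * 1024);  // raw bytes, no structure
  const cells = new Int32Array(sab);               // both sides agree on the layout
  const worker = new Worker("sim-worker.js");      // illustrative worker file
  worker.postMessage(sab);                         // the memory is shared, not copied

  // sim-worker.js
  self.onmessage = (e) => {
    const shared = new Int32Array(e.data);
    Atomics.store(shared, 0, 42);   // coordinate via Atomics, not via objects
    Atomics.notify(shared, 0);
  };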

Other than that I really like that I can piggyback on the browser's devtools and that I can use the DOM for the UI.


You probably found it already, but performance-wise you should look into WebAssembly: https://maxbittker.com/making-sandspiel


Yes, this is a really good article that I can highly recommend if you're interested in these types of "falling sand" simulations.

A big difference between a classic powder toy game like in the article and Noita is that Noita needs to run a much larger simulation that extends beyond the visible canvas. So while multithreading is probably not needed in the former, it's most likely needed when the game is a scrolling platformer. I posted a GDC talk by the Noita devs as a reply to a sibling comment if you're interested in their tech.


There are plenty of babylon.js & physics package demos around.

The WebGL support can be an issue on some machines, but when it does work... things like procedural fog/pseudo-procedural-water/dynamic-texture-updates look fairly good even on low-end GPUs. Also, the free Blender add-on can quickly export basic animated mesh formats, which will save a lot of fiddling with assets later, and the base library supports asset loading etc.

Like all JS solutions, your top problems will be:

1. Audio and media syncing (don't even try to live-render something like a face mocap)

2. Interface hardware access (grabbing keyboard/mouse/gamepads is sketchy)

3. Game cheats (you can't trust that the client's world constraints hold)

4. web browser ram/cpu overhead

The main downside of using __any__ popular game engine is that asset extraction is a popular hobby for some folks (only consoles sort of mitigate this issue).

Best of luck, =3


Thank you!

If you haven't played Noita, it's basically a "falling sand" or powder physics game where every pixel is simulated. You need a specialized cellular automaton rather than a typical game physics engine, so I don't think Babylon.js would be a good fit, but I may be wrong.

I've modeled my architecture after this fantastic GDC talk by the Noita devs: https://www.youtube.com/watch?v=prXuyMCgbTc ("Exploring the Tech and Design of Noita")


If all you want is 2D fluid flow with points, then Sebastian Lague's talk is very practical for the frame rates he hits:

https://www.youtube.com/watch?v=rSKMYc1CQHE

It should port over if you are not hitting super fine granularity, and the collision model for the sprites is simplified.

Some of the Babylon.js fluid examples still need a bug report:

https://doc.babylonjs.com/features/featuresDeepDive/particle...

Have a wonderful day =)


We use multithreading in the netcode for our browser-based games built in Godot. I've been considering looking into using web workers _without_ SharedArrayBuffer so that we can use threads in all environments, even without cross-origin isolation. Is that something you've looked into?
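
The fallback we have in mind is plain postMessage with transferable ArrayBuffers: no shared memory, but ownership moves so the copy is avoided. Something like this sketch (worker name is made up):

  // main thread: hand the buffer to the worker without copying it
  const worker = new Worker("net-worker.js");   // illustrative worker file
  const buf = new ArrayBuffer(64 * 1024);
  worker.postMessage({ buf }, [buf]);           // transferred: buf is detached here

  // net-worker.js: do the work, then transfer the buffer back
  self.onmessage = (e) => {
    const view = new Float32Array(e.data.buf);
    // ... fill/update view ...
    self.postMessage({ buf: view.buffer }, [view.buffer]);
  };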


For me personally, that's exactly the sort of thing that ends up teaching me so much, so it's very valuable in its own right.


Sorry about that! I realized too late that the game is really badly optimized once you hit higher levels and start getting chain reactions. Turning off Effects helps for some things.

I wonder how far you can push the DOM. It gets crazy with so many element updates after a while, but there are probably many performance improvements that could be made.
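
One obvious one would be batching the element updates once per frame instead of writing to the DOM on every state change; roughly this kind of thing (just a sketch, not what the game currently does):

  // Queue text changes and flush them once per frame instead of writing to
  // the DOM on every simulation tick.
  const pending = new Map();   // element -> text to apply
  let scheduled = false;

  function queueUpdate(el, text) {
    pending.set(el, text);
    if (scheduled) return;
    scheduled = true;
    requestAnimationFrame(() => {
      for (const [node, value] of pending) node.textContent = value;
      pending.clear();
      scheduled = false;
    });
  }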


I understand; it would be nice if it periodically wrote out the score/etc. to localStorage or similar. I got reset to the level I was on, but the ~$2K gold I was up to, which would have carried into the next level, got reset. Far from the end of the world.
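
Even something dead simple would cover it; a sketch with a made-up key and fields:

  // Periodically snapshot whatever matters; restore it on startup.
  const KEY = "clicker-save";   // illustrative key and fields
  let state = JSON.parse(localStorage.getItem(KEY)) || { level: 1, gold: 0 };

  setInterval(() => {
    localStorage.setItem(KEY, JSON.stringify(state));
  }, 10000); // every 10 seconds is plenty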


Try pressing the wizard. I should probably make it clearer on mobile and highlight it once you can afford your first cat.


Getting the second resource (can't post emojis here) means you hit a quark, a loop, or a dead end. I have a version where it continues splitting into up/down quarks and then fantasy matter, but it quickly derails into chaos.


I've seen many reports of this on Pixel phones, but I experienced something very similar on another Android phone (it was either a Moto G5 or a OnePlus One): https://news.ycombinator.com/item?id=29494820

