
JS numbers technically have 53 bits for integers (the mantissa) but all binary operators turn them into a 32-bit signed integer. Maybe this is related somehow to the setTimeout limitation. JavaScript also has the >>> unsigned bit shift operator so you can squeeze that last bit out of it if you only care about positive values: ((2**32-1)>>>0).toString(2).length === 32

I assume by binary you mean logical? a + b certainly does not treat either side as 32-bit.

Sorry, I meant bitwise operators, such as: ~ >> << >>> | &
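
To make the coercion concrete, here's a quick sketch you can paste into any JS console (nothing here beyond the operators mentioned above):

    // Bitwise operators first coerce their operands to a 32-bit signed integer:
    (2 ** 31) | 0;                             // -2147483648 (wrapped around)
    (2 ** 32 - 1) | 0;                         // -1
    // >>> is the one operator whose result is interpreted as unsigned 32-bit:
    (2 ** 32 - 1) >>> 0;                       // 4294967295
    ((2 ** 32 - 1) >>> 0).toString(2).length;  // 32
    // Plain arithmetic stays exact up to 2 ** 53 (53-bit significand):
    Number.isSafeInteger(2 ** 53 - 1);         // true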

I'm constantly impressed by the design of DOs (Durable Objects). I think it's easy to have a knee-jerk reaction that something is wrong with doing it this way, but in reality I think this is exactly how a lot of real products are implicitly structured: a lot of complex work done at very low scale per atomic thing (by which I mean, anything that needs to be transactionally consistent).

In retrospect, what we ended up building at Framer for projects with multiplayer support, where edits are replicated at 60 FPS while being correctly ordered for all clients, is a more applied version of what DOs are doing now. We also ended up with something like a WAL of JSON object edits, so if a project instance crashed, its backup could pick up as if nothing had happened, even if committing the JSON patches into the (huge) project data object hadn't had time to occur (on an every-N-updates/M-seconds basis, just like described here).
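
For what it's worth, the general shape was roughly the sketch below. This is not Framer's actual code; the storage interface, the applyPatch helper, and the numbers are all made up:

    // Hypothetical sketch -- the storage interface, applyPatch(), and the
    // numbers below are made up, not Framer's or Cloudflare's actual API.
    class PatchLog {
      constructor(store, commitEveryN = 100, commitEveryMs = 5000) {
        this.store = store;   // durable storage for the log and the big document
        this.pending = [];    // patches appended to the WAL but not yet folded in
        this.commitEveryN = commitEveryN;
        setInterval(() => this.commit(), commitEveryMs);
      }
      async append(patch) {
        await this.store.appendToLog(patch);  // durable write first (the WAL part)
        this.pending.push(patch);
        this.broadcast(patch);                // fan out to clients right away
        if (this.pending.length >= this.commitEveryN) await this.commit();
      }
      async commit() {
        if (this.pending.length === 0) return;
        const doc = await this.store.readDocument();
        for (const patch of this.pending) applyPatch(doc, patch);
        await this.store.writeDocument(doc);  // fold patches into the huge object
        await this.store.truncateLog();       // the log is redundant now
        this.pending = [];
      }
      async recover() {
        // After a crash, replay whatever never made it into the document.
        this.pending = await this.store.readLog();
        await this.commit();
      }
      broadcast(patch) { /* send to connected clients */ }
    }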


I saw the reference to “apps like Figma”, and as one of the people who worked on Framer’s database (Framer is also a canvas-based app, also local + multiplayer), I find it hard to imagine how to effectively synchronize canvas data with a relational database like Postgres. Users will frequently work on thousands of nodes in parallel and perform dragging updates at 60 FPS that should at least be propagated to other clients frequently.

Does Instant have a way to merge many frequent updates into fewer Postgres transactions while maintaining high frequency for multiplayer?

Regardless, this is super cool for so many other things where you’re modifying more regular app data. Apps often have bugs when attempting to synchronize data across multiple endpoints, and tend to drift over time when data mutation logic is spread across the code base. Just being able to treat the data as one big object usually helps, even if it seems to go against some principles (like microservices, but don’t get me started on why that fails more often than not due to the discipline it requires).
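
To make the question concrete: I don't know how Instant does it, but the naive shape I'd expect is something like the sketch below, where peers still receive every frame and the database only sees the latest value per node at each flush. Every name here is made up:

    // Hypothetical coalescer: broadcast every frame, persist only the latest
    // value per node on a timer. None of these names come from Instant.
    class Coalescer {
      constructor(db, broadcast, flushMs = 250) {
        this.db = db;
        this.broadcast = broadcast;
        this.dirty = new Map(); // nodeId -> latest attributes since last flush
        setInterval(() => this.flush(), flushMs);
      }
      update(nodeId, attrs) {
        this.broadcast(nodeId, attrs); // the 60 FPS path to other clients
        this.dirty.set(nodeId, { ...this.dirty.get(nodeId), ...attrs });
      }
      async flush() {
        if (this.dirty.size === 0) return;
        const rows = [...this.dirty.entries()];
        this.dirty.clear();
        await this.db.upsertNodes(rows); // one transaction instead of thousands
      }
    }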


Good point on the update frequency. I believe batching the requests and responses is a must for any library/service of this type to work in a production environment, and a performance report/comparison is still needed for people to get an idea of whether it can support their business model.

About the synchronized data, though, I think it's less about the database and more about the data types designed to sync the data? I worked on multiplayer canvas games and we didn't really care that much whether it was a relational DB or a document DB; both worked fine. I would love to know what the differences and challenges are.


We do indeed batch frequent updates! Still many opportunities for improvements there, but we have a working demo of a team-oriented tldraw [1]

[1] https://github.com/jsventures/instldraw


We would love to hear more about the architecture you used at Framer. Would you be up for a coffee? My email is stopa@instantdb.com


Would love to hear how you went about doing things at Framer!


I was impressed by Microsoft’s AICI, where the idea is that a WASM program can choose the next tokens. And relatedly their Guidance [1] framework, which can use CFGs and programs for local inference and even speed it up with context-aware token filling. I hope this implies API-based LLMs may be moving in a similar direction.

[1] https://github.com/guidance-ai/guidance


A friend of mine made this in-browser neural network engine that could run millions of multi-layer NNs in a simulated world at hundreds of updates per second, and each network could reproduce and evolve. It worked in the sense that the networks exhibited useful and varied behaviors. However, it was clear that larger networks were needed for more complex behaviors, and evolution then just starts to take a lot longer.

https://youtu.be/-1s3Re49jfE?si=_G8pEVFoSb2J4vgS


Maybe the validator should be something like Object.getPrototypeOf(document.createElement("a")).constructor !== HTMLUnknownElement

I noticed several functional HTML elements (like "font") were not in the list, so it might be nice to just trust the browser (not sure how you'd get the correct count per browser, though).
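
Spelled out as a reusable check, the same idea looks roughly like this (untested sketch; the instanceof form is equivalent to comparing constructors as above):

    // True if the browser recognizes `tag` as a real HTML element.
    function isKnownElement(tag) {
      return !(document.createElement(tag) instanceof HTMLUnknownElement);
    }

    isKnownElement("font"); // true -- deprecated but still implemented (HTMLFontElement)
    isKnownElement("foo");  // false -- falls back to HTMLUnknownElement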


That might make the results browser-dependent.


Channels require a different goroutine to send values while also receiving them (which is how you’d have two loops communicating, essentially what you get from range funcs).

There’s nothing stopping you from doing this but it does mean you are introducing the requirement of thread safety in your code, in the case where the iterator is stateful.

I would argue anything that needs a range func beyond the simple functional things like filters is probably a stateful iterator (or generator if you’d like), and as such having range funcs is a great way to write code that doesn’t go wrong due to parallelism.

Now you could add two-way communication to your channel iterator (or any other locking mechanism) for safety, but honestly I think range funcs perfectly solve this use case, and I have already used them to keep my code more readable and correct.

All this said, while I’m still a fan of Go and have used it regularly since 0.9 as well as contributed to the language, I will agree with the other comments that sometimes the language design bends over backward to be purist at the cost of having to add more footguns in user land.


> Channels require a different goroutine to send values while also receiving them ...

The requirement is a limitation of the current design and implementation. Channels can be enhanced to avoid the requirement.


Very cool! Good to see the shoutout to Dennis' Holograph.so work as well. I played around with that one to make some fun things:

A game of cat and mouse: https://x.com/blixt/status/1797384954172625302

An analog clock: https://x.com/blixt/status/1798393279194824952

I think tools like these, with better UX and more guard rails for the code, can really help people understand logic and systems in a much more visually intuitive way. In some respects, these propagators work similarly to Excel, which I think a lot of people already have some intuition for.


From Kyutai, demo'd today: https://www.youtube.com/live/hm2IJSKcYvo


I was also exploring randomness in JS at some point and found lots of interesting things! One was that the Alea PRNG algorithm[1] by Johannes Baagøe performed faster on JavaScript's floating point numbers. Another was that Dieharder[2] is a really fun tool to test PRNGs. I also made an attempt at consolidating other PRNG methods which were not great[3] and that led me to other people who had done the same[4].

And finally I tried to make a nicer Dieharder wrapper and a simple PRNG library, but lord knows how relevant it is anymore: https://github.com/blixt/js-arbit

I guess in this archaeological dig I also found how many useful resources on the internet disappear in less than a decade.

[1]: https://web.archive.org/web/20120502223108/http://baagoe.com... (Baagøe's original site is down)

[2]: https://rurban.github.io/dieharder/ (old site is dead, though here's a web archive link: https://web.archive.org/web/20170609075452/http://www.phy.du...)

[3]: https://gist.github.com/blixt/f17b47c62508be59987b (Don't use this)

[4]: https://github.com/nquinlan/better-random-numbers-for-javasc... (this is mainly a mirror of Baagøe's wiki)
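
For anyone curious what these tiny JS-oriented PRNGs look like, here's a seeded generator in the same spirit. This is the widely circulated mulberry32 routine, not Baagøe's Alea, and it's shown purely as an illustration:

    // Small seeded PRNG (mulberry32); deterministic floats in [0, 1).
    function mulberry32(seed) {
      return function () {
        seed = (seed + 0x6d2b79f5) | 0;
        let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

    const rand = mulberry32(1234);
    rand(); // same sequence every run for the same seed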

