Hacker News | matharmin's comments

We're relying on logical replication heavily for PowerSync, and I've found it is a great tool, but it is also very low-level and under-documented. This article gives a great overview - I wish I had this when we started with our implementation.

Some examples of difficulties we've run into:

1. LSNs for transactions (commits) are strictly increasing, but not for individual operations across transactions. You may not pick this up during basic testing, but it starts showing up when you have concurrent transactions.

2. You cannot resume logical replication in the middle of a transaction (you have to restart the transaction), which becomes relevant when you have large transactions.

3. In most cases, replication slots cannot be preserved when upgrading Postgres major versions.

4. When you have multiple Postgres clusters in an HA setup, you _can_ use logical replication, but it becomes more tricky (better in recent Postgres versions, but you're still responsible for making sure the slots are synced).

5. Replication slots can break in many different ways, and there's no good way to know all the possible failure modes until you've run into them. Especially fun when your server ran out of disk space at some point. It's a little better with Postgres 17+ exposing wal_status and invalidation_reason on pg_replication_slots.

6. You need to make sure to acknowledge keepalive messages, not only data messages, otherwise the WAL can keep growing indefinitely when you don't have incoming changes (depending on the hosting provider).

7. Common drivers often either don't implement the replication protocol at all, or attempt to abstract away low-level details that you actually need. Here it's great that the article actually explains the low-level protocol details.
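On point 1: to compare or persist LSNs as resume points, it helps to convert the textual `X/Y` form into a 64-bit integer. This is a minimal sketch; the helper names are my own, but the format (high 32 bits before the slash, low 32 bits after, both hex) is the standard pg_lsn representation:

```python
def lsn_to_int(lsn: str) -> int:
    """Convert a textual Postgres LSN like '16/B374D848' to a 64-bit int.

    The part before the slash is the high 32 bits, the part after is the
    low 32 bits, both hexadecimal.
    """
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)


def int_to_lsn(value: int) -> str:
    """Inverse of lsn_to_int."""
    return f"{value >> 32:X}/{value & 0xFFFFFFFF:X}"


# Commit LSNs are safe to compare and to use as resume points; per-operation
# LSNs from concurrent transactions may interleave, so don't resume on those.
assert lsn_to_int("16/B374D848") < lsn_to_int("17/0")
assert int_to_lsn(lsn_to_int("16/B374D848")) == "16/B374D848"
```

The same integer form is what you feed back (as flush/apply positions) when acknowledging keepalives, per point 6.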


Yeah I was debating heavily between WAL and L/N. Tried to get WAL set up, struggled; tried to learn more about WAL, failed; tried to persevere, shot myself in the foot.

At the end of the day the simplicity of L/N made it well worth the performance degradation. Still making thousands-to-millions of writes per second, so when the original article said they were 'exaggerating' I think they may have gone too far.

I've been hoping WAL gets some more documentation love in the years/decades L/N will serve me should I ever need to upgrade, so please share more! :D


Probably a security feature. If it can access the internet, it can send your private data to the internet. Of course, if you allow it to run arbitrary commands it can do the same.


FoundationDB also has a Mongo-compatible document layer, but it seems like the last release was 6 years ago, so probably doesn't count anymore.


The project looks great! Object storage is often so much better in terms of cost efficiency than a database on EBS. EBS is often 10-20x more expensive once you take into account that you need 3x replicas for a typical MongoDB deployment, and need to over-provision the storage. And being able to scale compute independently from storage is great.

The biggest thing I'm missing from the docs (I checked the GitHub page and the site) is a list of which MongoDB features are and aren't supported. I've worked with Azure CosmosDB before, and even though it claims MongoDB compatibility, it has many compatibility issues as soon as you have more than a basic CRUD application. Some examples include proper ChangeStream support, partial index support, multi-key index support, the set of supported aggregation pipeline operators, tailable cursor support, and snapshot queries.

Another thing that's not clear: What does multi-master/multi-write mean in practice? What happens if you write to the same data at the same time on different nodes?


That's exactly the reason. S3 is better than EBS in almost all aspects except for performance, and I am glad that our Data Substrate technology solved this issue gracefully [1].

As for the compatibility, we are leveraging some of the code from the 4.0.3 version (the last AGPL version), and we have very good compatibility (we will show some results in later blog posts). As I mentioned in another reply, the Mongo APIs have been reasonably stable over the last few years, with only very minor changes. Most of the later versions improved performance and transaction support, which we support natively with our underlying Data Substrate technology. Still, if there is any specific API that you feel is needed, we'd be happy to implement it, and we welcome community contributions.

Multi-master/multi-writer means it is a fully distributed database. Of course you can run it in a single-node configuration and get all the single-node benefits, but when deployed in a cluster you do not need to worry about which node to write to, or how data is sharded. If your writes can potentially conflict (i.e. write to the same data at the same time on different nodes), the concurrency control will handle that for you. In fact, you would encounter the same issue even in a single-node configuration, since a single node is still multi-threaded.
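The post above doesn't spell out the actual concurrency-control mechanism, but the general idea of resolving same-key conflicts can be illustrated with a toy optimistic compare-and-set scheme. Everything here (the `VersionedStore` class and its methods) is a hypothetical sketch for illustration, not EloqDoc's implementation:

```python
import threading


class VersionedStore:
    """Toy optimistic concurrency control: each key carries a version,
    and a write only succeeds if the writer saw the latest version.
    Purely illustrative; not any real database's implementation."""

    def __init__(self):
        self._lock = threading.Lock()  # stands in for the engine's internal latching
        self._data = {}  # key -> (version, value)

    def read(self, key):
        with self._lock:
            return self._data.get(key, (0, None))

    def write(self, key, expected_version, value):
        with self._lock:
            current_version, _ = self._data.get(key, (0, None))
            if current_version != expected_version:
                return False  # conflict: caller must re-read and retry
            self._data[key] = (current_version + 1, value)
            return True


store = VersionedStore()
version, _ = store.read("doc1")
assert store.write("doc1", version, {"a": 1}) is True
# A concurrent writer that read the same version now loses the race:
assert store.write("doc1", version, {"a": 2}) is False
```

The losing writer re-reads and retries, which is exactly the behavior you'd also see between two threads on a single node.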

[1] https://www.eloqdata.com/blog/2025/07/16/data-substrate-bene...


That is completely wrong for this stage of a company. The ability to make profit in the future is important. Making profit while growing is not.


So exactly how are they going to make a profit if each user causes a marginal loss?

Any idiot can sell a dollar’s worth of value for 90 cents.


Let's say I have an open-source project licensed under Apache 2. The grant allows me to include the extension in my project. But it doesn't allow me to relicense it under Apache 2 or any other compatible license. So if I include it, my project can't be Apache 2-licensed anymore.

Apache 2 is just an example here - the same would apply for practically any open source license.

The one place I imagine it could still work is if the open-source project, say a sqlite browser, includes it as an optional plugin. So the project itself stays open-source, but the grant allows using the proprietary plugin with it.


I don't see why this would infect your project, though. You aren't using the code directly, you're using it as a tool dependency, no? Same way as if your OSS project used an Oracle DB to store data.


Unlike Oracle DB, sqlite gets embedded in your program binary. It's a library, not an external service, and this matters for OSS licenses.


Ah true, I forgot because I always use it in Python, where it's built in.


It would help to have more info on what you need to run this. The page mentions "without the need for any new hardware", but doesn't say what existing types of hardware it is compatible with. The apps available for download give a hint, but are the apps for displaying the content, or for controlling the content?

For example, I recently set up a dashboard using a Raspberry Pi running Chromium - would this work for my use case?

And does the control work over the local network, or does it require an internet connection?


The controller is a web app (https://admin.signagesync.app), and the app is basically a WebView wrapper. The app has 2 "modes", control (opens the website mentioned earlier) and display.

The app requires internet, but the content can be from local network. Eventually if there's enough demand I'll make everything local.

It doesn't work for RPi yet but soon!


> Overall, I haven’t seen many issues with the drives, and when I did, it was a Linux kernel issue.

Reading the linked post, it's not a Linux kernel issue. Rather, the Linux kernel was forced to disable queued TRIM and maybe even NCQ for these drives, due to issues in the drives.


Hopefully there are drives that don’t have that issue?


There may be good reasons to do that, but this isn't one. Any project using CocoaPods will keep working for the foreseeable future - you just won't get new updates to dependencies at some point. And at that point you can migrate to SwiftPM or vendored dependencies, without losing anything.


Good idea let's punt it to 20 years from now when the CocoaPods server goes down. By then the startup will have been acquired and we will be long gone, so it's the parent company's problem not ours.


You'll still be able to use those - the CocoaPods repo will not go away any time soon. If someone wants to provide updates for those, it has to be migrated to SwiftPM. And in the meantime you may have to use both SwiftPM and CocoaPods, or fork & migrate those yourself.

