Feeling like a rockstar while developing. Impressing colleagues and clients with how much impressive functionality I could produce in so little time.
Then the absolute horror of production traffic hitting the system and everything grinding to a halt. If I remember correctly, the resources required by their reactive data model scaled up significantly both with read connection count and write activity. And I could not do a damn thing about it without a complete rewrite. First and only real case in my career where I simply could not solve the problem, not even a small part of the problem.
Oh, and it also gave me experience running Mongo as a primary data source. A lot of experience gained courtesy of Meteor.js, for sure, but I'd prefer it to stay in the rear-view mirror.
I ran into the same situation multiple times and was never able to find a solution besides throwing more money at servers. That might be due to my own lack of knowledge about running servers and performance optimization, as I was (and still am) a front-end person first and foremost.
This is perhaps one of the most misunderstood and misused quotes I know of. It doesn't mean what people think it means, yet I hear it misapplied at least a few times every year.
(The quote has to be understood in its original context. It isn't really about performance as such, but about maintaining degrees of freedom and development flexibility until you understand the problem you are solving and can commit.)
I've been on the other end of that story quite a few times - being the person having to point out that "this isn't going to work even for normal loads", and having people argue that caring about performance is "premature optimization". And then watching them crash and burn because of the arrogance of not testing their assumptions before betting the farm on them.
I think the path of least conflict is usually to encourage people to develop quick benchmarks to test their assumptions, and then ask questions about how they plan to address any problems that surface. If people are receptive, another useful piece of advice is that they shouldn't be afraid of just starting over if it's still really early in the project.
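A "quick benchmark" can be genuinely quick. Here's a minimal sketch: before committing to a store, measure how many writes per second it actually absorbs on your hardware. sqlite3 is just a stand-in here for whatever datastore the project actually plans to use; the table and row shape are made up for illustration.

```python
import sqlite3
import time

def writes_per_second(n_writes=5000):
    """Insert n_writes rows into an in-memory table and report throughput.
    Swap the connection out for the real datastore to get a meaningful number."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE odds (position INTEGER, price REAL)")
    start = time.perf_counter()
    for i in range(n_writes):
        conn.execute("INSERT INTO odds VALUES (?, ?)", (i, 1.5))
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    return n_writes / elapsed

print(f"~{writes_per_second():,.0f} writes/sec")
```

Ten minutes of this, run against the actual stack with realistic payloads, settles most "will it hold up?" arguments with a number instead of opinions.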
It wasn't that bad? Me and another guy made a non-profit platform with Meteor that got Hacker News front page traffic; we spun up like two instances to handle it I think. I wonder how much production traffic you were handling?
I don't remember the specifics; it was around the 1.0 release of Meteor, so... eight years ago? But the core concept was a sports betting dashboard, live-updating the odds. So updates were coming in thick and fast: hundreds of betting positions would change every couple of seconds as bookies tried to boost their margins during games in progress.
In testing it was beautiful. With simulated updates on a local machine, instant updates. Instant. Everyone's happy. Deployed to the server, and connected to the data firehose? Feedback was still okay, with just the client's employees and us browsing every now and then. Slightly slower, but hey, it's on a remote server now; that's got to be the issue.
Went live, the client ran the advertising campaign, and users flocked. Thing is, they all flocked at the same time, when the games were on. And updates were coming fastest while the games were on. Those two things multiplied together to firmly peg the server CPUs at 100%. The client was also not thrilled about throwing more and more boxes at it to try to stop the bleeding. Resource consumption was going up geometrically with user count, something I hadn't seen before with any technology stack.
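As I understand Meteor's livequery model of that era, every write had to be re-evaluated against every connected client's subscriptions, so server work grows with the *product* of write rate and connection count, not their sum. A back-of-envelope sketch (the numbers are invented, the cost model is the point):

```python
def livequery_work(clients, writes_per_sec):
    """Rough cost model: each write is diffed against each connected
    client's subscription, so cost is multiplicative, not additive."""
    return clients * writes_per_sec

# Off-peak: a handful of viewers, few odds changes.
quiet = livequery_work(clients=20, writes_per_sec=5)          # 100 units
# Game time: viewers AND odds updates spike together.
peak = livequery_work(clients=2000, writes_per_sec=500)       # 1,000,000 units

print(peak // quiet)  # a 100x rise in both inputs -> 10,000x the load
```

That multiplication is why the system felt fine with a few staff browsing and then fell over the moment real users and real game-time update rates arrived together.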
All in all, it taught me there is no such thing as a free lunch. You pay somewhere - worse developer experience, more resource requirements, development costs and time. No such thing as a silver bullet.
Also, keeping data in sync without transactions in a Mongo cluster provided endless educational entertainment. We needed to process incoming payment confirmations from the bank and update the "credits" balance of users. Entirely too often one of those writes would fail, especially under load. I hear it's gotten better, but I've refused to treat Mongo as anything but a non-authoritative cache ever since.
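A toy sketch of that failure mode, with hypothetical in-memory dicts standing in for the real payment and credits collections: two independent writes with no atomic commit, so a crash between them leaves a confirmed payment that was never credited.

```python
payments = {}   # payment_id -> confirmed amount
credits = {}    # user_id -> balance

def settle_unsafe(payment_id, user_id, amount, crash_between=False):
    """Two independent writes, as with pre-transaction Mongo:
    a failure between them leaves the books inconsistent."""
    payments[payment_id] = amount            # write 1 commits...
    if crash_between:
        raise RuntimeError("process died under load")
    credits[user_id] = credits.get(user_id, 0) + amount  # ...write 2 never runs

try:
    settle_unsafe("pay-1", "alice", 50, crash_between=True)
except RuntimeError:
    pass

# Payment recorded, but the user never got the credits.
print("pay-1" in payments, credits.get("alice"))  # True None
```

The usual ways out are multi-document transactions (which MongoDB added in 4.0) or an idempotent reconciliation job that replays unmatched payments; neither existed as an option on that era's stack.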
Not my experience. And interacting in the HN comments tells me others do too. Can you point to a current example of an “HN aggregator bot”? I’m assuming you see them often?
I prefer simple tools for just this reason. I never used Meteor (for fear of what you describe), but I’ve used plenty of 3rd party libs / tools that I was told “do 90% of what we need; just wrap it and do the remaining 10%”. The 90% was usually the easy part, but often had flaws that were a nuisance (or impossible) to work around. Or they’d stop getting maintained and decay over the years.
Would be interested in people's thoughts on what realistic read/write pressure is a good level to test at, to feel confident in real-world performance... [if not HN-level DoS'ing]
I’m not sure whether I’m understanding those numbers correctly, but servers are expected to deal with many more zeroes than that. 10 writes/second reads like a meme.
Ten writes per second is abysmal, considering that even consumer NVMe SSDs have several hundreds of thousands of random write IOPS.
At the end of the day though, what matters is whether or not your application scales appropriately for your expected workload.
If your expected workload is ten writes per second, then trying to get every single bit of performance out of the hardware is probably not time well spent. If you're having to handle thousands of writes per second and you're getting ten writes per second per server, then it's probably worthwhile to look into what's causing that number to be so low.
You can horizontally scale Mongo via sharding, and you could horizontally scale Meteor pub/sub with Redis and related libraries as of, like, 2015. Depending on when this was, you had a lot of options. :P