
I can't help but upvote this article just because of the bacteria's name.

+1 on your comment.

I think having a description of Apple's threat model would help.

I was thinking that open source would help with their verifiable privacy promise. Then again, as you've said, if Apple controls the root of trust, they control everything.


Their threat model is described in their white papers.

But essentially it is trying to get to the end result of “if someone commandeers the building with the servers, they still can’t compromise the data chain even with physical access”


They define their threat model in "Anticipating Attacks"


I took a guided tour of the London Zoo this summer. The zoologist said that they had to procure vegetables from special sources and that they couldn't give fruits to the animals at the zoo.

When asked why, she said, "When we give animals fruits that humans eat, they all develop diabetes."


That’s interesting. Apparently the reason is selective breeding to increase the sugar content of commercially produced fruit.

https://www.npr.org/2018/10/07/655345630/how-fruit-became-so...


I think the parent project, Kamal, positions itself as a simpler alternative to K8s when deploying web apps. They have a question on this on their website: https://kamal-deploy.org

"Why not just run Capistrano, Kubernetes or Docker Swarm?

...

Docker Swarm is much simpler than Kubernetes, but it’s still built on the same declarative model that uses state reconciliation. Kamal is intentionally designed around imperative commands, like Capistrano.

Ultimately, there are a myriad of ways to deploy web apps, but this is the toolkit we’ve used at 37signals to bring HEY and all our other formerly cloud-hosted applications home to our own hardware."


Hey there, this is a comprehensive and informative reply!

I had two questions just to learn more.

* What has been your experience with using local NVMes with K8s? It feels like K8s has some assumptions around volume persistence, so I'm curious if these impacted you at all in production.

* How does 'Reclaim the Stack' compare to Kamal? Was migrating off of Heroku your primary motivation for building 'Reclaim the Stack'?

Again, asking just to understand. For context, I'm one of the founders at Ubicloud. We're looking to build a managed K8s service next and evaluating trade-offs related to storage, networking, and IAM. We're also looking at Kamal as a way to deploy web apps. This post is super interesting, so I wanted to learn more.


K8s works with both local storage and networked storage. But the two are vastly different from an operations point of view.

With networked storage you get fully decoupled compute / storage, which allows Kubernetes to reschedule pods arbitrarily across nodes. The trade-off is that you have to run additional storage software, you end up with more architectural complexity, and performance gets bottlenecked by your network.

Please check out our storage documentation for more details: https://reclaim-the-stack.com/docs/platform-components/persi...

> How does 'Reclaim the Stack' compare to Kamal?

Kamal doesn't really do much at all compared to RtS. RtS is more or less a feature-complete Heroku alternative. It comes with monitoring / log aggregation / alerting etc., and also automates high-availability deployments of common databases.

Keep in mind 37signals has a dedicated devops team with 10+ engineers. We have 0 full-time devops people. We would not be able to run our product using Kamal.

That said, I think Kamal is a fine fit for e.g. running a Rails app using SQLite on a single server.

> Was migrating off of Heroku your primary motivation for building 'Reclaim the Stack'?

Yes.

Feel free to join the Discord and start a conversation if you want to bounce ideas for your k8s service :)


(Ozgun from Ubicloud)

Thank you for the kind words!

Daniel has a few gems in this blog post and we tried to italicize some of them. My favorite is "There is no code without a theory of testing."

"If you have 100% branch coverage, it doesn't mean you've covered all the cases. But it does mean that, whenever an obscure fault is understood in production, or even merely observed in development, there is an incremental path to add it to the base of knowledge in the tests: there are no spans of code with no test model."


Also not surprised to find out that Ozgun from Citus is behind these strong principles! :)

Yeah, I mean that's basically a paradox: how can you possibly write a line of code whose correctness you don't know how to validate!?


I read this book called "How Big Things Get Done." I've seen my fair share of projects going haywire and I wanted to understand if we could do better.

The book identifies uniqueness bias as an important reason why most big projects overrun. (*) In short, this bias leads planners to view their projects as unique, thereby disregarding valuable lessons from previous similar projects.

(*) The book compiles 16,000 big projects across different domains. 99.5% of those projects overrun their timeline or budget. Other reasons for slipping include optimism bias, not relying on the right anchor, and strategic misrepresentation.


This goes beyond project estimations. The Metaculus tournaments have questions like

> Will YouTube be banned in Russia before October 1?

These are the kinds of questions most people I meet would claim are impossible to forecast accurately because of their supposed uniqueness, yet the Metaculus community does it again and again.

I believe the problem is a lack of statistical literacy. Back in the 1500s shipping insurance was priced under the assumption that if you just learned enough about the voyage you could tell for certain whether it would be successful or not. It took a revolution in mindset to ignore everything that makes a shipment unique and instead focus on what a primitive reference class of them have in common, and infer general success propensities from that.

Most people I meet still don't understand this, and I think it will be a few more generations until statistical literacy is as widespread as the literal kind.


> the Metaculus community does it again and again.

Do they? I can't see where on that site there is something like "history of predictions" or "track record".

Edit: here.

https://www.metaculus.com/questions/track-record/

But now I can't tell if it's good or bad.


The binary calibration diagram is what I would focus on. Of all the times Metaculus has said there's an x% chance of something happening, it has happened nearly exactly x% of the time. That is both useful and a little remarkable!


That doesn't necessarily imply individual predictions are particularly good. If, between two outcomes, I say X wins 50% and pick the winner randomly, I'm going to be correct 50% of the time. However, if I offered people bets at 50/50 odds based on those predictions, I would lose a great deal of money to people making more accurate predictions.
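A toy sketch of that distinction (made-up 50/50 events, not Metaculus data): a forecaster who always says 50% is perfectly calibrated yet uninformative, which shows up in a proper scoring rule like the Brier score rather than in a calibration plot.

    import random

    random.seed(1)
    outcomes = [random.random() < 0.5 for _ in range(10_000)]  # toy coin-flip-like events

    def brier(forecasts, outcomes):
        # Mean squared error between stated probabilities and 0/1 outcomes (lower is better).
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

    always_half = [0.5] * len(outcomes)               # calibrated but says nothing
    sharper = [0.9 if o else 0.1 for o in outcomes]   # idealized forecaster that "knows" more
    print("always 50%:", brier(always_half, outcomes))  # 0.25
    print("sharper:   ", brier(sharper, outcomes))      # 0.01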


This is true! The Metaculus community forecast also performs very well (in terms of score) in tournaments that include other aggregation mechanisms, though.


It also seems gameable: for every big question of societal importance that people care about for its own sake, have a thousand random little questions where the outcome is dead obvious and can be predicted trivially. Do you know anything about how questions are weighted to account for this?


There's calibration, but you can also just see contests where you pit the community aggregate against individual forecasters and see who wins. The Metaculus aggregate is really dominant in this contest of predicting outcomes in 2023, for example. See this: https://www.astralcodexten.com/p/who-predicted-2023


Trivial questions wouldn't result in a good histogram where a probability of 30% actually results in something happening roughly 1 in 3 times. Trivial would mean questions where the community forecast is 1% or 99%. Those are not the vast majority of questions on the site. It would be very boring if the site was 70% questions where the answer is obviously yes or obviously no.

Additionally, many questions require that you give a distributional forecast, in effect giving 25/50/75th percentile outcomes for questions such as "How much will Bitcoin be worth at the end of 2024?"

Who would be gaming the system here anyway, the site? Individual users?


I'd also include under 'trivial' things like "Will this 6-sided die roll a 1?", or really any other well-understood i.i.d. process whose distribution of outcomes never changes under reasonable circumstances. Not just things that are 0.1% or 99.9%.

> Who would be gaming the system here anyway, the site? Individual users?

Cynically speaking, users would be incentivized to ask and answer more trivial questions to pad out their predictive accuracy, so they can advertise that accuracy elsewhere for social clout.


> Back in the 1500s shipping insurance was priced under the assumption that if you just learned enough about the voyage you could tell for certain whether it would be successful or not.

Do you have a source on that? That sounds very unlikely to me. People just have to look at a single storm to see that it sometimes destroys some ships and not others. It very clearly has a luck component. How could anyone (at any time, really) have believed that "if you just learned enough about the voyage you could tell for certain whether it would be successful or not"?


Okay, that was an oversimplification. I have long wanted to write more on this but it never comes out right. What they did was determine the outcome with certainty barring the will of God or similar weasels.

I.e. they did thorough investigations and then determined whether the shipment ought to be successful or not, and then the rest lay in the hands of deities or nature, in ways they assumed were incalculable.

This normative perspective of what the future ought to be like is typical of statistical illiteracy.

I'll get back to you on sources after dinner!


I think I got most of this from Willful Ignorance (Weisberg, 2014) but it was a while since I read it (and this was before I made more detailed notes) so I might be misremembering.

There are what look like fantastic books on the early days of probability and statistics aside from Weisberg (Ian Hacking is an author that comes to mind, and maybe Stephen Stigler?) but I have not yet taken the time to read them – I'll read more and write something up on this some day.

But statistical understanding is such a recent phenomenon that we have been able to document its rise fairly well[1] which is fascinating in and of itself.

[1]: And yet we don't know basic stuff like when the arithmetic mean became an acceptable way to aggregate data. Situations that obviously (to our eyes) call for an arithmetic mean in the 1630s were resolved some other way (often midrange, mode, or just a hand-picked observation that seemed somehow representative) and then suddenly in the 1680s we have this letter to a newspaper which uses the arithmetic mean as if it was the obvious thing to do and then its usage is relatively well documented after that. From what I understand we don't know what happened between 1630 and 1680 to cause this change in acceptance!


Could it be related to Huygens publication in 1657? (De Ratiociniis in Ludo Aleae)


Thank you for the reference!


Somewhat related, but very detailed, is the book Against the Gods[0] by Peter Bernstein that documents the historical development of understanding probability and risk. It also discusses what people believed before these concepts were understood.

0 - https://www.amazon.com/Against-Gods-Remarkable-Story-Risk/dp...


I know you're coming from a good place, but the 'most people I meet don't understand this' line about statistics is quite arrogant. Most people you meet are fully capable of understanding statistics; you should do a better job of explaining it when it comes up, or maybe you are the one who misunderstands. After all, most statisticians thought Marilyn vos Savant was wrong about the goats too...


You don't have to be on the internet for long to see:

- "Polls are useless, they only sampled a few thousand people"

- "Why do we need the crime figures adjusted for the age/income/etc groups? Just gimme the raw truth!"

Have to say, I think stats are the least well taught area of the math curriculum. Most people by far have no clue what Simpson's or Berkson's paradoxes are. Most people, when presented with stats, lack the critical sense to ask questions like "how was the data collected?" or "does it show what we think it shows?"

I just don't see it, though ironically I don't have stats to back that up.


You don't have to be on the Internet long to see flat-earthers or any number of asinine ways of thinking. You can't stretch discrete observations from a supremely massive sample size into "most people".


Gee, if only there was some kind of rigorous and well understood process for determining how to transform discrete observations according to how representative they are, such that we could build a model for a larger population.

Something like that would be very useful for political decisions, so perhaps we could name it after the latin word for "of the state"…

;P


> perhaps we could name it after the latin word for "of the state"…

Civitatis?



That's not Latin.

> from New Latin statisticum

Also, etymonline makes a pretty convincing case that statisticum refers to the behavior of administrators, not to the concept of the administration, with the -ist- specifically indicating a person.


It's definitely arrogant. But after much experience trying to explain these things to people, I'm more and more convinced it's not just "if you put it the right way they will understand". Sure, they will nod politely and pretend to understand, and may even do a passable job of reciting it, but once it's cast in a slightly different light they are just as confused.

Much as with reading and writing, I think it takes an active imagination and a long slog of unlearning to trust logic (the "ought to" thinking that shields one from reality) and coming to terms with the race not being to the swift, etc, and that these effects can be quantified.

It's not that some people are incapable of it. Much like literal literacy has reached rates of 99.9 % in parts of the world, I'm convinced statistical literacy can too. But when your teacher is not statistically literate (which I hypothesise they are not, generally speaking), they will not pass that on to you. They will not give you examples where the race is not to the swift. They will not point out when things seem to happen within regular variation and when they seem to be due to assignable causes. They will not observe the battle against entropy in seating choices in the classroom. They will not point out potential confounders in propensity associations. They will not treat student performance as a sample from a hypothetical population. They will not grade multiple-choice questions on KL divergence, although that would be far more useful. I could go on but I think you get the point.

Yet to be clear, I'm not talking about just applying statistical techniques and tools. I'm talking about being able to follow a basic argument that rests on things like "yes I know they are a fantastic founder but startups fail 90 % of the time and so will they" or "if the ordinary variation between bus arrivals is 5–15 minutes and we have waited 20 minutes then there is something specific that put the bus we are waiting for into a different population." These are not obvious things to many people.
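A minimal sketch of that bus example, assuming (purely for illustration) that ordinary arrival gaps are uniform between 5 and 15 minutes:

    import random

    random.seed(0)
    ordinary_gaps = [random.uniform(5, 15) for _ in range(100_000)]  # assumed ordinary variation
    share_at_least_20 = sum(g >= 20 for g in ordinary_gaps) / len(ordinary_gaps)
    # Prints 0.0: a 20-minute wait doesn't fit ordinary variation, so look for an assignable cause.
    print(share_at_least_20)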

This is not a personal failure – it is a lack of role models and teachers. I wouldn't have considered myself statistically literate until recently, and only thanks to accidentally reading the right books. I wouldn't even have known what I was missing were it not for that!

I suspect it will take a few generations to really get it going.

If someone would donate me large amounts of money I would love to actively research this subject, come up with reliable and valid scales to measure statistical literacy, and so on. But in the meantime I can only think in my spare time and sometimes write about it online.


What resources would you recommend for someone who wants to improve their statistical literacy? You mention reading the right books, I'd appreciate it if you could give a short list, if you have time.


I am not the person you asked, but I have been on a similar path to improve my statistical literacy. For context, I am fairly good at math generally (used to be a math major in college decades ago; didn't do particularly well though I did graduate) but always managed to get myself extremely confused when it comes to statistics.

In terms of books: there are a few good ones aimed at the general public, such as The Signal and The Noise. How to Measure Anything: Finding the Value of Intangibles in Business is a good book on applying statistical thinking in a practical setting, though it wouldn't help you wrap your brain around things like the Monty Hall problem.

The one book that really made things click for me was this:

Probability Theory: The Logic of Science by E. T. Jaynes

This book is a bit more math-heavy, but I think anyone with a working background in a science or engineering field (including software engineering) should be able to get the important fundamental idea out of the book.

You don't need to completely comprehend all the mathematical details (I surely didn't); it is enough to have a high-level understanding of how the formulas are structured. But you do need enough math (for example, an intuitive understanding of logarithms) for the book to be useful.


I second both The Signal and the Noise as well as How to Measure Anything. I also mentioned upthread Willful Ignorance.

I think perhaps the best bang for your buck could be Wheeler's Understanding Variation -- but that is based mainly on vague memory and skimming the table of contents. I plan on writing a proper review of that book in the coming year to make certain it is what I remember it to be.

I think the earlier works by Taleb also touch on this (Fooled by Randomness seems to have it in the title).

But then I strongly recommend branching out to places where these fundamentals are used, to cement them:

- Anything popular by Deming (e.g. The New Economics)

- Anything less popular by Deming (e.g. Some Theory of Sampling)

- Moneyball

- Theory of Probability (de Finetti)

- Causality (Pearl)

- Applied Survival Analysis

- Analysis of Extremal Events

- Regression Modeling with Actuarial and Financial Applications

The more theoretical and applied books are less casual reads, obviously. They also happen to be the directions in which I have gone -- you may have more luck picking your own direction for where to apply and practice your intuition.

Edit: Oh and while I don't have a specific book recommendation because none of the ones I read I have good opinions on, something on combinatorics helps with getting a grasp on the general smell of entropy and simpler problems like Monty Hall.


These responses are great and helpful. Thank you to you both.


Not my experience at all. Just one example: try talking to physicians about false discovery rates, even those who do not profit from state-of-the-art screening methods. Incorporating Bayesian methods is a struggle even for statisticians. It is a struggle for me.

John Ioannidis has much to say on this topic:

https://en.wikipedia.org/wiki/John_Ioannidis


The Monty Hall problem is a great example of something I've been educated into believing, rationalizing, whatever you want to call it… but I would still never claim I “understand” it. I think that's maybe the source of the disagreement here: there are many truly unintuitive outcomes of statistics that are not “understood” by most people in the most respectful sense of the word, even if we've been educated into knowing the formula, knowing how to come to the right answer, etc.

It’s like in chess: I know that the Sicilian is a good opening and that I’m supposed to play a6 in the Najdorf, but I absolutely do not “understand” the Najdorf, and I do think it’s fundamentally past the limit of most humans’ understanding.


You should totally spend some time thinking and experimenting with the Monty Hall problem then. It might not be as tricky as you think.
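For anyone who wants to do exactly that, here's a minimal simulation sketch (not a proof, just the experiment):

    import random

    def play(switch: bool) -> bool:
        """One round of Monty Hall; returns True if the player ends up with the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    n = 100_000
    print("stay:  ", sum(play(False) for _ in range(n)) / n)  # ~1/3
    print("switch:", sum(play(True) for _ in range(n)) / n)   # ~2/3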


This is not at all true, and I think it's an example of what statisticians have to fight against in order to explain anything. Most people have an almost religious belief that inferences drawn from statistics should be intuitive, when they are often extremely counterintuitive.

> After all, most statisticians thought Marilyn vos Savant was wrong about the goats too...

This is the opposite of the argument that you're making. Here you're saying that probability is so confusing and counterintuitive that even the experts get it wrong.


My point was that the experts were blinded by arrogance.

Even back then most (almost all?) statisticians were capable of understanding the Monty Hall problem. Yet they just assumed that a woman was wrong when she explained something that didn't match their intuition. Instead of stopping to think, they let their arrogance take over and just assumed they were right.


My experience is that most people don't understand statistics and can be pretty easily misled. That includes myself, with my only defense being that I'm at least aware statistics don't intuitively make sense, so I either ignore arguments based on them or, if absolutely necessary, invest the extra time to properly understand them.

Most people either don't realize this is necessary or don't have the background to do it even if they did, in my experience.


I'm not surprised by the statement 'most people I meet don't understand this'. More than 50 percent of the people I meet are less educated. It's statistically evident.

I might be overreaching, but what comes across as arrogance is in fact just an example of statistical illiteracy.

Most (>> 50 percent of) people are very good at detecting patterns. People are very bad at averaging over many events, because the detected patterns stand out so much and are implicitly and unconsciously exaggerated.

An example. "in my city people drive like crazy" in fact means: this week i was a lot on the road and i saw 2 out of 500 cars that did not follow the rules and there was even one honking. It 'felt' like crazy traffic but in fact it was not.


The vast majority of people are not that great at statistics. Even something as banal as "correlation is not causation, and can often be explained by a common cause" will blow a fair few minds (e.g. telling most people that old chestnut about how the murder rate is correlated with ice cream sales).


For forecasting with no agency involved this makes sense, but when you’re executing a project things get trickier. The hard part is finding appropriate reference classes to learn from while also not overlooking unique details that can sink your specific project.


I think a lot of modern US public works fall prey to politicians who think the objective of the project is the spending of the money. That is - they push for spending on transit so they can talk about how much, in dollars, they got passed in transit funding. The actual outcomes for many of them are, at best, inconsequential.

Further cynicism could be layered in if you consider some of the blocks of donors (infrastructure contractors / RE devs / etc) and blocks of voters (union transit workers & construction workers) who are recipients again of the spending but not the outcomes.

Finally, a lot of the problems come from a long gap of not doing capital projects, which hollowed out state capacity as it was outsourced. If you outsource your planning, they are less incentivized to re-use existing cookie-cutter plans for subway stations. If you outsource your project management, they are less incentivized to keep costs down. Etc.


Personal suspicion is that real leadership is dull and thankless (like so many things in life).

Announcing a big new transit project is exciting. Actually running the program well requires a lot of boring study, meetings, and management of details. Why bother, if the voters don't punish them for not doing it?

You can find endless internet posts by people complaining their manager doesn't want to do the scheduling of employees, which is the most basic part of their job. It's too tedious, so they try to avoid it.


> Further cynicism could be layered in if you consider some of the blocks of donors (infrastructure contractors / RE devs / etc) and blocks of voters (union transit workers & construction workers) who are recipients again of the spending but not the outcomes.

It's probably best to think of an initial budget as a foothold, and that its ideal amount is low enough to be approved, but high enough to prevent the organization from changing direction after realizing that it's not going to be nearly enough i.e. to think in terms of "pot-commitment."


This book is by Bent Flyvbjerg, the lead author of the paper being discussed here. He’s done a lot of great scholarship on how megaprojects go haywire, and uniqueness bias is definitely a big piece of it.

You especially see this bias with North American public transit. Most of our transit is greatly deficient compared to much of the rest of the world, and vastly more expensive to build (even controlling for wages). But most NA transit leadership is almost aggressively incurious about learning from other countries, because we are very unique and special and exceptional, so their far better engineering and project management solutions just wouldn’t work here and aren’t even worth considering.


> 99.5% of those projects overrun their timeline or budget.

This does not shock me. In the corporate world, plans are not there to be adhered to, but only to give upper management a feeling of having tightened the rope for those pesky engineers who wanted to work at a lazy pace. Such feeling usually vanishes as soon as reality kicks in.


I dunno, upper management normally move on to greener pastures long before reality comes crashing down. That does not happen until two managers later. But that's okay, the new plan will fix everything.


This is why your project shouldn't be too small.

If a project is projected to be finished in 6 months, the current manager will still be there, and the success or failure will reflect on their record. It can only go wrong and reflect badly on them.

If a project will take 3 years, the manager can already collect their points for initiating a project with an incredible business case and innovative approach, leave after 18 months, and after a further six months, the new manager can say 'wow, my predecessor left a big mess, I'll clean it up/kill it'.


> Other reasons for slipping include optimism bias, not relying on the right anchor, and strategic misrepresentation.

Optimism and misrepresentation are WAY more important.

Most engineers I know are a bit optimistic even with other engineers. This leads to slippage because a subtask can only come in early by a finite amount, but it can come in late by an almost infinite amount of time (see the toy simulation at the end of this comment).

In addition, most engineers are acutely aware of what they think the project would take vs. what number management was willing to hear to launch the project.

Combine both of these and your project will never come in even remotely close to the estimates.
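A toy simulation of that asymmetry (all numbers made up): if each subtask can beat its estimate by at most ~20% but has a long late tail, the project as a whole almost never comes in at the sum of the estimates.

    import random

    random.seed(42)

    def project_duration(n_tasks=20, estimate=10.0):
        # Per-task multiplier: at best 0.8x the estimate, but lognormal on the late side.
        return sum(estimate * max(0.8, random.lognormvariate(0, 0.5)) for _ in range(n_tasks))

    runs = [project_duration() for _ in range(10_000)]
    planned = 20 * 10.0
    print("share of projects over plan:", sum(r > planned for r in runs) / len(runs))  # close to 1.0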


And sometimes the reality is that the realistic answer to a time estimate is "If we are lucky it takes me 30 minutes, if we are unlucky 30 days".

E.g. when it turns out, to your surprise, that a part you had in your hardware design was replaced with a part that was 5 cents cheaper but uses an undocumented protocol that someone has to re-implement, so a goal that was trivial in theory suddenly involves hardcore reverse engineering in a high-pressure environment. A thing all engineers love.

The only time someone can give you a reasonably accurate estimate is when they are doing something they have done before, down to the tiniest detail. The problem is that in software things change constantly. A thing that was trivial to do with library X and component Y of version 0.9 might be a total pain in the rear with library Z and component Y of version 1.0.

But yeah, inexperienced engineers are going to be optimistic that it is possible, because in theory it should be trivial.


> And sometimes the reality is that the realistic answer to a time estimate is "If we are lucky it takes me 30 minutes, if we are unlucky 30 days".

“I'll put that in Project as 60 minutes, then if you are lucky you've got double time for contingency.” -- the external consultant acting as project manager.

Been there before…

Never give a best case estimate, or anything close to, even when quoting a range. Some will judge you, or worse make plans around you, based on that and little else, and it *ahem* isn't their fault if things overrun.


>In short, this bias leads planners to view their projects as unique, thereby disregarding valuable lessons from previous similar projects.

I once had a discussion about this. We had some managers who would systematically over- or underestimate their projects, mostly underestimate. I suggested that we take into account the estimation accuracy of each manager's previous projects and adjust their estimates.

They said that each project is too unique to do this. But I saw the same optimism or pessimism playing out repeatedly for the same managers when looking at the numbers.

Although, to be fair, I think accurate estimation can be bad if the person managing the project knows about the estimate: if they have more time, there's less pressure, they'll have overruns again, and the estimate becomes inaccurate again.

Hofstadter's law, to me, is less about accurate estimation and more about human psychology. If you know you have more time, you waste more time.

This is also a failure-mode in agile project management. If you don't have strict deadlines, it's easy to fall into infinite iteration, because there's always something that could be done better.


Cannot recommend that book enough


(Ozgun from Ubicloud)

Hey there, I know you meant something different, but I just wanted to chime in. At Ubicloud, we're building an open source cloud and our managed solution runs on Hetzner: https://www.ubicloud.com/docs

We're planning to tackle a managed k8s service next. If you have any feedback for our roadmap, could you drop us a line?


I'm curious, have you had people walk away from using Ubicloud because it's a Ruby-based infrastructure project as opposed to something like Go or even Python?


Hi there, not that I know of.

Our control plane is in Ruby, but our data plane uses different open source projects for managed services. For example, we use Linux KVM (C) and Cloud Hypervisor (Rust) for compute, SPDK (C) for block storage, and Kubernetes (Go), well, for Kubernetes.

My cofounder Daniel will talk about why we picked Ruby at the YC meetup next week. Maybe we'll turn that into a blog post. Happy to hear your perspective as well!


(Ozgun from Ubicloud)

Looks like every GitHub Actions provider is on this thread. :) I also wanted to chime in with two thoughts.

First, we recently wrote about how we enabled arm64 VMs at Ubicloud. Any feedback on the technical details is more than welcome: https://news.ycombinator.com/item?id=40491377

Second, I feel runs-on's analysis is a bit unfair to us. Ubicloud's x64 / arm64 performance and queue times above are as good as any (we deprecated the AMD EPYC 7502 line). For example, RunsOn arm64 queue times are 31|42 seconds and Ubicloud queue times are 17|24 seconds.

But the analysis says the following, "Be aware that Ubicloud has low CPU speeds and high queue times, which means they won’t necessarily end up being cheaper than competitors." I fail to see this conclusion from the numbers above. What am I missing?


The queue time for x64 appears to vary quite a bit still. But you are correct that it is much better than what it was a few weeks back, when the analysis was written (it was not unusual to see 60s+ spikes).

Also note that I specifically mentioned that "all third-parties are good on that front, expected [sic] Ubicloud (but that may change)."

Also agreed that now that you have removed the outdated CPUs (they were still active at the end of May), the analysis should mention that you now have OK CPUs. I will fix it.

For providers that are on this page, do not hesitate to write to me if you find the analysis outdated for your specific service.

[edit] page is now up to date with new analysis


Hey there, thank you for this update. It's your analysis, so you're free to mention Ubicloud as you'd like.

Still, you updated the above statement to, "Be aware that Ubicloud has a slightly lower CPU speed and somewhat variable queue times for x64 (but improving)."

Could you clarify what you meant by variable queue times?

According to the benchmarks, Ubicloud queue times for x64 were 18|44 secs at p50|p95 over the past month. I agree that our p95 number is higher. At the same time, RunsOn's AWS numbers are 31|36 secs. So I feel that drawing a conclusion from our p95 variance number for x64 is a bit unfair to Ubicloud.

Thank you for compiling this benchmark btw. Now, we'll follow it closely.


RunsOn is the only on-premise solution of the bunch, so it is single tenant, and as such it cannot easily have pools of 10s of machines on standby to start within 10s. So on queue time I’m comparing Ubicloud against the other SaaS providers, and anything higher than 20s is not especially great in that case. Plus you have high variance, which I personally consider an issue.

Once this improves I’ll be sure to update it.


I read through the entire report and it gradually got more interesting. Then, I got to the very end, saw Andres Freund's name, and it put a smile on my face. :)

Who else would have run a PostgreSQL performance benchmark and discovered a major security issue in the process?

