lclarkmichalek's comments

In absolute terms, yep. In marginal terms, not so much. See also: paradox of value

Highways are pretty safe. The road is designed from start to finish to minimise the harm from collisions. That’s not true of urban streets

I think if you regulated coal on a linear no threshold risk model, you'd find the costs to be somewhat closer.


Coal is already losing, and things are only getting worse for steady state production.

Grid solar drives wholesale rates really low for most of the day, long before new nuclear gets decommissioned. If nighttime rates rise above daytime rates, a great deal of demand is going to shift to the day. That forces nuclear to try to survive on peak pricing, but batteries cap peak pricing over that same timescale.

Nuclear thus really needs to drop significantly below current coal prices or find some way to do cheap energy storage. I'm somewhat hopeful about heat storage, but then you need a lot of turbines and cooling capacity that's only useful for a fraction of the day. On top of that, heat storage means a lower working temperature, costing you thermodynamic efficiency.
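To put rough numbers on that last point (this is just the idealized Carnot limit, and the temperatures are illustrative assumptions, not plant data):

    # Carnot limit: efficiency = 1 - T_cold / T_hot. Real plants do worse,
    # but the relative hit from a lower working temperature is the point.
    def carnot(t_hot_k: float, t_cold_k: float) -> float:
        return 1.0 - t_cold_k / t_hot_k

    T_COLD = 300.0  # ~27 C cooling water
    print(f"reactor loop at ~580 K:       {carnot(580.0, T_COLD):.0%}")  # ~48%
    print(f"after heat storage at ~500 K: {carnot(500.0, T_COLD):.0%}")  # ~40%

Losing several points of efficiency is a lot when the whole argument is already about cost.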


There's also the converse argument, to governments that look to infrastructure as the secret to all prosperity - America succeeds without infrastructure, somehow.


Natural monopolies exist, for sure. Your insurance example is odd though - insurance markets are generally highly competitive. The recent cases where we’ve seen a loss of competition in the market (CA home insurance, for example) have been driven by regulators imposing price controls.

The issue with healthcare is that providers have leverage over insurers, not that there is a lack of competition for insurance.


No, the problem with insurance is not that providers have leverage over insurers. The problem is that people buying insurance have imperfect information about what that insurance will and won't cover when they buy it. Even after you get insurance, trying to find a list of things that are covered is impossible. Even if you call the insurance company, they won't tell you.

How is a discerning buyer supposed to choose insurance based on anything but price when that is the only information available?

Then there's the entire bureaucracy that is medical billing. You have to know which obscure diagnosis codes can be used with which similarly obscure treatment codes. None of those codes ever exactly match what is happening to the patient, so you have to choose one that is close enough and hope the insurance agrees.

You ever wonder why it takes 6+ months to get a medical bill? That's why. It has to be processed by the medical billing bureaucracy until it bears only the slightest resemblance to reality and then shuttled back and forth between the provider, the insurance filing system, and the insurance underwriter. Only once that is done can they send you a bill.

How much cheaper could providers offer service if they didn't have to pay dedicated staff to play some perverse game of telephone with the insurance company?


The usual issue is the addition of control loops without much understanding of the signals (CPU utilization is a fun one), and the addition of control loops without consideration of other control loops. For example, you might find that your cross-region load balancer gets into a fight with your in-process load shedding, because the load balancer's signals don't account for load shedding (or the way they account for it is inaccurate). Another issue is the addition of control loops that optimize service-local outcomes to the detriment of global outcomes.

My general take is that you want relatively few control loops, in positions of high leverage.
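As a toy sketch of that balancer-vs-shedding fight (all the numbers and signal names here are made up, not any real system's behaviour): the balancer routes on reported CPU, shedding caps reported CPU on the overloaded replica, so the balancer never learns to send that replica less traffic.

    # Toy model of two control loops interacting badly. Replica "a" is
    # degraded; in-process shedding keeps its reported CPU capped, so the
    # balancer keeps over-routing to it while "b" has spare capacity.
    CAPACITY = {"a": 0.3, "b": 1.0}  # relative capacity; "a" is degraded
    SHED_AT = 0.8                    # shedding engages above this utilization
    TOTAL_LOAD = 1.2

    def reported_cpu(offered: float, cap: float) -> float:
        # Shedding drops excess work, so the replica never looks busier than SHED_AT.
        return min(offered / cap, SHED_AT)

    def route(loads: dict) -> dict:
        # Balancer sends traffic inversely proportional to reported CPU.
        cpu = {r: reported_cpu(loads[r], CAPACITY[r]) for r in loads}
        w = {r: 1.0 / max(cpu[r], 0.05) for r in loads}
        return {r: TOTAL_LOAD * w[r] / sum(w.values()) for r in loads}

    loads = {"a": 0.6, "b": 0.6}
    for step in range(6):
        loads = route(loads)
        shed = {r: max(0.0, round(loads[r] - CAPACITY[r], 2)) for r in loads}
        print(step, {r: round(v, 2) for r, v in loads.items()}, "shed:", shed)

The shedding on "a" never clears, because the signal the balancer acts on is exactly the one the shedding loop suppresses.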


What does this have to do with the topic being discussed?


Because it is about people speculating on events that seem connected to their own experience, but in actuality aren’t, because they don’t understand the breadth of the distribution of the abstraction they are discussing.

This happens when your terms are underspecified. Someone says "Netflix's servers are struggling under load", and people working on similar efforts know that's basically just equivalent to "something is wrong"; the whole conversation is esoteric to most people outside a few specialized teams. But everyone else jumps to conclusions and starts having conversations based on their own experience with whatever seems (to them) related, and usually fashionable, because that is how most smaller players figure out how to do things.


In short, people with glib answers tend to rely on oversimplified models that don't reflect reality.


It does happen in prod. Usually due to virtual FSes that rely on get_next_ino: https://lkml.org/lkml/2020/7/13/1078
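Roughly, a minimal sketch of the failure mode (a toy model, not the kernel's actual code): get_next_ino hands out inode numbers from a shared 32-bit counter with no collision check, so after enough allocations the counter wraps and a number can be reused while an older file still holds it.

    # Toy model of a wrapping inode-number generator (not the real kernel code).
    UINT_MAX = 2**32

    class ToyVfs:
        def __init__(self):
            self.next_ino = 1
            self.live = {}  # ino -> name, for files that still exist

        def get_next_ino(self):
            ino = self.next_ino
            self.next_ino = (self.next_ino + 1) % UINT_MAX or 1  # wraps, skips 0
            return ino  # note: never checked against self.live

        def create(self, name):
            ino = self.get_next_ino()
            if ino in self.live:
                print(f"collision: {name!r} and {self.live[ino]!r} share ino {ino}")
            self.live[ino] = name
            return ino

    vfs = ToyVfs()
    vfs.create("long-lived-file")     # gets ino 1 and sticks around
    vfs.next_ino = UINT_MAX - 1       # fast-forward ~4 billion allocations
    vfs.create("churn-1")             # ino 4294967295
    vfs.create("churn-2")             # counter wraps back to 1 -> collision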


That method is wrapping and not checking for collisions? I would not call that a problem of running out then. It's a cheap but dumb generator that needs extra bits to not break itself.


There is a limit on reliable usage of the FS. Call it what you want. The user doesn't particularly care.


What I'm trying to say is that the problem you're describing is largely a separate problem from what kbolino is describing. They are both real but not the same thing.


I think a fair few of them were created because they knew a bit too much about FOQS


Personally, there's no way I'd want a customer initiated operation to trigger something like terraform or mess with DB schemas. On the security side, it would significantly complicate the permissions structure from the application to the database. And on the performance side, I have absolutely no mental model for how operations like that scale, and how trivial of a DoS I'm exposing myself to. At the same time, I love the isolation (mostly operationally, the security & privacy side is also nice) that db-per-customer would bring. If this product helps bridge the gap, then it sounds good to me.


The last project I worked on was a mix of on-prem software and cloud software.

The cloud counterpart had 600+ MongoDB databases split amongst 3 Mongo clusters.

The integration team usually took 2 weeks to set up the on-premises software, and the cloud stuff took about a minute. The entire setup for the cloud was a single form that the integration team filled in with data.

The point I'm trying to make is that if your customers require separate infra, they can wait a business day to be set up. Meanwhile they can play in a sandbox environment.

It's also doable in a fully automated fashion, but you'll need strong identity and payment verification to avoid DoS, and in those cases contracts are usually flying around anyway.

That's for the b2b side.

For b2c, you usually rely on a single db and filter by a customer ID column or similar, which can easily be abstracted away (rough sketch below).
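A minimal sketch of what that abstraction can look like (table and column names here are made-up assumptions):

    # Minimal sketch of abstracting b2c tenant filtering: every query goes
    # through a tenant-scoped helper, so application code can't forget the
    # customer_id filter. Schema and names are illustrative assumptions.
    import sqlite3

    class TenantDB:
        def __init__(self, conn, customer_id):
            self.conn = conn
            self.customer_id = customer_id

        def select(self, table, where="1=1", params=()):
            # The tenant filter is applied here, once, for every query.
            sql = f"SELECT * FROM {table} WHERE customer_id = ? AND ({where})"
            return self.conn.execute(sql, (self.customer_id, *params)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer_id INTEGER, item TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, "widget"), (1, "gadget"), (2, "gizmo")])

    db = TenantDB(conn, customer_id=1)
    print(db.select("orders", "item LIKE ?", ("%g%",)))  # only customer 1's rows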


You've rather explained the value prop of this product, then: the benefits of isolation without the one-business-day wait.


What exactly is the value prop, though? To a technical person a one-business-day wait seems dumb, but few businesses move so fast that waiting a single day matters.


But it'll take 10 business days to get an OK from management and other departments.

