Yeah, but a pragmatic decision like a $40/month box won't go down well with your now very bored team of SREs, DevOps and Distributed Systems Engineers(tm), who demand more playtime with the magic cloud toybox (for a 100-DAU internal app).
Those people will actually be grateful they don't have to spend weeks debugging yet another hidden cloud-provider gotcha or bug and convincing support that they're right. They'll be very happy now that they can deliver value (at relative scale) and really speed up processes. I know I am.
Good ol' bare metal is real nice, but it won't save you from application complexity, security requirements, and so on - you still need to manage it somewhat. At least if you're not a startup still looking for market fit.
I'm not sure about that. We get paid well for working with (or around) public cloud complexity, but I bet many of us would gladly manage much simpler setups like the Hetzner Cloud.
If they'd be the ones managing it - at my company at least - the real question is whether they want to manage it (and know what that means). Usually they don't, in my experience.
Dang, that is a fantastic deal. The €100/month one is even better - DDR5 RAM, 2 TB NVMe RAID 1, and it's all customizable too. Just have to wait for Ubuntu 24.04 to be available, and I might have to make this switch.
It's more than a box in a rack. These providers actively monitor and fix these boxen. They can all be rebooted remotely as if you'd physically hit the button, and you've got interfaces to access the machine as if you were logged in at a physical DB-9/RS-232/Ethernet/whatever console, etc.
It's not just "space in a rack and you deal with the servers yourself and you come to fix them if they break".
You're missing the point. There's a massive difference between getting a box as a service and getting a highly available, regionally distributed service with a semblance of an SLA on bandwidth to anywhere on the planet. To quote a former manager, it's not even apples and oranges, it's apples and pumpkins. They simply aren't in any way the same scope.
What do you mean? The big bare metal providers have multiple datacenters with fat pipelines and peering. You can put those geo-separated servers in the same subnet.
Europe and the US are covered; Asia does look like a black spot for Hetzner, I'll give you that.
I doubt that many businesses have the size and global reach for worldwide latency to be a priority. If it were, the average site wouldn't connect to 20 domains to load megabytes of JavaScript, trackers and whatnot.
(Also, at Hetzner you can even rent their network hardware if you want to make custom solutions.)
I don't know why you are suggesting that people will build unreliable things. My company's first day on AWS was the first time AWS had a major outage. We were sold on it being more reliable, but I can tell you AWS is just as reliable as home-grown BS, if not less so. The biggest difference is what you do during downtime: on AWS, you refresh status pages. On your own hardware, you're actually problem-solving and able to build/deploy workarounds to get back up within 30 minutes.
Having worked for one of the major cloud providers in the past, I second this.
In fact, I have no idea why people go for those. You pay a massive premium for the "privilege" of not owning the infrastructure, and being subject to opaque pricing and outages that are completely beyond your control.
And in terms of actually managing the stuff, now you have to pay staff to manage your cloud things too.
Hetzner is actually offering their Hetzner Cloud and it's a joy to work with because of its simplicity (think about the early days of AWS). You can do everything in Terraform or via their CLI if you prefer. Setting up a full k8s cluster takes maybe 10 minutes including all configuration.
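To give a sense of the simplicity: Terraform and the CLI are thin layers over one small REST API. Here's a rough Python sketch against the documented https://api.hetzner.cloud/v1/servers endpoint - note the token comes from your own project, and the server type, image and location names below are just illustrative examples that may not match whatever the catalogue offers today:

    # Rough sketch: create a Hetzner Cloud server via the public REST API.
    # HCLOUD_TOKEN is a project API token; the server_type/image/location
    # values are illustrative and may need swapping for current offerings.
    import os
    import requests

    API = "https://api.hetzner.cloud/v1"
    headers = {"Authorization": f"Bearer {os.environ['HCLOUD_TOKEN']}"}

    resp = requests.post(
        f"{API}/servers",
        headers=headers,
        json={
            "name": "demo-node-1",
            "server_type": "cx22",    # small shared-vCPU plan (example)
            "image": "ubuntu-24.04",  # example image name
            "location": "fsn1",       # Falkenstein (example)
        },
        timeout=30,
    )
    resp.raise_for_status()
    server = resp.json()["server"]
    print(server["id"], server["public_net"]["ipv4"]["ip"])

The Terraform provider and the official CLI wrap the same handful of resources, which is why a full cluster bootstrap stays in the ten-minute range.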
And this is just for the "dumb" EC2 instance; the markup on their "smarter" stuff is probably much higher. In general I understand why one would want to start off in the cloud, but staying there for 10+ years is quite absurd given the costs.
Exactly, it's the same with autonomous driving or any other activity (even art!) that has always required a human. While it can "look cool" and seem like it "understands the context" on the surface, when diving deeper it will _always_ misunderstand the nuance, because it doesn't have all of the lived human experience and reasoning.
Being even 80% (or 90 or even 95) there isn't enough - something will always be missed because it's only able to "reason" probabilistically within a narrow area not far away from the training data.
Funny how I've heard from an Azure employee who worked with many big clients that very few among them cared about security - the incentives were just not there.
Seems like they're finally doing something about that, to set an example for the rest of the industry.
I doubt that's the case. I've been working in/near enterprise sales for quite a while now. Security is considered unglamorous table-stakes: companies won't buy your stuff because you're doing all the right things, but they'll definitely not buy your stuff if you're not.
Giant products like AWS and Azure are too big to grill about their security controls. If you try to ask an AWS rep about something, they'll direct you to their security portal where you can download a SOC2 report and a few other things. That's about all you'll get from them unless you're equally huge. The most you can really go by is their reputation. If you trust AWS, buy their product. If you don't, don't. That's all the prior research a typical < 10,000 employee business can possibly do.
My suspicion is that your friend is only talking to clients who've vetted Azure and figured "it's Microsoft: they're big so they probably know more about it than I do". It's not that they don't care. It's that there's nothing they can do about it. The people who don't already trust Azure would never have gotten as far as talking to your friend in the first place.
We're getting drowned in security checklists from clients now.
A lot of them don't make much sense for us; we primarily make a Win32 B2B program that these customers host themselves, and a lot of the checklists are all about more generic web SaaS things (because we charge like SaaS). But the person on the other end wants all the questions answered regardless.
Seems that as long as you can put a checkmark in a box that you follow various "best practices" and whatnot, actual details don't matter. You put a checkmark in a box, you did your best.
From being on the buying side, it's likely that the person sending you that questionnaire knows a lot of it is irrelevant to your situation, but they're personally reviewing 100 vendors this year (no, seriously) and there aren't enough hours in the week for them to make exceptions for everyone.
Very often the best answer would be like:
> Q: Do you use multi-tenant databases?
> A: N/A: you'll be deploying our product on your own server.
That's actually a perfectly fine answer! The person reading it doesn't have to explain large gaps in the answers to their boss. It documents why this isn't relevant in a way their successor can easily understand next year when they're reviewing those 100 vendors as part of their annual Vendor Management Policy™ process.
You can write whatever you want. Nobody is ever looking at that document again. By the time the annual process rolls around, the process has already changed so much that the previous reviews are now insufficient. The mandate from management will be to "do it right for future vendors, but blanket approve the previously signed agreements"
It's the same thing every time because the actual security is in the details, but details are so fucking boring.
I get it! I’ve been on both sides of that table many times.
If you see the same questions over and over and over again, consider filling out a SIG LITE questionnaire and offering that to buyers from the start. If you can give them all or most of the info they need in a common format, you might be able to head off a lot of follow-up questions.
FWIW from my own experience with auditors, the process is really kind of superficial. Yes, they can identify the most common checklist-based gaps, and that's what tends to be the low-hanging fruit for attackers as well. But they would never go deep enough to identify something that a determined attacker could exploit.
It really depends on the team. Painting Microsoft engineering with broad strokes is impossible because it's a patchwork of business units and teams that rarely communicate or work together unless forced to. Some teams are very visible and have top talent that prioritizes and thinks about security. Some services do not... the problem is that security is very much a "you're only as strong as your weakest link" kinda thing.
This is a step in the right direction to get the top-layer prioritizing security.
Couldn’t agree more. I can’t emphasize enough how BIG Microsoft is and how many dimensions of security there are. Nobody has as many attack vectors as we do. I’m pretty confident in saying that. It’s a super hard problem and nearly impossible to enforce all of them from an organizational standpoint. But this is a great step in trying to do so.
That's true - he was talking about clients, though, if memory serves.
The main challenge he highlighted is that there are no financial incentives for most companies in the industry to stay secure (unless you're a security company) - the punishment (including reputational risk) is just way too small.
How could you possibly secure anything when the other half of the company is shoving ads into the Start menu and implementing "AI search" that records every keystroke and bit of on-screen text in perpetuity?
Even the good people at Microsoft will forever be undermined by this shit - complete demoralization, throwing their hands up at doing anything properly.
I would love an LLM that says "I don't know" when it doesn't know, rather than extremely firmly asserting "this is the answer," only for that answer to be 100% incorrect - not even sort of correct.
It seems unrealistic to anticipate stronger AI that doesn't hallucinate. We're chasing a human-style intelligence and that is known to hallucinate like crazy (a lot of the most intelligent humans turn out to be crackpots - Bobby Fischer was one of the best meat-based chess engines for example).
The vast majority of humans - even intelligent humans - do not "hallucinate like crazy."
Given a list of episode descriptions of Gilligan's Island, the vast majority of humans - even intelligent humans - would either be able to discern the correct answer or say they don't know.
I understand why there is this drive to present the normal human mental and psychological baseline as being just as unstable as LLMs - there is just too much money behind LLMs not to want to aggressively normalize their faults as much as possible (just as with the faults of autonomous driving) - but any human being who hallucinated or confabulated as regularly as LLMs do would be considered severely mentally ill.
> any human being who hallucinated or confabulated with as much regularity as LLMs would be considered severely mentally ill.
ie, it is common enough that we have a label for it. And the stats on how many people have a mental illness are not encouraging. If you put a little fence around the people hallucinating and dehumanise them then sure, humans don't hallucinate. The problem with that argument is they are actually still people.
>ie, it is common enough that we have a label for it.
Having a label for something doesn't imply that it's common. We have labels for plenty of rare things as well.
Also, "mental illness" is a far more broad category than what's being discussed, which is specifically symptoms that resemble the hallucinations and confabulations of LLMs, at the frequency with which LLMs display them. Most mental illness doesn't involve hallucinations or confabulations That is not common in humans, in LLMs it's normal.
>If you put a little fence around the people hallucinating and dehumanise them then sure, humans don't hallucinate.
I'm not dehumanizing anyone, this isn't a rational argument, it's just an ad hominem.
> The problem with that argument is they are actually still people.
The problem is that isn't the argument, and you can't attack the argument on its merits.
The simple, plain, demonstrable, non-prejudiced fact is that LLMs confabulate and hallucinate far more than human beings. About 17% to 38% of normal, healthy people experience at least one visual hallucination in their lifetime. But hearing voices and seeing things, alone, still isn't what we're talking about. A healthy, rational human can understand when they see something that isn't supposed to be there; their concept of reality and their ability to judge it doesn't change. When it does, that's schizophrenia, which would more accurately model what happens with LLMs. About 24 million people have schizophrenia - 0.32% of the population. And not even all schizophrenics experience the degree of reality dysfunction present in LLMs.
You are claiming that, in essence, all human beings have dementia and schizophrenia and exhibit the worst-case symptoms all the time. If that were true, we wouldn't even be able to maintain the coherence necessary to create an organized, much less technological, society. And you're claiming that the only reason to believe otherwise must be bigotry against the mentally ill. Even your assertion upthread, that "a lot of the most intelligent humans turn out to be crackpots," isn't true.
Stop it. Stop white knighting software. Stop normalizing the premise that it isn't worth being concerned about the negative externalities of LLMs because humans are always worse, and thus deserve the consequences. The same attitude that leads people to state that it doesn't matter how many people autonomous cars kill, humans are categorically worse drivers anyway. I can't think of many attitudes more dehumanizing than that.
> I'm not dehumanizing anyone, this isn't a rational argument, it's just an ad hominem.
Well, you led with "The vast majority of humans - even intelligent humans - do not 'hallucinate like crazy'" and then followed up by identifying a vast category of humans who do, literally, hallucinate like crazy. Unless you want to argue that mental illness is actually the appropriate mindset for viewing the world. Anyhow, you probably want to include an argument for why you think it's OK to exclude them.
Humans hallucinate continuously. If you test them in any way, it is common to get nonsense answers. The difference is that it isn't polite to ask humans questions that expose the madness; people tend to shy away from topics that others routinely get wrong.
It is quite hard to explain a typical scholastic test without hallucinations - particularly the mistakes people make in maths, spelling, and the sciences. It isn't like there is some other correct answer to a math problem that someone could be confused by; people just invent operations that don't exist when questioned.
> The simple, plain, demonstrable non-prejudiced fact is LLMs confabulate and hallucinate far more than human beings.
That isn't true; the opposite is true. Humans couldn't answer the breadth of questions an LLM does without making up substantially more garbage. The only reason it isn't more obvious to you is that we structure society around not pressuring humans to answer arbitrary questions that test their understanding.