I wish CDK were fully baked enough to actually use. It's still missing coverage for some AWS services (sometimes you have to drop down to raw CloudFormation, which sucks), and importing existing infra doesn't work consistently. Oh, and it creates CloudFormation stacks behind the scenes, which makes for troubleshooting hell.
CDK is an abomination and I'm not sure why AWS is pushing it now. A few years ago all their Quick Starts were written in CloudFormation; now it's CDK that compiles down to CloudFormation. Truly a bad idea.
Just write CloudFormation directly. Once you get the hang of the declarative style and become aware of the small gotchas, it's pretty comfy.
I also had a really rough go with CDK. I personally found the lack of upsert functionality -- you can't use a resource if it already exists, or create it if it doesn't -- made it way more effort than it was worth. Add to that the lack of useful error messages... maybe I'm dumb, but I can't recommend it to small companies.
Upserting resources is an antipattern in cloud resource management. The idiom that works best is to declare all the resources you use and own their lifecycle from cradle to grave.
The problem with upserting is that if the resource already exists, its existing attributes and behavior might be incompatible with the state you're declaring. And it's impossible to devise a general solution that safely transitions an arbitrary resource from state A to state A' in a way that is sure to honor your intent.
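A minimal sketch of that "own the lifecycle" idiom in CDK (TypeScript), assuming aws-cdk-lib v2; the resource names here are illustrative, not from the thread:

```typescript
import { Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

class StorageStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Declared here, owned here: CloudFormation creates, updates, and
    // deletes this bucket. No "create if missing" branching anywhere.
    new s3.Bucket(this, 'AppData', {
      versioned: true,
      removalPolicy: RemovalPolicy.RETAIN, // keep the data even if the stack goes
    });

    // The closest CDK gets to "use if it exists" is an explicit import,
    // which at least makes the external ownership visible instead of implicit.
    const legacy = s3.Bucket.fromBucketName(this, 'Legacy', 'some-existing-bucket');
  }
}
```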
If you don't mind sharing, suppose (because it's what I was doing) I was trying to create personal dev, staging, and prod environments. I want the usual suspects: templated entries in route53, a load balancer, a database, some Fargate, etc.
If they're all meant to look alike, you'd deploy the stack (or app, in CDK parlance) into your dev, staging, and prod accounts. You'd get the same results in each.
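Roughly what that looks like, as a sketch: `EnvStack` is a hypothetical stack standing in for the Route 53 records, load balancer, database, and Fargate service, and the account IDs are placeholders.

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';

// Hypothetical stack: would declare the Route 53 entries, load balancer,
// database, Fargate service, etc. shared by every environment.
class EnvStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // ...resource declarations...
  }
}

const app = new App();

// Same template, three accounts: each environment gets identical resources.
new EnvStack(app, 'Dev',     { env: { account: '111111111111', region: 'us-east-1' } });
new EnvStack(app, 'Staging', { env: { account: '222222222222', region: 'us-east-1' } });
new EnvStack(app, 'Prod',    { env: { account: '333333333333', region: 'us-east-1' } });

app.synth();
```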
Is this really an AWS issue? Sounds like you were just burning CPU cycles, which is not AWS related. The WebSockets angle makes it sound like it was a data transfer or API Gateway cost.
Neither the title nor the article paints it as an AWS issue; they frame it as a websocket issue, because the protocol implicitly requires all transferred data to be copied multiple times.
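For reference, one copy the protocol forces is the RFC 6455 masking step: every client-to-server payload byte must be XORed with a 4-byte key, so the sender has to walk (and, in most implementations, copy) the whole buffer. A minimal TypeScript sketch of just that step:

```typescript
// RFC 6455 §5.3: client-to-server frames MUST be masked with a 4-byte key.
// Masking alone forces a full pass over the payload; writing the result
// into a fresh buffer (as most implementations do) is one of the copies.
function maskPayload(payload: Uint8Array, maskKey: Uint8Array): Uint8Array {
  const masked = new Uint8Array(payload.length); // the extra copy
  for (let i = 0; i < payload.length; i++) {
    masked[i] = payload[i] ^ maskKey[i % 4];
  }
  return masked;
}
```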
I disagree. Like @turtlebits, I was waiting for the part of the story where websocket connections between their AWS resources somehow got billed at Amazon's internet data egress rates.
Only for people that just read headlines and make technical decisions based on them. Are we catering to them now? The title is factual and straightforward.
I doubt they would have even noticed this outrageous cost if they were running on bare-metal Xeons or Ryzen colo'd servers. You can rent real 44-core Xeon servers for like, $250/month.
I think they meant that Hetzner is offering customers specific machines they know to be faulty and should have EOL'd, not that they use deprecated CPUs.
It's not scary, it's part of the value proposition.
I used to work for a company that rented lots of Hetzner boxes. Consumer-grade hardware with frequent disk failures was just what we accepted for saving a buck.
AWS was working “fine” for about 10 years without live migration, and I’ve had several individual machines running without a reboot or outage for quite literally half a decade. Enough to hit bugs like this: https://support.hpe.com/hpesc/public/docDisplay?docId=a00092...
Anyway, depending on individual nodes to always be up for reliability is incredibly foolhardy. Things happen; the cloud isn't magic. I’ve had instances become unrecoverable, though it is rare.
So I still don’t understand the point; that was not exactly relevant to what I said.
I know serious businesses using Hetzner for their critical workloads. I wouldn’t unless money is tight, but it is possible. I use them for my non-critical stuff; it costs so much less.
I don't know anything about Colo Crossing (are they a reseller?) but I would bet their $60 per month 4-core Intel Xeons would outperform a $1,000 per month "compute optimized" EC2 server.
For $1,000 per month you can get a c8g.12xlarge (assuming you use some kind of savings plan) [0]. That's 48 cores, 96 GB of RAM, and 22.5+ Gbps networking. Of course you still need to pay for storage, egress, etc., but you seem to be exaggerating a bit. They do offer a 44-core Broadwell / 128 GB RAM option for $229 per month, so AWS is more like a 4x markup [1]. The c8g would likely be much faster at single-threaded tasks, though [2][3].
Wouldn't a c8g.12xlarge with 500 GB of storage (only EBS is possible), plus 1 Gbps from/to the internet, come to about 5,700 USD per month? That's some discount you have.
If I try to match the actual machine: 16 GB RAM, and a rough estimate is that their Xeon E3-1240 would be ~2 AWS vCPUs, so an r6g.large is the instance that would roughly match this one. Add a 500 GB disk + 1 Gbps to/from the internet and... monthly cost: 3,700 USD.
Without any disk and without any data transfer (which would make it unusable), it's still ~80 USD. Maybe you could create a bootable image that calculates primes.
These are still not the same thing, I get it, but ... it's safe to say you cannot get anything remotely comparable on AWS. You can only get a different thing for way more money.
That's not 48 dedicated cores, it's 48 "vCPUs". There are probably 1,000 other EC2 instances running on those cores stealing all the CPU cycles. You might get 4 cores' worth of actual compute throughput, which is what I was saying.
That's not how it works, sorry (unless you use burstable instances, like T4g). You can run them at 100% as long as you like, and you get the same performance (minus a small virtualization overhead).
Are you telling me that my virtualized EC2 server is the only thing running on the physical hardware/CPU? There are no other virtualized EC2 servers sharing time on that hardware/CPU?
If you are talking about regular EC2 (not the T series, Lambda, Fargate, etc.), you get the same performance (within, say, 5%) as the underlying hardware. If you're using a core, it's not shared with another user. The pricing validates this: the "metal" version of a server on AWS is the same price as the full regular EC2 version.
In fact, you can even get a small discount with the -flex series if you're willing to compromise slightly: 100% of the performance, 95% of the time.
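One way to check the "stolen cycles" claim on your own instance rather than argue about it: on a Linux guest, the steal column of /proc/stat counts time the hypervisor handed your vCPU's cycles to someone else. A quick Node/TypeScript sketch (assumes Linux):

```typescript
import { readFileSync } from 'node:fs';

// Reads the aggregate "cpu" line of /proc/stat and reports the share of
// CPU time stolen by the hypervisor since boot. On a non-burstable EC2
// instance under full load this should stay near zero.
function cpuStealPercent(): number {
  const fields = readFileSync('/proc/stat', 'utf8')
    .split('\n')[0] // "cpu  user nice system idle iowait irq softirq steal ..."
    .trim()
    .split(/\s+/)
    .slice(1)
    .map(Number);
  const total = fields.reduce((sum, v) => sum + v, 0);
  const steal = fields[7] ?? 0; // 8th field is steal time
  return (steal / total) * 100;
}

console.log(`steal: ${cpuStealPercent().toFixed(2)}%`);
```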
This seems pretty wild to me. Are you saying that I can submit instructions to the CPU and they will not be interleaved and the registers will not be swapped-out with instructions from other EC2 virtual server applications running on the same physical machine?
I hate negotiation with a passion. However, IME, just being semi-transparent about your max offer lets the other party come back with new numbers without you having to name a specific figure.
Another tactic is to ask for the monetary value of additional benefits. E.g., if one company provides parking/commuter/better health coverage benefits, ask the companies that don't for an additional $X per year.
That also makes it look like you're trying to equalize total comp instead of asking for an arbitrary higher number.