
Have you contacted support? I've had one for over 4 years (and used up 2 packs of nibs) and it writes perfectly; I much prefer it to the iPad.

Not sure if you have introduced an artificial delay, but deduping ~25 rows shouldn't take 5+ seconds...

edit: I see you're using an LLM, but " ~$8.40 per 1k records" sounds unsustainable.


I wish CDK was fully baked enough to actually use. It's still missing coverage for some AWS services (sometimes you have to do things in cloudformation, which sucks) and integrating existing infra doesn't work consistently. Oh and it creates cloudformation stacks behind the scenes and makes for troubleshooting hell.

> sometimes you have to do things in cloudformation, which sucks

All of CDK does things in cloudformation, which made the whole thing stillborn as far as I’m concerned.

The CDK team goes to some lengths to make it better, but it’s all lambda based kludges.


CDK is an abomination and I'm not sure why AWS is pushing it now. A few years ago all their Quick Starts were written in CloudFormation, now it's CDK that compiles to CloudFormation. Truly a bad idea.

Just write CloudFormation directly. Once you get the hang of the declarative style and become aware of the small gotchas, it's pretty comfy.


> Just write CloudFormation directly. Once you get the hang of the declarative style and become aware of the small gotchas, it's pretty comfy.

Exactly this. And don't make huge templates; split stuff logically into several stacks and pass vars via Export/ImportValue.


The biggest hurdle I've encountered is cross-stack resource sharing, especially in the case of bidirectional dependencies like KMS keys and IAM roles.

The biggest hurdle is when you want to refactor your stacks, and you pretty well just can't, without risk of deleting everything

> you pretty well just can't, without risk of deleting everything

This is one hyper annoying area.

It is possible to get around it, but it's ugly: drop to L1 and override the logical ID:

   let vpc = new ec2.Vpc(this, 'vpc', { natGateways: 1 })
   let cfnVpc = vpc.node.defaultChild as ec2.CfnVPC
   cfnVpc.overrideLogicalId('MainVpc')
You have to do this literally for every resource that's refactored.

For us, we run 2 stacks. One that basically cannot/should-not be deleted/refactored. VPC, RDS, critical S3 buckets - i.e. critical data.

The 2nd stack runs the software and all those resources can be destroyed, moved whatever w/o any data loss.
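As a sketch of that split (all names and resources here are invented for illustration, not the commenter's actual code), the two-stack layout might look like:

```typescript
// Hedged sketch of a "stateful stack / app stack" split in CDK v2.
import { App, Stack, RemovalPolicy } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as s3 from 'aws-cdk-lib/aws-s3';

const app = new App();

// Stack 1: critical data that should never be deleted or refactored.
const stateful = new Stack(app, 'StatefulStack');
const vpc = new ec2.Vpc(stateful, 'Vpc', { natGateways: 1 });
new s3.Bucket(stateful, 'CriticalData', {
  removalPolicy: RemovalPolicy.RETAIN, // bucket survives stack deletion
});

// Stack 2: everything disposable; destroying this stack loses no data.
const appStack = new Stack(app, 'AppStack');
// ...ECS services, lambdas, etc. can reference `vpc` across stacks here.
```

The `RETAIN` removal policy is the belt-and-braces part: even if someone deletes the stateful stack, the bucket itself is orphaned rather than destroyed.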


+1 CDK refactoring is annoying and ugly

in my experience you'd need to read the CDK source code to find the offending node and call `overrideLogicalId`

there is a library to do it in a nicer way: https://github.com/mbonig/cdk-logical-id-mapper

however it does not work in every case


> we run 2 stacks. One that basically cannot/should-not be deleted/refactored. VPC, RDS, critical S3 buckets

Why, dear god, did you put the VPC and RDS in one stack? They are much better off as separate CFN stacks.


There are deletion protection flags that can be enabled.

But circular dependencies can also lead to issues here where CDK will prevent you from deleting a resource used or referenced by a different stack.


I also had a really rough go with CDK. I personally found the lack of upsert functionality -- you can't use a resource if it already exists and create it if it doesn't -- to make it way more effort than I felt was useful. Plus a lack of useful error messages... maybe I'm dumb, but I can't recommend it to small companies.

Upserting resources is an antipattern in cloud resource management. The idiom that works best is to declare all the resources you use and own their lifecycle from cradle to grave.

The problem with upserting is that if the resource already exists, its existing attributes and behavior might be incompatible with the state you're declaring. And it's impossible to devise a general solution that safely transitions an arbitrary resource from state A to state A' in a way that is sure to honor your intent.


Hmm.

If you don't mind sharing, suppose (because it's what I was doing) I was trying to create personal dev, staging, and prod environments. I want the usual suspects: templated entries in route53, a load balancer, a database, some Fargate, etc.

What are you meant to do here? Thank you.


If they're all meant to look alike, you'd deploy the stack (or app, in CDK parlance) into your dev, staging, and prod accounts. You'd get the same results in each.
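A hedged sketch of that pattern (account IDs, names, and region are placeholders): one Stack class, instantiated once per environment, so dev, staging, and prod stay structurally identical.

```typescript
// Hypothetical sketch: the same Stack class deployed per environment/account.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';

class WebAppStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // route53 records, load balancer, database, Fargate service, etc.
  }
}

const app = new App();
for (const [name, account] of [
  ['Dev', '111111111111'],
  ['Staging', '222222222222'],
  ['Prod', '333333333333'],
]) {
  new WebAppStack(app, `WebApp-${name}`, {
    env: { account, region: 'us-east-1' },
  });
}
```

Environment-specific differences (instance sizes, domain names) can then be passed as props rather than duplicated across templates.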

Can't use bun to deploy CDK; CDK fails as it looks exclusively for a package-lock, yarn-lock, or pnpm lockfile.

So dumb. Trying to move to SST for that reason alone.

But if you add cdk to the path, you can still deploy; it's just that your CI/CD and deployment scripts are not all using bun anymore.


Hmm, beyond a bug bun had between versions 1.0.8 and 1.1.20 [0], bun has otherwise worked perfectly fine for me.

You have to make a few adjustments, which you can see here: https://github.com/codetalkio/bun-issue-cdk-repro?tab=readme...

- Change app/cdk.json to use bun instead of ts-node

- Remove package-lock.json + existing node_modules and run bun install

- You can now use bun run cdk as normal

[0]: https://github.com/codetalkio/bun-issue-cdk-repro


mmm, I wonder how hard that would be to fix in a PR.

Actually a good idea, didn't think about it.

Is this really an AWS issue? Sounds like you were just burning CPU cycles, which is not AWS related. WebSockets makes it sound like it was a data transfer or API gateway cost.

Neither the title nor the article paints it as an AWS issue, but as a websocket issue, because the protocol implicitly requires all transferred data to be copied multiple times.
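One concrete source of those copies, for what it's worth: RFC 6455 requires every client-to-server frame payload to be XOR-masked with a 4-byte key, so the data has to be rewritten at least once on the way out and unmasked again on arrival. A minimal sketch of that masking step (the function name is mine):

```typescript
// RFC 6455 masking: client-to-server payloads are XOR'd byte-by-byte
// against a 4-byte key, which forces a full pass/copy over the data.
function maskPayload(payload: Uint8Array, key: Uint8Array): Uint8Array {
  const out = new Uint8Array(payload.length); // one full copy of the payload
  for (let i = 0; i < payload.length; i++) {
    out[i] = payload[i] ^ key[i % 4];
  }
  return out;
}

const key = new Uint8Array([0x12, 0x34, 0x56, 0x78]);
const data = new TextEncoder().encode("hello websocket");
const masked = maskPayload(data, key);
// XOR masking is its own inverse: applying the key again restores the data.
const restored = maskPayload(masked, key);
console.log(new TextDecoder().decode(restored)); // "hello websocket"
```

In a real server there are further copies (kernel buffers, framing), but the mandatory mask/unmask alone means websocket traffic is never zero-copy.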

I disagree. Like @turtlebits, I was waiting for the part of the story where websocket connections between their AWS resources somehow got billed at Amazon's internet data egress rates.

If you call out your vendor, the implication is usually that the problem lies with them or their service specifically. The title obviously states AWS.

If I said that "childbirth cost us 5000 on our <hospital name> bill", you assume the issue is with the hospital.


Only for people that just read headlines and make technical decisions based on them. Are we catering to them now? The title is factual and straightforward.

And the title also highlights something meaningfully irrelevant.

The idea that clearer titles are just babying some class of people is perverse.

Titles are the foremost means of deciding what to read, for anyone of any sophistication. Clearer titles benefit everyone.

The subject matter is meaningful to more than AWS users, but non-AWS users are going to be less likely to read it based on the title.


I didn't know this - why is this the case?

> Is this really an AWS issue?

I doubt they would have even noticed this outrageous cost if they were running on bare-metal Xeons or Ryzen colo'd servers. You can rent real 44-core Xeon servers for like, $250/month.

So yes, it's an AWS issue.


> You can rent real 44-core Xeon servers for like, $250/month.

Where, for instance?

Hetzner for example. An EPYC 48c (96t) goes for 230 euros

I checked here: https://www.hetzner.com/managed-server/

I see "AMD EPYC 7502P 32-Core" for 236 EUR per month. Can you tell me where you see 48c/96t?

EDIT

I found it! Unbelievable that it is so cheap.

https://www.hetzner.com/dedicated-rootserver/#cores_threads_...


Hetzner's network is a complete dog. They also sell you machines that should long since have been EOL'ed. No serious business should be using them.

What cpu do you think your workload is using on AWS?

GCP exposes its CPU models, and it has some Haswell and Broadwell lithographies in service.

That's a 10+ year old part, for those paying attention.


I think they meant that Hetzner is offering specific machines they know to be faulty and should have EOLd to customers, not that they use deprecated CPUs.

That's scary if true; any sources? My google-fu is failing me. :/

It's not scary, it's part of the value proposition.

I used to work for a company that rented lots of Hetzner boxes. Consumer-grade hardware with frequent disk failures was just what we accepted for saving a buck.


Sorry, I have no idea if this is true. I was just pointing out what the GP was trying to claim.

Most GCP and some AWS instances will migrate to another node when the hardware is faulty. Also, the disk is virtual. None of this applies to bare-metal Hetzner.

Why is that relevant to what I said?

Only relevant if you care about reliability

AWS was working “fine” for about 10 years without live migration, and I’ve had several individual machines running without a reboot or outage for quite literally half a decade. Enough to hit bugs like this: https://support.hpe.com/hpesc/public/docDisplay?docId=a00092...

Anyway, depending on individual nodes to always be up for reliability is incredibly foolhardy. Things can happen, cloud isn't magic, I’ve had instances become unrecoverable. Though it is rare.

So, I still don’t understand the point, that was not exactly relevant to what I said.


I just cat'ed /proc/cpuinfo on my Hetzner and AWS machines

AWS: E5-2680 v4 (2016)

Hetzner: Ryzen 5 (2019)


Now do hard drives

The Hetzner one is a dedicated PCIe 4.0 NVMe device and wrote at 2.3 GB/s (O_DIRECT).

The AWS one is some emulated block device; no idea what it is, other than that it's 20x slower.


You keep moving the goal posts with these replies.

Hetzner isn't the best provider in the world, but it's also not as bad as you say they are. They're not just renting old servers.


I know serious businesses using Hetzner for their critical workloads. I wouldn’t unless money is tight, but it is possible. I use them for my non critical stuff, it costs so much less.

There are many colos that offer dedicated server rental/hosting. You can just google for colos in the region you're looking for. I found this one

https://www.colocrossing.com/server/dedicated-servers/


I don't know anything about Colo Crossing (are they a reseller?) but I would bet their $60 per month 4-core Intel Xeons would outperform a $1,000 per month "compute optimized" EC2 server.

For $1000 per month you can get a c8g.12xlarge (assuming you use some kind of savings plan).[0] That's 48 cores, 96 GB of RAM and 22.5+ Gbps networking. Of course you still need to pay for storage, egress etc., but you seem to be exaggerating a bit... they do offer a 44-core Broadwell/128 GB RAM option for $229 per month, so AWS is more like a 4x markup[1]... the C8g would likely be much faster at single-threaded tasks though[2][3]

[0]https://instances.vantage.sh/aws/ec2/c8g.12xlarge?region=us-... [1]https://portal.colocrossing.com/register/order/service/480 [2]https://browser.geekbench.com/v6/cpu/8305329 [3]https://browser.geekbench.com/processors/intel-xeon-e5-2699-...


Wouldn't a c8g.12xlarge with 500 GB storage (only EBS is possible), plus 1 Gbps from/to the internet, be 5,700 USD per month? That's some discount you have.

If I try to match the actual machine: 16 GB RAM, and a rough estimate is that their Xeon E3-1240 would be ~2 AWS vCPUs. So an r6g.large is the instance that would roughly match this one. Add a 500 GB disk + 1 Gbps to/from the internet and... monthly cost 3,700 USD.

Without any disk and without any data transfer (which would be unusable) it's still ~80USD. Maybe you could create a bootable image that calculates primes.

These are still not the same thing, I get it, but ... it's safe to say you cannot get anything remotely comparable on AWS. You can only get a different thing for way more money.

(made estimates on https://calculator.aws/ )


What do you mean by "1gbps from/to the internet"?

125 MB per second × 60 seconds per minute × 60 minutes per hour × 24 hours per day x 30 days = 324 TB?

If you want 1 Gbps unmetered colo pricing, AWS is not competitive. So set up your video streaming service elsewhere :-)

https://portal.colocrossing.com/register/order/service/480 offers unmetered for $2,500 additional per month, for the record.

If you have high bandwidth needs on AWS you can use AWS Lightsail, which has some discounted transfer rates.
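For the record, the 324 TB figure above is just sustained line rate over a 30-day month; the arithmetic checks out:

```typescript
// 1 Gbps sustained is roughly 125 MB/s of payload (1 Gbps / 8 bits per byte).
const bytesPerSecond = 125_000_000;
const secondsPerMonth = 60 * 60 * 24 * 30; // 2,592,000 s
const terabytes = (bytesPerSecond * secondsPerMonth) / 1e12;
console.log(terabytes); // 324
```

Priced at typical AWS internet egress rates, moving that volume every month is exactly where the "AWS is not competitive" point bites.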


Even just the compute, without even disk, is barely competitive.

I'm not sure I understand your point anymore.

> That's 48 cores

That's not dedicated 48 cores, it's 48 "vCPUs". There are probably 1,000 other EC2 instances running on those cores stealing all the CPU cycles. You might get 4 cores of actual compute throughput. Which is what I was saying


That's not how it works, sorry. (Unless you use burstable instances, like T4g) You can run them at 100% as long as you like, and it has the same performance (minus a small virtualization overhead).

Are you telling me that my virtualized EC2 server is the only thing running on the physical hardware/CPU? There are no other virtualized EC2 servers sharing time on that hardware/CPU?

If you are talking about regular EC2 (not T series, or Lambda, or Fargate etc.) you get the same performance (within say 5%) of the underlying hardware. If you're using a core, it's not shared with another user. The pricing validates this...the "metal" version of a server on AWS is the same price as the full regular EC2 version.

In fact, you can even get a small discount with the -flex series, if you're willing to compromise slightly. (Small discount for 100% of performance 95% of the time).


This seems pretty wild to me. Are you saying that I can submit instructions to the CPU and they will not be interleaved and the registers will not be swapped-out with instructions from other EC2 virtual server applications running on the same physical machine?

Only the t instances and other VM types that have burst billing are overbooked in the sense that you are describing.

Yes — you can validate this by benchmarking things like L1 cache

Welcome to the wonderful world of multi-core CPUs...

What benchmark would you like to use?

This blog is about doing video processing on the CPU, so something akin to that.


Did this ever work? It's failing to request a URL on localhost:3001...

I made a few adjustments in the cloud setup and it should hopefully be resolved now!

Shouldn't these be boiled potatoes and not baked? Would this taste much different from a potato cooked in oil but not deep fried (confit)?

If people aren't paying, none of those are problems. The second you charge, that's when the headaches come.

I hate negotiation with a passion. However, IME, just being semi-transparent about your max offer lets anyone else come back with new numbers without having to ask for a specific number.

Another is to ask for additional benefits in monetary value, i.e. if one company provides parking/commuter/better health coverage benefits, ask for an additional $X per year from the companies that don't.

That also makes it look like you're trying to equalize total comp instead of asking for an arbitrarily higher number.
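To make the equalizing concrete (all numbers here are invented for illustration):

```typescript
// Hypothetical offers: company A includes benefits in kind, company B doesn't.
const offerA = {
  base: 150_000,
  parkingAndCommuter: 3_600,   // assumed annual value of the parking benefit
  healthPremiumCovered: 4_800, // assumed annual value of better coverage
};
const offerB = { base: 155_000, parkingAndCommuter: 0, healthPremiumCovered: 0 };

const totalA = offerA.base + offerA.parkingAndCommuter + offerA.healthPremiumCovered;
// Ask company B for this much more per year to equalize total comp:
const gap = totalA - offerB.base;
console.log(gap); // 3400
```

Framed this way, the ask is "match the total", not "give me an arbitrary raise", which tends to be an easier conversation.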


Knock? Even if there is a doorbell, I prefer to knock as it's less disruptive.

Why does this need an app? Why not just send me a text, or better yet, just an SMSTO link? (i.e. SMSTO:+1123456:Open the door!)


