2022 Cloud Report (cockroachlabs.com)
278 points by estambar on June 16, 2022 | 111 comments





This blog post is literally unreadable, with animations flying around covering up content. Is that by design?


It also ignores the "prefers-reduced-motion" CSS Media feature flag :(


Dang! I didn't know this was a thing! Thanks!!


So? This isn't about accessibility, it's about preference. I prefer white backgrounds, but I'm not sad when a site author chooses pink.


How should you know whether it's about accessibility? The `prefers-reduced-motion` feature is explicitly[0] intended to accommodate people with vestibular motion disorders, which are common above age 40.[1]

[0]: https://drafts.csswg.org/mediaqueries-5/#prefers-reduced-mot...

[1]: https://vestibular.org/article/what-is-vestibular/about-vest...
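
For reference, honoring the flag takes only a few lines of CSS; here's a sketch (the `[data-aos]` selector is an assumption, based on the AOS scroll-animation classes visible elsewhere in this thread):

    @media (prefers-reduced-motion: reduce) {
      /* The user asked for less motion: show content statically instead. */
      [data-aos] {
        animation: none !important;
        transition: none !important;
        transform: none !important;
        opacity: 1 !important; /* AOS keeps elements hidden until they animate in */
      }
    }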


If it's really bad, couldn't they apply userstyles that disable all animation with `!important`? The `prefers-*` settings seem somewhat optional and best-effort, judging by the name.
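
Something like this in a user stylesheet would do it (a sketch; assumes an extension such as Stylus to load it):

    /* Forcibly disable all animation, overriding the site's own rules. */
    *, *::before, *::after {
      animation: none !important;
      transition: none !important;
      scroll-behavior: auto !important;
    }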


Sure, please write it, ship it, and then support my 8+ aging family members who need to use it.

Or just follow the spec.


This feels like back in the 90s with those GIF-laden scrolling-text webpages. I can't fathom what goes into the mind of a designer creating all this nonsense for what is basically a table report. Does anyone in their right mind expect people to be more attracted by banners frolicking around?


> I can't fathom what goes into the mind of a designer

Don't underestimate the damage an overly eager and naive developer can do.

Or (maybe more likely in this case?) a CMS author who's been given a bit too much power?


Probably they want you to download their report, which requires a work email and personal details.


The website has been updated to resolve some of the animation issues - is it better now?


These blog posts used to be really good at comparing different clouds; I'm not sure why they decided that this was the optimal way to display this information.


The JavaScript console is your friend:

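// ".aos-init" is the class the AOS ("Animate On Scroll") library the site appears to use adds to each element it animates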
document.querySelectorAll('.aos-init').forEach(e => e.remove())

The above will remove all of those popups.


I have Windows zoomed to 125%, and on my 1080p-height screen some content boxes don't even fit the full view height. Thankfully the full report is linked directly here.


I can confirm that the overlay is intentional teasing to get you to download the PDF.


Can we just go back to the good old days and stop with animations that fly content in from all directions? It's slow enough that I get a blank screen for a few seconds as I scroll down, which really hurts readability.


It also ignores the "prefers-reduced-motion" CSS Media feature flag :(


Shouldn't the web browser be the one implementing CSS rules?


No. How is the web browser supposed to know how an "unanimated" animation is supposed to appear? Animations can be very complex, and there's simply no way that a browser could do this without breaking a lot of stuff.


From p. 74:

> Overall, the gap for most AMD-based processors closed almost immediately when we controlled for NUMA nodes – in other words, when we only considered runs where each instance showed all vCPUs running across a single NUMA node. When we did this, the performance gap dropped from 22% to 1%, which is smaller than our margin of error.

How does one avoid machines with vCPUs across multiple NUMA nodes? Do you just spin the machine up, run `lscpu | grep 'NUMA node(s)'` and kill the machine if the value reported is anything but 1 and try to spin a VM again?
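
For illustration, the relaunch loop might look like this (a sketch; launch_vm and kill_vm are hypothetical stand-ins for your cloud provider's CLI):

    # Keep relaunching until we land on a single-NUMA-node instance.
    while :; do
      host=$(launch_vm)
      nodes=$(ssh "$host" lscpu | awk '/NUMA node\(s\)/ {print $NF}')
      [ "$nodes" = 1 ] && break
      kill_vm "$host"   # vCPUs span multiple NUMA nodes: discard and retry
    done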


That's what we did, yes.


"we" meaning you are an author of the report, or just that it was done at your job?



Yes, I was one of the authors of the report :)


As an AMD fanboy who loves seeing them back on top, I'm just happy we have a competitive CPU market now. I'll include the M series from Apple as well, despite it being platform-locked, because it also forces the other players to up their game.

I would love to see an AMD chip as fully integrated as an M1, moving the RAM fully on die and part of the Infinity fabric directly. The insane memory bandwidth of the M1 is what keeps it competitive.


Yes! How awesome is it that we've got companies like AMD, Intel, Nvidia, ARM, Apple + TSMC for M series, and others who are cranking out awesome products?

Sometimes we get lost in the criticism of every little thing that these companies do and forget that honestly, they're all cranking out great products.


We need fab competition as well. Too many fabless companies (AMD, Apple, etc.) depend on TSMC.


Intel is working on expanding their fab capacity and contracting it out like TSMC and Samsung.


ASML deserves mentioning as well.


Don't forget ZEISS


No.

Monopolies don't deserve credit.


They hold a monopoly of merit: many tried EUV, and they succeeded in doing something that others were calling impossible even 5 years before they shipped it, not by shutting down competition but simply because no one else could.

The idea that ASML would somehow be more worthy of merit if Nikon/Canon also succeeded is weird.

Would the moon landing be more impressive if Russia and Japan had also managed it?


Exactly correct. They're literally more deserving of credit because they've done something that to this day nobody else has been able to do.


Hopefully some serious competition and general normality will return to consumer discrete GPUs sometime soon, too.


I don't know about competition, but I think we'll see prices bottom out below MSRP before the 4000 level cards come out in the next few months.


Do you think so, because the crypto craze is back in the crypt?


We're already close to that point. Low to mid-end AMD cards are already available at / below MSRP (e.g. 6600XT), and nVidia cards no longer command a > 2x premium across the board.

At the height of the craze, I saw a five-year-old card, the GTX 1050 (not even the Ti!), going for over $200.

https://pcpartpicker.com/trends/price/video-card/

From February: https://www.techradar.com/deals/looking-for-a-gpu-dont-buy-y...


Yep, the secondhand market is getting flooded.


GPU prices are already coming back to earth, thanks to both improved availability and the collapse of crypto mining.


> I would love to see an AMD chip as fully integrated as an M1, moving the RAM fully on die and part of the Infinity fabric directly. The insane memory bandwidth of the M1 is what keeps it competitive.

I know this has been said a million times, but it's worth repeating because somehow the idea is still floating around: the M series very much does not have the RAM on-die. It's not even in the same package; it's standard LPDDR4/5 sitting off to the side with a lot of channels.


So the novel ("novel") idea that the M1 brings is using high-bandwidth LPDDR5 for desktop CPU memory, which is currently typically used only in mobile CPUs. [1] Intel and AMD CPUs technically support LPDDRx, but for some reason no manufacturer exploits this. [2]

[1]: https://www.bgr.in/top-products/best-phones-with-lpddr5-ram-...

[2]: See other thread. :)


> I would love to see an AMD chip as fully integrated as an M1, moving the RAM fully on die and part of the Infinity fabric directly.

Current rumours suggest that's where AMD is heading, with Zen 5 integrating multiple accelerators and Zen 6 having HBM as part of the package (on the datacenter variants):

https://youtu.be/6yFn85I5PbY?t=1222


The memory in an M1 is not "on die"; it is plain old DRAM that they buy from the Koreans and solder to the board, just like Intel and AMD and everyone else. DRAM is made on a fundamentally different semiconductor process, and there will never, ever be a CPU with on-die DRAM. DRAM that can be made on a CPU logic process is called eDRAM. A huge eDRAM is a few tens of megabytes, while a huge DRAM is gigabytes. The bit-cell density of eDRAM is slightly better than SRAM and 1000x lower than DRAM.


I think the person you are responding to used the wrong vocabulary. Apple mounts the SoC and DRAM together in a system-in-package design, which is pretty different from how thin-and-light x86 manufacturers solder DRAM chips to the mainboard. The proximity between the SoC and DRAM is part of what makes the M1's bandwidth possible.


The M1's bandwidth is possible because Apple uses high end LPDDR ram and a memory controller with a lot of channels. They aren't doing anything exotic.

Consumer PCs don't match this bandwidth because DDR DIMMs generally aren't as fast as LPDDR. Plus AMD & Intel limit their mainstream consumer CPUs to two memory channels, both for cost savings and to segment the market.
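
To put rough numbers on that (using commonly cited specs, so treat these as approximate), peak bandwidth is transfer rate times bus width:

    base M1:         LPDDR4X-4266, 128-bit bus          -> 4266 MT/s x 16 B ~ 68 GB/s
    typical desktop: DDR4-3200, dual channel (128-bit)  -> 3200 MT/s x 16 B ~ 51 GB/s

The M1 Pro/Max then simply widen the bus to 256/512 bits; more channels, nothing exotic.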


> Consumer PCs don't match this bandwidth because DDR DIMMs generally aren't as fast as LPDDR.

What about LPDDR (low-power DDR) allows it to be faster? And, by faster, do you mean lower latency? higher clock rates -> higher throughput? This is unintuitive to me.

My impression is that lower power means that you can't sustain higher clocks as readily (in fact, when overclocking RAM, it's common to increase voltage in the interest of stability).

I can't find anything about CAS latencies for LPDDR DIMMs.

Edit, to clarify: when overclocking RAM, your two options are to increase voltage or to loosen timings, since to sustain higher speeds you need to either charge your capacitors faster or wait more cycles for them to charge.


> What about LPDDR (low-power DDR) allows it to be faster? And, by faster, do you mean lower latency? higher clock rates -> higher throughput? This is unintuitive to me.

By faster I mean higher throughput at similar latency, achieved by higher clock rates. And it is indeed unintuitive as to how this can be done while using less power than standard DDR.

My understanding is that it's down to two major factors:

1. JEDEC has iterated on the LPDDR standards much more rapidly. DDR4 and LPDDR3 both hit the market in 2012. But then LPDDR3e, LPDDR4, LPDDR4x, and LPDDR5 were all introduced before DDR5 was.

2. LPDDR isn't available on DIMMs, it's soldered only.

So given that most laptops sold by companies like Dell and Lenovo use soldered RAM anyway, and that Intel and AMD both support LPDDR, why are PC laptops with faster RAM so rare? I have no idea; maybe it costs a bit more and the manufacturers don't think they can market it as a benefit?


> So given that most laptops sold by companies like Dell and Lenovo use soldered RAM anyway, and that Intel and AMD both support LPDDR, why are PC laptops with faster RAM so rare? I have no idea; maybe it costs a bit more and the manufacturers don't think they can market it as a benefit?

For consumers, the primary application that benefits from higher RAM bandwidth is real-time graphics rendering, and non-Apple PCs optimize for this by using discrete GPUs with their own onboard high-bandwidth memory.


I wouldn't take the statement at face value. Most laptops based on 11th-gen Core and later Intel CPUs use LPDDR4x, just like the M1. It's not rare at all; it is the reference design from Intel.


The DRAMs in an M1 system are not really any closer to the CPU than they are in competing ARM and x86 systems.

https://cdn.arstechnica.net/wp-content/uploads/2020/09/tiger...


I should have said "on package", not "on chip".


All this makes me wonder where the Crystalwell concept could have gone if Intel had really stuck it out.


As someone who has recently bought into AMD after a long hiatus, I have to say they've come a long way, and I've been personally impressed with what I've experienced so far on the hardware side. That said, the reverse can equally be said of other parts of their business, more specifically their customer support for RMAs as of late. Mind you, this is all a personal anecdote, so take it with a grain of salt.


Agreed. In its heyday (before Intel dominated the PC era), AMD was my go-to for performance processors.


> As an AMD fanboy who loves seeing them back on top

I am not a fanboy, but a realistic dude.

AMD ruled the last few years, but Alder Lake overtook Vermeer on both performance and price/performance.

And that is with a process node difference, Intel using 10nm vs AMD using 7nm.

And the future looks like Intel will enhance the distance between its performance and AMD's.


Alder Lake and Zen 3 are on comparable processes. Intel 10nm, now renamed Intel 7, has pretty much the same density as TSMC 7nm.


And N7 is almost certainly cheaper (holistically) than Intel 7; from an economic perspective, AMD has done more with less.


> Alder Lake and Zen 3 are on comparable processes.

Intel didn't use EUV, so no.


I think Intel stands to regain some lost ground over the next year or two, but Alder Lake isn't a compelling argument in a datacenter-focused discussion.

Alder Lake relies on brute-force, inefficient power consumption to regain the performance crown. AMD's chips are much more efficient, and efficiency matters in datacenters: there is only so much power and cooling available to each rack unit.

I think Sapphire Rapids holds a lot of promise, but it remains to be seen.


You are right, but the parent comment was about desktops, not data centers.

And on desktops, performance is what matters, along with the price/performance ratio, and both are in Intel's favor.


This HN topic is about cloud, and I don’t see anything in the comment you replied to that’s talking about desktop computers specifically.

Both Intel and AMD have plans to integrate memory more tightly onto the package of their datacenter processors in the next couple of years, IIRC, and that seems to be what the OP of this comment thread was hoping they would learn from M1.

But, whatever.


Note that node sizes from different fabs aren't comparable, because they are measured quite differently. You can only compare TSMC vs. TSMC, Intel vs. Intel, etc.


True, but Intel is still lagging at least one node behind TSMC, by whatever measure you use.


Against Apple and, I believe, AMD's new offering this fall. Not AMD's last offering.


If by "enhance the distance" you mean Intel will fall farther behind AMD in terms of price and performance, you're probably right.


Am I misreading the report, or is GCP faster for network/disk, more consistent in performance (at least for network), and cheaper? Aside from vendor lock-in (or negotiated rates for large enterprise accounts altering the economics), is there any reason to choose AWS/Azure instead of GCP?


Each cloud comes with unique service/API complexity, and even for managed services the experience does not translate 1:1 across clouds. For example, AWS IAM policies cannot be reused, and there may be differences in availability, durability, feature set et al. A good reason to choose AWS may be to minimize deltas between stacks, and often it is a fair assumption that their service offerings have been used by a significant number of enterprises.


> it is a fair assumption that their service offerings have been used by a significant number of enterprises

AWS revenue is $62B. Azure cloud revenue is $23B. Google Cloud revenue is $22B. I'd say that at 42% of the market, the other cloud providers' service offerings are cumulatively used by a significant number of enterprises as well. These revenue numbers are probably artificially deflated by the land war going on between these business units (i.e., Azure/GCP have to offer larger discounts to steal business from AWS), so it's hard to compare without deeper analytics access into these providers. Regardless, even individually these are massive revenue numbers, indicating there are plenty of deployments on each of the clouds. Also, the "nobody got fired for buying IBM" reasoning is pretty flawed in tech; it's kind of ridiculous how often this argument is made.

Just to preface this next bit: I'm not asking about scenarios like "I run all parts of my business on AWS", which could tilt the benefits towards AWS (vendors love lock-in because it raises prices). I'm asking mainly about a greenfield project that is selecting between the clouds on equal footing.

> Each cloud comes with unique service/API complexity, and even for managed services the experience does not translate 1:1 across clouds

AFAIK the vast majority of all of this has been thoroughly commoditized. Do you have anything specific in mind, i.e. an important product/feature that Amazon has that the others don't?

> AWS IAM policies cannot be reused

Not relevant for new deployments, and AWS IAM policies are famously overcomplex, resulting in security vulnerabilities due to misconfiguration, but sure. IAM policies are unique to each cloud provider.

> and there may be differences in availability, durability, feature set et al

Experientially, it seems like Google has an advantage there: AWS seems to have more frequent and longer-lasting outages. I doubt durability has a major difference, and feature sets tend to be fairly even between them (might matter for the long tail). Also, that statement feels kind of empty, as all it says is that there "may be" differences, without stating any or saying which provider might come out ahead.


Dear webmasters: for "simple" cookie-settings windows like this, you should be tortured equally. https://ibb.co/dp89vYQ

You are trying to “make” the users click “Allow all” just to hide this trash as quickly as possible. It is low.


It's not even just low; it's straight-up breaking the regulations that made them put up that banner in the first place. But it seems companies haven't understood that yet, nor have governments actually enforced anything, so here we are.


We need a browser extension to delete these stupid cookie consent popups



This gives consent automatically (in an unspecified number of cases). The correct action is to respond with a "no consent" answer every time, because withdrawing your consent for every single optional cookie category should be the default. People bother you with popups because they hate you, not because the popups are mandatory.


While it's true that this extension sometimes accepts all cookies, that isn't the case most of the time. From the extension's website:

> In most cases, it just blocks or hides cookie related pop-ups. When it's needed for the website to work properly, it will automatically accept the cookie policy for you (sometimes it will accept all and sometimes only necessary cookie categories, depending on what's easier to do). It doesn't delete cookies.

It would be helpful to have a "what did you choose on this site" view, though, if that doesn't exist yet.


It would be better if the default were to reject always, everywhere, and the "please pander to corporate evil" flag were something to turn on explicitly.


The whole website seems ill-conceived, with popups appearing above the graphs they're supposed to explain and obscuring them.


I read this as a strategic move, as their main goal is to get your name, company, title, and email in exchange for the report download. So they show you snippets of interesting charts and then quickly cover them up.


I guess they make the information on the site unusable in order to obtain your work e-mail for downloading the PDF.


cockroachlabs..... ;)


I wonder why they did not test Amazon's m6a class. The report gives the impression that Amazon lacks a 3rd-gen EPYC offering.


"Note: Because of our machine selection and testing cutoff times, we were unable to test AWS’s m6a instances, which also run AMD’s Milan processors. Based on the rest of our testing, we expect that the m6a instances could have outperformed m6i."



thanks


How about ARM? Included in the report?


From the PDF of the report:

> We chose not to test ARM instance types this year as CockroachDB still does not provide official binaries for that processor platform. Official support for ARM binaries is slated for our Fall release (22.2), so we expect to return to testing this processor platform next year.


There are very few offerings, and honestly, AFAIK they are not competing on performance except in niche applications.

It's not that easy to displace x86...


I'm not sure that is accurate for Graviton3, which appears to be besting Intel/AMD in a number of cases. So it doesn't appear there is a clear leader among the AMD, Intel, and Graviton instances, meaning everyone should be benchmarking their application and picking one.

https://www.phoronix.com/scan.php?page=article&item=graviton...

Amazon here seems to be leading the ARM technology pack, as they are using a newer generation of CPU, while ARM instances on other providers tend to be based on Graviton2-class (Neoverse N1) cores. I would imagine that gradually changing in the near future as those vendors also upgrade.


Graviton3 on AWS is better for most applications on a cost/performance basis, and in some cases also on straight latency.

Honeycomb has some great blogs about it: https://www.honeycomb.io/blog/present-future-arm-aws-gravito...


I stand corrected! Note that that is on cost/performance, not raw performance, yet. Impressive.


And on raw performance: https://www.phoronix.com/scan.php?page=article&item=graviton... (direct link to the geometric mean performance of the whole test suite)

ARM isn't the optimal solution for every application at this time, but anyone who isn't seriously considering it probably needs to update their information.


The big savings on Graviton come from no hyperthreading. A vCPU on x86 is a hyperthread, but a vCPU on Graviton is a full core. So you get a full core and its L1 and L2 all to yourself, usually for cheaper than an x86 vCPU.


As far as I know, most AWS services can be run on Graviton instances.


Last I checked, Cockroach doesn't support ARM binaries. That probably means they thought it wasn't worth the effort to do the analysis on those instances, as it doesn't help their offering.


It does support ARM. It's a Go project, so it should, unless they do exotic things.
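
For a pure-Go project, an ARM build is normally a one-liner; a sketch (this only works this simply when cgo isn't required, which may not hold for everything CockroachDB links in):

    # Cross-compile a 64-bit ARM Linux binary from any host.
    CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build ./...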


Here's where they explained why they don't produce official, supported builds:

https://github.com/cockroachdb/cockroach/issues/62903#issuec...


We're planning on having official ARM binaries in the fall, so I expect ARM benchmarks will return in next year's report


Awesome, can't wait to test :)


I work on a product similar to Cockroach. We inject assembly into our code; I wouldn't be surprised if they do too.


The benchmark was executed only for CPU-intensive node configurations. For highmem node configurations, AMD does not perform well. The report should probably be titled "Cloud Report for CPU-intensive workloads"; it does not do memory-intensive workloads justice.


Can you elaborate on what you'd expect to see if they tested the highmem node configs?


Not just highmem configs. You have to run a decently memory-intensive workload to be able to see the difference. Memory access latencies are higher on AMD machines for various reasons.


We are addressing some of the page rendering issues discussed in the comments below and hope to have them resolved soon. Sorry about that!


These animations are painful!


Wonder how cockroach is doing these days with all the competition in the space.


I don't follow their product closely. Who are their competitors?

(If I were in the market for their database and the alternatives were close in parity and price, I'd be likely to choose a competitor solely because its name isn't "cockroach".)


I'm curious what it is about the name Cockroach that turns you off. I assume the name is intended to give the impression that the DB is hard to kill.


^^THIS


Other globally consistent NewSQL DBs: Spanner, TiDB, Yugabyte, SingleStore

Partitioning for existing DBs: Citus for Postgres and Vitess for MySQL.

Snowflake apparently just started to target transactional workloads with UniStore.


I was discussing this the other day with colleagues. You google CockroachDB and are immediately served Yugabyte and PlanetScale, which are IMO much "hotter" right now.



