How should you know whether it's about accessibility? The `prefers-reduced-motion` feature is explicitly[0] intended to accommodate people with vestibular motion disorders, which are common above age 40.[1]
If it's really bad, couldn't they apply userstyles that disable all animation with `!important`? `prefers-*` settings seem somewhat optional and best-effort, judging by the name.
This feels like being back in the 90s with those GIF-laden scrolling-text webpages. I can't fathom what goes on in the mind of a designer creating all this nonsense for what is basically a tabular report. Does anyone in their right mind expect people to be more attracted by banners frolicking around?
These blog posts used to be really good in comparing different clouds; I'm not sure why they decided that this was the optimal way to display this information.
I have Windows zoomed in at 125%, and on my 1080p screen some content boxes don't even fit the full view height. Thankfully the full report is directly linked here.
Can we just go back to the good old days and stop with animations that fly content in from all directions? It's slow enough that I get a blank screen for a few seconds as I scroll down, which really hurts readability.
No. How is the web browser supposed to know how an "unanimated" animation is supposed to appear? Animations can be very complex, and there's simply no way that a browser could do this without breaking a lot of stuff.
> Overall, the gap for most AMD-based processors closed almost immediately when we controlled for NUMA nodes – in other words, when we only considered runs where each instance showed all vCPUs running across a single NUMA node. When we did this, the performance gap dropped from 22% to 1%, which is smaller than our margin of error.
How does one avoid machines with vCPUs spread across multiple NUMA nodes? Do you just spin the machine up, run `lscpu | grep 'NUMA node(s)'`, kill the machine if the value reported is anything but 1, and try to spin up a new VM?
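Something like this sketch is what I have in mind, assuming util-linux's `lscpu` is available on the instance (the terminate-and-retry loop itself would live in whatever provisioning tooling you use):

```bash
#!/usr/bin/env bash
# Sketch only: run on a freshly booted instance and exit non-zero if the
# vCPUs span more than one NUMA node, so outside automation can terminate
# the VM and provision a new one.
set -euo pipefail

# Count the distinct NUMA nodes that the online vCPUs are assigned to.
nodes=$(lscpu --parse=NODE | grep -v '^#' | sort -u | wc -l)

if [ "$nodes" -ne 1 ]; then
  echo "vCPUs spread across $nodes NUMA nodes; recycle this instance" >&2
  exit 1
fi
echo "all vCPUs on a single NUMA node; keep this instance"
```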
As an AMD fanboy who loves seeing them back on top, I'm just happy we have a competitive CPU market now. I'll include the M-series from Apple as well, despite it being platform-locked, because it also forces the other players to up their game.
I would love to see an AMD chip as fully integrated as an M1, moving the RAM fully on die and part of the Infinity fabric directly. The insane memory bandwidth of the M1 is what keeps it competitive.
Yes! How awesome is it that we've got companies like AMD, Intel, Nvidia, ARM, Apple + TSMC for M series, and others who are cranking out awesome products?
Sometimes we get lost in the criticism of every little thing that these companies do and forget that honestly, they're all cranking out great products.
They hold a monopoly of merit: many tried EUV, and ASML succeeded in doing something that others were calling impossible even 5 years before they shipped it, not by shutting down competition but simply because no one else could.
The idea that ASML would somehow be more worthy of merit if Nikon/Canon also succeeded is weird.
Would the moon landing be more impressive if Russia and Japan had also managed it?
We're already close to that point. Low- to mid-range AMD cards are already available at or below MSRP (e.g. 6600XT), and Nvidia cards no longer command a >2x premium across the board.
At the height of the craze, I saw a five-year-old card, the GTX 1050 (not even Ti!), going for over 200.
> I would love to see an AMD chip as fully integrated as an M1, moving the RAM fully on die and part of the Infinity fabric directly. The insane memory bandwidth of the M1 is what keeps it competitive.
I know this has been said a million times, but it's worth repeating because somehow the idea is still floating around – the M series very much does not have the RAM on-die. It's not even in the same package – it's standard LPDDR4/5 sitting off to the side with a lot of channels.
So the novel ("novel") idea that M1 is bringing is using high bandwidth LPDDR5 for desktop CPU memory, where it's currently typically used only for mobile CPUs. [1] Intel and AMD CPUs technically support LPDDRx, but no manufacturer exploits this for some reason. [2]
> I would love to see an AMD chip as fully integrated as an M1, moving the RAM fully on die and part of the Infinity fabric directly.
Current rumours suggest that's where AMD is heading, with Zen 5 having multiple accelerators integrated and Zen 6 having HBM as part of the package (on the datacenter variants):
The memory in an M1 is not "on die"; it is plain old DRAM that they buy from the Koreans and solder to the board, just like Intel and AMD and everyone else. DRAM is made on a fundamentally different semiconductor process, and there will never, ever be a CPU with on-die DRAM. DRAM that can be made on a CPU logic process is called eDRAM. A huge eDRAM is a few tens of megabytes, while a huge DRAM is gigabytes. The bit cell density of eDRAM is slightly better than SRAM and 1000x lower than DRAM.
I think the person you are responding to used the wrong vocabulary. Apple mounts the SoC and DRAM together in a system-in-package design, which is pretty different from how thin-and-light x86 manufacturers solder DRAM chips to the mainboard. The proximity between the SoC and DRAM is part of what makes the M1's bandwidth possible.
The M1's bandwidth is possible because Apple uses high-end LPDDR RAM and a memory controller with a lot of channels. They aren't doing anything exotic.
Consumer PCs don't match this bandwidth because DDR DIMMs generally aren't as fast as LPDDR. Plus AMD & Intel limit their mainstream consumer CPUs to two memory channels, both for cost savings and to segment the market.
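Rough back-of-the-envelope numbers make the point; these are the commonly cited bus widths and transfer rates, not figures from the report:

```bash
# Peak theoretical bandwidth ~= transfer rate (MT/s) * bus width (bytes).
# Commonly quoted configurations, used here only for illustration.
echo "Dual-channel DDR4-3200 (128-bit): $(( 3200 * 16 / 1000 )) GB/s"   # ~51 GB/s
echo "Base M1 (128-bit LPDDR4X-4266):   $(( 4266 * 16 / 1000 )) GB/s"   # ~68 GB/s
echo "M1 Max (512-bit LPDDR5-6400):     $(( 6400 * 64 / 1000 )) GB/s"   # ~409.6 GB/s
```

The base M1 is only modestly ahead of a dual-channel desktop; the headline numbers come from the Pro/Max parts simply having a much wider bus.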
> Consumer PCs don't match this bandwidth because DDR DIMMs generally aren't as fast as LPDDR.
What about LPDDR (low-power DDR) allows it to be faster? And, by faster, do you mean lower latency? higher clock rates -> higher throughput? This is unintuitive to me.
My impression is that lower power means that you can't sustain higher clocks as readily (in fact, when overclocking RAM, it's common to increase voltage in the interest of stability).
I can't find anything about CAS latencies for LPDDR DIMMs.
edit: to clarify: when overclocking RAM, your two options are to increase voltage or to loosen (increase) the timings, since if you want to sustain higher speeds, you need to either charge your capacitors faster or wait more cycles for them to be charged.
> What about LPDDR (low-power DDR) allows it to be faster? And, by faster, do you mean lower latency? higher clock rates -> higher throughput? This is unintuitive to me.
By faster I mean higher throughput at similar latency, achieved by higher clock rates. And it is indeed unintuitive as to how this can be done while using less power than standard DDR.
My understanding is that it's down to two major factors:
1. JEDEC has iterated on the LPDDR standards much more rapidly. DDR4 and LPDDR3 both hit the market in 2012. But then LPDDR3e, LPDDR4, LPDDR4x, and LPDDR5 were all introduced before DDR5 was.
2. LPDDR isn't available on DIMMs, it's soldered only.
So given that most laptops sold by companies like Dell and Lenovo use soldered ram anyway, and that Intel and AMD both support LPDDR, then why are PC laptops with faster RAM so rare? I have no idea, maybe it costs a bit more and the manufacturers don't think they can market it as a benefit?
> So given that most laptops sold by companies like Dell and Lenovo use soldered ram anyway, and that Intel and AMD both support LPDDR, then why are PC laptops with faster RAM so rare? I have no idea, maybe it costs a bit more and the manufacturers don't think they can market it as a benefit?
For consumers, the primary application that benefits from higher RAM bandwidth is real-time graphics rendering, and non-Apple PCs optimize for this by using discrete GPUs with their own onboard high-bandwidth memory.
I wouldn't take the statement at face value. Most laptops based on 11th-gen Core and later Intel CPUs use LPDDR4x, just like the M1. It's not rare at all. It is the reference design from Intel.
As someone who has recently bought into AMD after a long hiatus, I have to say they've come a long way since then, and I've personally been impressed with what I've experienced so far on the hardware side. That said, the reverse can equally be said of other matters pertaining to their business; more specifically, their customer support for RMAs as of late. Mind you, this is all a personal anecdote, so take it with a grain of salt.
I think Intel stands to regain some lost ground over the next year or two, but Alder Lake isn't a compelling argument in a datacenter-focused discussion.
Alder Lake relies on brute force, inefficient power consumption to regain the performance crown. AMD's chips are much more efficient, and efficiency matters in datacenters. There is only so much power and cooling available to each rack unit.
I think Sapphire Rapids holds a lot of promise, but it remains to be seen.
This HN topic is about cloud, and I don’t see anything in the comment you replied to that’s talking about desktop computers specifically.
Both Intel and AMD have plans to integrate memory more tightly onto the package of their datacenter processors in the next couple of years, IIRC, and that seems to be what the OP of this comment thread was hoping they would learn from M1.
Note that node sizes from different factories aren't comparable, because they are measured quite differently. You can only compare TSMC vs TSMC, Intel vs Intel, etc.
Am I misreading the report, or is GCP faster for network/disk, more consistent in performance (at least for network), and cheaper? Aside from vendor lock-in (or potentially negotiated rates for large enterprise accounts altering the economics), is there any reason to choose AWS/Azure instead of GCP?
Each cloud comes with unique service/API complexity, and despite these being managed services, that experience does not translate 1:1 across clouds. For example, AWS IAM policies cannot be reused on another cloud, and there may be differences in availability, durability, feature set, etc. A good reason to choose AWS may be to minimize deltas between stacks, and often it is a fair assumption that their service offerings have been used by a significant number of enterprises.
> it is a fair assumption that their service offerings have been used by a significant number of enterprises.
AWS revenue is 62B. Azure Cloud revenue is 23B. GCP revenue is 22B. I'd say that at 42% of the market, the other cloud providers' service offerings are cumulatively used by a significant number of enterprises as well. These revenue numbers are probably artificially deflated due to the land war going on between these business units (i.e. Azure/GCP have to offer larger discounts to steal business from AWS), so it's hard to actually compare without deeper analytics access into these providers. Regardless, even individually these are massive revenue numbers, indicating there are plenty of deployments on each of the clouds. Also, the "nobody got fired for buying IBM" reasoning is pretty flawed in tech; it's kind of ridiculous how often this argument is made.
Just to preface this next bit: I'm not asking about scenarios like "I run all parts of my business on AWS," which could shift the benefits towards AWS (vendors love lock-in because it raises prices). I'm asking mainly about a greenfield project that is weighing the clouds equally.
> Each cloud comes with unique service / API complexity and despite being managed services that experience does not translate 1:1 across clouds
AFAIK the vast majority of all of this has been thoroughly commoditized. Do you have anything specific that you see as an important product/feature that Amazon has and the others don't?
> AWS IAM policies cannot be reused
Not relevant for new deployments, and AWS IAM policies are famously overcomplex, resulting in security vulnerabilities due to misconfiguration, but sure. IAM policies are unique to each cloud provider.
> and there may be differences in availability, durability, feature set et al
Experientially it seems like Google has an advantage there: AWS seems to have more frequent and longer-lasting outages. I doubt durability differs much, and feature sets tend to be fairly even between them (might matter for the long tail). Also, that statement feels kind of empty, as all it says is that there "may be" differences, without naming any or saying which provider might come out ahead.
It's not even low; it's straight up breaking the regulations that made them put up that banner in the first place. But it seems companies haven't yet understood that, nor have governments actually enforced anything, so here we are.
This gives consent automatically (in an unspecified number of cases). The correct action is to respond with a "no consent" answer every time, because withdrawing your consent for every single optional cookie category should be the default. People bother you with popups because they hate you, not because the popups are mandatory.
While it's true that this extension sometimes accepts all cookies, that isn't the case most of the time. From the extension's website:
> In most cases, it just blocks or hides cookie related pop-ups. When it's needed for the website to work properly, it will automatically accept the cookie policy for you (sometimes it will accept all and sometimes only necessary cookie categories, depending on what's easier to do). It doesn't delete cookies.
It would be helpful to have a "what did you choose on this site" view though if that doesn't exist yet.
It would be better if the default was to reject always, everywhere. And the "please pander to corporate evil" flag was something to turn on explicitly.
I read this as a strategic move, as their main goal is to get your name, company, title, and email in exchange for downloading the report. So they were showing you snippets of interesting charts and then quickly covering them up.
"Note: Because of our machine selection and testing cutoff times, we were unable to test AWS’s m6a instances, which also run AMD’s Milan processors.
Based on the rest of our testing, we expect that the m6a instances could have outperformed m6i."
> We chose not to test ARM instance types this year as CockroachDB still does not provide official binaries for that processor platform. Official support for ARM binaries is slated for our Fall release (22.2), so we expect to return to testing this processor platform next year.
I'm not sure that is accurate for Graviton3, which appears to be besting Intel/AMD in a number of cases. So it doesn't appear there is a clear leader among the AMD, Intel, and Graviton instances, meaning everyone should be benchmarking their own application and picking one.
Amazon here seems to be leading the Arm pack, as they are using a newer generation of CPU, while Arm instances on other providers tend to be based on the older Neoverse-N1 generation (the same cores as Graviton2). I would imagine that gradually changing in the near future as those vendors also upgrade.
ARM isn't the optimal solution for every application at this time, but anyone who isn't seriously considering it probably needs to update their information.
The big savings on Graviton come from no hyperthreading. A vCPU on x86 is a hyperthread, but a vCPU on Graviton is a full core. So you get a full core and its L1 and L2 all to yourself, usually for cheaper than an x86 vCPU.
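If you want to confirm this on a given instance, a quick check with util-linux's `lscpu` (nothing cloud-specific about it) is:

```bash
# 2 threads per core means each vCPU is an SMT sibling sharing a physical core;
# 1 thread per core means each vCPU is a whole core to itself (as on Graviton).
lscpu | grep -E 'Thread\(s\) per core|Core\(s\) per socket'
```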
Last I checked, Cockroach doesn't support ARM binaries. That probably means they didn't think it worth the effort to do the analysis on those instances, as it doesn't help their offering.
The benchmark was executed only on CPU-intensive node configurations. On highmem node configurations, AMD does not perform as well. The report should probably be titled "Cloud Report for CPU-intensive workloads"; it doesn't do memory-intensive workloads enough justice.
Not just highmem configs. You have to run any reasonably memory-intensive workload to see the difference. Memory access latencies are higher on AMD machines for various reasons.
I don't follow their product closely. Who are their competitors?
(If I were in the market for their database and the alternatives were close in parity and price, I'd be likely to choose a competitor for the name not being "cockroach" alone.)
I was discussing this the other day with colleagues. You google CockroachDB and are immediately served Yugabyte and PlanetScale, which are IMO much "hotter" right now.
Here is the direct link to the report