Yeah, ours don't either (yet) in large part since radiators are more common. Though at least in NL there's a lot of hybrid systems. You keep your gas boiler and attach a heat pump as well. The heat pump handles house heating and the gas boiler still heats domestic hot water.
I don't remember the exact timeline, but I think SMS became free (bundled with mobile phone plans) in the US before WhatsApp became popular. And most of us don't chat internationally very much, so (probably) most people just default to SMS/iMessage unless there's a reason to do something differently. Even with the one person in Europe I regularly chat with, we default to Facebook Messenger.
"Doing ok" or "barely keeping head above water" is not a good look when you have people doing demanding engineering jobs that require skill and years of education, and they still have to keep themselves up to date learning outside 9-5. I know a married senior developer with two kids and 20 years of experience who is still saving for a deposit on a flat (a house is now out of reach). Rents are going up, and so are property prices, so they can't catch up. His wife can't work full time because of the children, and nursery is unaffordable. They haven't been on holiday since Covid, and his employer is talking about downsizing. The man is 40 and looks 50 from the stress.
I know a couple of developers who are single, yes they do "okay", but they are nowhere near in a position to start a family. It's grim.
Then you have wage compression, where a warehouse job doesn't get you a much worse living standard than a typical developer's. You'll have a shittier flat, maybe an extra housemate, and you'll shop at Aldi instead of Waitrose. That's very much the whole difference right now.
If you're using EBS for shuffle/source data, you're already behind.
If you can afford it, you should be using the host NVMe drives or FSx and relying on Spark to handle outages. The difference in performance will be orders of magnitude.
And in that case you won't have the ability to store 64TB; the typical max is around 2TB.
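A minimal sketch of the idea above: pointing Spark's shuffle/spill directories at instance-store NVMe mounts instead of an EBS-backed filesystem. The mount paths and app name here are assumptions, and note that in many cluster deployments `spark.local.dir` gets overridden by `SPARK_LOCAL_DIRS` or the cluster manager's own setting, so check your environment.

```python
# Hypothetical sketch, not a drop-in config: direct Spark's shuffle and
# spill files to host NVMe (assumed mounted at /mnt/nvme0 and /mnt/nvme1)
# rather than letting them land on an EBS root volume.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("nvme-shuffle-sketch")
    # spark.local.dir holds shuffle output and disk spills; this is the
    # I/O path that suffers most on EBS. In YARN/Kubernetes deployments
    # the equivalent knob is usually SPARK_LOCAL_DIRS or the pod's
    # local-volume mounts, which take precedence over this setting.
    .config("spark.local.dir", "/mnt/nvme0,/mnt/nvme1")
    .getOrCreate()
)
```

Spark stripes temp files across the comma-separated directories, so listing every NVMe device both spreads I/O and pools their capacity.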
That's really the only reason to use Spark in the first place: you're doing processing that can't fit on a single machine. When I was throwing 100TB at the problem it made more sense than it does for most of the data science tasks I see it used for.
This magnitude of data fascinates me. Can you elaborate on how that much data came to be and what kind of processing was needed for all that data? And if not, maybe point to some podcast, blog that goes into the nitty gritty of those types of real big data challenges.
Click/traffic data for a top 100 website.
We weren't doing a ton: basic recommendation processing, search improvement, pattern matching on user behavior, etc.
We'd normally still only need to process, say, the last 10 days of user data to get decent recommendations, but occasionally it made sense to run processes over the entire dataset.
Also, this isn't that large when you consider binary artifacts (say, healthcare imaging) being stored in a database, which I'm pretty sure is what a lot of electronic health record systems do.
A random company I bumped into has a 40TB OLTP database to this effect.
Financial market data (especially FX data) can have thousands of ticks per second. It's not unheard of for data engineers in that space to sometimes handle 1TB per day.
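A back-of-envelope check on those numbers; the tick size and rate below are my own assumptions, not figures from the comment:

```python
# Rough arithmetic only. Assumed: ~100 bytes per tick on the wire,
# ~5,000 ticks/second sustained ("thousands of ticks per second").
tick_bytes = 100
ticks_per_sec = 5_000
seconds_per_day = 86_400

bytes_per_day = tick_bytes * ticks_per_sec * seconds_per_day
print(bytes_per_day)          # 43_200_000_000 bytes, i.e. ~0.043 TB/day
```

So a single feed at that rate is roughly 43 GB/day, which means reaching 1TB/day implies aggregating a couple dozen venues/currency pairs or storing richer per-tick records; either way the figure is plausible.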
The vast majority of Chinese students do not plan on migrating to the UK. They usually do it so they can complete a Master's in 1 year instead of the 2 years required by Chinese universities. Having an international degree also carries weight in the domestic job market.