My spouse and I work at home and after the first couple multi-day power outages we invested in good UPSs and a whole house standby generator. Now when the power goes out it's down for at most 30 seconds.
This also makes self-hosting more viable, since our availability is constrained by our internet provider rather than by power.
Yeah, we did a similar thing. Same situation, spouse and I both work from home, and we got hit by a multi-day power outage due to a rare severe ice storm. So now I have an EV and a transfer switch so I can go a week without power, and I have a Starlink upstream connection in standby mode that can be activated in minutes.
Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.
Well, it's an EV with a big inverter, not a generator, but I get your point. And I do periodically fire it up and run the house on it for a little while, just to exercise the connection and maintain my familiarity with it in case I need to use it late at night in the dark with an ice storm breaking all the trees around us.
Oh, I see! Genuinely curious -- what kind of EV has a battery to power a house for a week?
> maintain my familiarity with it in case I need to use it late at night in the dark with an ice storm breaking all the trees around us.
That's the way to do it. I usually did my trial runs during the day with light readily available, but underestimated how much light I'd need to see what I was doing. Now there's a grounding plug and a flashlight in the "oh shit kit".
> what kind of EV has a battery to power a house for a week?
Assuming their heating, cooking and hot water are gas, a house doesn't actually consume that much. With a 50kWh battery you can draw just under 300W continuously for a week. I'd expect the average house to draw ~200W with lighting and a few electronics, with a lean towards the evenings for the lighting.
On paper the numbers look right, but a week off a _50kWh_ EV battery feels off.
What follows is back of the napkin calculations, so please treat it as such and correct me if I am wrong.
1. Inverters are not 100% efficient. Let's assume 90%.
2. Let's also assume the user does not want to draw the battery to 0, so as not to end up stranded or have to do the "Honda generator in the trunk" trick. Extra 10%?
3. 300W continuous sounds a bit low even with gas appliances. Things like the fridge and furnace blower have spiky loads that push up the daily average. Let's add 100W to the average load? I might be being too generous here, but I used 300W, not the 200W lower bound.
4. The vehicle side will have some consumption of its own. If powering the house off the battery, it would probably need to cool the pack or keep some smarts awake to make sure it doesn't drain or overheat? Genuinely not sure how to estimate this, so let's neglect it for now.
Math is (50kWh - 10% (inverter loss) - 10% (reserve)) / 0.4kW ≈ 100 hours, or ~4 days.
The above calculations assume a sane configuration (proper bidirectional wiring, not a suicide cord into a 12V outlet). A quick skim of search results for cars with bidirectional charging support for the home shows batteries between ~40kWh (Leaf) and ~250kWh (Hummer).
So it looks like one should be looking for an ~80kWh battery, which most of the cars on that list actually have.
Again, very back of the napkin, would probably wanna add 20% margin of error.
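If anyone wants to poke at the assumptions, here's the same napkin math as a tiny Python sketch. The inputs are the guesses above, not measurements, and the function is purely illustrative:

```python
def runtime_days(battery_kwh, avg_load_w, inverter_eff=0.90, reserve_frac=0.10):
    """Days of runtime from an EV pack at a constant average household load (rough estimate)."""
    usable_kwh = battery_kwh * inverter_eff * (1 - reserve_frac)  # knock off inverter losses and reserve
    hours = usable_kwh / (avg_load_w / 1000.0)                    # kWh divided by kW
    return hours / 24.0

print(runtime_days(50, 400))  # ~4.2 days: the 50kWh case above
print(runtime_days(80, 400))  # ~6.8 days: an ~80kWh pack gets you roughly a week
```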
Actually, yes, one thing I didn't consider in my calculation is the fridge (mostly because it's a spiky load that only comes on occasionally, and I based my number on my own apartment's instantaneous consumption at the time, which was ~100W since the fridge compressor wasn't running).
Indeed with the fridge it pushes it a bit. But to address some of your other points:
> it would probably need to cool the battery
I'd expect if you're in a storm then you probably don't need any cooling - not to mention a 300W load is nothing for an EV battery compared to actually moving the vehicle. I'd expect some computers in the vehicle to be alive but that should be a ~10-20W draw.
On the other hand, my calculation assumes ~300W continuous. I expect the consumption to lean into the evenings due to the extra lighting, and drop off during other times.
But yes 80kWh might very well be what the OP has; I intentionally picked 50kWh as the lowest option I found on a "<major ev brand> battery kwh" search.
2025 was the year of LiFePO4 power packs for me and my family. Absolute game changers: 1000Wh of capacity with a multi-socket inverter and UPS-like failover. You lose capacity compared to a gas genny, but the simplicity and lack of fumes add back a lot of value. If it’s sunny you can also make your own fuel.
You’re right, it’s not much, but it is convenient and clean. A few lamps, USB charging, and a router/modem will use a few tens of watts and the big power pack will keep that going for eight hours.
For longer outages there is an outhouse with triple-redundant generators:
- Honda c. 2005
- Honda c. 1985
- Briggs & Stratton c. 1940
The “redundancy” here is that the first is to provide power in the event of a long power outage, and the other two are redundant museum pieces (which turn over!)
Generac 26kW Guardian, natural gas fueled, connected to a pair of automatic transfer switches. We have two electric meters due to having a ground source heat pump on its own meter.
During winter outages, do you stick with the heat pump or switch to backup heat (e.g. a furnace)?
I regrettably removed our old furnace/tank when installing the air-source heat pump we have now (northeast), but that’s been my biggest concern power-wise.
Curious how long you've been sitting on the IP block. I've been nosing around getting an ASN to mess around with the lower level internet bones but a /24 is just way too expensive these days. Even justifying an ASN is hard, since the minimum cost is $275/year through ARIN.
The minimum publicly routable IPv4 subnet is /24 and IPv6 is /48. IPv6 is effectively free, there are places that will lease a /48 for $8/year, whereas as far as I can tell it's multiple thousands of USD per year to acquire or lease a /24 of IPv4.
My security update system is straightforward but it took quite a lot of thought to get here.
My self-hosted things all run as Docker containers inside Alpine VMs on top of Proxmox. Services are defined with Docker Compose. One of those things is a Forgejo git server, along with a runner in a separate VM. I have a single command that will deploy everything, plus a Forgejo action that invokes that command on a push to main.
I then have Renovate running periodically, set to auto-merge patch-level updates and tag updates.
Thus, Renovate keeps me up to date and git keeps everyone honest.
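The auto-merge piece is only a couple of lines of Renovate config. Roughly something like this (a sketch, not my exact file; the exact rules depend on your setup, so check the Renovate docs rather than copying this blindly):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "digest"],
      "automerge": true
    }
  ]
}
```

With something like that, patch/digest bumps get merged on their own (after any required checks pass, if you have them), while minor and major bumps still show up as PRs to review.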
Chicken and rice is anything but bland. I haven't had Hainanese style but the Thai style khao man gai that Nong's serves in Portland is a flavor that I still remember more than a decade later.
Chicken and rice has oil and some savoriness, but it's not jacked to the tits with spice like an Indian curry or any Thai food - in that regard, compared to other Asian cuisines, yes it is bland. Compared to midwest mac & cheese, sure, maybe it's less bland, but even then I bet a midwesterner could pleasantly eat the dish where they would be on the struggle bus eating Indian food.
On consumer chips, the more memory modules you install the slower they all run. I.e. a single module of DDR5 might run at 5600 MT/s, but if you have four of them they all get throttled to something like 3800 MT/s.
Mainboards have two memory channels, so you should be able to reach 5600 MT/s with one module on each, and dual-slot mainboards have better routing than quad-slot mainboards. This means the practical limit for consumer RAM is 2x48GB modules.
Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020 this was suddenly limited to two channels, starting with the 12th generation (AMD's consumer processors have always had two channels, with the exception of Threadripper?).
However, this does not make sense: for more than a decade processors have grown mainly by increasing the number of threads, so two channels sounds like a negligent, deliberately imposed bottleneck on memory access if one actually uses all those threads (say 3D rendering, video post-production, games, and so on).
And if one wants four channels to get past that imposed bottleneck, the mainboards that do have four channels nowadays aren't designed with consumer use in mind, so they come with one or two USB connectors and three or four LAN connectors, at prohibitive prices.
We are talking about ten-year-old consumer quad-channel DDR4 machines, widely spread, that remain competitive with current consumer ones, if not better. It is as if everything had been frozen all these years (and it remains to be seen whether the pattern continues).
Now it is rumoured that AMD may opt for four channels for its consumer lines thanks to an increased socket pin count (good news if true).
What the industry is doing to customers is a bad joke.
> Intel's consumer processors (and therefore the mainboards/chipsets) used to have four memory channels, but around the year 2020 this was suddenly limited to two channels, starting with the 12th generation (AMD's consumer processors have always had two channels, with the exception of Threadripper?).
You need to re-check your sources. When AMD started doing integrated memory controllers in 2003, they had Socket 754 (single channel / 64-bit wide) for low-end consumer CPUs and Socket 940 (dual channel / 128-bit wide) for server and enthusiast desktop CPUs, but less than a year later they introduced Socket 939 (128-bit), and since then their mainstream desktop CPU sockets have all had a 128-bit wide memory interface. When Intel later also moved their memory controller from the motherboard to the CPU, they also used a 128-bit wide memory bus (starting with LGA 1156 in 2009).
There's never been a desktop CPU socket with a memory bus wider than 128 bits that wasn't a high-end/workstation/server counterpart to a mainstream consumer platform that used only a 128-bit wide memory bus. As far as I can tell, the CPU sockets supporting integrated graphics have all used a 128-bit wide memory bus. Pretty much all of the growth of desktop CPU core counts from dual core up to today's 16+ core parts has been working with the same bus width, and increased DRAM bandwidth to feed those extra cores has been entirely from running at higher speeds over the same number of wires.
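To put rough numbers on "higher speeds over the same number of wires": peak DRAM bandwidth is just bus width times transfer rate, so for the same 128-bit dual-channel interface across generations (quick illustrative Python, using standard JEDEC speeds as examples):

```python
def peak_dram_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Theoretical peak DRAM bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mts / 1000.0

# Same 128-bit (dual-channel) bus, three DRAM generations:
print(peak_dram_bandwidth_gbs(128, 1600))  # DDR3-1600: ~25.6 GB/s
print(peak_dram_bandwidth_gbs(128, 3200))  # DDR4-3200: ~51.2 GB/s
print(peak_dram_bandwidth_gbs(128, 5600))  # DDR5-5600: ~89.6 GB/s
```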
What has regressed is that the enthusiast-oriented high-end desktop CPUs derived from server/workstation parts are much more expensive and less frequently updated than they used to be. Intel hasn't done a consumer-branded variant of their workstation CPUs in several generations; they've only been selling those parts under the Xeon branding. AMD's Threadripper line got split into Threadripper and Threadripper PRO, but the non-PRO parts have a higher starting price than early Threadripper generations, and the Zen 3 generation didn't get non-PRO Threadrippers.
At some point the best "enthusiast-oriented HEDT" CPUs will be older-gen Xeon and EPYC parts, competing fairly in price, performance and overall feature set with top-of-the-line consumer setups.
Based on historical trends, that's never going to happen for any workloads where single-thread performance or power efficiency matter. If you're doing something where latency doesn't matter but throughput does, then old server processors with high core counts are often a reasonable option, if you can tolerate them being hot and loud. But once we reached the point where HEDT processors could no longer offer any benefits for gaming, the HEDT market shrank drastically and there isn't much left to distinguish the HEDT customer base from the traditional workstation customers.
I'm not going to disagree outright, but you're going to pay quite a bit for such a combination of single-thread peak performance and high power efficiency. It's not clear why we should be regarding that as our "default" of sorts, given that practical workloads increasingly benefit from good multicore performance. Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts) than CPU.
I said "single-thread performance or power efficiency", not "single-thread performance and power efficiency". Though at the moment, the best single-thread performance does happen to go along with the best power efficiency. Old server CPUs offer neither.
> Even gaming is now more reliant on GPU performance (which in principle ought to benefit from the high PCIe bandwidth of server parts)
A gaming GPU doesn't need all of the bandwidth available from a single PCIe x16 slot. Mid-range GPUs and lower don't even have x16 connectivity, because it's not worth the die space to put down more than 8 lanes of PHYs for that level of performance. The extra PCIe connectivity on server platforms could only matter for workloads that can effectively use several GPUs. Gaming isn't that kind of workload; attempts to use two GPUs for gaming proved futile and unsustainable.
Say you have a processor with more than eight threads and the same per-channel bandwidth: what do you choose, the dual-channel or the quad-channel processor?
That number of threads will hit a bottleneck accessing memory through only two channels.
I don't understand why you brought up single-threaded performance in your response to the other user, given that processors hit a frequency limit of 4GHz (5GHz with overclocking) a decade ago. That is why they increased the number of threads, but if they then reduce the number of memory channels for consumer/desktop...
Larger capacity is usually slower though. The fastest RAM typically comes in 16GB or 32GB modules.
The OP is talking about a specific niche: boosting single-thread performance. It's common with gaming PCs since most games are single-thread bottlenecked. A 5% difference may seem small, but people are spending hundreds or thousands for smaller gains… so buying the fastest RAM can make sense there.