Just a quick caution: I had a frustrating series of issues with an Asrock Industrial NUC that I ordered from Newegg. Asrock Industrial support was fairly helpful and responsive, and it definitely looked like my unit specifically had a faulty motherboard, which, well, these things happen. The caution is that arranging an RMA turned into a real pain between Asrock and Newegg. I don't think Newegg is exactly selling them gray market, but they are not one of Asrock Industrial's distributors, which seemed to be the pain point (Newegg was not actually set up to handle RMAs). I think it might be a better idea to buy through https://mitxpc.com/, which both sells direct to consumer and is a listed Asrock Industrial distributor.
To their credit Newegg did take it back and send me a replacement but I think it had to be escalated to management and they ate the cost. It took a few days and a few back-and-forths with their CSAs.
GamersNexus on YouTube had an incredibly messed up experience returning an open-box motherboard to Newegg and made a whole video series about it. They actually got an interview with Newegg execs, some of whom had only been there a few months and have since left the company.
That Newegg interview was the strangest and most awkward encounter imaginable. The whole overly corporate feel of the company was exposed and just made them look even worse.
Newegg is a shitshow and has been for a long, long time.
So many people have been burned by them, and the writing was on the wall as long as a decade and a half ago, when they would purposely not process orders for 2-3 days to get you to pay extra to have an order processed in their warehouse in a reasonable period of time.
It doesn't matter whether Newegg was "set up to handle RMAs" for Asrock. They sold you something defective. They shouldn't get credit for "taking it back and sending a replacement"...that's required of them.
It's important to understand here that the product was well outside of Newegg's return period and this all occurred under the manufacturer's warranty. The problem is that most B2B manufacturers expect their distributors to handle warranty claims. Based on my interactions with Newegg I think they had gotten into this situation somewhat accidentally and may not have realized that the NUC was from Asrock Industrial, a sales channel distinct from Asrock's consumer products. Of course this doesn't speak positively of their internal processes.
With this kind of product you can often find lower prices from sellers other than the authorized distributors, at the cost that the warranty is not serviceable. It's usually referred to as gray-market sales. I think Newegg was effectively a gray-market seller here, but unknowingly, as they were advertising Asrock Industrial's warranty even though they weren't authorized to service it.
That sucks. I usually try to buy parts from Newegg even if it costs a little more than Amazon because of their stance on patent trolls. When I got a bad GPU from them in 2012, the RMA experience was easy. I wonder what went wrong.
I thought they started to go downhill fast once the marketplace vendors took off. It felt like they cut their own inventory/SKUs so you couldn't just filter out the marketplace listings and still get a broad selection. I too held them in high regard for the patent troll fighting.
I stopped using them completely around 2012-13, after getting so many dead motherboards from Newegg, and zero from other sources, that I was convinced shenanigans were afoot.
If a device you bought from a store (online or not) doesn't work, can't you just return it to the seller and have them deal with it... I mean.. they did sell it to you, they should be responsible for the warranty and everything else.
...or is this some kind of only-in-EU type of regulation?
These small boxes make for great homelab servers. I use a Topton box from AliExpress[1] as a router/VM/container host and love it. It has an iGPU in case SHTF, and 4 individual 2.5GbE Ethernet ports. The only thing it really lacks is out-of-band management.
Run pfSense/OPNsense in a VM, plug in WAN/AP/NAS/desktop, and you have a lightning-fast network/development lab (by home use standards at least).
Even if you were not planning on using it as a lab, one of these kinds of boxes plus a strong AP beats even the high-end consumer-focused all-in-one routers without breaking a sweat, at ~50W.
A long time ago I ran a rack-mount Dell R710 with an external disk shelf. It drew ~600W from what I remember and had a fraction of the compute power.
Even if you want to do 25Gbit+, you are still better off building a custom PC [1] than buying any dedicated routing hardware. At least right now they are still very loud and cost way more. Plus, as you said, you can use it as a VM host for all kinds of stuff.
Depends what exactly you want to do with the traffic. NAT v4 out to the internet at 25G? Yeah, nothing is going to beat the price for performance of a software box. Route+switch your home lab at 25G? You can get 4 ports of 100G (up to 16 ports of 25G via breakout) with line-rate, hardware-offloaded performance from a $700 MikroTik switch, which runs pretty damn quiet (and you can always throw in Noctuas if you really want it dead silent).
I am so jealous of your ISP situation! I’ve got symmetric gigabit from Verizon FiOS, which is great for what it is, but there is zero roadmap to anything faster in the US.
You can look at commercial offerings, but they charge way more for much less speed on a dedicated line. The cable folks are bragging that they hit 10Gbit in the lab, but there's no indication of when they will actually sell that. 25G is a fantasy.
The only reason we have this is because the people of the city voted to require every home to have fiber. It cost a lot of taxpayer money, but now the infrastructure exists and is run by the state-owned power/phone company. Any private provider can use the infrastructure to offer whatever they want, and since the fiber has no real limit on speed, it just depends on what's attached to the ends. Every home has 4 (usually 2 terminated) strands p2p directly to the POP.
There has been an ongoing battle in the courts because the state-owned phone company started using p2mp fiber connections for new runs, which limit the maximum speed to whatever hardware they have installed at the split. However, they have now backed off because it looks like the state will rule against them, as it is anti-competitive. The split also requires power, which is not very green, all because they save around USD 50 per link.
So it isn't all roses and takes effort by the people, but if you ever get the chance to vote on municipal fiber/infrastructure, do it. Even here in Switzerland we have a lot of people who do not understand this, and it took a lot of effort. They are even defending the phone company's use of p2mp.
Thanks for sharing! I agree that a publicly-owned, privately-operated fiber-to-the-home network is currently the gold standard. The Baby Bells are lobbying hard for regulatory barriers to this[0], but there are some success stories like Chattanooga [1].
I'm definitely interested in supporting these efforts. Wireless wide area networks are not gonna cut it. If we get the baseline up to 10G, all sorts of new applications and architectures will be feasible.
I definitely agree about using boxes like the one being reviewed here as mini homelab servers, but I tend to prefer the actual router be a separate device. For one thing, the ASRock box only has a single Ethernet port, which isn't ideal for a router (yes, you can use a managed switch and VLANs to make it work, but that's not always optimal either). I think there's also value in keeping your router separate from a reliability and security standpoint. I use a box similar to the ASRock as a VM server and a fitlet[1] as a dedicated router running Opnsense. Tiny homelab footprint and super reliable so far.
… but you are comparing a reasonably priced Chinese box with multiple LAN ports (ideal for a router) with a $1000 single-Ethernet-port box. Kinda not the same thing.
I like used enterprise-level small form factor PCs for home servers. Lots of cheap parts on eBay, decent performance, and low TDP. For example, the HP EliteDesk 800 series.
Intel has the NUC Extremes, which offer a full x16 slot in a fairly small footprint. The core of the computer itself looks like a small GPU, and it plugs into a daughterboard that acts as a coupler for its two x16 slots.
I think most people throw GPUs in them, but I don't see a reason why you couldn't just fit it with a nice NIC.
Kinda a stupid comment from me, but I am wondering why Framework doesn't get into the NUC business. The ASRock NUCs usually retail for $650-$750, and that's around the cost of an upgrade kit from Framework. Considering that most industrial compute deals with long-lived deployments of edge hardware, it seems like the sort of thing their emphasis on ecosystem could help with here.
Anyways, I should stop my homelab hobby even though I find this stuff fun. I might pick this up regardless.
Do upgradeable NUCs make sense? I would imagine almost all the cost is in the motherboard, unlike a laptop where you can reuse the screen, battery, keyboard, etc.
I can imagine a sort of "NUC of Theseus" where you can swap the wifi module, ethernet port, mainboard, storage, CPU or RAM as requirements change, all in a pretty simple enclosure that remains standard in size. If you're not particularly picky about the size or form factor, it might even be possible to do so with the existing parts Framework already makes and sells.
Article has a nice summary of the differences between standard and in-band ECC. Looks kinda like the difference between hardware and software RAID?
In-band ECC trades performance for resilience by allocating some of your memory capacity and bandwidth to check bits. This is dramatically simpler, and thus less expensive, than traditional side-band ECC, which adds extra memory lanes and chips dedicated to the parity information.
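For a rough sense of the tradeoff, here's a back-of-the-envelope sketch. Nothing below comes from the review itself: the 1/32 reserved fraction is my assumption based on the ~3% figure quoted elsewhere in this thread, and the bandwidth number is only an optimistic floor since the real hit depends on how the check-bit traffic interacts with the workload.

    # Rough comparison of the two ECC schemes being discussed.
    def side_band_overhead(data_bits=64, ecc_bits=8):
        """Side-band ECC: extra lanes and an extra chip carry the check bits."""
        extra_silicon = ecc_bits / data_bits   # 8/64 = 12.5% more chips per rank
        capacity_lost = 0.0                    # user-visible capacity unchanged
        bandwidth_lost = 0.0                   # check bits ride on their own lanes
        return extra_silicon, capacity_lost, bandwidth_lost

    def in_band_overhead(reserved_fraction=1 / 32):
        """In-band ECC: check bits live in ordinary RAM and share the bus."""
        extra_silicon = 0.0                    # plain non-ECC memory
        capacity_lost = reserved_fraction      # ~3% of RAM is carved out
        bandwidth_lost = reserved_fraction     # optimistic floor; real hit varies
        return extra_silicon, capacity_lost, bandwidth_lost

    print("side-band:", side_band_overhead())
    print("in-band:  ", in_band_overhead())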
Seems like a hugely valuable option to me. Needs more OS support (Intel says ChromeOS only for now??). Would be interesting to compare price/performance/resilience against used/low end server hardware.
The thing is not remotely "industrial", but I guess that label is an attempt to justify the outrageous pricetag. It's firmly office, maybe light commercial at best.
The review itself mentions that the design is based on a consumer product and that they only altered the cooling vent location to accommodate a new motherboard layout.
This is aimed at customers like the previously-on-HN JesusChicken chain, which is placing NUCs at every store. Even then, I'd expect better sealing.
If it’s not clearly written out as a promise then it can be different. They called this a surprise so they didn’t expect it.
Businesses are in a profit-maximization, cut-all-costs phase as a general rule right now. Gone are the days of “oh, nice for you, we left that in” or other consumer surpluses. There's probably room to coin a term for this corporate strategy, a la how they coined the “quiet quitting” term for laborers. “Shrinkflation” comes to mind.
Very nice! I wish they had copied Intel in moving the power switch to the front, as the top switch is a falling-book reboot waiting to happen. (I've got a half dozen servers in this form factor doing different things and all but one have front-facing power switches.)
I even designed a way to easily mount them to the side supports of an IKEA Jerker desk[1] :-).
Was there nothing mentioned about noise under load, or did I miss it? I bought an 8-core Minisforum AMD-based NUC to use as a home server and it's too noisy even under slight load.
Mine's silent with just one or a few VMs, but once I get the CPU to maybe 50% load it gets too noisy. It's a bummer because I feel like I'm throwing away half of the system's capabilities.
It would be nice if somebody made a NUC with a desktop-class CPU cooler. But I suppose that's getting pretty close to ITX territory.
Truly "industrial" small x86-64 PCs will have direct wiring terminals for 12VDC power, or -48VDC, not a consumer-grade 110-240VAC to DC-something power brick in the box.
Not to mention -40 to 80C operating temperature range for all components, waterproof port shields and operation at 100% humidity. I didn’t even see an RS-232 port let alone RS-485.
There’s some industrial operations admin offices this thing wouldn’t survive in!
I’m guessing they’re working on a very liberal interpretation of “industrial”
tl;dr it reserves 3% of your RAM for ECC parity/check values, which is all handled transparently without your operating system needing to know anything.
I know it's apples vs oranges for a lot of folks' applications, but boy does the updated M2 Mac Mini starting at $549 (at time of comment) seem like a steal.
Yeah… although every time I poke at it I add the M2 Pro and 10GBASE-T…
Also Apple’s memory and storage pricing is insane. Guess that’s the point of the upgraded network and unified memory though? Have a fast NAS on a 10G network and work that way.
In-band ECC is likely a lot more marketing speak and a lot less ECC. The way I look at ECC is like this: no extra die, no ECC.
In-band “ECC” DIMMs store the ECC bits on the same memory chips, using the same lanes as the data. Side-band ECC uses dedicated lanes and a dedicated memory chip. If you don't have 9 chips on your RAM sticks, you have some marketing version of ECC (cheaper).
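For anyone curious, here's the arithmetic behind that 9-chip rule of thumb. It assumes x8 DRAM chips on a single 64-bit channel; x4 parts or wider modules just scale the counts.

    # Why side-band ECC shows up as a 9th chip on a 64-bit channel of x8 parts.
    CHANNEL_WIDTH = 64   # data bits per transfer
    ECC_WIDTH = 8        # check bits per transfer (SECDED over the 64 data bits)
    CHIP_WIDTH = 8       # bits contributed by each x8 DRAM chip

    data_chips = CHANNEL_WIDTH // CHIP_WIDTH   # 8 chips carry data
    ecc_chips = ECC_WIDTH // CHIP_WIDTH        # 1 extra chip carries check bits
    print(f"{data_chips} data + {ecc_chips} ECC = {data_chips + ecc_chips} chips per rank")
    print(f"extra silicon: {ECC_WIDTH / CHANNEL_WIDTH:.1%}")  # 12.5%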
I didn’t call it fake but I am hesitant to adopt something that doesn’t have a need. Is it better and more reliable or is it cheaper for the manufacturer to produce?
My money is on it being cheaper to produce and with no benefit to the consumer. The AnandTech author even states that in-band deserves more investigation.
ZFS stores the error correcting bits on the same physical drive as the data. It’s not unheard of to do it that way.
Why shift from a dedicated memory channel and chip to a shared one?
The FreeNAS/TrueNAS community has worried about situations like the scrub of death (no ECC). Others on the forum below think the same about consumer DDR5 “ECC”.
> Why shift from a dedicated memory channel and chip to a shared one?
Cost, mostly; maybe pin count and routing at some level as well (I think current connectors have pins for ECC, but maybe this enables some pin-saving for DDR6 or the new RAM connector from Dell?).
But also maybe performance? ECC RAM usually doesn't come in high-performance models, but with in-band, you can use whatever RAM. Clearly, there is a performance penalty for in-band vs no ECC with the same RAM, and there's almost certainly a penalty for in-band vs side-band at the same RAM timings, but is there a benefit for in-band with faster RAM vs side-band with conservative RAM? Maybe? Need appropriate benchmarks.
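A crude way to eyeball it on a single box, if anyone wants to try: a NumPy copy loop, run once with in-band ECC enabled in firmware and once with it off. Nowhere near as rigorous as STREAM or Intel MLC, but the relative numbers should be indicative.

    # Rough memory-copy bandwidth probe; compare the result with in-band ECC
    # enabled vs disabled in the firmware setup.
    import time
    import numpy as np

    N = 100_000_000                      # 800 MB per array, well past the caches
    src = np.random.rand(N)
    dst = np.empty_like(src)

    best = float("inf")
    for _ in range(5):                   # keep the best of a few runs
        t0 = time.perf_counter()
        np.copyto(dst, src)              # reads 800 MB, writes 800 MB
        best = min(best, time.perf_counter() - t0)

    print(f"~{2 * N * 8 / best / 1e9:.1f} GB/s copy bandwidth")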
If this has reporting similar to the machine check exceptions that proper side-band ECC supports, this may be an easier path to getting ECC on low-end machines than hoping all the stars are aligned for ECC support and then paying a significant premium for the RAM.
DDR5's on-die ECC probably increases reliability, but it also means failures will be worse. If flipped bits make it out of the chip at all, there will always be at least two of them, because single-bit errors get corrected. It's not clear what will happen with errors that are detectable but not correctable, since, AFAIK, there's no mechanism to report errors.
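To make the correct-one/detect-two behavior concrete, here is a toy extended-Hamming (SECDED) codec. This is only an illustration of the general mechanism; DDR5's actual on-die code is much wider, and I wouldn't assert anything about how it behaves on multi-bit errors.

    def encode(d):
        """Encode 4 data bits as an 8-bit extended Hamming codeword.
        c[0] is the overall parity bit; c[1..7] is Hamming(7,4)."""
        c = [0] * 8
        c[3], c[5], c[6], c[7] = d
        c[1] = c[3] ^ c[5] ^ c[7]      # covers positions with bit 0 set
        c[2] = c[3] ^ c[6] ^ c[7]      # covers positions with bit 1 set
        c[4] = c[5] ^ c[6] ^ c[7]      # covers positions with bit 2 set
        c[0] = sum(c[1:]) % 2          # overall parity
        return c

    def decode(c):
        s = 0
        for i in range(1, 8):
            if c[i]:
                s ^= i                 # syndrome = XOR of the set positions
        overall = sum(c) % 2           # 0 if total parity is still even
        if s == 0 and overall == 0:
            status = "no error"
        elif overall == 1:             # odd parity: exactly one flip somewhere
            if s:
                c = c.copy()
                c[s] ^= 1              # flip the indicated bit to correct it
            status = "corrected single-bit error"
        else:                          # syndrome set but parity even: two flips
            status = "detected uncorrectable double-bit error"
        return status, [c[3], c[5], c[6], c[7]]

    word = encode([1, 0, 1, 1])
    one = word.copy(); one[6] ^= 1               # single flip: silently fixed
    two = word.copy(); two[3] ^= 1; two[6] ^= 1  # double flip: only detected
    print(decode(one))
    print(decode(two))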
I did look at the R86S with interest when the STH article came out, but the 16GB max of RAM is a limitation. 32 or 64GB would be ideal to be able to run a bunch of VMs.