Pretty neat. I have been thinking about how I can build a tiny NAS, but I would like ECC, so that rules out the Pi, etc.
Best option I have found so far might be this ASRock 4x4-v2000 https://www.asrockind.com/en-gb/4X4-V2000M , it has an 8-core CPU and supports ECC. Would need to get an M.2 to 4x SATA adapter. The hard part seems to be figuring out how to buy the board itself...
I built my home NAS on this and it's great. It has OCuLink ports built into the board, so I can just use an M.2 drive for the OS itself, and it supports 10 Gb Ethernet, so as long as I'm wired in, the network is effectively never the bottleneck, since that's faster than SATA anyway. The board itself also does video, so I can use a Ryzen CPU without Radeon graphics and not have to get the Pro to support ECC.
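To put rough numbers on "faster than SATA", here's a back-of-the-envelope sketch in Python (assuming SATA III at 6 Gbps with its 8b/10b encoding; real throughput also depends on drives and protocol overhead):

    # Rough throughput comparison: 10 GbE vs SATA III.
    ETH_10G_BITS = 10e9      # 10 GbE line rate, bits/s
    SATA3_BITS = 6e9         # SATA III line rate, bits/s
    SATA3_ENCODING = 8 / 10  # SATA III uses 8b/10b encoding

    eth_gbytes = ETH_10G_BITS / 8 / 1e9                  # ~1.25 GB/s raw
    sata_mbytes = SATA3_BITS * SATA3_ENCODING / 8 / 1e6  # ~600 MB/s usable

    print(f"10 GbE:   ~{eth_gbytes:.2f} GB/s raw")
    print(f"SATA III: ~{sata_mbytes:.0f} MB/s usable")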
What benefit would ECC have for a NAS? Just making sure there are no bits flipped in your data before writing to disk? If so, it seems like file integrity checking would be more important.
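For what it's worth, the kind of integrity checking I mean is easy to script yourself if your filesystem doesn't do it natively. A minimal Python sketch (the path and output file are just examples):

    # Minimal bit-rot check: hash every file under a directory so a
    # later run can be diffed against this one to spot flipped bits.
    import hashlib
    import json
    import pathlib
    import sys

    def hash_file(path):
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def sweep(root):
        return {str(p): hash_file(p)
                for p in sorted(pathlib.Path(root).rglob("*"))
                if p.is_file()}

    if __name__ == "__main__":
        # Usage: python integrity.py /mnt/nas > checksums.json
        json.dump(sweep(sys.argv[1]), sys.stdout, indent=2)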
The Helios64 "Full Bundle" looks nice, but only 4 GB RAM seems limited for ZFS. Plus, for 300 USD, there should be a way to set up a similar x86 system, possibly from second-hand parts. I'm thinking some older AMD Zen with 8-16 GB RAM and a Fractal Design Node or similar case.
Yes, I know that would likely use more electricity, but in my particular case (France), electricity isn't that big of a cost, plus I wouldn't be running this 24/7. More like one or two evenings a week. The only reason why I'm not running recycled "enterprise" servers at home is that my apartment is small and I absolutely cannot stand the noise they make.
But those are usually "consumer" parts (in the broad sense). In particular, they won't take ECC RAM. I'm hoping this will change now that AMD has a believable "enthusiast" offering that has ECC.
This to me is the main issue. Usually, manufacturers think ECC = Enterprise = Running in a computer room = Doesn't need to be quiet.
Of course, it also doesn't help that clients usually ask for thin, least-possible-U systems.
I've actually gotten my hands on a Lenovo tower server my client didn't need anymore, and it's really pretty quiet. But it's "huge" by rack standards: basically a full PC tower in height, but longer.
I could see myself running it in the kitchen or something once I get it home after the lockdowns.
The issue is it only takes 2.5" drives, which are still quite expensive as SSDs, or use SMR as HDDs.
I too have a Helios4 and I like it for its ECC support, but it has a plethora of drawbacks:
* 32-bit OS means a maximum 16 TB partition size (see the arithmetic sketch after this list)
* It's not fast at all. It can barely saturate a Gbit connection over HTTP without any encryption, and even with the crypto engine enabled, throughput drops significantly as soon as encryption is involved
* Depending on which version you got, the fans are quite annoying; no matter which version, they are quite ineffective
* It has at least one known hardware bug concerning SATA and the internal flash
* WOL and the internal Watchdog are hit or miss at best
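(Regarding the 16 TB figure in the first point: one plausible derivation is the 32-bit page-cache index on a 32-bit kernel, at the usual 4 KiB per page. A quick check in Python:)

    # 32-bit page index * 4 KiB pages = the ~16 TB (really TiB) ceiling.
    PAGE_SIZE = 4096     # bytes per page (typical)
    MAX_PAGES = 2 ** 32  # 32-bit page-cache index

    print(f"{PAGE_SIZE * MAX_PAGES / 2**40:.0f} TiB")  # -> 16 TiB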
On the other hand, the Linux support and documentation are top-notch, so I don't want to complain too much, but I am kind of disappointed with the speed I got out of it.
Ah. I haven't run into speed or capacity issues (different use cases & preferences, I suppose).
For the fans: I am not using the included case. I have one stock fan on top of the SoC and one 120 mm fan for the disks. The stock fan is decently loud on bootup, but it stops or runs at low speed during regular operation.
One of the little gems I found while I was putting together the video for this post was "Shouting in the datacenter", an older clip (from the pre-HD era) demonstrating the consequence of doing exactly what the title says: https://youtu.be/tDacjrSCeq4
Does anybody know why support for NBASE-T is so uncommon and expensive in networking equipment?
These 2.5 Gbps and 5 Gbps adapters look really nice, especially for laptops and other devices with USB 3.2 ports, but it appears to be near impossible to affordably connect them to a more-than-trivial network.
The first standards for 10GE appeared as early as 2002; NBASE-T came as late as 2015. It's a rather new standard, unnecessarily so in hindsight... apparently there were discussions about potential 2.5 Gbps and 5 Gbps modes while the original 10GE spec was being defined, but they were eventually not included.
Dunno, I bought a new cable modem, which has 2.5 GbE. Most new motherboards come with one or two 2.5 GbE ports, at least if you spend another $20-$40 over the cheapest. Even Raspberry Pi-like widgets are coming with 2.5 GbE, like the Hardkernel ODROID-H2+. The latter has a 4-port 2.5 GbE expansion card if you want a total of 6.
What "non-trivial" network is blocked by using 2.5 gbps? There are transceivers that let 2.5 gbe work with 10gb ports on older/common 10Gb switches. QNAP, linksys, and trendnet have a decent number of switches in the 5-18 port range, which seem splenty for most home networks.
1G has been around so long that it's cheap commodity hardware now.
10G was too much of a leap, IMO, so the hardware remains expensive. 10G over regular Cat5e is also relatively power hungry.
2.5G and 5G arrived relatively recently to fill the gap with the stalled jump to 10G. You can get started with something like a $110 QNAP 2.5G 5-port switch and some cheap 2.5G adapters on the computers you care about.
I have a 10 GE NAS (TrueNAS) and just bought the parts for a server which has 10 GE network interfaces built in. This was not more expensive than usual...
I'm waiting for the price of switches to get down to a level that I will accept. They are quite expensive now.
10 GbE cards and adapters are still quite expensive (at least compared to cheap $10-20 1 Gbps gear); one of the cheapest ASUS 10G PCIe cards is $99, and the cheapest Thunderbolt 3 10G adapter (both with RJ45 connections) is $149.
If you want to graduate to SFP+ those adapters usually cost a little more (plus the cost of a transceiver for fiber or copper).
Cabling is another issue: many existing installations (like my house) with 50+ ft runs need to be re-cabled if they actually want stable 10G speeds. My home network works great at 2.5G, but there are a few Cat5e runs that don't give me 10 Gbps.
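(If you want to check what a given run actually sustains, running iperf3 between machines at the two ends of the cable is the easy test; "nas.local" below is just a placeholder for whatever sits at the other end:)

    # On the machine at one end of the cable run:
    iperf3 -s

    # On the machine at the other end:
    iperf3 -c nas.local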
The reason I want 10 GE is that I made sure the cables in my house could support these speeds. I expect more and more of the devices I install in the future (new TV, new workstation) to support 10 GE. This is just future-proofing my network installation.
> 10G was too much of a leap, IMO, so the hardware remains expensive.
Switches are still quite expensive, but 10G cards are quite affordable nowadays. Notebooks can use a Thunderbolt 3 to 10G Ethernet adapter, but I think we will see 10G ports on consumer notebooks soon.
I wouldn't invest in a bridge technology like 2.5G, since it introduces non-trivial costs and offers only a 2.5x speedup instead of 10x.
> Consumer notebooks are moving away from having ethernet ports in general.
I might be wrong, but I don't think every notebook will end up as slim as the height of a USB-C port, with one or two USB-C ports as its only connections. This might be true for Apple and some companies that copy Apple, but there are still consumer notebooks that don't follow that trend and offer bulkier builds with more ports and better thermals.
> 10GBASE-T is relatively power hungry, requiring a couple of watts per port. I don't think we'll be seeing this in consumer notebooks any time soon.
It all depends on customers' needs. I would rather have a 10GBASE-T port than a dedicated GPU in my book, and I think the latter consumes even more watts.
There are some affordable (relatively) 10 gig switches nowadays, though they are based on SFPs so you have to factor that into the price too. Mikrotik sells a 4 port SFP+ switch for about $150.
> 10G over copper requires Cat6, most homes have Cat5e if they're lucky.
10G over copper works fine at medium distances (up to about 45m / 150ft) on Cat5e. You would need Cat6a if you needed to guarantee 10G performance at 100m, but for the average home it's not a big deal to use Cat5e.
If you need a LOT of bandwidth, you are not using Ethernet. If you have normal needs, 1 Gbps is fine, cheap, and everything works with it. Why spend more?
Had something similar. Firmware updates for my sse4200 NAS have stopped, and its NetBIOS version is badly outdated. I was looking for an inexpensive way to reuse the disks to build another NAS. After looking around a bit, I ended up getting a RAID-capable DAS enclosure for the disks, repurposed an old Chromebox to connect to it via USB, installed Linux, and shared it over Samba. It works well.
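The Samba side is only a few lines in smb.conf, roughly like this (share name, path, and user are made up for illustration):

    [das]
        path = /mnt/das
        read only = no
        valid users = youruser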
I mention the Wiretrustee SATA towards the end of the post; it looks like they'll have a laser-cut acrylic case in two sizes, for either 2.5" or 3.5" drives.
I'm hopeful one of the mini ITX or micro ATX boards goes into production soon; at that point, you could mount the Pi securely into any bog-standard ITX/ATX case and even make use of PSU connectors and the PCIe slot!
I mentioned it in a comment on your original hardware video, but I'll repeat it here: I think another setup worth trying is a 4-drive eSATA enclosure connected to the SATA controller board via a SATA-to-eSATA cable.
Power consumption quickly becomes relevant in terms of cost when you are running it 24/7. I measure about 33 kWh a year per RasPi I run; my undervolted 2014 Intel x86 platform is easily over 15x that.
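Back-of-the-envelope for what that means in money, assuming a tariff of around 0.30 EUR/kWh (plug in your own):

    # Yearly electricity cost at the measured consumption above.
    PRICE_PER_KWH = 0.30      # EUR; assumed tariff, adjust for yours

    raspi_kwh = 33            # measured, per RasPi per year
    x86_kwh = raspi_kwh * 15  # the 2014 Intel box is ~15x that

    print(f"RasPi: ~{raspi_kwh * PRICE_PER_KWH:.2f} EUR/yr")
    print(f"x86:   ~{x86_kwh * PRICE_PER_KWH:.2f} EUR/yr")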
TrueNAS eats a lot of memory for itself, though. Just the web frontend eats 3 GB on my box, and "services" eat about 12-16 GB total (most of which I can't account for).
Since ZFS uses memory for caching, it will perform much better with more memory, but it should work[1] with less.
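(If memory is tight, the ARC can also be capped; on Linux that's the zfs_arc_max module parameter. A sketch, assuming you want an 8 GiB cap:)

    # /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 8 GiB (8 * 2^30 bytes)
    options zfs zfs_arc_max=8589934592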
I just solder directly to the motherboard. There are 3 usable PCIe x1 interfaces on the X220: one at the missing USB 3.0 controller footprint, a second on mPCIe, and a third on ExpressCard. You could also desolder the memory card controller for a fourth PCIe lane. Then you have 3x SATA controllers: one for the HDD, one on the mPCIe socket, and a third on the dock connector. You also get over 10 individual USB 2.0 channels. Before Sandy Bridge, there was a time you could configure PCIe bifurcation and freely join separate lanes, but I don't think anyone figured out how to do it on the X220.
It's a bare PCB in an old 16-port switch case with a couple of drives, stuffed into the bottom of my closet. I have another X230 PCB stuck into a 4337 dock (got those for ~$3) just so I can have a nice power button.