Honeycomb LX2 ARM Workstation: An Alternative to x86 (solid-run.com)
98 points by ekianjo on Aug 26, 2021 | 101 comments



4x SFP+ 10GbE?

What kind of crazy NAS / router / appliance thing was this board originally made for?

That being said, having access to 4x10 Gb ports is pretty funny for a computer of this strength, and might be the main selling point of this whole setup (some kind of crazy CDN server or something that's heavily I/O bound?)


The LX2160A chip is indeed designed for router/firewall/VPN-type appliances, and this board repurposes it (poorly) as a "workstation".


I had to look at this for an unrelated project, and it's clear that the chipset is for router/firewall use. The end-station features of the chip don't include very basic offloads like TSO.


For anyone else who does not know what TSO stands for, it is TCP segmentation offload.

https://en.m.wikipedia.org/wiki/Large_send_offload


The only TSO that came to my mind was IBM's Time Sharing Option, which is part of z/OS and has been part of their mainframe OSs since the 360.

It was obviously a poor match for the context ;-)


I'm all in for misusing server chips to build workstations, but the feature match is not great here.


While the board is advertising itself as a workstation, wouldn't it be easy to use it as a server?


If you scroll down on the page they sell a server that has two Honeycombs in 1U.


It looks like a good match for a small server, better than a workstation.



The QSFP28 port can be configured as 4x10Gbps, 1x40Gbps, 4x25Gbps, 2x50Gbps, or 1x100Gbps. These are different signalling standards over the same four lanes (10Gbps or 25Gbps per lane).


I'm seeing 4x10G + 1x100G in that link, which is even funnier as it's starting to approach the bandwidth of the RAM lol.


Given that ARM chips have 128-bit NEON + AES instructions at the core level, you can probably push that bandwidth in a VPN application without ever touching RAM.

Imagine a 1x100G WAN with encrypted traffic being routed to each of the 10G links based on a simple application-specific decision. You could probably route that sort of thing without ever consulting main RAM.

Yeah, I'm liking this thing as a server chip, but not so much a workstation. I think there are a lot of applications where a low-power CPU hanging off of high-speed fiber connections could be useful.

But I'm not entirely sure of any application that would benefit from the 16 cores, strangely enough.


The LX2160A has a 50Gbps crypto accelerator and a 100Gbps compression/decompression accelerator built into the network packet processor to help offload some of the processing. The 16 cores are useful for high-speed packet processing in userspace with DPDK.


Yeah, I was thinking that, never mind workstations, this could be a good upgrade from the APU2 as a router platform if you want to go beyond 1GbE. Throw in some PCIe Wi-Fi adapter (AX210 maybe?) and you've got a pretty decent setup.


We have a growing lineup of SystemReady ES certified hardware that will make good OSS upgrades. New platforms based on our LX2162A SOM will be especially good for this application.


please make a beefier SOM that fits the MNT Reform, those would be nice to have in a laptop :)


We are talking with MNT about possibly building around the LX2162a SOM.


What's the difference? The block diagram seems very similar.


The core count and network processing is the same, but the number of SERDES lanes is half. This allowed the SOC's die size to be much smaller. You can fit 4 SOMs in the space of a single CEX7 module.


> You can fit 4 SOMs in the space of a single CEX7 module.

Do they support multi-"socket" setups?


There is no backplane for the CPUs, no. But they support 10 and 25Gbps networking that can be used for high-speed clustering.


Now that you mention that, it's quite an upgrade over my idea of putting 32 Octavo easy-to-interface packages (you need to connect like 5 balls for what I want) along with an Ethernet switch and making a cluster of 32 puny nodes...

The number 32 is important because a stack of boards should look like a Thinking Machines CM-2a (http://www.corestore.org/cm2a.htm) and each CPU would control one LED (or three - Octavo now has a part with three cores, two puny and one even punier, which would end up with only 10 nodes per board).


sounds interesting, great news!


> What this means is that HoneyComb running our tianocore edk2 based firmware will boot and run on most Aarch64 operating systems out of the box.

eDonkey 2000 based firmware?


> eDonkey 2000 based firmware?

Now that's a name I haven't heard since my youth. There is so much of the past web that is long forgotten and has never been experienced by today's youth.


I once wrote a C++ client for a certain site because the official client was written in Delphi and used gigabytes of RAM. It used ed2k hashes and I wrote my own ed2k hash function (it's incredibly simple). That was in 2017. Any more hints would be dangerous ;)
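
(The hash itself is no secret, though. Roughly, as I remember it: MD4 over 9,728,000-byte chunks, then MD4 over the concatenated chunk digests. A quick Python sketch, assuming hashlib's md4 is available -- on newer OpenSSL builds it may need the legacy provider -- and ignoring the two competing conventions for files that are an exact multiple of the chunk size:)

  import hashlib

  ED2K_CHUNK = 9_728_000  # ed2k chunk size in bytes

  def ed2k_hash(path):
      # hashlib's md4 may need OpenSSL's legacy provider on recent systems
      md4 = lambda data=b"": hashlib.new("md4", data)
      chunk_digests = []
      with open(path, "rb") as f:
          while True:
              block = f.read(ED2K_CHUNK)
              if not block:
                  break
              chunk_digests.append(md4(block).digest())
      if not chunk_digests:          # empty file
          return md4().hexdigest()
      if len(chunk_digests) == 1:    # single-chunk file: just that chunk's MD4
          return chunk_digests[0].hex()
      # multi-chunk file: MD4 over the concatenated per-chunk digests
      return md4(b"".join(chunk_digests)).hexdigest()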


Heh. EDK2 (EFI Development Kit II) is a "modern, feature-rich, cross-platform firmware development environment for the UEFI and PI specifications". https://github.com/tianocore/edk2


edk2 != ed2k


I was searching for a good ARM board for tinkering a while back and settled on the Odroid N2+. That has a pretty good trade-off between performance and price. It retails for less than $100 and has 6 cores total: 4 performance A73 cores clocked at 2.4 GHz (compared with the 2 GHz A72s here) and 2 efficiency A53 cores at 2 GHz. It also has good Linux support with Armbian[1] and Arch Linux ARM[2].

Outside of Apple, there are not many modern options. The A73s above are 4 generations old now. There are some ARM laptops with the faster Snapdragon 8CX chip but these still don't come close to the M1.

[1] https://www.armbian.com/odroid-n2/

[2] https://archlinuxarm.org/platforms/armv8/amlogic/odroid-n2


Amlogic boards have great mainline Linux support, but their firmware/uboot setup is full of blobs, can only use an old compiler toolchain, and only works with an old version of U-Boot. I am linking to the Odroid N2 instructions below, but if you look at any of the Amlogic boards it is a similar situation. (Including the beloved Khadas VIM3.)

https://u-boot.readthedocs.io/en/latest/board/amlogic/odroid...


I've had my eye on this for quite some time. With built-in 10G this would probably make a great low-power home server. And I definitely want to support this. Unfortunately, >800€ in the EU is quite a lot to ask.


Indeed. You can get a very reasonable server fully assembled for that. I'm disgusted by x86, but not 800€ disgusted.


The Buy Now link doesn't work for me but this one seems to be what it should be:

https://shop.solid-run.com/product/SRLX216S00D00GE064H09CH/

Specs as noted there:

  * NXP LX2160A Arm® Cortex® A72
  * Dual DDR4 SO-DIMM 3200MT/s 
    (none included; up to 2 x 32GB)
  * 64GB eMMC
  * Commercial Temp. (0° to 70° C)
  * COM Express type 7 module + Mini ITX carrier board
  * Size: 170mm x 170mm


The A72 is okay but is just not setting any records, even if you have 16 of them - what can you get with an X1 or (preferably) N1?


No, it is not setting records, but we felt that it was capable enough to help crack the existing chicken-and-egg problem. We could provide a SystemReady Arm platform that could help bridge the divide and give developers a native Aarch64 platform that wasn't $3000+. HoneyComb has helped bring better Aarch64 support to many OSS projects, getting things ready for better SOCs once they become available.


One of the positives of having lots of non-record-setting cores instead of a few single-threaded monsters is that it helps people learn to write parallel programs.

I wish we could give all software engineers CPUs with 64 Atom (or Cortex-A53) cores and force them to write software that performs well that way.

I am myself guilty of that - many of my build scripts didn't bother to run on more than one core until I got a burner laptop that has horrendous single-thread performance (but 4 threads). Now the big laptop runs them virtually instantaneously.
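
The change itself is usually tiny. A rough sketch of the idea in Python, with hypothetical build steps -- fan the independent steps out across the cores instead of running them one after another:

  import os
  import subprocess
  from concurrent.futures import ThreadPoolExecutor

  # hypothetical, independent build steps; each is its own subprocess,
  # so plain threads are enough to keep every core busy
  steps = [
      ["make", "-C", "libfoo"],
      ["make", "-C", "libbar"],
      ["cargo", "build", "--release"],
  ]

  def run(cmd):
      subprocess.run(cmd, check=True)

  # fan the steps out across the available cores instead of looping serially
  with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
      list(pool.map(run, steps))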


That's a fair point, but the reality is this -- if you work on a large-ish or actually large codebase (think OpenOffice-sized codebases, not your random run-of-the-mill npm package) today and you want to maintain it, publish packages, do work, integrate a CI builder, etc. -- it doesn't really matter. You need RAM, cores, and, importantly, a fast drive. Basically the only things that can fit this bill at sub-$1000 USD are:

- Virtualized Linux on an M1 Mac. (Limited RAM since the M1 caps at 16GB, but viable.)

- Jetson Xavier AGX. (Good for ML and overall strong SoC, but the proprietary bits get in the way e.g. if you want a new kernel version.)

- This thing. (OK cores, but 64GB RAM and M.2)

Hilariously, the newest entry in this list is actually the M1 Mac (both the AGX and Honeycomb date to 2019/2020.)

Unless you're going to start subsidizing boards somehow, you're not going to see Neoverse in a developer workstation at a similar price point until the market either flourishes substantially (months, years from now) and the volume allows lower pricing, or a sugar daddy company somewhere decides to just set money on fire and subsidize sales directly to you. Part of this is the same problem where you have to buy both the CPU and the motherboard together, but at the end of the day it doesn't matter.

For comparison, an Ampere Altra with the entry-level 32-core will cost you about $6,500 USD. https://store.avantek.co.uk/ampere-altra-64bit-arm-workstati...


In the first half of next year, the Jetson AGX Xavier will be superseded by the Jetson AGX Orin, which has 12x Cortex-A78AE cores, at the same price point. Will be quite a nice bump.

At the same time, the SW stack will switch to UEFI for the bootloader, including support for the latest kernels. (but, Jetson Nano/TX1/TX2/TX2 NX will no longer be supported by the new stack)


The extra cores are welcome, but, for a workstation, the eMMC is limiting. And I have emotional scars from Nvidia GPUs in desktop Linux computers, so I'll pass on this one.


You can just use NVMe on those, even boot from it. On mine, I don't even think of the eMMC as existing. :)


Well, you can get an M1 Mac Mini for about the same price.

A 4-core VM running on my M1 MBP builds stuff nearly as fast as a bare-metal 64-core Ampere eMAG. So I'm guessing the M1 would run circles around 16 A72s...


An M1 Mac mini maxes out at 8 GB of RAM, no?


No, 16 GB.


I think a 100 MHz Pentium with sufficient memory can run circles around a swapping M1 ;-)


It seems to be setting price records at $750. Ugh. I mean, I get the reasons, but I'd probably rather build a cluster of 8GB RPi 4s. For $750, it had better be that X1/N1.


This also supports ECC memory...


And doesn't need to boot off of SD cards.


Everything boots off of something. Even netboot needs firmware which is on some kind of flash memory anyway.


Some things are more reliable and fail-safe than others. SD would not rank high on that list, and that's not even talking about the cheap mechanical connector involved.


And doesn't keep important files on SD cards...


So do Ryzens, for much less money per unit of performance.


We discussed a couple of options in https://news.ycombinator.com/item?id=28299204

Unfortunately, having 4 4-core nodes will not make Slack or Chrome any faster.

Unless you run each on a separate node and pipe the X output to your node.


True, but I'm not quite sure why I'd be running either of those on this.


It’s a workstation after all. You run workstationish stuff.


Not too familiar with ARM CPU names, but apparently this is a 16 nm chip from 2016. Hrm.

The product itself doesn't appear to be particularly new either; their YouTube video is from March 31, 2020 (https://www.youtube.com/watch?v=lxdRSCQfhyw), and this blog article is dated February 24, 2021: https://www.solid-run.com/blog/articles/honeycomb-lx2-server... and this "early access sale" (https://www.solid-run.com/news/early-access-limited-offer-ho...) is from June 7, 2019. (You have to go to pages like https://www.solid-run.com/blog/page/3/ to find the dates.)

I don't think ARM chips will take off on "real computers" until you can buy CPUs and motherboards independently. And if that is difficult for technical reasons (e.g., no standardized BIOS/hardware discoverability -- I'm just extrapolating from ARM-based dev boards I have used, maybe this exists on computers like this) then I think it's going to take a long time -- perhaps RISC-V-based "real computers" will appear before that.


Socketed Arm SOCs are very unlikely because there are so many vendors making them. This works on the x86_64 side because you basically have Intel and AMD, and they each have their own manufacturing ecosystem and target sockets. That is why we chose CEX7, since it is a standard that allows the motherboard to be customized to the use case.


> ...until you can buy CPUs and motherboards independently

Ampere Altra is socketed; too bad the motherboard alone is ~$5,000.

> standardized BIOS/hardware discoverability

This is solved with SBSA/ServerReady/SystemReady.


Honeycomb in fact advertises (for good reason) that they do have sensible platform firmware in place.


I too turned this offering down after considering that I don't want to lug around 16 A72's. It is certainly an interesting board, and if you are a hobbyist in need of an ARM server I see no better options, but I think we will see much more interesting boards in the future.


You may be interested in our new platforms coming out based on our newly announced LX2162A, https://www.solid-run.com/embedded-networking/nxp-lx2160a-fa...


Looks like it is still A72s. Any chance to upgrade the core design, or switch to a newer process node? I am comparing against Ryzen Zen 3 chips FWIW.


We don't make the SOCs, just implement them. The only other available SOC that is N1-based is the Ampere Altra, and you will not be able to provide a <$1000 system with that. Also, the SOC itself is almost as large as the entire CEX7 module. This system was never meant to outperform the latest CPUs on the market. It is a developer tool to fill a niche and help advance the SystemReady ecosystem. We liked the fit because, once faster and more desktop-ready SOCs are available, this doesn't become e-waste but can easily be repurposed into a micro-server or other network development platform.


Does your comparison include sales volume and power draw?


> I am comparing against Ryzen Zen 3 chips FWIW.

I find their ISA offensive ;-)


> but I think we will see much more interesting boards in the future.

That's what I have been hearing since 2012 or so.

Not going to say it won't happen, but I am not holding my breath.


Until we start using the less interesting boards to signal to the market that we want more interesting boards, we won't have more interesting boards.


Hmm, it says workstation, but doesn't seem to include a GPU. It does have a PCIe slot, though. What's the chance that a modern GPU would work with it?

I suppose Nvidia's binary blobs wouldn't work, but perhaps the AMD stuff might? Or perhaps the upcoming dedicated Intel GPUs?


Currently any GPU that has an in-kernel OSS driver will work, so mainly ones using the amdgpu or nouveau driver. We are working with Nvidia on some issues regarding their binary Aarch64 Linux drivers. That is kind of the point of HC: if the platform didn't exist, then Nvidia would never be looking at these issues.


I'm running OpenBSD on it with an amdgpu WX2100 card, works nicely.


Should work fine. They recommend AMD GPUs.


Incredibly impressive advance in what ARM systems are purchasable as general-purpose computers.

Unbelievably, fantastically expensive & middling performance versus what you'd get paying the same for x86. ARM has been a long series of promises & capabilities that have never been generally purchasable or generally usable, and radically overpriced every single time it competes outside of its protected, hard-to-access niches.

At least this particular chip is well engineered for the price. I can't think of a single other ARM chip that either a) comes with good, easy-to-get documentation or b) you or I could go buy off a major part supplier easily. NXP (née Freescale) continues to be the only ARM vendor trying to play by the respectable x86 game: selling your chips fairly openly, documenting your chips fairly well.


This was at +3, now at -1. Anyone want to comment on why you think this is off, or why you are downvoting?

Look, the promise has been in the air for over a decade[1] (ed: not the best link; I was trying to find a decent ARM / AMD A1100 Cortex-A57 intro link). It's never gone anywhere. No one gets to use ARM except massive titans of the world. We get some old, old, old hand-me-downs every now and then, like RPis or this here cluster of power-inefficient old A72s. This is dregs. This is shit. These are insulting, pathetic offerings, and it's literally all we have. This is the only thing ARM has ever made available to us, generally. How anyone can be OK or happy with this is madness.

[1] https://www.semiaccurate.com/2011/06/22/amd-and-arm-join-for...


You said that ARM has been promising but not delivering. But now Apple is selling the M1, which runs Intel software faster than the previous Intel Macs did, even though it had to be emulated. So it's pretty clear that ARM can provide high performance.

Though the RPi shows that ARM can also deliver bad performance…


That doesn't invalidate claims about ARM not delivering - because a big part of what hasn't been delivered is that booting any ARM board is still a shitshow (no, a random U-Boot build with FDT files is not a proper platform).


"Any ARM board" is a stretch. Several Rockchip boards can boot with mainline U-Boot and no firmware blobs.

And this board, the HoneyComb, has an actual UEFI implementation.

If anything the SoC vendors are getting in the way, not ARM itself. They just license CPU and GPU designs.


ARM took its sweet, sweet time delivering SBSA and its follow-ons, including keeping the specs secret for a time. To call it anything other than a self-own would be reaching.

Because yes, ARM also stewards the UEFI efforts on ARM.

It's why I laud Honeycomb and similar companies that actually take care to implement sensible firmware.


I agree with you both very strongly. :) I think the SoC vendors have been one of the primary obstructions to ARM availability. But I also think ARM has failed to take responsibility for ecosystem success and has let their market position rot in place.


The M1 is an example of ARM only being available to a very rarefied, select user: Apple, the biggest tech company on the planet by far.

Qualcomm's 8cx Gen 2 is a possibly interesting consumer offering I might cite if I wanted to talk about ARM inroads. But again, it's much harder to acquire & use than x86, with no public documentation. And the core is from December 2018 (Kryo 495), which feels like a typical ARM pace: very slow to get modern chips into consumers' hands, except in bleedingly expensive flagship phones.


I'm still waiting on mine to arrive :(

I may have to cancel soon, as it's been past the suggested 6-week ship delay.


> SystemReady ES

So maybe it will work out of the box. That said, the ARM ecosystem still seems immature on the desktop.


Typing this right now on a HoneyComb running Fedora 34: RX 560 GPU, NVMe, 32GB of DDR4-3000 XMP memory. Are there rough edges with Aarch64 on the desktop? Sure, but without platforms like HC this will never change. There are lots of core OSS devs using HC to help solve these problems. Here is a video presented at the last Linaro Connect: https://www.youtube.com/watch?v=tloLT4EPSpo


Thank you for your work and comments.


What is the easiest way to get this into an enclosure?


It is a mini-ITX motherboard, so you can use any off-the-shelf case. The community has done all sorts of interesting builds.


Would you be able to run M1 macOS on this?


I very much doubt it. Apple's M1 SOC is not a standards-based design, and the OS is very specialized to their hardware. As far as I know, they don't even support external PCIe GPUs on the platform.


Can you run it with a working framebuffer and peripherals? Probably not. You'd have to ask Corellium for their secret sauce Apple SoC virtualization.

Can you run it in some capacity? Yes. See https://blogs.blackberry.com/en/2021/05/strong-arming-with-m... and https://news.ycombinator.com/item?id=25064593

With additional work, it should be "possible" to do this on an ARM device and use KVM to get virtualized performance, with unsupported instructions/features trapping back to the hypervisor and being emulated in software.

All that said, it will probably be a long time (if ever) that you will be able to boot a virtualized ARM MacOS up to a graphical desktop.


Why would you buy this when you can get a very comparable RPi 4 for 10% of the price?


The Raspberry Pi 4 is barely a real computer, and hardly comparable. The Honeycomb has a faster interconnect to practically everything on the board, so it can handle a video card, serious networking and storage performance, etc.


A Raspberry Pi is not even going to touch this in terms of performance. You do realize that a Pi is underpowered in every sense of the word?


But...but... the Pi is only $10.


"If you were plowing a field, which would you rather use, two strong oxen or 1024 chickens?" – Seymour Cray


But there are no oxen in this comparison. The RPi is 4 chickens and the Honeycomb is 16 chickens.


The $10 Pi is not the same kind of bird as the Honeycomb, and the RPi 4 lacks expandable memory, storage, and pretty much everything else.

It has a GPU, at least, which can be handy.


At this point I consider it job security that RPi users can't look beyond their $10 kit boards.


Could you point me to the 10 Gb ports on the Pi 4?


The Pi 4 can hardly handle 1Gbps speeds in the first place.


To be fair, the Pi can successfully handle 4Gbps of Ethernet traffic, as Jeff Geerling demonstrated.


The LX2160A can handle up to 130Gbps of network traffic. You also have the benefit that the quad SFP+ ports can be configured as a network switch completely offloaded from the CPU. That is what this SOC was designed for, after all. :)


The Honeycomb is older than the RPi 4, so it made some sense at the time. It should still be faster for builds because of the 16 cores, and the Honeycomb has far more I/O for people who need that.



