Hacker News
Turing Pi V2 is here (turingpi.com)
216 points by rbanffy on Aug 25, 2021 | 92 comments



The fine folks at Pine64 have a similar cluster board, but it uses their own little modules (smaller and cheaper).

The cluster https://www.pine64.org/clusterboard/

The compute node: https://pine64.com/product/sopine-a64-compute-module/

The AI node: https://pine64.com/product/soedge-ai-neural-module/


Note that the Turing Pi v1 has 7 slots for Pi CM/CM3/CM3+ modules, but it has been in short supply as each batch they made has sold out pretty quickly.


The use of a non-CM4 connector is interesting. Do you know why that is?

Edit: Oh, I see, they are using a daughtercard method to connect the CM4 form factor. Probably nicer to keep everything upright and packed parallel.

Edit: Also, thank you for your K8s book, I'm an avid user although I need to recheck Leanpub to see if there are any recent updates.


CM3 uses a laptop-RAM-style SO-DIMM form factor and connector. CM4 changed to a different form factor and connector.


The daughtercard also allows them to adapt to a Jetson Nano form factor, which makes for an interesting hybrid board.


CM4 requires both boards to be stacked, so it doesn't work for something like this. IIRC there are also problems with the connector layout assuming zero tolerance.


Jeff Geerling! I'm excited for the video that I know is forthcoming. I haven't even looked at Turing Pi's official release because I want to hear your take on it :)


I'm quite interested in this but there seems to be very little explanation. I found the wiki has more, but I'd really like to see a good review of the cluster in use, and whether it is useful as more than a local self-learning system. https://wiki.pine64.org/index.php/SOPINE

https://wiki.pine64.org/wiki/SOEdge

https://wiki.pine64.org/wiki/Clusterboard


I have an obsessive love for Pine64 but man it's hard to order stuff. Always out of stock.


I have never found the PINE64 software support to be that great. Their hardware is nice, but they never match up the software enough.


They are only 2GB RAM modules though.


Yes. But almost twice as many ;-)

Fits well in an education setting, when you manage a cluster in order to manage a cluster.


I know it’s a HN cliché to comment on this, but it’s also true: I went several links deep, then looked at the homepage and the FAQ, and still have no idea what this product is or who it’s for.


Short version is that it is a custom board into which you can plug multiple RPi compute modules (and now some Jetson modules) to create a miniature version of a blade server system; this board is the backplane of that compute module blade system. Use it to create your own edge cluster system, I guess. It does not seem particularly useful beyond being a neat curiosity when you put RPi CMs into it, but as a GPU/CUDA node filled with Jetson modules there is some interesting possibility for people looking for a cheap local cluster for training ML models.


4x Jetson Nano would cost $240, the Turing board will probably cost around $100 (it looks like they haven't decided yet), and you get 1.88 TFLOPS; or you can add 50 bucks, get a GTX 1060 with 4.4 TFLOPS, and play games on it too.
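For what it's worth, the arithmetic works out as a cost-per-TFLOPS comparison. A quick sketch in Python, treating the ~$100 board price and the reading of "add 50 bucks" as $50 on top of the cluster total as assumptions:

```python
# Back-of-the-envelope cost per TFLOPS from the figures quoted above
# (all assumed: 4x Jetson Nano at $240 total, ~$100 for the board,
# +$50 for a used GTX 1060).
cluster_cost = 240 + 100          # 4 Nanos + Turing board
cluster_tflops = 1.88             # ~0.47 FP16 TFLOPS per Nano, x4
gpu_cost = cluster_cost + 50      # reading "add 50 bucks" as $50 extra
gpu_tflops = 4.4

print(round(cluster_cost / cluster_tflops))  # ~181 $/TFLOPS
print(round(gpu_cost / gpu_tflops))          # ~89 $/TFLOPS
```

So on raw throughput per dollar the GPU wins by roughly 2x, before counting the host PC it needs.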


Not sure that's quite a fair comparison, because you'll need quite a bit more hardware to use the GTX 1060, and I think that the Turing board + Jetson setup would be all-inclusive (except a power supply and chassis, I suppose)?

I could be wrong about that though.


But can you do a 4x/cluster SYN flood with a GTX 1060?


And "RPi CMs" are Raspberry Pi Compute Modules, so apparently these: https://www.raspberrypi.org/products/compute-module-4/?varia...

Ironically, the adapter images that you use to plug in the RPis (which might give you a clue) don't currently load on the Turing Pi homepage.


Agreed save for the caveat that a 4 RPi compute system could actually do quite a bit of Edge ML. Even a single RPi is enough for >15fps image recog.


Their two use cases are edge infra — horizontally scaled server applications on a power and cabinet space budget — or as a workstation for workflows that can benefit from distributed compute power.

I could imagine the latter might be handy if you’re doing CAD with rendering in the Amazon rainforest and don’t have 5A of power for x86_64 + GPU. Maybe.

It definitely seems like a solution in search of a problem. Happy to be proven wrong though.

See “Use Cases”, here: https://turingpi.com/turing-pi-2-announcement/


There's a company here in Ireland, Cainthus, that does workplace wellbeing for dairy cows. It does so by continuously analysing video feeds and detecting behaviors that could indicate stress or other environmental factors that make the cows unhappy. Management could be done by the RPi while the inference could run on one or more Jetson boards. These little machines are very friendly for embedded work.


I think I'll wait for (and enjoy!) the obligatory Jeff Geerling[1] video[2] on it.

--

[1] https://news.ycombinator.com/user?id=geerlingguy

[2] https://www.youtube.com/c/jeffgeerling


Oh, I didn't know he had a YouTube channel. I knew him through his excellent Ansible modules.


I'm also here on a daily basis ;)

I'm hoping to get my hands on a board soon... I think they're still brushing up the prototype though, trying to get it to a more final production state.


Excellent. Your videos on the various Pi clusters have been fantastic. I'm looking forward to your review of this new updated board.


I have the earlier 7 node version which runs Kubernetes, following Jeff Geerling's guide (https://www.youtube.com/c/JeffGeerling).

Actually I wish they hadn't reduced the number of slots to 4, because part of the "fun" is dealing with the fact that with 7 nodes, using ssh and individual node management is no way to manage a cluster, so you're forced to treat it as a real cluster. I feel with 4, I might be tempted to individually manage each node. But I also understand why changes in the Pi Compute Module 4 made this necessary. The CM4 is physically much larger than the CM3.
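As an illustration of that discipline (hostnames and the helper are entirely hypothetical), the first step past per-node ssh is usually a script that fans one command out to all seven nodes at once:

```python
# Hypothetical sketch: fan a command out to every node concurrently
# instead of ssh'ing into each one -- the kind of tooling a 7-node
# cluster forces on you.
import subprocess
from concurrent.futures import ThreadPoolExecutor

NODES = [f"node{i}.local" for i in range(1, 8)]  # assumed hostnames

def run_everywhere(cmd: str) -> dict[str, str]:
    """Run cmd on every node over ssh, returning {host: stdout}."""
    def run(host: str):
        out = subprocess.run(["ssh", host, cmd],
                             capture_output=True, text=True, timeout=30)
        return host, out.stdout.strip()
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        return dict(pool.map(run, NODES))

# run_everywhere("uptime")
```

From there it's a short hop to inventory-driven tools like Ansible, which is what most people end up on.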

Edit: Actually my real wish is for a compute module that has more I/O channels. I would love to build a hypercube-style supercomputer (like the Meiko Computing Surface), but these require 5+ high-speed I/O interconnects per node to build, say, a 32+ node cluster. I wonder if PCIe offers a solution?
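To make the link count concrete: in an n-dimensional hypercube, each node connects to the nodes whose IDs differ from its own in exactly one bit, so a 32-node (2^5) machine needs 5 point-to-point links per node. A quick sketch:

```python
# Hypercube topology: node i's neighbors are i with one bit flipped.
# A 2^5 = 32 node cluster therefore needs 5 interconnects per node.
def hypercube_neighbors(node: int, dims: int) -> list[int]:
    return [node ^ (1 << d) for d in range(dims)]

print(hypercube_neighbors(0, 5))    # [1, 2, 4, 8, 16]
print(hypercube_neighbors(21, 5))   # [20, 23, 17, 29, 5]
```

The payoff is that any node reaches any other in at most 5 hops, which is why the topology needs those 5 physical links per node rather than one uplink to a switch.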


I too would be interested in playing around with more processor to processor interconnects. Just for fun I built a 16 way SAMD21 board that used the serial interconnects to make a hypercube arrangement and it was very cool to play with.

It would be possible to build an interconnect over PCIe, but of course it might just be better to use a 10g ethernet PCIe interface chip for each node and a local to PCB network.


They look to be great devices for edge compute. I can slap a Jetson in one slot and Pis in the other three. Cheap, easy to fix, no expensive support contracts. I'll be looking at throwing them out into rural locations/farms. I can get x86 NUCs but tbh they lack the customisability this has. This is awesome for industrial applications.


Doesn't apply here. From the homepage:

----------

What can I do with Turing Pi?

Home server (homelab) and cloud apps hosting

Learn Kubernetes, Docker Swarm, Serverless, Microservices on bare metal

Cloud-native apps testing environment

Learn concepts of distributed Machine Learning apps

Prototype and learn cluster applications, parallel computing, and distributed computing concepts

Host K8S, K3S, Minecraft, Plex, Owncloud, Nextcloud, Seafile, Minio, Tensorflow


No, I saw that, I just don’t understand it. I can do all of that with a PC. Is this a PC? A more powerful Raspberry Pi? What does the ability to “learn concepts” even mean? I learn concepts from books, what does the hardware do?


This board is a very convenient way (maybe the most convenient one I've seen) to set up a bare-metal cluster of computers. Not just multiple cores, not just multiple VMs: four entirely separate ARM computers communicating over a real hardware network. One alternative to boards like this is to connect multiple SBCs together, with all the wiring, and also some mechanical support. Another (more powerful) alternative is to install some kind of server rack at home. More expensive, too. Using multiple virtual machines is also not quite the same.

What do people use it for? Mostly to learn how to deal with problems that arise from managing a cluster and running software on it. Can you build a website that tolerates getting one of the nodes or hard drives turned off?

Some people use such solutions for productive things, like a Home Server, but a store-bought NAS or a single PC is usually more performant. A PI cluster might be less power hungry in some scenarios.

Some people use them as build/test platforms for code that should run on ARM architectures. Others have used them to host a website from their internet connection (I know...).

Some people just have fun tinkering with such things....


Don't worry, it's not just you. I learn concepts by building and tinkering (and reading specifications), so you'd think I'm a target market. But when I wanted to get some hands on experience with a cluster file system, for a job, I spun up a cluster of 5 vms on... my normal computer.

4 seems like a very useless number to me. 4 raspis is more expensive and less useful than a used dual Xeon on eBay. I could imagine maybe there's a use for something with 16 slots? Or at least 8? But I don't get these cluster boards (or, for that matter, storage enclosures!) which presume I can do something fundamentally different with 4 small computers than with 1.


The original Turing Pi had 7 slots (wish it had even more!). I do feel that was better, because it really forces you to manage it as a cluster.

Spinning up VMs is sort of fine, but they don't have quite the performance or management characteristics of a real cluster. The network is slow, the nodes individually are not very powerful, you have to work out how to image each physical machine, nodes break or have I/O errors, ...


> maybe there's a use for something with 16 slots? or at least 8?

You can always connect 2 or 4 of these together.

But I understand what you mean. A project I want to build one day, when I have the time and learn ethernet interfacing through PCB's is to build a single board cluster of Octavo SoM modules. They are individually inexpensive and it'd be relatively easy to build a board with a dozen of them connected to a switch chip.


Yeah exactly, my first thought is that a normal multicore PC is going to be not just more powerful, but more power efficient and cost efficient. It's a fun idea but I wouldn't be interested unless they publish some comparisons.

Basically everything here can be done on a single multicore computer (which is already a distributed system in many respects):

https://turingpi.com/12-amazing-raspberry-pi-cluster-use-cas...


more powerful in most cases - yes. distributed - no. learning about doing HA/Scale out via distributed systems is a really valuable skill, and projects like these make it sooo much more real, beyond even just basic networking.


I wonder if this video helps: https://www.youtube.com/watch?v=8zXG4ySy1m8 Jeff Geerling "Why would you build a Raspberry Pi Cluster?"


That says what it is for, but not what it is.


Ironically I thought just the opposite.


Raspberry Pi has a compute module version: CM4, which is basically a pared down RPi4 with almost no IO options. This is a board for CM4 (and, apparently, NVidia Jetson) that lets you power, network and communicate with several CM4 boards in a cluster setup.


But... why? Every other thing doesn't work on ARM, so what's the point of a rpi cluster?


It's kind of this:

https://blog.fosketts.net/2012/02/21/cubix-ers-blade-server-...

but for Raspberry Pi's and Nvidia Jetsons.


Same here, their website should have a one-line description of what this board is and is for. Though I did find a short answer on potential use cases in the FAQ on one of the pages (I think it's the Turing Pi V2 product page?).


This could power a portable audio studio. Use the CMs for different VST instruments and sound effects. Zynthian is just using a single Raspberry Pi, but 6 of these beasts would make up a complete audio studio with multi-channel recording and real-time effects and synths. Zynthian provides accurate emulations of classic instruments: grand piano, Rhodes, Wurlitzer, pipe organ, Hammond organ, combo organ, Minimoog, DX-7, Oberheim OB-X, JX-10... In the 80s I would have given a limb for a machine like that. Put it into a 19-inch case, slap a large display and 16 knobs and sliders on the front panel, give it 8 line-ins and MIDI connectivity, and you have the ultimate audio studio in a box. Sell it for 1000 bucks and it will sell better than sliced bread.

https://zynthian.org/


As a very happy Zynthian user: YES TO THIS!

I will get a Turing Pi ASAP to play with just this.

It should be noted however that even the bare bones Zynthian system is capable of doing a LOT of voices. Plus, you don't have to go with the original Zynthian hardware - I have a small stack of devices from Audiophonics (I-SABRE RaspTouch - [1]) that all run ZynthianOS perfectly well, and it is a very nice and easy way to add polyphony/DSP power to the setup.

EDIT: to note, this is just a perfect example of how open source projects inspire innovation - the ZynthianOS is free and amazing, and the Zynthian project itself has its own hardware designed for the purpose - but it can run on other people's Pi-based hardware as well, and there is such a wide plethora of audiophile devices out there to pick and choose from. Having the Turing Pi architecture available, I can imagine it's only a matter of a few months before we see exactly that system - multi-core/expander-type synth modules - appear on the market, making ZynthianOS even more powerful.

[1] - https://www.audiophonics.fr/en/network-audio-players-rasptou...


Why would I not just get a Nord Stage 3 instead?


Because the Nord Stage 3 won't allow you to easily add your own code to the device for experimenting with new synthesis techniques?


Korg Minilogue and its descendants allow uploading of user oscillators, way cheaper at ~500. Plenty of YT tutorials out there.

(edit) one of the first i found: https://www.youtube.com/watch?v=ouGBnYXUT40


The Nord costs about 10 times what this does.


Because a Nord is three grand?


Wow, this thing is pretty neat!


Reminds me of the good old “imagine a beowulf cluster of these” jokes.

For those who don’t remember, the origin of the joke came about shortly after the first beowulf clusters were announced - thereafter every time a new computer was announced on Slashdot, somebody would say "Imagine a Beowulf cluster of these".


Don't forget that one of the milestones in Beowulf clustering was the Stone Soup cluster, and the whole Beowulf thing was heavily about "we don't have money for a supercomputer, so we hacked something together from random junk".


My jam was OpenMosix rather, but same vibe: throw whatever you have into the cluster and be happy with the inefficiency, because those resources would sit unused otherwise!


Like Google did in the early days.


I thought early google essentially ran on a university cluster?


Early early Google seems to have used scavenged machines from the university as soon as it outgrew what was essentially a PhD project? I recall a photo of a random assortment of Sun SPARCstations and the like.

Much later was the infamous cardboard cluster.


So, I have clearly not been "paying attention in class".

This is some sort of board to connect several Pi's, and make a "cluster"?

What are the advantages to simply connecting them via my LAN, except cable management?


The ability to take Jetson cards is actually a really strong point in its favor IMO. Raspberry Pis by themselves aren't incredibly interesting in clusters except for pedagogical reasons, but with some Nvidia cards running whatever on CUDA, maybe with some RPis mixed in to do management/support tasks, and the built-in BMC, this could be pretty sweet for the right task.


I'm not sure these NVidia cards are very powerful. One decent GPU in a PC may blow several clusters of these out of the water. I haven't checked, though.


In raw performance, probably. The benefit these have (at least purportedly) is they're very energy efficient, consuming little power (and generating little heat) for comparatively large throughput.

So I can imagine someone wanting a few of these on a desk, running inference on some models or something, maybe as a small back-end for a hobby project. It may still be more power efficient to just use regular GPUs, but I suspect these win out because of the tight coupling between "CUDA cores" and the CPU.
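A rough sanity check on the efficiency claim, with all figures assumed (Nano in its 10 W power mode with ~0.47 FP16 TFLOPS, ~120 W board power for a GTX 1060):

```python
# Perf-per-watt comparison (TDP and TFLOPS figures are assumptions,
# FP16 for the Nanos, FP32 for the GTX 1060).
nano_tflops, nano_watts = 0.47, 10    # per Jetson Nano module
gtx_tflops, gtx_watts = 4.4, 120      # GTX 1060

print(round(4 * nano_tflops / (4 * nano_watts), 3))  # TFLOPS/W, 4 Nanos
print(round(gtx_tflops / gtx_watts, 3))              # TFLOPS/W, the GPU
```

Under those assumptions the Nanos do come out ahead per watt (roughly 0.047 vs 0.037 TFLOPS/W), consistent with the comment above, though not by a dramatic margin.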

Now, is that worth spending a bunch (many hundreds) of dollars on a carrier board and these Jetson modules? For me, no, but I at least see why it may appeal to some people.


Isn't cable management a pretty inconvenient thing? This also includes a switch. You need plenty of cables to replace this.


Not trying to minimize cable management, just trying to see if I'm missing anything here or not :)


I guess it is. But I suspect the price (as is usually the case with "cool" RPi things) isn't going to look like that of a cable management solution. INB4: $200


The entire purpose of a PCB is cable management ;)


Ha, that's a good one... also somewhat true! By extension, ASICs really are just about cleaner PCB layout.

(Yes, yes, there are non-aesthetic physical, electrical, designed, and parasitic effects of cables vs PCBs and vice-versa. Spoil-sport.)


> ASICs really are just about cleaner PCB layout

ASICs are just very small PCBs with all the discrete components etched on the same material ;-)


Yes, exactly, thus tidying ('mother') PCB layout in the same way that a PCB tidies all the cables into a small arrangement with all discrete components fixed in the same plane.


These connect Pi Compute Modules, which are distinct from a regular Pi in a few ways (eg they don't function without a host board of some sort, so they don't have certain things on board like network connectors, GPIO etc), but putting that aside, you'd get to a reasonably similar place if you hooked up some regular Pis; it's simply more wires with the regular ones.


I think the biggest differentiator here is direct access to the PCIe bus and SATA that doesn't go via USB - that's something you can't get on a normal Rpi.


It's not just the networking. Which would be awkward enough. It's also power and IO/Storage.

Just take a look at the PI Clusters people have built, the volume is a few times that of the boards alone.

Also, the CM4 is a bit cheaper than a comparable "complete" SBC, though I don't know if you'd come out ahead with the price of the board.


Pretty cool project, but on the other hand, if it ends up costing 200usd this is gonna be pretty expensive. And tbh, what is the point of doing raspberry pi clusters in general?

NUCs/USFF PCs are more powerful, cheaper, and easier to upgrade.


One of the points is to have a cluster of separate hardware computers communicating over a hardware network. Not for performance reasons, but rather to learn to deal with the limitations.

One NUC is still one NUC. There may even be workloads where 4 CM4 modules (or even Jetsons) beat a modest NUC. Not sure where, though.


NUCs are surprisingly expensive. This plus 4x Pi CM4s is about 300 USD (if your pricing is right - might be a little high). That's still under a NUC.
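The quoted total holds up, granting the assumptions (rumoured ~$100 board, a mid-range CM4 at ~$50; the cheapest CM4 Lite launched lower than that):

```python
# Quoted total for board + 4 compute modules, with assumed unit prices.
board = 100      # rumoured Turing Pi 2 price (not yet announced)
cm4 = 50         # assumed mid-range CM4 price
print(board + 4 * cm4)  # 300
```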


Ok, new NUCs might be a bit expensive. But I can get a Lenovo Tiny M72 or something like that, with a Core i3, 4GB of RAM, and a 320GB HDD for <70 USD. I would assume 4 of those are going to be similar in size to the Turing Pi build, much more powerful, and consume just a bit more power (I have a few similar machines; they consume something like 10W unless under heavy load).


I feel that "expensive" is subjective in this context. A full-kit NUC is expensive, yes. However, a barebones NUC is cheaper; I see a few of them in the $300 range with various i3 and i5 options. I saw a very cheap barebones NUC in the $200 range last month, though it used a low-end CPU, an i3 or Celeron.


Yeah, NUCs really aren't that great. Minimal GPIO, if any. No mPCIe slots... And they are expensive.


The mPCIe connector is mostly dead, it has been largely replaced by M.2 and NUCs have at least one M.2 connector.


I hope they keep the 20-pin ATX power connector. The latest Turing Pi V1 boards are missing it, and one must have an ATX-compatible PSU with the 4-pin motherboard connector instead. Most ATX PSUs have this, but not all 1U cases come with such PSUs. Supermicro's CSE-512 doesn't have one, and it's a shame that the V1 is not compatible with it, because they're the cheapest 1U Supermicro cases on eBay. No one is buying them, but had this point of compatibility been preserved, they could have gone to better use instead of the scrap heap.


This seems like a good alternative for the people in yesterday's thread who don't want to run old rackmount gear for a homelab. https://news.ycombinator.com/item?id=28261768

Part of my reason for setting up a home lab was for cluster experimentation, so the suggestions of "just get a Mini-ITX computer" weren't a good fit. Yes, I could set up multiple VMs there, but I wanted multiple chassis.


It is not here, it is not available and the price is unknown.

So it is unobtainium.


And given the chip shortage it's going to be out of stock pretty soon after launch.


Too bad it’s impossible to find RPi CM3 modules in stock for my Turing Pi V1 that has been sitting in its box for a couple months now! :(


Are there any details on the BMC / switch?

If the BMC is something good (that supports standards like Redfish / etc) and the managed switch is exposed using something standard, I think this could make a really interesting dev board for someone who wants to learn about bare metal "clouds".


Note, the page says "4x 1-gigabit ports are allocated for each computing node" which is incorrect if there are seven total ports as specified elsewhere. It should instead say something like "4x 1-gigabit ports are allocated, one for each computing node".


How is the storage accessed by the compute nodes? Do only some nodes get direct access and the rest need to use nfs or something?


What I'd like to see is faster node-to-node interconnects. Gigabit ethernet is very outdated.


Faster and more of them. Unfortunately they're limited by what the CM4 exposes, which is gigabit ethernet, a single lane of PCIe gen 2, and a bunch of GPIO pins.


It's unfortunate they dropped to only 4 slots per board, too. The value goes up with each slot.


It doesn't seem to say, but do RPi CM3 and CM3+ boards work with this new one?


Currently, no. Only the V1 supports those. It would be interesting if a hybrid solution was available.


What is this? I have no idea.



