Nvidia Jetson Orin Nano Super [video] (youtube.com)
65 points by ralusek 8 days ago | 51 comments





If anyone is wondering why Nvidia is dominant, it's because this little board can run the same CUDA stuff as the $30,000 H100.

So everyone can play around and develop for it. That's how you get to software dominance, and then you can reap the rewards.
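
For example, the same kernel source runs unchanged across the whole lineup; here's a minimal sketch (assuming numba installed with CUDA support):

    # Minimal sketch of CUDA portability, assuming numba with CUDA support.
    # The same kernel runs unchanged on a Jetson or an H100; only the grid
    # size you launch with would change for performance.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)  # global thread index
        if i < x.shape[0]:
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    saxpy[blocks, threads](np.float32(2.0), x, y, out)  # numba copies host<->device

    assert np.allclose(out, 2.0 * x + y)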


The new Java

java: write once, test everywhere

If it really is the new Java they will surf this wave for 20 years at least

CUDA came out in 2007, so two years to go.

For most of us this is not so attractive.

The 8 GB of shared RAM is all there is for the OS plus models, so the current Jetson AGX Orin 64GB is still more interesting, since it can run a small LLM plus ASR and TTS on one device, albeit with a price tag above 2000€. It also has an Ampere GPU, but at 275 TOPS.

But for vision / robotics stuff this Nano Super is a great price.

You can search for "View Jetson Orin Technical Specifications" on the Jetson Orin page [0] to see the full offering of devices (it's in the middle of the page).

[0] https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...


Was literally about to buy the Jetson Orin Nano for $499 and then this gets released. How awesome.

I've been looking for a replacement for Google Assistant/Nest/Nest Mini that is cloudless and could support things like verbally setting alarms, inquiring about the weather, and random queries.

This looks like a decent fit. Just needs a case, microphone, and speaker. Oh, and one of the newer small LLMs that fit in under 8GB.
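
Something like a 3B model at 4-bit quantization should fit comfortably; a rough sketch with llama-cpp-python (the model file name is just a placeholder):

    # Rough sketch of a local assistant query, assuming llama-cpp-python is
    # installed with CUDA support and a ~2 GB 4-bit GGUF model is downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama-3.2-3b-instruct.Q4_K_M.gguf",  # hypothetical file
        n_ctx=2048,
        n_gpu_layers=-1,  # offload all layers to the Orin's GPU
    )
    out = llm("Q: What's the weather typically like in March?\nA:",
              max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])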


You probably want to wait a few months to a year for this: https://futureproofhomes.net/

They will have a local LLM module within a year too, I think.


Very interesting. I'll keep an eye out. This is one I've been tracking as well:

https://news.ycombinator.com/item?id=40744293


Even with the lower price point it's still a bit of an odd fit. The dev kit is aimed at developers but costs about the same as a 3060 (which will blow it out of the water on AI performance and capacity). Unless this board was already the one you were targeting for non-development usage in your robotics/embedded edge use case, I'm not sure there's a compelling reason to be interested in the announcement.

Edit: I didn't mean to imply it's a bad use case or product, just that it's not a use case 95% of people here are going to care about. It's an odd fit, but if you're the target of that fit, it's great. If you're not, don't get your hopes up about a low-cost AI dev kit from Nvidia; it'll just be another SBC sitting unused in your drawer.


In the world of embedded compute modules, "developer kit" isn't the equivalent of a "developer workstation" with super-high specs.

Instead, it means "We sell this module for use on PCBs, but you want to start developing software and testing before you've made your first PCB. Here's the module plugged into a PCB that provides USB, HDMI and Ethernet connectors. We'll also throw in a power supply and an SD card with a working OS image on it."


I agree 100%; edited with clarification to cause less confusion, since most seemed to take it as me saying the opposite. Thanks for the clear description of what I failed to describe.

Are USB, HDMI, and Ethernet really the ports you want for your embedded system?

Depends what you're doing with it. You can just glue this dev board into your hobby system, or if you're making an interactive kiosk, maybe you want a USB webcam and HDMI screen. If you want to debug, those ports can be pretty handy; you can have a monitor and keyboard on it. If you do your own PCB integration, you can decide what ports you want on your board.

If you want to connect a screen, keyboard and mouse? Or plug in a wifi dongle? Absolutely!

And by the point you don't need them, you can graduate from the dev kit and spin your own PCB without them on, if you like.

The development kit will probably break out other signals too - but if you know what SPI and CSI are, you probably don't need to be told what an embedded dev kit is :)


On an embedded system out in the field? Depends on the use.

On your prototype devkit? Absolutely.


Some people have different views on what counts as an embedded system, but hardware serial, USB, and Ethernet are table stakes for interfacing with most industrial hardware or for robotics use.

Kinda agree with the other replies to you: the edge case you carved out is the target use case. This isn't meant to compete against the 3060; it's meant to be put on robots or other embedded use cases (even as a dev kit). The 3060 has more juice, but it draws 170 W vs 25 W for the Orin Nano (which also has 15 W and 7 W modes), and the Orin is much smaller.

You can't put a 3060 in a smaller-than-large robot

This is what I mean by "unless this board was already the one you were targeting for non-development usage in your robotics/embedded edge use case". If you're already planning on using this board, then sure, you want this dev board update. If you're not in that niche, this announcement probably won't be worth looking into.

(a) Dev kits are often much more expensive than the production hardware.

(b) Why would you even care about this if a desktop gaming gpu would work for you? It’s an embedded inference module.


I meant to highlight, though apparently did so poorly, that 95% of people here fall into (b): it's not aimed at being the general-purpose AI dev kit they'd want it to be. Unfortunately that seems to have been taken as "the product is useless to those building embedded systems because you can buy a 3060" due to my unclear wording.

It appears to be the same hardware as before, even the same firmware/software as before (JetPack 6.1 was already out); they've just lowered the dev kit price and documented a new performance mode that increases the clocks, power consumption, thermals, and compute performance.

You can flash the new firmware onto the previous versions to get the same benefits.
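
Something like this should do it on an already-flashed board (the mode id for the new 25 W profile is a guess here; check what nvpmodel reports on your unit first):

    # Sketch of enabling the higher-power profile, assuming the standard
    # Jetson nvpmodel / jetson_clocks tools that ship with JetPack.
    import subprocess

    subprocess.run(["sudo", "nvpmodel", "-q"], check=True)       # show current power mode
    subprocess.run(["sudo", "nvpmodel", "-m", "2"], check=True)  # hypothetical 25 W mode id
    subprocess.run(["sudo", "jetson_clocks"], check=True)        # pin clocks to max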

The specs on this look identical to the Orin Nano 8GB. Elsewhere Nvidia says the software updates are available to all Orin owners [1], so is this just a special edition released alongside the new JetPack release?

1: https://blogs.nvidia.com/blog/jetson-generative-ai-supercomp...


Looks like just higher clock speeds and faster memory. An upgrade to 16GB would've been cool.

The Orin NX is 16GB. My guess would be they still want some differentiation.

Any guesses on how this compares performance-wise for LLM inference (tokens/sec, ultimately) to something like the new Mac Mini M4 Pro? I'm not sure how to figure that out.
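
Short of finding published numbers, the straightforward way is to time the same quantized model on both machines; a rough harness with llama-cpp-python (the model path is a placeholder):

    # Rough tokens/sec measurement, assuming llama-cpp-python and the same
    # GGUF model file on both machines (path below is a placeholder).
    import time
    from llama_cpp import Llama

    llm = Llama(model_path="model.Q4_K_M.gguf", n_gpu_layers=-1)
    t0 = time.time()
    out = llm("Write a haiku about memory bandwidth.", max_tokens=128)
    elapsed = time.time() - t0
    print(out["usage"]["completion_tokens"] / elapsed, "tokens/sec")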

I appreciated this release including the personal video from the CEO with an oven joke. Although the VRAM is very limited, these specifications compete with Raspberry Pi's platform, and I would expect the Orin Nano Super to outperform it.

What OSes can run on this? I've seen people using Ubuntu and Debian, but it sounds like basically anything that has an ARM64 version? What about BSD (ignoring CUDA)?

The price is interesting. What I don't like about these devices is that they are supported for a few years and after that you are on your own. I have the original Jetson Nano and they stopped providing any updates for it; if you can use it without an internet connection, it will work just fine for years.

Yup, and trying to hack your OG Jetson Nano onto a later Linux is very hard due to proprietary Nvidia firmware/drivers. It's not just a matter of being on your own for OS upgrades; it straight up won't even be able to use the GPU for CUDA. Pretty annoying.

Apparently they differentiate between their regular modules and the dev kits: the Jetson Nano you mentioned is supported until 2027, but its dev kit is EOL.

For the Jetson Orin Nano, the module is supported until 2032, but likely not the dev kit.

Source: https://developer.nvidia.com/embedded/lifecycle


I think the latest Orin series supports UEFI, and there is "Bring Your Own Kernel" support for more recent releases [1], but you still need to apply the patches. Also, the Tegra GPU driver "nvgpu" has been open source in this form for quite some time in the Linux4Tegra tree, but I don't see it in the patch list. So it's still a bit of a mixed bag, I think. But UEFI and 6.x kernel support is at least an improvement...

[1] https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide...


Well, that's how embedded computers are meant to be used, just like everything before the Internet.


What do you guys recommend for a case for this? Rack mountable would be useful, but a base I can sit on a shelf in a rack would work too.

"NVIDIA Ampere architecture with 1024 CUDA cores and 32 tensor cores" from their site. Not sure I understand how many Gigs of gpu memory that is?

That's the number of processing cores. Memory is 8 GB of LPDDR5 at 102 GB/s. Datasheet here: https://nvdam.widen.net/s/zkfqjmtds2/jetson-orin-datasheet-n...

A 50% performance increase seems to come from the memory bandwidth increase, and the other 20% from the extra 10 watts available to boost clocks (15 W to 25 W).
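
Since LLM decoding is mostly memory-bandwidth-bound, the 102 GB/s figure also gives a quick upper bound on generation speed: every token has to stream roughly the whole model through memory once.

    # Back-of-envelope decode-speed estimate: bandwidth-bound generation
    # reads approximately all of the model's weights once per token.
    bandwidth_gb_s = 102   # Orin Nano Super LPDDR5, per the datasheet
    model_gb = 2.0         # assumed: a ~3B model at 4-bit quantization

    print(f"~{bandwidth_gb_s / model_gb:.0f} tokens/sec upper bound")  # ~51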


I really like CUDA, but these are crazy expensive for the specs. It's sad there is no competition.

Am I correct to assume this is the same SoC as the upcoming Switch 2?

I do wonder if he is simply gaming the Turing test now?

Is it really that hard to add a SODIMM slot?

Another new toy in the Jetson series; my wallet says this is going to hurt.

Good response to Apple Silicon eating into dev LLM usage. Because dev adoption is the first step to domination.

This isn't a response at all. Nvidia has been making Jetson boards for longer than Apple has been making their own chips.

Longer than Apple has made desktop chips, yes.

Not everybody wants a huge, noisy, hot, energy-slurping device on their desk.

Not really comparable: Apple Silicon isn't an embeddable device the way the Jetson series is meant to be. And the Jetson will never replace an Apple laptop or even a Mac Mini for regular users.

Sure, but it’s a small desktop device, popular for development


