Nvidia announces Jetson Nano 2GB, a single board computer (medium.com/ageitgey)
284 points by lentinjoseph on Oct 5, 2020 | 201 comments


I have a Jetson nano. It's a nice thing for what it is, and the price.

But, the software misses the mark by a lot. It's still based on Ubuntu 18.04.

Want python later than 3.6? Not in the box. A lot of python modules need compiling, and the cpu is what you'd expect. Good for what it is, bad for compiling large numbers of packages.

They run a fancy modern desktop based on the Ubuntu one. Sure, it's Nvidia, gotta be flashy. But that eats a quarter of the memory in the device and makes the SOC run crazy hot all the time.

These aren't insurmountable issues, they just left a bad taste in my mouth. In theory I only have to do setup changes once, but it's still a poor experience


> Want python later than 3.6? Not in the box.

The more I use linux (and I've been using it for almost 15 years), the more strongly I believe that using what the Linux distros provide as a development toolchain is an antipattern. Use separate toolchains with SDKs separate from your /usr instead.

> Good for what it is, bad for compiling large numbers of packages.

at worst you can

    docker run -it aarch64/ubuntu 
on your desktop and compile your stuff there.
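Roughly, assuming an x86 desktop with Docker (a sketch; the qemu-user-static registration step and the arm64v8/ubuntu image name are my additions, nothing Jetson-specific):

    # one-time: register qemu binfmt handlers so aarch64 binaries can run on an x86 host
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # then compile inside an arm64 Ubuntu 18.04 container
    # (arm64v8/ubuntu is the current name of the old aarch64/ubuntu image;
    #  'make' stands in for whatever your build actually is)
    docker run -it --rm -v "$PWD":/src -w /src arm64v8/ubuntu:18.04 \
        bash -c "apt-get update && apt-get install -y build-essential && make"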


> The more I use linux (and I've been using it for almost 15 years), the more strongly I believe that using what the Linux distros provide as a development toolchain is an antipattern.

Your assertion makes no sense. Let me explain why.

You've adopted a LTS release which was made public 2 years ago, was already a couple of years in the making, and is aimed at establishing a solid and reliable base system that can be targeted by the whole world with confidence.

And knowing that, your idea is to bolt on custom tooling that isn't installed anywhere by default, and make it your own infrastructure?

Unless you're planning on managing that part of the infrastructure for your customers to use, your race to catch up with the latest and greatest makes zero sense, and creates more problems than those you believe you're solving.

And no, it's not an infrastructure problem. It's a software engineering problem of your own making. If you seek stability and reliability then you target stable and reliable platforms, such as the stuff distributed by default by LTS releases such as Ubuntu 18.04, because that's what they are used for.


> You've adopted a LTS release which was made public 2 years ago, was already a couple of years in the making, and is aimed at establishing a solid and reliable base system that can be targeted by the whole world with confidence.

From my experience, 0.05% of the people who have an Ubuntu x.y chose it because of "stability promises" of the LTS release. They just want a quick linux install to run their shit on, or it was handed to them that way by the Uni department / company / whatever. Or just because it's the biggest name and you're sure to find help online. And then a lot of people don't upgrade - I work with my fair share of people running Win 7 (I even still have an XP person around), OS X < 10.10 and Ubuntu 14.04, just because they can't afford the newer hardware required to make the experience of newer OSes not a laggy shitfest.

> And knowing that, your idea is to bolt on custom tooling that isn't installed anywhere by default, and make it your own infrastructure?

gcc is not part of infrastructure, just like Xcode.app or Visual Studio aren't part of Windows or macOS infrastructure.


> From my experience, 0.05% of the people who have an Ubuntu x.y chose it because of "stability promises" of the LTS release.

Your experience and my experience differ greatly, then. I manage around a hundred virtual and bare-metal Ubuntu 18.04 LTS systems. Most of our docker containers (not included in the number above) are based on an Ubuntu 18.04 image because that's what we have standardized our infrastructure around.

The LTS choice was an easy one to make. We are too busy focusing on delivering value to the business to spend much time constantly playing around with the underlying technology. We are not early adopters for much of anything (except hobby projects at home) because we need our technology to be an asset to the business rather than a liability. Depending on new and bleeding-edge software for production systems just creates unnecessary risk.

We'll probably start thinking about upgrading to 20.04 in six months or so.


> We are not early adopters for much of anything (except hobby projects at home) because we need our technology to be an asset to the business rather than a liability.

I'm really glad there are still companies with such values. For some strange reason, these days people equate innovation with using fancy/barely tested tools.


> We are too busy focusing on delivering value to the business

there are a lot more linux users out there than business users


This disagreement is why multiple distros exist. Ubuntu LTS is a reasonable choice for entities that want stability and are willing to accept the occasional migration cost of upgrading to the next LTS (note: this applies to lots of desktop users as well as businesses). Fedora and Arch are reasonable choices if you want cutting-edge software, but come with risks as software changes under them.


> there are a lot more linux users out there than business users

maybe, but there are a LOT more linux servers out there than linux users, if by "linux users" you meant "people running linux on their computers".

those servers (and the people that run them) generally require stability.

would you care to explain what you mean? is it not fair to say that there are many orders of magnitude more instances of linux running on servers than there are instances of linux running on people's home or work computers? that is indisputable, no?


It makes as much sense for me to count individual servers as to count individual docker containers. It's the number of people administering those systems which matters - having many servers to manage just means that you have to know how to set up automation.


> From my experience, 0.05% of the people who have an Ubuntu x.y chose it because of "stability promises" of the LTS release.

You might be right that only 0.05% of people choosing to use LTS distros are doing so for the service promises. But I'd counter that the majority of those who care are managing hundreds or thousands of machines / users. The end users might not know that they care, and that's because the admins are doing a good job.


gcc is part of the infrastructure, for the simple reason that you need the same compiler version to build Linux kernel modules.


> for the simple reason that you need the same compiler version to build Linux kernel modules.

Which kernel modules wouldn't come as precompiled packages in Ubuntu? (real question - as an Arch user for instance I haven't had the need for DKMS for years)


DisplayLink is something that comes to mind. For Arch (btw i use) it needs DKMS and comes out of the AUR[0]. For Ubuntu, it looks like you need some 3rd-party packages that will need to be compiled for modern kernels.

[0] https://wiki.archlinux.org/index.php/DisplayLink#USB_3.0_DL-...


> and is aimed at establishing a solid and reliable base system

Excellent, I have a solid base.

Now I want to install the latest version of Gimp. Cannot. Why is that tied to the base versioning?


Why do you believe you can't? Because, I mean, you can.
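For example, the upstream-tracking snap or flatpak builds install fine on an 18.04 base (a sketch):

    # snap route
    sudo snap install gimp

    # or the flathub route
    flatpak install flathub org.gimp.GIMP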


I totally agree, at least on the desktop. My /usr isn't required for any of my customers' projects to run, and it probably shouldn't be - I have environments and VMs and containers for all that stuff.

But this is an SBC. It's likely to be configured to run its specific task forever. It should automatically log into the default user's account, and automatically start whatever software you want it to run. Software for it is likely to come from cross-compilers and to be sent over SSH.

It doesn't have hundreds of gigabytes of high-speed storage; the entire OS lives on a microSD card, so you don't want to keep multiple toolchains on it.

I really like the DietPi [1] environment for these purposes (though it doesn't currently support the Jetson Nano); the setup scripts make it easy to configure the OS for exactly the tools you want.

[1]: https://dietpi.com/


The more I use linux (12 years), the more I believe that common users should not use a "stable distribution". Upgrades are risky, software is always outdated. It solves nothing and provides an unjustified feeling of stability.

LTS makes sense as a deployment platform. Push it to a server and deploy your own application. Make a demo like the NVIDIA Jetson Nano SDK in question and forget about it.

There are better ways to help underpowered machines - use a light desktop environment, block ads in the browser, add RAM if possible. Kernel and base requirements have not changed much in a decade: about 120 MB with Xorg.


this feels like mass cowardice to me. run Debian testing or unstable. run something semi modern. not old lame vendor LTS.

those who cower in LTS land, thinking they are saving themselves from trouble, have very very very little experience, and scant little to contribute to the community that cares & tries to remain mildly up to date. there is such a prevalent fear-based mentality about running decent, reasonable, modern upstream software, but it is such FUD, such a can't-do attitude. rarely can these camps justify their fearful behavior. LTS is nonsense. testing and unstable work well, & they expose real, honest levels of support for systems, versus relying on the antiquated but known support that LTS brings the tepid adopters.


It is in the name, the marketing: people use Debian Stable and Ubuntu Long Term Support, they don't use Ubuntu Outdated. But that's what it is. The moment something moves to Stable it no longer receives version updates. We can't force a rename. Maybe we can come up with better names for "testing" and "unstable"?

* Fresh

* Upstream

* Rolling

I understand people don't want to switch distributions or channels out of fear or inconvenience. Can a cross-platform package manager help? One with dependency resolution — Nix, not one that bundles it all together like Flatpak or Snap.


these are godforsaken, mother-of-all-anti-patterns, sinful words. how terrible! don't trust the distro, don't trust upstream! trust only the particular bizarre awkward vendor toolchain? gods no. heavens no. the only place this advice is ok is when your hardware is unsupportable, un-upstreamable, when the hardware embodies an anti-supportable form of existence. @jcelerier's advice here could not be more antithetical to the real story. if you need custom weirdo toolchains, something has gone direly wrong.


You can use virtual environments for that


In the case of Python virtual environments you'd still have to install another version of Python, which is non-trivial on Debian-based systems without causing some sort of collision, unless you download the binaries to /usr/ or another bin location as suggested.


I like to use pyenv for this purpose, as it "virtualizes" Python versions by installing them locally as you mention. Then you just point each virtualenv at the Python binary for the version installed via pyenv. pyenv + virtualenv + virtualenvwrapper have greatly reduced Python dependency issues for me.
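Roughly (a sketch; the version number and env name are just examples):

    # build and install a Python locally under ~/.pyenv (no root, /usr untouched)
    pyenv install 3.8.6

    # point a new virtualenv at it (virtualenvwrapper syntax)
    mkvirtualenv --python "$(pyenv prefix 3.8.6)/bin/python" myproject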


A bunch of programming languages do something similar: Node has Node Version Manager (nvm), I think Go has GVM, and Rust has rustup. They usually install into your home folder and let you swap between versions globally. I've yet to have to do this, as I usually spin up a VM if I know I'm going to need to deploy to a very specific version of Debian / Ubuntu; a virtual environment might not match behavior on the actual target system, but a VM definitely will.


+1 to pyenv. And it's even better if you combine it with pipenv [1]: pipenv manages dependencies in its own file (Pipfile), which can also specify the required Python version. Pipenv integrates with pyenv, so if the required Python version is not available it will offer to download and install it.

Tl;dr: with pyenv and pipenv, installing a Python project is just "pipenv install".
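Roughly (a sketch; requests and app.py are just placeholders):

    pipenv --python 3.8        # create the env, asking for a specific interpreter
    pipenv install requests    # recorded in Pipfile and Pipfile.lock

    # on another machine: recreate the exact environment and run the project
    pipenv install
    pipenv run python app.py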

[1] https://pipenv.pypa.io/en/latest/


I would strongly recommend taking a look at Poetry as an alternative to Pipenv.

Pipenv is honestly a nightmare.


I have been using pipenv only on a couple of projects and for a few months, but it has been an improvement over pip and virtualenv.

Why do you say Pipenv is a nightmare? Have you had bad experiences with it?


I regret I can only upvote this once. Pipenv still gets press, but I'm not sure why. Every senior dev I know switched to Poetry long ago.


It's totally trivial. ubuntu 18.04 has packages for python3.7 and python3.8 which install to /usr/bin/python3.7 and /usr/bin/python3.8, leaving /usr/bin/python3 pointing to /usr/bin/python3.6. So, you'd type:

`sudo apt -y install python3.7`

Then:

`mkvirtualenv --python $(which python3.7) <env-name>`

It really is that easy.


Especially in the case of a Python virtual environment, "conda" solves everything ultra cleanly (regardless of whether it's Debian, Fedora, Arch or Windows, and probably macOS too, though I didn't try).

It provides any version of Python, regardless of what your base system has (or lacks). It also provides an isolated gcc and many other packages; and you can still "pip install" inside it if the package you need is not conda-native, and conda will track it just as well.

Conda is really underappreciated.
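A sketch (on aarch64 boards like the Jetson, the conda-forge based Miniforge installer is the usual way to get conda itself; the env and package names are just examples):

    conda create -n cv python=3.8 numpy    # isolated env with its own interpreter
    conda activate cv
    pip install some-package-not-on-conda  # pip installs land inside the env too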


Perhaps virtual environments should be the default on Linux distributions?


Please take a look at Nix. I think you will like it.
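e.g. a throwaway shell with a newer Python, completely independent of /usr (a sketch):

    # drops you into a shell where Python 3.8 is on PATH, without touching the system
    nix-shell -p python38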


FWIW, I got a review unit and it ships with a much lighter desktop environment (LXDE) this time and has a swap file enabled by default. I guess that was needed to ship a 2GB version, but overall it seemed a little better thought out. They also include a wifi dongle in the box.

It still only has Python 3.6, but OpenCV and numpy are pre-installed correctly this time so you don't have to compile them which is an improvement.


I did switch mine to LXDE, but it was a faff to do and took ages. I imagine the only way to make this thing slower is to swap to an SD card.

I'm not doing anything with mine that involves OpenCV, I didn't realise one of its flagship examples was also messed up in the box.

Oh! And I just dug out my script to rebuild if I need to. I forgot a few more annoyances:

* tensorflow isn't installed by default, and it's not obvious how to do the install because you have to specify NV's repo: pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow

* openai gym won't work unless you specify archaic version 0.15.3, there's a buggy interaction between it and NVidia's X11.

* lots of packages need installing that should really have come by default, or were broken by default [eg numpy]


You can update it to Ubuntu 20.04 pretty easily. (using a Xavier as a desktop with that)
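The path is roughly the stock Ubuntu one (a sketch; how well the L4T/CUDA packages survive the jump is a separate question, so treat it as at-your-own-risk):

    sudo apt update && sudo apt full-upgrade
    sudo apt install update-manager-core
    sudo do-release-upgrade    # walks you through the 18.04 -> 20.04 upgrade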

If you don't need CUDA, you can outright use regular Fedora, which uses nouveau. (no reclocking issues or such on Tegra)


If you don't need CUDA, then save the money, heat, and support issues associated with a second-string product and just get an RPi 4. The entire point of the Jetson is that there's a non-sucky GPU that can do AI at the edge.


Perhaps we would be better off if NVidia just made GPUs for the RPi ...


They are trying to purchase ARM, so I wonder what that means for Mali...


Do you know who I could contact to request a review unit? (if you can share ofc - email in bio) I have an open-source Jetson-related project that I think the new model could be useful for.


Email me (info in bio) and I'll see if I can help


I use SBCs extensively and bought a Jetson Nano as soon as it was available. I'm convinced that none of the SBCs in the 4GB memory space are fit to run in desktop mode, and removing desktop packages to free up space for headless mode is the first thing I do after installing the OS.

I suspect that it largely has to do with the lack of optimisation of the graphical packages in Linux for the ARM architecture, as an x86 Chromebook with 4GB memory and a slightly better CPU clock boost rate can offer a far superior desktop experience.


I think there are (in general) two models that work.

1) ubuntu lts - stable releases - I'm a server person I want nothing to change, ever.

2) arch linux - rolling releases - I'm a desktop person, I want the latest of everything, now.

anything between these two extremes kind of sucks, both as a developer and as a user.

advantages:

1: "Q: I want to disable snapd on ubuntu 18.04, how do I do it?"

"A: <20 detailed posts of how to do it>"

1: "user: something isn't working"

"developer: clearly your fault, I haven't changed anything in that project in 4 months"

2: "Q: I want to disable snapd on arch linux, how do I do it?"

"A: why did you install it? look on the wiki."

2: "user: something isn't working"

"developer: everything is up-to-date, please do pacman -Syu and read the wiki"


18.04 is a sensible release to ship, considering that 20.04 only came out in April.


I would guess because the NVIDIA driver/library support on 20.04 is still lacking.

I just set up a new 20.04 for ML and the official NVIDIA repos for this version were still missing a few cuXYZ libraries. I had to also add the 18.04 repos and symlink them together to get some Frankenstein library mess that works (as far as I can tell).


Wait, what? That really surprises me. I am using CUDA 11 on NixOS unstable (which is a rolling release) without any issues at all. Or is that on Aarch64?


Jetson is still on CUDA 10.2, which only fully supports GCC 8 and earlier.

Support for CUDA 11 is coming soon, with the bump to Ubuntu 20.04 at the same time too. However, patching those issues isn't very problematic.


I'm not going to lie, I don't understand the usage of Python and other such languages in the embedded space in general.


These are fully blown computers with plenty of RAM and using python on them is just fine if it meets your cpu usage spec. People get too carried away with the term embedded. Not everything is an MSP430 with a tiny bit of memory or processing power. That said, I've been using modern c++ and rust on my hobby projects as they have most of the advantages of python and blow it away as far as memory usage and performance, at least for embedded usage where we're not building huge web apps or similar.


This is not really the classic embedded space; it is a full ARM processor running a full linux operating system. It's more like a small server than an MCU board. It couldn't be used for hard real-time requirements.


Rapid prototyping and in the case of Jetson - computer vision and Tensorflow running on the GPU. Both have excellent support for Python.

Sure you can make those things run natively, but people using them are used to Python, a lot of code is written in Python for it, and there is little to no performance penalty for running the code in Python (everything perf-critical is running natively anyway; Python is only used to "glue" those bits seamlessly together).

And once you have RAM in gigabytes even the extra memory overhead of Python becomes a moot point.


In this particular case, I think everything compute-intensive happens in the GPU anyway, so there's not much to lose by using a high level language on the CPU side.


Can you not trivially update all these things though? Like how big of a deal is it really? Or are you stuck using their specific distribution, and their distribution can't be updated to a more recent Ubuntu and python?


You can update things but in my experience it wasn't trivial


Is it just the normal process for updating Ubuntu to the next release though? I wouldn't call that trivial either, but only because it'll likely take hours here. Still, it's a well-trammelled path that typically works well while unattended. So if that's all it is, it doesn't seem like a huge deal.


I run Debian (mostly) on it. Seems to work well for me. Compiling PyTorch is a bit of a nuisance at 4GB, but it worked. One issue I had was that TRTorch wanted bazel to build and I couldn't bring myself to install that (and Java), so I made up some quick and dirty CMakeLists.txt and it worked well.

But I must admit I don't see the 2GB model as a better value than the $100 4GB one, especially with shared CPU/GPU.


> Want python later than 3.6?

apt-get install python3.7

or

apt-get install python3.8


That might theoretically trigger updates of packages on which NVidia stuff depends (through indirection), which will then render your system unusable. You might be lucky, though.


I think it'll be all right. python3.8 is its own distinct package, which will install alongside the existing python3.6.

Debian/Ubuntu's "alternatives" system can select what /usr/bin/python3 runs. In practice, changing that away from the distribution default can break other people's code, and the win of an LTS is not spending time on that stuff.

But you can get yourself a Python 3.8, then you can build venvs, run a Jupyter kernel in it, whatever you like. If you can download or rebuild all the modules you want in a 3.8 venv, you're pretty set. And everything wanting Python 3.6 on your LTS still works.
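Concretely, something like this (a sketch; the paths are just examples):

    # 3.8 installs alongside the stock 3.6; /usr/bin/python3 is left alone
    sudo apt install python3.8 python3.8-venv

    # a self-contained environment built from it
    python3.8 -m venv ~/venvs/py38
    source ~/venvs/py38/bin/activate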


that's why you look at what's getting installed before you hit go. anyway it's easy enough to back up and restore almost all of these SBCs
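e.g. a dry run shows the full set of changes before anything actually happens (a sketch):

    # --simulate (-s) prints what would be installed, upgraded or removed without doing it
    sudo apt-get install --simulate python3.8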


An alternative is to use Docker containers or similar.


That will most likely break OpenCV, Tensorflow and CUDA which depend on binary packages compiled for the given version of Python.


CUDA doesn't depend on python, it's the other way around. And you'd install the new versions of those packages.


What I meant is that it would break the CUDA package for Python and support in things like Tensorflow, obviously. Not CUDA itself.

You can't just "install new version" of those packages unless Nvidia supports it, unless Tensorflow supports it, unless OpenCV works with it. If you have ever tried to build Tensorflow from source, you would know what I am talking about.


I have, on python 3.7, and it works fine with the entire CUDA matrix from 10.1 to 11. So your point has theoretical merit, but in practice, it's not a real problem as compatible versions of those packages are readily available.


Yes, NVidia should stop tying their Jetson hardware to specific Ubuntu releases.

Instead they should release drivers that work with any Linux distribution (for example Debian, which is now broken because of USB issues).


I just installed python 3.8 on my Ubuntu 14.04... Had to also upgrade openssl and lzma for it to be useful though


Also, no ffmpeg with hardware acceleration. They force you to use gstreamer.
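For reference, the gstreamer route looks roughly like this (a sketch using the NV element names from L4T; test.mp4 is a placeholder):

    # hardware H.264 decode via the NV V4L2 decoder element, straight to the display
    gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvoverlaysink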


ffmpeg with hardware acceleration is available, see: https://github.com/jocover/jetson-ffmpeg

(there's also the NV prebuilts of an older version of the patchset at https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%252... )


Just curious, any guess why it doesn't come with the latest Ubuntu and Python pre-installed?


18.04 was the last Ubuntu LTS until very recently, and python3.6 is the system python3 for Ubuntu 18.04. Both are Ubuntu defaults that Nvidia just left in place. When 20.04 support comes, Python will upgrade with it.


NVidia don't see the point. There are a few forum threads where NV respond "why do you even want that" or "you can get ubuntu 20.xx from a third party".


Then why pre-install anything? I'm not in the field, but I would think that a lot of people would want to run their Python CV code on it. Or is it because they want you to use CUDA?


Because if you don't have an offering that'll make it do something interesting OOB, it's unlikely anyone would buy it. [bootstrapping from true scratch is typically a nightmare for hobbyists]

I shouldn't be quite so negative; NV did put in a lot of effort building up the development kit. ARM Builds of CUDA, drivers, etc. Repos with packages for the job. Tutorials for common tasks, etc. They're really within a hair's breadth of an excellent OOB experience, but fell at the last hurdle.


The raspberry pi didn’t/doesn’t ship with anything and people bought/buy that by the millions


It’s very easy to install any version of Python using pyenv. I would be careful not to change the system Python in 16.04 because it will probably mess up apt.


As soon as you start needing C libraries, you start getting minor issues here and there. Numpy modules in particular seem prone to issues -- even on a bog-standard x86 desktop. It's got to be much worse on the very niche Nvidia SoC.


I booted my Radxa Rock (rk3188) with BareBox, Arch Linux ARM and a mainline kernel. It just took me a few hundred hours to figure out all the problems on my way there. And it's one of the most open source friendly ARM SoCs.

I wish luck to everyone buying these things. You'll need it to run any modern distro in a few years.


I took a lot of interest in various small ARM devices.

Only to realise that lack of software support makes them almost all pretty much completely useless.

I wouldn't use any randomly bought ARM device now for hobby projects, except the Raspberry Pi.


Very recently I recovered my rock too. I'm about 20 hours in. Any repository or helpful documentation to save part of the 80 remaining hours? pleaaaasee


1.) Forget about EMMC, it's gone and not supported by anyone.

2.) Boot is only possible from microSD card. USB is not supported.

3.) Build BareBox with https://www.barebox.org/doc/latest/boards/rockchip.html

Use `rk-splitboot` to split FlashData from the original boot loader; the other options don't work. Build it; it will create barebox-radxa-rock.img. It's the .img, not the .bin.

Write an SD card as in the instruction.

4.) Follow these instructions, but use the generic image.

https://archlinuxarm.org/platforms/armv7/amlogic/odroid-c1

http://os.archlinuxarm.org/os/ArchLinuxARM-armv7-latest.tar....

5.) Build kernel as in this tutorial, ignore the `dts` stuff. The `dts` comes with the kernel sources.

https://wiki.radxa.com/Rock/Linux_Mainline

https://github.com/torvalds/linux/blob/master/arch/arm/boot/...

6.) Copy the zImage and rk3188-radxarock.dtb to /boot and plug in the UART.

7.) sudo picocom /dev/ttyUSB1 -b 115200 -e w

8.) Boot in the bootloader and execute the following commands:

    global.bootm.appendroot=true
    global.bootm.image=/mnt/mshc1.0/boot/zImage
    global.bootm.oftree=/mnt/mshc1.0/boot/dtbs/rk3188-radxarock.dtb
    bootm

9.) If it boots, you can make it permanent by following the BareBox docs.


Are you statically compiling in the kernel modules? (Would have expected some .ko's to need copying too, beyond the kernel image and device tree pieces.)


Woow HUGE thanks!


Forgot the most important one - power it using the power cable that came with it and the 5V _2A_ adapter. It will kernel panic at random places if you power it from the USB port.


Oh geez. How did you even figure this one out?


This sort of thing isn't that weird in small cheap devices. They more or less demand clean power, which is why the Pi4 is heavily recommended to be used with their official charger.

I find it gets unstable with cheaper USB C chargers.

I specifically buy high quality chargers/cables/battery packs for my embedded work since they are so intolerant of bad power


What are your recommended brands of USB chargers?


This person got hold of a review board and wrote up an interesting, detailed tutorial: https://medium.com/@ageitgey/build-a-face-recognition-system...


That seems like a more interesting article, so I've changed to it from https://hackaday.com/2020/10/05/nvidia-announces-59-jetson-n.... Thanks!


Well that's pretty crappy. We put out a great article at Hackaday that covered the meat and the meaning of this release. Seems like it deserved to be linked on HN for this topic.

Edit: Seems like people voted up the Hackaday article and not the medium article. So you've in effect negated everyone's votes.


I dislike HackADay's regurgitation of articles, but just changing the link to a completely different article is pretty ridiculous and unwarranted.

How do I know which comments are talking about which article?


I have the 4GB version of the Nano and the 4GB Raspberry Pi. I like the Nano, but I use it mostly as a development machine, and for that I would rank the 8GB Raspberry Pi, at about $89 US, the best deal. Substantially below that is the Nano, and almost tied is the 4GB Pi. There may be cases where you absolutely need an Nvidia machine, in which case I would argue for the 4GB version, but otherwise the 8GB Pi appears to be the winner by a long shot.


I think the whole idea for the Nano was having a GPU on a small computer. The RPi doesn't have that. If you're comparing them as plain CPU-based development systems, you're missing the whole point of the Nano.


Just so there is no confusion, the Raspberry Pi 4 supports dual 4k displays out of the box. It has a built in Broadcom VideoCore VI @ 500 Mhz GPU and dual micro HDMI ports so you are good to go. Perhaps you meant a discrete GPU?


This is not about the display. It's about running CUDA-accelerated machine learning stuff. Think - pytorch. That's the whole point of the Nano.

If you're not doing that, use an RPi.


The Nano supports CUDA; the Pi doesn't.


Nvidia's video decoder/encoder is noticeably faster too


do you happen to know the memory bandwidth of these newer Raspberry Pis?

I can't find it on the product site, nor in the BCM2711 datasheet (which is more like a programmer's manual)


What are the use-cases for a GPU-accelerated single board computer?


Apparently there's a big Software Defined Radio (SDR) community that appreciates the TFlops of floating point compute. cuSignal in particular.

Note that embedded radios / modems / codec manipulation is incredibly common. Usually it's handled by a DSP (VLIW architecture), but I'd imagine that the GPU architecture has more FLOPs. DSPs probably have less latency, but something like SDR is not very latency sensitive (or more like: the community models latency as a phase-shift and is therefore resilient to many forms of latency)

Note: I'm not actually in any of these communities. I'm just curious and someone else told me about this very same question.


It looks like at least one company has started to produce products that tie together the radio <-> FPGA <-> GPU compute pipeline for SDR using cuSignal and these Jetson modules: https://www.crowdsupply.com/deepwave-digital/air-t

It's pretty neat stuff to think about DSP on hundreds of MHz of bandwidth being something that you can do in software these days. I remember when that was firmly in the realm of "don't bother unless you want to design a custom ASIC" and now it's starting to become even hobbyist-accessible.


Heck, just being able to find an ADC with hundreds of MHz of bandwidth is kind of a big deal, let alone the computational resources now available to the modern hobbyist.

The modern day has brought incredible advancements to the modern hobbyist. Just 10 years ago, you probably still needed to play with OpAmps and Inductors to effectively read a hundreds-of-MHz signal in any real capacity (at the hobbyist scale at least: $1000 or less projects). A PC probably had the compute girth back then to interpret the data, but you still needed a high-speed, and high-quality, ADC to even read the data that quickly.


One commonly pitched use case is real time neural network calculations on mobile robots.


Same as it's always been: Gaming. I bet even this $59 computer can play games surprisingly well. Obviously not modern AAA titles or anything like that, but there are many decades' worth of games it can play (including ones originally released for non-PC platforms). There are lots of interesting Pi-powered handhelds and mini-arcade setups running emulated games; this seems like it'd be significantly better at that.


Gaming is not the purpose of GPU-accelerated single board computers. Though you could reasonably play some games on the Jetson Nano, Nvidia markets the Jetson series as embeddable machine learning systems. While there may be complaints about how hot the Jetson gets or its performance with Ubuntu, the Jetson is not intended to be used as a personal computer. It's dirt-cheap machine learning access. I highly recommend it as a way to get started in AI or CUDA with the smallest price barrier possible.


I said it was a use case, not the purpose. But realistically these single board computers are more suitable for gaming than for machine learning. If you don't have a computer at all then I guess you could use a Jetson for this purpose, but a real computer would be much better and is easily within the budgets of most companies and most software engineers.

Is this a developing markets play? I've never heard of someone whose only computer is a Pi/Jetson. By the time you add all the necessary peripherals to use it as your only computer, it costs more than a cheap laptop.

So the point of a Jetson is .. you only have a cheap laptop with terrible integrated graphics, and for $60 you get a much better GPU with this Jetson, and you remote into it?

I think it'll do better in gaming.


It's an arm cpu dude, it can't run anything other than emulators basically. And the bottleneck there is the cpu, not the gpu


ETA Prime already did an emulation video; it seems to have enough grunt to run some GameCube/Wii games: https://www.youtube.com/watch?v=5xrRxz5633I

Compared to him testing on a Raspberry Pi 4: https://www.youtube.com/watch?v=l4TyYU9Xhcs

(note both those videos are a bit old so results could be different now)

The video is a year old, but I imagine there would be fewer bugs now. With this cheaper board coming out, there might be a bit more development activity.


There is a fairly decent subreddit around single board computer gaming: https://www.reddit.com/r/SBCGaming/ .


None of those emulators need accelerated graphics in the form of the Nvidia GPU, and emulators for the ARM platform most likely can't even use it, as most of these SBCs don't have anything like that.

This Jetson really isn't a platform to run emulated games - for that there are much better boards, with more RAM. Which is a much more important issue. You can't do miracles with 2GB of RAM.

Jetson is made for computer vision, neural networks and signal processing, where the GPU isn't used as a GPU but as a massively parallel co-processor to crunch through a lot of data at once.


> Same as it's always been: Gaming.

No, not even close. It's embedded systems with GPU acceleration, mostly for machine learning.


Fancy 3D visualizations on screens of embedded devices. Image processing on camera feeds (both "traditional" and neural-network-based). Video encoding/transcoding. Complex UIs in high resolutions.

(strictly speaking any screen with higher resolution and/or expectations of smooth non-trivial graphics requires some kind of GPU, but I take the question to mean why one would want a powerful one)


Smart security cameras that can perform complex operations at the edge. Like real-time video stitching, object recognition etc.


HDMI out and boots Linux, so could function as desktop replacement. The demos I've seen it really shines in small robotics. Think AWS Deep Racer League style competitions for K-12 ;)


> HDMI out and boots Linux, so could function as desktop replacement.

As someone who has tried to run a desktop environment on various Pi-like devices for years, with every generation being "the one that will replace your PC" I just laugh.

The modern web is nearly unusable with less than 4 GB of RAM and it's really easy to push beyond that. I personally would not recommend that anyone try to use anything with less than 8 GB of RAM as a general purpose desktop computer anymore.

You can do it, sure, but there will be significant compromises and limitations along the way. A secondhand x86 PC with 8+ GB of RAM is almost always a better choice for general purpose desktop use. Leave the "hacker boards" for roles that benefit more from their compact size, low power consumption, and/or specialty I/O capabilities.

I haven't tried the Pi 8GB, that should at least solve the RAM problem, but the lack of native high-speed storage still likely impacts usability. Here's hoping the major Pi-likes follow suit to offer variants with more memory soon.

---

And of course as noted by other comments, another major reason to think of these things more as appliances is the generally spotty support for updating kernels and the like that is unfortunately standard in the ARM world. That entire side of the industry is infected with NDA culture and has no interest in curing it.


Local processing and video compression for a raw video camera (especially in autonomy applications). The compression in standard IP cameras introduces not only compression artifacts but also latency and jitter. And the system scales better with one low-cost GPU right at the camera vs trying to process many cameras at one server.


Things like video analysis for security cameras for people who don't want to stream everything to the cloud. It's possible on a single board computer CPU but only with lots of compromises - you can't do multiple streams etc.


Why wouldn't someone just use a PC for that since it doesn't need to be portable?


Power usage, fanless, size and price are the usual reasons.

If you price a silent PC with a fanless NVidia card it's quite a lot more.


Convenience. If you have a security camera up 50 ft on a tower 40 miles away, connected to the internet via a cell modem, where do you put the PC? And is the PC able to fit in the power envelope and put up with environmental factors? An SBC is much easier to harden environmentally.


I'd assume because of cost. This device does exactly what's required with low power draw and is easily replaced if it breaks or more processing power is required.


Better to understand this as: what is the use case of an RPi combined with machine learning?

More specifically: what is the use case of a battery-powered robot "brain" with machine learning?


Good retro console emulation like the N64 alone may make this a go to for many.


For the price, you might as well do a Mister FPGA build using the DE10-Nano. That’s ~$140 versus the $200 for the Jetson Nano here. And the Mister FPGA, being FPGA based, can avoid most slowdowns software based emulators would have.


Add on sram, expansion board, usb hub, and it gets to be a bit more than $140.

I'm hoping someday we can get something like mister on the pano logic boards. You can find 2nd gen ones on ebay for $15 or so and have usb, dvi, audio, and I believe a more powerful fpga. All in a solid metal case.


Is there an N64 implementation written in an HDL? cen64 is at least written in an RTL style, but it's not quite there.


The Jetson Nano is less than half the price of the Mister FPGA.


I want to strap it to a drone and fly around with it.


Autonomous drone navigation (forest trail following) using a TX2: https://github.com/NVIDIA-AI-IOT/redtail


Anything that is not mass produced, needs to have a small size or weight and needs 3D graphics or computer vision. For instance, imagine a bike rental startup that wants to do traffic sign recognition integrated in their vehicles.


Self Driving RC Car Racing


Getting more CUDA buy-in?

But more seriously: robotics.


Does this run a stock distribution and a distro provided stock kernel such as Ubuntu? If not, how good are the security updates and for how long does NVidia commit to providing them?


The RPi was originally a tiny arm bolted to a decent mobile GPU :)


As usual, the hardware is way ahead of software.


This isn't true. I personally can use this now, and I know 5 or 6 people who have been trying to do stuff using the Intel USB accelerator or Google Coral who I expect will switch to this at this price point.

Only the MAIX platform has a price advantage over this now, and the software is much less mature.


The use cases for the K210 and Jetson systems are quite different. I've used the Maix Dock as a smart controller for a custom CNC system, and it performs beautifully. The Jetson line is more suited to systems that are video heavy (low latency, multiple feeds, mixing, processing, etc). The Maix boards just don't have the RAM to deal with that; I've only been able to handle two incoming video streams at low resolution.

That being said, programming the Maix is a piece of cake. The Jetson Nano was very challenging to deal with; the software is unstable in many cases, and the levels of abstraction are all mixed up, and discovering which Nvidia tools to use is an absolute pain (Gstreamer vs DeepStream vs ARGUS vs whatever else Nvidia pumps out this year). Also, I had way better support from the Maix community than from Nvidia. Nvidia's support model seems to be "unless you're spending tens of thousands of dollars, go pound sand, maybe a volunteer on our forums will help you, if anyone knows what's actually going on here."

Nvidia just has too much software, too little documentation, and holds onto their secret sauce so tightly that doing anything is a true pain.


I haven't had any trouble at all with the NVidia stack. But I do machine learning full time for my day job, so I'm reasonably familiar with it.

OTOH, the MAIX has things like the pin assignments wrong[1] and you need a magic (undocumented) flag to make the LCD display colors correctly. There are also different types of models supported, with undocumented constraints on each one, and you need to find the correct set of random open source projects to make them work. But it is an exciting platform!

[1] https://github.com/sipeed/MaixPy/issues/298


Strange Idea:

Someone could use this SBC -- not as an SBC, but as a discrete video card...

That is, have a computer without a video card -- communicate with this SBC via USB or Ethernet -- and have it perform all of the functions of a video card to generate the display output...

After all, it's got quite a high power GPU in it...

Yes, it would be a bit slower than a PCIe based high-end graphics card... this idea is for experimentation only.

One idea, once this is done, is to decouple the GUI part of any Operating System you write -- and have that part, and that part alone, run on the SBC / GPU.

Yes, X-Windows/X11 already exists and can do something like this.

But if say, you (as an OS designer) felt that X-Windows was too complex and bloated, and you wanted to implement something similar to Windows' GUI -- but a lighter version, and decoupled from the base Operating System -- then what I've outlined above might be one way to set things up while the coding is occurring...

Again, doing something like this would be for people that wanted to experiment, only...


For video playback, CSS3 transitions/animations, gaming etc., how much better is the GPU in the Jetson Nano compared to the one in the RPi4? Just to get an idea.


Looks great and I'm glad the sales numbers have given them the ammunition nvidia execs needed to keep pushing the price down well into hobbyist range. I'm trying to preorder a couple, but...

ORG.APACHE.SLING.API.SLINGEXCEPTION: CANNOT GET DEFAULTSLINGSCRIPT: JAVA.LANG.ILLEGALSTATEEXCEPTION: RESPONSE HAS ALREADY BEEN COMMITTED

Edit: it's live now! [1][2]

[1] https://www.seeedstudio.com/NVIDIA-Jetson-Nano-2GB-Developer...

[2] https://www.sparkfun.com/products/17244


I want to set up something like this for my daughter's primary grade school meetings using Zoom. Any suggestions? Are the Pis or this device up to par now for such a use case? I totally do not like Chromebooks, and standard laptops are overkill.


If you are looking for something that you can use so your daughter can run a browser and then Zoom into a class session, then I would suggest a Raspberry Pi 4GB or 8GB. I have a 4GB Nano. Comparing the 4GB Pi and the 2GB Nano, the Pi's 4GB of RAM is an important factor vs the 2GB. Personally I would recommend the 8GB Pi.

There are many more Pis out there, so the development community and support are stronger than the Nano's. The Nano has support, just not in the same areas. I have had problems with the Nano where memory gets chewed up in some way and I just reboot it after a while. The Nano does have the advantage of an Nvidia GPU, but it appears that is of no benefit to you.

Although the Nano does not have built-in wifi I don't think that is a substantial issue, but it does mean you need to get a wifi dongle. Of more concern is the need for a fan on the Nano. I have found that it gets hot fairly easily when pushed.

Bottom line is that in my opinion the small amount of memory is a substantial reason to avoid the Nano 2GB; the 4GB Pi is good, and for about $30 more the 8GB Pi is a definite step up.


Given that you also need a screen, keyboard, mouse, camera and mic, I don't know if it is better than a laptop for your application.


Yup, just buy a cheap Chromebook (maybe used) and save many hours of labor/setup and quite possibly even some money.


Why though? If your goal is just to run Zoom, you are better off buying some used USFF PC, having the ability to install an x86 OS and having more CPU power at a similar, if not lower, price point.

Of course, if you just want to buy a Jetson and are just looking for some reason to, that's totally fine too :-D


afaik there's no native Zoom client for ARM [0]. You'll be stuck with the web version of it, which according to some functions poorly no matter where you run it

[0] https://devforum.zoom.us/t/have-you-considered-building-zoom...


Go with the overkill, used mobile workstations are cheaper than new Chromebooks. Just imo


I tend to lean in this direction as well. A 4GB RPi4 would work, but the Pi itself costs $55, plus a dedicated USB power supply, uSD card, the HDMI adapter I usually need, webcam, possible case/cooling fan (Pi4's can get hot), plus a spare keyboard, mouse, and monitor.

The total package ends up significantly more expensive than the "list price" of an RPi itself, and it ends up not making much more sense than buying a used x230 or something off of eBay for <$200.


I have an Odroid H2 here as my main system: quad-core x86, M.2 slot for SSD, user-friendly RAM (SO-DIMM, currently 8GB installed, supports up to 32GB), SATA ports, and it runs fanless at 40 degrees Celsius most of the time. Lovely bit of kit.


$111 for anybody out of the loop: https://www.hardkernel.com/shop/odroid-h2/ (and seemingly out of stock?)


Meanwhile a production-ready Nano will set you back $1000.


A five-year-old ARM chip, the X1[1], is now far & away the best graphics one can get on a $100 single-board computer.

Doubly amusing: there is now an ARM core called the X1, a heavyweight follow-up to the A76 for bigger form factors.

[1] https://en.wikipedia.org/wiki/Tegra#Tegra_X1


Honest question, why would I choose this over an "old" Android phone that I already have in my drawer?


This runs a basically normal Ubuntu, which is much nicer to use than Android for many tasks. I suspect that its GPU is actually more powerful than an older Android device. And some people want to buy these new to build some sort of edge compute, in which case it's much nicer to buy something off the shelf that's supported in your usecase, cheap, and doesn't tack on hardware you don't need (touchscreen, modem) while adding hardware you do want for that situation (ethernet).


Native CUDA implementation on an Ubuntu system would be the biggest reason in my mind. There's CUDA for android but (at a glance) it didn't look stellar. It also helps that there is an active community here where everyone has the same device so if you run into issues you have somewhere to turn. Final point is NVIDIA clearly sees this as a business segment worth investing in so it will continue to grow and improve.


No WiFi kills it for me


This version comes with a USB wifi dongle in the box in North America and Europe now. I guess they had an issue with licensing chipsets in all regions.


If wifi is all you needed then this product wasn't made for you.


Buy a chinese usb wifi dongle. They are very cheap.


But don't buy the WRONG dongle. I just spent 15+ hours trying to convince the Jetson Nano that the rtl8822bu driver ought to compile on an arm64 system. It's definitely not like the Pi, where 99% of the software is open source and you don't have to deal with Nvidia's nasty hackish tooling to download special Jetson headers and such.


Same here!! If you want wifi just go ahead and double check what works w/ it beforehand.. also remember that the USB bandwidth is shared on the Nano


This has WiFi.


> The Gigabit Ethernet port is still there, but unfortunately wireless still didn’t make the cut this time around. So if you need WiFi for your project, count on one of those USB ports being permanently taken up with a dongle.


NVidia at https://www.nvidia.com/en-us/autonomous-machines/embedded-sy... :

> connectivity: Gigabit Ethernet, 802.11ac wireless[1]

> [1] Not initially available in all regions

whatever the footnote means


It comes with a WiFi adapter in the box, but that's not really the same thing.


I don't see a good use of this over Raspberry Pi and Pi + Intel Neural Stick.


You can see the inference benchmarks of OG Jetson Nano and RPi3 with Intel Neural Compute Stick[1]; the FPS are at least double if not more for all tests on Nano and many tests did not run on RPi3+INCS due to memory.

It would be interesting to see the tests repeated with Raspberry Pi 4B + Intel Neural Compute Stick 2, but I doubt whether there will be any drastic difference as this setup will still be bottlenecked by USB bandwidth when compared to Jetson Nano.

Even if the performance somehow matches, the price/value is overwhelmingly favourable to the Jetson Nano, even more so with the new cheaper 2GB version.

[1] https://developer.nvidia.com/embedded/jetson-nano-dl-inferen...


This seems like it could be useful for home automation.

Is this enough power to do voice recognition? I'm sorry if this is a stupid question, I haven't done anything with ML before.


It may not have the horsepower to actually do the learning, but it takes very little computing power to actually run a realized model that was made w/ machine learning.


You can use a smartphone for that. There are options other than Google for hotword detection, but you could hook into Assistant, as well. Bit of a pain in the ass, but it works.


I think there's a desire for privacy which you can't get using Google's speech to text solutions.


Can we be done with the Cortex A57? It's ancient.


Slightly off topic but I’m wondering if there is a new Nvidia Shield coming out, they seem to have released them end of October in past years...


The first one was not updated for years, I doubt the current one will either.


New Shield versions are released every 2 years (2015, 2017, and the latest in 2019), so it's unlikely a new version comes out this year. I'm still using the 2015 version as the 2019 is not much of an upgrade.


I was excited until I read the HN comments below. Sounds quite painful still, which is more of an issue for enthusiasm-driven tinkering.


How does the performance compare to a Raspberry Pi 4 with 2 GB ram?


But can it run Kodi with 4k streaming?


The video decoder can do one 4k stream at 60fps or two 4k streams at 30fps


tl;dr: It's the same device as before in terms of processing, but with less RAM and fewer connectivity options (and therefore cheaper)


More than that.

They dropped DisplayPort, the second CSI camera connector, the M.2 slot, and a USB port, and replaced the barrel jack with USB-C.

That's a decent downgrade in capability.


USBC power is an improvement though!

Edit: But no WiFi is annoying.


Depends on your usecase. If you try to connect it to something like a battery or a generator, USB C is definitely a downgrade.


I guess that's true, but many projects I see use USB power banks for arduinos and I don't think this is too different.


The 4GB Jetson will run off a micro-usb port (for power banks) OR the barrel adapter.


The 4GB one over micro-USB really needs to be in 5W mode, disabling 2 of the 4 CPUs. Running off USB-C at 15W, where I can use all 4, is a big convenience improvement for me.


I run the barrel adapter off a power bank just fine


Removing the second CSI Camera Connector is what immediately killed it for my use case :(


I don’t understand why it has less RAM. I would have expected a 4GB and an 8GB version.


What about a commitment to providing customers with proper hardware schematics and detailed datasheets? (yes, even if the board itself isn't opensource)

I lost faith in using raspberry pi 4s because the foundation is very tight-lipped. Want to know why your board is behaving weirdly, or whether the behavior is expected or a result of a defect? TOUGH LUCK. You'll have to rely on outdated forum posts, SO answers, and (if you're really lucky) the good mood of the engineer monitoring these sites.

I'd rather stick to building for a well-understood but expensive platform like x86 than undocumented black boxes.


Hey now, having the words "cat" and "dog" printed next to the pictures of the cat and the dog is cheating!


What exactly is the point of such devices? If you create something worthwhile with it, you cannot exactly include it in your product. I get that you can create something like a one-off garage opener based on face recognition, but you won't gain any real-world knowledge of developing a product, and if you later decide that this is something for you, you'll have to learn a whole new ecosystem. These devices remind me of "game maker" software claiming you could make your own game with it, but you were limited to what the developer had in store for you and you wouldn't become a game developer from using it.


These get a lot of use in limited-run industrial applications where you need something custom and smaller/lower power than a desktop computer but you aren't manufacturing thousands of devices and you don't have millions of dollars to develop custom hardware. You can take pretty much any model you trained on a desktop GPU and run it on the board without any changes which is nice for the right kind of project. And they sell a range of modules with a spectrum of price/performance/power draw depending on how much processing power you need.

If you are manufacturing something in larger volume, they will sell you production modules without the dev boards that you can mount on your own boards. But those are still aimed at more expensive products. So this isn't a solution for low-cost high volume consumer products, but there is definitely a market for it.


The Jetson Nano is the "small" SDK for the Jetson Xavier series, which is Nvidia's embedded AI platform for volume (i.e. they will sell you thousands of units for volume manufacturing in cars, etc.) - that's always been the draw for the Jetson Nano: you use the hardware for prototyping, and when you are ready, buy the larger and more expensive Xavier and that goes into your hardware product.

https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...


I tried to find any documentation, code, schematics but gave up after 30 minutes. Who is this for? It seems like this product is for people who want to say they do AI and show some PCBs on their Instagram.


They are literally development kits for the embedded chips and modules NVidia sells. What you prototype on a dev kit you can easily transfer to production hardware that incorporates those modules.

And if you just do CUDA experiments with it, those transfer at least partially to other environments (e.g. desktop/server hardware with GPUs) too.

Where do you get the idea this isn't transferable?


I tried to find any documentation, schematics, code and I can't see it anywhere on that website.



Man I swear I got retconned, I just couldn't see this. I would like to retract all I wrote. It's pretty cool!


You have various Jetson Nano 'pro' boxes, like:

+ https://www.aaeon.com/en/c/aaeon-nvidia-ai-solutions

+ https://www.avermedia.com/professional/category/nvidia_jetso...

and you have a certain number of custom designs using the Nano compute module: https://developer.nvidia.com/embedded/jetson-nano



