I have a Jetson nano. It's a nice thing for what it is, and the price.
But the software misses the mark by a lot. It's still based on Ubuntu 18.04.
Want Python later than 3.6? Not in the box. A lot of Python modules need compiling, and the CPU is what you'd expect. Good for what it is, bad for compiling large numbers of packages.
They run a fancy modern desktop based on the Ubuntu one. Sure, it's Nvidia, gotta be flashy. But that eats a quarter of the memory in the device and makes the SoC run crazy hot all the time.
These aren't insurmountable issues, they just left a bad taste in my mouth. In theory I only have to do setup changes once, but it's still a poor experience.
The more I use Linux (and I've been using it for almost 15 years), the more strongly I believe that using what the Linux distros provide as a development toolchain is an antipattern. Use separate toolchains, with SDKs kept separate from your /usr, instead.
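One lightweight version of that idea can be sketched with Python's stdlib `venv` module: the environment lives entirely outside /usr, so nothing the distro ships is touched. The paths and names below are purely illustrative, not anything a Jetson image actually ships.

```python
import tempfile
import venv
from pathlib import Path

# Create an isolated environment in a scratch directory instead of /usr.
# In real use you'd pick something like ~/sdks/myproject (illustrative name).
sdk_root = Path(tempfile.mkdtemp()) / "myproject-env"
venv.create(sdk_root, with_pip=False)  # with_pip=False keeps creation fast

# The environment carries its own interpreter configuration,
# completely separate from the system's /usr tree.
print((sdk_root / "pyvenv.cfg").exists())  # prints True
```

The same principle applies to C/C++ cross-toolchains: install them under a prefix you own rather than mixing them into the distro's package-managed directories.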
> Good for what it is, bad for compiling large numbers of packages.
FWIW, I got a review unit and it ships with a much lighter desktop environment (LXDE) this time and has a swap file enabled by default. I guess that was needed to ship a 2GB version, but overall it seemed a little better thought out. They also include a wifi dongle in the box.
It still only has Python 3.6, but OpenCV and numpy are pre-installed correctly this time so you don't have to compile them, which is an improvement.
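If you'd rather verify what an image actually ships with before compiling anything, a quick importability check is enough. The module names here are just examples:

```python
import importlib.util

def installed(module_name: str) -> bool:
    """Report whether a top-level module is importable, without importing it."""
    return importlib.util.find_spec(module_name) is not None

# Example: check the packages mentioned above on a freshly flashed image.
for name in ("numpy", "cv2"):
    print(name, "present" if installed(name) else "missing")
```

`find_spec` only consults the import machinery, so it's safe to run even when a heavy package like OpenCV would be slow to actually import.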
I use SBCs extensively and bought a Jetson Nano as soon as it was available. I'm convinced that none of the SBCs in the 4GB memory class are fit to run in desktop mode, and removing the desktop packages to make more room for headless mode is the first thing I do after installing the OS.
I suspect it largely has to do with the lack of optimisation of the graphical packages in Linux on ARM, as an x86 Chromebook with 4GB of memory and a slightly better CPU boost clock can offer a far superior desktop experience.
I would guess because the NVIDIA driver/library support on 20.04 is still lacking.
I just set up a new 20.04 for ML and the official NVIDIA repos for this version were still missing a few cuXYZ libraries. I had to also add the 18.04 repos and symlink them together to get some Frankenstein library mess that works (as far as I can tell).
Can you not trivially update all these things, though? Like, how big of a deal is it really? Or are you stuck using their specific distribution, which can't be updated to a more recent Ubuntu and Python?
I run Debian (mostly) on it, and it seems to work well for me. Compiling PyTorch is a bit of a nuisance at 4GB, but it worked. One issue I had was that TRTorch wanted Bazel to build, and I couldn't bring myself to install that (and Java), so I made up a quick and dirty CMakeLists.txt and it worked well.
But I must admit I don't see the 2GB model as a better value than the $100 4GB one, especially with shared CPU/GPU.
I booted my Radxa Rock (rk3188) with barebox, Arch Linux ARM and a mainline kernel. It just took me a few hundred hours to figure out all the problems along the way. And that's one of the most open-source-friendly ARM SoCs.
I wish luck to everyone buying these things. You'll need it to run any modern distro in a few years.
I have the 4GB version of the Nano and the 4GB Raspberry Pi. I like the Nano, but I use it mostly as a development machine, and for that I would rank the 8GB Raspberry Pi at about $89 US the best deal. Substantially below that is the Nano, and almost tied with it is the 4GB Pi. There may be cases where you absolutely need an Nvidia machine, in which case I would argue for the 4GB version, but otherwise the 8GB Pi appears to be the winner by a long shot.
Apparently there's a big Software Defined Radio (SDR) community that appreciates the TFLOPS of floating-point compute, cuSignal in particular.
Note that embedded radios / modems / codec manipulation are incredibly common. Usually that's handled by a DSP (a VLIW architecture), but I'd imagine the GPU architecture has more FLOPs. DSPs probably have lower latency, but something like SDR is not very latency-sensitive (or rather: the community models latency as a phase shift and is therefore resilient to many forms of latency).
Note: I'm not actually in any of these communities. I'm just curious and someone else told me about this very same question.
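For what it's worth, the "latency as a phase shift" point is just the DFT shift theorem: delaying a sampled signal by d samples multiplies bin k of its spectrum by exp(-2πi·k·d/N), leaving the magnitudes untouched. A pure-stdlib sketch of that identity (a naive DFT for illustration, not real SDR code):

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for a small illustration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 16
signal = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]  # one tone in bin 3
delay = 2  # a 2-sample delay (circular, so the spectral relation is exact)
delayed = signal[-delay:] + signal[:-delay]

# A delay in time shows up only as a per-bin phase rotation in frequency:
spec = dft(signal)
spec_delayed = dft(delayed)
for k in range(n):
    rotated = spec[k] * cmath.exp(-2j * math.pi * k * delay / n)
    assert abs(rotated - spec_delayed[k]) < 1e-9
print("delay appears only as phase; bin magnitudes are unchanged")
```

That's why a processing chain that only cares about spectral content can tolerate quite a lot of pipeline latency.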
Same as it's always been: gaming. I bet even this $59 computer can play games surprisingly well. Obviously not modern AAA titles or anything like that, but there are many decades' worth of games it can play (including ones originally released for non-PC platforms). There are lots of interesting Pi-powered handhelds and mini-arcade setups running emulated games; this seems like it'd be significantly better at that.
Fancy 3D visualizations on screens of embedded devices. Image processing on camera feeds (both "traditional" and neural-network-based). Video encoding/transcoding. Complex UIs in high resolutions.
(Strictly speaking, any screen with a higher resolution and/or expectations of smooth non-trivial graphics requires some kind of GPU, but I take the question to mean why one would want a powerful one.)
Someone could use this SBC -- not as an SBC, but as a discrete video card...
That is, have a computer without a video card -- communicate with this SBC via USB or Ethernet -- and have it perform all of the functions of a video card to generate the display output...
After all, it's got quite a high power GPU in it...
Yes, it would be a bit slower than a PCIe based high-end graphics card... this idea is for experimentation only.
One idea, once this is done, is to decouple the GUI part of any Operating System you write -- and have that part, and that part alone, run on the SBC / GPU.
Yes, X-Windows/X11 already exists and can do something like this.
But if say, you (as an OS designer) felt that X-Windows was too complex and bloated, and you wanted to implement something similar to Windows' GUI -- but a lighter version, and decoupled from the base Operating System -- then what I've outlined above might be one way to set things up while the coding is occurring...
Again, doing something like this would be for people that wanted to experiment, only...
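The transport half of that experiment is easy to prototype: the SBC side only has to accept framebuffers over a socket and scan them out. A toy sketch over loopback, with a wire format invented purely for illustration:

```python
import socket
import struct
import threading

# Toy protocol, made up for illustration: a frame is an 8-byte header
# (width and height as big-endian uint32) followed by width*height*3 bytes
# of raw RGB pixels. A real setup would add compression and damage
# tracking; this only demonstrates the plumbing.

def recv_exact(conn, n):
    """Read exactly n bytes from the socket, looping over partial reads."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def serve_one_frame(server_sock):
    """The 'GPU box' side: accept one frame; a real server would display it."""
    conn, _ = server_sock.accept()
    with conn:
        w, h = struct.unpack(">II", recv_exact(conn, 8))
        return w, h, recv_exact(conn, w * h * 3)

server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free port
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=lambda: result.update(
    zip(("w", "h", "px"), serve_one_frame(server))))
t.start()

# The "host without a video card" side: push one tiny 4x2 all-white frame.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(struct.pack(">II", 4, 2) + b"\xff" * (4 * 2 * 3))

t.join()
server.close()
print(result["w"], result["h"], len(result["px"]))  # 4 2 24
```

Uncompressed frames make the bandwidth problem obvious (1080p at 60 fps is roughly 370 MB/s, beyond gigabit Ethernet), which is exactly the kind of constraint this sort of experiment would surface.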
For video playback, CSS3 transitions/animations, gaming etc., how much better is the GPU in the Jetson Nano compared to the one in the RPi4? Just to get an idea.
Looks great, and I'm glad the sales numbers have given Nvidia execs the ammunition they needed to keep pushing the price down well into hobbyist range. I'm trying to preorder a couple, but...
org.apache.sling.api.SlingException: Cannot get DefaultSlingScript: java.lang.IllegalStateException: Response has already been committed
I want to set up something like this for my daughter's primary-school meetings using Zoom. Any suggestions? Are the Pis or this device up to par now for such a use case? I totally do not like Chromebooks, and standard laptops are overkill.
I have an Odroid H2 here as my main system: quad-core x86, an M.2 slot for an SSD, user-friendly RAM (SO-DIMM, currently 8GB installed, supports up to 32GB), SATA ports, and it runs fanless at around 40 degrees Celsius most of the time. Lovely bit of kit.
This version comes with a USB wifi dongle in the box in North America and Europe now. I guess they had an issue with licensing chipsets in all regions.
It may not have the horsepower to do the learning itself, but it takes very little computing power to run a realized model that was made with machine learning.
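That asymmetry is easy to see in miniature: once training (done elsewhere) has produced weights, inference is just a few multiply-adds. A toy logistic scorer with made-up, pre-baked weights, pure stdlib:

```python
import math

# Weights as they might come out of an offline training run on a big
# machine; the values here are made up purely for illustration.
WEIGHTS = [0.8, -0.4, 0.15]
BIAS = -0.1

def predict(features):
    """Inference is one dot product plus a sigmoid -- no training loop needed."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

score = predict([1.0, 2.0, 0.5])
print(round(score, 3))
```

Real networks have far more weights, of course, but the shape of the work is the same: a forward pass is a fixed sequence of arithmetic, with none of the gradient computation that makes training expensive.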
You can use a smartphone for that. There are options other than Google for hotword detection, but you could hook into Assistant, as well. Bit of a pain in the ass, but it works.
New Shield versions are released every two years (2015, 2017, and most recently 2019), so it's unlikely a new version comes out this year. I'm still using the 2015 version, as the 2019 one is not much of an upgrade.
jcelerier | 5 years ago:

> Good for what it is, bad for compiling large numbers of packages.

At worst you can [...] on your desktop and compile your stuff there.
m463 | 5 years ago:

1) Ubuntu LTS - stable releases - I'm a server person, I want nothing to change, ever.

2) Arch Linux - rolling releases - I'm a desktop person, I want the latest of everything, now.

Anything between these two extremes kind of sucks, both as a developer and as a user.

Advantages:

1: "Q: I want to disable snapd on ubuntu 18.04, how do I do it?"
"A: <20 detailed posts of how to do it>"

1: "user: something isn't working"
"developer: clearly your fault, I haven't changed anything in that project in 4 months"

2: "Q: I want to disable snapd on arch linux, how do I do it?"
"A: why did you install it? look on the wiki."

2: "user: something isn't working"
"developer: everything is up-to-date, please do pacman -Syu and read the wiki"
pikrzyszto | 5 years ago:

    apt-get install python3.7

or

    apt-get install python3.8
amelius | 5 years ago:
Instead they should release drivers that work with any Linux distribution (for example Debian, which is now broken because of USB issues).
akiselev | 5 years ago:

Edit: it's live now! [1][2]

[1] https://www.seeedstudio.com/NVIDIA-Jetson-Nano-2GB-Developer...
[2] https://www.sparkfun.com/products/17244
rektide | 5 years ago:

Double amusing: there is now an ARM core called the X1, a heavyweight follow-up to the A76 for bigger form factors. [1]

[1] https://en.wikipedia.org/wiki/Tegra#Tegra_X1
Shared404 | 5 years ago:

Is this enough power to do voice recognition? I'm sorry if this is a stupid question; I haven't done anything with ML before.