Getting Started with the NVIDIA Jetson Nano Developer Kit

Getting started with NVIDIA’s GPU-based hardware.

Alasdair Allan
6 years ago

Over the last year or two there has been a flood of custom silicon intended to speed up machine learning on the edge. First to arrive was Intel with their Movidius-based hardware, and more recently we’ve seen the appearance of Google’s Edge TPU-based hardware. However, NVIDIA’s traditional offering in this space, built as it was around their GPU technology, has been higher powered and comparatively expensive.

However, with everyone moving towards the edge it is perhaps unsurprising to see them introduce something a bit more affordable. So last month we saw the arrival of the Jetson Nano Module and Developer Kit.

Still based around their existing GPU technology, the new Jetson Nano is “upwards compatible” with the much more expensive Jetson TX2 and AGX Xavier boards.

Opening the Box

The Jetson Nano Developer Kit arrives in yet another unassuming box. Inside is the carrier board itself, with the Jetson Nano module and a heatsink already fitted. Also in the box is a small leaflet pointing you at the getting started instructions and letting you know which ports should be used to power the board, and which are for the monitor, keyboard, and mouse.

Finally there’s the somewhat surprising addition of a paper stand, which I immediately threw away. If I want to lift the board off the desk I’ll just add some rubber bumper feet; that feels rather safer than a paper stand.

At 80 × 100 × 28 mm and weighing 140g, the Jetson Nano Developer Kit isn’t a small board. In fact, I’d go as far as saying that for a product intended to be deployed for edge computing, it’s rather oversized.

The Jetson Nano module itself, just visible underneath the heatsink, is neat and reasonably sized. However the heatsink mounted on top of the module, which is hopefully over-specified rather than strictly necessary, is large and ungainly.

Although not included in the retail box, an optional fan came with my pre-release hardware when NVIDIA shipped it to me for review. That’s not entirely reassuring.

Gathering Supplies

Unlike both the Google and Intel hardware, it doesn’t seem possible to set up the Jetson Nano without a monitor, keyboard, and mouse.

I very rarely use a board in anything but headless mode these days. So, feeling rather like it was ten years ago and I was walking to the other side of campus to the data center to use my rack’s KVM switch after things had gone really wrong, I went ahead and dug out the portable monitor, keyboard, and mouse I use when everything goes really wrong with one of my Raspberry Pi projects.

As well as a monitor, keyboard, and mouse, you’re going to need a USB power supply and a USB-A to micro USB cable to connect it to the Jetson Nano, as well as an HDMI cable to connect the Jetson Nano to your monitor. Predictably, you’ll also need a micro SD card, with a minimum size of 32GB.

⚠️Warning While the OS image and other installs comfortably fit on a 16GB card, using that small a card will usually result in the root filesystem filling up and ‘no space left on device’ errors during inferencing.
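If you do hit those errors later, a quick df on the board will confirm whether the root filesystem really is full:

$ df -h /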

Perhaps surprisingly, there is no onboard Wi-Fi support on the Jetson Nano carrier board, so you’ll also have to dig out an Ethernet cable and hope you can position the board close enough to your router to connect it to your LAN. Alternatively, you can grab a USB Wi-Fi dongle and hope you can get NVIDIA’s L4T distribution to support it.

Compared to the direct competition, that’s a lot of supplies just to get started.

Powering the Board

NVIDIA recommend that the Jetson Nano Dev Kit be powered using a 5V micro USB power supply capable of supplying 2A to 3.5A, with a further recommendation to use a 4A supply if you “…are running benchmarks or a heavy workload.” That’s a potentially rather problematic specification requirement for the power supply.

Considering how hard it is to find a reliable 2.5A supply for the Raspberry Pi, with the official Raspberry Pi USB power supply being one of the few good options, I’m rather unsure where you’d actually go about sourcing a workable 3.5A USB power supply. Especially since providing 3.5A over micro USB is, as far as I’m aware, well outside the USB specification.

Fortunately the board has a barrel jack as well, and you can switch between micro USB and the barrel jack using a jumper. This means you can power it using a ‘normal’ DC supply if you’re getting under-voltage issues with your USB power supply.

Getting the Operating System

The first thing you’ll need to do is grab the Jetson Nano Developer Kit card image. It weighs in at around 5.64GB all zipped up, which is about five times larger than your average Raspbian download.

So, maybe make a cup of coffee?
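Before flashing, it’s also worth checking the archive made it down intact; if NVIDIA publish a checksum on the download page, you can compare it against a locally computed one:

$ shasum -a 256 jetson-nano-sd-r32.1-2019-03-18.zip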

For burning card images these days I’d generally recommend Etcher, made by the folks at Balena. It’s cross platform, working on Windows, Linux, and macOS, and lets you burn an image in four clicks.

However, if you’re a command line person like me, you can either download and install the now-deprecated experimental Etcher command line tools, or you can still go ahead and do it the old-fashioned way.

The instructions here are for the Mac, because that’s what I have on my desk, but instructions for Linux are similar.

Go ahead and insert the micro SD card into the adaptor, and then the card and adaptor into your MacBook. Then open up a Terminal window and type df -h, and check the device name for your SD card. In my case it’s /dev/disk1, and I’ll need to use the corresponding raw device, /dev/rdisk1, when writing to the card.
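If the df -h output leaves you unsure which disk is the card, diskutil gives a more explicit listing of the disks attached to your Mac:

$ diskutil list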

Go ahead and unmount the card from the command line,

$ sudo diskutil unmount /dev/disk1s1

rather than ejecting it by dragging it to the trash. From there we can go ahead and unzip the image, and write it to our SD card.

$ unzip jetson-nano-sd-r32.1-2019-03-18.zip
$ sudo dd bs=1m if=jetson-nano-sd-r32.1-2019-03-18.img of=/dev/rdisk1

If the above command reports an error dd: bs: illegal numeric value, change bs=1m to bs=1M. The image’s boot partition should be automatically remounted after dd has finished writing the image.
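dd gives no progress output by default. On the Mac you can press Ctrl+T in the Terminal window to send SIGINFO and see how many bytes have been written so far; on Linux, GNU dd accepts a progress flag, something like the following, where /dev/sdX is a placeholder for your card’s device:

$ sudo dd bs=1M if=jetson-nano-sd-r32.1-2019-03-18.img of=/dev/sdX status=progress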

Setting Up the Jetson Nano

Plug the board into your monitor, keyboard, and mouse, then go ahead and slot the micro SD card into the slot on the underside of the Jetson Nano module.

The micro SD card slot is on the Jetson Nano module itself, not on the carrier board; look between the module and the carrier board to find it.

Finally go ahead and connect it to your power supply and power on the board. The green LED next to the micro USB connection should light up as soon as you power it on, and after it finishes booting you’ll be walked through some initial setup.

⚠️Warning The Jetson Nano appears to be particularly fussy about its HDMI connection, and wouldn’t talk to the three older monitors I keep around for debugging. This is presumably down to HDMI handshaking problems with older hardware. It’s not a problem I’ve seen on a single-board computer before. The getting started instructions do include a note saying that, “HDMI to DVI adaptors are not supported. Please use a display that accepts HDMI or DP input.”

You’ll first have to accept the software EULA, then you get to select your preferred system language, the layout of your keyboard, and your current time zone. Finally you’ll be asked to create a user, which will have administrator privileges, and choose a hostname. For simplicity I created a user called “jetson” and similarly picked jetson as my hostname.

Connecting the Board to Your Wireless Network

It turns out that NVIDIA’s L4T distribution has poor support for USB Wi-Fi adaptors, and most of the adaptors I’ve got on my desk don’t work with it.

The list of unsupported adaptors includes the really common RT5370-based dongles, along with the official Raspberry Pi Wi-Fi adaptor, which uses a Broadcom chipset. However, after rummaging around in my spares box I managed to find a much less common RTL8188CUS-based dongle, in my case an Edimax EW-7811Un, which fortunately is supported.

Go ahead and open up a terminal by right clicking on the Desktop and selecting “Open Terminal” from the drop down menu.

You can usually tell what chipset your dongle is based around by looking at the output of the lsusb command after plugging the adaptor into the board. So in our new terminal window type,

$ lsusb
Bus 002 Device 002: ID 0bda:0411 Realtek Semiconductor Corp.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 011: ID 7392:7811 Edimax Technology Co., Ltd EW-7811Un 802.11n Wireless Adapter [Realtek RTL8188CUS]
Bus 001 Device 004: ID 248a:8367
Bus 001 Device 003: ID 045e:07fd Microsoft Corp. Nano Transceiver 1.1
Bus 001 Device 002: ID 0bda:5411 Realtek Semiconductor Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$

Here you can see the adaptors for my wireless keyboard and mouse, along with my RTL8188-based wireless adaptor. You can also check that an appropriate kernel module has been loaded using the lsmod command.

$ lsmod | grep rt
rtl8xxxu 115372 0
rtl8192cu 85379 0
rtl_usb 14074 1 rtl8192cu
rtl8192c_common 54245 1 rtl8192cu
rtlwifi 88873 3 rtl_usb,rtl8192c_common,rtl8192cu
btrtl 7318 1 btusb
rt2800usb 22944 0
rt2x00usb 12492 1 rt2800usb
rt2800lib 80870 1 rt2800usb
rt2x00lib 61822 3 rt2800lib,rt2800usb,rt2x00usb
mac80211 719792 7 rt2800lib,rt2x00lib,rt2x00usb,rtl_usb,rtlwifi,rtl8192cu,rtl8xxxu
cfg80211 589351 3 rt2x00lib,mac80211,rtlwifi
$

Support for my RTL8188 dongle is provided by the rtl8192cu driver. Finally, you can check that the adaptor has been correctly detected using the nmcli command, and then connect it to your wireless network.

$ nmcli 
wlan0: disconnected
"Realtek 802.11n WLAN Adapter"
wifi (rtl8192cu), 74:DA:38:58:6F:0F, hw, mtu 1500

$ sudo nmcli dev wifi connect MY_SSID password MY_PASSWORD ifname wlan0
Device 'wlan0' successfully activated with 'e08e5ecf-b2a5-4b32-ac2a-44b6754867f8'.
$

After activation, we can check the status of our connection.

$ nmcli connection show
NAME UUID TYPE DEVICE
MY_SSID e08e5ecf-b2a5-4b32-ac2a-44b6754867f8 wifi wlan0
Wired connection 1 d0d5acde-fcfe-3d3a-bc1d-584df2b7bfb2 ethernet eth0
l4tbr0 d0cb0095-06cf-4528-b9f3-18a8cf740122 bridge l4tbr0
$

While the board should pick up an IP address from your DHCP server on wlan0 at this point, some of the time it doesn’t, and that can be for a variety of reasons, so it’s probably quickest just to reboot. You can then find the wireless address using the ip command.

$ ip addr | grep wlan0
7: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
inet 192.168.1.118/24 brd 192.168.1.255 scope global dynamic noprefixroute wlan0

Your Jetson Nano is now connected to your wireless network, and you should be able to unplug your Ethernet cable.
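If all you need is a shell rather than a desktop, you don’t need to go any further; the board runs an SSH server out of the box, so you can log in over the network using the user you created during setup and the board’s IP address:

$ ssh jetson@192.168.1.118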

Enabling Desktop Sharing

Unfortunately the instructions helpfully left on the Jetson’s desktop explaining how to enable the installed VNC server from the command line don’t work, and opening the Settings application on the desktop and clicking on “Desktop Sharing” also fails, as the Settings app silently crashes. The problem appears to be down to an incompatibility with the older Gnome desktop.

There are a number of ways you can approach this problem; the easiest route is a mix of command line and graphical fixes. The first thing you need to do is edit the org.gnome.Vino schema to restore the missing enabled parameter.

Open the schema in your favourite editor,

$ sudo vi /usr/share/glib-2.0/schemas/org.gnome.Vino.gschema.xml

and go ahead and add the following key into the XML file.

<key name='enabled' type='b'>
  <summary>Enable remote access to the desktop</summary>
  <description>
    If true, allows remote access to the desktop via the RFB
    protocol. Users on remote machines may then connect to the
    desktop using a VNC viewer.
  </description>
  <default>false</default>
</key>

Then compile the Gnome schemas with the glib-compile-schemas command.

$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas

This quick hack should stop the “Desktop Sharing” panel crashing, allowing you to open it. So go ahead and click on the “Settings” icon, and then the “Desktop Sharing” icon which is in the top row.

Tick the “Allow other users to view your desktop” and “Allow other users to control your desktop” checkboxes. Then make sure “You must confirm each access to this machine” is turned off. Finally, tick the “Require the user to enter this password” checkbox, and enter a password for the VNC session.

Close the “Settings” panel, then click on the green icon at the top left of your screen to open the “Search” panel. Type ‘startup applications’ into the search box that appears at the top of the screen.

Click on the application to open the “Startup Applications Preferences” panel. Here we can add VNC to the list of applications that are automatically started when you login to the computer.

Click “Add” at the right of the box, then type ‘Vino’ in the name box, and in the command box enter /usr/lib/vino/vino-server. Finally you can add a comment, perhaps ‘VNC Server.’ Click “Save” at the bottom right of the box, and then close the app.
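If you’d rather avoid the flaky Settings app entirely, the same sharing options can, in principle, be set from the terminal with gsettings once the schema has been recompiled. These are standard Vino keys, though I haven’t verified every one of them against L4T’s build:

$ gsettings set org.gnome.Vino enabled true
$ gsettings set org.gnome.Vino authentication-methods "['vnc']"
$ gsettings set org.gnome.Vino vnc-password "$(echo -n 'mypassword' | base64)"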

⚠️Warning The only encryption supported by Vino is the TLS security type (18), which is not supported by most viewers, including popular ones such as TigerVNC, TightVNC, and RealVNC. This incompatibility is a known problem, has been around for more than five years, and probably isn’t going away any time soon.

Finally, open a Terminal; somewhat regrettably, you’ll probably need to disable encryption of the VNC connection to get things working.

$ gsettings set org.gnome.Vino require-encryption false
$ gsettings set org.gnome.Vino prompt-enabled false
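You can confirm the settings stuck by reading the keys back:

$ gsettings get org.gnome.Vino require-encryption
false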

You should now go ahead and reboot the board, and after the reboot, log back into your account. VNC should now be running and serving the desktop.

You can check this from your laptop by using the nmap command.

$ nmap jetson
Starting Nmap 7.70 ( https://nmap.org ) at 2019-04-13 01:11 BST
Nmap scan report for jetson (192.168.1.118)
Host is up (0.0030s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
5900/tcp open vnc
Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds
$

If you don’t already have a VNC viewer installed on your laptop, RealVNC offers a VNC Viewer application for Windows, Linux, and macOS, as well as a number of other platforms. So go ahead and download the application and install it on your laptop.

Once installed open a VNC connection to jetson. You’ll be warned about the lack of encryption on the VNC session, and then prompted for the password.

If everything has worked, you should see the desktop in a window. That’s it. You’re now connected to your Jetson Nano over VNC.

Enabling Remote Desktop

Unfortunately the VNC server will only be running while a user is logged into the Jetson Nano on the console. If you log out, the server will be stopped. You can’t just unplug your monitor, keyboard, or mouse and run the board in headless mode.

If you want to do that, the easiest way is probably going to be running an RDP server called xrdp. Installation is a lot simpler than setting up VNC.

$ sudo apt-get install xrdp
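Once the package is installed you can check that the xrdp service has come up; L4T is built on Ubuntu 18.04, so systemd is managing services:

$ systemctl status xrdp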

After installation has completed, you should go ahead and reboot the Jetson Nano board. Once the reboot has finished you can check that the xrdp installation was successful using the nmap command from your laptop.

$ nmap jetson
Starting Nmap 7.70 ( https://nmap.org ) at 2019-04-13 01:39 BST
Nmap scan report for jetson (192.168.1.118)
Host is up (0.0025s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
3389/tcp open ms-wbt-server
Nmap done: 1 IP address (1 host up) scanned in 1.25 seconds
$

As you can see, since we’re not logged in our VNC server has been shut down; however, the RDP server is running despite us currently being at the login screen on the physical machine.

While RDP is a proprietary protocol, Microsoft do provide free viewers for most platforms, including one for the Mac, which is available in the Mac App Store.

You should go ahead and install it.

Open Microsoft Remote Desktop and click on “Add Desktop.”

Once you’ve configured the settings to your liking (you might want to turn off “Start session in full screen,” for instance, and set a reasonable resolution for the resulting windowed desktop), click “Save” and then open the RDP desktop by clicking on the “Jetson Nano” desktop icon.

When you’re connected to the board using RDP the desktop will look somewhat different. That’s because you’ll be seeing a standard Ubuntu desktop running Gnome, rather than the older Unity-style desktop that is the default on L4T.

⚠️Warning You cannot be logged in at the physical desktop and open an RDP desktop at the same time; conversely, if you have an RDP desktop already open you won’t be able to log in at the physical desktop. If you have an RDP desktop open and attempt to connect to the Jetson Nano using VNC, you will be connected to the RDP session.

If you’re used to VNC, bear in mind the differences when using Remote Desktop. You’re not viewing the existing Jetson Nano desktop; you’re creating another. That virtual desktop is persistent until you log out, just as if you were sitting in front of a physical keyboard. Simply closing the RDP window and walking away doesn’t close the desktop or log you out: the next time you connect to the RDP server on your Jetson Nano, the desktop will look just as you left it.

Finally, if you’ve already gotten rid of your Ethernet cable and are relying on wireless networking to connect to the Jetson Nano, you should open up the new Gnome “Settings” application, select “Power” from the menu on the left-hand side, and make sure that “Turn off Wi-Fi to save power” is turned off.

The Jetson Nano can now be run in headless mode. You can unplug your monitor, keyboard, and mouse; you won’t be needing them any more.

Setting Up NVIDIA TensorRT

Unlike the Coral Dev Board, which comes with a pretty slick demo application pre-installed that starts a web server showing a video stream of freeway traffic with real-time, on-board inferencing overlaid on top, the Jetson Nano doesn’t ship with any demo applications in the default image.

However, there is an extensive first demo called “Hello AI World” that you can download from the project’s GitHub repo. Before we can take a look, though, the first thing you need to do is make sure that cmake and git are installed.

$ sudo apt-get install cmake
$ sudo apt-get install git

Then we can clone the repo and configure its submodules.

$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ git submodule update --init

Then build the source,

$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install

which will take a while. Perhaps time for a coffee?

Running Your First Machine Learning Models

Now that we have our demo applications built, we can run our first model. If you change directory to the build directory, you can run the included detectnet-console demo application. This accepts an image as input and outputs a list of coordinates for the detected bounding boxes. You’ll need to specify a pre-trained model as the third argument.

$ cd ~/jetson-inference/build/aarch64/bin
$ ./detectnet-console ~/dog.jpg out.jpg coco-dog

⚠️Warning The first time you run the demo applications, TensorRT may take “up to a few minutes” to optimise the network. The optimised network file is cached to disk, so things will run faster next time, but they aren’t kidding: the first time you run a model you may well assume that the code has hung and it’s not working. I know I did, because it’s rather more than “a few minutes.”

Throwing an image of my dog playing fetch on the beach at a DetectNet Caffe model trained on the COCO (“Common Objects in Context”) dataset, I end up with a pretty solid detection and an okay bounding box.

1 bounding boxes detected
detected obj 0 class #0 (dog) confidence=0.710427
bounding box 0 (334.687500, 488.742188) (1868.737549, 2298.121826) w=1534.050049 h=1809.379639

Considering he’s more or less facing the camera and is soaking wet from his recent dip in the sea, I’m actually reasonably satisfied with the output here.

There are several other pre-trained models shipped with the Jetson Nano, including ones trained to detect multiple humans, and luggage, in an image.

I went ahead and threw the image of me taken at CES, the same one I used with the Intel Neural Compute Stick 2, at the pre-trained network.

$ ./detectnet-console me.jpg out.jpg facenet

Interestingly, the bounding box was slightly different from what we’ve seen before, and the confidence in the detection was much lower. But it’s still a solid detection.

1 bounding boxes detected
detected obj 0 class #0 (face) confidence=0.604924
bounding box 0 (336.479156, 32.527779) (441.888885, 199.777786) w=105.409729 h=167.250000

The differences are going to be down to the model tuning rather than any other factor. My guess is that the demonstration models included with the Jetson Nano aren’t well tuned.

They aren’t, in other words, production quality models.

Given how big the heatsink is on the Jetson Nano, I was curious to see how hot it gets now that we’ve been exercising it somewhat. So I grabbed my laser infrared thermometer and checked.

The heatsink was reaching temperatures of around 50°C (122°F) after the board had been running for a while. This is similar to what we saw with Google’s Coral Dev Board.
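If you don’t have an infrared thermometer to hand, the module also exposes its on-die temperature sensors through sysfs, reported in thousandths of a degree Celsius:

$ cat /sys/class/thermal/thermal_zone*/temp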

When Is a Banana Not a Banana?

At this point I wanted to throw the same lunchtime fruit image at the NVIDIA hardware as I did at Google’s Coral hardware. This time we’ll be using the imagenet-console demo application.

$ cd ~/jetson-inference/build/aarch64/bin
$ ./imagenet-console ~/fruit.jpg out.jpg

While it detected the banana it didn’t really do a great job of it.

So I threw a few more images of different fruit at it and got similar, and in some cases much poorer, results. The sample networks perform well with the included imagery, but very poorly, disastrously so in some cases, with real world imagery. The explanation here is that this model has been trained to classify images, not detect objects; that’s a very different job. It’s notable that the included fruit-related imagery all has a white or neutral background. Effectively the network is saying “…this is an image of a banana” rather than “…this is where the banana is in the image.”

Always read the documentation, it’s there to help.

Writing Your Own Object Recognition Code

Writing your own object recognition code isn’t actually that hard, and even in C++ it can be done in a fairly compact manner if you’re not trying to do anything complicated around the task of classification.

You can grab the code and the associated build file from the command line using wget, and then build it as follows.

$ cd ~
$ mkdir object_recognition
$ cd object_recognition
$ wget https://gist.githubusercontent.com/aallan/4de3a74676d4ff10a476c2d6c20b9255/raw/818eb292805520a9fc01aaaee2f7a5692cdf1f92/object_recognition.cpp
$ wget https://gist.githubusercontent.com/aallan/9945105f8ae2aed47d96e23adb8dddc1/raw/fef4e1249de9f4be6763e40cfcd8e1a7b92a40d4/CMakeLists.txt
$ cmake .
$ make

The code will load the same GoogleNet model as the imagenet-console demo application we looked at earlier, but this time will only output the certainty and classification to the console.
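Assuming the CMakeLists.txt names the binary object_recognition, and with a stand-in path for your own image, running it looks something like this:

$ ./object_recognition polar-bear.jpg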

I ran it against a set of images of a polar bear I took in Finland, and got pretty good classification results for most of the images even given the shadows.

class 0150 - 0.010576  (sea lion)
class 0181 - 0.020546 (Bedlington terrier)
class 0182 - 0.012807 (Border terrier)
class 0279 - 0.010743 (Arctic fox, white fox, Alopex lagopus)
class 0294 - 0.014232 (brown bear, bruin, Ursus arctos)
class 0296 - 0.741458 (ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus)
class 0360 - 0.089249 (otter)
image is recognized as 'ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus' (class #296) with 74.145813% confidence

Although some images did return poor results, most of these were of the polar bear cubs. Again, this is probably down to model tuning, and to the initial training data of the model.

Python Support for the Jetson Nano?

If you’re interested in working with the Jetson Nano in C++, I would heavily recommend looking at NVIDIA’s “Hello AI World” tutorial. For myself, though, I’d much rather work in Python. Fortunately there is an official TensorFlow release for the Jetson Nano. Unfortunately, if somewhat predictably, here we run into an issue with the installation instructions.

Never replace the system pip using pip, it’s an act of vandalism.

Avoiding the obvious pitfalls, installation isn’t that bad, just rather lengthy. However, first of all, you’ll need to install some dependencies.

$ sudo apt-get install libhdf5-serial-dev hdf5-tools
$ sudo apt-get install python3-pip
$ sudo apt-get install zlib1g-dev zip libjpeg8-dev libhdf5-dev
$ pip3 install -U numpy grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast h5py astor termcolor

After these are all installed, only then should you go ahead and install TensorFlow. While you can install specific versions if you want, the following should install the latest available release.

$ pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu

You can verify that TensorFlow has been installed correctly by dropping into Python and trying to import the TensorFlow module,

$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>>

if you don’t see any errors then TensorFlow has been installed correctly.
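It’s also worth confirming that TensorFlow can actually see the GPU, rather than silently falling back to the CPU. With the TensorFlow 1.x builds NVIDIA ships for the Jetson, this one-liner should print True:

$ python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"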

These Go to Eleven?

You might have noticed that you can power the Jetson Nano using either the micro USB connector or the barrel jack by jumpering J48. Closing J48 switches the power input from the micro USB jack to the barrel jack.

If you power the board from the barrel jack using a 4A power supply, you can enable the “maximum performance” mode, and things should run faster.

$ sudo nvpmodel -m 0

But I guess you only do that if you need that extra push over the cliff.
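You can check which power mode is currently active using the tool’s query flag, and drop back down with another -m call; on the Nano, mode 0 is the 10W maximum performance profile and mode 1 is the 5W profile:

$ sudo nvpmodel -q
$ sudo nvpmodel -m 1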

⚠️Warning Don’t enable “maximum performance” mode unless you’ve switched the board to use the barrel jack and have a sufficiently solid supply. It’s very unlikely that you’ll find a micro USB supply that won’t cause under-voltage events if you enable maximum performance on the board.

Summary

NVIDIA made setup a lot harder than necessary. For a piece of hardware supposedly intended for the edge, it’s rather strange that you need a desk to put a monitor, keyboard, and mouse on to get the board set up. I’m guessing that speaks to NVIDIA’s heritage; they’re far more used to building desktop computers and racked servers than embedded devices.

However, that isn’t any excuse for the awful state of their Linux distribution. The amount of pain it caused, both during setup and while getting things done, was excessive. I don’t think I’ve struggled this hard in a good few years to make a Linux distribution do what I want it to do.

The Jetson documentation isn’t in as bad a state as the documentation for the Intel Neural Compute Stick, but it is somewhat sprawling and it’s hard to keep track of which bit you should be reading. However, some of it is excellent.

There are several good collections of resources, and I really can’t recommend their deep vision tutorial, “Hello AI World,” highly enough. If you want to use C++ with the Jetson Nano, it’s an amazing place to start.

But, just like the Intel Neural Compute Stick, the out-of-the-box experience with the NVIDIA hardware is C++ rather than Python. While I can understand their reasoning, and the history that’s led them there, I think it was a poor decision on their part. They should have led with an environment that’s familiar to machine learning developers and data scientists and, for machine learning, that’s Python, TensorFlow, and Jupyter notebooks.
