RTP stream from Jetson Nano CSI Camera to macOS

Would you like to stream from the CSI camera (Jetson Nano) to your macOS? No problem, it’s super easy! In this tutorial I will show you two ways to display the RTP video stream on macOS. Both options require only a few steps, and which one you choose is up to you.

Requirements

  • Both devices (Jetson Nano & macOS) must be on the same network
  • SSH connection from macOS to Jetson Nano is available
  • You know the IP or the domain name (.local) of the macOS
  • Firewall on macOS is disabled (for simplicity)
  • The CSI camera on the Jetson Nano is operational

Use of ffplay

Download ffplay for macOS from the official provider site. Unzip the archive and move the binary to the target directory.

# move binary to target directory
$ mv ffplay /usr/local/bin/ffplay

# test ffplay version (optional)
$ ffplay -version
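
If macOS Gatekeeper blocks the downloaded binary, removing the quarantine attribute usually helps:

# remove quarantine attribute (only if execution is blocked)
$ xattr -d com.apple.quarantine /usr/local/bin/ffplay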

Now create an SDP file for ffplay (for example directly on your desktop).

SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=NanoStream
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 56.15.102
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000

Note: Pay attention to the port specification (m=video 1234 RTP/AVP 96). If this port is already occupied on your system, you must change the value and adapt the following steps accordingly.
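
On macOS you can check beforehand whether the UDP port is already in use:

# check whether UDP port 1234 is in use (optional)
$ sudo lsof -nP -iUDP:1234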

Connect (via SSH) to the Jetson Nano and start the video stream from the camera there.

# start video stream from csi camera
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080' ! nvvidconv flip-method=2 ! nvv4l2h264enc insert-sps-pps=true ! h264parse ! rtph264pay pt=96 ! udpsink host=(IP or Domain from macOS) port=1234 sync=false -e

If there are no problems, leave this terminal session running and open another terminal on the macOS.

# start ffplay
$ ffplay -protocol_whitelist "file,udp,rtp" -i ~/Desktop/ffplay.sdp

After confirming the command, a window should open after a few seconds and you will see the video stream.
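
If the stream stutters or lags, reducing ffplay’s buffering can help (standard ffmpeg options):

# start ffplay with reduced buffering
$ ffplay -protocol_whitelist "file,udp,rtp" -fflags nobuffer -flags low_delay -i ~/Desktop/ffplay.sdp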

Use of VLC

Download the latest VLC player from the official VLC website and install it. Optionally, you can test whether everything works. Then create an SDP file for the VLC player as well.

c=IN IP4 127.0.0.1
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000

Connect (via SSH) to the Jetson Nano and start the video stream from the camera there.

# start video stream from csi camera
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080' ! nvvidconv flip-method=2 ! nvv4l2h264enc insert-sps-pps=true ! h264parse ! rtph264pay pt=96 ! udpsink host=(IP or Domain from macOS) port=1234 sync=false -e

Note: Depending on your camera position, you can change the value of flip-method=2 (valid values are 0 to 7).

If there are no problems, leave this terminal session running and double-click the SDP file for the VLC player. Here, too, you should be able to watch the stream after a few seconds.
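
Alternatively, you can start VLC from the terminal (default application path assumed; replace the SDP file name with yours):

# open the stream via terminal (optional)
$ /Applications/VLC.app/Contents/MacOS/VLC ~/Desktop/vlc.sdp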

Notice

If you have already cloned/installed the repository dusty-nv/jetson-inference, you can view the different video streams in the same way on macOS!

# video viewer
$ video-viewer --bitrate=1000000 --output-codec=h264 csi://0 rtp://(IP or Domain from macOS):1234 --headless

# pose estimation
$ posenet --bitrate=1000000 --output-codec=h264 csi://0 rtp://(IP or Domain from macOS):1234 --headless

# image detection
$ imagenet --bitrate=1000000 --output-codec=h264 csi://0 rtp://(IP or Domain from macOS):1234 --headless

# object detection
$ detectnet --bitrate=1000000 --output-codec=h264 csi://0 rtp://(IP or Domain from macOS):1234 --headless

Hint: Don’t forget to re-enable the firewall (on your macOS) after you’re done!
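
A way to do this from the terminal (socketfilterfw ships with macOS):

# re-enable the macOS application firewall
$ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on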

Python, cmake and dlib on macOS

There is a very easy way to use dlib on macOS. Additional packages and package managers (for example Homebrew) are not needed. Here are the instructions, which you can follow independently in a few minutes.

Requirements

Steps

Download the latest CMake version as *.dmg or *.tar.gz from cmake.org. After the app has been successfully installed, you can start it (but this is not necessary).

CMake.app on macOS

Inside the CMake application you will find the needed binary. All you have to do now is create a symlink and optionally check the version.

# create symlink
$ sudo ln -s /Applications/CMake.app/Contents/bin/cmake /usr/local/bin/cmake

# check version (optional)
$ cmake --version

Note: Close and reopen the terminal if cmake is not found.
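
Alternatively, the CMake app can create the command-line links itself (its “How to Install For Command Line Use” dialog suggests this; default installation path assumed):

# alternative: let CMake create the symlinks
$ sudo /Applications/CMake.app/Contents/bin/cmake-gui --install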

If you now want to install dlib in your Python project (in the virtual environment), there will no longer be an error message. Note that the installation may take some time.

# install dlib
(venv) $ pip3 install dlib

# verify dlib installation (optional)
(venv) $ pip3 freeze
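
To make sure the module actually imports, a quick smoke test (assuming the virtual environment is still active):

# import dlib and print its version (optional)
(venv) $ python3 -c "import dlib; print(dlib.__version__)"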

That’s it. Super easy and super fast, without bloating the entire system with unnecessary packages.

RPLidar A1 with ROS Melodic on Ubuntu 18.04

If you are (for example) an owner of the NVIDIA Jetson Nano Developer Kit and an RPLidar, you can use ROS Melodic to realize obstacle avoidance or simultaneous localization and mapping (SLAM). Here is a beginner’s tutorial for installing the necessary software.

Requirements

Objective

Install ROS Melodic on Ubuntu 18.04 with a Catkin workspace and use Python 2.7 with the RPLIDAR A1.

Preparation

If you already have the necessary packages installed, you can skip this step.

# update & upgrade
$ sudo apt update && sudo apt upgrade -y

# install needed packages
$ sudo apt install -y git build-essential

Install VirtualBox Guest Additions (optional)

If you run Ubuntu in VirtualBox, it is recommended to install the Extension Pack and the VirtualBox Guest Additions.

# install needed packages
$ sudo apt install -y dkms linux-headers-$(uname -r)

# Menu -> Devices -> Insert Guest Additions Image

# reboot system
$ sudo reboot

Install ROS Melodic

For Ubuntu 18.04 you need ROS 1 (Robot Operating System) Melodic, the latest ROS 1 version for this Ubuntu release. The packages are not included in the standard sources.

Hint: If you want to use a newer version of Ubuntu (like 20.04), you need ROS Noetic instead!

Note: By default, ROS Melodic uses Python 2.7. The following description does not work with higher Python versions! It would also be possible to use Python 3.x, but the installation steps are slightly different.

Add ROS repository

The ROS Melodic installation is relatively easy but can take a while depending on the internet connection, as there are many packages to be installed.
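
The commands for adding the repository and its key are not shown here; a sketch following the standard instructions from the ROS wiki looks like this:

# add ROS package repository
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

# add repository key
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654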

# update and install packages
$ sudo apt update && sudo apt install -y ros-melodic-desktop-full python-rosdep python-rosinstall python-rosinstall-generator python-wstool

# verify installation (optional)
$ ls -la /opt/ros/melodic/

# add bashrc source
$ echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
$ source ~/.bashrc

# initialize & update
$ sudo rosdep init && rosdep update

# show ros environment variables (optional)
$ printenv | grep ROS

Create a Catkin workspace

You now need a Catkin workspace with the necessary RPLIDAR packages/binaries (the RPLIDAR SDK from Slamtec).

# create and change into directory
$ mkdir -p $HOME/catkin_ws/src && cd $HOME/catkin_ws/src

# clone Git repository
$ git clone https://github.com/Slamtec/rplidar_ros.git

# change directory
$ cd $HOME/catkin_ws/

# build workspace
$ catkin_make

# verify directory content (optional)
$ ls -la devel/

# refresh environment variables
$ source devel/setup.bash

# verify path variable (optional)
$ echo $ROS_PACKAGE_PATH

# build node
$ catkin_make rplidarNode

Okay, that’s it… Wasn’t very difficult, but a lot of steps.

Start ROS Melodic

Now connect the RPLIDAR device. If you use VirtualBox, pass through the USB from host to guest.

# list USB device and verify permissions
$ ls -la /dev | grep ttyUSB

# change permissions
$ sudo chmod 0666 /dev/ttyUSB0
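
Note: The chmod change does not survive a reboot. A more permanent alternative (assuming your user may access serial devices) is to add your user to the dialout group and log in again.

# permanent alternative: add user to dialout group
$ sudo usermod -aG dialout $USER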

From now on you need two terminals! Once you have successfully started roscore in the first terminal, switch to the second terminal.

# change directory
$ cd $HOME/catkin_ws/

# launch roscore
$ roscore

Note: Don’t close the first terminal and don’t stop the running process!

# change directory
$ cd $HOME/catkin_ws

# refresh environment variables
$ source $HOME/catkin_ws/devel/setup.bash

# run UI
$ roslaunch rplidar_ros view_rplidar.launch
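
If the UI stays empty, you can check the laser data directly on the ROS topic in a third terminal (environment sourced as above; rplidar_ros publishes to /scan):

# list active topics (optional)
$ rostopic list

# print laser scan messages
$ rostopic echo /scan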

Sorry for using screenshots, but my provider doesn’t allow the use of certain words or characters.

OpenCV & SSD-Mobilenet-v2

The first steps with OpenCV and Haar cascades are done, and now it should really get going. With other models you can easily detect more objects. In this tutorial I will therefore show you a different possibility with your Jetson Nano device. I recommend switching to desktop mode for performance reasons.

Requirements

  • Jetson Nano Developer Kit (2GB / 4GB)
  • 5V Fan installed (NF-A4x20 5V PWM)
  • CSI camera connected (Raspberry Pi Camera Module V2)

Note: You can also use any other compatible fan and camera.

Objective

The goal is to recognize objects as well as to mark and label them directly in the live stream from the CSI camera. OpenCV and SSD-Mobilenet-v2 with Python 3.6 are used in this tutorial.

Preparation

As always, it takes a few steps to prepare. This is very easy but can take a while.

# update (optional)
$ sudo apt update

# install needed packages
$ sudo apt install cmake libpython3-dev python3-numpy

# clone repository
$ git clone --recursive https://github.com/dusty-nv/jetson-inference.git

# change into cloned directory
$ cd jetson-inference/

# create and change into directory
$ mkdir build && cd build/

# configure build
$ cmake ../

# during cmake, the model downloader starts:
# select only SSD-Mobilenet-v2 (all others you can download later)
# PyTorch is also not needed yet

# build with specified job and install
$ make -j$(nproc)
$ sudo make install

# configure dynamic linker run-time bindings
$ sudo ldconfig
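
Before writing your own script, you can optionally verify the build with one of the sample images shipped in the jetson-inference repository (run from build/aarch64/bin; file names taken from the repository and may differ):

# quick object detection test with a sample image (optional)
$ ./detectnet --network=ssd-mobilenet-v2 images/peds_0.jpg images/test/output.jpg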

CSI Camera Object detection

When the preparation is successfully completed, you can create the first simple Python script.

# create file and edit
$ vim CSICamObjectDetection.py

Here is the content of the script. The important points are commented.

#!/usr/bin/env python3

import jetson.inference
import jetson.utils
import cv2

# set up the inference network
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)


def gstreamer_pipeline(cap_width=1280,
                       cap_height=720,
                       disp_width=800,
                       disp_height=600,
                       framerate=21,
                       flip_method=2):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink" % (cap_width,
                                                       cap_height,
                                                       framerate,
                                                       flip_method,
                                                       disp_width,
                                                       disp_height)
    )


# process csi camera
video_file = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)

if video_file.isOpened():
    cv2.namedWindow("Detection result", cv2.WINDOW_AUTOSIZE)

    print('CSI stream opened. Press ESC or Ctrl + c to stop application')

    while cv2.getWindowProperty("Detection result", 0) >= 0:
        ret, frame = video_file.read()

        # stop if no frame was received
        if not ret:
            break

        # convert and detect
        imgCuda = jetson.utils.cudaFromNumpy(frame)
        detections = net.Detect(imgCuda)

        # draw rectangle and description
        for d in detections:
            x1, y1, x2, y2 = int(d.Left), int(d.Top), int(d.Right), int(d.Bottom)
            className = net.GetClassDesc(d.ClassID)
            cv2.rectangle(frame, (x1,y1), (x2, y2), (0, 0, 0), 2)
            cv2.putText(frame, className, (x1+5, y1+15), cv2.FONT_HERSHEY_DUPLEX, 0.75, (0, 0, 0), 2)

        # show frame
        cv2.imshow("Detection result", frame)

        # stop via ESC key
        keyCode = cv2.waitKey(30) & 0xFF
        if keyCode == 27:
            break

    # close
    video_file.release()
    cv2.destroyAllWindows()
else:
    print('unable to open csi stream')

Now you can run the script. It will take a while the first time, so please be patient.

# execute
$ python3 CSICamObjectDetection.py

# or with executable permissions
$ ./CSICamObjectDetection.py

I was fascinated by the results! I hope you feel the same way.

Invalid MIT-MAGIC-COOKIE-1

Problem

After an OS update including a restart of the system, an error occurs with SSH X11 forwarding. Instead of displaying the windows as usual, “Invalid MIT-MAGIC-COOKIE-1” is displayed in the terminal.

Here is an example: I connect to the Ubuntu machine from my macOS and get the error message.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
Invalid MIT-MAGIC-COOKIE-1 key
Error: Can't open display: localhost:10.0
lupin@nano4gb:~$ exit

1st Quick and dirty solution

Since I’m on my own, relatively secure network, I leave XQuartz running in the background, disable access control, reconnect, and everything works again.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost +
access control disabled, clients can connect from any host

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
lupin@nano4gb:~$ exit

This setting is (fortunately) only valid for this one session. As soon as I end the connection, restart XQuartz and establish a new SSH connection, the error message appears again. Anyway, not a nice solution!

2nd Quick and dirty solution

XQuartz offers the possibility to allow connections without authentication. This solution is permanent and can be set up very simply. Open the XQuartz settings and uncheck the respective checkbox.

XQuartz X11 Settings

Restart XQuartz, establish a new connection.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost
access control enabled, only authorized clients can connect

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
lupin@nano4gb:~$ exit
example for xeyes

Now everything works as usual. But even with this solution I have my doubts! So I undo the setting and restart XQuartz.

3rd Quick and dirty solution

As described in the previous step, I undid the setting and restarted XQuartz. Additionally I allowed local connections in the access control.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost
access control enabled, only authorized clients can connect

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % xhost +local:
non-network local connections being added to access control list

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes
lupin@nano4gb:~$ exit

This solution also works but is not permanent and not secure.

What now?

Searching the internet didn’t really help me. Some of the suggested solutions are total nonsense (or I completely misunderstood them). I’ve also read the man pages, where it even says that the environment variable $DISPLAY should not be changed. The first signs of despair appeared (also due to ignorance on my part)! Trust me, I also deleted all the recommended files, which did not change the situation.

My final Solution

In the end my problem was very easy to solve! I still had entries from an already removed MacPorts installation in my .zprofile file, which set the local environment variable $DISPLAY. With the restart of my macOS, this export of the environment variable of course became active again.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % echo $DISPLAY
:0

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % vim .zprofile

# I deleted all old macports entries

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % sudo reboot

The result after the restart.

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % echo $DISPLAY           
/private/tmp/com.apple.launchd.ASpxDkLA98/org.xquartz:0

β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4Y lupin@10.0.0.12

lupin@nano4gb:~$ xeyes

Everything is all right again. Oh yeah… that was troubleshooting. The good thing is I learned a lot about xhost and xauth, plus my own systems.

First steps with Jetson Nano 2GB Developer Kit (Part 3)

In the two previous parts you installed the Jetson Nano operating system and attached the necessary hardware peripherals. In this part you will learn how to recognize faces in pictures. In addition, it is shown how to use this remotely via SSH/X11 forwarding (headless).

Note: If you like to use the GUI, you can jump directly to section “Python & OpenCV” and adapt necessary steps.

Preparation

Since I use macOS myself, this tutorial will also be based on this. If you are a user of another operating system, search the Internet for the relevant solution for the “Preparation” section. However, the other sections are independent of the operating system.

If you haven’t done so yet, install XQuartz on your macOS now. To do this, download the latest version (as DMG) from the official site and run the installation. Of course, if you prefer to use a package manager like Homebrew, that’s also possible! If you are not logged out and back in automatically after the installation, do this step yourself! That’s pretty much it; nothing else is needed other than starting XQuartz.

Now start the SSH connection to the Jetson Nano (while XQuartz is running in the background).

# connect via SSH
$ ssh -C4Y <user>@<nano ip>

X11 Forwarding

You might want to check your SSH configuration for X11 on the Jetson Nano first.

# verify current SSH configuration
$ sudo sshd -T | grep -i 'x11\|forward'

The following settings are important: x11forwarding yes, x11uselocalhost yes and allowtcpforwarding yes. If these values are not set, you must configure them and restart the sshd service. Normally, however, they are already preconfigured on recent Jetson Nano OS versions.
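
In the configuration file, these correspond to the following standard sshd_config directives:

# relevant entries in /etc/ssh/sshd_config
X11Forwarding yes
X11UseLocalhost yes
AllowTcpForwarding yes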

# edit configuration
$ sudo vim /etc/ssh/sshd_config

# restart service after changes
$ sudo systemctl restart sshd

# check sshd status (optional)
$ sudo systemctl status sshd

You may have noticed that you established the SSH connection with “-Y”. You can check this with the environment variable $DISPLAY.

# show value of $DISPLAY (optional)
$ echo $DISPLAY
localhost:10.0

Attention: If the output is empty, check all previous steps carefully again!

Python & OpenCV

Normally Python 3.x and OpenCV are already installed. If you want to check this first, proceed as follows.

# verify python OpenCV version (optional)
$ python3
>>> import cv2
>>> cv2.__version__
>>> exit()

Then it finally starts. Now we develop the Python script and test with pictures whether we can recognize people’s faces. If you don’t have any pictures of people yet, check this page. The easiest way is to download the images into the HOME directory. But you can also create your own pictures with the USB or CSI camera.

# change into home directory
$ cd ~

# create python script file
$ touch face_detection.py

# start file edit
$ vim face_detection.py

Here is the content of the Python script.

#!/usr/bin/env python3

import argparse
from pathlib import Path
import sys
import cv2

# define argparse description/epilog
description = 'Image Face detection'
epilog = 'The author assumes no liability for any damage caused by use.'

# create argparse Object
parser = argparse.ArgumentParser(prog='./face_detection.py', description=description, epilog=epilog)

# set mandatory arguments
parser.add_argument('image', help="Image path", type=str)

# read arguments by user
args = parser.parse_args()

# set all variables
IMG_SRC = args.image
FRONTAL_FACE_XML_SRC = '/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml'

# verify files existing
image = Path(IMG_SRC)
if not image.is_file():
    sys.exit('image not found')

haarcascade = Path(FRONTAL_FACE_XML_SRC)
if not haarcascade.is_file():
    sys.exit('haarcascade not found')

# process image
face_cascade = cv2.CascadeClassifier(FRONTAL_FACE_XML_SRC)
img = cv2.imread(cv2.samples.findFile(IMG_SRC))
gray_scale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
detect_face = face_cascade.detectMultiScale(gray_scale, 1.3, 5)

for (x_pos, y_pos, width, height) in detect_face:
    cv2.rectangle(img, (x_pos, y_pos), (x_pos + width, y_pos + height), (10, 10, 255), 2)

# show result
cv2.imshow("Detection result", img)

# close
cv2.waitKey(0)
cv2.destroyAllWindows()

Hint: You may have noticed that we use existing XML files in the script. Have a look under that path; there are more resources.

# list all haarcascade XML files
$ ls -lh /usr/share/opencv4/haarcascades/

When everything is done, start the script, specifying the image argument.

# run script
$ python3 face_detection.py <image>

# or with executable permissions
$ ./face_detection.py <image>

You should now see the respective results. With this we end the 3rd part. Customize or extend the script as you wish.

First steps with Jetson Nano 2GB Developer Kit (Part 2)

In the first part I explained the initial setup of the Jetson Nano 2GB Developer Kit. This second part should help with additional (but required) hardware like the fan and the CSI camera.

Preparation

If you are still connected via SSH (or serial) and the Jetson Nano is still running, you have to shut it down now.

# shutdown
$ sudo shutdown -h 0

Connect the fan and the camera and make sure that all connections are made correctly! If you are not sure, look for the respective video on this very helpful website. As soon as you are done, you can restart the Jetson Nano and connect.

Note: The screws that come with the fan are too big. So I simply used cable ties. Otherwise I can really recommend this fan.

Fan

As soon as you have started and logged in again, you should check whether the fan is basically running.

# turn on fan
$ sudo sh -c 'echo 255 > /sys/devices/pwm-fan/target_pwm'

# turn off fan
$ sudo sh -c 'echo 0 > /sys/devices/pwm-fan/target_pwm'
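
You can also read back the currently set value:

# show current fan value (optional)
$ cat /sys/devices/pwm-fan/target_pwm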

If that worked, the easiest way is to use the code provided on GitHub to start/stop the fan at certain temperatures fully automatically.

# clone repository
$ git clone https://github.com/Pyrestone/jetson-fan-ctl.git

# change into cloned repository
$ cd jetson-fan-ctl

# start installation
$ sudo ./install.sh

# check service (optional)
$ sudo systemctl status automagic-fan

You can also change the values for your own needs.

# edit via vim
$ sudo vim /etc/automagic-fan/config.json

# restart service after changes
$ sudo systemctl restart automagic-fan

From now on, your Jetson Nano should be protected against overheating too quickly. If you want to check the current values yourself, simply run the following commands.

# show thermal zones
$ cat /sys/devices/virtual/thermal/thermal_zone*/type
$ cat /sys/devices/virtual/thermal/thermal_zone*/temp
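
To pair each zone name with its temperature, a small bash one-liner (process substitution assumed to be available) can help:

# show zone type and temperature side by side
$ paste <(cat /sys/devices/virtual/thermal/thermal_zone*/type) <(cat /sys/devices/virtual/thermal/thermal_zone*/temp)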

Camera

The CSI camera doesn’t need any additional installation of packages; everything you need is already installed. So after booting up, you can start right away.

Important: Do not attach the CSI camera while the device is running!

# list video devices
$ ls -l /dev/video0
crw-rw----+ 1 root video 81, 0 Jan  9 12:07 /dev/video0

An additional package allows you to query the capabilities of the camera.

# install package (optional)
$ sudo apt install -y v4l-utils

# list information (optional)
$ v4l2-ctl --list-formats-ext

Take a picture

# take picture and save to disk
$ nvgstcapture-1.0 --orientation=2 --image-res=2

# Press j and ENTER to take a picture
# Press q and ENTER to exit

# list stored pictures
$ ls -lh nvcamtest_*
-rw-rw-r-- 1 lupin lupin 20K Jan  9 15:39 nvcamtest_7669_s00_00000.jpg

Now record a video. Be careful with your disk space!

# take video and save to disk
$ nvgstcapture-1.0 --orientation 2 --mode=2

# Press 1 and ENTER to start record
# Press 0 and ENTER to stop record
# Press q and ENTER to exit

$ ls -lh *.mp4
-rw-rw-r-- 1 lupin lupin 3,0M Jan  9 15:52 nvcamtest_8165_s00_00000.mp4

That’s it for this time.

First steps with Jetson Nano 2GB Developer Kit (Part 1)

The Jetson Nano Developer Kits are awesome for getting started with artificial intelligence and machine learning. To make your learning more successful, I will use this tutorial to explain some important first steps.

Requirements

The values in brackets indicate the hardware I am using. However, you can use other compatible hardware. Before you buy, see the documentation provided by NVIDIA. There you will also find a page with a lot of good information collected.

  • Jetson Nano Developer Kit (2GB with WiFi)
  • 5V Fan (NF-A4x20 5V PWM)
  • USB or CSI camera (Raspberry Pi Camera Module V2)
  • Power Supply (Raspberry Pi 15W USB-C Power Supply)
  • Micro SD Card (SanDisk Ultra microSDXC UHS-I A1 64 GB)

Additional hardware:

  • Monitor & HDMI cable
  • Mouse & Keyboard
  • USB cable (Delock USB 2.0 cable, USB-C to Micro-USB B, 1 m)

Objective

The first part provides information about the needed parts to buy and describes the necessary installation (headless).

Installation

After downloading the SD card image (Linux4Tegra, based on Ubuntu 18.04), you can write it to the SD card immediately. Make sure to use the correct image in each case! Here is a short overview of the commands if you use macOS.

# list all attached disk devices
$ diskutil list external | fgrep '/dev/disk'

# partitioning a disk device (incl. delete)
$ sudo diskutil partitionDisk /dev/disk<n> 1 GPT "Free Space" "%noformat%" 100%

# extract archive and write to disk device
$ /usr/bin/unzip -p ~/Downloads/jetson-nano-2gb-jp46-sd-card-image.zip | sudo /bin/dd of=/dev/rdisk<n> bs=1m

There is no automatic progress indication from dd, but you can press [control] + [t] while dd is running.

The setup and boot of the Jetson Nano can be done in two different ways (with GUI or headless). If you choose the headless setup, connect your computer to the Micro-USB port of the Jetson Nano and follow the next instructions.

# list serial devices (optional)
$ ls /dev/cu.usbmodem*

# list serial devices (long listing format)
$ ls -l /dev/cu.usbmodem*
crw-rw-rw-  1 root  wheel   18,  24 Dec  2 07:23 /dev/cu.usbmodem<n>

# connect with terminal emulation
$ sudo screen /dev/cu.usbmodem<n> 115200
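
Hint: To end the screen session later, use the key combination below.

# quit screen: press [control] + [a], then k, confirm with y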

Now simply follow the initial setup steps and reboot the device after you finish.

Connect to WiFi

If you did not set up WiFi during the initial setup, you can connect again via the serial interface and follow the next steps.

# connect with terminal emulation
$ sudo screen /dev/cu.usbmodem<n> 115200

# enable (if WiFi is disabled)
$ nmcli r wifi on

# list available WiFi networks
$ nmcli d wifi list

# connect to WiFi
$ nmcli d wifi connect <SSID> password <password>

# show status (optional)
$ nmcli dev status

# show connections (optional)
$ nmcli connection show

Finally you can run the update and install the needed additional packages.

# update packages
$ sudo apt update && sudo apt upgrade -y

# install packages (optional)
$ sudo apt install -y vim tree

From this point on, nothing should stand in the way of the SSH connection, provided you are in the same network.

# get IP (on Jetson device)
$ ip -4 a

# connect via SSH (example for local device)
β”Œβ”€β”€[lupin@HackMac]::[~]
└─ % ssh -C4 <user>@<ip>

In Part 2 we will add the fan (incl. installation of the needed software) and the camera.