This project is part 1 of a four-part series of projects, in which we will progressively create a Vitis-AI and ROS2 enabled platform for the ZUBoard:
- Part 1 : Building the foundational designs
- Part 2 : Combining designs into a common platform
- Part 3 : Adding support for Vitis-AI
- Part 4 : Adding support for ROS2
The motivation of this series of projects is to enable users to create their own custom AI applications.
We will also be hosting a webinar series on creating a fun robotics application:
- Webinar I : Teaching the ZUBoard to recognize hand gestures
- Webinar II : Controlling a robot with the ZUBoard
Every time Avnet releases a new petalinux BSP version for its platforms, it's like Christmas.
I always start my designs from these BSPs, which include both hardware (Vivado) and software (Petalinux) components.
Foundational Design Overview
I call these projects the foundational designs, since they are literally the foundation on which I create more advanced projects, including hardware accelerators and additional software packages.
The ZUBoard has two (2) foundational designs:
- zub1cg_sbc_base
- zub1cg_sbc_dualcam
The first design, the base design, is the simplest design. It has an (almost) empty PL design.
The second design, the dualcam design, implements a MIPI capture pipeline in the PL (hardware), and supports the V4L2 API in linux (software).
The next sections describe how to rebuild these foundational designs from source in the Avnet github repositories.
Cloning the Avnet github repositories
The foundational designs make use of the following Avnet github repositories:
- github.com/Avnet/bdf (latest branch)
- github.com/Avnet/hdl (2022.2 branch)
- github.com/Avnet/petalinux (2022.2 branch)
- github.com/Avnet/meta-avnet (2022.2 branch)
- github.com/Avnet/meta-on-semiconductor (2022.2 branch)
The first thing to do is to clone the first three (3) repositories to an Ubuntu linux machine which has the 2022.2 tools installed. I like to do this in a directory that has a meaningful name (such as "Avnet_2022_2" in your home directory):
$ cd ~
$ mkdir Avnet_2022_2
$ cd ~/Avnet_2022_2
$ git clone https://github.com/Avnet/bdf
$ git clone -b 2022.2 https://github.com/Avnet/hdl
$ git clone -b 2022.2 https://github.com/Avnet/petalinux
The last two repositories will be cloned automatically during the build process, so there is no need to clone them manually.
Building zub1cg_sbc_base
The first design we will build is the "zub1cg_sbc_base" design.
To do this, first call your 2022.2 setup scripts:
$ source /tools/Xilinx/Vitis/2022.2/settings64.sh
$ source /tools/Xilinx/petalinux-v2022.2-final/settings.sh
Next, launch the petalinux build script as follows:
$ cd ~/Avnet_2022_2/petalinux
$ scripts/make_zub1cg_sbc_base.sh
This script will perform the following:
- Create and Build the Vivado project
~/Avnet_2022_2/hdl/projects/zub1cg_sbc_base_2022_2
- Create and Build the Petalinux project
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_base_2022_2
- Create a final SD card image
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_base_2022_2/images/linux/rootfs.wic
I often get an error during the petalinux build phase, caused by a failure to access packages from various github repositories. If this occurs, re-check your internet connection, and re-start the build as follows:
$ cd ~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_base_2022_2
$ petalinux-build
I usually keep doing this until the build succeeds.
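This retry-until-success loop can also be scripted. The sketch below is a hypothetical `retry_build` helper (not part of the Avnet scripts) that simply re-runs a command until it succeeds or a retry budget is exhausted:

```python
import subprocess
import time

def retry_build(cmd, max_attempts=5, delay_s=10):
    """Re-run a flaky command until it succeeds; return the number
    of attempts used, or raise once the budget is exhausted."""
    for attempt in range(1, max_attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return attempt
        if attempt < max_attempts:
            time.sleep(delay_s)  # give the network a moment to recover
    raise RuntimeError(f"{cmd!r} failed after {max_attempts} attempts")

if __name__ == "__main__":
    # On the build machine this would be, for example:
    # retry_build(["petalinux-build"], max_attempts=10, delay_s=60)
    pass
```

Run it from the petalinux project directory so that `petalinux-build` picks up the project configuration as usual.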
re-Enabling the Cache
It is important to know that the automated petalinux build configures the petalinux project to cache all the packages that are fetched from the internet. That way, those packages are only fetched once and are reused from the local cache by all the petalinux projects.
If your automated build succeeded, the cache will have been disabled in the petalinux project. You may want to re-enable it by editing the following files:
project-spec/configs/config
...
#
# Add pre-mirror URL
#
CONFIG_PRE_MIRROR_URL="file://{path}/Avnet_2022_2/petalinux/projects/cache/downloads_2022.2/"
...
#
# Network sstate feeds URL
#
CONFIG_YOCTO_LOCAL_SSTATE_FEEDS_URL="file://{path}/Avnet_2022_2/petalinux/projects/cache/sstate_2022.2/aarch64/"
CONFIG_NETWORK_LOCAL_SSTATE_FEEDS=y
...
project-spec/meta-user/conf/petalinuxbsp.conf
...
PREMIRRORS:prepend = "git://.*/.* file://{path}/Avnet_2022_2/petalinux/projects/cache/downloads_2022.2/ \
ftp://.*/.* file://{path}/Avnet_2022_2/petalinux/projects/cache/downloads_2022.2/ \
http://.*/.* file://{path}/Avnet_2022_2/petalinux/projects/cache/downloads_2022.2/ \
https://.*/.* file://{path}/Avnet_2022_2/petalinux/projects/cache/downloads_2022.2/ "
DL_DIR="{path}/Avnet_2022_2/petalinux/projects/cache/downloads_2022.2/"
SSTATE_DIR="{path}/Avnet_2022_2/petalinux/projects/cache/sstate_2022.2/"
...
where {path} is the absolute path to your Avnet_2022_2 directory.
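Since both files use the same {path} placeholder, the substitution can be done in one pass with sed. This is just a convenience sketch, assuming Avnet_2022_2 lives directly under $HOME and that you run it from the petalinux project root:

```shell
# Expand the {path} placeholder in the cache-related config files.
# Assumes Avnet_2022_2 is directly under $HOME; adjust otherwise.
for f in project-spec/configs/config \
         project-spec/meta-user/conf/petalinuxbsp.conf; do
    if [ -f "$f" ]; then
        sed -i "s|{path}|$HOME|g" "$f"
    fi
done
```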
Executing zub1cg_sbc_base
In order to execute the base design, we first need to program the SD card image to a micro-SD card (of size 32GB or greater).
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_base_2022_2/images/linux/rootfs.wic
To do this, we use Balena Etcher, which is available for most operating systems.
Once programmed, insert the micro-SD card into the ZUBoard, and connect up the platform as shown below.
Press the power push-button (top-left) to boot the board.
As linux boots, a large quantity of verbose output will be sent to the serial console, ending with the following:
...
*********************************************************************
***
*** Avnet ZUBoard 1CG Out Of Box PetaLinux Build V1.2
*** The PS LED is mapped to 334
***
*********************************************************************
[ OK ] Started Blinky Sample Application.
[ OK ] Started Network Name Resolution.
[ OK ] Reached target Network.
[ OK ] Reached target Host and Network Name Lookups.
[ OK ] Started NFS status monitor for NFSv2/3 locking..
Starting Permit User Sessions...
Starting Target Communication Framework agent...
[ OK ] Started Xinetd A Powerful Replacement For Inetd.
[ OK ] Finished Permit User Sessions.
[ OK ] Started Getty on tty1.
[ OK ] Started Serial Getty on ttyPS0.
[ OK ] Reached target Login Prompts.
[ OK ] Started Target Communication Framework agent.
[ OK ] Reached target Multi-User System.
[ OK ] Reached target Graphical Interface.
Starting Record Runlevel Change in UTMP...
[ OK ] Finished Record Runlevel Change in UTMP.
Starting Hostname Service...
[ OK ] Started Hostname Service.
PetaLinux 2022.2_release_S10071807 zub1cg-sbc-base-2022-2 ttyPS0
Login as the "root" user as follows:
zub1cg-sbc-base-2022-2 login: root
root@zub1cg-sbc-base-2022-2:~#
Notice that, by default, a "Blinky Sample Application" was launched, and will be continually blinking the D9 LED on the ZUBoard.
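The boot banner above reports that the PS LED is mapped to GPIO 334. If you stop the Blinky service, the LED can be driven manually through the legacy sysfs GPIO interface. The `set_gpio` helper below is a sketch of that, assuming the kernel exposes sysfs GPIO; the `gpio_root` parameter exists only so the logic can be exercised off-target:

```python
import os

def set_gpio(number, value, gpio_root="/sys/class/gpio"):
    """Export (if needed) and drive a GPIO line via the legacy
    sysfs interface, e.g. set_gpio(334, True) for the PS LED."""
    line_dir = os.path.join(gpio_root, f"gpio{number}")
    if not os.path.isdir(line_dir):
        # Ask the kernel to expose the line under gpio_root
        with open(os.path.join(gpio_root, "export"), "w") as f:
            f.write(str(number))
    with open(os.path.join(line_dir, "direction"), "w") as f:
        f.write("out")
    with open(os.path.join(line_dir, "value"), "w") as f:
        f.write("1" if value else "0")
```

On newer kernels the libgpiod tools (gpioset) are the preferred interface; the sysfs route is shown here only because the number 334 comes straight from the boot banner.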
To discover which IP address has been assigned to your ZUBoard (requires DHCP server), use the following command:
root@zub1cg-sbc-base-2022-2:~# ipconfig -a eth0
-sh: ipconfig: command not found
root@zub1cg-sbc-base-2022-2:~# ifconfig -a eth0
eth0 Link encap:Ethernet HWaddr FC:C2:3D:42:B9:F6
inet addr:10.0.0.179 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::fec2:3dff:fe42:b9f6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:170 errors:0 dropped:0 overruns:0 frame:0
TX packets:49 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12886 (12.5 KiB) TX bytes:7320 (7.1 KiB)
Interrupt:37
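If you need the address in a script (for example to pass to jupyter-lab later on), it can be pulled out of the busybox-style ifconfig output with a short helper. `parse_ipv4` is a hypothetical name, and the regex assumes the `inet addr:` format shown above:

```python
import re

def parse_ipv4(ifconfig_output):
    """Return the first IPv4 address in busybox `ifconfig` output,
    or None if no 'inet addr:' field is present."""
    m = re.search(r"inet addr:(\d{1,3}(?:\.\d{1,3}){3})", ifconfig_output)
    return m.group(1) if m else None

# On the board this could be combined with:
#   subprocess.check_output(["ifconfig", "eth0"], text=True)
```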
We will be using MobaXterm on a Windows machine to view video output from the ZUBoard, since the board does not have an HDMI or DP port by default. We can start an SSH session with MobaXterm, ensuring that X11 forwarding is enabled.
Next we can perform a quick sanity check with the USB camera.
We can query the presence of the USB camera with the following commands:
root@zub1cg-sbc-base-2022-2:~# ls /dev/media*
/dev/media0
root@zub1cg-sbc-base-2022-2:~# ls /dev/video*
/dev/video0 /dev/video1
root@zub1cg-sbc-base-2022-2:~# media-ctl -p -d /dev/media0
Media controller API version 5.15.36
Media device information
------------------------
driver uvcvideo
model UVC Camera (046d:0825)
serial 33E07BA0
bus info usb-xhci-hcd.2.auto-1
hw revision 0x12
driver version 5.15.36
Device topology
...
root@zub1cg-sbc-base-2022-2:~# v4l2-ctl -D -d /dev/video0
Driver Info:
Driver name : uvcvideo
Card type : UVC Camera (046d:0825)
Bus info : usb-xhci-hcd.2.auto-1
Driver version : 5.15.36
Capabilities : 0x84a00001
Video Capture
Metadata Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
...
root@zub1cg-sbc-base-2022-2:~# v4l2-ctl -D -d /dev/video1
Driver Info:
Driver name : uvcvideo
Card type : UVC Camera (046d:0825)
Bus info : usb-xhci-hcd.2.auto-1
Driver version : 5.15.36
Capabilities : 0x84a00001
Video Capture
Metadata Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04a00000
Metadata Capture
Streaming
Extended Pix Format
...
root@zub1cg-sbc-base-2022-2:~#
We observe that one media node (/dev/media0) and two video nodes (/dev/video0, /dev/video1) have been enumerated.
Inspecting the "Device Caps" of each video node, we note that /dev/video0 is the one with the "Video Capture" capability.
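That selection can also be automated: parse the `v4l2-ctl -D` output and keep only the nodes whose Device Caps section lists "Video Capture". The `has_video_capture` helper below is a hypothetical sketch that assumes the output layout shown above:

```python
import re

def has_video_capture(v4l2_ctl_output):
    """True if the 'Device Caps' section of `v4l2-ctl -D` output
    lists the 'Video Capture' capability (metadata-only nodes,
    like /dev/video1 above, return False)."""
    # Restrict the search to the Device Caps block, so the
    # driver-level 'Capabilities' list is not mistaken for it.
    m = re.search(r"Device Caps.*", v4l2_ctl_output, re.DOTALL)
    return bool(m) and "Video Capture" in m.group(0)

# On the board, feed it with:
#   subprocess.check_output(["v4l2-ctl", "-D", "-d", dev], text=True)
```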
Create a new python script (usbcam_passthrough.py) in root's home directory with the following content:
import cv2

# Open the camera
cap = cv2.VideoCapture(0)

# Set the resolution
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# Create a display window
cv2.namedWindow("USB Cam Passthrough", cv2.WINDOW_NORMAL)

while True:
    # Read a frame from the camera
    ret, frame = cap.read()
    if ret:
        # Display the frame in the window
        cv2.imshow("USB Cam Passthrough", frame)
    # Check for key presses
    key = cv2.waitKey(1)
    if key == 27:  # Pressing the Esc key exits the program
        break

# Release the camera and destroy the display window
cap.release()
cv2.destroyAllWindows()
NOTE : This code was generated by ChatGPT with the following prompt : "python code usbcam passthrough 640x480"
Run this script from MobaXterm as follows:
root@zub1cg-sbc-base-2022-2:~# python3 usbcam_passthrough.py
[ WARN:0] global /usr/src/debug/opencv/4.5.2-r0/git/modules/videoio/src/cap_gstreamer.cpp (1081) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
This will display the video from the USB camera in an X11 window on your PC.
In order to get a better understanding of the base design, we will modify it. More specifically, we will include additional software packages in the petalinux project.
AMD-Xilinx often provides groups of packages that are named with the "packagegroup-" prefix. As an example, we will add jupyter notebook functionality, which is provided by "packagegroup-petalinux-jupyter".
Edit the following file:
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_base_2022_2/project-spec/meta-avnet/recipes-core/images/petalinux-image-minimal.bbappend
Add "packagegroup-petalinux-jupyter" to the "IMAGE_INSTALL:append:zub1cg-sbc" entry:
IMAGE_INSTALL:append:zub1cg-sbc = "\
...
packagegroup-petalinux-jupyter \
"
Next, rebuild the petalinux project:
$ cd ~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_base_2022_2
$ petalinux-build
Once complete, reprogram the micro-SD card, and reboot the ZUBoard.
Once booted, launch the jupyter-lab server in the serial console, specifying the IP address, as follows:
root@zub1cg-sbc-base-2022-2:~# jupyter-lab --allow-root --ip 10.0.0.179 &
[1] 879
[I 2023-03-20 16:07:40.475 ServerApp] jupyterlab | extension was successfully linked.
[I 2023-03-20 16:07:40.548 ServerApp] Writing Jupyter server cookie secret to /home/petalinux/.local/share/jupyter/runtime/jupyter_cookie_secret
[I 2023-03-20 16:07:40.649 LabApp] JupyterLab extension loaded from /usr/lib/python3.9/site-packages/jupyterlab
[I 2023-03-20 16:07:40.649 LabApp] JupyterLab application directory is /usr/share/jupyter/lab
[I 2023-03-20 16:07:40.681 ServerApp] jupyterlab | extension was successfully loaded.
[I 2023-03-20 16:07:40.686 ServerApp] Serving notebooks from local directory: /home/root
[I 2023-03-20 16:07:40.686 ServerApp] Jupyter Server 1.13.5 is running at:
[I 2023-03-20 16:07:40.687 ServerApp] http://10.0.0.179:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
[I 2023-03-20 16:07:40.687 ServerApp] or http://127.0.0.1:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
[I 2023-03-20 16:07:40.687 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 2023-03-20 16:07:40.717 ServerApp] No web browser found: could not locate runnable browser.
[C 2023-03-20 16:07:40.719 ServerApp]
To access the server, open this file in a browser:
file:///home/petalinux/.local/share/jupyter/runtime/jpserver-879-open.html
Or copy and paste one of these URLs:
http://10.0.0.179:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
or http://127.0.0.1:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
On the PC side, the jupyter notebook can be accessed by copying the link that contains the token. For example, in my case this was:
http://10.0.0.179:8888/?token=6dae07dd4168bf26850904cadd43526fe146fcc59de4b384
We will implement a live video feed in a jupyter notebook. The recipe was taken from the following reference:
In the "Notebook" section of the "Launcher", click on the "Python3 (ipykernel)" button to create a new jupyter notebook.
Paste the following code snippets (taken directly from the above reference):
import matplotlib.pyplot as plt
import cv2
import numpy as np
from IPython.display import display, Image
import ipywidgets as widgets
import threading

# Stop button
# ================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger', # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square' # (FontAwesome names without the `fa-` prefix)
)

# Display function
# ================
def view(button):
    cap = cv2.VideoCapture(0)
    display_handle=display(None, display_id=True)
    while True:
        _, frame = cap.read()
        frame = cv2.flip(frame, 1) # if your camera reverses your image
        _, frame = cv2.imencode('.jpeg', frame)
        display_handle.update(Image(data=frame.tobytes()))
        if stopButton.value==True:
            cap.release()
            display_handle.update(None)
            break

# Run
# ================
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()
Save the jupyter notebook with a meaningful name (e.g. usbcam_live_view.ipynb), then click the "Run" icon on each of the two code cells.
You will see a live video feed in the jupyter notebook, as shown below:
Click the "Stop" widget to stop the live feed.
Building zub1cg_sbc_dualcam
The next design we will build is the "zub1cg_sbc_dualcam" design.
If you have not already done so, call your 2022.2 setup scripts:
$ source /tools/Xilinx/Vitis/2022.2/settings64.sh
$ source /tools/Xilinx/petalinux-v2022.2-final/settings.sh
Next, launch the petalinux build script as follows:
$ cd ~/Avnet_2022_2/petalinux
$ scripts/make_zub1cg_sbc_dualcam.sh
This script will perform the following:
- Create and Build the Vivado project
~/Avnet_2022_2/hdl/projects/zub1cg_sbc_dualcam_2022_2
- Create and Build the Petalinux project
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_dualcam_2022_2
- Create a final SD card image
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_dualcam_2022_2/images/linux/rootfs.wic
In order to execute the dualcam design, we first need to program the SD card image to a micro-SD card (of size 16GB or greater).
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_dualcam_2022_2/images/linux/rootfs.wic
To do this, we use Balena Etcher, which is available for most operating systems.
Once programmed, insert the micro-SD card into the ZUBoard, and connect up the platform as shown below.
Make sure that the DualCam HSIO jumpers are configured as follows:
- J1 => HSIO
- J2 => MIPI DSI
- J3 – present => VAA connected
- J4/J14 – J4.1-J4.2 => VDD is 1.2V (This needs to be set to 1-2 for 2.8V)
- J5 – 2-3 => Clock is on-board OSC(48 MHz)
- J6 => IAS0
- J7 => IAS1
- J8 – 2-3 => SENSOR1_GPIO1 is 1V8_SP2
- J9 – 2-3 => SENSOR1_GPIO3 is GND
- J10 – 2-3 => SENSOR2_GPIO1 is 1V8_SP3
- J11 – 2-3 => SENSOR2_GPIO3 is SENS2_ADDR
- J12 – absent => SENSOR1_GPIO0/FLASH
- J13 – absent => SENSOR2_GPIO0/FLASH
- J15 => MCU PROG
Press the power push-button to boot the board, and login as "root".
Next we can perform a quick sanity check with the dual MIPI sensor module.
We can query the presence of the MIPI sensors with the following commands:
root@zub1cg-sbc-dualcam-2022-2:~# ls /dev/media*
/dev/media0 /dev/media1
root@zub1cg-sbc-dualcam-2022-2:~# ls /dev/video*
/dev/video0 /dev/video1 /dev/video2
root@zub1cg-sbc-dualcam-2022-2:~#
Note that the USB camera has still been enumerated as media node /dev/media0 and video nodes /dev/video0 and /dev/video1. The dual MIPI capture pipeline has been enumerated at the next available nodes, which are /dev/media1 and /dev/video2. If the USB camera were not plugged in, the MIPI capture pipeline would have been enumerated as /dev/media0 and /dev/video0.
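Because the enumeration order depends on what is plugged in, scripts should not hard-code node numbers. A more robust approach is to ask `media-ctl -p` which driver backs each media node. The `media_driver` helper below is a hypothetical sketch of that idea (the Avnet python scripts perform a similar search internally):

```python
import re

def media_driver(media_ctl_output):
    """Return the driver name from `media-ctl -p` output.
    The first 'driver' line is the driver name; it precedes
    the 'driver version' line in the real output."""
    m = re.search(r"^\s*driver\s+(\S+)", media_ctl_output, re.MULTILINE)
    return m.group(1) if m else None

# Example: loop over /dev/media* and keep the node whose driver
# is 'xilinx-video' (MIPI pipeline) rather than 'uvcvideo' (USB).
```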
root@zub1cg-sbc-dualcam-2022-2:~# media-ctl -p -d /dev/media1
Media controller API version 5.15.36
Media device information
------------------------
driver xilinx-video
model Xilinx Video Composite Device
serial
bus info
hw revision 0x0
driver version 5.15.36
Device topology
...
root@zub1cg-sbc-dualcam-2022-2:~# v4l2-ctl -D -d /dev/video2
Driver Info:
Driver name : xilinx-vipp
Card type : vcap_CAPTURE_PIPELINE_v_proc_ss
Bus info : platform:vcap_CAPTURE_PIPELINE_
Driver version : 5.15.36
Capabilities : 0x84201000
Video Capture Multiplanar
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04201000
Video Capture Multiplanar
Streaming
Extended Pix Format
...
By default, the petalinux project is configured for the AR0144 sensors, which are the sensor modules that are shipped with the DualCam HSIO module.
root@zub1cg-sbc-dualcam-2022-2:~# media-ctl -p -d /dev/media1 | grep ar
<- "0-003c.ar0144.0":0 [ENABLED]
<- "0-003c.ar0144.1":0 [ENABLED]
- entity 21: 0-003c.ar0144.0 (1 pad, 1 link)
- entity 25: 0-003c.ar0144.1 (1 pad, 1 link)
The petalinux project also provides some python scripts to configure the MIPI capture pipeline and stream the video via GStreamer and OpenCV. Launch the "avnet_dualcam_passthrough.py" python script from MobaXterm as follows:
root@zub1cg-sbc-dualcam-2022-2:~# cd ~/avnet_dualcam_python_examples/
root@zub1cg-sbc-dualcam-2022-2:~/avnet_dualcam_python_examples# python3 avnet_dualcam_passthrough.py --sensor ar0144 --mode dual
[INFO] input resolution = 640 X 480
[INFO] fps overlay = False
[INFO] brightness = 256
[INFO] Initializing the capture pipeline ...
[DualCam] Looking for devices corresponding to AP1302
dev_video = /dev/video2
dev_media = /dev/media1
sensor_type = ar0144
ap1302_i2c = 0-003c
ap1302_dev = ap1302.0-003c
ap1302_sensor = 0-003c.ar0144
[DualCam] Looking for base address for MIPI capture pipeline
mipi_desc = b0000000.mipi_csi2_rx_subsystem
csc_desc = b0020000.v_proc_ss
scaler_desc = b0040000.v_proc_ss
[DualCam] hostname = zub1cg
[DualCam] Detected SYZYGY dualcam (sensors placed left-right on board)
[DualCam] Initializing AP1302 for dual sensors
media-ctl -d /dev/media1 -l '"0-003c.ar0144.0":0 -> "ap1302.0-003c":0[1]'
media-ctl -d /dev/media1 -l '"0-003c.ar0144.1":0 -> "ap1302.0-003c":1[1]'
[DualCam] Initializing capture pipeline for ar0144 dual 640 480
media-ctl -d /dev/media1 -V "'ap1302.0-003c':2 [fmt:UYVY8_1X16/2560x800 field:none]"
media-ctl -d /dev/media1 -V "'b0000000.mipi_csi2_rx_subsystem':0 [fmt:UYVY8_1X16/2560x800 field:none]"
media-ctl -d /dev/media1 -V "'b0000000.mipi_csi2_rx_subsystem':1 [fmt:UYVY8_1X16/2560x800 field:none]"
media-ctl -d /dev/media1 -V "'b0020000.v_proc_ss':0 [fmt:UYVY8_1X16/2560x800 field:none]"
media-ctl -d /dev/media1 -V "'b0020000.v_proc_ss':1 [fmt:RBG24/2560x800 field:none]"
media-ctl -d /dev/media1 -V "'b0040000.v_proc_ss':0 [fmt:RBG24/2560x800 field:none]"
media-ctl -d /dev/media1 -V "'b0040000.v_proc_ss':1 [fmt:RBG24/1280x480 field:none]"
[DualCam] Disabling Auto White Balance
v4l2-ctl --set-ctrl white_balance_auto_preset=0 -d /dev/video2
[DualCam] Opening cv2.VideoCapture for 1280 480
GStreamer pipeline = v4l2src device=/dev/video2 io-mode="dmabuf" ! video/x-raw, width=1280, height=480, format=BGR, framerate=60/1 ! appsink
[ WARN:0] global /usr/src/debug/opencv/4.5.2-r0/git/modules/videoio/src/cap_gstreamer.cpp (1081) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
[DualCam] Setting brightness to 256
v4l2-ctl --set-ctrl brightness=256 -d /dev/video2
This will display the stereo video from the dual MIPI capture pipeline in a window on your PC.
The "media-ctl" and "v4l2-ctl" commands configure the following MIPI capture pipeline.
The "dual" mode captures the stereo images as shown in the following diagram.
The primary sensor (1) corresponds to the left image, and the secondary sensor (2) corresponds to the right image.
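Since the dual mode delivers both sensors in one side-by-side frame (1280x480 for two 640x480 sensors), the individual views can be recovered with a simple column split. The DualCam helper class returns left and right directly; the `split_stereo` sketch below shows the equivalent manual split:

```python
import numpy as np

def split_stereo(frame):
    """Split a side-by-side stereo frame into (left, right) halves.
    Assumes the primary sensor occupies the left half of the image."""
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]
```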
You can also stream video from the primary (left) sensor as follows:
root@zub1cg-sbc-dualcam-2022-2:~/avnet_dualcam_python_examples# python3 avnet_dualcam_passthrough.py --sensor ar0144 --mode primary
You can also stream video from the secondary (right) sensor as follows:
root@zub1cg-sbc-dualcam-2022-2:~/avnet_dualcam_python_examples# python3 avnet_dualcam_passthrough.py --sensor ar0144 --mode secondary
Once again, we will add jupyter notebook functionality, which is provided by "packagegroup-petalinux-jupyter".
Edit the following file:
~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_dualcam_2022_2/project-spec/meta-avnet/recipes-core/images/petalinux-image-minimal.bbappend
Add "packagegroup-petalinux-jupyter" to the "IMAGE_INSTALL:append:zub1cg-sbc" entry:
IMAGE_INSTALL:append:zub1cg-sbc = "\
...
packagegroup-petalinux-jupyter \
"
Next, rebuild the petalinux project:
$ cd ~/Avnet_2022_2/petalinux/projects/zub1cg_sbc_dualcam_2022_2
$ petalinux-build
Once complete, reprogram the micro-SD card, and reboot the ZUBoard.
Once booted, launch the jupyter-lab server in the serial console, specifying the IP address, as follows:
root@zub1cg-sbc-dualcam-2022-2:$ jupyter-lab --allow-root --ip 10.0.0.179 &
[1] 879
[I 2023-03-20 16:07:40.475 ServerApp] jupyterlab | extension was successfully linked.
[I 2023-03-20 16:07:40.548 ServerApp] Writing Jupyter server cookie secret to /home/petalinux/.local/share/jupyter/runtime/jupyter_cookie_secret
[I 2023-03-20 16:07:40.649 LabApp] JupyterLab extension loaded from /usr/lib/python3.9/site-packages/jupyterlab
[I 2023-03-20 16:07:40.649 LabApp] JupyterLab application directory is /usr/share/jupyter/lab
[I 2023-03-20 16:07:40.681 ServerApp] jupyterlab | extension was successfully loaded.
[I 2023-03-20 16:07:40.686 ServerApp] Serving notebooks from local directory: /home/root
[I 2023-03-20 16:07:40.686 ServerApp] Jupyter Server 1.13.5 is running at:
[I 2023-03-20 16:07:40.687 ServerApp] http://10.0.0.179:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
[I 2023-03-20 16:07:40.687 ServerApp] or http://127.0.0.1:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
[I 2023-03-20 16:07:40.687 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 2023-03-20 16:07:40.717 ServerApp] No web browser found: could not locate runnable browser.
[C 2023-03-20 16:07:40.719 ServerApp]
To access the server, open this file in a browser:
file:///home/petalinux/.local/share/jupyter/runtime/jpserver-879-open.html
Or copy and paste one of these URLs:
http://10.0.0.179:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
or http://127.0.0.1:8888/lab?token=06452daa538f08d3ef351ca243735d0a97ac1cca2a95f999
On the PC side, the jupyter notebook can be accessed by copying the link that contains the token. For example, in my case this was:
http://10.0.0.179:8888/?token=6dae07dd4168bf26850904cadd43526fe146fcc59de4b384
We will once again implement a live video feed in a jupyter notebook, but this time with the dual MIPI capture pipeline.
In Jupyter-Lab, navigate to the "avnet_dualcam_python_examples" directory.
In the "Notebook" section of the "Launcher", click on the "Python3 (ipykernel)" button to create a new jupyter notebook.
Paste the following code snippets:
import matplotlib.pyplot as plt
import cv2
import numpy as np
from IPython.display import display, Image
import ipywidgets as widgets
import threading

# Load dualcam helper script
import sys
import os
sys.path.append(os.path.abspath('../'))
sys.path.append(os.path.abspath('./'))
from avnet_dualcam.dualcam import DualCam

# Stop button
# ================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger', # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square' # (FontAwesome names without the `fa-` prefix)
)

# Display function
# ================
def view(button):
    dualcam = DualCam('ar0144','dual',640,480)
    display_handle=display(None, display_id=True)
    while True:
        left,right = dualcam.capture_dual()
        frame = cv2.hconcat([left,right])
        _, frame = cv2.imencode('.jpeg', frame)
        display_handle.update(Image(data=frame.tobytes()))
        if stopButton.value==True:
            display_handle.update(None)
            break

# Run
# ================
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()
Save the jupyter notebook with a meaningful name (e.g. dualcam_live_view.ipynb), then click the "Run" icon on each of the two code cells.
You will see a live video feed in the jupyter notebook, as shown below:
Click the "Stop" widget to stop the live feed.
Conclusion
I hope this tutorial will help you to get your custom AI applications up and running quickly on the ZUBoard.
If you would like to have the pre-built petalinux BSPs or SDcard images for these designs, please let me know in the comments below.
Revision History
2023/04/11
Added registration link to webinar series:
http://avnet.me/ZU1-Robotics-webinar-series
2023/03/27
Added description of being part of a four-part series.
2023/03/20
Preliminary Version