On May 23rd, 2023, World Turtle Day, I gave a webinar together with Bryan Fletcher on how to control a robot with hand signs using the ZUBoard.
This project provides a Getting Started Guide to help you quickly get up and running with the demonstration on your ZUBoard and/or Ultra96-V2 platform.
Webinar Overview
If you missed the webinar, you can still watch it online by registering at the link below:
The first webinar described how to train and deploy an American Sign Language (ASL) classification model, and deploy it to the ZUBoard.
The second webinar described how to build a ROS2 graph that incorporated the ASL classification model in order to control a simulated robot.
Start with the pre-built SD images
The following link provides a pre-built image for the ZUBoard:
- http://avnet.me/avnet-zub1cg-2022.2-sdimage (2023/05/10, md5sum = 82372486c5dde174b0f00d32a6e602fa)
The following link provides a pre-built image for the Ultra96-V2:
- http://avnet.me/avnet-u96v2-2022.2-sdimage (2023/05/10, md5sum = de17c497334da903790d702a5fae8f51)
With the SD image programmed to a micro-SD card, connect up your ZUBoard or Ultra96-V2 platform as shown below.
Press the power button to boot the board, and log in as "root".
After booting, the "xmutil listapps" command should confirm that the "avnet-{platform}-benchmark" app is active:
# xmutil listapps
Also, the DPU architecture loaded in the PL can be queried with the "xdputil query" command:
# xdputil query | grep DPU
Make note of the DPU architecture, which will be B2304 for Ultra96-V2 and B512 for ZUBoard, when executing the demo.
Clone the py_vision ROS2 package
The webinar demo was implemented as a ROS2 package called py_vision. The first step to reproduce the demonstration is to clone the source code for this package, using the git command as follows:
# mkdir -p webinar_ws/src
# cd webinar_ws
# git clone --branch 2022.2 https://github.com/AlbertaBeef/py_vision src/py_vision
Cloning into 'src/py_vision'...
remote: Enumerating objects: 40, done.
remote: Counting objects: 100% (40/40), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 40 (delta 13), reused 27 (delta 8), pack-reused 0
Receiving objects: 100% (40/40), 16.62 KiB | 2.08 MiB/s, done.
Resolving deltas: 100% (13/13), done.
For more information on how this code was created, please watch the webinar where I walk you through these steps:
Building the py_vision ROS2 package
Next, we need to build the "py_vision" ROS2 package.
In order to use the ROS2 build tools and utilities, we first need to initialize the environment variables:
# source /usr/bin/ros_setup.sh
Now we can build the package with "colcon":
# colcon build
Starting >>> py_vision
Finished <<< py_vision [6.18s]
Summary: 1 package finished [7.15s]
Finally, we need to make ROS2 aware of the py_vision package in our local workspace:
# source ./install/local_setup.sh
We can confirm the presence of our new ROS2 package with the "ros2 pkg executables" command:
# ros2 pkg executables | grep py_vision
py_vision usbcam_publisher
py_vision usbcam_subscriber
py_vision webinar_demo
The package contains three nodes (a minimal sketch of a publisher node follows the list):
- usbcam_publisher : a ROS2 publisher node, capturing video
- usbcam_subscriber : a ROS2 subscriber node, displaying video
- webinar_demo : a ROS2 subscriber-publisher node, identifying hand signs and converting them to velocity commands
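As an illustration of what one of these nodes looks like, here is a minimal sketch of a ROS2 camera publisher written with rclpy. The camera index, frame rate, and relative topic name used here are assumptions; the actual usbcam_publisher implementation in the py_vision repository is more complete.

# Minimal sketch of a ROS2 camera publisher node (illustrative only;
# the real usbcam_publisher implementation lives in the py_vision repo).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class UsbCamPublisher(Node):
    def __init__(self):
        super().__init__('usbcam_publisher')
        self.publisher_ = self.create_publisher(Image, 'image_raw', 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(0)  # USB camera device index (assumed)
        self.timer = self.create_timer(1.0 / 30.0, self.timer_callback)  # ~30 fps (assumed)

    def timer_callback(self):
        ret, frame = self.cap.read()
        if ret:
            # Convert the OpenCV BGR frame to a sensor_msgs/Image and publish it
            self.publisher_.publish(self.bridge.cv2_to_imgmsg(frame, encoding='bgr8'))

def main(args=None):
    rclpy.init(args=args)
    node = UsbCamPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()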
Executing the video passthrough (option 1)
We start by launching a video passthrough, which serves as a starting example. This first version launches two nodes implemented in Python:
# ros2 launch py_vision usbcam_passthrough1_launch.py
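To give a rough idea of what a two-node launch description looks like, here is a hypothetical sketch. The node names are taken from the package executables listed earlier, but the exact contents of usbcam_passthrough1_launch.py in the py_vision repository may differ.

# Hypothetical sketch of a two-node passthrough launch file
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Capture node: publishes images from the USB camera
        Node(
            package='py_vision',
            executable='usbcam_publisher',
            name='usbcam_publisher',
        ),
        # Display node: subscribes to the images and shows them
        Node(
            package='py_vision',
            executable='usbcam_subscriber',
            name='usbcam_subscriber',
        ),
    ])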
This example corresponds to the demo that was built in the following hackster projects:
Executing the video passthrough (option 2)
We can also launch a second version of the video passthrough. This second version makes use of an existing package (v4l2_camera) to capture images:
# ros2 launch py_vision usbcam_passthrough2_launch.py
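For reference, a launch description for this variant could look something like the following hypothetical sketch. The v4l2_camera_node executable and its video_device / image_size parameters come from the v4l2_camera package; the device path and resolution shown here are assumptions, and the actual usbcam_passthrough2_launch.py may differ.

# Hypothetical sketch of the v4l2_camera-based passthrough launch file
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Capture node from the existing v4l2_camera package
        Node(
            package='v4l2_camera',
            executable='v4l2_camera_node',
            parameters=[{'video_device': '/dev/video0',   # device path (assumed)
                         'image_size': [640, 480]}],      # resolution (assumed)
        ),
        # Display node from py_vision
        Node(
            package='py_vision',
            executable='usbcam_subscriber',
        ),
    ])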
This example corresponds to the demo that was built in the webinar:
Executing the webinar demo
The final example implements a ROS2 subscriber-publisher node called "webinar_demo" that performs the following tasks (a skeleton of such a node is sketched after the list):
- subscribes to the "/image_raw" topic to capture images
- performs ASL classification using Vitis-AI
- publishes modified images to the "/vision/asl" topic
- publishes velocity commands to the "/turtle1/cmd_vel" topic
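To illustrate the structure of such a subscriber-publisher node, here is a hypothetical skeleton in Python. The Vitis-AI inference is reduced to a placeholder function, and the mapping from recognized letters to velocity commands is an assumption; the real webinar_demo node in py_vision implements the actual model and mapping.

# Skeleton of a subscriber-publisher node in the style of webinar_demo (illustrative only)
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from cv_bridge import CvBridge

class WebinarDemo(Node):
    def __init__(self):
        super().__init__('webinar_demo')
        self.bridge = CvBridge()
        self.sub = self.create_subscription(Image, 'image_raw', self.image_callback, 10)
        self.img_pub = self.create_publisher(Image, 'vision/asl', 10)
        self.vel_pub = self.create_publisher(Twist, 'turtle1/cmd_vel', 10)

    def classify_asl(self, frame):
        # Placeholder: the real node runs the ASL classification model on the DPU
        # through Vitis-AI and returns the detected hand sign.
        return 'L'

    def image_callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        letter = self.classify_asl(frame)
        # Republish the (annotated) image for visualization
        self.img_pub.publish(self.bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
        # Map the recognized hand sign to a velocity command (mapping assumed)
        cmd = Twist()
        if letter == 'L':
            cmd.angular.z = 1.0   # e.g. turn left
        elif letter == 'R':
            cmd.angular.z = -1.0  # e.g. turn right
        elif letter == 'F':
            cmd.linear.x = 1.0    # e.g. move forward
        self.vel_pub.publish(cmd)

def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(WebinarDemo())
    rclpy.shutdown()

if __name__ == '__main__':
    main()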
This example can be launched as follows:
# ros2 launch py_vision webinar_demo_launch.py
This example corresponds to the demo that was built in the webinar:
Conclusion
I hope this getting started guide inspires you to be creative with ROS2 on the ZUBoard and/or Ultra96-V2.
Don't forget to check out the following projects that describe how to add support for ROS2 in your PetaLinux 2022.2 projects:
Don't Have a Board?
If this project inspired you but you don't have a board, don't forget to apply for a free ZUBoard on Element14 through their RoadTest:
Acknowledgements
I want to thank Bryan Fletcher for his collaboration on the ZU1 Vision Webinar series:
I also want to thank Tom Curran for the DisplayPort support on ZUBoard via the DP-eMMC HSIO.
I have to mention that May 23 is World Turtle Day:
and May 23, 2023, is the launch date of the new ROS2 distribution: Iron Irwini.
I want to thank Joshua Ellingson for all the great "turtle" artwork that has been created for ROS.
2023/05/23
First version of the Controlling Robots with Vitis-AI getting started guide.