This project is part 4 of a 4-part series of projects, where we will progressively create a Vitis-AI and ROS2 enabled platform for the Ultra96-V2:
- Part 1 : Building the foundational designs
- Part 2 : Combining designs into a common platform
- Part 3 : Adding support for Vitis-AI
- Part 4 : Adding support for ROS2
The motivation of this series of projects is to enable users to create their own custom AI applications.
Introduction - Part IV
This project provides instructions on how to add ROS2 to a petalinux project.
A simple ROS graph will be implemented in python to perform a USB camera passthrough, as an introductory exercise.
A Brief History of ROS
The Robot Operating System (ROS) started at Stanford University as the Stanford Personal Robotics Program. Initiated as a personal project by Keenan Wyrobek and Eric Berger, it set out to define a common, re-usable infrastructure for robotics, in an effort to stop re-inventing the wheel.
Willow Garage took ownership of this initiative and started releasing distributions called ROS (Robot Operating System). In 2012, Willow Garage passed the torch to the Open Source Robotics Foundation.
The Open Source Robotics Foundation (OSRF) has been maintaining the ROS distributions since then. As ROS matured, known limitations prevented its widespread adoption for commercial use:
- single point of failure
- lack of security
- no real-time support
ROS2 was initiated in 2015 specifically to address those issues. ROS2 distributions started in 2018, and have been proliferating into commercial applications.
As can be seen, Foxy is at its end of life (EOL), and the next distribution is planned for May 23, 2023.
The current active distribution is Humble.
An even Briefer History of meta-ros
Although the ROS2 documentation mentions installation on Ubuntu distributions, it is also possible to install ROS2 in a petalinux project with the meta-ros yocto layer.
Up until the ROS2 Foxy distribution, LG Electronics had been maintaining the meta-ros layer. In January 2022, they unfortunately resigned:
https://discourse.ros.org/t/ros-2-tsc-meeting-january-20th-2022/23986
“LG has made a difficult decision to suspend our activities on SVL Simulator and Embedded Robotics. Hence, our engineering team has been directed to focus on other areas which are more strategically important to LG’s business and that unfortunately means that we will not be able to continue working on meta-ros and meta-ros-webos. Because of this, we are resigning from ROS 2 TSC for 2022.”
We thank them for their effort through all these years :)
Although there is no new "official" maintainer of the meta-ros layer, there are many forks of the meta-ros repo, with a lot of work being done by the community in order to support Humble.
Two of these repositories have captured my attention:
- the "honister-humble" branch from Victor Mayoral at Acceleration Robotics
- the "superflore/humble/2022-11-23" branch from Rob Woolley at Wind River
Note that Xilinx has a fork of this meta-ros layer that builds in petalinux 2022.2, which is based on the work from Victor Mayoral.
For my specific needs, I had more success with the repo from Rob Woolley.
I will provide instructions for both options in the next two sections:
- Option 1 : Xilinx provided meta-ros
- Option 2 : Rob Woolley provided meta-ros
Since the initial publication of this project on 2023/04/11, AMD-Xilinx has provided updates to their meta-ros repo. I have thus added a third option:
- Option 3 : local (AlbertaBeef) fork of meta-ros, with latest updates from AMD-Xilinx, plus additional updates
As you can see, there is a lot of activity on the meta-ros layer ...
Adding support for ROS2 (Option 1)
The first alternative, and the simplest, is to use the meta-ros repository provided by Xilinx:
AMD-Xilinx often provides groups of packages that are named with the "packagegroup-" prefix. Within these packagegroups, we can find:
- packagegroup-petalinux-ros
- packagegroup-petalinux-ros-dev
These packagegroups are for runtime and development, and the exact contents can be discovered here:
We notice that some packages, such as cv-bridge, are not included, so we will want to install those as well.
- cv-bridge
- cv-bridge-dev
I would also have liked to add the following packages, but they do not build with this meta-ros repository:
- v4l2-camera
- v4l2-camera-dev
- turtlesim
- turtlesim-dev
Edit the following file:
~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2/project-spec/meta-avnet/recipes-core/images/petalinux-image-minimal.bbappend
Add the previously mentioned packages to the "IMAGE_INSTALL:append:u96v2-sbc" entry:
IMAGE_INSTALL:append:u96v2-sbc = "\
...
packagegroup-petalinux-jupyter \
packagegroup-petalinux-ros \
packagegroup-petalinux-ros-dev \
cv-bridge \
cv-bridge-dev \
"
Next, rebuild the petalinux project:
$ cd ~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2
$ petalinux-build
Once complete, reprogram the micro-SD card, and reboot the Ultra96-V2.
Adding support for ROS2 (Option 2)
An alternative meta-ros is available from Rob Woolley:
We will still use the following packagegroups from AMD-Xilinx, but will need to make some minor edits:
- packagegroup-petalinux-ros
- packagegroup-petalinux-ros-dev
We notice that some packages, such as cv-bridge, are not included, so we will want to install those as well.
- cv-bridge
- cv-bridge-dev
- v4l2-camera
- v4l2-camera-dev
- turtlesim
- turtlesim-dev
Edit the following file:
~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2/project-spec/meta-avnet/recipes-core/images/petalinux-image-minimal.bbappend
Add the previously mentioned packages to the "IMAGE_INSTALL:append:u96v2-sbc" entry:
IMAGE_INSTALL:append:u96v2-sbc = "\
...
packagegroup-petalinux-jupyter \
packagegroup-petalinux-ros \
packagegroup-petalinux-ros-dev \
cv-bridge \
cv-bridge-dev \
v4l2-camera \
v4l2-camera-dev \
turtlesim \
turtlesim-dev \
"
Replace the default meta-ros repo with the one from Rob Woolley, as follows:
$ cd components/yocto/layers
$ rm -r meta-ros
$ git clone -b "superflore/humble/2022-11-23" https://github.com/robwoolley/meta-ros
$ cd -
Edit the packagegroup-petalinux-ros package definition:
~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2/components/yocto/layers/meta-petalinux/recipes-core/packagegroups/packagegroup-petalinux-ros.bb
And remove the following packages:
- byobu
- rqt-runtime-monitor
Next, rebuild the petalinux project:
$ cd ~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2
$ petalinux-build
Once complete, reprogram the micro-SD card, and reboot the Ultra96-V2.
Adding support for ROS2 (Option 3)
AMD-Xilinx has been busy updating and improving their meta-ros layer. With this latest version, the "turtlesim" package now builds successfully. Also, I was able to create a fix for the "v4l2-camera" package in my local fork of the AMD-Xilinx meta-ros repository.
We will still use the following packagegroups from AMD-Xilinx, but will need to make some minor edits:
- packagegroup-petalinux-ros
- packagegroup-petalinux-ros-dev
We notice that some packages, such as cv-bridge, are not included, so we will want to install those as well.
- cv-bridge
- cv-bridge-dev
- v4l2-camera
- v4l2-camera-dev
- turtlesim
- turtlesim-dev
Edit the following file:
~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2/project-spec/meta-avnet/recipes-core/images/petalinux-image-minimal.bbappend
Add the previously mentioned packages to the "IMAGE_INSTALL:append:u96v2-sbc" entry:
IMAGE_INSTALL:append:u96v2-sbc = "\
...
packagegroup-petalinux-jupyter \
packagegroup-petalinux-ros \
packagegroup-petalinux-ros-dev \
cv-bridge \
cv-bridge-dev \
v4l2-camera \
v4l2-camera-dev \
turtlesim \
turtlesim-dev \
"
Replace the default meta-ros repo with my fork (AlbertaBeef), as follows:
$ cd components/yocto/layers
$ rm -r meta-ros
$ git clone -b "rel-v2022.2" https://github.com/AlbertaBeef/meta-ros
$ cd -
Edit the packagegroup-petalinux-ros package definition:
~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2/components/yocto/layers/meta-petalinux/recipes-core/packagegroups/packagegroup-petalinux-ros.bb
Change the following two defines:
...
RDEPENDS:${PN}-demo:aarch64 = "\
${ROS_BASE_PACKAGES} \
${ROS_DEMO_PACKAGES} \
"
RDEPENDS:${PN}-control:aarch64 = "\
${ROS_BASE_PACKAGES} \
${ROS_CONTROL_PACKAGES} \
"
...
Next, rebuild the petalinux project:
$ cd ~/Avnet_2022_2/petalinux/u96v2_sbc_base_2022_2
$ petalinux-build
Once complete, reprogram the micro-SD card, and reboot the Ultra96-V2.
Implementing a video passthrough as a publisher/subscriber
In ROS2, an application is called a graph.
This graph consists of several actors called nodes, which communicate with topics, using publisher-subscriber communication.
A simple ROS2 graph could consist of a publisher node and a subscriber node, as shown below:
The publisher node publishes messages to a defined topic, and the subscriber node subscribes to this topic, in order to receive these messages.
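As a quick illustration (this is my own minimal sketch, separate from the camera passthrough that follows, assuming the std_msgs package is available), a publisher and subscriber pair in rclpy can look like this, with both nodes spun in a single process:
import rclpy
from rclpy.node import Node
from rclpy.executors import SingleThreadedExecutor
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub_ = self.create_publisher(String, 'chatter', 10)
        self.timer_ = self.create_timer(1.0, self.tick)

    def tick(self):
        # Publish a simple text message once per second
        msg = String()
        msg.data = 'hello'
        self.pub_.publish(msg)

class Listener(Node):
    def __init__(self):
        super().__init__('listener')
        self.sub_ = self.create_subscription(String, 'chatter', self.callback, 10)

    def callback(self, msg):
        # Log every message received on the 'chatter' topic
        self.get_logger().info('received: %s' % msg.data)

def main(args=None):
    rclpy.init(args=args)
    talker = Talker()
    listener = Listener()
    # Spin both nodes in one process, for illustration only
    executor = SingleThreadedExecutor()
    executor.add_node(talker)
    executor.add_node(listener)
    try:
        executor.spin()
    finally:
        rclpy.shutdown()

if __name__ == '__main__':
    main()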
In order to implement a video passthrough, we will use the following pre-defined message:
https://docs.ros2.org/foxy/api/sensor_msgs/msg/Image.html
We will also be using the OpenCV library, along with the cv-bridge package to convert from OpenCV images to ROS2 messages.
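As a quick illustration of what cv-bridge does for us (assuming the cv-bridge and opencv packages built above are installed), the conversion in both directions looks like this; the image file name is only a placeholder:
import cv2
from cv_bridge import CvBridge

bridge = CvBridge()

# OpenCV image (numpy BGR array) -> ROS2 sensor_msgs/Image
frame = cv2.imread('test.jpg')    # placeholder image, any BGR frame works
ros_msg = bridge.cv2_to_imgmsg(frame, encoding='bgr8')

# ROS2 sensor_msgs/Image -> OpenCV image
frame_back = bridge.imgmsg_to_cv2(ros_msg, desired_encoding='bgr8')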
Creating a workspace
We start by initializing our ROS2 environment variables:
root@u96v2-sbc-2022-2:~# source /usr/bin/ros_setup.sh
Notice which environment variables were defined:
root@u96v2-sbc-2022-2:~# set | grep ROS
ROS_DISTRO=humble
ROS_LOCALHOST_ONLY=0
ROS_PYTHON_VERSION=3
ROS_VERSION=2
Next, we create a workspace for our new package:
root@u96v2-sbc-2022-2:~# mkdir -p ~/vision_ws/src
root@u96v2-sbc-2022-2:~# cd ~/vision_ws/src
Then we create a package using the python template:
root@u96v2-sbc-2022-2:~/vision_ws/src# ros2 pkg create --build-type ament_python py_vision
going to create a new package
package name: py_vision
destination directory: /home/root/vision_ws/src
package format: 3
version: 0.0.0
description: TODO: Package description
maintainer: ['root <root@todo.todo>']
licenses: ['TODO: License declaration']
build type: ament_python
dependencies: []
creating folder ./py_vision
creating ./py_vision/package.xml
creating source folder
creating folder ./py_vision/py_vision
creating ./py_vision/setup.py
creating ./py_vision/setup.cfg
creating folder ./py_vision/resource
creating ./py_vision/resource/py_vision
creating ./py_vision/py_vision/__init__.py
creating folder ./py_vision/test
creating ./py_vision/test/test_copyright.py
creating ./py_vision/test/test_flake8.py
creating ./py_vision/test/test_pep257.py
[WARNING]: Unknown license 'TODO: License declaration'. This has been set in the package.xml, but no LICENSE file has been created.
It is recommended to use one of the ament license identitifers:
Apache-2.0
BSL-1.0
BSD-2.0
BSD-2-Clause
BSD-3-Clause
GPL-3.0-only
LGPL-3.0-only
MIT
MIT-0
Before proceeding, you can edit the generated package metadata, such as the following (a setup.py sketch follows this list):
- description
- maintainer
- licenses
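These fields appear in both package.xml and setup.py. For example, in setup.py the generated placeholders can be replaced along these lines (the values below are only illustrative):
setup(
    name=package_name,
    version='0.0.0',
    ...
    maintainer='your name',
    maintainer_email='you@example.com',
    description='USB camera passthrough example for ROS2',
    license='MIT',
    ...
)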
Creating a publisher for video
The first node we will create is our publisher node.
Create a new python script in the following location:
~/vision_ws/src/py_vision/py_vision/usbcam_publisher.py
With the following content:
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class USBImagePublisher(Node):
    def __init__(self):
        super().__init__('usbcam_publisher')
        self.publisher_ = self.create_publisher(Image, 'usb_camera/image', 10)
        self.timer_ = self.create_timer(0.1, self.publish_image)
        self.bridge_ = CvBridge()

        # Open the camera
        self.cap = cv2.VideoCapture(0)

        # Check if the camera is opened correctly
        if not self.cap.isOpened():
            self.get_logger().error('Could not open USB camera')
            return

        # Set the resolution
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    def publish_image(self):
        # Read an image from the camera
        ret, frame = self.cap.read()

        # Skip this cycle if no frame could be read
        if not ret:
            self.get_logger().warning('Could not read frame from USB camera')
            return

        # Convert the image to a ROS2 message
        msg = self.bridge_.cv2_to_imgmsg(frame, encoding='bgr8')
        msg.header.stamp = self.get_clock().now().to_msg()

        # Publish the message
        self.publisher_.publish(msg)

def main(args=None):
    rclpy.init(args=args)
    node = USBImagePublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
NOTE : This code was initially generated by ChatGPT with the following prompt : "write python code for a ROS2 publisher for usb camera". However, the code was not using cv-bridge, and was calling cv2.VideoCapture(0) at each iteration, so I modified it.
Make certain that the declared name is 'usbcam_publisher' in the python script:
super().__init__('usbcam_publisher')
We need to explicitly add this python script as a node in our py_vision package, by modifying the following setup script:
~/vision_ws/src/py_vision/setup.py
from setuptools import setup

package_name = 'py_vision'

setup(
    ...
    entry_points={
        'console_scripts': [
            'usbcam_publisher = py_vision.usbcam_publisher:main',
        ],
    },
)
We can see that the code is using the opencv, cv-bridge, and sensor_msgs python libraries, so we need to make our package aware of this by editing the following file:
~/vision_ws/src/py_vision/package.xml
...
<package format="3">
  <name>py_vision</name>
  ...
  <exec_depend>rclpy</exec_depend>
  <exec_depend>std_msgs</exec_depend>
  <exec_depend>sensor_msgs</exec_depend>
  <exec_depend>cv_bridge</exec_depend>
  <exec_depend>opencv</exec_depend>
  ...
</package>
We can build our package using the colcon build utility:
root@u96v2-sbc-2022-2:~/vision_ws# colcon build
Starting >>> py_vision
Finished <<< py_vision [6.50s]
Summary: 1 package finished [7.51s]
Finally, we can query our system for this new package (executable). Note that we need to run the local_setup.sh script, as shown below:
root@u96v2-sbc-2022-2:~/vision_ws# ros2 pkg executables | grep py_vision
root@u96v2-sbc-2022-2:~/vision_ws# source ./install/local_setup.sh
root@u96v2-sbc-2022-2:~/vision_ws# ros2 pkg executables | grep py_vision
py_vision usbcam_publisher
Running the node is as simple as invoking it with "ros2 run...":
root@u96v2-sbc-2022-2:~/vision_ws# ros2 run py_vision usbcam_publisher
Creating a subscriber for video
The second node we will create is our subscriber node.
Create a new python script in the following location:
~/vision_ws/src/py_vision/py_vision/usbcam_subscriber.py
With the following content:
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class VideoSubscriber(Node):
    def __init__(self):
        super().__init__('usbcam_subscriber')
        self.subscription_ = self.create_subscription(Image, 'usb_camera/image', self.callback, 10)
        self.cv_bridge_ = CvBridge()

    def callback(self, msg):
        # Convert the ROS2 message to an OpenCV image
        frame = self.cv_bridge_.imgmsg_to_cv2(msg, 'bgr8')

        # Display the image
        cv2.imshow('Video Stream', frame)
        cv2.waitKey(1)

def main(args=None):
    rclpy.init(args=args)
    node = VideoSubscriber()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
NOTE : This code was generated by ChatGPT with the following prompt : "write python code for a ROS2 subscriber for video"
Make certain that the declared name is 'usbcam_subscriber', and the topic name is "usb_camera/image" in the python script:
super().__init__('usbcam_subscriber')
self.subscription_ = self.create_subscription(Image, 'usb_camera/image', self.callback, 10)
We need to explicitly add this python script as a node in our py_vision package, by modifying the following setup script:
~/vision_ws/src/py_vision/setup.py
from setuptools import setup

package_name = 'py_vision'

setup(
    ...
    entry_points={
        'console_scripts': [
            'usbcam_publisher = py_vision.usbcam_publisher:main',
            'usbcam_subscriber = py_vision.usbcam_subscriber:main',
        ],
    },
)
We can see that the code is using the opencv, cv-bridge, and sensor_msgs python libraries, so we need to make our package aware of this by editing the following file:
~/vision_ws/src/py_vision/package.xml
...
<package format="3">
  <name>py_vision</name>
  ...
  <exec_depend>rclpy</exec_depend>
  <exec_depend>std_msgs</exec_depend>
  <exec_depend>sensor_msgs</exec_depend>
  <exec_depend>cv_bridge</exec_depend>
  <exec_depend>opencv</exec_depend>
  ...
</package>
We can build our package using the colcon build utility:
root@u96v2-sbc-2022-2:~/vision_ws# colcon build
Starting >>> py_vision
Finished <<< py_vision [6.50s]
Summary: 1 package finished [7.51s]
Finally, we can query our system for this new package (executable).
root@u96v2-sbc-2022-2:~/vision_ws# source ./install/local_setup.sh
root@u96v2-sbc-2022-2:~/vision_ws# ros2 pkg executables | grep py_vision
py_vision usbcam_publisher
py_vision usbcam_subscriber
Running the node is as simple as invoking it with "ros2 run...":
root@u96v2-sbc-2022-2:~/vision_ws# ros2 run py_vision usbcam_subscriber
If the usbcam_publisher node is running in parallel, you will get the following output on MobaXterm.
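Rather than starting the two nodes in separate consoles, they can also be started together with a ROS2 python launch file. The following is a minimal sketch, assuming the launch and launch_ros packages are present in the image; the file name and location are my own choice, and setup.py would need a data_files entry to install the launch file:
# Hypothetical launch file: ~/vision_ws/src/py_vision/launch/usbcam_passthrough_launch.py
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Start the USB camera publisher
        Node(package='py_vision', executable='usbcam_publisher', name='usbcam_publisher'),
        # Start the display subscriber
        Node(package='py_vision', executable='usbcam_subscriber', name='usbcam_subscriber'),
    ])
Once installed with the package, it could be started with something like "ros2 launch py_vision usbcam_passthrough_launch.py".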
I hope this tutorial will help you to get started with ROS2 quickly on the Ultra96-V2.
I recommend diving further into these examples as follows:
- add parameters to the usbcam_publisher node for image width, image height, camera id, fps, etc. (see the sketch after this list)
- implement the same nodes in C++
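For the first suggestion, a hedged sketch of what parameter support could look like inside the publisher's __init__ method is shown below (the parameter names and defaults are my own choice):
# Declare parameters with default values (names are illustrative)
self.declare_parameter('camera_id', 0)
self.declare_parameter('width', 640)
self.declare_parameter('height', 480)
self.declare_parameter('fps', 10.0)

# Read the parameter values
camera_id = self.get_parameter('camera_id').value
width = self.get_parameter('width').value
height = self.get_parameter('height').value
fps = self.get_parameter('fps').value

# Use them to configure the camera and the publishing timer
self.cap = cv2.VideoCapture(camera_id)
self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
self.timer_ = self.create_timer(1.0 / fps, self.publish_image)
The parameters could then be overridden at the command line, for example: ros2 run py_vision usbcam_publisher --ros-args -p camera_id:=1 -p fps:=15.0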
If you would like to have the pre-built petalinux BSPs or SDcard images for these designs, please let me know in the comments below.
Pre-Built SD image
The following link provides a pre-built image for the Ultra96-V2:
- http://avnet.me/avnet-u96v2-2022.2-sdimage
(2023/05/10, md5sum = de17c497334da903790d702a5fae8f51)
I want to thank Victor Mayoral for all his work on the meta-ros layer and on hardware acceleration for robotics.
Revision History
2023/05/10
Update SD image with the following changes:
- added support for DisplayPort
- added "avnet-u96v2-dualcam-dpu" app with DPU B1152
2023/04/20
Add instructions for alternate (Option 3) meta-ros repository from local fork.
https://github.com/AlbertaBeef/meta-ros/rel-v2022.2
Publish pre-built SD image.
2023/04/14
Add instructions for alternate (Option 2) meta-ros repository from Rob Woolley.
https://github.com/robwoolley/meta-ros/tree/superflore/humble/2022-11-23
2023/04/11
Preliminary Version