My Kria KR260 recently arrived in the mail, which means it's time to install the latest versions of Vivado, Vitis, and PetaLinux, since support for the KR260 starts with v2022.1. Along with this, the official Ubuntu distribution from Canonical for the KR260 is currently 22.04 LTS, and since the Kria Robotics Stack (KRS) workflow requires cross compiling an Ubuntu 22.04 image to run on the ARM cores of the Kria's MPSoC, I decided to buy a new external hard drive and create a new Ubuntu 22.04 virtual machine for my host environment.
I've covered before how I create my VMs on external hard drives (here) from when I last created my Ubuntu 20.04 VM. The process is exactly the same for Ubuntu 22.04, so I won't write it out again. I'll just pick up with the prep of the Ubuntu 22.04 environment and the installation of the system dependencies required for the v2022.1 Xilinx tools.
System Dependencies
Start by enabling the 32-bit architecture in the system, and set the shell to bash by answering "No" when asked whether to use dash as the default system shell (on a fresh install of Ubuntu, /bin/sh points to dash):
~$ sudo dpkg --add-architecture i386
~$ sudo dpkg-reconfigure dash
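To confirm the shell change took, a quick sanity check helps (the `[[ ]]` test below is just an illustrative bash-only construct; PetaLinux's build scripts rely on syntax like this that dash rejects):

```shell
# Show what /bin/sh resolves to after dpkg-reconfigure (should end in bash):
readlink -f /bin/sh

# Illustrate a bash-only pattern test that dash cannot parse:
bash -c '[[ "petalinux" == peta* ]] && echo "bash syntax OK"'
```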
Next, install the following package dependencies. I've put them in the most convenient copy+paste order here (you're welcome):
~$ sudo apt-get install iproute2 make libncurses5-dev tftpd libselinux1 wget diffstat chrpath socat tar unzip gzip tofrodos
~$ sudo apt-get install debianutils iputils-ping libegl1-mesa libsdl1.2-dev pylint python3 python2 cpio tftpd gnupg zlib1g:i386 haveged perl
~$ sudo apt-get install lib32stdc++6 libgtk2.0-0:i386 libfontconfig1:i386 libx11-6:i386 libxext6:i386 libxrender1:i386 libsm6:i386
~$ sudo apt-get install xinetd gawk gcc net-tools ncurses-dev openssl libssl-dev flex bison xterm autoconf libtool texinfo zlib1g-dev
~$ sudo apt-get install gcc-multilib build-essential automake screen putty pax g++ python3-pip xz-utils python3-git python3-jinja2 python3-pexpect
~$ sudo apt-get install liberror-perl mtd-utils xtrans-dev libxcb-randr0-dev libxcb-xtest0-dev libxcb-xinerama0-dev libxcb-shape0-dev libxcb-xkb-dev
~$ sudo apt-get install openssh-server util-linux sysvinit-utils google-perftools
~$ sudo apt-get install libncurses5 libncurses5-dev libncursesw5-dev libncurses5:i386 libtinfo5
~$ sudo apt-get install libstdc++6:i386 libgtk2.0-0:i386 dpkg-dev:i386
~$ sudo apt-get install ocl-icd-libopencl1 opencl-headers ocl-icd-opencl-dev
Next, the TFTP service for PetaLinux needs to be created:
~$ sudo gedit /etc/xinetd.d/tftp
Configure it as follows:
service tftp
{
    protocol    = udp
    port        = 69
    socket_type = dgram
    wait        = yes
    user        = nobody
    server      = /usr/sbin/in.tftpd
    server_args = /tftpboot
    disable     = no
}
Then create the directory for the files to be transferred to/from and give it the appropriate permissions:
~$ sudo mkdir /tftpboot
~$ sudo chmod -R 777 /tftpboot
~$ sudo chown -R nobody /tftpboot
~$ sudo /etc/init.d/xinetd stop
~$ sudo /etc/init.d/xinetd start
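If you want to sanity-check the permission bits without touching the real /tftpboot, here's a side-effect-free sketch of the same chmod pattern on a throwaway directory (the mktemp path is just a stand-in):

```shell
# Recreate the /tftpboot permission setup on a temporary directory:
dir=$(mktemp -d)
chmod 777 "$dir"        # world-readable/writable, as the TFTP server expects
stat -c '%a' "$dir"     # prints: 777
rm -rf "$dir"
```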
Update and upgrade the system for good measure:
~$ sudo apt update
~$ sudo apt upgrade
Finally, add the user to the dialout group so Vitis/Vivado can access the USB ports of the computer:
~$ sudo adduser $USER dialout
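The group change only takes effect in a new login session. A quick way to check afterwards (the grep pattern simply matches the literal group name):

```shell
# List the current user's groups; dialout should appear after re-login:
id -nG

# Scriptable membership check:
if id -nG | tr ' ' '\n' | grep -qx dialout; then
    echo "in dialout"
else
    echo "not yet in dialout (log out and back in first)"
fi
```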
Download Installation Packages
Start by downloading the Vitis (SW Developer) installation package, as it is a superset that also includes Vivado.
Even though it's quite the space hog, I personally recommend the unified single-file download (SFD) installer so you're not at the mercy of your internet speed during the actual installation. I also like to move it to my spare backup drive for future reuse so I don't have to wait on a 70GB+ download again (the self-extracting web installer doesn't fully avoid that download either, so saving the SFD installer to a spare drive is the most efficient method I've found so far).
And yes, I know, Vitis and Vivado take up quite a bit of hard drive space in their current versions. However, given all of the libraries and packages necessary for AI development and FPGA acceleration at the moment, I'm not sure if there is a way to avoid this.
Vitis 2022.1 Installation
Once downloaded, extract the installation folder and run the installer, xsetup, within it:
~$ cd ./Downloads/Xilinx_Unified_2022.1_0420_0327/
~/Downloads/Xilinx_Unified_2022.1_0420_0327$ sudo ./xsetup
Select Vitis as the product to install as it's the superset option that will include Vivado (but not PetaLinux, that's a separate install in a later step).
By default, the installer wants to install Vitis and Vivado to /tools/Xilinx. You can choose elsewhere, but I've found that a lot of scripts from vendors such as Avnet assume this /tools/Xilinx default directory when building their BSPs, so I highly recommend keeping it.
At the end of the installation, a prompt will pop up to run a script that fills in any missing libraries for the Versal ACAP tools. I don't have any current plans to use Versal, but I'd probably forget about this script if I ever did, so just for the sake of completeness:
~/Downloads/Xilinx_Unified_2022.1_0420_0327$ cd /tools/Xilinx/Vitis/2022.1/scripts/
/tools/Xilinx/Vitis/2022.1/scripts$ sudo ./installLibs.sh
Next, the various Xilinx programmer cable drivers need to be installed. Be sure to disconnect any that are currently connected to the host machine prior to running the installation script:
~$ cd /tools/Xilinx/Vivado/2022.1/data/xicom/cable_drivers/lin64/install_script/install_drivers/
/tools/Xilinx/Vivado/2022.1/data/xicom/cable_drivers/lin64/install_script/install_drivers$ sudo ./install_drivers
Finally, to test the installation, I like to launch both Vivado and Vitis:
~$ source /tools/Xilinx/Vivado/2022.1/settings64.sh
~$ vivado
~$ source /tools/Xilinx/Vitis/2022.1/settings64.sh
~$ vitis
PetaLinux 2022.1 Installation
The PetaLinux installation is a bit more manual than the Vitis/Vivado installation. While you may have noticed that PetaLinux was an option in the Product Selection window of the Vitis installer, all that option does is download the PetaLinux installer package.
However, it downloads the PetaLinux installer package to a location you can't conveniently install from, so I find it easier to download it straight from the Xilinx downloads page, since I'd have to move it to my Downloads folder anyway.
Since PetaLinux's installer doesn't automatically create the installation directory and give it the appropriate permissions for us like the Vitis installer does, we have to do it manually prior to running the installer. I recommend installing it where you installed Vitis and Vivado, following the same directory setup:
~$ sudo mkdir -p /tools/Xilinx/PetaLinux/2022.1/
~$ sudo chmod -R 755 /tools/Xilinx/PetaLinux/2022.1/
~$ sudo chown -R $USER:$USER /tools/Xilinx/PetaLinux/2022.1/
Then give the installer the appropriate permissions and run it:
~$ sudo chmod 777 ./Downloads/petalinux-v2022.1-04191534-installer.run
~$ ./Downloads/petalinux-v2022.1-04191534-installer.run --dir /tools/Xilinx/PetaLinux/2022.1/
When the license agreement appears in the command line, use the up and down arrow keys if you'd like to scroll through and read it. Then press q to exit and Y to agree.
Once the installation is complete, source the PetaLinux tools and run a command to verify:
~$ source /tools/Xilinx/PetaLinux/2022.1/settings.sh
~$ petalinux-util --help
ROS 2 and Gazebo Installation
To design robotics applications on the Kria KR260, the appropriate versions of ROS 2 and Gazebo need to be installed prior to installing KRS 1.0. The Kria Robotics Stack (KRS) is a superset of ROS 2 that allows ROS 2 packages to be built utilizing hardware acceleration on AMD-Xilinx FPGAs, and it also integrates the Gazebo simulator.
I've already covered the package dependencies in an earlier step, so now we can go directly to installing the ROS 2 Humble Hawksbill distribution:
~$ sudo apt update && sudo apt install curl gnupg lsb-release
~$ sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
~$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(source /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
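For reference, on a 64-bit Ubuntu 22.04 (jammy) host, the tee command above should produce a /etc/apt/sources.list.d/ros2.list containing a single line along these lines:

```shell
deb [arch=amd64 signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu jammy main
```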
Update and upgrade the system with the new repositories for ROS 2 added:
~$ sudo apt update
~$ sudo apt upgrade
Then install the ROS 2 Humble distribution (be sure to use the -full variant; I'll explain why in a bit):
~$ sudo apt install ros-humble-desktop-full
Test by running the following in one terminal window:
~$ source /opt/ros/humble/setup.bash
~$ ros2 run demo_nodes_cpp talker
Open a second terminal window and run:
~$ source /opt/ros/humble/setup.bash
~$ ros2 run demo_nodes_cpp listener
And you'll see the ROS demo nodes talking to each other.
Install Gazebo next. Since ROS Humble is compatible with the latest version of Gazebo, nothing specific needs to be done outside of running the default Gazebo install script:
~$ curl -sSL http://get.gazebosim.org | sh
Then launch the GUI to verify the installation:
~$ gazebo
And finally a few other KRS dependencies to install after ROS 2 has been installed:
~$ sudo apt-get install git ocl-icd-* python3-vcstool python3-colcon-common-extensions python3-colcon-mixin kpartx u-boot-tools pv qemu-user-static
(See the note below.) I also found that I had to manually install the ros-humble-camera-info-manager and ros-humble-camera-info-manager-dbgsym packages. This is an open issue in the KRS GitHub repo, so I'll link it here to keep an eye on it for whenever a solution is found.
~$ sudo apt install ros-humble-camera-info-manager*
NOTE: I discovered that I had not installed the full ROS desktop package. I had initially installed only the regular version, which doesn't include the camera-info-manager packages:
~$ sudo apt install ros-humble-desktop
But when I went back and installed the full version, it resolved all the package dependency issues I was having:
~$ sudo apt install ros-humble-desktop-full
KRS 1.0 Installation for Kria KV260
At the moment, there isn't a great way to build for both the KR260 and the KV260 in the same KRS workspace, though I think there should be a way to do it eventually since KRS is still in its very early stages. In the meantime, it doesn't hurt to keep separate KRS workspaces for the KV260 and KR260, so I'll go over building the KV260 KRS workspace first. This is completely independent of the KR260 workspace, so feel free to skip this section if you don't plan to use KRS with the Kria KV260 board.
Create a directory for the KV260 KRS workspace:
~$ mkdir -p ~/krs1.0_kv260_ws/src
~$ cd ./krs1.0_kv260_ws/
Create the repository file:
~/krs1.0_kv260_ws$ gedit krs_humble.repos
Then copy the following into it to pull in the KRS Humble repositories along with the KV260 firmware:
repositories:
  perception/image_pipeline:
    type: git
    url: https://github.com/ros-acceleration/image_pipeline
    version: ros2
  tracing/tracetools_acceleration:
    type: git
    url: https://github.com/ros-acceleration/tracetools_acceleration
    version: humble
  firmware/acceleration_firmware_kv260:
    type: zip
    url: https://www.xilinx.com/bin/public/openDownload?filename=acceleration_firmware_kv260.zip
  acceleration/adaptive_component:
    type: git
    url: https://github.com/ros-acceleration/adaptive_component
    version: humble
  acceleration/ament_acceleration:
    type: git
    url: https://github.com/ros-acceleration/ament_acceleration
    version: humble
  acceleration/ament_vitis:
    type: git
    url: https://github.com/ros-acceleration/ament_vitis
    version: humble
  acceleration/colcon-hardware-acceleration:
    type: git
    url: https://github.com/colcon/colcon-hardware-acceleration
    version: main
  acceleration/ros2_kria:
    type: git
    url: https://github.com/ros-acceleration/ros2_kria
    version: main
  acceleration/ros2acceleration:
    type: git
    url: https://github.com/ros-acceleration/ros2acceleration
    version: humble
  acceleration/vitis_common:
    type: git
    url: https://github.com/ros-acceleration/vitis_common
    version: humble
  acceleration/acceleration_examples:
    type: git
    url: https://github.com/ros-acceleration/acceleration_examples
    version: main
Then import the repos:
~/krs1.0_kv260_ws$ vcs import src --recursive < krs_humble.repos
Source the Vitis and ROS tools and add /usr/bin to the path:
~/krs1.0_kv260_ws$ source /tools/Xilinx/Vitis/2022.1/settings64.sh
~/krs1.0_kv260_ws$ source /opt/ros/humble/setup.bash
~/krs1.0_kv260_ws$ export PATH="/usr/bin":$PATH
This puts the colcon command set on the path, allowing the base KRS workspace to be built:
~/krs1.0_kv260_ws$ colcon build --merge-install
Test by seeing if you can source the built workspace:
~/krs1.0_kv260_ws$ source install/setup.bash
Select the KV260 as the build target and run the list command to verify that it has indeed been selected:
~/krs1.0_kv260_ws$ colcon acceleration select kv260
~/krs1.0_kv260_ws$ colcon acceleration list
Build the vadd_publisher accelerated ROS node for the KV260 as a test:
~/krs1.0_kv260_ws$ colcon build --build-base=build-kv260 --install-base=install-kv260 --merge-install --mixin kv260 --packages-select ament_acceleration ament_vitis vadd_publisher
You can then run the following command, which builds a vanilla Linux kernel, calls PetaLinux to build a root filesystem for the KV260, and outputs an SD card image with the vadd_publisher accelerated node:
~/krs1.0_kv260_ws$ colcon acceleration linux vanilla --install-dir install-kv260
See my previous KRS post for details of deploying the SD card image on the KV260.
KRS 1.0 Installation for Kria KR260
The KR260 KRS workflow is quite different from the KV260 KRS workflow: instead of building a vanilla Linux kernel and calling PetaLinux to build a root filesystem, it cross compiles an Ubuntu Desktop 22.04 image that also supports building binaries directly on the target (KR260).
This necessitates that the GNU C compiler PetaLinux uses, gcc-multilib, be swapped out for the GNU C compilers needed for cross compiling an Ubuntu 22.04 image: gcc-aarch64-linux-gnu and g++-aarch64-linux-gnu. This means you cannot use PetaLinux while building a KRS workspace targeting the KR260. Refer to the last section, which explains how to switch back and forth between the two C compilers so you aren't constrained in your host environment.
Note: it's also very important that the following steps are not run in a terminal window that had the PetaLinux tools or any of the other Xilinx tools sourced prior to switching out the GNU C compilers.
Before creating the KR260 KRS workspace, switch out the GNU C compiler:
~$ sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
Then create a new directory for the KR260 KRS workspace:
~$ mkdir -p ./krs1.0_kr260_ws/src
~$ cd ./krs1.0_kr260_ws/
Create the repository file:
~/krs1.0_kr260_ws$ gedit krs_humble.repos
And copy the following into it to pull in the KRS Humble repositories along with the KR260 firmware:
repositories:
  perception/image_pipeline:
    type: git
    url: https://github.com/ros-acceleration/image_pipeline
    version: ros2
  tracing/tracetools_acceleration:
    type: git
    url: https://github.com/ros-acceleration/tracetools_acceleration
    version: humble
  firmware/acceleration_firmware_kr260:
    type: zip
    url: https://github.com/ros-acceleration/acceleration_firmware_kr260/releases/download/v1.0.0/acceleration_firmware_kr260.zip
  acceleration/adaptive_component:
    type: git
    url: https://github.com/ros-acceleration/adaptive_component
    version: humble
  acceleration/ament_acceleration:
    type: git
    url: https://github.com/ros-acceleration/ament_acceleration
    version: humble
  acceleration/ament_vitis:
    type: git
    url: https://github.com/ros-acceleration/ament_vitis
    version: humble
  acceleration/colcon-hardware-acceleration:
    type: git
    url: https://github.com/colcon/colcon-hardware-acceleration
    version: main
  acceleration/ros2_kria:
    type: git
    url: https://github.com/ros-acceleration/ros2_kria
    version: main
  acceleration/ros2acceleration:
    type: git
    url: https://github.com/ros-acceleration/ros2acceleration
    version: humble
  acceleration/vitis_common:
    type: git
    url: https://github.com/ros-acceleration/vitis_common
    version: humble
  acceleration/acceleration_examples:
    type: git
    url: https://github.com/ros-acceleration/acceleration_examples
    version: main
Import the repos:
~/krs1.0_kr260_ws$ vcs import src --recursive < krs_humble.repos
Source the Vitis and ROS tools and add /usr/bin to the path:
~/krs1.0_kr260_ws$ source /tools/Xilinx/Vitis/2022.1/settings64.sh
~/krs1.0_kr260_ws$ source /opt/ros/humble/setup.bash
~/krs1.0_kr260_ws$ export PATH="/usr/bin":$PATH
According to the KRS install instructions from Xilinx, the colcon build command is supposed to ask for your sudo password at some point while building the KR260 firmware. However, that never happened for me, so the build would hang indefinitely or error out in a number of odd ways. My cheat for this was to run a simple command like ls with sudo right before running colcon build so the sudo credentials are already cached. Then source the built KRS workspace in the environment:
~/krs1.0_kr260_ws$ sudo ls -la
[enter sudo password]
~/krs1.0_kr260_ws$ colcon build --merge-install
~/krs1.0_kr260_ws$ source install/setup.bash
Right now, the Ubuntu 22.04 sysroot can't cross compile using Python, so the current workaround is to create a symbolic link on the host machine pointing to the ARM 64-bit Python library inside the Ubuntu 22.04 sysroot. This first build may or may not error out because of this:
~/krs1.0_kr260_ws$ sudo ln -s ~/krs1.0_kr260_ws/install/../acceleration/firmware/kr260/sysroots/aarch64-xilinx-linux/usr/lib/aarch64-linux-gnu/libpython3.10.so.1.0 /usr/lib/aarch64-linux-gnu/libpython3.10.so
Once the symbolic link to the Python library in the Ubuntu sysroot has been created, rebuild the krs1.0_kr260_ws workspace if it errored out.
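If you want to double-check the link mechanics first, here's a side-effect-free sketch of the same ln -s pattern using throwaway paths (the mktemp files stand in for the sysroot's libpython3.10.so.1.0 and the host-side libpython3.10.so):

```shell
# Recreate the symlink workaround on throwaway paths:
target=$(mktemp)                     # stand-in for the sysroot library
link="$(mktemp -d)/libpython3.10.so" # stand-in for the host-side link
ln -s "$target" "$link"
readlink "$link"                     # prints the path the link points at
rm -rf "$(dirname "$link")" "$target"
```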
There is also currently a bug with the KR260 firmware that requires building packages for it using the KV260 firmware. This should be fixed in a future release, at which point the following block of commands won't be needed. Until then, however:
~/krs1.0_kr260_ws$ cd ./src/firmware
~/krs1.0_kr260_ws/src/firmware$ wget https://www.xilinx.com/bin/public/openDownload?filename=acceleration_firmware_kv260.zip
~/krs1.0_kr260_ws/src/firmware$ unzip ./openDownload\?filename\=acceleration_firmware_kv260.zip -d ./acceleration_firmware_kv260/
~/krs1.0_kr260_ws/src/firmware$ cd ../../
~/krs1.0_kr260_ws$ sudo ls -la
[enter sudo password]
~/krs1.0_kr260_ws$ colcon build --merge-install --packages-select acceleration_firmware_kv260
Source the built KRS workspace and select the KV260 as the target board, but call out the build for the KR260 Ubuntu image:
~/krs1.0_kr260_ws$ source install/setup.bash
~/krs1.0_kr260_ws$ colcon acceleration select kv260
~/krs1.0_kr260_ws$ sudo ls -la
[enter sudo password]
~/krs1.0_kr260_ws$ colcon build --build-base=build-kr260-ubuntu --install-base=install-kr260-ubuntu --merge-install --mixin kr260 --cmake-args -DNOKERNELS=true
Finally, build the KR260 firmware with the desired accelerated ROS 2 node (simple_vadd in this case):
~/krs1.0_kr260_ws$ sudo ls -la
[enter sudo password]
~/krs1.0_kr260_ws$ colcon build --build-base=build-kr260-ubuntu --install-base=install-kr260-ubuntu --merge-install --mixin kr260 --cmake-args -DNOKERNELS=false --packages-select simple_vadd
This will output an Ubuntu 22.04 image file in the ~/krs1.0_kr260_ws/acceleration/firmware/kr260/ directory to flash onto an SD card with a program such as balenaEtcher.
Once the updated KR260 firmware has been released, I'll write an updated instruction set. So if you're reading this long after August 2022, check my project feed in case I haven't edited it here.
Switching Between the GNU C Compilers
The Yocto framework PetaLinux uses depends on the standard GNU C compiler, and the KRS workflow for the KV260 builds a root filesystem using PetaLinux. The KRS workflow for the KR260, on the other hand, builds an Ubuntu 22.04 image using the GNU C compilers for the arm64 architecture.
It seems the standard GNU C compiler and the GNU C compilers for the arm64 architecture can't be installed at the same time on Ubuntu 22.04, as I noticed that the apt package manager removes one to install the other (I'm sure much more Linux-savvy users can explain this in the comments).
So if you're using the same development environment for KRS projects for both the KV260 and KR260, you'll have to switch out the packages (uninstall one and install the other) before running any builds in their respective KRS workspaces.
To switch the package dependencies back so you can use PetaLinux 2022.1 on your system and build the KV260 KRS workspace again, simply use the apt package manager to reinstall the standard GNU C compiler (gcc-multilib), which will automatically remove the arm64 cross compilers (gcc-aarch64-linux-gnu and g++-aarch64-linux-gnu).
~$ sudo apt-get install gcc-multilib
The same goes for switching back to the arm64 cross compilers for building an Ubuntu 22.04 image for the KR260:
~$ sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
Again, I'm sure there is a more elegant/foolproof way to do this, but this seems to work with no issues for me at the moment.
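One convenience I'd suggest (the echo strings below are my own, not anything official): a quick guard that reports which toolchain set is currently installed before kicking off a build in either workspace:

```shell
# Report which GNU C toolchain set is currently installed:
if command -v aarch64-linux-gnu-gcc >/dev/null 2>&1; then
    echo "arm64 cross compilers present: OK for KR260 Ubuntu-image builds"
elif command -v gcc >/dev/null 2>&1; then
    echo "native toolchain present: OK for PetaLinux/KV260 builds"
else
    echo "no GNU C compiler found: install one of the two sets first"
fi
```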