Objective
As a developer participating in Category 2 (Robotics AI with KR260), my tasks are the following:
- Create a unique AI vision-guided robotics application.
- Utilize camera inputs and control outputs through ROS 2 and AI, targeting the Kria KR260 Robotics Starter Kit.
- Squirrel-Proofing Innovation: AMD KRIA™ KR260 Robotics Starter Kit's Role in Wildlife Surveillance
I intend to address the problem of squirrels invading my bird feeders. It is a common problem, and it can be a real nuisance. Squirrels are not only messy; they can also consume a significant amount of bird seed, which can be expensive. Additionally, they monopolize the bird feeder, preventing an equitable distribution of feeding time among the birds. To address this problem, I plan to build a surveillance system using the AMD KRIA™ KR260 Robotics Starter Kit. This kit provides a powerful processor and a variety of interfaces that, together with a camera, can be used to detect and scare away squirrels.
The camera will be placed in a strategic location near the bird feeders. The processor will use the camera's input to detect the presence of squirrels. When a squirrel is detected, the processor will trigger an output that will scare the squirrel away. This could be a loud noise, a hawk sound, a bright light, or a jet of water.
The system will be designed to be humane and will not harm the squirrels. It will also be adjustable, so that the sensitivity of the detection system can be fine-tuned to avoid false alarms. I believe that this system will be an effective way to keep squirrels away from my bird feeders. It will allow the birds to eat in peace and will save me money on bird seed.
The ultimate objective of this project is to mount the entire system on an autonomous drone, enabling it to survey a larger area and effectively deter squirrels from the bird feeders.
DESIGN
Functional specification
The surveillance system shall be strategically positioned in proximity to the bird feeders. The processor will employ the camera's input to ascertain the presence of squirrels. Upon detection of a squirrel, the processor will initiate an output designed to deter the squirrel. This deterrent may consist of a loud noise, a predatory sound (e.g., a hawk cry), a bright light, or a jet of water.
Artificial intelligence recognition technology will be employed to discern squirrels from birds that frequent the feeder. Upon detection of a squirrel, the system's control output will emit a sound. An AMD accelerated example serves as a reference. The ROS 2 perception node accelerated application example closely aligns with the desired functionality.
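To sketch how the recognition output could gate the deterrent, here is a minimal, hypothetical decision function. The class names, confidence threshold, and "hawk_sound" action name are my assumptions for illustration, not part of the AMD example applications:

```python
from dataclasses import dataclass

DETERRENT_CLASSES = {"squirrel"}   # classes that may trigger the deterrent
PROTECTED_CLASSES = {"bird"}       # classes that must never trigger it

@dataclass
class Detection:
    label: str
    confidence: float

def choose_action(detections, threshold=0.7):
    """Return a deterrent action name, or None if nothing should fire.
    Raising `threshold` makes the system less trigger-happy (fewer
    false alarms); lowering it makes it more sensitive."""
    for det in detections:
        if det.label in PROTECTED_CLASSES:
            continue  # never scare the birds away
        if det.label in DETERRENT_CLASSES and det.confidence >= threshold:
            return "hawk_sound"
    return None
```

The `threshold` parameter is where the adjustable sensitivity mentioned earlier would be tuned to avoid false alarms.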
Technical specification
Power: Considerations for power requirements and supply.
Placement and Weatherproofing: Considerations for the placement of the device and its protection from the elements.
Development Environment: Considerations for the development environment, including operating system, tools, and libraries.
Dual Boot: Considerations for dual-booting, using K26 SOM firmware version 2022.1 for the first-stage boot and the Ubuntu 22.04 image on the SD card.
Tools: Tools and software required for development and testing.
PYNQ + Jupyter notebook: Considerations for using PYNQ and Jupyter notebook for development.
ROS2: Considerations for using ROS2 for development.
KRS (Kria Robot Stack): Considerations for using KRS for development.
Other Workstation (Linux or Windows): Considerations for using a workstation with Linux or Windows for development.
Simulators: Considerations for using simulators for testing and development.
GAZEBO: Considerations for using GAZEBO as a simulator.
AI Models: Considerations for using AI models for development.
APIs and SDKs: Consider learning about and using the following:
AI/ML workloads can run on the Kria KR260 board using DPU-PYNQ (v3.5). The overlay contains a Vitis AI 3.5.0 Deep Learning Processor Unit (DPU) and comes with a variety of notebook examples with pre-trained ML models.
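With DPU-PYNQ, the DPU runs the network itself, while pre- and post-processing stay in Python. Independent of the PYNQ APIs, the classification notebooks typically end with a softmax-and-argmax step; here is a pure-Python sketch of that post-processing (my own illustration, not code from the overlay's notebooks):

```python
import math

def softmax(logits):
    """Convert raw output logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top1(logits, labels):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return labels[i], probs[i]
```

For example, `top1([0.2, 3.1, 1.0], ["bird", "squirrel", "background"])` picks out the "squirrel" class with its probability.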
Robotic code/functionality is added in the ARM processor using the Robotics Operating System ROS2.
Also available is Aupera's VMSS2.0, made free for contest participants. Install in minutes, create complex applications with no code, and customize with 100+ AMD Model Zoo models. Innovation is made easy for every skill level. See more at Aupera VMSS2.0.
Research
Resources, documentation, and steps taken to gain knowledge of developing software and experimenting with the Kit.
Unboxing the Kria™ KR260 Robotics Starter Kit
I received the Kit on April 6, 2024, but was unable to unbox it until days later, on Thursday, April 18, 2024.
The kit includes:
- power supply and its adapters,
- Ethernet cable,
- USB A-male to micro B cable,
- a micro-SD card with an adapter; 64 GB is certainly a very good size,
- a quick card including a link to the getting-started page,
- some decals.
You will need the following accessories to utilize the KR260 desktop environment:
- USB Keyboard,
- USB Mouse,
- DisplayPort Cable (for connecting to a monitor),
- an HDTV (1920x1080 as minimum resolution) monitor with DisplayPort connector.
- To begin, a computer with internet connection and with the ability to write to the micro-SD card is needed.
- You can also use a USB camera/webcam or depth sensing camera for your robotics application.
You can follow the start page at www.xilinx.com/KR260-start, referred to here as the QuickCard START-PAGE.
There are two other starting pages I found that can also get you going:
Hackster Challenge Developer page: AMD Pervasive AI Developer Contest Robotics AI Study Guide
Refer to this technical study guide for Kria™ KR260 Robotics Starter Kit development.
AMD GitHub demo page: amd/Kria-RoboticsAI
This is the KR260 Robotics AI Developer Challenge AMD repo. The focus of this repo is AI vision-guided robotics applications using camera inputs and control output with ROS and AI, targeting the Kria™ KR260 SOM and the PYNQ/Vitis-AI software platform.
Getting Started with the KR260
This section describes the steps I took to get started with KR260. It includes:
Following the QUICKCard START-PAGE www.xilinx.com/KR260-start
Determining whether the K26 SOM firmware is up to date by getting the status of the SOM firmware using the recovery tool, and updating the firmware if needed
Making the connections to the carrier board
Booting and using Ubuntu 22.04
QUICKCard START-PAGE www.xilinx.com/KR260-start
Following along on the page, you get some valuable information about the Kit. Then you come upon an important informational message as follows:
OK, we have not installed an operating system yet. So what do we do?
So let's break this down
Updating the firmware on your Kria K26 System-on-Module (SOM) is crucial to ensure optimal performance, compatibility with the latest operating systems, and access to the full range of board functionalities. To accomplish this, you must install the most recent boot firmware provided by AMD. The firmware update process is straightforward and well-documented in the firmware update instructions available on the K26 Wiki. By following these instructions carefully, you can seamlessly update your firmware and unlock the full potential of your Kria K26 SOM.
Here are the key benefits of installing the latest firmware:
Enhanced Functionality: The updated firmware enables complete board functionality, ensuring that all the features and capabilities of your Kria K26 SOM are fully operational.
Compatibility with Latest Operating Systems: The latest firmware ensures compatibility with the most recent operating systems, allowing you to leverage the latest software and security updates.
Improved Performance: The updated firmware optimizes system performance, resulting in faster boot times, smoother operation, and enhanced overall user experience.
Bug Fixes and Security Enhancements: Firmware updates often include bug fixes and security enhancements, addressing potential vulnerabilities and improving the stability and reliability of your system.
Dual Boot Process
The Kria kits have a dual boot process implemented.
1. The Kria boot firmware located on the System-on-Module (SOM) runs a bootstrap firmware.
2. The bootstrap firmware then boots Ubuntu 22.04 from the SD card.
To initiate the firmware update process, navigate to the K26 Wiki and access the firmware update instructions provided by AMD. These instructions guide you through the necessary steps, including downloading the firmware image, preparing your system, and performing the firmware update.
By following the instructions and installing the latest firmware, you can ensure that your Kria K26 SOM operates at its peak performance and provides a seamless and reliable user experience. The 2022.1 Kria SOM bootstrap firmware is required for Ubuntu 22.04 to boot.
You can refer to the K26 Wiki for an expanded explanation of the boot firmware, but in this section I will outline the information you will need to:
1. Determine the required SOM firmware that supports Ubuntu Desktop 22.04 LTS
2. Get the status of the boot firmware on the out-of-the-box SOM
3. Update the SOM to the required firmware, if necessary
Determine the Required SOM Firmware that Supports Ubuntu Desktop 22.04 LTS
The following page section states that the 2022.1 K26 boot firmware is recommended if you use the Ubuntu Desktop 22.04 LTS operating system image. On the KR260 there is a known limitation: USB 2.0 devices are not functional on the U46 interfaces. The U46 interface on the KR260 Robotics Starter Kit refers to the USB 3.0 ports on the board, which are labeled U46 in the documentation.
The next section guides you through the process of getting the status of the Boot firmware on the Out of the Box SOM. At this point, remember we do not have the OS running yet. It describes how to get the firmware status from the firmware running on the SOM using a web interface on the SOM and using a web browser on a PC connected to the same network as the Kit.
Getting the status with the recovery tool without using a Linux image
The Kria K26 SOM includes a standalone "Boot Image Recovery Tool" that can be used to inspect and update the boot firmware partitions (Image A and Image B) without requiring the full Linux system to be running. The Boot Image Recovery Tool is a bare-metal application that runs on the SOM and provides a web-based interface. It can be accessed by connecting to the fixed IP address 192.168.0.111 from a web browser on a host PC.
Through the web interface, the Recovery Tool can read the sideband control EEPROMs to verify the make and model of the Kria K26 SOM device. It can also display the current status of the A/B boot firmware partitions and allow the user to update them if needed.
The key steps to use the recovery tool to get the status of the SOM firmware are:
1. Connect to the Recovery Tool web interface at 192.168.0.111
2. Verify the device information and current boot firmware status
3. Use the web interface to update the firmware partitions if needed
4. Validate the firmware update using the xmutil bootfw_status command
The following text was TAKEN FROM THE K26 WIKI
Boot Image Recovery Tool
The Kria Starter Kit boot firmware pre-programmed into QSPI consists of an "Image Selector" application, A/B MPSoC boot FW partitions, and a "Boot Image Recovery Tool" application. It is suggested that users use the XMUTIL boot FW update tool as the primary boot FW update mechanism. If, however, users get to a state where they want to reset the board to factory settings, they can use the "Boot Image Recovery Tool", which is a standalone, Ethernet-based application for manually loading and configuring the content of both the A and B boot partitions. Users can also use this tool when customizing the platform's boot firmware with their own BOOT.BIN generated through the Xilinx Vitis / PetaLinux / Yocto tools.
The boot image recovery tool (more details here) provides a simple Ethernet-based interface and application for updating the boot firmware. It works in a similar manner to a typical home router's initial configuration, with a static IP and a web-server-based tool which is used to update the A/B partitions and the persistent register states. This tool can be used to write a factory BOOT.BIN image from the Starter Kit Boot FW table above to overwrite chosen image partitions using the "Recover Image" function. The tool can also be used to modify the bootable state of each partition. Upon uploading a BOOT.BIN, the tool will configure that partition to a "bootable" state.
This application and interface is initiated by holding the firmware update button during the power-on sequence. The application uses a fixed IP address of 192.168.0.111. The static IP is printed to the UART when the system is powered on with the FWUEN button pressed. The following figure shows an overview of the set-up.
1. Connect the PC to the Kria Starter Kit via Ethernet as shown in the figure above. Ensure that the system is connected to the correct Ethernet port for the chosen Starter Kit (see table above). The other ports will not work for platform recovery.
2. Set the PC to a static IP address that is on the same subnet as the recovery tool (192.168.0.XYZ), but not 192.168.0.111.
3. Hold the firmware update button when powering on the device. You should also see the UART printouts from the recovery application.
4. Use a web browser (e.g., Chrome or Firefox) on the PC to navigate to the URL http://192.168.0.111 to access the Ethernet recovery tool.
5. Use the Ethernet recovery tool GUI in the web browser to update either the A or B boot firmware partitions with a BOOT.BIN file from the file system on the PC. The Ethernet recovery tool interface is shown in the following figure.
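Picking a valid static IP for the PC is easy to get wrong. Here is a quick stdlib check of a candidate address, assuming the /24 netmask implied by "192.168.0.XYZ" (the helper itself is my own, not part of the recovery tool):

```python
import ipaddress

RECOVERY_IP = ipaddress.ip_address("192.168.0.111")    # fixed tool address
RECOVERY_NET = ipaddress.ip_network("192.168.0.0/24")  # same subnet as tool

def valid_pc_ip(candidate):
    """True if `candidate` can be used as the PC's static IP:
    on the recovery tool's subnet, but not the tool's own address."""
    ip = ipaddress.ip_address(candidate)
    return ip in RECOVERY_NET and ip != RECOVERY_IP
```

For example, 192.168.0.42 is acceptable, while 192.168.0.111 (the tool itself) and 192.168.1.42 (wrong subnet) are not.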
I had problems with this method, so I abandoned it and decided to move on, continue setting up Ubuntu, and worry about the firmware later.
The problem was that the IP did not work from my PC as described in step 4 above. It might be because I did not set the PC's static IP address, as described in step 2.
I wanted to run the xmutil command, but I needed to boot an OS to get it. I tried the KV260 SD card that I use with my KV260; I inserted it into the KR260, but it did not boot correctly.
It turned out that Ubuntu 22.04 was able to boot, and I was able to use xmutil to check and update the firmware on the SOM to the latest version. I have documented this later in this project.
Now follow the steps on the page Getting Started with Kria KR260 Robotics Starter Kit to complete the installation of the Ubuntu operating system for the KR260.
Step 1. Setting up the SD Card Image (Ubuntu)
The Starter Kit has a primary and secondary boot device, isolating the boot firmware from the run-time OS and application. This allows you to focus on developing and updating your application code within the application image on the secondary boot device, without having to touch the boot firmware. The primary boot device is a QSPI memory located on the SOM, which is pre-programmed (pre-loaded QSPI image) at the factory. The secondary boot device is a microSD card interface on the carrier card.
For setting up the microSD card, you’ll need to download the latest SD card image and then write it using an Image Flashing tool.
- Download the Kria KR260 Robotics Starter Kit Image and save it on your computer.
- Download Balena Etcher (recommended; available for Windows, Linux, and macOS). Find additional OS-specific tool options below.
- Follow the instructions in the tool and select the downloaded image to flash onto your microSD card.
For Step 1, make sure you go to the "Kria K26 SOM" page to download Ubuntu Desktop 22.04 LTS, as described.
I used a 128 GB SD card that I had, and used Balena Etcher to write the image to it.
Step 2. Connecting Everything (Ubuntu)
I connected everything as described in Connecting Everything. I did not connect the USB camera at this point.
Connection 4, "Connect to a monitor/display with the help of a DisplayPort cable.", did not work at first with the display adapter that I use on my PC.
The document page stipulates that for DisplayPort video output, you must possess a DisplayPort cable and a compatible DisplayPort monitor. In my case, I do not have a DisplayPort monitor, but rather an ample supply of HDMI monitors on hand. Eventually, I was able to ascertain the requisite cable, and I will elaborate on it later in the article. I'm not sure why the proper DisplayPort cable was not included in the kit; it seems like a necessary piece to include.
As advised, I procured the power supply and connected it to the DC Jack (J12) located on the starter kit. However, I refrained from inserting the opposing end into the AC plug at that time.
Subsequently, I proceeded to the next phase of booting up the starter kit.
Step 3. Booting your Starter Kit (Ubuntu)
The Booting your Starter Kit guide describes how to access the GNOME Desktop when using the KR260 Ubuntu image. In addition to logging in the traditional way over the serial port, a full GNOME Desktop is available. To utilize the GNOME Desktop, follow the "Instructions for GNOME Desktop" section below. Note that you will need a keyboard, mouse, and monitor connected to use this feature.
I ran the instructions in Instructions for GNOME Desktop Section.
To use the GNOME Desktop, you'll need a DisplayPort or HDMI monitor, a USB keyboard, and a mouse connected to your device.
Connect the power supply to the AC plug to power on the Starter Kit. The power LEDs will light up, and after about 10-15 seconds, you should see console output on the connected display. The desktop login screen should appear after about a minute. Note that the Starter Kit powers up immediately when you connect the AC plug to a wall outlet. There is no ON/OFF switch on the board.
Unfortunately, I didn't see any output on my display using my DisplayPort adapter, so I assumed I had the wrong DisplayPort and decided to log in the traditional way over the serial port.
NEXT, I ran the instructions for Windows in the Instructions for Windows Section. Following are my notes of what I did to boot my starter kit.
- Configure your terminal program (e.g., TeraTerm, PuTTy) over SERIAL with the Baud rate = 115200
- Plug in the power to the kit
- Login using the terminal program on your PC
The default login credentials are:
username:
ubuntu
password:
ubuntu
- Verify Internet connectivity via “ping” or “DNS lookup.”
ping 8.8.8.8
If you can observe that packet transmit/receive worked and there is no packet loss with the above ping command, this means your Internet connectivity is working and active.
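The same connectivity check can be scripted from Python on the kit. This is a minimal stdlib sketch that attempts a DNS lookup; the hostname choice is arbitrary and the helper is my own, not part of the Ubuntu image:

```python
import socket

def internet_up(host="dns.google"):
    """Rough connectivity check: try to resolve a well-known hostname.
    Mirrors the manual ping / DNS-lookup verification step."""
    try:
        socket.getaddrinfo(host, 53)
        return True
    except OSError:
        return False
```

A `True` result confirms name resolution is working, which implies the network link and DNS configuration are both up.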
NEXT - Set up the Xilinx Development & Demonstration Environment for Ubuntu 22.04 LTS
Install the xlnx-config snap that is required for system management:
sudo snap install xlnx-config --classic --channel=2.x
For more information on using the xlnx-config snap, please refer to the xlnx-config snap page.
- Run the xlnx-config sysinit command to install a custom Xilinx version of Gstreamer - accept all defaults when prompted:
xlnx-config.sysinit
This command took a long time to execute. What else is it installing?
I was posed with the following prompt and selected the first option. I hope this was alright.
- Reboot now by using the shutdown command.
sudo shutdown -r now
At this point you should have Xilinx Development and Demonstration Environment for Ubuntu 22.04 LTS installed now. How can I check if it's installed correctly?
Now restart the kit, and the login prompt should show up on the terminal so you can log in again.
Next, I connected with FileZilla, checked the SOM firmware version, and updated it. I'll talk about that in the next two sections.
File Transfer with FileZilla
In order to copy files from your PC to the KR260, you will need a file transfer program. I use FileZilla.
- Get the IP of the kit:
ip addr show
- Enter the host as the kit's IP address
- Enter the kit's username and password
- Enter the port as 22 (SFTP)
You should now be connected, as described in the screenshot below:
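Since FileZilla needs the kit's address, the `ip addr show` output can also be parsed for it programmatically. A small helper of my own for illustration, which skips the loopback address:

```python
import re

def first_ipv4(ip_addr_output):
    """Pull the first non-loopback IPv4 address out of `ip addr show`
    output, e.g. to know what host to type into FileZilla."""
    for m in re.finditer(r"inet (\d+\.\d+\.\d+\.\d+)/\d+", ip_addr_output):
        if not m.group(1).startswith("127."):
            return m.group(1)
    return None
```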
In this section we'll go through the steps for updating the boot firmware of your Kria SOM.
I tried to update my Kria SOM boot firmware on the KR260 starter kit by following my element14 blog for a KV260 kit,
Update Kria Boot Firmware on SOM before installing Ubuntu version 22.04, but I was unsuccessful and received the following error:
Apparently xlnx-config --xmutil is only supported on the KV260 kit, not the KR260 kit.
What to do now?
Another good element14 source on the dual boot and xmutil: https://community.element14.com/technologies/fpga-group/b/blog/posts/kria-kv260-kr260-firmware-update-for-booting-ubuntu-22-04
Use sudo xmutil bootfw_status and the following instructions, using straight xmutil commands.
Then follow these commands to update
1. Get the file 2022.1_update3_BOOT.BIN
I downloaded the file on my PC, renamed it to BOOT.BIN, and transferred it to the kit using FileZilla.
2. Check the current firmware that we have on our Kria SoM
sudo xmutil bootfw_status
3. Now use xmutil to update the firmware on our Kria SoM:
sudo xmutil bootfw_update -i BOOT.BIN
4. Press the RESTART button
A shutdown is recommended instead:
sudo shutdown -r now
5. Log back into Ubuntu and run this command to confirm that the new firmware resulted in a successful Linux boot:
sudo xmutil bootfw_update -v
This command must be run immediately after the reboot; if you reboot without running it, the system will revert to Image A. After running the command, you can see in the status check that the requested boot image is B (not A).
6. Finally we can check to see that we are now booting from the other partition (in my case, B). Note that the version displayed has not changed since we did the update, but that’s normal, this is the factory installed version and it will not change when you update the boot partitions.
sudo xmutil bootfw_status
Why is image B, which was just updated, marked "Non bootable"?
I redid the update with a shutdown, and this fixed it. Apparently the recommended RESET button does not work!
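The update-then-validate-or-revert behavior can be pictured as a tiny state machine. This is my own simplified model of the documented A/B behavior, written to illustrate why skipping the validation step lands you back on image A; it is not xmutil's real implementation:

```python
class BootFW:
    """Toy model of the K26 A/B boot-firmware flow (illustrative only)."""

    def __init__(self):
        self.persistent = "A"  # image the boot firmware falls back to
        self.trial = None      # freshly written image, gets one trial boot
        self.active = "A"      # image currently running

    def update(self):
        """Like `xmutil bootfw_update -i BOOT.BIN`: stage the inactive slot."""
        self.trial = "B" if self.persistent == "A" else "A"

    def reboot(self):
        """An unvalidated image gets exactly one boot, then we revert."""
        if self.trial is not None and self.active != self.trial:
            self.active = self.trial
        else:
            self.trial = None
            self.active = self.persistent

    def validate(self):
        """Like `xmutil bootfw_update -v`, run right after the trial boot."""
        if self.trial == self.active:
            self.persistent = self.active
            self.trial = None
```

Tracing it: update, reboot, validate leaves the system on B permanently; update, reboot, reboot (no validation) falls back to A, which matches the revert behavior described in step 5.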
What version is the file I downloaded? The status displays firmware version 2021.1.
According to an internet search: XilinxSOM_BootFW_20220915 is the latest and recommended boot firmware version for the Xilinx Kria KV260 board, which enables proper booting of the Ubuntu 22.04 operating system.
The search results indicate that the XilinxSOM_BootFW_20220915 firmware version was released as part of the Xilinx 2022.1 release on September 16, 2022
But I cannot seem to find a connection to the AMD doc's recommended version, "2022.1 K26 Boot firmware".
SECTION CONCLUSION
The latest SOM boot firmware, 2022.1, is now installed, and the Ubuntu 22.04 Desktop LTS image is now booting off the SD card.
Version Details:
SD card image
Ubuntu 22.04 Desktop LTS
SOM Boot Firmware
XilinxSOM_BootFW_20220915 firmware, the latest in the 2022.1 K26 boot firmware release.
xmutil Commands
This section contains the command name and a short description of some helpful xmutil commands.
xmutil boardid
Reads all board EEPROM contents. Prints information summary to command line interface.
xmutil bootfw_status
Read primary boot device information. Prints A/B status information, image IDs, and checksums to the command line interface.
xmutil bootfw_update
Tool for updating the primary boot device with a new boot image in the inactive partition.
xmutil getpkgs
Queries Xilinx package feeds and provides a summary to the debug interface of relevant packages for the active platform based on board ID information. NOTE: This functionality is not supported in Kria Ubuntu.
xmutil listapps
Queries on the target hardware resource manager daemon of pre-built applications that are available on the platform and provides a summary to the debug interface.
xmutil loadapp
Loads the integrated HW+SW application inclusive of the bitstream, and starts the corresponding pre-built application software executable.
xmutil unloadapp
Removes the accelerated application, including unloading its bitstream.
xmutil xlnx_platformstats
Reads and prints a summary of the following performance related information: CPU frequency, RAM usage, temperature, and power information.
xmutil ddrqos
Utility for changing configuration of PS DDR quality of service (QoS) settings. Initial implementation focuses on PS DDR memory controller traffic class configuration.
xmutil axiqos
Utility for changing configuration of PS/PL AXI interface quality of service (QoS) settings. Initial implementation focuses on AXI port read/write priority configurations.
xmutil pwrctl
Utility for PL power control and status
xmutil desktop_disable
Disables the desktop environment. NOTE: This functionality is not supported in Kria Ubuntu Server.
xmutil desktop_enable
Enables the desktop environment. NOTE: This functionality is not supported in Kria Ubuntu Server.
xmutil dp_bind
Binds the display driver
xmutil dp_unbind
Unbinds the display driver
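As a cross-check of the RAM numbers a command like xlnx_platformstats summarizes, memory usage can be read straight from /proc/meminfo on the kit. The parser below is my own stdlib sketch, unrelated to xmutil's sources; it takes the file text as input so it can be exercised anywhere:

```python
def mem_usage_kb(meminfo_text):
    """Parse MemTotal/MemAvailable (in kB) from /proc/meminfo-style text.
    Returns (total_kb, used_kb), where used = total - available."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "MemAvailable"):
            fields[key] = int(rest.split()[0])
    return fields["MemTotal"], fields["MemTotal"] - fields["MemAvailable"]
```

On the kit itself you would call it with `open("/proc/meminfo").read()`.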
DisplayPort problem RESOLVED
The display was not working with my PC HDMI adapter and HDMI cable. I found that by unplugging the mouse, keyboard, and display, the kit boots using the PuTTY connection from the PC. I really wanted to take advantage of the GNOME Desktop, so I researched what was needed on the Discord server for this challenge and found some suggestions. I ordered two items on Amazon, and here is what I found:
First, here is what I have that is NOT WORKING.
I ordered the next two items from Amazon:
1. Here is what was recommended on Discord by JimMartel. I ordered it, but it did not work connected to an ordinary HDMI cable and my computer monitor.
2. This cable, which does not need an adapter, worked fine connected to my (non-4K) monitor.
Display RESULTS: the Amazon Basics DisplayPort to HDMI Display Cable (uni-directional, 4K@60Hz, 1920x1200, 1080p, gold-plated plugs, 3 foot, black) is WORKING GREAT.
Now I'm able to use the GNOME desktop on Ubuntu 22.04 by connecting this cable, a mouse, and a keyboard. SUCCESS!!
Step 4. Launching the ROS 2 Perception Node Application (Ubuntu)
The ROS 2 Perception Node accelerated application implements a subset of image_pipeline, which is one of the most popular packages in the ROS 2 ecosystem and a core piece of the ROS perception stack. It creates a simple computational graph consisting of two hardware-accelerated nodes, resize and rectify, as shown in the figure above.
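To picture what the first accelerated node does, here is a pure-Python nearest-neighbor resize of a 2-D image. The real node offloads this work to the programmable logic, and image_pipeline's resize supports proper interpolation, so treat this as an illustrative stand-in only:

```python
def resize_nn(img, out_h, out_w):
    """Nearest-neighbor resize of a 2-D image given as a list of rows.
    Each output pixel samples the proportionally-placed input pixel."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]
```

For example, downscaling a 4x4 gradient to 2x2 keeps every other sample from every other row.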
This application sounds like one from which I could gain some knowledge to help me with my project. I followed the instructions on the page: Setting up the Board and Application deployment.
I spent a lot of time getting to the point where I could run it. I have everything set up and working, except that now I'm running into an installation error with Gazebo. I'm trying to install it on a workstation, as advised in the documentation; the workstation is separate from the KR260.
The configuration for the workstation is: Ubuntu 22.04 Jammy and ROS 2 Humble.
Conclusion - ROS 2 Perception Node Application
STATUS of the results thus far.
Gazebo Installation Error
I searched the internet and found that the error
"Package gazebo is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source"
is because the Gazebo Classic package (gazebo11) is not available for installation on Ubuntu 22.04 Jammy.
The key points are:
- Gazebo Classic (gazebo11) cannot be installed on Ubuntu 22.04 Jammy, as the package is not available in the repositories.
- This is likely due to Gazebo Classic being obsoleted or only available from another source.
- The recommended solution is to use the newer Gazebo Harmonic version, which is compatible with Ubuntu 22.04 and ROS2 Humble.
- To install Gazebo Harmonic, the command is sudo apt-get install ros-humble-ros-gz. This installs the equivalent of the gazebo_ros package for the new Gazebo integration.
- The gazebo_ros package is specific to the older Gazebo Classic and will not work with the new Gazebo Harmonic. Users need to update their launch files and code to use the new ros_gz package instead.
In summary, Gazebo Classic (gazebo11) is no longer available for Ubuntu 22.04, and users should migrate to the newer Gazebo Harmonic version and the corresponding ros_gz ROS2 package. The installation and integration process has changed compared to the older Gazebo Classic.
As of April 23 2024, I was unable to get this accelerated app to run.
I am Stopping here for NOW and Moving on
Section Future Enhancements
1. I will return to try to run the Gazebo 11 piece on my laptop with ROS 2 Foxy on Ubuntu 20.04; I will need to install Foxy onto it. Or I will see whether the Gazebo Harmonic version is compatible with this example application.
2. An interesting project on Hackster.io that I might run through. Vitis+PetaLinux 2022.1 & KRS 1.0 Install on Ubuntu 22.04. This project walks through the installation of Vitis and PetaLinux 2022.1 as well as KRS 1.0 on Ubuntu 22.04 Jammy Jellyfish. By Whitney Knitter.
Step 5. Next Steps and Additional Resources (Ubuntu)
You have successfully completed the Ubuntu flow.
For more information on developing with Ubuntu, refer to the Ubuntu wiki pages. This step includes helpful links for more experimentation with the KR260 Kit, but I decided to move on to my own path of experimenting with the Kit.
NEXT - Setting Up the Development Environment: installing and experimenting with tools and examples on the KR260 Ubuntu image.
Setting Up the Development Environment
This section covers installing and experimenting with example code and tools on my KR260 Ubuntu image.
These tools and more should all be installed in the “Kria Robotics AI Demo” later in this document.
- PYNQ + Jupyter notebook
- DPU-PYNQ
- ROS2 Humble
- KRS (Kria Robot Stack)
- Vitis-AI
Installing a development environment workstation on another Linux OS for running simulation software:
Install Gazebo on Linux. Consider using my other kit, the KV260, which has Ubuntu and tools already installed and working.
I tried this for the Perception accelerated app, but I was unable to get it to install on my KV260 because ROS 2 Humble is not supported there.
Experimenting with Applications
The goal here is to understand how to develop applications using accelerated robotics applications on the KR260 kit. This demo and the following applications were tried:
- "KR260 Robotics AI Demo": https://github.com/amd/Kria-RoboticsAI. This will help you to understand the machine learning approach which can be followed for KR260 with the open-sourced PYNQ approach.
Krishna@LogicTronix on the Discord server recommended the following applications:
1. You can start with the KR260-BIST application. BIST can be run with the KR260, an Ethernet cable, a USB pen drive, etc. BIST allows you to check the major interfaces on the KR260 with a Python application approach. The Python sources of BIST are here: https://github.com/xilinx/kria-bist
2. You can also run the "KR260 Robotics AI Demo": https://github.com/amd/Kria-RoboticsAI. This will help you to understand the machine learning approach which can be followed for KR260 with the open-sourced PYNQ approach.
3. After BIST you may test "ROS 2 Multi-Node Communications Via TSN" and "Perception Stack Application". These two are specifically for ROS. Then you can go for "Precision time Management". For 4th Kria app (10GigE Machine Vision Camera - Defect Detect), it needs a specific SLVS camera and 10G ethernet with SFP which are not common to get with (cost and availability).
KR260 Robotics AI Demo
The page at GitHub - amd/Kria-RoboticsAI is an excellent tool to get you familiar with the KR260 code and tools available. I went through it, and this section contains my notes.
The focus of this repository is to provide AI vision-guided robotics applications using camera inputs and control output with ROS and AI targeting the Kria™ KR260 SOM and PYNQ/Vitis-AI software platform. This repo provides a starting point for developing applications that leverage the capabilities of the Kria KR260 SOM and PYNQ/Vitis-AI to perform real-time computer vision and control tasks. The repository includes examples of applications such as object detection, tracking, and autonomous navigation. It also provides resources and documentation to help users get started with developing their own applications.
Use either the Vitis-AI or PYNQ software development stacks. I know a little about PYNQ and am excited to learn about Vitis-AI. One of my goals is to get a development environment set up on my KR260 to use these tools.
Table of Contents
4 Test PYNQ DPU with Python or C++ VART APIs
I started at chapter 3 “INSTALL PYNQ DPU” since I had already completed the other chapters on my own. It was a good checkup on getting started though.
To install PYNQ on your KR260 board you have to be a superuser and run the script install_update_kr260_to_vitisai35.sh.
Highlights of the script:
- Installs the required Debian packages.
- Creates a Python virtual environment named pynq_venv.
- Configures a Jupyter portal.
- Updates packages from Vitis-AI 2.5 to the latest Vitis-AI 3.5, allowing users to work with the latest material available in Vitis-AI 3.5 for machine learning.
- The whole process takes around 30 minutes.
Once finished successfully, you should see all the packages listed in the Included Overlays reference document already installed, including the DPU-PYNQ repository.
Always remember, when following the instructions in this document: before running any application, you must execute the following two commands.
1. Always start as superuser:
sudo su
2. set the PYNQ environment
source /etc/profile.d/pynq_venv.sh
To exit from the virtual environment, type the command:
deactivate
NEXT, I followed the instructions in Section 4 “Test PYNQ DPU with Python or C++ VART APIs”
To test your newly installed environment, running an application is the optimal approach. This demo section provides three options for executing ML inference applications using the KR260 SOM and the PYNQ DPU repo:
OPTION 1 -- Open a Jupyter Notebook with the ".ipynb" file extension.
OPTION 2 -- Execute a plain Python script with the ".py" file extension.
OPTION 3 -- Compile C++ modules and launch the generated executable.
In all cases, you are using the Vitis-AI Runtime (VART) APIs, which are available both in Python (first two items) and C++ language (third item).
I tried all 3 options. The following are my notes and conclusions. I was able to complete option1 successfully but options 2 and 3 generated errors that do not allow me to continue.
OPTION 1 -- Open a Jupyter Notebook (file extension:.ipynb).
This section installs some board-related Jupyter notebooks that are very helpful. I'm learning how to use Python to program the KR260 kit. These examples can be run from a web browser. I have tried some of these examples for the Kria Vision Kit (KV260); those notebooks are covered in the document
Running PYNQ on the Xilinx/AMD Kria KV260 Vision AI Starter Kit. This repo document installs the notebooks and describes how to run JupyterLab (which allows you to run the Python code snippets).
Here are my notes
REMEMBER to do this to get into the PYNQ environment:
sudo su
source /etc/profile.d/pynq_venv.sh
After setting the pynq_venv environment, you can go to your jupyter notebook home folder and fetch the latest notebooks as follows:
cd $PYNQ_JUPYTER_NOTEBOOKS # Takes you to /root
Now, fetch the latest notebooks. Only run the following command once:
pynq get-notebooks pynq-dpu -p .
Running PYNQ Jupyter Notebooks
Assuming the IP address of your card is 192.168.1.186 (you can easily find it by running the command ifconfig -a in a terminal), you can connect to JupyterLab via a web browser using this URL: 192.168.1.186:9090/lab or kria:9090/lab (account xilinx with password xilinx). The advantage of this is that you can run the notebooks from any web browser, on the KR260 or on another computer on the same network.
You need to run the following commands only if the kit has been rebooted; the reason is that the script configures the JupyterLab web portal along with its other duties:
sudo su
source /etc/profile.d/pynq_venv.sh
Once you're in JupyterLab you can launch a notebook application among the ones listed in the following directories. These directories will be listed in the left panel in JupyterLab:
ls -l pynq-dpu/
ls -l getting_started/
ls -l pynq-helloworld/
ls -l kv260
The following section describes connecting to JupyterLab running on the kit via a web browser on my PC.
I am interested in using a USB Camera that I have for my project. I connected it to the kit and was able to use it to experiment with my USB camera as shown here.
The camera is an IPEVO Point 2 View USB document camera. I purchased it from a Goodwill store for $10. It has been discontinued by IPEVO, but I've been using it and it works fine for scanning with my PC.
I found 2 notebooks in the kv260/video/ directory:
kv260/video/opencv_face_detect_webcam.ipynb
kv260/video/opencv_filters_webcam.ipynb
I was able to run the two notebooks and they worked! Very educational.
Below are the results of using a picture of Elvis to detect a face using the notebook opencv_face_detect_webcam.ipynb. The purpose of this notebook is to apply OpenCV face detection to images captured from a USB camera. It uses OpenCV, a library for computer vision, to detect faces within the frames obtained from the camera feed.
This photo shows how I used a picture of Elvis and how the camera captured it and detected the eyes in the picture. Below is the JupyterLab work area showing the notebooks being run at step 6.
Now on to the second option: running a Python script.
OPTION 4.2 -- Run a plain Python script (file extension: .py).
Too complicated for now; I will come back to it.
OPTION 4.3 -- Compile C++ modules and launch the generated executable.
Too complicated for now; I will come back to it.
Section 5 INSTALL ROS2
You will now install the ROS 2 Humble Hawksbill distribution on the Ubuntu 22.04 desktop of the KR260 target board.
First of all, boot the board, open a terminal, and enter the pynq_venv as usual:
sudo su
source /etc/profile.d/pynq_venv.sh
Now launch the install_ros.sh script from this repository to install ROS2:
cd /home/ubuntu/
cd KR260-Robotics-AI-Challenge/files/scripts/
source ./install_ros.sh
This process is also quite long, and you have to answer Y when prompted a few times.
COMPLETED PREVIOUS STEPS: success, no problems.
Test TurtleSim
Once you've installed ROS2, you can verify it by starting TurtleSim with the following commands in your terminal:
# set up your ROS2 environment
source /opt/ros/humble/setup.bash
# launch the turtle simulator
ros2 run turtlesim turtlesim_node
I could not run this command from the PC terminal, but I was able to run it from the kit. The simulator window should appear, with a random turtle in the center. You can ignore text messages like this:
libGL error: failed to load driver: xlnx
Open another terminal and set up your ROS2 environment again.
# set up your ROS2 environment
source /opt/ros/humble/setup.bash
Now you will run a new node to control the turtle in the first node:
ros2 run turtlesim turtle_teleop_key
COMPLETED PREVIOUS STEPS: success, no problems.
A Crash Course on ROS2
ROS2 Tutorials - ROS2 Humble For Beginners
I found these ten one-hour tutorials invaluable for learning and understanding ROS2. Development with ROS2 is different from the way I'm used to programming embedded systems. Basically it is a publish-and-subscribe design, with topics that nodes use to communicate with each other. Once I completed all ten tutorials I was hooked.
Most of my notes, tips, and tricks came from this course.
Section 6 ROSAI Application
This section provides a guide on how to use the ROSAI application within a project directory. ROSAI is a ROS2 design for controlling a robot using an AI-based handwritten digit recognition model. The application runs on the KR260 board and was originally developed by Avnet.
There are two ways to input images into the application:
File Input: Images are taken from files within the MNIST test dataset. The steps to build and run the demo using file input are provided, including setting up the environment, launching the script, and executing the demo.
Camera Input: Images are taken from a camera. The steps to build and run the demo using camera input are also provided, including checking USB webcam compatibility, setting up the environment, launching the script, and executing the demo. It is recommended to use images similar to those provided in the "black_background_images" folder for best results.
Both demos utilize a CNN classifier to recognize handwritten digits and control the movement of a turtle simulator. The document includes screenshots illustrating the demos in action. The steps to run the demos are described in the next two sections.
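To make the demo flow concrete, here is a small sketch of how a recognized digit could be turned into a TurtleSim velocity command. The actual digit-to-motion table used by the Avnet ROSAI demo may differ; the mapping and the function name below are my own assumptions.

```python
def digit_to_twist(digit: int) -> dict:
    """Map a classified MNIST digit to linear/angular velocity values,
    in the spirit of the ROSAI demo's digit-driven turtle control."""
    table = {
        0: (0.0, 0.0),    # stop
        1: (1.0, 0.0),    # forward
        2: (-1.0, 0.0),   # backward
        3: (0.0, 1.0),    # rotate left
        4: (0.0, -1.0),   # rotate right
    }
    # Digits without an assigned motion default to "stop"
    linear, angular = table.get(digit, (0.0, 0.0))
    return {"linear_x": linear, "angular_z": angular}
```

In the real application the returned values would be copied into a geometry_msgs Twist message and published to the turtle's velocity topic; here the dictionary simply stands in for that message.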
Section 6.1 - File Input DEMO
To build and run the file input demo for the ROSAI application, follow these steps:
Become a superuser and set the pynq_venv environment.
sudo su
source /etc/profile.d/pynq_venv.sh
Launch the install_rosai_file_input.sh script to install and build the application. This script will also save MNIST images for testing and create a workspace folder named ros2_ws_fileio.
cd /home/ubuntu/KR260-Robotics-AI-Challenge/files/scripts
source ./install_rosai_file_input.sh
ERROR GENERATED:
File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
bash: cd: /home/root: No such file or directory
colcon: command not found
Set the environment by sourcing the setup.bash and local_setup.sh files, and run the demo:
# set env
source /opt/ros/humble/setup.bash
source install/local_setup.sh
Another ERROR:
# source install/local_setup.sh
bash: install/local_setup.sh: No such file or directory
# go to the source code
cd /home/root/ros2_ws_fileio
# demo run
ros2 launch rosai_file rosai_file_demo_launch.py
The turtle in the TurtleSim window will move based on the digit recognized by the MNIST CNN classifier.
Optionally, remove the built demo using the rm -rf build/ install/ log/ command.
MY RESULTS OF THE DEMO: I was unable to run this demo because of the errors above.
Section 6.2 - Camera Input DEMO
This demo was verified with a Logitech HD Pro Webcam C920 camera. First I followed the instructions in the appendix "Check Your USB WebCam" to check my USB camera. The following section describes my success with connecting my IPEVO Point 2 View USB document camera and running the Python script to test the camera.
Check Your USB WebCam
STEPS:
Connect the camera to the USB port 'U44' of KR260.
Run one of the following commands to check if the camera is detected:
lsusb
v4l2-ctl --list-devices
ls /dev/vid*
If the camera is detected, the output should include references to the camera model. My model is indicated as
IPEVO Point 2 View: IPEVO Point (usb-xhci-hcd.1.auto-1.3):
/dev/video0
/dev/video1
Now you can use the following Python code to capture and show some frames:
import cv2

# Open the USB camera (check the device path with: v4l2-ctl --list-devices)
cap = cv2.VideoCapture("/dev/video0")

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:  # stop if no frame could be read
        break
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow("frame", gray)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# When everything is done, release the capture and close the window
cap.release()
cv2.destroyAllWindows()
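For the bird feeder application, a cheap motion gate could decide which frames are worth sending to the classifier at all. Here is a minimal, hardware-free sketch of that idea; the flat lists stand in for grayscale frames, and the threshold value is my own assumption to be tuned against real footage.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two equal-sized grayscale
    frames, represented as flat lists of 0-255 intensity values."""
    total = sum(abs(a - b) for a, b in zip(frame_a, frame_b))
    return total / len(frame_a)

def motion_detected(prev_frame, frame, threshold=10.0):
    """True when consecutive frames differ enough to justify running
    the (much more expensive) squirrel classifier."""
    return mean_abs_diff(prev_frame, frame) > threshold
```

With real OpenCV frames the same comparison would be done on the NumPy arrays returned by cap.read(), but the gating logic is identical.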
Running the code, MY RESULTS:
To execute the code I installed and used Visual Studio Code.
Visual Studio Code (VS Code) is a powerful, open-source code editor developed by Microsoft. It offers various features and extensions that make coding easier and more efficient. This guide walks through installing Visual Studio Code on the kit's Ubuntu 22.04 LTS using the official DEB file, and provides some tips on getting started using VS Code to run the Python code above. Here are my step-by-step instructions:
*Step 1: Download the Official DEB File*
1. Open your preferred web browser and navigate to the [Visual Studio Code download page](https://code.visualstudio.com/Download).
2. Under the "Linux" section, click on the ".deb" option to download the DEB file.
PLEASE NOTE: you need to select the .deb Arm64 download, not the big blue .deb box. The x64 version will NOT install on the kit.
*Step 2: Open Terminal*
1. Once the download is complete, open the Terminal application. You can do this by pressing `Ctrl + Alt + T` or searching for "Terminal" in your application menu.
*Step 3: Navigate to the Download Directory*
1. By default, downloaded files are saved in the "Downloads" directory. Navigate to this directory by typing the following command and pressing Enter:
```sh
cd ~/Downloads
```
*Step 4: Install Visual Studio Code*
1. Use the `dpkg` command to install the DEB file. Type the following command and press Enter:
```sh
sudo dpkg -i code_*.deb
```
Note: If you encounter any dependency errors, you can resolve them by running:
```sh
sudo apt --fix-broken install
```
*Step 5: Launch Visual Studio Code*
1. After the installation is complete, you can launch VS Code from the Terminal by typing:
```sh
code
```
*Step 6: Getting Started with VS Code*
1. **Open a Folder**: To start working on a project, open a folder by clicking `File - Open Folder` and selecting the desired directory.
2. **Install Extensions**: Enhance your coding experience by installing extensions. Click on the Extensions icon in the Activity Bar on the side of the window or press `Ctrl + Shift + X`. Browse and install Python extensions.
3. **Create a File**: From the menu select `File - New File` and enter the code mentioned above.
4. **Run**: From the menu select `Run - Start Without Debugging`.
You will see what the camera is viewing in a window titled "frame".
To build and run the camera input demo for the ROSAI application, follow these steps:
Become a superuser and set the pynq_venv environment.
sudo su
source /etc/profile.d/pynq_venv.sh
Launch the install_rosai_camera_input.sh script to install and build the application. This script will create a workspace folder named ros2_ws.
cd /home/ubuntu/KR260-Robotics-AI-Challenge/files/scripts
source ./install_rosai_camera_input.sh
The following commands will launch the demo. Because of the way the model was trained, you need to use the images like the ones provided in the folder rosai_camera_input/black_background_images, or you can create your own images. We recommend either printing the images on paper (one image per page) or using a display device to hold up in front of the camera. Here are the commands:
# set the environment
source /opt/ros/humble/setup.bash
source install/local_setup.sh
# go to the source code
cd /home/root/ros2_ws
# demo the camera run
ros2 launch rosai_camera rosai_camera_demo_launch.py
The turtle in the TurtleSim window will move based on the digit identified by the model. Use images like those provided in the black_background_images folder, or create your own.
Optionally, remove the built demo using the rm -rf build/ install/ log/ command.
MY RESULTS OF THE DEMO: the same error when sourcing:
source ./install_rosai_camera_input.sh
bash: cd: /home/root: No such file or directory
colcon: command not found
Appendix
In addition to Appendix 3 mentioned above to check the camera, there are two other appendices that describe how to build custom ML models and how to get the Vitis AI 3.5 Docker image.
A1 Build ML Custom Models for KR260 PYNQ DPU
If you wish to run your own custom CNN model on the KR260, you have to train it and then quantize via the Vitis-AI release, and therefore you need the associated docker image.
This link describes how to get the Vitis AI 3.5 Docker image. Vitis AI is an integrated development environment designed to accelerate AI inference on AMD adaptable platforms. It offers a comprehensive suite of tools, libraries, and pre-optimized models to streamline the development process. This Docker image can be used for your Vitis AI development.
KR260 Robotics AI Demo Conclusion
This concludes my work with the KR260 Robotics AI Demo page at GitHub - amd/Kria-RoboticsAI.
I was able to gain some valuable experience in understanding the development environment that will aid me in developing my project. Here are my key takeaways from the sections of the repo project.
Installing and Testing PYNQ DPU: PYNQ can load the Deep Learning Processor Unit (DPU) overlay. Installation involves a 30-minute script that updates packages. Rebooting is necessary after installation. Testing can be done using Vitis-AI Runtime (VART) APIs in Python or C++.
Installing and Testing ROS2: ROS (Robot Operating System) is used for building robot applications. The ROS 2 Humble Hawksbill distribution will be installed on Ubuntu 22.04. Installation is done via a script. Testing involves starting TurtleSim and controlling the turtle with another node.
ROSAI Application: The ROSAI folder contains a ROS2 design for controlling a robot with an AI-based handwritten digit recognition model. The application has file input and camera input modes. Demos involve setting up the environment, launching scripts, and observing the turtle's movement.
Additional Notes: Building custom ML models involves training, quantizing, compiling, and writing ML applications. Vitis-AI 3.5 Docker can be obtained by cloning the repository or pulling it from Docker Hub. USB webcam functionality can be checked using commands and Python code.
Kria KR260 Robotics Starter Kit Applications
This page contains links to five applications that use the KR260 kit. The benefits of each accelerated application are described:
10GigE Vision Camera
ROS 2 Multi-Node Communications via TSN
ROS 2 Perception Node
Precision Time Management
Built-In Self Test (BIST)
KR260 Training Videos
This is a free course that will help you learn about the Kria™ System-on-Module (SOM) and Kria KR260 Robotics Starter Kit, enabling you to accelerate robotics-based applications using the KR260 Starter Kit right out of the box, without any installation or FPGA knowledge.
The course also demonstrates how to use the Kria Robotics Stack (KRS) to run prebuilt accelerated robotics applications.
The emphasis of this course is on:
- Providing an overview of the Kria K26 SOM and its advantages
- Providing an overview of the Kria KR260 Robotics Starter Kit, its interfaces, and how to get started with the kit
- Describing the Kria Robotics Stack (KRS) and how it enables roboticists to get up and running in the Robot Operating System (ROS)
- Running accelerated applications using an Ubuntu image
Now that I have an understanding of the development environment on the kit I will attempt to build, test and implement my idea for a Bird Feeder Defender.
Electronic Components Used in the Project
KR260 Starter Kit
USB Camera
The camera is an IPEVO Point 2 View USB document camera.
BUZZER
The Grove Buzzer module is a simple and versatile electronic component that can produce sound and is compatible with Arduino, Raspberry Pi, and other microcontrollers.
Hammond Manufacturing Enclosure
The 1554VA2GYCL enclosure is a versatile and durable solution designed to protect sensitive electronic components in industrial and outdoor environments. Its features include a watertight construction with a clear lid for visibility, stainless steel hardware for durability, and a rugged polycarbonate material resistant to impact, chemicals, and UV radiation. The enclosure is secured with stainless steel screws and a silicone gasket, providing protection against moisture, dust, and contaminants. Overall, it offers a reliable and cost-effective option for safeguarding electronic components in harsh conditions. The container comes with a hole drilled in the side to attach the connectors provided in the parts kit.
BUILD Diagram
The Diagram below describes the hardware build.
PICTURE OF THE BUILD
Here I have presented the build in pictures. The first picture shows the Grove Buzzer. Then the KR260 was placed inside the enclosure; the buzzer will also be placed inside the enclosure.
The next two pictures show the build with the enclosure cover on. They also show the three cables (power, USB camera, and Ethernet) coming through the hole in the side of the enclosure.
Software Design
Use ROS2 to implement the design
ROS2 Diagram of the Nodes, Topics, Services, Parameters and Actions used to implement my idea.
Nodes - send messages between themselves: Publisher, Subscriber
Topics - A TOPIC is a vital element of the ROS graph that acts as a bus for nodes to exchange messages. Topics are one of the main ways in which data is moved between nodes and therefore between different parts of the system.
Camera Control is the TOPIC
EXAMPLE
KR260 Controller - is a PUBLISHER node that publishes to the topics Camera Control and Buzzer Control
Camera and Buzzer - are SUBSCRIBER nodes that subscribe to Camera Control and Buzzer Control respectively.
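The publish/subscribe pattern described above can be illustrated with a small ROS-free sketch. This is only a plain-Python stand-in for the topic mechanism (on the kit, rclpy nodes and real topics would provide this); the topic names and messages below are placeholders of my own choosing.

```python
class TopicBus:
    """Minimal stand-in for the ROS2 topic mechanism: nodes publish
    messages to named topics, and every subscriber registered on a
    topic receives each message published to it."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)


bus = TopicBus()
received = []

# Camera and Buzzer act as subscriber nodes on their control topics
bus.subscribe("camera_control", lambda msg: received.append(("camera", msg)))
bus.subscribe("buzzer_control", lambda msg: received.append(("buzzer", msg)))

# The KR260 Controller acts as the publisher node
bus.publish("camera_control", "capture_frame")
bus.publish("buzzer_control", "buzz_on")
```

The point of the sketch is the decoupling: the controller never references the camera or buzzer objects directly, only the topic names, which is exactly how the ROS graph lets nodes be swapped out independently.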
Nodes
CAMERA
KR260 Controller
Run the Squirrel Vision Recognition MODEL
Control the camera and the buzzer
BUZZER
Topics
Camera Control
Buzzer Control
Other SOFTWARE
Machine Learning
The Squirrel Vision Recognition MODEL
Putting it all together
- Machine Learning is used to build the Squirrel Vision Recognition Model
- The system is implemented in ROS2
- The USB camera is attached to the kit in the U44 bottom USB 3.0 connector.
- The Grove buzzer is attached to the GPIO pins on the J21 Raspberry Pi HAT.
- The build is placed in the Hammond enclosure.
- Run the power, USB camera, and Ethernet cables through the pre-drilled hole in the side of the enclosure.
Summary
The build of the Wildlife Surveillance Project (WSP) will involve several key points:
- Hardware: The project will utilize the AMD KRIA™ KR260 Robotics Starter Kit as its core, incorporating a camera and potentially other sensors. The kit will initially be mounted in a window to observe and identify wildlife, with the ultimate goal of mounting it on an autonomous drone for wider coverage.
- Software: The project will leverage PYNQ version 3.0 for development, as the author is familiar with this environment. ROS2 will be used to handle camera input and control output, enabling real-time monitoring and response to squirrel activity.
- AI and Machine Learning: AI vision-guided robotics applications will be created, utilizing camera inputs and control output through ROS and AI targeting the Kria KR260 Robotics Starter Kit. The system will employ AI recognition technology to distinguish squirrels from birds and trigger deterrents when squirrels are detected.
- Deterrents: The system will be designed to scare away squirrels using humane methods, such as loud noises, predator sounds, bright lights, or water jets. The specific deterrent can be adjusted based on effectiveness and user preference.
- Testing Phase
- Unit Test all subcomponents and systems.
- Example: squirrel recognition
- Prototype Phase
- Put the Build together with the Hardware and software and run it
- Implementation
- Place the Build in the field
- Attach the power to the kit
- Detect squirrels and buzz them.
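The detect-and-buzz flow above can be sketched with a simple debounce, so one noisy classifier output does not trigger the buzzer. This is only a sketch; the confidence threshold and the number of required consecutive frames are assumptions to be tuned during the testing phase, matching the adjustable-sensitivity goal stated earlier.

```python
class SquirrelDeterrent:
    """Trigger the buzzer only after several consecutive confident
    squirrel detections, to avoid false alarms on birds or noise."""

    def __init__(self, confidence_threshold=0.8, frames_required=3):
        self.confidence_threshold = confidence_threshold  # tune in testing
        self.frames_required = frames_required            # tune in testing
        self._streak = 0

    def update(self, squirrel_confidence):
        """Feed one frame's classifier confidence; return True to buzz."""
        if squirrel_confidence >= self.confidence_threshold:
            self._streak += 1
        else:
            self._streak = 0  # any weak frame resets the streak
        return self._streak >= self.frames_required
```

In the ROS2 design, update() would run in the KR260 Controller node on each frame from the Camera Control topic, and a True result would publish a message on the Buzzer Control topic.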
This document describes a project proposal for a wildlife surveillance system using the AMD KRIA™ KR260 Robotics Starter Kit. The project aims to address the problem of squirrels invading bird feeders by utilizing camera inputs and control outputs through ROS (Robot Operating System) to scare them off when detected. The ultimate goal is to mount the kit on an autonomous drone to survey a larger area.
The document outlines the hardware and software components required for the project, including the KR260 Robotics Starter Kit, PYNQ version 3.0, ROS 2, and Ubuntu LTS version 22.04. It also details the steps involved in setting up the development environment, including updating the SOM firmware, configuring the SD card image, and connecting the necessary peripherals.
The project proposal emphasizes the use of AI vision-guided robotics applications and the potential for customization and expansion using additional hardware and software components. The document concludes by highlighting the benefits of the proposed surveillance system, including its effectiveness in deterring squirrels, its humane approach, and its potential for broader wildlife monitoring applications.
I did encounter issues installing Gazebo, a required simulator, on my Ubuntu 22.04 system and planned to explore alternative solutions. I was never able to get it working.
Future Enhancements
The future enhancements planned for the WSP include mounting the system on an autonomous drone. This would enable the system to survey a larger area and deter squirrels from a wider range around the bird feeders. Additionally, I may expand the project to log the movement of various wildlife species near my condo unit, effectively turning it into a more general wildlife surveillance system.
Experiment more with Kria Robotics Stack (KRS)
Resources
Getting Started with Kria KR260 Robotics Starter Kit: www.xilinx.com/KR260-start
For more information on PYNQ: PYNQ - Python productivity for Zynq - Home
For more information on Zynq UltraScale+ EV: Zynq UltraScale+ MPSoC (xilinx.com)
For more information on the KR260: Kria KR260 Robotics Starter Kit (xilinx.com)
For more information on the KR260 Starter Kit Applications: Kria KR260 Robotics Starter Kit Applications — Kria™ KR260 documentation (xilinx.github.io)
Training Webinar on Controlling Robots: Webinar Series: How to Control Robots with Gestures - 1608698 (webcasts.com)
DPU on PYNQ: https://github.com/Xilinx/DPU-PYNQ/tree/design_contest_3.5
Whitney Knitter's Projects - Hackster.io