The Kria KR260 Robotics Starter Kit serves as a development platform for robotics and factory automation applications, leveraging the Kria K26 SOM. This user-friendly kit empowers roboticists and industrial developers, even those without FPGA expertise, to create hardware-accelerated applications for robotics, machine vision, and industrial communication. It ensures flexibility with native ROS 2 and Ubuntu support and enhances productivity through the Kria Robotics Stack (KRS). Powered by the Zynq™ UltraScale+™ MPSoC architecture, the K26 SOM delivers up to 1.4 TOPS of AI processing and features an integrated H.264/H.265 video codec[1]. With 245 I/Os, it accommodates diverse requirements, supporting up to 15 cameras, 40Gb/s network connections, and various USB peripherals. This scalable and expandable platform offers versatility for a wide range of applications and adapts easily to evolving system needs.
- High-Level Specs: Zynq UltraScale+ MPSoC EV (XCK26), 4GB DDR4
- Platform: Embedded
- Applications: Autonomous systems, Robotics, Industrial Controls, Machine Vision
- Pre-Built Applications
Video codec acceleration (including at least the HEVC (H.265), H.264, VP9, and AV1 codecs) is subject to and not operable without inclusion/installation of compatible media players. GD-176
Easy Start and Programming with Python
The PYNQ environment is an open-source project from AMD that makes it easier to use Adaptive Computing platforms, such as Zynq devices (PYNQ: Python and ZYNQ). Leveraging the popular Python language and libraries, designers harness the power of programmable logic and microprocessors for creating sophisticated electronic systems. In the case of the KR260, we've taken an extra step by integrating native Robot Operating System (ROS 2) support into the development kit, facilitating seamless integration for roboticists and software developers. Moreover, for AI integration, PYNQ offers a user-friendly approach to incorporate machine learning models into the DPU overlay.
Requirements:
- Linux OS (Ubuntu 22.04)
- The Kria KR260 Robotics Starter kit
- USB Keyboard and Mouse
- USB Camera
- DisplayPort Display and cable
- Micro-SD card, at least 64GB
- Any additional add-on cards, depending on the desired application
For more information on the KR260 Starter Kit, see Technical Specs.
Developer Experience
APIs and SDKs:
- AI/ML workloads can run on the Kria KR260 board using DPU-PYNQ (v3.5). The overlay contains a Vitis AI 3.5.0 Deep Learning Processor Unit (DPU) and comes with a variety of notebook examples with pre-trained ML models.
- Robotic code/functionality runs on the Arm processor using the Robot Operating System 2 (ROS 2).
- Also available is Aupera's VMSS2.0, made free for contest participants. Install in minutes, create complex applications with no code, and customize with 100+ AMD Model Zoo models. Innovation made easy for every skill level. See more at Aupera VMSS2.0.
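On the software side, a DPU classification run delivers raw logits in an output buffer; turning those into a class label is ordinary Python. Below is a minimal, hardware-independent sketch of that post-processing step; the function names are our own for illustration, not part of the DPU-PYNQ API.

```python
# Hypothetical post-processing for a DPU classification run: the runner
# fills an output buffer with raw logits; mapping them to a class label
# is plain Python. All names and values here are illustrative only.

import math

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_class(logits):
    """Return (class_index, probability) for the highest-scoring class."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return idx, probs[idx]

# Example: 10 logits, as an MNIST-style classifier would produce.
logits = [0.1, -1.2, 0.3, 4.0, 0.0, -0.5, 1.1, 0.2, -2.0, 0.4]
idx, p = top_class(logits)
print(idx)  # → 3 (the position of the largest logit)
```

In the DPU-PYNQ notebooks this logic would consume the output tensor returned by the DPU runner after an inference job completes.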
Development Environment:
The development environment requires Ubuntu Server 22.04. The xmutil utility can be used to upgrade the Kria firmware. If you are using more than two USB devices, a USB hub can be plugged into one of the two right-hand USB ports.
Features and Functionality
Model Coverage:
- Classification
- Detection
- Segmentation
- NLP
- Text-OCR
- Surveillance
- Industrial-Vision-Robotics
- Medical-Image-Enhancement
Models for the KR260 can be found here. In addition, many ML models for computer vision are supported through Aupera's VMSS2.0.
Getting Started
To get started using the KR260 Robotics Starter Kit for your application, please see: https://github.com/amd/Kria-RoboticsAI
Tutorials / Examples
1) ROS AI Application – This example shows how to use the KR260 to process images from a camera with a classification model trained on the MNIST dataset; the recognized digit then directs TurtleSim to move in the corresponding direction. The image classification model runs on the DPU, while the TurtleSim robotics control uses ROS 2 on the Arm processor. This example demonstrates how to use both ROS and PYNQ to build a full application (camera input, AI processing of images, robotic sim output).
2) Tutorials for ROS 2 can be found here. We recommend at least the following tutorials to ensure your environment is set up correctly (users new to ROS 2 should follow the beginner tutorials):
- Configuring your environment: Configuring environment — ROS 2 Documentation: Humble documentation
- Turtlesim: Using turtlesim, ros2, and rqt — ROS 2 Documentation: Humble documentation
3) Aupera VMSS2.0 - Video Machine-Learning Streaming Server (VMSS 2.0) offers several features to create flexible ML pipelines while efficiently utilizing multiple FPGA resources.
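The digit-to-motion step in example 1 can be sketched as a small mapping function. Note the digit-to-direction mapping and the function name below are assumptions for illustration, not the ones defined in the repository.

```python
# Sketch of the decision step in the ROS AI example: map a classified
# MNIST digit to a TurtleSim velocity command. The mapping is hypothetical;
# the real example defines its own.

def digit_to_twist(digit):
    """Return (linear_x, angular_z) for a recognized digit.

    Assumed mapping for illustration:
      0 -> stop, 1 -> forward, 2 -> turn left, 3 -> turn right,
      anything else -> stop.
    """
    mapping = {
        0: (0.0, 0.0),   # stop
        1: (1.0, 0.0),   # move forward
        2: (0.0, 1.0),   # rotate counter-clockwise
        3: (0.0, -1.0),  # rotate clockwise
    }
    return mapping.get(digit, (0.0, 0.0))

# In the real application these values would populate a geometry_msgs/Twist
# message published to TurtleSim's /turtle1/cmd_vel topic.
print(digit_to_twist(1))  # → (1.0, 0.0)
```

In the ROS 2 node, the two returned values would be assigned to `twist.linear.x` and `twist.angular.z` before publishing.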
More examples coming in 2024.
Appendix
- For more information on PYNQ: PYNQ - Python productivity for Zynq - Home
- For more information on Zynq UltraScale+ EV: Zynq UltraScale+ MPSoC (xilinx.com)
- For more information on the KR260: Kria KR260 Robotics Starter Kit (xilinx.com)
- For more information on the KR260 Starter Kit Applications: Kria KR260 Robotics Starter Kit Applications — Kria™ KR260 documentation (xilinx.github.io)
- Tutorials - MakarenaLabs
- Training webinar on controlling robots: Webinar Series: How to Control Robots with Gestures - 1608698 (webcasts.com)
- DPU on PYNQ: https://github.com/Xilinx/DPU-PYNQ/tree/design_contest_3.5
- Whitney Knitter's Projects - Hackster.io