Dine-O is an innovative restaurant robot designed to enhance dining experiences by automating key tasks such as taking orders, delivering food, and clearing tables. Utilizing the Kria KR260 Robotics Starter Kit and advanced Vitis AI technology, Dine-O features an interactive display for engaging customers and personalized recommendations based on individual preferences.
Dine-O is designed to revolutionize the restaurant industry by tackling several pressing issues, including labor shortages and operational inefficiencies. As the restaurant sector grapples with a global labor crisis, Dine-O steps in to automate tasks that are traditionally handled by waitstaff, such as taking orders and delivering food. This automation addresses staffing challenges and streamlines restaurant operations, reducing wait times and enhancing overall customer satisfaction.
The robot features an interactive display that engages customers by showcasing menus, processing orders through touch inputs, and providing real-time entertainment. This interaction is powered by Vitis AI, which also enables multilingual support and personalized recommendations based on individual customer preferences. By offering a more engaging and efficient service, Dine-O helps to elevate the dining experience and ensure that customers have a memorable visit.
Dine-O's design includes data collection capabilities for valuable insights into customer preferences, order patterns, and operational performance. This data-driven approach allows restaurants to make informed decisions, optimize their operations, and ultimately increase customer satisfaction. The robot’s ability to analyze and act on data provides a significant advantage over existing solutions, which may only address certain aspects of food service automation.
What sets Dine-O apart from other solutions is its comprehensive integration of advanced technologies. Utilizing the Kria KR260 Robotics Starter Kit and AMD AI hardware, Dine-O excels in computer vision, natural language processing, and machine learning. This robust combination of hardware and software not only enhances Dine-O's functionality but also ensures it delivers superior performance in both automation and customer interaction, making it a standout solution in the field of food service robotics.
Setting Up Kria For Dine-O
The Kria™ Robotics Starter Kit is designed with a primary and a secondary boot device to effectively separate the boot firmware from the runtime OS and application. This design allows you to concentrate on developing and updating your application code within the application image on the secondary boot device, without altering the boot firmware. The primary boot device consists of a QSPI memory located on the System on Module (SOM), which comes pre-programmed with a QSPI image from the factory. The secondary boot device is a micro-SD card interface on the carrier card.
1. Setting up the Ubuntu SD Card Image
To set up the micro-SD card with the necessary Ubuntu image, follow these detailed steps:
Download the SD Card Image:
- Visit the Install Ubuntu on AMD website.
- Navigate to the "Choose a board" section and select "Kria K26 SOMs."
- Click on the green "Download 22.04 LTS" button. This will save the file named iot-limerick-kria-classic-desktop-2204-x07-20230302-63.img.xz to your computer.
Write the Image to the SD Card:
- Use an image flashing tool to write the downloaded image file to the micro-SD card. One recommended tool is Balena Etcher, which is available for Windows, Linux, and macOS.
- Follow the instructions provided in the "Setting up the SD card image" reference document to complete this process.
2. Hardware Setup
To set up the AMD Kria™ KR260 Robotics Starter Kit, make the following connections as illustrated in Figure 1:
Insert the micro-SD Card:
- Place the micro-SD card containing the boot image into the micro-SD card slot (J11) on the Starter Kit.
Connect the USB Cable:
- Obtain a USB-A to micro-B cable (also known as a micro-USB cable) that supports data transfer. Do not connect the USB-A end to your computer yet.
- Connect the micro-B end to the J4 port on the Starter Kit.
Connect Input Devices:
- Plug in your USB keyboard and mouse to the USB ports (U44 and U46) on the Starter Kit.
Connect to a Display:
- Use a DisplayPort cable to connect the Starter Kit to a monitor or display.
Connect the Ethernet Cable:
- Connect an Ethernet cable to one of the PS ETH ports (the top right port: J10D) on the Starter Kit. This will provide the required internet access using the factory-loaded firmware.
Connect the Power Supply:
- Connect the power supply to the DC Jack (J12) on the Starter Kit. Do not plug the other end into the AC outlet just yet.
Optional: Connect a USB Camera:
- If needed, you can connect a supported USB camera or webcam to the Starter Kit at this point.
3. Booting and Logging In
Now that you have prepared your KR260 board with the Ubuntu image, it's time to power it on and log in. The default login credentials are:
- Username: ubuntu
- Password: ubuntu
Upon logging in for the first time with these credentials, the system will prompt you to change the password. You can log in using either the traditional method over the UART serial port or by accessing the full GNOME Desktop environment.
3.1 Logging in via UART with PuTTY
3.1.1 Connecting with a Windows PC
To connect a Windows PC to the target board via UART using PuTTY, follow these detailed steps:
- Install PuTTY:
Download and install PuTTY from its official website.
- Connect the USB-A to micro-B Cable:
Connect the micro-B end of the USB cable to the J4 port on the KR260 Starter Kit.
Plug the USB-A end into a USB port on your Windows PC.
- Identify the COM Port:
Open Device Manager on your Windows PC and look for the COM port associated with the USB connection (e.g., COM27).
- Configure PuTTY:
- Open PuTTY and select "Serial" as the connection type.
- Set the serial line to the identified COM port (e.g., COM27).
- Configure the serial port settings: Baud rate: 115200, Data: 8-bit, Parity: None, Stop bits: 1, Flow control: None.
- Establish the Connection:
Click "Open" to connect. You should now see the login prompt for the KR260 Starter Kit.
3.1.2 Connecting with a Linux Host
To connect a Linux host computer to the target board via UART using PuTTY, follow these detailed steps:
- Install PuTTY:
- If PuTTY is not already installed, you can install it using the following command:
sudo apt-get install putty
- Connect the USB-A to the micro-B Cable:
- Connect the micro-B end of the USB cable to the J4 port on the KR260 Starter Kit.
- Plug the USB-A end into a USB port on your Linux host computer.
- Identify the USB Device:
- Open a terminal and run a command such as lsusb or dmesg | grep tty to identify the connected USB serial device (e.g., /dev/ttyUSB1).
- Launch PuTTY:
Open PuTTY and configure it for serial communication. Use the following command to connect:
sudo putty /dev/ttyUSB1 -serial -sercfg 115200,8,n,1,N
- Open a Shell on the Board:
- Once connected, you should see the login prompt. Log in using the default credentials (ubuntu/ubuntu).
- Check the Network Connection:
- After logging in, you may want to check the board’s IP address to facilitate further operations such as file transfers or remote connections:
# Become the superuser
sudo su
# Check network connections
ifconfig
Note down the IP address, for example, 192.168.1.186.
With these steps, you can establish a connection to your KR260 Starter Kit and access the system either through the UART serial port or remotely using SSH.
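If you prefer a scripted console to PuTTY, the short Python sketch below (using the widely available pyserial package, which is our own addition and not part of the original setup) reads a few lines from the same UART port with the settings listed above.
# Minimal UART console check with pyserial (pip install pyserial).
# Assumes the same serial settings as the PuTTY steps above: 115200 8N1.
import serial

PORT = "/dev/ttyUSB1"  # e.g., "COM27" on Windows; adjust to your system
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as uart:
    uart.write(b"\n")  # nudge the board so it prints the login prompt
    for _ in range(20):  # read a few lines of console output
        line = uart.readline().decode(errors="replace").rstrip()
        if line:
            print(line)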
4. Enabling Wi-Fi on Kria™ KR260
To set up WiFi on your Kria KR260 using the rtl8188eus driver, follow these steps:
4.1 Prepare Your Environment
- Ensure your system is up-to-date:
sudo apt-get update
sudo apt-get upgrade
- Install necessary dependencies:
sudo apt-get install build-essential dkms git network-manager
4.2 Download and Install the rtl8188eus Driver
- Clone the rtl8188eus repository:
git clone https://github.com/aircrack-ng/rtl8188eus.git
cd rtl8188eus
- Blacklist the conflicting driver:
- Open the blacklist configuration file:
sudo nano /etc/modprobe.d/blacklist.conf
- Add the following line to blacklist the rtl8188eu driver:
blacklist r8188eu
- Save and close the file (Ctrl+O, Enter, Ctrl+X).
- Build and install the driver:
make
sudo make install
sudo modprobe 8188eu
4.3 Connect to a WiFi Network Using nmcli
- Restart NetworkManager:
sudo systemctl restart NetworkManager
- Identify available WiFi networks:
nmcli device wifi list
- Connect to a WiFi network:
Replace <SSID> with your network's SSID and <password> with your network's password:
nmcli device wifi connect <SSID> password <password>
- Verify the connection:
nmcli device status
- Ensure your wireless interface (e.g., wlan0) is listed as connected.
- Check the IP address:
- Confirm that an IP address has been assigned to your wireless interface:
ifconfig wlan0
Now we have successfully enabled Wi-Fi, which removes our dependency on Ethernet connectivity.
5. Install PYNQ DPU
- Clone the Repository on KR260: Log in to your KR260 board and clone the Kria-RoboticsAI repository:
git clone https://github.com/amd/Kria-RoboticsAI.git /home/ubuntu/KR260-Robotics-AI-Challenge
- Install dos2unix Utility: Install the dos2unix utility to ensure all shell scripts are in the correct Unix format:
sudo apt install dos2unix
cd /home/ubuntu/KR260-Robotics-AI-Challenge/files/scripts
for file in $(find . -name "*.sh"); do
echo ${file}
dos2unix ${file}
done
- Run the Installation Script: Become a superuser and run the installation script:
sudo su
cd /home/ubuntu/KR260-Robotics-AI-Challenge
cp files/scripts/install_update_kr260_to_vitisai35.sh /home/ubuntu
cd /home/ubuntu
source ./install_update_kr260_to_vitisai35.sh
This script will:
- Install the required Debian packages.
- Create a Python virtual environment named pynq_venv.
- Configure a Jupyter portal.
- Update packages from Vitis-AI 2.5 to the latest Vitis-AI 3.5.
The process takes around 30 minutes. During installation, you might encounter the following:
- A window warning about a xilinx-zynqmp kernel mismatch. Accept the maintainer's version and continue.
- If the installation stops with an error, run source ./install_update_kr260_to_vitisai35.sh again. The second attempt should take less time.
- Reboot the Board: After the installation completes successfully, reboot the board:
reboot
- Set Up the Environment: After rebooting, always start as a superuser and set the PYNQ environment before running any application:
sudo su
source /etc/profile.d/pynq_venv.sh
- To exit the virtual environment, use:
deactivate
- Verify the Installation: After setting the pynq_venv environment, you can go to your Jupyter notebook home folder and fetch the latest notebooks:
cd $PYNQ_JUPYTER_NOTEBOOKS
pynq get-notebooks pynq-dpu -p .
Open a web browser and go to http://<KRIA_IP_ADDRESS>:9090/lab (using account xilinx with password xilinx) to access the Jupyter Notebook interface. Run the example notebooks to verify the DPU functionality, or use the short Python check below.
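Beyond the example notebooks, a quick way to confirm the DPU is usable from Python is a minimal sketch like the one below (our own check, run inside the pynq_venv environment as root); the dpu.bit and tf_yolov3_voc.xmodel names are the same ones used in the detection code later in this write-up.
# Quick DPU sanity check: load the overlay and inspect the model's I/O tensors.
# Run inside pynq_venv (sudo su; source /etc/profile.d/pynq_venv.sh).
from pynq_dpu import DpuOverlay

overlay = DpuOverlay("dpu.bit")
overlay.load_model("tf_yolov3_voc.xmodel")

dpu = overlay.runner
print("Input tensors :", [tuple(t.dims) for t in dpu.get_input_tensors()])
print("Output tensors:", [tuple(t.dims) for t in dpu.get_output_tensors()])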
6. Installing ROS 2 Humble
To install the ROS 2 Humble Hawksbill distribution on your KR260 board, follow these detailed steps:
6.1 Boot the Board and Enter the PYNQ Environment: Open a terminal and enter the PYNQ environment:
sudo su
source /etc/profile.d/pynq_venv.sh
6.2 Run the ROS 2 Installation Script: Navigate to the directory containing the installation script and run it:
cd /home/ubuntu/KR260-Robotics-AI-Challenge/files/scripts/
source ./install_ros.sh
The installation process might take some time, and you will need to answer "Y" to prompts when requested.
6.3 Test TurtleSim: Once ROS 2 is installed, you can verify the installation by running the TurtleSim application:
- Set Up the ROS 2 Environment:
source /opt/ros/humble/setup.bash
- Launch the TurtleSim Node:
ros2 run turtlesim turtlesim_node
A simulator window should appear with a turtle in the center. You can ignore messages like libGL error: failed to load driver: xlnx.
- Control the Turtle: Open another terminal, set up the ROS 2 environment again, and run a node to control the turtle:
source /opt/ros/humble/setup.bash
ros2 run turtlesim turtle_teleop_key
Now you should be able to control the turtle using your keyboard.
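If you also want to exercise ROS 2 from Python, the small rclpy sketch below (our own example, not part of the installation script) publishes a Twist on /turtle1/cmd_vel so the turtle drives in a circle; run it in another terminal with the ROS 2 environment sourced and turtlesim_node still running.
# Minimal rclpy check: drive the TurtleSim turtle in a circle.
# Run after: source /opt/ros/humble/setup.bash
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class CirclePublisher(Node):
    def __init__(self):
        super().__init__("turtle_circle")
        self.pub = self.create_publisher(Twist, "/turtle1/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.tick)  # publish at 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 1.0   # forward speed
        msg.angular.z = 1.0  # turn rate -> circular path
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = CirclePublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()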
7. Setup Micro-ROS
7.1 Prerequisites
Ensure you have ROS 2 Humble Hawksbill installed on your KR260 board. If you haven't done this yet, follow the ROS 2 installation steps provided previously.
7.2 Enter the PYNQ Environment
Open a terminal and enter the PYNQ environment:
sudo su
source /etc/profile.d/pynq_venv.sh
7.3 Install Dependencies
Install the necessary dependencies for Micro-ROS:
sudo apt update
sudo apt install -y python3-colcon-common-extensions python3-pip python3-vcstool build-essential cmake
7.4 Create a ROS 2 Workspace
Create a workspace for Micro-ROS:
mkdir -p ~/micro_ros_arduino_ws/src
cd ~/micro_ros_arduino_ws
7.5 Clone the Micro-ROS Repositories
Use git to clone the necessary repositories:
cd ~/micro_ros_arduino_ws
git clone --branch humble https://github.com/micro-ROS/micro_ros_setup.git src/micro_ros_setup
7.6 Build the Micro-ROS Workspace
Build the workspace using colcon:
cd ~/micro_ros_arduino_ws
rosdep update && rosdep install --from-paths src --ignore-src -y
colcon build
source install/local_setup.bash
7.7 Install the Micro-ROS Agent
You need to install the Micro-ROS Agent to bridge between Micro-ROS and ROS 2:
ros2 run micro_ros_setup create_agent_ws.sh
ros2 run micro_ros_setup build_agent.sh
source install/local_setup.bash
7.8 Run the Micro-ROS Agent
Run the Micro-ROS Agent to start communicating with Micro-ROS nodes:
ros2 run micro_ros_agent micro_ros_agent serial --dev /dev/ttyUSB0 -b 115200
Replace /dev/ttyUSB0 and 115200 with the appropriate device and baud rate for your setup.
8.1 Kria KR260
The Kria KR260 is a powerful adaptive computing platform designed by AMD (formerly Xilinx) for industrial applications, including robotics, embedded vision, and AI at the edge. It is particularly suitable for applications requiring high performance and real-time processing capabilities.
Key Features of Kria KR260
- FPGA Fabric: The KR260 integrates a Xilinx Zynq UltraScale+ MPSoC, combining programmable logic (FPGA) with a quad-core ARM Cortex-A53 processor, a dual-core ARM Cortex-R5 real-time processor, and an ARM Mali-400 MP2 GPU.
- High Performance: Offers high computational power suitable for complex algorithms in AI and machine learning, crucial for real-time decision-making and control in robotics.
- Rich I/O Interfaces: Provides a wide range of I/O interfaces, including Gigabit Ethernet, USB 3.0, DisplayPort, and multiple GPIO pins, allowing seamless integration with various sensors and actuators.
- AI and Vision Capabilities: Supports AI inference and image processing, enabling the implementation of advanced vision systems and AI models.
- Development Tools: Compatible with Vitis AI and PYNQ, offering a robust development environment for AI and embedded applications.
- Customization and Flexibility: The programmable logic (FPGA) allows for customization and optimization of hardware acceleration for specific tasks, enhancing performance and efficiency.
8.2 ESP32
The ESP32 is a powerful, low-cost microcontroller with integrated Wi-Fi and Bluetooth capabilities. Developed by Espressif Systems, the ESP32 is widely used in Internet of Things (IoT) applications due to its robust features and versatility. The ESP32 plays a crucial role in our project. Its integration with Micro-ROS allows it to communicate efficiently with the main ROS 2 system running on the KR260 board. The ESP32 handles real-time sensor data acquisition, motor control, and wireless communication, making it an indispensable component in ensuring the robot operates smoothly and efficiently.
Key Features of ESP32
- Dual-Core Processor: The ESP32 features a dual-core Tensilica Xtensa LX6 microprocessor, providing ample processing power for complex tasks.
- Wi-Fi and Bluetooth: Integrated 2.4 GHz Wi-Fi and Bluetooth (both classic and BLE) capabilities, allowing for wireless communication and networking.
- GPIO Pins: Rich set of GPIO (General Purpose Input/Output) pins for interfacing with various peripherals and sensors.
- Low Power Consumption: Designed for energy efficiency, with multiple power-saving modes.
- High Modularity: Supports various protocols and interfaces, including SPI, I2C, UART, ADC, and DAC.
- Real-Time Operating System (RTOS): Supports FreeRTOS, making it suitable for real-time applications.
- Large Memory: Includes SRAM and flash memory for program storage and execution.
8.3 Motors and Drivers
The chosen motors are Rhino 100RPM 25Kgcm DC Planetary Geared High Precision Encoder Servo Motors, providing a high torque output of 25 Kg-cm, essential for driving the weighty structure. These motors feature a planetary gear design for durability and efficiency and a high-precision encoder for accurate navigation and positioning.
The motor's nominal voltage is 12 V, with a no-load current of 0.8 A and a maximum load current of 7 A.
To optimize the operation of the motors in this project, the Rhino Motion Controls DC Servo Drive RMCS-2303 with UART ASCII is utilized. This advanced servo drive operates within a voltage range of 10-30 V DC and communicates seamlessly via UART. The RMCS-2303 is designed to provide closed-loop speed and position control for DC motors with encoders, ensuring precise and reliable performance.
One of the standout features of the RMCS-2303 is its ability to maintain programmed speeds regardless of variations in supplied voltage, ensuring consistent operation. Additionally, it guarantees that the rated torque is available at all speeds without any reduction, making it ideal for demanding applications where maintaining performance under varying conditions is crucial.
To enhance its reliability, the RMCS-2303 includes multiple protection features. It is equipped with short-circuit protection for motor outputs, over-voltage, and under-voltage protection, and is capable of surviving accidental motor disconnects while powered. These protections ensure the longevity and safe operation of both the drive and the connected motors.
Configuration of the RMCS-2303 is straightforward and flexible, performed via the MODBUS ASCII protocol over UART. Users can set the Modbus Slave Address from 1 to 7, either through physical jumpers (hardware setting) or using a PC GUI, PLC, or any other device capable of providing the correct data output on ASCII Modbus.
The three user modes offered by the RMCS-2303 provide versatile control options to suit various application requirements:
- Mode 0 (Analog Control): Suitable for applications needing smooth, variable speed control via an analog signal.
- Mode 1 (Digital Speed Control): Ideal for precise and repeatable speed settings using digital commands.
- Mode 2 (Position Control): Perfect for tasks requiring accurate positioning and movement control.
The RMCS-2303 servo drive's combination of advanced control capabilities, robust protection features, and flexible configuration options makes it an excellent choice for optimizing the performance of the motors in this autonomous food delivery system.
The system is powered by a Li-Ion 11.1V 10000mAh (2C) battery with inbuilt charger-protection circuitry, configured with 12 Li-ion 3.7V 2500mAh cells (3S4P), providing a reliable power source and ensuring long operational periods without frequent recharging.
The battery is charged using an IMAX B6AC Charger/Discharger from Skyrc.
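For reference, the pack rating follows directly from the 3S4P arrangement of 3.7 V / 2500 mAh cells; the tiny check below just spells out that arithmetic.
# 3S4P pack arithmetic: series cells add voltage, parallel strings add capacity.
cell_voltage_v = 3.7
cell_capacity_mah = 2500
series, parallel = 3, 4

print("Pack voltage :", round(series * cell_voltage_v, 1), "V")  # 11.1 V
print("Pack capacity:", parallel * cell_capacity_mah, "mAh")     # 10000 mAh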
Heavy-duty wheels are employed in the project, each with a load capacity of 20 kg. To ensure a secure attachment between the wheels and the motors, 6mm flange couplers with an outer diameter of 59 mm are used. This setup ensures that the wheels can handle the weight of the delivery system while providing reliable and efficient movement.
8.4 Project Design
The structure of the project is designed using Fusion 360, a powerful CAD software. The design is then converted to DXF files for precise cutting using a CNC machine. MDF wood is chosen for its balance of strength and workability, making it an ideal material for constructing the large and robust structure required for the delivery system.
We then painted the whole structure with white emulsion paint.
8.5 Testing Motors
The assembly and testing phase of the project began with motor testing. Following this, four motors were attached to the base section. Each motor was individually tested using Rhino Motor Control software, connected to the computer via the FTDI 232 Mini USB to TTL Converter through the UART pins on the driver, ensuring correct functionality.
All the motors were confirmed to be working correctly.
8.6 Remote Controlling Motors With ROS2
To control the motors using Micro-ROS and the teleop_twist_keyboard package, you need to set up your ESP32 board to receive velocity commands and then translate those commands into motor control signals.
8.6.1 Set Up the Arduino Environment
Ensure you have the Micro-ROS library installed in your Arduino IDE. If you haven't done this yet, refer to the previous steps on how to install the Micro-ROS library in the Arduino IDE.
8.6.2 Arduino Sketch
Create an Arduino sketch that subscribes to the cmd_vel topic and controls the motors accordingly. Here's our code:
#include <micro_ros_arduino.h>
#include <RMCS2303drive.h>
#include <geometry_msgs/msg/twist.h>
#include <std_msgs/msg/int32.h>
#include "MapFloat.h"
#include <stdio.h>
#include <rcl/rcl.h>
#include <rcl/error_handling.h>
#include <rclc/rclc.h>
#include <rclc/executor.h>
RMCS2303 rmcs; // creation of motor driver object
// slave ids to be set on the motor driver refer to the manual in the reference section
byte slave_id1 = 3;
byte slave_id2 = 1;
byte slave_id3 = 2;
byte slave_id4 = 4;
// Micro-ROS variables
rcl_publisher_t left_ticks_pub;
rcl_publisher_t right_ticks_pub;
rcl_subscription_t sub;
rclc_executor_t executor;
rclc_support_t support;
rcl_allocator_t allocator;
rcl_node_t node;
geometry_msgs__msg__Twist msg; // msg variable of data type twist
std_msgs__msg__Int32 lwheel; // for storing left encoder value
std_msgs__msg__Int32 rwheel; // for storing right encoder value
// Make sure to specify the correct values here
//*******************************************
double wheel_rad = 0.0500, wheel_sep = 0.260; // wheel radius and wheel separation in meters.
//******************************************
double w_r = 0, w_l = 0;
double speed_ang;
double speed_lin;
double leftPWM;
double rightPWM;
bool new_command_received = false; // Flag to track new command reception
void messageCb(const void *msg_in) // cmd_vel callback function definition
{
const geometry_msgs__msg__Twist *msg = (const geometry_msgs__msg__Twist *)msg_in;
new_command_received = true;
speed_lin = fmax(fmin(msg->linear.x, 1.0f), -1.0f); // limits the linear x value from -1 to 1
speed_ang = fmax(fmin(msg->angular.z, 1.0f), -1.0f); // limits the angular z value from -1 to 1
// Kinematic equation for finding the left and right velocities
w_r = (speed_lin / wheel_rad) + ((speed_ang * wheel_sep) / (2.0 * wheel_rad));
w_l = (speed_lin / wheel_rad) - ((speed_ang * wheel_sep) / (2.0 * wheel_rad));
if (w_r == 0)
{
rightPWM = 0;
rmcs.Disable_Digital_Mode(slave_id1,0);
rmcs.Disable_Digital_Mode(slave_id2,0); // if right motor velocity is zero set right pwm to zero and disabling motors
rmcs.Disable_Digital_Mode(slave_id3,0);
rmcs.Disable_Digital_Mode(slave_id4,0);
}
else
rightPWM = mapFloat(fabs(w_r), 0.0, 18.0, 1500,17200); // mapping the right wheel velocity with respect to Motor PWM values
if (w_l == 0)
{
leftPWM = 0;
rmcs.Disable_Digital_Mode(slave_id1,0);
rmcs.Disable_Digital_Mode(slave_id2,0); // if left motor velocity is zero set left pwm to zero and disabling motors
rmcs.Disable_Digital_Mode(slave_id3,0);
rmcs.Disable_Digital_Mode(slave_id4,0);
}
else
leftPWM = mapFloat(fabs(w_l), 0.0, 18.0, 1500,17200); // mapping the left wheel velocity with respect to Motor PWM values
rmcs.Speed(slave_id1,rightPWM);
rmcs.Speed(slave_id2,rightPWM);
rmcs.Speed(slave_id3,leftPWM);
rmcs.Speed(slave_id4,leftPWM);
if (w_r > 0 && w_l > 0)
{
rmcs.Enable_Digital_Mode(slave_id1,1);
rmcs.Enable_Digital_Mode(slave_id2,1); // forward condition
rmcs.Enable_Digital_Mode(slave_id3,0);
rmcs.Enable_Digital_Mode(slave_id4,0);
}
else if (w_r < 0 && w_l < 0)
{
rmcs.Enable_Digital_Mode(slave_id1,0);
rmcs.Enable_Digital_Mode(slave_id2,0); // backward condition
rmcs.Enable_Digital_Mode(slave_id3,1);
rmcs.Enable_Digital_Mode(slave_id4,1);
}
else if (w_r > 0 && w_l < 0)
{
rmcs.Enable_Digital_Mode(slave_id1,1);
rmcs.Enable_Digital_Mode(slave_id2,1); // Leftward condition
rmcs.Enable_Digital_Mode(slave_id3,1);
rmcs.Enable_Digital_Mode(slave_id4,1);
}
else if (w_r < 0 && w_l > 0)
{
rmcs.Enable_Digital_Mode(slave_id1,0);
rmcs.Enable_Digital_Mode(slave_id2,0); // rightward condition
rmcs.Enable_Digital_Mode(slave_id3,0);
rmcs.Enable_Digital_Mode(slave_id4,0);
}
else
{
rmcs.Brake_Motor(slave_id1,0);
rmcs.Brake_Motor(slave_id2,0);
rmcs.Brake_Motor(slave_id3,0);
rmcs.Brake_Motor(slave_id4,0); // if none of the above, brake the motors in both the clockwise and anti-clockwise directions
rmcs.Brake_Motor(slave_id1,1);
rmcs.Brake_Motor(slave_id2,1);
rmcs.Brake_Motor(slave_id3,1);
rmcs.Brake_Motor(slave_id4,1);
}
}
void setup()
{
// Initialize serial for debugging
Serial.begin(115200);
rmcs.Serial_selection(0); // 0 -> for Hardware serial tx1 rx1 of Arduino Mega
rmcs.initSerial(9600);
rmcs.begin(&Serial2, 9600, 16, 17);
// Micro-ROS initialization
set_microros_transports();
allocator = rcl_get_default_allocator();
rclc_support_init(&support, 0, NULL, &allocator);
rclc_node_init_default(&node, "esp32_node", "", &support);
// Create publishers and subscribers
rclc_publisher_init_default(
&left_ticks_pub,
&node,
ROSIDL_GET_MSG_TYPE_SUPPORT(std_msgs, msg, Int32),
"left_ticks");
rclc_publisher_init_default(
&right_ticks_pub,
&node,
ROSIDL_GET_MSG_TYPE_SUPPORT(std_msgs, msg, Int32),
"right_ticks");
rclc_subscription_init_default(
&sub,
&node,
ROSIDL_GET_MSG_TYPE_SUPPORT(geometry_msgs, msg, Twist),
"cmd_vel");
// Create executor
rclc_executor_init(&executor, &support.context, 1, &allocator);
rclc_executor_add_subscription(&executor, &sub, &msg, &messageCb, ON_NEW_DATA);
}
void loop()
{
lwheel.data = rmcs.Position_Feedback(slave_id3); // read the encoder value from the motor with slave_id3 (left side)
rwheel.data = -rmcs.Position_Feedback(slave_id1); // read the encoder value from the motor with slave_id1 (right side, negated to match direction)
rcl_publish(&left_ticks_pub, &lwheel, NULL); // publish left enc values
rcl_publish(&right_ticks_pub, &rwheel, NULL); // publish right enc values
rclc_executor_spin_some(&executor, RCL_MS_TO_NS(100));
// If no new command has been received, stop the motors
if (!new_command_received)
{
rmcs.Brake_Motor(slave_id1, 0);
rmcs.Brake_Motor(slave_id2, 0);
rmcs.Brake_Motor(slave_id3, 0);
rmcs.Brake_Motor(slave_id4, 0); // Stop the motors
rmcs.Brake_Motor(slave_id1, 1);
rmcs.Brake_Motor(slave_id2, 1);
rmcs.Brake_Motor(slave_id3, 1);
rmcs.Brake_Motor(slave_id4, 1);
}
else
{
new_command_received = false; // Reset the flag after executing the command
}
}
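As a quick offline sanity check of the differential-drive equations used in messageCb above (not part of the firmware), the Python snippet below reproduces the wheel-speed calculation with the same wheel_rad and wheel_sep values.
# Offline check of the differential-drive kinematics used in the ESP32 sketch.
wheel_rad = 0.0500  # wheel radius in meters (same value as the sketch)
wheel_sep = 0.260   # wheel separation in meters (same value as the sketch)

def wheel_speeds(linear_x, angular_z):
    """Return (w_r, w_l) in rad/s for the commanded body velocities."""
    w_r = (linear_x / wheel_rad) + (angular_z * wheel_sep) / (2.0 * wheel_rad)
    w_l = (linear_x / wheel_rad) - (angular_z * wheel_sep) / (2.0 * wheel_rad)
    return w_r, w_l

print(wheel_speeds(0.5, 0.0))  # straight ahead: both wheels at 10.0 rad/s
print(wheel_speeds(0.0, 1.0))  # turn in place: (2.6, -2.6) rad/s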
8.6.3 Install teleop_twist_keyboard Package
On your KR260 board, install the teleop_twist_keyboard package to send velocity commands to the ESP32:
sudo apt update
sudo apt install ros-humble-teleop-twist-keyboard
8.6.4 Run the Micro-ROS Agent
Ensure the Micro-ROS agent is running on your KR260 board:
ros2 run micro_ros_agent micro_ros_agent serial --dev /dev/ttyUSB0 -b 115200
Replace /dev/ttyUSB0 and 115200 with the appropriate device and baud rate for your setup.
8.6.5 Run teleop_twist_keyboard
In a new terminal on your KR260 board, set up the ROS 2 environment and run the teleop_twist_keyboard node:
source /opt/ros/humble/setup.bash
ros2 run teleop_twist_keyboard teleop_twist_keyboard
8.6.6 Test the Setup
Use the keyboard to send velocity commands to the ESP32. The ESP32 should receive the cmd_vel messages and control the motors accordingly. The teleop_twist_keyboard node lets you drive the robot from the keyboard, with different keys mapped to different velocity commands.
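To double-check that the ESP32 side is alive on the ROS 2 graph, you can also listen to the encoder topics it publishes. The sketch below is our own minimal rclpy subscriber for the left_ticks and right_ticks topics defined in the Arduino code; run it on the KR260 while the Micro-ROS agent is active.
# Minimal subscriber to verify the encoder topics published by the ESP32.
# Run with the Micro-ROS agent running and the ROS 2 environment sourced.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32

class TickMonitor(Node):
    def __init__(self):
        super().__init__("tick_monitor")
        self.create_subscription(Int32, "left_ticks",
                                 lambda m: self.get_logger().info(f"left : {m.data}"), 10)
        self.create_subscription(Int32, "right_ticks",
                                 lambda m: self.get_logger().info(f"right: {m.data}"), 10)

def main():
    rclpy.init()
    rclpy.spin(TickMonitor())
    rclpy.shutdown()

if __name__ == "__main__":
    main()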
9. Customer Detection with YOLOv3
YOLOv3 (You Only Look Once version 3) is a powerful object detection algorithm that can be used to detect and classify objects in real time, including people. When deployed on the Kria KR260 platform, YOLOv3 can leverage the powerful FPGA and CPU resources to perform efficient and accurate detection tasks.
We initially utilized the built-in YOLOv3 example provided with our system. This example served as a foundational tool for object detection, allowing us to identify various objects within images. Building on this base, we customized and extended the YOLOv3 implementation to specifically detect customers. This advanced version of the YOLOv3 model is tailored to recognize human figures in the robot's field of view. By focusing on detecting customers, the system is designed to trigger interactions between the robot and the detected individuals. This enhancement enables the robot to respond appropriately when it identifies a customer, thus facilitating more dynamic and engaging interactions.
from pynq_dpu import DpuOverlay
import numpy as np
import cv2
import colorsys
import random
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import time
# Load the DPU overlay
overlay = DpuOverlay("dpu.bit")
overlay.load_model("tf_yolov3_voc.xmodel")
# Utility functions
def get_class(classes_path):
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def letterbox_image(image, size):
ih, iw, _ = image.shape
w, h = size
scale = min(w/iw, h/ih)
nw = int(iw*scale)
nh = int(ih*scale)
image = cv2.resize(image, (nw,nh), interpolation=cv2.INTER_LINEAR)
new_image = np.ones((h,w,3), np.uint8) * 128
h_start = (h-nh)//2
w_start = (w-nw)//2
new_image[h_start:h_start+nh, w_start:w_start+nw, :] = image
return new_image
def pre_process(image, model_image_size):
image = image[...,::-1]
image_h, image_w, _ = image.shape
if model_image_size != (None, None):
assert model_image_size[0] % 32 == 0, 'Multiples of 32 required'
assert model_image_size[1] % 32 == 0, 'Multiples of 32 required'
boxed_image = letterbox_image(image, tuple(reversed(model_image_size)))
else:
new_image_size = (image_w - (image_w % 32), image_h - (image_h % 32))
boxed_image = letterbox_image(image, new_image_size)
image_data = np.array(boxed_image, dtype='float32')
image_data /= 255.
image_data = np.expand_dims(image_data, 0)
return image_data
def _get_feats(feats, anchors, num_classes, input_shape):
num_anchors = len(anchors)
anchors_tensor = np.reshape(np.array(anchors, dtype=np.float32), [1, 1, 1, num_anchors, 2])
grid_size = np.shape(feats)[1:3]
nu = num_classes + 5
predictions = np.reshape(feats, [-1, grid_size[0], grid_size[1], num_anchors, nu])
grid_y = np.tile(np.reshape(np.arange(grid_size[0]), [-1, 1, 1, 1]), [1, grid_size[1], 1, 1])
grid_x = np.tile(np.reshape(np.arange(grid_size[1]), [1, -1, 1, 1]), [grid_size[0], 1, 1, 1])
grid = np.concatenate([grid_x, grid_y], axis=-1)
grid = np.array(grid, dtype=np.float32)
box_xy = (1 / (1 + np.exp(-predictions[..., :2])) + grid) / np.array(grid_size[::-1], dtype=np.float32)
box_wh = np.exp(predictions[..., 2:4]) * anchors_tensor / np.array(input_shape[::-1], dtype=np.float32)
box_confidence = 1 / (1 + np.exp(-predictions[..., 4:5]))
box_class_probs = 1 / (1 + np.exp(-predictions[..., 5:]))
return box_xy, box_wh, box_confidence, box_class_probs
def correct_boxes(box_xy, box_wh, input_shape, image_shape):
box_yx = box_xy[..., ::-1]
box_hw = box_wh[..., ::-1]
input_shape = np.array(input_shape, dtype=np.float32)
image_shape = np.array(image_shape, dtype=np.float32)
new_shape = np.around(image_shape * np.min(input_shape / image_shape))
offset = (input_shape - new_shape) / 2. / input_shape
scale = input_shape / new_shape
box_yx = (box_yx - offset) * scale
box_hw *= scale
box_mins = box_yx - (box_hw / 2.)
box_maxes = box_yx + (box_hw / 2.)
boxes = np.concatenate([
box_mins[..., 0:1],
box_mins[..., 1:2],
box_maxes[..., 0:1],
box_maxes[..., 1:2]
], axis=-1)
boxes *= np.concatenate([image_shape, image_shape], axis=-1)
return boxes
def boxes_and_scores(feats, anchors, classes_num, input_shape, image_shape):
box_xy, box_wh, box_confidence, box_class_probs = _get_feats(feats, anchors, classes_num, input_shape)
boxes = correct_boxes(box_xy, box_wh, input_shape, image_shape)
boxes = np.reshape(boxes, [-1, 4])
box_scores = box_confidence * box_class_probs
box_scores = np.reshape(box_scores, [-1, classes_num])
return boxes, box_scores
def draw_bbox(image, bboxes, classes):
num_classes = len(classes)
image_h, image_w, _ = image.shape
hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), colors))
random.seed(0)
random.shuffle(colors)
random.seed(None)
for i, bbox in enumerate(bboxes):
coor = np.array(bbox[:4], dtype=np.int32)
score = bbox[4]
class_ind = int(bbox[5])
bbox_color = colors[class_ind]
bbox_thick = int(0.6 * (image_h + image_w) / 600)
c1, c2 = (coor[0], coor[1]), (coor[2], coor[3])
cv2.rectangle(image, c1, c2, bbox_color, bbox_thick)
return image
def nms_boxes(boxes, scores):
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
areas = (x2 - x1 + 1) * (y2 - y1 + 1)
order = scores.argsort()[::-1]
keep = []
while order.size > 0:
i = order[0]
keep.append(i)
xx1 = np.maximum(x1[i], x1[order[1:]])
yy1 = np.maximum(y1[i], y1[order[1:]])
xx2 = np.minimum(x2[i], x2[order[1:]])
yy2 = np.minimum(y2[i], y2[order[1:]])
w1 = np.maximum(0.0, xx2 - xx1 + 1)
h1 = np.maximum(0.0, yy2 - yy1 + 1)
inter = w1 * h1
ovr = inter / (areas[i] + areas[order[1:]] - inter)
inds = np.where(ovr <= 0.55)[0] # threshold
order = order[inds + 1]
return keep
def draw_boxes(image, boxes, scores, classes):
for i, bbox in enumerate(boxes):
[top, left, bottom, right] = bbox
width, height = right - left, bottom - top
center_x, center_y = left + width * 0.5, top + height * 0.5
score, class_index = scores[i], classes[i]
label = '{}: {:.4f}'.format(class_names[class_index], score)
color = tuple([color / 255 for color in colors[class_index]])
cv2.rectangle(image, (int(left), int(top)), (int(right), int(bottom)), color, 2)
cv2.putText(image, label, (int(left), int(top) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
return image
def evaluate(yolo_outputs, image_shape, class_names, anchors):
score_thresh = 0.2
anchor_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
boxes = []
box_scores = []
    input_shape = np.array(np.shape(yolo_outputs[0])[1:3]) * 32  # stride-32 grid -> network input size (e.g., 13*32 = 416)
for i, output in enumerate(yolo_outputs):
boxes_i, box_scores_i = boxes_and_scores(output, anchors[anchor_mask[i]], len(class_names), input_shape, image_shape)
boxes.append(boxes_i)
box_scores.append(box_scores_i)
boxes = np.concatenate(boxes, axis=0)
box_scores = np.concatenate(box_scores, axis=0)
    box_classes = np.argmax(box_scores, axis=-1)
    box_class_scores = np.max(box_scores, axis=-1)
    keep = box_class_scores >= score_thresh  # keep detections whose best class score passes the threshold
    return boxes[keep], box_class_scores[keep], box_classes[keep]
def predict_from_camera():
    # Class list must match the compiled model; tf_yolov3_voc expects the 20 VOC class names.
    class_names = get_class('voc_classes.txt')
    # YOLOv3 anchors as a (9, 2) array; evaluate() selects them per scale via anchor_mask.
    anchor_list = [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326]
    anchors = np.array(anchor_list, dtype=np.float32).reshape(-1, 2)
    # Set up the DPU runner and its input/output buffers.
    dpu = overlay.runner
    input_tensors = dpu.get_input_tensors()
    output_tensors = dpu.get_output_tensors()
    shape_in = tuple(input_tensors[0].dims)
    input_data = [np.empty(shape_in, dtype=np.float32, order="C")]
    output_data = [np.empty(tuple(t.dims), dtype=np.float32, order="C") for t in output_tensors]
    video_capture = cv2.VideoCapture(0)
    while True:
        ret, frame = video_capture.read()
        if not ret:
            break
        image_shape = np.array(frame.shape[:2])
        image_data = pre_process(frame, (416, 416))
        # Run inference on the DPU.
        input_data[0][0, ...] = image_data.reshape(shape_in[1:])
        job_id = dpu.execute_async(input_data, output_data)
        dpu.wait(job_id)
        boxes, scores, classes = evaluate(output_data, image_shape, class_names, anchors)
        if boxes.size > 0:
            indices = nms_boxes(boxes, scores)
            boxes, scores, classes = boxes[indices], scores[indices], classes[indices]
            # Convert [top, left, bottom, right] boxes to the [x1, y1, x2, y2] order draw_bbox expects.
            draw_input = np.concatenate([boxes[:, [1, 0, 3, 2]],
                                         np.expand_dims(scores, axis=1),
                                         np.expand_dims(classes, axis=1)], axis=1)
            frame = draw_bbox(frame, draw_input, class_names)
        cv2.imshow('YOLOv3 Detection', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    video_capture.release()
    cv2.destroyAllWindows()
if __name__ == '__main__':
predict_from_camera()
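Building on the listing above, only a small post-processing step is needed to restrict the detections to people and use them to trigger a customer interaction, as described earlier. The helper below is a sketch of that idea: notify_customer_detected() is a hypothetical hook standing in for whatever the robot actually does (wake the display, greet the customer, open the ordering screen).
# Keep only "person" detections and use them to trigger a customer interaction.
# notify_customer_detected() is a placeholder; wire it to the real interaction code.
def filter_person_detections(boxes, scores, classes, class_names, min_score=0.5):
    person_id = class_names.index("person")
    keep = [i for i, c in enumerate(classes) if int(c) == person_id and scores[i] >= min_score]
    return boxes[keep], scores[keep], classes[keep]

def notify_customer_detected(num_people):
    print(f"Customer(s) detected: {num_people}")  # placeholder interaction hook

def handle_detections(boxes, scores, classes, class_names):
    p_boxes, p_scores, p_classes = filter_person_detections(boxes, scores, classes, class_names)
    if len(p_boxes) > 0:
        notify_customer_detected(len(p_boxes))
    return p_boxes, p_scores, p_classes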
10. User Interface
The integration of the Dine-O interface into our food delivery robot is a pivotal development that enhances the robot’s ability to interact with customers and manage orders efficiently. The Dine-O UI, available on GitHub, provides a robust platform for customer order management and interaction, which we have seamlessly integrated into our robot’s system.
10.1 Dine-O Interface Overview
- Purpose and Functionality:
The Dine-O interface is designed to facilitate customer interactions with a robotic system through a web-based graphical user interface (GUI). It allows customers to browse menus, place orders, customize their selections, and track order status in real-time.
The system is built with a focus on user experience, providing an intuitive interface that can be accessed from any modern web browser.
- Components:
Web-Based GUI: A user-friendly interface where customers can interact with the robot by selecting menu items and placing orders.
Backend Server: Manages data and communicates between the GUI and the robot’s control system. It processes orders and handles customer requests.
API Endpoints: Facilitates communication between the Dine-O interface and the robot, handling tasks such as order placement and status updates.
10.2 System Setup and Configuration:
- Deployment:
The Dine-O interface is deployed on a dedicated server that is accessible over the network. This server hosts the web application and manages interactions with the robot.
Network configurations ensure that the server and the robot can communicate effectively, using secure protocols to exchange data.
- Configuration:
The Dine-O server is configured with the necessary dependencies and services to run the web application smoothly. This includes setting up web server software, database management systems, and application frameworks.
Configuration files are adjusted to match the specific needs of the robot, including communication settings, data formats, and API endpoints.
10.3 Integration with the Robot’s Control System:
- Communication Protocols:
We established a communication protocol between the robot and the Dine-O server. This includes setting up RESTful APIs for sending and receiving data, such as order details and customer interactions.
Data is exchanged in JSON format to ensure compatibility and ease of integration with the robot’s existing software infrastructure (a minimal polling sketch follows this list).
- Data Handling:
The robot’s control software is adapted to handle incoming data from the Dine-O interface. This includes parsing order details, updating order statuses, and triggering appropriate responses based on customer interactions.
The robot is equipped with mechanisms to fetch and process order data in real-time, ensuring that customer requests are handled promptly and accurately.
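The exact endpoints depend on how the Dine-O server is deployed, so the snippet below is only an illustrative sketch of the polling loop described above: the server address, /api/orders path, status parameter, and id field are all assumptions, not the actual Dine-O API.
# Illustrative order-polling loop for the robot side (endpoint names are assumptions).
import time
import requests

SERVER = "http://<DINE_O_SERVER_IP>:5000"  # address of the Dine-O backend (placeholder)

def fetch_pending_orders():
    resp = requests.get(f"{SERVER}/api/orders", params={"status": "pending"}, timeout=5)
    resp.raise_for_status()
    return resp.json()  # expected: a list of order objects in JSON

def mark_in_progress(order_id):
    requests.post(f"{SERVER}/api/orders/{order_id}/status",
                  json={"status": "in_progress"}, timeout=5)

if __name__ == "__main__":
    while True:
        for order in fetch_pending_orders():
            print("New order received:", order)
            mark_in_progress(order["id"])  # "id" field assumed
        time.sleep(2)  # poll every two seconds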
10.4 Customer Interaction Integration:
- User Experience Enhancements:
- The robot’s interaction system is enhanced to display menus, accept orders, and provide status updates based on the information received from the Dine-O interface.
- The robot uses its onboard sensors and cameras to detect and track customers. This interaction is integrated with the Dine-O interface, allowing the robot to respond to customer presence and trigger relevant interactions.
- Interaction Flow:
- Menu Display: When a customer approaches, the robot displays the menu on its screen, allowing them to browse and select items.
- Order Placement: The robot receives the order details from the Dine-O and confirms the order with the customer.
- Status Updates: The robot provides real-time updates on the order status, such as preparation time and delivery status, based on data from the Dine-O.
10.5 Testing and Validation:
- Functionality Testing:
- Extensive testing ensures that the integration is functioning as expected. This includes verifying that orders are correctly transmitted from the Dine-O interface to the robot and that interactions are handled smoothly.
- Testing scenarios cover various use cases, such as placing orders, tracking status, and handling customer interactions.
- Performance Evaluation:
- We assess the performance of the integration by monitoring the robot’s ability to handle multiple orders and interact with customers efficiently.
- Performance metrics are collected, including response times, order accuracy, and interaction quality, to ensure that the system meets the desired standards.
10.6 Deployment and Operational Management:
- Deployment:
- After successful testing, the robot with the integrated Dine-O interface is deployed in a real-world environment, such as a restaurant or service area.
- Continuous monitoring is implemented to ensure that the system operates reliably and to address any issues that may arise.
- Operational Monitoring:
- The system is monitored for performance and reliability. Any issues are logged and analyzed to ensure smooth operation.
- Regular updates and maintenance are performed to keep the system up-to-date and to incorporate improvements based on user feedback.
10.7 Benefits of Integration:
- Enhanced Customer Experience:
- The integration of the Dine-O interface significantly improves the customer experience by providing an intuitive and efficient way to place and track orders.
- Customers can interact with the robot seamlessly, making the ordering process more engaging and user-friendly.
- Efficient Order Management:
- The Dine-O interface streamlines order management, allowing the robot to handle orders and interactions efficiently.
- The system supports real-time updates and accurate order processing, enhancing overall operational efficiency.
- Improved Interaction Capabilities:
The robot can interact with customers in a more personalized and responsive manner, thanks to the data and functionalities provided by the Dine-O interface.
Dine-O represents a significant advancement in restaurant automation, seamlessly integrating state-of-the-art technology to address key industry challenges. By automating essential tasks such as order taking and food delivery, Dine-O not only alleviates labor shortages but also enhances operational efficiency and customer satisfaction. Its interactive display and personalized recommendations create a unique and engaging dining experience, while its data-driven insights enable restaurants to optimize their operations. With its self-charging capabilities and advanced AI functionalities, Dine-O stands poised to revolutionize the restaurant industry, offering a versatile solution that balances innovation with practical utility. As the restaurant industry continues to evolve, Dine-O's contributions underscore the potential of robotics to drive progress and improve service standards.