In modern life, we are seeing more and more interaction with Autonomous Mobile Robots, not just in factories and specialist facilities but in everyday life. Recently I was brought lunch by a robot in a restaurant.
When we interact with Autonomous Mobile Robots, they need to be designed so that they can navigate their environment safely. They do this by using a range of sensors to safely detect and interact with people and their surroundings.
This got me thinking about how I could create a simple demo which would demonstrate some of the principles used. So in this project we are going to use an AMD Kria™ KD240 Robotics Starter Kit to control a robot arm, coupled with a LIDAR camera which stops the arm should anyone get too close to its area of operation.
This is a simple demonstration and for full development we would need to consider the complete safety case and certification.
In this project, we are going to be using the AMD PYNQ™ framework; however, most of the motor control is going to be implemented within the programmable logic. This separates the PS and the PL: the PL will receive commands over an RS232 link to set the PWM positions of the motors.
The PS will control the LIDAR camera and detect if someone gets close to it. If someone or something does get close to the sensor, and hence the robot, a signal is asserted to the programmable logic to stop the movement of the arm. Once the obstruction is no longer present, the movement of the arm continues.
Getting Started
The first thing we need to do is download the Ubuntu headless SD Card image for the AMD Kria KD240 kit.
Once the image has been downloaded, use BalenaEtcher to write the image to the SD Card.
At the completion of the disk imaging, insert the SD Card into the KD240, connect a USB Serial Cable and the Ethernet. Power on the board and keep an eye on the boot process in the terminal window (115200:N:1)
Log in with the username / password ubuntu / ubuntu.
The next step is to download and install PYNQ for the Kria KD240. To get the instructions for this we can go to the PYNQ home page and follow the instructions for the Kria Installation.
In the terminal, issue the commands below and PYNQ will be installed. This might take a few minutes.
git clone https://github.com/Xilinx/Kria-PYNQ.git
cd Kria-PYNQ
sudo bash install.sh -b KD240
With PYNQ installed on the Kria KD240, the next step is to create the hardware design for Vivado.
This will use some IP that you might be familiar with: the AXIS UART, protocol conversion and PWM generation.
The first step in the journey is to create a new project in Vivado targeting the KD240
Make sure when we select the board that we also define the Kria connector connections.
With the project created the next step is to create a block diagram.
Add in the MPSoC processing system
Run the block automation
Click on the add sources option
Add in the four attached files
These should show under the sources tab; one of them might be selected as the top level. Before we can add these into the block diagram, we need to generate the HDL wrapper for the block diagram and set that as the top level.
Select the block diagram and choose Create HDL Wrapper.
Let the top level be managed by Vivado, and once it is generated, set it as the top-level file.
With the top level set, we need to create the design as we require. Add the VHDL modules to the block diagram; to support all axes on the arm we need six PWM implementations.
The final project should look like the image below; there is a TCL file attached at the end to recreate this block diagram.
The only connection to the PS comes from the AXI GPIO, which is set for a single bit and connected to the EOP_Reset port on the protocol conversion block. The FIFO is implemented to buffer commands for when the robot is halted due to proximity detected by the LIDAR.
The UART connection is separate from the PS UART; the TX, RX and PWM signals are all routed to the Pmod connector.
The following constraints are required.
set_property PACKAGE_PIN C10 [get_ports rx_0]
set_property PACKAGE_PIN C13 [get_ports tx_0]
set_property PACKAGE_PIN E9 [get_ports pwm_op_0]
set_property PACKAGE_PIN F13 [get_ports pwm_op_1]
set_property PACKAGE_PIN E12 [get_ports pwm_op_2]
set_property PACKAGE_PIN D10 [get_ports pwm_op_3]
set_property PACKAGE_PIN D9 [get_ports pwm_op_4]
set_property PACKAGE_PIN D13 [get_ports pwm_op_5]
set_property IOSTANDARD LVCMOS33 [get_ports pwm_op_0]
set_property IOSTANDARD LVCMOS33 [get_ports pwm_op_1]
set_property IOSTANDARD LVCMOS33 [get_ports pwm_op_2]
set_property IOSTANDARD LVCMOS33 [get_ports pwm_op_3]
set_property IOSTANDARD LVCMOS33 [get_ports pwm_op_4]
set_property IOSTANDARD LVCMOS33 [get_ports pwm_op_5]
set_property IOSTANDARD LVCMOS33 [get_ports rx_0]
set_property IOSTANDARD LVCMOS33 [get_ports tx_0]
We can now generate the bit and HWH files needed by PYNQ.
Testing
Before we start working with PYNQ, we can set the KD240 boot mode to JTAG by fitting the JTAG jumper and shorting it.
We can then program the KD240 using Vivado and send in commands over the RS232 using the USB to RS232 (logic Level) cable.
This ensures we can control the PWM width by sending commands over the PL UART; it also enables us to develop the PC application which will later control the UART.
For this application, I created a simple Jupyter notebook on my development PC. If you do not have Jupyter installed on your PC, you can install it using the following command in a command prompt.
pip install jupyterlab
Once installed, you can launch Jupyter from a command prompt.
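For example, JupyterLab can be started from the command prompt with:
jupyter lab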
My plan is to sweep a joint on the robot from the minimum to the maximum position.
This requires the PWM pulse width to change from 1 ms to 2 ms; the nominal position is at 1.5 ms.
The code to do this in Jupyter Lab is therefore:
import serial
import time

# Open the USB-to-RS232 port connected to the PL UART (8 data bits, no parity, two stop bits)
ser = serial.Serial("COM60", 115200, bytesize=serial.EIGHTBITS, timeout=1, parity=serial.PARITY_NONE, rtscts=0, stopbits=serial.STOPBITS_TWO)

number = 100000
for i in range(110):
    hex_string = format(number, 'x')
    if len(hex_string) % 2:
        hex_string = '0' + hex_string  # Pad with leading zero if odd length
    pwm = bytes.fromhex(hex_string)
    hdr = bytearray(b'\x09\x00\x00\x00\x00\x01\x00')
    msg = hdr + pwm
    ser.write(msg)
    number = number + 1000
    time.sleep(0.1)

ser.close()
print('Done')
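If you want to set a single position rather than sweeping, the frame building can be wrapped in a small helper. The sketch below is a convenience only: it assumes the PWM counter in the PL runs at 100 MHz (so 100000 counts is roughly 1 ms, consistent with the sweep above) and reuses the same 7-byte header for the same axis; adjust both to match your design.

# Minimal sketch: send one position command over the PL UART.
# Assumes a 100 MHz PWM clock (100000 counts ~ 1 ms) and the same header/axis
# as the sweep above; both are assumptions, not confirmed values.
HDR = b'\x09\x00\x00\x00\x00\x01\x00'

def set_pulse(ser, pulse_ms, clk_hz=100_000_000):
    count = int(pulse_ms * 1e-3 * clk_hz)      # pulse width in clock counts
    ser.write(HDR + count.to_bytes(3, 'big'))  # same 10-byte frame as the sweep

# For example, move the joint to its nominal 1.5 ms position:
# set_pulse(ser, 1.5)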
When running on the hardware, we can observe the output on a scope and see the changing of the pulse position.
PYNQ Application
With the hardware design available, we are going to create the PYNQ application. The first thing to do in Jupyter Lab is to create a new folder called LIDAR and upload the bit and HWH files to it. Ensure they are both named the same.
With the HWH and bit file uploaded, the next step is to open a terminal and install the Python packages needed for the LIDAR camera.
python -m pip install TauLidarCamera
This should install successfully.
The next step is to create a new Jupyter notebook.
This notebook is going to do the following things:
- Import the packages required
- Download the overlay
- Map the AXI GPIO bit zero from the overlay
- Create a set up function which configures the Lidar camera
- Define a button widget
- Create a function which captures and processes a frame
- Start a thread for the capture function
The capture function and thread ensure that the image received by the camera is displayed as near real-time video in the Jupyter notebook.
This means we can visualise the LIDAR image as objects get close to the camera and robot.
How we detect if there is something close in this example is pretty simple. The LIDAR return is false-coloured by depth towards the camera: red / purple means it is very close, while black means it is far away.
To detect the proximity of anything within the scene, the received frame is converted to HSV colour space and a binary mask is created of any purple within the image, which denotes a close object.
If the sum of the mask is not 0, then there is a purple pixel in the scene and the arm is stopped from moving. Once there is no purple, the arm is re-enabled.
The complete code for the application is below
Importing
import argparse
import numpy as np
import cv2
import matplotlib.pyplot as plt
import time
import ipywidgets as widgets
import threading
from IPython.display import display, Image
from TauLidarCommon.frame import FrameType
from TauLidarCamera.camera import Camera
from pynq import Overlay
Download the Overlay and configure the AXI GPIO
# Load the overlay; the bitstream name here is an example - use the name of the bit/HWH files you uploaded
ol = Overlay('lidar.bit')
error = ol.axi_gpio_0[0]
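As a quick sanity check (a minimal sketch, assuming the GPIO bit behaves as it does in the capture loop later), the stop signal can be toggled manually to confirm the arm halts and then resumes:

import time

# Assert the stop signal for a couple of seconds, then release it; commands
# received while the signal is high should be held in the FIFO and executed
# once it is released
error.on()
time.sleep(2)
error.off()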
Set up the serial port for the camera
def setup(serialPort=None):
    port = None
    camera = None

    # if no serial port is specified, scan for available Tau Camera devices
    if serialPort is None:
        ports = Camera.scan()
        if len(ports) > 0:
            port = ports[0]
    else:
        port = serialPort

    if port is not None:
        Camera.setRange(0, 4500)
        camera = Camera.open(port)
        camera.setModulationChannel(0)
        camera.setIntegrationTime3d(0, 1000)
        camera.setMinimalAmplitude(0, 10)

        cameraInfo = camera.info()
        print("\nToF camera opened successfully:")
        print("  model: %s" % cameraInfo.model)
        print("  firmware: %s" % cameraInfo.firmware)
        print("  uid: %s" % cameraInfo.uid)
        print("  resolution: %s" % cameraInfo.resolution)
        print("  port: %s" % cameraInfo.port)

    return camera
Run the camera detection example
camera = setup("/dev/ttyACM0")
# Stop button
# ================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger', # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square' # (FontAwesome names without the `fa-` prefix)
)
# Display function
# ================
def view(button):
    display_handle = display(None, display_id=True)
    i = 0
    while True:
        frame = camera.readFrame(FrameType.DISTANCE)

        if frame:
            mat_depth_rgb = np.frombuffer(frame.data_depth_rgb, dtype=np.uint16, count=-1, offset=0).reshape(frame.height, frame.width, 3)
            mat_depth_rgb = mat_depth_rgb.astype(np.uint8)

            hsv_image = cv2.cvtColor(mat_depth_rgb, cv2.COLOR_BGR2HSV)

            # Define the range of purple color in HSV
            lower_purple = np.array([125, 50, 50])   # Adjust these values based on your needs
            upper_purple = np.array([150, 255, 255]) # Adjust these values based on your needs

            # Threshold the HSV image to get only purple colors
            mask = cv2.inRange(hsv_image, lower_purple, upper_purple)

            # Check if purple is present in the image and stop / restart the arm
            if np.sum(mask) > 0:
                error.on()
            else:
                error.off()

            # Upscaling the image
            upscale = 4
            img = cv2.resize(mat_depth_rgb, (frame.width*upscale, frame.height*upscale))

            _, result = cv2.imencode('.jpeg', img)
            #plt.imshow(img)
            #cv2.imshow('Depth Map', img)
            display_handle.update(Image(data=result.tobytes()))

        if stopButton.value == True:
            camera.close()
            #display_handle.update(None)
            break
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()
When this is running, we will see live video from the camera in the notebook.
With all of the elements put together, the next step is to connect the robot arm to the Pmod and run the test.
Testing
The video below shows the live testing of the application, and it works well. Even the slightest pixel indicating proximity stops the robot arm.
Wrap Up
This project shows a demonstration of techniques which can be used to allow autonomous robots to operate safely in their environment.
Sponsored by AMD