In the heart of agricultural innovation lies Clibot, a robotic solution designed to transform the way farmers manage their crops here in Africa. Powered by a robust motor system, Clibot seamlessly blends manual control with autonomous capabilities, enabling farmers to tailor its operations to their specific needs. Equipped with an array of advanced sensors and a high-resolution camera, Clibot continuously gathers valuable data about the environment and crop health. This data stream is collected and stored in a real-time database, creating a comprehensive record of the field's conditions.
1. The Hacked Hoverboard
Clibot is built upon a modified hoverboard. We used "hacking" techniques to customize the hoverboard's functionalities for our specific needs. This initial phase involved modifying the hoverboard's control system, sensors, and motors to achieve the desired level of autonomy for Clibot.
This document serves as a starting point for your Clibot project. Further sections can detail the specific methods used to achieve these functionalities.
Disclaimer: Hoverboard modification can be risky and may void the warranty. It's crucial to prioritize safety throughout the entire process. Consult a qualified technician for complex modifications, especially those involving high voltages. Always exercise caution when working with a modified hoverboard.
Online Resources
Several online resources provide information on hoverboard modifications. It's important to choose resources that emphasize safety and responsible practices. A good starting point is the hoverboard-firmware-hack-FOC repository (https://github.com/EmanuelFeru/hoverboard-firmware-hack-FOC), which is the firmware we use in this project.
Software Setup for Motor Control
After successfully installing the firmware on the hoverboard, you can test whether the motors respond as expected. Remember, we modified the firmware to allow you to control the hoverboard motors in your own way.
To verify successful firmware installation, navigate to the downloaded FOC repository, open the Arduino folder, and open the file named hoverserial.ino. The file looks like this:
// *******************************************************************
// Arduino Nano 5V example code
// for https://github.com/EmanuelFeru/hoverboard-firmware-hack-FOC
//
// Copyright (C) 2019-2020 Emanuel FERU <aerdronix@gmail.com>
//
// *******************************************************************
// INFO:
// • This sketch uses the SoftwareSerial interface to communicate and send commands to the hoverboard
// • The built-in (HW) Serial interface is used for debugging and visualization. In case the debugging is not needed,
// it is recommended to use the built-in Serial interface for full speed performance.
// • The data packaging includes a Start Frame, checksum, and re-synchronization capability for reliable communication
//
// The code starts with zero speed and ramps towards +SPEED_MAX_TEST, then back towards -SPEED_MAX_TEST, changing by SPEED_STEP at each interval
//
// CONFIGURATION on the hoverboard side in config.h:
// • Option 1: Serial on Right Sensor cable (short wired cable) - recommended, since the USART3 pins are 5V tolerant.
// #define CONTROL_SERIAL_USART3
// #define FEEDBACK_SERIAL_USART3
// // #define DEBUG_SERIAL_USART3
// • Option 2: Serial on Left Sensor cable (long wired cable) - use only with 3.3V devices! The USART2 pins are not 5V tolerant!
// #define CONTROL_SERIAL_USART2
// #define FEEDBACK_SERIAL_USART2
// // #define DEBUG_SERIAL_USART2
// *******************************************************************
// ########################## DEFINES ##########################
#define HOVER_SERIAL_BAUD 115200 // [-] Baud rate for HoverSerial (used to communicate with the hoverboard)
#define SERIAL_BAUD 115200 // [-] Baud rate for built-in Serial (used for the Serial Monitor)
#define START_FRAME 0xABCD // [-] Start frame definition for reliable serial communication
#define TIME_SEND 100 // [ms] Sending time interval
#define SPEED_MAX_TEST 300 // [-] Maximum speed for testing
#define SPEED_STEP 20 // [-] Speed step
// #define DEBUG_RX // [-] Debug received data. Prints all bytes to serial (comment-out to disable)
#define LED_BUILTIN 13
#include <SoftwareSerial.h>
SoftwareSerial HoverSerial(2,4); // RX, TX
// Global variables
uint8_t idx = 0; // Index for new data pointer
uint16_t bufStartFrame; // Buffer Start Frame
byte *p; // Pointer declaration for the new received data
byte incomingByte;
byte incomingBytePrev;
typedef struct{
uint16_t start;
int16_t steer;
int16_t speed;
uint16_t checksum;
} SerialCommand;
SerialCommand Command;
typedef struct{
uint16_t start;
int16_t cmd1;
int16_t cmd2;
int16_t speedR_meas;
int16_t speedL_meas;
int16_t batVoltage;
int16_t boardTemp;
uint16_t cmdLed;
uint16_t checksum;
} SerialFeedback;
SerialFeedback Feedback;
SerialFeedback NewFeedback;
// ########################## SETUP ##########################
void setup()
{
Serial.begin(SERIAL_BAUD);
Serial.println("Hoverboard Serial v1.0");
HoverSerial.begin(HOVER_SERIAL_BAUD);
pinMode(LED_BUILTIN, OUTPUT);
}
// ########################## SEND ##########################
void Send(int16_t uSteer, int16_t uSpeed)
{
// Create command
Command.start = (uint16_t)START_FRAME;
Command.steer = (int16_t)uSteer;
Command.speed = (int16_t)uSpeed;
Command.checksum = (uint16_t)(Command.start ^ Command.steer ^ Command.speed);
// Write to Serial
HoverSerial.write((uint8_t *) &Command, sizeof(Command));
}
// ########################## RECEIVE ##########################
void Receive()
{
// Check for new data availability in the Serial buffer
if (HoverSerial.available()) {
incomingByte = HoverSerial.read(); // Read the incoming byte
bufStartFrame = ((uint16_t)(incomingByte) << 8) | incomingBytePrev; // Construct the start frame
}
else {
return;
}
// If DEBUG_RX is defined print all incoming bytes
#ifdef DEBUG_RX
Serial.print(incomingByte);
return;
#endif
// Copy received data
if (bufStartFrame == START_FRAME) { // Initialize if new data is detected
p = (byte *)&NewFeedback;
*p++ = incomingBytePrev;
*p++ = incomingByte;
idx = 2;
} else if (idx >= 2 && idx < sizeof(SerialFeedback)) { // Save the new received data
*p++ = incomingByte;
idx++;
}
// Check if we reached the end of the package
if (idx == sizeof(SerialFeedback)) {
uint16_t checksum;
checksum = (uint16_t)(NewFeedback.start ^ NewFeedback.cmd1 ^ NewFeedback.cmd2 ^ NewFeedback.speedR_meas ^ NewFeedback.speedL_meas
^ NewFeedback.batVoltage ^ NewFeedback.boardTemp ^ NewFeedback.cmdLed);
// Check validity of the new data
if (NewFeedback.start == START_FRAME && checksum == NewFeedback.checksum) {
// Copy the new data
memcpy(&Feedback, &NewFeedback, sizeof(SerialFeedback));
// Print data to built-in Serial
Serial.print("1: "); Serial.print(Feedback.cmd1);
Serial.print(" 2: "); Serial.print(Feedback.cmd2);
Serial.print(" 3: "); Serial.print(Feedback.speedR_meas);
Serial.print(" 4: "); Serial.print(Feedback.speedL_meas);
Serial.print(" 5: "); Serial.print(Feedback.batVoltage);
Serial.print(" 6: "); Serial.print(Feedback.boardTemp);
Serial.print(" 7: "); Serial.println(Feedback.cmdLed);
} else {
Serial.println("Non-valid data skipped");
}
idx = 0; // Reset the index (prevents re-entering this branch on the next cycle)
}
// Update previous states
incomingBytePrev = incomingByte;
}
// ########################## LOOP ##########################
unsigned long iTimeSend = 0;
int iTest = 0;
int iStep = SPEED_STEP;
void loop(void)
{
unsigned long timeNow = millis();
// Check for new received data
Receive();
// Send commands
if (iTimeSend > timeNow) return;
iTimeSend = timeNow + TIME_SEND;
Send(0, iTest);
// Calculate test command signal
iTest += iStep;
// invert step if reaching limit
if (iTest >= SPEED_MAX_TEST || iTest <= -SPEED_MAX_TEST)
iStep = -iStep;
// Blink the LED
digitalWrite(LED_BUILTIN, (timeNow%2000)<1000);
}
// ########################## END ##########################
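For reference, the command frame the sketch above sends (start frame 0xABCD, steer, speed, and an XOR checksum, all little-endian 16-bit values) can be reproduced from a PC in a few lines of Python. This is only a sketch for experimenting: it assumes pyserial is installed and that the hoverboard's UART is exposed through a USB-serial adapter on /dev/ttyUSB0, which is a hypothetical port name you should adjust to your setup.
import struct
import time

START_FRAME = 0xABCD  # same start frame as in hoverserial.ino

def build_command(steer: int, speed: int) -> bytes:
    # XOR checksum over the three 16-bit fields, exactly as the firmware computes it
    checksum = (START_FRAME ^ (steer & 0xFFFF) ^ (speed & 0xFFFF)) & 0xFFFF
    # Pack as little-endian: uint16 start, int16 steer, int16 speed, uint16 checksum
    return struct.pack('<HhhH', START_FRAME, steer, speed, checksum)

if __name__ == '__main__':
    import serial  # pip install pyserial; the port name below is an assumption
    with serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1) as port:
        for speed in (0, 50, 100, 50, 0):
            port.write(build_command(0, speed))
            time.sleep(0.1)  # matches TIME_SEND (100 ms) in the sketch above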
Then, connect your ESP32 board (any ESP32 series that supports micro-ROS) and test whether the motor responds to the code. If you haven't installed the Arduino IDE yet:
- Download the latest version from the official Arduino website: https://support.arduino.cc/hc/en-us/articles/360019833020-Download-and-install-Arduino-IDE.
- Follow the installation instructions for your operating system.
Launching Arduino IDE and Uploading Code:
- Open the Arduino IDE software.
Boards Menu:
- Navigate to the "Tools" menu and select the "Board" submenu.
- Choose the specific ESP32 board model you're using from the list.
If your ESP32 board isn't listed, you'll need to install the necessary support first:
- Install the ESP32 board support package (via the Boards Manager) for ESP32 support.
- Install a SoftwareSerial library compatible with the ESP32 for serial communication between the ESP32 and the hoverboard.
Ports Menu:
- Navigate to the "Tools" menu and select the "Port" submenu.
- Choose the COM port that corresponds to your connected ESP32 board. Refer to your device manager or system settings if you're unsure.
Uploading the Code:
- Click the "Upload" button (typically an upward arrow icon) in the Arduino IDE.
- Putting ESP32 into Boot Mode: While uploading, you may need to press a specific button on your ESP32 to put it into boot mode. Refer to your board's documentation for exact instructions.
Code Execution:
- Once the upload is complete, observe your motor's behavior. It should respond according to the uploaded code's instructions, typically moving back and forth.
This is how it should be wired
Kria KR 260 Setup
To get started with the ROS 2 installation, we first need to set up our Kria KR260 Robotics Starter Kit. The AMD Kria™ KR260 Robotics Starter Kit is the ideal platform to evaluate applications targeted to robotics, machine vision, and industrial communications & control. Click here to set up the AMD Kria™ KR260 Robotics Starter Kit.
PYNQ Installation for Clibot
Since Clibot relies on computer vision for its autonomous navigation and other functionalities, we'll be installing the PYNQ-DPU extension directly. PYNQ-DPU leverages the Deep Learning Processing Unit (DPU) available on certain Xilinx devices to accelerate computer vision tasks. By installing PYNQ-DPU, you'll also get the standard PYNQ library as a dependency, providing the foundation for working with your Kria KR 260 Robotics Starter Kit.
Installation:
Refer to the official PYNQ repository (amd/Kria-RoboticsAI) for detailed instructions on installing PYNQ-DPU on the Kria KR 260 Robotics Starter Kit.
This guide covers the installation process and also includes information on setting up ROS2.
Disclaimer: The virtual environment you set up during the PYNQ and ROS 2 installation should also be used for micro-ROS.
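Before moving on, it can be worth sanity-checking the PYNQ-DPU installation from that same virtual environment (run as root). A minimal sketch, assuming the dpu.bit overlay and the YOLOv3 model file used later in this guide are present:
from pynq_dpu import DpuOverlay

# Load the DPU bitstream and the compiled YOLOv3 model (paths as used later in this guide)
overlay = DpuOverlay("dpu.bit")
overlay.load_model('/root/jupyter_notebooks/pynq-dpu/tf_yolov3_voc.xmodel')

# Printing the tensor shapes confirms the DPU runner is reachable
dpu = overlay.runner
print("input tensors :", [tuple(t.dims) for t in dpu.get_input_tensors()])
print("output tensors:", [tuple(t.dims) for t in dpu.get_output_tensors()])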
Micro-ROS Installation
After installing PYNQ, you'll need to set up micro-ROS on your ESP32 to establish serial communication between the Kria KR260 and the ESP32. Micro-ROS is a lightweight implementation of ROS 2 designed for resource-constrained devices like the ESP32. For instructions on installing micro-ROS for the ESP32, you can refer to this YouTube video: Properly Create ROS2-Arduino IDE (ESP32) Interface - Solution Based on the microROS2 Library.
Creating a ROS 2 Package for Clibot
Now that you've set up PYNQ, ROS 2, and micro-ROS, let's create a custom ROS 2 package to send instructions to your Clibot. (Not familiar with ROS 2? Click here to get started.) Before creating a package, ensure your ROS 2 environment is properly set up. This includes having ROS 2 installed and sourcing the ROS 2 setup script.
source /opt/ros/<distro>/setup.bash # replace <distro> with your ROS 2 distro; mine is humble, so it becomes:
source /opt/ros/humble/setup.bash
If you don't already have a workspace, create one. A workspace is a directory where you can build and install your packages.
mkdir -p ~/clibot/src # create a workspace directory named clibot
cd ~/clibot # navigate into the workspace
Navigate to the src directory of your workspace and use the ros2 pkg create command to create a new package. For example, to create a package named clibot_pkg with dependencies on rclpy and std_msgs:
cd ~/clibot/src
ros2 pkg create --build-type ament_python clibot_pkg --dependencies rclpy std_msgs
This command will create a new directory named clibot_pkg with a basic package structure. Then return to the root of your workspace and build your package using colcon build.
cd ~/clibot
colcon build
After building your package, source the setup script for your workspace to overlay it on top of your existing environment.
source ~/clibot/install/setup.bash
Creating ROS 2 Nodes
Now let's create a simple ROS 2 node in your package. Create a new file inside the inner clibot_pkg directory (the package's Python module folder). For example, create a file named webcam.py.
"""
This script sets up a ROS 2 node for publishing video frames captured from a camera. The node uses OpenCV to
capture frames from a specified video source and publishes these frames as sensor_msgs/Image messages on
the 'image_raw' topic.
General functionalities:
1. Initialize the ROS 2 node and set up the publisher for the 'image_raw' topic.
2. Capture video frames from the camera using OpenCV.
3. Convert the captured frames to sensor_msgs/Image format using CvBridge.
4. Publish the converted image messages at a specified frame rate.
5. Handle frame capture errors by logging warnings.
Additional features:
- The video capture resolution is set to 640x480 pixels.
"""
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
class CameraPublisherNode(Node):
def __init__(self):
super().__init__('camera_publisher')
# Create a publisher for the 'image_raw' topic
self.publisher_ = self.create_publisher(Image, 'image_raw', 10)
# Create a timer to publish frames at 30 frames per second
self.timer_ = self.create_timer(1.0 / 30, self.publish_frames)
# Initialize CvBridge for converting OpenCV images to ROS messages
self.cv_bridge = CvBridge()
# Set up video capture from the default camera (adjust path to video file if needed)
self.video_capture = cv2.VideoCapture(0)
# Set the video capture resolution to 640x480 pixels
self.video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
self.video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
def publish_frames(self):
# Capture a frame from the camera
ret, frame = self.video_capture.read()
if ret:
            # Convert the frame from BGR (OpenCV's default) to RGB format
            frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            # Convert the frame to sensor_msgs.Image format
            image_msg = self.cv_bridge.cv2_to_imgmsg(frame_rgb, encoding='rgb8')
# Publish the image message
self.publisher_.publish(image_msg)
else:
# Log a warning if frame capture fails
self.get_logger().warn('Failed to read frame from camera.')
def main(args=None):
# Initialize the ROS 2 Python client library
rclpy.init(args=args)
# Create an instance of the CameraPublisherNode
camera_publisher = CameraPublisherNode()
# Spin the node to keep it alive and processing callbacks
rclpy.spin(camera_publisher)
# Destroy the node explicitly
camera_publisher.destroy_node()
# Shutdown the ROS 2 Python client library
rclpy.shutdown()
if __name__ == '__main__':
main()
The above script sets up a ROS 2 node for publishing video frames captured from a camera. The node uses OpenCV to capture frames from a specified video source and publishes these frames as sensor_msgs/Image messages on the 'image_raw' topic.
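Before wiring up object detection, it can help to confirm that frames are actually arriving on 'image_raw'. The following is a minimal viewer sketch (it assumes a desktop session or X forwarding is available for the OpenCV window); running `ros2 topic hz /image_raw` is an alternative if you only want to check the publishing rate.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class ImageViewer(Node):
    def __init__(self):
        super().__init__('image_viewer')
        self.bridge = CvBridge()
        # Subscribe to the same topic the camera node publishes on
        self.create_subscription(Image, 'image_raw', self.callback, 10)

    def callback(self, msg):
        # Convert the ROS image back to an OpenCV frame and display it
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('image_raw', frame)
        cv2.waitKey(1)

def main(args=None):
    rclpy.init(args=args)
    node = ImageViewer()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()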
Now, we will create another node that subscribes to the 'image_raw' topic being published. This node is for object detection using the YOLOv3 model.
"""
This script sets up a ROS 2 node for object detection using the YOLOv3 model deployed on a Kria KR260. The node
subscribes to the image topic, processes the image using the DPU (Deep Learning Processing Unit), and publishes
movement commands for the motor based on object detection results.
General functionalities:
1. Initialize the ROS 2 node and set up image subscription.
2. Load the DPU overlay and the YOLOv3 model for object detection.
3. Pre-process the incoming images to be compatible with the YOLOv3 model input requirements.
4. Execute the YOLOv3 model on the DPU to detect objects in the image.
5. Post-process the YOLOv3 outputs to get final bounding boxes, scores, and classes of detected objects.
6. Draw bounding boxes on the image using `cvzone` and annotate them with class labels and confidence scores.
7. Publish movement commands to TurtleSim based on the presence or absence of detected objects.
8. Display the processed image with bounding boxes in a window.
Additional features:
- Draw a line from the center of each detected bounding box to the top-left corner of the image.
- Use `cvzone` format for bounding boxes to ensure a consistent and informative display.
"""
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
import cv2
from cv_bridge import CvBridge
import numpy as np
import colorsys
import random
import sys
sys.path.append('/usr/lib/python3.10/site-packages')
sys.path.append('/usr/local/share/pynq-venv/lib/python3.10/site-packages')
from pynq_dpu import DpuOverlay
import matplotlib.pyplot as plt
import cvzone
import math
class YOLODPUNode(Node):
def __init__(self):
super().__init__('yolo_dpu_node')
self.subscription = self.create_subscription(
Image,
'image_raw',
self.listener_callback,
10)
self.bridge = CvBridge()
# Load DPU overlay and model
self.overlay = DpuOverlay("dpu.bit")
self.overlay.load_model('/root/jupyter_notebooks/pynq-dpu/tf_yolov3_voc.xmodel')
# Anchors and class names
anchor_list = [10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326]
anchor_float = [float(x) for x in anchor_list]
self.anchors = np.array(anchor_float).reshape(-1, 2)
classes_path = '/root/jupyter_notebooks/pynq-dpu/img/voc_classes.txt'
self.class_names = self.get_class(classes_path)
# Prepare colors for bounding boxes
num_classes = len(self.class_names)
hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
self.colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), self.colors))
random.seed(0)
random.shuffle(self.colors)
random.seed(None)
# Initialize DPU runner
self.dpu = self.overlay.runner
inputTensors = self.dpu.get_input_tensors()
outputTensors = self.dpu.get_output_tensors()
self.shapeIn = tuple(inputTensors[0].dims)
self.shapeOut0 = (tuple(outputTensors[0].dims))
self.shapeOut1 = (tuple(outputTensors[1].dims))
self.shapeOut2 = (tuple(outputTensors[2].dims))
self.outputSize0 = int(outputTensors[0].get_data_size() / self.shapeIn[0])
self.outputSize1 = int(outputTensors[1].get_data_size() / self.shapeIn[0])
self.outputSize2 = int(outputTensors[2].get_data_size() / self.shapeIn[0])
self.input_data = [np.empty(self.shapeIn, dtype=np.float32, order="C")]
self.output_data = [np.empty(self.shapeOut0, dtype=np.float32, order="C"),
np.empty(self.shapeOut1, dtype=np.float32, order="C"),
np.empty(self.shapeOut2, dtype=np.float32, order="C")]
self.image = self.input_data[0]
# Publisher for controlling TurtleSim
self.cmd_vel_publisher = self.create_publisher(Twist, 'turtle1/cmd_vel', 10)
def get_class(self, classes_path):
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def letterbox_image(self, image, size):
ih, iw, _ = image.shape
w, h = size
scale = min(w/iw, h/ih)
nw = int(iw*scale)
nh = int(ih*scale)
image = cv2.resize(image, (nw,nh), interpolation=cv2.INTER_LINEAR)
new_image = np.ones((h,w,3), np.uint8) * 128
h_start = (h-nh)//2
w_start = (w-nw)//2
new_image[h_start:h_start+nh, w_start:w_start+nw, :] = image
return new_image
def pre_process(self, image, model_image_size):
image = image[...,::-1]
image_h, image_w, _ = image.shape
if model_image_size != (None, None):
assert model_image_size[0]%32 == 0, 'Multiples of 32 required'
assert model_image_size[1]%32 == 0, 'Multiples of 32 required'
boxed_image = self.letterbox_image(image, tuple(reversed(model_image_size)))
else:
new_image_size = (image_w - (image_w % 32), image_h - (image_h % 32))
boxed_image = self.letterbox_image(image, new_image_size)
image_data = np.array(boxed_image, dtype='float32')
image_data /= 255.
image_data = np.expand_dims(image_data, 0)
return image_data
def _get_feats(self, feats, anchors, num_classes, input_shape):
num_anchors = len(anchors)
anchors_tensor = np.reshape(np.array(anchors, dtype=np.float32), [1, 1, 1, num_anchors, 2])
grid_size = np.shape(feats)[1:3]
nu = num_classes + 5
predictions = np.reshape(feats, [-1, grid_size[0], grid_size[1], num_anchors, nu])
grid_y = np.tile(np.reshape(np.arange(grid_size[0]), [-1, 1, 1, 1]), [1, grid_size[1], 1, 1])
grid_x = np.tile(np.reshape(np.arange(grid_size[1]), [1, -1, 1, 1]), [grid_size[0], 1, 1, 1])
grid = np.concatenate([grid_x, grid_y], axis = -1)
grid = np.array(grid, dtype=np.float32)
box_xy = (1/(1+np.exp(-predictions[..., :2])) + grid) / np.array(grid_size[::-1], dtype=np.float32)
box_wh = np.exp(predictions[..., 2:4]) * anchors_tensor / np.array(input_shape[::-1], dtype=np.float32)
box_confidence = 1/(1+np.exp(-predictions[..., 4:5]))
box_class_probs = 1/(1+np.exp(-predictions[..., 5:]))
return box_xy, box_wh, box_confidence, box_class_probs
def correct_boxes(self, box_xy, box_wh, input_shape, image_shape):
box_yx = box_xy[..., ::-1]
box_hw = box_wh[..., ::-1]
input_shape = np.array(input_shape, dtype=np.float32)
image_shape = np.array(image_shape, dtype=np.float32)
new_shape = np.round(image_shape * np.min(input_shape / image_shape))
offset = (input_shape - new_shape) / 2. / input_shape
scale = input_shape / new_shape
box_yx = (box_yx - offset) * scale
box_hw *= scale
box_mins = box_yx - (box_hw / 2.)
box_maxes = box_yx + (box_hw / 2.)
boxes = np.concatenate([
box_mins[..., 0:1],
box_mins[..., 1:2],
box_maxes[..., 0:1],
box_maxes[..., 1:2]
], axis=-1)
boxes *= np.concatenate([image_shape, image_shape])
return boxes
def boxes_and_scores(self, feats, anchors, num_classes, input_shape, image_shape):
box_xy, box_wh, box_confidence, box_class_probs = self._get_feats(feats, anchors, num_classes, input_shape)
boxes = self.correct_boxes(box_xy, box_wh, input_shape, image_shape)
boxes = np.reshape(boxes, [-1, 4])
box_scores = box_confidence * box_class_probs
box_scores = np.reshape(box_scores, [-1, num_classes])
return boxes, box_scores
def draw_bbox(self, image, bboxes):
num_classes = len(self.class_names)
image_h, image_w, _ = image.shape
hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
no_objects_detected = True
for bbox in bboxes:
no_objects_detected = False # Objects are detected
coor = np.array(bbox[:4], dtype=np.int32)
score = bbox[4]
class_ind = int(bbox[5])
bbox_thick = int(1.0 * (image_h + image_w) / 600)
c1, c2 = (coor[0], coor[1]), (coor[1], coor[0])
# Draw bounding box and label
img = cvzone.cornerRect(
image, # The image to draw on
(coor[0], coor[1], coor[3], coor[2]), # The position and dimensions of the rectangle (x, y, width, height)
l=30, # Length of the corner edges
t=5, # Thickness of the corner edges
rt=1, # Thickness of the rectangle
colorR=(255, 0, 255), # Color of the rectangle
colorC=(0, 255, 0) # Color of the corner edges
)
color = (255, 0, 255)
scale = 5
img, bbox = cvzone.putTextRect(
image,
'{}: {:.2f}'.format(self.class_names[class_ind], score),
(coor[0], coor[1] - 20), # Image and starting position of the rectangle
scale=2,
thickness=2, # Font scale and thickness
colorT=(255, 255, 255), colorR=(255, 0, 255), # Text color and Rectangle color
font=cv2.FONT_HERSHEY_PLAIN, # Font type
offset=10, # Offset of text inside the rectangle
border=2, colorB=(0, 255, 0) # Border thickness and color
)
# Calculate distance from the center of the object to the bottom-left corner of the window
x1, y1 = c1
x2, y2 = (0, image_h)
cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
length = math.hypot(x2 - x1, y2 - y1)
# Draw distance and control TurtleSim
if img is not None:
cvzone.putTextRect(
image,
'{}'.format(int(length)),
(cx, cy - 10), # Image and starting position of the rectangle
scale=1,
thickness=1, # Font scale and thickness
colorT=(255, 255, 255),
colorR=(255, 0, 255),
font=cv2.FONT_HERSHEY_PLAIN, # Font type
offset=10, # Offset of text inside the rectangle
border=2,
colorB=(0, 255, 0) # Border thickness and color
)
cv2.circle(img, (x1, y1), scale, color, cv2.FILLED)
cv2.circle(img, (x2, y2), scale, color, cv2.FILLED)
cv2.line(img, (x1, y1), (x2, y2), color, max(1, scale // 3))
cv2.circle(img, (cx, cy), scale, color, cv2.FILLED)
                # Stop movement if the distance is below the threshold (300 pixels)
if length < 300:
self.get_logger().info('Object too close: Stopping TurtleSim')
msg = Twist()
msg.linear.x = 0.0
msg.linear.y = 0.0
msg.linear.z = 0.0
msg.angular.x = 0.0
msg.angular.y = 0.0
msg.angular.z = 0.0
self.cmd_vel_publisher.publish(msg)
else:
self.get_logger().info('Object far away: Moving TurtleSim')
msg = Twist()
msg.linear.x = 2.0
msg.linear.y = 0.0
msg.linear.z = 0.0
msg.angular.x = 0.0
msg.angular.y = 0.0
msg.angular.z = 0.0
self.cmd_vel_publisher.publish(msg)
# If no objects detected, move forward
if no_objects_detected:
self.get_logger().info('No objects detected: Moving forward')
msg = Twist()
msg.linear.x = 2.0
msg.linear.y = 0.0
msg.linear.z = 0.0
msg.angular.x = 0.0
msg.angular.y = 0.0
msg.angular.z = 0.0
self.cmd_vel_publisher.publish(msg)
return image
def nms_boxes(self, boxes, scores, iou_threshold=0.45):
if len(boxes) == 0:
return []
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
areas = (x2 - x1) * (y2 - y1)
order = scores.argsort()[::-1]
keep = []
while order.size > 0:
i = order[0]
keep.append(i)
if order.size == 1:
break
xx1 = np.maximum(x1[i], x1[order[1:]])
yy1 = np.maximum(y1[i], y1[order[1:]])
xx2 = np.minimum(x2[i], x2[order[1:]])
yy2 = np.minimum(y2[i], y2[order[1:]])
w = np.maximum(0.0, xx2 - xx1)
h = np.maximum(0.0, yy2 - yy1)
inter = w * h
ovr = inter / (areas[i] + areas[order[1:]] - inter)
inds = np.where(ovr <= iou_threshold)[0]
order = order[inds + 1]
return keep
def evaluate(self, yolo_outputs, image_shape, class_names, anchors):
score_thresh = 0.7
anchor_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
boxes = []
box_scores = []
input_shape = np.shape(yolo_outputs[0])[1:3]
input_shape = np.array(input_shape) * 32
for i in range(len(yolo_outputs)):
_boxes, _box_scores = self.boxes_and_scores(
yolo_outputs[i], anchors[anchor_mask[i]], len(class_names),
input_shape, image_shape)
boxes.append(_boxes)
box_scores.append(_box_scores)
boxes = np.concatenate(boxes, axis=0)
box_scores = np.concatenate(box_scores, axis=0)
mask = box_scores >= score_thresh
boxes_ = []
scores_ = []
classes_ = []
for c in range(len(class_names)):
class_boxes = boxes[mask[:, c]]
class_box_scores = box_scores[:, c]
class_box_scores = class_box_scores[mask[:, c]]
nms_index = self.nms_boxes(class_boxes, class_box_scores)
class_boxes = class_boxes[nms_index]
class_box_scores = class_box_scores[nms_index]
classes = np.ones_like(class_box_scores, dtype=np.int32) * c
boxes_.append(class_boxes)
scores_.append(class_box_scores)
classes_.append(classes)
boxes_ = np.concatenate(boxes_, axis=0)
scores_ = np.concatenate(scores_, axis=0)
classes_ = np.concatenate(classes_, axis=0)
return boxes_, scores_, classes_
def process_frame(self, input_image):
# Pre-process frame
image_size = input_image.shape[:2]
image_data = np.array(self.pre_process(input_image, (416, 416)), dtype=np.float32)
# Fetch data to DPU and trigger it
self.image[0, ...] = image_data.reshape(self.shapeIn[1:])
job_id = self.dpu.execute_async(self.input_data, self.output_data)
self.dpu.wait(job_id)
# Retrieve output data
conv_out0 = np.reshape(self.output_data[0], self.shapeOut0)
conv_out1 = np.reshape(self.output_data[1], self.shapeOut1)
conv_out2 = np.reshape(self.output_data[2], self.shapeOut2)
yolo_outputs = [conv_out0, conv_out1, conv_out2]
# Decode output from YOLOv3
boxes, scores, classes = self.evaluate(yolo_outputs, image_size, self.class_names, self.anchors)
# Combine boxes, scores, and classes for drawing
bboxes = np.concatenate([boxes, scores[:, np.newaxis], classes[:, np.newaxis]], axis=1)
# Draw bounding boxes on the original image
result_image = self.draw_bbox(input_image, bboxes)
return result_image
def listener_callback(self, msg):
self.get_logger().info('Receiving video frame')
cv_image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
processed_image = self.process_frame(cv_image)
cv2.imshow("YOLOv3 DPU", processed_image)
cv2.waitKey(1)
def main(args=None):
rclpy.init(args=args)
yolo_dpu_node = YOLODPUNode()
rclpy.spin(yolo_dpu_node)
yolo_dpu_node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
This ROS 2 node implements object detection with YOLOv3 on a Kria KR260 to guide robot movement in a farm field. It subscribes to an image topic, processes images using the DPU, and publishes movement commands based on the results.
Object Detection and Stopping Behavior:
- The robot moves in a straight line when no objects are detected.
- Upon object detection, the robot initiates a stopping sequence.
- Before stopping, the robot calculates the distance to the object.
- A threshold distance is used to determine the stopping point, ensuring object avoidance.
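Stripped of the drawing code, the stop/go decision in draw_bbox reduces to a few lines: the node measures the pixel distance from a detected box's corner to the bottom-left corner of the frame and compares it to a 300-pixel threshold. A simplified sketch of that logic (the example coordinates are made up):
import math

STOP_THRESHOLD_PX = 300  # same threshold used in the node above

def forward_speed(bbox_corner, image_h):
    # Return 0.0 (stop) when a detection is close, otherwise 2.0 (drive forward)
    x1, y1 = bbox_corner            # corner of the detected bounding box
    x2, y2 = 0, image_h             # bottom-left corner of the frame
    distance = math.hypot(x2 - x1, y2 - y1)
    return 0.0 if distance < STOP_THRESHOLD_PX else 2.0

# Example: a box whose corner sits at (50, 400) in a 480-pixel-high frame
print(forward_speed((50, 400), 480))  # distance is roughly 94 px, so the output is 0.0 (stop)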
Modifying The Setup File
Now that we have an image publisher, a subscriber, and an image processor, let's modify the setup.py file in our clibot_pkg directory. This file defines the package details and allows us to utilize ROS 2 commands with entry points.
import os
from glob import glob

from setuptools import find_packages, setup
package_name = 'clibot_pkg'
setup(
name=package_name,
version='0.0.0',
packages=find_packages(exclude=['test']),
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
        # Install launch files so `ros2 launch clibot_pkg clibot_launch.py` can find them
        (os.path.join('share', package_name, 'launch'), glob('launch/*.py')),
    ],
install_requires=['setuptools'],
zip_safe=True,
maintainer='kennedy',
maintainer_email='bobbanda441145@gmail.com',
    description='This package is for Clibot, a farm-monitoring robot. It detects objects and estimates their distance to support autonomous navigation.',
license='TODO: License declaration',
tests_require=['pytest'],
entry_points={
'console_scripts': [
'yolo = clibot_pkg.yolo:main',
'cam_sub = clibot_pkg.webcam:main',
'control = clibot_pkg.controller:main',
'database = clibot_pkg.fire_db:main'
],
},
)
Important: Save all files you modify before proceeding.
Build the ROS 2 package
Open a terminal (Ctrl+Alt+T). Before building and running the nodes, you'll need superuser privileges (root access) to use the Deep Learning Processing Unit (DPU); superuser privileges should be used with caution.
First, enter the following command; it will prompt for your password:
sudo su
Activate your ROS 2 virtual environment using the appropriate command for your environment setup. For example, mine is:
source /etc/profile.d/pynq_venv.sh
Then navigate to your ROS 2 workspace. If you've made changes to your code, build the package and source the setup script:
cd ~/clibot
colcon build
source ~/clibot/install/setup.bash
Then run the ROS 2 node for the image publisher:
ros2 run clibot_pkg cam_sub
and, in a second terminal, for the subscriber:
ros2 run clibot_pkg yolo
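With both nodes running, you can confirm they see each other by listing the active topics, either with `ros2 topic list` in another sourced terminal or with a short throwaway script such as this sketch:
import rclpy

rclpy.init()
node = rclpy.create_node('topic_check')
rclpy.spin_once(node, timeout_sec=1.0)  # give discovery a moment to settle
for name, types in node.get_topic_names_and_types():
    print(name, types)  # expect /image_raw and /turtle1/cmd_vel among the entries
node.destroy_node()
rclpy.shutdown()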
Creating a ROS 2 Node for Manual Operation with Dualshock Controller
Creating a Manual Node for the DualShock Controller. We now want to create a node for manual operation using a DualShock controller. The goal is to switch between manual and autonomous modes when the Select button on the controller is pressed. For troubleshooting purposes, we also want to be able to send this data to the real-time database. Let's create a simple ROS 2 node in your package: create a new file named controller.py in the same clibot_pkg module directory.
import rclpy # ROS 2 Python client library
from rclpy.node import Node # Base class for creating ROS 2 nodes
from sensor_msgs.msg import Joy # Message type for joystick inputs
from geometry_msgs.msg import Twist # Message type for velocity commands
from subprocess import Popen, PIPE # Library to run subprocesses (for launching object detection)
import os # Library for operating system functionalities
# Define the TurtleController class, inheriting from Node
class TurtleController(Node):
def __init__(self):
super().__init__('turtle_controller') # Initialize the node with the name 'turtle_controller'
# Create a subscription to the 'joy' topic to listen for joystick inputs
self.joy_subscriber = self.create_subscription(Joy, 'joy', self.joy_callback, 10)
# Create a publisher to the 'turtle1/cmd_vel' topic to send velocity commands to TurtleSim
self.cmd_vel_publisher = self.create_publisher(Twist, 'turtle1/cmd_vel', 10)
self.manual_mode = True # Variable to keep track of the current mode (manual or automated)
self.object_detection_process = None # Variable to store the subprocess for object detection
# Declare parameters for package name and launch file
self.declare_parameter('package_name', 'clibot_pkg')
self.declare_parameter('launch_file', 'clibot_launch.py')
self.get_logger().info("Turtle Controller Initialized.") # Log node initialization
def joy_callback(self, msg):
# Callback function for joystick inputs
        select_button = msg.buttons[6] # 'Select' button (index 6 in the buttons array for this controller mapping)
if select_button == 1:
self.switch_mode() # Switch between manual and automated modes if 'select' button is pressed
if self.manual_mode:
self.manual_control(msg) # Control TurtleSim manually if in manual mode
def switch_mode(self):
# Function to switch between manual and automated modes
if self.manual_mode:
self.start_object_detection() # Start object detection if switching to automated mode
else:
self.stop_object_detection() # Stop object detection if switching to manual mode
self.manual_mode = not self.manual_mode # Toggle the mode
mode = "Manual" if self.manual_mode else "Automated" # Determine the current mode as a string
self.get_logger().info(f"Switched to {mode} mode.") # Log the mode switch
def manual_control(self, msg):
# Function to control TurtleSim manually
twist = Twist()
twist.linear.x = msg.axes[1] # Map the vertical axis of the left joystick to linear velocity
twist.angular.z = msg.axes[0] # Map the horizontal axis of the left joystick to angular velocity
self.cmd_vel_publisher.publish(twist) # Publish the velocity command
def start_object_detection(self):
# Function to start object detection
if self.object_detection_process is None:
package_name = self.get_parameter('package_name').get_parameter_value().string_value
launch_file = self.get_parameter('launch_file').get_parameter_value().string_value
# Launch the object detection process
self.object_detection_process = Popen(['ros2', 'launch', package_name, launch_file], stdout=PIPE, stderr=PIPE)
self.get_logger().info('Object detection launched.')
def stop_object_detection(self):
# Function to stop object detection
if self.object_detection_process is not None:
self.object_detection_process.terminate() # Terminate the object detection process
self.object_detection_process = None
self.get_logger().info('Object detection stopped.')
def main(args=None):
# Main function to initialize and run the node
rclpy.init(args=args) # Initialize the ROS 2 Python client library
node = TurtleController() # Create an instance of the TurtleController node
rclpy.spin(node) # Spin the node to keep it running
# Clean up the object detection process if it is still running
if node.object_detection_process is not None:
node.object_detection_process.terminate()
node.destroy_node() # Destroy the node
rclpy.shutdown() # Shut down the ROS 2 Python client library
if __name__ == '__main__':
main() # Run the main function if the script is executed directly
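Button numbering differs between controllers and joy driver versions, so treating index 6 as the Select button is an assumption. If mode switching does not trigger on your controller, a throwaway node like the sketch below prints which indices change when you press a button, so you can adjust joy_callback accordingly.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Joy

class JoyInspector(Node):
    def __init__(self):
        super().__init__('joy_inspector')
        # Listen to the same 'joy' topic the controller node uses
        self.create_subscription(Joy, 'joy', self.callback, 10)

    def callback(self, msg):
        pressed = [i for i, b in enumerate(msg.buttons) if b == 1]
        if pressed:
            self.get_logger().info(f'Pressed button indices: {pressed}')

def main(args=None):
    rclpy.init(args=args)
    node = JoyInspector()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()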
We also need a node that forwards the incoming sensor data to the database; this node is called fire_db.py.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32MultiArray
import firebase_admin
from firebase_admin import credentials, firestore
# Define a custom ROS 2 node to interact with Firebase
class FirebaseNode(Node):
def __init__(self):
# Initialize the node with the name 'firebase_node'
super().__init__('firebase_node')
        # Subscribe to the '/hoverboard/dht' topic to receive temperature and humidity data from the ESP32
        self.subscription = self.create_subscription(
            Float32MultiArray, # Message type published by the ESP32 firmware
            '/hoverboard/dht', # Topic name
            self.dht_callback, # Callback function
            10 # QoS profile depth
        )
# Initialize Firebase with the provided credentials
self.cred = credentials.Certificate("/home/kennedy/Documents/clibot-a3441-firebase-adminsdk-c007e-d8e46f293f.json")
firebase_admin.initialize_app(self.cred)
self.db = firestore.client()
    # Callback function to process incoming DHT sensor data
    def dht_callback(self, msg):
        # Build a document from the temperature and humidity readings
        dht_data = {'temperature': float(msg.data[0]), 'humidity': float(msg.data[1])}
        # Send the sensor data to the Firebase database
        self.db.collection('controller_data').add(dht_data)
# Main function to initialize the ROS 2 system and run the node
def main(args=None):
rclpy.init(args=args) # Initialize the rclpy library
node = FirebaseNode() # Create an instance of the FirebaseNode
rclpy.spin(node) # Keep the node running to process callbacks
node.destroy_node() # Clean up the node when shutting down
rclpy.shutdown() # Shutdown the rclpy library
# Entry point for the script
if __name__ == '__main__':
main()
Creating and configuring Realtime Database
Before running these nodes, you'll need to create a Firebase account and configure your database to receive data. To learn how to create and configure the database, watch this video: "Getting started with the Firebase Realtime Database on the web."
Once everything is configured, your window should look like this:
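Before starting the ROS node, you can confirm the downloaded service-account key works with a short standalone script. This is only a sketch; the key path is the one used in fire_db.py, so replace it with the file you downloaded from your own Firebase project.
import firebase_admin
from firebase_admin import credentials, firestore

# Path to the service-account key downloaded from the Firebase console (adjust to your own file)
cred = credentials.Certificate("/home/kennedy/Documents/clibot-a3441-firebase-adminsdk-c007e-d8e46f293f.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

# Write one test document to the same collection the node uses
db.collection('controller_data').add({'temperature': 25.0, 'humidity': 60.0})
print("Test document written - check the Firebase console.")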
Before running the nodes, we need to install the "joy" package for PS3 controller functionality. For installation details, please click here.
Once the package is installed, we can proceed with running the nodes. Start by entering the following command, which will prompt for your password:
sudo su
Activate your ROS 2 virtual environment using the appropriate command for your environment setup. For example, mine is:
source /etc/profile.d/pynq_venv.sh
Then navigate to your ROS 2 workspace. If you've made changes to your code, build the package and source the setup script:
cd ~/clibot
colcon build
source ~/clibot/install/setup.bash
Then run the ROS 2 node for the controller:
ros2 run clibot_pkg control
and, in a second terminal, for the database:
ros2 run clibot_pkg database
Now that we've completed the system's brain, it's time to focus on the control system. Since we're using an ESP32 as the motor controller and micro-ROS for communication with our Kria KR260, we need to modify the initial motor control code (hoverserial.ino). We'll incorporate a ROS 2 subscriber and publishers, enabling us to control the motors effectively.
/* This code is designed to control a hoverboard via UART communication and publish sensor data using micro-ROS. The hoverboard's movement is controlled based on Twist messages received over ROS 2, and it publishes feedback data, including battery voltage and board temperature. Additionally, it reads data from a DHT11 sensor (temperature and humidity) and publishes this data via ROS 2. */
#include <micro_ros_arduino.h>
#include <stdio.h>
#include <rcl/rcl.h>
#include <rcl/error_handling.h>
#include <rclc/rclc.h>
#include <rclc/executor.h>
#include <SoftwareSerial.h>
#include <geometry_msgs/msg/twist.h>
#include <std_msgs/msg/float32_multi_array.h>
#include <DHT.h>
// DHT sensor settings
#define DHTPIN 16 // GPIO pin for the DHT sensor
#define DHTTYPE DHT11 // DHT 11 (you can change it to DHT22 if you use that)
// Create DHT object
DHT dht(DHTPIN, DHTTYPE);
rcl_subscription_t subscriber;
geometry_msgs__msg__Twist msg;
rclc_executor_t executor;
rcl_allocator_t allocator;
rclc_support_t support;
rcl_node_t node;
rcl_publisher_t publisher;
rcl_publisher_t dht_publisher;
std_msgs__msg__Float32MultiArray feedback_msg;
std_msgs__msg__Float32MultiArray dht_msg;
// ########################## DEFINES ##########################
#define HOVER_SERIAL_BAUD 115200 // [-] Baud rate for HoverSerial (used to communicate with the hoverboard)
#define SERIAL_BAUD 115200 // [-] Baud rate for built-in Serial (used for the Serial Monitor)
#define START_FRAME 0xABCD // [-] Start frame definition for reliable serial communication
#define TIME_SEND 200 // [ms] Sending time interval
#define SPEED_MAX_TEST 200 // [-] Maximum speed for testing
#define SPEED_STEP 200 // [-] Speed step
// #define DEBUG_RX // [-] Debug received data. Prints all bytes to serial (comment-out to disable)
#define LED_PIN 13
#define DT 100
#define RCCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){error_loop();}}
#define RCSOFTCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){error_loop();}}
SoftwareSerial HoverSerial(2, 4); // RX, TX
// Global variables
uint8_t idx = 0; // Index for new data pointer
uint16_t bufStartFrame; // Buffer Start Frame
byte *p; // Pointer declaration for the new received data
byte incomingByte;
byte incomingBytePrev;
typedef struct {
uint16_t start;
int16_t steer;
int16_t speed;
uint16_t checksum;
} SerialCommand;
SerialCommand Command;
typedef struct {
uint16_t start;
int16_t cmd1;
int16_t cmd2;
int16_t speedR_meas;
int16_t speedL_meas;
int16_t batVoltage;
int16_t boardTemp;
uint16_t cmdLed;
uint16_t checksum;
} SerialFeedback;
SerialFeedback Feedback;
SerialFeedback NewFeedback;
// ########################## SEND ##########################
void Send(int16_t uSteer, int16_t uSpeed) {
// Create command
Command.start = (uint16_t)START_FRAME;
Command.steer = (int16_t)uSteer;
Command.speed = (int16_t)uSpeed;
Command.checksum = (uint16_t)(Command.start ^ Command.steer ^ Command.speed);
// Write to Serial
HoverSerial.write((uint8_t *) &Command, sizeof(Command));
}
// ########################## RECEIVE ##########################
void Receive() {
// Check for new data availability in the Serial buffer
if (HoverSerial.available())
{
incomingByte = HoverSerial.read(); // Read the incoming byte
bufStartFrame = ((uint16_t)(incomingByte) << 8) | incomingBytePrev; // Construct the start frame
}
else
{
return;
}
// If DEBUG_RX is defined print all incoming bytes
#ifdef DEBUG_RX
Serial.print(incomingByte);
return;
#endif
// Copy received data
if (bufStartFrame == START_FRAME)
{ // Initialize if new data is detected
p = (byte *)&NewFeedback;
*p++ = incomingBytePrev;
*p++ = incomingByte;
idx = 2;
}
else if (idx >= 2 && idx < sizeof(SerialFeedback))
{ // Save the new received data
*p++ = incomingByte;
idx++;
}
// Check if we reached the end of the package
if (idx == sizeof(SerialFeedback)) {
uint16_t checksum;
checksum = (uint16_t)(NewFeedback.start ^ NewFeedback.cmd1 ^ NewFeedback.cmd2 ^ NewFeedback.speedR_meas ^ NewFeedback.speedL_meas
^ NewFeedback.batVoltage ^ NewFeedback.boardTemp ^ NewFeedback.cmdLed);
// Check validity of the new data
if (NewFeedback.start == START_FRAME && checksum == NewFeedback.checksum) {
// Copy the new data
memcpy(&Feedback, &NewFeedback, sizeof(SerialFeedback));
// Publish the feedback data
//feedback_msg.data.data[0] = (float)Feedback.cmd1;
//feedback_msg.data.data[1] = (float)Feedback.cmd2;
//feedback_msg.data.data[2] = (float)Feedback.speedR_meas;
//feedback_msg.data.data[3] = (float)Feedback.speedL_meas;
      feedback_msg.data.data[0] = (float)Feedback.batVoltage; // index 0: battery voltage (the message capacity is 2, see setup())
      feedback_msg.data.data[1] = (float)Feedback.boardTemp; // index 1: board temperature
//feedback_msg.data.data[6] = (float)Feedback.cmdLed;
RCSOFTCHECK(rcl_publish(&publisher, &feedback_msg, NULL));
}
else {
Serial.println("Non-valid data skipped");
}
idx = 0; // Reset the index (prevents re-entering this branch on the next cycle)
}
// Update previous states
incomingBytePrev = incomingByte;
}
// ################CHECKING ERRORS###########################
void error_loop() {
while (1) {
digitalWrite(LED_PIN, !digitalRead(LED_PIN));
delay(100);
}
}
// twist message cb
void subscription_callback(const void *msgin) {
const geometry_msgs__msg__Twist *msg = (const geometry_msgs__msg__Twist *)msgin;
if (msg->linear.x == 2.0) {
Send(0, SPEED_MAX_TEST);
delay(DT);
} else if (msg->linear.x == -2.0) {
Send(0, -SPEED_MAX_TEST);
delay(DT);
} else if (msg->angular.z == 2.0) {
Send(SPEED_MAX_TEST, 100);
delay(DT);
} else if (msg->angular.z == -2.0) {
Send(-SPEED_MAX_TEST, 100);
delay(DT);
} else {
digitalWrite(LED_PIN, LOW);
}
}
void setup() {
set_microros_transports();
pinMode(LED_PIN, OUTPUT);
digitalWrite(LED_PIN, HIGH);
Serial.begin(SERIAL_BAUD);
Serial.println("Hoverboard Serial v1.0");
HoverSerial.begin(HOVER_SERIAL_BAUD);
// Initialize DHT sensor
dht.begin();
delay(2000);
allocator = rcl_get_default_allocator();
// create init_options
RCCHECK(rclc_support_init(&support, 0, NULL, &allocator));
// create node
RCCHECK(rclc_node_init_default(&node, "micro_ros_arduino_node", "", &support));
// create subscriber
RCCHECK(rclc_subscription_init_default(
&subscriber,
&node,
ROSIDL_GET_MSG_TYPE_SUPPORT(geometry_msgs, msg, Twist),
"turtle1/cmd_vel"));
// create publisher
RCCHECK(rclc_publisher_init_default(
&publisher,
&node,
ROSIDL_GET_MSG_TYPE_SUPPORT(std_msgs, msg, Float32MultiArray),
"/feedback"));
// create DHT publisher
RCCHECK(rclc_publisher_init_default(
&dht_publisher,
&node,
ROSIDL_GET_MSG_TYPE_SUPPORT(std_msgs, msg, Float32MultiArray),
"/hoverboard/dht"));
// create executor
RCCHECK(rclc_executor_init(&executor, &support.context, 2, &allocator));
RCCHECK(rclc_executor_add_subscription(&executor, &subscriber, &msg, &subscription_callback, ON_NEW_DATA));
// Initialize feedback message
feedback_msg.data.capacity = 2;
feedback_msg.data.size = 2;
feedback_msg.data.data = (float *)malloc(feedback_msg.data.capacity * sizeof(float));
// Initialize DHT message
dht_msg.data.capacity = 2;
dht_msg.data.size = 2;
dht_msg.data.data = (float *)malloc(dht_msg.data.capacity * sizeof(float));
}
void loop() {
Receive();
delay(100);
RCCHECK(rclc_executor_spin_some(&executor, RCL_MS_TO_NS(100)));
// Read DHT sensor data
float h = dht.readHumidity();
float t = dht.readTemperature();
// Check if any reads failed and exit early (to try again).
if (isnan(h) || isnan(t)) {
Serial.println("Failed to read from DHT sensor!");
return;
}
// Publish the DHT data
dht_msg.data.data[0] = t;
dht_msg.data.data[1] = h;
RCSOFTCHECK(rcl_publish(&dht_publisher, &dht_msg, NULL));
}
Before running the nodes, we may need to install Bluetooth software to facilitate the connection between the KR260 and your Bluetooth dongle. Refer to the image below for the correct connection between the KR260 and the Bluetooth adapter.
For software installation instructions, click here. Once the software is installed, we can proceed with running the nodes. Start by entering the following command, which will prompt for your password:
sudo su
Activate your ROS 2 virtual environment using the appropriate command for your environment setup. For example, mine is:
source /etc/profile.d/pynq_venv.sh
We will now run the joy node, which is responsible for interfacing a joystick with ROS 2:
ros2 run joy joy_node #Here we are running the joy package
Open another terminal and navigate to your ROS 2 workspace. If you've made changes to your code, build the package and source the setup script:
cd ~/clibot
colcon build
source ~/clibot/install/setup.bash
Then run the ROS 2 node for the controller:
ros2 run clibot_pkg control
and, in a second terminal, for the database:
ros2 run clibot_pkg database
Now that the controller and database nodes are running, let's test the ESP32 controller node.
sudo su
Activate your ROS 2 virtual environment using the appropriate command for your environment setup. For example, mine is:
source /etc/profile.d/pynq_venv.sh
Then navigate to the directory where you installed your micro-ROS package; in my case it is named ws_test_microros:
cd ~/ws_test_microros
source install/setup.bash # source the micro-ROS workspace overlay
Then we can run the agent node:
ros2 run micro_ros_agent micro_ros_agent serial --dev /dev/ttyUSB0
If the Micro-ROS installation was successful, you should see output similar to the following
Note: The exact output might vary depending on your terminal and configuration.
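To confirm the ESP32 is actually publishing once the agent is up, you can echo the /hoverboard/dht topic defined in the firmware, either with `ros2 topic echo /hoverboard/dht` or with a small subscriber sketch like the one below (run from a terminal with the same environment sourced):
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32MultiArray

class DhtEcho(Node):
    def __init__(self):
        super().__init__('dht_echo')
        # Same topic and message type as the ESP32 firmware publishes
        self.create_subscription(Float32MultiArray, '/hoverboard/dht', self.callback, 10)

    def callback(self, msg):
        # The firmware publishes [temperature, humidity]
        self.get_logger().info(f'temperature={msg.data[0]:.1f} C, humidity={msg.data[1]:.1f} %')

def main(args=None):
    rclpy.init(args=args)
    node = DhtEcho()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()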
As you can see, when we run nodes individually, it can be cumbersome for complex projects. To address this, we can launch them simultaneously using launch files.
Let's create a Python file named clibot_launch.py inside the launch directory of your clibot_pkg package:
mkdir -p ~/clibot/src/clibot_pkg/launch
cd ~/clibot/src/clibot_pkg/launch
touch clibot_launch.py
chmod +x clibot_launch.py
Here's the code we need to write in clibot_launch.py:
from launch import LaunchDescription
from launch_ros.actions import Node
# This function generates a launch description, which is a configuration for
# launching multiple ROS 2 nodes. In this case, we are launching three nodes
# from the 'clibot_pkg' package: 'yolo', 'cam_sub', and 'database'.
def generate_launch_description():
return LaunchDescription([
# This Node action launches the 'yolo' executable from the 'clibot_pkg' package.
# The node is named 'controller' and its output is directed to the screen.
Node(
package='clibot_pkg',
executable='yolo',
name='controller',
output='screen'
),
# This Node action launches the 'cam_sub' executable from the 'clibot_pkg' package.
# The node is named 'webcam' and its output is directed to the screen.
Node(
package='clibot_pkg',
executable='cam_sub',
name='webcam',
output='screen',
),
# This Node action launches the 'database' executable from the 'clibot_pkg' package.
# The node is named 'fire_db' and its output is directed to the screen.
Node(
package='clibot_pkg',
executable='database',
name='fire_db',
output='screen',
),
])
# This block ensures that the generate_launch_description function is executed
# if this script is run as the main module. This is useful for testing or
# running the launch description directly.
if __name__ == '__main__':
generate_launch_description()
Now, to launch our nodes, activate your ROS 2 virtual environment using the appropriate command for your setup. Then, run the following command:
sudo su
Activate your ROS 2 virtual environment using the appropriate command for your environment setup. For example, mine is:
source /etc/profile.d/pynq_venv.sh
Then we launch the file:
ros2 launch clibot_pkg clibot_launch.py
This will launch the yolo, cam_sub, and database nodes simultaneously.
3. BOM (Bill of Materials)
In this section, we will detail the different materials used when building the robot.
Parts of the structure
- 40mm * 20mm * 1.5mm square tube * 5
- 500mm * 500mm * 3mm plate
- 2400mm * 1200mm flat sheet
Devices
- Kria KR260
- Logitech C920 HD Pro Webcam,
- 15V 3A to 12V 3A Buck Converter
- Hoverboard Motors
- Usb Hub
- Bluetooth USB Adapter
- Sony PS3 Controller GASIA
- DHT11 Temperature And Humidity Sensor
- ESP32-WROOM
Power
- 42 Volt 3Amp Hoverboard battery
- 1.3/3.5mm DC Power Plug
- 486-3574-ND PWR PLUG 2.5X5.5MM
- 24T12-10A, 24V to 12V Power Converter 10A 120W
Wires
- 18awg Cable
- 16awg Cable
- Jump Wires Cable Male To Male
- Jump Wires Cable female To Male
- Jump Wires Cable Female To Female
Fasteners
- 3mm bolts and nuts
- Self-tapping screws
- Bolt And Nuts Washers HEX Screws
- insulation tape
Signaling
- 0.5m Micro USB Cable
- VIDEO DISPLAY to VGA adapter 15cm
- USB hub
Tools
- Solder and iron
- Multimeter
- Wire strippers and cutters
- Crimp tools
- soldering wire
- sponge
In the next section, some component wiring diagrams are shown.
Clibot's Power System
Clibot utilizes 42V/2A hoverboard batteries as its primary power source to operate various system components. This choice aligns with the voltage requirements of the motors, which necessitate around 42V for proper function.
Powering the Kria KR260
To power the Kria KR260, a buck converter is employed. This converter taps 15V/3A from the hoverboard board and regulates it to a stable output of 12V/3A, which is suitable for the Kria KR260's operation. Other components within the system draw power directly from the Kria KR260.
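As a rough sanity check of those numbers (the converter efficiency implied here is derived from the stated ratings, not a measurement):
# Back-of-the-envelope power budget for the buck converter described above
v_in, i_in = 15.0, 3.0     # tapped from the hoverboard mainboard
v_out, i_out = 12.0, 3.0   # delivered to the Kria KR260 and its peripherals

p_in = v_in * i_in         # 45 W drawn at the converter input
p_out = v_out * i_out      # 36 W available on the 12 V rail
print(f"input {p_in:.0f} W, output {p_out:.0f} W, implied efficiency {p_out / p_in:.0%}")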
Environmental Sensing
An enhancement has been made to Clibot's sensory capabilities with the addition of a DHT sensor. This sensor, categorized as a generic MEMS sensor, enables the robot to perceive its environment by measuring temperature and humidity.
Communication with Database System
For communication purposes, Clibot leverages Ethernet technology to establish a connection with the database system. This connection facilitates data exchange between Clibot and the database.
4. Assembly Instructions
Remember: Safety is paramount. If you're unsure about any aspect of the welding process, it's best to consult a professional welder for guidance.
Step 1: Cutting the Square Tube
Begin by using the cutting disc to cut the square tube into two straight pieces: one 900mm long and another 300mm long.
Step 2: Welding the Base
As shown in the image, proceed to weld the two pieces together to form the robot's base. Focus on welding the bottom part first.
Step 3: Securing the Hoverboard Motors in the Base
The 42V 10-inch wheel motors from the hoverboard are mounted directly into the robot's base. To secure them properly, you'll need Nuts 30s-M6 and M6 Hexagon Socket Head Screws. Use spanners to tighten the screws firmly.
Step 4 (Optional): Enhancing Stability
For increased stability, you can add an outer layer to the base. This will provide additional support for the motors and the entire robot. Once the layer is finished, the robot will look something like this:
Step 5: Attaching the Upper Layer
With the lower base assembly complete, we can move on to the upper layer. This layer adds a design element to the robot. The image provides a reference for what the completed upper layer should look like after welding.
Unfortunately, due to recent power outages (loadshedding) lasting up to 20 hours, I've encountered a significant setback in completing the robot assembly before the contest deadline at the end of the month. While I've successfully finished the lower base assembly as shown above, completing the upper layer with welding within the remaining timeframe might be very difficult.
5. Future Work
Future Enhancements
- Sensor Integration: The next phase of development will involve incorporating a wider range of core sensors. This will allow Clibot to gather more comprehensive environmental data and improve its perception capabilities.
- Assembly Completion: Finalizing the robot's assembly remains a priority. This includes completing the upper layer and integrating all necessary components.
- LiDAR and RADAR Technologies: Future exploration will focus on implementing LiDAR (Light Detection and Ranging) and RADAR (Radio Detection and Ranging) technologies. These sensors will provide Clibot with enhanced obstacle detection and navigation abilities.
- Kria KR260 Potential: I plan to delve deeper into the potential functionalities of the Kria KR260 robotic starter kit. This might involve exploring new programming features or hardware configurations.
- Multi-Robot Communication: An exciting possibility for future development is the introduction of a second Kria KR260 unit. This would enable communication between the robots, potentially allowing for specialized functionalities. For instance, one Kria could handle crop health monitoring tasks while the other focuses on navigation.