"What would you do if you wanted to protect the people you love ?"
That's the kind of question that can keep people awake at night. We love someone so much that we want to protect them and ensure their safety even when we are not around. This is the idea we started from when thinking about this project.
We wanted to create a surveillance system that would help ensure the safety of our family by monitoring the area outside the house.
This is the motivation behind project AMARONE (acronym for Advanced Mobile Autonomous Robot Optimized for Networked Environment).
Incidentally, Amarone is also the name of a fine Italian wine from Valpolicella, thus continuing the tradition of the authors, who name the open source projects they work on after Northern Italian wines, such as ARNEIS and FREISA.
Area and design description
The area of surveillance was chosen to be the balcony that runs around my family's property in the province of Bari (Apulia, Italy). This terrain was selected because it was the flattest surface available, so either a legged or a wheeled robot could be used.
The balcony extends around the entire house. We chose to have the robot (type still to be decided) follow a linear path back and forth along one section of the balcony, because this allows for efficient and consistent monitoring of the area.
By moving linearly along one section and rotating, the robot can also observe the other two sides of the balcony, effectively covering the entire perimeter of the house.
This method ensures thorough coverage without missing any sections, while also simplifying the programming and control of the robot's movements. We opted for this approach over fixed supervision because the robot can dynamically respond to changes and cover a larger area, making the monitoring process more flexible and comprehensive.
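To make the idea concrete, here is a purely illustrative patrol loop. The robot and its drive interfaces do not exist yet, so the `drive_forward`, `rotate` and `scan_with_camera` methods below are hypothetical placeholders, stubbed out so the sketch can actually run:

```python
import time

# Illustration only: the real robot does not exist yet, so this stub just
# logs what the robot would do during one patrol pass.
class RobotStub:
    def drive_forward(self, meters):
        print(f"drive {meters:+.1f} m along the balcony section")

    def rotate(self, degrees):
        print(f"rotate {degrees} degrees")

    def scan_with_camera(self):
        print("scan the visible sides of the balcony with the camera")

SECTION_LENGTH_M = 6.0   # assumed length of the patrolled section
PAUSE_AT_END_S = 2.0     # dwell time at each end of the path

def patrol_once(robot):
    robot.drive_forward(SECTION_LENGTH_M)   # traverse the section
    for _ in range(3):                      # look toward the other sides
        robot.rotate(90)
        robot.scan_with_camera()
    time.sleep(PAUSE_AT_END_S)
    robot.drive_forward(-SECTION_LENGTH_M)  # return to the start
    robot.rotate(180)                       # face the section again

if __name__ == "__main__":
    patrol_once(RobotStub())
```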
Since we have yet to choose a specific type of robot, it is important to consider models that can accommodate the dimensions of the board and are energy efficient, while also keeping the overall weight in mind. When comparing legged robots to wheeled robots, wheeled robots are often favored due to their capacity to carry additional weight, such as the board, battery, and motors.
Wheeled robots typically have higher energy efficiency, which is crucial for prolonged monitoring tasks. Their simpler mechanical design also tends to make them more robust and easier to maintain compared to legged robots.
Moreover, those kinds of robots can often move faster and more smoothly on flat surfaces like a balcony, ensuring better performance for this particular application. Given these factors, wheeled robots are likely the more suitable choice for this project, providing a balance between carrying capacity, energy efficiency, and ease of control.
The robot could also be a homemade design.
Hypothetical robot design
Our design idea for the robot is based on a straightforward and functional 4-wheel drive setup. The robot will feature four motorized wheels, which will provide the necessary mobility and traction to navigate the balcony efficiently, even at the cost of more complex control.
At the center of the robot, we'll have a battery that will power the entire system. This central positioning ensures balanced weight distribution, which is crucial for stability and performance.
The core of the system will be the AMD Kria KR260 Robotics Starter Kit, which will be connected to an OAK-D Lite camera for advanced monitoring and data processing capabilities. The power supply for the whole system is a critical aspect of the design. Typically, the Kria KR260 requires a power supply of around 12V and 3A, so we will use a battery that meets these specifications.
A common choice for such applications might be a 12V lithium-ion battery with an adequate capacity. This type of battery provides sufficient power and is relatively lightweight, further aiding in maintaining the robot's balance and mobility.
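As a rough estimate, 12 V x 3 A is about 36 W at peak, so a 12 V, 5 Ah pack (around 60 Wh) would cover roughly an hour and a half of continuous operation for the board alone, before accounting for the motors; the exact capacity will be chosen once the motors are selected.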
Considering the size of the components, the robot's dimensions will be designed to accommodate the battery, the board, and the OAK-D Lite camera efficiently. We estimate the base of the robot to be approximately 30 cm by 30 cm, allowing enough space for the components and ensuring stability.
The design includes, in short:
- Four motorized wheels for efficient movement and control.
- A centrally placed 12V lithium-ion battery pack to power the entire system.
- A Kria KR260 board connected to an OAK-D Lite camera for recognition and control functionalities.
- A base of approximately 30 cm x 30 cm (on paper).
This configuration will provide a robust and efficient platform for our monitoring and data processing needs while ensuring the robot remains stable and well-balanced.
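As a first check of the camera side, grabbing frames from the OAK-D Lite with the depthai Python API could look like the sketch below; the preview size and stream name are arbitrary choices of ours, not requirements:

```python
import depthai as dai

# Minimal sketch: stream RGB preview frames from the OAK-D Lite.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 480)      # arbitrary preview resolution
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")         # arbitrary stream name
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    for _ in range(100):                       # grab a handful of frames as a test
        frame = queue.get().getCvFrame()       # numpy BGR frame, usable with OpenCV
        print(frame.shape)
```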
Tests with the Kria Board
The first thing was to power up the board with the hardware we had received. This soon required buying additional components, both for the video output and for the USB ports, as will be explained later.
As soon as we started using the board, the provided 16 GB microSD card quickly filled up with OS updates and software.
All the interactions so far have been made using the provided Ubuntu 22.04.4 LTS. Some of the first programs executed on the board were the PYNQ-DPU notebooks. These made clear that the Kria runs models compiled as .xmodel, instead of the .blob format we are accustomed to from our past experiences with the OAK-D camera.
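For reference, loading and running an .xmodel with the pynq-dpu package looks roughly like the sketch below; the bitstream and model file names are placeholders for whatever DPU design and compiled model end up being used:

```python
from pynq_dpu import DpuOverlay
import numpy as np

# Load the DPU bitstream and an .xmodel compiled for it (file names are
# placeholders; ours will differ once the model is compiled).
overlay = DpuOverlay("dpu.bit")
overlay.load_model("our_model.xmodel")

dpu = overlay.runner                      # VART runner exposed by the overlay
in_tensor = dpu.get_input_tensors()[0]
out_tensors = dpu.get_output_tensors()

# Allocate buffers with the shapes the model expects and run one inference.
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.float32)]
output_data = [np.zeros(tuple(t.dims), dtype=np.float32) for t in out_tensors]
job_id = dpu.execute_async(input_data, output_data)
dpu.wait(job_id)
```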
Training the Neural Network
Since we realized we couldn't use a single model running natively on both the board and the camera at the same time, we decided to train a model and save it as .onnx (easy to convert to .h5), which could potentially be converted to both .xmodel and .blob.
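For the OAK-D side of that pipeline, the .onnx file can in principle be turned into a .blob with Luxonis' blobconverter package; a minimal sketch, assuming a local model.onnx and typical settings:

```python
import blobconverter

# Convert an ONNX model to the MyriadX .blob format used by the OAK-D.
# 'model.onnx' is a placeholder; data_type and shaves are typical defaults.
blob_path = blobconverter.from_onnx(
    model="model.onnx",
    data_type="FP16",
    shaves=6,
)
print("Blob written to:", blob_path)
```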
For the training we used Roboflow to split a recorded video into frames and label each one of them. Roboflow also does some magic to increase the robustness of the training process by helping to increase the size of the dataset (data augmentation).
You can find and download the complete dataset from https://universe.roboflow.com/gianluca-teti-wbn5r/test-video-zu1g9
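The same dataset can also be pulled programmatically with the Roboflow Python package; the workspace and project slugs come from the URL above, while the API key is your own and the version number below is only an assumption (use whichever version the project page shows):

```python
from roboflow import Roboflow

# Download the dataset in YOLOv8 format.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("gianluca-teti-wbn5r").project("test-video-zu1g9")
dataset = project.version(1).download("yolov8")   # version 1 is an assumption
print("Dataset extracted to:", dataset.location)
```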
Once you have the dataset, you can retrain a YOLOv8 network using Roboflow, or by running the JupyterLab Notebook attached at the end of this post.
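A minimal training sketch with the ultralytics package is shown below; the hyperparameters and file paths are generic defaults, not necessarily the ones used in the attached notebook:

```python
from ultralytics import YOLO

# Fine-tune a small YOLOv8 model on the Roboflow export.
# data.yaml is generated by the Roboflow download; epochs/imgsz are generic
# defaults, not necessarily what the attached notebook uses.
model = YOLO("yolov8n.pt")
model.train(data="test-video-1/data.yaml", epochs=100, imgsz=640)

# Export the trained weights to ONNX, the intermediate format mentioned above.
model.export(format="onnx")
```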
Motor tests
Given our prior experience with the Raspberry Pi and RPi header, we attempted a direct one-to-one application porting on the Kria Board to see if we could reproduce the motor controls. Naturally, it wasn’t as straightforward as we initially thought. We were overly optimistic.
After some research online, we discovered that one of the main features of the Kria KR260 is its I/O connectivity, either through the PMOD connectors mounted on the board or through the expansion capability of the GPIO/RPi header.
The pins of the PMOD connectors, and of the GPIO header, are routed through the programmable logic and thus need to be enabled through the block design in Vivado. This modification is written into an updated bitstream for the FPGA, which is loaded at boot time together with a matching _device tree_ for the board.
The project containing those modifications would be a PetaLinux project.
We couldn't completely understand and finish this part in time, and we'll have to look into how to make the Python control software talk to the motors, whether directly or through an intermediate driver.
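Once the GPIO is exposed through the programmable logic, one possible way to drive the motor pins from Python is PYNQ's AxiGPIO class. This is only a sketch of what we expect the control code to look like, assuming a Vivado design with an AXI GPIO block named axi_gpio_0 and an exported overlay called motors.bit; none of this is implemented yet.

```python
from pynq import Overlay
from pynq.lib import AxiGPIO

# Sketch only: assumes a custom overlay (motors.bit) whose block design
# contains an AXI GPIO IP named 'axi_gpio_0' wired to the motor driver pins.
overlay = Overlay("motors.bit")

gpio = AxiGPIO(overlay.ip_dict["axi_gpio_0"])
motors = gpio.channel1
motors.setdirection("out")
motors.setlength(4)            # four direction pins, one per wheel side

FORWARD = 0b0101               # hypothetical bit pattern for "both sides forward"
STOP = 0b0000

motors.write(FORWARD, 0xF)     # drive forward
motors.write(STOP, 0xF)        # stop
```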
Challenges
We encountered several technical challenges during the development process. Firstly, we had a problem with some of the board's USB ports. Out of the four available ports, two were not working at all, and any components connected to them were not powered. This significantly limited our ability to connect and use multiple devices simultaneously, requiring us to find a workaround such as a USB hub.
Additionally, we faced issues with the DisplayPort to HDMI adapter, which has to be an active adapter rather than a passive one. This added complexity to our setup: active adapters are necessary to ensure proper signal conversion and power delivery to the display we had, but they also tend to be more expensive and harder to source than passive adapters. Not all vendors put a label saying that an adapter is an "active adapter".
The same problem appeared with the network ports: two of the four available RJ45 ports are not working. Sometimes the board disconnects while running
sudo apt update && sudo apt upgrade -y
Regarding the design and movement of the robot, weight is a critical consideration. The robot needs to carry the board, battery, and motors, making weight distribution crucial for optimal performance. Wheeled robots are generally more suitable for this purpose because they can support heavier loads more efficiently than legged robots.
One of the challenges with the linear path approach is the presence of blind spots. These are the moments when the robot is looking elsewhere along its path, and someone could potentially slip behind or to the side of it unnoticed. This creates vulnerabilities in the monitoring process, as the robot cannot maintain a constant visual on the entire area simultaneously.
Possible solutions to mitigate these issues include the use of multiple cameras to provide a broader and more continuous view of the area, or the deployment of two robots working in conjunction, dividing the area of interest between them. This way, the chances of missing any activity are significantly reduced, ensuring a more robust and reliable monitoring system.
The team was globally distributed, with people split between Turin, Bologna and the US. It was not easy to work remotely on a single board, and in particular to balance daily life with development activities.
Another aspect was the capacity of the provided microSD card. The 16 GB were filled just by using the PYNQ-DPU notebooks and doing some OS upgrades. A bigger microSD card was necessary, and changing the card meant reinstalling the whole system from scratch.
Also, after some OS upgrades the boot time increased greatly, from a mere 30-40 seconds to 3-4 minutes for every startup, possibly due to some Ethernet check at boot.
Conclusion & Future Work
The Pervasive AI Robotics Challenge from AMD is important because it drives innovation and advances in AI and robotics technology, addressing real-world problems and enhancing everyday life. By pushing the boundaries of what robots can achieve, this challenge encourages the development of solutions that have significant and wide-ranging impacts on people's lives.
In the industrial and workplace sectors, the challenge fosters the development of robots capable of performing hazardous and repetitive tasks. This not only boosts productivity and efficiency but also significantly enhances worker safety. For instance, in manufacturing, AI robots can handle tasks on the assembly line with high speed and accuracy, reducing the risk of human injury and cutting production costs. Consequently, human workers can engage in more strategic and creative roles, driving innovation and job satisfaction.
In home and personal life, AI robotics innovations from such challenges can revolutionize how we manage daily tasks. Robots can assist with household chores, provide security surveillance, and offer companionship and assistance to the elderly and disabled. This support enables individuals to maintain their independence and improve their quality of life, while also alleviating the burden on caregivers.
Additionally, in public spaces and urban environments, the advancements driven by the AMD challenge can lead to more efficient and sustainable cities. Autonomous vehicles and smart infrastructure can reduce traffic congestion, minimize accidents, and optimize resource management, contributing to environmental sustainability and better quality of urban life.
Overall, the pervasive AI robotics challenge from AMD is a catalyst for technological breakthroughs that make services more accessible, improve safety and efficiency, and enhance the quality of life. By addressing critical issues and enabling innovative solutions, this challenge plays a pivotal role in shaping a more connected, convenient, and safer world for everyone.
We couldn't finish what we set out to do, for various reasons, but we plan to keep working on it to gain knowledge and experience, and to use it, and the Kria Board, for the next challenge!
Acknowledgements
The authors would like to give a warm thanks to our friends at Makarena Labs for their precious hints and suggestions about how to best use the AMD Kria KR260.
Here is a photo we took at Embedded World in April 2024 at the AMD booth - this was also the first time we could see a KR260 in all its beauty!
They also helped Gianluca eventually receive his own Kria KR260 which he had been waiting for a few weeks.
Thank you again, friends; as promised, we renamed the project from "VAS (Vision Artificial Surveillance)", as in the initial proposal, to "Amarone" as a sign of gratitude :-)