Neuro Optix is an innovative robotic surveillance and safety system that combines advanced automation and artificial intelligence. Constructed from durable 3D-printed parts and acrylic sheets, this robotic car features four large wheels driven by DC motors for easy navigation across various terrains. At its core, the OAK-D AI camera is mounted on the car's top lid to monitor distances between personnel and machinery, enhancing safety in environments like construction sites by measuring distances between workers and heavy equipment such as excavators. The KRIA platform handles real-time model inference and visual display, while servo motors enable precise camera movement for comprehensive coverage. Remote control is achieved through a connection system using Streamlit + Ngrok or the Luxonis Hub, allowing operation via a web app from anywhere globally. This connectivity ensures versatile deployment across industries, offering real-time surveillance and safety monitoring to enhance operational efficiency.
Problem Statement
Traditionally, companies hire multiple safety officers for each construction site to ensure worker safety. This approach has the following issues:
- Transportation costs of Safety Engineers.
- Ensuring timely payment of the monthly salaries of safety officers/engineers.
Additionally, there are persistent problems related to corruption and negligence. These challenges necessitate a robust and efficient monitoring system to ensure the safety and well-being of workers.
Proposed Solutions
To address the challenges, we propose a multi-faceted approach:
- Stationary cameras are installed around the site to provide continuous surveillance and monitor worker activities.
- Safety helmet cameras are utilized to offer a first-person perspective, ensuring real-time monitoring of individual workers' safety and compliance with safety protocols.
- Additionally, robots equipped with advanced computer vision technology are deployed to autonomously navigate the site, detect potential hazards, and provide dynamic surveillance, enhancing overall site safety and efficiency.
We use a robot because it can be controlled and repositioned on demand. The AI portion can also be installed on IP or standard cameras, but this documentation focuses on deploying the AI on the robot using the Kria.
Objectives
- Enhance Workplace Safety
- Reduce the Number of Safety Engineers
- Advanced Surveillance
- Improve Operational Efficiency
- Global Remote Control
- Versatile Deployment
Neuro Optix offers a superior solution to traditional safety measures, such as fixed cameras and safety helmet cameras, through its mobility and advanced AI technology. Unlike stationary cameras, Neuro Optix can navigate various terrains, providing dynamic, real-time surveillance and ensuring comprehensive environmental coverage. Safety helmet cameras depend on individual wearers and may miss critical blind spots. In contrast, Neuro Optix’s AI-powered system continuously monitors and analyzes the surroundings to proactively prevent collisions and enhance safety. By integrating cutting-edge computer vision and remote connectivity, Neuro Optix delivers more reliable and efficient safety monitoring, making it an ideal choice for diverse and high-risk environments.
Unveiling NeuroOptix
Designing
3D Parts
In the initial design phase of NeuroOptix, the body of the robotic car is meticulously crafted using AutoCAD software. This phase involves creating detailed 3D models that outline the structure and dimensions of the car.
3D Printing & Laser Cutting
In the fabrication process of NeuroOptix, components are first meticulously designed using AutoCAD software. These designs are then transferred to a 3D printer to create custom parts with precise dimensions, ensuring they fit perfectly into the robotic car's framework. Additionally, certain structural elements are cut from acrylic sheets to provide sturdy support and enhance the overall durability of the vehicle. This dual approach of 3D printing and acrylic sheet cutting allows for a tailored construction that balances flexibility, strength, and ease of assembly.
Soldering
In the assembly of NeuroOptix, soldering is used to securely connect and insulate the wires of the DC motors, ensuring reliable electrical connections essential for the car's operational integrity.
After designing and printing all components using AutoCAD software and a 3D printer, along with cutting necessary parts from acrylic sheets, the stage is set for assembly. Each printed and acrylic component has been crafted with precision to ensure compatibility. The next step involves methodically assembling these parts, ensuring all connections are securely integrated.
- Hardware Components
In our project, each hardware component plays a crucial role in enhancing the functionality and capabilities of NeuroOptix, our robotic platform designed for advanced surveillance and safety applications:
Kria KR260
The Kria KR260 is a high-performance, adaptive computing module designed for advanced embedded applications. Developed by AMD Xilinx, this versatile FPGA (Field-Programmable Gate Array) kit provides powerful processing capabilities and flexible interfacing options, making it ideal for a wide range of AI and machine learning tasks.
OAK-D Camera:
The OAK-D Lite is a powerful and compact AI vision system designed for advanced computer vision applications.
The KR260 provides the necessary interfaces to connect and control the OAK-D AI camera. It processes the depth sensing and object detection data captured by the camera, enabling real-time analysis and decision-making.
By leveraging the powerful FPGA architecture of the KR260, our system can handle more complex AI models and perform advanced computations. This ensures that the data from the OAK-D camera is processed quickly and accurately.
DC Motors
Drive the movement of the robotic car, providing propulsion across different terrains. Controlled by motor drivers, these motors ensure smooth and precise motion.
Servo Motors
Control pan-tilt movements of the OAK-D AI camera and other articulated functions. This capability enhances NeuroOptix's surveillance capabilities, enabling it to dynamically adjust its field of view and monitor specific areas of interest.
Motor Drivers (L298N)
Essential for controlling the speed, torque, direction, and efficiency of the DC motors. The L298N motor drivers interface between the Arduino's output signals and the motors, regulating power delivery to ensure optimal performance and reliability during operation.
Arduino
It is used to manage and control various aspects of NeuroOptix. It interfaces with motor drivers to regulate DC motors for precise movement control and coordinates servo motors to adjust the OAK-D AI camera's position.
We have not added a dedicated battery yet because, as students, it is currently beyond our budget; for now we use three cells to run the car.
Software Setup
Vitis-AI:
Vitis AI is a development platform from AMD Xilinx that simplifies deploying deep learning models on FPGAs (Field-Programmable Gate Arrays). It allows developers, even those without extensive FPGA expertise (just like us), to harness the high performance and flexibility of FPGAs for AI applications. A key feature is its ability to convert standard deep learning models into the xmodel format for deployment on FPGA DPUs (Deep Learning Processing Units), streamlining the process and making FPGA technology more accessible for AI workloads.
OpenCV
OpenCV, a widely-used open-source computer vision library, is utilized for image processing tasks. It provides the tools necessary to process and analyze images captured by the Luxonis OAK-D Lite, enabling functionalities such as object detection and depth perception.
DepthAI API
The DepthAI API is used to interface with the Luxonis OAK-D Lite. This API facilitates the execution of advanced computer vision tasks by leveraging the AI processing capabilities of the OAK-D, including real-time object detection and depth sensing.
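For illustration, here is a minimal sketch of a DepthAI pipeline that streams RGB preview frames from the OAK-D Lite (a sketch only; the actual project code does considerably more, and the preview size below is an arbitrary choice):

import cv2
import depthai as dai

# Build a minimal DepthAI pipeline: one color camera node streaming previews.
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.preview.link(xout.input)

# Connect to the OAK-D and display frames until 'q' is pressed.
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()
        cv2.imshow("OAK-D preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break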
Streamlit & Ngrok
Streamlit is employed to control the robot's movements, while Ngrok is used to enable global access for remote control.
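As a sketch of how this can be wired together, assuming the Arduino listens for single-character commands over serial (the port name /dev/ttyACM0, the baud rate, and the command letters are assumptions, not the exact protocol in our repo):

import serial
import streamlit as st

# Cache the serial link so Streamlit reruns reuse one connection.
@st.cache_resource
def get_link():
    return serial.Serial("/dev/ttyACM0", 9600, timeout=1)

st.title("NeuroOptix Remote Control")
link = get_link()

# Hypothetical one-byte command protocol decoded by the Arduino sketch.
left, fwd, right = st.columns(3)
if left.button("Left"):
    link.write(b"L")
if fwd.button("Forward"):
    link.write(b"F")
if right.button("Right"):
    link.write(b"R")
if st.button("Stop"):
    link.write(b"S")

Launch it with streamlit run app.py (app.py being whatever you name the script) and expose it globally with ngrok http 8501, since 8501 is Streamlit's default port.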
Luxonis Hub
The Luxonis Hub is a central management tool that plays a crucial role in the project. It allows for the control of multiple Luxonis OAK-D Lite devices, managing live video feeds, deploying AI models, and handling device interactions over the network. This centralized control simplifies the management of complex tasks and ensures efficient operation of the AI cameras.
Luxonis Hub is also utilized for controlling the robot's movements, offering lower latency compared to Streamlit.
Data capture starts with the OAK-D camera, which feeds into the Kria module for processing. This processed data is then used for motor control and visualization, allowing remote monitoring by the Safety Officer to ensure safe operations globally.
Block Diagram:
Flow of data, starting with data capture by OAK-D, followed by processing in KRIA, and ending with control of motors and visualization.
- OAK-D: This component captures data and calculates disparity and distance.
- KRIA: This component includes the following sub-components:
PYNQ: It is the software running on the Kria that processes the data received from the OAK-D and performs inference of the AI models.
Streamlit Hosting: It hosts a web-based interface for visualization and control.
Arduino: It is responsible for controlling the motors based on the data received from the KRIA over a serial connection (see the sketch after this list).
- Safety Officer: This component represents a human user who can remotely access and control the system via a secure connection through Ngrok.
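To make the KRIA-to-Arduino leg of this diagram concrete, here is a minimal sketch of the data path: a person distance produced by the inference step drives a motor command over serial. The safe-distance threshold, port name, and one-byte commands are illustrative assumptions:

from typing import Optional
import serial

SAFE_DISTANCE_M = 2.0  # hypothetical safety threshold
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def react_to_frame(person_distance_m: Optional[float]) -> None:
    """Stop the car when a person is too close, otherwise keep driving."""
    if person_distance_m is not None and person_distance_m < SAFE_DISTANCE_M:
        arduino.write(b"S")  # stop
    else:
        arduino.write(b"F")  # forward

react_to_frame(1.4)  # e.g. a person detected 1.4 m away -> sends stop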
KRIA & Arduino
Microcontroller (Arduino): The central component is an Arduino board, which serves as the controller for the circuit. It processes inputs and sends commands to other components.
Power Supply: The circuit includes a power supply that provides the necessary voltage and current to the components. This power supply is typically connected to the Arduino and the motor driver modules.
Motor Driver Modules: There are two motor driver modules connected to the Arduino. These driver modules are used to control the speed and direction of motors. The motor drivers act as intermediaries between the Arduino and the motors, allowing for higher current and voltage to be used than the Arduino alone can provide.
Connections:
- Wires: The colored wires represent connections between the Arduino and the motor driver modules. Different colors may indicate different types of signals (e.g., power, ground, control signals).
- Input/Output Pins: The Arduino’s digital pins are typically connected to the input pins of the motor drivers, sending signals to control the motors. Additionally, the motor drivers are connected to the motors, providing the necessary output to operate them.
Functionality:
The Arduino is programmed to send control signals to the motor driver modules based on user input or sensor data. This allows it to control the motors' operation, such as starting, stopping, and changing speed or direction.
This setup controls the robotic vehicle: the Kria board processes the data while the Arduino directs the motors to steer and move the vehicle.
KRIA Setup
To set up the Kria KR260, follow these detailed instructions to ensure a smooth installation and update process.
Step 1: Access the Official Documentation
Visit the official documentation for the Kria KR260 to familiarize yourself with the setup process and requirements.
Step 2: Update the Firmware
Updating the firmware is crucial to avoid potential issues. Open the terminal and run the following command to download the firmware image:
wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1wACTcpbwLPOH9UUuURk5qcnIYeEverSB' -O k26_update3.BIN
After downloading the firmware, run the following commands:
sudo xmutil bootfw_update -i <path-to-FW.BIN file>
sudo xmutil bootfw_status
sudo shutdown -r now
sudo xmutil bootfw_update -v
Make sure to replace <path-to-FW.BIN file> with the actual path to the downloaded file.
Step 3: Set Up Wi-Fi (using a Tenda W311MI)
To install the Wi-Fi driver for the Tenda W311MI adapter, make sure you are connected to the internet via Ethernet. If there are any connectivity issues, follow the DNS setup instructions below.
- Configure DNS:
If connected via Ethernet but the internet is not accessible, update your DNS settings:
sudo nano /etc/resolv.conf
Add the following line:
nameserver 8.8.8.8
Save and close the file. You should now have internet access.
- Install Wi-Fi drivers:
Run these commands to install the necessary Wi-Fi drivers:
sudo apt-get install build-essential git dkms linux-headers-$(uname -r)
git clone https://github.com/McMCCRU/rtl8188gu.git
cd rtl8188gu
make
sudo make install
sudo apt install --reinstall linux-firmware
sudo reboot
After rebooting, you should see the Wi-Fi option appear in your settings. You can now connect to your Wi-Fi network.
PYNQ Installation
To perform AI model inference on the Kria KR260, you need to install PYNQ. Follow these instructions to properly set up PYNQ on your device.
Step 1: Clone the Robotics AI Repository
Clone the necessary repository from GitHub by running the following command in your terminal:
git clone https://github.com/amd/Kria-RoboticsAI.git
Install the dos2unix utility, which will help convert Windows-style line endings to Unix-style:
sudo apt install dos2unix
Navigate to the scripts folder within the cloned repository and convert all shell scripts to Unix format. This ensures compatibility and avoids execution issues.
cd /home/ubuntu/Kria-RoboticsAI/files/scripts
for file in $(find . -name "*.sh"); do
echo ${file}
dos2unix ${file}
done
Step 2: Install PYNQ
To install PYNQ, run the following commands:
sudo su
cd /home/ubuntu/Kria-RoboticsAI
cp files/scripts/install_update_kr260_to_vitisai35.sh /home/ubuntu
cd /home/ubuntu
source ./install_update_kr260_to_vitisai35.sh
reboot
Installation takes 10-15 minutes depending on your internet connection.
Note: Make sure your internet connection is stable, otherwise the installation may fail.
Step 3: Set Up the PYNQ Environment
Now that PYNQ is installed, we need to set up the environment. Run the following commands:
sudo su
source /etc/profile.d/pynq_venv.sh
cd $PYNQ_JUPYTER_NOTEBOOKS
pynq get-notebooks pynq-dpu -p
Congratulations! You have completed the basic setup.
Project Setup
To set up this project, please clone the repo first. Then set up the hardware:
- First, connect the Arduino Uno to the Kria's USB port using the Arduino cable.
- Next, connect the Arduino GPIO pins to the motor drivers by following the diagram below.
- Then connect all four motors to the two drivers.
The Arduino is now connected to the Kria serially. Next, we need to set up the software:
- In the code folder of the cloned repo you will find the ardiuno.c file. You can install the Arduino compiler on the Kria, but we recommend using your own personal computer to flash the ardiuno.c code onto the Arduino.
- Next, run the dpu.py code. Make sure you are in the PYNQ environment; if you are not, run the following commands again:
sudo su
source /etc/profile.d/pynq_venv.sh
cd $PYNQ_JUPYTER_NOTEBOOKS
- Now run the following command:
python3 dpu.py
Make sure that your OAK-D camera is properly connected to the kit; the script will not run without it.
This script can detect only:
- Helmets
- Distance between persons
Helmet detection is performed on the Kria's DPU. If you deploy the application using the Luxonis Hub, you can switch between distance-between-persons and fire detection. To set up the Luxonis Hub, please visit this repo.
Vitis-AI
Vitis-AI is used to optimize the model for the DPU; a standard model cannot be deployed on the DPU otherwise. Since we are using YOLOv5, we first need to convert the model from the .pt format to the .xmodel format.
Vitis-AI installation:
For Vitis-AI we need Ubuntu; we are using Ubuntu 20.04 LTS. We are going to install Vitis-AI 3.5, so follow the link to install it. After installation, clone the following repo.
Optimizing the models:
Now we want to optimize the YOLOv5 models. Navigate into the YOLOv5 folder, i.e. "Quantizing-Compiling-Yolov5-Hackster-Tutorial". Make sure you have the test data of the model you trained on.
Now activate the Vitis environment and run the following commands, changing the paths to match your setup:
python3 quant.py -w <model address> -d <path to your dataset> -q calib
python3 quant.py -w yolo_m.pt -d Safety-Helmet-pro-3/test/ -q calib
python3 quant.py -w yolo_m.pt -d Safety-Helmet-pro-3/test/ -q test
vai_c_xir --xmodel build/quant_model/DetectMultiBackend_int.xmodel --arch /opt/vitis_ai/compiler/arch/DPUCZDX8G/KV260/arch.json --net_name yolov5_kv260 --output_dir ./KV260
After running these three commands, your compiled xmodel will be stored in the KV260 folder. You can now use this model in your PYNQ DPU code.
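For reference, loading and running the compiled xmodel from PYNQ follows the standard pynq-dpu pattern sketched below (the file paths come from the compile step above; YOLOv5 pre- and post-processing are omitted):

import numpy as np
from pynq_dpu import DpuOverlay

# Load the DPU bitstream and the compiled YOLOv5 xmodel.
overlay = DpuOverlay("dpu.bit")
overlay.load_model("KV260/yolov5_kv260.xmodel")

dpu = overlay.runner
shape_in = tuple(dpu.get_input_tensors()[0].dims)
shapes_out = [tuple(t.dims) for t in dpu.get_output_tensors()]

# Allocate buffers matching the model's tensor shapes.
input_data = [np.empty(shape_in, dtype=np.float32, order="C")]
output_data = [np.empty(s, dtype=np.float32, order="C") for s in shapes_out]

# Fill input_data[0] with a preprocessed frame, then run inference.
job_id = dpu.execute_async(input_data, output_data)
dpu.wait(job_id)
# output_data now holds raw YOLOv5 feature maps to decode (boxes, NMS, etc.).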
Features
- Fire Detection
Leveraging OpenCV's image processing algorithms enhances the OAK-D camera's ability to detect flames in real-time. This not only ensures swift identification of fire outbreaks but also enables the system to take immediate preventive measures, bolstering safety protocols.
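As a taste of what this looks like, here is a simple color-threshold heuristic in OpenCV; our deployed detector is more involved, and the HSV range and minimum area below are rough assumptions that need per-camera tuning:

import cv2
import numpy as np

def looks_like_fire(frame_bgr, min_area=500):
    """Flag frames containing a sizeable fire-colored (orange/yellow) region."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 120, 180])    # lower hue/saturation/value bounds
    upper = np.array([35, 255, 255])   # upper bounds for orange/yellow
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) > min_area for c in contours)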
- Helmet Detection
Helmet detection is a critical feature of our Remote-Controlled Worker Monitoring System. Using the Luxonis OAK-D AI camera, the system can accurately detect whether workers are wearing helmets in real-time. This ensures compliance with safety regulations, enhancing worker safety and reducing the risk of head injuries on construction sites.
- Distance Measuring
The OAK-D AI camera is used for distance measuring by leveraging its stereo pair of global shutter cameras, which capture 3D depth information to accurately perceive and calculate the distance between objects and their surroundings in real-time.
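A minimal sketch of reading the stereo depth map and sampling the distance at the image centre (the full person-to-person measurement builds on this same depth output; default camera settings are assumed):

import depthai as dai

# Stereo depth pipeline: two mono cameras feed a StereoDepth node.
pipeline = dai.Pipeline()
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
left.out.link(stereo.left)
right.out.link(stereo.right)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    depth = queue.get().getFrame()   # uint16 depth map in millimetres
    h, w = depth.shape
    print(f"Distance at image centre: {depth[h // 2, w // 2] / 1000:.2f} m")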
Neuro Optix utilizes advanced vision technology to distinguish between workers adhering to safety protocols and those not in compliance. This continuous monitoring guarantees consistent safety practices among all workers.
OpenCV facilitates advanced image processing and computer vision techniques, enabling the robot to accurately analyze and respond to objects in its surroundings. This capability enhances the project's overall functionality in tasks requiring precise visual monitoring.
We foresee several advancements for our Remote-Controlled Worker Monitoring System to further elevate its functionality:
Integration with Drones
We plan to integrate a drone with the car. The car would then be able to launch the drone on site remotely so that the safety officer can monitor work at height; moreover, the feed from the drone's camera will be inferenced on the Kria KR260 board to perform PPE detections.
Monitoring Workers at Heights
Incorporating drones into the system will enable the monitoring of workers operating at elevated levels. This will provide a holistic view of the construction site, enhancing worker safety through comprehensive surveillance.
Quality Assurance
Drones will also be utilized for quality inspections from above, ensuring construction standards are met and maintained. This aerial perspective will help in identifying and rectifying issues that might not be visible from the ground.
Worker Training
The robot will be capable of providing training and informing workers about safety protocols by analysing the worksite. For instance, if an activity involves working at height, the robot will instruct workers to use fall protection restraints. This functionality is powered by a GenAI model deployed in the cloud.
Worker Safety Awareness
The robot will utilize its sensors to monitor environmental conditions (e.g., temperature, wind speed). In adherence to ILO guidelines, it will notify workers to take breaks during peak sunlight hours to prevent heat-related illnesses such as heat exhaustion or heat stroke.
Autonomous Patrol
The robot will autonomously perform certain tasks such as guiding the crane operator and patrolling the site using 3D LiDAR.
Note: Work on the robot is still ongoing to turn it into a product, and we are consulting the Health and Safety Coordinator, Mr. Zaka, so that we can achieve the best we can.
Thanks For Reading.
Useful Links:
https://github.com/Xilinx/Vitis-AI-Tutorials/blob/1.4/Design_Tutorials/07-yolov4-tutorial/readme.md
https://github.com/EngrAwab/NeuroOptix.git
https://github.com/EngrAwab/Robo_rumble.git
https://github.com/EngrAwab/OAK-D-WorldwideStream.git
https://github.com/LogicTronix/Vitis-AI-Reference-Tutorials/tree/main