A working demo of my prototype is available at the YouTube link below; kindly watch it before reading the detailed documentation.
Pedestrian Detection System: I have developed a prototype that detects pedestrians in the blind-spot regions of a vehicle and alerts the driver with a buzzer sound. It uses an object detection model based on MobileNetV2, built with Edge Impulse and running on an NVIDIA Jetson Nano 2GB board.
Problem Statement: In a car parking space, many drivers have difficulty pulling the car out when the area is full of pedestrians.
- In autonomous vehicles, pedestrian detection systems typically rely on high-cost LIDAR and RADAR technology to detect pedestrians in the blind-spot region.
- Object detection with Edge Impulse on an NVIDIA Jetson Nano board can detect pedestrians crossing the vehicle in the blind-spot region.
- A single web camera covers both blind-spot regions of the vehicle.
- This computer vision technique is cost-effective compared to LIDAR and RADAR technology.
For object detection with Edge Impulse, I used the MobileNetV2 CNN to detect pedestrians in the input image.
In the prototype, an Arduino Nano acts as a secondary controller that rotates a servo motor to sweep the blind-spot regions. The web camera is mounted on the servo motor.
The web camera, servo motor, and Arduino Nano are placed on top of the vehicle at the rear, while the NVIDIA Jetson Nano sits inside the vehicle.
The servo motor pauses at 0 degrees for 2 seconds and at 180 degrees for 2 seconds, sweeping through the intermediate positions in regular 5-millisecond steps. The aim is to cover only the blind-spot regions on both sides.
The NVIDIA Jetson Nano is powered by a 5 V, 2.1 A power bank.
The buzzer is connected between pin 12 and a GND pin of the NVIDIA Jetson Nano.
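As a quick check of this wiring, the buzzer can be pulsed from Python. The sketch below is a hypothetical smoke test, assuming the Jetson.GPIO library that ships with the Jetson Nano image; when that library is unavailable (e.g. when dry-running on a desktop), it falls back to a console stub so the logic still runs.

```python
# Hypothetical smoke test for the buzzer wired between board pin 12 and GND.
# Assumes the Jetson.GPIO library on the Jetson Nano; a console stub is used
# as a fallback so the script can also be dry-run on a machine without GPIO.
import time

try:
    import Jetson.GPIO as GPIO  # real driver on the Jetson Nano
except ImportError:
    class _StubGPIO:  # minimal stand-in for dry runs off-device
        BOARD, OUT, HIGH, LOW = "BOARD", "OUT", 1, 0
        def setmode(self, mode): pass
        def setup(self, pin, direction): pass
        def output(self, pin, state): print(f"pin {pin} -> {state}")
        def cleanup(self): pass
    GPIO = _StubGPIO()

BUZZER_PIN = 12  # physical (BOARD numbering) pin used in this prototype

def beep(times=3, on_s=0.2, off_s=0.2):
    """Pulse the buzzer a few times; returns the pulse count for checking."""
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(BUZZER_PIN, GPIO.OUT)
    for _ in range(times):
        GPIO.output(BUZZER_PIN, GPIO.HIGH)
        time.sleep(on_s)
        GPIO.output(BUZZER_PIN, GPIO.LOW)
        time.sleep(off_s)
    GPIO.cleanup()
    return times

if __name__ == "__main__":
    beep()
```

Running this with `python3` on the Jetson should produce a short series of beeps, confirming the pin 12 connection before the full model is wired in.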
The web camera is connected to one of the USB ports. The video below explains the hardware build setup in detail.
Pedestrian Detection Model using Edge Impulse
Now for the software part. Here I explain in detail how I built the prototype, from datasets to deployment.
The process covers the following steps:
- Connecting Nvidia Jetson Nano to Edge Impulse Project
- Datasets - Labeling, splitting
- Model Training
- Deployment
- Running Locally
First, we have to boot the NVIDIA Jetson Nano with the proper SD card image.
Follow the steps in the link below to flash the SD card image.
Once it has booted properly, connect the Jetson Nano to a Wi-Fi or Ethernet network to install some dependencies.
Open the terminal and run the command below:
wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | bash
It will take a few minutes to run. Once it completes, run the command below to connect to your Edge Impulse project.
First, we need to create an empty base project in Edge Impulse.
Go to the link below and log in with your email ID.
Once you have created a new project, go to the device settings.
Then run the command below in the Jetson Nano terminal window:
edge-impulse-linux
Dataset Uploading / Labeling: Go to the Data Acquisition settings and upload the existing datasets.
Upload the datasets from the link below:
Once the datasets are uploaded, go to the Labeling Queue and draw bounding boxes around the pedestrians in each image.
Add the label as "Pedestrians".
The dataset contains a wide variety of samples: people wearing different clothing, of different genders and ethnicities, and carrying different objects such as bags, umbrellas, and trolleys.
Model Training: In the parameter configuration settings, set the pixel resolution to 320x320.
In the neural network settings, set the training parameters as shown below.
Once the model is trained, check its accuracy with the test datasets.
To run your impulse locally, just connect to your Jetson again, and run:
edge-impulse-linux-runner
Running Locally
To add an application on top of the object detection model, follow the steps below.
Open the terminal and run the commands below.
sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev
pip3 install edge_impulse_linux
This will take a few hours to install.
Clone this repository to get the examples:
git clone https://github.com/edgeimpulse/linux-sdk-python
Then download the Edge Impulse model file:
edge-impulse-linux-runner --download modelfile.eim
Assuming modelfile.eim was downloaded to the Desktop on your Jetson Nano, open the terminal, change directory to linux-sdk-python/examples/image,
and run the command below to run the model:
python3 classify.py /home/username/Desktop/modelfile.eim
Note: Replace username with your NVIDIA Jetson username.
You should then see output like this.
To add a buzzer sound when a pedestrian is detected, copy and paste the application code below into classify.py.
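A hedged sketch of what such an application can look like is shown here. It assumes the `ImageImpulseRunner` class from the edge_impulse_linux package (as used in the linux-sdk-python examples) and the Jetson.GPIO library; the `pedestrians_found` helper and the 0.5 confidence threshold are my own illustration, not part of the SDK.

```python
# Hypothetical extension of the SDK's classify.py: drive the buzzer on board
# pin 12 whenever a "Pedestrians" bounding box is reported above a threshold.
# The ImageImpulseRunner / classifier API follows the linux-sdk-python
# examples; verify the calls against your installed SDK version.
import sys

def pedestrians_found(result, label="Pedestrians", min_score=0.5):
    """Return True if any bounding box matches the label above the threshold."""
    boxes = result.get("result", {}).get("bounding_boxes", [])
    return any(b["label"] == label and b["value"] >= min_score for b in boxes)

def main(model_path, camera_id=0):
    # Hardware-specific imports kept inside main so the helper above
    # can be tested off-device.
    import Jetson.GPIO as GPIO
    from edge_impulse_linux.image import ImageImpulseRunner

    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(12, GPIO.OUT)  # buzzer between pin 12 and GND
    try:
        with ImageImpulseRunner(model_path) as runner:
            runner.init()
            # classifier() yields (result, frame) pairs from the web camera
            for res, img in runner.classifier(camera_id):
                state = GPIO.HIGH if pedestrians_found(res) else GPIO.LOW
                GPIO.output(12, state)
    finally:
        GPIO.cleanup()

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```

Invoked as `python3 classify.py /home/username/Desktop/modelfile.eim`, this keeps the buzzer sounding for as long as a pedestrian remains in the frame.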
Servo Motor Control
To sweep the blind-spot region, I used an Arduino Nano to control the servo motor.
#include <Servo.h>

Servo myservo;  // create servo object to control a servo
                // twelve servo objects can be created on most boards
int pos = 0;    // variable to store the servo position

void setup() {
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
}

void loop() {
  // Sweep from 0 to 180 degrees in steps of 1 degree
  for (pos = 0; pos <= 180; pos += 1) {
    myservo.write(pos);  // tell servo to go to position in variable 'pos'
    delay(5);            // wait 5 ms between steps
  }
  delay(2000);  // pause for 2 seconds at 180 degrees
  // Sweep back from 180 to 0 degrees in steps of 1 degree
  for (pos = 180; pos >= 0; pos -= 1) {
    myservo.write(pos);  // tell servo to go to position in variable 'pos'
    delay(5);            // wait 5 ms between steps
  }
  delay(2000);  // pause for 2 seconds at 0 degrees
}
Refer to the steps below for the connections.
Next Version
The next version of the prototype will add the following features:
- Calculating the moving direction of pedestrians
- Identifying the exact position of pedestrians on the x and y axes