The model we are incorporating is a people counter that detects people in a designated area, reports the number of people in the frame, and accumulates the total count. The TensorFlow Object Detection Model Zoo contains many models pre-trained on the COCO dataset, and several of them were tested for this project. SSD_inception_v2_coco and faster_rcnn_inception_v2_coco performed better than the rest; faster_rcnn_inception_v2_coco was chosen because it detects people quickly and with fewer errors. Intel OpenVINO already contains extensions for the custom layers used in the TensorFlow Object Detection Model Zoo. The counter uses the Inference Engine included in the Intel® Distribution of OpenVINO™ Toolkit. The application counts the number of people in the current frame, the duration each person spends in the frame (the time elapsed between entering and exiting), and the total count of people. It then sends the data to a local web server using the Paho MQTT Python package.
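The per-frame bookkeeping described above can be sketched in plain Python. The detection step is stubbed out, and all names and timestamps here are illustrative, not the project's actual code:

```python
# Sketch of the counting logic: per-frame person count, per-person
# duration, and running total. Detection itself is stubbed out as a
# boolean "was a person detected in this frame?".
def update_counts(person_in_frame, now, state):
    """state holds: total, entered_at (timestamp or None), durations."""
    if person_in_frame and state["entered_at"] is None:
        state["entered_at"] = now  # person just entered the frame
        state["total"] += 1
    elif not person_in_frame and state["entered_at"] is not None:
        # person just left: record how long they stayed
        state["durations"].append(now - state["entered_at"])
        state["entered_at"] = None
    return state

state = {"total": 0, "entered_at": None, "durations": []}
# Simulated frames: (person detected?, timestamp in seconds)
for detected, t in [(True, 0.0), (True, 1.0), (False, 3.0), (True, 5.0), (False, 6.0)]:
    update_counts(detected, t, state)

print(state["total"])  # 2
print(sum(state["durations"]) / len(state["durations"]))  # 2.0
```

In the real application these statistics are what get published over MQTT to the local web server.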
In the video shown, people in a designated area are detected, and the application reports the number of people in the frame, the average duration people spend in the frame, and the total count.
Using the OpenVINO Toolkit
Initializing the OpenVINO environment
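Before the Model Optimizer or the Inference Engine can be used, the OpenVINO environment variables must be set by sourcing the setup script. The path below assumes a default Linux install and may differ on other systems:

```shell
# Initialize the OpenVINO environment (default install path; adjust if needed)
source /opt/intel/openvino/bin/setupvars.sh
```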
Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices. The Model Optimizer of OpenVINO was first used to generate the IR (Intermediate Representation) files (.bin and .xml) from the frozen TensorFlow model (.pb). Then the model was deployed.
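The conversion step can be sketched as follows. The file names (frozen_inference_graph.pb, pipeline.config) follow the layout of the downloaded faster_rcnn_inception_v2_coco archive, and the exact paths to mo_tf.py and faster_rcnn_support.json depend on the OpenVINO install:

```shell
# Convert the frozen TensorFlow graph to IR (.xml + .bin)
python /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
    --reverse_input_channels
```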
Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine API offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:
- .xml - Describes the network topology
- .bin - Contains the weights and biases binary data
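As an illustration of the .xml topology file, the sketch below parses a minimal, hand-written IR-style fragment (a hypothetical two-layer net, not a real exported model) using only Python's standard library:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written fragment in the style of an IR .xml file
# (hypothetical layers, not a real exported network).
ir_xml = """
<net name="toy_net" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="detector" type="Convolution"/>
  </layers>
  <edges>
    <edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
  </edges>
</net>
"""

root = ET.fromstring(ir_xml)
# Each <layer> entry names a node in the network topology
layers = [(l.get("name"), l.get("type")) for l in root.find("layers")]
print(layers)  # [('input', 'Parameter'), ('detector', 'Convolution')]
```

The weights referenced by such a topology live in the companion .bin file; the Inference Engine reads the pair together.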
The video demonstration is given below.
OpenVINO Toolkit - Benchmark App
The Benchmark tool was used extensively to measure and document performance under a number of different parameters.
The parameters tested, on both CPU and GPU, are documented below in tabular format.
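A typical invocation of the Benchmark tool looks like the following. The IR file name is a placeholder, and the exact location and name of the benchmark script (benchmark_app vs. benchmark_app.py) depend on the OpenVINO version:

```shell
# Measure throughput and latency of the IR on CPU, then GPU
benchmark_app -m frozen_inference_graph.xml -d CPU
benchmark_app -m frozen_inference_graph.xml -d GPU
```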
Future Prospects:
This model can be used to keep track of customer traffic at the entrances of retail stores, banks, shopping centers, museums, etc. Customer-traffic data is very helpful for understanding how the business is operating and progressing.
It can be used for staff planning purposes in factories and offices.
It can be used for monitoring high traffic areas and help in effective crowd management.