Pickpocketing is a form of theft in which money or other valuables are stolen from the person of a victim without the victim noticing. It can require considerable skill and a knack for misdirection; a thief who works in this manner is known as a pickpocket. Pickpocketing is a major problem around the world. Most of it occurs in crowded places such as bus stops, railway stations, shopping malls, and tea stalls. Detecting pickpocketing is crucial, yet no automated solution exists. Video surveillance makes automated detection possible.
Solution Summary

We plan to develop a computer vision-based solution that detects pickpocketing automatically from a surveillance camera feed. An audio- or video-based alert system will be added to warn potential victims. Additionally, the recorded video can be used for arresting the pickpocket (the thief).
Objectives

- To detect pickpocketing from a video feed (the video comes from a surveillance camera).
- To provide the recorded scene to a law enforcement agency for arresting the pickpocket.
Hardware

- Kria KV260 Vision AI Starter Kit with Basic Accessory Pack
- HDMI monitor
- Webcam
A sufficient amount of data is needed to develop the machine learning-based pickpocketing model. We collected around 1,470 images from Google Photos and other online image repositories. Since pickpocketing activity closely resembles handshaking, we also included a handshaking class, so the model detects handshaking activity as well.
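Before training, the collected images need to be organized and split into training and validation sets. The folder names below ("pickpocketing", "handshaking") and the 80/20 split are illustrative assumptions, not the authors' actual layout; this is a minimal sketch of a reproducible split.

```python
# Sketch: reproducible train/validation split over collected image paths.
# Class folder names and the split ratio are assumptions for illustration.
import random


def split_dataset(image_paths, val_fraction=0.2, seed=42):
    """Shuffle image paths with a fixed seed and split into (train, val) lists."""
    paths = sorted(image_paths)          # sort first so the shuffle is deterministic
    rng = random.Random(seed)
    rng.shuffle(paths)
    n_val = int(len(paths) * val_fraction)
    return paths[n_val:], paths[:n_val]  # (train, val)


if __name__ == "__main__":
    # ~1,470 collected images spread over the assumed class folders
    fake_paths = [f"data/pickpocketing/img_{i}.jpg" for i in range(735)] + \
                 [f"data/handshaking/img_{i}.jpg" for i in range(735)]
    train, val = split_dataset(fake_paths)
    print(len(train), len(val))  # 1176 294
```

A fixed seed keeps the split identical across runs, which matters when comparing model variants on the same validation set.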
We developed a detection model based on a Convolutional Neural Network (CNN), using the TensorFlow Keras framework. The model is quantized by following the Vitis AI documentation:
https://xilinx.github.io/Vitis-Tutorials/2020-2/docs/Machine_Learning
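The writeup does not publish the network architecture, so the following is only a plausible sketch of a small Keras image classifier for this task. The input size (128x128), layer widths, and two-class setup (pickpocketing vs. handshaking) are all assumptions.

```python
# Sketch: a minimal CNN classifier in TensorFlow Keras.
# Input size, layer sizes, and class count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2  # assumed: pickpocketing vs. handshaking


def build_model(input_shape=(128, 128, 3), num_classes=NUM_CLASSES):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),            # scale pixels to [0, 1]
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    build_model().summary()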
Development Board Preparation

First, we need to prepare the SD card for the Kria KV260 Vision AI Starter Kit.
In the box, a 16GB SD card is provided, but I recommend using at least 32GB instead, since the setup may exceed 16GB of space.
We will use Ubuntu 20.04.3 LTS. Download the image from the Ubuntu website and save it on your computer.
On your PC, download Balena Etcher and use it to write the image to your SD card.
Once done, your SD card is ready and you can insert it into your Kria.
Running the Application

A Python-based application runs the classification model on the Kria board.
```shell
python3 pocketNet.py -i <image file>  # run the application on a single image
```
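The source of `pocketNet.py` is not shown, so the following is only a hedged sketch of how such a script might be structured: argument parsing, image preprocessing, and label decoding. The function names, the 128x128 input size, and the class-label order are assumptions; on the KV260 the quantized model would actually be invoked through the Vitis AI runtime, which is omitted here.

```python
# Sketch of a possible pocketNet.py structure (names and sizes are assumptions).
import argparse
import numpy as np

CLASS_NAMES = ["handshaking", "pickpocketing"]  # assumed label order


def preprocess(image, size=(128, 128)):
    """Nearest-neighbour resize to the model input size, scaled to [0, 1]."""
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row index per output row
    cols = np.arange(size[1]) * w // size[1]   # source column index per output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0


def decode(probabilities):
    """Map a probability vector to its class name."""
    return CLASS_NAMES[int(np.argmax(probabilities))]


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Classify a surveillance frame")
    parser.add_argument("-i", "--image", required=True, help="path to image file")
    args = parser.parse_args()
    # Loading the file and running the quantized model on the DPU are omitted;
    # the Vitis AI tutorial referenced earlier covers the deployment flow.
    print(f"Would classify: {args.image}")
```

The nearest-neighbour resize keeps the sketch dependency-free; a real pipeline would more likely use OpenCV or Pillow for image loading and resizing.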