Interested in real-time crowd counting? Applications include retail, security and industrial safety.
With this project, you can use a USB webcam video stream with the Ultra96 MPSoC FPGA and an OpenCV person detector to count how many people are populating the area! With Ubidots, this data is made available on a web dashboard.
The person detection example described below uses OpenCV's HOG (Histogram of Oriented Gradients) feature descriptor to find upright human-like objects. This method works best when subjects appear upright and fully in frame, so mounting the camera at head height or slightly above is recommended.
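For intuition: HOG divides the image into small cells and, in each cell, histograms the gradient orientations weighted by gradient magnitude; a linear SVM then classifies the resulting feature vector. The toy sketch below (the `cell_hog` helper is illustrative, not OpenCV's implementation, which adds block normalisation and a sliding detection window) shows just the orientation-binning step:

```python
import numpy as np

def cell_hog(cell, bins=9):
    """Toy orientation histogram for one image cell (the core of HOG)."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # unsigned orientations in [0, 180), as used by the default people detector
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180
    hist, _ = np.histogram(orientation, bins=bins, range=(0, 180),
                           weights=magnitude)
    return hist

# a vertical edge produces horizontal gradients (0 degrees),
# so most energy lands in the first orientation bin
cell = np.zeros((8, 8))
cell[:, 4:] = 255.0
print(cell_hog(cell))
```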
- Ultra96 MPSoC
- USB webcam
- USB micro cable
- 16GB microSD card and adaptor (8GB+ needed for image)
- 9-18VDC 2A power adapter (tip positive)
- 96Boards MIPI Adapter V2.1 (optional)
- 96Boards Ultra96 Documentation & Links to OS Images
- Avnet Ultra96 Documentation
- Avnet Ultra96 Design Support Documentation
- Xilinx PYNQ Repository
- Avnet Ultra96-PYNQ Repository
We first need to download the PYNQ image from: PYNQ_Image
While the image is downloading, install the tools needed to flash the SD card with the image. I recommend Etcher - an open-source flashing tool.
Alternatively, you can write the image to the card from the terminal (the commands below are for macOS; on Linux, use lsblk to find the device and bs=1M with dd):
#find the disk name that matches your SD card specs e.g. /dev/disk2
diskutil list
#unmount the disk
diskutil unmountDisk /dev/disk2
#use dd tool to write disk image
sudo dd bs=1m if=/path/to/file.img of=/dev/disk2 conv=sync
[2] Connecting to the Ultra96
The Ultra96 doesn't come supplied with a DC power adaptor, but it specifies 9-18V DC @ 2A. I had an old 12V modem power supply that I soldered the correct jack to, but these can be readily acquired from an electronics supply store - or from the cupboard where you throw all your old chargers and cables! Do a quick check to ensure the polarity is correct before you connect it to the board.
- Plug the SD card with the PYNQ image into the card slot on the board and power the board on by pressing and releasing the power button (S3). There's no need to touch the boot mode dip switches, covered by kapton tape.
It's recommended that you connect directly to the board with a micro USB cable.
- Open your web browser (Google Chrome is recommended) and navigate to 192.168.3.1
- Alternatively, scan for a wireless network with a name similar to Ultra96-<MAC_ADDRESS> and navigate to 192.168.2.1 in your web browser.
- Enter xilinx as the password. A Jupyter notebook (an open-source web-based programming environment) will open at xilinx/home.
To continue further, you will need to connect the Ultra96 to the internet to download the requisite software and packages.
- The most convenient way to connect to the internet is to use Ethernet over USB, sharing your PC/laptop's connection by assigning the PC a static IP address on the same subnet as the Ultra96. More information can be found in the Ultra96 Documentation.
- Test your network configuration by clicking New -> Terminal in the Jupyter notebook.
#find network interface configuration
ifconfig
#ping Google's DNS server - verify that the board is connected to the internet
ping -c10 8.8.8.8
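If you are unsure whether the static address you assigned your PC actually sits on the same subnet as the board's 192.168.3.1, the stdlib can check this quickly. The `same_subnet` helper below is illustrative, and the /24 prefix is an assumption - match whatever netmask your interface is configured with:

```python
import ipaddress

def same_subnet(host_a, host_b, prefix=24):
    """True if both IPv4 hosts fall inside the same network."""
    net = ipaddress.ip_network(f"{host_a}/{prefix}", strict=False)
    return ipaddress.ip_address(host_b) in net

# Ultra96 USB-Ethernet address vs. candidate static IPs for the PC
print(same_subnet("192.168.3.1", "192.168.3.10"))  # True: same /24
print(same_subnet("192.168.3.1", "192.168.2.10"))  # False: different /24
```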
[3] Testing USB Webcam and Hardware Acceleration Overlays
- Install the PYNQ computer vision overlays from PYNQ-ComputerVision
# install cv overlays using pip
sudo pip3 install --upgrade git+https://github.com/Xilinx/PYNQ-ComputerVision.git
- Connect the USB camera and open the USB camera test notebook under Common -> usb_webcam.ipynb
- Click Kernel -> Restart & Run All in the menu bar and accept the pop-up to run the Python program. The script takes a picture with the webcam, converts it to black and white, then rotates it 45°.
- Go to Ubidots Industrial and create an account.
- Once your account is set up, click on the user dropdown menu and select API documentation. Make a note of your Default Token as you will need it later.
USB video stream capture and frame preprocessing are done in the Programmable Logic. To ensure reliable feature detection, colour and gamma correction should be applied to the captured frame. We can do this with the PYNQ Computer Vision overlays.
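For reference, gamma correction itself is just a power-law lookup table mapping each pixel value v to 255·(v/255)^(1/γ). A numpy-only sketch of that table (the `gamma_correct` name and the γ = 1.5 value are illustrative assumptions; the accelerated pipeline applies the same idea via a LUT in hardware):

```python
import numpy as np

def gamma_correct(frame, gamma=1.5):
    """Brighten (gamma > 1) or darken (gamma < 1) a uint8 frame
    with the standard power-law lookup table."""
    inv = 1.0 / gamma
    table = (255.0 * (np.arange(256) / 255.0) ** inv).astype(np.uint8)
    # per-pixel table lookup - the same operation cv2.LUT performs
    return table[frame]

frame = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(gamma_correct(frame))  # endpoints unchanged, midtones brightened
```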
crowd_counter.py
# coding: utf-8

# In[1]:
import cv2
import imutils
from imutils.object_detection import non_max_suppression
import numpy as np
import requests
import time
import base64
from matplotlib import pyplot as plt
from IPython.display import clear_output

# In[2]:
URL = "http://industrial.api.ubidots.com"
INDUSTRIAL_USER = True
TOKEN = "YOUR_TOKEN"
DEVICE = "camera"
VARIABLE = "people"

# HOG cv2 object
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# In[3]:
def detector(image):
    rects, weights = hog.detectMultiScale(image, winStride=(4, 4),
                                          padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    result = non_max_suppression(rects, probs=None, overlapThresh=0.7)
    return result

# In[4]:
def buildPayload(variable, value):
    return {variable: {"value": value}}

# In[5]:
def send(token, device, variable, value, industrial=True):
    # build endpoint
    url = "{}/api/v1.6/devices/{}".format(URL, device)
    payload = buildPayload(variable, value)
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}
    attempts = 0
    status = 400
    # retry on bad requests
    while status >= 400 and attempts <= 5:
        req = requests.post(url=url, headers=headers, json=payload)
        status = req.status_code
        attempts += 1
        time.sleep(1)
    return req

# In[6]:
def record(token, device, variable, sample_time=5):
    print("recording")
    camera = cv2.VideoCapture(0)
    init = time.time()
    # ubidots sample limit
    if sample_time < 1:
        sample_time = 1
    try:
        while True:
            print("cap frames")
            ret, frame = camera.read()
            frame = imutils.resize(frame, width=min(400, frame.shape[1]))
            result = detector(frame.copy())
            # show frame with bounding rectangles for debugging/optimisation
            clear_output(wait=True)
            for (xA, yA, xB, yB) in result:
                cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
            # convert BGR to RGB so matplotlib displays correct colours
            plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            plt.show()
            # send results
            if time.time() - init >= sample_time:
                print("sending result")
                send(token, device, variable, len(result))
                init = time.time()
    finally:
        camera.release()
        cv2.destroyAllWindows()

# In[7]:
def main():
    record(TOKEN, DEVICE, VARIABLE)

# In[8]:
if __name__ == '__main__':
    main()
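The script relies on imutils' non_max_suppression to merge the overlapping boxes HOG tends to emit for a single person, so that len(result) approximates the head count. A minimal numpy sketch of the suppression idea (the `nms` helper below is illustrative; imutils' Malisiewicz-style implementation differs in detail):

```python
import numpy as np

def nms(boxes, overlap_thresh=0.7):
    """Keep boxes whose overlap with an already-kept box stays below the threshold."""
    if len(boxes) == 0:
        return np.empty((0, 4), dtype=int)
    boxes = np.asarray(boxes, dtype=float)
    x1, y1, x2, y2 = boxes.T
    area = (x2 - x1) * (y2 - y1)
    order = np.argsort(y2)          # process boxes bottom-up, like imutils
    keep = []
    while len(order) > 0:
        i = order[-1]
        keep.append(i)
        rest = order[:-1]
        # intersection of box i with the remaining boxes
        xx1 = np.maximum(x1[i], x1[rest])
        yy1 = np.maximum(y1[i], y1[rest])
        xx2 = np.minimum(x2[i], x2[rest])
        yy2 = np.minimum(y2[i], y2[rest])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        overlap = inter / area[rest]
        order = rest[overlap <= overlap_thresh]
    return boxes[keep].astype(int)

# two near-duplicate detections of one person, plus one distinct person
boxes = [(10, 10, 60, 110), (12, 12, 62, 112), (200, 10, 250, 110)]
print(len(nms(boxes)))   # -> 2
```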
- Create a new Python 3 notebook - New -> Python 3 notebook, name it crowd_counter.ipynb and copy/paste the script of the same name above. Alternatively, download the notebook from here and upload it to the board: New -> Upload -> Select /path/to/file
- Replace the YOUR_TOKEN placeholder with your Ubidots Default Token.
- Run crowd_counter.ipynb from the notebook, or save it as a Python 3 file by adding the .py file extension. This can be useful if you want the file to run at boot time. From the terminal, run:
#navigate to the directory containing the file and run it
cd /path/to/
python3 crowd_counter.py
# copy file to bin
sudo cp -i /path/to/crowd_counter.py /bin
# add cron job
sudo crontab -e
# add line below to bottom of crontab file - type :wq to save and exit
# @reboot python3 /bin/crowd_counter.py &
# reboot the board
sudo shutdown -r now
Output will show human shaped objects within a bounding box in a matplotlib window.
[6] Ubidots Dashboard Setup
- Navigate to https://industrial.ubidots.com/ubi/insights/ and click on the Create Widget button
- Click on Metric > Show me the Last Value > Select a device 'device' > Select a variable 'variable'
- Click Finish
- The last detected person count will be visible on the dashboard, and device data is available under the Devices menu.
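Besides the dashboard, the last recorded value can be pulled back out programmatically via the Ubidots REST API's last-value endpoint. A stdlib sketch of building that request (the `last_value_request` helper is illustrative; verify the /lv endpoint against your account's API documentation before relying on it):

```python
from urllib.request import Request

URL = "http://industrial.api.ubidots.com"

def last_value_request(token, device, variable):
    """Build a GET request for a variable's last value."""
    url = "{}/api/v1.6/devices/{}/{}/lv".format(URL, device, variable)
    return Request(url, headers={"X-Auth-Token": token})

req = last_value_request("YOUR_TOKEN", "camera", "people")
print(req.full_url)
# send with: urllib.request.urlopen(req).read()
```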
- Implement HOG extractor on FPGA.
- Use a High Definition MIPI CSI camera with the High Speed Expansion connector.
- Send data over a radio uplink (NB-IoT modem) rather than wifi, making the system more self-contained.
Thanks to Adam Taylor for the excellent Ultra96 tutorials here on Hackster.io and his MicroZed Chronicles.
People Counting Systems with OpenCV, Python and Ubidots
Multi-scale Convolutional Neural Networks for Crowd Counting