Many people have home camera systems installed to detect when people are around, ranging from doorbell cameras up to multi-camera home surveillance systems. These systems may issue text or audio notifications when motion is detected, but often the motion is not of interest and may be due to an animal or a passing vehicle. These unnecessary alerts are annoying and reduce the effectiveness of the system, since the user becomes desensitised and stops checking them. Fundamentally, the problem is that the system has no concept of what a person looks like. In this project, AI is added to a simple camera system so that the system knows when a person is in an image. By filtering out uninteresting images, the user is only alerted when a person is detected, improving the effectiveness of the alerting system.
The problem with motion activated camera systems
Virtually all camera systems people use around their home rely on the detection of motion via a passive infrared (PIR) sensor to indicate the presence of people. More advanced systems may also, or alternatively, use motion detection via algorithmic processing of images to determine when people or objects are moving in front of a camera. However, both of these approaches can result in the camera being triggered by uninteresting motion, for example due to animals or even the movement of clouds. This video from Wyze summarises some of the problems with traditional approaches. An example image sequence from my own system (a Blink camera) is shown below:
To prevent people from getting unwanted notifications, some commercial camera systems now use more advanced algorithmic processing of images to determine when people are in them. However, this is typically a premium feature, may require you to purchase new hardware, and may require an ongoing subscription. For example, Nest Aware offers person detection but requires both premium hardware and an ongoing subscription. Additionally, your camera images may be streamed to the cloud, raising privacy concerns. This article shows how you can add the same person detection functionality to your existing camera system, meaning you don't have to purchase new hardware, and all data and processing stays local. Note that this approach is not limited to person detection, but can be applied to detect a very wide range of objects.
Home Assistant and DeepStack
There are two essential components to this project:
- Home Assistant - for viewing my camera feeds, sending images for processing, and sending notifications
- DeepStack - the AI brains which locate objects (including people) within images
Home Assistant is open source software on Github for home automation that puts local control and privacy first, and is typically run on a Raspberry Pi. You can view my previous Hackster articles on Home Assistant on my Hackster profile page, and refer to the official docs to get started with Home Assistant. A wide range of cameras can be added to Home Assistant as described in the camera docs here. The approach in this project is compatible with all the camera systems that Home Assistant supports, so go ahead and set up Home Assistant and configure a camera. If you want to get started with a Raspberry Pi and a USB webcam you can easily use Motion (as used in this article).
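As a rough illustration, a camera served by Motion can be added to Home Assistant using the built-in MJPEG camera integration with a configuration.yaml entry along these lines. The IP address is a placeholder for your own Pi, and 8081 is Motion's default stream port, so adjust both to match your setup:

```yaml
# configuration.yaml - minimal sketch of an MJPEG camera entry.
# Replace the IP address with the address of the machine running Motion;
# 8081 is Motion's default live-stream port.
camera:
  - platform: mjpeg
    name: Front door
    mjpeg_url: http://192.168.1.50:8081
```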
DeepStack is open source software on Github that allows developers to use state-of-the-art AI in their projects, in particular for computer vision tasks including facial recognition and object detection in images. DeepStack does this by exposing APIs that other software can make use of. What I particularly like about DeepStack (compared to cloud solutions from AWS, Google etc.) is that all data stays on your own hardware and on your own local network, maintaining your privacy and control of your data. Also, compared to 'locally hosted' alternatives that I have used in previous projects, DeepStack offers a wide range of truly cutting-edge solutions for vision projects, including true object detection and the ability to use your own models. To run DeepStack you have a couple of options: Windows 10 users can use the desktop app, and Mac/Linux users can use Docker. Edge devices including the Jetson Nano are also supported. Whichever approach you take is compatible with this project, but I personally am running DeepStack on a Jetson Nano.
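For the Docker route, a minimal docker-compose sketch (based on the standard DeepStack CPU instructions, with the object detection API enabled) looks something like the following; the Windows app and the Jetson image are installed differently, so refer to the DeepStack docs for those:

```yaml
# docker-compose.yml - minimal sketch for running the DeepStack CPU image
# with the object detection API enabled.
version: "3"
services:
  deepstack:
    image: deepquestai/deepstack
    restart: unless-stopped
    environment:
      - VISION-DETECTION=True     # enable the object detection endpoint
    volumes:
      - ./deepstack-data:/datastore   # persist models and registered data
    ports:
      - "80:5000"                 # expose DeepStack on port 80 of the host
```

With this running, the DeepStack detection API is reachable from Home Assistant at port 80 of the host machine.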
Using DeepStack with Home Assistant
OK, assuming you have both DeepStack and Home Assistant running, the next task is to link them together. To make this really straightforward I have published code to allow you to use DeepStack from within Home Assistant. This code is in the repository HASS-Deepstack-object, and there is a forum thread where members of the DeepStack/Home Assistant community discuss its use here. Go ahead and follow the instructions in the repo documentation to get DeepStack and Home Assistant working together. Once this is done you can view the camera feed and DeepStack person predictions on the Home Assistant UI, which will look similar to below:
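To give a feel for what the integration configuration looks like, a sketch of a configuration.yaml entry is below. The exact option names have changed between releases of the custom component, so treat the repo README as authoritative; the DeepStack address, save folder and camera entity are placeholders for your own setup:

```yaml
# configuration.yaml - sketch of a deepstack_object image_processing entry.
# ip_address/port should point at your DeepStack instance, and the source
# entity_id should be one of your own cameras.
image_processing:
  - platform: deepstack_object
    ip_address: 192.168.1.60          # machine running DeepStack
    port: 80
    save_file_folder: /config/snapshots/   # where annotated images are saved
    targets:
      - target: person
    source:
      - entity_id: camera.front_door
        name: deepstack_front_door
```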
I am then using Home Assistant to send a notification to my mobile phone when DeepStack detects a person in the camera image. DeepStack both counts the number of people in the image and allows Home Assistant to draw a bounding box around each person. The notification on my phone is shown below:
The details of how Home Assistant handles the DeepStack processing and notifications using Automations are in this directory of the HASS-Deepstack-object repository. Since I may improve and refine this code in the future, I will not duplicate it here.
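To give a rough idea of the shape of such an automation (treat the maintained examples in the repository as authoritative), a sketch is below. It assumes the deepstack.object_detected event described in the repo README, and notify.mobile_app_my_phone is a placeholder for your own notification service:

```yaml
# configuration.yaml - sketch of a notification when DeepStack reports a person.
# The event name follows the HASS-Deepstack-object README; the notify service
# is a placeholder for your own mobile app notifier.
automation:
  - alias: Person detected at front door
    trigger:
      - platform: event
        event_type: deepstack.object_detected
        event_data:
          name: person
    action:
      - service: notify.mobile_app_my_phone
        data:
          title: Person detected
          message: A person was detected by the front door camera
```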
Community use cases
Since the publication of this project, many users have reported interesting and surprising use cases. Some highlights include:
- Monitoring activity in a brick factory in Latin America
- Watching for intruding snakes in Thailand
- Monitoring Amazon parcel deliveries
- Checking that a motorcycle was locked up
- Checking when a chicken laid an egg
- Greeting people when they return home and playing a theme tune
- Counting visitor numbers at a shop
- Checking when a parking spot became available
Several high-quality videos on the use of this project have also been published:
Summary & future
Now that I am using DeepStack to process my camera images, I am no longer getting camera notifications due to motion triggered by animals or random lighting effects. I have a camera on my front door which sends me a notification when a person approaches, so if it is a salesperson and it is dinner time I know to ignore them! I am also running my security camera in my back garden through DeepStack. Note that this camera system is complementary to a professionally installed security system, and not a substitute for one. For example, you could use an automation to turn on lights in the house if a person is detected in the back garden while you are away on holiday, and this might be sufficient to scare off a potential burglar (credit to the Home Alone movie for this idea!).
My next plan for DeepStack is to create a custom model to detect when a local bus passes my home. This particular bus comes every 15 minutes but can often be delayed by traffic, so I want the system to detect whether buses are running on time and alert me if it appears there are delays. I am very interested to hear what problems other people would like to try solving using DeepStack and Home Assistant, so please leave your ideas in the comments below. Happy hacking! :-)