Having one's home broken into is something most people never want to experience. One's personal belongings are taken, the house and furniture likely get damaged, and worst of all is the feeling of fear as one's sacred, protected space is violated.
There are ways to mitigate the risk of being burgled. Houses without a home security system are 300% more likely to be broken into than houses with one (source). Despite this, most people do not bother with setting up such a system. Why?
- They're hard to set up. Every door or window into the residence needs a special sensor, and these sensors all need to communicate with the base unit.
- They're relatively expensive. Costs vary with the size of the home, but equipment runs from around $250 to $1500+, installation adds to that total, and monitoring costs an additional $360 per year on average (source).
- They're inconvenient. A home security system is another thing to remember to turn on and off, and forgetting to do so can result in an ineffective system or false alarms.
We can do better with the Safe Sound Home Security System built on the Microsoft Azure Sphere. The Safe Sound system is a centralized home monitoring system that requires no special installation and will notify the owner of any detected events. Most break-ins are noisy: in 95% of burglaries, the burglar forces their way into the home by breaking a window or kicking down a door (source). The Safe Sound system takes advantage of this to listen for signs of forced entry. It can therefore be placed anywhere in one's home, and if a window is broken nearby, or a gun is fired, it will send a notification to the companion app.
The app built for Safe Sound works on both Android and iOS devices and has a number of features: updating the system's event cooldown period (how long to wait between reporting consecutive events), viewing the recent event history, simulating a window breaking, receiving notifications when an event occurs, and arming/disarming the Azure Sphere device.
The Azure Sphere is well suited to this project because of its focus on security. It has a number of security features built in, including hardware security in the microcontroller unit, a custom OS designed to defend against IoT threats, and regular security updates. This ensures that the Safe Sound device is itself secure as it secures your home.
Overview of Operation

This project consists of several components that must work in concert to provide a robust security service. At a high level, the Azure Sphere collects audio data using the microphone and continuously classifies it, looking for audio events that indicate a break-in. If it detects such an event, it sends a notification to Azure Cloud Services, which dispatches a notification to the associated app. Additional status messages, such as commands controlling whether the device is armed, are also carried back and forth between the app and the device via Azure Cloud Services.
For a visual overview of how the different parts connect, see the image below.
Audio classification is achieved using machine learning directly on the Azure Sphere. A two-layer Gated Recurrent Unit (GRU) network is created and trained on data gathered and labelled with various events. In this case, three categories were used: window breaking, gunshot, and background noise. Background noise consists of sounds typically heard in a home, including silence. Audio is continuously classified by the Azure Sphere into one of the three classes. If the predicted category is background noise (as it will be most of the time), the prediction is discarded and the next chunk of audio is examined. If the predicted category is something else, the device sends a notification to the owner about the detected event.
Collecting Audio Data
As shown in the graphic, audio data is collected via the Mic 2 Click, which senses pressure changes caused by sound waves and outputs them to the analog-to-digital converter (ADC) on the Azure Sphere. The Sphere takes 16,000 samples from the ADC per second and stores them in a series of buffers - essentially this is the pulse-code modulation (PCM) audio format. Each buffer is a chunk of audio data, in this case 512 samples. Each buffer is fed into the audio classifier, which predicts whether an audio event is taking place, for example glass shattering, which would indicate a window being broken.
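The buffering scheme can be sketched in a few lines of Python (the 16 kHz sample rate and 512-sample buffers come from the description above; the actual firmware is written in C, so this is purely illustrative):

```python
SAMPLE_RATE_HZ = 16000   # ADC sampling rate used by the Sphere
BUFFER_SIZE = 512        # samples per buffer fed to the classifier

def chunk_pcm(samples, buffer_size=BUFFER_SIZE):
    """Split a stream of PCM samples into fixed-size buffers.

    Trailing samples that don't fill a whole buffer are held back,
    mirroring how the firmware waits for a buffer to fill before
    classifying it.
    """
    for start in range(0, len(samples) - buffer_size + 1, buffer_size):
        yield samples[start:start + buffer_size]

# One second of (dummy) audio yields 16000 // 512 = 31 full buffers.
one_second = [0] * SAMPLE_RATE_HZ
buffers = list(chunk_pcm(one_second))
print(len(buffers), len(buffers[0]))  # 31 512
```

Each buffer therefore represents 512 / 16000 = 32 ms of audio, which is the cadence at which the classifier runs.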
Performing Audio Classification
As indicated, audio classification is performed directly on the Azure Sphere. The Sphere is fairly resource constrained in terms of memory and processing power, which can make running machine learning models difficult. For this task the Embedded Learning Library (ELL) developed by Microsoft was used. ELL is a library specifically for converting and running machine learning models on embedded devices. While it is possible to continuously stream audio data from the Sphere and then perform classification in the cloud using something like Azure Machine Learning, there are several advantages to doing the classification locally:
Privacy and Security
By performing classification locally, the audio data collected by the Azure Sphere never leaves the device. It is not stored on a server somewhere and is therefore much less likely to be exposed accidentally or collected by hackers. In contrast, if audio data is sent to the cloud to perform model inference (classification), there is the potential for someone who has access to the cloud servers to listen in to everything going on in one's house.
Speed
Due to the latency introduced by communicating over the internet, machine learning in the cloud is much slower than local processing for simple models. In this project, that would likely make real-time audio classification impossible; instead, event detection would be limited to once every few seconds.
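The real-time budget is easy to quantify: at the 16 kHz sample rate used here, each 512-sample buffer represents only 32 ms of audio, so classification must finish within that window to keep up with the stream. A quick back-of-the-envelope check (the cloud round-trip figure is an assumption for illustration, not a measurement):

```python
SAMPLE_RATE_HZ = 16000
BUFFER_SIZE = 512

# Time represented by one buffer: the per-buffer processing deadline.
buffer_duration_ms = BUFFER_SIZE / SAMPLE_RATE_HZ * 1000
print(f"Time budget per buffer: {buffer_duration_ms:.0f} ms")  # 32 ms

# Assumed round-trip latency to a cloud endpoint (varies widely in practice).
cloud_round_trip_ms = 100
print(cloud_round_trip_ms > buffer_duration_ms)  # True: cloud can't keep up per-buffer
```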
Network Requirements
By nature, using machine learning in the cloud requires an internet connection, so devices without a network connection cannot use it at all. Additionally, extra bandwidth may be required to handle the load of the device streaming data to the machine learning servers. For this project, audio events will be detected even if the Sphere's internet connection is interrupted, but notifications will not be sent since they require connectivity.
Improvements
Although not implemented in this project, the ability to run machine learning models on the edge also allows those models to be updated on the device, thereby improving functionality in the future.
Therefore, classification of audio is performed continuously on the Azure Sphere. Most of the time, the Sphere will detect normal background noise. However, if someone breaks glass or fires a gun within earshot of the Sphere, it will detect it as an audio event and send a notification to the owner.
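The continuous detection loop, including the event cooldown period the app can configure, might be sketched as follows. The three class names come from the description above; the class structure, default cooldown value, and method names are illustrative, not taken from the actual firmware:

```python
import time

CLASSES = ["background", "window_breaking", "gunshot"]

class Detector:
    def __init__(self, cooldown_s=30.0, now=time.monotonic):
        self.cooldown_s = cooldown_s  # min seconds between reported events
        self.now = now                # injectable clock for testability
        self.last_event_time = None

    def handle_prediction(self, class_index):
        """Return the event name to report, or None if nothing should be sent."""
        label = CLASSES[class_index]
        if label == "background":
            return None  # discard and move on to the next buffer
        t = self.now()
        if self.last_event_time is not None and t - self.last_event_time < self.cooldown_s:
            return None  # still inside the cooldown window
        self.last_event_time = t
        return label

# Example with an injected fake clock so the behaviour is deterministic.
times = iter([0.0, 10.0, 40.0])
det = Detector(cooldown_s=30.0, now=lambda: next(times))
print(det.handle_prediction(0))  # None  (background noise is discarded)
print(det.handle_prediction(1))  # window_breaking  (first event, reported)
print(det.handle_prediction(2))  # None  (still within the cooldown)
print(det.handle_prediction(2))  # gunshot  (cooldown elapsed)
```

Injecting the clock is just a convenience for testing; on the device the cooldown check would use a monotonic timer in the same way.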
Sending Notifications
Azure Cloud Services are used to process and send the notification. The overall flow begins with the Azure Sphere detecting an audio event and sending a telemetry event to the Azure IoT Hub it is configured to connect to (see below for how to configure the hub). The IoT Hub receives this communication and forwards it to another Azure service, Event Hub, which allows events to be routed to different endpoints. In this case, Event Hub passes the event message to a custom Azure Function, which uses the Google Firebase Messaging API to request that a notification be sent to any registered Android or iOS devices (note that notifications to iOS devices also pass through Apple's servers). Firebase Messaging then delivers the actual notification to the owner's phone.
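The telemetry message that kicks off this chain is just a small JSON payload. A sketch of what such a message might look like (the field names here are hypothetical, not taken from the actual firmware):

```python
import json
import time

def make_event_message(event_type):
    """Build a JSON telemetry payload for a detected audio event.

    Field names are illustrative; the real device and Azure Function
    agree on their own schema.
    """
    return json.dumps({
        "eventType": event_type,        # e.g. "window_breaking" or "gunshot"
        "timestamp": int(time.time()),  # Unix time of the detection
    })

print(make_event_message("window_breaking"))
```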
Building a Safe Sound Home Security System

Putting together a Safe Sound Home Security System consists of five main steps: preparing the hardware; loading the software onto the Azure Sphere; configuring the necessary Azure Cloud Services; setting up Firebase Messaging (and the Apple Push Notification service if you have an iOS device); and finally, 3D printing a case for the device and putting it somewhere convenient. The next several sections go over these steps in more detail.
Preparing the Hardware

While the Azure Sphere Starter Kit hardware is ready to use right out of the box, the Mic 2 Click needs some adjusting to work with the Sphere. As noted in the Hardware Notes for the Azure Sphere MT3620, when the input pins are configured for use with the ADC, the input voltage cannot exceed 2.5V. However, the unmodified Mic 2 operates on 3.3V (or 5V) and outputs 0 - 3.3V depending on the intensity of the sound it senses. Since the load presented by the Mic 2 mainly depends on two relatively constant components, the electret condenser microphone and the op-amp, a simple voltage divider can be used to step the voltage down to 2.5V.
Both the electret condenser microphone and the op-amp can function on 2.5V, but the programmable potentiometer that is used to adjust the op-amp gain needs at least 2.7V to function. So after adding the voltage divider, the gain will no longer be adjustable. This is not a big issue, however, as the default gain should work for recording and classifying general audio.
To calculate the resistors needed for the voltage divider, take a look at the Mic 2 datasheet and assume steady-state operation. After carefully walking through the problem and referencing the microphone datasheet and the op-amp datasheet, the effective input impedance of the Mic 2 circuit works out to about 439 Ohms (largely due to the power LED resistor). Taking this into account, adding a 150 Ohm resistor in series before the Mic 2 circuit reduces the input voltage to roughly 2.5V. Using a soldering iron, desolder the existing 0 Ohm resistor circled in the image below and solder on the new resistor. Insert the click board into slot #1 on the Azure Sphere and you're ready to prepare the software.
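The divider arithmetic is quick to verify: with the Mic 2 circuit presenting roughly 439 Ohms as the load and a 150 Ohm resistor in series, the 3.3 V supply drops to just under the 2.5 V ADC limit at the click board:

```python
V_IN = 3.3          # volts, supply from the Azure Sphere
R_MIC2 = 439.0      # ohms, effective impedance of the Mic 2 circuit (from the datasheets)
R_SERIES = 150.0    # ohms, resistor replacing the 0 Ohm jumper

# Standard voltage divider: the load sees V_IN scaled by its share of the resistance.
v_out = V_IN * R_MIC2 / (R_SERIES + R_MIC2)
print(f"{v_out:.2f} V")  # 2.46 V, safely under the 2.5 V ADC limit
```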
Since the Azure Sphere is focused on maintaining security for IoT devices, setup is a little bit more involved than with other microcontrollers. Before loading the code for this specific project, you will need to install the Azure Sphere SDK, claim your device and associate it with a tenant, and configure networking. More detail on each of these steps is specified in the documentation here. Make sure you proceed all the way through "Configure networking".
There are different ways to deploy code onto the Azure Sphere, but if you would like to experiment with or change the code, the most suitable is to place the device in development mode as specified in the documentation. To place the device in development mode, plug the Sphere into your computer that has the Azure Sphere SDK loaded, open an Azure Sphere Developer Command Prompt, and issue the following command:
azsphere device enable-development
While in development mode, the Sphere is available for local debugging and cloud application updates are disabled.
Now the Azure Sphere is ready to load some code onto it. Clone the SafeSound repository to a convenient location using:
git clone https://github.com/jdpwebb/safe-sound.git
To compile the code and load it onto the Azure Sphere, we will use Visual Studio 2019, which provides the smoothest workflow. However, there are other options (in preview at this time); if you prefer to use the command line, you could try following this documentation. For Visual Studio, follow these steps:
1. Open Visual Studio 2019, select File > Open > CMake... and open the CMakeLists.txt file under SafeSound repo > SafeSound_code > CMakeLists.txt.
2. Select Build > Build All and ensure the code compiles without any issues.
3. Connect your Azure Sphere to your computer using the USB cord.
4. In the middle of the toolbar, under "Select Startup Item", select GDB Debugger (HLCore).
5. Click the green play button next to "GDB Debugger (HLCore)" and wait for Visual Studio to start debugging.
Once the application starts, you will see some errors because the Sphere is not yet connected to Azure IoT Hub. However, you can still test that everything is working by pressing Button A, which will simulate a window breaking event. Now let's connect the Sphere to IoT Hub!
Connecting to Azure IoT Hub

Commissioning and configuring communication with an Azure IoT Hub is relatively straightforward because the Azure Sphere application code automatically manages the device provisioning and connection to the hub. Perform the following steps to allow the Sphere to communicate with the IoT Hub:
1. Set up an IoT Hub and Device Provisioning Service by following the documentation here.
2. Open the app_manifest.json under SafeSound repo > SafeSound_code > app_manifest.json. The string of X's will need replacing, as well as the hub connection endpoint.
3. Open an Azure Sphere Developer Command Prompt and issue the following command to print your Sphere tenant ID:
azsphere tenant show-selected
Copy the returned value into the DeviceAuthentication field in the app_manifest.json.
4. Log in to the Azure Portal and find your Device Provisioning Service. Copy the ID Scope and paste it into the CmdArgs field in the app_manifest.json. See the image below for where the ID Scope can be found.
5. On the left-hand side, under settings, click on Linked IoT Hubs (see image above). Copy the service endpoint value for the Hub and replace "Daedalus.azure-devices.net" in the AllowedConnections field of the app_manifest.json.
6. Save the app_manifest.json file and run the code by clicking on the green play button. You should see several messages indicating that the device was successfully provisioned and connected to the IoT Hub.
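After these steps, the relevant portion of app_manifest.json should look something like the sketch below (placeholder values shown; your ID Scope, tenant ID, and hub endpoint will differ, and only the fields touched above are included, so leave the rest of the repo's manifest unchanged):

```json
{
  "CmdArgs": ["<your DPS ID Scope>"],
  "Capabilities": {
    "AllowedConnections": [
      "global.azure-devices-provisioning.net",
      "<your-hub>.azure-devices.net"
    ],
    "DeviceAuthentication": "<your Azure Sphere tenant ID>"
  }
}
```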
Now that the Safe Sound Home Security device can communicate with the Azure IoT Hub, it needs a place to send notifications: an app!
Smart Phone App Overview

As noted earlier, the app built for Safe Sound enables control over the Home Security device, including arming/disarming and adjusting some settings, and it allows the owner to view a history of recent events. In addition, it will receive notifications when an event is detected by the Safe Sound system. The app is built using Flutter and can therefore run on both Android and iOS phones with minimal extra configuration. See below for some images of the Safe Sound companion app.
Before building and loading the app on a phone, a few variables must be filled out so the app can connect to IoT Hub to send and receive messages from the Safe Sound device. Navigate to SafeSound repo > SafeSound_app > lib and open main.dart. At the top of the file are three constants that need filling out: sharedAccessKey, deviceID, and iotHubEndpoint. The iotHubEndpoint is the same string entered earlier in the app_manifest.json. The deviceID is the ID given to your Azure Sphere by your IoT Hub. It can be found by opening the IoT Hub, clicking on "IoT devices" in the side menu, and then selecting your device. See the images below for a visual walk-through.
The sharedAccessKey is a key associated with an IoT Hub that grants the holder permission to interact with the Hub in certain ways. For this project, the shared access key needs to have "service connect" permissions. Note that because this key grants access to your IoT Hub, it should not be revealed to anyone or checked into source control. Additionally, an enterprise IoT service would use a backend server to authenticate app users and manage the services they have access to. For the purposes of this project, however, it is simpler to put the shared access key directly in the app. To find the shared access key, go to your IoT Hub, click on "Shared access policies", select the "service" policy, and copy the "Primary key". See the images below for a visual guide.
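Under the hood, a client uses a shared access key like this one to sign short-lived Shared Access Signature (SAS) tokens for IoT Hub requests. The signing scheme is the standard one documented for Azure IoT Hub; the sketch below shows it in Python for illustration (the hub name and key are placeholders, and the Flutter app's actual Dart implementation may differ in detail):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key, policy_name="service", ttl_s=3600):
    """Create an Azure IoT Hub SAS token from a base64-encoded shared access key."""
    expiry = int(time.time()) + ttl_s
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    # Sign "<url-encoded resource URI>\n<expiry>" with HMAC-SHA256.
    to_sign = f"{encoded_uri}\n{expiry}".encode()
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key), to_sign, hashlib.sha256).digest()
    )
    return (
        "SharedAccessSignature "
        f"sr={encoded_uri}"
        f"&sig={urllib.parse.quote(signature, safe='')}"
        f"&se={expiry}"
        f"&skn={policy_name}"
    )

# Placeholder hub name and key, for illustration only.
token = generate_sas_token("myhub.azure-devices.net",
                           base64.b64encode(b"fake-key").decode())
print(token.startswith("SharedAccessSignature sr="))  # True
```

The resulting token goes in the Authorization header of requests to the hub, which is why leaking the underlying key is equivalent to leaking hub access.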
With the variables filled out, the app can now be built and installed on your phone. First you will need to install Flutter. With Flutter installed, the app can be built in a few steps. The following instructions describe how to build the app for an Android phone because that is the type of phone I have, but the instructions for building and loading the app on an iOS device can be found here.
For creating an Android release of the app, first create a signing key and sign the app (Note: only follow the instructions for "Signing the app"; stop once you reach "Enabling Proguard"). Then build an APK by opening a flutter command prompt and issuing the following (this may take several minutes to complete):
cd <path to SafeSound repo>/SafeSound_app
flutter build apk --split-per-abi
To install the app, first enable Developer options and USB Debugging on your Android phone. Connect your phone to your computer, and then type the following in a flutter command window:
cd <path to SafeSound repo>/SafeSound_app
flutter install
Congratulations! The app is now installed, and you can test out arming/disarming the Safe Sound Home Security System. The app now controls the Azure Sphere, but there are a few more steps to set up notifications for when a break-in event is detected.
Setting Up Event Notifications

Enabling notifications for the Safe Sound system consists of two main parts: connecting the app to Firebase Cloud Messaging (and Apple Push Notification Service if you have an iOS device), and setting up an Azure Function to send the actual notification when an event is detected by the Azure Sphere.
Connecting the App to Firebase
To connect the app to Firebase Cloud Messaging, start by creating a Firebase project.
1. Open the Firebase Console and click "Create a project" and enter a project name.
2. Click Continue and then disable Google Analytics which is not needed for this project.
3. Click "Create project".
The next step is to configure your phone to work with Firebase. As before, instructions for configuring an Android device will be shown here, but if you have an iOS device, follow these instructions.
To enable notifications for an Android device, follow these steps:
1. Click on the Android figure above "Add an app to get started".
2. Fill out the Android package name as "com.jwebb.safe_sound_app" and click "Register app".
3. Download the google-services.json file and place it in <SafeSound repo>/android/app/.
4. Rebuild the app and install it on your device using a similar process as before:
cd <path to SafeSound repo>/SafeSound_app
flutter build apk --split-per-abi
flutter install
Now the app can receive notifications, but we still need to configure an Azure Function to send them.
Creating an Azure Function to Send Notifications
Go through the following steps to set up the Azure Function for pushing notifications. See the images for a visual walk-through.
1. Navigate to the Azure Portal and click on Function App > Create (If Function App doesn't show up for you, search for it in the search bar).
2. Fill out the Function App name, choose Node.js as the Runtime stack, and Central US as the Region. If you wish, you can experiment with choosing a different region, but not all regions have the same configuration options. Click on Next: Hosting >
3. If not already selected, choose Windows for the OS, and Consumption for the Plan type. Note that the Consumption plan will charge you monthly, but the cost is quite small. I have been charged 1 cent per week for the Azure Function and storage. Click on Next: Monitoring >
4. Disable Application Insights (it is not needed for this project), and then click Review + create.
5. Double-check the configuration (see fifth image below) and click Create.
6. Once the resource has successfully deployed, click Go to resource and click on the + to add a new function (next to Functions).
7. Scroll down and choose In-portal, and then Continue.
8. Scroll down and select More templates... and then Finish and view templates.
9. Scroll down to IoT Hub (Event Hub) and select it. Install the extension when prompted, and press Continue when it is finished installing.
10. In the New Function window, click on new under Event Hub connection.
11. In the pop-up select IoT Hub and accept the defaults.
12. Finally, click Create on the New Function window.
Now that the Function App has been commissioned, it needs to be configured to send notifications through Firebase. This involves setting up credentials to give the Azure Function permission to send notifications via Firebase. Perform the following steps:
1. Go to the Project Overview page of the Firebase Console, click on the gear icon next to Project Overview, and then Project settings.
2. Select the Service accounts tab and scroll down to the Admin SDK configuration snippet. Click Generate new private key and download the generated JSON file. It contains credentials to access this Firebase project and will be used in the next step.
3. Switch back to the Azure Function and click on its name, and then on Console at the bottom-center.
4. Install the firebase-admin SDK with the following command:
npm install firebase-admin
5. Click on View files on the right-hand side, and then Upload. In the file chooser dialog, select the JSON file downloaded in step 2.
6. Open the azure_function.js file in the SafeSound repo (or from Github here), copy all the code and paste it into the online editor of the Azure Function. Click Save.
7. Click on the name of your Function App, and then Configuration.
8. Click New application setting and set the Name field to GOOGLE_APPLICATION_CREDENTIALS. The Value field should be set to /home/site/wwwroot/IoTHub_EventHub1/<your JSON file name>.json.
9. Click OK and then Save.
All done! Notifications are now set up for the app. Test them by opening the app, choosing the Events tab, and then pressing Simulate Event.
Add a Case

Add the finishing touches by 3D printing a case. Download the attached STL files and print them, or make your own. If you use the attached case design, the top half of the case should be printed so that the snaps are horizontal (i.e. the protrusions from the top of the case should run in the same direction as the layers). This is necessary to ensure that the snaps are strong enough. If not printed this way, there is a good chance the snaps will break off due to the inherent weakness between printed layers.
The machine learning model for classifying the audio was built using this Python notebook, which runs on Google Colab in the browser and requires no installation. The notebook clones and compiles the Embedded Learning Library, downloads the training data, builds a featurizer, transforms the data into features, trains the classification model, converts the model to the ELL format, and allows the model to be downloaded. You are encouraged to experiment with the machine learning model: try adding an additional event type, or improving the accuracy of the existing model.
Final Thoughts

After all that, you should now have a working Safe Sound device that will detect the sound of breaking glass or gunshots and alert you with a notification through the companion app. If you didn't make one, enjoy this video demonstration instead.
There are additional features that could be added to the device. One big one would be the ability to train the classification model online. In this way, the device would continuously learn (with user feedback) and get better at detecting home break-ins.