I completed this project with some guidance from my dad.
Problems with stoves
One night, my mom forgot to turn the stove off after she was done cooking. The next morning, my dad noticed that the stove had been on the whole night! Luckily, the stove was on low, there was nothing on top of it, and no damage was done, but something had to be done to prevent that from happening again. I researched this and found that one of the main ways houses catch fire is a stove left unattended while cooking or after cooking.
How can my project help?
Using a thermal camera and machine learning, the device can see the heat radiating from the stovetop. Based on that heat data, a machine learning model can identify whether the stove has been left unattended and notify you.
My implementation is AEye
Here comes my device, "AEye". It is built with Seeed Studio's Wio Terminal connected to a thermal camera and a LoRa chassis. I am running a TinyML model on the Wio Terminal so it can determine whether the stove is on while no one is there. Without machine learning, we could tell whether the stove is on from the heat data alone, but making the finer decision of whether the stove is on and a human is present would require a lot of if-else logic in the code. Machine learning makes more sense in this situation. The device can then alert you to check the stove before any damage happens.
Creating EI project
Hop onto Edge Impulse and create a new project. Go to the "Keys" tab and add a new HMAC key. Copy the key and save it for later.
Edge Impulse doesn't support a thermal camera feed for data collection at the moment, so you need to collect your data on the Wio Terminal and forward it to Edge Impulse to build the model, instead of collecting the data directly in Edge Impulse.
To capture the data, I am using the three buttons of the Wio Terminal for my labels. To collect the data, run the WIO_AEye_Data_Collector.ino program, and then you can start collecting!
After you have collected the data you need, remove the SD card from the Wio Terminal and insert it into your computer. You should then see the CSV files listed.
After you have the files, copy them to the directory called raw under the "data" folder. This step is optional, but if you want to see your CSV files visually, you can run the imager.py program. It will create a visual representation of the data under the /data/visual folder.
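If you are curious what such a visualizer could look like, here is a minimal sketch of the idea (this is not the actual imager.py; the CSV layout and the example file name are assumptions):

import os
import csv
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch, not the actual imager.py. Assumes each CSV row is one
# 32x24 thermal frame stored as 768 comma-separated temperature values.
def visualize(csv_path, out_dir="data/visual"):
    os.makedirs(out_dir, exist_ok=True)
    with open(csv_path) as f:
        for i, row in enumerate(csv.reader(f)):
            frame = np.array(row, dtype=float).reshape(24, 32)  # 24 rows x 32 columns
            plt.imshow(frame, cmap="inferno")  # hotter pixels render brighter
            plt.colorbar(label="temperature")
            plt.savefig(os.path.join(out_dir, f"frame_{i:04d}.png"))
            plt.clf()

visualize("data/raw/stove_on_no_human.csv")  # hypothetical file name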
Now I have all the data collected as a series of raw values. Since the thermal camera's resolution is 32x24, there are 768 separate values for each image, and each of the 768 time-series values is spaced at a 1 ms interval. So in EI, your data should be formatted similar to this:
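Here is a rough example of that structure, written out as a Python dictionary. The field names follow the public Edge Impulse data acquisition format; the device type, sensor name, and units shown are assumptions for illustration:

# Rough example of one formatted sample (Edge Impulse data acquisition format).
# Device type, sensor name, and units are assumptions, not my exact values.
sample = {
    "protected": {"ver": "v1", "alg": "HS256"},
    "signature": "0" * 64,  # placeholder, replaced by the real HMAC signature
    "payload": {
        "device_type": "WIO_TERMINAL",
        "interval_ms": 1,  # 1 ms between each of the 768 values
        "sensors": [{"name": "thermal", "units": "Cel"}],
        "values": [23.1, 23.4, 24.0],  # ...768 values in total, one per pixel
    },
}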
We have now taken a sneak peek at what the formatted data should look like. To start, open the data-formatter.py program and paste in the HMAC key you got earlier when you first created the project. This program turns the CSV files into JSON files and stores them under the /data/data-formatter folder.
python3 data-formatter.py
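Under the hood, the signing step looks roughly like this. This is a simplified sketch, not the exact code in data-formatter.py; it signs each sample with the HMAC key from the "Keys" tab so Edge Impulse will accept it:

import hashlib
import hmac
import json

# Simplified sketch of the signing step (assumed, not the exact formatter code).
HMAC_KEY = "paste-your-hmac-key-here"

def sign_and_dump(data, out_path):
    data["signature"] = "0" * 64  # sign with a placeholder signature first
    encoded = json.dumps(data)
    data["signature"] = hmac.new(HMAC_KEY.encode(), encoded.encode(), hashlib.sha256).hexdigest()
    with open(out_path, "w") as f:
        json.dump(data, f)

sign_and_dump(sample, "sample.0.json")  # "sample" is the structure shown above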
Now, to have the data uploaded to the "Data acquisition" page, cd into the formatted-data folder and run the following command:
edge-impulse-uploader --category split *.json
Building the model
Now that we have the data collection out of the way, we can move on to building and training the model! To start, head over to the "Create Impulse" page. First, click "Add an input block" and select "Time series data". Set the "Window size" to 768, and the "Window increase" to 768 as well.
Then, click on "Add processing block" and choose "Raw data".
After, select "Add a learning block" and choose Classification (Keras).
Now you can save the impulse. Once you are done, it should look something like this:
You can take a look at the "Raw data" page, but we are just keeping everything at the defaults. Now we can move on to the "NN Classifier" page to adjust the number of training cycles and the learning rate. If your model doesn't work well, add more data and retrain the model.
Now you have finished building the model! This next step will download the library as a zip file. Navigate to the "Deployment" page, choose "Arduino", and build. If you name your project the same as mine, the library file will be AEye_inferencing.h
For this project, I am going to use a LoRa network rather than WiFi or a cellular network because of its long range. The limitation with LoRa is that it can only transmit raw data, not audio or images, but for my project that is not a problem since I only have raw data. When the device is at home you can usually connect it to WiFi, but we already have so many smart devices connected to WiFi and I am trying to avoid adding a new one.
After you connect the antenna to the chassis, upload the WIO_AEye_Inference_lora.ino sketch to the Wio Terminal.
Register device with Helium
Head over to the Helium Console and register your device. Once you have registered and are in the console, find the Devices page and add a new device.
As you can see in the image above, you need to get the Dev EUI, App EUI, and the App key. In order to see this info, slide the blue circle button to the left until you see the screen change.
Integration with AWS
To start the integration, you first need to make a user. Go to IAM and click "Users". Create a user by entering a user name (mine is "wio-terminal-stove-helium") and setting the AWS credential type to "Access key". This will give you an access key ID and a secret access key. Back in the Helium Console, enter these credentials under "Integrations", along with the AWS region that you would like AWS IoT to run in.
Remember: You can only see the secret access key once, so make sure you have it fully copied.
Once you have followed through with the integration, find the "Flows" page and start creating the workflow as shown in the image below.
Now that you have finished creating the workflow, you need to create a Lambda function. Search for "Lambda" in the AWS console and create the function. You can find my Lambda code under the "Code" section. For this project, I am using AWS Pinpoint to send a notification when the stove is left unattended.
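If you just want the gist of it, here is a rough sketch of the idea (the real code is in the "Code" section; the Pinpoint application ID, phone number, payload field name, and label encoding below are placeholders and assumptions):

import base64
import boto3

pinpoint = boto3.client("pinpoint")

def lambda_handler(event, context):
    # Helium forwards the LoRa uplink with the device payload base64-encoded;
    # the field name and the "label as ASCII digit" encoding are assumptions here.
    label = int(base64.b64decode(event["payload"]).decode())

    if label == 2:  # stove on, no human detected
        pinpoint.send_messages(
            ApplicationId="YOUR_PINPOINT_APP_ID",  # placeholder
            MessageRequest={
                "Addresses": {"+10000000000": {"ChannelType": "SMS"}},  # placeholder number
                "MessageConfiguration": {
                    "SMSMessage": {
                        "MessageType": "TRANSACTIONAL",
                        "Body": "AEye: the stove looks unattended, please check it!",
                    }
                },
            },
        )
    return {"statusCode": 200}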
Creating the rule
After you deploy your function, we have to create a rule. Head over to IoT Core, navigate to "Message Routing", and select "Rules". Once you are on the Rules page, go ahead and create a new rule.
Use the topic name that you used in the Helium Console as the topic filter, then under "Actions", select the Lambda function you just deployed.
Creating the database
In the Lambda function, you can see that I set a 15-minute interval between notifications, so that each time data comes in with label 2 (stove on with no human) and a notification is sent to the phone, the next one will not go out until 15 minutes have passed. Once 15 minutes have passed, it sends the notification and updates the database. Right now, we have done everything except creating the database.
To make the database, first go to DynamoDB and create a table. Set the name to "wio_terminal_stove_config", set the partition key to "id" with type String, keep the table settings on "Default settings", and then create the table!
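For reference, the 15-minute throttle can be expressed against this table roughly like this (a minimal sketch; the item id and the "timestamp" attribute name are just for illustration and may differ from my actual Lambda code):

import time
import boto3

table = boto3.resource("dynamodb").Table("wio_terminal_stove_config")
INTERVAL_SECONDS = 15 * 60

def should_notify():
    # Look up when the last notification went out; the item id and
    # attribute name are assumptions for this sketch.
    item = table.get_item(Key={"id": "last_notification"}).get("Item")
    now = int(time.time())
    if item and now - int(item["timestamp"]) < INTERVAL_SECONDS:
        return False  # a notification went out less than 15 minutes ago
    table.put_item(Item={"id": "last_notification", "timestamp": now})
    return True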
Then that's it! You did it 😄 Now put your device in front of your stove, and have it keep an eye on your stove!