This project was inspired by my dad, because there are some mornings when he starts making his coffee but completely forgets about the cup. When he comes back to grab his coffee, he has his coffee, but unfortunately it is all over the counter. People might not realize that their beloved coffee has spilled, and if they have any sort of rug nearby, the spill might damage it. This device helps prevent coffee waste and cleanup time for people who need their coffee in a hurry.
This device uses the microphone built into the Wio Terminal to listen for the sound of the machine brewing, so that you know coffee is really being made and there are no false alarms. It then uses the Vision AI camera to see whether there is a cup in place. Once it detects that coffee is brewing and there is no cup where there is supposed to be one, the Wio Terminal uses its buzzer to beep and let the person know that they have to put a cup in the machine before it is too late.
Breakdown of the sensors
I am using the Wio Terminal because it is a microcontroller that I am very familiar with, and it also has an embedded microphone, so I won't need a separate one to connect.
The microphone will listen to the surrounding noise to check whether the coffee machine is brewing, using audio classification trained on different data.
I have also used the Wio Terminal for its plug-and-play Grove ports. They are really simple to use, and you don't need to solder the sensors to the microcontroller. I connected the Vision AI module to a Grove port.
The Wio Terminal is also really good for its display. I used the display to monitor my data: the prediction percentage for coffee brewing. The screen also shows whether the device thinks coffee is being brewed and whether there is a cup or not.
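To give an idea of how the screen is driven, here is a minimal sketch using the TFT_eSPI library that ships with the Wio Terminal board package. The values and layout are placeholders of my own, not the project's exact code:

```cpp
#include "TFT_eSPI.h"  // LCD library bundled with the Wio Terminal board package

TFT_eSPI tft;

void setup() {
  tft.begin();
  tft.setRotation(3);                      // landscape orientation
  tft.fillScreen(TFT_BLACK);
  tft.setTextColor(TFT_WHITE, TFT_BLACK);  // background color overwrites old text
  tft.setTextSize(2);
}

void loop() {
  float brewing = 0.87;     // placeholder: prediction from the audio model
  bool cupPresent = false;  // placeholder: result from the Vision AI module

  tft.drawString("Brewing: " + String(brewing * 100, 1) + "%  ", 10, 50);
  tft.drawString(cupPresent ? "Cup detected " : "NO CUP!      ", 10, 90);
  delay(500);
}
```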
Making sense of sensors
Now just think of the human senses. We don't just use one sense at a time; we often use two or more at once to better understand what is around us. For example, when trying to tell a slice of apple pie apart from a slice of blueberry pie, you first use your eyes to look at what it is, and then you might use your sense of smell to tell the two types of pie apart.
This is similar to what our device is doing here. We are using data from two different sensors to make the prediction more accurate and to limit false alarms. The microphone is the sensor that first figures out whether coffee is being brewed, by listening to the surrounding noise. If the microphone thinks coffee is being brewed, that is where the second sense comes in: the Vision AI module checks whether there is a cup, to make sure the coffee won't spill while it is brewing. A rough sketch of this two-stage logic is shown below.
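In this sketch, the two helper functions are hypothetical stand-ins for the audio and vision code shown later on this page, and the 0.8 threshold is just an assumption to tune:

```cpp
const float BREW_THRESHOLD = 0.8;  // assumed confidence cutoff; tune to your model

// Hypothetical stubs: in the real device these wrap the Edge Impulse audio
// model and the Grove Vision AI module (see the code further down the page).
float brewingProbability() { return 0.9; }  // placeholder value
bool cupDetected() { return false; }        // placeholder value

void setup() {
  pinMode(WIO_BUZZER, OUTPUT);  // Wio Terminal's built-in buzzer pin
}

void loop() {
  // Stage 1: only bother the camera if the microphone hears brewing.
  if (brewingProbability() >= BREW_THRESHOLD) {
    // Stage 2: brewing with no cup means trouble, so beep.
    if (!cupDetected()) {
      analogWrite(WIO_BUZZER, 128);  // buzzer on at half duty cycle
      delay(1000);
      analogWrite(WIO_BUZZER, 0);    // buzzer off
    }
  }
  delay(500);
}
```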
Using Edge Impulse for audio classification
I used Edge Impulse for both the audio classification and the object detection, so I made two separate projects, one for each.
For the audio classification model, I collected about 10 minutes and 10 seconds worth of data, with each sample about 5 seconds long. The data I collected was either background noise (any noise other than coffee) or the sound of the coffee machine brewing. After all of the data was collected, I created my impulse, which looks like this:
Once the impulse has been saved, you can go to the classifier page to train your model. Make sure you have 100 training cycles and a learning rate of 0.005 before you train. Then you can go straight to deployment to deploy your impulse.
I recommend that you first run your impulse directly on a mobile phone or a computer before you deploy it to Arduino, to make sure that everything is working as it should. When you deploy your impulse so that you can use it, deploy it as an Arduino library; that gives you a .zip file that you will add when you get to programming.
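For reference, the inference loop in the generated library follows Edge Impulse's standard Arduino pattern. This is a condensed sketch of it; the header name comes from your own project name (coffee_guard_inferencing.h is just a placeholder), and the buffer would be filled from the Wio Terminal's microphone:

```cpp
#include <coffee_guard_inferencing.h>  // placeholder name; generated from your project

// Audio buffer; the real sketch fills this from the Wio Terminal's microphone.
static float features[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  // TODO: fill `features` with fresh microphone samples here.
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // Each label (e.g. "brewing" vs. background noise) gets a 0..1 score.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.print(result.classification[i].label);
      Serial.print(": ");
      Serial.println(result.classification[i].value);
    }
  }
  delay(1000);
}
```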
Check these links for more information:
Support for audio classification on the Wio Terminal: https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/seeed-wio-terminal
Support for Vision AI module: https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-mcu-targets/seeed-grove-vision-ai
Train and deploy your AI model with Roboflow, YOLOv5, and TensorFlow Lite: https://wiki.seeedstudio.com/Train-Deploy-AI-Model-Grove-Vision-AI/#train-using-yolov5-on-google-colab
Seeed Studio YOLOv5-swift GitHub: https://github.com/Seeed-Studio/yolov5-swift
Vision AI module with Roboflow
Edge Impulse is my preferred tool. Following this article, I connected the Grove Vision AI module to Edge Impulse and collected images right from the module. I then created the impulse, trained it, and tested it by deploying to an iPhone. To deploy the custom model to the Vision AI module, I downloaded the .tflite file from the Edge Impulse dashboard and created the UF2 file using uf2conv.py from this GitHub repo. But when I deployed the file, my module crashed and I could not recover it. I bought two more modules so I could keep testing, and decided to use Roboflow and train with YOLOv5 so that I could complete this project.
I exported all of the data that I had collected in Edge Impulse, and then you have to create a Roboflow account so that you can do all of your work from there (I used my dad's account since he already had one).
Once that magic has been accomplished, you will need to follow these steps to annotate your dataset in Roboflow.
I had about 120 images of a variety of cups taken from different camera angles. I then resized them (stretched) to 192 x 192, turned augmentation off, and pressed the button that said "Generate".
Once you have gotten to the part where you get your downloaded code snippet, you need to train the model using YOLOv5 running on Google Colab. I used this article to help make my own notebook in Google Colab that will train your AI model using YOLOv5. All you need to do is copy this notebook and then add your own ID into your copy.
After you download the trained model file, you can flash it by double-pressing the boot button on the Vision AI module (it shows up as a GROVEAI drive) and moving the model-1.uf2 file we got at the end of training onto that drive.
Then you will need to add a library to your code so that you can show the number of cups in the serial monitor. The full code is shown at the bottom of the page, and a short sketch of the idea follows below. Now you have completed this lovely device! You can have your scrumdiddlyumptious coffee for your morning or evening boost without having to worry about it being wasted (my dad can have coffee any time of day) ☕️!
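As an idea of what that looks like, here is a short sketch based on the Seeed_Arduino_GroveAI library's object detection example (constant names may vary slightly between library versions):

```cpp
#include <Wire.h>
#include "Seeed_Arduino_GroveAI.h"  // Seeed's library for the Grove Vision AI module

GroveAI ai(Wire);

void setup() {
  Wire.begin();
  Serial.begin(115200);
  // Object detection using the first external (custom) model slot.
  if (!ai.begin(ALGO_OBJECT_DETECTION, MODEL_EXT_INDEX_1)) {
    Serial.println("Vision AI begin failed");
    while (1) delay(1000);
  }
}

void loop() {
  if (ai.invoke()) {                        // start one detection pass
    while (ai.state() != CMD_STATE_IDLE) {  // wait until the module is done
      delay(20);
    }
    uint8_t cups = ai.get_result_len();     // number of detected objects (cups)
    Serial.print("Cups detected: ");
    Serial.println(cups);
  }
  delay(500);
}
```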
There were a few speed bumps along the way while making this project. When I first tried to deploy the impulse to the Vision AI module, for some reason the .uf2 file would not get pushed onto the module. I usually work on a Mac, and since that didn't work, I did the whole setup again on our Windows PC. I was able to flash the firmware, but then the module stopped responding to anything. My dad and I contacted Seeed Studio and tried all of their troubleshooting techniques, but it wouldn't respond to anything. That meant the module was broken, so we bought another one so that we could continue the work. I had my dad help me with this part, and we were able to use Roboflow after we tested it out.
I also tried to make the display on the Wio Terminal look a little nicer, with an image for when the device thinks coffee is being brewed and whether there is a cup or not, and with sprites for writing the brewing prediction percentage on screen. But when we tried to run the program, the model would freeze and wouldn't print anything else after that. It seemed like a high memory usage problem, so I got rid of the images and drew text directly to the TFT instead of using sprites.
I used Fusion 360 for my 3D designing. I originally used Tinkercad, but once I started using Fusion 360, I had a lot more freedom in how to design things. This was a new and challenging experience for me, since I messed up measuring the slot for the microcontroller and had to ask my dad numerous times how to fix the measurements. But it is something I will always use when making a 3D design.
If I had more time, I would try to see how I can reduce my memory usage so that I can add some images to the display to make it a little nicer looking.
You won't stay in the kitchen every time you make coffee, so you won't always hear the buzzer when it beeps. Next time, I will make sure that when the device says coffee is being brewed and there is no cup, it sends some sort of SMS or email to your phone, so that you get notified even when you aren't near the device and can't hear the buzzer.
I would also spend some more time researching how to get the object detection model to work fully on Edge Impulse, so that I won't have to go back and forth between different tools; that would make life a little simpler.
Full video demo 😄