It was a rainy afternoon and I was searching for inspiration for a project to introduce high school students to machine learning-based object classification and detection.
Then, as I reached across to a bowl of M&Ms on my desk, I realised that counting the number and colours of sweets remaining was a good sample object detection problem.
At the same time I could hear my son playing with his Lego in the living room, and I realised that identifying Lego pieces was a good example of an object classification problem.
The .NET Core-based UWP client application runs on a Windows 10 IoT Core device and is remotely configured (model type, model to run, threshold etc.) using the Azure IoT Hub device twin.
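Loading the configuration could look something like the sketch below. This is only an illustration, not my actual code; the property names ModelType, ModelGuid and Threshold are placeholders for whatever is stored in the device twin desired properties.

// Illustrative sketch: loading remote configuration from the Azure IoT Hub
// device twin desired properties. Property names and defaults are hypothetical.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Shared;

static class TwinConfiguration
{
   public static async Task<(string ModelType, Guid ModelGuid, double Threshold)> LoadAsync(DeviceClient deviceClient)
   {
      Twin twin = await deviceClient.GetTwinAsync();
      TwinCollection desired = twin.Properties.Desired;

      // Fall back to sensible defaults if a property hasn't been set remotely
      string modelType = desired.Contains("ModelType") ? (string)desired["ModelType"] : "Classification";
      Guid modelGuid = desired.Contains("ModelGuid") ? Guid.Parse((string)desired["ModelGuid"]) : Guid.Empty;
      double threshold = desired.Contains("Threshold") ? (double)desired["Threshold"] : 0.5;

      return (modelType, modelGuid, threshold);
   }
}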
The images are uploaded to the Azure Cognitive Services Custom Vision Service for processing, and the results are then post-processed (filtered & aggregated) on the device to make them easier to use in Azure IoT Central.
19-08-14 05:26:14 Timer triggered
Prediction count 33
Tag:Blue 0.0146500813
Tag:Blue 0.61186564
Tag:Blue 0.0923164859
Tag:Blue 0.7813785
Tag:Brown 0.0100603029
Tag:Brown 0.128318727
Tag:Brown 0.0135991769
Tag:Brown 0.687322736
Tag:Brown 0.846672833
Tag:Brown 0.1826635
Tag:Brown 0.0183384717
Tag:Green 0.0200069249
Tag:Green 0.367765248
Tag:Green 0.011428359
Tag:Orange 0.678825438
Tag:Orange 0.03718319
Tag:Orange 0.8643157
Tag:Orange 0.0296728313
Tag:Red 0.02141669
Tag:Red 0.7183208
Tag:Red 0.0183610674
Tag:Red 0.0130951973
Tag:Red 0.82097
Tag:Red 0.0618815944
Tag:Red 0.0130757084
Tag:Yellow 0.04150853
Tag:Yellow 0.0106579047
Tag:Yellow 0.0210028365
Tag:Yellow 0.03392527
Tag:Yellow 0.129197285
Tag:Yellow 0.8089519
Tag:Yellow 0.03723789
Tag:Yellow 0.74729687
Tag valid:Blue 2
Tag valid:Brown 2
Tag valid:Orange 2
Tag valid:Red 2
Tag valid:Yellow 2
05:26:17 AzureIoTHubClient SendEventAsync start
05:26:18 AzureIoTHubClient SendEventAsync finish
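The post-processing visible in the log is just a threshold filter followed by a group-and-count; only the "valid" per-tag totals are sent to Azure IoT Hub. A rough sketch of the idea, assuming the Custom Vision SDK's PredictionModel type: a threshold of 0.5 would reproduce the counts above, but the configured value comes from the device twin.

// Illustrative sketch: discard low-confidence predictions, count the
// survivors per tag, then send the totals to Azure IoT Hub as JSON telemetry.
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction.Models;
using Microsoft.Azure.Devices.Client;
using Newtonsoft.Json;

static class PredictionPostProcessor
{
   public static async Task ProcessAsync(DeviceClient deviceClient, IList<PredictionModel> predictions, double threshold)
   {
      // Filter & aggregate: a prediction only counts towards a tag if its
      // probability is at or above the configured threshold
      Dictionary<string, int> tagCounts = predictions
         .Where(p => p.Probability >= threshold)
         .GroupBy(p => p.TagName)
         .ToDictionary(g => g.Key, g => g.Count());

      // Each tag becomes a telemetry property that Azure IoT Central can chart
      string payload = JsonConvert.SerializeObject(tagCounts);
      using (Message message = new Message(Encoding.UTF8.GetBytes(payload)))
      {
         await deviceClient.SendEventAsync(message);
      }
   }
}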
I used Azure IoT Central to store the client configuration, display the number of M&Ms remaining in the bowl, and plot some basic consumption/self-control KPIs.
The object detection training process requires at least 15 images per tag, so for 6 different colours that's at least 90 images, which took a while. The object classification model worked pretty well, but I would need to upload more training images with different lighting (LED vs. sunlight etc.) so the model copes with the varying lighting in my home office.
I iterated several times to get the object detection model reliable enough to use in a classroom environment.
Every so often the contrast on the webcam (which is on the hardware compatibility list) goes wrong and the images are washed out.
The remote configuration should make it easier for students to train and test their own models. Future applications include counting wildfowl on the school stream and identifying the type/growth stage of plants.
I used a Seeed Studio 96Boards mezzanine on the DragonBoard 410c and a Grove Base HAT on the Raspberry Pi 2/3 devices. These are only needed for the digital input used to initiate photos and the LED that indicates an image is being processed.
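On Windows 10 IoT Core both are plain GPIO, so something like the following sketch would do; the pin numbers 5 and 6 are made up and depend on the mezzanine/HAT wiring.

// Illustrative sketch: a push button initiates a photo and an LED indicates
// an image is being processed, using Windows.Devices.Gpio. Pin numbers are hypothetical.
using Windows.Devices.Gpio;

class CameraTrigger
{
   private readonly GpioPin buttonPin;
   private readonly GpioPin ledPin;

   public CameraTrigger()
   {
      GpioController gpioController = GpioController.GetDefault();

      buttonPin = gpioController.OpenPin(5);
      buttonPin.SetDriveMode(GpioPinDriveMode.InputPullUp);
      buttonPin.ValueChanged += ButtonPin_ValueChanged;

      ledPin = gpioController.OpenPin(6);
      ledPin.SetDriveMode(GpioPinDriveMode.Output);
      ledPin.Write(GpioPinValue.Low);
   }

   private void ButtonPin_ValueChanged(GpioPin sender, GpioPinValueChangedEventArgs args)
   {
      // Input is pulled up, so a falling edge means the button was pressed
      if (args.Edge != GpioPinEdge.FallingEdge)
      {
         return;
      }

      ledPin.Write(GpioPinValue.High); // LED on while the image is captured & processed
      // ... capture the image and call the Custom Vision service here ...
      ledPin.Write(GpioPinValue.Low);
   }
}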
I'm looking at building a version which supports the DragonBoard 410c TPM, disconnected image analysis using a "compact" model downloaded to the device, and initiating the image processing in the cloud with an Azure Function.