We live in the era of Artificial Intelligence (AI) and can take advantage of its positive impact, especially when it helps us gain new and deeper insights.
KAN to host and manage Computer Vision (CV) at the edge

Microsoft very recently announced KAN (KubeAI Application Nucleus). As a Kubernetes-native solution, KAN provides you with a no-code interface to build and deploy computer vision solutions at the edge.
I experimented with and tested KAN's object-detection capabilities to enable Smart Retail analytics. It was also an opportunity to check how to process all the data locally to protect the privacy of retail consumers and send only aggregated, anonymised data (if required) to the Azure cloud backend.
And, finally, I wanted to explore an option for simpler touch-enabled interaction, where a non-technical audience can control a sophisticated AI-enabled platform with a more intuitive human-machine interface.
MVP1: Building a Smart Retail solution

First of all, I've cloned the original KAN repo to get the source code and understand the inter-dependencies of the released components.
The next step was to open Azure Cloud Shell in the Azure portal and execute the Bash command shown below to initiate the setup process:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/Azure/KAN/main/Installer/kan-installer.sh)
```
I will not dive into the technical aspects of Azure- or Kubernetes-specific configuration here, as that is a subject for a separate level-200 tutorial. Anyway, after following the on-screen instructions, I got all the resources deployed to my Azure Kubernetes Service (AKS) instance and was provided with a KAN portal URL. Upon clicking it, I got access to the portal's Home page.
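To confirm that the installation had actually converged, it can be handy to list the KAN pods programmatically. Below is a minimal sketch using the official kubernetes Python client; the "kan" namespace is my assumption, so substitute whatever namespace the installer created in your cluster.

```python
# A minimal sketch (not part of the KAN installer) that checks whether the
# deployed pods are up, using the official "kubernetes" Python client.
# The "kan" namespace is an assumption; substitute the namespace the
# installer actually created in your cluster.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig (e.g., from `az aks get-credentials`)
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="kan").items:
    print(f"{pod.metadata.name}: {pod.status.phase}")
```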
The KAN portal allows deployment of AI models either to IoT Edge devices registered with Azure IoT Hub (e.g., a GPU-equipped Nvidia Jetson kit) or to Kubernetes-hosted machines. For demo purposes, I used the latter option here.
The next step was to enable access to a Web cam that supports RTSP (Real-Time Streaming Protocol). I deployed a demo RTSP endpoint in my AKS cluster and added it as a new camera in KAN.
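Before registering the camera in KAN, it is worth sanity-checking that the RTSP stream is actually readable. Here is a short sketch using OpenCV; the stream URL is a placeholder for the address of your own demo RTSP service.

```python
# A quick sanity check (my own, not part of KAN) that the RTSP endpoint is
# reachable before registering it as a camera. The URL below is a
# hypothetical placeholder; replace it with your demo RTSP service address.
import cv2

RTSP_URL = "rtsp://rtsp-demo.default.svc.cluster.local:8554/stream"  # placeholder

cap = cv2.VideoCapture(RTSP_URL)
ok, frame = cap.read()
if ok:
    print(f"Stream is up, frame size: {frame.shape[1]}x{frame.shape[0]}")
else:
    print("Could not read a frame; check the RTSP endpoint and network access")
cap.release()
```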
The KAN portal comes with some pre-trained AI models from its Model Zoo. For this project, I've selected the person-detection one. Alternatively, you can train and re-use your own model from an Azure Custom Vision project.
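If you go the Custom Vision route, a published object-detection iteration can also be queried directly over its REST prediction endpoint, which is a quick way to verify the model before wiring it into KAN. The sketch below is illustrative: the endpoint, project ID, iteration name, and key are placeholders for the values shown in the Custom Vision portal's Prediction URL dialog.

```python
# A hedged sketch of calling a published Azure Custom Vision object-detection
# iteration over REST. Endpoint, project ID, iteration name, key, and the
# test image are placeholders; copy the real values from the Custom Vision
# portal's "Prediction URL" dialog.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
PROJECT_ID = "<project-id>"                                       # placeholder
ITERATION = "<published-iteration-name>"                          # placeholder
PREDICTION_KEY = "<prediction-key>"                               # placeholder

url = f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}/detect/iterations/{ITERATION}/image"
with open("shop_frame.jpg", "rb") as f:  # placeholder test image
    resp = requests.post(
        url,
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/octet-stream"},
        data=f.read(),
    )
resp.raise_for_status()
for p in resp.json()["predictions"]:
    print(p["tagName"], round(p["probability"], 2), p["boundingBox"])
```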
After that, it was necessary to link the camera's real-time stream with the selected AI model, define thresholds and label settings, and configure post-processing of the model output:
- "Camera Input" will intake Web cam's video stream;
- "Run ML Model" defines what trained model to apply;
- "Filter" can specify minimum confidence level expected for detected objects;
- "Export Video Snippet" can save smaller chunks of video with applied model results.
As the final step, I've deployed the AI skill to the edge compute device to process a demo video stream from a retail shop. The outcome of the model application can be seen in the YouTube video accompanying this post.
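Coming back to the privacy goal stated earlier: only aggregated, anonymised figures need to leave the edge. As a sketch of what that could look like, the snippet below sends a per-window person count to Azure IoT Hub using the azure-iot-device SDK; the connection string and the payload shape are my assumptions, not part of KAN.

```python
# A minimal sketch (my own, assuming the azure-iot-device SDK) of shipping
# only an aggregated, anonymised summary to the cloud: a person count per
# time window, with no frames or identities leaving the edge.
import json
from datetime import datetime, timezone
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "<device-connection-string>"  # placeholder from your IoT Hub device

payload = {
    "window_end": datetime.now(timezone.utc).isoformat(),
    "window_minutes": 5,
    "person_count": 17,  # aggregated result of the person-detection skill
}

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()
client.send_message(Message(json.dumps(payload)))
client.disconnect()
```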
MVP2: Integration with Infineon's CapSense Pioneer Kit

The PSoC™ 4100S Pioneer Kit is Infineon's fourth-generation, low-power CapSense™ solution based on the PSoC (Programmable System-on-Chip) 4100S device. It includes a plastic enclosure that provides ready-to-use trackpad functionality.
There was a solution from Zerynth that originally enabled Python execution on PSoC. However, at the time of writing, its SDK v2 had been discontinued, while SDK v3 did not yet support PSoC.
As a newer development, a port of MicroPython was released to run on Infineon PSoC 6 microcontrollers. At the time of writing, PSoC 4 had not yet been added to the list of supported devices.
Once either of the above options is enabled for PSoC 4, MVP2 will be built to enhance the prototype with the new touch-enabled functionality. Using the CapSense Pioneer Kit's trackpad, retail managers will be able to choose the category of products they are interested in, to understand how popular it is and with which segments of their customers.
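Since neither option supports PSoC 4 yet, the sketch below is purely hypothetical: it shows how a horizontal trackpad position could be mapped to a product category in MicroPython once such a port exists. The read_trackpad_x() function and the trackpad resolution are made-up placeholders for whatever CapSense API would eventually be exposed.

```python
# Hypothetical MicroPython sketch for the planned MVP2: mapping a horizontal
# trackpad position to a product category. read_trackpad_x() is a placeholder
# for a future CapSense trackpad API; no such driver exists at the time of
# writing, and the category list and axis resolution are assumptions.
CATEGORIES = ["beverages", "snacks", "dairy", "bakery"]
TRACKPAD_MAX_X = 255  # assumed resolution of the trackpad's X axis

def read_trackpad_x():
    """Placeholder: would return the current X coordinate (0..TRACKPAD_MAX_X)."""
    raise NotImplementedError("awaiting CapSense support in MicroPython")

def category_from_x(x):
    # Split the trackpad into equal horizontal zones, one per category.
    zone = min(x * len(CATEGORIES) // (TRACKPAD_MAX_X + 1), len(CATEGORIES) - 1)
    return CATEGORIES[zone]

# Demo of the mapping logic with sample coordinates instead of hardware reads:
for x in (10, 130, 250):
    print(x, "->", category_from_x(x))  # 10 -> beverages, 130 -> dairy, 250 -> bakery
```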