Imagine walking through a bustling shopping mall or an exhibition. Whether you are a tech enthusiast or simply drawn to the latest trends, a sleek product on display catches your eye. Intrigued, you pick it up for a closer look, unsure of what it is. Suddenly, a nearby monitor or large screen comes to life, displaying vibrant infographics and dynamic advertisements tailored to the very product in your hands. No salespeople, no extra effort: a seamless, interactive experience that engages you instantly. This is the future of marketing: dynamic digital signage that connects with consumers through intelligent automation, delivering personalized content with minimal human interaction.
Introduction

In an age where advertisements and information displays are becoming more interactive, digital signage has the potential to captivate and engage viewers in innovative ways. In this project, I'm building a smart digital signage system powered by the Raspberry Pi-based reComputer R1000. The system uses Node-RED to provide a simple user interface for managing file paths and keywords, and VLC for seamless video playback. What sets this project apart is the integration of a camera feed, processed by a custom object detection model trained on Coca-Cola and Sprite datasets from Roboflow. When the camera detects one of these objects, the system publishes the matching keyword to an MQTT broker, which interrupts the default playback and triggers the related video. This makes the signage responsive and dynamic, showing specific ads or information based on real-time object detection.
Creating the ML Model

For this project, I used an EfficientDet TensorFlow Lite model, known for its balance of speed and accuracy in object detection. The dataset was sourced from Roboflow Universe, which offers a wide variety of labeled image sets. EfficientDet training expects labels in Pascal VOC XML format, which Roboflow can export alongside the dataset. I used Google Colab to train the model and convert it to TensorFlow Lite, optimized for deployment on edge devices like the Raspberry Pi.
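Running the exported model on the device follows the usual TensorFlow Lite interpreter pattern. A minimal sketch is below; the model path, label list, and output-tensor order are my assumptions about the export (EfficientDet-Lite exports can vary), so verify them against your own model:

```python
def load_interpreter(model_path="efficientdet_lite.tflite"):
    """Load the exported TFLite model (path is an assumption)."""
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    return interpreter

def detect(interpreter, frame):
    """Run one preprocessed frame (uint8, resized to the model's input shape)
    through the interpreter. The output order below follows the standard TFLite
    detection postprocess (boxes, classes, scores, count); check it against
    your own export."""
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], frame[None, ...])
    interpreter.invoke()
    out = interpreter.get_output_details()
    boxes = interpreter.get_tensor(out[0]["index"])[0]
    classes = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    return boxes, classes, scores

def filter_detections(classes, scores, labels, threshold=0.5):
    """Map class indices to label names for detections above the threshold."""
    return [(labels[int(c)], float(s))
            for c, s in zip(classes, scores) if s >= threshold]
```

A confidence threshold of around 0.5 keeps spurious detections from repeatedly interrupting the default video; tune it for your lighting conditions.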
MQTT Broker

I set up a Mosquitto broker to handle message passing between the object detection model and the video playback system. Once the model identifies an object such as Coca-Cola or Sprite, it publishes the relevant keyword to a corresponding MQTT topic. I also added custom functions to manage the publishing process, ensuring the right video is triggered on the Node-RED end based on the detected object.
References

Modified object detection code (GitHub)
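The publishing side can be sketched with the paho-mqtt client. The topic name `signage/detected` and the JSON payload shape are my assumptions for illustration, not fixed by the project:

```python
import json

def keyword_message(label, topic_base="signage/detected"):
    """Build the MQTT topic and JSON payload for a detected label
    (topic layout and payload shape are assumptions)."""
    return topic_base, json.dumps({"keyword": label})

def publish_keyword(label, host="localhost", port=1883):
    """Publish the detected keyword to the local Mosquitto broker."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    client = mqtt.Client()
    client.connect(host, port)
    topic, payload = keyword_message(label)
    client.publish(topic, payload, qos=1)  # QoS 1: broker acknowledges delivery
    client.disconnect()
```

In practice it also helps to debounce here, publishing only when the detected label changes, so the same object sitting in front of the camera does not restart the ad every frame.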
Node-RED UI

Node-RED, a low-code development tool, was key to creating the user interface for this project. The UI lets users add video file paths and their corresponding keywords, which are stored in a MySQL database. A separate tab sets a default video, which plays continuously until an MQTT message triggers a new one. The flow works as follows: when the object detection model identifies an item such as Coca-Cola or Sprite, the relevant keyword is published to the MQTT broker and picked up by an "MQTT in" node in Node-RED. The system then looks up the matching video file path in the database and plays it, interrupting the default video. I also plan to improve the UI with a feature for creating default playlists.
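The same lookup-and-play logic the flow implements can be sketched in a few lines of Python. This is illustrative only: a dict stands in for the MySQL lookup, and the `cvlc` invocation assumes VLC is installed on the Pi:

```python
import subprocess

def pick_video(keyword, mapping, default_video):
    """Return the video path for a keyword, falling back to the default.
    In the real flow this lookup is a MySQL query; a dict stands in here."""
    return mapping.get(keyword, default_video)

def play(path):
    """Launch VLC headless on the selected file. --play-and-exit returns
    control when the clip finishes, so the default video can resume."""
    return subprocess.Popen(["cvlc", "--fullscreen", "--play-and-exit", path])
```

The flow must also stop the currently running player before starting the new clip; in Node-RED that is a matter of killing the previous VLC process (or using a single VLC instance controlled over its RC interface) before invoking the next one.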
Demo