Cifer's Voice Recognition Robot Runs a TensorFlow Lite Model on a Seeed Studio XIAO nRF52840 Sense
Trained in Edge Impulse Studio, this open source robot recognizes four spoken commands and displays them on-screen.
Pseudonymous maker "Cifer" has released a design for a simple open source voice-controlled robot, the XIAO Robot, powered by a Nordic Semi nRF52840 system-on-chip and a voice recognition TensorFlow Lite model created using Edge Impulse.
"The Seeed Studio XIAO nRF52840 Sense has Bluetooth Low Energy (BLE) version 5 wireless capability and it's able to operate with low power consumption [and features] on-board IMU [Inertial Measurement Unit] and PDM [Pulse Density Modulation microphone," Cifer explains. "It can be your best tool for embedded machine learning projects."
To prove that, Cifer has designed a simple robot with surprisingly few components: a two-motor, two-wheel chassis; a breadboard housing the Seeed Studio XIAO nRF52840 Sense development board and a dual-motor driver; and an SSD1306-based 0.96" OLED display panel for visual feedback.
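For a sense of how little glue code a build like this needs, the sketch below is a rough outline of bringing up the OLED and motor driver on the XIAO; the pin choices, the Adafruit_SSD1306 library, and the H-bridge-style wiring are illustrative assumptions rather than details taken from Cifer's design.

// Rough outline of the hardware bring-up on the XIAO nRF52840 Sense.
// Pin numbers, the Adafruit_SSD1306 library, and the H-bridge wiring are
// illustrative assumptions, not Cifer's actual layout.
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);  // 0.96" SSD1306 over I2C

const int MOTOR_LEFT_FWD  = 2;   // assumed dual-motor-driver inputs
const int MOTOR_LEFT_REV  = 3;
const int MOTOR_RIGHT_FWD = 4;
const int MOTOR_RIGHT_REV = 5;

void setup() {
  pinMode(MOTOR_LEFT_FWD, OUTPUT);
  pinMode(MOTOR_LEFT_REV, OUTPUT);
  pinMode(MOTOR_RIGHT_FWD, OUTPUT);
  pinMode(MOTOR_RIGHT_REV, OUTPUT);

  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);  // 0x3C is the usual I2C address
  display.clearDisplay();
  display.setTextSize(2);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  display.println("Listening...");
  display.display();
}

void loop() {
  // keyword spotting and motor control go here
}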
The heart of the system, though, is the machine learning model running on the low-power XIAO board. Built in TensorFlow Lite, the variant of the TensorFlow framework designed with resource-constrained TinyML microcontrollers in mind, and trained using Edge Impulse Studio, the model can recognize four different voice commands, displaying the recognized command on the OLED panel for confirmation while also carrying it out.
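On the software side, an Edge Impulse Arduino export is typically polled along the lines of the sketch below; the header name, the audio-capture helper, and the display and motor functions are placeholders, with only run_classifier() and its result structure coming from the Edge Impulse C++ SDK, so treat this as a sketch rather than Cifer's actual firmware.

// Rough outline of polling an Edge Impulse keyword-spotting model.
// xiao_robot_inferencing.h, fill_audio_window(), show_on_oled() and
// drive_motors() are assumed names, not taken from Cifer's repository.
#include <xiao_robot_inferencing.h>
#include <string.h>

static float audio_window[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

void fill_audio_window(float *buf, size_t len);   // copy latest PDM mic samples
void show_on_oled(const char *label);             // print the keyword on the SSD1306
void drive_motors(const char *label);             // map the keyword to wheel motion

// Callback the classifier uses to pull samples out of the audio window
static int get_audio_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, audio_window + offset, length * sizeof(float));
  return 0;
}

void classify_and_act() {
  fill_audio_window(audio_window, EI_CLASSIFIER_RAW_SAMPLE_COUNT);

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
  signal.get_data = &get_audio_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
    return;  // classification failed; try again on the next pass
  }

  // Pick the highest-scoring of the trained keywords
  size_t best = 0;
  for (size_t ix = 1; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (result.classification[ix].value > result.classification[best].value) {
      best = ix;
    }
  }

  show_on_oled(result.classification[best].label);
  drive_motors(result.classification[best].label);
}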
"Well, if you ask me," Cifer says following test of the robot's capabilities, "a microcontroller of this size did the job very well without the need for an external microphone."
A video demonstrating the project is available on the CiferTech YouTube channel, while schematics and source code for the robot and voice-control model are available on Cifer's GitHub repository under the permissive MIT license.