
Sensitive Skin

AI-on-skin enables artificial skin patches to provide near real-time feedback with on-body FPGAs.

Nick Bild
4 years ago • Machine Learning & AI
Forearm patch — stylish, right? (📷: A. Balaji et al.)

Artificial skin has applications in health monitoring, rehabilitation, prosthetics, and virtual reality. These sensor-laden, skin-like patches are worn directly on the body and provide both input sensing and feedback to the user. The large volume of data these sensors produce is typically processed by machine learning algorithms to extract useful information.

Artificial skins have been used in robotics to improve touch sensing with the help of convolutional and recurrent neural networks. Similar approaches have been used to recognize gestures or detect biomarkers in humans. While the sensing takes place on-body, the compute power for these networks resides in off-body computers, such as smartphones or cloud resources. This means that while artificial skin sensors can detect input in real time, there is a noticeable delay in responding to those inputs.

An engineering team at the National University of Singapore recently published a paper that charts a path toward on-body AI inference for wearable artificial skin patches. Their prototype is a forearm-sized skin patch that runs inferences on an off-the-shelf FPGA.

The hardware combines the open-source Muca artificial skin with a capacitance sensing board, an I2C-to-UART interface that carries data from the capacitance board to the FPGA, and a GPIO interface that lets multiple skin patches communicate with one another. A tiny Digilent Cmod A7 FPGA module performs the neural network acceleration. The artificial skin itself contains an array of 12 by 21 touch points.
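To make the data pipeline concrete, here is a minimal Python sketch of capturing one 12-by-21 capacitance frame over a serial link on a host machine. The port name, baud rate, and one-byte-per-touch-point frame encoding are all assumptions for illustration; the paper does not describe the wire format.

```python
# Hypothetical host-side capture of one capacitance frame. The port,
# baud rate, and one-byte-per-touch-point encoding are illustrative
# assumptions, not details from the paper.
import numpy as np
import serial  # pyserial

ROWS, COLS = 12, 21          # touch-point grid reported in the paper
FRAME_BYTES = ROWS * COLS    # assuming one byte per touch point

def read_frame(port: serial.Serial) -> np.ndarray:
    """Read one full frame from the capacitance board and reshape
    it into the 12x21 sensor grid."""
    raw = port.read(FRAME_BYTES)
    if len(raw) != FRAME_BYTES:
        raise TimeoutError("incomplete frame")
    return np.frombuffer(raw, dtype=np.uint8).reshape(ROWS, COLS)

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    frame = read_frame(port)
    print(frame.shape)  # (12, 21)
```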

The team tested their prototype on recognizing letters of the English alphabet traced on the skin. Training data was collected via UART, and the model was trained offline with PyTorch. Custom software then converted this model into a bitstream for a Shenjing AI accelerator implemented on the FPGA. With one hundred training examples per letter, the system achieved an overall inference accuracy of 96.7%.
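The offline step is standard supervised training. Below is a minimal PyTorch sketch of a 26-way classifier over the 12-by-21 frames; the architecture, layer sizes, and hyperparameters are placeholder assumptions, since the paper's exact model (later compiled into a Shenjing bitstream by the team's custom tooling) is not reproduced here.

```python
# Illustrative offline training loop in PyTorch. The architecture and
# hyperparameters are assumptions; the authors' actual model is then
# converted into a bitstream for the Shenjing accelerator on the FPGA.
import torch
import torch.nn as nn

class LetterNet(nn.Module):
    """Small classifier mapping a 12x21 touch frame to one of 26 letters."""
    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                 # (batch, 12, 21) -> (batch, 252)
            nn.Linear(12 * 21, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),  # logits over the alphabet
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train(model: nn.Module, loader, epochs: int = 20) -> None:
    """Train on (frame, label) batches; labels are integer class indices."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for frames, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(frames), labels)
            loss.backward()
            opt.step()
```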

In another validation, the team trained a model to recognize touch gestures, including clicks, swipes, zooms, and drags. Again collecting one hundred samples per class, they measured an overall accuracy of 94.6%.

To assess the advantages of this AI-on-skin approach over off-body processing, the team streamed sensor data over Bluetooth Low Energy to a laptop for neural network inference. On-body processing proved 20 times faster in response time. They also ran tests in which computation was performed on-body by a Teensy 4.0 microcontroller development board; against it, the FPGA-based approach was 35 times faster.
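A comparison like this comes down to end-to-end response time: how long passes between a touch arriving and a classification coming back. A simple host-side harness for measuring that might look like the sketch below; the inference callable is a placeholder, not the authors' benchmark code.

```python
# Toy latency harness. `run_inference` stands in for a full round trip
# (e.g., a BLE stream to a laptop and back, or a query to the on-body
# FPGA); neither transport is implemented here.
import time
from typing import Callable

def mean_latency_ms(run_inference: Callable[[], None], trials: int = 100) -> float:
    """Average end-to-end latency over `trials` repetitions, in milliseconds."""
    start = time.perf_counter()
    for _ in range(trials):
        run_inference()
    return (time.perf_counter() - start) / trials * 1e3
```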

The study authors are currently working on extending the technique to a full-body suit consisting of multiple patches. They have also taken steps to replace the FPGA with an application-specific integrated circuit, which is expected to further reduce inference times and power consumption. Once these upgrades are complete, they plan to run larger-scale trials to bring the device another step closer to real-world use.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.