Researchers Highlight Vulnerabilities and Security Concerns in Current Approaches to TinyML

Limited resources and a lack of specialized security hardware mean more research is needed into how to protect tinyML deployments, the team claims.

Researchers at Harvard University, the University of Southern California, and Draper Laboratory have called for "essential" work on the security side of on-device machine learning (ML) and artificial intelligence (AI) on resource-constrained devices like microcontrollers — known as tinyML.

"Tiny machine learning (tinyML) systems, which enable machine learning inference on highly resource-constrained devices, are transforming edge computing but encounter unique security challenges," the researchers argue. "These devices, restricted by RAM and CPU capabilities two to three orders of magnitude smaller than conventional systems, make traditional software and hardware security solutions impractical. The physical accessibility of these devices exacerbates their susceptibility to side-channel attacks and information leakage."

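Side-channel attacks of that kind need no software bug at all: an attacker with physical access infers secrets from observable behavior such as execution time or power draw. As a rough illustration, not drawn from the paper, the C sketch below shows a classic timing leak and its constant-time fix; check_key_leaky, check_key_ct, and the key-checking scenario are hypothetical.

#include <stddef.h>
#include <stdint.h>

/*
 * Naive comparison: bails out at the first mismatching byte, so
 * execution time reveals how many leading bytes were correct. On a
 * physically accessible MCU, that lets an attacker recover a secret
 * one byte at a time by timing repeated guesses.
 */
int check_key_leaky(const uint8_t *input, const uint8_t *key, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (input[i] != key[i]) {
            return 0;  /* early exit: runtime depends on the secret */
        }
    }
    return 1;
}

/*
 * Constant-time variant: always processes every byte, so runtime is
 * independent of the guess. Even this cheap fix costs cycles, and
 * heavier countermeasures scale far worse on tinyML-class hardware.
 */
int check_key_ct(const uint8_t *input, const uint8_t *key, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= (uint8_t)(input[i] ^ key[i]);
    }
    return diff == 0;
}
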
TinyML brings other challenges, too, the researchers claim, including the presence of model weights on-device, which may encode sensitive data and are accessible to anyone who can dump the firmware. In most cases, the vulnerabilities and attack surfaces the researchers highlight aren't exclusive to tinyML devices; the problem is exacerbated, though, by the limited resources of the underlying hardware, which lacks the computational power, memory, and storage capacity to run mitigations alongside its primary workload.
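
That exposure is easy to picture in practice. While the paper doesn't single out any one toolchain, TensorFlow Lite for Microcontrollers, a common tinyML runtime, documents converting a .tflite model into a C array (for example with xxd -i) that is compiled directly into the firmware image, so a flash dump recovers the serialized graph and weights verbatim. The sketch below assumes that convention; the byte values and the g_model_data name are placeholders.

#include <stdint.h>

/*
 * Placeholder for a model embedded in firmware the way TensorFlow
 * Lite for Microcontrollers suggests (`xxd -i model.tflite`). A real
 * array holds the entire serialized model, weights included, in
 * read-only flash alongside the application code.
 */
const uint8_t g_model_data[] = {
    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  /* "TFL3" FlatBuffer magic */
    /* ...remaining serialized graph and weights... */
};
const unsigned int g_model_data_len = sizeof(g_model_data);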

"We found that the most robust and commonly used countermeasures for SCAs and FIAs [Side-Channel Attacks and Fault-Injection Attacks] are too expensive for tinyML devices in terms of die area and computational overhead," the team concludes. "In addition, many of the built-in countermeasures on commodity MCUs [Microcontroller Units] do not offer much robustness to the attacks we covered."

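The paper treats those countermeasures at survey level rather than in code, but a hypothetical sketch shows why fault-injection defenses strain tinyML budgets: temporal redundancy, a textbook mitigation, simply runs the computation twice and compares the results, roughly doubling latency and energy. run_inference and fault_response below are invented stand-ins for an inference kernel and a tamper response.

#include <stdint.h>

/* Hypothetical inference kernel; stands in for any tinyML workload. */
extern int32_t run_inference(const int8_t *input);

/* Hypothetical tamper response, e.g. reset the MCU or wipe secrets. */
extern void fault_response(void);

/*
 * Temporal redundancy: compute twice and compare. A single glitched
 * run is caught by the mismatch, at the cost of roughly 2x latency
 * and energy, overhead a duty-cycled, battery-powered tinyML device
 * often cannot absorb.
 */
int32_t run_inference_hardened(const int8_t *input) {
    int32_t first  = run_inference(input);
    int32_t second = run_inference(input);
    if (first != second) {
        fault_response();  /* never return a potentially faulted result */
    }
    return first;
}
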
In short: there's more work to be done. The team suggests additional research is needed to understand how existing security measures and tinyML models interact on resource-constrained hardware, and to benchmark those interactions; still more work is needed to validate the robustness of tinyML models against the attack types the team highlights, in order to "identify countermeasures that must be redesigned or replaced to be more resource efficient for use in tinyML deployments."

The team's work is available as a preprint, under open-access terms, on Cornell's arXiv server.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.