Video-Style Compression Brings Federated Learning to Smartphones, Other Bandwidth-Limited Devices

Designed to work around the bandwidth requirements of privacy-preserving federated learning, this new approach cuts the communication cost.

A team of computer scientists at North Carolina State University (NC State) has come up with an approach for extending the benefits of federated learning — in particular its protections for data privacy — to wireless devices without overloading the network.

Federated learning, in which multiple client devices are trained using their own datasets to create individual models that are then combined by a central server into a single high-performance hybrid model, has a range of benefits over traditional central machine learning training approaches — not least of which is that each device can keep full control of its own data.
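That train-locally-then-aggregate loop can be sketched in a few lines. The following is a minimal, illustrative sketch of federated averaging on a toy linear model — the function names, the gradient-descent client update, and the three simulated "devices" are assumptions for demonstration, not the paper's actual setup:

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's training pass: plain gradient steps on a linear model
    (an illustrative stand-in for whatever model each device trains)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Each client trains locally on its private data; only the resulting
    model weights -- never the raw data -- go to the server, which averages
    them into the new hybrid model (federated averaging)."""
    local_models = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(local_models, axis=0)

# Three "devices", each holding a private dataset drawn from the same task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):          # repeated rounds refine the hybrid model
    w = federated_round(w, clients)
```

Note that the server only ever sees model weights; each device's dataset stays on the device, which is the privacy property the article describes.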

"[Federated learning] can allow the overall AI system to improve its performance without compromising the privacy of the data being used to train the system," explains Chau-Wai Wong, assistant professor and co-author of a new paper on the technique. "For example, you could draw on privileged patient data from multiple hospitals in order to improve diagnostic AI tools, without the hospitals having access to data on each other’s patients."

The biggest drawback to federated learning is simple: Bandwidth. The process has the client devices sending their models to the server, and the server sending the improved hybrid model back to each client device — a process which repeats in order to refine the model. For wireless devices like smartphones, and for low-bandwidth edge devices, there's simply too much data whizzing around.

The solution: borrow a trick from streaming media and use predictive coding, which compresses the data by discarding the bulk of it in a way the recipient can reverse.

"We were trying to think of a way to expedite wireless communication for federated learning," Wong recalls, "and drew inspiration from the decades of work that has been done on video compression to develop an improved way of compressing data."

"Our technique makes federated learning viable for wireless devices where there is limited available bandwidth," adds lead author Kai Yue. "For example, it could be used to improve the performance of many AI programs that interface with users, such as voice-activated virtual assistants."

The team's approach works by allowing the clients to compress their model data into a series of compact packets that the receiving server can reconstruct, reducing bandwidth requirements by up to 99 percent — though, in its current incarnation, those savings run only one way: the server still sends the full, uncompressed hybrid model back to the clients in the traditional manner.
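The core predictive-coding idea — transmit only what the receiver cannot already predict — can be illustrated with a toy sketch. Here both sides predict the new weights from the previous round's model, and the client sends only the largest residual entries; the `keep_fraction`, the "no change" predictor, and the top-k residual selection are simplifying assumptions for illustration, not the paper's actual coding scheme:

```python
import numpy as np

def compress(weights, prediction, keep_fraction=0.01):
    """Sender side: encode only the residual between the actual weights and
    what the receiver can already predict, keeping just the largest entries.
    Retaining ~1% of entries mirrors the reported ~99% bandwidth saving."""
    residual = weights - prediction
    k = max(1, int(keep_fraction * residual.size))
    idx = np.argsort(np.abs(residual))[-k:]   # indices of top-k residuals
    return idx, residual[idx]                 # the compact "packet"

def decompress(prediction, idx, values):
    """Receiver side: rebuild the weights from its own prediction plus the
    transmitted residual entries -- the reversal the article describes."""
    reconstructed = prediction.copy()
    reconstructed[idx] += values
    return reconstructed

rng = np.random.default_rng(1)
prev_weights = rng.normal(size=10_000)        # last round's model, shared state
new_weights = prev_weights + 0.01 * rng.normal(size=10_000)  # small local update

# Both sides predict "no change" from the previous round's weights.
idx, vals = compress(new_weights, prev_weights)
approx = decompress(prev_weights, idx, vals)

bytes_sent = idx.nbytes + vals.nbytes         # size of the compact packet
bytes_full = new_weights.nbytes               # size of an uncompressed send
```

Because successive rounds change the model only slightly, the residual is mostly near zero, which is exactly why prediction-based compression from video coding transfers well to this setting.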

The team's work is available under closed-access terms following its publication in the IEEE Journal of Selected Topics in Signal Processing; an open-access preprint is available on Cornell's arXiv.org.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.