PyTorch 1.3 Released, Brings Open Source Machine Learning to Android and iOS Mobile Devices
Major new release also boasts additional tools and libraries, plus support for Google's Cloud Tensor Processing Units (TPUs) and the Alibaba Cloud.
The PyTorch project has officially unveiled version 1.3 of its popular open source Python and C++ machine learning library, bringing with it an experimental release of PyTorch Mobile, which delivers machine learning to Android and iOS mobile devices through an end-to-end workflow.
"PyTorch continues to gain momentum because of its focus on meeting the needs of researchers, its streamlined workflow for production use, and most of all because of the enthusiastic support it has received from the AI community," the team writes by way of retrospective. "PyTorch citations in papers on ArXiv grew 194 percent in the first half of 2019 alone, as noted by O’Reilly, and the number of contributors to the platform has grown more than 50 percent over the last year, to nearly 1,200. Facebook, Microsoft, Uber, and other organisations across industries are increasingly using it as the foundation for their most important machine learning (ML) research and production workloads."
PyTorch 1.3 improves upon previous releases in a number of areas, but introduces three key new features which the development team warns are presently for experimental use only. The first is named tensors, originally proposed by Cornell University's Sasha Rush. "Named tensors, with named dimensions [...] eliminates the need for indexing, dim arguments, einsum-style unpacking, and documentation-based coding," Rush claims of the still-optional approach.
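For a flavour of what the named-dimension style looks like in practice, a minimal sketch follows; the dimension names ('N', 'C', 'H', 'W') and the tensor shapes are illustrative choices rather than anything mandated by the release.

```python
import torch

# Experimental named tensor API: dimensions carry names, so reductions and
# reorderings can refer to 'C' rather than to positional index 1.
imgs = torch.randn(4, 3, 32, 32, names=('N', 'C', 'H', 'W'))

# Reduce over the channel dimension by name instead of by index.
per_pixel_mean = imgs.mean('C')
print(per_pixel_mean.names)   # ('N', 'H', 'W')

# Reorder dimensions by name rather than by a memorised permutation.
channels_last = imgs.align_to('N', 'H', 'W', 'C')
print(channels_last.names)    # ('N', 'H', 'W', 'C')
```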
The second feature is quantisation. "To support more efficient deployment on servers and edge devices, PyTorch 1.3 now supports 8-bit model quantisation using the familiar eager mode Python API," the team explains. "Quantisation refers to techniques used to perform computation and storage at reduced precision, such as 8-bit integer. This currently experimental feature includes support for post-training quantisation, dynamic quantisation, and quantisation-aware training."
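As a rough sketch of the eager mode API described above, the snippet below applies post-training dynamic quantisation to a toy model; the model and its layer sizes are purely illustrative.

```python
import torch
import torch.nn as nn

# Illustrative float model; in practice this would be a trained network.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantisation: Linear layers are swapped for versions
# whose weights are stored and computed as 8-bit integers.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference runs through the quantised layers transparently.
print(quantized_model(torch.randn(1, 128)).shape)  # torch.Size([1, 10])
```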
The final headline feature is perhaps the biggest leap: PyTorch Mobile. "Running ML on edge devices is growing in importance as applications continue to demand lower latency," the team explains. "It is also a foundational element for privacy-preserving techniques such as federated learning. To enable more efficient on-device ML, PyTorch 1.3 now supports an end-to-end workflow from Python to deployment on iOS and Android."
The experimental PyTorch Mobile release initially focuses on end-to-end development. The team has indicated that future releases will include build-level size optimisation, performance improvements, and a high-level application programming interface covering common preprocessing and integration tasks for machine-learning applications such as computer vision and natural-language processing.
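The Python half of that workflow amounts to converting a model to TorchScript and serialising it for the mobile runtime to load. A minimal sketch follows; the choice of a torchvision ResNet and the file name model.pt are illustrative assumptions rather than part of the release itself.

```python
import torch
import torchvision

# Illustrative model; any TorchScript-compatible network would do.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# Trace the model into TorchScript using an example input of the expected shape.
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)

# The serialised file is bundled with the app and loaded on-device through the
# PyTorch Mobile libraries for Android (Java) or iOS (Objective-C/Swift).
traced_script_module.save("model.pt")
```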
At the same time, the PyTorch project has launched Captum, a model interpretability and analysis tool; CrypTen, a research platform designed to address the privacy concerns of machine learning in the cloud; new multimodal AI tools, including the Detectron2 object detection library and speech extensions for fairseq; and enhanced cloud platform support, including the ability to accelerate training on Google's Cloud Tensor Processing Units (TPUs) and to run workloads on the Alibaba Cloud.
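For a flavour of what Captum offers, the sketch below runs Integrated Gradients over a toy network; the tiny model, random input, and target class are stand-ins for a real trained model and dataset.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Stand-in model and input; Captum expects an ordinary nn.Module.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
model.eval()
inputs = torch.randn(1, 8, requires_grad=True)

# Integrated Gradients attributes the score of the chosen target class back
# to the individual input features.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions.shape)  # torch.Size([1, 8])
```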
More information on the latest release is available on the PyTorch blog.