Qualcomm Launches Its AI Hub, Offering Optimized On-Device AI Models with Four Times the Performance
With support for CPU, GPU, and NPU operation, the Qualcomm AI Hub models — more than 75 at launch — aim for high performance.
Qualcomm has announced the launch of the Qualcomm AI Hub, a model zoo offering more than 75 optimized artificial intelligence (AI) models for on-device operation on selected Qualcomm and Snapdragon platforms — delivering, the company claims, up to four times the inference performance of unoptimized models.
"With Snapdragon 8 Gen 3 for smartphones and Snapdragon X Elite for PCs, we sparked commercialization of on-device AI at scale. Now with the Qualcomm AI Hub, we will empower developers to fully harness the potential of these cutting-edge technologies and create captivating AI-enabled apps," says Qualcomm's Durga Malladi of the offering.
"The Qualcomm AI Hub provides developers with a comprehensive AI model library to quickly and easily integrate pre-optimized AI models into their applications, leading to faster, more reliable and private user experiences," Malladi continues.
At launch, the Qualcomm AI Hub features over 75 pre-optimized models selected from a range of popular AI and generative AI offerings — including the Whisper speech-recognition model, ControlNet, Stable Diffusion, and Baichuan 7B.
These optimizations include support for the Qualcomm AI Engine, the company has confirmed, allowing the models to run on the CPU, GPU, and NPU for a claimed fourfold performance gain over unoptimized equivalents. Qualcomm says it will add new models over time, along with support for additional platforms and operating systems.
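For developers wanting to try the library programmatically, the short sketch below shows how one of the pre-optimized models might be pulled in through the companion qai_hub_models Python package. The package name, the mobilenet_v2 identifier, and the Model.from_pretrained() convention are assumptions drawn from Qualcomm's public repositories rather than from this announcement, and may differ in current releases.

    # Minimal sketch, assuming the companion qai_hub_models package
    # (pip install qai-hub-models); the model identifier and the
    # Model.from_pretrained() convention are assumptions and may differ.
    import torch
    from qai_hub_models.models.mobilenet_v2 import Model

    # Load one of the pre-optimized models published through the AI Hub.
    model = Model.from_pretrained()
    model.eval()

    # Run a quick local sanity check before exporting for on-device use.
    dummy_input = torch.rand(1, 3, 224, 224)
    with torch.no_grad():
        output = model(dummy_input)
    print(output.shape)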
At the same time, Qualcomm has announced a demonstration of the Large Language and Vision Assistant (LLaVA), a large multimodal model (LMM) with more than seven billion parameters, running "at a responsive token rate" on-device, allowing it not only to respond to written queries but also to analyze images. For generative AI fans, the company has also showcased Stable Diffusion with Low-Rank Adaptation (LoRA) running on an Android smartphone, adapting the generated images to specific artistic styles.
The company's demos are running from today to 29 February at Mobile World Congress (MWC) in Barcelona, Booth #3E10; the Qualcomm AI Hub is available on a dedicated website, along with releases on GitHub and on Hugging Face. The models can be deployed via TensorFlow Lite or Qualcomm's AI Engine Direct SDK, or using cloud-hosted hardware via the AI Hub itself.
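As a rough illustration of that last, cloud-hosted route, the sketch below follows the qai_hub Python client's getting-started flow: trace a PyTorch model, compile it for a hosted Snapdragon device, and profile it on real hardware. The device name, function names, and parameters are taken from the client's public documentation but should be treated as assumptions that may have changed since launch.

    # Illustrative sketch of the cloud-hosted workflow via the qai_hub client
    # (pip install qai-hub, with an API token configured); names and signatures
    # are assumptions based on the public getting-started documentation.
    import torch
    import qai_hub as hub
    from torchvision.models import mobilenet_v2

    # Trace a PyTorch model so it can be submitted for compilation.
    model = mobilenet_v2(weights="DEFAULT").eval()
    example_input = torch.rand(1, 3, 224, 224)
    traced_model = torch.jit.trace(model, example_input)

    # Compile for a cloud-hosted Snapdragon device (device name is illustrative).
    device = hub.Device("Samsung Galaxy S23")
    compile_job = hub.submit_compile_job(
        model=traced_model,
        device=device,
        input_specs={"image": (1, 3, 224, 224)},
    )

    # Profile the compiled model on the hosted hardware; results such as latency
    # and compute-unit utilization appear in the AI Hub dashboard.
    profile_job = hub.submit_profile_job(
        model=compile_job.get_target_model(),
        device=device,
    )

In this flow, the artifact produced by the compile step would then be deployed in an application through TensorFlow Lite or the AI Engine Direct SDK, per the options Qualcomm describes, though the exact packaging will depend on the target platform.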