The Particle Tachyon ships with Ubuntu 20.04 by default, which fully supports its hardware and add-ons. Particle has added experimental support for Ubuntu 24.04, but that release does not yet support key components such as the 5G cellular modem and GPS, so they will not work for now. Some support exists for Qualcomm AI Hub models via Docker containers, but our goal is to deploy a custom model, trained on our own dataset and natively compiled for the Qualcomm chip. To use the Qualcomm AI Accelerator together with the fast 5G connection for low-latency on-device AI, you need the Qualcomm AI Engine Direct SDK (QNN SDK). Unfortunately, there is no official package source for Ubuntu 20.04 that provides this SDK, which blocks access to these features on the stock OS. That said, the SDK and runtime can be installed through a few manual steps, working around the missing official support at the cost of some extra effort. In this project, we install just the essential packages needed to quickly load and run a QNN-supported ML model trained in Edge Impulse.
Hardware
We are using a Particle Tachyon with the Qualcomm Dragonwing QCM6490 SoC. The Dragonwing QCM6490 features an octa-core Qualcomm Kryo CPU, a Qualcomm Adreno 643 GPU, and a Qualcomm Hexagon 770 DSP containing an AI accelerator capable of delivering 12 TOPS.
We will be using an Elecom 5MP Webcam.
To configure your Particle Tachyon, follow the setup guidelines provided at https://developer.particle.io/tachyon/setup/install-setup. Once completed, SSH access to the device over WiFi should be fully enabled and operational.
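Once setup completes, you can confirm connectivity with a quick SSH login from your development machine; the username and IP address below are placeholders for whatever your Particle setup reports.
$ ssh <username>@<tachyon-ip-address>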
Qualcomm AI Engine Direct SDK Installation
To download and install the required packages, begin by setting them up on a host Linux computer. I used a Linux machine running Ubuntu 22.04 for this process; a virtual machine should also work if you lack direct access to physical Linux hardware, although I have not tested that.
Installation on Host Linux Machine
Navigate to the Qualcomm Package Manager Portal at https://qpm.qualcomm.com. You'll need to log in to access it. If you don't have an account yet, click the Sign up here link on the login page to create one.
After logging in, click on Tools, as illustrated in the image below.
Set the System OS to "Linux" and enter "Package Manager 3" in the search box. Click Qualcomm Package Manager 3 in the right panel.
Select the OS Type and Version as shown below and click on the Download button.
Once the Debian package download finishes, run the following commands to install the package manager.
$ cd Downloads
$ sudo dpkg -i --force-overwrite QualcommPackageManager3.3.0.126.7.Linux-x86.deb
Return to the Tools page, then type "AI Stack" into the search field while ensuring Linux is selected as the System OS. In the right panel, expand the Qualcomm AI Stack section and select the Qualcomm AI Engine Direct SDK.
Select the OS Type and Version as shown below and click on the Download button.
Once the download finishes, run the following command to log in to the package manager from the command line.
$ qpm-cli --login
Should you encounter a login failure—as I did in my attempt—review the logs for troubleshooting details.
$ cat /var/tmp/qcom/qpmcli/logs/QPMCLI_20251009_235420_00.log
[2025-Oct-09 23:54:35.600591] [info] Login failed
[2025-Oct-09 23:54:35.600752] [error] Login failed. Error: Please agree to the Product Kit License Agreement for access. You can find the agreement at https://www.qualcomm.com/agreements : 400
I resolved the issue by visiting https://www.qualcomm.com/agreements to accept the necessary agreement, then re-executing the aforementioned command.
Run the following commands to activate and install the Qualcomm AI Engine Direct SDK.
$ qpm-cli --license-activate qualcomm_ai_engine_direct
$ qpm-cli --extract qualcomm_ai_engine_direct.2.31.0.250130.Linux-AnyCPU.qik
The SDK will be installed in the /opt/qcom/aistack/qairt/ directory on the host machine. Next, we will select the shared libraries pre-cross-compiled for the target architecture (the Qualcomm SoC in this case). Use SCP to transfer the following files from the /opt/qcom/aistack/qairt/2.31.0.250130/lib/ directory on your host machine to the Particle Tachyon; an example scp invocation is shown after the file list.
aarch64-ubuntu-gcc9.4/libQnnDsp.so
aarch64-ubuntu-gcc9.4/libQnnDspV66Stub.so
aarch64-ubuntu-gcc9.4/libQnnGpu.so
aarch64-ubuntu-gcc9.4/libQnnHtpPrepare.so
aarch64-ubuntu-gcc9.4/libQnnHtp.so
aarch64-ubuntu-gcc9.4/libQnnHtpV68Stub.so
aarch64-ubuntu-gcc9.4/libQnnSaver.so
aarch64-ubuntu-gcc9.4/libQnnSystem.so
aarch64-ubuntu-gcc9.4/libQnnTFLiteDelegate.so
hexagon-v66/unsigned/libQnnDspV66Skel.so
hexagon-v79/unsigned/libQnnHtpV79Skel.so
hexagon-v75/unsigned/libQnnHtpV75Skel.so
hexagon-v73/unsigned/libQnnHtpV73Skel.so
hexagon-v69/unsigned/libQnnHtpV69Skel.so
hexagon-v68/unsigned/libQnnHtpV68Skel.so
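For reference, the transfer can be scripted from the lib/ directory on the host. The commands below are only a sketch: the username and IP address are placeholders for your own setup, ~/qnn-libs is an arbitrary staging directory on the Tachyon, and the wildcards copy a superset of the files listed above, which is harmless here.
$ cd /opt/qcom/aistack/qairt/2.31.0.250130/lib/
$ ssh <username>@<tachyon-ip-address> "mkdir -p ~/qnn-libs"
$ scp aarch64-ubuntu-gcc9.4/libQnn*.so <username>@<tachyon-ip-address>:~/qnn-libs/
$ scp hexagon-v*/unsigned/libQnn*Skel.so <username>@<tachyon-ip-address>:~/qnn-libs/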
Installation on Particle Tachyon
Copy the files from the previously transferred location to the /usr/lib/ directory using superuser privileges.
$ sudo cp \
libQnnDsp.so \
libQnnDspV66Stub.so \
libQnnGpu.so \
libQnnHtpPrepare.so \
libQnnHtp.so \
libQnnHtpV68Stub.so \
libQnnSaver.so \
libQnnSystem.so \
libQnnTFLiteDelegate.so \
/usr/lib
Also, copy the following files to the /usr/lib/rfsa/adsp/ directory.
$ sudo cp \
libQnnDspV66Skel.so \
libQnnHtpV68Skel.so \
libQnnHtpV69Skel.so \
libQnnHtpV73Skel.so \
libQnnHtpV75Skel.so \
libQnnHtpV79Skel.so \
/usr/lib/rfsa/adsp
Verify that all required files are included to ensure model inference functions correctly on the AI accelerator.
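A simple way to do this, assuming the files were copied as shown above, is to list the QNN libraries in both destination directories and compare against the file list from the host.
$ ls /usr/lib/libQnn*.so
$ ls /usr/lib/rfsa/adsp/libQnn*Skel.so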
Data Acquisition
We need to sign up for an account at Edge Impulse Studio and create a new project for data processing, model training, and deployment. For demonstration purposes, we selected colored foam blocks for object detection.
We attached a USB webcam to the Particle Tachyon via a USB-C to USB-A adapter. To verify that the webcam was detected, use the following command.
$ lsusb
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 002: ID 056e:701b Elecom Co., Ltd ELECOM 5MP Webcam
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
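Optionally, you can also confirm that the webcam exposes a V4L2 video device node; the second command assumes the v4l-utils package is installed.
$ ls /dev/video*
$ v4l2-ctl --list-devices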
We must install the Edge Impulse CLI tool by executing the following commands.
$ wget https://cdn.edgeimpulse.com/firmware/linux/setup-edge-impulse-qc-linux.sh
$ sh setup-edge-impulse-qc-linux.sh
Run the command below and follow the onscreen instructions to connect the device to Edge Impulse Studio for data collection.
$ edge-impulse-linux
After capturing 102 images with varying orientations and a mix of different colored foam blocks, we labeled them using the Labeling Queue tab on the Data Acquisition page in Edge Impulse Studio. The blocks are categorized into four labels based on their color: red, green, blue, and yellow.
We can view the dataset listings as shown below.
For model development, we need to design an impulse, which is a custom processing pipeline that combines signal processing and machine learning blocks. Go to the Impulse Design > Create Impulse page, click Add a processing block, and choose Image, which preprocesses and normalizes image data. On the same page, click Add a learning block and choose Object Detection (Images). We are using a 320x320 image size. Now, click the Save Impulse button.
Next, go to the Impulse Design > Image page, set the Color depth parameter to RGB, and click the Save parameters button, which redirects to another page where we should click the Generate features button. Feature generation usually takes a couple of minutes to complete.
We can see the 2D visualization of the generated features in the Feature Explorer.
To train the model, navigate to the Impulse Design > Object Detection page. The training settings we selected are displayed below.
We opted for No color space augmentation in the Advanced training settings, as our use case involves detecting colored blocks, and we want to avoid any color space transformations.
We chose the latest YOLO-Pro model. Click on the Save & train button to begin training.
After training is complete, the training performance is displayed as shown below. The model achieved a 97% precision score on the training data.
Since the model will perform inference on the Qualcomm AI Accelerator on the Particle Tachyon, we selected the Linux (AARCH64 with Qualcomm QNN) option on the Deployment page.
For Model Optimizations, choose the Quantized (int8) option, as the Qualcomm AI Accelerator does not support float32 models.
Click the Build button to compile and download the EIM (Edge Impulse Model) binary.
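If you downloaded the EIM binary on your development machine rather than on the Tachyon itself, copy it to the device before running the application. The filename below matches the model used in the script later in this article, and the username and IP address are placeholders.
$ scp color-block-detection-linux-aarch64-qnn-v6.eim <username>@<tachyon-ip-address>:~/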
Application
We will use the Edge Impulse Linux SDK for Python to perform model inference on images captured from the webcam and stream the results to a web browser via HTTP. We could have used the Edge Impulse CLI to run inference by providing the model path and checked the results in a web browser; however, we decided to show how to integrate the model into our own application so the results can be used for further actions as required. Follow the instructions provided at the link below to install the Linux SDK for Python.
https://docs.edgeimpulse.com/tools/libraries/sdks/inference/linux/python
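On the Tachyon, this essentially comes down to installing the Python bindings (plus OpenCV, which the script below uses for camera capture); refer to the linked page for the full, current prerequisites.
$ pip3 install edge_impulse_linux
$ pip3 install opencv-python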
The full Python script is provided below.
inference.py
import cv2
import os
import time
import logging
import signal
import numpy as np
from threading import Thread, Condition

from Stream import StreamingOutput, StreamingHandler, StreamingServer
from edge_impulse_linux.image import ImageImpulseRunner


class GracefulKiller:
    """Converts SIGINT/SIGTERM into a KeyboardInterrupt so we can shut down cleanly."""

    kill_now = False

    def __init__(self):
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

    def exit_gracefully(self, signum, frame):
        self.kill_now = True
        raise KeyboardInterrupt("Interrupted")


doInference = False

# Bounding-box colors per label (RGB, since frames are converted to RGB before drawing)
colors = {
    "yellow": (255, 255, 0),
    "blue": (0, 0, 255),
    "green": (0, 255, 0),
    "red": (255, 0, 0),
}


def inference_thread(runner):
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    try:
        while doInference:
            ret, frame = cap.read()
            if not ret:
                continue
            img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            features, cropped = runner.get_features_from_image_auto_studio_settings(img)
            res = runner.classify(features)
            logging.info(
                f"Found {len(res['result']['bounding_boxes'])} bounding boxes ({res['timing']['dsp'] + res['timing']['classification']} ms.)"
            )
            for bb in res["result"]["bounding_boxes"]:
                logging.info(f"{bb['label']} ({bb['value']:.2f})")
                color = colors[bb["label"]]
                cropped = cv2.rectangle(
                    cropped,
                    (bb["x"], bb["y"]),
                    (bb["x"] + bb["width"], bb["y"] + bb["height"]),
                    color,
                    2,
                )
            # Push the annotated frame to the MJPEG stream
            cropped = cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR)
            _, buf = cv2.imencode(".jpg", cropped)
            output.write(buf)
    finally:
        cap.release()
        logging.info("Inference stopped")


if __name__ == "__main__":
    logging.basicConfig(
        format="%(asctime)s: %(message)s", level=logging.INFO, datefmt="%H:%M:%S"
    )
    killer = GracefulKiller()

    model_file = "./color-block-detection-linux-aarch64-qnn-v6.eim"
    runner = ImageImpulseRunner(model_file)
    model_info = runner.init()

    # Create the shared output buffer and set the run flag before the thread starts
    output = StreamingOutput()
    doInference = True
    th = Thread(target=inference_thread, args=(runner,))
    th.start()
    logging.info("Thread started")

    StreamingHandler.set_stream_output(output)
    StreamingHandler.set_page("Inference on Tachyon", 320, 320)

    port = 8888
    server = StreamingServer(("", port), StreamingHandler)
    logging.info(f"Server started at 0.0.0.0:{port}")
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        doInference = False
        th.join()
        logging.info("Inference stopped")
This is the Stream module imported in the earlier script.
Stream.py
import io
import logging
import socketserver
from http import server
from threading import Thread, Condition


class StreamingOutput(io.BufferedIOBase):
    def __init__(self):
        self.frame = None
        self.condition = Condition()

    def write(self, buf):
        with self.condition:
            self.frame = buf
            self.condition.notify_all()


class StreamingHandler(server.BaseHTTPRequestHandler):
    @classmethod
    def set_stream_output(cls, output):
        cls.output = output

    @classmethod
    def set_page(cls, title, width, height):
        cls.PAGE = f"""\
<html>
<head>
<title>{title}</title>
</head>
<body>
<h1>{title}</h1>
<img src="stream.mjpg" width="{width}" height="{height}" />
</body>
</html>
"""

    def do_GET(self):
        if self.path == "/":
            self.send_response(301)
            self.send_header("Location", "/index.html")
            self.end_headers()
        elif self.path == "/index.html":
            content = self.PAGE.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", len(content))
            self.end_headers()
            self.wfile.write(content)
        elif self.path == "/stream.mjpg":
            self.send_response(200)
            self.send_header("Age", 0)
            self.send_header("Cache-Control", "no-cache, private")
            self.send_header("Pragma", "no-cache")
            self.send_header(
                "Content-Type", "multipart/x-mixed-replace; boundary=FRAME"
            )
            self.end_headers()
            try:
                while True:
                    with self.output.condition:
                        self.output.condition.wait()
                        frame = self.output.frame
                    self.wfile.write(b"--FRAME\r\n")
                    self.send_header("Content-Type", "image/jpeg")
                    self.send_header("Content-Length", len(frame))
                    self.end_headers()
                    self.wfile.write(frame)
                    self.wfile.write(b"\r\n")
            except Exception as e:
                logging.warning(
                    "Removed streaming client %s: %s", self.client_address, str(e)
                )
        else:
            self.send_error(404)
            self.end_headers()


class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):
    allow_reuse_address = True
    daemon_threads = True
Inference
Run the command below to launch the application on the Particle Tachyon.
$ python3 inference.py
The output logs are shown in the terminal, and the output image stream can be accessed via a web browser at <Tachyon IP address>:8888. The inference speed is incredibly fast, at 2 milliseconds per frame.
This project successfully enables AI inference on the Particle Tachyon using the Qualcomm AI Accelerator with a QNN-supported model trained in Edge Impulse. Despite the lack of official SDK support for Ubuntu 20.04, manual installation of essential packages allows efficient model deployment. This project can be improved by leveraging the Particle Tachyon's 5G capabilities to develop AIoT applications.