This project was a great chance to learn how to run an object detection model on Jetson hardware with the alwaysAI platform. Thanks to Seeed Studio's rich wiki resources and alwaysAI's ready-to-use models and dashboard, getting started was convenient and easy, even for someone like me without much experience in model training or deploying to edge hardware in real scenarios.
For this project, I wanted to explore how to detect objects in a video feed on a Jetson device powered by the Jetson Xavier NX. I chose Seeed Studio's A206 carrier board with the Jetson Xavier NX module.
Set up the environment
After installing JetPack, we need to set up both the development computer and the Jetson device. First, install the alwaysAI Desktop application on your development computer; it gives you access to the alwaysAI online model zoo. Then check that OpenSSH is installed, since it is used to connect your development machine to the edge device and deploy your project. Next, check that Docker came with your JetPack install: NVIDIA JetPack includes the NVIDIA Container Runtime with Docker integration, so developers can package an application for Jetson with all its dependencies into a single container that works in any deployment environment. Finally, add your user to the "docker" group so that you don't need sudo to access Docker.
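On a typical JetPack/Ubuntu install this is usually done by running sudo usermod -aG docker $USER and then logging out and back in so the new group membership takes effect; the exact steps for this setup are in the Seeed Studio wiki linked at the end of this post.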
alwaysAI dashboard account
In the alwaysAI application I created a new project from the object detection template and added a model trained on the COCO dataset, which recognizes around 80 object classes.
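A few of those classes show up later in this demo: person, cup, tie, vase, and elephant. If you want to see the full list at runtime, the starter app already prints it after the model loads via print("Labels:\n{}\n".format(obj_detect.labels)) in the full listing at the end of this post.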
Deploy the project on the Jetson device
There is only one small tip I would like to mention here: to select the target device for this project, you need to know your device's IP address. Also, please remember your NVIDIA username and password :) because I wasted plenty of time trying to recall mine. Once the configuration is saved (using the default location for running the application), we can change the model name and the inference engine type, and then install the application.
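If you prefer working from the alwaysAI CLI instead of the desktop app, the flow is roughly aai app configure (to pick the project and target device), then aai app install and aai app start; I'm going from memory here, so double-check the commands against the alwaysAI docs or the Seeed wiki linked below.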
All of the parts that change live in the app.py file, so you may want to check it out. When the edits are done, install the application again and run it.
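For reference, the two lines in app.py that pick the model and the inference engine look like this (copied from the full listing at the end of this post):
obj_detect = edgeiq.ObjectDetection("alwaysai/ssd_mobilenet_v1_coco_2018_01_28_xavier_nx")
obj_detect.load(engine=edgeiq.Engine.TENSOR_RT)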
Here we can see the recognition results on the left side, showing the detected object class and its confidence score. It detects not only people but also many other kinds of objects, such as a cup, a tie, a vase, and so on.
Filter what you detect
We can also filter which object types the model reports: just apply the filter-predictions call to the detection results. Here I filter for elephants, so we get a precise result showing only elephant detections. The inference time is 0.018 s, which works out to roughly 55 FPS (1 / 0.018 ≈ 55). That's pretty great performance for object detection!
You may want to pay attention to this part, since alwaysAI made a small change to how the filter-predictions function is defined, so remember to update the list of predictions accordingly:
results.predictions = edgeiq.filter_predictions_by_label(results.predictions, ['elephant'])
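The second argument is simply a list of labels, so you can keep more than one class at a time. As a hypothetical variation (not part of the original app), keeping both elephants and people would look like:
results.predictions = edgeiq.filter_predictions_by_label(results.predictions, ['elephant', 'person'])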
alwaysAI provides a highly innovative and easy-to-use software platform for developers to build and deploy computer vision applications on the edge, drawing on its experience helping customers run their businesses effectively and gain a competitive advantage. It offers rich pre-trained computer vision models, APIs, and a platform for getting real-time analytics for further data visualization. For projects using a small-scale dataset, you can simply use the models and APIs from the alwaysAI library for free. For heavier workloads, you may want to try a one-year alwaysAI subscription and see whether it fits your needs.
Check out the step-by-step guide in Seeed Studio's wiki on how to get started with alwaysAI on Jetson hardware:
https://wiki.seeedstudio.com/alwaysAI-Jetson-Getting-Started/
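Here is the complete app.py used in this demo, with the model, engine, and elephant filter settings described above: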
import time
import edgeiq


def main():
    obj_detect = edgeiq.ObjectDetection(
        "alwaysai/ssd_mobilenet_v1_coco_2018_01_28_xavier_nx")
    obj_detect.load(engine=edgeiq.Engine.TENSOR_RT)

    print("Loaded model:\n{}\n".format(obj_detect.model_id))
    print("Engine: {}".format(obj_detect.engine))
    print("Accelerator: {}\n".format(obj_detect.accelerator))
    print("Labels:\n{}\n".format(obj_detect.labels))

    fps = edgeiq.FPS()

    try:
        with edgeiq.WebcamVideoStream(cam=0) as video_stream, \
                edgeiq.Streamer() as streamer:
            # Allow Webcam to warm up
            time.sleep(2.0)
            fps.start()

            # loop detection
            while True:
                frame = video_stream.read()
                results = obj_detect.detect_objects(frame, confidence_level=.5)
                results.predictions = edgeiq.filter_predictions_by_label(
                    results.predictions, ['elephant'])
                frame = edgeiq.markup_image(
                    frame, results.predictions, colors=obj_detect.colors)

                # Generate text to display on streamer
                text = ["Model: {}".format(obj_detect.model_id)]
                text.append(
                    "Inference time: {:1.3f} s".format(results.duration))
                text.append("Objects:")

                for prediction in results.predictions:
                    text.append("{}: {:2.2f}%".format(
                        prediction.label, prediction.confidence * 100))

                streamer.send_data(frame, text)

                fps.update()

                if streamer.check_exit():
                    break
    finally:
        fps.stop()
        print("elapsed time: {:.2f}".format(fps.get_elapsed_seconds()))
        print("approx. FPS: {:.2f}".format(fps.compute_fps()))
        print("Program Ending")


if __name__ == "__main__":
    main()