On May 9th 2023, together with Bryan Fletcher, I gave a webinar on how to train an American Sign Language (ASL) classification model.
This project provides a Getting Started Guide to help you quickly get up and running with the ASL demonstration on your ZUBoard and/or Ultra96-V2 platform.
Webinar Overview
If you missed the webinar, you can still watch it online by registering at the link below:
The webinar described how to train the model and deploy it for inference on Zynq UltraScale+ devices, using a series of easy-to-follow Jupyter notebooks:
- ASL Dataset – Overview
- ASL Dataset – Subset
- ASL Classifier – Checking Model Compatibility
- ASL Classifier – Transfer Learning with Fine-Tuning (see the sketch after this list)
- ASL Classifier – Deployment to DPU
- ASL Classifier – Execution on ZUBoard
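As a flavor of what the Transfer Learning notebook covers, here is a minimal sketch of transfer learning with fine-tuning on a Keras MobileNetV2 backbone. This is not the notebook's exact code; the class count and the train_ds/val_ds dataset pipelines are assumptions you would adapt to the ASL dataset.

import tensorflow as tf

num_classes = 26  # assumption: one class per ASL letter; adjust to your dataset

# Phase 1: transfer learning - freeze the ImageNet-pretrained backbone
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Phase 2: fine-tuning - unfreeze the backbone, train at a much lower learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)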
A lot of new content has been created since the webinar, including:
- A trained MobileNetV2 classifier, which benchmarks 40X faster than VGG-16
- Support for the DualCam design with a smaller DPU
- Support for DisplayPort
The webinar used the VGG-16 model as the classification use case. Since then, I have also provided an example based on the MobileNetV2 model, which is much less compute intensive.
MobileNetV2 achieves equivalent accuracy with a more complex structure, but much lighter computation requirements.
Comparing the performance of both models confirms the increased throughput achievable with MobileNetV2.
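The difference in model size is easy to verify with a few lines of Keras (the counts printed below are for the stock ImageNet architectures; the ASL classifier's custom head will shift them slightly):

import tensorflow as tf

# Compare the parameter counts of the two stock architectures
vgg16 = tf.keras.applications.VGG16(weights=None)
mnv2 = tf.keras.applications.MobileNetV2(weights=None)
print(f"VGG-16      : {vgg16.count_params():,} parameters")  # ~138 million
print(f"MobileNetV2 : {mnv2.count_params():,} parameters")   # ~3.5 million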
The benchmarks were performed on the ZUBoard, running the benchmark app with the B512 DPU.
The xdputil benchmark was performed with the "xdputil" utility; it measures the DPU execution only and does not include the pre-processing & post-processing.
The Python benchmark was performed with the asl_classify_files.py script.
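For reference, the two benchmarks can be reproduced on the target along these lines; the thread count (2) and the --model argument for asl_classify_files.py are assumptions based on the other scripts in the repository:

# xdputil benchmark ./model_mobilenetv2/B512/asl_classifier.xmodel 2
# python3 asl_classify_files.py --model=./model_mobilenetv2/B512/asl_classifier.xmodel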
Aside from benchmarks, there is nothing like a live demo to appreciate the performance of a model. The following video shows a montage of the ASL classifier (MobileNetV2) running on the Ultra96-V2 with a DisplayPort monitor, and on the ZUBoard with the DualCam HSIO and a remote display:
All source code, including the Jupyter notebooks for both models, can be found in my GitHub repository:
Start with the pre-built SD images
The following link provides a pre-built image for the ZUBoard:
- http://avnet.me/avnet-zub1cg-2022.2-sdimage
(2023/05/10, md5sum = 82372486c5dde174b0f00d32a6e602fa)
The following link provides a pre-built image for the Ultra96-V2:
- http://avnet.me/avnet-u96v2-2022.2-sdimage
(2023/05/10, md5sum = de17c497334da903790d702a5fae8f51)
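After downloading, it is worth verifying the image against the md5sum values above before programming it; for example, on a Linux host PC (the filename here is an assumption, use the name of your downloaded file):

$ md5sum avnet-zub1cg-2022.2-sdimage.img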
Hardware Setup with USB camera
With the SD image programmed to a micro-SD card, connect up your ZUBoard or Ultra96-V2 platform as shown below.
Press the power button to boot the board, and log in as "root".
After booting, the "xmutil listapps" command should confirm that the "avnet-{platform}-benchmark" app is active:
# xmutil listapps
Also, the DPU architecture loaded in the PL can be queried with the "xdputil query" command:
# xdputil query | grep DPU
Make note of the DPU architecture (B2304 for Ultra96-V2, B512 for ZUBoard); you will need it when executing the demo.
Hardware Setup with DualCam
With the SD image programmed to a micro-SD card, connect up your ZUBoard or Ultra96-V2 platform as shown below.
Press the power button to boot the board, and log in as "root".
By default, the board will boot with the "avnet-{platform}-benchmark" app. When using the DualCam module, this needs to be changed to the "avnet-{platform}-dualcam-dpu" app.
On ZUBoard, this can be done as follows:
# echo avnet-zub1cg-dualcam-dpu > /etc/dfx-mgrd/default_firmware
On Ultra96-V2, this can be done as follows:
# echo avnet-u96v2-dualcam-dpu > /etc/dfx-mgrd/default_firmware
Confirm that the change was made, then reboot the board, as follows:
# cat /etc/dfx-mgrd/default_firmware
# reboot
After reboot, the "xmutil listapps" command should confirm that the "avnet-{platform}-dualcam-dpu" app is active:
# xmutil listapps
Also, the DPU architecture loaded in the PL can be queried with the "xdputil query" command:
# xdputil query | grep DPU
Make note of the DPU architecture (B1152 for Ultra96-V2, B128 for ZUBoard); you will need it when executing the demo.
Executing the Demo with remote display (MobaXterm)
To launch the demo with a remote display (using MobaXterm), proceed as follows.
On the embedded platform, query the IP address:
# ifconfig
As an example, for my ZUBoard, this was 10.0.0.178.
On your PC, launch MobaXterm and create an SSH session by specifying the board's IP address.
Make certain that X11 forwarding is enabled, then press OK.
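Alternatively, if you are on a Linux or macOS host (or prefer a plain SSH client), the equivalent of MobaXterm's X11 forwarding is the -X flag, assuming an X server is running locally:

$ ssh -X root@10.0.0.178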
Navigate to the demo directory:
# cd asl_classification_vitis_ai
If using the base design with a USB camera, launch the demo as follows:
# python3 asl_classify_live.py --model=./model_mobilenetv2/B{#}/asl_classifier.xmodel
Where B{#} corresponds to your DPU architecture (i.e. B2304 or B512).
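For example, on the ZUBoard (B512 DPU):

# python3 asl_classify_live.py --model=./model_mobilenetv2/B512/asl_classifier.xmodel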
If using the dualcam design, launch the demo as follows:
# python3 asl_classify_dualcam.py --model=./model_mobilenetv2/B{#}/asl_classifier.xmodel
Where B{#} corresponds to your DPU architecture (i.e. B1152 or B128).
Executing the Demo with DisplayPort monitor
In order to launch the demo with a DisplayPort monitor, start by specifying a local display:
# export DISPLAY=:0.0
Next, navigate to the demo directory:
# cd asl_classification_vitis_ai
If using the base design with a USB camera, launch the demo as follows:
# python3 asl_classify_live.py --model=./model_mobilenetv2/B{#}/asl_classifier.xmodel
Where B{#} corresponds to your DPU architecture (i.e. B2304 or B512).
If using the dualcam design, launch the demo as follows:
# python3 asl_classify_dualcam.py --model=./model_mobilenetv2/B{#}/asl_classifier.xmodel
Where B{#} corresponds to your DPU architecture (i.e. B1152 or B128).
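Under the hood, these demo scripts execute the compiled xmodel through the Vitis AI Runtime (VART). The following is a minimal sketch of that inference flow, not the scripts' exact code; the preprocess() helper is a placeholder for the resize/normalize/quantize step:

import numpy as np
import vart
import xir

# Deserialize the compiled model and locate the DPU subgraph
graph = xir.Graph.deserialize("./model_mobilenetv2/B512/asl_classifier.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")

# Create a runner and query its input/output tensor shapes
runner = vart.Runner.create_runner(dpu_subgraph, "run")
input_tensor = runner.get_input_tensors()[0]
output_tensor = runner.get_output_tensors()[0]

# Quantized models use int8 I/O; input pixels must be scaled by 2**fix_point
input_scale = 2 ** input_tensor.get_attr("fix_point")
input_data = np.zeros(tuple(input_tensor.dims), dtype=np.int8)
output_data = np.zeros(tuple(output_tensor.dims), dtype=np.int8)

# input_data[0] = preprocess(frame, input_scale)  # placeholder: prepare one camera frame

# Run inference asynchronously and wait for the job to complete
job_id = runner.execute_async([input_data], [output_data])
runner.wait(job_id)
prediction = np.argmax(output_data[0])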
Conclusion
I hope this getting started guide inspires you to be creative with the ZUBoard and/or Ultra96-V2.
Don't Have a Board?
If this project inspired you, but you don't have a board, don't forget to apply for a free ZUBoard on Element14 with their RoadTest:
Acknowledgements
I want to thank Bryan Fletcher for his collaboration on the ZU1 Vision Webinar series:
I also want to thank Tom Curran for the DisplayPort support on ZUBoard via the DP-emmc HSIO.
Revision History
2023/05/10 - First version of the ASL getting started guide.