I created this project as part of my efforts to benchmark TensorFlow Lite on the MaaXBoard. I wanted to compare the MaaXBoard's inference performance to similar developer boards, so I chose the Google Coral and Raspberry Pi 3 Model B+.
- Instructions for benchmarking the Raspberry Pi 3 Model B+ are here.
- Instructions for benchmarking the Coral dev board are here.
- Final benchmarking article is here.
- Alasdair Allan's original big benchmarking roundup is here.
MobileNet V.1 and V.2
Luckily for us, there are pre-converted models that meet all of these requirements, like these versions of MobileNet v1 and MobileNet v2. Even luckier: Alasdair Allan created a script to benchmark MobileNet last year, so I can run the exact same script and compare my results to his.
This benchmark uses the MobileNet v2 SSD model trained on the COCO dataset.
I wanted to try a couple of other benchmarks, so I browsed through tfhub.dev, where Google hosts hundreds of pre-trained TensorFlow models. I was interested in image classification and detection models that were precompiled to TensorFlow Lite, optimized for mobile and edge devices, and had a version precompiled for the Google Coral as well.
I decided to try EfficientNet-Lite as well, since it's so new and shiny. I ended up benchmarking two different versions: the first one I tried was the original version optimized for the Coral Dev Board's Edge TPU, and it didn't perform well.
INSTALL PREREQUISITES
- First, set the MaaXBoard up headlessly.
- Install TensorFlow
Create a new virtual environment called 'tf' (you will have already done this if you installed TensorFlow).
Activate the environment and check your Python and TensorFlow versions:
workon tf
python --version
pip show tensorflow
- Install prerequisites for TensorFlow Lite:
sudo apt install swig libjpeg-dev zlib1g-dev
- Set up TensorFlow Lite. You'll have to download the wheel built for aarch64 Linux and your Python version. My Python is 3.7.3, so the flags are cp37-cp37m.
wget https://github.com/PINTO0309/TensorflowLite-bin/raw/master/2.1.0/tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl
pip install --upgrade tflite_runtime-2.1.0-cp37-cp37m-linux_aarch64.whl
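If you're not sure which wheel to grab, the tags in the filename come from your interpreter. A quick way to print the relevant pieces (the `cp37`-style Python tag and the platform string) from the running Python:

```python
import sys
import sysconfig

# Print the pieces of the wheel filename you need to match:
# the Python tag (e.g. cp37) and the platform (e.g. linux-aarch64).
print(f"cp{sys.version_info.major}{sys.version_info.minor}")
print(sysconfig.get_platform())
```

On the MaaXBoard with Python 3.7 this prints `cp37` and `linux-aarch64`, matching the `cp37-cp37m-linux_aarch64` wheel above.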
- Set up OpenCV. I already have OpenCV installed in my other virtual environment, so I can just link it into my tflite virtual environment:
cd ~/.virtualenvs/tf/lib/python3.7/site-packages/
ln -s /usr/lib/python3.7/site-packages/cv/python-3.7/cv2.cpython-37m-aarch64-linux-gnu.so ./cv2.so
cd ~
Check the OpenCV version:
python -c 'import cv2; print(cv2.__version__)'
Install Pillow:
pip install Pillow
Download model and benchmarking script
The original files Alasdair created were hosted on Dropbox here. Because of changes in TensorFlow 2, I've updated the code and also included code to benchmark image classification. The only thing it doesn't contain is the EfficientNet-Lite model, which we can download from the Google Coral GitHub.
You can download the files to the MaaXBoard and unzip the folder like this:
wget https://hacksterio.s3.amazonaws.com/uploads/attachments/1143280/RASPBERRYPI.zip
unzip RASPBERRYPI.zip
cd RASPBERRYPI
Note: We're using the same files on the MaaXBoard that were used on the Raspberry Pi, hence the folder name.
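For reference, the core of a benchmark script like this is just a timed loop around repeated inference calls, with the elapsed time averaged over the number of runs. A minimal stand-in sketch (the real script calls the TFLite `Interpreter.invoke()`; the callable here is a placeholder workload):

```python
import time

def benchmark(run_inference, runs=1000):
    """Time `runs` successive calls and return the mean latency in ms.

    `run_inference` stands in for Interpreter.invoke(); any callable works.
    """
    start = time.monotonic()
    for _ in range(runs):
        run_inference()
    return (time.monotonic() - start) * 1000 / runs

# Example with a trivial stand-in workload:
mean_ms = benchmark(lambda: sum(range(1000)), runs=100)
print(f"Elapsed time is {mean_ms} ms")
```

This is why the "Elapsed time" lines below are per-run means rather than totals for all 1000 runs.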
RUN THE BENCHMARKS
Run the benchmark for MobileNet v.2 on MaaXBoard:
python benchmark_tf_lite.py --model tflite_for_rpi_V2/mobilenet_v2.tflite --label tflite_for_rpi_V1/coco_labels.txt --input fruit.jpg --output out.jpg --runs 1000
Elapsed time is 364.2792464489976 ms
apple score = 0.953125
banana score = 0.83984375
Run the benchmark for MobileNet v.1 on MaaXBoard:
python benchmark_tf_lite.py --model tflite_for_rpi_V1/mobilenet_v1.tflite --label tflite_for_rpi_V1/coco_labels.txt --input fruit.jpg --output out.jpg --runs 1000
Elapsed time is 282.3099827770002 ms
apple score = 0.80078125
apple score = 0.515625
MobileNet v.1 is faster, but just like Raspberry Pi and Coral Dev Board, it thinks the banana is an apple.
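Incidentally, all of the printed scores are exact multiples of 1/256, which is what you'd expect from a quantized uint8 output tensor. A sketch of the dequantization (the 1/256 scale and zero point of 0 are assumptions inferred from the scores; the real values come from the interpreter's output details):

```python
# The printed scores are multiples of 1/256, consistent with a uint8
# output tensor dequantized as score = (raw - zero_point) * scale.
def dequantize(raw, scale=1.0 / 256, zero_point=0):
    # Hypothetical scale/zero_point; in a real script they come from
    # interpreter.get_output_details()[0]['quantization'].
    return (raw - zero_point) * scale

print(dequantize(244))  # 0.953125, the apple score above
print(dequantize(215))  # 0.83984375, the banana score above
```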
Download models from here: https://coral.ai/models/
Get the EdgeTPU-L model:
wget https://github.com/google-coral/edgetpu/raw/master/test_data/efficientnet-edgetpu-L_quant_edgetpu.tflite
Get labels:
wget https://raw.githubusercontent.com/google-coral/edgetpu/master/test_data/imagenet_labels.txt
Benchmark for EfficientNet L on MaaXBoard:
python benchmark_classification.py --model efficientnet-edgetpu-L_quant_edgetpu.tflite --label imagenet_labels.txt --input fruit.jpg --runs 1000
Elapsed time is 3342.868333061997 ms
0.125490: pineapple, ananas
0.043137: honeycomb
0.027451: cauliflower
As you can see, it's painfully slow and not very accurate.
Benchmark the lite0 model
There is a version of EfficientNet-Lite that's not specialized for the Edge TPU, so I decided to try it. It's quite a bit smaller than the other version - only 5.17MB:
mkdir -p efficientnet-lite
wget -O efficientnet-lite/efficientnet_lite0_int8_1.tflite "https://tfhub.dev/tensorflow/lite-model/efficientnet/lite0/int8/2?lite-format=tflite"
Benchmark for EfficientNet-Lite0 on MaaXBoard:
python benchmark_classification.py --model efficientnet-lite/efficientnet_lite0_int8_1.tflite --label efficientnet-lite/ImageNetLabels.txt --input fruit.jpg --runs 1000
Elapsed time is 180.288102787 ms
banana score = 0.72265625
Wow, this model is way faster and more accurate!
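Pulling the four mean latencies together makes the gap obvious. A quick comparison computed from the numbers reported above:

```python
# Mean per-run latencies (ms) reported in the benchmark runs above.
latency_ms = {
    "MobileNet v1": 282.3099827770002,
    "MobileNet v2": 364.2792464489976,
    "EfficientNet-edgetpu-L": 3342.868333061997,
    "EfficientNet-Lite0": 180.288102787,
}

fastest = min(latency_ms, key=latency_ms.get)
for name, ms in sorted(latency_ms.items(), key=lambda kv: kv[1]):
    print(f"{name}: {ms:.1f} ms ({ms / latency_ms[fastest]:.2f}x the fastest)")
```

EfficientNet-Lite0 comes out roughly 18.5x faster than the Edge TPU-compiled EfficientNet-L on the MaaXBoard's CPU.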
FINAL RESULTS
If you want to know how the MaaXBoard performed vs. the other hardware, the full benchmarking article is here.