This is a step-by-step tutorial on how to create a hardware accelerator platform for the AXU2CGA/B Zynq UltraScale+ FPGA development board made by Alinx, which can be used for running GNU Radio applications with accelerated functions under the Xilinx Vitis toolset.
Adding the gr-satellites module developed by EA4GPZ and building your own OOT modules is also covered in this set of tutorials.
This is Part-4/4: Build the AI model in Colab and Vitis-AI
If you are looking for the other parts of this set of tutorials, you can go there directly:
- Part-1 - Create the Vivado Hardware Design
- Part-2 - Software - Build the PetaLinux and GNU Radio
- Part-3 - Create a Vitis Platform and Application with DPU
- Part-4 - Build the AI model in Colab and Vitis-AI
The RF dataset can be freely downloaded from the RF Datasets For Machine Learning website. Select the RADIOML 2018.01A dataset and download the 2018.01.OSC.0001_1024x2M.h5.tar.gz file.
You have to provide your contact information to download it, but the dataset is free of charge. You need about 20 GB of free disk space just for the download.
There is another option: reduce the size of the dataset to only about 6%, which is enough for a start. This tutorial uses such a reduced dataset (only about 1.2 GB), which can be downloaded from the links provided inside the attached Jupyter Notebook scripts.
The script that was used for reducing the original large dataset is the reduce_dataset.ipynb notebook, included in the repository cloned in the next section.
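A minimal sketch of such a reduction, assuming the RADIOML 2018.01A file layout of three parallel HDF5 datasets named 'X' (IQ frames), 'Y' (one-hot labels) and 'Z' (SNR values) — the key names and the 6% default are assumptions to check against the actual notebook; h5py is imported inside the copy function so the index helper works without it:

```python
# Sketch only: reduce an RF dataset HDF5 file to ~6% of its frames.
# Assumption: three parallel datasets 'X', 'Y', 'Z' share the first axis.

def subsample_indices(n_total, keep_fraction=0.06):
    """Evenly strided row indices, keeping roughly keep_fraction of n_total."""
    if not 0.0 < keep_fraction <= 1.0:
        raise ValueError("keep_fraction must be in (0, 1]")
    stride = max(1, round(1.0 / keep_fraction))
    return list(range(0, n_total, stride))

def reduce_dataset(src_path, dst_path, keep_fraction=0.06):
    """Copy every stride-th frame of 'X', 'Y' and 'Z' into a smaller file."""
    import h5py  # lazy import: only needed for the actual file copy
    with h5py.File(src_path, "r") as src, h5py.File(dst_path, "w") as dst:
        idx = subsample_indices(src["X"].shape[0], keep_fraction)
        for key in ("X", "Y", "Z"):
            dst.create_dataset(key, data=src[key][idx])
```

An evenly strided subset keeps the modulation classes and SNR levels roughly balanced, because the original file is grouped by class and SNR.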
You can skip this downloading step and jump straight to the next one if you don't want to download and use the whole dataset.
Otherwise, if you want to use the original dataset as a whole, go to the Xilinx Tutorial - RF Modulation Recognition GitHub page and use their original script to build the model. In that case, you can skip the rest of this tutorial.
Get the Jupyter Notebook Scripts
There are two Jupyter scripts provided in this tutorial. The first one runs on Google Colab, and the second one on the Xilinx Vitis-AI Docker image.
Clone the s59mz/test-dpu GitHub repository and note the two Jupyter notebook scripts in the jupyter subdirectory. We will need them later.
- rf_classification-Colab.ipynb
- rf_classification-Vitis-AI.ipynb
$ git clone https://github.com/s59mz/test-dpu
$ cd test-dpu/
$ ls jupyter/
rf_classification-Vitis-AI.ipynb
rf_classification-Colab.ipynb
reduce_dataset.ipynb
NOTE: The reduce_dataset.ipynb notebook can be used to manually reduce the original dataset mentioned in the previous section.
Get the arch.json File
This is a very important file for compiling the model for the DPU. The file is located in the models directory:
$ ls models/*.json
models/arch_b1152.json
$ cat models/arch_b1152.json
{"fingerprint":"0x100002062010103"}
NOTE: The fingerprint can be read directly from the target board by running the following commands on the FPGA:
$ export XLNX_VART_FIRMWARE=/media/sd-mmcblk1p1/dpu.xclbin
$ xdputil query
NOTE: Edit the arch.json file and update the fingerprint value manually in case you change the DPU build configuration (the dpu_conf.vh file).
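That manual edit can also be scripted. A minimal sketch using only the Python standard library; the helper name is made up here, and the commented-out example just reuses the fingerprint shown earlier, which you must replace with the value your own board reports:

```python
import json

def update_fingerprint(arch_path, fingerprint):
    """Rewrite the 'fingerprint' field of an arch.json file in place."""
    with open(arch_path) as f:
        arch = json.load(f)
    arch["fingerprint"] = fingerprint
    with open(arch_path, "w") as f:
        json.dump(arch, f)

# Example: write the fingerprint reported by `xdputil query` on the board.
# update_fingerprint("models/arch_b1152.json", "0x100002062010103")
```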
Building and Training the AI Model
The AI model that we will build uses the TensorFlow framework. We use the Google Colab service for training the model because we can use their very powerful graphics cards for free, but only for a limited time (a few hours).
Be sure to download and save your trained model to your computer before Colab disconnects you from the service. You can repeat the whole process the next day, as many times as you want, but each time you must start from the beginning.
- Open your Web Browser (Google Chrome or Firefox is recommended) and open the Google Colab web page.
- Upload the first script rf_classification-Colab.ipynb to Colab (File > Upload notebook). You should sign in with your Google account if you haven't already.
- Expand the Files icon on the left-hand side of the window, so you can see some newly created files there (sample_data).
- Set the runtime to GPU: Runtime > Change Runtime type > GPU and Save, if it is available; otherwise use the CPU, but training will take much longer.
- Start the script: Runtime > Run all and wait for about half an hour.
What just happened?
- The script first downloads the reduced dataset from my Google Drive. You can change the path in Cell #2 (!wget...) to use your own dataset. You should see a new file, reduced_rf_dataset_XYZ.hdf5, appear on the left side.
- After some data manipulation and preparation, the model is trained, and you can see some saved checkpoint states in the checkpoint folder.
- When the script finishes, you will notice another new file: the saved best trained model, named rf-model-best.h5.
- Download the trained model rf-model-best.h5 to your computer before Colab disconnects you from the service due to inactivity.
- The trained model is also available in the GitHub repository (under a different name): rfClassification_fp_model.h5
- Explore the results. Here's the confusion matrix we got with only 6% of the samples from the dataset, trained for 10 epochs.
- Not perfect, but good enough as a starting point to test the model on the real FPGA.
- Feel free to repeat the whole process with more data samples from the dataset, a longer training time (more epochs), or even an improved model, to get more accurate results.
- But remember: you have limited time per day for using Google Colab (about 2 hours, maybe), so close the session each time after you download the model (Connect > Manage sessions > Terminate).
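For reference, a confusion matrix like the one above is just a table counting (true class, predicted class) pairs over the test set. A minimal stdlib-only sketch of the idea (not the notebook's actual plotting code):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    mat = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        mat[t][p] += 1
    return mat
```

A perfect classifier puts all counts on the diagonal; off-diagonal cells show which modulations get confused with each other.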
Now that we have a floating-point model named rf-model-best.h5, we need to quantize, calibrate and compile it for the DPU unit on the real FPGA.
The details of this process are out of the scope of this tutorial. Feel free to:
- Explore the provided script rf_classification-Vitis-AI.ipynb
- Read the official GitHub page for Vitis-AI.
Here are the main steps:
- Install Docker on your host machine, if you don't have it already.
- Download the latest Vitis AI Docker image with the following command:
docker pull xilinx/vitis-ai-cpu:latest
- Clone the Vitis-AI GitHub repository to your Host Machine.
git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI
- Copy the previously cloned rf_classification-Vitis-AI.ipynb script to the Vitis-AI repository, along with the trained model rf-model-best.h5 from the previous section. You can make a new my_model directory with an fp_model sub-directory where you copy your trained model.
- Also copy the arch_b1152.json file to the same directory, and don't change the file name unless you update the Jupyter script too.
$ cd Vitis-AI
$ mkdir -p my_model/fp_model
$ cp ~/test-dpu/jupyter/rf_classification-Vitis-AI.ipynb my_model
$ cp ~/Downloads/rf-model-best.h5 my_model/fp_model
$ cp ~/test-dpu/models/arch_b1152.json my_model
$ cd my_model
$ ls
arch_b1152.json fp_model rf_classification-Vitis-AI.ipynb
- Start the Docker image from the my_model directory with the provided bash script:
$ ../docker_run.sh xilinx/vitis-ai
- When the Docker container starts, switch to the TensorFlow2 environment by executing the conda activate vitis-ai-tensorflow2 command.
- Start Jupyter Notebook by running the command (inside the Docker):
$ jupyter notebook --no-browser --ip=0.0.0.0 --NotebookApp.token='' --NotebookApp.password=''
- Now, open the web browser on your host machine and go to http://localhost:8888/
- Click on the previously copied script rf_classification-Vitis-AI.ipynb
- Run the script: Kernel > Restart & Run All and wait for a few minutes to get the compiled model.
- NOTE: The reduced dataset (1.2 GB), needed for calibration, will be downloaded to your computer from my Google Drive. You can modify the script and use your own dataset for calibration.
- Then a new quantized model will be created, optimized using the calibration dataset downloaded previously. It is saved as quantize_results/quantized_model.h5.
- At the end, a new directory vai_c_output will be created.
- Inside it is our rfClassification.xmodel file, ready to be uploaded to our target board.
Now we are ready to test our model on real hardware.
- Copy the new rfClassification.xmodel file to the SD card, replacing the original one. It is located in the boot partition, in the test-dpu/models directory:
$ pwd
/media/sd-mmcblk1p1/test-dpu/models
- If you change the name of the model, you should also update the test scripts and the RF Classification GNU Radio module. Otherwise, just replace the original file with the new one.
- Run the Accuracy Test from the Part-3:
$ pwd
/media/sd-mmcblk1p1/test-dpu
$ ./run_test_accuracy.sh
Number of RF Samples Tested is 998
Batch Size is 1
Top1 accuracy = 0.53
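The Top1 accuracy printed by the script is the fraction of test frames whose highest-scoring class matches the true label. A minimal stdlib-only sketch of the metric (not the actual test script; the scores in the example are invented):

```python
def top1_accuracy(labels, scores):
    """labels: true class index per frame; scores: per-class scores per frame."""
    hits = 0
    for label, frame_scores in zip(labels, scores):
        predicted = max(range(len(frame_scores)), key=frame_scores.__getitem__)
        if predicted == label:
            hits += 1
    return hits / len(labels)

# Example with 4 frames and 3 classes (made-up scores):
# top1_accuracy([0, 1, 2, 1],
#               [[0.9, 0.05, 0.05],
#                [0.2, 0.7, 0.1],
#                [0.3, 0.3, 0.4],
#                [0.6, 0.3, 0.1]])  # -> 0.75
```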
And at last...
Start the RF Modulation Classification Application in GNU Radio again.
$ pwd
/media/sd-mmcblk1p1/test-dpu
$ ./run_rf_classification.sh
If you have made it to the end through all four parts successfully, congratulations!
Feel free to comment on this blog.