We tested object detection on camera images using the KR260 and YOLOv3.
We started from a sample program for PYNQ-DPU and modified it.
Here, we introduce the model used and the modifications we made.
This project is part of a subproject for the AMD Pervasive AI Developer Contest.
Be sure to check out the other projects as well.
***The main project is currently under submission.***
0. Main project << under submission
2. PYNQ + PWM(DC-Motor Control)
3. Object Detection(Yolo) with DPU-PYNQ << this project
4. Implementation of DPU, GPIO, and PWM
6. GStreamer + OpenCV with 360°Camera
7. 360 Live Streaming + Object Detect(DPU)
8. ROS2 3D Marker from 360 Live Streaming
9. Control 360° Object Detection Robot Car
10. Improve Object Detection Speed with YOLOX
11. Benchmark Architectures of the DPU
12. Power Consumption of 360° Object Detection Robot Car
13. Application to Vitis AI ONNX Runtime Engine (VOE)
14. Appendix: Object Detection Using YOLOX with a Webcam
Please note that before running the above subprojects, the following setup, which is the reference for this AMD contest, is required.
https://github.com/amd/Kria-RoboticsAI
Introduction
The KR260 can utilize a DPU (Deep Learning Processing Unit).
By using PYNQ-DPU, the DPU can be controlled from Python. We conducted object detection using the DPU.
We have conducted object detection (YOLO) on both 360° and normal images with the KR260.
Here is a test video using Jupyter Notebooks with DPU-PYNQ and KR260.
We also tested this in a Python program with similar results.
Modifying the Sample Program from PYNQ-DPU
We started with a sample program from PYNQ-DPU and made the necessary modifications. Below, we detail the model and the modification process.
PYNQ-DPU
The PYNQ-DPU sample program is available here:
Included is a YOLO object detection sample (dpu_yolov3.ipynb).
Installation Provided by the Contest Organizer
You can install the required software automatically from the following link:
TensorFlow2 + YOLOv3
The provided sample uses an older TensorFlow model and the VOC2007 dataset, which is quite outdated.
We opted for the YOLOv3 model from Vitis AI, built with TensorFlow2 and trained on the COCO2017 dataset.
Download and extract the model:
wget https://www.xilinx.com/bin/public/openDownload?filename=tf2_yolov3_3.5.zip
unzip openDownload\?filename\=tf2_yolov3_3.5.zip
Vitis AI
Activate TensorFlow2 with Vitis AI and compile the model for the KR260:
cd Vitis-AI/
./docker_run.sh xilinx/vitis-ai-tensorflow2-cpu:latest
conda activate vitis-ai-tensorflow2
cd tf2_yolov3_3.5/
echo '{' > arch.json
echo ' "fingerprint": "0x101000016010407"' >> arch.json
echo '}' >> arch.json
vai_c_tensorflow2 -m quantized/quantized.h5 -a arch.json -n kr260_yolov3_tf2 -o ./compiled --options '{"input_shape": "1,416,416,3"}'
If the compilation is successful, a .xmodel file for the KR260's DPU (B4096) will be created.
If you want to skip the compilation process, we have provided the model (kr260_yolov3_tf2.xmodel) at the following link:
https://github.com/iotengineer22/AMD-Pervasive-AI-Developer-Contest/tree/main/model
Jupyter Notebook Implementation
We have uploaded the test notebook file (dpu_yolov3_tf2_coco2017.ipynb) to the following GitHub repository:
Key modifications include loading the TensorFlow2 model and classifying with the COCO2017 classes. We also set up the program to recognize JPEG files in the image folder:
overlay.load_model("kr260_yolov3_tf2.xmodel")
classes_path = "img/coco2017_classes.txt"
original_images = [i for i in os.listdir(image_folder) if i.endswith("JPEG")]
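For context, below is a minimal sketch of how the notebook drives the model through VART in DPU-PYNQ. The file names follow this project, but the pre-processing is simplified for illustration, and the YOLO output decoding and NMS are omitted, so treat it as an outline rather than the exact notebook code.

import os
import numpy as np
import cv2
from pynq_dpu import DpuOverlay

overlay = DpuOverlay("dpu.bit")                # default DPU bitstream
overlay.load_model("kr260_yolov3_tf2.xmodel")  # compiled TF2 YOLOv3 model

dpu = overlay.runner
input_tensor = dpu.get_input_tensors()[0]
output_tensors = dpu.get_output_tensors()
shape_in = tuple(input_tensor.dims)            # (1, 416, 416, 3)

# Pre-process one JPEG from the image folder (resize + normalize)
image_folder = "img"
jpegs = [i for i in os.listdir(image_folder) if i.endswith("JPEG")]
img = cv2.imread(os.path.join(image_folder, jpegs[0]))
resized = cv2.resize(img, (shape_in[2], shape_in[1]))

# Allocate VART input/output buffers and run inference on the DPU
input_data = [np.empty(shape_in, dtype=np.float32, order="C")]
output_data = [np.empty(tuple(t.dims), dtype=np.float32, order="C")
               for t in output_tensors]
input_data[0][0, ...] = resized.astype(np.float32) / 255.0
job_id = dpu.execute_async(input_data, output_data)
dpu.wait(job_id)
# output_data now holds the raw YOLOv3 feature maps for decoding and NMS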
Transfer the Model to KR260
Prepare the COCO2017 class list, which includes 80 categories.
Please refer to the following link:
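As an illustration (assuming the one-name-per-line format the notebook reads), the class list is a plain text file in the standard COCO order, beginning like this; the full 80-entry file is in the repository:

person
bicycle
car
...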
Transfer the compiled model, the class list, and the .ipynb file to the KR260:
Below is an example of copying to KR260 at 192.168.11.7:
scp -r pynq-dpu/ ubuntu@192.168.11.7:/home/ubuntu/
Object Detection (.ipynb)
In the /root/jupyter_notebooks/ directory of the KR260, copy the .ipynb file, the .xmodel file, the COCO2017 list, and the JPEG files into the existing pynq-dpu folder.
Below is an example of copying to the jupyter_notebooks directory on the KR260.
sudo su
cd $PYNQ_JUPYTER_NOTEBOOKS
cd jupyter_notebooks/
ls
cp -rf /home/ubuntu/pynq-dpu/ ./
Use the Kria-PYNQ environment via Jupyter Notebook. Connect to the KR260 board using a LAN cable and find its IP address using ifconfig. Then access the Jupyter Notebook at http://<IP_ADDRESS>:9090/.
Test .ipynb
Here is a test video using Jupyter Notebooks with DPU-PYNQ and KR260.
Open the Jupyter Notebook on KR260 and proceed with the execution.
The default .bit file is used for the DPU on the KR260.
Using the TensorFlow2 Model with VART for YOLOv3 Detection
We performed object detection on the 80 COCO2017 categories using VART on the DPU.
We tested with three photos: two 360° images and one regular camera image.
Object detection did not perform well on the 360° images, failing to detect the balls in the foreground.
In contrast, the ball was successfully detected in images captured with a regular smartphone camera.
Test result (360° images)
We conducted object detection on 360° images (5376x2688).
While human figures were successfully detected, the yellow ball was not.
360° images are very wide, making it difficult for the YOLO model to detect objects.
This indicates that further adjustments are needed.
By splitting the image into two and setting the aspect ratio to 1:1 (2688x2688), detection improves significantly.
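As a reference, here is a minimal sketch of this split, assuming a 5376x2688 equirectangular input and OpenCV; the file names are illustrative:

import cv2

# Load the 5376x2688 equirectangular image
img = cv2.imread("360_image.jpg")
h, w = img.shape[:2]   # h = 2688, w = 5376

# Split into two square 1:1 halves (2688x2688 each) before detection
left = img[:, : w // 2]
right = img[:, w // 2 :]
cv2.imwrite("left_2688.jpg", left)
cv2.imwrite("right_2688.jpg", right)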
We also implemented the program in Python, available on the following GitHub repository:
Below is an example of running the program (.py):
ubuntu@kria:~$ sudo su
root@kria:/home/ubuntu# source /etc/profile.d/pynq_venv.sh
(pynq-venv) root@kria:/home/ubuntu# cd $PYNQ_JUPYTER_NOTEBOOKS
(pynq-venv) root@kria:~/jupyter_notebooks# cd pynq-dpu/
(pynq-venv) root@kria:~/jupyter_notebooks/pynq-dpu# python3 app_yolov3_tf2_mymodel-name-test.py
yolov3_test, in TensorFlow2
(1, 416, 416, 3)
Number of detected objects: 2
Details of detected objects: [49 60]
Performance: 2.902664666183155 FPS
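The detected IDs (49 and 60 above) can be mapped back to category names with the class list. Here is a quick check, assuming they are zero-based indices into coco2017_classes.txt:

# Map detected class indices to COCO category names
with open("img/coco2017_classes.txt") as f:
    class_names = [line.strip() for line in f]

for idx in [49, 60]:
    print(idx, class_names[idx])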
Here is a test video using Python with DPU-PYNQ and KR260.
Object detection can be performed using .py files just as effectively as with .ipynb files.
Using DPU-PYNQ on the KR260, we successfully performed object detection from both .ipynb and .py files.
Next, we will integrate the DPU with GPIO and PWM IP on the KR260.
4. Implementation of DPU, GPIO, and PWM << next project