From start to finish of this project, I struggled to determine which design flow would best accomplish my objectives, and a lot of that uncertainty is reflected in my design. I had a choice of three flows: PetaLinux, Ubuntu, or Ubuntu/PYNQ. All of them use Vitis-AI and the DPU.
Mechanical Construction
I decided that I would come up with a minimalist mechanical design - a simple mount for the KV260 board that I could extend easily to add the accessories necessary to implement the smart camera.
- KV260 Tripod Mount
I needed a mechanism to position the smart camera assembly either for desktop use or to mount on a tripod. I found an aluminum smartphone tripod adapter and 3D printed a mounting bracket clip for the KV260. It is shown below mounted to a baseplate for desktop use.
- Bracket clip
The bracket clip has a vertical extension so that I can mount accessories by adding a customized bracket. Initially I started with no bracket and just the Brio USB camera attached to the extension.
- Peripheral attachment
My final bracket configuration with all of the peripherals attached is shown below. This allowed me to test the various examples from the different implementation design flows. Ultimately, the most difficult part was to integrate the components together within a single framework.
Logitech Brio, OnSemi AR1335 IAS, Digilent Pcam 5C
I have three different cameras attached, primarily to try the different interfaces. The Brio and the AR1335 both have 13MP color imagers with 4K/30fps video capability; the PCam 5C has a 5MP color imager with 1080p/30fps video capability. The Brio interfaces over USB 3, the AR1335 uses a 4-lane MIPI CSI-2 interface, and the PCam 5C uses a 2-lane MIPI CSI-2 interface. The PetaLinux and Ubuntu camera examples support the Brio and the AR1335; the PYNQ examples support the Brio and the PCam 5C.
There are three embedded Linux variants that can be used on the Kria. Each OS has advantages and disadvantages, and of course each requires specific versions of the Xilinx tools. Here's a quick summary of what I tried on each OS.
PetaLinux
Currently, there are 3 different versions of the Xilinx tools used with the Kria - 2020.2, 2021.1, and 2021.2, although 2021.2 is somewhat unusable because there isn't a released BSP for the Starter Kit and the SmartVision apps aren't available yet. Mario Bergeron did a great multi-part tutorial on customizing the VVAS Smart Model App in 2021.1, so I tried that - KV260 VVAS SMS 2021.1.
- Smart Model Select using WebCam
Mario showed how to customize the Smart Model Select app by adding WebCam and MIPI inputs. The SMS app allows selection from different models with different sources and outputs. I was particularly interested in trying the RTSP input from an IP camera, but I could not get that to work.
The image below shows the SMS selection screen.
Demo using WebCam and DisplayPort with SSD_Mobilenet_V2 model
Ubuntu
The Ubuntu image released for the Kria is 20.04.03 LTS. It is currently supported with the Xilinx 2020.2.2 toolset (Vivado/Vitis/PetaLinux) and Vitis AI 1.3.
- nlp-smartvision app
Below is a short demo video using nlp-smartvision with the Brio webcam (for both video and audio). The keyword "Go" switches the bounding box color, and the keywords "Up" and "Down" switch detection modes: face detection, object detection, or plate detection.
The console output below shows the detected keywords. Because this model was not trained with my voice and hardware, it has about a 25% error rate.
- Vitis AI app
This application allows running pre-built Vitis AI Library v1.3.2 sample applications that use either the Brio webcam or the AR1335 IAS MIPI camera. The video clip below uses the AR1335 camera to do person detection at 1280x720 resolution.
Ubuntu/PYNQ
Kria-PYNQ runs PYNQ v2.7 on the same Ubuntu OS used above. It adds the ability to run Jupyter notebooks, which allows quick prototyping in IPython. It includes a base overlay that provides GPIO interfacing through MicroBlaze IOPs, which could be used to connect the LIDAR07 sensor. There is also a DPU-PYNQ overlay that supports Vitis AI 1.4 and a PYNQ_Composable_Pipeline overlay for building custom pipelines of accelerated elements.
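For reference, getting the DPU up in a notebook only takes a few lines. This is a minimal sketch of the documented DPU-PYNQ flow; the .xmodel file name is just a placeholder, not a model from this project.

# Minimal DPU-PYNQ sketch; the .xmodel name below is a placeholder
from pynq_dpu import DpuOverlay

overlay = DpuOverlay("dpu.bit")             # program the PL with the DPU design
overlay.load_model("dpu_resnet50.xmodel")   # load a compiled Vitis AI model
dpu = overlay.runner                        # VART runner used for inference calls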
- OpenCV Face Detect WebCam
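The notebook grabs frames from the webcam and runs OpenCV's Haar-cascade face detector. Here's a minimal sketch of the idea (generic OpenCV code, not the exact notebook):

# Generic OpenCV Haar-cascade face detection on one webcam frame
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)   # the Brio enumerates as /dev/video0 here

ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()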
- PCam 5C
The base overlay includes the MIPI CSI interface for the PCam 5C, so I tried that out. I don't think the other pre-built platforms that are available include it. I'd like to try the interface with a Raspberry Pi camera, but I'm not sure how to develop a driver for it. Below is a frame capture in a notebook.
I wanted to use the smart camera untethered from a host computer, so I decided to try adding a USB WiFi dongle. The specific unit that I used is a USB 3 802.11ac dual-band (2.4/5.8GHz) adapter with an integrated antenna. The simplest way to test this out with the Kria is with an Ubuntu configuration (for PetaLinux, I need to figure out how to add the device driver).
This adapter is plug and play with Windows, but Linux requires that a device driver be installed. Luckily this unit uses a Realtek 8812BU chipset, which has an arm64 driver available: https://github.com/morrownr/88x2bu-20210702.
For this test I am using the Ubuntu 20.04 image for the Kria.
The first step is to identify the adapter/chipset. To do this, attach the WiFi dongle and list the USB devices using lsusb.
The adapter is Device 005, a Realtek 8812BU. Now the installation is straightforward; just download the driver repository and run the install script.
- Install
sudo apt install -y dkms git build-essential
mkdir repos; cd repos
git clone https://github.com/morrownr/88x2bu-20210702
cd 88x2bu-20210702
sudo ARCH=arm64 ./install-driver.sh
Next, determine the adapter ID using iwconfig
Create an /etc/wpa_supplicant.conf file with your network credentials (wpa_passphrase writes the config to stdout, so redirect it into the file)
wpa_passphrase "network_name" "network_password" | sudo tee /etc/wpa_supplicant.conf
Connect to network
sudo wpa_supplicant -c /etc/wpa_supplicant.conf -i "adapter_id" &
Get an IP address
sudo dhclient "adapter_id"
If successful, you should see an IP address associated with the adapter
I want to be able to run the smart camera remotely using a remote desktop (GUI). I've used SSH as a remote console, but I'd like to get a remote display as well. Ubuntu by default uses the GNOME 3 desktop, which uses the Vino VNC server.
- Install Vino
sudo apt update
sudo apt install vino
- Enable Screen Sharing in the Ubuntu Settings panel
- Disable encryption (RealVNC Viewer)
gsettings set org.gnome.Vino require-encryption false
The good news is that setting up VNC is easy; the bad news is that I could not figure out how to get the applications' video output to display over VNC. nlp-smartvision specifically will not run with the desktop open, but I'm not sure about the Vitis AI examples.
LIDAR07 TOF Sensor
I'm going to use a time-of-flight (TOF) sensor to measure the distance from the camera to an object. The sensor that I'm using can be configured to communicate over either UART or I2C. There are libraries available for this sensor for the Arduino IDE, but I haven't seen one that I could apply directly in Ubuntu or PetaLinux. I am going to first verify that this sensor works using an Arduino-compatible implementation and then attempt to port it to the Kria. I believe the correct approach would be to add a MicroBlaze IOP block to the firmware to implement an I2C interface (I'm not sure whether I could interface easily to the PS side of the Kria). If the Arduino test works, I will then try the IOP I2C interface using PYNQ and, if that works, try to create a driver for the LIDAR07.
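To give a sense of where the Linux-side port would end up, here's the rough shape of an I2C transaction using smbus2. To be clear, the device address, command bytes, response length, and byte order below are hypothetical placeholders; the real values have to come from the LIDAR07 datasheet once I decode it.

# Rough sketch of a LIDAR07 read over Linux I2C using smbus2 (pip install smbus2).
# LIDAR07_ADDR, MEASURE_CMD, the response length, and the byte order are all
# HYPOTHETICAL placeholders, not values taken from the datasheet.
from smbus2 import SMBus, i2c_msg

LIDAR07_ADDR = 0x70      # hypothetical 7-bit device address
MEASURE_CMD = [0x01]     # hypothetical start-measurement command byte

with SMBus(1) as bus:    # /dev/i2c-1; the bus number depends on the platform
    write = i2c_msg.write(LIDAR07_ADDR, MEASURE_CMD)
    read = i2c_msg.read(LIDAR07_ADDR, 4)          # hypothetical 4-byte response
    bus.i2c_rdwr(write, read)                     # combined write-then-read
    data = list(read)
    distance_mm = data[0] | (data[1] << 8)        # byte order is an assumption
    print(distance_mm)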
LIDAR07 interfaced to Seeeduino Xiao
First, verify that the sensor works using the Arduino IDE and library over the I2C interface. I am using a Xiao SAMD21 board. The Arduino code is included in Xiao_Lidar07_Expansion_Board.ino.
- Distance Graph from Serial Plotter
This tests the sensor while walking into a room toward it, slowly moving away, then approaching it again and stepping out of the field of view. The max measurement represents the distance to the doorway. So the TOF sensor is working as expected.
- I2C Sensor on Kria using PMOD interface
To connect an I2C device to the Kria, I'm going to use a PYNQ Grove adapter board and test that I can get an I2C device working with the MicroBlaze PMOD IOP in the PYNQ base overlay. Since I don't yet have a library for the LIDAR07, I'm using a Grove I2C ADC with an analog temperature sensor to verify the interface.
A quick test shows that the interface works. I put my thumb on the temperature sensor to get a change in readings. The Notebook code is included in grove_adc_temperature.ipynb.
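The notebook boils down to a few lines using the standard PYNQ Pmod Grove API. A minimal sketch follows; the Pmod attribute name (PMODA) and the Grove adapter port (G4) are assumptions about my wiring, so adjust to match yours.

# Minimal sketch of the Grove ADC test using the standard PYNQ Pmod API.
# base.PMODA and PMOD_GROVE_G4 are assumptions about the board wiring.
from kv260 import BaseOverlay
from pynq.lib.pmod import Grove_ADC, PMOD_GROVE_G4

base = BaseOverlay("base.bit")
adc = Grove_ADC(base.PMODA, PMOD_GROVE_G4)
print(adc.read())   # voltage from the analog temperature sensor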
I would like to have facial recognition capability, and that will require capturing data to build my own person database. I've used Edge Impulse for object classification, so I'd like to interface with their framework. There are a number of different methods of uploading data to Edge Impulse, but the simplest method that I've seen for the Kria is the Node.js version of the data uploader on the Ubuntu desktop, described in this tutorial by Whitney Knitter: Kria KV260 with Edge Impulse.
- Install Edge Impulse CLI on Ubuntu 20.04
ubuntu@kria:~$ sudo apt-get install nodejs sox
ubuntu@kria:~$ curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
ubuntu@kria:~$ sudo apt-get install -y nodejs
ubuntu@kria:~$ npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm
- Start CLI
ubuntu@kria:~$ edge-impulse-linux
1) The first time that the CLI is run, you will need to provide your Edge Impulse credentials
2) You'll need to have a project created in your Edge Impulse dashboard that you can associate the data with
3) You'll need to specify which microphone and camera to use
4) You'll need to give the device a name
- Record data samples
- Create impulse
- Live classification
- Standalone classification model deployment
Edge Impulse provides an easy path to standalone model deployment using the Edge Impulse for Linux CLI. Once your model has been trained/built, you can deploy it as a .eim model file that can be operated standalone using the CLI.
I built/downloaded the Linux (AARCH64) model and ran live classification.
edge-impulse-linux-runner --download Kria_Ubuntu.eim
edge-impulse-linux-runner
Console output
The model could not be optimized, so the inference times aren't that great, and some of the confidence scores are low, so the model needs some work. The Edge Impulse development flow is great, but I need to figure out how to get it to better utilize the Kria's accelerated IP.
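As an alternative to the CLI runner, Edge Impulse also publishes a Linux Python SDK (pip install edge_impulse_linux) that can run the same .eim file from my own code, which might be the hook for integrating it into the smart camera application later. A minimal sketch, classifying a single webcam frame with the model downloaded above:

# Minimal sketch using the Edge Impulse Linux Python SDK to classify one
# webcam frame with the .eim model downloaded above
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

with ImageImpulseRunner("Kria_Ubuntu.eim") as runner:
    model_info = runner.init()
    print("Loaded", model_info["project"]["name"])

    cap = cv2.VideoCapture(0)     # Brio webcam
    ret, frame = cap.read()
    cap.release()
    if ret:
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # SDK expects RGB
        features, cropped = runner.get_features_from_image(img)
        print(runner.classify(features)["result"])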
Kria SD Cards
There were so many possible development options, and enough risk of damaging a working configuration, that in addition to making image backups I also kept SD card copies of known working configurations, since it takes more than a few minutes to rewrite a backup image. So far, the SanDisk Ultra cards have worked reliably for me. I just need to keep track of what is on each card: PetaLinux, Ubuntu, and Ubuntu/PYNQ variants.
After a lot of experimenting, I finally decided to use Ubuntu as the OS for my application. I think it provides me with the most flexibility for implementation (I really struggle with adding features/devices using PetaLinux recipes). It also allows me to fall back on the PYNQ framework for prototyping.
Again, I found a nice tutorial, this one by Tom Simpson, on creating and running a custom machine learning application: Easy Machine Learning on Ubuntu with the Xilinx Kria KV260. For this project I want to use custom person detection and face recognition models in a custom application, with the Brio WebCam as the image source.
The first step is to use the existing NLP-SmartVision overlay to run person and face detection samples from the Vitis AI library.
- Install NLP-SmartVision and Vitis-AI Library Snaps
sudo xlnx-config --snap --install xlnx-nlp-smartvision
sudo snap install xlnx-vai-lib-samples
- Load the NLP-SmartVision overlay
sudo xlnx-config --xmutil unloadapp
sudo xlnx-config --xmutil loadapp nlp-smartvision
- Create a soft link to the loaded dpu.xclbin
sudo ln -sf /var/snap/xlnx-config/current/assets/dpu.xclbin /usr/lib/dpu.xclbin
- Run the VAI Person Detection example with the refinedet_pruned_0_96 model, using the WebCam as input
xlnx-vai-lib-samples.test-video refinedet refinedet_pruned_0_96 /dev/video0
- Run the VAI Face Detection example with the densebox_320_320 model, using the WebCam as input
xlnx-vai-lib-samples.test-video facedetect densebox_320_320 /dev/video0
- Install Custom Person Detection application (instructions to create custom overlay and application files are in Tom's tutorial)
sudo xlnx-config --install ml-accel
- Load the Custom Person Detection overlay
sudo xlnx-config --xmutil unloadapp
sudo xlnx-config --xmutil loadapp ml-accel
- Run the Custom Person Detection application with the refinedet_pruned_0_96 model, using the WebCam as input
cd ~/person_detect
./person_detection /dev/video0
Next step is to create a custom application to implement the Smart Security Camera.
I've been struggling to get the LIDAR07 to work without using a library (I'm having a bit of a problem understanding the "spec"). While I work that out, I'm going to use the detection bounding box width as a proxy for distance.
Here is a quick demo of switching between detection models based on distance.
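The switching logic itself is trivial. The sketch below is just the idea, with a placeholder threshold and the model names from above; it is not the exact application code.

# Sketch of the distance-proxy logic: a wide bounding box means the subject
# is close, so switch from person detection to face detection.
NEAR_WIDTH_FRAC = 0.4   # bbox width as a fraction of frame width; tune empirically

def pick_model(bbox_width_frac: float) -> str:
    if bbox_width_frac > NEAR_WIDTH_FRAC:
        return "densebox_320_320"       # close: run face detection
    return "refinedet_pruned_0_96"      # far: run person detection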
Summary
This has been a fun challenge. Thanks to Xilinx and Hackster for the opportunity and the hardware.
Xilinx has developed a great product with the Kria SOM and Vitis AI. I've only scratched the surface of what can be achieved. I'll keep evolving this project with better customized models and hopefully add IP cameras later.
To work around my trouble understanding the LIDAR interface spec, I am going to try using a logic analyzer to capture the transactions while running the Arduino library. I will probably have to customize the I2C writes and reads.
I'll put the relevant project files in a github repo - https://github.com/ralphjy/kria-smart-security-camera.