Security inspection is a safety safeguard for passengers and the transportation sector, keeping dangerous items out of public spaces; it is usually applied in airports, railway stations, subway stations, etc. In the existing security inspection field, X-ray security inspection machines are deployed at the entrances of public transportation. In general, multiple devices are required to work at the same time.
Nevertheless, the detection performance for prohibited items in X-ray images is still not ideal, because detected objects often overlap each other during security inspection. To address this, a prohibited item detection algorithm based on the de-occlusion module can be deployed on the Triton Inference Server to perform better on X-ray images.
Hence, with credit to Yanlu Wei, Renshuai Tao et al., we provide this fundamental project in which we deploy a deep learning model that can detect prohibited items (specifically knives) on the Raspberry Pi and the reComputer J1010. The Jetson NX and the Jetson AGX are both supported as well.
Getting Started

Triton Inference Server provides a cloud and edge inferencing solution, optimized for both CPUs and GPUs. Triton supports the HTTP/REST and gRPC protocols, which allow remote clients to request inferencing for any model being managed by the server. Here we are going to use Triton (Triton Inference Server) as our local server, on which the detection model will be deployed.
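As a small illustration of that protocol (a sketch, not part of the setup steps): once a Triton server is running, any client on the network can query its standard HTTP/REST API. The two routes below are part of Triton's KServe-compatible API; the address and the model name "opi" (used later in this guide) are assumptions for illustration.

# query the server metadata (replace localhost with the server's IP)
curl localhost:8000/v2
# query the metadata of a deployed model named "opi"
curl localhost:8000/v2/models/opi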
Hardware

Hardware Required

In this project, the required devices are shown below:
- Raspberry Pi 4B*2
- reComputer J1010
- HDMI-display screen
- PC
Both Raspberry Pi and reComputer should be powered on and connected to the internet.
Setting up Raspberry Pi

Step 1. Install the official Raspberry Pi OS (Raspbian Buster or Bullseye) and complete the basic configuration. In this project, we use Raspberry Pi OS (64-bit) as our operating system.
Step 2. Configure the Raspberry Pi environment.
We will enable the Raspberry Pi's SSH port and call it remotely using SSH from the PC. To connect to the Pi from the computer, we need to know the Pi's IP address. Notice: make sure the PC and the Raspberry Pi are on the same LAN. Follow the steps below to connect the Raspberry Pi to the computer.
- Open a new terminal and execute
sudo raspi-config
The menu will be shown as below. Select “Interfacing Options” and then press ENTER.
- Select “SSH” and press ENTER.
- Select “Yes” and press ENTER.
- After a while, we will get a message “The SSH server is enabled”.
- Open a Terminal and execute
ifconfig
We can see the IP address of Raspberry Pi shown as below:
- Open the PC’s Terminal and execute
ssh pi@192.168.6.215
and enter the password of your Raspberry Pi. The PC will then connect to the Raspberry Pi.
Step 3. Configure the Python environment.
- Check the Python version. Execute
python -V
and ensure the Python version is 3.9.2.
- Install Tritonclient dependencies. Execute
pip3 install tritonclient[all]
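As a quick sanity check (a minimal sketch; both modules ship with the tritonclient package installed above), confirm that the client libraries import cleanly:

python3 -c "import tritonclient.grpc, tritonclient.http; print('tritonclient OK')"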
- Install the needed PyTorch dependencies.
Notice: Before we install PyTorch, we have to check our Raspbian version.
Execute the commands below to install PyTorch:
# get a fresh start
sudo apt-get update
sudo apt-get upgrade
# install the dependencies
sudo apt-get install python3-pip libjpeg-dev libopenblas-dev libopenmpi-dev libomp-dev
# above 58.3.0 you get version issues
sudo -H pip3 install setuptools==58.3.0
sudo -H pip3 install Cython
# install gdown to download from Google drive
sudo -H pip3 install gdown
# Buster OS
# download the wheel
gdown https://drive.google.com/uc?id=1gAxP9q94pMeHQ1XOvLHqjEcmgyxjlY_R
# install PyTorch 1.11.0
sudo -H pip3 install torch-1.11.0a0+gitbc2c6ed-cp39-cp39-linux_aarch64.whl
# clean up
rm torch-1.11.0a0+gitbc2c6ed-cp39-cp39-linux_aarch64.whl
# or Bullseye OS
# download the wheel
gdown https://drive.google.com/uc?id=1ilCdwQX7bq72OW2WF26Og90OpqFX5g_-
# install PyTorch 1.11.0
sudo -H pip3 install torch-1.11.0a0+gitbc2c6ed-cp39-cp39-linux_aarch64.whl
# clean up
rm torch-1.11.0a0+gitbc2c6ed-cp39-cp39-linux_aarch64.whl
After a successful installation, we can check PyTorch with the following command.
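For example (a minimal check; the printed version should correspond to the 1.11.0 wheel installed above):

# print the installed PyTorch version
python3 -c "import torch; print(torch.__version__)"
# expected output similar to: 1.11.0a0+gitbc2c6ed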
- Install Torchvision dependencies.
# download the wheel
gdown https://drive.google.com/uc?id=1oDsJEHoVNEXe53S9f1zEzx9UZCFWbExh
# install torchvision 0.12.0
sudo -H pip3 install torchvision-0.12.0a0+9b5a3fe-cp39-cp39-linux_aarch64.whl
# clean up
rm torchvision-0.12.0a0+9b5a3fe-cp39-cp39-linux_aarch64.whl
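As with PyTorch, a quick import check (the printed version should match the 0.12.0 wheel just installed):

python3 -c "import torchvision; print(torchvision.__version__)"
# expected output similar to: 0.12.0a0+9b5a3fe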
Notice: PyTorch wheels for the Raspberry Pi 4 can be found at https://github.com/Qengineering/PyTorch-Raspberry-Pi-64-OS
- Install OpenCV 4.5.5. Execute
pip3 install opencv-python
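To verify the installation (a minimal check):

python3 -c "import cv2; print(cv2.__version__)"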
Setting up reComputer J1010

Step 1. Get started with the reComputer J1010. If JetPack 4.6.1 has not been installed on your reComputer, you can refer to the installation guide.
Step 2. Install and configure the server.
Notice: the following step is a general method for setting up the server. If you only need to deploy this project, you can skip this step.
Open a new Terminal and execute:
git clone https://github.com/triton-inference-server/server
Then execute:
cd ~/server/docs/examples
sh fetch_models.sh
Step 3. Create a new folder “opi/1” under “home/server/docs/examples/model_repository”. Download model.onnx and put it into the “1” folder.
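After this step, the model repository should follow Triton's standard layout: one folder per model, with numbered version subfolders holding the model file. Sketched below, with "opi" being the model name the client will request later:

model_repository/
└── opi/
    └── 1/
        └── model.onnx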
Step 4. Install the release of Triton for JetPack 4.6.1, which is provided in the attached tar file: tritonserver2.19.0-jetpack4.6.1.tgz.
To be noticed:
1. This release supports TensorFlow 2.8.0, TensorFlow 1.15.5, TensorRT 8.4.0.9, ONNX Runtime 1.10.0, PyTorch 1.12.0, Python 3.8, as well as ensembles.
2. The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
3. System shared memory is supported on Jetson. CUDA shared memory is not supported.
4. GPU metrics, GCS storage, S3 storage and Azure storage are not supported.
The tar file contains the Triton server executable and shared libraries, as well as the C++ and Python client libraries and examples. For more information about how to install and use Triton on JetPack, you can refer to jetson.md.
Then execute
mkdir ~/TritonServer && tar -xzvf tritonserver2.19.0-jetpack4.6.1.tgz -C ~/TritonServer
cd ~/TritonServer/bin
./tritonserver --model-repository=/home/seeed/server/docs/examples/model_repository --backend-directory=/home/seeed/TritonServer/backends --strict-model-config=false --min-supported-compute-capability=5.3
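Once the server starts, you can verify from another terminal that it is ready to serve requests; this uses Triton's standard HTTP health endpoint on port 8000 (the default; replace localhost with the reComputer's IP when checking remotely):

curl -v localhost:8000/v2/health/ready
# a ready server answers with: HTTP/1.1 200 OK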
Now that all the preparations are done, we can try to run our model.
Running the Program

Step 1. Download the model and related files.
Clone the module from GitHub. Open a new Terminal and execute
git clone https://github.com/LemonCANDY42/Seeed_SMG_AIOT.git
cd Seeed_SMG_AIOT/
git clone https://github.com/LemonCANDY42/OPIXray.git
- Create a new folder “weights” to store the pre-trained weight of this algorithm, “DOAM.pth”. Download the weight file and execute:
cd OPIXray/DOAM
mkdir weights
- Follow the same steps to create a “Dataset” folder to store the X-ray image dataset.
Step 2. Run the inference model.
Execute:
python OPIXray_grpc_image_client.py -u 192.168.8.230:8001 -m opi Dataset
Here -u points to the Triton server's gRPC address (use your reComputer's IP and port 8001), -m selects the deployed model “opi”, and Dataset is the folder of X-ray images to run inference on.
Now, the result will be shown as the figure below:
Stay tuned with us!
Troubleshooting

When you launch the Triton server, you may meet the following errors:
- If you get an error with libb64.so.0d, execute:
sudo apt-get install libb64-0d
- If you get an error with libre2.so.2, execute:
sudo apt-get install libre2-dev
- If you get “error: creating server: Internal - failed to load all models”, add the following flag to the launch command:
--exit-on-error=false
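That is, append the flag to the launch command used earlier, for example:

./tritonserver --model-repository=/home/seeed/server/docs/examples/model_repository --backend-directory=/home/seeed/TritonServer/backends --strict-model-config=false --min-supported-compute-capability=5.3 --exit-on-error=false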