We will build a robot dog. We assume that it likes to be close to people but is afraid of cats.
Therefore, the robot dog needs to be able to recognize people and cats.
Neural networks can now easily detect objects in images and distinguish their categories.
We will use the Avnet Ultra96-V2 development board as the main controller of the robot dog. This platform pairs an FPGA with Arm Cortex-A53 CPU cores: the A53 cores can run an operating system such as Ubuntu, while the FPGA accelerates neural-network computation through the Xilinx Vitis AI stack.
The body of the robot dog is 3D printed; the parts come from:
https://www.thingiverse.com/thing:3445283
https://www.bilibili.com/video/BV1i7411n7Qx
The robot dog uses 12 servo motors. A traditional servo motor needs a PWM input signal to set its rotation angle. The Avnet Ultra96-V2 development board has 40 pins available, so we could give each motor its own pin. (But this requires building the PWM generation logic in the FPGA fabric, and I ran into problems at this step.) So in the end I used servo motors with a serial bus interface. This kind of motor requires a dedicated adapter board that converts the standard serial port signal into the single-wire UART signal the motors use; the serial protocol carries the rotation commands for each motor.
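As a minimal sketch of what sending such a command looks like: the frame layout below is purely illustrative (the real byte format depends on the specific servo model and adapter board), and set_servo_angle and /dev/ttyUSB0 are assumptions for this example.

```python
# Illustrative sketch only: the real frame layout depends on your serial bus
# servo model and its adapter board -- check the vendor's protocol document.
import serial

def set_servo_angle(ser, servo_id, position, duration_ms=500):
    """Send a hypothetical 'move to position' frame to one serial bus servo."""
    # Hypothetical frame: header, id, length, command, position (lo/hi),
    # time (lo/hi), checksum. Replace with your servo's actual protocol.
    payload = [
        servo_id,
        7,                      # remaining byte count (example value)
        0x01,                   # example "move" command id
        position & 0xFF, (position >> 8) & 0xFF,
        duration_ms & 0xFF, (duration_ms >> 8) & 0xFF,
    ]
    checksum = (~sum(payload)) & 0xFF
    frame = bytes([0x55, 0x55] + payload + [checksum])
    ser.write(frame)

if __name__ == "__main__":
    # The USB serial adapter usually shows up as a ttyUSB device on the board.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1) as ser:
        set_servo_angle(ser, servo_id=1, position=500)
```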
For the camera, we use a USB camera. Compared with a MIPI camera, it connects to the Avnet Ultra96-V2 development board easily, and in an environment with the Ubuntu operating system and OpenCV it is very easy to grab frames from it, even from a plain Python script.
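For example, assuming the camera appears as device 0 (/dev/video0), grabbing a frame with OpenCV is only a few lines:

```python
import cv2

# Open the USB camera (device index 0 is an assumption; adjust if needed).
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("USB camera not found -- check the connection.")

ret, frame = cap.read()          # grab one BGR frame as a NumPy array
if ret:
    cv2.imwrite("snapshot.jpg", frame)

cap.release()
```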
System
The CPU of the Avnet Ultra96-V2 development board can run the Ubuntu operating system, and with PetaLinux you can easily tailor the contents of the Linux system. In this design I used Xilinx's PYNQ image. It provides an Ubuntu-based operating system with a graphical interface, and it also contains precompiled bitstreams and examples with pre-trained neural network models.
Neural Networks
I used the dpu_yolo_v3 example included in the PYNQ image. It is a tailored neural network model that can recognize 20 different object classes, and person and cat happen to be among them. (The classes and their indices are listed in img/voc_class.txt.)
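As a quick check, a minimal sketch (assuming the file lists the standard Pascal VOC class names, one per line, in the notebook's img/ folder) can load that list and look up the indices of person and cat:

```python
# Minimal sketch: load the class list shipped with the example and find the
# indices of "person" and "cat". Adjust the path to where voc_class.txt lives.
with open("img/voc_class.txt") as f:
    classes = [line.strip() for line in f if line.strip()]

print(len(classes), "classes")           # expected: 20 (Pascal VOC)
print("person index:", classes.index("person"))
print("cat index:", classes.index("cat"))
```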
Motion control of the robot dog
To control each motor of the robot dog more conveniently, we need to model the robot dog's legs: we want to tell the robot dog where its foot should land (relative to the body) and have a function automatically calculate the angle of each motor.
The function xyztoang provides exactly this inverse kinematics. (From: https://github.com/richardbloemenkamp/Robotdog; a simplified sketch follows the video links below.)
https://www.bilibili.com/video/BV1n7411U74W
https://www.bilibili.com/video/BV1p7411U7WG
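The real xyztoang handles the full 3-DOF leg; the planar sketch below only illustrates the idea of solving two joint angles from a foot target, with placeholder link lengths (leg_ik_2d, l1 and l2 are assumptions for this example, not the project's actual values).

```python
# Simplified planar sketch of the idea behind xyztoang: given a foot target
# (x, z) in the leg plane, solve the hip and knee angles of a 2-link leg.
# The real xyztoang (richardbloemenkamp/Robotdog) also handles the hip
# abduction joint and the y axis; link lengths here are placeholders.
import math

def leg_ik_2d(x, z, l1=0.10, l2=0.10):
    """Return (hip_angle, knee_angle) in radians for a foot at (x, z)."""
    d2 = x * x + z * z
    d = math.sqrt(d2)
    if d > l1 + l2:
        raise ValueError("target out of reach")
    # Law of cosines gives the knee flexion; the hip angle is the target
    # direction minus the angle between the thigh and the hip-to-foot line.
    knee = math.pi - math.acos((l1**2 + l2**2 - d2) / (2 * l1 * l2))
    hip = math.atan2(x, -z) - math.asin(l2 * math.sin(knee) / d)
    return hip, knee

print(leg_ik_2d(0.05, -0.15))
```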
We also need to be able to control the posture of the robot dog: we tell it the body's rotation angles and translation relative to the original position.
The function pose_control provides this. (From: https://zhuanlan.zhihu.com/p/64321561; a sketch of the idea follows the video links below.)
https://www.bilibili.com/video/BV1EZ4y1x7fX
https://www.bilibili.com/video/BV12k4y1975K
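The actual pose_control comes from the linked article; the sketch below only illustrates the underlying idea, assuming the feet stay on the ground while the body rotates and translates, so each foot target is the inverse body transform applied to its nominal position (body_pose_foot_targets and the example foot coordinates are illustrative names and values).

```python
# Illustrative sketch of the idea behind pose_control: rotate and translate
# the body while the feet stay put, by transforming each foot's position
# into the new body frame and then feeding it to the leg IK.
import numpy as np

def body_pose_foot_targets(feet_body, roll, pitch, yaw, t_xyz):
    """feet_body: 4x3 nominal foot positions in the body frame.
    Returns the foot targets as seen from the rotated/translated body."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # The feet are fixed in the world and the body moves, so apply the
    # inverse transform (subtract translation, rotate by R^T) to each foot.
    return (feet_body - np.asarray(t_xyz)) @ R

feet = np.array([[ 0.12,  0.06, -0.15],
                 [ 0.12, -0.06, -0.15],
                 [-0.12,  0.06, -0.15],
                 [-0.12, -0.06, -0.15]])
print(body_pose_foot_targets(feet, roll=0.0, pitch=0.1, yaw=0.0, t_xyz=[0, 0, 0.02]))
```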
To make the robot dog walk, we define its gait trajectory in advance. The trajectory points are stored in the foot_step_x and foot_step_y arrays; a playback sketch follows the video links below.
(Data from: https://github.com/grassjelly/cheetah-algorithms, scaled up by a factor of 1000.)
https://www.bilibili.com/video/BV1Ep4y1C7QJ
https://www.bilibili.com/video/BV1V64y1u7F3
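One way the arrays might be played back is sketched here; leg_ik() and send_leg_angles() are placeholders for xyztoang and the serial servo driver, and the trot phase offsets are an assumption rather than the project's exact scheduling.

```python
# Hedged sketch of gait playback: step through the pre-defined trajectory
# arrays with a half-cycle phase offset between diagonal leg pairs (a trot).
# foot_step_x / foot_step_y come from grassjelly/cheetah-algorithms scaled by
# 1000; leg_ik() and send_leg_angles() stand in for xyztoang and the servo
# driver described above.
import time

def play_gait(foot_step_x, foot_step_y, leg_ik, send_leg_angles, dt=0.02):
    n = len(foot_step_x)
    # Legs 0,3 (front-left, rear-right) move in phase; legs 1,2 half a cycle later.
    phase = [0, n // 2, n // 2, 0]
    i = 0
    while True:
        for leg in range(4):
            k = (i + phase[leg]) % n
            x = foot_step_x[k] / 1000.0      # undo the x1000 scaling
            y = foot_step_y[k] / 1000.0
            angles = leg_ik(x, y)            # joint angles for this foot target
            send_leg_angles(leg, angles)     # write to the serial bus servos
        time.sleep(dt)
        i = (i + 1) % n
```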
So we use the PYNQ image. Of course, you can also compile the hardware part yourself and build your own image file.
The PYNQ image includes pynq-dpu/dpu_yolo_v3.ipynb, which runs YOLOv3 inference on the DPU to identify the categories of objects in locally stored pictures. We can change the input so that the pictures come from the USB camera and wrap the inference in a loop, so the judgment runs continuously.
When a person appears among the recognized categories, the robot dog walks forward; when a cat appears, it steps back to show fear.
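A rough sketch of this behavior loop follows; run_yolo_v3(), walk_forward() and step_back() are placeholders for the notebook's DPU inference code and the gait routines described above, not functions from the example itself.

```python
import cv2

def run_yolo_v3(frame):
    """Placeholder for the DPU YOLOv3 inference from dpu_yolo_v3.ipynb.
    It should return the list of detected class names for this frame."""
    return []

def walk_forward():
    print("walking forward")              # replace with the gait routine

def step_back():
    print("stepping back")                # replace with the gait routine

cap = cv2.VideoCapture(0)                 # USB camera, device index assumed

while True:
    ret, frame = cap.read()
    if not ret:
        continue
    labels = run_yolo_v3(frame)           # e.g. ["person", "dog", ...]
    if "cat" in labels:
        step_back()                       # fear wins over curiosity
    elif "person" in labels:
        walk_forward()
```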
You can also clone my modified example directly inside the image:
https://github.com/liyang53719/Ultra96_robota.git
It contains an .ipynb file that can be run in the browser through Jupyter.
It also includes a .py file that can be run from a serial terminal.
Please make sure the USB serial adapter and the USB camera are connected before running, and run with sudo.
This is the final result presentation:
vlog20201207_Ultra96v2_robota results demonstration (video on bilibili)