I'm glad to announce that my robot is now available on Tindie. If you don't want to follow all the setup steps and just want a ready-to-run robot for your ROS2 adventures, please support me here.
Constant Support
Met an issue? Please join my Slack channel.
Story
Recently I have been playing with robots using ROS1 and ROS2. I tried to build an F1tenth robot (https://f1tenth.org) using an Intel NUC, then switched to robot vacuum cleaners and played with omnidirectional robots.
I found out that, except for the JetBot by Nvidia, there are not many x86-powered robots that you can build from A to Z just by following an instruction manual. Either you have to buy an expensive platform like iRobot Create, Husarion, or TurtleBot, or you risk getting stuck in the mess of ROS1/ROS2 build tutorials scattered all over the web.
I decided to find a way to make the life of a typical x86 ROS maker easier. For Intel-based processors I found the Robotics SDK on the web. It is essentially a set of Debian packages that you install on your Ubuntu 22.04 system to get access to Intel-proprietary 3D-vision algorithms like CollabSLAM, ADBSCAN, and KudanSLAM. I decided to give it a try, and that is how my new robot project was born.
MIKRIK is an open-source two-wheel-drive robot for robotics enthusiasts who are studying robot vision using 3D cameras.
The name MIKRIK is an abbreviation of:
- MIni
- Kreative
- Robotics
- Intellective
- Kit
MIKRIK supports any control board you have at home. A Raspberry Pi runs ROS1 to read encoder data, control the motors, and expose a ROS1 interface. That interface can then be accessed by a ROS1/ROS2 navigation node running on a separate computer through the ROS1-ROS2 bridge.
Why do I need to use the ROS1-ROS2 bridge?
The Raspberry Pi is not powerful enough to handle the vision part, and the Robotics SDK is not supported on the ARM architecture. In the MIKRIK project the Raspberry Pi is used only for the low-level, low-cost tasks. The heavy computer-vision tasks, SLAM, and navigation are done by a separate board. In my case I use the x86 LattePanda board.
Why don't you use the GPIO of the LattePanda, Radxa, UP7000, or any other board?
The project evolved from the Raspberry Pi, and I decided not to spend time writing GPIO support for every board that has a GPIO pinout. If you are willing to do so, let me know. For now, I'm considering switching from the Raspberry Pi to an Orange Pi to cut costs.
The project has two parts:
- Robot part. It runs ROS1 on a Raspberry Pi 4B. ROS1 has nodes to communicate with the robot motors and to read encoder data. This is also where the roscore ROS1 master runs.
It also creates a robot controller, after which your robot can be controlled via the /cmd_vel topic.
After finishing that part you will have a ROS1 robot that can be controlled with a PS4 gamepad.
To go further, you will have to connect it to the control computer part of the project and drive the robot by sending commands to the /cmd_vel topic (see the publishing sketch after this list).
You might skip the control computer part of the project and enjoy a standalone ROS1 robot based on the Raspberry Pi. In that case, however, you would have to add your own ROS1 nodes to read lidar data and build a map with SLAM algorithms such as SLAM Toolbox.
Since my tutorial is focused on using the Intel Robotics SDK for computer vision, I will not cover ROS1 SLAM with lidars; it is on the to-do list.
- Control computer part. This is the main brain of the robot. It generates control commands and moves the robot according to the main navigation node program. It can be any sufficiently powerful computer running the ROS2 SLAM and navigation nodes (plus the ROS1 bridge). The navigation node publishes velocity commands to the /cmd_vel topic, and they are forwarded to your robot through the ROS1-ROS2 bridge.
Your robot can run ROS2 on an Intel NUC or on an x86 Intel CPU board like the LattePanda, Radxa X4, or UP7000. Because they are all x86, the setup is identical for all of them. The only caveat is that some of the boards may require an external Wi-Fi module.
Then connect the computer to the low-level ROS1 robot part through the ROS1-ROS2 bridge over an Ethernet cable.
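For reference, here is a minimal sketch of what a manual velocity command looks like on the ROS1 side. This is my own example; the plain /cmd_vel name is an assumption here, and on MIKRIK the topic later shows up namespaced as /mobile_mikrik/cmd_vel, so adjust the name to whatever your bringup actually uses.
# Drive slowly forward by publishing Twist messages at 10 Hz; stop with Ctrl+C
rostopic pub -r 10 /cmd_vel geometry_msgs/Twist '{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'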
In this tutorial I will cover:
- How to build a differential drive two-wheel robot
- How to connect the robot to the Intel Robotics SDK and run visual SLAM using a Realsense D435(i) camera
In the same way you can connect a lidar, but I strongly recommend using 3D cameras, or even plain RGB cameras. An RGB camera can be taught with AI to estimate the depth of the scene, but the AI part is not covered in this tutorial; if that's your thing, you're welcome to contribute.
Of course, for new makers I will add 2D-lidar support to this project, because 2D lidars are cheaper than 3D cameras and are good for learning. Wait for updates.
There might be slight discrepancies between the photos and the actual CAD parts you have. The project is in active development, so the CAD files are updated often.
Part 1 Robot Building Steps
If you already have a ROS1 or ROS2 robot and just want to explore the Intel Robotics SDK, you can go straight to Part 2. In Part 2 I explain how to set up your robot to communicate with the Robotics SDK software and run several robotics tutorials from the Developer Guide.
Prerequisites
- Please laser-cut the parts from the MIKRIK-CAD Github repo and 3D-print some of them. Follow the docs in the Github repo for more details.
If some parts are not the same as in the photos below, please be aware that I'm updating the project constantly and the design may change. This doesn't impact the robot's usability. All ROS2 navigation constants in the code correspond to the robot assembly you can find in the MIKRIK-CAD Github repo.
The photos below are given for reference only.
Step 1.1 Motors Installation
Take one motor mount plate (left), two M2.5x20mm screws with M2.5 nuts, and one L-shape motor. Keep the wires included in the motor set nearby; we will use them later.
Attach the motor to the motor plate and fix it with the screws.
Build the right motor mount the same way.
Take the bottom chassis plate and the metal-ball caster wheel together with two M3x8mm screws.
Install the metal-ball caster wheel on the bottom chassis plate.
Then use M3 nuts to secure the screws.
Take the bottom chassis plate again, four 3D-printed (or metal, 55mm height) standoffs, and eight M2.5x10mm screws.
Insert the screws into the four mounting points on the bottom chassis plate.
Install the standoffs from the other side of the chassis.
Make sure the standoffs are aligned with the slots on the chassis.
Install the motor mounts on the chassis. Insert the studs on the motor mounts into the slots on the chassis.
Take the top chassis plate. It has many holes for mounting Intel NUCs, Nvidia Jetson (Auvidea version), Raspberry Pi, LattePanda, and AAEON UP Xtreme i11 boards. It also has holes to mount a Realsense camera at the front and rear of the robot, so you can run visual SLAM with two Realsense cameras (that is an idea for the next tutorial).
Place it on the standoffs, aligning the motor mount studs with the rectangular slots on the top plate.
Side view.
Now fix it using four M2.5x10mm screws.
Now we're going to build a LattePanda holder. This way you can stick Velcro tape on the bottom of the plastic plate and easily mount it anywhere on the chassis.
Take one LattePanda larger plate, four standoffs (3D-printed or factory-made), and four M2.5x6mm screws. Install the standoffs on the plate and fix them with the screws.
Now take the LattePanda board, the Wi-Fi antennas included in the box, and an M.2 SSD. The LattePanda 3 Delta is an x86 single-board computer powered by the Intel N5105 quad-core processor. With Intel Turbo Boost technology it can reach speeds of up to 2.9 GHz. It also has an embedded Arduino Leonardo microcontroller.
Install the M.2 disk on the LattePanda.
The embedded microcontroller could eliminate the need for the Raspberry Pi board in the MIKRIK project in the future, but this feature is not ready yet; I'm still working on it.
Attach the antennas to the Wi-Fi module soldered on the board and stick them to the plastic plate.
Finally, mount the board using four M2.5x6mm screws.
Now stick a piece of Velcro tape (the soft loop side) on the plate.
Do the same for the Raspberry Pi: mount it on the plate and then stick on the Velcro tape.
Time to play with Velcro tape! Take the chassis and stick two pieces of Velcro tape with hooks (the hard side) on the bottom chassis plate.
Here it goes.
Now stick Velcro tape on the top chassis plate.
Take Velcro tape and put it on the bottom side of the power bank.
Now stick Velcro tape with loops on the bottom side of the 2S battery.
Turn it over and stick Velcro tape with loops on the top side.
Take the DFRobot motor driver HAT and the wires from the motor set.
Follow the wiring diagram below to connect all wires to the proper pins. One motor set has two kinds of wires: a 2-pin wire to power the motor and a 4-pin wire to power the encoder and read the encoder signal.
Use a Raspberry Pi GPIO diagram to identify the correct pin numbers.
After that, connect the wires to the correct motors: the left motor wires go to the left motor and its encoder. Do the same for the right motor.
Connect the wires.
This is how it will look.
You can use a plastic tube to make the wiring neater.
Now install the LattePanda board like this.
Next goes the power bank.
After that, install the Li-Po battery on top.
Battery after installation.
Connect the Raspberry Pi to the motor driver HAT.
Now stick the Raspberry Pi on top of the Li-Po battery.
Side view.
Now take an Ethernet cable.
Connect the LattePanda and the Raspberry Pi together.
Now mount the Realsense D435(i) camera on the robot using a 1/4-20 x 3/8" screw.
First install the screw.
Then fasten the camera.
Now take a USB Type-C cable. Connect one end to the power bank and the other to the LattePanda board.
Power the Raspberry Pi and the motors using the Li-Po battery. I'm using a Traxxas connector, but in the hardware list I provided a link to a Deans-T battery and connector to make assembly easier and avoid soldering.
In your case, take the Deans-T connector and cut the wire.
After that, mate the Deans-T connector with the matching connector on the battery, just as I'm doing with the Traxxas connector.
This will power the motor driver and the Raspberry Pi. The motor driver accepts voltages in the range of 7-12V.
During the setup process I recommend powering the Raspberry Pi from an external DC power supply via its USB Type-C port. After the system is set up successfully, switch to battery power.
Now take the wheels and install them.
The robot part of the project is ready! We can proceed to the programming part.
Part 2 Control Computer Part Building Steps
Raspberry PI 4 Setup
Here is the ROS1 code setup that runs on the Raspberry Pi. To save time, I will not cover how to write ROS1 nodes.
Here is my MIKRIK Github repo, which contains the source code for the ROS1 nodes that run on the robot.
Step 2.1 Download Custom Raspberry Image
Please download my ready-to-run disk image and burn it onto your microSD card. The image contains Ubuntu 20.04 x64 Server and ROS1 Noetic installed for the Raspberry Pi.
Download Link Ready-To-Run MIKRIK Robot Part Disk Image
Step 2.2 Write Image File
Now take a 16GB microSD card and burn the image using Win32 Disk Imager on Windows or balenaEtcher on Mac. On Linux you can also do it from the terminal with the dd command (see the sketch below).
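For reference, a minimal dd invocation could look like the lines below. The image file name and the /dev/sdX device are placeholders; check the real device name with lsblk first, because dd will overwrite whatever disk you point it at.
# Identify the microSD card device first (e.g. /dev/sdb), then write the image
sudo dd if=mikrik-robot-part.img of=/dev/sdX bs=4M status=progress conv=fsync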
Step 2.3 Boot Raspberry
Insert the microSD card into the Raspberry Pi and boot it. Connect a micro-HDMI cable to it.
After a successful boot, log in with the username mikrik and password mikrik2024.
The sudo password is mikrik2024.
The GUI will load.
Step 2.4 Connect PS4 Gamepad To Raspberry
Now it is time to connect a PS4 controller.
Put the PS4 controller into pairing mode by holding the PS button and SHARE for about 10 seconds.
Now go back to the Ubuntu terminal and pair the Raspberry Pi with the PS4 controller.
Run this command to open the bluetoothctl utility:
bluetoothctl
After that, run:
power on
Then turn on scanning:
scan on
Find your PS4 controller in the list. It should be detected as "Wireless Controller".
Copy its MAC address.
Now run the command to trust the PS4 controller (substitute your controller's address):
trust A4:AE:12:A0:C6:EF
Then pair the device:
pair A4:AE:12:A0:C6:EF
Finally, connect the PS4 controller to the Raspberry Pi:
connect A4:AE:12:A0:C6:EF
Congratulations! You connected your PS4 controller to the Raspberry Pi. Check that the status LED on the PS4 controller turns solid blue.
To test that the gamepad can control the robot correctly, run the ROS1 nodes following the steps below.
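Optionally, before involving ROS at all, you can sanity-check the gamepad input at the OS level. This is my own suggestion rather than part of the original image, and it assumes the joystick package and that the controller appears as /dev/input/js0:
sudo apt install joystick
jstest /dev/input/js0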
Step 2.6 Power-up Raspberry Using Battery
Connect the Li-Po battery to the motor driver and power up the Raspberry Pi, if you haven't done that yet. It is okay if your Raspberry Pi is powered from the motor driver (battery supply) and from a Type-C DC power supply at the same time.
Connect your Raspberry Pi to Wi-Fi using the Wi-Fi settings panel in Ubuntu.
Then find its IP address by running this terminal command:
ifconfig
From this moment you no longer need a display connected, because you can SSH into the board from your development laptop.
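For example, assuming the default user from the image (the IP address below is just a placeholder for whatever ifconfig reported):
ssh mikrik@192.168.0.50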
Step 2.8 Launch ROS1 Nodes On MIKRIK
Open an Ubuntu terminal and run the ROS1 MIKRIK nodes.
Go to:
cd /home/mikrik/ros
Switch to root:
sudo su
Now source the environment and run ROS1 MIKRIK nodes.
source devel/setup.bash
roslaunch mikrik_description bringup.launch
After that you will see the nodes start running. roscore will be up on the Raspberry Pi side.
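As a quick sanity check (my own suggestion, reusing the same workspace as above), open a second terminal on the Raspberry Pi and list the ROS1 topics; you should see /joy and the mikrik wheel topics published by the bringup launch:
cd /home/mikrik/ros
source devel/setup.bash
rostopic list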
Press Ctrl+C to stop the program.
Step 2.9 Final Checks
Check that you can move the motors using the gamepad. See the video below for how to control the robot with the PS4 gamepad.
The MIKRIK robot part setup is done!
Part 3 LattePanda Delta 3 Setup With Robotics SDK
Our LattePanda will control the motion of the robot.
It will read the Realsense camera data and process it using the Robotics SDK and Intel-proprietary ROS2 nodes. We will configure communication with the Raspberry Pi using the ROS1-ROS2 bridge and install the Intel Robotics SDK.
Step 3.1 Install Ubuntu
On the LattePanda we're going to install Ubuntu 22.04 LTS.
- Follow common procedures to install Ubuntu 22.04 LTS on your LattePanda board.
- After the installation process, configure the Wi-Fi network.
- Install and configure SSH.
Now configure Ethernet communication with the robot part. Set up a wired connection on your freshly installed Ubuntu system following the screenshots below.
Click on the "gear" icon on the left to open a new window. Check that the Details tab looks the same as mine; the hardware address will be different.
Go to the Identity tab. The name doesn't matter; I put "Rpi". Choose the MAC address from the drop-down list; on your system it will be different.
IPv4 tab. Set the IP address to 172.100.0.100; it will be your LattePanda's local IP address. Set the netmask to 255.255.255.0. Keep the rest of the DNS section the same.
IPv6 tab. Disable it.
Security tab. Leave security disabled (the default).
Reboot LattePanda.
Now try to ping the robot part by running this command in the LattePanda's terminal:
ping 172.100.0.1
The LattePanda must be able to reach the robot board via Ethernet.
In the same way, try to ping the LattePanda from the Raspberry Pi side; it should also be able to reach the LattePanda.
ping 172.100.0.100
Step 3.3 Setup VNC On LattePanda Ubuntu 22.04
It is a pretty straightforward process that is described here.
In the end, you should be able to access your LattePanda board from your own laptop over the Wi-Fi network using a VNC server.
Install Xfce:
sudo apt update
sudo apt install xfce4 xfce4-goodies
Then install the VNC server:
sudo apt install tigervnc-standalone-server
sudo apt install tigervnc-common
sudo apt install tigervnc-tools
Now configure a password; I recommend using mikrik2024 again:
vncpasswd
Now open the xstartup file:
vi ~/.vnc/xstartup
Replace its contents with:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startxfce4
Then make it executable:
chmod u+x ~/.vnc/xstartup
Now go back to your laptop. Open a terminal window and SSH into your LattePanda board. On Windows, use PuTTY to access the LattePanda.
ssh amr@192.168.0.53
Then launch the VNC server:
vncserver -localhost no -geometry 1920x1080 -depth 16 :42
If everything works correctly, the output will look like this. Remember the port number; I have 5942 and you will have the same:
amr@robot-kit:~$ vncserver -localhost no -geometry 1920x1080 -depth 16 :42
New Xtigervnc server 'robot-kit:42 (amr)' on port 5942 for display :42.
Use xtigervncviewer -SecurityTypes VncAuth,TLSVnc -passwd /home/amr/.vnc/passwd robot-kit:42 to connect to the VNC server.
If you receive an error, first make sure the physical display is disconnected from the LattePanda board and try again. If you power the LattePanda via USB Type-C and use the same cable to connect to the display, power the board from a USB Type-C wall adapter or laptop charger instead of the display's USB-C output. VNC will not work while a physical display is connected; you might find a fix for that on the internet.
Now try to access the LattePanda VNC server from the laptop.
Take your laptop. I'm using a Mac, so here is the Mac setup. For Windows, please download the TightVNC Viewer app.
Open Finder. In the menu choose Go -> Connect to Server.
Replace my LattePanda IP address with yours and add the port number:
vnc://192.168.0.53:5942
Type the password you set, mikrik2024.
Finally, you can access the LattePanda remotely! Congratulations!
Next, set up host names so that the ROS1-ROS2 bridge running on the LattePanda can access the Raspberry Pi ROS1 instance via Ethernet.
Open the hosts file and add the proper host name for the Raspberry Pi:
sudo vi /etc/hosts
Add the line below to that file and save it:
172.100.0.1 mxlfrbt
Try to ping the Raspberry Pi using its host name "mxlfrbt". It should work the same as with the IP address.
ping mxlfrbt
Step 3.5 Go Back To Raspberry
For this step, go back to the terminal of the Raspberry Pi robot system.
Make sure the system has the proper host name for the LattePanda. In my case it is "robot-kit". Depending on your LattePanda's host name in Ubuntu, update it accordingly.
Go to:
sudo vi /etc/hosts
And update the line with your system's host name; my x86 computer uses the name "robot-kit", yours might be different:
172.100.0.100 robot-kit
Now try to ping the LattePanda from the Raspberry Pi using its host name.
Run:
ping robot-kit
It should reach the computer by its hostname.
Step 3.6 Check Intel GPU Type
Install the intel-gpu-tools utility:
sudo apt install intel-gpu-tools
Then execute:
sudo intel_gpu_top -L
Remember the output (gen12, gen9, or gen11); you will use it during the installation of the Intel Robotics SDK.
card0 Intel Jasperlake (Gen11) pci:vendor=8086,device=4E61,card=0
└─renderD128
As you can see, the LattePanda has a gen11 GPU. Please remember that.
Step 3.7 Install Intel Robotics SDK
Return to the LattePanda Ubuntu system again.
Follow the official guide to install the Intel Robotics SDK. The Robotics SDK is software optimised for Intel platforms for building AMR robots. It is distributed via Debian packages.
The documentation is pretty difficult to follow, but try to manage it. There is a lot of text; read it and follow the steps described on the Intel web page. Intel folks, please fix that.
First, complete the preparation steps from the official documentation.
Next, install the Robotics SDK following these steps:
- Register on Intel® Edge Software Hub
- Setup APT Repositories
- Install the Intel® Robotics SDK Deb packages (in the middle of the process you will need to enter the GPU generation from Step 3.6)
Step 3.8 Build ROS1-ROS2 Bridge
The bridge build is based on the readme from the TommyChangUMD Github repo. If something is not working, check the repo's readme, because it is updated constantly.
From the readme you will only need to run the following commands:
cd ~/
git clone https://github.com/TommyChangUMD/ros-humble-ros1-bridge-builder.git
cd ~/ros-humble-ros1-bridge-builder
docker build . -t ros-humble-ros1-bridge-builder
docker run --rm ros-humble-ros1-bridge-builder | tar xvzf -
Then you can delete the builder image:
docker rmi ros-humble-ros1-bridge-builder
If you can't execute docker commands, try running them with sudo.
If docker is not installed, you can install it:
sudo apt install docker.io
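As an alternative to prefixing every docker command with sudo (my suggestion, not from the original readme), you can add your user to the docker group; the change takes effect after you log out and back in:
sudo usermod -aG docker $USER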
To test that the bridge works, launch the ROS1 nodes from Step 2.8 on the Raspberry Pi.
Step 3.9 Launch ros1_bridge On LattePanda
Run the following commands:
source /opt/ros/humble/setup.bash
source ~/ros-humble-ros1-bridge/install/local_setup.bash
export ROS_IP=172.100.0.100
export ROS_MASTER_URI=http://mxlfrbt:11311
ros2 run ros1_bridge dynamic_bridge --bridge-all-1to2-topics
If everything works, there will be no errors, and in the terminal window you will see bridged topics like this:
[INFO] [1717447209.481296013] [ros_bridge]: Passing message from ROS 1 std_msgs/Float64 to ROS 2 std_msgs/msg/Float64 (showing msg only once per type)
[INFO] [1717447209.481600281] [ros_bridge]: Passing message from ROS 1 std_msgs/Float64MultiArray to ROS 2 std_msgs/msg/Float64MultiArray (showing msg only once per type)
[INFO] [1717447209.482872942] [ros_bridge]: Passing message from ROS 1 sensor_msgs/Joy to ROS 2 sensor_msgs/msg/Joy (showing msg only once per type)
[INFO] [1717447209.484723585] [ros_bridge]: Passing message from ROS 1 sensor_msgs/JointState to ROS 2 sensor_msgs/msg/JointState (showing msg only once per type)
[INFO] [1717447209.587996301] [ros_bridge]: Passing message from ROS 1 tf2_msgs/TFMessage to ROS 2 tf2_msgs/msg/TFMessage (showing msg only once per type)
[INFO] [1717447209.588910535] [ros_bridge]: Passing message from ROS 1 nav_msgs/Odometry to ROS 2 nav_msgs/msg/Odometry (showing msg only once per type)
[INFO] [1717447209.589016420] [ros_bridge]: Passing message from ROS 1 tf/tfMessage to ROS 2 tf2_msgs/msg/TFMessage (showing msg only once per type)
created 2to1 bridge for topic '/rosout' with ROS 2 type 'rcl_interfaces/msg/Log' and ROS 1 type 'rosgraph_msgs/Log'
Open a second terminal window on the LattePanda and run:
source /opt/ros/humble/setup.bash
ros2 topic list
The output of the command should look like the listing below; it prints all topics bridged from ROS1 on the Raspberry Pi to ROS2 running on the LattePanda:
ros2 topic list
/battery
/imu
/joint_states
/joy
/mikrik/left_wheel/angle
/mikrik/left_wheel/current_velocity
/mikrik/left_wheel/pid_debug
/mikrik/left_wheel/pwm
/mikrik/left_wheel/target_velocity
/mikrik/right_wheel/angle
/mikrik/right_wheel/current_velocity
/mikrik/right_wheel/pid_debug
/mikrik/right_wheel/pwm
/mikrik/right_wheel/target_velocity
/mobile_mikrik/cmd_vel
/mobile_mikrik/odom
/parameter_events
/rosout
/tf
/tf_static
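As an extra check (my own sketch, not part of the official steps), you can watch the odometry and send a short velocity command from the ROS2 side in terminals where /opt/ros/humble/setup.bash is sourced. The topic names are taken from the listing above, and whether the cmd_vel direction is forwarded depends on the dynamic bridge detecting the matching ROS1 subscriber on the robot:
# Terminal 1: watch odometry coming back through the bridge
ros2 topic echo /mobile_mikrik/odom
# Terminal 2: publish a gentle forward command at 10 Hz, then stop with Ctrl+C
ros2 topic pub --rate 10 /mobile_mikrik/cmd_vel geometry_msgs/msg/Twist '{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'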
Part 4 Visual SLAM Tutorials Based On Intel IP Collab SLAM
Now it is time to add some files to make the Robotics SDK communicate with your robot.
We will base our first visual SLAM tutorial on the CollabSLAM included in the Robotics SDK. You can read more here. CollabSLAM supports a two-camera setup; I may cover it later.
For today we will use only one camera, installed at the front of the robot, for map building, localisation, and navigation.
Collaborative Visual SLAM With FastMapping Enabled
Prerequisites
- Access the LattePanda via VNC, because we will use the rviz GUI.
- Launch the ROS1 nodes on the Raspberry Pi, following Step 2.8. Make sure you can control the robot using the gamepad.
- Launch the ROS1-ROS2 bridge on the LattePanda, following Step 3.9.
- Clone my Github repo with the modified Robotics SDK launch files:
cd ~/
git clone https://github.com/mxlfrbt/mikrik-robotics-sdk.git
Tutorial steps
Step 4.1 Change Default Namespace
Remove the unnecessary 'camera' namespace. Open the file with sudo:
sudo vi /opt/ros/humble/share/realsense2_camera/launch/rs_launch.py
Delete the default value 'camera' of the camera_namespace parameter. Change the line below from:
{'name': 'camera_namespace', 'default': 'camera', 'description': 'namespace for camera'}
to
{'name': 'camera_namespace', 'default': '', 'description': 'namespace for camera'}
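As a quick check that the camera wrapper still starts after this edit (my own suggestion; exact topic names depend on your realsense2_camera version), launch it manually, list its topics in a second terminal, and then stop it with Ctrl+C before moving on:
source /opt/ros/humble/setup.bash
ros2 launch realsense2_camera rs_launch.py
# In a second terminal (also with /opt/ros/humble/setup.bash sourced):
ros2 topic list | grep -i -E 'color|depth'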
Step 4.2 Install Collab-SLAM
Install the default Collab-SLAM tutorial package:
sudo apt-get install ros-humble-cslam-tutorial-fastmapping
Edit the config file to enable the GUI tracker, using sudo:
sudo vi /opt/ros/humble/share/univloc_tracker/config/nav_tracker.yaml
Then find this line (around line 132 of the file):
# Whether to visualize keypoints in a window
gui: false
Set the value to true:
# Whether to visualize keypoints in a window
gui: true
When you launch my script, you will see the keypoints GUI window.
Take the script from the Github folder you cloned before and make it executable:
cd ~/mikrik-robotics-sdk/scripts
sudo chmod u+x rs-cslam-fastmapping.sh
Step 4.4 Launch Collab-SLAM Mapping Script
Then launch the mapping script:
source /opt/ros/humble/setup.bash
./rs-cslam-fastmapping.sh
If everything works well, you will see behaviour like in the video below. Try to move the robot around.
To stop the script, press Ctrl+C.
4.5 Troubleshooting
- If you don't see the TF tree and map, make sure the ROS1-ROS2 bridge is running in a second terminal window on the LattePanda.
- If the bridge is running on the LattePanda, make sure you launched the ROS1 nodes on the Raspberry Pi side. If not, you will see an error:
The output message should tell you that your map file was saved successfully!
It is important to save your map so that you can later run localization and navigation using it.
As you can see, the map is stored in the /tmp folder, so it will be deleted after a system reboot. You may modify the rs-cslam-fastmapping.sh script to save the map in a different folder. Find the Launch Robot Tracker section and change the paths in the line with the command below:
# Launch the robot tracker
ros2 launch univloc_tracker collab_slam_nav.launch.py use_odom:=true server_rviz:=false enable_fast_mapping:=true zmin:=0.4 zmax:=0.8 traj_store_path:=/tmp/ save_traj_folder:=/tmp/ save_map_path:=/tmp/pointcloud_map.msg octree_store_path:=/tmp/octree_map.bin &
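For example (my own sketch; the /home/amr/maps folder is an arbitrary choice, create it first with mkdir -p /home/amr/maps and adjust the user name to yours), the same line with persistent paths could look like this:
ros2 launch univloc_tracker collab_slam_nav.launch.py use_odom:=true server_rviz:=false enable_fast_mapping:=true zmin:=0.4 zmax:=0.8 traj_store_path:=/home/amr/maps/ save_traj_folder:=/home/amr/maps/ save_map_path:=/home/amr/maps/pointcloud_map.msg octree_store_path:=/home/amr/maps/octree_map.bin &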
Part 5 Follow-Me Demo
Let's make the MIKRIK robot follow a person.
My tutorial is based on the Follow-Me tutorial included in the Robotics SDK. You can read it here.
- Access the LattePanda via VNC, because we will use the rviz GUI.
- Launch the ROS1 nodes on the Raspberry Pi, following Step 2.8. Make sure you can control the robot using the gamepad.
- Launch the ROS1-ROS2 bridge on the LattePanda, following Step 3.9.
- Clone my Github repo with the modified Robotics SDK launch files.
- Change the position of the Realsense camera so it can actually see YOU; the robot has to follow a person. Check my photos for reference.
Step 5.1 Install Follow-Me Package
Install the necessary Debian package containing the Follow-Me algorithms:
sudo apt install ros-humble-follow-me-tutorial
Step 5.2 Run Follow-Me Program On MIKRIK
Take the script from the Github folder you cloned before and make it executable:
cd ~/mikrik-robotics-sdk/scripts
sudo chmod u+x mikrik-follow-me.sh
Then launch the Follow-Me script:
source /opt/ros/humble/setup.bash
cd ~/mikrik-robotics-sdk/scripts
./mikrik-follow-me.sh
If everything works well, you will see ADBSCAN running in rviz.
Then the robot will be able to follow you, like in the video below.
You may tweak the settings values described in the official Follow-Me guide.
Tips
You might also install realsense-viewer to check your camera's performance and to update its firmware. Follow the official Realsense guide to install the GUI app for your camera.
Part 6 Localization and Navigation
To learn about localization and navigation using VSLAM, please follow our contributor's guide on Hackster: Build a Realsense robot powered by a NUC (Mikrik project).
Still To Do
I wish that some readers could contribute on the following topics:
- Add lidar mapping, localisation, and navigation using SLAM Toolbox.
- Add lidar and Realsense IMU readings to CollabSLAM.
I have created issues on Github.
Feel free to leave comments about the project, and please join the MIKRIK open-source robotics development.