Social Impact
There are several use cases that call for a vehicle that can map unknown environments without any human assistance. Some of these use cases are:
- In mines that humans cannot enter because of toxic gases like sulphur dioxide and methane.
- During natural calamities like floods and famine, such bots are of great help: they can transport materials like food and equipment to those in need.
So, we will be building one such robot here. But why use the Jetson Nano?
Why Jetson Nano?
Firstly, when building such autonomous vehicles, processing power is the primary requirement. We have a lot of options for building a prototype bot, but the three most widely recognized choices would be:
- Raspberry Pi
- Jetson Nano
- Google Coral
But the Pi has limited processing power (mainly on the CPU side), and the Coral is quite costly for the performance we can leverage from it.
So the obvious best choice is the Jetson Nano! The primary reasons for this choice are:
- Pi compatible GPIO, so very easy to connect external peripherals
- Dedicated GPU unit
- 4 GB massive RAM!!
- Raspberry Pi Camera is also compatible.
- CUDA Support too, which is a cherry on top.
- Runs on a full blown Linux Operating System (Ubuntu)
So, let's get started with the build!
Step-1 Gathering the Components
The required components are already listed above, but we will have a look at them again along with their photographs. The components are:
- Jetson Nano Developer Kit. This great and versatile development board is the brain of our whole system.
- SlamTec RP-LIDAR A1. It is a 2D LiDAR sensor which we will use with the SLAM Algorithm to generate and save maps of the spaces that we explore.
- Pi Camera v2.0, any Sony IMX219-based camera module, or any USB web camera
- PCA9685 servo driver board. This module will be used to control the vehicle's throttle and steering over I2C (a quick sketch of driving it from Python follows the parts list below). We will be using an RC car for this project.
These are the main components. Apart from these, there are several important requirements too:
- A 1/16 or larger Exceed RC Car.
- A Laser cut Wooden plate to house the components on the car
- A power bank to power the Nano, the LIDAR sensor, and the servo shield.
- A NiMH or Li-Po battery to power the Car.
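Before wiring everything permanently, it can help to verify that the Nano can talk to the PCA9685 board over I2C. Below is a minimal Python sketch (my own addition, not part of the original build) using the Adafruit_PCA9685 library that Donkey Car itself relies on; the bus number, channel numbers, and pulse values are assumptions you must calibrate for your own car.
# pca9685_smoke_test.py -- quick check that the servo shield responds over I2C.
# Assumes `pip install Adafruit_PCA9685`, steering on channel 1 and the ESC
# (throttle) on channel 0 -- adjust these to match your wiring.
import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685(busnum=1)   # bus 1 is the Nano's header I2C bus
pwm.set_pwm_freq(60)                       # typical RC servo/ESC frequency

STEERING_CH, THROTTLE_CH = 1, 0
NEUTRAL, LEFT, RIGHT = 380, 290, 470       # 12-bit tick values; calibrate these

pwm.set_pwm(THROTTLE_CH, 0, NEUTRAL)       # keep the ESC armed but stopped
for ticks in (LEFT, NEUTRAL, RIGHT, NEUTRAL):
    pwm.set_pwm(STEERING_CH, 0, ticks)     # sweep the steering servo
    time.sleep(0.5)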
So, once everything is gathered, let's start with the build.
Step-2 Hardware Build
We will now assemble our robot.
- First, unbox the RC car and remove the top cover, which is held on by steel pins. Please preserve the pins, as they will be useful later for fixing our custom base plate.
- With the top cover of the car off, cut away the plastic retainers holding the wires that go into the car's ESC (Electronic Speed Controller) so that we can route them out and connect them to the servo shield.
- Now take a wooden plate and get to laser cutting! Note that you can also 3D print this plate. The final wooden plate should look something like this:
- Now that we have it, just fix it on the car with the help of the steel pins you took out earlier.
- Next, we can 3D print a camera holder for the car. Due to a shortage of time, I didn't do this. But to give you an idea of what it might look like, here is a cage I made earlier for a Raspberry Pi based car.
- Finally, mount all the components on the car, attach the power bank with double-sided tape, and make all the necessary connections.
This wraps up the Hardware setup. So, now let us move on to the software part of the project.
Step-3 Software Setup
Now comes the interesting and rather tedious part: the software side of the project. This setup is divided into several parts, so let us get started.
i) OS Setup on the Nano
First things first, when we get the Nano out of the box, we need to install an Operating System. Just follow the tutorial linked below.
Getting Started with Jetson Nano
ii) Install ROS on the Nano
This is the tricky part. There are a number of tutorials on installing ROS on Linux, but most of them are outdated, so I had to spend quite a lot of time figuring this one out.
Some important things to note:
- Firstly, most tutorials cover the installation of ROS Kinetic, but it is outdated and no longer straightforward to install on this setup. So, we will go with the ROS Melodic full desktop version instead.
- So, just follow the tutorial below, but replace kinetic with melodic wherever it appears.
ROS Installation on Jetson Nano
If everything goes right, you will see that the catkin_ws folder has 3 subfolders and that you can compile successfully by running the catkin_make command inside it.
Moreover, you can run the command "roscore" in a terminal to start the ROS master.
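As an extra sanity check (my own addition, not part of the linked tutorial), you can publish a test topic from Python to confirm that rospy works in your environment. With roscore running in another terminal, save and run something like the script below, then watch it with rostopic echo /chatter:
#!/usr/bin/env python
# melodic_check.py -- publishes a string once a second to verify ROS + rospy.
import rospy
from std_msgs.msg import String

rospy.init_node('melodic_check')
pub = rospy.Publisher('chatter', String, queue_size=1)
rate = rospy.Rate(1)  # 1 Hz
while not rospy.is_shutdown():
    pub.publish("hello from the Nano")
    rate.sleep()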
iii) Setup software to run our Car
Now we will have to install all the dependencies required to run the car. For this, we will use a popular open source project, the "Donkey Car" project, which has been ported to the Jetson Nano. So, just follow the steps below to set it up:
1. Install Dependencies
SSH into your vehicle. Use the terminal on Ubuntu or Mac, or PuTTY on Windows.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential python3 python3-dev python3-pip libhdf5-serial-dev hdf5-tools nano ntp
Optionally, you can install the RPi.GPIO clone for the Jetson Nano from here. This is not required for the default setup, but it can be useful if you are using LEDs or other GPIO-driven devices.
2. Setup Virtual Env
pip3 install virtualenv
python3 -m virtualenv -p python3 env --system-site-packages
echo "source env/bin/activate" >> ~/.bashrc
source ~/.bashrc
3. Install OpenCV
To install OpenCV on the Jetson Nano, you need to build it from source. Building OpenCV from source is going to take some time, so buckle up. If you get stuck, here is another great resource which will help you compile OpenCV.
Note: In some cases, Python OpenCV may already be installed in your disc image. If the file exists, you can optionally copy it into your environment rather than build from source. Nvidia has said they will drop support for this, so longer term we will probably be building it ourselves. If the following works:
mkdir ~/mycar
cp /usr/lib/python3.6/dist-packages/cv2.cpython-36m-aarch64-linux-gnu.so ~/mycar/
cd ~/mycar
python -c "import cv2"
Then you have a working version and can skip this portion of the guide. However, following the swapfile portion of this guide has made performance more predictable and solves memory thrashing.
The first step in building OpenCV is to define swap space on the Jetson Nano. The Jetson Nano has 4 GB of RAM, which is not sufficient to build OpenCV from source, so we need to define swap space on the Nano to prevent memory thrashing.
# Allocates 4G of additional swap space at /var/swapfile
sudo fallocate -l 4G /var/swapfile
# Permissions
sudo chmod 600 /var/swapfile
# Make swap space
sudo mkswap /var/swapfile
# Turn on swap
sudo swapon /var/swapfile
# Automount swap space on reboot
sudo bash -c 'echo "/var/swapfile swap swap defaults 0 0" >> /etc/fstab'
# Reboot
sudo reboot
Now you should have enough swap space to build OpenCV. Let's set up the Jetson Nano with the prerequisites for building OpenCV.
# Update
sudo apt-get update
sudo apt-get upgrade
# Pre-requisites
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python3-dev
Now you should have all the prerequisites you need. So, let's go ahead and download the source code for OpenCV.
# Create a directory for opencv
mkdir -p projects/cv2
cd projects/cv2
# Download sources
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.1.0.zip
# Unzip
unzip opencv.zip
unzip opencv_contrib.zip
# Rename
mv opencv-4.1.0 opencv
mv opencv_contrib-4.1.0 opencv_contrib
Let's get our virtual environment (env) ready for OpenCV.
# Install Numpy
pip install numpy==1.16.4
Now let's set up CMake correctly so that it generates the correct OpenCV bindings for our virtual environment.
# Create a build directory
cd projects/cv2/opencv
mkdir build
cd build
# Setup CMake. Note: the OPENCV_EXTRA_MODULES_PATH must match where you
# unzipped opencv_contrib, and PYTHON_EXECUTABLE must be your virtual
# environment's Python (the result of `which python`). Comments cannot be
# placed between the continued lines of the command, so they are collected here.
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/projects/cv2/opencv_contrib/modules \
-D PYTHON_EXECUTABLE=~/env/bin/python \
-D BUILD_EXAMPLES=ON ..
The cmake command should show a summary of the configuration. Make sure that the Interpreter is set to the Python executable associated with your virtualenv. Note: there are several paths in the CMake setup; make sure they match where you downloaded and saved the OpenCV source.
To compile the code, issue the following command from the build folder.
make -j2
This will take a while. Go grab a coffee, or watch a movie. Once the compilation is complete, you are almost done. Only a few more steps to go.
# Install OpenCV
sudo make install
sudo ldconfig
The final step is to correctly link the built OpenCV native library into your virtualenv.
The native library should now be installed in a location that looks like /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.cpython-36m-xxx-linux-gnu.so.
# Go to the folder where OpenCV's native library is built
cd /usr/local/lib/python3.6/site-packages/cv2/python-3.6
# Rename
mv cv2.cpython-36m-xxx-linux-gnu.so cv2.so
# Go to your virtual environments site-packages folder
cd ~/env/lib/python3.6/site-packages/
# Symlink the native library
ln -s /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so cv2.so
Congratulations! You are now done compiling OpenCV from source.
A quick check to see if you did everything correctly is
ls -al
You should see something that looks like
total 48
drwxr-xr-x 10 user user 4096 Jun 16 13:03 .
drwxr-xr-x 5 user user 4096 Jun 16 07:46 ..
lrwxrwxrwx 1 user user 60 Jun 16 13:03 cv2.so -> /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so
-rw-r--r-- 1 user user 126 Jun 16 07:46 easy_install.py
drwxr-xr-x 5 user user 4096 Jun 16 07:47 pip
drwxr-xr-x 2 user user 4096 Jun 16 07:47 pip-19.1.1.dist-info
drwxr-xr-x 5 user user 4096 Jun 16 07:46 pkg_resources
drwxr-xr-x 2 user user 4096 Jun 16 07:46 __pycache__
drwxr-xr-x 6 user user 4096 Jun 16 07:46 setuptools
drwxr-xr-x 2 user user 4096 Jun 16 07:46 setuptools-41.0.1.dist-info
drwxr-xr-x 4 user user 4096 Jun 16 07:47 wheel
drwxr-xr-x 2 user user 4096 Jun 16 07:47 wheel-0.33.4.dist-info
To test the OpenCV installation, run python and do the following:
import cv2
# Should print 4.1.0
print(cv2.__version__)
4. Install Donkey car Python Code
- Change to a directory you would like to use as the head of your projects.
cd ~/projects
- Get the latest donkeycar from Github.
git clone https://github.com/autorope/donkeycar
cd donkeycar
git checkout master
pip install -e .[nano]
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3
That is all for setting up the car platform. The only thing left is to create a new application to run our NanoBot. Just follow this tutorial and you will be good to go.
Now we will add the components that we will be using with ROS, namely the RPLidar A1, which will be used for simultaneous localization and mapping (SLAM).
iv) Setting up the Rp-Lidar A1
Now that everything else is done, the only setup left is for our LIDAR sensor. First, let's learn a little more about the device we are using:
About RPLIDAR
RPLIDAR is a low-cost 2D LIDAR solution developed by the RoboPeak team at SlamTec. It can scan a 360° environment within a 6-meter radius. The output of the RPLIDAR is very suitable for building maps, doing SLAM, or building 3D models.
You can find more information about the rplidar on the SlamTec homepage (http://www.slamtec.com/en).
How to build the rplidar ros package
- Clone this project into your catkin workspace's src folder
- Run catkin_make to build rplidarNode and rplidarNodeClient
Check the permissions of the rplidar's serial port:
ls -l /dev |grep ttyUSB
Add write permission (for example, for /dev/ttyUSB0):
sudo chmod 666 /dev/ttyUSB0
There are two ways to run the rplidar ros package.
I. Run the rplidar node and view the result in rviz:
roslaunch rplidar_ros view_rplidar.launch
You should see the rplidar's scan result in rviz.
II. Run the rplidar node and view the result using the test application:
roslaunch rplidar_ros rplidar.launch
rosrun rplidar_ros rplidarNodeClient
You should see the rplidar's scan result in the console.
How to remap the USB serial port name
You may need to change the USB device's permissions (read and write) and, beyond that, remap the port to a fixed name. To install the USB port remap rules, run:
./scripts/create_udev_rules.sh
Check the remap using the following command:
ls -l /dev | grep ttyUSB
Once the USB port is remapped, you can change the serial_port value in the launch file:
<param name="serial_port" type="string" value="/dev/rplidar"/>
RPLidar frame
The RPLidar frame must be broadcast according to the picture shown in rplidar-frame.png.
How to install the rplidar on your robot
The rplidar rotates clockwise, and the first range reading comes from the front (the tail with the cable). Please keep the orientation of the LIDAR sensor in mind while installing it onto your robot. Here is what the axes look like:
Now that this is done, just run the sample launch file "view_rplidar" and view the scan results from the "/scan" topic in rviz. It will look something like this:
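Alternatively, if you are working over SSH without a display, a small rospy node can listen to /scan and print the closest obstacle distance. This is just an illustrative sketch (assuming rplidar.launch is already running):
#!/usr/bin/env python
# scan_monitor.py -- print the nearest obstacle seen by the RPLidar on /scan.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # Drop invalid returns (the RPLidar reports 0/inf for missed measurements)
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("Closest obstacle: %.2f m", min(valid))

rospy.init_node('scan_monitor')
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()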
Now, to add SLAM to our application, there are several algorithms available, but we choose Hector SLAM for our project. So, let us move on:
Hector Slam Setup for Jetson Nano
We cannot use the upstream GitHub repo for Hector SLAM on the Nano as-is, like we did for the rplidar. We need to make some changes to the Hector SLAM files to get it up and running. First, here are the commands you need to run:
cd catkin_ws/src/
git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam
Now, after it has been cloned, we will need to make the following changes:
- In catkin_ws/src/hector_slam/hector_mapping/launch/mapping_default.launch, replace the second-to-last line with
<node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link laser 100" />
the third line with
<arg name="base_frame" default="base_link"/>
and the fourth line with
<arg name="odom_frame" default="base_link"/>
- In catkin_ws/src/hector_slam/hector_slam_launch/launch/tutorial.launch, replace the third line with
<param name="/use_sim_time" value="false"/>
Now that these changes are done, we can go ahead and run the following:
cd ..
catkin_make
Now, if everything goes as planned, this will compile successfully, which means we are ready to run Hector SLAM on our Nano. Follow these steps to do so (the first three were already covered above):
- Install the ROS full desktop version (the upstream instructions were tested on Kinetic, but we installed Melodic above): http://wiki.ros.org/kinetic/Installation/Ubuntu
- Create a catkin workspace: http://wiki.ros.org/ROS/Tutorials/CreatingPackage
- Clone this repository into your catkin workspace
- In your catkin workspace, run source devel/setup.bash
- Run chmod 666 /dev/ttyUSB0 (or whichever serial path your lidar uses)
- Run roslaunch rplidar_ros rplidar.launch
- Run roslaunch hector_slam_launch tutorial.launch
- RVIZ should open up with the SLAM data
The RVIZ SLAM data will look something like this:
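If you are running headless and cannot open rviz, here is a small hedged check (my own addition) that waits for one occupancy grid on the /map topic published by hector_mapping and prints its size, confirming that a map is actually being built:
#!/usr/bin/env python
# map_probe.py -- confirm Hector SLAM is publishing a map without needing rviz.
import rospy
from nav_msgs.msg import OccupancyGrid

rospy.init_node('map_probe')
grid = rospy.wait_for_message('/map', OccupancyGrid, timeout=30)
info = grid.info
rospy.loginfo("Map is %dx%d cells at %.3f m/cell",
              info.width, info.height, info.resolution)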
So, the SLAM output can be monitored from the SSH session. Moreover, the camera data will also be visible on a web-based controller for the car. But first, let's learn some more about cameras on the Jetson Nano:
More about supported cameras on the Nano
The bad news is that OmniVision (OV) sensor cameras are not supported by the Nano, as there are no drivers for that sensor. So, we need a camera module based on the Sony IMX219 sensor, or a good quality webcam, to run on the Nano. Please note that the fps readings will vary for each of the supported camera types.
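For reference, a CSI camera like the Pi Camera v2 (IMX219) is usually opened on the Nano through a GStreamer pipeline, while a USB webcam can simply use device index 0. The snippet below is a hedged sketch: it assumes your OpenCV build has GStreamer support (add -D WITH_GSTREAMER=ON to the CMake step above if it does not), and the resolution and framerate values are just examples.
# csi_camera_check.py -- grab one frame from a CSI (IMX219) camera via GStreamer.
import cv2

GST = ("nvarguscamerasrc ! "
       "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
       "nvvidconv ! video/x-raw, format=BGRx ! "
       "videoconvert ! video/x-raw, format=BGR ! appsink")

cap = cv2.VideoCapture(GST, cv2.CAP_GSTREAMER)  # use cv2.VideoCapture(0) for a USB webcam
ok, frame = cap.read()
print("Frame captured:", ok, frame.shape if ok else None)
cap.release()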
So, now let us move on to the running part of the project.
Running the Donkey Car Application
The web controller will be available on the local network at the following address:
(The IP address of your Nano):8887
It will start once you run the following in your Nano terminal:
#first get into the donkey car project you created
cd ~/mycar
python manage.py drive
Taking a look at the manage.py file, it ties together the various Donkey Car parts that can be used in our project. Different cameras are supported on the Nano, but there is no Pi Camera library for the Nano, so just comment out the PiCamera part. For some reason, I was still getting errors while running the code, so I had to debug it further.
It turns out that the camera I was using (a Quantum webcam) was not detected by the code directly, even though the code supports V4L cameras. So I had to remove all the other camera options and keep only the V4L one. But there are still some additional steps that need to be performed.
First, we need to install the v4l2capture Python library. Unfortunately, you cannot just install it via pip or apt, so follow the steps below by typing each command in order in your terminal:
git clone https://github.com/atareao/python3-v4l2capture
sudo apt-get install libv4l-dev
cd python3-v4l2capture
python setup.py build
pip install -e .
If everything goes fine, you can run any of the example programs provided in the v4l2capture folder. Just type the following while inside the "python3-v4l2capture" folder:
python3 capture_picture.py
If everything goes alright, the manage.py file will run successfully and you can access the web controller from any device connected to the local network. For some reason, my Quantum web camera was not detected by the application, so I couldn't get my camera frame on the web control screen. But the code I am uploading has been tested with a CSI camera, the Pi Camera v2.1, and a Logitech C920 webcam, so if you have any of these, your code will work just fine. As a workaround, I took the camera frames directly from a Python program and displayed them on the laptop via SSH. This way, I can control my car from a distance without any difficulties.
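For completeness, here is roughly what that workaround looked like: a minimal sketch assuming the webcam shows up as /dev/video0 and that you connected with ssh -X so the window is forwarded to your laptop.
# camera_view.py -- stream webcam frames to a window forwarded over `ssh -X`.
import cv2

cap = cv2.VideoCapture(0)                   # adjust the index if you have several cameras
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("NanoBot camera", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press q to quit
        break
cap.release()
cv2.destroyAllWindows()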
You can see this in the Demo Video posted below. The demonstration will show you how I map my own house with the NanoBot!
AI and Machine Learning on the Edge
Now, where does the machine learning come into play? It's here! The fun part starts now.
We will train an autopilot for our car. Once trained on the images captured while driving the track, the car will go through the whole track autonomously, without any human assistance. Cool, right?
Moreover, as the Jetson Nano is powerful enough to train a model, we can leverage its power to train our model right on the edge and then run it with TensorRT!
To do that, we first need to set up TensorRT on our Nano and also make some changes to the Donkey Car project's config file. So, let's get on with it:
Setup TensorRT on your Jetson Nano
- Set up some environment variables so nvcc is on $PATH. Add the following lines to your ~/.bashrc file.
# Add this to your .bashrc file
export CUDA_HOME=/usr/local/cuda
# Adds the CUDA compiler to the PATH
export PATH=$CUDA_HOME/bin:$PATH
# Adds the libraries
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
- Test the changes to your .bashrc.
source ~/.bashrc
nvcc --version
You should see something like:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on ...
Cuda compilation tools, release 10.0, Vxxxxx
- Switch to your virtualenv and install PyCUDA (a quick GPU check follows below).
# This takes a while.
pip install pycuda
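A quick way to confirm PyCUDA can see the Nano's GPU (my own addition, not part of the original guide):
# pycuda_check.py -- list the GPU that PyCUDA detects.
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)
print("Found GPU: %s with %.1f GB of memory"
      % (dev.name(), dev.total_memory() / (1024.0 ** 3)))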
- After this you will also need to set up PYTHONPATH such that your dist-packages are included as part of your virtualenv. Add this to your .bashrc. This needs to be done because the Python bindings to tensorrt are available in dist-packages, and this folder is usually not visible to your virtualenv. To make them visible, we add it to PYTHONPATH.
export PYTHONPATH=/usr/lib/python3.6/dist-packages:$PYTHONPATH
- Test this change by switching to your virtualenv and importing tensorrt.
> import tensorrt as trt
> # This import should succeed
Now, Train, Freeze and Export your model to TensorRT format (uff)
After you train the linear model, you end up with a file with a .h5 extension.
# You end up with a Linear.h5 in the models folder
python manage.py train --model=./models/Linear.h5 --tub=./data/tub_1_19-06-29,...
# Freeze model using freeze_model.py in donkeycar/scripts
# The frozen model is stored as protocol buffers.
# This command also exports some metadata about the model which is saved in ./models/Linear.metadata
python freeze_model.py --model=./models/Linear.h5 --output=./models/Linear.pb
# Convert the frozen model to UFF. The command below creates a file ./models/Linear.uff
convert-to-uff ./models/Linear.pb
Now copy the converted uff model and the metadata to your Jetson Nano.
- In myconfig.py, pick the model type as tensorrt_linear:
DEFAULT_MODEL_TYPE = 'tensorrt_linear'
- Finally you can do
# After you scp your `uff` model to the Nano
python manage.py drive --model=./models/Linear.uff
Training Process (2-3 Hours)
The above part is all good, but unfortunately I did not have a high-amperage barrel jack power supply, so every time I tried to train a model using the above procedure, the Jetson shut itself down due to insufficient power. So, I used the procedure below to get the autopilot up and running:
1) Collect Data
Make sure you collect good data.
- Practice driving around the track a couple times.
- When you're confident you can drive 10 laps without a mistake, restart the python manage.py process to create a new tub session. Press Start Recording if using the web controller. The joystick will auto-record with any non-zero throttle.
- If you crash or run off the track, press Stop Car immediately to stop recording. If you are using a joystick, tap the Triangle button to erase the last 5 seconds of records.
- After you've collected 10-20 laps of good data (5-20k images), you can stop your car with Ctrl-C in the ssh session for your car.
- The data you've collected is in the data folder, in the most recent tub folder.
2) Transfer data from your car to your computer
The Jetson Nano is reasonably powerful, but still quite slow for training.
In a new terminal session on your host PC, use rsync to copy your car's data folder from the Nano.
rsync -rv --progress --partial <username>@<your_nano_ip_address>:~/mycar/data/ ~/mycar/data/
3) Train a model
- In the same terminal, you can now run the training script on the latest tub by passing the path to that tub as an argument. You can optionally pass path masks, such as ./data/* or ./data/tub_?_17-08-28, to gather multiple tubs. For example:
python ~/mycar/manage.py train --tub <tub folder names comma separated> --model ./models/mypilot.h5
Optionally you can pass no arguments for the tub, and then all tubs will be used in the default data dir.
python ~/mycar/manage.py train --model ~/mycar/models/mypilot.h5
- You can create different model types with the --type argument during training. You may also choose to change the default model type via DEFAULT_MODEL_TYPE in myconfig.py. When specifying a new model type, be sure to provide that type when running the model, or when using the model in other tools like plotting or profiling. For more information on the different model types, look here for Keras Parts.
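For example, the relevant override in myconfig.py might look like this ('linear' here is only a placeholder for whichever type you actually train):
# In ~/mycar/myconfig.py
DEFAULT_MODEL_TYPE = 'linear'   # switched to 'tensorrt_linear' for the TensorRT pilot above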
4) Copy model back to car
In the previous step we managed to get a model trained on the data. Now it is time to move the model back to the Nano, so we can test whether it will drive itself.
- Use rsync again to move your trained model pilot back to your car:
rsync -rv --progress --partial ~/mycar/models/ <username>@<your_nano_ip_address>:~/mycar/models/
- Place the car on the track so that it is ready to drive.
- Now you can start your car again and pass it your model to drive:
python manage.py drive --model ~/mycar/models/mypilot.h5
- The car should start to drive on its own. Congratulations! This wraps up the machine learning part of our project.
If you have come this far, you will now have a complete NanoBot!
Congratulations!
Now, let's have a look at the 3 demo videos. The first one shows me mapping my own house, the second shows how the autopilot operates and how the car can be stopped immediately when an obstacle appears in front of it, and the third is just the video taken from the camera mounted on the bot itself. Please note that, due to a shortage of time, I could not collect good enough data with the webcam I had, so the autonomous driving demo uses an older model on the same bot, powered by the Pi. But the concept is the same.
Here are some images of the final project:
Now, after everything is set up, you will have 4 terminals open on the Nano to run all the required processes. Here are the commands you need to run to get the NanoBot up and running:
# ----------- Terminal 1 --------------
cd ~/mycar #mycar is the name of the application you created
python manage.py drive #do not forget to download and use the modified code
#--------------------------------------
# ----------- Terminal 2 --------------
roscore #start the ROS Melodic service on the Nano
#--------------------------------------
# ----------- Terminal 3 --------------
sudo chmod 666 /dev/ttyUSBx #give read write permissions to the USB port of Lidar
roslaunch rplidar_ros rplidar.launch #run the rplidar launch file
#--------------------------------------
# ----------- Terminal 4 --------------
roslaunch hector_slam_launch tutorial.launch #launch the hector slam application
#--------------------------------------
Video Demonstrations
Future Plans
Now, I would like to make a number of additions to this project that would make it a lot better, but I am under a bit of a time constraint here. Some of the additions I would love to make:
- Advanced ROS implementation: instead of using the Donkey Car project for driving the car, I would like the whole project to run on ROS. Currently, I have developed a package that lets you control the car using the keyboard of your host, which also runs a ROS instance. I will add further parts as I develop them.
- Obstacle avoidance using the onboard LIDAR sensor: there is no mature Python support for the LIDAR sensor; only the C++ development is at an advanced stage. So, I would like to develop code that allows the bot to detect and avoid obstacles in its path. Currently, obstacles are detected by the camera.
- Addition of IMU and GPS via ROS: I am also working on application-specific ROS packages for the MPU6050 IMU and the u-blox NEO-6M GPS module. This will allow me to add a "GPS-guided" functionality to my bot. I have already implemented this on a rover I developed using the Arduino Mega; now I want to add the same to ROS.
I would like to give a special mention to the Donkey Car project creators! It helped me a lot and made my life a lot easier.
So, that is all for this project. I hope you like the project documentation. Feel free to ask about any issues you face in the comments or via a personal message. Also, drop a like if you appreciate the hard work I put into this project!
Adios!