This is the first part of a write-up about the design and build of a general helper robot named Bellamy Crashbot. Bellamy is a pretty complicated project and work is ongoing, so it seems sensible to break the write-up into chunks. This first chunk covers the project planning and the initial bench assembly. I came at this as a relative beginner to robotics, so that's the target audience. Be aware that even a robotics beginner will need intermediate-level confidence in hardware and software engineering. Intermediate-level skill is recommended as well, but if you've only got the confidence, you should be fine.
Project Purpose & Goals
The broadest purpose of this project was to learn how to build a cool robot, and from there my friends and I began brainstorming what we'd like our robot to be capable of. We wanted it to be mobile. We wanted it to be able to observe its environment and navigate it. And we wanted it to be large enough to interact with humans. From there we began elaborating on its purpose and functions.
A primary function would be observing our makerspace to identify whether anyone was present and to recognize certain states, such as whether the lights were turned off. A secondary function (or primary, depending on who you ask) was for this robot to invite anthropomorphisation: it would seek interactions with humans, ideally communicating bidirectionally through speech.
As you go through, you may find more information on our Confluence Wiki.
General Vision
With a purpose and key functions identified, we moved on to the form factor. We decided a simple wheeled base was the most practical mobility solution. A mast would allow the robot to see and interact from about waist or table height while keeping the center of mass low.
For this wheeled base we decided to use a Roomba, for several reasons. First, it saved us the trouble of buying and building a wheeled base. The Roomba would also offer a power system. Roombas have onboard batteries, a charging circuit, and a docking routine that allows them to return to their dock to charge without user intervention.
For observation, we settled on a Kinect depth camera. First, it was available to us. It also provided a sensible mapping sensor and visual input for object recognition. We considered adding lidar. Lidar sensors are very popular for simple navigating robots for good reason: there's a lot of software and instructional material available, and they provide great awareness of obstacles in a 2D plane around the robot. But the camera can offer much of the same benefit, so we set out to use the camera alone for sensing the environment and to revisit that choice if necessary.
For computation we chose an NVIDIA Jetson Nano Developer Kit single-board computer. This is a widely popular computer for running robots. It's small and designed to run Ubuntu. With its GPIO pins and hacker-friendly design and documentation, it occupies a role similar to a high-end Raspberry Pi, with a CPU and GPU optimized for AI tasks. Looking to crowd consensus is often my first step when getting into a new field, and I'd seen the incomparable James Bruton use a Jetson Nano running ROS for his general helper robot.
And for software we chose ROS (Robot Operating System) as the central platform. This was decided for reasons similar to those behind the Jetson Nano. I honestly don't really know what the alternative to ROS would be for a project like this.
As you run through the following hands-on setup tasks, keep the overall vision in mind, and discuss far-fetched reach goals with friends and teammates. You won't likely make everything you envision, but you likely won't make any more than you envision, so try to brainstorm without limiting yourself. If you're looking for inspiration, check out some of these incredible projects:
- Nox - A House Wandering Robot (ROS)
- Human-Following Robot with Kinect
- Autonomous UV Robot with SLAM
- Build your very first ROS Robot
Bench Assembly
Get the primary parts and connect them on a clean bench. Here are the recommended connections:
- Keyboard, mouse, and monitor to Jetson Nano
This is optional if you're comfortable setting up the Nano in headless mode, but it's recommended.
- USB WiFi module to Jetson Nano
The Jetson Nano doesn't have a built-in WiFi module, so connecting one to a USB port is necessary.
- Jetson Nano to 5V power
The Nano can be powered through a USB micro-B port or a barrel-jack connector. It requires, at minimum, a stable 5V 2.5A supply. Many USB power supplies are not reliable enough, so be sure to get one up to the challenge. Eventually, this power will need to be delivered from the Roomba's on-board battery using a step-down voltage converter.
- Jetson Nano to Roomba
The Jetson Nano connects to the Roomba over a USB-to-serial cable. Link 1; Link 2
- Roomba to charger
The Roomba must have sufficient power to connect over serial, so make sure it's charged. I recommend setting the Roomba on its dock and, while you're at it, just keeping it there.
- OPTIONAL: Kinect or other sensor to 12V power
Kinect sensors require a 12V power supply, delivered over a special custom cable. This cable splits into a USB cable for data and a power cable that connects to an AC adapter, which plugs into a wall outlet. This too will eventually need to be supplied from the Roomba's battery using a step-down voltage converter.
- OPTIONAL: Kinect or other sensor to Jetson Nano
The Kinect's cable splits into a USB plug for sending data. Plug this into one of the Jetson Nano's USB ports.
Software Installation and Setup
The major tasks include the following.
1. Set up Ubuntu 18.04 on the Jetson Nano: Instructions are provided by NVIDIA
Burn an Ubuntu disk image onto an SD card and boot it up on the Nano. You'll probably want to connect a mouse, keyboard, and monitor at this step. Then use those to perform step two:
2. Set up a VNC server: Instructions by Brian Hogan
Set up a VNC server and enable SSH. If the Nano connects to WiFi automatically, this will allow you to power on the computer in headless mode and develop your robot from a laptop without tethering keyboards and displays to it.
3. Install ROS (we installed ROS Melodic): Instructions from ROS Wiki
Follow the instructions on the ROS Wiki to download and configure ROS. At the end of this step you should be able to run the command roscore in a terminal, either over SSH or over a VNC viewer, and see output declaring that the ROS core has started successfully.
4. Install the Roomba drivers: Instructions by AutonomyLab
This ROS package will allow the Nano to connect to the Roomba over a serial connection across a USB-to-Mini-DIN-8 cable (or possibly USB-to-USB micro-B on newer models). Once complete, you'll be able to start the create_bringup package using the command "roslaunch create_bringup create_bringup.launch", assuming the Nano is connected to the Roomba over serial. From here, we can read the battery level, send movement commands, read odometry data, and initiate docking and undocking; a scripted battery check is sketched just after this list.
5. OPTIONAL: Start up the Kinect driver: Instructions from ROS Wiki
The Kinect requires the Freenect driver or an alternative. ROS should come with the Freenect package installed. We can also use the program RTAB-Map to see the Kinect camera's output. Run the commands in the link above to confirm the camera is working.
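If you'd rather sanity-check the Roomba driver from a script than from the command line, here's a minimal rospy sketch that watches the battery topic used later in this write-up. It assumes the charge ratio arrives as a std_msgs/Float32; if your driver version publishes something different, check with rostopic info first.

```python
#!/usr/bin/env python
# Minimal battery monitor sketch. Assumes create_bringup is already
# running and that battery/charge_ratio is a std_msgs/Float32 --
# verify with `rostopic info battery/charge_ratio` if unsure.
import rospy
from std_msgs.msg import Float32

def on_charge(msg):
    # msg.data is the charge as a fraction of full capacity (0.0-1.0)
    rospy.loginfo("Battery at %.0f%%", msg.data * 100.0)

if __name__ == "__main__":
    rospy.init_node("battery_monitor")
    rospy.Subscriber("battery/charge_ratio", Float32, on_charge)
    rospy.spin()  # keep the node alive until Ctrl-C
```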
Take our first steps to confirm it's working
At this point you should be able to plug everything in and have a lot of stuff light up. Log in to Ubuntu on the Nano and run the launch command to connect ROS to the Roomba.
roslaunch create_bringup create_bringup.launch
Then, in another terminal, run
rostopic list
to see all the active topics. Run
rostopic echo -n 1 battery/charge_ratio
to see what the battery level is.
Run the following command to see the wheels turn for one second. Consider placing the Roomba on a block to avoid it driving off a table.
rostopic pub -1 /cmd_vel geometry_msgs/Twist -- '[0.10, 0.0, 0.0]' '[0.0, 0.0, 1.8]'
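If you'd rather send that command from a script, here's a minimal rospy sketch of the same one-shot nudge. How quickly the wheels stop after the message depends on the driver's command timeout, so treat the timing comment as an assumption.

```python
#!/usr/bin/env python
# One-shot drive sketch: a rospy version of the rostopic pub command
# above. Assumes the create driver is subscribed to /cmd_vel.
import rospy
from geometry_msgs.msg import Twist

if __name__ == "__main__":
    rospy.init_node("nudge_forward")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect to the driver

    twist = Twist()
    twist.linear.x = 0.10   # m/s forward
    twist.angular.z = 1.8   # rad/s counter-clockwise
    pub.publish(twist)
    # The driver stops the wheels shortly after commands stop arriving,
    # so a single message produces only a brief nudge.
```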
You can also control the wheels by running the following interactive program through a command terminal:
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
You can check that the Kinect is working by running rtabmap.
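If you want a lighter-weight check than a full rtabmap session, a small rospy sketch can confirm that frames are arriving. The topic name below is an assumption based on typical Freenect defaults; confirm yours with rostopic list.

```python
#!/usr/bin/env python
# Kinect sanity check: log the size of incoming color frames.
# /camera/rgb/image_color is the usual freenect topic name, but
# confirm it with `rostopic list` on your own setup.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    rospy.loginfo("Got a %dx%d frame (encoding: %s)",
                  msg.width, msg.height, msg.encoding)

if __name__ == "__main__":
    rospy.init_node("kinect_check")
    rospy.Subscriber("/camera/rgb/image_color", Image, on_image)
    rospy.spin()
```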
Conclusion
You should now have the key parts to build a robot that drives around and looks at its environment (in color, depth map, and IR!). Next, we'll need to assemble something capable of operating untethered from wall power and stationary peripheral devices.