I'm on a mission, and I want you to join me. Since I started working with robotics two short years ago, one thing has become very clear to me: the field is somewhat lacking in open source solutions for some of its key problems. From data management to fleet management, cloud integrations to code deployment, the open source robotics ecosystem lags behind other technology areas in modernization.
In this project, I decided to tackle one of the key universal problems for mobile robots: how to self-dock and charge. While some popular robots, like robot vacuum cleaners, have self-charging solutions available, anyone who hopes to bring a robot to market will need to solve this problem for themselves - on both the hardware and software side. My mission is to create a universal open-source docking solution that can be dropped into almost any mobile robot.
Here's my MVP.
Approach

There were three main focus areas in this project:
1. Identifying the charge station
For this, I decided to leverage OpenCV for feature matching. I chose feature matching because it allows for flexibility without the time and resources spent training a custom machine learning model. One can simply provide a reference image with distinct features, and feature matching can fairly accurately spot those features in the environment. Maybe Jane wants a lightning bolt on her robot's docking station while Joe wants a PacMan icon on his - each of them would simply need to provide a reference image, and then use that same image on their docking station.
OpenCV exposes a handful of algorithms for feature matching. I chose to use SIFT (Scale-Invariant Feature Transform), an open-source algorithm that performs feature matching independent of viewpoint, depth, and scale - meaning it can match at different angles and distances.
I created a module that wraps this functionality and made it available on the Viam Registry - so anyone can easily leverage this functionality with any robot or machine they are building.
2. Navigating to the docking station
Feature matching allows us to identify the docking station, but how do we navigate to the charge point? I created a simple algorithm that rotates the robot base looking for a feature match detection, attempts to center the detection in view, then moves forward. Once the feature match detection is both centered and larger than a specified dimension, the robot attempts a docking routine and then, using an INA219 power sensor (built into the latest Viam rover, and easily added to any other robot), checks whether the voltage jumps up - meaning it has successfully docked.
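The search/center/approach loop described above boils down to a small decision function. Here is a simplified, hypothetical sketch of that logic - the actual module on the Viam Registry wraps real base, camera, and power-sensor APIs, and the tolerance and size thresholds below are illustrative, not the module's defaults:

```python
def next_move(detection, frame_width, center_tolerance=0.1, dock_width_ratio=0.4):
    """Decide the robot's next action from a single detection.

    detection: (x_min, x_max) of the matched feature in pixels, or None.
    Returns one of "search", "spin_left", "spin_right", "forward", "dock".
    """
    if detection is None:
        return "search"          # rotate the base until a feature match appears
    x_min, x_max = detection
    center = (x_min + x_max) / 2
    offset = (center - frame_width / 2) / frame_width
    if offset < -center_tolerance:
        return "spin_left"       # detection is left of center
    if offset > center_tolerance:
        return "spin_right"      # detection is right of center
    if (x_max - x_min) / frame_width < dock_width_ratio:
        return "forward"         # centered but still far away
    return "dock"                # centered and close: attempt the docking routine

print(next_move((280, 360), 640))  # centered but small, so: forward
```

After the "dock" step, the voltage reading from the power sensor decides whether to declare success or back off and retry.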
I wrapped this algorithm in another module and published it to the Viam Registry as well. It allows you to configure attributes like the number of dock attempts, the distance and velocity for movement, and of course the robot hardware (base, camera, power sensor) to use for docking. Another nice feature: if you decide you want to use another kind of detector, like a machine learning detector or color detector, it supports any of the Viam vision service built-ins as well as detectors from the registry.
3. Physically docking the robot's battery to a power source
Once the robot reaches the docking station, how does it physically connect to allow the robot's battery to charge?
I used Solidworks to design a 3D printed mount that holds two magnetic and conductive steel bars in a position where they can move slightly back and forth - but by default are held closer to each other with springs. The dock side is also a 3D printed mount that holds two strong neodymium magnets behind each copper bar. When the robot's charge bars come close, the magnets draw them in and the battery charges.
First, you'll need access to a 3D printer to print the supplied STL files. If you don't have one, many online services will print the parts and ship them to you.
Robot side
While printing, take one of the steel bars and cut it in half with a hacksaw, then drill a hole about 3-5mm from the end of each bar centered vertically.
Take the robot-side charge tower and mount it to the rover with screws, washers, and nuts centered above the webcam.
Now, take a screw, two washers, and a nut as well as a small spring. In this order do the following on each side:
- Place a washer on the screw.
- Carefully slide a small spring (you may need to cut it shorter) and the bar (hole-end first, long end pointing out from the front of the rover) into the charge tower with the spring facing towards the outside.
- Push the screw through the outer charge tower, spring, bar, and inner charge tower.
- Secure loosely with a washer and nut.
- Now, cut a 12 inch length of wire and strip both ends. Push one end through the hole under the washer far enough that it sits snug against the bar inside.
- Tighten the assembly so that the wire is held in place.
Now, wire the other side of each wire to your batteries. In our rover, the easiest way to do this was to attach to the same place the battery pack wires are attached to the power switch. Note which side of the charge tower is attached to positive, and which to negative.
Dock side
Now, secure the barrel jack adapter to the charge dock so the wires are on the inside. You'll later plug the 16V (or use a different voltage adaptor for a different robot) power supply into this.
Place the positive and negative wires into the bottom "pockets" below where the magnets will be placed on opposite sides. The bar will slide into this pocket and be held snug against each wire. Be sure that the negative and positive wires are on the sides that match where you placed them on the robot-side charge tower, so that when the robot docks the polarity matches.
Now, carefully position the magnets on one side of the charge tower and hold them in place while sliding one of the copper bars through the top slot, over the magnets, and into the bottom pocket. Repeat on the other side.
Attach the charge dock to a wall or molding near an outlet with one or more wood screws. Make sure you attach it at a height that allows the bars from the robot charge tower to enter and align above the magnets. Plug the power supply into the barrel jack adapter.
Finally, print a docking icon of your choice and position it below the docking station. There's one that works well included in the docking code repository.
Configuring your robot

If you are using a Viam rover, configure it as per the Viam rover documentation. If you are using your own mobile robot, you'll need to make sure it is composed of:
- A camera
- A power sensor
- A board
- A base
Follow the documentation to get your robot running using the Viam platform.
Add the feature match detector
First, copy your feature match reference image to a location on your robot's filesystem and note the location you copied it to.
From your robot's configuration page in the Viam App, go to the Services tab, and click Create Service. Select the vision type, then the detector:feature-match-detector model.
Name the service dock-detector (or something else of your choosing), then configure it with the path where you copied the reference image on your robot:
{
"source_image_path": "/home/youruser/charge.jpg"
}
Now press Save Config on the bottom of the page.
We could go ahead and start using the feature match detector with code right away, but to visually test it we can set up a transform camera: a virtual camera type that takes the stream from a physical camera and applies transformations (in this case, feature match detections) in real time.
Go to the Components tab, click Create Component, and choose the camera type, then the transform model. Name your transform camera dock-cam, and configure it:
{
"pipeline": [
{
"attributes": {
"confidence_threshold": 0.5,
"detector_name": "dock-detector"
},
"type": "detections"
}
],
"source": "cam"
}
Adjust any attributes as needed. Click Save Config on the bottom of the page.
Now go to the Control tab, and test the feature match transform camera. If it is not working as expected, check the Logs tab for any errors.
Add detection docking
From your robot's configuration page in the Viam App, go to the Services tab, and click Create Service. Select the action type, then the dock:detection-dock model.
Name it dock-action, and configure it to reference your physical components and configured detector.
{
"detector": "dock-detector",
"power_sensor": "ina219",
"base": "viam_base",
"camera": "cam"
}
Test the docking capabilities

Now that we have everything set up, we can try having our robot navigate and dock. This is the only time you'll need to run some code, and it won't be much. If you have a Mac or Linux personal computer, you can create the code there, or you can do it on the robot directly. In either case, Viam allows you to securely connect to your robot and issue commands using its configured services and components.
First, create a requirements.txt containing:
viam-sdk
action_api @ git+https://github.com/viam-labs/action-api.git@main
Then install these requirements by running:
pip3 install -r requirements.txt
Now, create a Python script named dock.py with the following contents:
import asyncio
import os

from viam.robot.client import RobotClient
from action_python import Action

# These must be set; you can get them from your robot's 'CODE SAMPLE' tab
robot_api_key = os.getenv('ROBOT_API_KEY') or ''
robot_api_key_id = os.getenv('ROBOT_API_KEY_ID') or ''
robot_address = os.getenv('ROBOT_ADDRESS') or ''

class robot_resources:
    robot = None
    dock_action = None

async def connect():
    opts = RobotClient.Options.with_api_key(api_key=robot_api_key, api_key_id=robot_api_key_id)
    return await RobotClient.at_address(robot_address, opts)

async def main():
    robot_resources.robot = await connect()
    robot_resources.dock_action = Action.from_robot(robot_resources.robot, name="dock-action")
    await robot_resources.dock_action.start()
    await asyncio.sleep(.2)
    print(await robot_resources.dock_action.is_running())
    print(await robot_resources.dock_action.status())
    await robot_resources.robot.close()

if __name__ == "__main__":
    asyncio.run(main())
This script does just a few things:
- It securely establishes a connection to your robot
- It initializes the docking service
- It tells the docking service to start the docking process, and then calls and prints the results of is_running() and status()
If your robot is within view of the docking station, it should try to detect it, navigate to it, and dock!
You can change any configuration parameters as needed to fine-tune how the robot interacts with its environment during the docking process.
What's Next

First off, I was serious when I said I wanted you to join me on this mission. Please feel free to help improve the docking algorithm, CAD design, or anything else! You can find me on Discord - let's make it easier for people to bring their robotic ideas to reality.
Beyond improvements, there is the complete picture. In real life, a self-docking robot would often need to navigate from one location to another before looking to dock. I hope to test this with SLAM and GPS-based navigation. For outdoor use with GPS, we'd also want a more weatherized docking solution.
Finally, I want to collaborate with others looking to create reusable open-source components and services for robots and automated machines. If you have an idea, let's discuss - ideas are my favorite.