For thorough instructions on how to recreate this project, check out the LitterBug site, repo and wiki!
Intro
The most harmful kind of trash in our oceans today is tiny pieces of plastic that are difficult to pick up and easily ingested by wildlife.
The LitterBug can help pick up some of the smaller pieces of trash that people may overlook during a beach cleanup. It'll work as a guardian of its beach, regularly cleaning the area and monitoring wildlife and protected areas such as turtle nests.
Packed with a solar panel, it can always be at work or on the lookout.
You can use a night vision Pi Camera, a motion detector, a GPS breakout, and a Hologram Nova to implement security features: monitoring unauthorized activity, identifying perpetrators, logging images and GPS locations, and notifying a human when something happens.
Servos, motors, and other sensors will be needed to create a "scoop" and sieve to collect only trash and dump out sand. We're imagining it to work sort of how WALL-E picked up trash.
First, we want to test whether the simple CNN used in the donkey car repo can learn navigation and obstacle avoidance without a physical track on the ground.
If we succeed, we'll extend this model to learn how to identify trash in its path and approach it. This will include more data gathering, playing with the model architecture, and/or playing with other model types.
Finally, we'll add more complexity by learning how to scoop an identified piece of trash and release at a location. This entails adding additional data points such as the movement of the scoop as well as gathering more training data, playing with different models/architectures, etc. as the previous test.
We believe that by testing at each step we can build up to being able to successfully gather trash!
Day 1: Getting the Donkey Car Kit
We were so excited when we received our Donkey Car kit in the mail that we got to building it straight away.
On the official Donkey Car site, it says you can build it in ~2 hours but it probably won't take that long.
The docs for putting the hardware together will get you most of the way there. The only part they don't cover is how to remove the top of the RC car (since there are many different RC cars you can use to construct a Donkey Car).
We found this video really helpful to correctly remove the top for the desert monster model.
A quick note: when disconnecting the cables from the RC car's receiver, channel 1 is usually throttle and channel 2 is steering. Keep this in mind when you connect them to the servo shield: the throttle cable goes to channel 0 and the steering cable to channel 1.
Continuing with the official docs, you'll learn how to ssh into the Pi and configure the software needed for controlling the RC car. While calibrating the steering and throttle, we stuck to lower maximum values for testing.
Since we want to add several new sensors, we designed and printed a new roll cage. It has the motion sensor embedded at the top, a modified camera mount for the night vision Pi Camera, and some more screw holes for the various other sensors.
And here's the final result for the first iteration!
After the calibration and this case update, we took the Donkey Car for a spin!
We've printed some custom risers to lower the Donkey Car platform.
We've also picked up some beefier off-road tires from the local hobby shop for the challenges of driving in sand.
Next, we connected the GPS, temp/humidity, and PIR motion sensors. Since more than one sensor uses i2c, we added a small breadboard and glued it to the roll bar. Check out the fritzing diagram or download it in the attached files for the full wiring.
After wiring and assembly, we have:
We've experimented with using an iPad and a cell phone to control the Donkey Car via web application joystick, tilt and key bindings. However, it's worth considering pairing a PS3 controller with the Raspberry Pi for the responsiveness of bluetooth control over your Donkey Car. Use this guide to set up your PS3 controller.
We found that the initial max throttle for the joystick option was too low to get the rover moving. We increased the max throttle in the config.py file to get it going.
#JOYSTICK
...
JOYSTICK_MAX_THROTTLE = 1.0 #increased to 1.0 from 0.25
After finding this method was the best control, we gave it a test run outdoors. If you want to drive your Donkey Car outside of your home wifi network, you can set up the raspberry pi as an access point in a standalone network. Use this tutorial to set it up.
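For reference, here is a minimal sketch of what a hostapd configuration for the standalone access point might look like. The SSID and passphrase are placeholders; follow the linked tutorial for the full setup (dhcpcd and dnsmasq are also needed):

```ini
# /etc/hostapd/hostapd.conf -- minimal sketch; values are placeholders
interface=wlan0
driver=nl80211
ssid=litterbug
hw_mode=g
channel=7
wpa=2
wpa_passphrase=ChangeMe123
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```

With this in place, the Pi broadcasts its own network so you can connect and drive anywhere, with no home wifi required.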
We used the Termius app for iOS to SSH into the LitterBug over its network from a tablet or phone. This makes it really easy to debug on the go!
Although we'll need to practice more to get really good training data, it was very easy to avoid obstacles and drive over trash. We even began catching trash without trying when LitterBug got caught in some fishing line!
Next, we're going to start designing a training course specifically for recognizing trash and driving over it since there won't be any tracks for LitterBug to follow in the wild.
Day 3: Course Building & Training
We've partitioned the yard into a training circuit with two raised planter beds full of herbs and succulents.
We drove simple consistent loops around these obstacles to get a basic "track" without the lanes. After a little practice, we began recording.
After getting a couple of laps in, we want to transfer the data from the Pi onto a bigger computer for training. The original instructions are currently not quite right, so follow these steps to get training:
You'll want to copy everything in the /home/pi/mycar directory from your pi onto your local computer. This directory has the files you'll need to get training.
scp -r pi@<PI-IP-ADDRESS>:/home/pi/mycar ~/
You'll need to install the same version of tensorflow on your local machine, along with some other helpful libraries. If your default python version is greater than 3.5, you can simply call pip as in the command below; otherwise, use pip3.
sudo apt-get install virtualenv build-essential python3-dev gfortran libhdf5-dev
pip install tensorflow==1.8.0  # use pip3 if python3 is not default; add --upgrade if tensorflow is already installed
Next, you'll want to clone the donkeycar repo to install donkeycar libraries. In your home directory, run:
git clone https://github.com/wroscoe/donkey donkeycar
cd donkeycar
Once in the donkeycar repo, you'll want to replace the donkeycar/parts/keras.py file with an older version: swap in the keras.py script we've included below. Then install the donkeycar library with:
pip install -e . #again, use pip3 if python3 is not default
You should be ready to go! To begin training, run:
python ~/mycar/manage.py train --tub ~/mycar/tub --model ~/mycar/models/mypilot
If you run into any import errors or module not found errors, pip install the missing libraries. We were able to get a pilot model trained on ~15,000 images, but we're going to record more data, retrain, then test it.
Day 4: Testing Autopilot
The LitterBug performs autopilot using the Donkey Car's trained models. The model decides a steering angle and throttle rate from the output of a convolutional neural network that takes an image from the Pi Camera feed as input. This end-to-end learning framework is an example of the behavior reflex approach to autonomous driving.
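To make the "deciding a steering angle" part concrete: the stock Donkey categorical model treats steering as a 15-way classification over the range [-1, 1], while throttle is regressed directly. Here is a sketch of that binning, mirroring the `linear_bin`/`linear_unbin` helpers in the donkeycar utilities (a simplified illustration, not the repo's exact code):

```python
import numpy as np

def linear_bin(angle):
    """Map a steering angle in [-1, 1] to a one-hot vector over 15 bins."""
    shifted = angle + 1                      # shift into [0, 2]
    idx = int(round(shifted / (2 / 14)))     # nearest of 15 bin centers
    onehot = np.zeros(15)
    onehot[idx] = 1
    return onehot

def linear_unbin(onehot):
    """Recover the bin-center angle from a one-hot (or softmax) vector."""
    idx = int(np.argmax(onehot))
    return idx * (2 / 14) - 1
```

Round-tripping an angle through these functions quantizes it to the nearest of the 15 bin centers, which is why the autopilot's steering comes out in discrete steps.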
We gathered a good first batch of training sessions, looping around the garden obstacle course consistently in various lighting.
We trained in various lighting conditions and even used night vision to represent the scene in various ways, in an effort to create robust feature detectors for our autonomous driver.
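Beyond driving at different times of day, lighting variety can also be synthesized. As a hedged illustration (our own sketch, not something in the donkeycar repo), a simple brightness jitter applied to camera frames before training looks like this; the scale range is an assumption chosen for demonstration:

```python
import numpy as np

def jitter_brightness(img, rng, low=0.6, high=1.4):
    """Randomly scale pixel brightness to simulate lighting changes.

    img: HxWx3 uint8 camera frame; returns an array of the same
    shape and dtype. The (low, high) range is a placeholder choice.
    """
    scale = rng.uniform(low, high)
    out = img.astype(np.float32) * scale
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying this to each frame at training time cheaply multiplies the effective lighting diversity of a recorded session.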
After collecting close to 20,000 samples, we trained our first autopilot! Using the command above, we trained locally and recorded some of our preliminary results. We found that the LitterBug did really well on long stretches and steering across a small dip in one of the corners.
LitterBug learned to turn at each end, but its timing was sometimes off, causing it to crash mostly around the turns. We think it learned, from our training at high speeds, to anticipate momentum carrying it around the bends.
Exploring our Training Circuit
Here we consider the distribution of steering angle and throttle speed for a collection of 150,000 training samples we collected over a couple days of driving around the tracks.
When we look at example images from training, we find that LitterBug typically takes the hard turns with fencing or an AC unit in the field of view. We expect our autopilot learns to associate these visual markers with the action of taking a hard turn.
The same pattern appears when we take a random sample of images where LitterBug is making a hard reverse.
You can imagine how some additional context might help to disambiguate when a wall or fence is in the field of view. Sampling the training distribution helps us intuit the landmarks our classifier might associate with different steering angle and throttle configurations.
Consider the following scatter plot of throttle speed versus steering angle for angles within ±0.1 (nearly straight). We add some transparency to the points to show the concentration of large positive throttle speeds when the steering angle is straight.
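To reproduce this kind of analysis on your own tub, you only need to parse the recorded telemetry. Here is a sketch that collects angle and throttle from a Donkey tub directory; it assumes the records are JSON files named `record_*.json` containing `"user/angle"` and `"user/throttle"` keys, as the tub writer produces (key names should be verified against your donkeycar version):

```python
import glob
import json
import os

def load_tub_telemetry(tub_dir):
    """Collect steering angles and throttle values from a Donkey tub.

    Assumes JSON records named record_*.json with "user/angle" and
    "user/throttle" keys (verify against your donkeycar version).
    """
    angles, throttles = [], []
    for path in sorted(glob.glob(os.path.join(tub_dir, "record_*.json"))):
        with open(path) as f:
            rec = json.load(f)
        angles.append(rec["user/angle"])
        throttles.append(rec["user/throttle"])
    return angles, throttles
```

Feeding the two lists into matplotlib's `hist` or `scatter` yields the distribution and scatter plots discussed above.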
So far, LitterBug has performed well for its first model but it needs more training if we are to complete a loop around our vaguely defined track.
After combining roughly 250K training images from running the off-road patio track, we trained a model with an order of magnitude more data than our earlier attempts. As a result, the autopilot can complete loops nearly as well as a human.
Notice how LitterBug reorients to move clockwise on the track, as it has learned from many laps over multiple training sessions. LitterBug demonstrates obstacle avoidance on a vaguely defined off-road track, needing only an occasional assist.
We found LitterBug comes to a stop in front of large obstacles and even attempts to wiggle out. The wheels of the 1/16th scale RC car do not lock when the throttle is cut, allowing the occasional roll in reverse. By default, the neural network autopilot uses a ReLU activation for the throttle output. We experimented with training using a linear activation to allow the output of both positive and negative throttle values.
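The activation swap matters because of how ReLU behaves: it zeroes every negative pre-activation, so a ReLU throttle head can never command reverse no matter what it learns. A tiny numeric illustration (the raw values are hypothetical network outputs, not measured data):

```python
import numpy as np

def relu(x):
    # ReLU clamps negatives to zero: reverse throttle is impossible.
    return np.maximum(0.0, x)

def linear(x):
    # A linear activation passes negatives through as reverse throttle.
    return x

# Hypothetical pre-activation outputs from the throttle head.
raw = np.array([-0.4, -0.1, 0.0, 0.3, 0.8])
```

Under ReLU the first two commands collapse to 0 (coast), while the linear head preserves them as reverse throttle, which is what lets LitterBug back away from an obstacle.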
Additionally, we've experimented with slightly more complex models by increasing the number of nodes for some of the fully connected layers, by adding an additional layer, by experimenting with larger filters in the first convolutional layer, and trying different loss weights for throttle and steering angle.
After a week of training in the desert heat, the car's ESC finally gave out. When we replaced it with a higher end component, we had more control of the drive modes allowing for much easier control and reverse than before.
At this point, we were ready to upgrade to a 1/10th scale platform designed to crawl off road.
PART II
We took the design from the Donkey Car creators and scaled it up to fit the larger 1/10th scale RC car. Since our 3D printer couldn't print such a long base plate, we opted for a cheap clipboard instead. We removed the clipboard hardware and drilled holes above the four supports of the new car.
After attaching the new board, we tested how the new RC car drives. Right away, we had much more power and control (including improved reverse!).
Since the board is much bigger, we were able to secure all the extra sensors onto it. We swapped the temperature sensor for an accelerometer to collect richer driving data. For the roll cage, we printed two U-shaped roll bars, which were faster and easier to print and used less material.
With the new basic body of the LitterBug complete, we tested how well the autopilot model we trained earlier transfers to this bigger RC car.
It does relatively well. The model seems to have learned the simple behavior of turning right when it sees an endpoint on the track. It runs into the ends since the dimensions and steering of this car are different. We can use this model as a good starting point to continue training for trash pickup.
To train the LitterBug to identify trash, we chose a variety of outdoor scenes and focused on finding trash in its path. The camera can't see trash very well until it is fairly close. The behavior we are looking for is for LitterBug to roam around until it locks onto a piece of trash, scoops it, and continues looking.
We gathered enough data to train a preliminary trash pilot that will go after trash. With this model, the rover drives relatively straight and seems to speed up near trash. With some more data it could learn better maneuvering and more precise detection of trash.
When the scoop is added to the body, we will record the scoop's movement and add this as another feature to learn from. As the task becomes more complex, so will the model needed to execute this action. Therefore we'll collect many good scooping and driving samples!
Scoop Design
A simple mechanism to pick up trash is to scoop it! Since we're training our LitterBug to drive towards trash, scooping seems like a good starting point. After thinking it through, we went for a bulldozer design that could lift up and down. We adapted this tiny bulldozer and scaled it up to fit our RC car, then attached the hinges to a new bumper we printed.
To control the scoop, we added an extra RC servo to the servo hat and modified the donkey car repo to control it. All the files we edited are linked below. We basically programmed the right thumb joystick on the PS3 controller to map to the new servo and modified the data-collection scripts so they also record the scoop movement. We also fixed the controller.py script to include the correct button names for the extra functionality described in the controller docs. Here's a quick map of the buttons:
- Triangle - Increase max throttle
- X - Decrease max throttle
- Circle - Toggle recording (disabled by default. auto record on throttle is enabled by default)
- dpad up - Increase throttle scale
- dpad down - Decrease throttle scale
- dpad left - Increase steering scale
- dpad right - Decrease steering scale
- Start - Toggle constant throttle. Sets to max throttle (modified by X and Triangle).
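The scoop mapping itself is straightforward: a thumbstick axis reports a value in [-1, 1], and the servo hat wants a PWM pulse count. A sketch of the conversion follows; the pulse bounds here are placeholders, since the real values need to be calibrated for your scoop servo just like the steering and throttle channels:

```python
def stick_to_pulse(stick, pulse_min=290, pulse_max=490):
    """Map a thumbstick axis value in [-1, 1] to a PWM pulse count.

    pulse_min/pulse_max are placeholder calibration values; tune them
    for your servo the same way you calibrated steering and throttle.
    """
    stick = max(-1.0, min(1.0, stick))   # clamp out-of-range input
    span = pulse_max - pulse_min
    return int(pulse_min + (stick + 1) / 2 * span)
```

Pushing the stick fully down maps to the minimum pulse (scoop lowered), fully up to the maximum (scoop raised), with center resting in between.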
After putting this all together, we had to take the new LitterBug out for a spin!
The scoop works pretty well so far! It defaults to staying slightly raised, and we lower it when it's near trash. The scoop has some small mesh holes to dump out sand and keep the litter.
Now that the scoop works and its motion can be recorded alongside the steering and throttle, we are gathering scooping and navigating training data.
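Recording the scoop amounts to adding one more channel to each training record. A sketch of what one extended record might look like; the key names follow the tub convention, but `"user/scoop"` is our addition and the exact key in our modified scripts may differ:

```python
import json

def make_record(angle, throttle, scoop, image_name):
    """Build one training record with scoop position as an extra channel.

    "user/scoop" is a hypothetical key name for the new channel;
    the other keys follow the Donkey tub convention.
    """
    return {
        "user/angle": angle,
        "user/throttle": throttle,
        "user/scoop": scoop,
        "cam/image_array": image_name,
        "user/mode": "user",
    }
```

At training time the model then learns a third output alongside steering and throttle, which is the "more complexity" mentioned earlier.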
Solar Power
For greater sustainability and efficiency, we wanted to add solar power to the rover. First, we experimented with powering the brains of the Donkey Car: the various sensors and the Raspberry Pi. We used an affordable 12V solar panel, a small solar charge controller, a power booster, and a 3.7V lithium-ion battery. The 12V panel has enough power to charge a larger battery if we decide to switch later or try to power the entire rover (experiment coming soon!). The solar charge controller we chose is flexible enough to charge many kinds of batteries from many kinds of panels. We recycled a 3.7V lithium-ion battery from another project and used Adafruit's power booster to step the output up to 5V at 1A to power the Raspberry Pi and peripherals. A note on the power booster: make sure to use the 5V and ground pins on the side of the board, as shown below.
We redesigned the body to enclose the solar panel between the two U-shaped roll bars and switched to an acrylic bottom and sides.
The scoop was reinforced with extra material screwed in around the area where the bumper attaches to the rc car body. This made the scoop able to flex more and handle heavier loads.
Here's a video of the new build!
And testing at the beach:
Solar powering the entire car
Digging around the internet, we haven't been able to find a solar charge controller small enough to deliver the appropriate current to the RC car and Raspberry Pi while also charging a finicky battery like a LiPo RC car battery. Solar charge controllers get quite bulky as they get more powerful. However, the project could still be completely solar powered with some redesigning. We see two ways the LitterBug project could be modified to make that happen:
- Scale up the LitterBug platform - To accommodate the size of solar charge controllers powerful enough for the entire vehicle, you may want to go for a bigger sized rc car or build a larger platform from scratch. The size of the rover would be constrained by how much weight you're adding and how much power is needed to drive and scoop.
- Build a solar charge docking station - Design wise, this may be more involved but would keep the LitterBug rover a lighter, smaller vehicle. The idea would entail creating a solar power station where the LitterBug would return to a "dock" to charge up when it detects a low battery. This allows you to choose any size solar panel and solar charge controller to power even a fleet of LitterBugs, much like how parking spaces with electric charging stations work.
These options are out of the budget (for now), but could be pursued in the future.
TO DO
- We've added more complexity to the task the autopilot must learn in order to pick up trash. We'll continue to collect many more samples of picking up trash to improve the trash autopilot.
- We're working on a Read the Docs site for this project so it's very easy to recreate the LitterBug!
Thanks again to ARM, Hackster, and DIYRobocars for hosting a really fun contest!