This project aims to prevent accidental death by stepping on LEGO® bricks.
How many times have you been barefoot in your house when, while thinking about your future prospects, you suddenly and without warning stepped on a LEGO® brick, generating pain comparable to a blow to the family jewels?
We seek an answer to this serious problem, and LEGO® is no stranger to the solution; they have created slippers just for this purpose:
https://www.amazon.com/finally-makes-slippers-protect-parents/dp/B0184LJJ3U
In addition here is an article that supports this thesis:
https://qz.com/366858/legos-are-so-painful-to-step-on-because-of-physics/
There are even people who are capable of doing this out of charity:
https://www.dailymail.co.uk/video/news/video-1616382/Man-walks-bare-foot-120-feet-legos-charity.html
In order to protect yourself from this horror you need nerves of steel and pain immunity.
We have to solve for this calamity!
This has to stop!
And Sony has my back on this.
Solution
We are going to build a robotic car capable of collecting the pieces of LEGO it detects with vision algorithms, using ultrasonic sensors to give the robot more autonomy.
It is quite different from the current solutions, because it is a robot that will literally go looking for the pieces of LEGO and pick them up, so that its owner is never harmed by them.
The robot is based on a car rover; the Spresense development board will drive the motors through a DC motor driver. A sweeper made from three servos and 3D-printed (or laser-cut) parts will collect the pieces of LEGO. To control the car, we will use ultrasonic sensors to prevent it from crashing into walls or obstacles.
In addition to motor control, we will use a camera on the Spresense board and its smart image-recognition capabilities to scan the terrain and detect LEGO bricks, which in turn will be collected and placed in a safe spot. If the car is incapable of picking up a piece, because it is too big or for any other reason, the system will send an alert to its owner, notifying them that there is a dangerous piece of LEGO that could not be collected and threatens their well-being.
Main features I want for the system:
- Semi-autonomous (BLE control?)
- Full autonomy
- Accurate brick recognition.
- Light
- Compact design
- Kid-friendly
- WiFi connectivity and notifications (maybe I'll add a Raspberry Pi Zero and connect it to cloud!).
- It has to look cool.
- It has to protect those you love.
For a long time I was thinking about how to design the robot for both autonomy and looks, and I reached a conclusion: a robotic arm to collect the bricks will seldom work, and it is quite cumbersome to code, with too many precise movements; if it fails, you look like a donkey.
But then I recalled images from long ago, when I was a little boy.
A BULLDOZER!
- It looks waaaay cooler than a rover with a puny robotic arm.
- Imagine doing that kind of work with an arm. (Children never leave just one LEGO brick lying around.)
- Your inner child comes forth.
- Your inner destroyer also comes forth.
So for now we have a couple of tasks:
- Seek, identify and destroy any brick (with the Camera board on top of the future Bulldozer)
- Bulldoze those puny bricks to a safe spot where they can't harm anyone.
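The two tasks above can be sketched as a tiny state machine. This is only an illustration; the state and function names here are my own invention, not from any of the attached code:

```cpp
#include <cassert>

// Hypothetical high-level modes for the Bulldozer (names are illustrative).
enum class Mode { Seek, Push, Park };

// Advance the state machine: seek until the camera flags a brick,
// push it until the drop-off corner is reached, then park and resume.
Mode nextMode(Mode current, bool brickSeen, bool atCorner) {
    switch (current) {
        case Mode::Seek: return brickSeen ? Mode::Push : Mode::Seek;
        case Mode::Push: return atCorner  ? Mode::Park : Mode::Push;
        case Mode::Park: return Mode::Seek; // go back to patrolling
    }
    return current;
}
```

The real sketch interleaves this with the ultrasonic avoidance logic, but the seek/push/park cycle is the core loop.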
But we have to build it, and for that I go on to:
Design
Long ago, I purchased a chassis for a microcontrollers final project; believe it or not, I had to code it in nothing but assembly language on an ATmega16. Yeah, cry for me. But now it gets upgraded to the ultimate in ARM technology: computer vision, full autonomy, a whole bunch of sensors and motors, and, perhaps most important, a complete makeover!
Spresense Boards
By now there are several guides and tutorials on how to get started with the board, such as:
https://developer.sony.com/develop/spresense/developer-tools
I would recommend sticking with the official sources at first.
So I'm just going to share some tips:
Tip #1: When getting the main board out, unpack it in a low-light room or keep it facing down, as in the image:
Next, while it is still facing down, put it inside an envelope or a box, and apply the shading seal while it is inside the box/envelope.
All this protects the light-sensitive elements on the board, which can easily be destroyed by bright light.
Tip #2: After burning the bootloader and uploading the example sketch, the board might not be recognized. Don't panic; just unplug and re-plug the USB and try again.
Tip #3: Wear gloves when handling the camera; you want it to last a long time. Also place some anti-static protection on the board's bottom side.
Tip #4:
I have to praise the amazing packaging and value of each of the components, and especially Sony's packaging engineers. These are top-of-the-line boards and the premium price is worth it. By the way, is Sony telling us something?
In order for it to later adapt to bigger needs we first need to control the proto-Bulldozer via the Spresense board and a Bluetooth Module. Later we will add the other features.
These are the materials you need for this step:
An HC-06 Bluetooth module, a motor driver based on the L298D, four DC motors, a base for your car, the Spresense board, a USB cable, and a bunch of jumpers.
You have to follow the next diagram for the connections:
Important note: Some Bluetooth modules are rated at 3.3 V on RX/TX, so just make a voltage divider with a 1 kΩ and a 2 kΩ resistor: the 1 kΩ goes to the Spresense's TX, the module's RX taps the middle of the divider, and the 2 kΩ goes to ground.
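To see why those values work: the module's RX sees Vin·R2/(R1+R2), so with a 5 V TX line (assuming the board is jumpered for 5 V I/O) that's 5 × 2000/3000 ≈ 3.3 V. A quick sanity check:

```cpp
#include <cassert>
#include <cmath>

// Voltage divider output: Vout = Vin * R2 / (R1 + R2).
// Here R1 = 1 kOhm from the TX line, R2 = 2 kOhm down to ground.
double dividerOut(double vin, double r1, double r2) {
    return vin * r2 / (r1 + r2);
}
```

With a 3.3 V TX line the divider would only be attenuating an already-safe signal, so check your board's logic level first.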
Apart from that, to send Bluetooth commands I recommend downloading the Arduino Bluetooth Controller application, but any other works.
For the Spresense code, you can find the simple sketch at the bottom and simply (no pun intended) upload it to the board, but you need to understand several things about the Spresense that differ from an Arduino. Let's call it the second round of tips:
#1: The Spresense exposes the USB serial monitor on the connected cable as "Serial", and the TX/RX pins as "Serial2".
#2: All the pins in the Spresense are "pulled up" so it might change your normal logic a little.
#3: The code is as simple as it gets, but try to understand every step while you read it.
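To give a flavor of what reading Serial2 looks like, here is the kind of command mapping a sketch like this uses. The single-character protocol below is an assumption of mine for illustration; check the attached code for the real one:

```cpp
#include <cassert>

// Hypothetical one-character protocol from the Bluetooth controller app:
// 'F' forward, 'B' back, 'L' left, 'R' right, anything else = stop.
enum class Drive { Forward, Back, Left, Right, Stop };

Drive decodeCommand(char c) {
    switch (c) {
        case 'F': return Drive::Forward;
        case 'B': return Drive::Back;
        case 'L': return Drive::Left;
        case 'R': return Drive::Right;
        default:  return Drive::Stop; // unknown byte: fail safe
    }
}
// On the Spresense this would be fed from Serial2.read() inside loop().
```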
So we have it functioning like this. I intentionally placed everything on top, as I'm going to be building a case for it. (Remember, it has to look cool as hell.)
Now that we know the basics work, it's time to upgrade and get to the fun part!
Step 2: Autonomy
For autonomy to be successfully implemented you need the proper sensors, and Sony has my back again: I have a Spresense camera board, and I'm going to add three ultrasonic sensors, two cheap ones and one very, very expensive one that I just had lying around.
The design for this is mainly stolen from a Roomba vacuum robot, but without the bumpers. Whenever one sensor detects proximity you just turn to the other side, and if two sensors detect proximity then you are in a corner.
I'll call this Roomba mode, because the Bulldozer will just go around the room avoiding obstacles. For that I will use the three ultrasonic sensors to map the space around it; depending on what it finds, we can implement logic for it to move around.
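That avoidance rule can be written as a pure decision function. The threshold value and the names here are placeholders I chose for illustration, not the values from the final code:

```cpp
#include <cassert>

enum class Turn { None, Left, Right, Reverse };

// Roomba-mode rule from the text: obstacle on the left -> turn right,
// obstacle on the right -> turn left, both sides blocked (a corner) -> back out.
Turn avoid(int leftCm, int centerCm, int rightCm, int thresholdCm = 15) {
    bool left   = leftCm   < thresholdCm;
    bool center = centerCm < thresholdCm;
    bool right  = rightCm  < thresholdCm;
    if (left && right) return Turn::Reverse; // cornered
    if (left)          return Turn::Right;
    if (right)         return Turn::Left;
    if (center)        return Turn::Right;   // wall dead ahead: pick a side
    return Turn::None;
}
```

In the real sketch the three distances come from the three ultrasonic sensors, read one after another each loop.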
[Image: the car with its three ultrasonic sensors]
Let's start by testing the ultrasonic sensors with some sample code:
Sample code:
// ---------------------------------------------------------------------------
// Example NewPing library sketch that does a ping about 2 times per second.
// ---------------------------------------------------------------------------
// I adjusted it a little bit to ping far less.
#include <NewPing.h>

#define TRIGGER_PIN  12  // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN     11  // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 200 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance.
int distance = 0;

void setup() {
  Serial.begin(115200); // Open serial monitor at 115200 baud to see ping results.
}

void loop() {
  delay(500); // Wait 500ms between pings (about 2 pings/sec). 29ms is the shortest allowed delay between pings.
  Serial.print("Ping: ");
  distance = sonar.ping_cm(); // Send ping, get distance in cm (0 = outside set distance range).
  Serial.print(distance);
  Serial.println("cm");
  if (distance <= 5) {
    Serial.print("weeeeeee");
  }
}
As you can see, we need the NewPing library for this example. Now, does it run on the Spresense with three sensors?
Nah. Don't bother trying: the NewPing library is incompatible with the Spresense.
Going back to code:
To run the sensors you need code along these lines:
/*
 * Ultrasonic Sensor HC-SR04 and Arduino Tutorial
 *
 * by Dejan Nedelkovski,
 * www.HowToMechatronics.com
 *
 */

// defines pin numbers
const int trigPin = 8;
const int echoPin = 9;

// defines variables
long duration;
int distance;

void setup() {
  pinMode(trigPin, OUTPUT); // Sets the trigPin as an output
  pinMode(echoPin, INPUT);  // Sets the echoPin as an input
  Serial.begin(9600);       // Starts the serial communication
}

void loop() {
  // Clears the trigPin
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);

  // Sets the trigPin HIGH for 10 microseconds
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Reads the echoPin, returns the sound wave travel time in microseconds
  duration = pulseIn(echoPin, HIGH);

  // Calculating the distance
  distance = duration * 0.034 / 2;

  // Prints the distance on the Serial Monitor
  Serial.print("Distance: ");
  Serial.println(distance);
  delay(300);
}
I practically stole it from www.HowToMechatronics.com and altered it a bit; the interesting part is that it already provides the variables needed to build logic on top.
So, after fiddling with it a lot and getting it to work properly, this is the result.
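The only real math in that sketch is the echo-time conversion, which is worth pulling out as a function: 0.034 cm/µs is the speed of sound, and the division by two accounts for the round trip of the pulse.

```cpp
#include <cassert>

// HC-SR04: pulseIn() returns the round-trip echo time in microseconds.
// Sound travels ~0.034 cm/us, and the pulse covers the distance twice.
int echoToCm(long durationUs) {
    return static_cast<int>(durationUs * 0.034 / 2.0);
}
```

For example, an echo of about 5820 µs corresponds to roughly a metre of clearance.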
You can get the code at the bottom!
Step 3: Brick (Image) Recognition
First, let's set out how this is going to work.
I want the Bulldozer to track its surroundings, searching for pieces of LEGO. For that it will have to be in a central position in the room. After it reaches that position, it will scan the floor by rotating on its axis. If it doesn't find a piece, it just returns to its place in a corner. If it does find a brick, it will bulldoze the piece away to a corner, and save lives by doing so.
For this to work I'll use a Raspberry Pi Zero W. But first I have to test whether the Raspberry Pi can read the SD card on the Spresense as a mass-storage device.
We also have to go over certain issues I found several people having in the discussion section of this contest.
The installation of the SD card is seamless; it is the same as on a Raspberry Pi. Just grab an SD card and gently insert it, as you can see in the image.
With that done, I flashed the camera example provided with the board's library, but with a single change: instead of taking a hundred pictures, I reduced the number to just five. For that, go to this code snippet and change the counter:
/* This sample code can take 100 pictures, one every second, from starting. */
if (take_picture_count < 5) // Change this to a number that suits you.
{
  /* Take still picture. */
After running it once, I flashed the USB mass storage example to see what was on the SD card. For that to work properly you will need an additional USB cable, connected to the expansion board's micro-USB plug.
You'll also need another port in your PC.
But if you did everything by the book, then when you inspect the new drive your PC detects, you'll see the five photos taken.
This is essential to the project, as I will be reading that SD card to perform the image recognition. So, first hypothesis confirmed.
The next step will be to create a Node-RED instance in the RPi that will run an IBM Watson service to use Watson's visual recognition service to identify bricks.
But before continuing let's test if Watson recognizes the bricks. For the next steps you will require an IBM cloud account: https://www.ibm.com/cloud/get-started
You won't need to pay a thing as long as you stay within the free limit. To test my hypothesis I created a Node-RED instance in Bluemix (if you need help with that, go to: https://nodered.org/docs/platforms/bluemix ) and set it up like so:
Just set the inject node to send a string with the URL of the image you wish to test, then display the result in the debug node. An example with an image full of LEGO bricks yielded the following results:
Haha, these results are great, and across several LEGO images the constants were keywords like "eraser", "Tack", "nail", and "sharpener", so with these we can later build a list of things we don't want to see on our floors.
Before continuing, I analyzed an image of my own floor to see which results I would get:
Interesting: the model sees the tiles on my floor as a "net" and nothing else, so making the logic that accepts this result and flags the others will be quite easy.
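The filtering described above boils down to a blocklist check over the classifier labels. The label lists here are just the ones observed in the tests above; yours will grow as you test more images:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Labels Watson returned for LEGO-strewn floors vs. a clean floor (from the tests above).
const std::vector<std::string> kDanger = {"eraser", "Tack", "nail", "sharpener"};
const std::vector<std::string> kFloor  = {"net"};

// A frame is flagged if any classifier label is on the danger list.
bool needsBulldozing(const std::vector<std::string>& labels) {
    for (const auto& label : labels) {
        if (std::find(kDanger.begin(), kDanger.end(), label) != kDanger.end()) {
            return true;
        }
    }
    return false;
}
```

In the flow, the chain of switch nodes at the end plays exactly this role, one node per keyword.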
And with that we have just to get on the Raspberry Pi Zero W, run an instance of Node-RED, and play a little bit with the logic so it detects LEGO bricks and other things we want to be removed.
For setting up the RPi Zero W go to: https://www.raspberrypi.org/downloads/raspbian/
I recommend using Etcher and a 16 GB SD card.
www.etcher.io Pro tip: disable "Validate write on success" in Etcher's settings; it's kinda buggy.
Now we have to set up Node-RED.
All the Raspberry Pi images come with Node-RED pre-installed, but that version is no good for us, as it is quite old and limited.
Go to: https://nodered.org/docs/hardware/raspberrypi and follow the instructions to install the new version, which is just a matter of typing this into the Raspberry Pi's command line:
bash <(curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered)
We will be writing to the Bluetooth module's serial in order to drive the Bulldozer around. Just remember to activate the Raspberry Pi's Bluetooth and pair it with your HC-06 module; with this we can send information via serial and, most importantly, receive it.
For that we will connect the expansion board's USB to the Raspberry Pi Zero W to read the images, and you can still flash the Spresense through the other port. For this, the Spresense also has to be configured as mass storage.
It is also highly recommended to run the Raspberry Pi in VNC mode. For that, go to your Raspberry Pi's configuration and set it up like this:
The main program will be a Node-RED flow, which at the same time provides a new UI made with the Node-RED dashboard.
For that we need to install additional nodes, specifically the IBM Watson ones, in order to run the visual recognition service.
To get them, go to your Node-RED instance -> menu -> Manage palette -> Install, type "Watson" in the search box, and install node-red-node-watson.
The flow was created with the serial node, a function node, switch nodes, a file node, dashboard nodes, and the IBM visual recognition node.
The flow is at the bottom, but without the required credentials (an IBM rule, plus I don't want to lose my subscription), so you'll have to set them up yourself.
It is very easy; let's go part by part:
The file access node:
Just go to your Raspberry Pi, get the path of your file, and input it there.
In the FIRST switch node:
Set it up like so; our cue to run visual recognition will be the string "go", written to Serial2 by the Spresense.
The visual recognition node is a little trickier, as you have to configure IBM Watson. Fortunately, I already have a guide for that: https://github.com/EddOliver/AggroFox/tree/master/IBM%20cloud%20AggroFox Just create the IBM Cloud Visual Recognition instance and input your credentials.
There are several switch nodes at the end; these look for the keywords we spoke about back when we were testing the system. You can add as many as you need to improve the image recognition. Set them up like so:
The function node just sends a proper message to the BT serial:
The Arduino code will also have to be modified a little:
- Take only one picture and access it every time.
- Write to the BT serial (Serial2) to send the request.
- Read the corresponding BT serial response in order to act (or not act).
Both files are, as always, at the bottom.
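Here is a sketch of what that request/response logic might look like on the Spresense side. The "go" token and the 'y'/'n' verdict characters are placeholders I chose for illustration; the real protocol is in the attached files:

```cpp
#include <cassert>
#include <string>

enum class Action { Bulldoze, KeepRoaming };

// Placeholder token the Spresense writes to Serial2 after saving a picture,
// telling the Node-RED flow to run visual recognition on it.
const std::string kRequest = "go";

// Placeholder verdict from the Pi: 'y' = dangerous brick seen, anything else = clear.
Action onVerdict(char verdict) {
    return verdict == 'y' ? Action::Bulldoze : Action::KeepRoaming;
}
```

On the board, loop() would send kRequest over Serial2, wait for a byte back, and pass it to onVerdict() before moving.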
So with this we have a fairly accurate brick-detection system: the Spresense camera board, plus a Raspberry Pi running Node-RED and IBM Watson. The Pi reads what the Spresense leaves on the SD card and acts upon it.
Step 4: Full Integration and Pimping My Bulldozer
With all this done, we have to integrate everything into one package and run it accordingly.
All the code is at the bottom; it was practically a copy-paste and integration of all the other code, plus the logic for the interactions.
To create my Bulldozer I literally took apart a cheap toy bulldozer and integrated the most useful parts into my chassis.
Now we are ready for a demo of the project!
And here it is:
Commentary and Future Rollout
This was an amazing project to develop, and I thank Sony and Hackster for the opportunity to just be creative with this.
I practically exploited the Spresense as much as I could, but maybe next time I'll experiment with lidar a little bit.
I would appreciate feedback on this style of writing, and I hope you liked the project. Thank you for reading!
References
https://developer.sony.com/develop/spresense/developer-tools/get-started-using-arduino-ide
https://developer.sony.com/develop/spresense/developer-tools/hardware-documentation
https://nodered.org/docs/platforms/bluemix