In a world where time becomes more and more valuable, we have to learn to prioritize the activities that matter. In order to focus more on actions that make an actual change in our community, we should strive to automate as many mundane things as possible, such as household cleaning.
Our solution to this issue is an optimized vacuum cleaner that responds to simple voice commands.
You can tell it to clean certain rooms, move in specific directions, and even schedule when it starts cleaning, making it a handy household assistant. It has different cleaning modes and three brushes for distinct types of grime.
Using it is as simple as it sounds: call its name, “Bot Reboot!”, and the robot will be ready to take orders. A simple “Clean!” will put it into its default mode and start it vacuuming the house.
The project is still under development and is currently a prototype, with many optimizations to come. So far, the budget spent on the hardware is around 200 EUR.
The currently implemented features include movement with random rerouting upon hitting an obstacle, a brush module, and a working voice AI. All of these are discussed in more detail below. The robot was planned according to the following MoSCoW backlog:
In this project, we used all the components shown in the Hardware section, alongside many CAD designs created in Fusion.
The motors were acquired from Pololu and have an integrated gearbox. They are very small, of high quality, and can optionally be equipped with encoders. We used 100:1 6 V motors for the wheels, a 25:1 motor for the main brush, and a 5:1 motor for the side brush. We also use a voltage regulator and a motor driver, the latter letting us drive the motors either with PWM or with plain digital (LOW/HIGH) signals.
We make use of several sensors. We initially went with IR, but that failed because the robot behaved differently under different lighting conditions. That is how we came up with the bumper solution: 8 buttons placed across the front 180 degrees of the robot. Each one can be wired to an individual input for advanced feedback, or all of them to a single input if you only need to know that the robot hit something, not where.
The microcontroller board we chose is the CY8CPROTO-062-4343W, since it has an easy-to-set-up microphone, unlike other boards we considered.
This development board delivers dual cores: a 150-MHz Arm Cortex-M4 as the primary application processor and a 100-MHz Arm Cortex-M0+ as the secondary processor for low-power operations. It also supports Full-Speed USB, capacitive sensing with CAPSENSE™, and a PDM-PCM digital microphone interface. The CAPSENSE hardware consists of a capacitive touch slider and two buttons, which we will use as the onboard user interface; the microphone we will integrate with a voice AI as a remote user interface, as shown later in the video.
2. 3D Printing
The entire design is divided into modules: Main Brush, Side Brush, Dust Bag, Wheels, and Bumper. Each is meant to be easily replaceable without influencing the others.
The main brush contains two rollers, which pick up dust and heavier debris. The vacuum itself will be implemented at a later stage of production.
The side brush is placed on the left of the robot; its main purpose is to redirect dust toward the centre of the robot, and it is also efficient at cleaning corners.
The brushes themselves, without the supports, were bought as spare parts for a standard Roomba.
The robot's overall shape is round, making it easy to turn and to move close to walls without getting stuck against them.
3. Voice AI
For the AI code, we started from the demo found on GitHub (https://github.com/Picovoice/picovoice-demo-psoc6). We then set up a project in ModusToolbox with our board and the library required for voice pickup, and trained the models on the Picovoice platform: first the Porcupine wake word, then the Rhino Speech-to-Intent context.
Porcupine takes care of the wake word, which for now is "Bot Reboot", though we plan to make it customizable. Rhino takes care of the commands: the main intent is built around the keyword "clean" and its synonyms, and we also added a location attribute, which will later be used together with machine learning. The instructions we taught the robot are as follows:
The outcome can be seen in the introduction video.
4. Future Developments
We plan to start using encoders on the motors: by measuring the robot's movements we can build a map and optimise the cleaning process. This is where machine learning will come into play, and we will also add ultrasonic sensors to measure the dimensions of the room.
We also intend to implement a more efficient movement algorithm, since the current one is very basic, and to fix the problem where the robot gets stuck on carpets.
Another point in our plan is developing a proper user interface (wired via CAPSENSE, wireless via phone Wi-Fi/Bluetooth) to offer the customer proper setup and customization of the robot.