This started with a rather silly idea - when I bought a robotic vacuum cleaner and was amazed by its navigation capabilities, a thought ran through my head - I should make something similar. I had always wanted to do something like it within my company (a robotic trolley roaming through the warehouse with not-too-expensive cargo) but never believed there would be understanding and funding for it.
Since my first post here, I have learned a lot about the Raspberry Pi and Python, and bought a third Pi - the Raspberry Pi Zero WH. A WiFi-enabled board with Linux, Python and lots of GPIOs for just 15 Euros - are you kidding me?
The bot started way back with an ESP32 board (in like 2019?), but the project was scrapped due to the lack of simple code uploading and having to write everything by hand - imagine plugging it into USB every time you want to put a new code revision on it. Embarrassing. The Pi Zero seemed like a great start, so I can see if it leads somewhere and eventually be ready for an upgrade to a Pi 4. You can SSH and VNC into it, run everything remotely, even edit your code from a Windows app. This speeds things up significantly!
This article is not a how-to, but rather an account of what I learned during development, with explanations for some of my decisions. More or less random text. Hopefully it can inspire someone.
The basic platform started with a Chinese robot chassis kit: two motors and a plexiglass frame. Put together, you have a basic platform for an Arduino robot. Great, but I did not want an Arduino. So a few months passed, until I put the first code into the ESP32 with one HC-SR04 ultrasonic rangefinder sensor.
These sensors are incredibly annoying to work with and I cannot see why they are so popular! You see, these things work only if the object in front of them is square-on to the sensor - a small tilt of the obstacle makes the ultrasonic wave bounce somewhere else instead of back into the receiver, so you never learn there is an obstacle. It's like looking at a mirror very far away: you can only see yourself from one certain spot. I measured it - at 40 cm distance it could barely detect an obstacle 4 cm off the sensor's axis. What a fail.
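For the record, driving one is trivial, which is probably why they are so popular. A minimal sketch of the principle - send a 10 µs pulse on TRIG, time how long ECHO stays high, convert to distance. Pins 23/24 are my arbitrary assumption here, and the 5 V echo line needs a voltage divider before it touches the Pi:

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # assumed BCM pins

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # a 10 us pulse on TRIG starts one measurement
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # ECHO goes high for as long as the ping travels there and back;
    # if the echo never returns (tilted obstacle!), bail out
    timeout = time.time() + 0.04
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
        if start > timeout:
            return None
    while GPIO.input(ECHO) == 1:
        stop = time.time()
        if stop > timeout:
            return None

    # speed of sound ~343 m/s at 20 degC, halved for the round trip
    return (stop - start) * 34300 / 2.0

print(distance_cm())
GPIO.cleanup()
```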
I remembered this when the rework with the Pi came around. The robot was moving and rotating, but mostly thanks to the bump sensor, which was the only sensing thing on the robot that actually worked. I scrapped the ESP32 platform and put together a Raspberry Pi Zero with three ultrasonic sensors - that should spice things up, shouldn't it? Well, you would be as wrong as I was. Instead of one, there were now three things that could not hear anything in front of them.
I do not want to bounce off walls like a blind dog, I need to see things before they happen!
And well, my robotic vacuum bot with its lidar gave me an idea - what about lidars? They are still expensive compared to simple sensors, but sophisticated enough to detect anything within line of sight (whatever is at the same height as the laser beam) and reasonably reliable (mirror, mirror on the wall, what bot can you bounce it from?). So, during one week in forced quarantine (I did not catch covid, thankfully, thanks for asking) I looked at my wallet, my wallet looked at me with a terrified face, and after we thought about it for a few days, I ordered a 100+ euro RPLidar from Slamtec. Kind of an expensive hobby.
After playing with it on screen - testing Adafruit's Python code to see the scan image on an LCD - and figuring out how to power it so nothing browns out (its power requirement is pretty hefty for a normal USB device; the RPi's power protection kicks in, often with the RPi shutting down or rebooting), the plan was settled: put it on the bot through a powered USB hub.
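For context, those first plays looked roughly like this - a minimal sketch assuming the adafruit_rplidar library and the lidar enumerating as /dev/ttyUSB0 behind the hub:

```python
from adafruit_rplidar import RPLidar

lidar = RPLidar(None, "/dev/ttyUSB0", timeout=3)
try:
    # each scan is a list of (quality, angle_deg, distance_mm) tuples
    for scan in lidar.iter_scans():
        valid = [m for m in scan if m[2] > 0]  # distance 0 = bad reading
        if valid:
            q, angle, dist = min(valid, key=lambda m: m[2])
            print("nearest obstacle: %4.0f mm at %3.0f deg" % (dist, angle))
except KeyboardInterrupt:
    pass
finally:
    lidar.stop()
    lidar.disconnect()
```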
Well, one sensor is fine but certainly not enough - now what? Browsing AliExpress, I ordered a TF-Luna lidar, a ToF (time-of-flight) lidar sensor that measures only in one direction - but much better than that ultrasonic trash. After I received one, I ran it with some test code and found it amazingly good with whatever you point it at. Although sometimes it (I think) reboots or something, because there can be an approx. 4 s gap between measurements - heat, direct sunlight, too quick movement, power delivery?
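Reading it is pleasantly simple compared to the ultrasonic timing dance. A sketch of what my test code boiled down to, assuming the default UART mode at 115200 baud on /dev/serial0 (the TF-Luna uses the same 9-byte 0x59 0x59 frame as the TFmini family):

```python
import serial

ser = serial.Serial("/dev/serial0", 115200, timeout=1)

def read_tfluna():
    while True:
        # hunt for the two-byte 0x59 0x59 frame header
        if ser.read(1) != b"\x59" or ser.read(1) != b"\x59":
            continue
        frame = b"\x59\x59" + ser.read(7)
        if len(frame) != 9:
            return None
        if sum(frame[:8]) & 0xFF != frame[8]:
            continue  # checksum mismatch, resync
        dist_cm = frame[2] | frame[3] << 8
        strength = frame[4] | frame[5] << 8
        temp_c = (frame[6] | frame[7] << 8) / 8.0 - 256
        return dist_cm, strength, temp_c

while True:
    reading = read_tfluna()
    if reading:
        print("dist %3d cm  strength %5d  temp %.1f C" % reading)
```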
So I put the following components (so far) on the bot to begin my first (long) robotic programming journey:
1 - RPLidar
2 - TF-Luna - ToF lidar. I plan to use two of them, but I have received only one so far
3 - Raspberry Pi Zero WH
4 - Powered USB hub - necessary, since a lot of things will talk over RS232
5 - Controllers and chargers for batteries - providing 5 V and 12 V and charging the on-board batteries. One battery for the Pi, one for the motors. Will be changed to a single 3S/4S pack, since that will be easier to manage and keep an eye on.
6 - MPU6050/BMI160/... - an IMU in the bot is a must. You cannot control a bot by only looking at the RPM of the wheels (at least not a smarter one) - wheels can slip, can be blocked, etc. (see the heading sketch after this list)
7 - Misc: USB/RS232 converters, a BME280 for environmental info (temperature affects the readings of the HC-SR04 sensors, but who cares, since they will be put away), an ADS1115 for reading battery voltage and current and motor overcurrent, a DS3231 RTC for timekeeping and reducing WiFi connection problems, an OLED screen for basic stats
8 - GPS? More ToF lidar sensors (VL53L0X)? Bump sensors? Camera? ...
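To illustrate point 6 - why the IMU matters - here is a sketch of getting a heading that does not care what the wheels claim: just integrate the gyro's Z axis. This assumes an MPU6050 at its default I2C address 0x68 on bus 1; the real thing needs gyro bias calibration and will drift over time, this is only the idea:

```python
import time
from smbus2 import SMBus

MPU_ADDR = 0x68  # default MPU6050 address

bus = SMBus(1)
bus.write_byte_data(MPU_ADDR, 0x6B, 0)  # clear sleep bit, wake the chip

def gyro_z_dps():
    # GYRO_ZOUT registers 0x47/0x48: signed 16-bit, 131 LSB per deg/s
    hi = bus.read_byte_data(MPU_ADDR, 0x47)
    lo = bus.read_byte_data(MPU_ADDR, 0x48)
    raw = (hi << 8) | lo
    if raw > 32767:
        raw -= 65536
    return raw / 131.0

heading = 0.0
last = time.time()
while True:
    now = time.time()
    heading += gyro_z_dps() * (now - last)  # integrate rate -> angle
    last = now
    print("heading: %7.1f deg" % heading, end="\r")
    time.sleep(0.02)
```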
So the battle plan is as follows - apart from a bigger OLED screen and the VL53L0X, I am still waiting for the second TF-Luna, the USB hub, the batteries and additional circuitry. The focus is currently on software for obstacle avoidance and on working with the data from the IMU to detect which direction the bot is heading. A basic skeleton program for controlling the motors, reading the sensors etc. is already done, and in part II I will explain my thoughts about it.
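To give a taste before part II, the motor side of such a skeleton can be as small as this. This is not my actual code, just a guess at its shape, assuming a driver board with one direction pin and one PWM pin per motor, and made-up pin numbers:

```python
import time
import RPi.GPIO as GPIO

L_DIR, L_PWM = 5, 12   # left motor: direction pin + PWM pin (assumed)
R_DIR, R_PWM = 6, 13   # right motor

GPIO.setmode(GPIO.BCM)
for pin in (L_DIR, L_PWM, R_DIR, R_PWM):
    GPIO.setup(pin, GPIO.OUT)

left = GPIO.PWM(L_PWM, 1000)   # 1 kHz software PWM
right = GPIO.PWM(R_PWM, 1000)
left.start(0)
right.start(0)

def drive(l_speed, r_speed):
    """Speeds from -100 to 100; sign sets direction, magnitude sets duty."""
    GPIO.output(L_DIR, l_speed >= 0)
    GPIO.output(R_DIR, r_speed >= 0)
    left.ChangeDutyCycle(abs(l_speed))
    right.ChangeDutyCycle(abs(r_speed))

drive(60, 60)    # forward
time.sleep(2)
drive(50, -50)   # spin in place
time.sleep(1)
drive(0, 0)      # stop
GPIO.cleanup()
```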