The first version of this sensor combination was built poorly; see below:
You might notice the arches holding the servos. That's because originally they held steppers (those cheap ones with the blue connectors). Since steppers don't have a "return to home" type of functionality, they handled the strain from the stiff wires connected to the TFmini-S lidar poorly.
That stiffness is what drove me to this "floating sensor" design: there's no encoder on the rotating parts (could be dumb), but instead an IMU "floats" at the exact intersection of the pan/tilt planes. That was true until I shoved an MPU9250 in place of the (smaller) MPU6050, so now it sits a little offset.
The plan is for the IMU to stay level with the horizon using an NED frame. Through a combination of running threads I'll maintain state and estimate telemetry from the IMU plus physical measurements (expected travel based on the inputs sent to the WiFi buggy), as well as, hopefully, camera-enforced checks... meaning the camera can tell how far the robot moved by analyzing what it sees (hard to do).
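To make that concrete, here is a minimal sketch of what I mean by threads maintaining shared state: a pose estimate dead-reckoned from commanded travel, plus a loop that keeps the sensor head level from the IMU's pitch. All names (read_roll_pitch, set_tilt_servo, etc.) are hypothetical placeholders, not the real code in the dev branch.

```python
import math
import threading
import time

class StateEstimator:
    """Shared pose estimate in an NED-style frame (north/east in metres, yaw in radians)."""

    def __init__(self):
        self.lock = threading.Lock()
        self.north = 0.0
        self.east = 0.0
        self.yaw = 0.0

    def apply_command(self, distance_m, yaw_rad):
        # Dead reckoning: advance the pose by the travel we *expect* the buggy made.
        with self.lock:
            self.yaw = yaw_rad
            self.north += distance_m * math.cos(yaw_rad)
            self.east += distance_m * math.sin(yaw_rad)

    def correct(self, d_north, d_east):
        # Nudge the estimate when a camera/lidar check disagrees with dead reckoning.
        with self.lock:
            self.north += d_north
            self.east += d_east

def imu_level_loop(read_roll_pitch, set_tilt_servo, period_s=0.02):
    """Keep the sensor head level by driving the tilt servo against the measured pitch."""
    while True:
        roll, pitch = read_roll_pitch()   # hypothetical IMU read, radians
        set_tilt_servo(-pitch)            # hypothetical servo command
        time.sleep(period_s)

# Usage sketch: run the levelling loop in its own thread alongside the estimator.
# threading.Thread(target=imu_level_loop, args=(my_imu_read, my_servo_cmd), daemon=True).start()
```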
I'm still architecting the code and figuring out how it will all work, but it'll be a good time-sink of a project.
I cut into an internal brace to accommodate the MPU9250, so unfortunately it's collapsing inwards. I will fix that.
Anyway, I only started this project a little over a week ago, so it hasn't gotten very far. The actual code work lives in a dev branch until a first version is settled.
This is the build video if you're interested.
Computer vision
My approach here is pretty crude. I use 1D/2D histograms to pick HSV ranges for masks, then find blobs of colour with contour finding. From there I take the centroid of each blob and point the ranging sensor at it, which tells me how far away the object is and lets me locate it in 3D space with respect to the robot. Since the robot has an IMU on board, it tracks its own motion the whole time, and therefore where those objects ought to be. A rough sketch of the masking/centroid step follows below.
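As a rough illustration only (the HSV bounds and helper name below are made up, not the values picked from my histograms), the mask-then-centroid step looks roughly like this with OpenCV:

```python
import cv2
import numpy as np

# Placeholder HSV bounds; in practice these come from the 1D/2D histograms.
LOWER_HSV = np.array([100, 120, 60])
UPPER_HSV = np.array([130, 255, 255])

def find_blob_centroid(frame_bgr):
    """Return the (x, y) pixel centroid of the largest colour blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # Clean up speckle before looking for contours.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```

The centroid's offset from the image centre is what then gets turned into pan/tilt angles so the lidar can be pointed at the blob for a range reading.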
This project has lots of sources of compounding error... so in the end, you know, it's crap, but it's something I can physically work on and improve over time. I'll also log a decision for every photo captured/processed, so I can see why the robot decided to do something (like pick a direction to go) and improve that; a tiny sketch of what a log entry might look like is below.
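Something like appending one JSON line per processed frame would do it. This is purely a sketch with made-up field names, not what's in the repo:

```python
import json
import time

def log_decision(log_path, frame_id, blob_centroid, lidar_range_m, action):
    """Append one JSON line describing what the robot saw and what it decided."""
    entry = {
        "t": time.time(),          # wall-clock timestamp
        "frame": frame_id,         # which captured photo this refers to
        "centroid": blob_centroid, # (x, y) in pixels, or None if no blob was found
        "range_m": lidar_range_m,  # lidar reading when pointed at the blob
        "action": action,          # e.g. "turn_left", "forward", "hold"
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```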
It's very crude, but yeah... you can see part of the process below. This was done with a GUI rather than on the robot itself; the robot does this internally since it runs headless.