According to AAA, potholes cost US drivers more than $3 billion in damage each year! And that doesn't include the frustration of time wasted stopping to change a tire, or waiting for a tow truck because the wheel was damaged beyond use!
Many potholes go unrepaired because the organization that does the repairs, the Department of Public Works (DPW), doesn't know about them until someone reports them, often after several cars have already been damaged.
My project attempts to solve this problem by getting pothole data to the people who make the repairs almost as soon as it is collected. The premise is to mount the Sony Spresense kit on something like a garbage or mail truck (which makes daily rounds around town) to passively collect pothole data and send the coordinates of each pothole to the DPW via LTE.
To make this happen, I used the Sony Spresense main board, the LTE extension board, and the camera board. One of the useful features of the Spresense main board is its integrated GPS, so I didn't need an additional sensor to get location information. Sony also has a robust library of Arduino examples to get you started with the main board, LTE extension board, and camera board. I'm familiar with the Arduino IDE, so that is what I used.
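To give a sense of how little code the built-in GPS needs, here's a minimal sketch along the lines of Sony's GNSS examples (the one-second wait and the serial output are my own choices for illustration, not my final code):

```cpp
#include <GNSS.h>  // Spresense built-in GNSS library

static SpGnss Gnss;

void setup() {
  Serial.begin(115200);
  Gnss.begin();            // initialize the GNSS device
  Gnss.select(GPS);        // use GPS satellites
  Gnss.start(COLD_START);  // start positioning
}

void loop() {
  // Wait up to one second for updated navigation data
  if (Gnss.waitUpdate(1)) {
    SpNavData navData;
    Gnss.getNavData(&navData);
    if (navData.posFixMode != FixInvalid) {
      Serial.print("lat: ");
      Serial.print(navData.latitude, 6);
      Serial.print(", lon: ");
      Serial.println(navData.longitude, 6);
    }
  }
}
```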
I started by getting the LTE up and running, since that was the area I was least familiar with. I had problems getting my Truphone SIM card activated (I couldn't even manage the SIM from the website to confirm whether it was activated). Fortunately, I had a Hologram SIM card from a prior project and was able to use that instead. Running the LteScanNetworks sketch confirmed that both the SIM card and the LTE extension board were working! (Note: initially I couldn't get the LTE board working at all. It turned out the Spresense main board wasn't seated properly; an orange light on the LTE board tells you it is connected correctly.)
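If you want to sanity-check the modem outside of the example sketches, the attach sequence looks roughly like this. This is a sketch loosely based on the Spresense LTE library examples; the APN, username, and password are placeholders for whatever your SIM provider requires, and your SIM may also need explicit RAT/auth settings like the Sony examples use:

```cpp
#include <LTE.h>  // Spresense LTE extension board library

// Placeholder credentials -- substitute the values from your SIM provider
#define APP_LTE_APN       "your.apn.here"
#define APP_LTE_USERNAME  ""
#define APP_LTE_PASSWORD  ""

LTE lteAccess;

void setup() {
  Serial.begin(115200);

  // Power on the modem and wait for it to start searching for a network
  while (lteAccess.begin() != LTE_SEARCHING) {
    Serial.println("Could not start the modem, retrying...");
    delay(1000);
  }

  // Attach to the carrier network with the SIM's APN credentials
  while (lteAccess.attach(APP_LTE_APN, APP_LTE_USERNAME, APP_LTE_PASSWORD)
         != LTE_READY) {
    Serial.println("Attach failed, retrying...");
    delay(1000);
  }
  Serial.println("Attached to the LTE network!");
}

void loop() {
}
```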
My next step was learning how to connect to an MQTT service. I decided to use AWS IoT Core since that is what the Spresense example used, but I had no prior experience with it, so it was quite a learning curve. I learned that the LTE board needs a firmware update, that loading the AWS certificates from the SD card can be finicky (so it's better to load them from memory), that the extension board needs a stable power source (better to power it from a wall socket than from a computer), and other gotchas that took hours to work through. Once I was connected, though, it was great to see MQTT messages coming into the test client on AWS!
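For anyone headed down the same path, the connection ends up looking something like the sketch below. It's a hedged outline based on Sony's secure MQTT example, using LTETLSClient with ArduinoMqttClient and the certificates compiled in as strings; the endpoint, topic, client ID, and certificate contents are placeholders you'd replace with your own AWS IoT values:

```cpp
#include <ArduinoMqttClient.h>  // MQTT client library
#include <LTE.h>
#include <LTETLSClient.h>       // TLS-capable client for the LTE board

// Placeholders -- replace with your AWS IoT endpoint, topic, and certificates
const char broker[]      = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com";
const int  port          = 8883;
const char topic[]       = "pothole/detections";
const char rootCA[]      = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n";
const char clientCert[]  = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n";
const char privateKey[]  = "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n";

LTE lteAccess;
LTETLSClient tlsClient;
MqttClient mqttClient(tlsClient);

void setup() {
  Serial.begin(115200);

  // Attach to LTE first (see the earlier sketch), then load the certificates
  // from memory rather than the SD card
  tlsClient.setCACert(rootCA);
  tlsClient.setCertificate(clientCert);
  tlsClient.setPrivateKey(privateKey);

  mqttClient.setId("spresense-pothole");  // placeholder client ID

  if (!mqttClient.connect(broker, port)) {
    Serial.print("MQTT connection failed, error = ");
    Serial.println(mqttClient.connectError());
    return;
  }

  // Publish a test message visible in the AWS IoT Core test client
  mqttClient.beginMessage(topic);
  mqttClient.print("{\"status\":\"hello from Spresense\"}");
  mqttClient.endMessage();
}

void loop() {
  mqttClient.poll();  // keep the connection alive
}
```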
Ideally I would have the MQTT messages routed onto a map in AWS, but I just did not have the skills within the timeframe to develop the Lambda functions and whatever else needed to be set up.
With AWS IoT Core tested out, I moved over to Edge Impulse to develop my model. Given the varied nature of potholes (different shapes, depths, and distances from the camera), I decided on image classification instead of FOMO; I felt I would get more reliable results that way.
I pulled a dataset from Kaggle that was good enough to start the project. I was prepared to bike around with the Spresense running a variant of the camera sketch to collect my own images, but I figured this was a good enough start.
Once I had the images loaded into Edge Impulse, I set up my impulse. I used a 96x96 grayscale image size and a binary classifier; I really only care whether there is a pothole or not.
For the transfer learning model, I selected MobileNetV1 96x96 0.1 because of the board's memory constraints, leaving enough RAM for the GPS and LTE functionality. With these parameters I got about 80% accuracy, which is okay but not great. I'll take it given the constraints.
I felt these results were a good first step to at least test out the code. I then deployed the model as an Arduino library and imported it into my Arduino sketch. The public project can be found here on Edge Impulse.
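Using the exported library boils down to filling a feature buffer and calling the classifier. Here's a stripped-down sketch of that part; the header name depends on your Edge Impulse project name, and the "pothole" label and 0.8 threshold are assumptions for illustration:

```cpp
#include <Pothole_Detection_inferencing.h>  // name depends on your Edge Impulse project
#include <string.h>

// One packed 0xRRGGBB value per pixel (for grayscale, R = G = B),
// filled from the camera frame before calling detect_pothole()
float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

bool detect_pothole() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result = {0};
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
    return false;
  }

  // Binary classifier: look at the "pothole" score (label name and
  // threshold are placeholders for whatever your project uses)
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (strcmp(result.classification[i].label, "pothole") == 0 &&
        result.classification[i].value > 0.8f) {
      return true;
    }
  }
  return false;
}
```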
Once I had my model up and running, I started on my sketch. I borrowed quite a bit from Edge Impulse's wildlife detection example to get the general framework for capturing, resizing, and feeding camera frame buffers into the Edge Impulse inference code. In addition, I used the GPS code from the Sony Spresense LteGnssTracker example to connect to the GPS satellites so that, if the inference detected a pothole, the sketch could save that image with the GPS coordinates and publish the results to the MQTT broker.
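Stripped of the details, the main loop ties those pieces together roughly as in the sketch below. This is a simplified outline rather than my exact code: it assumes the detect_pothole() helper and feature buffer from the previous sketch, the GNSS and MQTT objects set up earlier, a QVGA still-capture configuration in setup(), and a placeholder topic name; the exact clip/resize/convert calls may vary by SDK version:

```cpp
#include <Camera.h>
#include <GNSS.h>
#include <ArduinoMqttClient.h>

extern SpGnss Gnss;            // started in setup(), as in the GNSS sketch earlier
extern MqttClient mqttClient;  // connected to AWS IoT Core, as shown earlier
extern float features[];       // input buffer from the classifier sketch above
bool detect_pothole();         // wraps run_classifier() on that buffer

void loop() {
  // Assumes setup() also called theCamera.begin() and
  // theCamera.setStillPictureImageFormat() for QVGA (320x240) stills
  CamImage img = theCamera.takePicture();
  if (!img.isAvailable()) {
    return;
  }

  // The hardware resizer needs a power-of-two ratio, so clip a centered
  // 192x192 region out of the 320x240 frame and scale it down to 96x96
  CamImage small;
  if (img.clipAndResizeImageByHW(small, 64, 24, 255, 215, 96, 96) != CAM_ERR_SUCCESS) {
    return;
  }
  if (small.convertPixFormat(CAM_IMAGE_PIX_FMT_GRAY) != CAM_ERR_SUCCESS) {
    return;
  }

  // Pack each grayscale pixel as 0xRRGGBB, the layout the Edge Impulse
  // image block expects (96*96 matches EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE)
  uint8_t *gray = small.getImgBuff();
  for (int i = 0; i < 96 * 96; i++) {
    features[i] = (gray[i] << 16) | (gray[i] << 8) | gray[i];
  }

  if (detect_pothole()) {
    // Grab the latest fix and publish the coordinates as a small JSON payload
    SpNavData nav;
    Gnss.getNavData(&nav);

    mqttClient.beginMessage("pothole/detections");  // placeholder topic
    mqttClient.print("{\"lat\":");
    mqttClient.print(nav.latitude, 6);
    mqttClient.print(",\"lon\":");
    mqttClient.print(nav.longitude, 6);
    mqttClient.print("}");
    mqttClient.endMessage();
  }
}
```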
As I mentioned earlier, ideally the detections would automatically be pinned to a map, but I just couldn't figure that out. So as a next best step, I wrote the pothole coordinates to a CSV file and loaded it into Google Maps. And voila!
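The CSV itself is nothing fancy: a header row plus one line per detection. Here's a small sketch of a logging helper, assuming an SD card in the extension board slot and that SD.begin() succeeded in setup(); the file name and the "name" column are arbitrary choices for the map import:

```cpp
#include <SDHCI.h>  // SD card slot on the extension board

SDClass SD;

// Append one detection to a CSV file that can be imported into Google Maps
void log_pothole(double lat, double lon) {
  File csv = SD.open("potholes.csv", FILE_WRITE);  // FILE_WRITE appends
  if (!csv) {
    return;
  }
  if (csv.size() == 0) {
    csv.println("name,latitude,longitude");  // header row on first write
  }
  csv.print("pothole,");
  csv.print(lat, 6);
  csv.print(",");
  csv.println(lon, 6);
  csv.close();
}
```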
The blue dots represent the GPS coordinates of the potholes that were detected. Note: I slightly changed the coordinates since I was working on this from my home and didn't want my location published.
This was a pretty challenging project. I had no prior experience with LTE or AWS IoT Core, so there was a lot to learn. It was fun to see a project like this end to end, though. Happy coding!