In the U.S., around 2 million kilometres of sewer pipes serve some 240 million citizens. Every year, at least 23,000–75,000 sewer pipe failures are reported in the United States, releasing up to 3–10 billion gallons of untreated sewage into the environment. This causes major economic losses, pollutes water, and threatens public health.
These sewer pipes need yearly maintenance to keep functioning properly and to avoid sewer overflows. In most cases, sewer inspection is performed on-site by an expert inspector, who typically uses a remote-controlled robot with a camera to manually inspect the internal structure of the sewer. This process is time-consuming, and the difficult, tiresome nature of the work often leads to flawed inspections.
TinySewer
TinySewer is a standalone camera module that identifies sewer defects using tinyML. The module is intended to be installed on an existing robotic sewer inspection platform, giving the platform machine-vision capability for identifying sewer faults during the inspection process.
This module allows for autonomous sewer inspection and reduces the inspector's workload. The inspector can simply drive the robot slowly, watch the screen for any detections from the TinySewer application, and stop to inspect manually when needed. In addition, TinySewer reports exactly what type of fault is present, so there is no need for an expert sewer inspector; a general or even entry-level inspector is enough.
To further support autonomous inspection, the TinySewer client application records all footage with detection labels on the video timeline, allowing the inspector to review the footage and easily jump to the point in time where a fault occurs. The inspector can therefore work on other tasks while a sewer inspection is ongoing.
In terms of scalability, TinySewer is cheap, costing around 150 USD per unit, and can be easily integrated into an existing robotic system or sewer inspection tool. Furthermore, TinySewer's fault detection system allows for the creation of a larger system in which autonomous robots routinely perform sewer inspections and send footage with defect reports to a single central computer, which sorts through the fault reports and assigns personnel to repair the defective sewers.
TinySewer uses the Arduino Portenta H7 as its main computing unit. The Portenta H7 features a low-power dual-core processor (a Cortex-M7 paired with a Cortex-M4), which helps reduce power consumption.
In addition, TinySewer allows the operator to shut down the camera when the inspection is on standby or finished, saving around 40 mA in total. Finally, there is an option to shut down TinySewer entirely, which puts the device into deep sleep until an external interrupt wakes it up.
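As a rough sketch, these two power-saving modes could be implemented as follows, assuming OpenMV's sensor.sleep() and machine.deepsleep() APIs (the standby() helper and its wiring to the operator controls are hypothetical):
import sensor, machine

def standby(deep=False):
    # Put the camera sensor into low-power mode (the ~40 mA saving above)
    sensor.sleep(True)
    if deep:
        # Deep sleep: the board stays asleep until an external interrupt
        # or reset wakes it up
        machine.deepsleep()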
Schematics
The TinySewer module contains an Arduino Portenta microcontroller whose peripherals include a Vision Shield, connected via the High-Density Connectors for the camera feed, and two white LEDs connected directly to the Arduino Portenta PH15 pin and controlled by PWM. Everything is powered by a 5 V, 2.4 A portable battery.
Casing
The TinySewer case is made from PLA filament and can be easily printed on a 3D printer using the files provided in this post. The casing consists of a top, a bottom, and a cap.
The unit is assembled as follows: the Arduino Portenta H7 goes in first, then the two LEDs go into the top two middle holes and are connected to the wires from the Arduino Portenta H7. Next, place the top over the bottom piece and fasten the four corners with four M3 screws. Finally, put the cap over the exposed pin section to keep water and dust out of the unit.
TinySewer uses a deep neural network to classify and identify various sewer fault types. Currently, TinySewer can detect the four most common sewer fault types (cracks, root intrusion, obstruction, displacement) with at least 85% confidence.
The model is created using the Edge Impulse machine learning platform. First, I get the images from ScienceData. The dataset comes with a CSV file that maps each image name to its fault type, so I created a simple Python script that reads this CSV and sorts the images into their respective fault folders (see the sketch below). These images are then uploaded to Edge Impulse for training. Overall, we had five different classes for our model: normal, cracks-breaks-collapses, obstacle, root, and displacement.
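A minimal version of that sorting script could look like this (the file name labels.csv, the images folder, and the column names image_name and fault_type are hypothetical placeholders for the actual dataset layout):
import csv, shutil
from pathlib import Path

with open("labels.csv") as f:
    for row in csv.DictReader(f):
        # Create one folder per fault class and copy each image into it
        dest = Path("sorted") / row["fault_type"]
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(Path("images") / row["image_name"], dest / row["image_name"])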
Next, I go to the Create Impulse page to set up the workflow. Select 96x96 as the image width and height, Image as the processing block, and Transfer Learning (Images) as the learning block, then click "Generate Parameters".
Next, go to the Image tab to generate the feature parameters. Remember to select grayscale for colour depth as the Arduino Portenta Vision Shield is a monochrome camera.
Finally, select the Transfer Learning tab to train your model. For TinySewer, I use MobileNetV2 with a learning rate of 0.35 and 40 neurons for the final layers.
The model is trained for 50 epochs with data augmentation turned on. The model achieves an overall accuracy of around 94%.
Finally, to generate the model and label files for the Arduino Portenta, I go to the Deployment tab, select OpenMV, and click Build. This generates a zip file that includes labels.txt (the label file), trained.tflite (the model file), and ei_image_classification.py (a Python classification script). Copy labels.txt and trained.tflite onto the Arduino Portenta's internal storage. The script needs modifications to add WLAN, video, and data transfer capability; these modifications are discussed in the Firmware section.
The firmware is written in MicroPython, which is an implementation of Python 3 with a subset of the standard Python libraries, optimized to run on microcontrollers.
The first step is to set up Wi-Fi, which can be done with the network.WLAN class, as sketched below. Then create a server socket so that the client can talk to TinySewer over the same network.
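A minimal connection sketch, assuming OpenMV's network module on the Portenta (SSID and KEY are placeholders for the actual credentials):
import network

SSID = "YOUR_SSID"      # placeholder network name
KEY = "YOUR_PASSWORD"   # placeholder network password

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(SSID, KEY)
while not wlan.isconnected():
    pass  # wait until the board has an IP address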
import socket, time

HOST = ""    # listen on all interfaces
PORT = 8080  # assumed port; must match the client application

# Create server socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
# Bind and listen
print(PORT)
s.bind([HOST, PORT])
s.listen(5)
# Set server socket to blocking
s.setblocking(True)
Next, the program initializes the camera, loads the model and labels, and sets up the MQTT client:
import sensor, tf
from mqtt import MQTTClient  # MQTT client bundled with OpenMV

# Init Camera
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
# Load in Model and labels
net = "trained.tflite"
labels = [line.rstrip('\n') for line in open("labels.txt")]
# Setup MQTT
payload = MQTTClient("openmv", "test.mosquitto.org", port=1883)
payload.connect()
Next, we define the streaming function, which streams the video feed from the TinySewer camera back to the client application using the MJPEG protocol:
def start_streaming(s):
    print('Waiting for connections..')
    client, addr = s.accept()
    # Set client socket timeout to 5s
    client.settimeout(5.0)
    print('Connected to ' + addr[0] + ':' + str(addr[1]))
    # Read request from client
    data = client.recv(1024)
    # Should parse client request here
    # Send multipart header
    client.sendall("HTTP/1.1 200 OK\r\n"
                   "Server: OpenMV\r\n"
                   "Content-Type: multipart/x-mixed-replace;boundary=openmv\r\n"
                   "Cache-Control: no-cache\r\n"
                   "Pragma: no-cache\r\n\r\n")
    # FPS clock
    clock = time.clock()
    # Start streaming images
    while True:
        clock.tick()  # Track elapsed milliseconds between snapshots().
        frame = sensor.snapshot()
        cframe = frame.compressed(quality=35)
        predict = prediction(frame)
        # Send one JPEG frame of the MJPEG stream
        header = ("\r\n--openmv\r\n"
                  "Content-Type: image/jpeg\r\n"
                  "Content-Length:" + str(cframe.size()) + "\r\n\r\n")
        client.sendall(header)
        client.sendall(cframe)
        # Publish the prediction string to the client application
        payload.publish("openmv/test", str(predict))
        payload.check_msg()  # Poll for messages.
        print(clock.fps())
Next, I define a prediction method that looks at the current frame and calculates the confidence of each label using the tinyML model. The labels and their respective confidences are concatenated into a single string, which is then sent over MQTT to the client application:
def prediction(img):
    prediction = ""
    for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        # Combine the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))
        for i in range(len(predictions_list)):
            label = str(predictions_list[i][0])
            confidence = str(predictions_list[i][1])
            # Build a "label:confidence," string covering every class
            prediction += label + ":" + confidence + ","
    return prediction
Then there is a lightControl() method to control the brightness of the two LEDs. The method takes an integer between 0 and 100, with 0 being the brightest and 100 being no light.
from pyb import Pin, Timer

# pwms maps each LED to a descriptor with pin, timer, and channel fields;
# see sewer.py on GitHub for the exact values used for pin PH15
def lightControl(percent):
    for k, pwm in pwms.items():
        tim = Timer(pwm.tim, freq=1000)  # Frequency in Hz
        ch = tim.channel(pwm.ch, Timer.PWM, pin=Pin(pwm.pin), pulse_width_percent=percent)
Finally, there is a main while loop that sets the initial light level and calls the video streaming function.
while True:
    try:
        lightControl(50)
        start_streaming(s)
        print("main call")
    except OSError as e:
        print("socket error: ", e)
The full implementation is on GitHub in a file called sewer.py.
Software
The software is made using a framework called Electron. Electron allows for the development of desktop GUI applications using web technologies such as Node.js. The TinySewer client is separated into two tabs. The first tab contains the video stream from TinySewer, buttons for recording video, a button for light control, and a display box showing the current sewer fault and its confidence.
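For illustration, here is a minimal sketch (in Python rather than the Electron client's JavaScript) of how a subscriber could parse the "label:confidence," strings the firmware publishes on openmv/test; it assumes the paho-mqtt package is installed:
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Messages look like "normal:0.91,root:0.03,..." per the firmware above
    for pair in msg.payload.decode().rstrip(",").split(","):
        label, confidence = pair.split(":")
        print(label, float(confidence))

client = mqtt.Client()
client.on_message = on_message
client.connect("test.mosquitto.org", 1883)
client.subscribe("openmv/test")
client.loop_forever()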
The second tab is for video analysis. The stream is automatically saved as an .mp4 video file that can be played back for further analysis. In addition, there is a video timeline that highlights the durations during which sewer faults were detected.