Being able to sort trash by material is very important for recycling. However, sorting trash is one of the toughest tasks to automate. While metals and non-metals are easy to separate, distinguishing paper, glass, plastic and cardboard is much harder.
For many countries, segregating trash is a tedious and expensive task. As a result, some simply skip it and dump all waste into a single landfill. This has been shown to cause more ecological damage and to increase the time waste takes to decompose. If the process can be automated and made cheaper, it becomes much easier for countries to adopt better waste disposal practices.
Currently, waste segregation is done by hand. It is unpleasant work, and workers are often at risk of exposure to harmful chemicals, medical waste and disease. If a neural network can perform the classification instead, the process becomes faster, safer and more accurate.
This project attempts to use a convolutional neural network to do just that.
It is not always possible to run a machine learning model on a GPU, as there can be cost and space restrictions. Relying on API calls to a remote server adds latency, and an internet connection might not always be available.
In these cases using small, cheap devices at the edge (where the data is generated) is the best solution.
The problem with running models on the edge is that we are limited in the amount of computational power available. There are several ways to overcome this. You could use a hardware accelerator like the Neural Compute Stick, or you could use models built specifically to be computationally cheap enough to run on the edge.
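MobileNet is one such edge-friendly model: it replaces standard convolutions with depthwise separable convolutions. As a rough illustrative calculation (assuming a 3×3 kernel and ignoring biases), the parameter savings can be computed directly:

```python
# Parameter count for a single convolutional layer, ignoring biases.
def standard_conv_params(k, c_in, c_out):
    # A standard conv learns one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: one 1 x 1 x c_in filter per output channel.
    return k * k * c_in + c_in * c_out

# Example: a typical middle layer with 3x3 kernels, 256 -> 256 channels.
std = standard_conv_params(3, 256, 256)        # 589,824 parameters
sep = depthwise_separable_params(3, 256, 256)  # 67,840 parameters
print(std / sep)  # roughly 8.7x fewer parameters
```

The same factor applies to the multiply-accumulate count per spatial position, which is why MobileNet runs acceptably on small CPUs.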
In this project we use the UP Embedded Vision Kit to run inference on the edge, together with the MobileNet model, which is computationally inexpensive.
The Neural Compute Stick (NCS) acts as a hardware accelerator that speeds up neural network computation. Neural networks are computationally expensive: loading a model into memory and running predictions with it takes a long time. The NCS converts the model into an NCS graph that is optimised for the device and can load and run predictions very quickly.
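As a sketch of that workflow: with Intel's NCSDK toolchain, a trained TensorFlow or Caffe model is compiled into an NCS graph file using the mvNCCompile tool. The file names and node names below are hypothetical, and the exact flags and accepted model formats depend on your NCSDK version:

```shell
# Compile a trained model into an NCS graph (hypothetical file/node
# names; -s sets the number of SHAVE cores to use on the stick).
mvNCCompile model/network.meta -s 12 -in input -on output -o mobilenet.graph
```

The resulting graph file is then loaded onto the stick at runtime by the NCS API instead of running the original Keras model on the CPU.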
Connections/Schematics
The connections needed for this project are very simple.
If you are using the Raspberry Pi, connect the Pi Cam to it. You can optionally connect the Neural Compute Stick; the prediction script will work either way.
If you are using the UP Board kit, attach the Basler Camera to it. Since the UP Board uses an Intel processor, it is comparatively fast and you do not need the NCS for performing predictions.
The data for this project was collected from the trashnet project.
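The trashnet images come grouped into one folder per class, which is exactly the layout Keras's flow_from_directory expects for training. A minimal sketch of that layout (the root path here is hypothetical; the class names mirror the project's six labels):

```python
import os
import tempfile

# Build the one-folder-per-class layout that flow_from_directory
# consumes. The dataset root is a hypothetical temp directory here.
classes = ['cardboard', 'glass', 'metal', 'paper', 'plastic', 'trash']
root = os.path.join(tempfile.mkdtemp(), 'dataset')
for c in classes:
    os.makedirs(os.path.join(root, c))

print(sorted(os.listdir(root)))
# ['cardboard', 'glass', 'metal', 'paper', 'plastic', 'trash']
```

With the images dropped into these folders, the class labels are inferred from the directory names, so no separate label file is needed.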
Project Structure
The project can be found here. The project is structured in the following way:
- mobilenet_training.py: Use this script to train the model. Training is not done on the edge since it is compute intensive and can take time. Instead, you can train on your laptop or on Google Colab.
- trash_classifier.py: The script to classify trash on the UP Board Embedded kit. This is where edge computing comes into play. Once you have the trained model, you can put it on the UP Board and run predictions. These predictions do not require an internet connection, so the system can be used anywhere, even in remote areas and villages where network access is limited.
- pi_trash_classifier.py: If you want to use the Raspberry Pi instead of the UP Board for making predictions, use this script.
- prediction_images: Directory containing the predicted images.
- models: I have already trained a model and saved it here. You can use this model for running predictions instead of training your own.
Sample Prediction Code
import os
import time

import cv2
import numpy as np
from picamera import PiCamera
from picamera.array import PiRGBArray
from keras.models import load_model
from keras.applications import mobilenet
from keras.applications.mobilenet import preprocess_input
from keras.preprocessing import image

def pp_image():
    # Reload the last captured frame and preprocess it for MobileNet.
    img = image.load_img('pic.png', target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return np.asarray(x)

prediction_list = ['cardboard', 'glass', 'metal', 'paper', 'plastic', 'trash']
model = load_model('models/model1.h5', custom_objects={'relu6': mobilenet.relu6})

camera = PiCamera()
rawCapture = PiRGBArray(camera)

for i in range(10):
    time.sleep(0.5)
    try:
        # Capture a frame and save it so pp_image() can reload it.
        camera.capture(rawCapture, format='rgb')
        img = rawCapture.array
        cv2.imwrite('pic.png', img)
        pred_img = pp_image()
        preds = model.predict(pred_img)
        pred = prediction_list[np.argmax(preds)]
        # Draw the predicted label onto the frame and save it.
        cv2.putText(img, pred, (10, 1000), cv2.FONT_HERSHEY_SIMPLEX, 5, (0, 0, 0), 5, False)
        name = 'img' + str(i) + '.png'
        cv2.imwrite(os.path.join('prediction_images', name), img)
        # Clear the capture buffer before the next frame.
        rawCapture.truncate(0)
    except Exception as e:
        print('Could not perform prediction:', e)

camera.close()
Training
Sample bash script for starting the training of the model:
python3 mobilenet_training.py --nb_epoch 2 --batch_size 32 --model models/model1.h5
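The flags above suggest a command-line interface like the following. This is only a sketch of how mobilenet_training.py might parse them; the real script may name or default the arguments differently:

```python
import argparse

# Hypothetical argument parsing matching the flags shown in the
# training command above.
parser = argparse.ArgumentParser(description='Train the MobileNet trash classifier')
parser.add_argument('--nb_epoch', type=int, default=10,
                    help='number of training epochs')
parser.add_argument('--batch_size', type=int, default=32,
                    help='training batch size')
parser.add_argument('--model', default='models/model1.h5',
                    help='path where the trained model is saved')

# Parse the exact flags used in the sample command.
args = parser.parse_args(['--nb_epoch', '2', '--batch_size', '32',
                          '--model', 'models/model1.h5'])
print(args.nb_epoch, args.batch_size, args.model)
# 2 32 models/model1.h5
```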
Classification
Sample code for performing classification with the saved model:
Note: This script is for running prediction using the UP embedded kit
python3 trash_classifier.py
For running predictions using the RPi you can use the following script:
python3 pi_trash_classifier.py
Some Prediction Images
This project requires Python 3.6 and OpenCV. Other requirements are listed in the requirements.txt file.