Each year humans collectively dump over 2 billion tons of waste, and we are on track to exceed 3.4 billion tons per year by 2050. This currently affects over 61 million people directly and accounts for approximately 5% of greenhouse gas emissions.
Almost half of all waste is food waste, of which 40% is wasted at the retail and consumer levels due to an over-emphasis on appearance in quality standards. Despite food making up such a large share of waste, only a small portion of it is composted (when food waste is placed in landfill it produces methane rather than CO2; see compost vs landfill). In addition, many methods for up-cycling food waste simply aren't implemented, either due to the increased cost of manual labor or a lack of industry-wide adoption.
The evolution of single-stream recycling has increased recycling rates, as consumers no longer have to sort manually, and it reduces costs for recycling plants (only one collection system is needed). However, it has also led to a decrease in the quality of materials recovered, due to increased contamination from non-approved materials or unclean recyclables being placed in single-stream bins (it is generally too expensive for a recycling plant to clean recyclables).
Over the past few years, with the rise of AI on the edge and IoT, a number of new products have been developed to help combat the waste issue. Some of these include Alphabet X's Everyday Robot project, Tomra's container deposit machines, and 'smart bins' that sort what goes into them. While these products are slowly starting to address some of these issues, adoption can be slow and costly.
This project will showcase some methodologies and processes for developing a low-cost, open-source waste management system, opening possibilities for democratizing the recycling industry. While this isn't an end-to-end project and demonstration (due to limitations in resources), it discusses the components required to classify waste types and proposes methodologies for dealing with them.
Proposed system

This project proposes a system of Robot Operating System (ROS) nodes that can be used and combined to build a fully functional system to minimize and process waste. The current focus is on reducing food-related wastage. The NVIDIA Jetson Nano is used due to its small form factor and low power consumption, enabling these workloads to be processed on the edge, localized to the arm.
Below is a system of ROS nodes for sorting produce to minimize waste. A multi-spectral camera collects NIR images of individual items of produce or collections of them (e.g. a system monitoring large quantities of produce already on shelving). This data is fed into a ripeness index node that quantifies how ripe a given item is. The decision engine then decides what to do with it, instructing the end-effector or actuator (or possibly a human) to process it accordingly. More on this system is shown below.
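As a rough illustration, here is a minimal sketch of what the ripeness index node could look like. The topic names (multispectral/nir_890nm, ripeness_index) and the mean-reflectance metric are illustrative assumptions, not the final design; a real index would be calibrated against ground-truth ripeness data.

#!/usr/bin/env python
# Sketch of a ripeness index node; topic names and metric are assumptions.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import Float32
from cv_bridge import CvBridge

class RipenessIndexNode(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.pub = rospy.Publisher('ripeness_index', Float32, queue_size=1)
        rospy.Subscriber('multispectral/nir_890nm', Image, self.on_image)

    def on_image(self, msg):
        nir = self.bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
        # Placeholder metric: mean NIR reflectance normalized to [0, 1]
        self.pub.publish(Float32(float(nir.mean()) / 255.0))

if __name__ == '__main__':
    rospy.init_node('ripeness_index')
    RipenessIndexNode()
    rospy.spin()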
Current waste streams require large amounts of human labor and large machines to sort waste. These methods are only applied to waste streams that have more 'value' (i.e. single-stream recyclables). A series of robotic arms with custom end-effectors (e.g. specific end-effectors depending on the type of valuable object, such as metals, textiles, etc.) could process additional waste streams.
While these arms are significantly slower than the fast-running pneumatic systems that operate in large facilities, in a decentralized waste economy they could be used to collect a more specific value item. They are also more flexible, able to sort a larger variety of items and more general purpose (pneumatic systems are large and generally only capable of sorting one material).
The camera performs first-pass object detection and classification to determine the initial value of an object. If the item is the specific item designated for that arm, it is picked up, and further material classification can be performed with a mini-spectrometer built into the end-effector. This can be used to determine the composition of the material for fine selection and for depositing it in its final location.
Such a system could recover more valuable recyclables from generic waste streams, provide finer-grained control over material classification (leading to higher value products), and minimize the human labor involved, contributing to a more circular economy.
Getting started with ROS on the Jetson Nano

The modules of this project will be created using the Robot Operating System (ROS), due to its modular design and permissive license. It also allows modules to be more easily integrated into larger systems. The NVIDIA Jetson Nano is used as it is a powerful yet small SBC, capable of running vision and other machine learning tasks on its built-in 128 CUDA cores.
See the Jetson Nano start instructions for getting the Jetson Nano Developer Kit SD Card Image loaded.
For the most part we will follow the ROS from-source instructions using the pip method. We will also have to build the required packages from source.
Initial update and install pip
sudo apt update
sudo apt upgrade -y
sudo apt install python-pip -y
Install dependencies and initialize rosdep
sudo pip install --upgrade setuptools
sudo pip install -U rosdep rosinstall_generator wstool rosinstall
sudo rosdep init
rosdep update
Create a catkin workspace for building ros
mkdir ~/ros_catkin_ws
cd ~/ros_catkin_ws
rosinstall_generator ros_comm --rosdistro melodic --deps --tar > melodic-ros_comm.rosinstall
wstool init -j8 src melodic-ros_comm.rosinstall
Resolve dependencies, build workspace and source it
rosdep install --from-paths src --ignore-src --rosdistro melodic -y
./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
source ~/ros_catkin_ws/install_isolated/setup.bash
Sensor: Multi-spectral Camera

A multi-spectral camera enables seeing beyond what we humans can see. By looking at specific wavelengths of light we can identify unique features. This is particularly useful for sorting plant-based foods, as characteristics of ripeness appear in the Near Infrared (NIR) region.
A system to reduce perishable food waste could be built around this. Transitioning from the first-in-first-out (FIFO) practice commonly used in supermarkets to a system that puts out produce by its ripeness could reduce food wastage. By monitoring produce already out, a system could track ripeness and selectively remove produce that is becoming too ripe (thus preventing the ripening chain reaction).
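To make the dispatch idea concrete, here is a small sketch of putting out the ripest produce first rather than the oldest. The data structures are illustrative; a real system would track batches and shelf locations.

# Sketch: put out the ripest produce first instead of the oldest (FIFO).
import heapq

shelf_queue = []  # max-heap of (-ripeness_index, item_id)

def stock(item_id, ripeness_index):
    heapq.heappush(shelf_queue, (-ripeness_index, item_id))

def next_item_to_put_out():
    # Ripest item goes out first, instead of the oldest
    neg_ripeness, item_id = heapq.heappop(shelf_queue)
    return item_id, -neg_ripeness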
By following a similar approach to the Microsoft HyperCam, we can create a cheap (sub-hundred-dollar) NIR multi-spectral imaging device. The Raspberry Pi NoIR camera is used as it doesn't have an IR filter (this enables us to see wavelengths up to around 1000nm). A grid of IR LEDs of varying wavelengths, connected to a simple transistor driver circuit, allows us to control which wavelength we want to examine using the Jetson GPIO library, as sketched below.
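A minimal sketch of driving the LED grid follows; the wavelength-to-pin mapping is hypothetical and would need to match the actual driver circuit. Each wavelength is switched on in turn, and a frame is captured before moving to the next.

# Sketch: cycle the NIR LED grid one wavelength at a time.
# Pin assignments are hypothetical; match them to your driver circuit.
import time
import Jetson.GPIO as GPIO

LED_PINS = {760: 11, 830: 13, 890: 15, 940: 19}  # wavelength (nm) -> board pin

GPIO.setmode(GPIO.BOARD)
GPIO.setup(list(LED_PINS.values()), GPIO.OUT, initial=GPIO.LOW)

try:
    for wavelength, pin in LED_PINS.items():
        GPIO.output(pin, GPIO.HIGH)  # illuminate at this wavelength
        time.sleep(0.1)              # settle, then capture a frame here
        GPIO.output(pin, GPIO.LOW)
finally:
    GPIO.cleanup()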
The image above shows a ripe avocado at different wavelengths as taken with the multi-spectral camera. 890nm has the highest contrast between the ripe and unripe areas, whereas 830nm only very slightly shows this difference.
The blue filter blocks the blue wavelengths but passes the NIR ones we are interested in, and the blue channel of the sensor is sensitive to these wavelengths. By looking only at this channel, we effectively swap blue for NIR.
This can be done in OpenCV with:
import cv2

img = cv2.imread("capture.png")  # load a captured frame (OpenCV loads images as BGR)
(b, g, r) = cv2.split(img)       # split channels; the blue channel carries the NIR signal
nir = cv2.merge([b, b, b])       # replicate the blue channel into an NIR image
cv2.imshow("NIR", nir)           # or show just the blue channel as grayscale
cv2.waitKey(0)
By combining these images in different channels we can compare differences in the details of each wavelength. For example, the image below shows channels e, c & f merged into an RGB image.
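A rough example of building such a composite (the filenames and wavelength choices are placeholders) could look like:

import cv2

# Hypothetical single-wavelength captures, loaded as grayscale
ch_760 = cv2.imread("760nm.png", cv2.IMREAD_GRAYSCALE)
ch_830 = cv2.imread("830nm.png", cv2.IMREAD_GRAYSCALE)
ch_890 = cv2.imread("890nm.png", cv2.IMREAD_GRAYSCALE)

# Map each wavelength to one display channel (OpenCV uses BGR order)
composite = cv2.merge([ch_760, ch_830, ch_890])
cv2.imshow("False-color composite", composite)
cv2.waitKey(0)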
You can find a list of dependencies to build the required ROS packages at my site here. The code for the multi-spectral camera is in the ROS workspace of the GitHub repository (this code needs major refining, but the idea is there). This node publishes each raw channel as well as any defined combination of channels.
It takes in any image stream of type sensor_msgs/Image. To get a video stream from a camera connected to the Jetson we can use GStreamer via the GSCAM ROS node. All the required dependencies to build it from source can be found on my website. rqt_image_view can be used as a graphical interface for viewing ROS image streams.
The GStreamer pipeline we use to get video from the CSI camera is as follows (here ending in ximagesink to display the stream locally for testing):
nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12' ! nvvidconv flip-method=2 ! ximagesink
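When handing the pipeline to GSCAM, the display sink is dropped and the frames are converted into a format ROS can consume. Something like the following should work (the exact caps may need adjusting for your camera):

export GSCAM_CONFIG="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv flip-method=2 ! video/x-raw, format=BGRx ! videoconvert"
rosrun gscam gscam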
Sensor: Material classification

While the multi-spectral camera is able to produce high-resolution imagery, its low-cost design requires narrow-wavelength LEDs of specific wavelengths. For higher spectral resolution, we need to use a spectrometer. I have done some experimentation using the SparkFun Triad Spectroscopy Sensor.
This sensor only has a few channels in the NIR range and a relatively low spectral resolution of ~40nm.
To create a high-resolution device capable of accurate material classification, we need a sensor that covers a wide wavelength range (up to around 2000nm, beyond the camera's 1000nm limit) and has a high spectral resolution (on the order of 20nm or less). A sensor built with the Hamamatsu MEMS-FPI spectrum sensor, for example, would be capable of a spectral range of 1350 to 1650nm with 18nm spectral resolution (in the case of the C14272). Such a device would be small enough to embed into the end-effector of a robotic arm, allowing classification to happen during movements. This module is, however, challenging for a hobbyist to obtain.
I have created a small dataset using the AS7265X sensor. It is available as a CSV file consisting of the class followed by the channel readings. This data can be passed into a classifier to derive the material composition, as sketched below.
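As a sketch of that last step, the data could be fed into an off-the-shelf classifier such as a random forest (the file name and the assumption of a header-less CSV are illustrative):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# First column is the material class, the rest are AS7265X channel readings
data = np.genfromtxt("as7265x_dataset.csv", delimiter=",", dtype=str)
labels, features = data[:, 0], data[:, 1:].astype(float)

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))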
Object detection: Darknet & ROS

To perform object detection and initial classification we can use the ROS node for Darknet and train a custom set of weights for a specialized model. Darknet ROS with YOLOv3 Tiny (roslaunch files for this are in my fork of the repository) runs at around 10-15fps on the Jetson Nano.
The bounding boxes and classifications can be fed into the value engine for selection of the specific target item type, which can then be collected for further analysis by some form of robotic arm (see the sketch after the launch file below).
A ROS launch file example showing the yolov3-tiny configuration:
<?xml version="1.0" encoding="utf-8"?>
<launch>
  <!-- Use YOLOv3 -->
  <arg name="network_param_file" default="$(find darknet_ros)/config/yolov3-tiny.yaml"/>
  <arg name="image" default="camera/rgb/image_raw" />

  <!-- Include main launch file -->
  <include file="$(find darknet_ros)/launch/darknet_ros.launch">
    <arg name="network_param_file" value="$(arg network_param_file)"/>
    <arg name="image" value="$(arg image)" />
  </include>
</launch>
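On the consuming side, a node can subscribe to the detections darknet_ros publishes and forward only this arm's target class to the value engine. The target class, confidence threshold, and output topic below are illustrative assumptions:

#!/usr/bin/env python
# Sketch: forward detections of this arm's target class to the value engine.
import rospy
from std_msgs.msg import String
from darknet_ros_msgs.msg import BoundingBoxes

TARGET_CLASS = 'bottle'  # hypothetical value item assigned to this arm

def on_boxes(msg):
    for box in msg.bounding_boxes:
        if box.Class == TARGET_CLASS and box.probability > 0.5:
            pub.publish(String('%s %d %d %d %d' % (
                box.Class, box.xmin, box.ymin, box.xmax, box.ymax)))

rospy.init_node('value_engine_input')
pub = rospy.Publisher('value_engine/target', String, queue_size=1)
rospy.Subscriber('/darknet_ros/bounding_boxes', BoundingBoxes, on_boxes)
rospy.spin()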
To build a specialized classifier with Darknet we can follow pjreddie's guide to training on CIFAR-10. For optimal efficiency, the dataset should be built around the desired value item. Running Darknet on TensorRT can yield higher frame rates, as shown by JK Jung's TensorRT ONNX YOLOv3.
Value engine and end-effectors

The value engine ROS node (single-stream sorting) provides a goal for the hardware actually performing the sorting. It operates in a two-stage manner: an initial valuation (generally of lower or less specific probability), followed by a specific classifier (utilizing the spectrometer) for high-confidence material-composition classification, so that the item can be deposited into the correct bin.
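A simplified sketch of that two-stage decision (the thresholds and bin mapping are made up for illustration):

# Sketch: a cheap first-pass visual class gates the pick,
# the spectrometer reading decides the final bin.
BIN_FOR_MATERIAL = {'PET': 'bin_pet', 'HDPE': 'bin_hdpe', 'aluminium': 'bin_metal'}

def decide(visual_confidence, material_class, material_confidence):
    if visual_confidence < 0.5:
        return 'ignore'      # not worth picking up
    if material_confidence < 0.9:
        return 'bin_reject'  # picked up, but composition unclear
    return BIN_FOR_MATERIAL.get(material_class, 'bin_reject')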
This results in higher quality outputs that can be reused, which is important as outputs from a recycling or waste facility must compete with virgin raw materials. This is also important when recycling sensitive materials such as textiles.
The end-effectors must be specialized to the material they need to collect. This is why the value engine performs an initial classification, so that each item can be passed to the correct subsystem.
Further exploration

The goal of decentralized, democratized waste & recycling processes requires a lot of work. This project just explores a few sensors and methods for using edge devices to perform classification of substances.
Other sensors for material classification, such as radar, have also been explored and may be a low-cost alternative or enhancement to spectrometers.