TrashScan is a new approach to waste management and sorting. With it, we can more easily sort trash into the correct bins: compost, bottles & cans, mixed paper, and landfill. Simply place your trash on top of the platform and TrashScan will tell you which bin to place it in. This technology not only makes it easier to sort trash, but also works toward educating users on the intricacies of recycling and composting in order to help communities achieve zero waste by 2020.
To select our community, each of our group members interviewed friends who were part of communities outside our own, including Cal Recycling, Campus Ambassadors, and Bear Walk. Ultimately, we selected Cal Recycling because it was the only community we interviewed with notable challenges and problems. Cal Recycling, officially known as Campus Recycling and Refuse Services (CRRS), coordinates waste reduction services on the UC Berkeley campus and educates building managers and students about sorting trash correctly. Their goal is to achieve “Zero Waste by 2020” by increasing the diversion rate, the share of waste diverted from landfill into compost and recycling. The student workers of Cal Recycling implement the bin systems in buildings and conduct waste audits to check the bins for contamination. They are part of the greater, concentrated sustainability community at Cal. From speaking with Cal Recycling workers, we discovered many opportunities to incorporate technology into sustainability efforts to speed up waste management and educate people about such a complex issue.
Monica Chitre - Campus Recycling and Refuse Services Coordinator
Initial Interview: Monica works closely with the custodial staff to implement the bins on campus, manages the CRRS office, and handles administrative tasks. Findings:
Staff workers face inefficiencies during waste audits when a compost bin ends up contaminated. Caused by human error, cross-contamination among the four bins adds to the time spent on waste audits.
It’s difficult to educate 40,000 students about disposing of waste correctly, and more difficult still to motivate them to take the time to care. Students don’t care about correcting their habits and don’t know enough to tell which plastics are recyclable.
Sometimes the staff in Cal Recycling feel underappreciated and unrecognized for their efforts to divert waste on campus, even though it’s an important job.
Follow-up Interview: In our follow-up interview with Monica, we probed further into the issues of sorting trash and what could be done to fix them. The common sorting errors people made were:
- Only dry paper belongs in mixed paper (no soiled or wet paper).
- Plastics were found in compost, but they don’t belong there (they should go to landfill or bottles & cans).
- Non-compostable spoons from off campus were placed in compost.
It’s crucial for the staff assistants to sort waste correctly during waste audits; otherwise the composting center in Richmond won’t accept the compost. Monica shared that if 10-15% or more of a compost bin was contaminated, the entire bin would end up in the landfill.
We asked which metrics best measure progress toward zero waste; Monica pointed to carbon dioxide emissions and pounds of landfill waste.
Monica reiterated that apathy was the greatest problem that the sustainability community faces as they seek to achieve zero waste on campus. The follow-up interview informed our focus on the problem of sorting trash and educating students about disposing trash in a fun way.
Nicole Cuellar - Campus Recycling and Refuse Services Coordinator
We interviewed Nicole to have a better understanding of what facts would motivate students to dispose trash correctly. As a CRRS Coordinator, Nicole Cuellar initiates the rollout process in buildings on campus and she helps out with side projects, such as designing signage for bins and Zero Waste events.
- Our takeaway was that students are unaware of what happens to trash after disposal, which is the primary reason they are less motivated to care about it. A lot of landfill waste ends up dumped and burned in impoverished, less regulated countries, creating unhealthy living conditions.
- Practicing sustainability by composting is more important than ever in a world of limited resources and environmental problems that waste mismanagement makes worse.
- Reducing our consumption of resources and reusing items (such as water bottles instead of one-time use items) are the most important steps to take in order to achieve zero waste.
Introducing these practices and teaching students what happens to waste after it goes into the bin provides a holistic picture of the role we each play in creating and amassing waste. The issue becomes more relatable if we incorporate these facts and suggestions into our interface and messaging.
Most of Cal Recycling’s problems boiled down to two sources:
People don't know what bin their trash belongs in
People don't care about what bin their trash belongs in
After identifying these two problems, we brainstormed ways we could help with the sorting process.
We began with ideas for a device that would keep track of the contamination level of a bin. In our interviews, we learned that Cal Recycling members hand-sort through all the trash bins, and once a bin exceeds a 15% contamination level, the entire bin is put in landfill. So, to make their process easier and faster, we thought we would add a meter to the bins that would show the contamination level of the bin.
A second idea was a device that would not only sort trash, but throw it away in the correct bin as well. This way users could simply place their trash on the platform and walk away; the device would do all the work for them. However, we wanted to focus more on educating users and teaching them about recycling and composting in a fun way. Thus, our final idea was a device that would sort trash and let users know which bin the item belonged in.
Once we decided on a final product, we went through several design iterations. Our first design included a box with lights positioned at the top. A camera would look into the box, and once the item was sorted, a light would indicate the correct bin.
However, this initial design was too constricting and bulky, so we simplified our design to a flat platform. The user would place their trash onto the platform, and a light attached to the right side of the platform would light up, much like in the initial design.
In our third design, we wanted to be able to add more to the device and have some sort of imagery to be displayed on the platform. Thus, we added in a projector next to the camera and removed the lights. The device would have a flat platform on which an image would be displayed. We would customize the image per bin.
For our final design, we used a monitor, which conveniently served both as the platform and as the source of visuals and sound. We placed this monitor inside a box. Users would see an initial start screen with simple instructions guiding them on how to use TrashScan.
Because signage is often hidden and ineffective for telling people what belongs in each bin, we wanted to display animations and play sounds to engage the user and associate each bin with a sound and graphic. We included facts stated during the interviews as part of the messaging. For compost, we played a victorious sound to encourage composting, and an error “wah-wah” noise to discourage people from using the landfill.
How It Works [detailed technical explanation]
The environment runs on a Raspberry Pi 2 Model B.
Here is a breakdown of the runtime loop, written in Python:
Step 1. Identify whether the user has placed an object down.
A continuous loop runs at the start of the program and does not initiate the image recognition or classification tasks until it determines there is an object. It does this by taking a picture with the overhead camera (denoted P1; it may contain an object) and comparing it against a picture taken during an initial calibration step (P0, which contains no object).
It then reads the JPEG image P1 into memory and extracts the distribution of each RGB channel, creating a histogram for each using numpy, denoted R1, G1, and B1 respectively.
It computes the absolute distance between each of these distributions and the corresponding one extracted from P0, then sums them to find the total distance:
total_distance = abs(R1 - R0) + abs(G1 - G0) + abs(B1 - B0)
If this total distance exceeds some threshold, our program takes it as enough to determine that the user has placed an object down, and that P1 is a photo of that object.
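Here is a minimal sketch of that comparison, assuming Pillow and numpy; the function names and file paths are our own. We read the formula above as a sum of per-bin absolute differences (swap in np.linalg.norm for a strictly Euclidean variant):

```python
import numpy as np
from PIL import Image

def rgb_histograms(path, bins=256):
    """Load a JPEG and return one histogram per color channel (R, G, B)."""
    img = np.asarray(Image.open(path).convert("RGB"))
    return [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]

def total_distance(path_0, path_1):
    """Sum the per-bin absolute differences across the three channel histograms."""
    return sum(int(np.abs(h1 - h0).sum())
               for h0, h1 in zip(rgb_histograms(path_0), rgb_histograms(path_1)))
```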
We initially ran into trouble picking this seemingly arbitrary threshold; different thresholds worked better in different lighting scenarios and locations.
Our solution was to add a calibration step at the beginning of the loop: with no object placed, take the first few images (5) and find the average variation due to noise. Once we have this average noise distance, we set our threshold to ~1.5 times it. This multiplier makes sense if we assume the noise in images is normally distributed.
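A sketch of that calibration, assuming the picamera library for the overhead camera; the capture() helper and file names are our own:

```python
from picamera import PiCamera

camera = PiCamera()

def capture(path="frame.jpg"):
    """Snap an overhead photo and return its file path."""
    camera.capture(path)
    return path

def calibrate(n=5, multiplier=1.5):
    """Estimate the noise floor from n empty-platform shots and derive a threshold."""
    baseline = capture("p0.jpg")  # P0: the platform is empty during calibration
    noise = [total_distance(baseline, capture("probe.jpg")) for _ in range(n)]
    return baseline, multiplier * (sum(noise) / float(n))

BASELINE, THRESHOLD = calibrate()
```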
Step 2. Recognize what object the user placed.
We used a REST API to send our JPEG images in binary to CamFind, using the unirest package in Python 2. CamFind had a limit of 500 free requests per month; we totaled about 650, but luckily our project straddled November and December, so we got by on the free subscription.
The CamFind API has two main methods:
1. recognize from local upload, which identifies objects in images uploaded locally, and
2. recognize from url, which identifies objects in images hosted at a URL.
Since we wanted to recognize images taken and stored on the Raspberry Pi, we used the first method, recognize from local upload. However, there was a bug in CamFind's recognize from local upload API the day before we were scheduled to deliver our prototype to Cal Recycling!
With their local upload API down, we had to use their recognize from url API, which meant we first had to upload each JPEG to a cloud-based service to host it at a remote URL, which we could then pass to CamFind. We used Cloudinary for this, which gave us a remote URL for our images.
Although this problem was painful when it happened, it means we now have a scalable remote architecture for storing every image taken with TrashScan, which could be used to train future iterations through human-assisted tagging and verification.
CamFind's API responds with a descriptive string of what it sees in the image, such as "Clear Crystal Geyser Disposable Water Bottle".
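A sketch of the flow we ended up with, in the Python 2 style we used; the CamFind endpoint, header, and parameter names below are illustrative rather than verbatim from the docs:

```python
import unirest               # the Python 2 REST client mentioned above
import cloudinary.uploader   # hosts the JPEG at a public URL

def describe_object(jpeg_path):
    # Host the image remotely; Cloudinary's response includes a public URL.
    remote_url = cloudinary.uploader.upload(jpeg_path)["secure_url"]

    # Hand that URL to CamFind's recognize-from-url method. CamFind works
    # asynchronously: the POST returns a token that is then polled for the
    # finished description (polling omitted here for brevity).
    response = unirest.post(
        "https://camfind.p.mashape.com/image_requests",
        headers={"X-Mashape-Key": "YOUR_MASHAPE_KEY"},
        params={"image_request[remote_image_url]": remote_url,
                "image_request[locale]": "en_US"},
    )
    return response.body
```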
Step 3. Recognize what bin the object goes in
Now that we know what it is we are sorting, we have to find out where it goes.
The naive solution would be to have four dictionaries (Compostable, Landfill, BottlesCans, and MixedPaper), each containing an exhaustive list of everything that we know goes in that bin.
It isn't practical to come up with such a list, and small variations in language ("paper napkins" vs. "paper napkin" vs. "napkin" vs. "napkins" vs. "white napkin", etc.) easily break strict string matching solutions.
It was clear that if people were going to use this device in any nontrivial way, a more generalizable solution was needed: we had to account for synonyms and generalize to category membership.
If we tell our Compostable classifier that strawberry, orange, lemon, and grape are compostable, we would like it to infer that kumquat is also compostable. We want words with similar contexts to go in the same bins.
To achieve this we trained a support vector machine with a linear kernel on word embeddings extracted from the 100 billion word Google News corpus.
A word embedding is just a list of numbers (a vector) that captures the contexts in which a word is used, in terms of some features. For the model we trained, left and right n-grams and skip-grams with max n=3 were used; these are options in the word2vec program.
More about word embeddings and word2vec:
https://en.wikipedia.org/wiki/Word_embedding
https://code.google.com/p/word2vec/
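To make the idea concrete, here is a small demonstration of "similar context, similar vector", assuming the pretrained Google News vectors (linked above) loaded with gensim; the printed neighbors are illustrative:

```python
from gensim.models import KeyedVectors

# Pretrained 300-dimensional vectors from the Google News corpus.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Words used in similar contexts land near each other in the vector space,
# which is what lets a classifier generalize from grapes to kumquats.
print(vectors.most_similar("strawberry", topn=3))
```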
We collected lists of items that are Compostable, Landfill, BottlesCans, and MixedPaper from various online sources. We then extracted the word embeddings for the items in our lists from word2vec and passed them into the SVM training function from Python's scikit-learn.
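A sketch of the training step, continuing from the vectors loaded above; the item lists here are illustrative stand-ins for the ones we scraped:

```python
from sklearn.svm import SVC

# Toy stand-ins for the scraped lists of items per bin.
items = {
    "Compostable": ["strawberry", "orange", "lemon", "grape", "banana"],
    "Landfill":    ["styrofoam", "wrapper", "foil"],
    "BottlesCans": ["bottle", "can", "soda"],
    "MixedPaper":  ["newspaper", "envelope", "magazine"],
}

X, y = [], []
for bin_name, words in items.items():
    for word in words:
        if word in vectors:          # skip items missing from the vocabulary
            X.append(vectors[word])
            y.append(bin_name)

# Linear-kernel SVM, as described above; training happens once, offline.
clf = SVC(kernel="linear").fit(X, y)
```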
The advantage of an SVM for this task, compared to alternatives like neural nets, is that the fully trained model is relatively small! The final model plus all the word vectors came to ~120MB, which was easily loaded into the Pi's 1GB of RAM.
Running cross-validation to test the model put its accuracy at 87-91%.
All of this training happens only once.
On the Raspberry Pi, we just look up the word embedding for whatever CamFind returns, and our model predicts which category it belongs to.
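A sketch of that lookup, under the same assumptions as above; averaging the embeddings of the recognized words in a multi-word description, and defaulting to Landfill when nothing is recognized, are our own simplifications:

```python
import numpy as np

def predict_bin(description, vectors, clf):
    """Map a CamFind description string to one of the four bins."""
    known = [w.lower() for w in description.split() if w.lower() in vectors]
    if not known:
        return "Landfill"  # conservative default when no word is recognized
    embedding = np.mean([vectors[w] for w in known], axis=0)
    return clf.predict([embedding])[0]

print(predict_bin("Clear Crystal Geyser Disposable Water Bottle", vectors, clf))
```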
Step 4. Play the animation
We used Processing to create the motion animations with sound, exported them, and put them on the Raspberry Pi. A 'default' animation loops continuously; when the user's object is classified, the animation for the corresponding bin plays, after which the device returns to the start and the user can go again.
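Putting the four steps together, the runtime loop looks roughly like this; play_animation() is a hypothetical stand-in for launching the exported Processing sketch, and the other helpers come from the sketches in the earlier steps:

```python
import time

BASELINE, THRESHOLD = calibrate()      # Step 1 setup: baseline photo + threshold

while True:
    photo = capture("p1.jpg")          # P1: may contain an object
    if total_distance(BASELINE, photo) > THRESHOLD:
        description = describe_object(photo)               # Step 2: CamFind
        bin_name = predict_bin(description, vectors, clf)  # Step 3: SVM
        play_animation(bin_name)       # Step 4: bin-specific animation
    time.sleep(5)                      # a new photo is taken every 5 seconds
```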
Final Thoughts
We're extremely proud of the TrashScan prototype that we've created. However, we did encounter several issues while designing and constructing TrashScan.
We found that the red target displayed on the screen was too large and would, at times, obscure the actual object, especially if the object was small. A redesign would replace the red target with a thinly outlined square, subtle enough not to be detected by the camera.
Another design issue we discovered was the “Let’s Start” wording in the instructions. The presentation of the phrase led many users to believe that the platform was a touchscreen, and they would tap the screen to start the program. Future iterations would have to make the wording and look clearer.
A major design issue was the object detection, in which our program determines whether an object has been placed on the platform. Our current design compares the RGB values of two photos: one baseline photo with no object, and another photo taken at an interval. We calculate a threshold each time the program is run, and then photos are taken every 5 seconds. If the RGB value differences are greater than the threshold, our program believes an object is on the platform. However, due to lighting and movement around the platform and camera, our object detection was very sensitive and would not always detect correctly. To solve this issue, we think a weight sensor would be better for future iterations: we could install a scale underneath the platform and take a picture of the object once a weight is detected.
While creating TrashScan, we also thought of several additional features that would make it better. One of these is a companion app: users would be able to open the app, see photos of objects other users have placed on the platform, and sort those items themselves. This would not only help train the sorting program, but also serve as a fun and educational tool for users to learn more about recycling and composting.
Another feature we wanted was the ability to scan and sort multiple items at a time; our current design only scans one item at a time. Thus, we would have to create our own object detection API that could correctly identify multiple objects. This would not only remove the need to connect to the internet, but also speed up the sorting time of our program.
Lastly, we would like to handle edge cases. While the CamFind API is very good at identifying objects, it cannot identify materials; therefore, all disposable spoons are currently sorted into landfill. We would like to distinguish compostable utensils, cups, and plates from non-compostable versions. This would make our program more accurate and improve recycling and composting rates.