I recently bought the Adafruit TCS34725 breakout board, which senses colors, and I wanted to build a project to kick the tires.
The most colorful thing that I know of in our house is my baby daughter's box of Lego. I thought it would be a nice test of the hardware to check if it could recognize the colors, and if I could use the color information with my Raspberry Pi to say what colors the TCS34725 breakout board sees.
I wanted my device to sense when I place a block near the color-sensing breakout board, and then display on a screen what color it has detected. I also wanted the device to synthesize a voice saying the detected color.
Like lots of things, it seems quite complex at the start, but breaking it down into smaller, simpler steps makes it a lot easier. I've uploaded all of my code and designs to GitHub, so if you want to look into any part in a bit more detail, you can spelunk into the source code.
I had a few ideas on how I could achieve this.
- There are already libraries written by Adafruit to let an Arduino interpret the red, green and blue components of whatever color is sensed by the TCS34725, so I can use an Arduino to sense color changes.
- I can detect when a block is placed near the sensor by using a HC-SR04 distance sensor, which is very cheap, and certainly accurate enough for my simple device.
- I've previously created a 3D-printed rig to hold my Raspberry Pi and an Arduino, and I can attach breakout boards to that rig also.
- I can send the data from the Arduino over a serial port to a Raspberry Pi.
- And I can write a C# UWP application for Windows 10 IoT Core on the Raspberry Pi to process data from the Arduino and synthesize a voice using the Windows.Media.SpeechSynthesis library.
And I'd also like my code to be nice enough to extend, and maybe re-use some parts of it.
First - using the TCS34725 to sense a color.
Adafruit have written an Arduino library for the TCS34725 and open-sourced their code on GitHub. I found this library and its examples to be a great way to get started with the breakout board. But for the next step, I wanted the device to output data in a format that was easy for C# to process - so I designed a simple JSON object which can be easily ingested by the JSON.NET framework for .NET.
{
  "Protocol": "Bifrost",
  "Device": "TCS34725",
  "Properties": {
    "Red": "8b",
    "Green": "74",
    "Blue": "74"
  }
}
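The actual code that builds this object lives in the GitHub repo, but as a minimal sketch, the serialization can be as simple as one `snprintf` call. The function name below is hypothetical, and the hex formatting of the channel values is an assumption based on the sample output above (e.g. "8b"):

```cpp
#include <cstdio>
#include <string>

// Hypothetical helper that formats TCS34725 channel readings as the
// Bifrost JSON object shown above. Formatting the channel values as
// two-digit hex strings is an assumption based on the sample output.
std::string tcs34725ToJson(unsigned red, unsigned green, unsigned blue) {
    char buf[160];
    std::snprintf(buf, sizeof(buf),
        "{\"Protocol\":\"Bifrost\",\"Device\":\"TCS34725\","
        "\"Properties\":{\"Red\":\"%02x\",\"Green\":\"%02x\",\"Blue\":\"%02x\"}}",
        red, green, blue);
    return std::string(buf);
}
```

Keeping the serialization to plain `snprintf` means no JSON library is needed on the Arduino side - the structure is fixed, so a format string is enough.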
So I can use JSON.NET to hydrate a simple C# class (like the one below) from the JSON object above.
public class TCS34725 : ISensor
{
    public string Protocol { get; set; }
    public string Device { get; set; }
    public IDictionary<string, string> Properties { get; set; }
}
I've uploaded the code to GitHub that generates the JSON object for the TCS34725.
Next - use the HC-SR04 to sense how far away the subject is.
There are lots of Arduino libraries for the HC-SR04 distance sensor - I wanted to do the same thing as before, and write JSON output to the Arduino's serial port which can be easily processed by C#. The HC-SR04 object I designed is below:
{
  "Protocol": "Bifrost",
  "Device": "HCSR04",
  "Properties": {
    "Distance": 3988
  }
}
I've uploaded the code to GitHub that generates the JSON object for the HC-SR04.
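The HC-SR04 works by timing an ultrasonic echo, so under the hood the "Distance" value comes from a pulse-width-to-distance conversion. How the library scales its reported value isn't shown here, so treat the function below as a sketch of the underlying math, not the actual library code:

```cpp
#include <cstdint>

// Sound travels at roughly 0.343 mm per microsecond, and the echo pulse
// covers the round trip to the subject and back, so the one-way distance
// is (pulse width * 0.343) / 2. Done in integer arithmetic here.
long pulseToMillimetres(uint32_t echoMicros) {
    return (long)(echoMicros * 343UL / 2000UL);
}
```

So an echo pulse of about 2000 microseconds corresponds to a subject roughly 343mm away.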
Pulling the JSON outputs together into a single message.
So I can write a nice, small Arduino project with the code below:
#include <hcsr04.h>
#include <TCS34725.h>

#define TRIG_PIN 12
#define ECHO_PIN 13

HCSR04 hcsr04(TRIG_PIN, ECHO_PIN);
TCS34725 tcs34725;

void setup() {
  Serial.begin(9600);
  if (!tcs34725.begin()) {
    Serial.println("No TCS34725 found ... check your connections");
    while (1);
  }
}

void loop() {
  tcs34725.Read();
  Serial.print("[");
  Serial.print(tcs34725.ToString());
  Serial.print(",");
  Serial.print(hcsr04.ToString());
  Serial.println("]");
  delay(100);
}
And this outputs a list of JSON objects like the one below multiple times a second, which is really easy for a Raspberry Pi to pick up over the serial port, and then it's simple for C# to parse.
[
  {
    "Protocol": "Bifrost",
    "Device": "TCS34725",
    "Properties": {
      "Red": "80",
      "Green": "66",
      "Blue": "66"
    }
  },
  {
    "Protocol": "Bifrost",
    "Device": "HCSR04",
    "Properties": {
      "Distance": 3988
    }
  }
]
I've uploaded the sketch which reads data from the HC-SR04 and the TCS34725 to GitHub.
Now for the hardware.
I've previously created a 3D-printed rig to hold my Raspberry Pi, Arduino, and a 5" screen - you can read about how I made it here.
I designed a couple of attachments for this rig to hold the TCS34725 breakout and the HC-SR04 breakout, and printed these out.
I've uploaded the CAD for these brackets to GitHub.
I connected the HC-SR04 and TCS34725 to my Arduino, and was able to see the JSON object list outputted to the Serial Monitor.
Sending serial data from the Arduino to the Raspberry Pi.
Sending serial data to a Raspberry Pi running Windows 10 IoT Core is a piece of cake - you can check out my project's source code, or see a more generic example on Microsoft's GitHub site.
Now a little bit of data science...
I needed to do a little work to turn raw readings from the Adafruit breakout into named colors. The TCS34725 can tell us the red, green and blue components of whatever is held near the device - but it can't tell us whether that's what we would call Red, or Yellow, or Light Blue, etc.
What I was able to do was hold pieces of Lego of different colors up to the device, and record the red, green and blue color components. Obviously there was some variation in how the TCS34725 reported the values of the components, but I was able to apply a little bit of data science - I used a box and whisker plot in MS Excel to chart the variation of the different components.
An example is shown below - I recorded about 100 readings from the TCS34725 while holding a red Lego block next to it. This box and whisker plot graph shows that if I get an output from the TCS34725 where:
- the red component has a value between 156 and 217,
- the green component has a value between 28 and 64, and
- the blue component has a value between 25 and 64,
...then I can be pretty confident that the TCS34725 is seeing a red Lego block. This is quite intuitive, as you can see the red component values are very high, whereas the green and blue components are relatively low.
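The range-finding step I did in Excel can be sketched in a few lines of code: given the ~100 readings of one color channel for a known block, take the spread of observed values. For simplicity this sketch uses the minimum and maximum readings as the whisker ends; Excel's box-and-whisker plot may treat outliers differently.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Given repeated readings of a single color channel while a known block
// is held to the sensor, return the observed (low, high) range.
std::pair<int, int> channelRange(const std::vector<int>& readings) {
    auto mm = std::minmax_element(readings.begin(), readings.end());
    return {*mm.first, *mm.second};
}
```

Running this over the red-channel readings for the red block would yield a range like the (156, 217) quoted above.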
But that one's easy, when you have a red block and red/blue/green components out of the box. What about a yellow block?
I repeated the experiment, and got a box and whisker graph like the one below. This time the results are very different - the red component is much lower, and the green component is much more pronounced than it was for a red block. So if I get an output from the TCS34725 where:
- the red component has a value between 102 and 128,
- the green component has a value between 80 and 102, and
- the blue component has a value between 28 and 47,
...then I can be pretty confident that the TCS34725 is seeing a yellow Lego block.
I repeated this experiment for lots of different colors and shades of Lego blocks, and worked out the red, green and blue component ranges for each. Given that information coming through to the Raspberry Pi over the serial port, I was able to:
- Look at the HC-SR04 reading - if it reported that something was between 20mm and 60mm from the sensor, check the RGB components being output by the TCS34725.
- Given each of the RGB components in the TCS34725 data, I was able to compare with the ranges calculated from my experiments - if the reading fell into one of the color ranges I'd observed, then my application could assert it was looking at that color.
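The range-matching step can be sketched as a series of interval checks, using the red and yellow ranges measured above. A real version would hold one entry per calibrated color; anything that falls outside every range is reported as unknown:

```cpp
#include <string>

// Classify an RGB reading against the ranges measured for known blocks.
// Only the red and yellow ranges from the experiments above are shown.
std::string classifyColor(int red, int green, int blue) {
    if (red >= 156 && red <= 217 && green >= 28 && green <= 64 &&
        blue >= 25 && blue <= 64) {
        return "Red";
    }
    if (red >= 102 && red <= 128 && green >= 80 && green <= 102 &&
        blue >= 28 && blue <= 47) {
        return "Yellow";
    }
    return "Unknown";
}
```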
The UWP framework already has a speech synthesizer built in - so given my calculated color in a variable named "colorname", I was able to use the code below to say the color. I've added a little bit of extra code so that the program randomly chooses from a list of sentences when it's saying the color - just to add a little bit of variation and make it less repetitive.
var thingsToSay = new string[]
{
    $"Looks like you've got a {colorname} block.",
    $"I think that's a {colorname} piece.",
    $"Is it {colorname}?"
};

var speechSynthesizer = new SpeechSynthesizer();
var r = new Random();
int rInt = r.Next(0, thingsToSay.Length);
var stream = await speechSynthesizer.SynthesizeTextToStreamAsync(thingsToSay[rInt]);
media.SetSource(stream, stream.ContentType);
media.Play();
You can hear what's being said by the Raspberry Pi using any headphones or loudspeaker with a 3.5mm jack that can be connected to the Pi.
I also wrote the color to the device's screen.
So now I can detect the color of a Lego block when it's held close to the TCS34725 color sensor, and not only write the color to my Raspberry Pi's screen, but also get the Pi to say what color it sees.
The project turned out to be a lot more interesting than I originally expected - I used hardware, software, 3D printing, JSON processing and data science. Hopefully some of the techniques I've written about here (or the code I've uploaded) will help someone else trying to use the TCS34725, or trying to process serial data from an Arduino.