This project is still in progress and hence there are some placeholders, broken sentences, and typos below, but then again that would be the case even if this project were complete.
Introduction
Several years ago my daughter said that she would like a magic wand with real magic in it. As I am often inspired by Arthur C. Clarke's third law:
"Any sufficiently advanced technology is indistinguishable from magic"
I decided to embark on making a magic wand with real "sufficiently advanced technology" for her.
I was inspired by this project, and early iterations of my version followed it pretty closely. Over the years I have made changes, including simple neural networks to recognize gestures, a more advanced Arduino board that incorporates Bluetooth communication, and experiments with mold making for the final wand.
Prototype
I decided to build out a prototype on a single section of solderless breadboard to experiment with data collection and the machine learning algorithm.
Hardware
For the microcontroller I used the Adafruit Feather 32u4 Bluefruit LE. This board has several advantages for this project, including: 1) an integrated battery charging and power management circuit, 2) an integrated Bluetooth LE radio for further expansion of this project into controlling Bluetooth devices, and 3) an easy interface with the accelerometer. For the accelerometer, I used the Adafruit Precision NXP 9-DOF. This sensor is very easy to integrate with the Feather via the I2C protocol.
Because I wanted to use neural networks to infer gestures from the sensor values, I needed a way to record this data on a host computer in order to train those algorithms. I could have just continuously output the data to the serial terminal; however, this would make it very difficult to label when specific gestures started and stopped.
To provide a sort of auto labelling, I included a button and two switches. The microcontroller only outputs the sensor values while the button is depressed, and it also outputs a label corresponding to the binary value of the two switches. This gives me the ability to output labelled data for 4 different gestures, and only while I am pressing the button. An alternative to the two switches would be a DIP switch, which would allow for more commands and slightly simpler wiring; perhaps you can add this to your version.
Furthermore, as you will see in the software section, I output the data in JSON format so that it is very easy to read into the Python/TensorFlow environment for algorithm training.
Finally, I wanted visual feedback of which gesture is inferred. To do this I connected 4 LEDs to digital pins that can be lit depending on the inferred command.
The Fritzing file is included in the GitHub repo and an image of the breadboard view is shown below.
The one thing not shown in the diagram is the LiPo battery from Adafruit. They make a whole host of 3.7 V lithium-polymer (LiPo) batteries that have connectors specifically for their Feather boards. I chose a cylindrical one to fit within the final handle of the wand.
Arduino Software
This was my first real go at an Arduino device, and because of that I relied heavily on bits of code from the examples for the Feather, for the accelerometer, and for the example project I mentioned above. As with all of the hardware files, the source code for the software is in the GitHub repo.
The first bit of code that you need to get going is the button_press_read_NXP-9-DOF Arduino sketch. There is a bit of setup up front, but you can get a good idea of what this code does by looking at the loop function:
void loop(void)
{
  buttonState = digitalRead(buttonPin);
  if (buttonState != lastButtonState) {
    if (buttonState == HIGH) {
      /* button just pressed: read the switch pins to get the command label */
      command_int = digitalRead(switchPin0) + 2 * digitalRead(switchPin1);
      Serial.print("{\"command\": "); Serial.print(command_int);
      Serial.println(", \"vectors\": [");
      firstvec = 1;
    } else {
      /* button just released: close out the JSON object */
      Serial.println("]},");
    }
  }
  if (buttonState == HIGH) {
    /* Get a new sensor event */
    sensors_event_t event;
    gyro.getEvent(&event);
    sensors_event_t aevent, mevent;
    accelmag.getEvent(&aevent, &mevent);
    /* Print one sample as a JSON array (gyro speeds are measured in rad/s) */
    if (firstvec == 1) {
      Serial.print("[");
      firstvec = 0;
    } else {
      Serial.print(",[");
    }
    Serial.print(aevent.acceleration.x, 4);
    Serial.print(", ");
    Serial.print(aevent.acceleration.y, 4);
    Serial.print(", ");
    Serial.print(aevent.acceleration.z, 4);
    Serial.print(", ");
    Serial.print(event.gyro.x);
    Serial.print(", ");
    Serial.print(event.gyro.y);
    Serial.print(", ");
    Serial.print(event.gyro.z); Serial.println("]");
  }
  lastButtonState = buttonState;
  delay(200);
}
The first thing we do is check whether the button state has changed. If it changed from low to high (not pressed to pressed), we read the state of the switches that determine the command we want inserted into the JSON data. If it changed from high to low (pressed to not pressed), we write the closing JSON bracket and brace.
Then, if the button state is high, we read from the accelerometer, format that into a partial JSON string (one array of data per loop iteration), and print it out over the serial link. After a configurable delay we repeat the loop.
After compiling and uploading to the Arduino, run the Arduino IDE serial monitor (or, on Linux, attach with something like screen /dev/ttyACM0 115200; your device path and baud rate may differ), and a couple of presses of the button should produce the following output:
[{"command": 0, "vectors": [
[2.6824, 1.9478, 9.8178, -0.06, 0.38, -0.43]
,[2.7470, -0.8758, 10.5811, -0.24, 0.34, 0.62]
,[6.4726, -19.6020, 16.1396, 1.02, -0.15, -2.06]
]},
{"command": 0, "vectors": [
[1.7420, -1.4859, 9.6646, -0.01, 0.20, 0.07]
,[0.8901, -3.3715, 10.8323, 0.32, 0.26, 1.21]
,[-19.6020, -5.6710, 17.7284, -0.34, -1.89, -4.47]
]},
{"command": 0, "vectors": [
[2.0076, 1.3352, 7.0708, -0.32, -0.15, 0.10]
,[1.9549, -1.3807, 10.5045, 0.06, 0.03, -0.01]
,[1.7204, -6.5587, 10.4590, -0.70, 0.27, 1.46]
,[-19.2239, -19.6020, 7.6690, 2.04, -0.86, -4.47]
]},
You should notice a few things in this output:
- The base-10 representation of your two switches will be in the "command" property
- The "vectors" key is an array of samples from the accelerometer (ax, ay, az, rx, ry, rz)
- There is a trailing comma after the last command. This is there because the code cannot know whether you intend to record another command, and most JSON parsers will throw an error on a trailing comma. Since we will likely have to edit these files together a bit anyway, removing it before parsing isn't a big deal (see the parsing sketch after this list).
- Different commands can contain different numbers of sample vectors. This will have to be handled depending on the type of neural network you want to implement. Future iterations of the code could easily be modified to only output a certain number of readings per command; that is a good idea, actually, I should do that.
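To make those last two points concrete, here is a minimal Python parsing sketch; the file name gestures.dat and the drop-or-truncate policy are my own choices, not necessarily what the repo's notebook does:

import json
import numpy as np

# read the captured serial output (file name is my own choice)
with open("gestures.dat") as f:
    text = f.read().strip()

# remove the trailing comma after the last record, then make it a valid JSON array
text = text.rstrip(",")
if not text.startswith("["):
    text = "[" + text
records = json.loads(text + "]")

n_lookback = 4   # samples per gesture; use the most common length in your data
n_features = 6   # ax, ay, az, gx, gy, gz

X, y = [], []
for rec in records:
    vecs = rec["vectors"]
    if len(vecs) < n_lookback:
        continue                    # my policy: drop short gestures (padding also works)
    arr = np.asarray(vecs[:n_lookback], dtype=np.float32)  # truncate long ones
    X.append(arr.ravel())           # flatten to length n_lookback * n_features
    y.append(rec["command"])
X = np.stack(X)
y = np.asarray(y)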
I have seen various quirks, such as the accelerometer values staying the same regardless of how you move the wand. A simple restart of the Arduino should fix this.
Perhaps the biggest problem I ran into (and which caused me to give up on this project for 9 months or so) is an insidious piece of software enabled by default in Ubuntu called... da, da, daa, ModemManager. This thing exists to make connecting to, configuring, and interfacing with cellular modems easy, which I assume it does; however, it also hoses up serial device connections, including Arduino boards. It is an easy fix but not easy to diagnose, because it looks like device manufacturer ID problems, bootloader problems, serial connection problems, and a whole bunch of other things. So, if you are having any of these issues and your computer doesn't have a cellular modem, deactivate this thing (on a systemd-based Ubuntu, something like sudo systemctl disable ModemManager should do it) and that may clear up your problems.
Data Collection
Alright, now that we have our prototype wand spitting out very nearly compliant JSON with the command of our choice, we get to go about collecting data. Because neural nets work best with lots of data, you will want to spend quite a bit of time at this step recording the same motion with slight variations. This part of the project became a bit of a joke in my family because I would sit on the couch watching TV, constantly pressing the button on the wand and recording the commands via the serial monitor. Then I would stop for a couple of weeks while I chewed on the data, only to resume filling our TV time with collecting wand gesture data.
The early iterations of the code required quite a bit of "cleaning" of the data: trying to remember which gesture corresponded to which file, manually labelling the gestures, 'cat'ing files together, and then writing a parser in Python for the data. Fortunately, the changes that I have made, including adding the JSON formatting, adding the binary switches for command labelling, and automating some of the ML model generation code, have made this process quite a bit simpler. So, to collect data to train the model, roughly follow these steps:
- Upload the button_press_read_NXP-9-DOF.ino file to the wand
- Set both switches to 0 or off
- Open the serial monitor in the Arduino IDE
- Press the button, perform the gesture, then release the button.
If done correctly, these steps should produce something like the following in the serial monitor. If your gesture lasts roughly a second, you will get about 4-6 rows, or samples, of data.
{"command": 0, "vectors": [
[5.2164, 0.6126, 9.5521, -0.03, 0.06, -0.09]
,[5.0082, -0.1412, 9.2076, 0.05, -0.03, -0.05]
,[5.2618, 0.4690, 8.5065, 0.09, -0.06, -0.08]
,[5.1829, 0.1507, 8.9755, -0.04, 0.02, -0.00]
]},
- Now do this about 100-1000 times, depending on how patient you are. With each iteration, you will get a new JSON object on the screen that looks like the one above.
- Now flip the 1's (ones-place) switch to 1
- Perform a new gesture 100-1000 times; you should notice that the command value is now 1
- Now do this for all the combinations of the switch positions, performing a unique gesture for each.
- Almost finally, select all in the serial monitor, open up your favorite text editor (you should have just replaced the words "your favorite text editor" with vim), and paste the copied almost-JSON into a file.
- Page down to the end and remove that last comma (that is not too much to ask for how easy this makes it to read this data into Python)
Now we have a single JSON file that contains many samples of 4 or more gestures. The final thing we need is to collect some data from the accelerometer when there is no gesture intended. To do this:
- Clear the output in the serial monitor
- Press and hold the button on the wand for long periods of time while slowly moving the wand into different positions
- Copy the data from the serial monitor into a new file named something like noise_file.dat.
These files will then be read into the Python code and used to train the neural network. You may be wondering how to set the command switches; you will see in the code that for the noise file it doesn't matter what the command key is set to, as all of the sequences just get stitched together and randomly sampled to create the noise, or "no command," sequence. Alright then, let's get rockin' on the neural network.
Host Software and Neural Network
Alright, you can take a look at the file train_model.py for an early version of a model training script using Keras to build a 1-layer model. I don't use this file anymore and I am really only keeping it around for some of the model initialization and random search stuff. What you really want to look at is the Jupyter notebook model_training_sandbox.ipynb.
I know, I know, putting some critical software in a Jupyter notebook is poor form, and I should just have a script that reads the data files and then spits out a neural network. That being said, I have found Jupyter notebooks to be a super powerful tool for visualizing data and prototyping Python code. If you have never played with Jupyter before it may be a bit cumbersome because you have to pip install jupyter AND THEN run the command
> jupyter notebook
from the project directory, but if you are following this build I think you can handle that.
Also, this is meant to be an interactive notebook where you can try different things out. If you name your data files the same as mine and just run through the notebook, everything should work, but I encourage you to hack on it a little. Change the network: does it perform better or worse? Add a bunch more noise samples than the other commands: what effect does this have on the confusion matrix? Plot (you are in Jupyter, after all) the sequences of samples for the various commands and see if you can spot discriminators in the plots that the neural network will train on. And if you are really feeling open sourcy: modify, add, write tests, and then submit pull requests; that would be super sweet.
Alright, enough about that. Take a look at the .ipynb file. You should see it doing a couple of things:
The first couple of cells read in the files, determine the distribution of the lengths of the commands (remember, a sample consists of a variable number of length-6 vectors), and then sample from the noise file.
It then determines the sequence length that has the most samples (in my checked-in data it is 4 vectors per sample) and then grabs windows of that length from the noise file.
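A minimal sketch of that noise stitching and sampling, assuming noise_file.dat has already been parsed into noise_records the same way as the gesture file, and that n_lookback holds the modal length:

import numpy as np

# concatenate every vector from every noise record into one long sequence
noise_seq = np.concatenate(
    [np.asarray(rec["vectors"], dtype=np.float32) for rec in noise_records]
)

# draw random windows of n_lookback consecutive samples as "no command" examples
rng = np.random.default_rng(0)
n_noise = 500   # my choice; roughly match your per-gesture sample counts
starts = rng.integers(0, len(noise_seq) - n_lookback, size=n_noise)
X_noise = np.stack([noise_seq[s:s + n_lookback].ravel() for s in starts])
y_noise = np.full(n_noise, 4)   # assumption: command 4 is the "no command" class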
We then build the model. I use the super simple Keras high-level API that is now integrated with TensorFlow. I am also partial to the functional API, on which you can find a great deal of documentation.
The model I used is a simple 1-layer neural network, where the input is a vector of length (n_lookback * n_features), 4 × 6 = 24 in this case, and the output is a vector of length n_commands, 5 in this case. There is only 1 fully connected layer with a linear activation function. This makes it relatively easy to implement in C code for execution on the microcontroller. If any of this doesn't make sense, search neural networks, but make sure not to get too far down the rabbit hole, and if you at any point have the thought, "maybe I should use LSTMs," stop, step away from the keyboard, and go work on something else for a couple of weeks. Once you have calmed down, come back to this project and get to work implementing a single dense layer.
inputs = keras.layers.Input(shape=(cnt_argmax*n_features,))
x = inputs
outputs = keras.layers.Dense(n_cmds,activation='linear')(x)
model = keras.Model(inputs,outputs)
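To round this out, here is a hedged sketch of how compiling and fitting might proceed from here; the optimizer, epoch count, and the names X_all/y_all (the stacked gesture-plus-noise arrays) are my assumptions, with the notebook in the repo being the authority:

from tensorflow import keras

# the linear outputs are logits, so let the loss apply the softmax internally
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# X_all: rows of flattened sequences, y_all: integer command labels
model.fit(X_all, y_all, epochs=200, batch_size=32, validation_split=0.2)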
After training the model, there are a couple of cells that will grab the weights and biases that were learned from the data, format them into strings, and then insert them into placeholders in a template Arduino file. There might be a better way to insert constant matrices and vectors into an Arduino file, but this template method doesn't seem to be too arduous for a small network.
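To give a flavor of what such a cell can look like, here is a rough sketch; the template file name, placeholder markers, and flattening order are illustrative, not the exact ones used in the repo:

# pull the learned parameters out of the single dense layer
W, b = model.layers[-1].get_weights()   # W: (n_inputs, n_cmds), b: (n_cmds,)

def c_array(name, values):
    """Format a flat array of floats as a C initializer string."""
    body = ", ".join(f"{v:.6f}" for v in values)
    return f"const float {name}[] = {{{body}}};"

# splice the formatted constants into the placeholder markers of the template;
# W.ravel() is row-major, so match that indexing in your C inference loop
with open("wand_template.ino") as f:
    sketch = f.read()
sketch = sketch.replace("/*__WEIGHTS__*/", c_array("weights", W.ravel()))
sketch = sketch.replace("/*__BIASES__*/", c_array("biases", b))
with open("wand.ino", "w") as f:
    f.write(sketch)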
Wand Version 1
Hardware
Like the software and the overall design, version one of the hardware has undergone several design iterations, both in my head and then in the world. The original project used a 3D printed case to enclose the electronics and battery. Thinking about this, I started to consider natural materials I could use; a piece of wood in the style of Harry Potter? A piece of rock or crystal at the end of a handle? We had bought these long quartz crystals intending to make something out of them, and I started trying to figure out how to hollow them out to make room for the electronics. Then I thought: epoxy! Why don't I just encase all of the electronics in epoxy resin? Then I can make it any shape I want. So as I periodically revisited this project I would brainstorm about how to use a mold and epoxy to make the wand (I had a BUNCH of time to brainstorm over the years).
I started by thinking that I could just make a long skinny cone shape out of wax paper or something that would release from the epoxy. Then I made the mistake of searching mold making on YouTube and found this little gem. Bill Doran at Punished Props has a molding and casting video playlist that I contributed a significant number of views towards. The two-part mold making video below was particularly inspiring, and if you want to build a two-part mold and you watch that video 100 times and you think you are ready to build your mold, watch it a 101st time; more on that later.
Set on building a mold like Bill did in that video, my brainstorming turned to how to make a master. I had drawings of hexagonal shapes that would terminate at different lengths along the wand; I searched online for hexagonal rod and bought some to try and mock it up; I tried triangular rod; I looked for canned plastic shapes to buy to make the master for the mold. I am not sure what reminded me of 3D printing, but once it did, I figured I could print any shape I wanted and wouldn't have to worry about gluing dozens of pieces together. Also, I had flirted with learning Blender before, and this would give me a good project to get me over the initial learning curve. I made extensive use of the Blender 2.8 Fundamentals series on YouTube, as it walks you through the basics in a bunch of short videos.
I started with the same hexagon shape ideas, one of which is shown below, and slowly built up my Blender chops. I fiddled around with this design for a while, and then I remembered that my daughter had requested that the wand have a star on the end.
Playing around with Platonic solids, I figured it would be cool to not just do a pentagram star but to use a stellated dodecahedron as the main shape. I am not going to fuss around with writing out all of the steps to make this shape, but I will say that playing around with extruding and changing the shape of the extruded faces is what got me most of the way on this design. Below there is an example video that should give you some ideas on how to make this shape or modify it to make it your own.
And since we already have the final wand shape in Blender, in addition to doing a screen capture of it we can render little demo videos.
Now with our final shape, we can start to look at the 3D Print tools in Blender. The first thing I did was to calculate the volume, which came to ~200 cm^3. I then looked online at the per-volume cost of 3D print services and immediately set to figuring out how to reduce this volume. I accomplished this by using the Boolean operators with an extruded cylinder to hog out the interior.
This removed about 100 cm^3 of the volume, dropping the print price considerably.
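If you would rather script this step than click through the UI, here is a minimal bpy sketch of the same hollowing idea; the object name "Wand", the cylinder dimensions, and the modifier name are all made up for illustration (I did this through the UI):

import bpy

# assumption: the wand mesh is named "Wand" in the scene
wand = bpy.data.objects["Wand"]

# add a cylinder to act as the cutter for the interior
bpy.ops.mesh.primitive_cylinder_add(radius=0.8, depth=20, location=(0, 0, 0))
cutter = bpy.context.active_object

# boolean difference hogs out the interior of the wand
mod = wand.modifiers.new(name="Hollow", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = cutter

bpy.context.view_layer.objects.active = wand
bpy.ops.object.modifier_apply(modifier="Hollow")  # Blender 2.9+ signature

# delete the cutter now that the modifier has been applied
bpy.data.objects.remove(cutter, do_unlink=True)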
Now, on with the 3D printing. I know that personal 3D printers are plentiful and fairly good nowadays, but I don't really have the room, and seeing as this was my first 3D print, I didn't anticipate using one a ton. Also, if I get into a good workflow with one of the print shops I can potentially use different materials, such as precious metals and even titanium!! So I looked online for 3D print services expecting to choose from among a couple of reputable print shops. Instead there were about a dozen ads in the Google search before the first non-paid link, and from what I could tell most of them looked reputable. So, again not knowing what I was doing, I just picked one and went with it.
I chose i.materialize.com and their file upload and quote service worked straight away. I paid about £30 per half and received the parts in about 10 days.
I am going to pause here and go on a (short) philosophical aside about how amazing it is that we can generate any shape that we want (in free software), send the bits to a company maybe in another country, and then get that shape shipped to us in the real world 10 days later (and during a global pandemic, no less). This is utterly amazing to me and speaks to the deep philosophical need to get our ideas out of our heads and into the real world. This was an almost spiritual experience.
OK, back to hacking. Full of anticipation and excitement, I opened the package, pulled out the first part, and marvelled at my creation. The walls looked a little thin in parts, but the detail was very good and I could not detect any deviation from the design I had submitted. Excited to see what the two halves looked like when put together, I pulled out the other half and my heart sank a bit: it had broken in shipment.
This wasn't such a huge deal, as I was going to glue the two halves together anyway, but I was a little bummed that it hadn't survived suspended in a giant box by a bunch of packing peanuts. As I looked closer, though, it appeared to have been glued and then broken again, so I assume it broke at the print shop, they glued it, and then it broke once more in shipping.
Again, not a big deal, but I wrote i.materialize, sent them photos, and they said they would reprint the piece... below is what I received.
Not only had they reprinted the part, but they had printed a custom (automatically generated I assume) wire-frame cage around it to protect it in shipping. I had already glued the other one and started the claying in process so rather than break apart this 3D work of art, I kept it as a tribute to the fine folks at i.materialize and their customer service. It seems as though I picked right when selecting (nearly at random) which print shop to use.
Alright, watch the two-part mold (mould, if I am true to my host country) video for the 57th time, and crack on with claying it in. For my first go at this I built the box first, and this was a mistake. When trying to smooth the clay surface it is really hard to access the bits under the dodecahedron stars if the box is already built. I also used a medium-hardness clay to start, and this proved really difficult to work. So, with the decision to restart the claying in and some new blocks of soft clay, I began to build the mold.
As you would see in the Punished Props video, the benefit of a two-part mold made in this way is that you can put in registration marks so that the two parts of the mold line up and your part isn't off axis.
Now it is time to pour the silicone rubber. I chose to go with MoldMax 30 because that seemed like a good balance of strength and flexibility. I wanted something without too high a Shore hardness because of the overhanging points of the star; higher flexibility will be good for getting the master and the finished part out of the mold. The silicone rubber is definitely not cheap, and this shape without uniform thickness means that a lot of rubber goes into the mold, especially around the handle section. You can see that I tapered the box sides in near the handle, and although I thought about putting the mold on an incline when pouring the silicone, I decided to keep the top and the bottom parallel to each other.
Alright then, watch the two-part mold video for the 99th and 100th times and get ready to pour the silicone. I bought a kitchen scale because most of the silicone and urethane compounds are two-part mixtures whose ratios are specified by weight.
I don't have a degassing chamber, so I used the "bombs away" method described in the video. There were definitely some bubbles, but I don't think this affected the performance or the quality of the mold at all. Also, you can see in the image above that there are some swirls of white on the top of the mold. This is because I didn't have a big enough stir stick and was left mixing the silicone with a tongue depressor, which left some unmixed bits at the bottom. These swirls went away as it cured over the next 24 hours, but I was sure to get some proper stir sticks before pouring the second part.
Now it is time to remove the clay and pour the other half. This proved to be harder than I expected but once I got the hang of it and took advantage of the fact that the clay tends to stick to itself rather than the silicone it wasn't too difficult.
In the picture above you can see that the registration marks transferred to the silicone very well, in fact much better than I expected. Before pouring the second half I glued on some switches and a button so that these elements of the electronics would sit at the surface of the finished wand.
OK, cool, ready for the second half of the mold. No need to watch the video again, because I have watched it 100 times and I have made half of a two-part mold, so I am pretty much an expert. I am standing here with my new big stir sticks and my scale, I have glued on the switches, but it feels like I am missing something... Huh, nope, ready to go. Pour in the second half.
It was right after the first image above that I realized I should have watched the video a 101st time. I forgot the freaking mold release, the stuff about which Bill says, "if you do one thing, spray on the mold release," because if you don't, you will lose all the hard work of making the two-part mold. I was distraught. I got online right away, ready to buy more silicone and start over, but then I remembered all of the times I have told my daughter, the commissioner of this wand, to take a deep breath, don't freak out, and don't throw something away just because you made a mistake. I could still cut the mold apart and try to use it, even though I would lose the registration marks.
Alright, I can handle this; 24 hours later, let's see how this works. I started by cutting a wavy shape into the mold to get some registration, and then as I started to pull it apart it turned out some of the registration marks were still there!!
It turns out taking a deep breath and slowing down works after all; man, that is good advice I have been giving her, dad win. You can see that one of the switches came unglued, so I had to pull it out of the silicone, but that is no big deal.
I was really pleased with how precisely the silicone took the shape of the master. The layer lines in the almost-flat faces of the 3D printed part, the little studs on the button, and of course the sharp edges of the stellated dodecahedron (if you can't tell, I am just trying to type that as much as possible) were all present and well defined in the mold.
The next step was to build out the electronics that go inside the wand. I mostly followed the circuit from the prototype, but I added another LED color and the IR LED. With 12 points on the star, minus one for the handle and another for the IR LED, I had 10 points to populate with LEDs. I decided to put the same color on opposing star points and wire each pair of the same color to a single control pin. For all of the internal wiring I used ~22 gauge magnet wire. I don't recommend doing this, as sanding the enamel off all of the ends before soldering was very tedious.
I used the negative leads from the LEDs to form a sort of basket around the accelerometer. It was difficult to solder 22 gauge magnet wire to the delicate leads of the switches and the button. In the future I think I will design a simple PCB to hold the switches in place and provide breakout through-holes to solder the wire to. I used a Feather proto board to house all of the resistors for the LEDs and to connect all of the devices to 3.3 V and to ground.
I should have put one more button on the wand before making the silicone mold. I didn't realize how much I relied on the tiny reset button on the Feather, and in my current design this was going to get covered in resin. I toyed with cutting another spot into the mold for another button, but I didn't want a button that cried "PRESS ME" to be the reset button for the microcontroller. What I ultimately ended up doing to expose a way to reset the board was to wire the RST pin of the Feather and ground to a right-angle header that comes out the bottom of the handle. Shorting these two pins pulls RST to ground and acts as the reset button. I made sure to test this before casting the resin, and it worked fine. I also cut the connector off of the battery and soldered the leads to the BAT and ground pins of the breakout board. I did this because the battery connector on the board had the wires coming out at a 90 degree angle to the board and took up too much room to fit nicely in the handle of the wand. Plus, all of this part of the circuit will be encased in resin anyway, so there is no benefit to using the connector.
Finally, I tested all of the components of the electronics. The multi_blink, repeat_send, and read_NXP_serial Arduino sketches in the repo are used to test the LEDs, the IR LED, and the accelerometer, respectively. I also used the Adafruit-provided Bluefruit sketches to test the BLE capability.
At last, after probably close to 4 years of dabbling with this idea, it was time to cast it in resin. I had originally chosen a resin purpose-designed for water-clear electronics potting; however, after learning about Shore hardness, I discovered that this product was about as hard as the nipple of a baby bottle (that was the reference on the website where I looked up Shore hardness). So I went with this product instead. A couple of things worried me about it, including the suggestion to only pour in 15 mm thick layers (I had parts thicker than this, and I can't pour my mold in layers... wait, maybe I can, I should try that next time), and the suggestion to use a platinum-cured silicone to cut down on moisture. Still, I figured this product was my best bet, and it also had specs like "completely inert after cured" and "can be machined," etc.
On casting day, I started by watching that video a 101st time (fool me once...), and then gathered up everything and headed outside. The resin ratio was specified by weight, so this provided another good opportunity for a volume and mass lesson (and yet another reminder of how awesome the metric system is: 1 cm^3 = 1 ml). I approximated how much resin to mix by using the 3D print toolbox in Blender again, but this time on the original part rather than the hogged-out halves.
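To make that estimate concrete (the density figure here is my assumption; check your resin's datasheet): the original part is ~200 cm^3, so at a typical polyurethane resin density of roughly 1.05 g/ml that is about 200 × 1.05 ≈ 210 g of mixed resin, and a 1:1 by-weight product would then need roughly 105 g of each part, less a bit for the volume displaced by the battery and electronics.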
As recommended in the video, I poured some of the resin in and then shook it around and banged on the sides a bit to try to get it all the way down into the stellated dodecahedron (+1). One thing I was curious about was how viscous the resin would be and whether I would be able to work it down the handle and into the top of the wand. I have worked with epoxy resin plenty of times, and I remember it being fairly thick. This polyurethane resin, however, was spec'd as having a very low viscosity, and it poured very nicely past the electronics and down into the wand. I was able to guess from how much was left in the pouring container when the mold was full (there should be some left over because of the volume of the battery and electronics) that the majority of the mold volume had resin in it. The demolding time was stated as 60 min, but that was at 25 °C, and since we were outside in the UK it definitely was not 25 °C, so after waiting about 4 hours it was time for the moment of truth.
It worked! There were a couple of blemishes, but it worked! There was one big bubble at the surface of one of the star points; I kind of expected this, as the points closest to the handle had surfaces close to parallel. Also, one of the switches had pulled into the interior of the wand and hence was stuck in the on position. Finally, there was one wire that snuck between the mold halves and so was exposed on the outside. This bit still had the enamel on it, so it didn't compromise the function and instead gave the wand a rather bespoke look. Also, running all of the electronics diagnostic programs showed that everything worked after being encased in the resin! The IR range seems to be a little less, perhaps because the resin is not transparent at those frequencies (this would be an interesting experiment).
Software
Alright, now that I had my finished wand, it was time to collect some more data with the new geometry using the process described above. I collected the data, ran the neural network training Jupyter notebook, inserted the network parameters into the .ino file, and took it for a spin, and a shake, and a counter-clockwise turn.
Being used to setbacks at this point, I didn't expect it to work out of the box, and alas, it didn't. I would perform a gesture and the neural network that got >95% accuracy on the held-out test data wouldn't recognize it, another gesture would be recognized instead, the timing hold wasn't working; a whole bunch of problems.
I grabbed a bunch of data to test what was going on: I recorded a very long file of samples with all of the gestures in it, loaded it into a Jupyter notebook, and ran the calculations step by step. In the end, I think what was happening is that most of the training data had the beginning of each gesture in the same position in the sequence of samples. For example, a gesture would usually start at sample 2 of the training data, but as live data was shifted through the buffer, the gesture would get evaluated starting at sample 1, then 2, then 3, etc. At these other offsets the network would infer another gesture. I tested this by evaluating the test data at 1-, 2-, and 3-sample offsets and taking a look at the confusion matrix. Sure enough, a really good confusion matrix would go to s**t after shifting by a sample or two.
I tried a couple of things to fix this, including data augmentation by shifting the samples in time, adding a bunch more noise samples, and increasing the sample rate of the data to 10 samples per second.
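For reference, here is a minimal sketch of the time-shift augmentation idea; the zero-padding policy and max_shift value are my own illustration, not necessarily what the notebook does:

import numpy as np

def shift_augment(X, n_lookback, n_features, max_shift=2):
    """Return X plus copies shifted forward and backward in time, zero-padded."""
    X_seq = X.reshape(-1, n_lookback, n_features)
    out = [X_seq]
    for s in range(1, max_shift + 1):
        delayed = np.zeros_like(X_seq)
        delayed[:, s:] = X_seq[:, :-s]    # gesture starts s samples later
        advanced = np.zeros_like(X_seq)
        advanced[:, :-s] = X_seq[:, s:]   # gesture starts s samples earlier
        out += [delayed, advanced]
    return np.concatenate(out).reshape(-1, n_lookback * n_features)

# the labels simply repeat once per shifted copy:
# X_aug = shift_augment(X_all, n_lookback, n_features)
# y_aug = np.tile(y_all, 2 * 2 + 1)   # 2 * max_shift + 1 copies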
Conclusion
As any good engineer would do in the face of adversity, I wrote a demo mode program for the wand and then turned it over to my daughter. I know this is kind of anti-climactic, but this is a work in progress, and after hacking on a couple more projects I am sure I will return to this one.
A couple of ideas on where to take this project a step further include: 1) sending accelerometer data via Bluetooth, 2) exploring other models (not LSTMs) for inference, 3) experimenting with the resin's impact on the IR LED, and 4) connecting the wand to other BLE devices, such as smart home devices, i.e. turning on a smart bulb with a magic wand. Of course, you all could come up with an infinite number of variations and improvements, so hack on.
Overall I was very pleased with how the mold making, 3D printing, and evolution of the JSON data reading went, and, to a slightly lesser degree, how the neural network and command inference went. I learned Blender, how to use Fritzing, how many times to watch a YouTube tutorial video, how mold making works, and, as with any hack, a great deal about myself, my design process, my focus, my technical skills, and how to improve all of these.