In the original BABL project, we achieved 86.3% accuracy without much effort, but what if we could improve that number - and do so without a ton of manual tweaking and training? That's the concept behind Edge Impulse's new EON Tuner, which automatically generates the optimal model based on your target and dataset type. Here's a look at how EON Tuner can dramatically improve accuracy in just a few clicks! 🤯
Getting Started 🍰 👩‍💻
To use EON Tuner, first you have to enable it from the Administrative Zone in your project's Dashboard:
then click Save Experiments:
This will add a new EON Tuner menu item on the left:
which when clicked will present you with the target configuration interface.
Development Process 💪 💻
In the original BABL project, we used the Arduino Nano 33 BLE Sense, so we'll configure that as our Target Device, selecting Audible events for our Dataset category and 1000 ms for the Time per inference:
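Before kicking things off, a quick back-of-envelope check on those settings. This is just an illustrative sketch (the constant names are mine, not from the project), assuming the 16 kHz microphone sample rate the Nano 33 BLE Sense uses for audio projects - which the device log later in this post corroborates with its ~0.06 ms interval and 16,000-sample frame:

// Back-of-envelope check of the target settings (assumes the 16 kHz mic
// sample rate confirmed by the device log later in this post).
constexpr int   kSampleRateHz = 16000;                             // Nano 33 BLE Sense mic
constexpr int   kWindowMs     = 1000;                              // "Time per inference"
constexpr int   kFrameSize    = kSampleRateHz * kWindowMs / 1000;  // 16,000 samples per window
constexpr float kIntervalMs   = 1000.0f / kSampleRateHz;           // 0.0625 ms, logged as "0.06 ms"

static_assert(kFrameSize == 16000, "matches 'Frame size: 16000' in the run log");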
With our target set, we can click the big button and let Edge Impulse do the hard stuff:
Now we can literally sit back and relax while the numbers crunch away in the cloud:
It may take several hours to generate and evaluate all of the model variants, so you can navigate away and check back on progress at any time. The optimal models automatically rise to the top of the list, and as you can see, we have a selection of over a dozen models with 100% or high-90s accuracy by the time the process is complete:
While models are sorted by accuracy by default, we can instead prioritize latency, RAM usage, or a particular classification, for example:
Next, we can select the model to replace our original, non-optimized blocks by clicking Select next to the favored result:
If everything went well, our update is confirmed and we can return to our project and see how everything's looking with the superior model applied.
Just to confirm our performance increase, let's take a look at our new model's training results:
Not bad! We've gone from 86.3% accuracy with our original model to 100% with the one that the EON Tuner automagically generated for us! 🤯
Just to make sure everything's good, we can use Live classification to verify our model right there in the browser - in fact, we can even skip the edge-impulse-daemon that we used in our original project by using WebUSB to connect to our Arduino:
We can then use our baby crying sound effects to test our model using the Arduino's built-in microphone:
Live classification provides simple verification that our crying and noise classifications are being applied accurately, without having to build and deploy new firmware to our board.
Everything's looking good in theory, but let's take things out of Edge Impulse Studio and onto some real hardware, just to make sure it's performing as well as it says it will! Last time we generated an Arduino Library, but for even quicker validation, we can have Edge Impulse generate fully-fledged firmware with the Build firmware option under Deployment:
We'll go ahead and take advantage of the EON Compiler's optimizations (without which our new model probably couldn't fit on our Arduino!) and click the big green Build button to generate our firmware. The resultant zip file contains the scripts needed on Linux, Windows, and macOS to flash the included binary onto our device in one click, and then we can run edge-impulse-run-impulse to observe it in action:
Inferencing settings:
Interval: 0.06 ms.
Frame size: 16000
Sample length: 1000 ms.
No. of classes: 2
Starting inferencing, press 'b' to break
Starting inferencing in 2 seconds...
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 273 ms., Anomaly: 0 ms.):
crying: 0.00000
noise: 0.99609
Starting inferencing in 2 seconds...
.
.
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 274 ms., Anomaly: 0 ms.):
crying: 0.00000
noise: 0.99609
Starting inferencing in 2 seconds...
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 274 ms., Anomaly: 0 ms.):
crying: 0.99609
noise: 0.00000
Starting inferencing in 2 seconds...
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 274 ms., Anomaly: 0 ms.):
crying: 0.00000
noise: 0.99609
Starting inferencing in 2 seconds...
.
.
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 274 ms., Anomaly: 0 ms.):
crying: 0.99609
noise: 0.00000
Starting inferencing in 2 seconds...
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 274 ms., Anomaly: 0 ms.):
crying: 0.00000
noise: 0.99609
Starting inferencing in 2 seconds...
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 274 ms., Anomaly: 0 ms.):
crying: 0.99609
noise: 0.00000
Starting inferencing in 2 seconds...
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 273 ms., Anomaly: 0 ms.):
crying: 0.00000
noise: 0.99609
Starting inferencing in 2 seconds...
Recording...
Recording done
Predictions (DSP: 115 ms., Classification: 274 ms., Anomaly: 0 ms.):
crying: 0.00000
noise: 0.99609
.
.
Using our baby crying sound effects from before, you can see in the above results the background noise punctuated by the wailing of our simulated baby! 😢
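If you'd rather stick with the Arduino Library route from the original project and act on these predictions in your own sketch, the fragment below shows one way to turn the classifier output into a cry alert. It's only a sketch of the idea: the header name is a placeholder (the generated library is named after your Edge Impulse project), baby_is_crying is a helper I've made up for illustration, the 0.8 threshold is arbitrary, and it assumes you already fill a signal_t with one second of microphone audio as in the original BABL firmware.

// Fragment only - not the full BABL firmware. Assumes the Arduino library
// exported from this Edge Impulse project; the header name below is a
// placeholder that matches whatever your project is called.
#include <babl_inferencing.h>
#include <string.h>

// Returns true when the "crying" class clears a confidence threshold.
static bool baby_is_crying(const ei_impulse_result_t &result, float threshold) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        if (strcmp(result.classification[ix].label, "crying") == 0) {
            return result.classification[ix].value >= threshold;
        }
    }
    return false;
}

// Inside loop(), once `signal` wraps a fresh 1000 ms audio buffer:
//     ei_impulse_result_t result;
//     if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK &&
//         baby_is_crying(result, 0.8f)) {
//         // raise the BABL alert - LED, buzzer, notification, etc.
//     }

In practice you'd tune that threshold against the Live classification and on-device results shown above.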
Results and Conclusions ✔️
Edge Impulse makes it easy to collect data, train, and deploy a model without a trained data scientist holding your hand every step of the way. With their new EON Tuner, Edge Impulse gives you your own virtual data scientist in the cloud, who will busily generate and evaluate countless models based on your criteria, then present you with the optimal result while you relax... or maybe put your own real-life baby to bed! 😪