The project consists of detecting musical notes (DO, RE, MI, FA, SOL, LA, SI) through the microphone of an Arduino Nano 33 BLE Sense and displaying the output on a musical staff. To do this we will use the Arduino IDE, Edge Impulse, and Processing.
In Edge Impulse the model is trained step by step, starting with samples for each of the chosen notes. For this project we recorded 100 samples per note for training and 10 for testing, for a total of 770 samples (the length and volume of each one vary, since the samples were recorded manually).
Next the impulse is created, which takes raw data, uses signal processing to extract features, and then uses a learning block to classify new data. In the image below you can see the parameters we selected (in the MFE tab we just save the parameters, and in the Classifier tab we check the box under Audio training options and save the parameters again).
As the last step in training, we go to Model Testing, where we can visualize the accuracy of our work through a couple of graphs: the first is a table indicating the level of membership of each tested sample in each category, and below it the Feature explorer, where the samples are represented as dots, each one grouped where the model considers it belongs.
All that remains is to export the model's code: in the Deployment tab we look for Arduino Library and select Build. The library will be downloaded to our computer, and we can then import, view, and modify it from the Arduino IDE.
For this project we must make a few changes to the downloaded library. In the File > Examples menu, locate the downloaded library and select the nano_ble33_sense_microphone_continuous example. When we open it we see many lines of code; if it is compiled and uploaded, the serial monitor prints several lines indicating the notes and their levels of membership. None of that is very useful on its own, so we modify the sketch: in setup() and loop(), comment out the lines that call ei_printf (except those that report errors). Then look for the line that reads: if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)). Above this line we add the conditions needed so that the serial output is a single value depending on the level of membership the microphone recognizes, as seen in the example:
With this, all that remains is to write our Processing code, establishing serial communication with the Arduino. In the Sketch > Import Library menu we select Serial, and an import line appears. We then define the parameters of the interface, load the staff image, and inside draw() create conditions that, depending on the value received from the microphone, draw an ellipse at the position and color of the corresponding note (the full code is attached so it is clear how to do it).
If we do not add these conditions in draw(), what we will see is something like this: