Edge Impulse recently added image classification to their powerful Studio platform. Their tutorial provides instructions for the OpenMV Cam H7 Plus - which is currently only available directly from OpenMV - and obtaining one would have required either waiting 20 business days for $12 shipping, or paying $63 for 3-day shipping. As a backer of the OpenMV Cam H7 Kickstarter, I had a couple of non-Plus cams on hand, as well as a pile of accessories, so I wanted to see what I could accomplish with what I already had. 😤
Getting Started 🔰 👩‍💻

OpenMV provides an impressively comprehensive suite of functionality as part of their goal to become the "Arduino of Machine Vision". This includes the OpenMV IDE, which is available for Linux, Raspberry Pi, Windows, and macOS. Much more than a simple text editor, the OpenMV IDE provides a complete toolkit for working with your OpenMV cam. Go ahead and install it if you have not done so already! Here's how I did this on Ubuntu, after downloading the installer:
cd ~/Downloads
chmod +x openmv-ide-linux-x86_64-2.5.1.run
./openmv-ide-linux-x86_64-2.5.1.run
If this is your first time using the OpenMV Cam H7, you will want to remove the protective film from the camera lens (and may need to focus it), and when you connect the device, you will likely need to update the firmware.
If you do not already have one, you should also create an Edge Impulse Studio account at this time. Create a new project and take note of the name that you gave it. Additionally, you will want to install the Edge Impulse CLI (requires Node.js v10 or higher):
npm install -g edge-impulse-cli
in preparation for uploading your dataset later.
Building a Dataset 📸 💾

With setup complete, it's time to build a dataset! In the OpenMV IDE, choose Tools > Dataset Editor > New Dataset. The Dataset Editor, which appears in the upper left of the IDE, allows you to create .class folders, which will hold the classes of images which we're about to capture.
The tutorial used plant, lamp and unknown for their classes, but my home office contains neither plants nor lamps, so I chose to classify the many instances of Mega Man and LEGO Minifigures which crowd my work area (as mega.class and mini.class respectively), using the New Class Folder button:
Update the dataset_capture_script.py file which is automatically generated as part of your dataset, as follows:
import sensor, image, time
sensor.reset()
sensor.set_pixformat(sensor.RGB565) # Modify as you like.
sensor.set_framesize(sensor.QVGA) # Modify as you like.
sensor.set_windowing((240, 240)) # Modify as you like.
sensor.skip_frames(time = 2000)
clock = time.clock()
while(True):
    clock.tick()
    img = sensor.snapshot()
    print(clock.fps())
With your OpenMV Cam H7 connected via micro-USB cable, click the Connect button to connect your device to the IDE, then the Play button to run your updated dataset_capture_script.py:
One issue I ran into here with my fresh OpenMV IDE installation was that the frame buffer viewer was not revealed automatically - if you don't see a preview image and accompanying histograms, look for a handle on the far right side of the IDE, and drag it left to reveal these tools.
Now, grab a bunch of whatever you plan to classify, select the .class folder for that classification, and use the Data Capture button to take ~30 images of various items of each class, each from various angles. Additionally, take ~50 images of things which do not resemble the items that you are trying to classify for unknown.class.
Once you have a dataset that you're happy with, comprising ~30 images for each class plus ~50 for unknown, you can upload it to Edge Impulse Studio using the handy CLI tool which you installed earlier:
edge-impulse-uploader --clean --format-openmv ~/dev/mega-mini
I am using the --clean option here since I work with a lot of Edge Impulse projects and needed to force the CLI to ask me to select the correct one - if this is your first project, you can omit that parameter. Note that ~/dev/mega-mini is the location of my OpenMV project, which should be changed to match yours.
When the upload completes, you should see something like:
Done. Files uploaded successful: 110. Files that failed to upload: 0.
At this point, we're ready to dive into Edge Impulse Studio and train our model!
In Edge Impulse Studio, select the project which you created earlier and navigate to the Data acquisition tab - you will see the images which you just uploaded, and their three corresponding labels (one for each class, plus unknown). Click on a few sample rows, and you'll see the images that you gathered earlier.
Now we can use all of this yummy data to create what EI call an "Impulse" using the Impulse design tool. In the tutorial, they use 96x96 pixel images for the Images pre-processing block, but since we're working with far greater constraints (the H7 Plus has 32MB of SDRAM + 1MB of SRAM and 32MB of external flash + 2MB of internal flash vs. just 1MB of RAM and 2MB of flash total for the H7), we're going to go with 48x48 pixels here. Add a Transfer Learning block (this lets us boost the small amount of data which we've collected with the power of a larger, well-trained model). The classes from your Data acquisition will propagate as output features and you can go ahead and hit Save Impulse.
The bullets in the left nav under Impulse design serve as a roadmap of the steps required to build our model - you'll see that the small circle next to Create impulse is now green, while Image is grey, so click on Image to continue the process. Our captured images were 240x240 pixels and color, whereas due to memory constraints, we're going to need images that are just 48x48 pixels, and greyscale. Select Grayscale (sic) for Color depth and click Save parameters. You will be automatically redirected to the Feature generation page, where you can click Generate features to reduce the dimensionality of our data to the three features which we care about.
Bonus: you may have noticed that only 94 items are in the training set, despite the fact that we took 30+30+50=110 pictures! Why do you think that might be? 🤔
We're getting close, I promise! Click that last grey circle in the left nav, Transfer learning, and let's get our Neural Network (NN) sorted! Under Training settings, set:
Number of training cycles: 20
Learning rate: 0.0005
Data augmentation: checked
Minimum confidence rating: 0.70
With these settings alone, the model will be too large to run on our 1MB H7, so we need to dive into the Neural network architecture settings. The default MobileNetV2 0.35 CNN is too big for our RAM-limited device, so click Choose a different model and select MobileNetV2 0.1. We could make our model even smaller by reducing the number of neurons in the final layer (to as little as 0 in order to completely omit it) but the default value of 10 will work within our hardware constraints. Click Start training.
Bonus: the Input layer has 2304 features - where did that number come from? Hint: what is the size of our image? 🤔
Once training is complete, if you scroll down (it's actually quite easy to miss at 1080p resolution if you're not looking for it) you will find a handy summary of training performance - this is of particular use to us as we attempt to squeeze our model onto the H7, since the peak memory usage of 440K shown in the tutorial would be far too great - we only have a few hundred KB to play with (experimentation suggests around 272K) - and thankfully our performance summary shows that we need only 179.7K peak! Also handy is the model binary size, which is 272.9K at present, but unlike RAM, we can add more storage easily via a microSD card.
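As a quick sanity check on those numbers, here's some back-of-the-envelope arithmetic using the figures above (the ~272K usable RAM is an experimental estimate, not a spec):
available_ram_kb = 272.0   # approximate usable RAM found by experimentation (assumption)
peak_ram_kb = 179.7        # peak RAM usage reported in the training performance summary
model_flash_kb = 272.9     # model binary size - flash is the easy part thanks to microSD
print("RAM headroom: ~%.1fK" % (available_ram_kb - peak_ram_kb))  # roughly 92K to spare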
Remember those 16 missing items during our training earlier? It's time to dig them back up and put them to use! In order to validate our model, we need some "fresh" images which it hasn't seen before, and rather than require that we put everything on hold to gather some new pics, Edge Impulse conveniently reserved a portion of the original images for testing purposes! Click on Model testing in the left nav, and select all of the available images via the checkbox at the left of the grid header. The accuracy after reducing the image to a 48-pixel greyscale square is not great, but what is great is the tool's ability to help you determine why: under the three dots (⋮) next to each image, you can select Show classification for more detail on what's going on; in my case, the images which were performing poorly were of Bender dressed as a chef and Mega Man with no helmet, which are edge cases that I'm OK with ignoring for now to see how the model performs on-device.
Running our Model 📸 🧠

Now for the exciting part! Edge Impulse makes it really easy to deploy your model to supported hardware - in our case, the OpenMV Cam H7! Just click Deployment from the left nav, select OpenMV library under Create library and click Build. Extract the resultant .zip file and copy the model and label files to your still-connected H7; on Ubuntu, I did this as follows (your destination path for the H7's storage will of course differ):
cd ~/Downloads
unzip ei-openmv-openmv-v3.zip -d ei-openmv-v3
cd ei-openmv-v3
cp labels.txt /media/ishotjr/3E7F-805B
cp trained.tflite /media/ishotjr/3E7F-805B
Note that my download is "v3" after several experiments with trying to get the model to fit - yours will probably be "v1" - but I really like that Edge Impulse versions the files for you so that you're not trying to remember which is which and can easily revert etc. 🥰
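If you want to double-check that both files landed on the board, one quick option (a sketch - run it from the OpenMV IDE with the H7 connected) is to list the board's filesystem:
import os
# trained.tflite and labels.txt should both appear in the listing
print(os.listdir())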
Open the third file from the zip, ei_image_classification.py, in the OpenMV IDE, then run it on the connected H7 using the green Start button as you did during capture earlier. If you look carefully, you'll notice that the generated file differs from the tutorial slightly:
sensor.set_pixformat(sensor.GRAYSCALE)
This is of course because we are using greyscale images, rather than RGB, in order to conserve memory.
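For context, the generated script is built around OpenMV's tf module; a minimal sketch of the kind of loop it contains looks something like this (your generated ei_image_classification.py is the authoritative version - exact arguments and details may differ by firmware):
import sensor, image, time, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # greyscale to conserve RAM on the non-Plus H7
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))
sensor.skip_frames(time=2000)

labels = [line.rstrip('\n') for line in open("labels.txt")]

clock = time.clock()
while(True):
    clock.tick()
    img = sensor.snapshot()
    # Run the Edge Impulse model copied to the H7's storage against the current frame
    for obj in tf.classify("trained.tflite", img):
        for label, score in zip(labels, obj.output()):
            print("%s = %f" % (label, score))
    print(clock.fps())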
Okay!! That's it!! We are all set for live classification!! Due to the compromises we had to make in order to fit within the limitations of the non-Plus H7 model, our accuracy won't match that of the device with 32x as much RAM, but we also didn't have to wait 20 business days or pay $63 in shipping to start playing with this exciting new functionality! 😌
While the 32MB H7 Plus may provide more accurate results, reducing the image size by 75% and opting for the MobileNetV2 0.1 model allowed us to operate within the extreme constraints of the regular H7, with 70-80% accuracy in many cases.
There's still some room in the H7's RAM after the above compromises - and Edge Impulse makes it easy to tweak parameters and create and deploy new models - so I'm looking forward to seeing how much better performance we might be able to squeeze out of 1MB! In addition, Edge Impulse have some new models in the works that should help even more! 💡🤯