This tutorial is part of the series “TinyML Made Easy, Hands-On with the Arduino Nicla Vision”, collected in an open e-book, which you can download in .PDF and .EPUB versions.
As we continue our studies into embedded machine learning or tinyML, it's impossible to overlook the transformative impact of Computer Vision (CV) and Artificial Intelligence (AI) in our lives. These two intertwined disciplines redefine what machines can perceive and accomplish, from autonomous vehicles and robotics to healthcare and surveillance.
More and more, we are facing an artificial intelligence (AI) revolution where, as stated by Gartner, Edge AI has a very high impact potential, and it is happening now!
In the "bullseye" of the emerging technologies radar is Edge Computer Vision, and when we talk about Machine Learning (ML) applied to vision, the first thing that comes to mind is Image Classification, a kind of ML "Hello World"!
This tutorial will explore computer vision projects utilizing Convolutional Neural Networks (CNNs) for real-time image classification. Leveraging TensorFlow's robust ecosystem, we'll implement a pre-trained MobileNet model and adapt it for edge deployment. The focus will be optimizing the model to run efficiently on resource-constrained hardware without sacrificing accuracy. We'll employ techniques like quantization and pruning to reduce the computational load. By the end of this tutorial, you'll have a working prototype capable of classifying images in real-time, all running on a low-power embedded system based on the Arduino Nicla Vision.
The Nicla Vision
The Arduino Nicla Vision (or NiclaV) is a development board that includes two processors that can run tasks in parallel. It is part of a family of development boards with the same form factor but designed for specific tasks, such as the Nicla Sense ME and the Nicla Voice. The Niclas can efficiently run processes created with TensorFlow™ Lite. For example, one of the cores of the NiclaV can compute a computer vision algorithm on the fly (inference), while the other handles low-level operations like controlling a motor and communicating or acting as a user interface.
The onboard wireless module allows the management of WiFi and Bluetooth Low Energy (BLE) connectivity simultaneously.
The central processor is the dual-core STM32H747, including a Cortex® M7 at 480 MHz and a Cortex® M4 at 240 MHz. The two cores communicate via a Remote Procedure Call mechanism that seamlessly allows calling functions on the other processor. Both processors share all the on-chip peripherals and can run:
- Arduino sketches on top of the Arm® Mbed™ OS
- Native Mbed™ applications
- MicroPython / JavaScript via an interpreter
- TensorFlow™ Lite
Memory is crucial for embedded machine learning projects. The NiclaV board can host up to 16 MB of QSPI Flash for storage. However, it is essential to consider that the MCU SRAM is what is used for machine learning inferences; the STM32H747 has only 1MB of SRAM, shared by both processors. This MCU also incorporates 2MB of Flash, mainly for code storage.
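Once the OpenMV firmware is running on the board (we will install it later in this tutorial), a quick way to check how much of that SRAM is actually free is to query MicroPython's standard gc module. This is just an illustrative sketch:

import gc

gc.collect()                                              # run a garbage collection first
print("Free SRAM: {} bytes".format(gc.mem_free()))        # memory still available to MicroPython
print("Allocated SRAM: {} bytes".format(gc.mem_alloc()))  # memory already in use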
Sensors
- Camera: A GC2145 2 MP Color CMOS Camera.
- Microphone: A MP34DT05, an ultra-compact, low-power, omnidirectional, digital MEMS microphone built with a capacitive sensing element and an IC interface.
- 6-Axis IMU: 3D gyroscope and 3D accelerometer data from the LSM6DSOX 6-axis IMU.
- Time of Flight Sensor: The VL53L1CBV0FY Time-of-Flight sensor adds accurate and low power-ranging capabilities to the Nicla Vision. The invisible near-infrared VCSEL laser (including the analog driver) is encapsulated with receiving optics in an all-in-one small module below the camera.
Start by connecting the board (USB-C) to your computer:
Install the Mbed OS core for Nicla boards in the Arduino IDE. With the IDE open, navigate to Tools > Board > Board Manager, look for Arduino Nicla Vision in the search window, and install the board.
Next, go to Tools > Board > Arduino Mbed OS Nicla Boards and select Arduino Nicla Vision. With your board connected to the USB, you should see the Nicla on Port; select it.
Open the Blink sketch on Examples/Basic and run it using the IDE Upload button. You should see the built-in LED (green RGB) blinking, which means the Nicla board is correctly installed and functional!
Testing the Microphone
On the Arduino IDE, go to Examples > PDM > PDMSerialPlotter, open and run the sketch. Open the Plotter and see the audio representation from the microphone:
Vary the frequency of the sound you are generating and confirm that the mic is working correctly.
Testing the IMU
Before testing the IMU, it will be necessary to install the LSM6DSOX library. For that, go to Library Manager and look for LSM6DSOX.
Install the library provided by Arduino:
Next, go to Examples > Arduino_LSM6DSOX > SimpleAccelerometer and run the accelerometer test (you can also run Gyro and board temperature):
Testing the ToF (Time of Flight) Sensor
As we did with the IMU, it is necessary to install the ToF library, the VL53L1X. For that, go to Library Manager and look for VL53L1X. Install the library provided by Pololu:
Next, run the sketch proximity_detection.ino:
On the Serial Monitor, you will see the distance between the camera and an object in front of it (max of 4m).
Testing the Camera
We can also test the camera using, for example, the code provided on Examples > Camera > CameraCaptureRawBytes. We cannot see the image, but it is possible to get the raw image data generated by the camera.
Anyway, the best test with the camera is to see a live image. For that, we will use another IDE, the OpenMV.
Installing the OpenMV IDE
OpenMV IDE is the premier integrated development environment for use with OpenMV Cameras and the Arduino Pro boards. It features a powerful text editor, debug terminal, and frame buffer viewer with a histogram display. We will use MicroPython to program the camera.
Go to the OpenMV IDE page, download the correct version for your Operating System, and follow the instructions for its installation on your computer.
The IDE should open, defaulting to the helloworld_1.py code in its Code Area.
Any messages sent through a serial connection (using print() or error messages) will be displayed on the Serial Terminal during run time. The image captured by the camera will be displayed in the Camera Viewer Area (or Frame Buffer) and in the Histogram area, immediately below the Camera Viewer.
Before connecting the Nicla to the OpenMV IDE, ensure you have the latest bootloader version. To do that, go to your Arduino IDE, select the Nicla board, and open the sketch on Examples > STM_32H747_System > STM_32H747_updateBootloader.
Upload the code to your board. The Serial Monitor will guide you.
After updating the bootloader, put the Nicla Vision in bootloader mode by double-pressing the reset button on the board. The built-in green LED will start fading in and out. Now return to the OpenMV IDE and click on the connect icon (Left ToolBar):
A pop-up will tell you that a board in DFU mode was detected and ask you how you would like to proceed. First, select "Install the latest release firmware." This action will install the latest OpenMV firmware on the Nicla Vision.
You can leave the option of erasing the internal file system unselected and click [OK].
Nicla's green LED will start flashing while the OpenMV firmware is uploaded to the board, and a terminal window will then open, showing the flashing progress.
Wait until the green LED stops flashing and fading. When the process ends, you will see a message saying, "DFU firmware update complete!". Press [OK].
A green play button appears on the Tool Bar when the Nicla Vision connects.
Also, note that a drive named “NO NAME” will appear on your computer:
Every time you press the [RESET] button on the board, it automatically executes the main.py script stored on it. You can load the main.py code on the IDE (File > Open File...).
This code is the "Blink" code, confirming that the HW is OK.
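If you open main.py, you should find something along these lines (a minimal MicroPython "Blink" sketch; the factory script may differ slightly):

import time
from pyb import LED

led = LED(2)            # built-in green LED

while True:
    led.toggle()        # invert the LED state
    time.sleep_ms(500)  # wait half a second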
To test the camera, let's run helloworld_1.py. For that, select the script on File > Examples > HelloWorld > helloworld_1.py.
When clicking the green play button, the MicroPython script (helloworld_1.py) in the Code Area will be uploaded and run on the Nicla Vision. In the Camera Viewer, you will start to see the video streaming. The Serial Monitor will show us the FPS (frames per second), which should be around 14fps.
Let's go through the helloworld_1.py script:
The code can be split into two parts:
- Setup: Where the libraries are imported and initialized, and the variables are defined and initialized.
- Loop: The (while loop) part of the code that runs continuously. The image (img variable) is captured (a frame). Each of those frames can be used for inference in Machine Learning applications, as shown in the sketch below.
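For reference, the helloworld_1.py shipped with OpenMV is essentially the following (reproduced here as a sketch; check the version in your IDE for the exact code):

import sensor, image, time

# Setup: initialize the camera
sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time=2000)       # Wait for settings to take effect.
clock = time.clock()                # Create a clock object to track the FPS.

# Loop: capture frames continuously
while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    print(clock.fps())              # Print the FPS to the Serial Terminal.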
To interrupt the program execution, press the red [X] button.
In my GitHub, you can find Python scripts to test other sensors.
Connecting the Nicla Vision to Edge Impulse Studio
We will use the Edge Impulse Studio later in this project. Edge Impulse is a leading development platform for machine learning on edge devices.
Edge Impulse officially supports the Nicla Vision. So, to get started, please create a new project on the Studio (we will use this project later for our Image Classification application) and connect the Nicla to it. For that, follow these steps:
- Download the latest EI Firmware and unzip it.
- Open the zip file on your computer and select the uploader related to your OS:
- Put the Nicla Vision in Boot Mode by pressing the reset button twice.
- Execute the specific batch code for your OS to upload the binary (arduino-nicla-vision.bin) to your board.
Go to your project on the Studio, and on the Data Acquisition tab, select WebUSB (1). A window will appear; choose the option showing that the Nicla is paired (2) and press Connect (3).
In the collect data section, you can choose which sensor data to pick. For example, IMU data:
Or Image:
And so on. You can also test an external sensor connected to the Nicla ADC (pin 0) and the other onboard sensors, such as the microphone and the ToF.
After testing the sensors, delete that data, since the dataset to be used in this project will be collected externally and then uploaded to the Studio.
Computer Vision
At its core, computer vision aims to enable machines to interpret and make decisions based on visual data from the world—essentially mimicking the capability of the human optical system. Conversely, AI is a broader field encompassing machine learning, natural language processing, and robotics, among other technologies. When you bring AI algorithms into computer vision projects, you supercharge the system's ability to understand, interpret, and react to visual stimuli.
When discussing Computer Vision projects applied to embedded devices, the most common applications that come to mind are Image Classification and Object Detection.
Both models can be implemented on tiny devices like the Arduino Nicla Vision and used on real projects.
Go to the 2nd part of this tutorial, for the Object Detection project.
Let's start with Image Classification.
Image Classification Project
The first step in any ML project is to define our goal. In this case, it is to detect and classify two specific objects present in one image. For this project, we will use two small toys: a robot and a small Brazilian parrot (named Periquito). We will also collect images of a background where those two objects are absent.
Once you have defined your Machine Learning project goal, the next and most crucial step is the dataset collection. You can use the Edge Impulse Studio, the OpenMV IDE we installed, or even your phone for the image capture. Here, we will use the OpenMV IDE for that.
Collecting Dataset with OpenMV IDE
First, create a folder on your computer where your data will be saved, for example, "data". Next, on the OpenMV IDE, go to Tools > Dataset Editor and select New Dataset to start the dataset collection:
The IDE will ask you to open the folder where your data will be saved; choose the "data" folder you just created. Note that new icons will appear on the Left panel.
Using the upper icon (1), enter the first class name, for example, "periquito":
Run the dataset_capture_script.py and click on the bottom icon (2) to start capturing images:
Repeat the same procedure with the other classes.
I suggest around 60 images from each category. Try to capture different angles, backgrounds, and light conditions.
The stored images use a QVGA frame size (320x240) and the RGB565 color pixel format.
After capturing your dataset, close the Dataset Editor Tool on Tools > Dataset Editor.
On your computer, you will end up with a dataset containing three classes: periquito, robot, and background.
You should return to Edge Impulse Studio and upload the dataset to your project.
Training the model with Edge Impulse Studio
As commented before, we will use the Edge Impulse Studio to train our model. Enter your account credentials at Edge Impulse and create a new project:
Here, you can clone my project: NICLA-Vision_Image_Classification.
Dataset
Using the EI Studio (or simply Studio), we will go through four main steps to have our model ready for use on the Nicla Vision board: Dataset, Impulse, Tests, and Deploy (on the Edge Device, in this case, the NiclaV).
Regarding the Dataset, it is essential to point out that our Original Dataset, captured with the OpenMV IDE, will be split into three parts: Training, Validation, and Test. The Test Set will be separated from the beginning and set apart, to be used only in the Test phase after training. The Validation Set will be used during training.
On the Studio, go to the Data acquisition tab, and in the UPLOAD DATA section, upload the files of the chosen categories from your computer:
Leave it to the Studio to automatically split the original dataset into training and test sets, and choose the label related to that specific data:
Repeat the procedure for all three classes. At the end, you should see your "raw data" in the Studio:
The Studio allows you to explore your data, showing a complete view of all the data in your project. You can clear labels and inspect or change labels by clicking on individual data items. In our case, a simple project, the data seems OK.
In this phase, we should define how to:
- Pre-process our data, which consists of resizing the individual images and determining the color depth to use (RGB or Grayscale), and
- Design a Model, which in this case will be "Transfer Learning (Images)" to fine-tune a pre-trained MobileNet V2 image classification model on our data. This method performs well even with relatively small image datasets (around 150 images in our case).
Transfer Learning with MobileNet offers a streamlined approach to model training, which is especially beneficial for resource-constrained environments and projects with limited labeled data. MobileNet, known for its lightweight architecture, is a pre-trained model that has already learned valuable features from a large dataset (ImageNet).
By leveraging these learned features, you can train a new model for your specific task with fewer data and computational resources, yet achieve competitive accuracy.
This approach significantly reduces training time and computational cost, making it ideal for quick prototyping and deployment on embedded devices where efficiency is paramount.
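To make the idea concrete, here is a minimal Keras sketch of this kind of transfer learning, under a few assumptions (96x96 RGB input, three classes, and α=0.35, since Keras only ships ImageNet weights for α ≥ 0.35; Edge Impulse uses its own α=0.1 MobileNetV2 weights internally):

import tensorflow as tf

# Frozen MobileNetV2 feature extractor, pre-trained on ImageNet
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), alpha=0.35,
    include_top=False, weights='imagenet')
base.trainable = False  # keep the pre-trained features fixed

# Small trainable head on top (mirrors the dense layer + dropout used later)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation='relu'),
    tf.keras.layers.Dropout(0.15),
    tf.keras.layers.Dense(3, activation='softmax')  # periquito, robot, background
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])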
Go to the Impulse Design Tab and create the impulse, defining an image size of 96x96 and squashing them (squared form, without cropping). Select Image and Transfer Learning blocks. Save the Impulse.
All input QVGA/RGB565 images will be converted to 27,648 features (96x96x3).
Press [Save parameters] and Generate all features:
In 2017, Google introduced MobileNetV1, a family of general-purpose computer vision neural networks designed with mobile devices in mind to support classification, detection, and more. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of various use cases. In 2018, Google launched MobileNetV2: Inverted Residuals and Linear Bottlenecks.
MobileNet V1 and MobileNet V2 aim for mobile efficiency and embedded vision applications but differ in architectural complexity and performance. While both use depthwise separable convolutions to reduce the computational cost, MobileNet V2 introduces Inverted Residual Blocks and Linear Bottlenecks to enhance performance. These new features allow V2 to capture more complex features using fewer parameters, making it computationally more efficient and generally more accurate than its predecessor. Additionally, V2 employs a non-linear activation in the intermediate expansion layer but a linear activation for the bottleneck layer, a design choice found to better preserve important information through the network. MobileNet V2 offers a more optimized architecture for higher accuracy and efficiency and will be used in this project.
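As an illustration of these ideas (not Edge Impulse's code), a MobileNetV2-style inverted residual block can be sketched in Keras as follows: a 1x1 expansion convolution with a non-linear activation (ReLU6), a 3x3 depthwise convolution, and a 1x1 linear bottleneck projection, with a residual connection when the shapes match:

import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, out_channels, expansion=6, stride=1):
    # Sketch of a MobileNetV2 inverted residual block (illustrative only)
    in_channels = x.shape[-1]

    # 1x1 "expansion" convolution with non-linear activation
    y = layers.Conv2D(in_channels * expansion, 1, use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)

    # 3x3 depthwise convolution
    y = layers.DepthwiseConv2D(3, strides=stride, padding='same', use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)

    # 1x1 linear bottleneck projection (no activation, preserving information)
    y = layers.Conv2D(out_channels, 1, use_bias=False)(y)
    y = layers.BatchNormalization()(y)

    # Residual connection when input and output shapes match
    if stride == 1 and in_channels == out_channels:
        y = layers.Add()([x, y])
    return y

# Example: apply one block to a dummy 96x96 feature map with 16 channels
inputs = tf.keras.Input(shape=(96, 96, 16))
outputs = inverted_residual(inputs, out_channels=16)
block = tf.keras.Model(inputs, outputs)
block.summary()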
Although the base MobileNet architecture is already tiny and has low latency, many times, a specific use case or application may require the model to be smaller and faster. MobileNets introduces a straightforward parameter α (alpha) called width multiplier to construct these smaller, less computationally expensive models. The role of the width multiplier α is to thin a network uniformly at each layer.
Edge Impulse Studio has available MobileNet V1 (96x96 images) and V2 (96x96 and 160x160 images), with several different α values (from 0.05 to 1.0). For example, you will get the highest accuracy with V2, 160x160 images, and α=1.0. Of course, there is a trade-off. The higher the accuracy, the more memory (around 1.3M RAM and 2.6M ROM) will be needed to run the model, implying more latency. The smallest footprint will be obtained at the other extreme with MobileNet V1 and α=0.10 (around 53.2K RAM and 101K ROM).
For this project, we will use MobileNetV2 96x96 0.1, which estimates a memory cost of 265.3 KB of RAM. This model should be OK for the Nicla Vision, which has 1MB of SRAM. On the Transfer Learning Tab, select this model:
Another necessary technique to be used with Deep Learning is Data Augmentation. Data augmentation is a method that can help improve the accuracy of machine learning models, creating additional artificial data. A data augmentation system makes small, random changes to your training data during the training process (such as flipping, cropping, or rotating the images).
Under the hood, here you can see how Edge Impulse implements a data augmentation policy on your data:
# Implements the data augmentation policy
# (Note: tf, random, math, and INPUT_SHAPE are defined in the Edge Impulse
# training code that surrounds this snippet.)
def augment_image(image, label):
    # Flips the image randomly
    image = tf.image.random_flip_left_right(image)

    # Increase the image size, then randomly crop it down to
    # the original dimensions
    resize_factor = random.uniform(1, 1.2)
    new_height = math.floor(resize_factor * INPUT_SHAPE[0])
    new_width = math.floor(resize_factor * INPUT_SHAPE[1])
    image = tf.image.resize_with_crop_or_pad(image, new_height, new_width)
    image = tf.image.random_crop(image, size=INPUT_SHAPE)

    # Vary the brightness of the image
    image = tf.image.random_brightness(image, max_delta=0.2)

    return image, label
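To see how such a function is typically used, here is a hypothetical example that maps it over a tf.data pipeline, with a tiny synthetic dataset standing in for the real images (Edge Impulse wires this up internally; the names below are only for illustration):

import math, random
import tensorflow as tf

INPUT_SHAPE = (96, 96, 3)  # matches the 96x96 RGB impulse defined above

# Tiny synthetic dataset, just to exercise augment_image()
images = tf.random.uniform((8, *INPUT_SHAPE))
labels = tf.constant([0, 1, 2, 0, 1, 2, 0, 1])
train_ds = tf.data.Dataset.from_tensor_slices((images, labels))

# Apply the augmentation on the fly while training
train_ds = (train_ds
            .map(augment_image, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(4)
            .prefetch(tf.data.AUTOTUNE))

for batch_images, batch_labels in train_ds.take(1):
    print(batch_images.shape, batch_labels.numpy())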
Exposure to these variations during training can help prevent your model from taking shortcuts by "memorizing" superficial clues in your training data, meaning it may better reflect the deep underlying patterns in your dataset.
The final layer of our model will have 12 neurons with a 15% dropout for overfitting prevention. Here is the Training result:
The result is excellent, with 77ms of latency, which should result in 13fps (frames per second) during inference.
Model Testing
Now, you should take the data that was set apart at the start of the project and use it as input to run the trained model:
The result was, again, excellent.
At this point, we can deploy the trained model as .tflite and use the OpenMV IDE to run it using MicroPython, or we can deploy it as a C/C++ or an Arduino library.
Arduino Library
First, let's deploy it as an Arduino Library:
You should install the library as a .zip file in the Arduino IDE and run the sketch nicla_vision_camera.ino, available in Examples under your library name.
Note that the Arduino Nicla Vision has, by default, 512KB of RAM allocated for the M7 core and an additional 244KB in the M4 address space. In the code, this allocation was changed to 288 kB to guarantee that the model will run on the device (malloc_addblock((void*)0x30000000, 288 * 1024);).
The result was good, with 86ms of measured latency.
Here is a short video showing the inference results:
OpenMV
It is possible to deploy the trained model to be used with OpenMV in two ways: as a library and as a firmware.
As a library, three files are generated: the .tflite model, a list with the labels, and a simple MicroPython script that can make inferences using the model.
From my tests, running this model as a .tflite directly on the Nicla was impossible. So, we can either sacrifice accuracy by using a smaller model or deploy the model as OpenMV Firmware (FW). As FW, the Edge Impulse Studio generates optimized models, libraries, and frameworks needed to make the inference. Let's explore this last option.
Select OpenMV Firmware on the Deploy Tab and press [Build].
On your computer, you will find a ZIP file. Open it:
Use the Bootloader tool on the OpenMV IDE to load the FW on your board:
Select the appropriate file (.bin for Nicla-Vision):
After the download is finished, press OK:
If a message says that the FW is outdated, DO NOT UPGRADE. Select [NO].
Now, open the script ei_image_classification.py that was downloaded from the Studio together with the .bin file for the Nicla.
And run it. Pointing the camera at the objects we want to classify, the inference result will be displayed on the Serial Terminal.
Changing Code to add labels:
The code provided by Edge Impulse can be modified so that we can see, for test reasons, the inference result directly on the image displayed on the OpenMV IDE.
Upload the code from my GitHub, or modify it as below:
# Marcelo Rovai - NICLA Vision - Image Classification
# Adapted from Edge Impulse - OpenMV Image Classification Example
# @24Aug23

import sensor, image, time, os, tf, uos, gc

sensor.reset()                       # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)  # Set pxl fmt to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)    # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))     # Set 240x240 window.
sensor.skip_frames(time=2000)        # Let the camera adjust.

net = None
labels = None

try:
    # Load built in model
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)

clock = time.clock()
while(True):
    clock.tick()  # Starts tracking elapsed time.

    img = sensor.snapshot()

    # default settings just do one detection
    for obj in net.classify(img,
                            min_scale=1.0,
                            scale_mul=0.8,
                            x_overlap=0.5,
                            y_overlap=0.5):
        fps = clock.fps()
        lat = clock.avg()

        print("**********\nPrediction:")
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        max_val = predictions_list[0][1]
        max_lbl = 'background'
        for i in range(len(predictions_list)):
            val = predictions_list[i][1]
            lbl = predictions_list[i][0]

            if val > max_val:
                max_val = val
                max_lbl = lbl

        # Print label with the highest probability
        if max_val < 0.5:
            max_lbl = 'uncertain'
        print("{} with a prob of {:.2f}".format(max_lbl, max_val))
        print("FPS: {:.2f} fps ==> latency: {:.0f} ms".format(fps, lat))

        # Draw label with highest probability to image viewer
        img.draw_string(
            10, 10,
            max_lbl + "\n{:.2f}".format(max_val),
            mono_space = False,
            scale=2
            )
Here you can see the result:
Note that the latency (136 ms) is almost double what we got directly with the Arduino IDE. This is because we are using the IDE as an interface, and the measurement also includes the time spent waiting for the camera to be ready. If we start the clock just before the inference:
The latency will drop to only 71 ms.
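In practice, this means moving the clock.tick() call so that the frame capture is excluded from the measurement, for example:

while(True):
    img = sensor.snapshot()   # capture the frame first
    clock.tick()              # start timing right before the inference
    for obj in net.classify(img,
                            min_scale=1.0,
                            scale_mul=0.8,
                            x_overlap=0.5,
                            y_overlap=0.5):
        fps = clock.fps()
        lat = clock.avg()     # latency now reflects mostly the inference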
When working with embedded machine learning, we are looking for devices that can continually run inference and act on the results, taking some action directly in the physical world rather than displaying them on a connected computer. To simulate this, we will define one LED to light up for each of the possible inference results.
For that, we should upload the code from my GitHub or change the last code to include the LEDs:
# Marcelo Rovai - NICLA Vision - Image Classification with LEDs
# Adapted from Edge Impulse - OpenMV Image Classification Example
# @24Aug23

import sensor, image, time, os, tf, uos, gc, pyb

ledRed = pyb.LED(1)
ledGre = pyb.LED(2)
ledBlu = pyb.LED(3)

sensor.reset()                       # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)  # Set pixl fmt to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)    # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))     # Set 240x240 window.
sensor.skip_frames(time=2000)        # Let the camera adjust.

net = None
labels = None

ledRed.off()
ledGre.off()
ledBlu.off()

try:
    # Load built in model
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)

clock = time.clock()

def setLEDs(max_lbl):
    if max_lbl == 'uncertain':
        ledRed.on()
        ledGre.off()
        ledBlu.off()

    if max_lbl == 'periquito':
        ledRed.off()
        ledGre.on()
        ledBlu.off()

    if max_lbl == 'robot':
        ledRed.off()
        ledGre.off()
        ledBlu.on()

    if max_lbl == 'background':
        ledRed.off()
        ledGre.off()
        ledBlu.off()

while(True):
    img = sensor.snapshot()
    clock.tick()  # Starts tracking elapsed time.

    # default settings just do one detection.
    for obj in net.classify(img,
                            min_scale=1.0,
                            scale_mul=0.8,
                            x_overlap=0.5,
                            y_overlap=0.5):
        fps = clock.fps()
        lat = clock.avg()

        print("**********\nPrediction:")
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        max_val = predictions_list[0][1]
        max_lbl = 'background'
        for i in range(len(predictions_list)):
            val = predictions_list[i][1]
            lbl = predictions_list[i][0]

            if val > max_val:
                max_val = val
                max_lbl = lbl

        # Print label and turn on LED with the highest probability
        if max_val < 0.8:
            max_lbl = 'uncertain'

        setLEDs(max_lbl)

        print("{} with a prob of {:.2f}".format(max_lbl, max_val))
        print("FPS: {:.2f} fps ==> latency: {:.0f} ms".format(fps, lat))

        # Draw label with highest probability to image viewer
        img.draw_string(
            10, 10,
            max_lbl + "\n{:.2f}".format(max_val),
            mono_space = False,
            scale=2
            )
Now, each time a class gets a result greater than 0.8, the corresponding LED will light up, as below:
- LED Red On: Uncertain (no class is over 0.8)
- LED Green On: Periquito > 0.8
- LED Blue On: Robot > 0.8
- All LEDs Off: Background > 0.8
Here is the result:
In more detail
Several development boards can be used for embedded machine learning (tinyML), and the most common ones for Computer Vision applications (with low energy) are the ESP32-CAM, the Seeed XIAO ESP32S3 Sense, the Arduino Nicla Vision, and the Portenta.
Taking the opportunity, the same trained model was deployed on the ESP32-CAM, the XIAO, and the Portenta (on this last one, the model was trained again using grayscale images to be compatible with its camera). Here is the result, deploying the models as Arduino libraries:
One last item to explore: sometimes, during prototyping, it is essential to experiment with external sensors and devices, and an excellent expansion for the Nicla is the Arduino MKR Connector Carrier (Grove compatible).
The shield has 14 Grove connectors: five single analog inputs, one double analog input, five single digital I/Os, one double digital I/O, one I2C, and one UART. All connectors are 5V compatible.
Note that although all 17 Nicla Vision pins will be connected to the Shield's Grove connectors, some Grove connections remain unconnected.
This shield is MKR compatible and can be used with the Nicla Vision and the Portenta.
For example, suppose that on a TinyML project you want to send inference results using a LoRaWAN device and add information about local luminosity. Also, for offline operation, a local low-power display such as an OLED is advisable. This setup can be seen here:
The Grove Light Sensor would be connected to one of the single Analog pins (A0/PC4), the LoRaWan device to the UART, and the OLED to the I2C connector.
The Nicla pins 3 (Tx) and 4 (Rx) are connected to the Shield Serial connector. The UART communication is used with the LoRaWAN device. Here is a simple code to use the UART:
# UART Test - By: marcelo_rovai - Sat Sep 23 2023

import time
from pyb import UART
from pyb import LED

redLED = LED(1)  # built-in red LED

# Init UART object.
# Nicla Vision's UART (TX/RX pins) is on "LP1"
uart = UART("LP1", 9600)

while(True):
    uart.write("Hello World!\r\n")
    redLED.toggle()
    time.sleep_ms(1000)
To verify that the UART is working, you can, for example, connect another device, such as an Arduino UNO, and display the "Hello World!" messages.
Here is a Hello World code to be used with the I2C OLED. The MicroPython SSD1306 OLED driver (ssd1306.py), created by Adafruit, should also be uploaded to the Nicla (the ssd1306.py can be found in the project GitHub).
# Nicla_OLED_Hello_World - By: marcelo_rovai - Sat Sep 30 2023
#Save on device: MicroPython SSD1306 OLED driver, I2C and SPI interfaces created by Adafruit
import ssd1306
from machine import I2C
i2c = I2C(1)
oled_width = 128
oled_height = 64
oled = ssd1306.SSD1306_I2C(oled_width, oled_height, i2c)
oled.text('Hello, World', 10, 10)
oled.show()
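For the setup described above, a hypothetical helper like the one below could be appended to the classification script to show the latest result on the OLED (the function name and layout are only illustrative):

def show_prediction(oled, label, prob):
    # Display the latest inference result on the SSD1306 OLED
    oled.fill(0)                                   # clear the display
    oled.text("Pred: {}".format(label), 0, 0)
    oled.text("Prob: {:.2f}".format(prob), 0, 16)
    oled.show()

# Example call (inside the classification loop, it would be
# show_prediction(oled, max_lbl, max_val)):
show_prediction(oled, "periquito", 0.93)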
Finally, here is a simple script to read the ADC value on pin "PC4" (Nicla pin A0):
# Light Sensor (A0) - By: marcelo_rovai - Wed Oct 4 2023
import pyb
from time import sleep
adc = pyb.ADC(pyb.Pin("PC4")) # create an analog object from a pin
val = adc.read() # read an analog value
while (True):
val = adc.read()
print ("Light={}".format (val))
sleep (1)
The ADC can be used with other valuable sensors, such as temperature sensors.
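As a hypothetical example, with an analog temperature sensor such as the LM35 (10 mV/°C) connected to A0, the raw reading could be converted like this (assuming a 12-bit reading and a 3.3 V ADC reference):

# Hypothetical conversion of the raw ADC reading to a temperature (LM35 sensor)
raw = adc.read()                 # 12-bit reading: 0-4095
voltage = raw * 3.3 / 4095       # assuming a 3.3 V ADC reference
temperature_c = voltage / 0.010  # LM35 outputs 10 mV per degree Celsius
print("Temp={:.1f} C".format(temperature_c))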
Note that the above scripts (downloaded from my GitHub) are only intended to introduce how to connect external devices to the Nicla Vision board using MicroPython.
Conclusion
The Arduino Nicla Vision is an excellent tiny device for industrial and professional uses! It is powerful, trustworthy, low-power, and has suitable sensors for the most common embedded machine learning applications, such as vision, movement, and sound.
On my GitHub repository, you will find the latest version of all the code used or commented on in this project.
Before we finish, consider that Computer Vision is more than just image classification. For example, you can develop Edge Machine Learning projects around vision in several areas, such as:
- Autonomous Vehicles: Use sensor fusion, lidar data, and computer vision algorithms to navigate and make decisions.
- Healthcare: Automated diagnosis of diseases through MRI, X-ray, and CT scan image analysis
- Retail: Automated checkout systems that identify products as they pass through a scanner.
- Security and Surveillance: Facial recognition, anomaly detection, and object tracking in real-time video feeds.
- Augmented Reality: Object detection and classification to overlay digital information in the real world.
- Industrial Automation: Visual inspection of products, predictive maintenance, and robot and drone guidance.
- Agriculture: Drone-based crop monitoring and automated harvesting.
- Natural Language Processing: Image captioning and visual question answering.
- Gesture Recognition: For gaming, sign language translation, and human-machine interaction.
- Content Recommendation: Image-based recommendation systems in e-commerce.
Go to the Object Detection project for the 2nd part of this tutorial.
If you want to learn more about Embedded Machine Learning (TinyML), please see these references:
- "TinyML - Machine Learning for Embedding Devices" - UNIFEI
- "Professional Certificate in Tiny Machine Learning (TinyML)" – edX/Harvard
- "Introduction to Embedded Machine Learning" - Coursera/Edge Impulse
- "Computer Vision with Embedded Machine Learning" - Coursera/Edge Impulse
- "Deep Learning with Python" by François Chollet
- “TinyML” by Pete Warden, Daniel Situnayake
- "TinyML Cookbook" by Gian Marco Iodice
- "AI at the Edge" by Daniel Situnayake, Jenny Plunkett
On the TinyML4D website, you can find lots of educational materials on TinyML. They are all free and open-source for educational use – we ask that if you use the material, please cite it! TinyML4D is an initiative to make TinyML education available to everyone globally.
That's all, folks!
As always, I hope this project can help others find their way into the exciting world of AI!
link: MJRoBot.org
Greetings from the south of the world!
See you at my next project!
Thank you
Marcelo