In the traditional metering industry, the roulette meter played an important role. However, with the arrival of the digital age, roulette meters have gradually been replaced by digital meters. Even so, some old buildings cannot be upgraded to digital meters directly because of their structural design and other constraints.
To provide a solution, we have created a demo that uses an AI sensor to recognize the digits on the meter. The sensor collects the readings and sends them to the cloud, enabling digital transformation even in buildings that still use roulette meters.
OK, let's move on to the steps for building the sensor.
Hardware Preparation
- SenseCAP A1101
- PC
The software setup for Windows, Linux, and Intel Mac is the same, whereas the setup for M1/M2 Mac is different.
For Windows, Linux, and Intel Mac
1. Make sure Python is already installed on the computer. If not, visit this page to download and install the latest version of Python.
2. Install the following dependency:
pip3 install libusb1
For M1/M2 Mac
- Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
- Install conda
brew install conda
- Download libusb
wget https://conda.anaconda.org/conda-forge/osx-arm64/libusb-1.0.26-h1c322ee_100.tar.bz2
- Install libusb
conda install libusb-1.0.26-h1c322ee_100.tar.bz2
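Before moving on, you can quickly check that libusb1 is installed correctly and that the board is visible over USB. The short sketch below is not part of the official guide; it simply lists the connected USB devices using the usb1 module, and the Seeed vendor ID used to highlight the A1101 is an assumption, so verify it against what your device actually reports.

# check_usb.py -- sanity check that libusb1 works and the A1101 shows up over USB.
# The vendor ID below is an assumption; compare it with your device's actual ID.
import usb1

SEEED_VID = 0x2886  # assumed Seeed Studio vendor ID

with usb1.USBContext() as context:
    for device in context.getDeviceList():
        vid = device.getVendorID()
        pid = device.getProductID()
        marker = "  <-- possibly the A1101" if vid == SEEED_VID else ""
        print(f"VID=0x{vid:04x} PID=0x{pid:04x}{marker}")

Run it with python3 check_usb.py; if no devices are listed at all, libusb1 is most likely not installed correctly.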
Step 1: Collect Image Data
Connect the A1101 to the PC via a USB Type-C cable. Double-click the boot button to enter mass storage mode. Drag and drop this .uf2 file to the SENSECAP drive. As soon as the uf2 finishes copying into the drive, the drive will disappear, which means the uf2 has been successfully uploaded to the module.
A1101 connected to the PC
Copy and paste this Python script into a newly created file named capture_images_script.py on your PC. Execute the Python script to start capturing images:
python3 capture_images_script.py
By default, it will capture an image every 300 ms. If you want to change this, for example to capture an image every second, you can execute the script like this:
python3 capture_images_script.py --interval 1000
After the above script is executed, the SenseCAP A1101 will continuously capture images from its built-in camera and save all of them inside a folder named save_img.
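The linked script takes care of the USB protocol for you, so you do not need to write it yourself. Purely for orientation, the sketch below shows the general shape such a capture loop can take: an --interval flag parsed with argparse and a loop that writes numbered JPEG files into save_img. The read_frame() helper here is a hypothetical placeholder for the real USB frame-grabbing code, not the actual implementation.

# Structural sketch only -- not the official capture script.
import argparse
import os
import time

def read_frame() -> bytes:
    # Hypothetical placeholder: in the real script this returns one JPEG frame
    # read from the A1101 over USB.
    raise NotImplementedError("use the linked capture_images_script.py instead")

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--interval", type=int, default=300,
                        help="delay between captures in milliseconds")
    args = parser.parse_args()

    os.makedirs("save_img", exist_ok=True)
    count = 0
    while True:
        jpeg = read_frame()
        with open(os.path.join("save_img", f"{count}.jpg"), "wb") as f:
            f.write(jpeg)
        count += 1
        time.sleep(args.interval / 1000.0)  # the flag is given in milliseconds

if __name__ == "__main__":
    main()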
Change the firmware back
After you have finished capturing images for the dataset, make sure to change the firmware inside the SenseCAP A1101 back to the original one, so that you can load object detection models again.
Enter boot mode on the SenseCAP A1101 by double-clicking the boot button. Drag and drop this .uf2 file to the SENSECAP drive according to your device. As soon as the uf2 finishes copying into the drive, the drive will disappear. This means the uf2 has been successfully uploaded to the module.
Step 2: Generate a Dataset with Roboflow
Roboflow is an online annotation tool. Click here to sign up for a Roboflow account and start a new project. Drag and drop the images that you captured with the SenseCAP A1101, then annotate all the images and generate a dataset. In the end, you will be able to export the dataset as a .zip file or as code. (For more detailed instructions, please see the A1101 wiki here.)
To continue to the next step, we need to generate a new version of the dataset and export the code.
This will generate a code snippet that we will use later inside Google Colab for training, so please keep this window open in the background.
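For reference, the snippet Roboflow generates usually looks roughly like the sketch below; the API key, workspace, project name, version number, and export format are placeholders that you replace with the values from your own export window.

# Roughly what Roboflow's generated export snippet looks like -- all values below
# are placeholders; use the exact snippet from your own Roboflow export window.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov5")  # the export format may differ

print(dataset.location)  # local folder containing the images and data.yaml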
Step 3: Train the Model with Google Colab
After we finish the annotation and get the dataset code, we need to train the model on the dataset. Click here to open an already prepared Google Colab workspace, go through the steps mentioned in the workspace, and run the code cells one by one.
It will walk through the following:
- Setup an environment for training
- Download a dataset
- Perform the training
- Download the trained model
At the end of these steps, you will be able to download a .uf2 model file, which is ready to deploy to your A1101.
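The notebook contains the exact repository, pretrained weights, and hyperparameters to use, so simply run its cells in order. For orientation only, a YOLOv5-style training cell in Colab typically looks like the sketch below; the image size, batch size, and epoch count shown here are illustrative assumptions, not the notebook's exact values.

# Illustrative Colab training cell -- defer to the prepared notebook for the
# exact script, weights, and hyperparameters.
!python train.py \
    --img 192 \
    --batch 16 \
    --epochs 100 \
    --data {dataset.location}/data.yaml \
    --weights yolov5n.pt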
Step 4: Deploy the Trained Model
Now we will move the model-1.uf2 file that we obtained at the end of the training onto the SenseCAP A1101.
Connect SenseCAP A1101 to your PC via a USB Type-C cable, double-click the boot button on SenseCAP A1101 to enter mass storage mode, and drag the model file to your A1101. As soon as the uf2 finishes copying into the drive, the drive will disappear. This means the uf2 has been successfully uploaded to the module.
Click here to open a preview window of the camera stream.
Click the Connect button. You will then see a pop-up in the browser. Select SenseCAP Vision AI - Paired and click Connect.
View real-time inference results using the preview window!
That's it for the meter detection demo, but we won't stop there. In the future, we will produce more meter recognition demos, such as digital meters and pointer meters. These are all useful in meter recognition scenarios, and you can create even more with A1101. You can also use it for object recognition, face recognition, object counting, and more.
Moreover, to better meet users' diverse needs across different AI application scenarios, the A1101 is also open for customization. At your request, we can help you build and scale your own AI sensor, featuring reference designs to accelerate time-to-market, a rich collection of class-leading ultra-low-power CMOS vision sensors, and certifications (CE, FCC, IC, RoHS, REACH, etc.). From concept to market, it takes only 8-20 weeks! Customizing your smart AI sensor is just one step away. Click here and submit now! https://bit.ly/435G6HZ
Seeed offers different products for industries, especially those seeking digital transformation. Let us know how we might be able to support you!
Resources
[Wiki] https://wiki.seeedstudio.com/Train-Water-Meter-Digits-Recognition-Model-with-SenseCAP-A1101/
[A1101 User Guide] https://files.seeedstudio.com/wiki/SenseCAP-A1101/SenseCAP_A1101_LoRaWAN_Vision_AI_Sensor_User_Guide_V1.0.4.pdf