Ever since prototyping Kindbot, we've anticipated the arrival of a device like the new Coral Dev Board Micro. This microcontroller's Pi Zero-esque form factor packs an Edge TPU, putting its compute capability closer to that of a neural compute stick.
And setting up their demos is delightfully smooth! Begin by cloning the repo:
git clone --recurse-submodules -j8 https://github.com/google-coral/coralmicro
The multi-core model cascading demo showcases some of the board's strengths. We want to stream the inference results to another device to classify activities, so we also snapped a wireless shield onto our dev board and added a wifi.txt file to the repo root with our network credentials:
export wifi_ssid=
export wifi_psk=
Now you can determine the device IP by first flashing the Wi-Fi example onto your dev board:
python3 scripts/flashtool.py -e curl --subapp curl_wifi --wifi_config wifi.txt
In another tab, connect to the serial console by running:
screen /dev/ttyACM0 115200
After a moment, you should find the microcontroller's IP printed to the serial console. In our case, we have 192.168.0.215.
DHCP succeeded, our IP is 192.168.0.215
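If you'd rather script this step than watch the console, here's a minimal sketch that reads that same log line over the serial port. It assumes the pyserial package is installed and that the board enumerates at /dev/ttyACM0, as in the screen command above.

import re
import serial  # pyserial

# Read console output until the DHCP log line appears, then extract the IP.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as console:
    while True:
        line = console.readline().decode(errors="ignore")
        match = re.search(r"our IP is (\d+\.\d+\.\d+\.\d+)", line)
        if match:
            print(match.group(1))  # e.g. 192.168.0.215
            break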
Type CTRL+A, then K, then Y to escape the console.
Real-Time Pose Estimation on a Microcontroller!
After installing the demo's dependencies, you can flash it onto your dev board:
python3 scripts/flashtool.py -a multicore_model_cascade --subapp multicore_model_cascade_wifi --wifi_config wifi.txt
In another tab, after substituting your dev board's IP address, run:
python3 apps/multicore_model_cascade/multicore_model_cascade.py --device_ip_address 192.168.0.215
A window pops up showing the 324x324 Himax camera feed. When the classifier running on the Cortex-M4 detects no person, the camera feed indicates as much with a text overlay. In this case, the Cortex-M7 and Edge TPU stay dormant.
If a person is detected, the more powerful processors are applied to regress body keypoint locations. This dual-core cascade helps the device reduce battery consumption when running the Edge TPU wouldn't be beneficial.
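The gating logic itself is simple. Below is a host-side Python sketch of the pattern, not the on-device C++ demo; detect_person and estimate_pose are hypothetical stubs standing in for the Cortex-M4 classifier and the Edge TPU pose model.

def detect_person(frame) -> bool:
    # Stub for the tiny always-on classifier running on the Cortex-M4.
    return frame.get("has_person", False)

def estimate_pose(frame):
    # Stub for the heavier keypoint regressor running on the Edge TPU.
    return frame.get("keypoints", [])

def process_frame(frame):
    # Cheap gate first: skip the expensive model when nobody is in view.
    if not detect_person(frame):
        return None
    return estimate_pose(frame)

print(process_frame({"has_person": True, "keypoints": [(0.5, 0.5)]}))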
In ActionAI, we show how you can apply LSTMs to a featurized sequence of body keypoints to classify activities. By running multicore_model_cascade.py, we offload the most computationally expensive processing to an Edge TPU on a microcontroller.
This means that, after connecting to many distributed devices, we can run LSTM classifiers on low-dimensional features to classify activities from many cameras!
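As a rough illustration, here is a minimal Keras sketch of that idea. The shapes are assumptions rather than ActionAI's exact configuration: 30-frame windows of 17 (x, y) keypoints flattened to 34 features, and four activity classes.

import tensorflow as tf

WINDOW, FEATURES, CLASSES = 30, 34, 4  # assumed shapes, not ActionAI's exact config

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),  # temporal model over the keypoint sequence
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()

Because the keypoint features are so low-dimensional, a single classifier like this can comfortably serve streams from many devices at once.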
Add Your Own Twist with Remyx
Person detectors are pretty useful, but they don't fit every use case. Since DroopThereItIs, we've been showing how we build custom computer vision applications at the edge.
Don't get stuck manually labeling data! We've created the first AutoML platform that trains on synthetically generated data for a genuinely automatic experience.
With the Remyx.AI model engine, we generate a new classifier that better suits our use case. It's a no-code experience, so anybody can specify what they need to classify and download the model.
By using a visual wake word of "guitar" instead of "person", we keep our device on ice until we are ready to rock!
Or perhaps you use cutlery as a visual cue for your kitchen robot to monitor meal prep!
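To make the swap concrete, here is a host-side sketch of gating on a custom wake class with a downloaded TFLite classifier. The model path, wake-class index, and threshold are placeholders for whatever your exported model uses.

import numpy as np
import tensorflow as tf

# model.tflite and WAKE_CLASS are placeholders for your exported classifier.
WAKE_CLASS = 1

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def wake_word_active(frame: np.ndarray, threshold: float = 0.7) -> bool:
    # frame must already match the model's expected input shape.
    interpreter.set_tensor(inp["index"], frame[np.newaxis].astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return scores[WAKE_CLASS] >= threshold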
What will you innovate now that training custom ML is no longer the bottleneck? Share your concepts below!