Avnet Unveils Its Performant, Efficient Edge AI Demos for the 2023 Embedded Vision Summit This Month
From high-performance FPGAs to power-sipping Neural Decision Processors (NDPs), Avnet's demos and talks are not to be missed.
Avnet is once again participating in the Embedded Vision Summit, the must-visit event for anyone interested in computer vision and edge AI technologies present and future — and will be showcasing four cutting-edge development platforms in live demos during the event.
Taking place in Santa Clara, California, May 22nd-24th, the 2023 Embedded Vision Summit is the premier event for all those interested in practical, deployable computer vision and other perceptual artificial intelligence systems running on-device, in the cloud, and on mobile hardware. As always, Avnet will be an active participant — not only hosting demos of the latest technologies but also speaking: Avnet's Peter Fenn will give a talk on battery-powered always-on edge AI sensing, while Monica Houston will dive into selecting image sensors for embedded vision applications, illustrated with three practical case studies.
At the same time, the Avnet booth will be hosting live demos of four key edge AI hardware platforms and their capabilities — starting with the Avnet RZBoard V2L, built around the Renesas RZ/V2L processor and its DRP-AI accelerator core. Designed to run alongside the chip's application-class Arm Cortex-A55 CPUs, dedicated Cortex-M33 real-time core, and 3D-capable graphics processor, the DRP-AI accelerator will be shown as a driving force for cost-effective embedded AI applications — and Avnet will show how the accessible development board and software ecosystem can help jump-start your projects.
Another demo will show off Avnet's Edge AI Kit, built around a production-ready SMARC-format system-on-module (SOM) featuring NXP Semiconductors' powerful i.MX 8M Plus applications processor and a custom carrier board. Linked to a 10" touchscreen panel and dual cameras, the Edge AI Kit will be shown performing power-efficient yet lightning-fast stereo facial recognition on-device — demonstrating how Avnet can assist with scaling designs from prototype to production.
The third of the planned demos will see Avnet's ZUBoard 1CG field-programmable gate array (FPGA) platform, built around the cost-optimized AMD Xilinx Zynq UltraScale+ ZU1 multiprocessor system-on-chip (MPSoC), driving a range of reference design examples. These take advantage of a Deep Learning Processing Unit (DPU) AI acceleration engine implemented in the chip's programmable logic, along with a dual-camera add-on module, for low-latency computer vision tasks executed entirely on-device.
Finally, Avnet will be demonstrating the RASynBoard, a low-power always-on sensor fusion platform which pairs a Renesas RA6M4 microcontroller unit (MCU) and a dedicated Wi-Fi and Bluetooth radio module with a Syntiant NDP120 ultra-efficient Neural Decision Processor (NDP) acceleration engine. In addition to showcasing its capabilities for industrial sensor fusion, the demo will show the board's split-domain AI processing system in action across a range of projects — including predictive maintenance, always-on sound detection, and on-device voice recognition, all within an extremely low power envelope.
Attendees interested in seeing any of the above demos can find Avnet in Booth #303, with details on how to attend the Summit available on the official website. Those wishing to attend the talks can find Monica Houston speaking on May 23rd at 2:05pm, and Peter Fenn on May 24th at 2:40pm.