A Clear Vision for Edge AI Efficiency
Ev-Edge improves event-based vision algorithm execution on edge computing hardware platforms by reducing latency and energy consumption.
Event cameras, also known as dynamic vision sensors, are an alternative approach to vision sensing that deviates from the traditional frame-based paradigm. Unlike conventional cameras that capture entire frames at fixed intervals, event cameras detect changes in brightness asynchronously at each pixel, reporting only the significant changes along with precise timing information. This asynchronous operation allows event cameras to achieve exceptionally high temporal resolution, detecting changes in microseconds and providing a continuous stream of sparse and asynchronous events rather than discrete frames.
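To make the contrast with frame-based cameras concrete, the sketch below models an event stream as a time-ordered sequence of sparse events, each carrying a microsecond timestamp, a pixel location, and a polarity (brightness up or down). The `Event` class and field names here are illustrative, not taken from any particular sensor's API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One asynchronous event from a dynamic vision sensor (illustrative model)."""
    t_us: int      # timestamp in microseconds (high temporal resolution)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

# An event camera's output is a sparse, time-ordered stream of such events,
# rather than dense frames captured at fixed intervals.
stream = [
    Event(t_us=10, x=4, y=7, polarity=1),
    Event(t_us=12, x=4, y=8, polarity=-1),
    Event(t_us=230, x=19, y=3, polarity=1),
]
```

Because only pixels that change report anything, a mostly static scene produces very few events, which is what makes the sensor both fast and power-efficient.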
In contrast to traditional cameras, event cameras excel in scenarios that demand real-time processing, low-latency sensing, and low power consumption. Applications across a number of fields can benefit from these unique capabilities. In robotics, event cameras enable robots to perceive and react to their environment swiftly and accurately, making them well-suited for tasks such as navigation, object tracking, and manipulation. Similarly, autonomous vehicles can leverage event cameras for efficient and robust perception, enhancing their ability to detect and respond to dynamic driving conditions in real time.
The asynchronous data streams produced by event cameras are very different from the frames produced by traditional cameras, so specialized processing algorithms are needed to interpret the data. For a number of use cases, like semantic segmentation and depth estimation, artificial neural networks (ANNs), spiking neural networks (SNNs), and hybrid ANN-SNN algorithms have proved to be highly accurate. However, running these algorithms efficiently on edge computing hardware containing a mixture of CPUs, GPUs, and specialized neural network accelerators can be very challenging.
These challenges often result in suboptimal performance of the deployed system. In an effort to correct this problem, researchers at Purdue University have developed a framework called Ev-Edge that was designed to simplify the efficient execution of event-based vision algorithms on common edge computing hardware platforms. The team demonstrated that Ev-Edge can lead to significant reductions in both latency and energy consumption when running these applications.
This was accomplished through a number of optimizations. First, Ev-Edge introduces what the team calls the Event2Sparse Frame converter, which turns raw event streams directly into sparse frames, eliminating the need for intermediate event frames. This makes the data easier to work with and keeps the computational workload directly proportional to the number of events that need to be processed.
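The paper does not publish the converter's code, but the idea of accumulating raw events straight into a sparse frame, with work proportional to the event count, can be sketched as follows. The function name, the `(t_us, x, y, polarity)` tuple layout, and the fixed time window are assumptions for illustration.

```python
def events_to_sparse_frame(events, window_us):
    """Accumulate raw events directly into sparse frames (illustrative sketch).

    Each sparse frame is a dict mapping (x, y) pixel coordinates to the net
    polarity accumulated within one time window; no dense intermediate frame
    is ever built, so the work done scales with the number of events.
    """
    frames = []
    current = {}
    window_end = None
    for t_us, x, y, polarity in events:
        if window_end is None:
            window_end = t_us + window_us
        if t_us >= window_end:          # close the current window
            frames.append(current)
            current = {}
            window_end = t_us + window_us
        current[(x, y)] = current.get((x, y), 0) + polarity
    if current:                          # flush the final partial window
        frames.append(current)
    return frames

# Three events; the first two fall in one 10 us window, the third in the next.
frames = events_to_sparse_frame(
    [(0, 1, 1, 1), (5, 1, 1, 1), (12, 2, 2, -1)], window_us=10
)
```

Only the touched pixels appear in each frame's dictionary, which is the essential property: an empty scene costs essentially nothing to convert.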
Next, Ev-Edge utilizes what is called the Dynamic Sparse Frame Aggregator. This step enhances hardware utilization by combining sparse frames dynamically, batching them according to how much event data is arriving and how quickly the hardware can process it.
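A minimal sketch of this batching idea, assuming a simple event-count threshold stands in for the real framework's hardware-aware policy: consecutive sparse frames are merged until the batch is large enough to keep the accelerator busy. The function name and the `target_events` parameter are hypothetical.

```python
def aggregate_sparse_frames(sparse_frames, target_events):
    """Merge consecutive sparse frames into batches (illustrative sketch).

    Frames are combined until the accumulated event count reaches
    target_events, a threshold one would size to the hardware's throughput
    so each batch keeps the processor well utilized.
    """
    batches = []
    current = {}
    count = 0
    for frame in sparse_frames:
        for pix, val in frame.items():   # merge the frame into the batch
            current[pix] = current.get(pix, 0) + val
        count += len(frame)
        if count >= target_events:       # batch is big enough to dispatch
            batches.append(current)
            current, count = {}, 0
    if current:                           # flush any leftover partial batch
        batches.append(current)
    return batches

# With a target of 2 events, the first two one-event frames merge into one
# batch and the third frame becomes a leftover batch of its own.
batches = aggregate_sparse_frames(
    [{(0, 0): 1}, {(1, 1): 1}, {(2, 2): 1}], target_events=2
)
```

The design choice this illustrates: during quiet periods many small frames merge into one dispatch, while during bursts batches fill quickly, so hardware utilization stays high either way.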
Finally, the Network Mapper assigns each task to the most appropriate processing element available, whether that is a CPU, GPU, or another type of hardware accelerator. This step can also adjust the precision of computations to match the available resources.
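One simple way to picture this kind of mapping is a greedy scheduler that assigns each task to whichever processor would finish it earliest, given per-processor latency estimates. This is only a sketch of the general idea, not Ev-Edge's actual mapping algorithm, and all names here are hypothetical.

```python
def map_tasks(tasks, processors):
    """Greedy task-to-processor mapping (illustrative sketch).

    tasks: list of (name, cost) pairs, where cost is a dict giving the
           estimated latency of that task on each processor.
    Returns a dict assigning each task name to a processor, choosing the
    one with the earliest estimated finish time.
    """
    finish = {p: 0.0 for p in processors}   # running load per processor
    assignment = {}
    for name, cost in tasks:
        best = min(processors, key=lambda p: finish[p] + cost[p])
        assignment[name] = best
        finish[best] += cost[best]
    return assignment

# The GPU is fastest for the first task, but by the time the second task is
# placed the CPU offers the earlier finish, so the work is split.
assignment = map_tasks(
    [("conv_block", {"gpu": 1.0, "cpu": 3.0}),
     ("snn_block",  {"gpu": 2.0, "cpu": 1.5})],
    processors=["gpu", "cpu"],
)
```

A real mapper would also weigh energy and precision options per accelerator, but the core trade-off (balancing load across heterogeneous processors) is the same.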
The team tested Ev-Edge on an NVIDIA Jetson AGX Xavier single-board computer to assess how well it performs with event-based vision workloads. Evaluating a number of state-of-the-art ANNs, SNNs, and hybrid ANN-SNNs, they found that the new framework delivered 1.28x to 2.05x improvements in latency and 1.23x to 2.15x reductions in energy consumption. These improvements were achieved with a negligible impact on algorithm accuracy. This combination of accuracy, speed, and energy efficiency could make Ev-Edge a valuable tool for computer vision algorithm developers in the near future.