Eye Spy with My Little AI
An AI-based approach, inspired by biology, downsamples images while optimizing sensory encoding, paving the path to better retinal implants.
Traditional assistive technologies tend to compensate for impaired bodily functions rather than correct them. A wheelchair, for example, offers mobility to individuals with limited leg movement, but it does not restore their ability to walk. Devices like these give their users a measure of mobility and freedom, yet they are hardly an ideal solution; they are a stopgap until more sophisticated technologies can correct, or fully compensate for, the source of the problem.
One proposed solution that could restore impaired or lost bodily functions involves the use of neural prostheses. Unlike traditional assistive technologies that merely compensate for disabilities, neural prostheses have the potential to interface directly with the nervous system, bypassing damaged or dysfunctional pathways to restore or enhance bodily functions. By bridging the gap between the brain and the body, neural prostheses hold promise for individuals with conditions such as spinal cord injuries, stroke, or limb loss, offering the potential to regain lost movement, sensation, or control.
These technologies are still in the early stages of development, and many problems must be solved before they can live up to their potential. One is the mismatch between the number of electrodes in a prosthesis (which interface with sensory neurons) and the number of sensory neurons in the biological system — the artificial side has orders of magnitude fewer connections. Information acquired by sensors must therefore be heavily downsampled, while still retaining crucial information, before it can be forwarded to a neural prosthesis.
Researchers at the Swiss Federal Institute of Technology Lausanne have tackled this problem by using a biology-inspired approach to downsample image data, which could one day help develop retinal implants that restore vision to the blind. Traditionally, images are downsampled by algorithms that average nearby pixel values before being fed into a neural implant. This simplistic approach can discard crucial information, reducing the effectiveness of the device. The new technique instead uses machine learning to encode the images in a way that mimics certain aspects of natural retinal processing.
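The article does not spell out the traditional averaging algorithm, but the standard version is simple block averaging (average pooling): the image is divided into non-overlapping blocks, and each block is replaced by its mean. A minimal sketch in NumPy, with an illustrative function name of my own choosing:

```python
import numpy as np

def downsample_average(image, factor):
    """Downsample a 2-D grayscale image by averaging non-overlapping
    factor x factor pixel blocks (simple average pooling)."""
    h, w = image.shape
    # Trim edges so both dimensions divide evenly by the pooling factor.
    h, w = h - h % factor, w - w % factor
    trimmed = image[:h, :w]
    # Reshape into blocks, then average within each block.
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = downsample_average(img, 2)  # block means: [[2.5, 4.5], [10.5, 12.5]]
```

Every block contributes equally here regardless of its content, which is exactly why this approach can blur away the fine structure that retinal circuits would normally emphasize.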
In their work, the team leveraged a machine learning approach called an actor-model framework. This involves the use of two neural networks, with the “model” network serving as a digital twin of the retina. It is trained to translate a high-resolution image into the sort of neural signals that are normally produced by a biological retina. The “actor” network is then trained to downsample images with the goal of producing a response in the model network that is as close as possible to a biological response. This produces a downsampled image that is optimized for sensory encoding.
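The training loop described above can be sketched in a toy form. The actual system uses trained deep networks; in this illustration, both the "model" (digital twin) and the "actor" (downsampler) are stand-in linear maps, and a fixed upsampling operator plays the role of each electrode stimulating a patch of retina. All names, sizes, and the linear simplification are assumptions for illustration, not the team's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 64, 16, 32  # image pixels, downsampled pixels (electrodes), model neurons

# "Model" network: the digital twin of the retina, mapping an image to a
# neural response. A trained network in the real system; a fixed random
# linear map here, purely for illustration.
W_model = rng.normal(size=(M, N)) / np.sqrt(N)

# Fixed upsampling operator: each downsampled pixel drives a patch of
# N // K original pixels, mimicking an electrode's spread.
U = np.kron(np.eye(K), np.ones((N // K, 1)))

# "Actor" network: the learnable downsampler (a single linear layer here).
W_actor = rng.normal(size=(K, N)) * 0.01

def loss(images):
    target = W_model @ images                # twin's response to the original image
    pred = W_model @ U @ (W_actor @ images)  # its response to the prosthetic input
    return np.mean((pred - target) ** 2)

# Train the actor so the model responds to the downsampled stimulus as it
# would to the original high-resolution image (gradient descent on MSE).
images = rng.normal(size=(N, 200))           # columns are flattened training images
lr = 0.02
for _ in range(500):
    err = W_model @ U @ (W_actor @ images) - W_model @ images
    W_actor -= lr * (U.T @ W_model.T @ err @ images.T) / images.shape[1]
```

The key design point survives the simplification: the model network stays frozen while only the actor is updated, so the downsampler is optimized against the twin's notion of a biologically plausible response rather than against pixel-level similarity.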
To validate the approach, the team compared the signals it produced with recordings from ex-vivo mouse retinas and found that the two yielded similar neuronal responses, suggesting that the method may enable better prosthetic systems in the future.
Beyond vision restoration, the researchers intend to explore other applications in the years ahead; they believe their approach could also help restore hearing and limb function.