Revolutionizing Visual Prostheses with Neural Network-Based Sensory Encoding

EPFL scientists introduce a machine learning approach for visual sensory encoding, enhancing retinal implants and sensory prostheses.

Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have developed a novel machine learning approach for visual sensory encoding, a significant advance in the field of sensory prostheses. By outperforming traditional methods at compressing image data, the approach could transform visual prostheses, particularly retinal implants.

Sensory encoding, the conversion of sensory information into neural signals that the brain can interpret, is central to improving neural prostheses. Because a prosthesis offers only a limited number of electrodes, the key challenge is compressing input from the environment while preserving the quality of the transmitted data.


The EPFL scientists, led by Diego Ghezzi, addressed this challenge by applying machine learning to improve downsampling, the process of reducing the number of pixels in an image for transmission via a retinal prosthesis. Their approach, known as the actor-model framework, consists of two neural networks: the model component, which acts as a digital twin of the retina, and the actor component, which is trained to downsample images effectively.
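While the article does not detail the team's exact architecture, the training logic can be sketched in a few lines. In this minimal, hypothetical PyTorch example, RetinaModel stands in for the digital twin (its weights are frozen) and Actor learns a downsampling whose predicted retinal response matches the response evoked by the original image; every layer size, name, and resolution here is an illustrative assumption, not the authors' code.

```python
import torch
import torch.nn as nn

class RetinaModel(nn.Module):
    """Forward model: predicts retinal responses from an image (digital twin)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64),  # 64 simulated neuron responses (illustrative)
        )
    def forward(self, x):
        return self.net(x)

class Actor(nn.Module):
    """Learns to map a full-resolution image to a low-resolution electrode pattern."""
    def __init__(self, out_size=(16, 16)):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=5, padding=2),
        )
        self.pool = nn.AdaptiveAvgPool2d(out_size)
    def forward(self, x):
        return self.pool(self.encode(x))

model = RetinaModel()
for p in model.parameters():          # the digital twin stays fixed
    p.requires_grad_(False)
actor = Actor()
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
upsample = nn.Upsample(size=(128, 128), mode="nearest")

for step in range(1000):
    img = torch.rand(8, 1, 128, 128)   # stand-in for natural images
    low = actor(img)                   # downsampled 16x16 pattern
    target = model(img)                # response to the original image
    pred = model(upsample(low))        # response to the downsampled one
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point is that the actor is never told what a "good" low-resolution image looks like; it is optimized only to make the simulated retina respond as it would to the original scene.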

One of the key findings of the study, published in Nature Communications, was that the actor-model framework learned to mimic aspects of retinal processing, finding a “sweet spot” for image contrast. This approach was shown to be more effective than traditional, learning-free methods like pixel averaging.
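Pixel averaging, the learning-free baseline, simply replaces each block of pixels with its mean. A minimal NumPy sketch (the block size here is chosen purely for illustration):

```python
import numpy as np

def pixel_average_downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Learning-free baseline: average non-overlapping factor x factor blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor   # crop to a multiple of the factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# e.g. reduce a 128x128 image to a 16x16 electrode pattern
low_res = pixel_average_downsample(np.random.rand(128, 128), factor=8)
```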

To test the effectiveness of their approach, the researchers conducted experiments on both the digital twin of the retina and explanted mouse retinas. The results demonstrated that images produced by the actor-model framework elicited neuronal responses closer to the original image compared to images generated by other methods.
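One way "closer to the original" might be quantified, shown here purely as an illustration (the study's actual metrics may differ), is the correlation between the response evoked by an encoded image and the response evoked by the original; the response vectors below are random placeholders, not experimental data.

```python
import numpy as np

def response_similarity(r_original: np.ndarray, r_encoded: np.ndarray) -> float:
    """Pearson correlation between two response vectors (higher = closer)."""
    return float(np.corrcoef(r_original, r_encoded)[0, 1])

r_orig = np.random.rand(64)                    # responses to the original image
r_actor = r_orig + 0.1 * np.random.randn(64)   # placeholder: actor encoding
r_avg = r_orig + 0.5 * np.random.randn(64)     # placeholder: pixel averaging
print(response_similarity(r_orig, r_actor))    # higher similarity
print(response_similarity(r_orig, r_avg))      # lower similarity
```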

The implications of this research are profound. By enhancing sensory encoding in visual prostheses, the actor-model framework could significantly improve the quality of life for individuals with visual impairments. Furthermore, the framework’s capabilities could be expanded to compress images with multiple visual dimensions simultaneously, opening up possibilities for applications in other sensory prostheses and even linking to other prosthetic devices, such as auditory or limb prostheses.

In conclusion, the EPFL researchers’ novel machine learning approach represents a major advancement in the field of sensory prostheses. Their work not only enhances sensory encoding for visual prostheses but also holds promise for future developments in prosthetic technology across various sensory modalities.