Cortical Vision: Can We See with Just Our Brain?

Our eyes help us see, but the brain is where the magic really happens. In the Eye & Ear Foundation’s May 31 webinar, “Cortical Vision: Can We See with Just Our Brain?” attendees learned about visual neuroscience, brain stimulation, and the research underway that will hopefully lead to some level of vision restoration.

“This will be an evolving field over the coming years,” said Dr. José-Alain Sahel, Distinguished Professor and Chairman of Ophthalmology, Eye & Ear Foundation Endowed Chair, University of Pittsburgh School of Medicine, Director, UPMC Vision Institute, and Exceptional Class Professor at Sorbonne Université, Paris. “We’re already making a lot of progress.”

Computing Input from the Eye

Dr. Marlene Behrmann, professor in the Department of Ophthalmology and a leading figure in the field of visual neuroscience, said her goal was to explain how the input received at the eye is transformed into a meaningful percept.

For example, she said, when we look out at the world and see a fly, the brain is computing the color of the fly. It is also computing its depth, distance, form, and motion.

While we know a lot about the eye and how it receives signals, and we know a lot about the brain, we know far less about how the signals from the eye get transformed and computed into the meaningful perception that we all enjoy, Dr. Behrmann said.

This gives rise to questions such as: how are we able to recognize a picture of a fly on a bicycle even though we have never previously seen such a thing? How do we recognize someone like Bill Gates across different pictures, one in black and white, another in which he is looking off camera?

Face recognition is a particularly interesting model to study the human visual system for a number of reasons, Dr. Behrmann said. Face recognition involves complex geometry and shared inputs to convey a large amount of critical information like age, gender, identity, emotion, eye gaze, and intention. It is a useful domain to explore brain-behavior relations because it is such a finely tuned system.

Face Recognition and the Brain

One way to study this topic is to look at the brains of humans in an MRI machine. While they are inside, images of faces, words, common objects, houses, and body parts are projected onto a screen. It turns out that researchers can identify exactly which regions of the brain are involved in recognizing particular kinds of stimuli.

A host of regions in the brain, some on the inferior (bottom) surface and some on the side of the skull near the temple, are activated when we look at faces. Different regions are activated when we look at houses or places, and yet others for common objects. Where does this organization come from? One possibility is that we are simply born with these regions already in place. This theory was disproven when children aged 5-8 were tested and found not to have obvious face-selective areas, unlike adolescents and adults; their face recognition is still developing.

Breakdown of Facial Recognition

When facial recognition breaks down, this is known as prosopagnosia, or, colloquially, “face blindness.” One type of prosopagnosia occurs in individuals who had normal face recognition but then lost the ability to recognize faces. This can happen due to an infection, a car accident that damaged that part of the brain, or a heart attack in which that part of the brain is deprived of oxygen. They identify others by using nonface cues (voice, or perhaps specific items of clothing).

People who have impaired face recognition recognize very few famous faces. They cannot recognize their own family members or even themselves.

A second type, congenital prosopagnosia (CP), is a lifelong impairment: over the course of development, these individuals never acquire the ability to recognize faces. They have not sustained brain damage of any sort, and have normal intelligence and normal vision. Like those with acquired prosopagnosia, CP individuals identify others by using nonface cues.

CP apparently affects roughly 2% of the population and has a familial component thought to involve an autosomal dominant mode of inheritance.

Surprisingly, individuals with CP exhibit normal brain activation. This left researchers with a dilemma: people with CP cannot recognize faces, yet the brain circuit appears to be normal. To explore an alternative explanation, researchers used MRI to map the fibers (“white matter”) that connect different areas of the brain. The particular focus was on a fiber tract that runs right through the face areas on the base of the brain. Researchers discovered a reduction in the integrity of this tract, and the breakdown correlated with the severity of the face recognition impairment. They are now beginning to understand not just the regions of the brain engaged in face recognition, but the connectivity between them.


How does the brain tell the difference between faces? How is this difference coded? What is the “language” of the brain? In one experiment, four very similar faces, stripped of any facial or head hair, were shown to participants in the MRI magnet, and their brain signals were measured in response to the different faces (repeated many times). The goal was to take the brain signal generated by each face and see whether that signature could be reversed to reconstruct the stimulus that gave rise to it. The best way to think about this brain-stimulus correspondence, Dr. Behrmann said, is like a French-English dictionary (stimulus to brain signal) in which you can also look up the corresponding English word for a French one (brain signal to stimulus).

“We were amazed that we could reconstruct anything at all,” Dr. Behrmann said. That they obtained something like faces was remarkable. The second exciting finding was that the reconstructions did not just reproduce the same face over and over again; the brain signal was precise enough to regenerate the different faces. The reconstructed faces matched their stimuli with around 80% accuracy on average, indicating good fidelity in the reconstruction.
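The dictionary analogy can be illustrated with a toy simulation. The sketch below is not the researchers’ actual method; it assumes a simple linear relationship between stimulus features and brain signals, and all names, dimensions, and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all numbers are illustrative, not from the study):
# 4 "faces," each described by a 10-dimensional feature vector.
n_faces, n_features, n_voxels = 4, 10, 50
faces = rng.normal(size=(n_faces, n_features))

# Hypothetical "encoding": the brain response is a noisy linear function
# of the stimulus features (the stimulus -> brain-signal direction of
# the French-English dictionary analogy).
W = rng.normal(size=(n_voxels, n_features))

def brain_response(face):
    return W @ face + rng.normal(scale=0.1, size=n_voxels)

# Record many repeated trials per face, as in the experiment,
# and average over repetitions to reduce noise.
n_trials = 20
signals = np.array([[brain_response(f) for _ in range(n_trials)]
                    for f in faces])
mean_signals = signals.mean(axis=1)

# "Decoding": invert the mapping with a least-squares pseudo-inverse
# to reconstruct the stimulus features from the averaged brain signal
# (the brain-signal -> stimulus direction of the dictionary).
W_inv = np.linalg.pinv(W)
reconstructed = mean_signals @ W_inv.T

# Compare each reconstruction with its true stimulus (correlation).
for i in range(n_faces):
    r = np.corrcoef(faces[i], reconstructed[i])[0, 1]
    print(f"face {i}: reconstruction correlation = {r:.2f}")
```

Because the mapping runs in both directions, the same fitted weights serve as both halves of the “dictionary”: forward for encoding, inverted for decoding.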


To what extent are all these brain processes “plastic”? What happens if an entire hemisphere is resected in childhood? This dramatic surgery is performed to manage epilepsy in individuals who have many seizures a day. Medication can control the seizures in many patients, but not in all. For these drug-resistant patients, surgery may be performed to remove the region where the seizures begin, and sometimes this can even entail removal of an entire hemisphere.

In the experiment, patients who had undergone this surgery in childhood performed surprisingly well, albeit worse than controls. Even though they were missing 50% of their brain, their accuracy for face recognition was about 85%, which is pretty dramatic with only a single hemisphere. The same result held for words, and it did not matter which hemisphere had been removed. The findings indicate that a single hemisphere can take on some of the functions of the other, demonstrating plasticity.

Clinical Methodology

When children are being evaluated for epilepsy surgery, surgeons want to know exactly where in the brain the epilepsy is starting so as to cut out that region instead of a larger area. This is done by implanting electrodes through tiny windows in the skull and then monitoring for seizures via these electrodes. These children typically stay in the hospital to have their epilepsy monitored and participate in an experiment in which they watch movies or view various images displayed on a computer. Very fine and robust neural responses can be detected in response to the stimuli.

“We are very excited about this methodology,” Dr. Behrmann said. “It allows us pretty much for the first time ever to go into the human brain and record from it directly.”

The hope is that technology like this will be useful for clinical management of epilepsy as well as translatable to intervene in individuals with blindness using a cortical implant.

Stimulating the Visual Cortex

Blindness affects approximately 40 million people throughout the world and is likely to increase to more than 100 million by 2050 due to the aging population, said Dr. Xing Chen, assistant professor in the Department of Ophthalmology, whose research focuses on brain-computer interfaces, neuroprosthetics, vision, blindness, electrical stimulation, and sleep. Blindness severely affects navigation, social interactions, and reading, leading to an estimated $12 billion in economic losses in the U.S. alone.

In normally sighted individuals, light signals enter the eye, hit the retina, and are transduced into electrical signals that are processed in the visual cortex of the brain. The vast majority of blindness originates in the eye or optic nerve. In the developed world, leading causes of blindness are AMD, glaucoma, diabetic retinopathy, and optic neuropathy. Hence, in most blind people, the brain remains functional, but signals are unable to reach the brain due to damage of the eye and/or optic nerve.

How exactly can we interface with the visual system to restore vision? One approach is to interface directly with the visual cortex, located at the back of the brain, thereby bypassing the eye. This approach has been explored since the 1960s and involves the use of a mini video camera and eye tracker positioned in the frame of glasses worn by the user. The video feed is processed by a mobile device, which converts the images into instructions for interfacing with the brain. These instructions are sent wirelessly to a device implanted in the brain, which delivers tiny electrical pulses to the tissue of the visual cortex, causing the user to see artificially generated visual percepts known as phosphenes.
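The image-to-instruction step of such a pipeline can be sketched in a few lines. This is a simplified illustration, not the actual device software: the function name, grid size, and thresholding rule are assumptions, and real systems apply far more sophisticated image processing.

```python
import numpy as np

def frame_to_stimulation(frame, grid=(10, 10), threshold=0.5):
    """Map a grayscale camera frame (values 0-1) onto a hypothetical
    grid of cortical electrodes: bright regions of the image become
    'stimulate' commands, producing phosphenes at matching locations."""
    h, w = frame.shape
    gh, gw = grid
    commands = np.zeros(grid, dtype=bool)
    for i in range(gh):
        for j in range(gw):
            # Average brightness of the image patch this electrode covers.
            patch = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            commands[i, j] = patch.mean() > threshold
    return commands  # True = send a current pulse on that electrode

# Example: a frame containing a bright vertical bar maps onto a
# column of active electrodes, which the user would perceive as a
# line of phosphenes.
frame = np.zeros((100, 100))
frame[:, 40:60] = 1.0
print(frame_to_stimulation(frame).astype(int))
```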

Retinotopy/Receptive Field

The primary visual cortex, an area of about 50 square centimeters at the back of the brain, is devoted to processing information coming in from the eyes. We have a map of the visual field in the primary visual cortex, and each neuron there receives and processes information from a particular region of the visual field, explained Dr. Chen. Collectively, across millions of neurons, information is compiled across the visual field, eventually allowing us to recognize shapes and objects.

How can stimulation of the brain give rise to artificial vision? When we stimulate a particular part of the visual cortex via a single electrode, we activate neurons in the vicinity of the electrode tip, and generate the percept of a dot of light, i.e., a phosphene. Importantly, the location of a phosphene in the visual field depends on the location of the stimulating electrode in the map of the brain. When we stimulate neurons in a particular part of the map, the phosphene will appear at the corresponding location in the visual field.
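This electrode-to-phosphene correspondence can be illustrated with a classic textbook simplification: modeling the cortical map as a complex logarithm and inverting it. The function name and scaling constant below are illustrative assumptions, not a patient-specific map.

```python
import cmath

def phosphene_location(cortical_x, cortical_y, k=15.0):
    """Toy retinotopic map: invert the classic complex-log model of
    primary visual cortex (a common simplification). Takes a cortical
    electrode position (mm) and returns the predicted phosphene
    position in the visual field (eccentricity in degrees, polar
    angle in radians)."""
    w = complex(cortical_x, cortical_y) / k
    z = cmath.exp(w)  # inverse of the forward map w = k * log(z)
    return abs(z), cmath.phase(z)

# Electrodes farther along the cortical surface map to phosphenes at
# larger eccentricities (farther from the center of gaze).
for x in (0.0, 15.0, 30.0):
    ecc, ang = phosphene_location(x, 0.0)
    print(f"electrode at {x:4.1f} mm -> phosphene at {ecc:5.2f} deg")
```

The exponential in the inverse map captures cortical magnification: a fixed distance on the cortex corresponds to a small step in the visual field near the center of gaze and a much larger step in the periphery.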

Extensive experiments allowed researchers to prove that stimulation of the visual cortex via multiple electrodes can generate recognizable shapes, a historic first.


Dr. Chen described proof-of-concept experiments in monkeys, in which the animals were not only able to see an artificially generated phosphene but also to recognize simple shapes composed of phosphenes when the visual cortex was stimulated via multiple electrodes simultaneously. The monkeys could recognize motion and letters composed of phosphenes, entirely bypassing the eye.

To translate this work into clinical studies, a 96-channel Utah array was implanted in the occipital cortex of a blind human volunteer. The patient reported seeing individual phosphenes, as well as shapes composed of multiple phosphenes, at locations that matched the locations of the electrodes in the “map” of the visual cortex. Furthermore, the parameters that successfully generated phosphenes in monkeys were also effective in the human volunteer.

Translational Research

Dr. Chen and her colleagues have been carrying out several developments to speed up the calibration of future clinical neuroprostheses for vision restoration.

Right now, artificial vision systems have to be manually calibrated via a laborious and time-consuming process, which is not readily scalable to future devices that would consist of thousands of electrodes. The researchers have been developing new, automated techniques to carry out calibration to make it fast, easy, and affordable.

To summarize, Dr. Chen and her colleagues developed a high-channel-count neuroprosthesis for microstimulation and recording in the visual cortex, allowing shape recognition without input via the eyes.

“Our goal is to provide durable and usable visual perception in people who have totally lost their vision and are considered untreatable,” Dr. Sahel said. It is important to understand that vision is very complex and involves many steps; this demands technological development and the right type of electrodes, but even more importantly, an in-depth understanding of the complexity of visual processing. “We work on programs that won’t just be an experiment for a few weeks showing that it can work,” Dr. Sahel added. “We aim at developing and validating an approach that can really help people in their daily activities.”

The Department is approaching this by gathering people with diverse expertise, including engineers, neuroscientists, and low vision experts. The goal is to bring to humans some level of vision restoration, hopefully with a trial in the next two years.