Introducing Street Lab – Improving Functional Outcomes in Low Vision Rehabilitation

Did you know that the UPMC Vision Institute has a patient-centered approach to low vision rehabilitation that involves novel technologies? In the Eye & Ear Foundation’s March fifth webinar, “Introducing Street Lab – Improving Functional Outcomes in Low Vision Rehabilitation,” viewers learned exactly how this works.

In ophthalmology, there is a tendency to focus on therapies and ways to fix visual conditions, Dr. José-Alain Sahel, Chair of the Department of Ophthalmology, said at the start of the presentation. Assessments are based on classical testing, which may not fully reflect what occurs in patients’ daily lives. It is also important to demonstrate a therapy’s benefit in daily life. This is part of a continuum of care that begins at diagnosis and aims to return patients to the lives they previously enjoyed.

To that end, the mission of the StreetLab at the Vision Institute is to enhance the quality of life of people with low vision. This involves improving their function, promoting their independence, and supporting their productivity in society.

Rakié Cham, PhD, researcher and full professor in the Departments of Bioengineering, Ophthalmology, and Physical Therapy at the University of Pittsburgh, co-leads the StreetLab with Dr. Sahel.

The StreetLab boasts a multidisciplinary team of low vision experts, including ophthalmologists, optometrists, rehabilitation experts, human factors and biomedical engineers, neurologists, neuroscientists, psychiatrists, and psychologists. Psychiatrists are involved because people with low vision often have an increased risk for mental health problems like depression and anxiety. Neuroscientists are interested in what is happening in the brain because there is a lot of adaptation to vision loss – including both structural and functional brain changes, which in turn may impact other functions, e.g., balance and mobility.

The StreetLab is housed on the fifth floor of the Vision Institute, a state-of-the-art building equipped with the latest technology. It has received support from the Jack Buncher Foundation, the Henry L. Hillman Foundation, and UPMC.

Measurement & Assessment

The StreetLab assesses an individual’s needs and their perception of their impairments, which may include visual field loss and visual acuity deficits. Interactions with other impairments are often present, like balance and mobility, sensory deficits, and cognitive skills.

Assessments evaluate people’s activities and participation, such as their ability to maintain balance, walk, and perform fine motor tasks important for daily activities. These are very objective, performance-based assessments.

“We take a holistic approach of understanding the patient as a whole, what they need, and what they want us to help them with,” Dr. Cham said. “This information is fed into the activities and participation assessment.”

Dr. Cham’s background in engineering comes in handy because while there are a lot of very exciting treatments for people with low vision, what is missing is how to assess and evaluate those treatments. “We want to be able to come up with ways of very precisely and objectively assessing all those new treatments,” she said.

The StreetLab has been funded by NIOSH (the National Institute for Occupational Safety and Health) to develop metrics on what people with low vision are able to do at work. This information can be used to put people in the right jobs, which is important because people with low vision are not well represented in the workforce. When they do find work, they are often not in the right jobs, so underemployment is an issue as well. Other potential sources of funding include federal agencies such as the NIH and FDA, as well as industry and philanthropy.

Fine Motor Tasks in the Lab

Examples of fine motor tasks that have translational implications include the following assessments:

The Purdue task is a well-established fine motor task done in the lab that involves putting pegs into holes. The metrics include how many pegs were put in the holes and how challenging the task was. Performance can be tracked, for example, with and without assistive technology to ascertain its effectiveness. The patient’s hand movements and where they are looking are also tracked.

The Assembly task can be very stressful. A patient has to put parts together on a turning wheel, the speed of which can be controlled. How many parts was the patient able to assemble? Hand movements are tracked. This task has real-life relevance, as it can be applied to manufacturing jobs with assembly lines or conveyor belts.

Other tasks include a size discrimination task, in which nuts and balls of different sizes must be matched to the right size, and a sorting task in which chips are sorted by color, letter, or number.

People with different diagnoses and vision impairments are put through these tasks to see what they are able to do – whether at home or at work. Tasks are done in different lighting conditions.

Multisensory Information Integration

Vision loss is a significant risk factor for falls, so balance and mobility assessments are often among the basic evaluations done at the StreetLab. To maintain balance, we use vision, the vestibular system in the inner ear, and proprioception. All of these are integrated in the brain, which outputs a command to maintain balance in sensory-challenging conditions. This process is called multisensory information integration.

In people with vision loss, the brain puts more weight on the other two channels to maintain balance. This takes more attention and cognitive resources.
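This reweighting can be sketched as a weighted average of sensory channels. Everything below – the channel estimates, the weights, and the function name – is an illustrative assumption, not StreetLab data or code:

```python
def fused_estimate(cues, weights):
    """Combine body-sway estimates from three sensory channels
    (vision, vestibular, proprioception) by weighted average.
    Weights must sum to 1; with vision loss, weight shifts to
    the other two channels (numbers are purely illustrative)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[ch] * cues[ch] for ch in cues)

# Hypothetical per-channel sway estimates, in degrees
cues = {"vision": 2.0, "vestibular": 2.4, "proprioception": 2.2}

# Typical weighting vs. a low-vision weighting that leans on
# vestibular and proprioceptive input instead
normal = fused_estimate(cues, {"vision": 0.5, "vestibular": 0.25, "proprioception": 0.25})
low_vision = fused_estimate(cues, {"vision": 0.1, "vestibular": 0.45, "proprioception": 0.45})
```

The shift in weights is what costs extra attention and cognitive resources: the brain must rely on noisier, less practiced channels.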

To assess this, people stand on a platform that can move while the lab measures how much and how fast they sway. With this data, the patient can then meet with a physical therapist who is an expert in balance therapy.
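The two sway measures – how much and how fast – can be sketched as a simple computation over body-position samples. The function, sample values, and sampling interval below are hypothetical, meant only to show how such metrics might be derived:

```python
import math

def sway_metrics(samples, dt):
    """Compute simple postural sway metrics from (x, y) body-position
    samples recorded every dt seconds.

    Returns total sway path length (how much the person swayed)
    and mean sway velocity (how fast)."""
    path = 0.0
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        path += math.hypot(x1 - x0, y1 - y0)  # distance between samples
    duration = dt * (len(samples) - 1)
    return path, path / duration

# Hypothetical example: four samples taken 0.5 s apart
path, velocity = sway_metrics([(0, 0), (0, 3), (4, 3), (4, 0)], dt=0.5)
# path == 10.0 units over 1.5 s; velocity ≈ 6.67 units/s
```

More sway per unit time under a moving-floor condition would suggest the person is struggling to compensate for the sensory challenge.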

Mobility Assessments

In collaboration with Paris colleagues, the StreetLab has a state-of-the-art mobility assessment called the Mobility Standardized Test (MoST) in Virtual Reality. There are two versions: the first involves walking naturally in a maze, and the second reproduces the maze in virtual reality with goggles. This is a very controlled task in which lighting and difficulty can be modified. Multiple objective performance measurements are taken, such as the number of errors, the time to complete the maze, and tracking of the feet and body movements.

Low Vision & Driving

“One of the questions that might come up is what’s the connection between work that is happening in low vision at the Vision Institute and in the area of low vision rehabilitation?” asked Clive D’Souza, PhD, MS, Assistant Professor and researcher in the Department of Rehabilitation Science and Technology at the University of Pittsburgh School of Health and Rehabilitation Sciences.

That connection is driving, which is vital to access and independence. Nearly 90% of driving information is visual: the environment, location, others on the road, potential hazards, and dashboard information. Approximately 86% of Americans 65 and older continue to drive. By 2050, 25% of all licensed drivers are expected to be 65+.

These statistics show that people continue driving even though they might have other age-related conditions that impact their vision, mobility, or cognition. They may not even know sometimes what implications this has for their own safety and others around them. People are living longer and want to be independent and mobile in their community.

“Driving is for people to access different resources in the community, whether it is health care, employment, education, social participation, or recreation,” said Dr. D’Souza. “There are other modes of transport, but driving is the most common mode in the US.”

Many of us know people who have faced, or may ourselves dread, the day we have to hand over our car keys, Dr. D’Souza said. Focusing on driving as a critical outcome can really help people maintain independence and quality of life.

In the population, conditions affecting vision overlap substantially with aging – and both overlap with driving.

Low Vision & Driver Rehabilitation

One outcome patients are often interested in is whether they can still drive. In other words, would the patient still meet the eligibility requirements of today’s licensing laws to be safe on the road?

The current way of answering this question is to work with a driver rehabilitation specialist – at UPMC, through the Adaptive Driving Program. Patients work with a clinician or therapist who performs assessments and provides driving-specific feedback. They can even take patients on the road to assess driving fitness.

There are some limitations to this pathway, however.

UPMC Adaptive Driving Program

This program has two driver rehabilitation specialists who conduct a two-part evaluation. First is the clinical assessment, which evaluates visual acuity, field of view, contrast sensitivity, glare recovery, and the ability to see and detect objects (including in low lighting conditions). If the patient passes this assessment, then they are considered eligible and safe enough for the on-road assessment.

Two cars are available for use in this second part, a Volvo S90 sedan and a Nissan Rogue SUV. Both have modifications to be usable by drivers in a wheelchair.

The problem with this current path is that it is very resource intensive. The clinicians have limited capacity to take and schedule patients, and the evaluations take a lot of time. There are safety concerns as well – not just for the patient, but for the clinicians and others on the road. Very few patients actually end up making it to the on-road assessment, and many end up with severe driving restrictions or driving cessation.

StreetLab’s Driving Research

As a path between these two options, a driving simulator at the Vision Institute allows patients’ driving skills to be assessed and evaluated in a safe environment. The simulator looks at driving performance in a variety of conditions and helps patients develop better strategies for driving safely.

The recently acquired driving simulator has three large screens and a driving console constructed from actual vehicle components. It has the look and feel of an actual driver station, with a steering wheel and driver’s seat. The whole platform vibrates and moves to match road conditions, doing so differently for a smooth road vs. a gravel path, or when accelerating vs. slowing down. There is also a bit of tilt and movement to replicate taking a turn. Essentially, it captures the physical movement of a real car. It also has hand controls for drivers with mobility impairments who cannot use foot pedals, a wide field of view across the three screens, and the ability to change driving scenarios.

From a research perspective, measuring driving performance is one of the areas of interest. How is the driver operating the vehicle, and are they doing so in a proper and safe manner? How fast is the vehicle going, how close is it getting to other vehicles, is the driver staying in the center of the lane and maintaining the speed limit?
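Metrics like lane keeping and speed compliance can be sketched as simple summaries over a simulator log. The function name, log format, and threshold values below are hypothetical assumptions, not the simulator’s actual software:

```python
def driving_metrics(log, speed_limit, lane_center=0.0):
    """Summarize driving performance from a log of
    (speed, lateral_position) samples taken during a simulated drive.

    Returns the mean absolute deviation from the lane center
    and the fraction of samples over the speed limit."""
    n = len(log)
    lane_dev = sum(abs(pos - lane_center) for _, pos in log) / n
    speeding = sum(1 for speed, _ in log if speed > speed_limit) / n
    return lane_dev, speeding

# Hypothetical log: speed in mph, lateral offset from lane center in meters
log = [(30, 0.1), (34, -0.2), (38, 0.3), (31, 0.0)]
dev, frac = driving_metrics(log, speed_limit=35)
# dev: average drift from lane center; frac: share of time speeding
```

Summaries like these give a clinician an objective number to compare across sessions, lighting conditions, or assistive technologies.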

Driver safety behavior is also an area of interest. What is the driver doing? Are they scanning the rearview mirrors correctly, checking the speedometer to make sure they are monitoring their speed, looking and making sure they are aware of other vehicles and pedestrians? What kind of behaviors is the driver demonstrating to show they are driving properly?

The driving simulator has a surround sound system, providing sounds from different locations for things like the engine, other cars, or a car honking. But 90% of the information is visual, so researchers are interested in where the driver is looking. Their gaze is monitored to determine how much time they spend looking at the road ahead vs. the rearview mirror, dashboard, environment, etc. If certain gaze patterns or behaviors are deemed unsafe, then strategies can be given to improve them.

The driving simulator can put people in various situations, like a traffic jam. The driver can safely bump another car without severe consequences. “It allows us to put people into somewhat unsafe conditions to understand how they’re going to respond and make sure they’re going to be safe when they do end up driving on the road,” Dr. D’Souza said.

When talking about driving, a big question is the effect of driving automation. We hear a lot about driverless cars and technologies in the car like forward collision warning, blind spot warning systems, and adaptive cruise control. “To a limited extent, we are also able to simulate some of those technologies in our car,” Dr. D’Souza said.

The simulator means researchers can now start understanding the effects of these new technologies along with a vision impairment or deficit someone might have and what the benefit would be to their driving performance and safety.

A planned addition to the simulator is the ability to map precisely where the driver was looking over the entire drive, depicted as sequential dots and a color-coded heat map of gaze fixations. This gives clinicians and researchers a detailed understanding of whether the driver was using correct and safe visual gaze patterns while driving.
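Under the hood, a fixation heat map boils down to binning gaze points into a grid and counting how often each cell was fixated. The sketch below is a minimal illustration – the cell size, coordinates, and function name are assumptions, not the simulator’s actual implementation:

```python
from collections import Counter

def gaze_heatmap(fixations, cell=100):
    """Bin gaze fixation points (x, y screen pixels) into a coarse
    grid and count fixations per cell -- the data behind a
    color-coded heat map. Cell size of 100 px is a hypothetical choice."""
    return Counter((x // cell, y // cell) for x, y in fixations)

# Hypothetical fixations: three on the road ahead, one glance at a mirror
fixations = [(960, 540), (970, 550), (955, 530), (120, 80)]
heat = gaze_heatmap(fixations)
hottest = heat.most_common(1)  # [((9, 5), 3)] -- road-ahead cell, 3 fixations
```

The "hottest" cells reveal where attention concentrated; a safe scan pattern would show regular fixations on mirrors and the speedometer, not only the road ahead.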

“We are able to give feedback and training so they can develop more safe driving habits,” Dr. D’Souza concluded.