A growing number of tools allow users to create online data displays, such as graphs, that are accessible to blind or partially sighted people. However, most tools require an existing visual chart that can then be converted into an accessible format.
This creates barriers that prevent blind and visually impaired users from building their own customized data views and can limit their ability to explore and analyze important information.
A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.
They created a software system called Umwelt (German for “environment”) that can enable blind and visually impaired users to create customized, multimodal data displays without the need for an initial visual chart.
Umwelt, an authoring environment designed for screen reader users, includes an editor that allows someone to load a dataset and create a custom display, such as a scatter plot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into non-speech sound.
The system, which can represent different types of data, includes a browser that allows a blind or partially sighted user to interactively explore the data display, seamlessly switching between each modality to interact with the data in a different way.
The researchers conducted a study with five expert screen reader users who found Umwelt useful and easy to learn. In addition to offering an interface that empowered them to create data representations (something they said they had lacked), users said Umwelt could facilitate communication between people who rely on different senses.
“We must remember that blind and partially sighted people are not isolated. They exist in these contexts where they want to talk to other people about data,” says Jonathan Zong, a graduate student in electrical engineering and computer science (EECS) and lead author of the paper presenting Umwelt. “I hope Umwelt will help change the way researchers think about accessible data analysis. Enabling the full participation of blind and partially sighted people in data analysis involves viewing visualization as just one piece of this larger, multisensory puzzle.”
Zong is joined on the paper by fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu “Katie” Chen; Daniel Hajas, a UCL researcher working with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, an associate professor of computer science at MIT who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). The paper will be presented at the ACM Conference on Human Factors in Computing Systems.
Decentering visualization
Researchers have previously developed interactive interfaces that provide a richer experience for screen reader users as they explore accessible data displays. Through that work, they realized that most tools for creating such displays involve converting existing visual charts.
With the goal of decentering visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began designing Umwelt together more than a year ago.
Early on, they realized that they would have to rethink how to present the same information using visual, auditory and textual forms.
“We had to put a common denominator behind the three modalities. By creating this new language for representation and making the outputs and inputs accessible, the whole is greater than the sum of its parts,” says Hajas.
To build Umwelt, they first considered what is unique about the way people use each sense.
For example, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to sonification, the experience is linear as the data is converted into tones that must be played one at a time.
“If you’re just thinking about directly translating visual features into non-visual features, then you’re missing the unique strengths and weaknesses of each modality,” adds Zong.
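To give a rough sense of why sonification is a linear experience, here is a minimal TypeScript sketch, written as an assumption for illustration rather than as Umwelt's actual code, that maps each value in a series to a tone and renders the tones strictly one after another. The field names, pitch range, and mapping are hypothetical.

```typescript
// Hypothetical sketch: mapping a data series to a linear sequence of tones.
// Unlike a scatter plot, which a sighted reader can scan at a glance, these
// tones must be rendered one after another, so listening is sequential.

interface Tone {
  frequencyHz: number; // pitch encodes the data value
  durationMs: number;  // fixed length per data point
}

// Map values onto an audible pitch range using a simple linear scale.
function sonify(values: number[], minHz = 220, maxHz = 880): Tone[] {
  const lo = Math.min(...values);
  const hi = Math.max(...values);
  const span = hi - lo || 1; // avoid division by zero for a constant series
  return values.map((v) => ({
    frequencyHz: minHz + ((v - lo) / span) * (maxHz - minHz),
    durationMs: 250,
  }));
}

// A listener hears the tones in order, one per data point.
const prices = [102, 110, 98, 120, 131];
for (const [i, tone] of sonify(prices).entries()) {
  console.log(`point ${i}: ${tone.frequencyHz.toFixed(0)} Hz for ${tone.durationMs} ms`);
}
```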
They designed Umwelt to offer flexibility, allowing the user to easily switch between modalities when one better suited their task at a given moment.
To use the editor, a user loads a dataset into Umwelt, which uses heuristics to automatically create default views in each modality.
If the dataset contains stock prices for companies, Umwelt could generate a multi-series line chart, a text structure that groups the data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.
The default heuristics are intended to help the user get started.
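To make the stock-price example above concrete, here is one hypothetical way such a default multimodal specification could be written down. The structure and field names (symbol, date, price, and the per-modality keys) are assumptions for illustration, not Umwelt's actual format.

```typescript
// Hypothetical sketch of a default multimodal specification that heuristics
// might derive from a stock-price dataset with fields: symbol, date, price.
// The shape and names are illustrative, not Umwelt's actual format.

interface MultimodalSpec {
  data: { fields: string[] };
  visualization: { mark: string; x: string; y: string; color: string };
  text: { groupBy: string[]; describe: string[] };
  sonification: { traverseBy: string; orderBy: string; encode: { duration: string } };
}

const defaultSpec: MultimodalSpec = {
  data: { fields: ["symbol", "date", "price"] },
  // Multi-series line chart: one line per ticker symbol.
  visualization: { mark: "line", x: "date", y: "price", color: "symbol" },
  // Textual structure: group records by symbol, then date, and describe price.
  text: { groupBy: ["symbol", "date"], describe: ["price"] },
  // Sonification: one tone per date, arranged by symbol, tone length encodes price.
  sonification: { traverseBy: "symbol", orderBy: "date", encode: { duration: "price" } },
};

console.log(JSON.stringify(defaultSpec, null, 2));
```

Because all three views come from one description of the same fields, a user can start from these defaults instead of specifying each modality from scratch.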
“In any kind of creative tool, you have the blank slate effect where it’s hard to know where to start. It’s complex in a multimodal tool because you have to specify things in three different views,” says Zong.
The editor links interactions between modalities, so if the user changes the textual description, that information is adjusted in the appropriate sonification. One might use the editor to create a multimodal view, switch to the browser for initial exploration, and then return to the editor to make adjustments.
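One way such linking could work, sketched here purely as an assumption rather than a description of Umwelt's implementation, is to derive every modality from a single shared specification, so that an edit made in one view propagates to the others. The binding names and helper functions below are hypothetical.

```typescript
// Hypothetical sketch of linked editing: all modality views subscribe to one
// shared spec, so changing a field binding in the text view also updates the
// sonification. This is an assumption, not Umwelt's actual design.

type FieldBinding = { groupBy: string; measure: string };

interface SharedSpec {
  binding: FieldBinding;
  listeners: Array<(b: FieldBinding) => void>;
}

function createSpec(binding: FieldBinding): SharedSpec {
  return { binding, listeners: [] };
}

// Each modality registers a render callback and re-renders on every change.
function subscribe(spec: SharedSpec, render: (b: FieldBinding) => void): void {
  spec.listeners.push(render);
  render(spec.binding);
}

// An edit made in any one view propagates to every subscribed modality.
function updateBinding(spec: SharedSpec, next: Partial<FieldBinding>): void {
  spec.binding = { ...spec.binding, ...next };
  spec.listeners.forEach((render) => render(spec.binding));
}

const spec = createSpec({ groupBy: "symbol", measure: "price" });
subscribe(spec, (b) => console.log(`text: group by ${b.groupBy}, describe ${b.measure}`));
subscribe(spec, (b) => console.log(`sonification: tone length encodes ${b.measure} per ${b.groupBy}`));

// Editing the textual grouping updates the sonification as well.
updateBinding(spec, { groupBy: "date" });
```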
Helping users communicate about data
To test Umwelt, they created a diverse set of multimodal displays, from scatterplots to multiview graphs, to ensure that the system could effectively represent different types of data. Then they put the tool in the hands of five expert screen reader users.
Study participants generally found Umwelt useful for creating, exploring, and discussing data representations. One user said Umwelt was like an “enabler” that reduced the time needed to analyze data. Users agreed that Umwelt could help them communicate data more easily with sighted colleagues.
“What stands out about Umwelt is its core philosophy of de-emphasizing the visual in favor of a balanced, multi-sensory data experience. Often, non-visual representations of data are relegated to the status of secondary considerations, mere additions to their visual counterparts. However, visualization is only one aspect of data representation. I appreciate their efforts in changing this perception and embracing a more inclusive approach to data science,” says JooYoung Seo, an assistant professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, who was not involved in this work.
Going forward, the researchers plan to create an open-source version of Umwelt that others can build on. They also want to integrate tactile sensing into the system as an additional modality, enabling the use of tools such as refreshable tactile graphics displays.
“Beyond its impact on end users, I hope Umwelt can be a platform for asking scientific questions about how people use and perceive multimodal displays and how we can improve design beyond this initial step,” says Zong.
This work was supported in part by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.