Imagine having a conversation in which, no matter what the other person says, you have to ask them to repeat it at least three times before you can finally hear them. Hearing loss affects 15% of American adults, and three out of every 1,000 children are born with hearing loss in one or both ears. Two students in the Technion-Cornell dual master’s degree programs, Christopher Caulfield and Devon Bain, have developed a prototype of an augmented reality (AR) headset to aid people with hearing loss.
Both students have a personal connection to hearing loss that fueled their quest to develop an aid: Caulfield was born deaf, and Bain’s mother has hearing loss. Their research has also motivated fellow students to develop solutions for accessibility issues.
Caulfield says, “We were surprised by how each of them [was] interested in accessibility because they were related to someone with a disability or good friends with someone with a disability.” Even people without a personal connection to disability were moved to create positive social change.
At the beginning of their research, Caulfield and Bain interviewed seven people with hearing difficulty to better understand their challenges, the tools they use, and what technology might be useful. Participants ranged from students to senior citizens, and the most frequently reported difficulty was holding a conversation in noisy places. Because of this challenge, many participants lost motivation to socialize and became self-conscious.
“Our research focuses on how this is a barrier in one-on-one conversations,” says Caulfield. “We want to make conversations more seamless.” Many of those interviewed suggested a wearable AR captioning system that could aid in having longer conversations in loud atmospheres. This suggestion prompted Caulfield and Bain to display captions on AR glasses.
As part of their project, the students developed software to be used in conjunction with an AR headset. Caulfield and Bain pitched their idea to stakeholders at Verizon, their mentoring company. Bain says, “Hopefully, that will lead to a fully functional commercial project that people will use.” To create the prototype, they used ARKit, Apple’s AR platform for iOS, along with a text box for captions.
All users have to do is slot their smartphone into a MERGE AR headset, which then plays scripted dialogue to demonstrate the captioned speech. Using computer vision and face recognition, the students locate a person’s face and anchor captions directly beneath the speaker’s chin. The text style and color change depending on the speaker’s tone and emphasis on certain words.
“Connective media is really unique because it doesn’t just teach us about software development, it also teaches about user research and design,” Bain says.
Caulfield and Bain are partnering with Shiri Azenkot, an assistant professor at the Jacobs Technion-Cornell Institute at Cornell Tech whose research focuses on accessible technology.
Students at Cornell Tech meet on a regular basis to discuss potential accessibility projects. Bain says, “Our project has really inspired a lot of people to think about accessibility at Cornell Tech.”