Student Scholar Symposium Abstracts and Posters

Document Type

Poster

Publication Date

Spring 5-2021

Faculty Advisor(s)

Franceli Cibrian

Abstract

With the rise of voice assistants, speech recognition technologies have been widely used to support natural language processing. However, how well these technologies perform depends on who the users are: they have been trained predominantly on “typical” speech patterns, leaving aside people with disabilities who have unique speech patterns. In particular, people with Down Syndrome have trouble using speech recognition technology because of differences in their speech. To inform the development of more accessible voice assistants, this project aims to characterize how speech recognition performs for individuals with Down Syndrome. To accomplish this aim, we analyze the quality of transcripts generated by two popular speech recognition services (IBM and Google) to compare how they handle speech from neurotypical people and people with Down Syndrome. We analyzed 7 videos of interviews between a neurotypical interviewer and participants with Down Syndrome. We computed the symmetric differences between the auto-generated subtitles (IBM and YouTube) and subtitles provided by humans (the ground truth), as well as the word error rate for all sentences. We found that current speech recognition algorithms do not recognize the speech of people with Down Syndrome as well as the speech of neurotypical people. We are currently analyzing the specific types of errors. By characterizing the speech patterns of people with disabilities, speech recognition technologies can become more inclusive and truly help those who need voice assistants the most.
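The following is a minimal sketch, not the project's actual analysis pipeline, of the two measures named in the abstract: the symmetric difference between the words of a human-made (ground truth) subtitle and an auto-generated one, and the word error rate computed from a word-level edit distance. The example sentences and function names are hypothetical, chosen only for illustration.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed with a standard word-level Levenshtein (edit distance) table."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


def symmetric_word_difference(reference: str, hypothesis: str) -> set:
    """Words that appear in exactly one of the two transcripts."""
    return set(reference.lower().split()) ^ set(hypothesis.lower().split())


if __name__ == "__main__":
    # Invented example sentences, for illustration only.
    ground_truth = "I like going to the movies on weekends"
    auto_caption = "I like going to the moves on weekend"
    print("WER:", round(word_error_rate(ground_truth, auto_caption), 2))
    print("Symmetric difference:", symmetric_word_difference(ground_truth, auto_caption))
```

In this toy example the two misrecognized words raise the WER and appear, together with the correct words they replaced, in the symmetric difference; comparing these quantities across neurotypical and Down Syndrome speakers is what reveals the performance gap described above.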

Comments

Presented at the virtual Spring 2021 Student Scholar Symposium at Chapman University.
