Student Scholar Symposium Abstracts and Posters

Document Type

Poster

Publication Date

Spring 5-6-2026

Faculty Advisor(s)

Kendra Day

Abstract

American Sign Language (ASL) is a visually rich, spatially organized language that relies on combinations of hand movements, body positioning, facial expressions, and motion and spatial perception, all of which make interpretation difficult for automated recognition systems. Current assistive technology approaches to ASL interpretation generally fall into two categories: computer vision models (including deep learning, multi-focus image fusion, and keypoint tracking) and wearable, multimodal sensor-based systems (such as smart glasses and inertial-sensor gloves). Computer vision models perform well in controlled environments, but they falter when faced with non-manual signs and features, signer variability, and rapid sign assimilation, failing to capture all aspects of signed communication. Wearable systems address some of these challenges but introduce usability problems of their own, including discomfort and limited social acceptance. Persistent gaps remain, including inconsistent benchmarks and restricted, undiverse datasets, stemming in part from development processes that rarely involve deaf signers. This literature review highlights the need for more diverse datasets, multimodal fusion, and a clear focus on human factors to advance ASL assistive technologies toward real-world accessibility for the deaf community.

Comments

Presented at the Spring 2026 Student Scholar Symposium at Chapman University.
