We use insights from research on American Sign Language (ASL) phonology to train models for isolated sign language recognition (ISLR), a step towards automatic sign language understanding. Our key insight is to explicitly recognize the role of phonology in sign production to achieve more accurate ISLR than existing work, which does not consider sign language phonology. We train ISLR models that take in pose estimations of a signer producing a single sign and predict not only the sign but also its phonological characteristics, such as the handshape. These auxiliary predictions lead to a nearly 9% absolute gain in sign recognition accuracy on the WLASL benchmark, with consistent improvements in ISLR regardless of the underlying prediction model architecture. This work has the potential to accelerate linguistic research in the domain of signed languages and reduce communication barriers between deaf and hearing people.
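The auxiliary-prediction idea described above can be sketched as a shared encoder feeding two classification heads, one for the sign and one for a phonological feature such as handshape, trained with a combined loss. The sketch below is a minimal NumPy illustration under assumed, hypothetical settings (the dimensions, the single-vector pose input, the ReLU encoder, and the 0.5 auxiliary weight are all illustrative choices, not the paper's actual architecture or hyperparameters).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened pose vector, a small sign vocabulary,
# and a handshape inventory (illustrative only).
POSE_DIM, HIDDEN, N_SIGNS, N_HANDSHAPES = 150, 64, 100, 50

# Shared encoder weights plus two task-specific output heads.
W_enc = rng.normal(0.0, 0.01, (POSE_DIM, HIDDEN))
W_sign = rng.normal(0.0, 0.01, (HIDDEN, N_SIGNS))
W_shape = rng.normal(0.0, 0.01, (HIDDEN, N_HANDSHAPES))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(pose):
    """A shared representation feeds both the sign head and the
    auxiliary phonological (handshape) head."""
    h = np.maximum(pose @ W_enc, 0.0)  # ReLU encoder
    return softmax(h @ W_sign), softmax(h @ W_shape)

def multitask_loss(p_sign, p_shape, y_sign, y_shape, aux_weight=0.5):
    """Cross-entropy on the sign label plus a down-weighted
    auxiliary cross-entropy on the handshape label."""
    idx = np.arange(len(y_sign))
    ce_sign = -np.log(p_sign[idx, y_sign] + 1e-12)
    ce_shape = -np.log(p_shape[idx, y_shape] + 1e-12)
    return float(np.mean(ce_sign + aux_weight * ce_shape))

# One batch of 4 random pose vectors with arbitrary labels.
pose = rng.normal(size=(4, POSE_DIM))
p_sign, p_shape = forward(pose)
loss = multitask_loss(p_sign, p_shape,
                      y_sign=np.array([3, 7, 1, 0]),
                      y_shape=np.array([2, 5, 5, 9]))
```

Because both heads share the encoder, gradients from the phonological task shape the representation the sign classifier uses, which is the intuition behind the reported accuracy gains.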
Kezar, L., Thomason, J., & Sehyr, Z. S. (2023). Improving Sign Recognition with Phonology. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2732–2737, Dubrovnik, Croatia. Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.eacl-main.200
This work is licensed under a Creative Commons Attribution 4.0 License.