S2S: AI-Powered Translation Between Sign and Spoken Languages

Computing
According to the World Health Organization (WHO), over 5% of the global population experiences disabling hearing loss. Sign Language Translation (SLT) models have therefore become essential for automating communication between Deaf and hearing individuals. However, current SLT approaches are limited in both the number of signs they can recognize and the quality of that recognition. This project proposes a novel vision-based SLT model that segments continuous American Sign Language (ASL) sentences and identifies their corresponding glosses using both manual and non-manual signals, together with word-level sign data. By developing and fine-tuning multiple large language and statistical models, the system then translates between ASL syntax and spoken English.
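To make the pipeline concrete, the sketch below illustrates only the final gloss-to-English step, assuming a sequence-to-sequence model fine-tuned on gloss/English pairs. The checkpoint name, prompt format, and function names are illustrative assumptions; the abstract does not specify which models the project uses.

```python
# Minimal sketch of a gloss-to-English translation step, assuming a
# T5-style seq2seq model fine-tuned on ASL gloss / English sentence pairs.
# The checkpoint name and prompt prefix are placeholders, not the
# project's actual configuration.
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "t5-small"  # placeholder; the project fine-tunes its own models

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def glosses_to_english(glosses: list[str]) -> str:
    """Translate a recognized ASL gloss sequence into an English sentence."""
    # ASL glosses follow ASL syntax (e.g. topic-comment order), so the
    # model must reorder constituents as well as translate words.
    prompt = "translate ASL glosses to English: " + " ".join(glosses)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example: glosses produced by the upstream segmentation/recognition stages
print(glosses_to_english(["YESTERDAY", "STORE", "I", "GO"]))
```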
Canada
Angela Cao
Age: 15