Did You Know You Could Employ AI To Translate Sign Language To English?

Communicating with a deaf person can be difficult because few hearing people learn ASL (American Sign Language). Consider a simple scenario: asking a deaf colleague to email a vendor. Without a shared language, it is hard to go into detail and express everything required directly.

Building strong communication skills means paying attention to what goes into a message beyond the words themselves.

Every good professional relationship is built on communication, and clarity, conciseness, and coherence among all parties are essential to it. Those qualities, however, do not come easily when the parties do not share a language.

How Does It Work?


As soon as the camera detects an ASL gesture, Gupta can interact seamlessly: the sign is interpreted as English text on her phone screen within about three seconds. But how does this happen? Her GitHub repository contains several image-collection Python files, one corresponding to each sign.
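For illustration, here is a minimal sketch of what such an image-collection script typically looks like in a TensorFlow object detection workflow. The label names, folder layout, and image counts below are assumptions for the example, not code taken from Gupta's repository.

```python
# Hypothetical image-collection script for a sign-detection dataset.
# Labels, paths, and counts are assumed; adjust to your own setup.
import os
import time
import cv2  # pip install opencv-python

LABELS = ["hello", "i_love_you", "thank_you", "please", "yes", "no"]
IMAGES_PER_LABEL = 15
OUTPUT_DIR = "collected_images"  # assumed folder layout

cap = cv2.VideoCapture(0)  # default webcam
for label in LABELS:
    os.makedirs(os.path.join(OUTPUT_DIR, label), exist_ok=True)
    print(f"Collecting images for '{label}' -- get ready...")
    time.sleep(3)  # time to position the hand sign
    for i in range(IMAGES_PER_LABEL):
        ok, frame = cap.read()
        if not ok:
            continue
        path = os.path.join(OUTPUT_DIR, label, f"{label}_{i}.jpg")
        cv2.imwrite(path, frame)
        cv2.imshow("capture", frame)
        cv2.waitKey(1)
        time.sleep(1)  # pause so each capture has a slightly varied pose
cap.release()
cv2.destroyAllWindows()
```

The collected frames would then be annotated with bounding boxes and fed to a training pipeline, which is the standard next step in this kind of workflow.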

The AI system converts signs into English text using image recognition: it analyzes the skeletal motions of multiple body parts, such as the fingers and arms. To develop the technology, she digitized the signing of a few people.
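Once a model has been trained and exported, recognition can run live on webcam frames. Below is a hedged sketch of the standard TensorFlow 2 SavedModel inference loop; the model path, class ids, and confidence threshold are all assumed for illustration, not taken from the project.

```python
# Hypothetical real-time inference loop for a TF2 object-detection
# SavedModel; model path, label map, and threshold are assumptions.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # assumed path
LABELS = {1: "Hello", 2: "I Love You", 3: "Thank You",
          4: "Please", 5: "Yes", 6: "No"}  # assumed class ids

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Exported detection models expect a batched uint8 tensor [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    score = float(detections["detection_scores"][0][0])  # top detection
    cls = int(detections["detection_classes"][0][0])
    if score > 0.8:  # assumed confidence threshold
        text = f"{LABELS.get(cls, '?')} ({score:.2f})"
        cv2.putText(frame, text, (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("sign-to-text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```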

Unlike communicating with deaf or hard-of-hearing people in writing, the AI system works in real time, offering a far more dynamic way to converse.

Nevertheless, the app is in its early stages: at the moment it can correctly recognize only six signs – Hello, I Love You, Thank You, Please, Yes, and No. There is still plenty of room for development; a large amount of sign language data is required to train a model that reliably translates signs into English text.
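In the TensorFlow object detection workflow, the set of recognizable classes is declared in a label map file, so expanding beyond six signs means both new training data and new entries in that file. A plausible label_map.pbtxt for the current six signs might look like this (the ids are assumptions):

```protobuf
item {
  id: 1
  name: "Hello"
}
item {
  id: 2
  name: "I Love You"
}
item {
  id: 3
  name: "Thank You"
}
item {
  id: 4
  name: "Please"
}
item {
  id: 5
  name: "Yes"
}
item {
  id: 6
  name: "No"
}
```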

Far more nuance is required for AI to become a truly useful tool for the deaf or hard of hearing, and capturing it will be tough for a single individual to do alone.

Who Built It?

Gupta, a student at the Vellore Institute of Technology, built this system. Using TensorFlow's object detection API, it transforms sign-language movements into English.

“To build a deep learning model solely for sign detection is a tough problem, but very doable,” she told one of our sources, “and right now I’m an amateur student, but I’m learning. I’m confident in the open-source community that will come up with a solution, and maybe we can find a way to make lives better.”

“I think object detection performed just great for a short dataset and a small-scale individual project,” she continued. It is merely a small-scale implementation of the principle of inclusion in our diverse society.

The Bottom Line

AI is used in all sorts of scenarios today, so why shouldn't it also help us communicate with deaf or hard-of-hearing people?

Existing AI-based sign language translation is still insufficient, since sign languages have dialects and geographically distinct idioms. A more advanced, more comprehensive version could let deaf people conduct simple everyday conversations.
