Services like Google Translate can help millions of people communicate in over 100 languages. Users can type or speak words to be translated, or even translate text in photos and videos using augmented reality.
Now, computer science professor Andrea Salgian and Ben Guerrieri ’26 are working to add one more language to the list: American Sign Language.
Using computer vision and machine learning, the researchers are setting out to create a program that serves as a Google Translate-style tool for ASL: users sign to the camera and receive a direct translation.
“Right now, we’re looking at recognizing letters and words that have static gestures,” Salgian said, referring to letters in the ASL alphabet with no hand movement. The program will act more like a dictionary at first. The pair will then develop the automated translation, she explained.
Salgian’s research uses MediaPipe, a free machine-learning framework developed by Google that detects joint locations from a camera in real time. The program tracks the user’s movements, provides the coordinates of every joint in the hand, and uses those coordinates to extract gestures that are matched to ASL signs.
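MediaPipe’s hand-tracking solution reports a fixed set of 21 hand landmarks per frame. A minimal sketch of the matching step described above might look like the following; the template signs, thresholds, and helper names here are illustrative assumptions, not the researchers’ actual code.

```python
# Hypothetical sketch: match a set of hand-landmark coordinates
# (as MediaPipe would report them) against stored template signs.
import math

def normalize(landmarks):
    """Translate landmarks so the wrist (landmark 0) is the origin,
    then scale so the farthest point sits at distance 1. This makes
    matching invariant to where the hand is and how large it appears."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def distance(a, b):
    """Mean Euclidean distance between two normalized landmark sets."""
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

def match_sign(landmarks, templates, threshold=0.25):
    """Return the template sign closest to the observed hand pose,
    or None if nothing falls within the threshold (unknown gesture)."""
    norm = normalize(landmarks)
    best_sign, best_dist = None, float("inf")
    for sign, template in templates.items():
        d = distance(norm, normalize(template))
        if d < best_dist:
            best_sign, best_dist = sign, d
    return best_sign if best_dist <= threshold else None
```

This dictionary-lookup approach only covers static poses, which matches the project’s current stage of recognizing letters and words with static gestures; signs involving motion would need features computed across frames.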
Guerrieri, a computer science major, discovered Salgian’s project shortly after arriving at TCNJ and is now working alongside her on this AI research.
“It’s such a hands-on thing for me to do,” he said of his contribution to the project, which consists of researching and developing the translator algorithms. “We get to incrementally develop algorithms that have super fascinating real-time results.”
This project is part of Salgian’s ongoing interest and research in visual gesture recognition, which also includes applications to musical conducting and exercising.
“ASL is a fascinating application, especially looking at the accessibility aspect of it,” Salgian said. “To make communication possible for those who don’t speak ASL but would love to understand would mean so much.”
— Kaitlyn Bonomo ’23