SignAll recently attended CES, the world's largest technology convention.
We had the opportunity to meet many industry leaders, developers, accessibility organizations, deaf attendees, and members of the general public. We received inspiring feedback, and we look forward to sharing our newest developments over the coming months.
Watch the 2018 SignAll CES video here
We would like to share with you two articles by Devin Coldewey about SignAll. The first is a short overview of CES 2018, and the second highlights the essence of our innovation.
We are honored to be featured on TechCrunch as a reason to attend CES in 2018.
“The secret to avoiding CES cynicism is never really going”
(…) SignAll (pronounced “sign all”), a company using a rather complex camera/Kinect setup to translate sign language in real time. Here’s a tremendously difficult engineering problem, further complicated by it being a language and social problem as well, yet this small company is approaching it slowly and steadily and with the support of the deaf people it hopes to enable.
“SignAll is slowly but surely building a sign language translation platform”
(…) sign language is a unique case, and translating it uniquely difficult, because it is fundamentally different from spoken and written languages. All the same, SignAll has been working hard for years to make accurate, real-time machine translation of ASL a reality.
SignAll’s system works with complete sentences, not just individual words presented sequentially. A system that just takes down and translates one sign after another (limited versions of which exist) would be liable to create misinterpretations or overly simplistic representations of what was said. While that might be fine for simple things like asking directions, real meaningful communication has layers of complexity that must be detected and accurately reproduced.
This long-running project is a sobering reminder of both the possibilities and limitations of technology. True, automatic translation of sign language is a goal only just becoming possible with advances in computer vision, machine learning and imaging. But unlike many other translation or CV tasks, it requires a great deal of human input at every step, not just to achieve basic accuracy, but to ensure the humanitarian aspects are present, as well.
After all, this isn’t just about the convenience of reading a foreign news article or communicating abroad, but about a class of people who are fundamentally excluded from what most people think of as in-person communication — speech. To improve their lot is worth waiting for.