The Content4ALL project works to make content more accessible to the Deaf community by providing a long-term, automated form of sign-language subtitling.
The project aims to create a photorealistic 3D human avatar for sign-interpreted content creation, to enable low-cost personalization of content for Deaf viewers without disruption to hearing viewers, and to develop the technologies and algorithms needed for automatic sign translation.
The main target innovation areas are:
Photorealistic rendering and animation of human avatars: a new low-cost, low-complexity capture and animation framework for automatic sign-language translation.
Opportunities for different business sectors: sign interpretation as a service, freelance sign interpreters working from home, and interactive apps and services for obtaining a sign-interpreted version of on-demand content.
Automatic concatenation and blending of basic movement and expression sequences: a completely new algorithm to create novel video sequences from existing building blocks.
Effective sign-language translation for a small domain of discourse: a novel approach will combine statistical and rule-based perspectives to overcome the deficiencies of each approach when applied on its own.
HbbTV applications and content-synchronization mechanisms for cloud-based dynamic content creation: innovations to ensure that latency in receiving the real-time sign-translation stream does not impact the user experience.
Standards for user interfaces for user groups with impairments: a definition of usability and UX for the Deaf, and identification of essential criteria for the content, its transmission, and the user interface.
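The concatenation-and-blending idea above can be illustrated with a minimal sketch: cross-fading the overlapping frames of two pose sequences so the joined animation has no visible jump. The function, array shapes, and clip data here are hypothetical, chosen only to show the blending principle, not the project's actual algorithm.

```python
import numpy as np

def blend_sequences(seq_a, seq_b, overlap):
    """Concatenate two pose sequences, cross-fading over `overlap` frames.

    seq_a, seq_b: arrays of shape (frames, joints, 3) holding joint positions.
    The last `overlap` frames of seq_a are linearly blended with the first
    `overlap` frames of seq_b so the transition is gradual.
    """
    # Blend weights ramp from 0 (all seq_a) to 1 (all seq_b).
    w = np.linspace(0.0, 1.0, overlap).reshape(-1, 1, 1)
    transition = (1.0 - w) * seq_a[-overlap:] + w * seq_b[:overlap]
    return np.concatenate([seq_a[:-overlap], transition, seq_b[overlap:]])

# Two toy "sign" clips: 10 frames, 5 joints, 3D coordinates each.
clip_a = np.zeros((10, 5, 3))
clip_b = np.ones((10, 5, 3))
out = blend_sequences(clip_a, clip_b, overlap=4)
print(out.shape)  # (16, 5, 3): 6 + 4 blended + 6 frames
```

A real system would blend joint rotations (e.g. with quaternion interpolation) and facial-expression parameters rather than raw positions, but the scheduling of the cross-fade window is the same.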
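The hybrid translation item can likewise be sketched: a rule-based pass handles the known phrases of the small discourse domain, and a statistical component takes over where the rules have no coverage. Everything here is a toy stand-in: the rule table, the unigram "model", and the glosses are invented for illustration and do not reflect the project's actual translation pipeline.

```python
RULES = {            # hand-written phrase-to-gloss rules (hypothetical)
    "good morning": ["GOOD", "MORNING"],
    "weather": ["WEATHER"],
}

STAT_MODEL = {       # stand-in for a learned word-to-gloss model
    "sunny": ["SUN"],
    "today": ["TODAY"],
}

def translate(sentence):
    """Translate text to a gloss sequence: rules first, statistics second."""
    glosses, tokens = [], sentence.lower().split()
    i = 0
    while i < len(tokens):
        # Longest-match rule lookup over the remaining tokens.
        for j in range(len(tokens), i, -1):
            phrase = " ".join(tokens[i:j])
            if phrase in RULES:
                glosses += RULES[phrase]
                i = j
                break
        else:
            # Rule miss: fall back to the statistical model,
            # or pass the token through as an uppercase gloss.
            glosses += STAT_MODEL.get(tokens[i], [tokens[i].upper()])
            i += 1
    return glosses

print(translate("good morning today weather sunny"))
# ['GOOD', 'MORNING', 'TODAY', 'WEATHER', 'SUN']
```

The point of the combination is exactly the one the text makes: rules give precision inside the small domain, while the statistical fallback prevents the brittleness that a purely rule-based system shows on unseen input.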