Open Exhibits & CMME: Accessibility in Interactive Exhibits
Open Exhibits takes another step towards accessibility in its latest release with the addition of text-to-speech and speech-to-text capabilities. The new Open Exhibits 3.0 release includes an example project that interfaces with the Microsoft Speech Application Programming Interface (SAPI). This technology lets you create applications that are accessible to visually impaired users:
- Control your exhibit with voice commands by defining a vocabulary of recognizable words and phrases.
- Describe the contents of your applications using text-to-speech synthesis (a sketch of both capabilities follows this list).
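The Open Exhibits example project wraps SAPI for you; purely for orientation, here is a minimal C++ sketch of the underlying SAPI calls it relies on, written directly against the Win32 COM interfaces rather than the SDK. The rule name and phrases ("ExhibitCommands", "next exhibit", "describe this image") are hypothetical placeholders, and the recognition event loop is omitted.

```cpp
// Minimal SAPI sketch: speak a description aloud, then register a small
// command vocabulary with the shared speech recognizer.
// Windows only; link against sapi.lib and ole32.lib.
#include <windows.h>
#include <sapi.h>
#include <cstdio>

int main()
{
    if (FAILED(CoInitialize(NULL))) return 1;

    // --- Text-to-speech: describe exhibit content aloud ---
    ISpVoice *pVoice = NULL;
    if (SUCCEEDED(CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                                   IID_ISpVoice, (void **)&pVoice)))
    {
        pVoice->Speak(L"Welcome to the exhibit. Touch the screen to begin.",
                      SPF_DEFAULT, NULL);
        pVoice->Release();
    }

    // --- Speech-to-text: register a vocabulary of recognizable phrases ---
    ISpRecognizer  *pRecognizer = NULL;
    ISpRecoContext *pContext    = NULL;
    ISpRecoGrammar *pGrammar    = NULL;
    if (SUCCEEDED(CoCreateInstance(CLSID_SpSharedRecognizer, NULL, CLSCTX_ALL,
                                   IID_ISpRecognizer, (void **)&pRecognizer)) &&
        SUCCEEDED(pRecognizer->CreateRecoContext(&pContext)) &&
        SUCCEEDED(pContext->CreateGrammar(1, &pGrammar)))
    {
        // One top-level rule whose transitions are the command phrases
        // (names and phrases here are illustrative only).
        SPSTATEHANDLE hRule;
        pGrammar->GetRule(L"ExhibitCommands", 0,
                          SPRAF_TopLevel | SPRAF_Active, TRUE, &hRule);
        pGrammar->AddWordTransition(hRule, NULL, L"next exhibit", L" ",
                                    SPWT_LEXICAL, 1.0f, NULL);
        pGrammar->AddWordTransition(hRule, NULL, L"describe this image", L" ",
                                    SPWT_LEXICAL, 1.0f, NULL);
        pGrammar->Commit(0);
        pGrammar->SetRuleState(NULL, NULL, SPRS_ACTIVE);
        printf("Command vocabulary active; handle SPEI_RECOGNITION events "
               "in your application to respond to spoken commands.\n");
    }

    if (pGrammar)    pGrammar->Release();
    if (pContext)    pContext->Release();
    if (pRecognizer) pRecognizer->Release();
    CoUninitialize();
    return 0;
}
```

In the actual exhibit project, the recognized phrases would be routed to the same handlers your touch interactions use, so voice becomes an alternate input channel rather than a separate code path.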
This initiative was made possible in part by the Creating Museum Media for Everyone (CMME) project, a collaborative effort of the Museum of Science, the WGBH National Center for Accessible Media, Ideum, and Audience Viewpoints. CMME furthers the science museum field's understanding of ways to research, develop, and evaluate digital interactives that are inclusive of all people.
Future developments of the Open Exhibits + CMME initiative will streamline audio accessibility within the SDK, including:
- Integration with the UX library.
- An automatic voice-accessibility mode that makes the technology easier to incorporate.
- A built-in navigation system for visually impaired users, targeting multiuser, multitouch displays and environments.
View the CMME webpage for more information:
/research/cmme/
by Ken Willes on November 13, 2013