Open Exhibits - Blog

Blog

SDK Update - Version 2.0.5

We know it has only been a week, but we're so excited we can't wait. This release offers a vast improvement to our native touch architecture. We've been able to double our performance when using native touch as the input source to Open Exhibits.

We've also addressed several issues that have been brought to our attention by the community, including fixes for the tap and hold gestures. Thanks for your input and support.

Release Notes:

  • BUG FIX: gesture start, complete, and end now reset correctly
  • BUG FIX: tap, double_tap, and triple_tap can now be given arbitrary names in GML
  • BUG FIX: event time and time between events can now be set in GML for tap gestures (see the sketch after this list)
  • Added n-tap, n-double_tap, and n-triple_tap functionality; the gesture now fires only when the specified number (n) of events occurs (note: if n=0, batch events are fired)
  • Added complete n-hold gesture control
  • Integrated HOLD gesture processing into the kinemetric class
  • Improved TUIO implementation for AIR, including CML components and elements
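
To give a sense of what these GML-level changes look like in practice, here is a rough, hypothetical sketch of a renamed tap gesture with explicit timing values. The element and attribute names below are assumptions based on typical GML gesture definitions, not an excerpt from the SDK's shipped files; check gestureml.org for the authoritative schema.

<!-- Hypothetical GML sketch (illustrative only): a double-tap gesture given a custom
     name, with assumed timing thresholds in milliseconds. -->
<Gesture id="my_custom_double_tap" type="tap">
    <match>
        <action>
            <initial>
                <!-- assumed attributes: max press duration, max gap between taps, max finger drift -->
                <point event_duration_max="250" interevent_duration_max="300" translation_max="10"/>
                <cluster point_number="0" point_number_min="1" point_number_max="5"/>
                <event touch_count="2"/>
            </initial>
        </action>
    </match>
    <mapping>
        <update>
            <gesture_event type="double_tap"/>
        </update>
    </mapping>
</Gesture>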

The new release can be downloaded here: 
/downloads/sdk/

More Info

by Charles Veasey on Jun 13, 2012

Getting Started for Non-Programmers

Since the release of Open Exhibits 2.0 back in March, we've been busy building new components, updating the SDK (software development kit), which is now at version 2.0.4 if you haven't updated lately, and improving the site. One item that we are beginning to tackle is increasing awareness about the ability to customize Open Exhibits components, exhibits, and templates without programming.

Open Exhibits 2.0 has two new associated XML-based markup languages: GestureML (GML), which defines gestures and their interactions, and CreativeML (CML), which is used for defining object creation, management, and interaction within a multiuser / multitouch environment. We've just put together our first basic tutorial on customizing CML; to follow it, you will need to know how to edit a markup language. (You can learn more about these markup languages at gestureml.org and creativeml.org.)

If you download the Image Viewer component from the Open Exhibits site, you'll find a brief guide that explains how to get started editing CML (GettingStartedWithCMLV1.0.pdf). I've linked to the document here if you want to take a look without downloading the entire package.
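
To give you a rough idea of what editing CML looks like before you open the guide, here is a small, hypothetical fragment in the spirit of the Image Viewer configuration. The element and attribute names are assumptions made for illustration, not a copy of the component's actual CML; the Getting Started document and creativeml.org describe the real schema.

<!-- Hypothetical CML sketch (illustrative only): a touch-enabled image with a few
     gestures attached by reference to GML gesture definitions. -->
<cml>
    <TouchContainer>
        <Image src="library/assets/sample_photo.jpg" width="400" height="300"/>
        <GestureList>
            <Gesture ref="n-drag" gestureOn="true"/>
            <Gesture ref="n-scale" gestureOn="true"/>
            <Gesture ref="n-rotate" gestureOn="true"/>
        </GestureList>
    </TouchContainer>
</cml>

Changes like swapping the image source, resizing the object, or removing one of the gesture references are exactly the kind of no-programming customization we have in mind.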

While the Getting Started document is geared toward the Image Viewer, you'll find that most of our components are similarly structured. We will be greatly expanding our documentation of CML and trying to find other ways to make Open Exhibits more accessible and easy to use. We'll be posting more resources in the coming weeks.

Update, June 10, 2012: We've added a Wanted Board item for developing a screencast of the Getting Started with CML tutorial. Claim it! You can contribute to the community, and we'll give you a $75 Amazon Gift Certificate for your help.

More Info

by Jim Spadaccini on Jun 8, 2012

SDK Update - Version 2.0.4

We are pleased to announce an Open Exhibits SDK update - version 2.0.4. This release offers several bug fixes and new features:

  • BUG FIX: gesture start, complete, and end now reset correctly
  • BUG FIX: tap, double_tap, and triple_tap can now be given arbitrary names in GML
  • BUG FIX: event time and time between events can now be set in GML for tap gestures
  • Added n-tap, n-double_tap, and n-triple_tap functionality; the gesture now fires only when the specified number (n) of events occurs (note: if n=0, batch events are fired)
  • Added complete n-hold gesture control (see the sketch after this list)
  • Integrated HOLD gesture processing into the kinemetric class
  • Improved TUIO implementation for AIR, including CML components and elements
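
For the new hold control, a GML definition might look roughly like the sketch below. As with the tap sketch in the 2.0.5 post above, the element and attribute names are assumptions rather than the SDK's actual files; gestureml.org has the authoritative reference.

<!-- Hypothetical GML sketch (illustrative only): a hold gesture that fires once the
     touch points stay nearly stationary for an assumed minimum duration. -->
<Gesture id="n-hold" type="hold">
    <match>
        <action>
            <initial>
                <cluster point_number="0" point_number_min="1" point_number_max="5"/>
                <!-- assumed thresholds: minimum press time (ms) and maximum allowed drift (px) -->
                <point event_duration_min="1000" translation_max="5"/>
            </initial>
        </action>
    </match>
    <mapping>
        <update>
            <gesture_event type="hold"/>
        </update>
    </mapping>
</Gesture>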

The new release can be downloaded here: 
/downloads/sdk/

More Info

by Charles Veasey on Jun 8, 2012

The Maxwell - An Experiment in Viewing

The Maxwell Museum of Anthropology has recently opened a new exhibition, “An Experiment in Viewing,” which features an Ideum multitouch table with Open Exhibits Collection Viewer.

Curators Catherine Baudoin and Amy Grochowski selected a broad range of culturally and geographically diverse objects, along with photographs of people using similar objects in context. Visitors can view objects in multiple ways to compare and connect their place of origin, purpose, material composition and cultural context.

This exhibition will give the visitor an opportunity to reflect on an object’s meaning and its journey through place and time. The viewer can imagine the creative process, from idea and selection of materials to construction and completion of the piece. The multitouch table allows the visitor to view the physical object in a digital format where its materials and construction can be seen in detail.

The Maxwell Museum of Anthropology, founded in 1932, was the first public museum in Albuquerque, New Mexico. Its mission is to increase knowledge and understanding of the human cultural experience across space and time. The Maxwell is also an Open Exhibits partner.

More Info

by Jessica Gonzalez on Jun 5, 2012

Multitouch Table Exhibit with Audio Layer Prototype

As I mentioned in my previous post, Open Exhibits Lead Developer Charles Veasey and I attended a workshop at the Museum of Science in Boston this week that explored accessibility issues in computer-based exhibits. In the next few weeks, we will share a number of findings from the workshop, which was held as part of the NSF-sponsored Creating Museum Media for Everyone (CMME) project.

I want to start this process by sharing some of the findings from our breakout group, which, over the course of a day and a half, explored the challenges in creating audio descriptions for multitouch / multiuser exhibits. In particular, we looked at developing an assistive audio layer for a multitouch table exhibit.

Push Button Audio
For many years, kiosks have been made more accessible by adding audio descriptive layers, commonly activated by a push button. This feature has allowed blind, low-vision, and non-reading museum visitors to access content. At the Museum of Science in Boston, most kiosks have a standard set of buttons for descriptive audio with a “hearphone” (an audio handset), along with another set of buttons used for navigation.

An image of the button interface at a Museum of Science single-user kiosk

Our group discussed the possibility of developing a similar system as an adjunct element on the side of a multitouch table or near the installation. However, this approach would essentially require developing an additional stand-alone audio exhibit. More importantly, the experience visitors would have would be fundamentally different from interacting directly on the table itself.

On a large multitouch table, multiple visitors can interact simultaneously using physical multitouch gestures. The experience is both physical and social. By relegating a visitor to an audio button system, you are essentially isolating them from the more compelling qualities inherent in multitouch and multiuser exhibits.

Two Approaches
We looked at two possible approaches for integrating audio into a multitouch table. One involved using unique gestures to activate audio descriptions. The other approach involved the use of a fiducial device in the form of an “audio puck” to do the same.

Both approaches would make use of an introductory “station” and/or a familiar push-button “hearphone” to orient blind, low-vision, or non-reading users to the table and instruct them on how to either activate the audio via a unique gesture or use the “audio puck.” This portion of the experience would be brief, but important in helping the visitor understand how to use the audio descriptive layer.

You can see a PDF of our presentation here (114 KB).

In the gesture-based approach, we decided to implement a gesture that would probably not be activated accidentally by visitors. A three-finger drag was used to provide short audio descriptions of the objects and elements found on the table. A three-finger tap activated a lengthier audio description, and we discussed the possibility of a three-finger double-tap to activate yet another layer. Most visitors interact with digital objects using either two fingers or a whole hand, so there was less of a chance that a two-point or five-point gesture would trigger the audio accidentally.
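
For readers curious what declaring a dedicated three-finger gesture might involve, here is a rough, hypothetical GML sketch that only matches a three-point tap. The element and attribute names are assumptions, not the GML used in our prototype (described below); the point is simply that the match criteria can be constrained so common one-, two-, or five-point interactions never satisfy them.

<!-- Hypothetical GML sketch (illustrative only): a tap that requires exactly three
     touch points, intended to trigger a short audio description. -->
<Gesture id="audio_description_tap" type="tap">
    <match>
        <action>
            <initial>
                <!-- assumed: restrict the cluster to exactly three points -->
                <cluster point_number="3" point_number_min="3" point_number_max="3"/>
            </initial>
        </action>
    </match>
    <mapping>
        <update>
            <gesture_event type="tap"/>
        </update>
    </mapping>
</Gesture>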

A diagram of possible interaction on a multitouch table with an audio descriptive layer.

Since these are unique gestures, the orientation to the exhibit is very important: visitors need to know how to access the audio layer. While some standard audio-activation gestures have emerged on personal devices, they are not necessarily compatible with the interactions found in a multiuser space on a multitouch table. For example, our group member Dr. Cary Supalo, who is blind, uses an anchored two-finger swipe on his iPhone to activate audio (the thumb is stationary and the two fingers swipe). However, that type of gesture is similar to many others that are used to accomplish very different tasks. If we enabled this gesture on the table, visitors would likely trigger the audio descriptions inadvertently and/or not be able to interact in ways that are familiar to them.

As part of our group project, Charles programmed a prototype exhibit using the Open Exhibits Collection Viewer that allowed us to test the functionality of the three-finger gesture approach. This rapid prototype demonstrated how the audio descriptive layer might work.

Along with being able to test the interaction, the prototype software allowed us to split the audio into two separate zones. There are ongoing concerns about audio playback: Will the audio disrupt other visitors’ experience? How does the exhibit handle multiple simultaneous audio streams? By using directional audio, we hoped to mitigate these issues. Unfortunately, due to some audio problems in the rapid prototyping process, the results were inconclusive; we will need to develop another prototype to better understand the dynamics of sound separation in the table environment and whether two simultaneous audio streams can work effectively in a museum setting.

In our second scenario, we imagined using an “audio puck” that would work in a similar way to the gesture-activated approach. The group felt that in some ways this would be the preferred option. Audio would probably not be set off accidentally. A button on the fiducial would allow users to click to get more thorough audio descriptions. Overall, it is a simpler and more direct approach for visitors. Also, importantly, the pucks would be of assistance to visitors with limited mobility and/or dexterity. Finally, you would only have as many simultaneous audio streams as there are pucks on the table, so exhibit developers could easily limit the number of possible audio streams.

The drawbacks of this approach would be additional development time (in fiducial recognition and fabrication of the audio pucks) and limitations on some multitouch devices (such as IR overlays and projected capacitive screens). Also, whenever untethered physical objects appear in a busy museum environment, there is a chance they will get lost, stolen, or damaged.

Conclusion
Multitouch and multiuser exhibits such as multitouch tables and walls are becoming much more common in museums and appear to be here to stay. At the same time, very little research has been done on accessibility issues or on providing tools to make these types of exhibits more accessible to individuals with disabilities. We have a lot of work to do in this area, and this is just a start.

We are happy to report that most workshop attendees who saw the presentation and interacted with the rough prototype agreed that both approaches have potential. Obviously, more testing and development needs to happen, but we are now looking quite seriously at expanding our user testing and eventually integrating a descriptive audio feature into the Open Exhibits framework. This would then be available to all Open Exhibits developers and could be configured in a variety of ways. We hope that by making this feature available (along with guidelines for “getting started” with descriptive audio), the larger community might be able to make more progress. As always, we welcome your questions and feedback on this topic.

Acknowledgements and More About CMME
We had a fantastic group! Charles and I would like to acknowledge everyone who contributed: Michael Wall from the San Diego Natural History Museum, Cary Supalo from Independence Science, and a great group from the Museum of Science, Boston, that included Juli Goss, Betty Davidson, Emily Roose, Stephanie Lacovelli, and Matthew Charron.

You can learn more about the NSF-funded Creating Museum Media for Everyone (CMME) project on the Informal Science website. You can find a few pictures from the workshop on the Open Exhibits Flickr site (much more to come!). Also, take a look at the Independence Science Access Blog, which wrote an article on the workshop as well: Dr. Supalo Consults with Boston Museum of Science to Develop Accessible Museum Exhibits.

We will continue to share materials, resources, and information from the project here on the Open Exhibits website. We plan on developing an entire section of the site devoted to the CMME project and to issues concerning accessibility and universal design.  Many thanks to Principal Investigator Christine Reich, project manager Anna Lindgren-Streicher, and all of the amazing staff at the Museum of Science, Boston for organizing such an informative and exciting workshop.

More Info

by Jim Spadaccini on May 26, 2012