Blog
First prototype Z-OUT by Social Spaces
This is the first test with Open Exhibits for the upcoming Z-OUT exhibit at art center Z33 in Belgium.
We created a custom class that loads markers and their content dynamically via XML. We start by defining a marker at a specific latitude and longitude in Google Maps; after positioning the marker, we load its content by defining the needed tags, such as <youtube> ... </youtube> and <flickrTag> ... </flickrTag>, which attach all of the content to the marker.
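As a rough sketch of how such a class could work (the tag names, attribute names, and class structure below are assumptions for illustration, not the actual Social Spaces code), the XML and an ActionScript 3 loader might look something like this:

```actionscript
package
{
    import flash.events.Event;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    // Minimal sketch of a class that loads marker definitions from an XML
    // file. The element and attribute names (marker, lat, lon, youtube,
    // flickrTag) are assumptions, not the actual Z-OUT schema.
    //
    // Example markers.xml:
    //   <markers>
    //     <marker lat="50.93" lon="5.34">
    //       <youtube>http://www.youtube.com/watch?v=XXXXXXXX</youtube>
    //       <flickrTag>z33</flickrTag>
    //     </marker>
    //   </markers>
    public class MarkerLoader
    {
        public function load(url:String):void
        {
            var loader:URLLoader = new URLLoader();
            loader.addEventListener(Event.COMPLETE, onLoaded);
            loader.load(new URLRequest(url));
        }

        private function onLoaded(e:Event):void
        {
            var data:XML = new XML(URLLoader(e.target).data);
            for each (var node:XML in data.marker)
            {
                // Place a marker at the given latitude/longitude, then
                // attach the YouTube video and Flickr photos to it.
                addMarker(Number(node.@lat), Number(node.@lon),
                          String(node.youtube), String(node.flickrTag));
            }
        }

        private function addMarker(lat:Number, lon:Number,
                                   youtubeUrl:String, flickrTag:String):void
        {
            // Application-specific: create the Google Maps marker and load
            // the linked YouTube and Flickr content into its info window.
        }
    }
}
```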
Presenting at Blur Conference
In February, I'll be presenting at the Blur Conference in Orlando, Florida. My talk will focus on what we've learned from developing multitouch and multiuser installations over the last few years.
The Blur Conference focuses on the new ways in which people are interacting with computers. This is the first time the event has been held.
So what is Blur about? From the conference webpage…
It’s easy to forget that the computer mouse is over 45 years old.
What’s not as easy to forget is that we’re now collectively getting used to interacting with computers via means and interfaces that have moved way beyond the keyboard and the mouse — the iPhone and Wii being the most prominent examples.
The truth is that we stand on the verge of a major revolution in the models of Human Computer Interaction (HCI). A revolution that will fly right past academia and into a world of retail, medical, gaming, military, public event, sporting, personal and marketing applications.
From multi-touch to motion capture to spatial operating environments, over the next 10 years, everything we know about HCI will change.
Blur is the only conference that is exploring the line of interaction between computers and humans in a substantive, real-world and hands-on way.
I'll be presenting, "New Museum Experiences: Learning from Multitouch and Multiuser Installations" on February 22, 2011. I'll also be on a panel that same afternoon talking about Kinect and our Open Exhibits module. You can learn more about the Blur conference on their Website.
Interactive Tables in the Wild - A Study
When we launched our Canada's Arctic gallery in 2009, we wanted to learn from our two multi-touch tables. As near as we could tell, no one had done any studies on unattended tables in ultra-high-traffic galleries.
The InnoVis Group at the University of Calgary's Interactions Lab was very interested in our offer to come down and study visitor interactions with our tables. Using extensive video observation and surveys, they pulled together some really worthwhile insights.
In short, the tables met the needs of our visitors which means they definitely have a role in our plans for the future.
When we first rolled the tables out, we were often questioned about being too simplistic in our use of them. We were told people would just play with the pictures and that there wasn't enough information attached. But we designed for that level of engagement, knowing that the rest of the exhibit would deliver more in-depth information.
It was very gratifying to see that the tables supported our overall content design like we thought they would. We hoped that one would amplify the other and this turned out to be the case.
As the study shows, not everything was a success with the tables, but they are, overall, successful. The lessons learned from developing the tables with Ideum, from the study, and from our subsequent observations are incredibly valuable to us. I highly recommend a read of this study.
Who Was First With That Kinect Technology?
Hi, Open Exhibits team.
Take a look at the publication dates of these videos:
- http://www.youtube.com/watch?v=f5jfOJ3TwyI
- http://www.youtube.com/watch?v=_mLckWbA7nc
- http://www.youtube.com/watch?v=l8GSExpS6yU
- http://www.youtube.com/watch?v=JNvPv1eQIlM
Also see this patent link.
CARLOS ANZOLA
Controlling a Gigapixel Image With Kinect
We've recently released two new modules on Open Exhibits. The gigapixel viewer module allows Open Exhibits and GestureWorks users to plug any gigapixel image into our Flash application and drag and zoom it using multitouch inputs. We recently demo'd this app for the first time at CES 2011 and it was a big hit.
MT-Kinect, our other new module, allows users to interface with a Kinect to manipulate multitouch applications using gesturing (like Minority Report) rather than direct touches. We combined it with the gigapixel viewer to create an application that allows you to move and zoom by waving your arms. The Kinect software doesn't require Open Exhibits Core or GestureWorks to work. Watch us demo our Kinect-controlled gigapixel viewer in the video above.
So how does our application convert Kinect data into multitouch-compatible input that our gigapixel Flash application can read? We decided to write a DirectShow source filter, a virtualized webcam device that reads data from the drivers released by OpenKinect. After adjusting the depth data to amplify the edges (which optimizes this application for gestures from a single user centered in the Kinect's camera), we output a simple webcam feed. We route this feed to a vanilla installation of CCV (theoretically, other blob trackers should work), which runs various filters, finds the blobs, and outputs the data in whatever format we would like to consume (in our case, FLOSC). Ideum's gigapixel viewer can then read this input as though it came from any other multitouch device.
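On the Flash side, consuming the FLOSC output is essentially just connecting to the FLOSC server with an XMLSocket and reading the OSC messages it relays as XML. Here is a rough sketch of that consuming end; the host, port, OSC address pattern, and element/attribute names are assumptions based on a typical FLOSC/CCV setup, not our production code:

```actionscript
package
{
    import flash.events.DataEvent;
    import flash.net.XMLSocket;

    // Sketch of a Flash client that connects to a FLOSC server and reads
    // the blob data CCV sends as OSC-over-XML. Values below (port 3000,
    // /tuio/2Dcur, OSCPACKET/MESSAGE/ARGUMENT) are assumptions for
    // illustration.
    public class FloscTouchClient
    {
        private var socket:XMLSocket;

        public function FloscTouchClient(host:String = "127.0.0.1",
                                         port:int = 3000)
        {
            socket = new XMLSocket();
            socket.addEventListener(DataEvent.DATA, onData);
            socket.connect(host, port);
        }

        private function onData(e:DataEvent):void
        {
            var packet:XML = new XML(e.data);

            // Each MESSAGE node carries one OSC message; the TUIO cursor
            // "set" messages describe the tracked blobs.
            for each (var message:XML in packet.MESSAGE)
            {
                if (message.@NAME == "/tuio/2Dcur")
                {
                    var args:XMLList = message.ARGUMENT;
                    if (String(args[0].@VALUE) == "set")
                    {
                        var id:int = int(args[1].@VALUE);
                        var x:Number = Number(args[2].@VALUE); // normalized 0..1
                        var y:Number = Number(args[3].@VALUE); // normalized 0..1
                        updateTouch(id, x, y);
                    }
                }
            }
        }

        private function updateTouch(id:int, x:Number, y:Number):void
        {
            // Application-specific: feed the normalized point into the
            // viewer's drag/zoom gesture handling.
        }
    }
}
```

Because the Kinect data arrives through the same blob-tracking path as camera-based multitouch hardware, the viewer itself doesn't need to know whether a "touch" came from a finger on a table or a hand waved in front of the sensor.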
These modules are free to download and use; you just need to be an Open Exhibits member. The gigapixel viewer also requires that you have either Open Exhibits Core or GestureWorks software. Open Exhibits Core is available free to educational users. Commercial users can try GestureWorks free or purchase a license.
And if you're wondering about the stunning gigapixel image of El Capitán, it was taken by xRez Studio, who were nice enough to let us use it for this demo.