March 3, 2013 / carlispina

Liberact Workshop on interactive, gesture-based systems for library settings: Day 1

On February 28th and March 1st, I was fortunate to have the chance to attend the Liberact Workshop hosted by the Harvard Library. The official description of the event said:

“This workshop aims to bring librarians and developers together to discuss and brainstorm interactive, gesture-based systems for library settings. An array of gesture-based technologies will be demonstrated on the first day with presentations, brainstorming and discussions taking place on the second day. The workshop will be hosted by the Harvard Library in Cambridge, Massachusetts, and take place February 28 – March 1.”

The event brought together librarians, researchers, and representatives of companies that work in this field, among others, to discuss existing examples of gesture-based technology in libraries. The projects showed the wide range of options that these systems offer for libraries hoping to engage patrons for work, education and fun.

The first presenter of the day was Peter Der Manuelian, the Philip J. King Professor of Egyptology at Harvard, who delivered a talk called Navigating the Giza Pyramids: Archival Data and 3D Interfaces. The project, a partnership between several institutions that hold materials related to the Giza Pyramids, has two distinct parts. The first is The Giza Archive, a public website hosted by the MFA that includes a huge variety of materials related to the various excavation parties that have studied the Giza Pyramids, along with an aerial map view that lets users navigate the site geographically. Materials available on the site include images, maps, diagrams, published and unpublished writings, diaries and drawings, to name just a few. Users can create their own private or public collections of items from these materials. The site also offers resources for educators who want to integrate the materials into their curricula, as well as a current live view of Giza via webcam.

The second tool covered in the presentation is the Giza 3D site, created by Dassault Systèmes, which offers an impressive trip through the Giza Pyramids via a complete 3D replica of the site. Professor Der Manuelian discussed his use of Giza 3D in his classes, including the way he uses the fully 3D version with his students via a large screen and 3D glasses to completely immerse them in the Egypt of the period they are studying. The 3D site was a striking example of how these technologies can be used in a research and teaching setting to give researchers and students an immersive experience of a place or time. You can see a demonstration of The Giza Archive in the video below.

Next up was a talk on IntuiFace Presentation by IntuiLab. This tool allows users to build interactive exhibits for Windows 7 or Windows 8 touchscreen devices. The entire authoring interface is designed to make it easy for any user to create an exhibit using content they already have in their collection, whether images, videos, documents or 3D models; users can even start from an existing PowerPoint presentation. While the presentation focused primarily on touchscreen options, IntuiFace Presentation can also integrate with a Microsoft Kinect. The tool is focused on offering a fast and easy route to interactive touch exhibits, and it looked like a great option for anyone interested in creating content for a multitouch device without programming new software or struggling through a steep learning curve. The presenter from IntuiLab, Geoffrey Bessin, also offered a list of best practices for choosing touch-based tools and creating exhibits. You can see IntuiFace Presentation in action in the video demo below.

The third speaker of the day was Dave Brown from the Microsoft Technology Centre in the UK, who spoke about NUIverse. NUI is an acronym for Natural User Interface, and NUIverse is an example of how an NUI can be used to visualize a large dataset, in this case data about the universe, and make the visualization interactive in a way that users find intuitive. NUIverse is designed to be used from any angle, or even by multiple users at one time, so menus can be dragged in from any side of the device. On a multitouch device, the software allows users to interact with the image using a large range of gestures and motions that move far beyond the typical pinch to zoom or swipe to pan. You can watch a demo of NUIverse in action in the video below.
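Multitouch gesture handling of this kind comes down to geometry: the relative movement of two touch points encodes both a zoom factor and a rotation. The sketch below is a minimal illustration of that idea in Python; the function name and coordinates are invented for illustration and are not part of NUIverse or any Microsoft API.

```python
import math

def two_finger_transform(p1_start, p2_start, p1_end, p2_end):
    """Derive the scale (pinch/zoom) and rotation (in degrees) implied
    by two touch points moving from their start to end positions."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    # Scale is the ratio of the distances between the two fingers.
    scale = dist(p1_end, p2_end) / dist(p1_start, p2_start)
    # Rotation is the change in the angle of the line between them.
    rotation = math.degrees(angle(p1_end, p2_end) - angle(p1_start, p2_start))
    return scale, rotation

# Two fingers move apart while the line between them turns a quarter turn:
scale, rotation = two_finger_transform((0, 0), (1, 0), (0, 0), (0, 2))
# scale == 2.0, rotation == 90.0
```

A real gesture engine would also track translation and smooth the input across frames, but the core math is no more than this.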

After Dave, Matt Hickey spoke about how Windows 8 works with gesture-based computing. He demonstrated the new Windows 8 interface, which most readers have probably seen advertised in TV commercials or online. With this Windows 8 interface, Microsoft is trying to move towards a time when users can use a single device as both a touch tablet device and, when docked, as a desktop computer replacement. He also discussed how Windows 8 can be transported to any device via a USB drive. The demonstration showed that Windows 8 is very focused on offering users a personalized experience in a way that past computers typically weren’t.

Next, Orit Shaer, the Clare Boothe Luce Assistant Professor of Computer Science and Media Arts and Sciences at Wellesley College, discussed the G-nome Surfer and MoClo Planner. This talk offered an interesting perspective because the team at Wellesley has done usability testing that reveals a lot about how users interact with touch devices. The G-nome Surfer also demonstrates how touch devices can be deployed for teaching the biological sciences. You can read the team's publications on the G-nome Surfer website and see it in action below.

Josh Wall, the Director of the Advanced Technology Group, then discussed The Wonder of Light: Touch and Learn!, an exhibit at the Smithsonian that used a Microsoft Surface to engage children with topics related to light. The team created seven applications that drew on a wide variety of the Surface's capabilities, letting users build a fire by dragging logs onto a pile and then spinning a physical stick on top of the device, see deep-sea fish by pointing a flashlight at the Surface, or refract light by placing a prism on it. His presentation was very helpful in describing how the creative process for designing these applications worked and in explaining the types of physical items that can be used to interact with the Surface. He also discussed the overwhelmingly positive results the museum saw once the device was deployed, with users coming from all age ranges rather than just the original target audience of children under 12. The video below shows some of the applications they created for the Surface.
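Exhibits like this typically recognize physical objects through tags printed on their bases, with the application mapping each recognized tag to a behavior. The sketch below illustrates that dispatch idea in Python; the tag IDs and action names are hypothetical and are not taken from the Surface SDK or the Smithsonian exhibit's code.

```python
# Hypothetical mapping from recognized tag IDs to exhibit behaviors.
# A real Surface application would receive tag events with position
# and orientation data as well; this shows only the dispatch step.
TAG_ACTIONS = {
    0x01: "start_fire",      # physical stick spun on the table
    0x02: "refract_light",   # prism placed on the table
    0x03: "flashlight_view", # flashlight pointed at the display
}

def handle_tag_event(tag_id):
    """Return the exhibit behavior triggered by a recognized tag,
    or "ignore" for tags the exhibit does not know about."""
    return TAG_ACTIONS.get(tag_id, "ignore")

print(handle_tag_event(0x02))  # refract_light
print(handle_tag_event(0xFF))  # ignore
```

Keeping the mapping in one table makes it easy to add a new physical prop to the exhibit without touching the recognition code.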

The final talk of the day was given by Neil Roodyn of nsquared. He opened by talking about Corning's "A Day Made of Glass" video, which imagined a world filled with interactive glass. This is the world his company seeks to make possible, starting with a wide variety of applications for digital tables, ranging from business to education. nsquared offers several software options, including one that can be used as a building or campus map or a space-reservation system, but the two I thought were most interesting for libraries were the nsquared thoughts and nsquared presenter tools. The nsquared thoughts software allows users to collaborate on a single large device to draw, write, diagram and import images onto a shared canvas; completed canvases can be saved to USB drives or sent to collaborators. The nsquared presenter tool allows a group to work together to create and deliver a presentation on the multitouch surface, in conjunction with the mobile devices being used by members of the team. For multitouch devices that will always be used by a single team, nsquared can tie its software to smartphones and other mobile devices through tags that allow the devices to communicate with each other; for example, setting a smartphone on the multitouch device automatically transfers the smartphone's data to it. The company also offers Kinect-based applications, though these were not the main focus of its presentation at Liberact. You can watch nsquared software in action in the video below:

As this post suggests, the first day of the workshop was a busy one. I plan to publish a post on the second day of the workshop on Thursday, so check back then; if you want more information before that, check out the Storify collection of tweets from the #Liberact hashtag.

