March 7, 2013 / carlispina

Liberact Workshop on interactive, gesture-based systems for library settings: Day 2

Welcome back for a summary of Day 2 of the Liberact Workshop (I wrote about Day 1 earlier this week). The second day had fewer speakers to allow time for attendees to gather in small groups to discuss what we had learned and brainstorm ways these technologies could be used in libraries, but it nevertheless included a number of impressive presentations.

First up was Alyssa Goodman, Professor of Astronomy at Harvard and a Research Associate at the Smithsonian Institution. She spoke about World Wide Telescope, an application that uses data about the universe to create an interactive visualization that can be used on a wide variety of devices. The World Wide Telescope supports touchscreen devices as well as gesture-based interaction through a Microsoft Kinect, which allows users to navigate through the universe by moving their arms without ever touching the device. Users can explore different views of the universe and galaxy, including isolating specific wavelengths of radiation or focusing in on specific features with everything else turned off to reduce distractions. The application can also be used with other large datasets, though Prof. Goodman noted that such instances are typically hosted at the home institution rather than by Harvard or Microsoft. You can try out several tours of the World Wide Telescope on the website.

The second speaker of the day was Andries van Dam, the Thomas J. Watson, Jr. University Professor of Technology and Education and Professor of Computer Science at Brown University. He discussed the Touch Art Gallery (TAG) and WorkTop, which were developed at Brown to present digitized versions of large pieces of art in new and interactive ways. The first example built on this platform is the Garibaldi Panorama visualization, which displays a panorama that could not otherwise be exhibited due to its immense size. While the visualization is based on a “tour,” users can pause the tour at any time to view more detail about the artwork or other materials included as annotations, and can then resume the tour with a single click. The tour-creation tool is organized around a timeline of the tour and uses WYSIWYG features so that users can build tours without any knowledge of computer programming. The video below shows an example of a tour as well as the backend authoring tool used to create it.

After Prof. van Dam, Chris Erdmann, the primary organizer of the event and Head Librarian at the Harvard-Smithsonian Center for Astrophysics’ John G. Wolbach Library, spoke about the Harvard Library Explorer, a tool in use at several Harvard libraries. The application displays high-resolution images on multitouch devices, letting users navigate through the images and zoom in on specific sections; it is currently used to display both artwork and images of astronomical features. After this brief introduction, Rong Tang, Associate Professor and Director of the Simmons GSLIS Usability Lab, discussed the usability tests that had been conducted on the Library Explorer and the data they revealed about how users interact with it. You can see a brief demonstration of the Library Explorer below.

Next, Paul Worster, Multimedia Librarian at Harvard’s Lamont Library, discussed the interactive exhibit created to accompany Going for Baroque, a digital exhibit of ornamental baroque maps. The exhibit can be viewed both in a standard web browser and on a multitouch device, making it a perfect option for institutions that want visitors to be able to view a digital exhibit on a multitouch device in their exhibit space while also allowing others to view the exhibit from home.

The final speaker of the day was Alice Thudt, a graduate student at Ludwig Maximilian University of Munich. She presented the Bohemian Bookshelf, a display that offers five different ways of visualizing a collection of books: publication date, cover color, author name, page count, and keywords. In the words of the project’s website, the goal of these visualizations is to “support serendipitous discoveries in the context of digital book collections.” You can read more about the project, including two publications that Alice has co-written on the topic, on the project website. The video below also shows the application in action.

The day ended with the University of Calgary offering to host a follow-up event in October to hear how others are using gesture-based technologies and to see how the universe of applications has progressed by then. For those interested in learning more, videos and documents from the event should be posted on the event website soon, and you can check out the Storify collection of tweets from the #Liberact hashtag. I look forward to hearing more about these tools as more libraries adopt them, and I would love to hear from anyone who is already using these sorts of applications in their libraries.
