Wednesday, March 10, 2010

Human-Centered Interaction with Documents

Andreas Dengel, Stefan Agne, Bertin Klein

Knowledge Management Lab, DFKI GmbH Kaiserslautern, Germany

{dengel,agne,klein}@dfki.de

Achim Ebert, Matthias Deller

Intelligent Visualization Lab, DFKI GmbH Kaiserslautern, Germany

{ebert,deller}@dfki.de


Comments:


Manoj’s Blog

Franck’s Blog



Summary:


Introduction:

This article presents a new user interface for organizing and visualizing documents in 3D. In the last decade, documents have ceased to be tangible objects. With this transition, some of their defining qualities, embodied in their physical layout, have also been lost.

A collection of documents can be viewed as an information space. A virtual environment that resembles a real space can be more readily interpreted by users without prior computer knowledge. Qualities such as a document's size and its relation to other documents, to name a few, can be represented visually. Additionally, the user experience can be more fun.

In this implementation, documents are presented as if arranged in a bookcase. A search can be invoked by a gesture. Preselected documents appear with greater detail and occupy a higher zoom level. Several viewing modes are available. Pulsation is used to draw attention to a document, and color, in the form of yellowing, illustrates age. The arrangement of documents also reflects their relevance to each other.


Interaction:

The most natural way to manipulate objects is with one's hands. Hands are used to grab, move, or otherwise manipulate objects. To minimize the cognitive load on the user, a gesture recognition engine that recognizes natural hand gestures is employed. It must be compatible with multiple devices and operable in multiple environments.


Hardware:

The P5 data glove was chosen for its low cost and integrated position tracking. The finger flexion data was fairly reliable, unlike the position data, which needed additional processing to be usable.
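The paper does not specify what that additional processing was; a minimal sketch of one plausible approach is an exponential moving average that damps the jitter in the glove's position readings. The function name and the smoothing factor are assumptions for illustration.

```python
def smooth_positions(samples, alpha=0.3):
    """Exponentially smooth a stream of (x, y, z) position samples.

    Lower alpha means heavier smoothing (more weight on past samples).
    This is a generic noise filter, not the paper's actual method.
    """
    smoothed = []
    prev = None
    for x, y, z in samples:
        if prev is None:
            prev = (x, y, z)  # first sample passes through unchanged
        else:
            prev = tuple(alpha * c + (1 - alpha) * p
                         for c, p in zip((x, y, z), prev))
        smoothed.append(prev)
    return smoothed
```

A filter like this trades a little latency for stability, which matters when the raw position data is too noisy to drive a pointer directly.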


Posture, Gesture Recognition & Learning:

Postures are learned by performing them and giving them a name. The system is intended to provide real-time functionality on an average PC without consuming too much processing power.
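Learning by performing could be as simple as storing the glove's flexion values under the given name and later classifying new samples by nearest stored template. This is a lightweight sketch consistent with the real-time goal; the class name, threshold, and distance metric are assumptions, not the paper's design.

```python
import math

class PostureLearner:
    """Store named finger-flexion templates and classify new samples
    by nearest template (cheap enough for real-time use)."""

    def __init__(self, threshold=0.25):
        self.templates = {}          # posture name -> flexion vector
        self.threshold = threshold   # max distance to accept a match

    def learn(self, name, flexion):
        """Record the flexion values of a performed posture under a name."""
        self.templates[name] = list(flexion)

    def classify(self, flexion):
        """Return the closest known posture, or None if nothing is close."""
        best_name, best_dist = None, float("inf")
        for name, template in self.templates.items():
            dist = math.dist(template, flexion)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= self.threshold else None
```

Returning None for out-of-range samples keeps unrecognized hand shapes from triggering spurious posture events.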


Recognition Process:

Recognition is a two-step process consisting of data acquisition and gesture management.
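The split between the two steps can be sketched as a loop that first reads raw glove data and then hands the classified posture to a gesture manager. All class and function names here are hypothetical stand-ins, not the paper's API.

```python
class FakeGlove:
    """Stand-in data source for illustration (assumption, not real hardware)."""
    def __init__(self, samples):
        self.samples = iter(samples)

    def read(self):
        return next(self.samples)

class GestureManager:
    """Collects the posture stream produced by the acquisition stage."""
    def __init__(self):
        self.postures = []

    def update(self, posture):
        self.postures.append(posture)

def recognition_step(glove, classify, manager):
    flexion = glove.read()             # step 1: data acquisition
    manager.update(classify(flexion))  # step 2: gesture management
```

Separating the stages this way lets the acquisition code change with the input device while the gesture management stays the same, which matches the multi-device goal stated earlier.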


Gesture Recognition:

Gestures are treated as a series of successive postures; in this way, dynamic gestures can be perceived. Posture change events are used to segment the end of one gesture and the beginning of the next.
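The posture-sequence idea can be sketched as grouping a stream of recognized postures into gestures, recording only the posture changes. Using a designated idle posture as the segmentation boundary is an assumption for illustration, not the paper's exact rule.

```python
def segment_gestures(posture_stream, idle="neutral"):
    """Group a stream of recognized postures into gestures.

    A gesture is a run of successive non-idle postures; returning to
    the idle posture ends one gesture and allows the next to begin.
    """
    gestures, current = [], []
    for posture in posture_stream:
        if posture == idle:
            if current:
                gestures.append(current)  # idle posture closes a gesture
                current = []
        elif not current or posture != current[-1]:
            current.append(posture)       # record posture changes only
    if current:
        gestures.append(current)          # flush a trailing gesture
    return gestures
```

Collapsing repeated postures means a gesture is described by its sequence of distinct postures rather than by how long each one was held.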


Implementation & Results:

A SeeReal C-I 3D display was used to present a stereo image to the user.

Semiotic gestures are used to communicate information, while ergotic gestures are used to manipulate one's surroundings. A calendar and a pin board allowed users to experiment with the interface. They were able to manipulate objects and replace existing gestures with new ones. Several users, from a range of backgrounds, were tested on moving and browsing through documents. After a short adaptation period with the glove, they were able to use naturalistic gestures successfully. However, leafing through lengthy documents proved difficult.


Discussion:

The loss of non-verbal information in the transition to intangible electronic media is frequently underestimated. The layout of documents was an element I had not thought of. The attempt to provide a more tangible environment, albeit a virtual one, in which to manipulate and store documents is logically very sound.

The user study was rather brief and not commensurate with the effort put into the implementation.

