Monday, March 29, 2010

User-Defined Gestures for Surface Computing

Jacob O. Wobbrock

The Information School, DUB Group, University of Washington, Seattle, WA 98195 USA

wobbrock@u.washington.edu

Meredith Ringel Morris,

Andrew D. Wilson

Microsoft Research, One Microsoft Way, Redmond, WA 98052 USA

{merrie, awilson}@microsoft.com



Comments:


Frank's Blog



Summary:


The paper presents a gesture-based surface computing system that is reflective of user behavior. The system was developed by showing users the effect of a command (the referent) and then asking them to perform a gesture that would cause that effect. The study revealed that desktop idioms strongly influence users' mental models, with some commands eliciting little or no gesture agreement. A complete user-defined gesture set is presented. The results will hopefully help designers create better gesture sets.

The exploration of interactive tabletops has revealed a preference for multi-touch over traditional mouse-based input. Typically, surface gestures are predefined by the system designers. In contrast, the proposed approach allows users to express their own interpretations: users, rather than design principles, determine which gestures are chosen.


Eliciting Input from Users:

Participatory design is not a new concept; it has been used successfully in the development of many naturalistic gesture sets. Other examples include choosing speech commands by listening to verbal exchanges during similar collaborative tasks.


Developing a User-Defined Gesture Set:

Each participant was given a voice command and shown the effect of a gesture, such as a block moving across the screen, and was then invited to perform a gesture that would cause the effect. A think-aloud protocol was used in addition to videotaping, and a Wizard of Oz approach was employed, with particular attention given to the think-aloud data. The gestures in highest common use would ultimately be assigned to each task.


Procedure:

Each of the 20 participants was presented with 27 referents in random order. For each referent, the participant made a one-handed gesture followed by a two-handed gesture, and was then asked to rate the effectiveness and ease of the gesture before moving on to the next one. The gestures were classified according to form, nature, binding, and flow, with further subdivisions within each category.

Once the data had been collected, an agreement score was calculated for each referent to reflect the degree of consensus among participants.
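The idea behind the agreement score can be sketched as follows: for each referent, group identical gesture proposals, sum the squared fraction of participants in each group, and average over all referents. The gesture names and counts below are hypothetical, purely to illustrate the computation.

```python
from collections import Counter

def agreement_score(proposals_by_referent):
    """Average agreement across referents: for each referent, sum the
    squared fraction of participants proposing each identical gesture,
    then average the per-referent sums."""
    total = 0.0
    for proposals in proposals_by_referent.values():
        n = len(proposals)
        counts = Counter(proposals)
        total += sum((c / n) ** 2 for c in counts.values())
    return total / len(proposals_by_referent)

# Hypothetical data: 4 participants propose gestures for two referents.
proposals = {
    "move":   ["drag", "drag", "drag", "flick"],    # high consensus
    "reject": ["shake", "cross", "swipe", "throw"],  # no consensus
}
print(agreement_score(proposals))  # (0.625 + 0.25) / 2 = 0.4375
```

A referent where everyone proposes the same gesture scores 1.0, while one where every proposal is unique scores 1/n, so low scores flag the commands with little or no gesture agreement noted in the summary.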



Discussion:

Of the three experts directing the project, their individually generated gesture sets covered only 43.5% of the gesture set generated by the 20 participants; in fact, the union of all three individually generated sets still covered only 60.9%. A priori, it was by no means clear that the participants' gestures would form a coherent set. Additionally, a combination of widgets and gestures would be effective for the referents where imaginary widgets were used.
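Coverage here is just the fraction of the user-defined set that an expert's individually authored set also contains. A minimal sketch, using made-up gesture names rather than the paper's actual sets:

```python
def coverage(expert_set, user_set):
    """Fraction of the user-defined gesture set that also appears
    in an expert's individually authored set."""
    return len(expert_set & user_set) / len(user_set)

# Hypothetical gesture names, for illustration only.
user_defined = {"drag", "pinch", "tap", "hold",
                "flick", "rotate", "lasso", "double-tap"}
expert_a = {"drag", "pinch", "tap", "two-finger-scroll"}
print(coverage(expert_a, user_defined))  # 3/8 = 0.375
```

Taking the union of several expert sets before computing coverage gives the combined figure, which is why the three experts together still fell short of the participants' set.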


Evaluation:

One hand is preferred over two.

Surprising consistency in gesture selection.

The number of fingers used is not important.

The greater the conceptual complexity of the referent, the greater the planning time for the gesture.


Data:

1080 gestures

20 participants

22 out of 27 referents were assigned to gestures.

2 referents were combined.

4 were not assigned, as they conflicted with other, more primitive gestures or relied on imaginary widgets.

The resulting gesture set covers 57% of all proposed gestures.


Conceptual complexity, and therefore planning time, was inversely correlated with goodness.

Gesture articulation time did not affect goodness.



Discussion:

A very powerful approach to uncovering a more naturalistic gesture set.

The conception and execution of the study were particularly impressive.

However, I am surprised that the investigators were not able to come up with a more representative gesture set on their own, as the gestures described seem very familiar.

