An Architecture for Gesture-Based Control of Mobile Robots
Soshi Iba, Michael Vande Weghe, Christiaan J.J. Paredis, and Pradeep K. Khosla
The Robotics Institute
The Institute for Complex Engineered Systems
Carnegie Mellon University
Pittsburgh, PA 15213
Comments:
Summary:
Introduction:
This article presents a gesture-based method for controlling mobile robots. Hidden Markov Models (HMMs) are used to spot and recognize six gestures reliably, with a wait state used to distinguish gestures from non-gestures. The gestures are mapped onto two modes of operational control, global and local, which differ in the frame of reference in which commands are interpreted.
The current state of the art is based on iconic programming, a method by which information is communicated through human demonstration. In this way the programming burden is transferred from robot experts to task experts. The interface has to be intuitive and able to cope with potentially vague information. Examples of input modalities are vision, data gloves, and tactile sensing. The challenge is to interpret rather than mimic the input data, that is, to capture the user's intent.
System Description & Hardware:
Data is collected through a combination of a CyberGlove and a Polhemus 6-DOF position sensor. The mobile robot is tracked with a geolocation system that measures both position and orientation.
Hardware:
The P5 data glove was chosen for its low cost and integrated position tracking. The finger-flexion data was fairly reliable, unlike the position data, which needed additional processing to be usable.
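The paper's position-data processing is not described here; a minimal sketch of one plausible approach is exponential smoothing to suppress tracker jitter. Everything in it (the function name, the alpha gain) is illustrative, not from the paper.

# Hypothetical smoothing filter for the noisy P5 position data.
# alpha is an assumed tuning gain, not a value from the paper.
def smooth_positions(samples, alpha=0.2):
    """Exponentially smooth a sequence of (x, y, z) position samples."""
    smoothed, estimate = [], None
    for sample in samples:
        if estimate is None:
            estimate = list(sample)
        else:
            # Blend the new reading with the running estimate.
            estimate = [alpha * new + (1 - alpha) * old
                        for new, old in zip(sample, estimate)]
        smoothed.append(tuple(estimate))
    return smoothed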
Onboard sensors:
8 sonar sensors
7 IR obstacle detectors
A black & white camera with radio transmitter
Stereo microphones
Position encoders on the tread drive mechanisms
An on-board PC104-based i486 running Linux
The onboard system is responsible for motion and camera control.
A CyberRAVE client-server architecture manages the gesture spotter/interpreter and the geolocation system; a schematic of how the pieces connect is sketched below.
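Since the CyberRAVE interfaces are not detailed in this summary, the loop below is only a schematic of the data flow; every class and method name is a hypothetical placeholder, not the actual CyberRAVE API.

# Schematic control loop; all names are hypothetical placeholders.
def control_loop(glove, spotter, robot):
    while True:
        frame = glove.read()             # joint angles plus 6-DOF pose
        gesture = spotter.update(frame)  # None until a gesture is spotted
        if gesture is not None:
            robot.send_command(gesture)  # forwarded over the client-server link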
Gesture Recognition:
An HMM was chosen to take advantage of the temporal component of the gestures. Data is preprocessed in two stages. First, the 18-dimensional joint vector is reduced to a 10-dimensional feature vector, which is augmented with its first derivative to produce a 20-dimensional column vector. The second stage reduces this to a one-dimensional codeword through vector quantization. The codebook is trained offline with a vocabulary of 32 codewords. A sketch of this pipeline follows.
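A minimal sketch of the two preprocessing stages as described, assuming a linear projection (e.g., PCA) for the dimensionality reduction and a precomputed 32-entry codebook; the paper does not specify either here.

import numpy as np

def preprocess(joints_t, joints_prev, projection, codebook):
    """Map one 18-D joint sample to a codeword index.

    projection: assumed 10x18 reduction matrix (how the 10 features
                are chosen is not stated in this summary).
    codebook:   32x20 array of codeword centroids, trained offline.
    """
    feat = projection @ joints_t                    # 18-D -> 10-D features
    feat_prev = projection @ joints_prev
    vec = np.concatenate([feat, feat - feat_prev])  # append first derivative -> 20-D
    # Vector quantization: index of the nearest codeword.
    return int(np.argmin(np.linalg.norm(codebook - vec, axis=1)))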
Six Gestures:
OPENING: closed fist to flat hand.
OPENED: flat hand.
CLOSING: flat hand to closed fist.
POINTING: moving from a flat hand to a pointing index finger.
WAVING LEFT: fingers extended, waving to the left.
WAVING RIGHT: fingers extended, waving to the right.
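The spotting algorithm itself is not reproduced in this summary; the sketch below shows one standard formulation, scoring the codeword stream against each gesture HMM and a wait-state model, and reporting a gesture only when it beats the wait model. The log_likelihood interface is assumed, not taken from the paper.

def spot_gesture(codewords, gesture_hmms, wait_hmm):
    """Return the best-scoring gesture name, or None for a non-gesture.

    gesture_hmms: dict of gesture name -> model exposing
                  log_likelihood(sequence) (interface assumed).
    wait_hmm:     model of the resting 'wait' state used to
                  reject non-gestures.
    """
    best_name, best_score = None, wait_hmm.log_likelihood(codewords)
    for name, hmm in gesture_hmms.items():
        score = hmm.log_likelihood(codewords)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None means the wait model won: a non-gesture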
Local Robot Control:
Closing decelerates the robot.
Opening/Opened maintain the current speed.
Pointing accelerates the robot.
Waving Left/Right increases the rotational velocity in the corresponding direction.
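A minimal sketch of this local-mode mapping; the velocity increments dv and dw are illustrative, since the paper's gains are not given here.

def apply_local_gesture(gesture, v, w, dv=0.05, dw=0.1):
    """Update (linear v, angular w) velocity from a gesture in local mode."""
    if gesture == "CLOSING":
        v = max(0.0, v - dv)   # decelerate
    elif gesture == "POINTING":
        v += dv                # accelerate
    elif gesture == "WAVING_LEFT":
        w += dw                # rotate faster to the left
    elif gesture == "WAVING_RIGHT":
        w -= dw                # rotate faster to the right
    # OPENING / OPENED leave the current speed unchanged.
    return v, w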
Global Robot Control:
Closing decelerates and eventually stops the robot.
Opening/Opened maintain the current speed.
Pointing means "go there": the robot drives to the indicated location.
Waving Left/Right increases the rotational velocity in the corresponding direction.
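How the "go there" target is computed is not spelled out here; one common construction, sketched below, intersects the pointing ray from the tracked hand with the ground plane. This geometry is an assumption, not the paper's stated method.

def pointing_target(hand_pos, hand_dir):
    """Intersect the pointing ray with the ground plane z = 0.

    hand_pos: (x, y, z) of the hand from the 6-DOF tracker.
    hand_dir: unit vector along the index finger.
    Returns the (x, y) goal point, or None if pointing above the floor.
    """
    px, py, pz = hand_pos
    dx, dy, dz = hand_dir
    if dz >= 0:       # ray never reaches the floor
        return None
    t = -pz / dz      # ray parameter where z = 0
    return (px + t * dx, py + t * dy)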
Discussion:
The use of hand gestures to control a mobile robot takes advantage of the rich and natural vocabulary of the human hand. A particularly interesting feature of the implementation is the use of a wait state to segment gestures and discriminate between gestures and non-gestures. This reduced false-positive identifications by an order of magnitude compared to traditional HMM recognizers. Future research will extend the approach to multi-robot systems.