Technology has finally reached the point where our hands can take over from remote controls by communicating directly with the computer or television. For instance, to delete a folder or file from the computer, you could place your palm on it and throw it away like a piece of paper into a dustbin. Even while baking a cake in a microwave oven, waving your hands in the air like a magician could serve as a command to the oven.
While some of us might think of this as a futuristic vision, others have already experienced it through what we call “Gesture Recognition Technology”.
GESTURE SENSING TECHNOLOGIES
Gesture recognition is the process by which gestures made by the user are made known to the system. It can also be described as the mathematical interpretation of a human motion by a computing device. The various types of gesture recognition technologies currently in use are:
· Contact type
It involves touch-based gestures using a touch pad or a touch screen. Gesture recognition here is achieved by sensing physical contact on a conventional touch pad or touch screen. Touch pads and touch screens are primarily used for controlling cursors on PCs and mobile phones, and are gaining user acceptance in point-of-sale terminals, PDAs, and various industrial and automotive applications as well. Touch-based gesture technologies are relatively easy for the public to accept because they preserve a physical user interface.
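As an illustrative sketch of how a touch-based gesture might be interpreted, the function below classifies a swipe from raw contact points. The coordinate format and thresholds are assumptions for illustration, not any particular device's API.

```python
# Hypothetical sketch: classifying a touch-pad gesture from the raw contact
# points sampled between touch-down and touch-up. Thresholds are assumed.

def classify_swipe(points, min_dist=30):
    """points: list of (x, y) contact samples; returns a gesture name."""
    if len(points) < 2:
        return "tap"
    dx = points[-1][0] - points[0][0]   # net horizontal travel
    dy = points[-1][1] - points[0][1]   # net vertical travel
    if max(abs(dx), abs(dy)) < min_dist:
        return "tap"                    # too short to count as a swipe
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"
```

A real driver would also consider timing and multi-finger contact, but the core idea of mapping contact geometry to a named gesture is the same.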
· Non-Contact
· Device Gesture Technologies
Device-based techniques use a glove, stylus, or other position tracker, whose movements send signals that the system uses to identify the gesture.
One of the commonly employed techniques for gesture recognition is to instrument the hand with a glove; the glove is equipped with a variety of sensors that provide information about hand position, orientation, and the flex of the fingers. The first commercial hand tracker, the DataGlove, used thin fiber-optic cables running down the back of each finger, each with a small crack in it. Light is shone down the cable, so when the fingers are bent, light leaks out through the cracks. Measuring the light loss gives an accurate reading of the hand pose. A similar technique is used in the wearable suits employed in virtual-environment applications. Though gloves provide accurate measurements of hand shape, they are cumbersome to wear and tether the user with wires.
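The light-loss idea can be sketched in a few lines. The linear calibration from loss fraction to joint angle below is an assumption for illustration, not the real DataGlove's calibration curve.

```python
# Illustrative sketch of the DataGlove principle: the fraction of light lost
# through the cracked fiber rises as the finger bends. The linear mapping to
# an angle is an assumed calibration, not the real device's.

def bend_angle(light_out, light_in, max_angle=90.0):
    """Map a measured light loss to an approximate joint angle in degrees."""
    loss = 1.0 - (light_out / light_in)   # 0.0 = straight, 1.0 = fully bent
    loss = min(max(loss, 0.0), 1.0)       # clamp readings outside the range
    return loss * max_angle
```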
Various other systems for intrusive hand gesture recognition are reported in the literature. Some use a bend sensor on the index finger, an acceleration sensor on the hand, and a micro-switch for activation.
Styli are interfaced with display technologies to record and interpret gestures like the writing of text.
To reduce the physical restriction caused by cables, an alternative technique is to wear an ultrasonic emitter on the index finger, with a receiver capable of tracking the emitter's position mounted on a head-mounted device (HMD).
To avoid placing sensors on the hand and fingers, the "Gesture Wrist" uses capacitive sensors on a wristband to differentiate between two gestures (fist and point). Wearing a glove or suit is clearly not a practical proposition for many applications, such as automotive ones.
· Vision-based Technologies
There are two approaches to vision-based gesture recognition:
Model based techniques:
These try to create a three-dimensional model of the user's hand and use this for recognition. Some systems track gesture movements through a set of critical positions: when a gesture moves through the same critical positions as a stored gesture, the system recognizes it. Other systems track the body part being moved, compute the nature of the motion, and then determine the gesture, generally by applying statistical modelling to a set of movements.
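The critical-position idea can be sketched as template matching: a gesture is stored as a sequence of key hand positions, and an observed track is assigned to the stored template with the smallest total distance. The templates and the Euclidean metric here are illustrative assumptions.

```python
# Sketch of matching a tracked gesture against stored critical positions.
# Template contents and the distance metric are assumptions for illustration.
import math

TEMPLATES = {
    "wave": [(0, 0), (10, 5), (0, 10)],   # hypothetical key positions
    "push": [(0, 0), (0, 10), (0, 20)],
}

def total_distance(track, template):
    """Sum of Euclidean distances between corresponding positions."""
    return sum(math.dist(p, q) for p, q in zip(track, template))

def recognize(track):
    """Return the name of the closest stored gesture template."""
    return min(TEMPLATES, key=lambda name: total_distance(track, TEMPLATES[name]))
```

Real systems replace this nearest-template step with statistical models that tolerate timing and scale variation, but the structure, a track compared against stored gesture descriptions, is the same.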
Image based methods:
Image-based techniques detect a gesture by capturing pictures of a user’s motions during the course of a gesture. The system sends these images to computer-vision software, which tracks them and identifies the gesture.
These methods typically extract flesh tones from the background to find the hands, and then try to extract features such as fingertips, hand edges, or gross hand geometry for use in gesture recognition.
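A common rule-of-thumb flesh-tone test on RGB pixels can illustrate the first step. The exact thresholds below are a widely used heuristic for daylight conditions, not a standard, and real systems work in more robust color spaces.

```python
# Heuristic flesh-tone classifier for RGB pixels; thresholds are a common
# rule of thumb, not a standard. Real systems refine this considerably.

def is_skin(r, g, b):
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def find_hand_pixels(image):
    """image: 2-D list of (r, g, b) tuples -> list of (row, col) skin pixels."""
    return [(y, x)
            for y, row in enumerate(image)
            for x, (r, g, b) in enumerate(row)
            if is_skin(r, g, b)]
```

The resulting pixel mask is what a later stage would analyze for fingertips and hand edges.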
· Electrical Field Sensing
The proximity of a human body or body part can be measured by sensing electric fields; the term refers to a family of non-contact measurements of the human body made with slowly varying electric fields. These measurements can give the distance of a human hand or other body part from an object, which enables a vast range of applications across many industries.
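As a hedged sketch of the distance step: in a simple parallel-plate model, the measured capacitance is inversely proportional to the separation (C = k / d), so with a calibration constant the distance follows by inversion. The constant and threshold below are made up for illustration.

```python
# Simplified electric-field sensing model: capacitance C = k / d, so
# d = k / C. The calibration constant k and threshold are assumptions.

def estimate_distance(capacitance, k=2.0):
    """Return estimated distance in the unit used to calibrate k."""
    if capacitance <= 0:
        raise ValueError("capacitance must be positive")
    return k / capacitance

def hand_is_near(capacitance, threshold_distance=0.05, k=2.0):
    """True when the estimated distance falls below the threshold."""
    return estimate_distance(capacitance, k) < threshold_distance
```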
Working of Gesture Recognition Technology
Gesture technology follows a few basic states to make the machine perform in the most optimized manner. These are:
1. Wait: In this state, the machine is waiting for the user to perform a gesture and provide an input to it.
2. Collect: After the gesture is performed, the machine gathers the information it conveys.
3. Manipulate: In this state, the system has gathered enough data from the user and processes it to determine which gesture was performed.
4. Execute: In this state, the system performs the task that the user has asked it to do through the gesture.
Devices that work on this technology usually follow these stages but their duration might vary from machine to machine depending on its configuration and the task it is supposed to do.
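The four stages above can be sketched as a tiny state machine. The gesture names, the stand-in classification logic, and the command table are illustrative assumptions.

```python
# Minimal sketch of the Wait -> Collect -> Manipulate -> Execute cycle.
# Gesture names and the classification rule are stand-ins for illustration.

class GestureMachine:
    def __init__(self, commands):
        self.state = "wait"          # 1. Wait: idle until motion arrives
        self.samples = []
        self.commands = commands     # gesture name -> action callable

    def on_motion(self, sample):
        self.state = "collect"       # 2. Collect: gather motion samples
        self.samples.append(sample)

    def on_motion_end(self):
        self.state = "manipulate"    # 3. Manipulate: interpret the samples
        gesture = "swipe" if len(self.samples) > 1 else "tap"  # stand-in logic
        self.state = "execute"       # 4. Execute: run the requested task
        result = self.commands[gesture]()
        self.samples.clear()
        self.state = "wait"          # back to waiting for the next gesture
        return result
```

Usage might look like wiring `on_motion` to a sensor callback and letting `on_motion_end` dispatch the recognized command.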
A basic working of the gesture recognition system can be understood from the following figure:
Applications of Gesture Recognition Technology
While the initial motivation for gesture recognition technology was only to improve human-computer interaction, it found plenty of applications as computer usage became widespread. Currently, gesture recognition technology is applied in the following areas:
· In Video Game Controllers: With the arrival of seventh-generation video game consoles such as the Microsoft Xbox 360 with the Kinect sensor and the Sony PS3 with its motion-sensing controller, gesture recognition was widely implemented. With Kinect, the user effectively is the controller and has to perform the physical movements they want the in-game character to make. For instance, a player has to imitate kicking a football when playing football on the console. The Kinect sensor's camera captures the motion and processes it so that the character reproduces it exactly.
With the Sony PS3, users move the controller so that it imitates the action they want the in-game character to perform.
I have already put up an article on the Xbox 360 on my blog.
· Aid to the physically challenged: People who are visually impaired or have other impairments of their motor functions can use gesture-based input devices to access computers without discomfort. Motorized wheelchairs are also now coming with gesture-based systems. All that is required of the user is to lightly move a hand over the panel on the wheelchair's armrest; the hand movements act as a controller, so both speed and direction can be easily controlled.
Shown below is a typical example of a gesture-controlled wheelchair.
· Other Applications: Gesture recognition technology is gaining popularity in almost every area that uses smart machines. In air traffic control, it can help convey detailed location information about airplanes near the airport. In cranes, it can replace remote controls so that loads can be easily picked up and set down at difficult locations.
Smart TVs now come with this technology, freeing the user from the remote and letting them change the channel or volume with their hands. Qualcomm has recently launched smart cameras and tablet computers based on this technology. The camera recognizes the proximity of the subject before taking the picture and adjusts itself accordingly. Tablet computers with this technology ease tasks such as giving presentations or changing songs on a jukebox: the user can browse the content just by waving a hand. Various touch-screen smartphones are also incorporating this technology to provide easy access. Gesture recognition can also be used to let robots understand human gestures and act accordingly.
GESTURE RECOGNITION CHALLENGES
1. Latency
One of the key challenges in gesture recognition is that image processing can be significantly slow, creating unacceptable latency for video games and other similar applications.
2. Lack of Gesture Language
Since there is no common gesture language, different users make gestures differently. If users gesture however they see fit, gesture recognition systems will certainly have difficulty identifying the motions with the probabilistic methods currently in use.
3. Robustness
Many gesture recognition systems do not read motions accurately or optimally due to factors like insufficient background lighting, high background noise, etc.
4. Performance
The image processing involved in gesture recognition is quite resource-intensive, and applications may be difficult to run on resource-constrained devices such as PDAs.
The rate of user acceptance of gesture recognition systems will be driven by how fast and how widely gesture recognition becomes established in our everyday lives, including the other environments in which humans interact with machines: the office, the home, banking, gaming, and other leisure activities.
During the next few years, gesture recognition is most likely to be used primarily in niche applications, because making mainstream applications work with gesture recognition technology will take more effort than it's worth.