Gesture Recognition

 

What Is Gesture Recognition?

Gesture recognition is a type of perceptual computing user interface that allows computers to capture and interpret human gestures as commands. Most consumers are familiar with the concept through Wii Fit and through Xbox and PlayStation games such as “Just Dance” and “Kinect Sports.”

Gestures can originate from any bodily motion or state, but they commonly come from the face or the hands. Users can apply simple gestures to control or interact with devices without physically touching them. Many approaches have been developed that use cameras and computer vision algorithms to interpret sign language, and the identification and recognition of posture, gait, proxemics, and human behaviors are also subjects of gesture recognition research. Gesture recognition allows humans to communicate with machines and interact naturally without any mechanical devices. Using gesture recognition, it is possible to point a finger at the computer screen and have the cursor move accordingly, which could make conventional input devices such as mice, keyboards, and even touch screens redundant.
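As a rough illustration of the finger-pointing idea, the sketch below maps a tracked index fingertip to the on-screen cursor. It assumes a webcam plus the third-party MediaPipe and PyAutoGUI packages; the direct position mapping and the mirrored-image correction are illustrative choices, not the only way to build such a pointer.

```python
# Minimal sketch: move the cursor with an index fingertip.
# Assumes a webcam plus the mediapipe and pyautogui packages;
# this illustrates the idea, it is not a production pointer driver.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalized 0..1.
        tip = results.multi_hand_landmarks[0].landmark[8]
        # The camera image is mirrored, so flip the x axis.
        pyautogui.moveTo((1.0 - tip.x) * screen_w, tip.y * screen_h)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```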

Gesture recognition, along with facial recognition, voice recognition, and eye tracking, is a component of what developers refer to as a perceptual user interface (PUI). The goal of a PUI is to enhance the efficiency and ease of use of the underlying logical design of a stored program, a design discipline called usability.

In personal computing, gestures are mostly used for input commands. Recognizing gestures as input makes computers more accessible to the physically impaired and makes interaction more natural in gaming or 3-D virtual-reality environments. Hand and body gestures can be sensed by a controller containing accelerometers and gyroscopes that detect tilting, rotation, and acceleration, or the computing device can be outfitted with a camera so that software on the device can recognize and interpret specific gestures. A wave of the hand, for example, might terminate the program.
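As a hedged sketch of the controller idea, the code below classifies a coarse tilt gesture from raw accelerometer readings. The `classify_tilt` helper and its threshold are invented for illustration and uncalibrated; a real controller would feed it values from its own sensor API.

```python
# Sketch: classify a controller tilt from raw accelerometer readings.
# The function name and threshold are illustrative, not calibrated values;
# a static tilt shows up as gravity components along the x and y axes.
def classify_tilt(x: float, y: float, z: float, threshold: float = 0.5) -> str:
    """Map dominant acceleration axes (in g) to a coarse tilt gesture."""
    if x > threshold:
        return "tilt_right"
    if x < -threshold:
        return "tilt_left"
    if y > threshold:
        return "tilt_forward"
    if y < -threshold:
        return "tilt_back"
    return "neutral"

print(classify_tilt(0.7, 0.1, 0.7))   # tilt_right
print(classify_tilt(0.0, 0.0, 1.0))   # neutral (controller held flat)
```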

 

How Does Gesture Recognition Work?

Gesture recognition is an alternative user interface for providing real-time information to a computer. Rather than typing with keys or tapping on a touch screen, a motion sensor perceives and interprets movements as the primary source of data input.

  1. A camera feeds image data into a sensing device connected to a computer. The sensing device usually uses an infrared sensor to calculate depth.
  2. Specially designed software identifies meaningful gestures using a predetermined gesture library in which each gesture is mapped to a computer command.
  3. The software matches each real-time gesture it registers against the library and interprets the matched gesture (a minimal sketch of this matching step follows the list).
  4. Once the gesture has been interpreted, the computer executes the command correlated with that gesture.
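To make steps 2 through 4 concrete, here is a minimal sketch that matches a captured gesture against a small predefined library and runs the associated command. The feature vectors, gesture names, and distance threshold are illustrative placeholders; a real system would use richer features and a trained matcher.

```python
# Sketch of steps 2-4: match a captured gesture against a predefined
# library and run the associated command.
import math

GESTURE_LIBRARY = {
    # gesture name -> (reference feature vector, command to execute)
    "wave":  ([0.9, 0.1, 0.4], lambda: print("terminating program")),
    "swipe": ([0.2, 0.8, 0.5], lambda: print("next page")),
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize_and_execute(features, max_distance=0.3):
    """Match the gesture to its nearest library entry (step 3),
    then execute the correlated command (step 4)."""
    name, (reference, command) = min(
        GESTURE_LIBRARY.items(),
        key=lambda item: euclidean(features, item[1][0]),
    )
    if euclidean(features, reference) <= max_distance:
        command()
    else:
        print("no gesture matched")

recognize_and_execute([0.85, 0.15, 0.42])  # closest to "wave"
```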

For instance, Kinect looks at a range of human characteristics to recognize commands based on natural human inputs. It provides skeletal and facial tracking in addition to gesture recognition and voice recognition, and in some cases it captures the depth and color of the background scene. Kinect reconstructs all of this information into printable three-dimensional (3D) models, and newer Kinect developments include an adaptive user interface that can detect a user’s height.
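To make the skeletal-tracking idea concrete, here is a minimal sketch that checks one simple gesture against skeleton data. The joint dictionary stands in for the kind of per-joint 3D positions a Kinect-style skeleton stream delivers; the joint names and coordinates are invented for illustration.

```python
# Sketch: detect a "hand raised" gesture from skeletal-tracking output.
# The dictionary mimics per-joint (x, y, z) positions from a
# Kinect-style skeleton stream; the data below is made up.
def hand_raised(skeleton: dict) -> bool:
    """True when the right hand is tracked above the head."""
    hand, head = skeleton["hand_right"], skeleton["head"]
    return hand[1] > head[1]  # the y axis points up in this sketch

skeleton = {"head": (0.0, 1.6, 2.0), "hand_right": (0.3, 1.9, 2.0)}
print(hand_raised(skeleton))  # True
```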

 

Who Makes Gesture Recognition Software?

Microsoft is leading the charge with Kinect, a gesture recognition platform that lets humans communicate with computers entirely through speaking and gesturing. Kinect gives computers “eyes, ears, and a brain.” There are a few other players in the space, such as SoftKinetic, GestureTek, PointGrab, eyeSight, and PrimeSense, an Israeli company acquired by Apple. Emerging technologies from companies like eyeSight go far beyond gaming, allowing a new level of fine motor precision and depth perception.

 

Gesture Recognition Examples Beyond Gaming


Gesture recognition has vast potential for creating interactive, engaging live experiences. Here are six gesture recognition examples that illustrate its potential to educate, simplify user experiences, and delight consumers.

  • In-Store Retail Engagement

Gesture recognition has the ability to deliver an exciting, seamless in-store experience. This example uses Kinect to create an engaging retail experience by immersing the shopper in relevant content, helping her try on products, and offering a game that lets the shopper earn a discount incentive.

  • Changing How We Interact With Traditional Computers

A company named Leap Motion introduced the Leap Motion Controller in 2013, a gesture-based interaction system for PC and Mac. A USB device roughly the size of a Swiss army knife, the controller lets users operate traditional computers through gesture control. It is easy to see the live-experience applications of this technology.

  • The Operating Room

Companies like Microsoft and Siemens are working together to redefine the way everyone from motorists to surgeons accomplishes highly sensitive tasks. These companies have focused on refining gesture recognition technology for fine motor manipulation of images, enabling a surgeon to virtually grasp and move an object on a monitor.

  • Windshield Wipers

Google and Ford are also reportedly working on a system that lets drivers control features such as air conditioning, windows, and windshield wipers with gestures. The Cadillac CUE system already recognizes gestures such as tap, flick, swipe, and spread to scroll lists and zoom in on maps.

  • Mobile Payments

Seeper, a London-based startup, has created a technology known as Seemove that goes beyond image and gesture recognition to object recognition. Ultimately, Seeper believes its system could enable people to manage personal media, such as photos or files, and even initiate online payments using gestures.

  • Sign Language Interpreter

There are many examples of gesture recognition being used to bridge the gap between deaf people and hearing people who may not know sign language. An example from Dani Martinez Capilla, showing how Kinect can understand and translate sign language, explores the idea of breaking down communication barriers using gesture recognition.

 

Gesture Recognition Features:

  • High recognition accuracy
  • High stability
  • Faster, touch-free device unlocking

 

Important application areas of gesture recognition include:

  • Automotive sector
  • Consumer electronics sector
  • Transit sector
  • Gaming sector
  • Smartphone unlocking
  • Defense
  • Home automation
  • Sign language interpretation

 

Classification Of Gestures

Gestures can be categorized into the following application-domain classes:

  • Pre-Emptive Gestures

A pre-emptive natural hand gesture occurs when the hand moves toward a particular control (a device or appliance) and detection of the approaching hand is used to pre-empt the operator’s intent to operate that control.

An example of such a gesture is the operation of an interior light: as the hand is detected approaching the light switch, the light switches on; if the hand is detected approaching the switch again, it switches off. The hand’s movement to and from the device being controlled thus serves as a pre-emptive gesture.
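A minimal sketch of this toggle logic follows, assuming a hypothetical proximity sensor that reports the hand’s distance from the switch; the threshold and readings are invented for illustration. Note the toggle fires once per approach rather than on every reading while the hand lingers.

```python
# Sketch of a pre-emptive gesture: toggle a light once per hand approach.
# The threshold and the simulated sensor readings are illustrative.
APPROACH_THRESHOLD_CM = 10.0
light_on = False
hand_near = False

def on_sensor_update(distance_cm: float) -> None:
    """Toggle the light once per approach (edge-triggered, not level-triggered)."""
    global light_on, hand_near
    near = distance_cm < APPROACH_THRESHOLD_CM
    if near and not hand_near:  # a new approach pre-empts the operator's intent
        light_on = not light_on
        print("light", "on" if light_on else "off")
    hand_near = near

# Simulated proximity readings: approach, linger, withdraw, approach again.
for d in (40.0, 8.0, 6.0, 50.0, 7.0):
    on_sensor_update(d)  # -> "light on", then "light off"
```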

  • Function Associated Gestures

Function-associated gestures are those that use the natural motion of the arm, hand, or another body part to associate with, or provide a cognitive link to, the function being controlled.

For example, moving the arm in circles pivoted about the elbow, directed toward the fan, could signify that the operator wants to switch the fan on. Such gestures have an action that can be associated with a specific function.
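As a sketch of how such a circular motion might be detected, the function below sums the angle a tracked hand path sweeps around its centroid and reports a circle once roughly a full turn has been completed. The path format and the completeness threshold are illustrative assumptions.

```python
# Sketch of a function-associated gesture: decide whether a tracked hand
# path forms a circle (e.g. "switch on the fan").
import math

def is_circular(path, min_turn=2 * math.pi * 0.9):
    """True when the (x, y) path sweeps most of a full turn around its centroid."""
    cx = sum(x for x, _ in path) / len(path)
    cy = sum(y for _, y in path) / len(path)
    angles = [math.atan2(y - cy, x - cx) for x, y in path]
    total = 0.0
    for a, b in zip(angles, angles[1:]):
        step = b - a
        # Unwrap jumps across the -pi/pi boundary.
        if step > math.pi:
            step -= 2 * math.pi
        elif step < -math.pi:
            step += 2 * math.pi
        total += step
    return abs(total) >= min_turn

# A sampled circle sweeps ~2*pi around its centre.
circle = [(math.cos(t / 10), math.sin(t / 10)) for t in range(63)]
print(is_circular(circle))  # True
```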

  • Context Sensitive Gestures

A natural hand gesture is used to respond to operator prompts or automatic events. Possible context-sensitive gestures to indicate yes/no or accept/reject might be a thumbs-up and a thumbs-down; these could be used to answer or reject an incoming phone call, voice message, or SMS text message.
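Here is a minimal sketch of this context sensitivity, with invented gesture names: the same thumbs-up or thumbs-down is meaningful only while a prompt, such as an incoming call, is active.

```python
# Sketch: a gesture only has meaning in the context of an active prompt.
from typing import Optional

def handle_gesture(gesture: str, active_prompt: Optional[str]) -> str:
    """Interpret a gesture relative to the current prompt, if any."""
    if active_prompt is None:
        return "ignored"  # no context: the gesture carries no meaning
    if gesture == "thumbs_up":
        return f"accept {active_prompt}"
    if gesture == "thumbs_down":
        return f"reject {active_prompt}"
    return "ignored"

print(handle_gesture("thumbs_up", "incoming call"))  # accept incoming call
print(handle_gesture("thumbs_up", None))             # ignored
```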

  • Global Shortcut Gestures

Global shortcut gestures are natural symbolic gestures that may be used at any time; here “natural” refers to hand gestures commonly used in human-to-human communication. The expectation is that gestures will be chosen so that the user can easily link each gesture to the function it controls. Possible applications include frequently used controls that would otherwise impose an unwanted visual workload, such as dialing home or dialing work on a phone.

  • Natural Dialogue Gestures

Natural dialogue hand gestures use gestures from human-to-human communication to initiate a gesture dialogue with the system, for example a vehicle. Usually this involves two gestures, although only one gesture is made at any given time.

For example, if a person fanned a hand in front of his or her face, the gesture system might detect this and interpret that the person is too hot and would like to cool down.
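The sketch below models such a dialogue as a small state machine: a first gesture opens the dialogue and a second completes or cancels it, one gesture at a time. The gesture names, prompt, and resulting action are invented for illustration.

```python
# Sketch of a natural dialogue gesture: two gestures, one at a time.
class GestureDialogue:
    def __init__(self):
        self.pending = None  # first gesture of the dialogue, if any

    def on_gesture(self, gesture: str) -> str:
        if self.pending is None:
            if gesture == "fan_face":  # "I'm too hot"
                self.pending = gesture
                return "prompt: lower temperature?"
            return "ignored"
        first, self.pending = self.pending, None
        if first == "fan_face" and gesture == "thumbs_up":
            return "action: turn cooling up"
        return "cancelled"

d = GestureDialogue()
print(d.on_gesture("fan_face"))   # prompt: lower temperature?
print(d.on_gesture("thumbs_up"))  # action: turn cooling up
```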

 

 
