In-air gesture development began in earnest in 2015, when Google launched “Project Soli”. Project Soli uses radar-based motion sensors to detect small hand movements and turn them into device controls. With recent approval from the FCC (Federal Communications Commission) to develop and deploy the Soli radar sensor, development is expected to really take off.
Under Project Soli, users and devices communicate through “virtual tools”: gestures that mimic everyday actions, such as turning a knob to adjust the volume. Hyundai Mobis’ concept autonomous driving vehicle, unveiled at the Consumer Electronics Show (CES) in January 2019, also featured in-air gesture technology, allowing drivers and passengers to adjust the air conditioning or music volume using finger gestures alone.
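To make the “virtual tool” idea concrete, here is a minimal sketch of how a detected knob-turn gesture might be mapped onto a volume control. The GestureEvent type, the event names, and the angle scaling are illustrative assumptions for this sketch, not Soli’s actual API.

```python
from dataclasses import dataclass


@dataclass
class GestureEvent:
    # Hypothetical gesture reading from a radar-style sensor (not the real Soli API).
    kind: str             # e.g. "dial_turn", "tap", "swipe"
    delta_degrees: float  # rotation detected since the last event


class VolumeControl:
    def __init__(self, level: float = 0.5):
        self.level = level  # 0.0 (mute) to 1.0 (max)

    def handle(self, event: GestureEvent) -> None:
        if event.kind == "dial_turn":
            # Treat a full 180-degree "knob" turn as spanning the whole volume range.
            self.level = min(1.0, max(0.0, self.level + event.delta_degrees / 180.0))


volume = VolumeControl()
volume.handle(GestureEvent(kind="dial_turn", delta_degrees=30.0))
print(f"Volume: {volume.level:.2f}")  # Volume: 0.67
```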
Where in-air gestures currently stand
Nevertheless, in-air gesture technology still has limitations, as most of the digital elements we encounter today remain confined to the screens of our smartphones and tablets. Further, most in-air gesture systems have yet to be coupled with sound or haptic systems that give users immediate feedback. Clearly, there is still a way to go before in-air gestures truly enter the mainstream.
English startup LITHO has devised a finger-worn controller to overcome this limitation. A user wearing the controller can interact with digital elements touch-free and receives immediate feedback as each in-air gesture is performed.
The progress of augmented reality (AR) is creating new possibilities for in-air gesture technology while also underscoring its necessity. We wait in anticipation for the user interface revolution that in-air gestures will bring about in the not-so-distant future.