Xbox One Kinect developers launch new company ‘to introduce Fluid Experience Technology’

A group of developers who helped create "key elements" of the Xbox One Kinect has launched a new company, Aquifi, to introduce "Fluid Experience Technology."

The new, software-only platform, whose name is the company's own coinage, "will yield new economics and usability in the perceptual computing and gesture-control markets, by allowing a new style of 'adaptive' interface to be created." The term "implies that barriers are removed between human and machine, with the latter interpreting a user's movements and gestures, and initiating intuitive actions in response," the company says.

“Within the next decade, machines will respond to us and our needs through intuitive interpretation of our actions, movements, and gestures,” said Aquifi CEO Nazim Kareemi. “Our fluid experience platform represents the next generation in natural interfaces, and will enable adaptive interfaces to become ubiquitous, thanks to our technology’s breakthrough economics.”

I know what you're wondering — how does this differ from the Kinect or other motion-based technology? Glad you asked, because Aquifi has presented a simple-to-understand bulleted list of features:

  • It can interpret user movements over a wide area, as opposed to shallow or narrow “interaction zones”;
  • It can interpret far more than simple hand gestures or gross body positions: for example, the 3D position of a user's face, or even whose face is in view (facial fingerprinting);
  • It can adapt its response based upon machine learning;
  • It is a software-only solution that can use inexpensive, commodity imaging sensors, rather than specialized chips or other expensive hardware;
  • It can be used across a full spectrum of machines to simplify both the developer’s and user’s experience.

Aquifi also notes that because the technology works with inexpensive, commodity image sensors, it will be possible for smartphones, tablets, PCs, and other machines to have interfaces that adjust automatically to their users. And as developers get more hands-on time with the technology over the next six months, new applications may emerge, including augmented reality that uses a smartphone for 3D object scanning and room mapping, or safer in-car systems that add voice feedback so drivers don't have to look at a screen.