Interaction between people and computers continues to rely heavily on 30-year-old technology, namely the mouse and keyboard. Touchscreens have only recently been added to this experience. With next-generation sensors, and software designed for a more natural and intuitive interface, communication between human and machine will improve beyond measure.
From the age of one, a human child is capable of declarative pointing and can use it to express interest or desire. From that moment on, this gesture and others like it are ingrained in our daily lives.
Gesture recognition is currently best suited to simple commands and pointing. Depending on the sensor solution, gestures are a suitable means of control both directly in front of the device and from a distance.
Our eyes can fixate on an item on the screen in milliseconds and can process a great deal of on-screen information in seconds. This makes gaze tracking one of the fastest, most intuitive, and least intrusive methods of detecting interest.
While remote eye trackers are still limited to the area in front of the screen, smart glasses technology can be used for eye tracking over a distance, or in a Virtual Reality (VR) or Augmented Reality (AR) environment.
The average person can process over 400 words per minute (wpm) and speak about 180 wpm, but can type only around 35 wpm. Voice commands are suitable not only for entering text but also for delivering complex commands.
Depending on the environment, it is also usable directly in front of the device, via a headset, or from across the room.
Regardless of the type of sensor first installed (whether a voice sensor, a 3D camera for gesture recognition, or an eye tracker), NUIA adds natural user interaction capabilities to your device. NUIA intelligently adapts interaction models to both single-sensor and multi-sensor solutions, allowing for ever more intuitive hybrid solutions.
Whenever a new sensor is added, the interaction possibilities and use cases simply increase. In a multi-sensor setup, NUIA not only treats the sensors as parallel inputs but also cross-checks inputs from the various sensors to increase the detection rate and decrease false positives.
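As a minimal sketch of what such cross-checking can look like, the snippet below confirms a target only when two different sensors agree on it within a short time window, or when a single sensor is highly confident. The event structure, field names, and thresholds are illustrative assumptions, not part of the NUIA API.

```python
from dataclasses import dataclass

# Hypothetical sensor event; names and fields are illustrative only.
@dataclass
class SensorEvent:
    sensor: str        # e.g. "gaze", "voice", "gesture"
    target: str        # UI element the event refers to
    timestamp: float   # seconds
    confidence: float  # 0.0 .. 1.0

def cross_check(events, window=0.5, threshold=0.9):
    """Confirm a target when two different sensors agree on it within
    `window` seconds, or when one sensor alone is highly confident."""
    confirmed = []
    for i, a in enumerate(events):
        if a.confidence >= threshold:
            confirmed.append(a.target)
            continue
        for b in events[i + 1:]:
            if (b.sensor != a.sensor and b.target == a.target
                    and abs(b.timestamp - a.timestamp) <= window):
                confirmed.append(a.target)
                break
    return confirmed

events = [
    SensorEvent("gaze", "email_3", 10.00, 0.7),
    SensorEvent("voice", "email_3", 10.20, 0.8),   # agrees with gaze -> confirmed
    SensorEvent("gesture", "desktop", 11.00, 0.5), # no corroboration -> dropped
]
print(cross_check(events))  # ['email_3']
```

Requiring corroboration for low-confidence events is one straightforward way to suppress false positives without discarding the extra reach that each additional sensor provides.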
Each type of sensor has different capabilities. Understanding the capabilities of each sensor, or the combined capabilities of several, is the first step for the task at hand. Designing the right interaction model is another challenge in creating a truly natural interaction experience for your device, workflow, or application.
Whether you want to enable a workflow with a given set of sensors, create a new sensor setup around an application or operating system, or simply have a target customer in mind and only a vague set of requirements, NUIA is the ideal platform for exploring the possibilities.
Natural user interfaces (NUI) allow us more freedom to move around while still staying in full control of our devices.
From the moment you walk into the office, you can tell your PC to open email, lean back, point to a particular email, and declare “That’s spam.” With your coffee in one hand, you can swipe with the other to return to the desktop and glance at your news feed, which automatically opens the morning’s stock index. A visit from a colleague prompts you to say “Wait for me” while standing up, and the voice recognition enters a standby mode, allowing you to discuss the local sports team without your computer reacting to your voice input.
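The “Wait for me” standby behaviour described above can be sketched as a tiny state machine: one phrase suspends voice input, another resumes it, and everything heard in between is ignored. The class, phrases, and return values below are assumptions for illustration, not NUIA’s actual command vocabulary.

```python
class VoiceControl:
    """Minimal sketch of a voice standby mode. The wake/standby
    phrases are hypothetical, not taken from any real product."""

    def __init__(self):
        self.standby = False

    def handle(self, phrase):
        if phrase == "wait for me":
            self.standby = True
            return "standby"
        if phrase == "i'm back":
            self.standby = False
            return "listening"
        if self.standby:
            return None  # ignore speech while in standby
        return f"execute: {phrase}"

vc = VoiceControl()
print(vc.handle("open email"))   # execute: open email
print(vc.handle("wait for me"))  # standby
print(vc.handle("go team"))      # None (conversation is ignored)
print(vc.handle("i'm back"))     # listening
```

The key design point is that standby is an explicit state rather than a muted microphone, so the system can still listen for the single phrase that ends it.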
All of these modes of interaction are possible, and inherently natural to us. Since the capabilities and working ranges of different sensors differ, each use case has to be designed with those sensor ranges in mind.
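One way to keep sensor ranges in mind is to filter the installed sensors by the user’s distance from the device. The distance thresholds below are rough assumptions for illustration, not specifications of NUIA or of any particular sensor.

```python
# Illustrative working ranges in metres; these values are assumptions.
SENSOR_RANGES = {
    "eye_tracker": 0.9,  # remote eye trackers work close to the screen
    "gesture":     3.0,  # 3D cameras typically cover a few metres
    "voice":       6.0,  # far-field microphones reach across a room
}

def available_modalities(distance_m, installed):
    """Return the installed sensors whose working range covers the user."""
    return [s for s in installed
            if s in SENSOR_RANGES and distance_m <= SENSOR_RANGES[s]]

print(available_modalities(0.6, ["eye_tracker", "gesture", "voice"]))
# ['eye_tracker', 'gesture', 'voice']
print(available_modalities(2.0, ["eye_tracker", "gesture", "voice"]))
# ['gesture', 'voice']
```

An interaction model built this way degrades gracefully: as the user moves away from the screen, gaze drops out first, then gesture, leaving voice as the long-range channel.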
NUIA already supports a wide range of popular productivity, media, and gaming applications, as well as websites and web applications. Never satisfied with the status quo, and always intrigued by new possibilities, 4tiitoo and NUIA are striving to add more.