Manufacturers now offer various remote-control options beyond simple handsets with buttons. Ian Calcutt looks at how gesture recognition is making waves.
The traditional remote may not be dead, but the way we control electronic devices is evolving. The main drivers are motion-sensing interfaces for games consoles and the rise of smartphones and tablets.
Home automation systems and brands such as Sonos are moving to mobile apps instead of making costly, bespoke touchscreen control devices.
Elsewhere, commonplace CE products can be controlled over a wireless network by tablets and phones. These include hi-fi receivers from the likes of Pioneer, Denon and Onkyo, and digital recorders such as Sky+ and TiVo.
It means that everyday items can be managed by touchscreens as well as, or instead of, pressing physical buttons. Touchscreens bring a potentially more intuitive form of control, mainly using finger swipes. Even this established field is advancing, with new companies such as Qeexo researching ways to make touchscreen devices recognise the difference between fingertips, knuckles, fingernails or a stylus.
Some apps also use the gyroscope and accelerometer inside mobiles to bring motion-sensitivity into the mix, similar to Nintendo’s Wii systems.
On the face of it
Another main form of gesture control is motion recognition via cameras, either built into a TV or supplied as a peripheral. A popular example is Kinect for Microsoft's Xbox 360 console, which combines a camera with sensors to measure changes in the 3D space around it.
Plenty of applications exist for this technology in video games but there has been a gradual spread of such features into TV interfaces.
Smart-TV platforms can make extensive use of a camera, such as Samsung's 7000, 8000 and 9000 series. The obvious feature is video calling through Skype, but such TVs can also use face recognition so that members of the household can store different preferences in their connected-TV profiles.
“Our TVs use face recognition for logging in,” Steve Mitchell, general manager of marketing for Samsung’s TV division, told IER. “We have Signature services such as Your Video, which will recommend content based on your viewing, so it’s important that’s tailored to the individual. You can log in using face recognition and then use the built-in camera and microphone to do gesture and voice control on the TV.”
The Samsung TVs contain voice recognition for certain basic commands. This can be done through the TV’s microphone, or for noisier environments, by speaking into a remote control. “You can say ‘Hi TV’, and up on the screen will come a list of available commands, which will be things like volume up/down, channel up/down,” explains Mitchell. “You can go into the web browser and use voice search in Google, for example, or once you’re in Google you can use gestures – use your hand as a mouse in effect – to point at an area of the screen that you want to activate.”
This breed of smart TVs will only recognise specific movements, so it shouldn’t respond unexpectedly to viewers’ impromptu gesticulations.
“It’s also used with our Fitness app,” adds Mitchell. “It can show you next to the trainer and monitor your movements to make sure you’re doing the exercises right.”
Other apps can be expected to employ similar features in the future now that Samsung has made its voice and gesture control technology available to its Smart TV app partners.
Toshiba uses a built-in camera for features such as energy saving, by detecting if the TV has been left on with no one watching. It responds by dimming the screen and eventually switching off, though if 2012’s version is anything to go by, it needs fine tuning, as merely sitting still for a few minutes made the TV think the room was unoccupied. Like many such features, it is somewhat gimmicky right now, but there is scope for improvement.
Microsoft is adapting Kinect technology for Windows 8, so it can be used for numerous applications. In the past, tech enthusiasts ‘hacked’ Kinect cameras to make 3D computer scans of objects and people, among other experiments. Kinect could be applied to home automation control, for example, taking it well beyond those old lamps that responded to hand claps.
Philips’ uWand motion sensing system is available under licence for other companies to integrate into hardware. The system aims to get around over-complicated remotes. Extra functions can be controlled by a new gesture instead of having to add buttons on the handset.
Meanwhile, Apple recently patented its own motion-detecting ‘wand-like’ remote control, which may be used for its Apple TV box or the brand’s rumoured entry into television sets.
Up and coming
There are a few touchscreen apps for managing music playback through simple finger movements, for example CarTunes, which is popular with users both in and out of vehicles.
In autumn 2012, Pioneer unveiled a new Raku Navi (‘easy navigation’) car system in Japan. It has a touchscreen interface along with an ‘air gesture sensor’ that accesses certain controls, such as switching between its sat-nav display and information on the current song. However, it still relies on touchscreen ‘soft keys’ for most functions.
The Leap Motion company has developed a 4-inch-long PC accessory that costs $70 (about £50 in the UK) and can track motion in 3D space. The Leap is said to be accurate down to 0.01mm and can distinguish between individual fingers, or detect whether you are holding something, such as a pencil. Its creators claim that it is 200 times more sensitive than Kinect, despite being smaller and cheaper, though Microsoft is expected to launch the upgraded Kinect 2 in 2013.
Another up-and-coming technology is eyeSight, which allows motion-based control to be added to any form of digital camera. It could be a smartphone or tablet’s front-facing lens, a webcam or one built into a smart TV. It is not hardware based, so its multi-platform software can be included in an app and downloaded onto suitable devices.
While conventional remote controls are fine for many purposes, as devices become increasingly complex, an important way to make technology more intuitive to operate will be a gesture-based interface. Don’t touch that dial!
Sign of the times
From Minority Report to Iron Man and Avatar, many of the concepts being developed for humans to control the devices around them have already been glimpsed in movies.
Researchers at Microsoft Research Cambridge and Newcastle University are developing a wrist-based sensor to track 3D hand movements to control any device by mapping finger movement and orientation. “The Digits sensor doesn’t rely on any external infrastructure so it is completely mobile,” Newcastle University PhD student David Kim said at a press conference in October 2012. “This means users are not bound to a fixed space. They can interact while moving from room to room or even running down the street. What Digits does is finally take 3D interaction outside the living room.”
Sweden’s Tobii Technology has an eye-tracking control system called iBeam, which uses an infrared LED and camera to triangulate a tablet user’s viewing angle and calculate where the cursor should be placed. It has collaborated with Fujitsu on a prototype tablet for the Japanese market. Further ahead, scientists are hoping to create devices to detect and interpret the small, naturally occurring electromagnetic field produced by the human body.
A related idea, seemingly plucked from science fiction but being worked on in R&D labs, is the ‘brain-computer interface’, or mind control. BCI systems can read the changing electrical signals in the brain through non-invasive sensors on the scalp and digitise them into a computer-readable form.