Beyond Touch to Gesture-Based Control
In my last blog post I discussed the power of touch technology. As mentioned there, exciting technologies already on the market or coming soon could make touch obsolete in the long run.
At the Computer-Human Interaction (CHI) conference this year, Texas A&M University’s Interface Ecology Lab demonstrated ZeroTouch, a system that favors gestures over touch. ZeroTouch looks like an empty picture frame whose edges are lined with a total of 256 infrared sensors pointing toward the center. The frame is connected to a computer and the computer to a digital projector. When the plane of infrared light inside the frame is broken, the computer interprets the break and displays it as a brushstroke: your finger becomes a pencil, your arm a paint roller. The virtual canvas is just a proof of concept; ZeroTouch can also be layered over a traditional computer screen to turn it into a touchscreen.
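To make the beam-break idea concrete, here is a minimal sketch of how a frame like this could turn interrupted light into a position. It is an illustrative simplification, not the actual ZeroTouch implementation: it assumes sensors along two edges form a simple grid of horizontal and vertical beams, and that a fingertip sits at the intersection of the blocked ones.

```python
# Illustrative simplification of beam-break sensing (not the actual
# ZeroTouch firmware): sensors along two edges form a grid of
# horizontal and vertical infrared beams; a fingertip blocks some of
# each, and the intersection gives its 2D position in the frame.

def locate_touch(horizontal_beams, vertical_beams):
    """Each argument is a list of booleans: True = beam unbroken,
    False = beam blocked. Returns (x, y) of the blocked
    intersection, or None if nothing is breaking the light."""
    blocked_rows = [i for i, clear in enumerate(horizontal_beams) if not clear]
    blocked_cols = [j for j, clear in enumerate(vertical_beams) if not clear]
    if not blocked_rows or not blocked_cols:
        return None
    # Approximate a fingertip as the center of the blocked beams.
    y = sum(blocked_rows) / len(blocked_rows)
    x = sum(blocked_cols) / len(blocked_cols)
    return (x, y)

# A finger blocking row 3 and columns 4-5 of an 8x8 beam grid:
rows = [True] * 8
cols = [True] * 8
rows[3] = False
cols[4] = cols[5] = False
print(locate_touch(rows, cols))  # -> (4.5, 3.0)
```

A wider obstruction (an arm instead of a finger) blocks more beams, which is how a system like this can distinguish a "paint roller" stroke from a "pencil" stroke.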
The researchers feel that two-dimensional interaction is just the beginning: stacking layers of ZeroTouch could enable depth sensing, meaning the system could then sense 3D space. Microsoft Research is taking the 3D challenge head-on with HoloDesk, which lets users manipulate virtual 3D images with their hands. The team developing HoloDesk sees possible future applications in areas such as board gaming, rapid prototype design, or team collaboration, where users would share a single 3D scene viewed from different perspectives. HoloDesk isn’t the only 3D interaction experiment, but what differentiates it is the use of beam-splitters and a graphics-processing algorithm, which work together to provide a more lifelike experience.
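The stacked-layers idea can be sketched in a few lines. This is purely a thought experiment based on the researchers' suggestion, with a made-up layer spacing: several beam-break frames mounted at known heights give a coarse depth reading, and the deepest interrupted layer tells you how far a finger has reached into the sensing volume.

```python
# Sketch of the stacked-layers depth idea (an assumption, not a
# published ZeroTouch design): each stacked frame reports whether any
# of its beams are broken; the deepest broken layer gives a coarse
# penetration depth.

LAYER_SPACING_MM = 20  # hypothetical gap between stacked frames

def finger_depth(layers_blocked):
    """layers_blocked[i] is True if layer i (0 = outermost) reports
    a broken beam. Returns the penetration depth in millimetres, or
    None if no layer is interrupted."""
    depth = None
    for i, blocked in enumerate(layers_blocked):
        if blocked:
            depth = (i + 1) * LAYER_SPACING_MM
    return depth

# A finger pushed through the first two of four layers:
print(finger_depth([True, True, False, False]))  # -> 40
print(finger_depth([False] * 4))                 # -> None
```

Depth resolution here is limited by the layer spacing, which is why a dedicated depth camera like the Kinect remains the more practical route to full 3D sensing.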
HoloDesk leverages a hacked Kinect and a half-silvered mirror to ‘see’ your hands in 3D space. Another project tapping the Kinect for gesture-based control is Snowglobe.
The Snowglobe project is the brainchild of John Bolton of the Human Media Lab at Queen’s University. It pairs a large acrylic ball, with an image projected inside it by a 3D projector through a hole in the bottom, with two Microsoft Kinect sensors: as a user approaches and moves around the ball, the image inside follows them. By stretching out their hands, the user can control the orientation and size of the image. Since the image is cast by a 3D projector, wearing 3D glasses takes the experience to another level. According to Bolton, “If we nest an object inside we can present all 360 degrees of that object if somebody walks around the display. So opposed to just sitting there with a mouse you can walk around and you’re presented with the correct view as your position changes.”
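The "correct view as your position changes" behavior Bolton describes boils down to tracking the viewer's angle around the globe and rotating the rendered object to match. Here is a minimal sketch of that mapping; the function names and coordinate conventions are my own assumptions, not the Human Media Lab's code, and the viewer position stands in for what a depth camera like the Kinect would report.

```python
import math

# Illustrative sketch (not the Human Media Lab implementation): given
# the viewer's position in the horizontal plane, compute the yaw to
# apply to the projected object so its correct face tracks a viewer
# walking around the display.

def viewer_angle(viewer_x, viewer_z, globe_x=0.0, globe_z=0.0):
    """Angle of the viewer around the globe, in degrees, measured
    in the horizontal (x-z) plane."""
    return math.degrees(math.atan2(viewer_x - globe_x, viewer_z - globe_z))

def object_yaw(viewer_x, viewer_z):
    """Rotation for the rendered object so the side facing the
    viewer stays facing them as they circle the display."""
    return viewer_angle(viewer_x, viewer_z) % 360.0

# Viewer standing 2 m in front of the globe, then stepping around to
# its side a quarter of the way:
print(object_yaw(0.0, 2.0))  # directly in front -> 0.0 degrees
print(object_yaw(2.0, 0.0))  # quarter turn around -> 90.0 degrees
```

Two Kinects are presumably used so the viewer stays in at least one sensor's field of view for the full walk around the ball.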
As the ads for Microsoft Kinect state, “You are the Controller.” That’s right, we have the power!