Making ‘Over the Network’ better than ‘Over the Desk’
In 1998, when we were building the original Cisco IP Phones, we spent a lot of time talking about ‘better than’ features that would improve the voice communication experience over the standard PSTN/PBX voice model. At the time, we ran into an established hardware chain that didn’t support wideband handsets, DSPs without G.722 support, and so on. We are at the same point in Networked Virtual Environments today, with a few caveats.

When you communicate or collaborate over the desk with someone, you benefit from four of your five senses. You can see them, hear them, smell them, and shake their hand. Generally, I try not to use my sense of taste in meetings, unless it’s sampling the local coffee.

When you have an ‘over the wire’ or ‘over the network’ interaction, you use fewer senses. Let’s summarize the pros and cons for now, and some possible technology opportunities to address the shortcomings:

Sight- You can still see the other party (albeit as they want to be perceived, not as they physically may appear), but not their body language. This is a big disadvantage, as we have all read the studies that say how much non-verbal behavior contributes to person-to-person interactions. This is a feature that Telepresence offers (high-quality video, visibility of the other party’s body language) that is better than NVEs.

What can we do to address this? Well, we can intrusively interject biometrics/affective sensors to determine mood or disposition and have that trigger animation overrides, wire up algorithms to your webcam to mirror facial cues, and develop pleasing animations triggered by your force of impact on the keyboard and the amplitude and pacing of your voice (as is done today in call centers to detect angry customers). That’d be a good start, but obviously nowhere near as ‘signal rich’ as an over-the-desk interaction. There is still some opportunity to leverage 3D displays when those technologies mature.

Touch- Keyboard and mice.
Until we get better sensors and force-feedback gloves, we have the industrial-age keyboard and the same old mouse. I have a drawer full of nifty I/O devices that I hope will one day supplant the keyboard/mouse duo, but applications and user interfaces are a direct byproduct of the I/O devices in use. It will be fun when we can reach across the virtual table, shake each other’s hand, and feel it.

Smell- Nothing we can do here, but I do recall a ‘smell over IP’ company at either Interop or Macworld in the early 1990s. Perhaps someone smart acquired those patents. ;-)

Hearing- Same as, or better than. I can have spatial, wideband audio in a Second Life meeting today. Why is this better? I can get the same audio experience as an in-person meeting, but across a broad geography. This makes a ton of difference in the overall experience, as any early adopter of SL voice can attest.

So where is the ‘better than’, given these disadvantages? It’s in achieving your overall goal quicker, with the right people, information, and context.

Because this is an electronically mediated interaction, we can augment the interaction with another person in ways that are infeasible in person. You could record entire meetings, as you can with TiVo and television, and mine that data for later decisions or content. You could have documents, websites, media, and past meetings in orbit around your virtual table to support the decision or conversation at hand. You could also easily create and interact with 3D models of data, which is very useful in those instances we have all run into where a 2D spreadsheet or presentation has a lossy impact on the topic at hand. You could mine the metadata of the conversation and recommend people who need to be present at the conversation but aren’t there (the local subject matter expert, perhaps?).

These are the areas that need the most attention, in my opinion.
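The participant-recommendation idea could be sketched very simply. As a toy illustration (the expert directory, names, and keywords below are entirely hypothetical; a real system would draw on an org directory or skills database), you could score each absent person by the overlap between their known topics and the terms mined from the conversation:

```python
from collections import Counter

# Hypothetical expert directory: person -> topic keywords they cover.
EXPERTS = {
    "alice": {"qos", "codec", "g.722", "wideband"},
    "bob": {"3d", "rendering", "avatars"},
    "carol": {"biometrics", "affective", "sensors"},
}

def recommend_experts(transcript_terms, attendees, top_n=2):
    """Score absent people by overlap between conversation terms and their topics."""
    term_counts = Counter(t.lower() for t in transcript_terms)
    scores = {}
    for person, topics in EXPERTS.items():
        if person in attendees:
            continue  # already in the meeting
        score = sum(term_counts[t] for t in topics)
        if score > 0:
            scores[person] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

terms = ["we", "need", "wideband", "codec", "support", "codec"]
print(recommend_experts(terms, attendees={"bob"}))  # ['alice']
```

A real recommendation engine would weigh far richer signals (meeting history, document authorship, social graph), but even keyword overlap against a live transcript captures the ‘who is missing?’ question.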
We have the tools now (recommendation engines, inference engines); we need to apply them to our collaboration modalities. There is some great work being done in academia right now along these lines, including at the MIT Media Lab, Eurecom, and Coventry University’s Serious Games Institute, which will help accelerate the ‘better than’ of these environments.
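Even the call-center-style affect detection mentioned under Sight, flagging agitation from the amplitude and pacing of speech, comes down to a few measurable signals. A minimal sketch (the thresholds and sample data here are arbitrary assumptions, not a production algorithm):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one audio frame (samples in [-1.0, 1.0])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def flag_agitation(frames, words_spoken, seconds,
                   rms_threshold=0.5, wpm_threshold=180):
    """Crude agitation heuristic: loud audio combined with fast speech.

    frames: list of audio frames; words_spoken: word count from a speech
    recognizer over the same window. Thresholds are illustrative guesses.
    """
    avg_rms = sum(rms(f) for f in frames) / len(frames)
    wpm = words_spoken / seconds * 60.0
    return avg_rms > rms_threshold and wpm > wpm_threshold

quiet = [[0.1] * 100] * 4  # low-amplitude frames
loud = [[0.8] * 100] * 4   # high-amplitude frames
print(flag_agitation(quiet, words_spoken=20, seconds=10))  # False (120 wpm, quiet)
print(flag_agitation(loud, words_spoken=40, seconds=10))   # True (240 wpm, loud)
```

The output of something like this is exactly what could drive the animation overrides described earlier: the avatar reflects a mood the raw video channel no longer carries.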