Visions of the future vary drastically in popular culture; the scenes shift and the circumstances are almost infinitely varied, but what is the one constant? At some point, the main character will inevitably interact with a thin-client device during a pivotal moment. It usually takes the form of a handheld screen with access to a seemingly limitless amount of media and data from anywhere.
Storage and compute power are good, and getting better, but I find it hard to believe that the entire Library of Congress, and the tools to manipulate that data, could fit on a tablet the size of my placemat. What does that leave? Virtualization and high-speed wireless access. You don’t need to store or process anything on the client, or even do anything beyond rendering images on the screen. Everything can be stored, provisioned, and sent directly to you. The future is beginning to look a little more plausible.
This week, Cisco announced the Virtualization Experience Infrastructure (VXI), enabling rich media communication to virtual desktops. Applications and services can be quickly deployed across your entire workforce, and across the many devices increasingly entering our lives. Fundamental to VXI is the secure, reliable delivery of media across the network. Much of it is latency-sensitive, such as live video or audio, but regardless of the content, it needs to be delivered flawlessly, on demand.
The overall collaboration market is now $38 billion (USD) worldwide. Video, including YouTube and Flip, is growing 35% year over year.
And there is a more than $8 billion (USD) market for video endpoints. (That’s not including the services opportunities, which add three to five times that $8 billion.)
I’m not throwing out these stats just to impress you. (Though I hope they do.)
With more than 80% of Cisco’s business going through the channel and more than 95% of Tandberg’s legacy business going through the channel, collaboration doesn’t just mean creating products that help businesses collaborate. Sure, that’s an integral part of it, but the collaboration we mean here is also about the ways we collaborate with our partners.
We sat down with Richard McLeod (Senior Director of Collaboration here at Cisco) to find out about the products and programs launching this week at our 2010 Collaboration Launch, why everything from Cisco will be video-enabled, and what it means for Cisco and Tandberg partners.
Our 2010 Collaboration Launch brings to market a slew of new video, desktop virtualization, and collaboration products for businesses of all sizes. Not to mention new programs for partners. The new products and programs include…
Today Cisco announced the Virtualization Experience Infrastructure (VXI). I wanted to take the opportunity to discuss this new system and offer some thoughts on what it means relative to desktop virtualization.
As the solutions marketing manager for Cisco’s Desktop Virtualization solutions, I want to use this opportunity to start a dialog around the trends we’re seeing across IT organizations and their efforts to embrace desktop virtualization. I thought we might start by posing a common question: as you design your data center infrastructure to handle virtual desktop workloads, is desktop virtualization really just another workload? Do you build a single consolidated, shared infrastructure to accommodate the usual server workloads alongside VM-hosted desktops, or do you handle these somehow differently? A VM’s a VM, regardless of what’s running on it, right? Consolidated, shared, elastic: this is the cloud infrastructure vision, isn’t it? Whether you’re embracing VDI or application virtualization, why should desktops be different? Maybe we should start with what is probably the biggest challenge and exposure associated with moving to virtual desktops: The End User.
Quality of user experience and application responsiveness, as impacted by a sub-optimal infrastructure, still tend to be among the biggest impediments to virtual desktop implementations moving from proof of concept to production (that, and the sometimes elusive path to the expected ROI/TCO, which we’ll get into in another post). These problems are often the result of insufficient testing to replicate end-state loads on network, compute, and storage. The results of a small pilot quite often don’t accurately predict what really happens when you multiply “The End User” ten-fold.
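To make that ten-fold point concrete, here is a minimal back-of-envelope sketch of extrapolating pilot measurements to production scale. Every number in it is an illustrative assumption, not data from any real deployment; the point is simply that steady-state averages from a small pilot say little about the peaks a full Monday-morning population will generate.

```python
# A minimal extrapolation from pilot measurements to production scale.
# All figures below are illustrative assumptions, not measured data;
# substitute the numbers from your own pilot.

PILOT_USERS = 30
PILOT_AVG_IOPS = 450        # assumed aggregate steady-state IOPS in the pilot
PILOT_AVG_MBPS = 90         # assumed aggregate network throughput (Mbps)
PEAK_FACTOR = 4.0           # assumed peak-to-average ratio at logon time

def extrapolate(target_users: int) -> dict:
    """Scale per-user pilot averages linearly, then apply the peak factor.

    Linear scaling is the optimistic case: shared caches help, but storms
    at scale (everyone logging on Monday at 9:00) push toward the peak.
    """
    per_user_iops = PILOT_AVG_IOPS / PILOT_USERS
    per_user_mbps = PILOT_AVG_MBPS / PILOT_USERS
    return {
        "steady_iops": per_user_iops * target_users,
        "peak_iops": per_user_iops * target_users * PEAK_FACTOR,
        "steady_mbps": per_user_mbps * target_users,
        "peak_mbps": per_user_mbps * target_users * PEAK_FACTOR,
    }

if __name__ == "__main__":
    for users in (30, 300):
        est = extrapolate(users)
        print(f"{users:>4} users: ~{est['steady_iops']:,.0f} IOPS steady, "
              f"~{est['peak_iops']:,.0f} IOPS peak, "
              f"~{est['peak_mbps']:,.0f} Mbps peak")
```

Even this crude model shows why pilots mislead: the steady-state numbers scale gently, but the peaks are what your 300 users will actually feel.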
So often with these projects, somewhere along the way, the combination of a disappointing user experience, perhaps compounded by unrealistic expectations, results in the solution never getting off the ground. I’d like to say that there’s one solution that never fails. But let’s be honest: there are so many variables when you consider the infrastructure (compute, network, storage), as well as the use cases across the constituents in your workforce, that there’s likely no single prescriptive approach to ensuring success on Day 1.
This much we can agree on: your chances for success improve significantly when you commit to the right Day 1 infrastructure approach, tailored to delivering the best user experience possible, versus hoping your current infrastructure is agile and elastic enough to accommodate the 300 users who don’t know what they’re about to step into Monday morning when they log on to their new virtual desktops.
Here are a few questions to consider when trying to “build it right” on Day 1:
What would happen if you were to mix desktop workloads directly with enterprise application workloads? Isn’t the approach to updating, patching, and securing desktops very different from the approach taken with business-critical applications in the data center? It’s not hard to imagine A/V scans on desktop workloads impacting the performance of application workloads residing in the same compute resource pool.
What’s the profile of the compute and storage infrastructure? It’s well known that desktop virtualization places a significant burden on memory and I/O before it does on CPU, except in the case of graphics-intensive apps. It therefore makes sense, from both an economics and a user-experience perspective, to ensure that the memory/CPU/I/O ratio is well suited to hosting virtual desktops. Likewise with storage: virtual desktop IOPS can be extremely high, especially during boot and logon storms, and can drive larger-than-necessary storage costs. So doesn’t it make sense to ensure that the compute and storage infrastructure are designed and configured around the unique requirements of desktop workloads? (For a rough sense of the storm numbers, see the sketch after these questions.)
What about security? The advent of virtual desktops gives IT a unique opportunity to dynamically create virtual workgroups that have access rights to certain resources and not others. This could possibly be achieved even when mixing desktop and application workloads, but how much more difficult would that be to manage and maintain, especially at the outset of moving your virtual desktops into full production?
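To put rough numbers behind the boot and logon storm question above, here is a minimal sizing sketch. The per-desktop figures are commonly cited rules of thumb, not vendor specifications; treat them as assumptions to replace with measurements from your own desktop image.

```python
# Rough boot-storm IOPS sizing for a virtual desktop pool.
# Per-desktop figures are assumed rules of thumb, not measurements;
# replace them with data profiled from your own image.

STEADY_IOPS_PER_DESKTOP = 10   # assumed steady-state IOPS per desktop
BOOT_IOPS_PER_DESKTOP = 150    # assumed IOPS per desktop while booting
BOOT_SECONDS = 60              # assumed time for one desktop to boot

def storm_iops(desktops: int, storm_window_s: int) -> float:
    """Estimate aggregate IOPS if all desktops boot within storm_window_s.

    The number booting concurrently at any instant is roughly
    desktops * (BOOT_SECONDS / storm_window_s), capped at the pool size.
    """
    concurrent = min(desktops, desktops * BOOT_SECONDS / storm_window_s)
    return concurrent * BOOT_IOPS_PER_DESKTOP

if __name__ == "__main__":
    pool = 300
    print(f"Steady state: ~{pool * STEADY_IOPS_PER_DESKTOP:,} IOPS")
    for window in (3600, 900, 300):   # storm spread over 60, 15, 5 minutes
        print(f"Boot storm over {window // 60:>2} min: "
              f"~{storm_iops(pool, window):,.0f} IOPS")
```

Under these simple assumptions, a 300-desktop pool that idles around 3,000 IOPS can demand three times that if everyone boots within a five-minute window, which is exactly why the storage tier should be designed around desktop workloads specifically.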
It’s interesting to now be talking about moving “beyond” with regard to virtualization. It’s been such a hot topic, with such a wealth of new technology, and new problems reveal themselves only to be addressed in creative ways.
Welcome to the show notes for our latest Data Center focused show… this one is a deep dive around multiple new technology announcements. If for some reason you are just now reading this and have not watched the show, I encourage you to check it out right away; these notes make much more sense in the context of our video. Here’s the teaser to get you started…
So what are the most intriguing items covered in this one?