Today Cisco is introducing an expanded architectural portfolio and partner ecosystem in support of our successful desktop virtualization solution built on the Cisco Unified Computing System (UCS). UCS market traction has been phenomenal over the last three years. In fact, desktop virtualization has been one of the top workloads deployed on UCS, as IT organizations apply the benefits of our stateless, simplified operations model, expansive I/O, and scalable performance to desktop workloads in the data center. Combining unique product integration with software ecosystem partners such as VMware, Citrix, and Microsoft, Cisco has delivered a number of reference designs with our strategic storage partners EMC and NetApp. These architectures are based on designs that scale easily from a few hundred virtual desktops to many thousands.
We have reached an inflection point: a perfect storm of evolving storage options, maturing desktop software, and changing data center architectures. One of the important changes in the storage market is the emergence of flash storage to address performance problems.
Taking advantage of enhanced UCS features and an expanding ecosystem of storage partners including Atlantis Computing, Fusion-io, LSI, Nexenta, Nimble Storage, and Tegile, Cisco is defining a broader portfolio of data center architectures for delivering desktop virtualization solutions: on-board architecture, simplified architecture, and scalable architecture. “Converged” or “unified” infrastructure stacks such as FlexPod and vBlock have been, and will continue to be, another successful option for desktop delivery infrastructure. Let me walk you through each of these architectural approaches.
Sure, there are many events and conferences going on this week, but stick a reminder on your calendar to watch this week’s episode of Engineers Unplugged. Ed Saipetch (@edsai), of Speaking in Tech fame, and Andre Leibovici (@andreleibovici) of VMware talk about the evolution of BYOD (Bring Your Own Device), VDI, and EUC, and the changes brought about by new devices.
Bringing the 1970s office to you, unicorn style.
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
Practice drawing unicorns
What have been your challenges (IT or client side) as we move into the world of mobile employment and endlessly proliferating devices and apps? Post a comment here, or join the discussion on Twitter, #EngineersUnplugged.
Data Centres are evolving rapidly, in response to the many industry IT Megatrends we have previously discussed. Services and applications are increasingly being delivered from very large data centres and, increasingly, from hybrid and public clouds too.
Specifically, a good example of services being delivered from data centres is Hosted Desktops. I discussed in my last post how technologies such as TrustSec can help secure VXI/VDI deployments. VXI is a good example of a service originally delivered only from private data centres, now being delivered As A Service as well.
Video is (and will be) increasingly delivered from data centres as a service. Infrastructure services (servers/VMs, storage…) are also increasingly delivered internally through private clouds.
Consequently, securing those environments is now perceived by our customers’ CTOs and architects as the biggest barrier to adopting clouds on a much larger scale.
We will therefore look at how TrustSec can pervasively help secure all data centre traffic. Read More »
We recently discussed the perfect IT storm that is currently brewing in business. BYOD, Unified Access, Video, the Many Clouds, SDN… all happening at once, on current infrastructure, and yet demanding more.
Some of the comments you made further emphasized the need to have an architectural approach.
Discussing VDI deployments with our customers in EMEAR, two things are really at the centre of our conversations from an infrastructure standpoint:
- Security, which I’ll discuss in today’s post.
- Latency and user experience. Two recent posts, here and here, provide great insight on how to tackle this challenge.
I have therefore asked Steinthor Bjarnason (firstname.lastname@example.org), Senior EMEAR Security Consultant based out of Norway, to give me his perspective. He has 15 years’ experience in the security space, and his perspectives are drawn from numerous customer projects in both the Enterprise and Service Provider spaces. Read More »
Can you see it? The end is nigh! The end of this blog series, that is, not necessarily “the end” as in AMC’s The Walking Dead sort of end. Are you zombie-stumbling across this blog from a random Google search? Here is a table of contents to help you on your journey as we once again delve into the depths and address another question on our quest to answer… The VDI questions you didn’t ask, but really should have.
Got RAM? VDI is an interesting beast, both from a physical perspective and in terms of its care and feeding. One thing this beast certainly does like is RAM (and braaaiiiins). Just in case I am still being stalked by that tech writer, RAM stands for Random Access Memory. I spoke a bit about operating systems in the 5th question of this series, and this builds upon that with regard to how much memory you should allocate. Microsoft says Windows 7 needs 1 gigabyte (GB) of RAM (32-bit) or 2 GB of RAM (64-bit). For the purpose of our testing, we went smack in the middle with 1.5 GB of RAM. Does it really matter what we used for this testing? It does, a little: first, we need sufficient resources for the desktop to perform the functions of the workload test, and second, we need to pre-establish some boundaries to measure from.
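To see why the per-desktop figure matters at scale, here is a minimal sizing sketch. The 1.5 GB per desktop matches the test configuration described above; the desktop count and the per-VM hypervisor overhead figure are hypothetical placeholders, so substitute your own measured values.

```python
# Illustrative VDI memory sizing sketch (not from the original post).
# 1.5 GB per desktop comes from the post's test setup; the 100-desktop
# count and 100 MB per-VM hypervisor overhead are assumed placeholders.

GB = 1024  # MB per GB

def total_host_ram_mb(desktops, ram_per_desktop_mb, overhead_per_vm_mb):
    """Aggregate RAM required: guest memory plus per-VM hypervisor overhead."""
    return desktops * (ram_per_desktop_mb + overhead_per_vm_mb)

needed = total_host_ram_mb(desktops=100,
                           ram_per_desktop_mb=int(1.5 * GB),
                           overhead_per_vm_mb=100)
print(needed)  # 163600 MB, i.e. roughly 160 GB before any overcommit savings
```

Note that this is a worst-case figure: hypervisor features such as page sharing and memory overcommit can reduce the physical RAM actually consumed, which is exactly why the overhead accounting discussed next matters.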
Calculating overhead. To properly account for memory usage, we need to factor in the overhead the hypervisor itself adds. If you want to learn more about calculating overhead, click here. Here are a couple of things we are figuring into overhead: