While there is more and more talk of cloud computing lately, it’s not clear how data center managers can integrate it into their capacity planning in a standardized way. Most approaches to internal and external cloud computing offered today work differently from vendor to vendor, and vary by the type of application problem being solved or cloud service required. For example, a business may choose to access an application in the cloud, such as Salesforce.com, or to move a particular infrastructure or platform stack to an internal cloud technology or an external cloud provider. And for cloud computing to be truly valuable, it needs to offer the data center manager a range of technologies that work seamlessly together, deploying services as required to meet business needs.
I spent the first part of this week in Las Vegas at the Gartner Data Center show. I live to tell the tale. Here it is:
1. I still hate Vegas and always will. The smoke. The expensive sleaze. The blasé, graceless service. The aging, overburdened airport. The fact that you tend to spend your entire stay without ever seeing the sun or breathing fresh air. And I’ve come back fluffier (with more avoirdupois, for non-American readers).
2. Cloud, cloud, cloud, cloud. We even seem to be past the previously obligatory “nebulous” puns (finally!), because the conversation is no longer “what is cloud computing” or even “why cloud computing” but “what’s the best way to get there”.
Taking the Guesswork out of Deploying Virtual Desktops: Cisco + VMware Validated Designs for View 4.5
So, like many IT organizations, you may have already made the decision to deploy virtual desktops – you’re ready to move from a small pilot to full production. But a lot of questions (and possibly some guesswork) stand in the way – what does the end-state architecture need to look like? How do you get there? How are you going to make sure that you can move quickly and seamlessly from proof of concept to scalable production? Accounting for sufficient server capacity, network bandwidth and performance, storage IOPS, and especially quality of experience at the end-user level – there are a lot of factors to contend with. And how do you predict user behavior in a production environment, including the load users will collectively place on your infrastructure when they log into their brand-new virtual desktops on Monday morning?
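The Monday-morning “boot storm” question above lends itself to a back-of-envelope calculation. The sketch below is purely illustrative – the per-desktop IOPS figures and concurrency ratio are hypothetical assumptions, not numbers from the Cisco + VMware validated designs; you would substitute measurements from your own pilot:

```python
# Back-of-envelope VDI storage sizing sketch.
# All per-desktop figures below are hypothetical assumptions --
# replace them with data measured during your pilot phase.

def peak_boot_storm_iops(desktops, iops_per_boot=26, concurrency=0.30):
    """Aggregate storage IOPS if `concurrency` fraction of desktops
    boot simultaneously, each generating `iops_per_boot` IOPS."""
    return desktops * concurrency * iops_per_boot

def steady_state_iops(desktops, iops_per_desktop=8):
    """Aggregate IOPS once users are logged in and working normally."""
    return desktops * iops_per_desktop

if __name__ == "__main__":
    n = 2000  # desktops in the planned deployment (example value)
    print(f"Boot storm peak: {peak_boot_storm_iops(n):,.0f} IOPS")
    print(f"Steady state:    {steady_state_iops(n):,.0f} IOPS")
```

The point of a validated design is precisely to replace guesses like these defaults with tested, scalable configuration numbers.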
So, what are the customer trends over the next 3-5 years, how much are customers really buying into virtualization and cloud, and what does all this change in the data center mean for their careers?
As part of our ongoing series of Connected World Reports, we asked these questions and more of 2,600 folks from 13 countries across the globe and got some surprising responses back. We are getting together some visionaries of our own to discuss the responses and add their own insights into where IT and the data center are going in the next few years:
- John Manville, vice president of IT, Cisco
- Jackie Ross, vice president, Server Access and Virtualization Group, Cisco
- Brian Modoff, senior analyst, Deutsche Bank
Join our panel on December 8 at 8:00 a.m. PST, via a live Internet TV event to review the results and implications of the third and final Cisco Connected World Report, called “Focus on the Data Center”.
- To view the program, visit www.ustream.tv/ciscotv. Registration is not required, and the programs will also be available for replay at the same link: www.ustream.tv/ciscotv.
The Fibre Channel Industry Association (FCIA) announced the completion of the FCoE/8GFC Plugfest held recently at the University of New Hampshire Interoperability Lab (UNH-IOL). While it looks like many vendors are on top of FCoE/8GFC interop testing, several major vendors were conspicuously missing:
“Participating companies in the Plugfest included ATTO, Broadcom, Chelsio, Cisco, Emulex, Hewlett-Packard, Intel, Ixia, JDSU, LSI, Mellanox Technologies, NetApp, QLogic and SANBlaze Technology”.
Why is Multi-Vendor Interoperability Important?
As customers transition to virtualized and cloud environments and attempt to wring the most out of their existing technology, multi-protocol environments are inevitable. The single biggest hurdle to overcome in multi-protocol environments (FC, FCoE, iSCSI, etc.) is intelligent scaling — and intelligent scaling requires interoperability between legacy and virtualized environments.
Standards-based architectures — The only way to ensure Interop!
Because most customer environments are heterogeneous in nature, a standards-based approach is paramount to enable intelligent scaling at reduced cost without forklift upgrades.
- Cisco participates in more than 75 standards bodies
- Cisco has been issued several thousand patents in networking hardware and software innovations over the last 25 years
- Cisco invests significant R&D effort in innovation, drives those innovations into standards, and invests in post-standards activity based on customer requirements