One of the central tenets of Data Center 3.0 is the migration from GbE to 10GbE. Whether it's to support unified fabric (FCoE or iSCSI) or to support the kind of I/O consumption server virtualization is driving, we feel 10GbE is a fundamental building block of the next generation of data centers.
To that end, we have spent a good deal of effort providing our customers a granular and cost-effective path from their current GbE infrastructure to 10GbE. We support fiber-based connections across our switching portfolio with a wide variety of optics. With the advent of the Nexus 5000, we also added Twinax to the mix at significantly lower cost. A little while later, we introduced the Nexus 2000 fabric extenders as yet another option for customers migrating their data centers to 10GbE, one that both lowered costs and simplified management. We also recently added the Cisco Nexus 4000 blade switch for the IBM BladeCenter to the mix. The latest option we are offering is 10GBase-T (IEEE 802.3an-2006) support, which allows customers to take advantage of their existing copper cabling as they navigate the transition to 10GbE. In keeping with the extend-your-investment theme, 10GBase-T will be available for the Cisco Catalyst family first, then the Cisco Nexus family.
With the addition of the 10GBase-T options, we continue to offer the broadest, most flexible portfolio of 10GbE options. Here are some more details:
One of the ongoing challenges for our customers is finding ways to easily interconnect their data centers. Traditional drivers for this have been business continuance and the desire to load balance and make better use of underutilized resources. While these continue to be important, because of the spread of server virtualization, we also see emerging drivers around supporting inter-data center workload mobility and cloud import/export of workloads.
At this point, you may be thinking “Omar, there are already ways to do this, some even offered by Cisco–some you have blogged about!” Yes, dear reader, it's true, but with the release of a new NX-OS feature called Overlay Transport Virtualization (OTV) on our Nexus 7000, we expect to make connecting your data centers simpler while at the same time making those connections more intelligent and better suited to the emerging demands on data center interconnect solutions.
The cool, unique thing about our OTV solution is that it works with your existing transport. Essentially, OTV provides Ethernet LAN extension and “MAC routing” on top of the existing layer 3 (i.e. IP) infrastructure. Here is a quick (~ 3 min) video that goes over the basics of the OTV solution.
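To give a feel for how little the existing transport has to change, here is a minimal sketch of what an OTV setup looks like in NX-OS configuration. The interface names, VLAN ranges, and multicast groups are illustrative values, not a recommended design; check the NX-OS configuration guide for your release before using any of this.

```
feature otv
! VLAN used for OTV edge devices at this site to discover each other
otv site-vlan 99
!
interface Overlay1
  ! Physical uplink that carries encapsulated traffic over the IP core
  otv join-interface Ethernet1/1
  ! Multicast groups used for OTV control and data planes (illustrative)
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs whose MAC reachability is advertised between sites
  otv extend-vlan 100-110
  no shutdown
```

The key point the video makes is visible here: the core network only needs to route IP (and, in this multicast-based sketch, support the configured groups); all the LAN-extension intelligence lives at the overlay edge.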
The adoption of cloud-based computing promises to improve the agility, efficiency, and cost effectiveness of the IT operations required to provision, scale, and deliver applications to the enterprise. As with other technology trends, delivering applications from the cloud to remote sites creates challenges with application performance, availability, and security.
Enterprise IT departments are continuing to invest in technologies that generate cost savings while making their business applications more agile and available. These initiatives, such as consolidation of branch-office servers and virtualization of data center servers, are increasingly being adopted by the enterprise; however, they have not been without consequences. For example, branch-office server consolidation projects, while reducing the server footprint, can result in a poor end-user experience and increased bandwidth utilization because applications traverse a WAN link with higher latency and packet loss and lower bandwidth than a LAN link. WAN optimization solutions, such as Cisco® Wide Area Application Services (WAAS), are implemented to deliver LAN-like application response times for end users and to defer WAN bandwidth upgrades.
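The latency penalty described above is easy to quantify with the classic single-flow TCP bound: throughput cannot exceed the window size divided by the round-trip time. The sketch below (with illustrative window and RTT values, not measurements of any particular network) shows why an application that felt fast on the LAN can crawl once its server moves across a WAN.

```python
def tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on single-flow TCP throughput: window / RTT, in bits/sec."""
    return window_bytes * 8 / rtt_seconds

# Same 64 KB TCP window, very different round-trip times:
lan = tcp_throughput_bps(65536, 0.001)   # ~1 ms LAN RTT
wan = tcp_throughput_bps(65536, 0.080)   # ~80 ms WAN RTT after consolidation
print(f"LAN bound: {lan / 1e6:.1f} Mb/s")
print(f"WAN bound: {wan / 1e6:.1f} Mb/s")
```

With these numbers the same flow drops from roughly 524 Mb/s to roughly 6.5 Mb/s, regardless of how much raw bandwidth the WAN link has, which is exactly the gap that window scaling and WAN optimization techniques aim to close.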
We have seen some incredible uptake on the Cisco Nexus 1000V as customers deploy the switch to help them scale their server virtualization efforts. In support of that, starting next week, we are rolling out a seminar series to help customers better understand the solution and its applicability in their own environments. The sessions are free and are delivered by folks who are experts in their areas–OK, disclaimer, I am leading one session, but that's as a host for a customer panel, so I think I shouldn't screw that up too much. The series includes a business track and a technical track, covering topics like ROI and business impact, basic and advanced design and configuration, and broader architectural considerations.
Anyway, check out the complete list of sessions and register here. BTW, if there is a topic you’d like to see us address, let me know in the comments section.
The importance of reviewing data for Virtualization by Harris Sussman, Cisco Data Center Solutions – Unified Computing System
When buying a car, you can do your research in a number of ways, ranging from perusing the manufacturer's brochure or Consumer Reports to actually test-driving the vehicle. Choosing a server vendor is similar, in that an IT manager needs to ensure the server meets the criteria for the business's objectives.
For most IT buyers, purchasing decisions are not trivial, and each organization applies its own philosophy. As IT staffs embark on new virtualization projects, the aim is to reduce cost, increase business agility, and reduce complexity. A plethora of tools and industry benchmarks are available, but when it comes to virtualized environments, it's critical that organizations get this decision right.
While most hypervisor vendors have adequate benchmarks for their respective products, VMware's VMmark is still perceived as the gold standard. It combines six of the most common data center workloads, run together as a unit of work referred to as a tile, and this methodology remains the most sought after.
VMware’s VMmark benchmark is one of the most active benchmark sites, with vendors constantly trying to improve their results. Just recently, Cisco published its latest UCS results (http://www.vmware.com/products/vmmark/results.html), regaining the number one position for two-socket, eight-core systems using the latest Intel Xeon processor. While bragging rights are important for the vendor, customers rely on this information for buying decisions.
Performance benchmarks are an important data point, but in the absence of a standard virtualization benchmark, businesses must do the due diligence necessary to ensure they choose the right hypervisor. Chris Wolf, a Burton Group analyst, posted a nice blog about the need for SPECvirt: http://www.chriswolf.com/?p=303.