A key WAN optimization benefit is the mitigation of bandwidth consumption during the huge Monday-morning traffic burst, when employees arrive at work and email attachments arrive in their mailboxes. The chart shows actual bandwidth consumption well below what the applications would otherwise have required.
A few weeks back I highlighted a report from VCE about our virtual WAAS (vWAAS) WAN optimization solution running on the Vblock platform. Now comes a new case study of a vWAAS deployment at Georgetown, Kentucky-based Toyota Tsusho America, Inc. (TAI). For the Georgetown data center, TAI decided on vWAAS rather than WAAS appliances. The detailed case study makes a compelling argument for virtualizing WAN optimization for improved high availability and more streamlined operations.
“We were an early adopter of vWAAS,” says Chris Jones, TAI Manager of Infrastructure and Operations, “and we perceived value in placing WAN optimization close to the data rather than near the WAN edge. In particular, we felt we could have lower-cost high availability (HA) for WAN optimization by leveraging the Vblock HA. And we perceived operational simplicity in the event of failure, compared with replacing a physical appliance and rebuilding the cache.”
At Cisco Live London 2012, we announced that the Nexus 1000V distributed virtual switch (DVS) architecture will scale to support 10K+ ports across hundreds of servers. This is a multi-fold increase over our current support of 2K ports and 64 servers. What is driving the need to scale? Two trends: more VMs and broader VM mobility.
The number of VMs is growing by leaps and bounds in data centers and cloud computing environments, which in turn is driving the need to scale virtual switch ports. Depending on whom you ask, we have already reached, or are about to reach, the tipping point where 50% of enterprise workloads have been virtualized. In most IT environments today, you get a VM by default for computing needs; running an app on a bare-metal physical server requires special approval. And needless to say, Moore’s Law continues to drive dense multi-core CPUs with extended memory architectures, enabling many more virtual machines to be instantiated on a single physical server. We have seen UCS customers deploy 10–30 VMs per server for production workloads, and 50+ (in some cases 100+) VMs per server for non-production workloads and virtual desktops. Increased adoption of public cloud computing resources, along with growing deployments of private clouds in enterprises, is also rapidly increasing the VM count. Furthermore, customers often assign multiple vNICs per VM: for example, one for data traffic, another for management, a third for backup, and so on. These factors are contributing to increased demand for virtual Ethernet (vEth) ports on the Nexus 1000V DVS.
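To make the vNIC-to-vEth relationship concrete: on the Nexus 1000V, each VM vNIC attaches to a vEth interface whose policy is inherited from a port profile defined on the Virtual Supervisor Module. A rough sketch, using standard NX-OS port-profile syntax (the profile name and VLAN number here are hypothetical, not from the announcement):

```
! Hypothetical port profile for VM data vNICs.
! Every vNIC bound to this profile consumes one vEth port on the DVS.
port-profile type vethernet VM-Data
  vmware port-group          ! publish to vCenter as a port group
  switchport mode access
  switchport access vlan 100 ! assumed VLAN for illustration
  no shutdown
  state enabled
```

In vCenter the profile appears as a port group; assigning a VM’s vNIC to it allocates a vEth interface on the DVS, which is why multiple vNICs per VM multiply the port demand described above.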
For anyone who has ventured to a tech conference, flown into an airport or even driven down CA highway 101 this past year, it’s clear that cloud is still top of mind for many technical and business decision makers. We believe this means that enterprises are no longer just talking the talk, but are looking deeper into their networking infrastructure to see if they are ready to meet the challenges of cloud, virtualization and workload mobility. At Cisco, it is our job to help build clouds that can handle elastic demand and efficiently use the networking infrastructure at both a virtual and physical level. This week, we are announcing several key upgrades to the Nexus 1000V family that bring scalability and cloud readiness to the network.