Cisco Blogs



Feeling the “need for speed”? Announcing 100GE on the Nexus 7000 Series

The evolution of technology makes it clear that the “need for speed” is deeply embedded in human nature. Without going too far back in history, the horse and buggy was once the main mode of transportation, but unfortunately not a fast one. So we invented the locomotive, automobile, airplane, fax machine, e-mail, and mobile phones with text messaging, among hundreds of other inventions, to fulfill our need to do things faster.

Being a networking guy, I might be biased, but I see networks as the new frontier for speed, especially now that we are a media- and information-driven society. It wasn’t long ago that a 10 Mbps shared Ethernet LAN and 56 kbps WAN links were considered fast (showing my age here). Yet every time faster networking speeds were introduced, newer applications quickly consumed the capacity, driving the need for even higher speeds.

Over the years we’ve seen Ethernet speeds increase in increments of 10x, from 10 Mbps to 100 Mbps to 1GE to 10GE, and now we’re at another speed inflection point: 100 Gigabit Ethernet! This week Cisco added to our 100GE router portfolio (CRS and ASR routers) with the announcement of a 100GE M2-Series module for the Cisco Nexus 7000 Series switches. Along with the 100GE module, we also announced a 40GE M2-Series module for the Nexus 7000 and a 40GE module for the Catalyst 6500.



Latest 100 Gigabit News from Cisco

In case you might have missed it (or don’t read Russian) I wanted to call out two newsworthy items related to Cisco and 100G technology.

Last week at CiscoLive! London we announced the availability of 100GE interfaces on the Nexus 7000 to reduce bandwidth bottlenecks in the data center and help our customers meet the demands of emerging cloud computing applications. With this announcement, Cisco becomes the only vendor in the industry offering an end-to-end 100G solution, spanning the core (CRS), edge (ASR 9000), data center (Nexus 7000), and coherent DWDM optical transport (ONS 15454 MSTP). Furthermore, we’re one of only a handful of companies in the networking industry that owns (through our acquisition of CoreOptics) the underlying technology needed to make 100G, and beyond, a cost-effective reality. Given the high forecasted growth rate of the global Internet, we believe our customers will strongly benefit from the unique breadth of our solution to meet both their business and technology requirements.

Cisco end-to-end 100 Gbps solution: core, edge, optical, and data center.



How Cisco Switching Innovations Help Deliver Cloud-Ready Networking

Today, we made a significant announcement that spans data center, campus, service provider, and cloud-based deployments, geared towards helping our customers embrace the winds of change that are buffeting the IT landscape. This announcement is precipitated by a number of mega-trends that were mere buzzwords even a couple of years ago but have since become looming realities: think video, virtualization, 10G, bring your own device (BYOD), and, not to forget, the journey to cloud.

Layer in ongoing concerns like security and energy efficiency – and boy, do we have the perfect storm brewing.

The three “Cs”:

For many customers, it is no longer sufficient to take a “band-aid approach”. A faster switch here or a new wireless LAN access point there just doesn’t cut it. They have to step back, evaluate their environment holistically, and minimize the chokepoints proactively. This is causing them to evaluate the three “Cs” of capacity, complexity, and cost, while ensuring that they’re in a position to deliver the end-to-end IT experience.



FHRP – Egress Path Optimization from the Server to the Client

I previously discussed using LISP to optimize your client-to-server traffic, so today I’ll discuss the reverse direction: egress path optimization from the server to the client. Let’s go over the need for path optimization in the server-to-client direction with some pictures and explanations.

The Virtual Machine (VM) server is configured with a default gateway IP address, 192.168.1.1, which is the next-hop IP address that the VM forwards packets towards as the traffic returns to the client outside the data center. In this data center environment, we’ve deployed the default gateway using a First Hop Redundancy Protocol (FHRP). In reality, FHRP is an umbrella term covering the Hot Standby Router Protocol (HSRP) and the Virtual Router Redundancy Protocol (VRRP), the two main technologies that provide transparent failover and redundancy at the first-hop IP router. Please see info on FHRP here.

Also notice that the VM default gateway is the same as the HSRP Virtual IP address (VIP). The HSRP VIP binds itself to one of the physical HSRP routers via an HSRP election process that uses Layer 2 control packets between the two physical HSRP routers. This means that the VM default gateway, since it points to a VIP, may move between the physical HSRP routers, which is, of course, the intent of the design when using any type of FHRP.
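As an illustration, a minimal HSRP configuration along these lines might look like the sketch below. The 192.168.1.1 VIP comes from the example above; the interface names, physical addresses, group number, and priorities are hypothetical placeholders.

```
! Router A - higher priority, wins the HSRP election and answers for the VIP
interface Vlan10
 ip address 192.168.1.2 255.255.255.0
 standby 10 ip 192.168.1.1        ! HSRP virtual IP = the VM default gateway
 standby 10 priority 110
 standby 10 preempt

! Router B - standby; takes over the VIP if Router A fails
interface Vlan10
 ip address 192.168.1.3 255.255.255.0
 standby 10 ip 192.168.1.1
 standby 10 priority 100
```

The VM keeps 192.168.1.1 as its gateway no matter which physical router currently owns the VIP, which is what makes the failover transparent to the server.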

In the above picture, the Path is Optimized from Server to Client, so now let’s take a look at what happens when we migrate the VM to the new data center.



Best Practices for Application Delivery in Virtualized Networks – Part II

As we start off this New Year, how about including a resolution to improve application delivery? In Best Practices for Application Delivery in Virtualized Networks – Part I, we covered key application delivery challenges that have arisen from the complexity of managing the many types of applications enterprises use today, challenges further complicated by data center consolidation and virtualization. We then covered some best practices, courtesy of Dr. Jim Metzler’s 2011 Application Service Delivery Handbook, which recommended taking a lifecycle approach to planning and managing application performance.

A key step to the lifecycle approach is to implement network and application optimization tools, such as WAN Optimization solutions and Application Delivery Controllers, including server load balancers. Of course, these solutions are not new to the market and already address many of the needs that exist with delivering enterprise applications in virtualized data centers -- namely, the need to ensure network reliability, availability and security for users accessing these applications. In this post, we will discuss a recent study by IDC, where IT decision makers across Europe and the US spoke out about their strategies for using server load balancers to deal with emerging challenges.
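To make the server load balancer role concrete, here is a minimal sketch in Cisco IOS SLB syntax: a pool of real servers fronted by a single virtual IP that clients connect to. The farm name, virtual IP, and real-server addresses are hypothetical placeholders, not taken from the study.

```
! Define the pool of real web servers behind the load balancer
ip slb serverfarm WEBFARM
 real 10.1.1.10
  inservice
 real 10.1.1.11
  inservice

! Expose one virtual server (VIP) that clients actually connect to
ip slb vserver WEB-VIP
 virtual 10.1.1.100 tcp 80
 serverfarm WEBFARM
 inservice
```

Because clients only see the VIP, real servers can be added, drained, or replaced behind it without touching the client-facing configuration, which is exactly the kind of flexibility virtualized data centers demand.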



What important attributes do you look for in your server load balancers?

