One of the annual joys and rites of summer for millions of Americans is cheering for their favorite baseball teams. Fans have more options than ever for following their teams on the go, with always-on access to MLB.com's news and streaming video anywhere, anytime, on any device, supported by Cisco networking and data center infrastructure. Press coming to Cisco Live are in for a baseball treat today, with a tour of Petco Park followed by a behind-the-scenes look at the MLB IT infrastructure from Joe Choti, CTO and Sr. VP, Major League Baseball Advanced Media, and Steve Reese, VP of Technology, Petco Park.
It's evident from the evolution of technology that the "need for speed" is deeply embedded in human nature. Without going too far back in history: the horse and buggy was once the main mode of transportation, but unfortunately it wasn't fast enough. So we invented the locomotive, the automobile, the airplane, the fax machine, e-mail, and mobile phones with text messaging, among hundreds of other inventions, to fulfill our need to do things faster.
Being a networking guy, I might be biased, but I see networks as the new frontier for speed, especially now that we are a media- and information-driven society. It wasn't long ago that a 10Mbps shared Ethernet LAN and 56kbps WAN links were considered fast (showing my age here). However, every time faster networking speeds were introduced, newer applications quickly consumed the capacity, driving the need for even higher speeds.
Over the years we've seen Ethernet speeds increase in 10x increments, from 10Mbps to 100Mbps to 1GE and 10GE, and now we're at another speed inflection point: 100 Gigabit Ethernet! This week Cisco added to our 100GE router portfolio (CRS and ASR routers) with the announcement of a 100GE M2-Series module for the Cisco Nexus 7000 Series switches. Along with the 100GE module, we also announced a 40GE M2-Series module for the Nexus 7000 and a 40GE module for the Catalyst 6500.
In case you missed it (or don't read Russian), I wanted to call out two newsworthy items related to Cisco and 100G technology.
Last week at CiscoLive! London we announced the availability of 100GE interfaces on the Nexus 7000 to reduce bandwidth bottlenecks in the data center and help our customers meet the demands of emerging cloud computing applications. With this announcement, Cisco becomes the only vendor in the industry offering an end-to-end 100G solution, spanning the core (CRS), edge (ASR 9000), data center (Nexus 7000), and coherent DWDM optical transport (ONS 15454 MSTP). Furthermore, we're one of only a handful of companies in the networking industry that owns (through our acquisition of CoreOptics) the underlying technology needed to make 100G, and beyond, a cost-effective reality. Given the high forecasted growth rate of the global Internet, we believe our customers will strongly benefit from the unique breadth of our solution to meet both their business and technology requirements.
Today, we made a significant announcement that spans data center, campus, service provider, and cloud-based deployments, geared toward helping our customers embrace the winds of change buffeting the IT landscape. This announcement is precipitated by a number of mega-trends that were mere buzzwords even a couple of years ago but have become looming realities: think video, virtualization, 10G, bring-your-own-device (BYOD), and, not to forget, the journey to cloud.
Layer in ongoing concerns like security and energy efficiency, and boy, do we have the perfect storm brewing.
The three “Cs”:
For many customers, it is no longer sufficient to take a band-aid approach. A faster switch here or a new wireless LAN access point there just doesn't cut it. They have to step back, evaluate their environment holistically, and minimize the chokepoints proactively. This is leading them to evaluate the three "Cs" of capacity, complexity, and cost, while ensuring that they're in a position to deliver the end-to-end IT experience.
I previously discussed using LISP to optimize your client-to-server traffic, so today I'll cover the reverse direction: egress path optimization from the server back to the client. Let's go over the need for path optimization in the server-to-client direction with some pictures and explanations.
The Virtual Machine (VM) server is configured with a default gateway IP address, 192.168.1.1, which is the next-hop IP address the VM forwards packets toward as traffic returns to the client outside the data center. In this data center environment, we've deployed the default gateway using a First Hop Redundancy Protocol (FHRP). In reality, FHRP is an umbrella term that covers Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP), the two main technologies that provide transparent failover and redundancy at the first-hop IP router. Please see info on FHRP here.
Also notice that the VM default gateway is the same as the HSRP Virtual IP address (VIP). The VIP binds to one of the physical HSRP routers via an HSRP election process that uses Layer 2 control packets exchanged between the two routers. This means the VM default gateway, since it points to a VIP, may move between the physical HSRP routers, which is, of course, the intent of the design when using any type of FHRP.
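As a minimal sketch of how this looks on the two first-hop routers, here is a hypothetical HSRP configuration; the VLAN interface names, real IP addresses, and priority values are illustrative assumptions, with only the VIP (192.168.1.1) taken from the example above:

```
! Router A -- intended active gateway (illustrative config)
interface Vlan100
 ip address 192.168.1.2 255.255.255.0
 standby 1 ip 192.168.1.1       ! HSRP VIP = the VM's default gateway
 standby 1 priority 110         ! higher priority wins the HSRP election
 standby 1 preempt              ! reclaim the active role after recovery

! Router B -- standby gateway
interface Vlan100
 ip address 192.168.1.3 255.255.255.0
 standby 1 ip 192.168.1.1       ! same VIP configured on both routers
 standby 1 priority 100
```

Because both routers advertise the same VIP, the VM never needs to learn which physical router is currently active; if Router A fails, Router B takes over the VIP (and its virtual MAC) transparently.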
In the picture above, the path is optimized from server to client, so now let's take a look at what happens when we migrate the VM to the new data center.