
Re-Thinking System Availability in the World of Virtualization

September 10, 2008 at 12:00 pm PST

So I have been an auto enthusiast forever. For much of that time, I have been an adherent to the mantra "there is no replacement for displacement" (sorry, Rob). What has changed over the years, however, is that car buying has evolved beyond looking for the biggest engine and lowest 0-60 times I could afford. Don't get me wrong, I still optioned the larger engine in my last two rides (sorry again, Rob), but I finally figured out other things are as, if not more, important. Opportunities to open the throttle all the way are thrilling, but limited, and these days I actually have more fun hunting for tasty switchbacks on the backroads between my home and San Jose. Simply put, the daily aspects of my current ride have come to define my overall experience with it.

Back in February, I talked a bit about what makes the Nexus 7000 better than anything else on the market: the value of the switch does not hinge on how fast it goes, but on things that really matter to folks who have to live with these switches on a daily basis. One of these areas is the growing intolerance for system downtime. The current trends around consolidation and virtualization demand a different mindset and different expectations of infrastructure. While there are indisputable benefits to consolidation and virtualization, the flip side is that the size of a failure domain grows in proportion to the level of consolidation and virtualization. Every part of the data center needs to meet this higher bar. For example, VMware has its HA and DRS solutions. On the network side, one of the things we offer is a Zero Service Loss architecture on the Nexus 7000.

Network World just put the Nexus 7000 through its paces and scored it 5/5 for availability. With all 256 10GbE ports forwarding traffic, the testers killed the OSPF process, upgraded and downgraded the software, and finally pulled 4 of the 5 fabric modules from the switch. In all cases, the switch did not drop a packet. In a similar example, you can see how NX-OS and the Nexus 7000 handle things when you kill spanning tree while the switch is serving as the root bridge.

I will have some other cool stuff to talk about in the next couple of weeks, but in the interim, we have an At-A-Glance and whitepaper that dig into this a little further.
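To make that kind of availability test concrete, here is a minimal sketch of the general idea behind measuring service loss: stream sequence-numbered traffic through the device while a disruptive event (a process kill, a software upgrade, a fabric-module pull) happens out-of-band, then count any gaps in what arrived. This is not the Network World harness; the loopback host, port, packet count, and pacing are placeholder assumptions for illustration only.

```python
# Hedged sketch: measure packet loss by sending sequence-numbered UDP datagrams
# while a disruptive event runs elsewhere. Host/port values are placeholders.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999   # stand-in for a path through the device under test
COUNT = 10_000

received = set()

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(2.0)          # stop a couple of seconds after traffic ends
    try:
        while True:
            data, _ = sock.recvfrom(64)
            received.add(int(data.decode()))
    except socket.timeout:
        pass

rx = threading.Thread(target=receiver)
rx.start()
time.sleep(0.1)                   # let the receiver bind before traffic starts

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(COUNT):
    tx.sendto(str(seq).encode(), (HOST, PORT))
    time.sleep(0.0001)            # pace the stream; the disruptive event runs meanwhile

rx.join()
lost = COUNT - len(received)
print(f"sent={COUNT} received={len(received)} lost={lost}")
```

A "zero service loss" result in this framing simply means `lost` stays at zero across every disruptive event you throw at the box.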

DCE, CEE and DCB. What is the difference?

In one word: NOTHING. They are all three-letter acronyms used to describe the same thing. All three describe an architectural collection of Ethernet extensions (based on open standards) designed to improve Ethernet networking and management in the Data Center. The Ethernet extensions are as follows:

- Priority-based Flow Control = P802.1Qbb
- Enhanced Transmission Selection = P802.1Qaz
- Congestion Notification = P802.1Qau
- Data Center Bridging Exchange Protocol = expected to leverage functionality provided by 802.1AB (LLDP)

Cisco has co-authored many of the standards referenced above and is focused on providing a standards-based solution for a Unified Fabric in the data center. The IEEE has decided to use the term "DCB" (Data Center Bridging) to describe these extensions to the industry. You can find additional information here: http://www.ieee802.org/1/pages/dcbridges.html

In summary, all three acronyms mean essentially the same thing "today". Cisco's DCE products and solutions are NOT proprietary and are based on open standards.
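If it helps to see the alias relationship spelled out, here is a tiny illustrative mapping (not an API, just a data structure) showing that DCE, CEE, and DCB all resolve to the same set of IEEE projects listed above.

```python
# Illustration only: the three marketing acronyms point at the same collection
# of IEEE Data Center Bridging extensions described in this post.
DCB_EXTENSIONS = {
    "Priority-based Flow Control": "IEEE P802.1Qbb",
    "Enhanced Transmission Selection": "IEEE P802.1Qaz",
    "Congestion Notification": "IEEE P802.1Qau",
    "Data Center Bridging Exchange (DCBX)": "builds on IEEE 802.1AB (LLDP)",
}

# DCE (Cisco), CEE, and DCB (IEEE) are aliases for the same architecture.
ALIASES = {name: DCB_EXTENSIONS for name in ("DCE", "CEE", "DCB")}

assert ALIASES["DCE"] is ALIASES["CEE"] is ALIASES["DCB"]
for feature, standard in DCB_EXTENSIONS.items():
    print(f"{feature}: {standard}")
```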

Outstanding Innovators Under 35: Sundar Iyer

An engineer on Cisco's data center team, Sundar Iyer, was recently recognized by Technology Review magazine as an outstanding innovator under the age of 35 for his work on scaling router performance, a topic he first began exploring while a doctoral student at Stanford University.

So what problem does Sundar's work address? With the growth in real-time applications such as voice, video, distributed computing, and gaming, there is a growing expectation from end users that their experiences with these applications will be high quality and high performing (e.g., high-definition images, great sound quality, no lag in conversations). Yet if the memory on a router (which is used to temporarily store the video, voice, and other data being transmitted) cannot support the speeds required to provide these quality voice/video experiences, then we cannot scale the performance of a router beyond 10-40 Gb/s, and the router's performance becomes very unpredictable. For example, on a 40 Gb/s link, data from a video stream can arrive approximately every 10 nanoseconds (roughly a million times faster than the fastest human reaction time), and commodity memory cannot be accessed at such speeds; a quick arithmetic check of that figure appears at the end of this post.

As an analogy, imagine the difference between police dispatcher A, who is juggling calls for many small robberies, and police dispatcher B, who has one call come in on a large bank heist. Clearly, dispatcher B will be able to give the more accurate estimate of when the police will arrive. Similarly, if one cannot predict when one's voice or video conferencing information gets dispatched by a router, the performance of these real-time applications will suffer. For example, the quality of a conversation degrades and becomes formal and stilted when the voice delay experienced is more than 150 milliseconds.

It does not appear that this problem will go away anytime soon; commodity memory is built for use in computers so that they can store more data, rather than to be accessed at very high speeds. And as networking speeds increase, this problem will become progressively worse.

Sundar's work has allowed fast routers to overcome this memory performance problem using commodity memory. As a consequence, the router can provide a high level of guarantee on the performance of end-user applications. For the next generation of Internet applications (such as an orchestra whose musicians are in remote locations, or a doctor who performs surgery remotely using robotic sensors), this is of paramount importance.

Expanding more upon this innovation in network memory technology, recently recognized by Technology Review (published by MIT), here are a few examples of how this work impacts us:

1. CNN.com [link: http://edition.cnn.com/2008/TECH/08/18/cyber.warfare/?imw=Y&iref=mpstoryemail] had an article earlier this week on the threat of cyber attacks. While most attacks have been focused on computers, servers, and router control infrastructure, router hardware is not immune from them: memory and interconnect (I/O) performance gaps can be exploited by a coordinated set of adversaries who may send packets in a specific order, or of a specific type, for which the hardware is inefficient. As an example, assume that an adversary can predict that a particular segment of memory will be used by the router, based on a specific pattern. The pattern can be repeated multiple times, causing the memory resource to be overwhelmed over time. Other similar inefficiencies may be exploited to further degrade router performance.
As an analogy, imagine a large retailer flooded with customers who clogged up the billing counters purchasing trivial items worth less than a few cents each. This would overwhelm the billing personnel and their systems. While this is hard to do in reality, on the Internet such performance attacks are easier to orchestrate. Network memory technology makes the memory and I/O 100% efficient, and so provides robust protection against cyber attacks that exploit such inefficiencies.

2. Ikea, the Swedish furniture retailer, is successful in shipping and delivering furniture (of varying sizes) by packing it efficiently. Customers can assemble it later when it finally reaches their home. Network memory technology performs a similar function on packets, packing and unpacking them as required. The advantage of efficiently transferring packets is that a router can be built with a smaller number of components and interfaces. Another advantage is power, which has become the number one issue in data centers (also see the related blog posting on power from Aug 12th, "Green as a Journey not a Destination"). If packets are transferred efficiently, the worst-case power consumed is reduced. Since it can easily take up to 2 watts of power to cool every watt of power consumed (due to potential cooling inefficiencies), this results in additional savings for the data center. Also, the capacity that a router can support (for a given area) can be increased, because the components and chips used to build the router are smaller and more efficient.
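Coming back to the roughly-10-nanosecond figure mentioned earlier, here is a small back-of-the-envelope check. The 50 ns DRAM random-access latency is an assumed ballpark for commodity parts of that era, not a number from the post; the point is simply that a single commodity memory access takes longer than the time between back-to-back small packets on a 40 Gb/s link.

```python
# Back-of-the-envelope check of the per-packet time budget at 40 Gb/s.
LINK_RATE_BPS = 40e9           # 40 Gb/s line rate
MIN_PACKET_BYTES = 64          # smallest Ethernet frame payload unit buffered
DRAM_RANDOM_ACCESS_NS = 50     # assumed commodity DRAM random-access latency

packet_time_ns = MIN_PACKET_BYTES * 8 / LINK_RATE_BPS * 1e9
print(f"time per minimum-size packet: {packet_time_ns:.1f} ns")    # ~12.8 ns
print(f"DRAM accesses possible in that window: "
      f"{packet_time_ns / DRAM_RANDOM_ACCESS_NS:.2f}")             # well under 1
```

With less than one random DRAM access available per packet, the memory either has to be organized far more cleverly (which is where the packing idea comes in) or the router cannot keep its performance guarantees.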

That Cloud Has a Chrome Lining

September 3, 2008 at 12:00 pm PST

Boy, who knew a simple open-source WebKit-based browser would cause so much angst. Google's release of the Chrome beta today caused all kinds of havoc in the blogosphere. The first article I happened to read was written by Heather Haverstein of ComputerWorld, who declared it a Windows killer and, like a number of folks, referred to Chrome as Google's OS. Hmmm… really? Well, actually, I think yeah, but I would not short MSFT just yet…

Getting Closer

Back in June I was chatting about how this year the server and the network will get closer to each other than ever before. We also said that every time networks evolve and get faster and more capable, two things happen:

-- Networks consolidate
-- Servers disaggregate

And lastly we said that application architectures are evolving: the SOA and cloud eras we are stepping boldly into are the most network-centric application development environments we have ever seen.

So now let's do a classic technology mash-up. What does this all mean? I am sitting here at our annual Global Sales Meeting with eight to ten thousand of Cisco's finest, and everyone keeps asking me about these topics. They also apparently read the same financial message boards I do and pedantically ask about our M&A strategies in this space as well, but we'll leave that discussion for another time. Here's what I think you will see happen:

1) There will be one network in the data center. It will connect all the servers and storage together as well as link the data center to the outside world.

2) The virtual machine will become the atomic unit du jour of the DC. Network equipment will morph to embrace the virtual port rather than the physical one.

3) As these VMs move, network technologies that enable larger, flatter, and more scalable broadcast domains will emerge. We have a few racks and maybe a row at a time addressed today; then we will go for multiple rows or pods, then a whole data center, then inter-site connectivity.

4) We will have to re-think how firewalls and load balancers are deployed, where they are deployed, the actual performance numbers needed, and how much state needs to be maintained. I would imagine an architectural shift from a monolithic, box-based model to a federated model may emerge, and the capabilities may become more ingrained into some of the hardware platforms as well as extend the software logic into the hypervisor.

5) There will be strong integration between the hypervisor and the network, allowing for increased transparency into the operating characteristics of a VM and enabling policy portability from one physical machine to the next in a dynamically scheduled environment (a rough sketch of this idea follows at the end of this post).

6) It may be a stretch, but I think within some reasonable 1-2m distance, RAM may be able to be networked at a reasonable speed and access rate for many applications, though by no means all.

7) If that's the case, the role of the hypervisor gets very, very interesting: imagine a data center with one network, connecting all resources, that understands the virtual machine and enables VM mobility. Then imagine racks of servers with central pools of RAM, and centralized storage systems that are synchronously replicated between multiple facilities. Now the hypervisor is gathering pools of resources, abstracting the physical manifestations of workload processing resources, and presenting them to the guest OSs on an as-needed, true on-demand model.

Data centers are a lot like Oreo cookies or Reese's Peanut Butter Cups. From the outside, an Oreo looks more or less like any other cookie to the untrained eye. Bite into it, though, and there is "something special" in the middle that differentiates it, makes it unique among other cookies. The network and the hypervisor will get closer together, and that is some of the secret sauce inside the cookie, so to speak, that makes data centers unique.
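For the policy-portability idea in prediction 5, here is a rough sketch of the concept under some stated assumptions: the `PortProfile`, `VirtualMachine`, and `Host` names are invented for illustration and are not an NX-OS or hypervisor API. The point is simply that when network policy is bound to the VM rather than to a physical switch port, the policy can move with the VM during migration.

```python
# Hedged sketch: network policy that travels with the VM instead of the port.
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    vlan: int
    qos_class: str
    acl: list[str] = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    profile: PortProfile          # policy is bound to the VM, not a physical port

@dataclass
class Host:
    name: str
    vms: list[VirtualMachine] = field(default_factory=list)

def migrate(vm: VirtualMachine, src: Host, dst: Host) -> None:
    """Move a VM between hosts; its profile moves with it, so the virtual port
    on the destination can be programmed without manual re-configuration."""
    src.vms.remove(vm)
    dst.vms.append(vm)
    print(f"{vm.name} -> {dst.name}: applying VLAN {vm.profile.vlan}, "
          f"QoS {vm.profile.qos_class}, ACLs {vm.profile.acl}")

web = VirtualMachine("web01", PortProfile(vlan=10, qos_class="gold",
                                          acl=["permit tcp any any eq 80"]))
a, b = Host("esx-a", [web]), Host("esx-b")
migrate(web, a, b)
```

In a dynamically scheduled environment, that binding is what lets the scheduler move workloads freely without the network becoming the thing that pins a VM to a rack.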