Cisco Blogs



Outstanding Innovators Under 35: Sundar Iyer

An engineer on Cisco’s data center team, Sundar Iyer, was recently recognized by Technology Review magazine as an outstanding innovator under the age of 35 for his work on scaling router performance, a topic he first began exploring as a doctoral student at Stanford University.

So what problem does Sundar’s work address? With the growth of real-time applications such as voice, video, distributed computing, and gaming, end users increasingly expect these applications to be high quality and high performing (e.g., high-definition images, great sound quality, no lag in conversations). Yet if the memory on a router (which is used to temporarily store the video, voice, and other data being transmitted) cannot support the speeds these applications require, then router performance cannot scale beyond 10-40 Gb/s, and it becomes very unpredictable. On a 40 Gb/s link, for example, a unit of video data can arrive roughly every 10 nanoseconds (about a million times faster than the fastest human reaction time), and commodity memory cannot be accessed at such speeds.

As an analogy, imagine the difference between police dispatcher A, who is juggling calls for many small robberies, and police dispatcher B, who has a single call come in on a large bank heist. Clearly, dispatcher B can give a far more accurate estimate of when the police will arrive. Similarly, if one cannot predict when one’s voice or video conferencing data gets dispatched by a router, the performance of these real-time applications will suffer. For example, the quality of a conversation degrades and becomes formal and stilted when the voice delay exceeds 150 milliseconds.

This problem is not going away anytime soon: commodity memory is built for use in computers, optimized to store more data rather than to be accessed at very high speeds. As networking speeds increase, the problem will become progressively worse.

Sundar’s work allows fast routers to overcome this memory performance problem while still using commodity memory. As a consequence, a router can provide a high level of guarantee on the performance of end-user applications. For the next generation of Internet applications (such as an orchestra whose musicians are in remote locations, or a doctor who performs surgery remotely using robotic sensors), this is of paramount importance.
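To make the numbers concrete, here is a rough back-of-envelope sketch in Python. The 40 Gb/s line rate and the roughly 10 ns arrival interval come from the post; the 40-byte minimum packet size and the ~50 ns DRAM random-access time are outside assumptions used purely for illustration.

```python
# Back-of-envelope: why commodity DRAM struggles at 40 Gb/s line rate.
# Assumptions (not from the post): 40-byte minimum packets and a ~50 ns
# DRAM random-access time. Each buffered packet needs one write and one read.

LINE_RATE_BPS = 40e9     # 40 Gb/s link
MIN_PKT_BYTES = 40       # assumed minimum packet size
DRAM_ACCESS_NS = 50      # assumed DRAM random-access latency

arrival_ns = MIN_PKT_BYTES * 8 / LINE_RATE_BPS * 1e9
print(f"Minimum packet arrival interval: {arrival_ns:.0f} ns")  # ~8 ns

# One write plus one read per packet means one memory access roughly
# every arrival_ns / 2 nanoseconds.
required_access_ns = arrival_ns / 2
shortfall = DRAM_ACCESS_NS / required_access_ns
print(f"DRAM is ~{shortfall:.0f}x too slow; roughly that many "
      f"interleaved banks would be needed just to keep pace")
```

Even this crude arithmetic shows the gap is an order of magnitude, which is why simply buying faster commodity parts does not solve the problem.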
Expanding on this innovation in network memory technology, recently recognized by Technology Review (published by MIT), here are a few examples of how this work affects us:

1. CNN.com (http://edition.cnn.com/2008/TECH/08/18/cyber.warfare/?imw=Y&iref=mpstoryemail) ran an article earlier this week on the threat of cyber attacks. While most attacks have focused on computers, servers, and router control infrastructure, router hardware is not immune: memory and interconnect (I/O) performance gaps can be exploited by a coordinated set of adversaries who send packets in a specific order, or of a specific type, for which the hardware is inefficient. For example, suppose an adversary can predict, based on a specific pattern, that a particular segment of memory will be used by the router. The pattern can be repeated many times, overwhelming that memory resource over time, and other similar inefficiencies can be exploited to degrade router performance further. As an analogy, imagine a large retailer flooded with customers who clog the checkout counters buying trivial items worth a few cents each, overwhelming the billing personnel and their systems. While this is hard to do in the physical world, such performance attacks are much easier to orchestrate on the Internet. Network memory technology makes the memory and I/O 100% efficient, providing robust protection against cyber attacks that exploit such inefficiencies.

2. Ikea, the Swedish furniture retailer, succeeds in shipping and delivering furniture of varying sizes by packing it efficiently; customers assemble the pieces once they finally get them home. Network memory technology performs a similar function on packets, packing and unpacking them as required (a toy sketch of the idea follows below). One advantage of transferring packets efficiently is that a router can be built with fewer components and interfaces. Another is power, which has become the number one issue in data centers (see also the related Aug 12th blog posting on power, "Green as a Journey not a Destination"). If packets are transferred efficiently, the worst-case power consumed is reduced, and since it can easily take up to 2 watts of cooling for every watt consumed (due to potential cooling inefficiencies), each watt saved at the components can save up to three watts overall for the data center. Finally, the capacity a router can support in a given area increases, because the components and chips used to build it are smaller and more efficient.
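To illustrate the Ikea analogy, here is a minimal, hypothetical sketch of packing variable-size packets into fixed-width memory words before a wide memory write, and unpacking them afterward. It is a simplification for intuition only; real network memory systems involve far more (segmentation state, ordering, per-queue accounting), and nothing here reflects an actual product implementation.

```python
# Toy illustration of the "Ikea" idea: pack variable-size packets into
# fixed-width memory words so that every (slow, wide) memory access is
# fully used. Purely illustrative.

WORD_BYTES = 64  # assumed width of one memory word

def pack(packets: list[bytes]) -> list[bytes]:
    """Concatenate packets (with 2-byte length headers) into full words."""
    stream = b"".join(len(p).to_bytes(2, "big") + p for p in packets)
    pad = (-len(stream)) % WORD_BYTES      # zero-pad the final word
    stream += b"\x00" * pad
    return [stream[i:i + WORD_BYTES] for i in range(0, len(stream), WORD_BYTES)]

def unpack(words: list[bytes]) -> list[bytes]:
    """Recover the original packets from the packed memory words."""
    stream = b"".join(words)
    packets, i = [], 0
    while i + 2 <= len(stream):
        n = int.from_bytes(stream[i:i + 2], "big")
        if n == 0:                         # reached the zero padding
            break
        packets.append(stream[i + 2:i + 2 + n])
        i += 2 + n
    return packets

pkts = [b"A" * 40, b"B" * 300, b"C" * 9]   # mixed packet sizes
words = pack(pkts)
assert unpack(words) == pkts
print(f"{len(pkts)} packets -> {len(words)} full {WORD_BYTES}-byte words")
```

The payoff of this kind of packing is exactly the one described above: no memory access is wasted on a partially filled word, so fewer and narrower memory interfaces can sustain the same line rate.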

Getting Closer

Back in June I was chatting about how this year the server and the network will get closer to each other than ever before. We also said that every time networks evolve and get faster and more capable, two things happen:

-- Networks consolidate
-- Servers disaggregate

And lastly we said that application architectures are evolving -- that the SOA and Cloud eras we are stepping boldly into are the most network-centric application development environments we have ever seen.

So now let’s do a classic technology mash-up. What does this all mean? I am sitting here at our annual Global Sales Meeting with eight to ten thousand of Cisco’s Finest, and everyone keeps asking me about these topics. They also apparently read the same financial message boards I do and pedantically ask about our M&A strategies in this space as well, but we’ll leave that discussion for another time. Here’s what I think you will see happen:

1) There will be one network in the data center. It will connect all the servers and storage together as well as link the data center to the outside world.

2) The virtual machine will become the atomic unit du jour of the DC. Network equipment will morph to embrace the virtual port rather than the physical one.

3) As these VMs move, network technologies that enable larger, flatter, and more scalable broadcast domains will emerge. We have a few racks and maybe a row at a time addressed today -- then we will go for multiple rows or pods, then a whole data center, then inter-site connectivity.

4) We will have to re-think how firewalls and load balancers are deployed, where they are deployed, the actual performance numbers needed, and how much state needs to be maintained. I would imagine an architectural shift from a monolithic, box-based model to a federated one may emerge, with these capabilities becoming more ingrained into some of the hardware platforms and the software logic extending into the hypervisor.

5) There will be a strong integration between the hypervisor and the network, allowing for increased transparency into the operating characteristics of a VM and enabling policy portability from one physical machine to the next in a dynamically scheduled environment (see the sketch after this list).

6) It may be a stretch, but I think within some reasonable 1-2m distance, RAM may be able to be networked at a reasonable speed and access rate for many applications, though by no means all.

7) If that’s the case, the role of the hypervisor gets very, very interesting -- imagine a data center with one network, connecting all resources, that understands the virtual machine and enables VM mobility. Then imagine racks of servers with central pools of RAM, and centralized storage systems that are synchronously replicated between multiple facilities. Now the hypervisor is gathering pools of resources, abstracting the physical manifestations of workload-processing resources, and presenting them to the guest OSs on an as-needed, true on-demand model.

Data centers are a lot like Oreo cookies or Reese’s Peanut Butter Cups. From the outside, an Oreo looks more or less like any other cookie to the untrained eye. Bite into it, though, and there is ‘something special’ in the middle that differentiates it, makes it unique amongst other cookies. The network and the hypervisor will get closer together -- and that is some of the secret sauce inside the cookie, so to speak, that makes data centers unique.
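As a minimal sketch of what the policy portability in point 5 could look like, here is a hypothetical Python model in which the network profile is bound to the virtual machine’s port rather than to a physical switch port, so it follows the VM through a live migration. Every class and field name here is invented for illustration; this is not any actual Cisco or hypervisor API.

```python
# Hypothetical model of "policy follows the VM": the network profile
# (VLAN, QoS, ACLs) lives on the virtual port, not the physical one,
# so a live migration carries it to the destination host automatically.

from dataclasses import dataclass, field

@dataclass
class PortProfile:
    vlan: int
    qos_class: str
    acls: list[str] = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    profile: PortProfile        # policy travels with the VM

@dataclass
class Host:
    name: str
    vms: dict[str, VirtualMachine] = field(default_factory=dict)

    def attach(self, vm: VirtualMachine) -> None:
        # In a real system, this is where the host's virtual switch would
        # program VLAN membership, QoS marking, and ACLs for the vport.
        self.vms[vm.name] = vm
        print(f"{self.name}: programmed vport for {vm.name} "
              f"(VLAN {vm.profile.vlan}, QoS {vm.profile.qos_class})")

def live_migrate(vm_name: str, src: Host, dst: Host) -> None:
    vm = src.vms.pop(vm_name)   # the policy moves because it lives on the VM
    dst.attach(vm)

web = VirtualMachine("web01", PortProfile(vlan=100, qos_class="gold"))
a, b = Host("host-a"), Host("host-b")
a.attach(web)
live_migrate("web01", a, b)     # the same VLAN/QoS appear on host-b
```

The design point worth noticing is the ownership flip: once the profile is a property of the VM, the per-host configuration work in point 4 and 5 becomes automatic rather than an operational chore.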

That Cloud Has a Chrome Lining

September 3, 2008 at 12:00 pm PST

Boy, who knew a simple open-source WebKit-based browser would cause so much angst. Google’s release of the Chrome beta today caused all kinds of havoc in the blogosphere. The first article I happened to read, by Heather Havenstein of Computerworld, declared it a Windows killer, and like a number of folks she referred to Chrome as Google’s OS. Hmmm… really? Well, actually, I think yeah, but I would not short MSFT just yet….

Getting Server Virtualization from Here to There

September 2, 2008 at 12:00 pm PST

So I have had the opportunity over the last few weeks to talk to a number of folks about server virtualization -- likes, dislikes, where it’s going. Depending on your choice of market research or anecdote, it seems that the share of virtualized production x86 servers is in the neighborhood of 10%. Again, depending on your favorite flavor of Kool-Aid, the expectation is for this to jump to 40-60% in the next 2-3 years. Even the most conservative scenario shows a significant increase in server virtualization.

The question I have -- and I’d love to hear from folks who have deployed some form of server virtualization -- is what needs to happen in your data center to make the jump from 10% to the aforementioned 40-60% range. Do you see any inhibitors to having more of your servers virtualized?

One of the areas where we see challenges is ensuring that network and storage services follow virtual machines as they undergo live migration. More than one customer has noted the challenges of using VMotion or DRS in a VMware environment. There are certainly workarounds, but they are often operationally burdensome, and what works when 10% of your production servers are virtualized may not be tenable when half of them are. Because server virtualization by its nature introduces a degree of abstraction, I have also heard a number of concerns around areas like troubleshooting and regulatory compliance.

As always, the industry is evolving to meet these new challenges. For example, one of the advantages of unified fabric is the ability to deliver a consistent set of network and storage services to all the attached servers in the data center, which simplifies live migration to some degree. However, there still seem to be gaps. Some gaps might be technological, while others might be more along the lines of evolving the org structure to deal more effectively with shared, virtualized infrastructure.

Anyway, what do you think -- what needs to happen to drive a higher rate of virtualization for production servers?
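To make the "services must follow the VM" problem concrete, here is a small hypothetical sketch of a pre-migration check: before moving a VM, verify that the destination host already offers the VLANs and storage it depends on. The checks and names are invented for illustration; real tooling (VMotion compatibility checks, for instance) is far more thorough.

```python
# Hypothetical pre-migration check: a live migration only preserves the
# VM's connectivity if the destination host already offers the same
# network (VLANs) and storage (datastore) services. Names are invented.

from dataclasses import dataclass

@dataclass
class HostServices:
    name: str
    vlans: set[int]
    datastores: set[str]

@dataclass
class VMRequirements:
    name: str
    vlans: set[int]
    datastore: str

def migration_blockers(vm: VMRequirements, dst: HostServices) -> list[str]:
    """Return the list of services the destination host is missing."""
    blockers = []
    missing_vlans = vm.vlans - dst.vlans
    if missing_vlans:
        blockers.append(f"VLANs not trunked to {dst.name}: {sorted(missing_vlans)}")
    if vm.datastore not in dst.datastores:
        blockers.append(f"datastore '{vm.datastore}' not visible on {dst.name}")
    return blockers

vm = VMRequirements("app01", vlans={100, 200}, datastore="san-lun-7")
dst = HostServices("host-b", vlans={100}, datastores={"san-lun-7"})

issues = migration_blockers(vm, dst)
print("OK to migrate" if not issues else "Blocked: " + "; ".join(issues))
# -> Blocked: VLANs not trunked to host-b: [200]
```

Doing this by hand for every host pair is exactly the operational burden described above; a unified fabric shrinks the problem by making the set of services identical everywhere.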

Building Clouds with Network Equipment

An interesting write-up from Greg Ferro at ‘Ethereal Mind’ on how a lot of what we do as a company pulls together to create cloud-computing-style infrastructures for our customers. What thoughts do you all have on this -- does it make sense, or is it way off base?

You may also want to read Building the Cloud with Cisco Unified Computing System, EMC and VMware.
