Quentin Hardy at Forbes gets it pretty right today with an article on Cisco in the data center. As the network evolves into more of a platform, and applications evolve toward SOA and cloud architectures -- i.e. the most network-centric application architectures ever -- the network plays an increasing role in the data center. I have to give Quentin some grief for not quoting me (darn!), but he got John Chambers and McCool and Prem, so we're well covered.

One or two factual things: our new data center is in Richardson, Texas, not RTP, primarily for the redundant power grids available to us there. It has about a 10MW critical load. We are also expanding WebEx data centers to ensure global service delivery as we pass the 7M minutes a month mark on that SaaS offering. Our Catalyst switching line is the one Quentin is referring to that is around $10B a year, growing nicely in the data center too, even as we introduce the Nexus line, which is purpose-built for the data center.

And Quentin, thanks for the grief on naming! Got some ideas for me there? Nexus came to me in the shower one morning.... (bad joke) but yeah, at a certain point we regress to boring numbers. Are we as bad as HP with the BL495c? And lastly, the NX-OS operating system was internally developed by Cisco engineers and stemmed in part from our acquisition of Procket Networks engineering assets. NX-OS is now on the Nexus 7000 and Nexus 5000, and was used as the core for our ACE system development. Next week we expand it a bit into some other products, as well as an interesting new area.

The thing I think Quentin nails, though, is that the application architecture is coming to the network.
The next evolution of what we created with the global connectivity offered by the Internet (I will always use the Big I) is these large data center and cloud computing infrastructures. This is an area we have focused on for a very long time, and it is now coming into the light of day and into its own during a period of economic turbulence, technology disruption, and market transition. What better time to broaden your focus, build systems and solutions, and purpose-build products for key market opportunities? There are some markets, stagnant in innovation, where the systems vendors have sold out their innovation and creativity to other companies and silicon providers -- markets that are ripe for disruption and for innovation that captures the opportunity. It's going to be one very, very fun year. Speaking of this: check out what we are introducing on Tuesday. In the words of the famous anchorman Ron Burgundy, "it's kind of a big deal."
So I have been an auto enthusiast forever. For much of that time, I have been an adherent to the mantra "there is no replacement for displacement" (sorry, Rob). What has changed over the years, however, is that car buying has evolved beyond looking for the biggest engine and lowest 0-60 times I could afford. Don't get me wrong, I still optioned the larger engine in my last two rides (sorry again, Rob), but I finally figured out that other things are as, if not more, important. Opportunities to open the throttle all the way are thrilling, but limited, and these days I actually have more fun hunting for tasty switchbacks on the backroads between my home and San Jose. Simply put, the daily aspects of my current ride have come to define my overall experience with it.

Back in February, I talked a bit about what makes the Nexus 7000 better than anything else on the market: the value of the switch does not hinge on how fast it goes, but on things that really matter to folks who have to live with these switches on a daily basis. One of these areas is the growing intolerance for system downtime. The current trends around consolidation and virtualization demand a different mindset and different expectations of infrastructure. While there are indisputable benefits to consolidation and virtualization, the flip side is that the size of a failure domain grows proportionally with the level of consolidation and virtualization. Every part of the data center needs to meet this higher bar. For example, VMware has its HA and DRS solutions. On the network side, one of the things we offer is a Zero Service Loss architecture on the Nexus 7000. Network World just put the Nexus 7000 through its paces and scored it 5/5 for availability. With all 256 10GbE ports forwarding traffic, the testers killed the OSPF process, upgraded and downgraded the software, and finally pulled 4 of the 5 fabric modules from the switch.
In all cases, the switch did not drop a packet. In fact, in a similar example, you can see how NX-OS and the Nexus 7000 handle things when you kill spanning tree while the switch is serving as the root bridge. I will have some other cool stuff to talk about in the next couple of weeks, but in the interim, we have an At-A-Glance and a whitepaper that dig into this a little further.
In one word, NOTHING. They are all three-letter acronyms used to describe the same thing. All three of these acronyms describe an architectural collection of Ethernet extensions (based on open standards) designed to improve Ethernet networking and management in the data center. The Ethernet extensions are as follows:

- Priority-Based Flow Control = P802.1Qbb
- Enhanced Transmission Selection = P802.1Qaz
- Congestion Notification = P802.1Qau
- Data Center Bridging Exchange Protocol = expected to leverage functionality provided by 802.1AB (LLDP)

Cisco has co-authored many of the standards referenced above and is focused on providing a standards-based solution for a Unified Fabric in the data center. The IEEE has decided to use the term "DCB" (Data Center Bridging) to describe these extensions to the industry. You can find additional information here: http://www.ieee802.org/1/pages/dcbridges.html

In summary, all three acronyms mean essentially the same thing "today". Cisco's DCE products and solutions are NOT proprietary and are based on open standards.
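To make one of these extensions concrete: a Priority-Based Flow Control (P802.1Qbb) frame extends the classic 802.3x PAUSE with a per-priority enable vector and eight timer fields measured in 512-bit-time quanta, so a single congested class can be paused without stopping the whole link. Here is a minimal sketch of building such a frame in Python; the field layout follows the draft, but the source MAC and the example values are hypothetical:

```python
import struct

def build_pfc_frame(pause_quanta):
    """Build a minimal Priority-based Flow Control (P802.1Qbb) MAC control frame.

    pause_quanta: dict mapping priority (0-7) -> pause time in 512-bit quanta.
    Priorities absent from the dict are left unpaused (enable bit 0, time 0).
    """
    dst = bytes.fromhex("0180c2000001")    # MAC control multicast address
    src = bytes.fromhex("000000000000")    # placeholder source MAC (assumption)
    ethertype = struct.pack("!H", 0x8808)  # MAC control EtherType
    opcode = struct.pack("!H", 0x0101)     # PFC opcode (plain 802.3x PAUSE is 0x0001)

    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio         # set the per-priority enable bit
        times[prio] = quanta

    payload = struct.pack("!H8H", enable_vector, *times)
    return dst + src + ethertype + opcode + payload

# Pause only priority 3 (e.g. a storage class) for the maximum 0xFFFF quanta,
# while traffic in every other priority keeps flowing.
frame = build_pfc_frame({3: 0xFFFF})
```

The point of the example is the contrast with 802.3x: the old PAUSE frame has a single timer that stops everything, while the PFC frame carries one timer per priority.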
An engineer on Cisco's data center team, Sundar Iyer, was recently recognized by Technology Review magazine as an outstanding innovator under the age of 35 for his work on scaling router performance, a topic he first began exploring as a doctoral student at Stanford University. So what problem does Sundar's work address? With the growth in real-time applications such as voice, video, distributed computing, and gaming, end users increasingly expect these applications to be high quality and high performing (e.g., high-definition images, great sound quality, no lag in conversations). Yet if the memory on a router (which is used to temporarily store the video, voice, and other data being transmitted) cannot support the speeds required to provide these quality voice/video experiences, then we cannot scale the performance of a router beyond 10-40 Gb/s, and the router's performance becomes very unpredictable. For example, on a 40 Gb/s link a packet of video information can arrive approximately every 10 nanoseconds (roughly a million times faster than the fastest human reaction time), and commodity memory cannot be accessed at such fast speeds. As an analogy, imagine the difference between police dispatcher A, who is juggling calls for many small robberies, and police dispatcher B, who has a single call come in on a large bank heist. Clearly, police dispatcher B will be able to give a more accurate estimate of when the police will arrive. Similarly, if one cannot predict when one's voice or video conferencing information gets dispatched by a router, the performance of these real-time applications will suffer.
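The back-of-the-envelope arithmetic behind that arrival rate is worth seeing once. A quick sketch, assuming 40-byte minimum-size packets (a common worst case in this literature) and a ballpark 50 ns random-access time for commodity DRAM:

```python
LINK_RATE_BPS = 40e9      # 40 Gb/s link
MIN_PACKET_BYTES = 40     # minimum-size IP packet (worst-case assumption)

# Time to receive one minimum-size packet: 320 bits / 40e9 b/s = 8 ns,
# i.e. the "approximately every 10 nanoseconds" figure above.
arrival_interval_ns = (MIN_PACKET_BYTES * 8) / LINK_RATE_BPS * 1e9

# A commodity DRAM random access takes on the order of 50 ns (assumed
# ballpark), so a single memory bank falls behind by several packets
# for every access it completes.
DRAM_ACCESS_NS = 50
shortfall = DRAM_ACCESS_NS / arrival_interval_ns  # ~6x too slow
```

This gap between packet arrival time and memory access time is exactly the mismatch the work addresses: the memory is several times too slow for back-to-back worst-case arrivals, and the gap widens as link speeds grow.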
For example, the quality of a conversation degrades and becomes formal and stilted when the voice delay experienced is more than 150 milliseconds. It does not appear that this problem will go away anytime soon; commodity memory is built for use in computers, where the goal is to store more data rather than to be accessed at very high speeds. And as networking speeds increase, this problem will become progressively worse. Sundar's work has allowed fast routers to overcome this memory performance problem using commodity memory. As a consequence, the router can provide a high level of guarantee on the performance of end-user applications. For the next generation of Internet applications (such as an orchestra whose musicians are in remote locations, or a doctor who performs surgery remotely using robotic sensors), this is of paramount importance.

Expanding more upon the innovation in network memory technology recently recognized by Technology Review (published by MIT), here are a few examples of how this work impacts us:

1. CNN.com had an article earlier this week (http://edition.cnn.com/2008/TECH/08/18/cyber.warfare/?imw=Y&iref=mpstoryemail) on the threat of cyber attacks. While most attacks have been focused on computers, servers, and router control infrastructure, router hardware is not immune from them: memory and interconnect (I/O) performance gaps can be exploited by a coordinated set of adversaries who may send packets in a specific order, or of a specific type, for which the hardware is inefficient. As an example, assume that an adversary can predict that a particular segment of memory will be used by the router, based on a specific pattern. The pattern can be repeated multiple times, causing that memory resource to be overwhelmed over time. Other similar inefficiencies may be exploited to further degrade router performance.
As an analogy, imagine a large retailer flooded with customers who clogged up the checkout counters by purchasing trivial items worth less than a few cents each. This would overwhelm the billing personnel and their systems. While this is hard to do in reality, on the Internet such performance attacks are easier to orchestrate. Network memory technology makes the memory and I/O 100% efficient, providing robust protection against cyber attacks that exploit such inefficiencies.

2. Ikea, the Swedish furniture retailer, is successful in shipping and delivering furniture (of varying sizes) by packing it efficiently; customers assemble it later, once it finally reaches their homes. Network memory technology performs a similar function on packets, packing and unpacking them as required. The advantage of efficiently transferring packets is that a router can be built with a smaller number of components and interfaces. Another advantage is power, which has become the number one issue in data centers (also see a related blog posting on power from Aug 12th, "Green as a Journey not a Destination"). If packets are transferred efficiently, the worst-case power consumed is reduced. Since it can easily take up to 2 watts of power to cool every watt of power consumed (due to potential cooling inefficiencies), this results in additional savings for the data center. Also, the capacity that a router can support (for a given area) can be increased, because the components and chips used to build the router are smaller and more efficient.
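The Ikea analogy can be sketched in a few lines: pack variable-size packets back-to-back into fixed-width memory words, so every memory access moves a full word instead of a mostly empty one. This is a simplified illustration only; the 64-byte word width and the packet sizes are assumptions, not the actual design:

```python
WORD_BYTES = 64   # assumed memory access width

def pack_packets(packets, word_bytes=WORD_BYTES):
    """Pack variable-size packets back-to-back into fixed-size memory words."""
    stream = b"".join(packets)
    pad = (-len(stream)) % word_bytes          # pad so the last word is full
    stream += b"\x00" * pad
    return [stream[i:i + word_bytes] for i in range(0, len(stream), word_bytes)]

packets = [b"A" * 40, b"B" * 100, b"C" * 9]    # hypothetical packet sizes

# Naive scheme: round each packet up to a word boundary on its own,
# wasting the unused tail of every word.
naive_words = sum((len(p) + WORD_BYTES - 1) // WORD_BYTES for p in packets)

# Packed scheme: 149 bytes of payload fit in 3 full words instead of 4.
packed_words = len(pack_packets(packets))
```

Fewer memory words per packet stream is what translates into fewer components and interfaces, and into the worst-case power reduction described above.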
Boy, who knew a simple open-source WebKit-based browser would cause so much angst? Google's release of the Chrome beta today caused all kinds of havoc in the blogosphere. The first article I happened to read was written by Heather Havenstein of Computerworld, who declared it a Windows killer and, like a number of folks, referred to Chrome as Google's OS. Hmmm... really? Well, actually, I think yeah, but I would not short MSFT just yet....