I put together this teaser video showing our approach to modeling highly efficient, highly dense data center zones (aka pods or clusters). Simply defined, a data center zone is a physical construct built to support any number of logical dependencies or attributes. For example, you might tell us you want 99.999% availability, 80% electrical efficiency, a cooling burden of 1.2, FCoE, 75% IT asset utilization using Unified Computing, and so on. How we go about aligning these attributes and dependencies is critical to mitigating risk and ensuring high operational efficiency.

In my new role as principal energy solutions architect on our Data Center Advanced Services team, I am using Google SketchUp extensively as a means of modeling new physical designs. We do this not just for the IT infrastructure and racks but for the full Mechanical, Electrical and Plumbing (MEP) designs that support IT. All of this infrastructure can also be tied into Cisco EnergyWise to provide common monitoring of energy use (phase 1, today), control of IT assets (phase 2), and control of facilities infrastructure (phase 3).

I am hoping that our users will provide some feedback on…
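To make the zone idea a bit more concrete, here is a rough sketch of what a zone's target attributes might look like as data, with one plausible back-of-the-envelope derivation of an overall PUE figure. This is purely illustrative: the `ZoneSpec` class and the PUE formula (cooling burden divided by electrical efficiency) are my assumptions, not Cisco's actual model or an EnergyWise API.

```python
from dataclasses import dataclass

@dataclass
class ZoneSpec:
    """Illustrative bundle of target attributes for a data center zone."""
    availability: float           # e.g. 0.99999 ("five nines")
    electrical_efficiency: float  # fraction of input power reaching IT gear
    cooling_burden: float         # (IT + cooling power) / IT power
    it_utilization: float         # target utilization of IT assets

    def approximate_pue(self) -> float:
        # One plausible reading of the numbers in the post: total cooling
        # overhead divided by electrical distribution efficiency gives a
        # rough facility-level Power Usage Effectiveness.
        return self.cooling_burden / self.electrical_efficiency

zone = ZoneSpec(availability=0.99999, electrical_efficiency=0.80,
                cooling_burden=1.2, it_utilization=0.75)
print(round(zone.approximate_pue(), 2))  # → 1.5
```

Under this (assumed) decomposition, the example zone above lands at a PUE of roughly 1.5, which is why aligning the attributes with each other matters: tightening one target without the others buys you little.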
While you regularly hear from Cisco IT in this space, courtesy of Sidney and Doug, I thought you folks might want to get some other perspectives. Here is a series of podcasts from Intel IT where they share their thoughts on virtualization and the dynamic data center.
Cisco Unified Computing System (UCS) just won the “Best Data Center Innovation” award at the BladeSystems Insight 2009 event. Details will be available here soon.

BladeSystems Insight is an executive summit for data center blade server technologies that took place April 19-21, 2009 in Las Vegas before an audience of over 150 hosted end-user executives, vendor sponsors, key industry analysts such as Forrester and IDC, association leaders, and others. The award was based on voting by attending IT executives near the end of the event, after attendees had had the opportunity to experience and evaluate the full range of companies, products and presentations.
As mentioned on the IPTV broadcast yesterday, the preliminary benchmarking for our new blade servers for the Unified Computing System is pretty darn good: something along the lines of 164% faster than previous-generation Intel-based two-socket systems. I think this not only makes a clear case for upgrading to the Intel Xeon 5500 Series processor but, the same way you would not put a Ferrari engine in a Cavalier, you also want to upgrade to a system that is designed to take advantage of that kind of performance, not just retrofitted to deal with it. Here is a rundown of our preliminary results for some key industry benchmarks that cover a variety of workloads:
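A quick note on how to read the headline number above: “164% faster” is a relative improvement, so it corresponds to 2.64x the throughput of the older system, not 1.64x. A two-line sanity check (the normalized score of 100 is just a placeholder):

```python
# "164% faster" means the new system does 2.64x the work of the old
# one in the same time. The baseline score here is arbitrary.
old_score = 100.0                     # normalized previous-gen score
new_score = old_score * (1 + 1.64)    # 164% faster than baseline
print(round(new_score / old_score, 2))  # → 2.64
```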
So, before we dig into CEE (Converged Enhanced Ethernet), I have a quick quiz for you: take a look at the two pictures below and make note of the differences:
Ready? OK, back to the topic at hand… So, one of our competitors marked their entry into the realm of Ethernet switching with an FCoE-capable switch. I honestly thought that was kinda cool, since their actions continue to validate a vision, Data Center 3.0, that we laid out almost two years ago, and a unified fabric strategy we laid out a year ago. During their launch, however, the company made a curious pronouncement: said newly announced switch was the “industry’s only end-to-end Fibre Channel over Ethernet (FCoE)-based solution that brings the Fibre Channel (FC) standard and Converged Enhanced Ethernet (CEE) together.”

This had me scratching my head a bit, since we announced the Nexus 5000 a year ago, with a fine collection of ecosystem partners, and have customers with the solution in production already. Perhaps there is something magical in CEE that I missed? Well, if you read IBM’s Redbook paper on FCoE and CEE, you will see that CEE looks remarkably similar to our discussion of the elements of Data Center Ethernet (DCE). The reality is that neither CEE nor DCE is a standard; both are marketing shorthand for the same half dozen extensions to the Ethernet standards that are in the process of being finalized and published. We collectively came up with constructs like DCE and CEE because “IEEE 802.1Qaz” and its brethren are somewhat awkward to work into conversation. Thankfully, this naming dichotomy is going to be short-lived. As the standards move toward finalization, you will either see adoption of the formal name, such as Data Center Bridging (DCB), or they will simply be folded into the term “Ethernet,” as has happened in the past.

Oh, and as for the ducks: much like DCE and CEE, they are the same.
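For readers who want to see what the unified fabric side of this looks like in practice, here is a minimal sketch of enabling FCoE on a Nexus 5000 in NX-OS. The VLAN, VSAN, and interface numbers are illustrative placeholders, and exact syntax varies by NX-OS release, so treat this as a shape-of-the-config example rather than a copy-paste recipe:

```text
feature fcoe

vlan 100
  fcoe vsan 100            ! map an FCoE VLAN to a VSAN

interface ethernet 1/1
  switchport mode trunk
  switchport trunk allowed vlan 100

interface vfc 1
  bind interface ethernet 1/1   ! virtual FC interface rides the 10GE port
  no shutdown

vsan database
  vsan 100 interface vfc 1
```

The point of the example: the same physical 10 Gigabit Ethernet port carries both LAN traffic and, via the virtual Fibre Channel interface, the storage traffic, which is the consolidation the DCB/CEE/DCE extensions exist to make lossless.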