Anwar Ghuloum recently posted to the Intel Research blog about the conversations they are having with developers around developing for multi-core and terascale environments. To quote Anwar: “Ultimately, the advice I’ll offer is that these developers should start thinking about tens, hundreds, and thousands of cores now in their algorithmic development and deployment pipeline.”
There has been some concern that Cisco’s efforts around FCoE and Data Center Ethernet are proprietary implementations and not supported by the industry. Nothing could be further from the truth. The fact is that FCoE and Data Center Ethernet enjoy support from a variety of vendors and are being adopted by many standards bodies, including IEEE and INCITS T11.

Many vendors, including Intel, Emulex, and QLogic, have already announced FCoE and Data Center Ethernet products, and many more have committed to doing so. Here are just a few industry events where vendors have come together to demonstrate interoperability and showcase their technology:

- FCIA Demonstration at SNW
- FCoE Plugfest at the University of New Hampshire
- FCoE Test Drive by QLogic

For more information on FCoE and DCE, click here.
I’ve spoken frequently here about the benefits of a Unified Fabric in the data center. I’ve discussed the CapEx and OpEx savings associated with a reduced number of devices, adaptors, and cables when building a Unified Fabric with FCoE. But until now, it’s been difficult to quantify the savings without going through a detailed design exercise.

Here is an online calculator that makes this process a much simpler one. I encourage you to test it out with your data and see if you come to the same conclusion that we have: a converged data center fabric can save you money.
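To make the adapter-and-cable math concrete, here is a minimal back-of-the-envelope sketch of the kind of comparison such a calculator performs. The per-server counts below (two NICs plus two HBAs for separate LAN/SAN fabrics versus two converged network adapters for a unified fabric) are illustrative assumptions for this example, not figures from the calculator itself:

```python
# Illustrative fabric-consolidation math. The per-server adapter counts
# are hypothetical assumptions chosen for this sketch.

def adapter_count(servers: int, separate: bool = True) -> int:
    """Adapters (and matching cables) needed across all servers.

    separate=True  -> 2 NICs + 2 HBAs per server (distinct LAN and SAN)
    separate=False -> 2 converged adapters per server (unified fabric)
    """
    per_server = 4 if separate else 2
    return servers * per_server

servers = 100
before = adapter_count(servers, separate=True)    # 400 adapters and cables
after = adapter_count(servers, separate=False)    # 200 adapters and cables
print(f"Adapters and cables saved: {before - after}")
```

Each eliminated adapter also removes a switch port, a cable, and the power and cooling that go with them, which is where the OpEx side of the savings comes from.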
One of the things I’ve noticed over the last year is the tactic by some vendors to hop on the Green bandwagon and try to use the attention being paid to it as a competitive differentiator. Let me be the first to say (as someone with ~10 years in facilities): beware of bad or incomplete math. Analyzing the efficiency of a single box is less complicated than you might think. Let me try to impart some lessons learned in planning the power and cooling design for 20+ enterprise-class data centers.

The three main considerations for power and cooling are capacity, density, and efficiency. Capacity is typically used in planning the power you need to provision. Density drives cooling supply and airflow. My boss Doug “Dawg” Gourlay likes to call this Air Supply after his favorite band… Then there is efficiency. The latter metric can be applied to a box, a system, an architecture, and of course travel.

Box-level efficiency depends almost entirely on the product’s power supplies, since the power distribution within the chassis is Direct Current (DC) and rarely gets re-converted. Almost all IT equipment uses an Alternating Current (AC) Switched Mode Power Supply (SMPS) these days. These power supplies are typically sourced from Asia and come in different levels of quality that can be specified, and that quality typically relates to how efficient the supply is. For years Cisco has been paying a premium for highly efficient power supplies, particularly in the data center. Prior to roughly 2002, AC power supplies had efficiencies in the range of 70% at optimum load; today, efficiencies approaching 90% are available. Why did their efficiency jump up so high? This was a carry-over from the dot-com boom, where space efficiency was the big concern -- we had plenty of power but not enough space. Therefore the likes of Exodus and GlobalSwitch pushed equipment suppliers to make smaller boxes. So how do you do that? You make more efficient components.
So highly efficient power supplies are readily available on the open market today. The best way to compare one box to another is to start by looking at the power supplies. Efficiency is largely predicated on how the supply is loaded. We at Cisco have power supplies that are ~90% efficient when loaded at 70% or higher. As an example, if I have a power supply rated at 1000 Watts, I want to make sure that power supply is on average drawing 700 Watts. If I do this, I am in the “efficiency sweet spot.”

The next level of comparison -- box and ultimately systems -- is much more complex. Power per port is a good starting point but must be blended with a use case. This involves a back-and-forth, if you will, between vendors and customers. First, the user must clearly define a use case; then the vendor can show how the feature sets in that product or solution address the use case. So in essence it’s Power Per Port to do “what,” or what we typically refer to as Power Per Work Unit Performed.

Given the complexity of IT operations, this is still very much an emerging science. Given the broad scope of interoperability, scale-up, scale-out, virtualization, and so on, a systems-level comparison is not a simple task. And given that there is only a small difference at the box level, we’ve been laser focused on driving efficiency within the architecture and through improved asset utilization, using things like storage virtualization. In case you have been wondering, this is why we only engage on efficiency analysis at the product level with customers who ask us to, and when the comparison has to do with operative efficiency using sound metrics. We are not adding to the “Green-wash” by using misplaced metrics and over-simplifying what can be a very complex comparison.

So the moral of the story is that while capacity, density, and efficiency are all interrelated, they are not the same. Don’t confuse the capacity requirements of a single box with that box’s operative efficiency or systems efficiency.
A good way to think of it is the automobile example that Omar Sultan described in earlier postings. My spin on his example is a Prius versus a Big Rig. Which is more efficient? Please answer back if you know the answer. I would also be very interested if there are any facilities professionals reading who would like to comment on efficiency analysis.
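The trick in that question is the work unit, which is exactly the Power Per Work Unit Performed point. A hedged illustration, using rough ballpark fuel and payload figures that are my own assumptions rather than anything from Omar’s posts:

```python
# Illustrative "per work unit" efficiency comparison. The mpg and payload
# numbers are rough ballpark assumptions, not measured data.

def gallons_per_ton_mile(mpg: float, payload_tons: float) -> float:
    """Fuel burned to move one ton of cargo one mile."""
    return 1.0 / (mpg * payload_tons)

prius = gallons_per_ton_mile(mpg=50, payload_tons=0.25)  # ~500 lb of people
big_rig = gallons_per_ton_mile(mpg=6, payload_tons=20)   # ~40,000 lb freight

# Per vehicle-mile the Prius wins easily; per ton of freight moved,
# the Big Rig burns far less fuel per unit of work.
print(prius > big_rig)  # True
```

The same inversion happens with network gear: the box with the lowest power draw is not necessarily the most efficient once you normalize for the work it performs.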
I read the following article today on GigaOm. Seeing as I inadvertently left my briefcase and laptop in a cab this week, and it took a few days to get them back, I may be a bit behind. (Thank you to the incredibly kind cab driver in New Orleans who brought it back to the airport so they could send it to me! It was quite the good karma day where that was concerned.)

I somehow usually disagree with Simon on things -- might be because of background, ideology, where we work, or something -- but in this article I find myself agreeing with his latter viewpoints quite a bit. For instance, this time around I think Desktop Virtualization, or Remote Desktops, are poised to take off. I also agree with him that Ethernet is changing the way storage networks have historically been run and designed, although in the case of FCoE it is doing so in a very transparent way that is as non-disruptive as possible to both storage and network administrators. I think ‘Clouds’ are the next hype term for the next year or two. People are coming to grips with Virtualization and how it reshapes IT, creates service- and software-based models, and in many ways changes a lot of the physical layer we are used to. Clouds will be the next transformation over the next several years, building off of the software models that virtualization enabled.

I wanted to say thank you to all the customers and partners I met at CiscoLive/Networkers last week. It was great seeing all of you, all 11,000 of you for that matter. I also wanted to pass a special thanks to those who filmed some vlogs with us and the 1500 or so who joined me in wishing Dino Farinacci a ‘Happy Birthday’. Take Care.

dg