Enjoying being back in the office after a few days of travel and lots of wonderful customer visits at CiscoLive (née Networkers) the other week. A special thanks to everyone who did the video blogging with us; some of those were pure comedic genius, others quite insightful. But on to today’s point, or thought du jour…

Power draw is not an absolute. A device’s draw varies over time as workload is processed: sometimes a switch is forwarding packets, sometimes not; sometimes it needs buffers, sometimes not; some traffic takes more lookups than other traffic; and so on. In other words, power draw varies with the type of workload being done. If we can take this as a given, that will help me a bit here.

I would like to offer up ‘nominal use case’ test results for most of our products showing how much power they draw. Right now we provide an accurately measured number based on the data center devices residing in a controlled-temperature facility operating somewhere in the 20-25C range. This is available on our Data Center Assurance Program tool today. In the future, though, I see this less as an absolute and, courtesy of a good discussion with Paul Marcoux, more as a graph with multiple slopes represented. We need to show power draw under different load factors, in different thermal conditions, with different features turned on or off, and in the end still provide a nominal use case number for planning purposes to get us started with proper planning information.

This begs the question, though: why did one of our competitors recently shout from the rooftops, wrapping themselves in a green flag, decrying the ‘Cisco Energy Tax’ and talking about how much more efficient their infrastructure is… when the test had the switches unplugged, no traffic going through them, and no real-world features in use? I think I may have to offer an answer, spicy/snarky as it may be: because for this competitor in particular, the nominal use case is that their infrastructure remains unplugged, with no traffic going through it, and no real-world features turned on?

I am not sure the industry will ever, or could ever, settle on a one-size-fits-all test scenario. What we can do is test as accurately as possible, be open about what our products can and cannot do and how efficiently they can do it, and continue to innovate in ways that ultimately reduce the power draw required to process workload, store the results, and communicate with other servers, storage, applications, and end users.

dg

P.S. The ultimate problem, and challenge, is deciding which scenario below is best:

1) Business Problem A uses Application X. It runs on 100 quad-core servers at 85% efficiency via a superior interconnect that enables built-in clustering and automated load balancing. It has a distributed clustered storage file system with 65% storage utilization efficiency.

2) The same Business Problem A uses Application Y. It runs on 10 dual-core servers at 15% efficiency on a standard Ethernet network. It uses a central SAN (either FCoE, iSCSI, or FC, to avoid argument), and the storage utilization is also at 65% efficiency.

By some measures Scenario 1 is more effective; by other measures Scenario 2 is. The challenge to those who intend to standardize power draw measurements is to ensure that the test methodology picks the right one.
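To make that P.S. concrete, here is a minimal back-of-the-envelope sketch of why each scenario can “win” depending on the metric chosen. The per-server wattage figures (400 W and 250 W) are purely illustrative assumptions of my own, not measured numbers; the only point is that absolute power draw and power per unit of useful work can rank the two scenarios differently.

```python
# Hypothetical back-of-the-envelope comparison of the two scenarios above.
# All wattage figures are illustrative assumptions, not measured values.

def scenario_metrics(name, servers, watts_per_server, utilization):
    """Return total power and a crude 'useful work' proxy for a hypothetical scenario."""
    total_power = servers * watts_per_server   # watts drawn from the facility
    useful_work = servers * utilization        # busy-server equivalents delivered
    return {
        "name": name,
        "total_power_w": total_power,
        "watts_per_unit_work": total_power / useful_work,
    }

# Scenario 1: 100 quad-core servers at 85% utilization (assume ~400 W each)
s1 = scenario_metrics("Application X", servers=100, watts_per_server=400, utilization=0.85)

# Scenario 2: 10 dual-core servers at 15% utilization (assume ~250 W each)
s2 = scenario_metrics("Application Y", servers=10, watts_per_server=250, utilization=0.15)

for s in (s1, s2):
    print(f"{s['name']}: {s['total_power_w']} W total, "
          f"{s['watts_per_unit_work']:.0f} W per unit of useful work")

# Under these assumed numbers, Scenario 2 wins on absolute draw (2,500 W vs 40,000 W),
# while Scenario 1 wins on power per unit of useful work (~471 W vs ~1,667 W).
# Which one a standardized test rewards depends entirely on the metric chosen.
```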
Anwar Ghuloum recently posted to the Intel Research blog about the conversations Intel is having with developers on developing for multi-core and terascale environments. To quote Anwar: “Ultimately, the advice I’ll offer is that these developers should start thinking about tens, hundreds, and thousands of cores now in their algorithmic development and deployment pipeline.”
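As one way to picture that advice, here is a minimal sketch (my own illustration, not from Anwar’s post) of a core-count-agnostic structure: the work is expressed as an independent, data-parallel map, and the worker pool is sized to whatever hardware is present rather than hard-coded to today’s core count.

```python
# Minimal illustration of core-count-agnostic code: independent work items,
# pool sized to the machine at hand rather than a fixed number of cores.
from multiprocessing import Pool, cpu_count

def score(record):
    # Placeholder per-item work; since items are independent,
    # the same code runs unchanged on 2, 80, or 1,000 cores.
    return sum(x * x for x in record)

if __name__ == "__main__":
    records = [range(1_000) for _ in range(10_000)]
    with Pool(processes=cpu_count()) as pool:   # sized to the hardware, not hard-coded
        results = pool.map(score, records, chunksize=64)
    print(len(results), "records scored on", cpu_count(), "cores")
```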
There has been some concern that Cisco’s efforts around FCoE and Data Center Ethernet are proprietary implementations and not supported by the industry. Nothing could be further from the truth. The fact is that FCoE and Data Center Ethernet enjoy support from a variety of vendors and are being adopted by many standards bodies, including IEEE and INCITS T11.

Many vendors, including Intel, Emulex, and QLogic, have already announced FCoE and Data Center Ethernet products, and many more have committed to doing so.

Also, here are just a few industry events where vendors have come together to demonstrate interoperability and showcase their technology:

FCIA Demonstration at SNW
FCoE Plugfest at the University of New Hampshire
FCoE Test Drive by QLogic

For more information on FCoE and DCE, click here.
I’ve spoken frequently here about the benefits of a Unified Fabric in the data center. I’ve discussed the CapEx and OpEx savings that come from the reduced number of devices, adapters, and cables when building a Unified Fabric with FCoE. But until now, it’s been difficult to quantify those savings without going through a detailed design exercise.

Here is an online calculator that makes this process much simpler. I encourage you to test it out with your own data and see if you come to the same conclusion that we have: a converged data center fabric can save you money.
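For a rough sense of what a calculator like this is doing, here is a minimal sketch (my own, not the actual Cisco tool) that compares adapter and cable counts for separate LAN and SAN fabrics against a converged FCoE fabric. The per-server NIC/HBA counts and unit costs are illustrative assumptions only.

```python
# Hypothetical sketch of a unified-fabric savings estimate (not the Cisco calculator).
# Per-server adapter counts and unit costs below are illustrative assumptions.

def fabric_costs(servers, nics_per_server, hbas_per_server,
                 nic_cost, hba_cost, cna_cost, cable_cost):
    """Compare separate LAN + SAN fabrics against a converged FCoE fabric."""
    # Separate fabrics: dedicated Ethernet NICs plus Fibre Channel HBAs, one cable per port
    separate_ports = servers * (nics_per_server + hbas_per_server)
    separate_cost = servers * (nics_per_server * nic_cost + hbas_per_server * hba_cost)
    separate_cost += separate_ports * cable_cost

    # Converged fabric: a pair of CNAs per server carries both LAN and storage traffic
    converged_ports = servers * 2
    converged_cost = converged_ports * (cna_cost + cable_cost)

    return {
        "separate_ports": separate_ports,
        "converged_ports": converged_ports,
        "estimated_savings": separate_cost - converged_cost,
    }

# Example: 40 servers, each with 2 NICs and 2 HBAs today
result = fabric_costs(servers=40, nics_per_server=2, hbas_per_server=2,
                      nic_cost=150, hba_cost=800, cna_cost=700, cable_cost=50)
print(result)  # fewer ports and cables, plus an estimated dollar savings
```

Even with made-up prices, the structural point holds: halving the adapter and cable count per server is where most of the savings come from.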
Read the following article today on GigaOm. Seeing as I inadvertently left my briefcase and laptop in a cab this week and it took a few days to get them back, I may be a bit behind. (Thank you to the incredibly kind cab driver in New Orleans who brought them back to the airport so they could be sent to me! It was quite the good-karma day where that was concerned.)

I usually disagree with Simon on things, maybe because of background, ideology, where we work, or something else, but in this article I find myself agreeing with his latter viewpoints quite a bit. For instance, this time around I think desktop virtualization and remote desktops are poised to take off. I also agree with him that Ethernet is changing the way storage networks have historically been run and designed, although in the case of FCoE it is doing so in a very transparent way that is as non-disruptive as possible to both storage and network administrators.

I think ‘clouds’ will be the hype term for the next year or two. People are coming to grips with virtualization and how it reshapes IT, creates service- and software-based models, and in many ways changes a lot of the physical layer we are used to. Clouds will be the next transformation over the next several years, building off of the software models that virtualization enabled.

I wanted to say thank you to all the customers and partners I met at CiscoLive/Networkers last week. It was great seeing all of you, all 11,000 of you for that matter. I also want to pass along a special thanks to those who filmed some vlogs with us and to the 1,500 or so who joined me in wishing Dino Farinacci a happy birthday. Take care.

dg