We had an interesting thread unfold on an internal list, which I thought I would open up to our readership. Someone was foraging around the network and came across some impressive server uptime (all server names changed to keep infosec happy):
server-x% uptime
  7:13pm  up 500 day(s),  3:17,  53 users,  load average: 0.08, 0.11, 0.11
to which someone else countered with
server-y$ uptime
 23:45:15 up 700 days,  8:31,  3 users,  load average: 0.00, 0.00, 0.00
The irony behind this server is that it has outlasted the business unit it apparently supported.
However, the winner so far is:
WS-C5000 Software, Version McpSW: 3.1(2) NmpSW: 3.1(2a)
Copyright (c) 1995-1998 by Cisco Systems
NMP S/W compiled on Feb 20 1998, 18:56:57
MCP S/W compiled on Feb 20 1998, 19:05:51
System Bootstrap Version: 2.4(1)
Hardware Version: 2.1  Model: WS-C5000  Serial #: 007584271
…
Uptime is 2618 days, 9 hours, 11 minutes
7+ years -- guess there is something to that investment protection thing after all.
So what is the best system uptime in your data center? The response with the best uptime gets a Cisco fleece.
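If you want to survey your own machines before posting, one quick way to rank entries is to pull the day count out of each uptime line. Here is a minimal sketch (the sample lines are the ones from the thread above; the helper name days_up is my own):

```shell
# Extract the day count from a line of `uptime` output so entries can be
# compared or sorted; handles both the "500 day(s)" (Solaris) and
# "700 days" (Linux) spellings.
days_up() {
  printf '%s\n' "$1" | sed -n 's/.*up *\([0-9][0-9]*\) day.*/\1/p'
}

days_up "7:13pm up 500 day(s), 3:17, 53 users, load average: 0.08, 0.11, 0.11"
days_up "23:45:15 up 700 days, 8:31, 3 users, load average: 0.00, 0.00, 0.00"
```

Feed it each contender's uptime line and the biggest number wins the fleece.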
This week saw the reactive execution of something we have predicted for a while: the consolidation of corporate entities to bring together LAN and SAN and try to challenge our Unified Fabric and Data Center 3.0 vision. First, I want to express gratitude for a multi-billion-dollar valuation and endorsement of the vision we began executing on in 2002, when we first brought out our MDS 9500 storage director. It was designed to share several key architectural points with the Catalyst 6500: the same fabric ASICs, common equipment designs, and so on. Since then we have evolved both platforms and introduced the Nexus 7000 and 5000, bringing FCoE to market and delivering next-generation convergence platforms purpose-built for the data center. Most importantly, we have delivered a strategic operating system that enables the convergence of these areas into a common hardware, silicon, and now software model: the key abstraction for the administrator/operator. It's years of work, and we're fortunate to have it finished.

I personally don't so much see this week's news as the emergence of a new and stronger competitor as I see it as the loss of a respected adversary in the LAN market who played the game well.

dg

P.S. -- US Principles of War for offensives: seize, retain, and exploit the initiative. Or, to paraphrase: lead, don't follow.
Ken Oestreich wrote this piece with a nice summary of the SF Data Center Dynamics conference. He even quoted our greenest VP, Paul Marcoux, on some of the areas where we are seeing synergies between campus, WAN, data center, building automation, etc.

dg
First let me preface all comments below by saying that Cisco does not adhere to any particular political point of view on global climate change. However, we do listen to the world's scientific community. I personally am a lifelong registered independent and believe in choosing the best team to get a particular job done. So, with the disclaimer out of the way, I will say that political will and individual responsibility are the key components to tackling this massive set of challenges, and there is a lot we can all do to take personal responsibility for making changes in our personal and professional lives.

Not sure if you've seen it yet, but the abbreviated version of Big Al's Challenge is a good view (5:06 minutes), posted by WeCanSolveIt.org on July 17, 2008. He makes it crystal clear, in my opinion, what needs to be done moving forward. I think it's safe to say we've moved past Green 0.1, which is the basic awareness that our activities impact the planet. Now it's clear we are moving toward Green 1.0, which I would define as "what does Green mean to me?" That question doesn't just apply to one's home life but to work life as well.

So we can imagine a million little questions we ask every day (like the paper-versus-plastic question at the grocery). Here are a few I ask when consulting on data center projects:

1) When considering a services-oriented approach: what system attributes are important to me (security, availability, scalability, general quality of service, and simplicity of management, as examples)? What infrastructure is required to meet these attributes? Among that infrastructure, how many power supplies are needed, how efficient are they (total average), and what feature sets do I really need within the components?

2) When designing facilities to support IT systems: how flexible and scalable is my facilities design, and over what time period? What if I need to move or consolidate within the 10-15 year design life of data center facilities? Can my design accommodate the dynamic power and cooling characteristics of a virtualized IT architecture?

3) Where are we today? How efficient is our data center? Without some key metrics to measure against, how do we know we are getting better?

4) What steps can we take to improve our net power consumption, power growth curve, emissions profile, and standard operating procedures?

5) Is our organization set up to support energy efficiency as a business priority?

6) What is the financial case for operating in a more sustainable manner?

7) What type of power source are we plugging our data center into? Is it coal, is it nuclear, is it hydro? Can we implement local renewable generation to complement our main power source? What is the ROI for renewable investment?

These points are of course vastly simplified, but hopefully they impart some points of view that have not traditionally been considered in our daily professional lives. Just as in our personal lives, everything needs to make financial sense. This is one of the reasons I am taking a bit of a risk here by citing Al Gore's challenge: he clearly spells out that the environment, the economy, and global security issues are all inter-related. As data center professionals we have a big role to play here, and our operations will be under increased scrutiny moving forward.

The good news is that we can already demonstrate that reducing our emissions saves money. So whether you lean to the right or to the left doesn't matter; money speaks to both. If you would like to see for yourself, please take a look at the Green Data Center Model Calculator beta release within the Efficiency Assurance Program.

Whether you agree with Al Gore's political stance or not, he is spurring discussion that helps us all learn from each other. I've already received some scathing emails for putting his name up here, so feel free to post them as comments rather than direct mails so others can learn from your point of view.
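On the metrics point in question 3, one widely used measure is PUE (Power Usage Effectiveness): total facility power divided by the power actually delivered to IT equipment. A minimal sketch, with illustrative numbers rather than measurements from any real facility:

```shell
# PUE = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt drawn goes to IT gear; anything above
# that is overhead (cooling, power distribution losses, lighting).
# The pue helper name and the kW figures below are illustrative only.
pue() {
  awk -v total="$1" -v it="$2" 'BEGIN { printf "%.2f\n", total / it }'
}

pue 1500 1000   # 1500 kW at the meter for 1000 kW of IT load
```

Tracking this ratio over time is one simple way to answer "how do we know we are getting better?"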
Here is an anonymous excerpt from one in particular:

"I very strongly object to linking Cisco's great energy-efficient data center story to the unscientific, inaccurate, and, quite frankly, hysterical (and completely politically-motivated) rhetoric of Al Gore."

Your thoughts?

P.S. -- the answer to the paper-versus-plastic question is to bring your own bags; it's not hard to do. In fact, San Francisco has now outlawed plastic bags within the city limits. Now there are some bragging rights!
Another milestone for FCoE was reached this week with interoperability testing sponsored by QLogic in the FCoE Test Drive. Here are some of the more interesting quotes from the event:

"In our opinion, FCoE is ready to be tested by those customers that want to run their storage traffic over a single, 10Gb converged fabric," said Dennis Martin, founder and president of computer industry analyst firm Demartek, a third-party auditor of the test drive.

"All FCoE traffic on the test drive went through the Cisco Nexus 5000 Series 10 Gigabit Ethernet switches and ran seamlessly with all the other FCoE products in the test drive," said Ed Chapman, Cisco vice president of product management for the Server Access and Virtualization Business Unit. "We are very encouraged by the prospects of FCoE market acceptance due to the fact that it is a non-disruptive technology, which customers can deploy at their own pace to unify their network infrastructures, and offers significant savings in adapter and cable investment as well as in power and cooling costs."

"The ability to converge data and storage networking traffic through a single adapter greatly simplifies life in the data center, as evidenced by this test drive," said Frank Berry, vice president of marketing, QLogic Corp.

"The FCoE Test Drive results further verify that the implementation of FCoE solutions will allow customers to leverage the proven benefits of Fibre Channel and the simplicity and economics of Ethernet fabrics," said Patrick Rogers, vice president of Solutions Marketing at NetApp.

Clearly, FCoE is gaining momentum, and the multi-vendor approach the industry is taking will ensure that FCoE gains the maturity and production quality that the storage market requires.

Click here for more information on the test drive. Don't forget to check out the cool videos, too!