Ken Oestreich wrote this piece with a nice summary of the SF Data Center Dynamics conference. He even quoted our greenest VP, Paul Marcoux, on some of the areas where we are seeing synergies between campus, WAN, data center, building automation, etc. dg
First let me preface all comments below by saying that Cisco does not adhere to any particular political point of view on global climate change. However, we do listen to the world’s scientific community. I personally am a lifelong registered independent and believe in choosing the best team to get a particular job done. So, with the disclaimer out of the way, I will say that political will and individual responsibility are the key components to tackling this massive set of challenges, and there is a lot we all can do to take personal responsibility and make changes in our personal and professional lives.

Not sure if you’ve seen it yet, but the abbreviated version of Big Al’s Challenge is a good view (5:06 minutes), posted by WeCanSolveIt.org on July 17, 2008. He makes it crystal clear, in my opinion, what needs to be done moving forward.

I think it’s safe to say we’ve moved past Green 0.1, the basic awareness that our activities impact the planet. Now it’s clear we are moving toward Green 1.0, which I would define as “what does Green mean to me?” That question doesn’t just apply to one’s home life, but to work life as well. So we can imagine a million little questions we ask every day (like the paper-versus-plastic question at the grocery). Here are a few I ask when consulting on data center projects:

1) When considering a services-oriented approach: which system attributes are important to me (security, availability, scalability, general quality of service, and simplicity of management, for example)? What infrastructure is required to meet these attributes? Among that infrastructure, how many power supplies are needed, how efficient are they (total average), and what feature sets do I really need within the components?

2) When designing facilities to support IT systems: how flexible and scalable is my facilities design, and over what time period? What if I need to move or consolidate within the 10-15 year design life of data center facilities? Can my design accommodate the dynamic power and cooling characteristics of a virtualized IT architecture?

3) Where are we today? How efficient is our data center? Without key metrics to measure against, how do we know we are getting better?

4) What steps can we take to improve our net power consumption, power growth curve, emissions profile, and standard operating procedures?

5) Is our organization set up to support energy efficiency as a business priority?

6) What is the financial case for operating in a more sustainable manner?

7) What type of power source are we plugging our data center into? Is it coal, nuclear, hydro? Can we implement local renewable generation to complement our main power source? What is the ROI for renewable investment?

These points are of course vastly simplified, but hopefully they impart some points of view that have not traditionally been considered in our daily professional lives. Just like in our personal lives, everything needs to make financial sense. This is one of the reasons I am taking a bit of a risk here by citing Al Gore’s challenge: he clearly spells out the fact that the environment, the economy, and global security are all inter-related. As data center professionals we have a big role to play here, and our operations will be under increased scrutiny moving forward.

The good news is that we can already demonstrate that reducing our emissions saves money. So whether you lean to the right or to the left doesn’t matter; money speaks to both. If you would like to see for yourself, please take a look at the Green Data Center Model Calculator beta release within the Efficiency Assurance Program.

Whether you agree with Al Gore’s political stance or not, he is spurring discussion, which helps us all learn from each other. I’ve already received some scathing emails for putting his name up here, so feel free to post them as comments rather than direct mails so others can learn from your point of view.
Here is an anonymous excerpt from one in particular: “I very strongly object to linking Cisco’s great energy-efficient data center story to the unscientific, inaccurate, and, quite frankly, hysterical (and completely politically-motivated) rhetoric of Al Gore.”

Your thoughts?

PS -- the answer to the paper-versus-plastic question is to bring your own bags; it’s not hard to do. In fact, San Francisco has now outlawed plastic bags within the city limits. Now there are some bragging rights!
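On question 3 above (how efficient is our data center?), the post doesn’t name a specific metric, but one widely used measure from this era is Power Usage Effectiveness (PUE): total facility power divided by IT equipment power. Treat this as an illustrative assumption rather than the metric behind the Green Data Center Model Calculator; a minimal sketch:

```python
# Illustrative sketch: Power Usage Effectiveness (PUE), a common data center
# efficiency metric. PUE = total facility power / IT equipment power;
# 1.0 is the theoretical ideal (every watt delivered goes to IT gear).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute PUE; lower is better, 1.0 is the theoretical ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: a 1,000 kW facility whose IT gear draws 500 kW.
print(pue(1000.0, 500.0))  # 2.0: half the power goes to cooling,
                           # power distribution losses, lighting, etc.
```

Tracking a number like this over time gives the "are we getting better?" baseline the post asks for.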
With all this talk of clouds, EMC has done something interesting and practical with them. If you purchase an Iomega external hard drive, you get access to a Windows- or Mac-friendly software bundle that integrates their Retrospect Express backup software, their Mozy online backup software, and 2GB of free online storage (or $4.95 for all-you-can-eat). So, in one fell swoop, you have seamless, integrated online and nearline backup. I think this is the perfect example of cloud computing (at least in 2008): a service front-end and a fully abstracted back-end.
Another milestone for FCoE was reached this week with interoperability testing sponsored by QLogic in the FCoE Test Drive. Here are some of the more interesting quotes from the event:

“In our opinion, FCoE is ready to be tested by those customers that want to run their storage traffic over a single, 10Gb converged fabric,” said Dennis Martin, founder and president of computer industry analyst firm Demartek, a third-party auditor of the test drive.

“All FCoE traffic on the test drive went through the Cisco Nexus 5000 Series 10 Gigabit Ethernet switches and ran seamlessly with all the other FCoE products in the test drive,” said Ed Chapman, Cisco vice president of product management for the Server Access and Virtualization Business Unit. “We are very encouraged by the prospects of FCoE market acceptance due to the fact that it is a non-disruptive technology, which customers can deploy at their own pace to unify their network infrastructures, and offers significant savings in adapter and cable investment as well as in power and cooling costs.”

“The ability to converge data and storage networking traffic through a single adapter greatly simplifies life in the data center, as evidenced by this test drive,” said Frank Berry, vice president of marketing, QLogic Corp.

“The FCoE Test Drive results further verify that the implementation of FCoE solutions will allow customers to leverage the proven benefits of Fibre Channel and the simplicity and economics of Ethernet fabrics,” said Patrick Rogers, vice president of Solutions Marketing at NetApp.

Clearly, FCoE is gaining momentum, and the multi-vendor approach the industry is taking will ensure that FCoE gains the maturity and production quality that the storage market requires. Click here for more information on the test drive. Don’t forget to check out the cool videos, too!
Not so much a comment on the weather as some prognostication around the evolution of cloud computing…

1) Today the term ‘cloud’ doesn’t mean a whole lot. It’s a nice catchy phrase for what many companies have been doing for a long time: build a data center, outsource processing cycles and storage capacity to a variety of consumers, and charge for it. Make sure it is connected to a network so the outsourced service can be ubiquitously accessed from a variety of locations, and allow the compute and storage capacity to be re-purposed.

2) What seems to be changing is the rate of change: the pace, or velocity, so to speak. Adding a consumer to a hosted data center in the mid-to-late ’90s involved buying a ‘cage’ and putting into that cage lots of physical stuff: routers, servers, storage arrays, load balancers, switches, firewalls, tape drives, terminal servers, etc. This meant that the time to turn up a new service was measured in months, or weeks at best. Even a simple capacity add required procurement, cabling, electricians, rack-mounting, etc. The fastest single activity in the workflow could be measured in days.

3) Time compressed. Server virtualization compressed the time frame in which a ‘server’ (err, VM) could be turned up, cloned, copied, or uniquely provisioned. This put strain on the other areas of traditionally physical infrastructure, such as storage, load balancers, and security. Those areas have responded with their own unique forms of virtualization, and there are emergent provisioning platforms for enterprises and service providers that automate some of the monotonous workflow tasks to speed up the delivery, and thus the efficacy, of the entire service.

Which leaves us where we are today. But what about moving forward?

4) Enterprises will build mini-clouds. As time compresses and workload can be rapidly re-provisioned and re-purposed in an increasingly automated fashion, the aggregate number of CPU cores/sockets and the memory necessary to support the peak aggregate workload will decrease within the cloud.

5) Service providers will move into higher-revenue cloud models as they continue to try to extract more revenue per square foot or per kilowatt-hour out of a hosting facility. This will be driven by shareholders and market consolidation, as well as by the number of facilities that will become available to the SPs as they consolidate their own DC infrastructures.

6) Hypervisors will become THE way of defining the abstraction between physical and virtual within a server, and there will be a standardization of the hypervisor ‘interface’ between the VM and the hypervisor. This will allow a VM created on Xen to move to VMware or Hyper-V and so on. Management capability and system-wide integration will become the key differentiators for this piece of technology.

7) Service providers will initially scale out their cloud managed application/hosting/hypervisor offerings by taking ‘low-hanging fruit’ applications like email, web, and call managers, but will then want to continue the expansion into larger enterprise customers and more custom applications. The standardized hypervisor will enable workload portability, and the SPs will try to acquire more customers.

8) IP addressing will move to IPv6, or IPv4 RFCs will be standardized, to allow for a globally addressable device/VM ID within the addressing space plus a location/provider-sensitive ID, so that workload can be moved from one provider to another ‘in flight’ without changing the client’s host stack or known IP address.
Here’s an example from my friend Dino.

9) This will allow workload portability between enterprise clouds and service provider clouds.

10) The SP community will embrace this and start aggressively trying to capture as much footprint as possible so they can fill their data centers to near capacity, allowing them to achieve maximum efficiency within their operations. This holds to my rule that ‘the value of virtualization is compounded by the number of devices virtualized’.

11) Someone will write a DNS or a DNS-coupled workload exchange. This will allow the enterprise to effectively automate the bidding of workload allocation across some pool of service providers offering compute, storage, and network capacity at a given price. The faster and more seamless the above technologies make the shift of workload from one provider to another, the simpler it is for an exchange or market-based system to be the controlling authority for the distribution of workload, and thus dollars, to the provider most capable of processing it.

12) Skynet becomes self-aware.
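The workload exchange in point 11 could be sketched roughly as follows. The provider names, prices, and the simple lowest-eligible-bid rule are all hypothetical assumptions for illustration, not anything a real exchange has standardized:

```python
# Hypothetical sketch of point 11: a market-based exchange that awards a
# workload to whichever provider bids the lowest price while still meeting
# the workload's capacity requirements. Provider names, capacities, and
# prices below are invented for illustration.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    provider: str
    cpu_cores: int          # compute capacity offered
    storage_gb: int         # storage capacity offered
    price_per_hour: float   # offered price to run the workload

def award(bids: List[Bid], need_cores: int, need_gb: int) -> Optional[Bid]:
    """Return the cheapest bid that satisfies the workload's requirements."""
    eligible = [b for b in bids
                if b.cpu_cores >= need_cores and b.storage_gb >= need_gb]
    return min(eligible, key=lambda b: b.price_per_hour, default=None)

bids = [
    Bid("sp-east", cpu_cores=64, storage_gb=2000, price_per_hour=1.40),
    Bid("sp-west", cpu_cores=32, storage_gb=1000, price_per_hour=0.90),
    Bid("sp-euro", cpu_cores=64, storage_gb=4000, price_per_hour=1.10),
]

winner = award(bids, need_cores=48, need_gb=1500)
print(winner.provider if winner else "no capable provider")  # sp-euro
```

The cheapest bidder (sp-west) loses here because it cannot meet the capacity requirement; that is exactly the kind of constraint-plus-price matching an automated exchange would mediate once workload portability makes switching providers cheap.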