The trend to consolidate data centers is well in progress, or even into the home stretch, for most companies and organizations. A Gartner survey from nearly a year ago (2007) noted that 92% of respondents had a data center consolidation planned, in progress, or completed.

So what about the software applications themselves? These have been much more distributed than data centers, having homes on the desktop, in branch offices, regional offices, and data centers.

New Research

Nemertes Research has published the results of an interesting new research report on branch IT architectures, citing that branch office application centralization may also be approaching its limit. The report finds that 67.7% of companies currently store their applications centrally, up from 56% one year ago. Also interesting: 25% reported a "hybrid model" in which most applications are centralized while some are still hosted locally.

The question there is how you can further optimize the applications you must keep local (perhaps a retail transaction app, or even basic IT services like Windows Print). Certainly virtualization can play a big role -- either virtualizing the local server(s) you decide to keep in the branch, or skipping them altogether and virtualizing the branch platform to host the remaining local apps directly, a strategy Cisco is driving with the recent addition of virtualization to its WAAS platform.

And then there are software-as-a-service (SaaS) options, which centralize applications even further -- into the cloud of a SaaS provider like Salesforce.com, Google and others.

What all these technologies and solutions really give you as IT leaders are two key benefits: flexibility and business agility. Flexibility so you can choose *what* application goes *where*, based on cost, time to manage, resiliency requirements and other criteria. You're no longer bound by physical or cost limits.
You also gain much better business agility, because the architectures and solutions you can build with these new application delivery models allow your business to deploy new apps, features and services much faster than before, from central infrastructure (yours or a provider's) rather than distributed systems.

While these trends toward application centralization, branch virtualization, and SaaS/cloud-based hosting are still in their early years, the direction the majority of architectures and deployment models will take seems pretty clear.

Your Thoughts?

Where is your organization with its application deployment and delivery models? Centralizing (and if so, which apps are going home and which are staying out)? What are you still keeping local for remote users? And is SaaS a part of your plans?
While sharing your quarterly results and a look forward is fair play in today's competitive IT vendor environment, grossly overstating things doesn't benefit the vendor (or its customers) in the long run. See this recent blog from The VAR Guy, who attended F5's partner summit in New Orleans. A couple of points are worth noting.

"Less than 10 years ago, the relevant players in the data center were server vendors," said McAdam (F5's CEO). But the data center market has shifted toward F5 Networks and its network application expertise, he insisted. Hmmm. That means resellers (and IT buyers) should focus more on F5's (or any other vendor's) load balancers than on servers and server virtualization?

And then there is the issue of honest vendor claims (even if New Orleans can lead to late nights and rough mornings): "At the product level," McAdam said, "we beat Cisco 99 percent of the time." Hmmm (again). If F5 wins 99% of the time, and even half the opportunities among the 2,500+ ACE customers were competitive (the real percentage is higher), then F5 must have shipped 250,000+ units in the two years since ACE launched. Those are hard numbers to back up.

So a note to the wise: enjoy the glory of a good quarter and share it with your field counterparts, but it helps to stay between the lines -- even in the French Quarter down in New Orleans.
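The back-of-envelope math behind that skepticism can be sketched as follows. The assumption (illustrative, taken from the post's own framing) is that Cisco's 2,500+ ACE wins would represent the 1% of contested deals F5 lost, if F5's 99% claim held:

```python
# Back-of-envelope check of the "we beat Cisco 99 percent of the time" claim.
# Assumption (illustrative, from the post): Cisco's ~2,500 ACE customer wins
# are the 1% of contested deals that F5 did NOT win.
ace_customers = 2500             # Cisco ACE wins cited in the post

# If those wins are 1% of contested deals, the total deal count is 100x:
total_deals = ace_customers * 100        # 250,000 contested deals
f5_wins = total_deals - ace_customers    # 247,500 F5 wins implied

print(total_deals, f5_wins)  # prints: 250000 247500
```

Even granting that only half of those ACE deals were competitive, the claim would still imply well over 100,000 F5 wins in two years, which is the "hard numbers to back up" point.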
Today, as I sat in my office wondering what to do since I cannot play Scrabulous any more (still trying for a 500-point game; I hit 490 the other week), I was reading about clouds. Not the cloudy kind of judgment that causes things like Scrabulous to be shut down, but the kind of clouds at the on-ramp of the hype superhighway: cloud computing!

There was an interesting article on GigaOm today, "Networking Vendors Must Change Their Stripes," about the opportunity provided by the cloud computing evolution beginning to happen in the market. The first thing about evolution is that these architectural shifts do not happen nearly as fast as the authors of such articles would like to think, and definitely not as fast as it takes to write said article. Notwithstanding that, there is some real 'meat' behind the cloud movement. It is an EVOLUTION, though -- an evolution of servers, of storage, of the networks that interconnect them, of load balancing, of firewalling, of security policy, of the atomic unit that application processing architectures are built on top of, of management tools, and of billing/accounting models. Combine them, and yes, if you were to compare the current de rigueur state of computing to the possibilities hopefully enabled by the cloud models, the result would look REVOLUTIONARY.

However, evolution takes time. And in that there is a distinct first-mover advantage that sometimes comes to bear -- for instance, as I commented in my reply to the GigaOm article, we have been focused on virtualizing as much of our infrastructure as possible. It is not a quick journey, it's not a simple feature, it's not a hack. It's a complete top-down and bottom-up redesign of many things that people take for granted.
It's looking at the hardware, the ASICs, the memory subsystems and controllers, resource schedulers, arbiters, and software operating systems designed with stateful process restart and fully separate, independent processes for each function. This takes a long time.

For some functions the virtual appliance concept makes sense -- I have been an advocate of this for some of our own products for a long time. These would be products where the underlying hardware is not the source of differentiation or competitive advantage, and where making the appliance portable from one class of machine to another could offer some intrinsic value or allow the customer to reuse processing cycles more efficiently. I can't publish our road map and state which Cisco applications lend themselves best to this, but let's say most things with deep packet inspection and encryption processing DO NOT lend themselves well to virtual appliances. Given that caveat, what applications do you want to see us release as virtual appliances?

Now for the good news -- we've been preparing for this for over six years. From the first virtualized firewall to the first virtualized load balancer, to the Nexus 7000 and Nexus 5000 that enable the I/O itself to become virtual, or software-provisionable as the case may be. We also brought out tools like VFrame to simplify deployment and automate common IT workflows, so we can speed up IT responsiveness and really become an enabler of enterprise clouds. Kind of a profound realization -- we have the tools today to build enterprise clouds. At least the core infrastructure and a lot of the hard part. We still have work to do, and there are still significant organizational barriers to the deployment of some of these offerings.
But they are maturing, and evolving. Our competitors are trying as well -- through their M&A strategy or, in another case, through new management that may be a result of that same M&A strategy's execution path. All I can say is the next few years are going to continue to be very, very fun.
For those of you going to the Next Generation Data Center conference in San Francisco in a couple of weeks, be sure to check out Rajiv Ramaswami's keynote: "Data Center 3.0: How the Network is Transforming the Data Center" (Tuesday, Aug 5th, 1:30pm). Rajiv is VP/GM for Cisco's Data Center Business Unit. As both an industry and Cisco veteran, Rajiv has some great insight into what is on the horizon in the data center and how the network will accelerate key trends such as virtualization, collaboration, and new service models.
We had an interesting thread unfold on an internal list, which I thought I would open up to our readership. Someone was foraging around the network and came across some impressive server uptime (all server names changed to keep infosec happy):
server-x% uptime
7:13pm up 500 day(s), 3:17, 53 users, load average: 0.08, 0.11, 0.11
to which someone else countered with
server-y$ uptime
23:45:15 up 700 days, 8:31, 3 users, load average: 0.00, 0.00, 0.00
The irony behind this server is that it has outlasted the business unit it apparently supported.
However, the winner so far is:
WS-C5000 Software, Version McpSW: 3.1(2) NmpSW: 3.1(2a)
Copyright (c) 1995-1998 by Cisco Systems
NMP S/W compiled on Feb 20 1998, 18:56:57
MCP S/W compiled on Feb 20 1998, 19:05:51
System Bootstrap Version: 2.4(1)
Hardware Version: 2.1 Model: WS-C5000 Serial #: 007584271
…
Uptime is 2618 days, 9 hours, 11 minutes
7+ years -- guess there is something to that investment protection thing after all.
So what is the best system uptime in your data center? The response with the best uptime gets a Cisco fleece.
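For anyone hunting through their own shell history for an entry, a quick sketch of how the uptimes quoted above can be normalized to days and ranked (a hypothetical helper, not part of the original thread; it only looks at the day count, ignoring hours and minutes):

```python
import re

def uptime_days(line: str) -> int:
    """Extract the day count from an 'uptime'-style or 'show version' line."""
    match = re.search(r"(\d+)\s*day", line)
    return int(match.group(1)) if match else 0

# The three entries quoted in the thread above.
entries = {
    "server-x": "7:13pm up 500 day(s), 3:17, 53 users, load average: 0.08, 0.11, 0.11",
    "server-y": "23:45:15 up 700 days, 8:31, 3 users, load average: 0.00, 0.00, 0.00",
    "WS-C5000": "Uptime is 2618 days, 9 hours, 11 minutes",
}

winner = max(entries, key=lambda name: uptime_days(entries[name]))
print(winner, uptime_days(entries[winner]))  # prints: WS-C5000 2618
```

The regex handles both the `500 day(s)` and `2618 days` spellings, which is why the Catalyst 5000 comes out on top at 2618 days.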