I wanted to append and slightly change my earlier theory on server disaggregation and network evolution. Previously it went something like this: "Whenever a network transport becomes faster than a server bus, the peripheral connecting to that bus will move from a parallel connection to a serial one to a shared/packetized one. Printers, hard drives, CPU-to-CPU interconnects, and potentially even memory will follow this pattern. Thus, over time, all of the elements necessary to process an IT workload will not only be interconnected by a common fabric but, more importantly, connected with a layer of abstraction that makes any resource available to any workload at any time."

It still seems to be holding true (even 3-4 years after I made my first slide trying to explain this trend), but I realized in conversation with some sage reporters today that there is another axis to the graph: network speed and capability. The faster and more capable the network, the more disaggregated the server becomes. The faster and more capable the network, the more it consolidates other network types. I need to sit down with a decent Cabernet and mull over whether there is an end state. But if I had a data center that looked like a rack of CPU, a rack of memory, a rack of storage, a common network linking these physical assets, and a network-enabled hypervisor running on top of this infrastructure providing a layer of abstraction between the physical elements and the operating system(s), this would be interesting. The faster I can reallocate resources against a workload, the less aggregate resource I need to support an increasingly sporadic workload profile (a quick back-of-the-envelope sketch appears at the end of this post). Couple this with witty comment #2: "The value of virtualization is compounded by the number of resources consolidated into the virtualized data center," and we have a scenario that drives branch consolidation, virtual desktop infrastructure, and other forms of resource consolidation into the data center.

Now, there are still some networking firms out there chasing a speeds-and-feeds game. Some have not recognized the value of innovation. Some still go for the largest routing table, CAM table, or buffer pool, the highest millions-of-packets-per-second measurement, or the biggest Gbps backplane comparison. Some hire third parties that were strong advocates of Gigabit Token Ring to champion their product marketing efforts and contrive tests to show how fast/great/cool/neat/gee-whiz/wow/smart they are. Some wrap their products in a green flag, parade them through these 'testing houses,' and declare victory. Some simply exist. Well, my friends, the data center is a rapidly evolving part of the network, one that demands focus, innovation, and efficiency, not just existence. So, to commemorate a great author who sadly passed away today: "Blow the dust off the clock. Your watches are behind the times. Throw open the heavy curtains which are so dear to you - you do not even suspect that the day has already dawned outside."
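Here is that back-of-the-envelope sketch of the pooling claim. All of the numbers (16 workloads, 100-unit bursts, 10% duty cycle) are hypothetical, chosen only to illustrate the statistical multiplexing a fast shared fabric buys you:

```python
import random

random.seed(42)

# Hypothetical numbers for illustration only.
N_WORKLOADS = 16    # independent, sporadic workloads in the data center
PEAK = 100          # resource units a workload needs while bursting
IDLE = 5            # resource units it needs the rest of the time
BURST_PROB = 0.10   # fraction of time any one workload is bursting

# Siloed model: every workload owns hardware sized for its own peak,
# because resources cannot be moved between silos fast enough to share.
siloed_capacity = N_WORKLOADS * PEAK

# Pooled model: a fast fabric lets any resource serve any workload, so
# the pool only needs to cover the worst *simultaneous* demand observed.
pooled_capacity = 0
for _ in range(10_000):
    total = sum(PEAK if random.random() < BURST_PROB else IDLE
                for _ in range(N_WORKLOADS))
    pooled_capacity = max(pooled_capacity, total)

print(f"siloed capacity needed: {siloed_capacity}")
print(f"pooled capacity needed: {pooled_capacity}")
```

In this toy setup the pooled figure typically lands at roughly half the siloed one, because sporadic workloads rarely all burst at once; the faster the fabric can reshuffle resources, the closer you can run to that lower bound.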
You may have read some recent news about Sun making FCoE available as part of their OpenSolaris project. Just today, QLogic announced that they are now shipping the industry's first Converged Network Adapter supporting FCoE. And EMC announced they have taken another step closer to offering FCoE on their CLARiiON line. Expect to see even more announcements in the coming weeks and months from our other FCoE partners that will enable customers to deploy a complete FCoE solution.
Today, as I sat in my office wondering what to do since I cannot play Scrabulous any more (still trying for a 500-point game; I hit 490 the other week), I was reading about clouds. Not the cloudy kind of judgment that causes things like Scrabulous to be shut down, but the kind of clouds at the on-ramp of the superhighway of hype: cloud computing! There was an interesting article on GigaOm today, "Networking Vendors Must Change Their Stripes," about the opportunity provided by the cloud computing evolution that is beginning to happen in the market.

The first thing about evolution is that these architectural shifts do not happen nearly as fast as the authors of such articles would like to think, and definitely not as fast as it takes to write said article. Notwithstanding that, there is some real 'meat' behind the cloud movement. It is an EVOLUTION, though: an evolution of servers, of storage, of the networks that interconnect them, of load balancing, of firewalling, of security policy, of the atomic unit that application processing architectures are built on, of management tools, and of billing/accounting models. Combine them and, yes, if you compared the current de rigueur state of computing to the possibilities enabled by the cloud models, the end result would look REVOLUTIONARY.

However, evolution takes time, and in that there is a distinct first-mover advantage that sometimes comes to bear. For instance, as I commented in my reply to the GigaOm article, we have been focused on virtualizing as much of our infrastructure as possible. It is not a quick journey; it's not a simple feature; it's not a hack. It's a complete top-down and bottom-up redesign of many things that people take for granted. It's looking at the hardware, the ASICs, the memory subsystems and controllers, resource schedulers, arbiters, and software operating systems designed with stateful process restart and fully separate, independent processes for each function (a toy sketch of the restart idea appears at the end of this post). This takes a long time.

For some functions the virtual appliance concept makes sense; I have been an advocate of this for some of our own products for a long time. These would be products where the underlying hardware is not the source of differentiation or competitive advantage, and where having an appliance that can be ported from one class of machine to another offers some intrinsic value or lets the customer reuse processing cycles more efficiently. I can't publish our road map and state which Cisco applications lend themselves best to this, but let's say most things with deep packet inspection and encryption processing DO NOT lend themselves to virtual appliances very well. Given that caveat, what applications do you want to see us release as virtual appliances?

Now for the good news: we've been preparing for this for over six years. From the first virtualized firewall, to the first virtualized load balancer, to the Nexus 7000 and Nexus 5000 that enable the I/O itself to become virtual, or software-provisionable as the case may be. We also brought out tools like VFrame to simplify deployment and automate common IT workflows, so we can speed up IT responsiveness and really become an enabler of Enterprise Clouds. Kind of a profound realization: we have the tools today that can build Enterprise Clouds.
At least the core infrastructure and a lot of the hard part. We still have work to do, and there are still significant organizational barriers to the deployment of some of these offerings, but they are maturing and evolving. Our competitors are trying as well, through their M&A strategy or, in another case, through new management that may be a result of that same M&A strategy's execution path. All I can say is that the next few years are going to continue to be very, very fun.
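And here is that toy sketch of the "stateful process restart" point from above. It is purely illustrative, in no way Cisco or NX-OS code: each function runs as an independent process that checkpoints its state, so a supervisor can restart it after a fault without losing work.

```python
import json
import multiprocessing as mp
import os

CHECKPOINT = "worker_state.json"  # hypothetical checkpoint file

def worker(crash_at: int) -> None:
    """Counts to 10, checkpointing after every step; faults once on purpose."""
    state = {"count": 0}
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)            # resume from the last checkpoint
    while state["count"] < 10:
        state["count"] += 1
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)             # persist state before moving on
        if state["count"] == crash_at:
            os._exit(1)                     # simulate a process fault

if __name__ == "__main__":
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)               # start from a clean slate
    while True:                             # supervisor loop
        p = mp.Process(target=worker, args=(4,))
        p.start()
        p.join()
        if p.exitcode == 0:
            break                           # worker finished cleanly
        print("worker faulted; restarting it from its checkpoint")
    with open(CHECKPOINT) as f:
        print("final state:", json.load(f))  # count == 10, no work lost
```

The design goal is fault containment: one function's crash neither disturbs its siblings nor forfeits the state it had already built up.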
Mario Apicella, senior analyst for the InfoWorld Test Center, recently posted the results of his comprehensive test of the Cisco Nexus 5020. The article provides good insight into the Nexus 5020 specifically and the Nexus family in general; his experiences testing Priority Flow Control (PFC) and configuring NX-OS are especially worth a read.
The trend to consolidate data centers is well under way, or even into the home stretch, for most companies and organizations. Nearly a year ago, a 2007 Gartner survey noted that 92% of respondents had a data center consolidation planned, in progress, or completed. So what about the software applications themselves? These have been much more distributed than data centers, having homes on the desktop and in branch offices, regional offices, and data centers.

New Research

Nemertes Research has published an interesting new report on branch IT architectures, citing that branch office application centralization may also have reached close to its limit. The report finds that 67.7% of companies currently store their applications centrally, up from 56% one year ago. Also interesting is the 25% that reported a "hybrid model," where most applications are centralized while some are still hosted locally. The question there is how you can further optimize the applications you must keep local (maybe a retail transaction app, or even basic IT services like Windows Print). Certainly virtualization can play a big role: either virtualizing the local server(s) you decide to keep in the branch, or skipping them entirely and virtualizing the branch platform to host the remaining local apps directly, a strategy Cisco is driving with the recent addition of virtualization to its WAAS platform. And then there are software-as-a-service (SaaS) options, which centralize applications even further, into the cloud of a SaaS provider like Salesforce.com, Google, and others.

What all these technologies and solutions really give you as IT leaders are two key benefits: flexibility and business agility. Flexibility so you can choose *what* application goes *where*, based on cost, time management, resiliency requirements, and other criteria, so you're no longer bound by physical or cost limits. You also gain much better business agility, because the architectures and solutions you can build with these new application delivery models allow your business to deploy new apps, features, and services much faster than before, from central infrastructure (yours or a provider's) rather than distributed systems.

While these trends toward application centralization, branch virtualization, and SaaS/cloud-based hosting are still in their early years, the direction seems pretty clear for where the majority of architectures and deployment models will go.

Your Thoughts?

Where is your organization with its application deployment and delivery models? Centralizing (and if so, which apps are going home versus staying out)? What are you still keeping local for remote users? And is SaaS a part of your plans?