I wanted to append and slightly change my aforementioned theory on server disaggregation and network evolution. Previously it went something like this: "Whenever a network transport is faster than a server bus speed, the peripheral connecting to that bus will go from a parallel connection, to a serial one, to a shared/packetized one. Printers, hard drives, CPU-to-CPU interconnects, and potentially even memory will follow this theorem. Thus, over time, all of the elements necessary to process an IT workload will be not only interconnected by a common fabric but, more importantly, connected with a layer of abstraction that will make any resource available to any workload at any time."

Now, it still seems to be holding true (even 3-4 years after I made my first slide trying to explain this trend), but I realized in conversation with some sage reporters today that there is another axis to the graph: network speed and capability. The faster and more capable the network, the more disaggregated the server becomes. The faster and more capable the network, the more it consolidates other network types. I need to sit down with a decent Cabernet and mull over whether there is an end state to be reached. But imagine a data center that looked like a rack of CPU, a rack of memory, a rack of storage, a common network linking these physical assets, and a network-enabled hypervisor running on top of this infrastructure providing a layer of abstraction between the physical elements and the operating system(s). That would be interesting. The faster I can reallocate resources against workload, the less aggregate resource I need to support an increasingly sporadic workload profile.
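That closing point, that a shared pool can cover sporadic demand with less total capacity than dedicated silos, is classic statistical multiplexing, and a toy simulation makes it concrete. Everything here (the workload count, spike sizes, and probabilities) is an invented illustration, not data from the post:

```python
import random

random.seed(42)

NUM_WORKLOADS = 20   # hypothetical number of sporadic workloads
SAMPLES = 10_000     # simulated time steps

def demand():
    """One workload: idle (1 unit) 90% of the time, spiking to 10 units 10% of the time."""
    return 10 if random.random() < 0.1 else 1

# Silo model: every workload owns enough capacity for its own peak.
dedicated = NUM_WORKLOADS * 10

# Pool model: one shared pool only has to cover the observed aggregate peak.
pooled_peak = max(
    sum(demand() for _ in range(NUM_WORKLOADS)) for _ in range(SAMPLES)
)

print(f"dedicated capacity needed: {dedicated}")
print(f"pooled capacity needed:    {pooled_peak}")
```

Because the 20 workloads rarely spike at the same moment, the shared pool's observed peak stays well below the 200 units the per-workload silos would require; that gap is the capacity savings the post is pointing at, and the faster resources can be reallocated, the closer you can run to the pooled number.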
Also, couple this with witty comment #2, "The value of virtualization is compounded by the number of resources consolidated into the virtualized data center," and we have a scenario that drives branch consolidation, virtual desktop infrastructure, and other forms of resource consolidation into the data center.

Now, there are still some networking firms out there chasing a speeds-and-feeds game. Some have not recognized the value of innovation. Some still go for the largest routing table, CAM table, or buffer pool, the highest millions-of-packets-per-second measurement, or the biggest Gbps backplane-speed comparison. Some hire third parties that were strong advocates of Gigabit Token Ring to champion their product marketing efforts and contrive tests to show how fast/great/cool/neat/geewhiz/wow/smart they are. Some wrap their products in a green flag, parade them through these 'testing houses,' and declare victory. Some simply exist.

Well, my friends, the data center is a rapidly evolving part of the network, one that demands focus, innovation, and efficiency, not just existence. So, to commemorate a great author who sadly passed away today: "Blow the dust off the clock. Your watches are behind the times. Throw open the heavy curtains which are so dear to you - you do not even suspect that the day has already dawned outside."
You may have read some recent news about Sun making FCoE available as part of their OpenSolaris project. Just today, QLogic announced that they are now shipping the industry's first Converged Network Adapter supporting FCoE. And EMC announced they have taken another step closer to offering FCoE on their CLARiiON line. Expect to see even more announcements in the coming weeks and months from our other FCoE partners that will enable customers to deploy a complete FCoE solution.
Mario Apicella, senior analyst for the InfoWorld Test Center, recently posted the results of his comprehensive test of the Cisco Nexus 5020. The article provides good insight into the Nexus 5020 in particular and the Nexus family in general. Of particular interest are his experiences testing Priority Flow Control (PFC) and configuring NX-OS.
The trend to consolidate data centers is well in process, or even into the home stretch, for most companies and organizations. A 2007 Gartner survey, nearly a year ago, noted that 92% of respondents had a data center consolidation planned, in progress, or completed.

So what about the software applications themselves? These have been much more distributed than data centers, having homes on the desktop and in branch offices, regional offices, and data centers.

New Research

Nemertes Research has published an interesting new research report on branch IT architectures, finding that branch office application centralization may also have come close to reaching its limit. The report cites that 67.7% of companies currently store their applications centrally, up from 56% one year ago. Also interesting is the 25% that reported a "hybrid model," where most applications are centralized while some are still hosted locally.

The question there is: how can you further optimize the applications you must keep local (maybe a retail transaction app, or even basic IT services like Windows Print)? Certainly virtualization can play a big role, either by virtualizing the local server(s) you decide to keep in the branch, or by skipping them and virtualizing the branch platform to host the remaining local apps directly, a strategy Cisco is driving with the recent addition of virtualization to its WAAS platform.

And then there are software-as-a-service (SaaS) options, which centralize applications even further, into the cloud of a SaaS provider like Salesforce.com, Google, and others.

What all these technologies and solutions really give you as IT leaders are two key benefits: flexibility and business agility. Flexibility so you can choose *what* application goes *where*, based on cost, time management, resiliency requirements, and other criteria. You're no longer bound by physical or cost limits.
You also gain much better business agility, because the architectures and solutions you can build with these new application delivery models allow your business to deploy new apps, features, and services much faster than before, from central infrastructure (yours or a provider's) rather than from distributed systems. While these trends toward application centralization, branch virtualization, and SaaS/cloud-based hosting are still in their early years, the direction seems pretty clear for where the majority of architectures and deployment models will go.

Your Thoughts?

Where is your organization with its application deployment and delivery models? Centralizing (and if so, which apps are going home vs. staying out)? What are you still keeping local for remote users? And is SaaS a part of your plans?
While sharing your quarterly results and a look forward is fair play in today's competitive IT vendor environment, grossly overstating things doesn't benefit the vendor (or its customers) in the long run. See this recent blog from The VAR Guy, who attended F5's partner summit in New Orleans recently. A couple of points are worth noting.

"Less than 10 years ago, the relevant players in the data center were server vendors," said McAdam (F5's CEO). But the data center market has shifted toward F5 Networks and its network application expertise, he insisted. Hmmm. That means resellers (and IT buyers) should focus more on F5's (or any other vendor's) load balancers than on servers and server virtualization?

And then there is the issue of honest vendor claims (even if New Orleans can lead to late nights and rough mornings): "At the product level," McAdam said, "we beat Cisco 99 percent of the time." Hmmm (again). If F5 wins 99% of the time, and even half of the 2,500+ ACE customer wins were competitive (the real percentage is higher), then F5 must have won well over 120,000 deals in the two years since ACE was launched. Hard numbers to back up.

So a note to the wise: enjoy the glory of a good quarter and share it with your field counterparts, but it helps to stay between the lines, even if that's in the French Quarter down in New Orleans.
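The back-of-the-envelope math above can be made explicit. The only inputs are the figures quoted in the post (2,500 ACE customers, at least half of them competitive, a claimed 99% win rate); the exact totals scale with whatever competitive share you assume:

```python
# Sanity check of the "we beat Cisco 99 percent of the time" claim,
# using only the figures quoted in the post.
ace_customers = 2_500          # Cisco ACE customers cited in the post
competitive_share = 0.5        # "even half the opportunities were competitive"
claimed_win_rate = 0.99        # F5's claimed win rate

# Every competitive ACE win for Cisco is, by definition, an F5 loss.
f5_losses = ace_customers * competitive_share

# If those losses are only 1% of competitive deals, the implied market is huge.
total_competitive_deals = f5_losses / (1 - claimed_win_rate)
f5_wins = total_competitive_deals * claimed_win_rate

print(f"Implied competitive deals: {total_competitive_deals:,.0f}")
print(f"Implied F5 wins:           {f5_wins:,.0f}")
```

With half the ACE base counted as competitive, the claim implies roughly 125,000 competitive deals and about 123,750 F5 wins in two years; assume every ACE customer was competitive and the implied totals double. Either way, the implied unit volume is the part that is hard to back up.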