The Olympics gives the world two weeks to pause and marvel at athletes who shine brightly with their intense dedication to the pursuit of excellence, spurred by fierce competition. It’s worth taking a few minutes to note the results of an equally intense dedication to televising the Games (also under relentless market competition) with the most innovative technology, bringing the excitement, the drama, and the incredible achievement of the Olympics to as many people as possible. NBC Universal is making broadcasting history this week by presenting 3,600 hours of coverage from Beijing, more than the combined hours of all previous Summer Olympic Games. Viewers of the 2008 Olympic Games will be able to use their PCs and laptops to access 2,200 hours of video that they can play back on demand, as well as 3,000 hours of highlights, rewinds, and encores. People will also be able to watch video and view results on their smartphones.

With all that video to transmit, NBC has selected Cisco to provide IP video network infrastructure and video encoding solutions during the network’s coverage of the 2008 Beijing Olympic Games, including one of Cisco’s data center technologies: Wide Area Application Services (WAAS). Rather than sending 400 video shot selectors and editors to Beijing, NBC will be using Cisco WAAS for WAN optimization and application acceleration between Beijing, New York, and Los Angeles. By optimizing 35Mbps links to deliver an effective 140Mbps of throughput, Cisco WAAS allows editors and shot selectors to access gigabyte-sized video files over the WAN with the same performance as if they were stored locally. This reduces the operating costs of housing, air travel, local transportation, and food. Avoiding 800 airplane trips also supports NBC’s green initiatives for the Olympic Games.

To transmit video to its studios, NBC has deployed three 155Mbps OC-3 pipes between Beijing and New York. A Cisco 12004/4 router collapses all three into one virtual pipe using equal-cost load balancing.
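To put the WAN optimization numbers in perspective, here is a rough back-of-the-envelope sketch. The 35Mbps link speed and the 4x effective-throughput figure come from the post; the 2GB file size is an illustrative assumption for a "gigabyte-sized" video file, and protocol overhead is ignored.

```python
# Back-of-the-envelope: how long a large video file takes to cross the WAN,
# with and without WAN optimization. The 35 Mbps raw rate and 140 Mbps
# effective rate come from the post; the 2 GB file size is an assumption.

def transfer_time_seconds(file_size_gb, link_mbps):
    """Time to move a file over a link, ignoring protocol overhead."""
    file_size_megabits = file_size_gb * 8 * 1000  # GB -> megabits (decimal units)
    return file_size_megabits / link_mbps

raw = transfer_time_seconds(2, 35)         # unoptimized 35 Mbps link
optimized = transfer_time_seconds(2, 140)  # WAAS-optimized effective rate

print(f"Raw 35 Mbps link:      {raw / 60:.1f} minutes")
print(f"With WAAS (4x, 140M):  {optimized / 60:.1f} minutes")
```

The roughly four-fold reduction in wait time per file is what lets a shot selector in New York work against Beijing-hosted footage as if it were local.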
The types of traffic on the network range from video content and IP telephony to teleprompter content and event scoring. Cisco WAAS leverages rather than overwrites router QoS, giving NBC the confidence to dedicate 400Mbps to video, unlike tunnel-based architectures. So sit back and enjoy the Olympic Games this year, wherever you happen to be. Employers around the world are already anticipating lost productivity due to the video accessibility of the Games, and wondering how their internal networks are going to handle the increased load. I’ll be dutifully watching my favorite events, largely the track and field ones, while my friend Feng, who wrote most (if not all) of this posting, goes for synchronized swimming and rhythmic gymnastics.
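The bandwidth budget above can be sanity-checked with simple arithmetic. The OC-3 line rate and the 400Mbps video allocation come from the post; treating the three links as a single aggregate pipe mirrors the equal-cost load balancing described earlier.

```python
# Quick sanity check on the bandwidth budget: three OC-3 links collapsed
# into one virtual pipe, with 400 Mbps dedicated to video by QoS policy.
# Figures come from the post; this just confirms the headroom left over.

OC3_MBPS = 155
NUM_LINKS = 3
VIDEO_ALLOCATION_MBPS = 400

aggregate = OC3_MBPS * NUM_LINKS              # virtual pipe capacity: 465 Mbps
headroom = aggregate - VIDEO_ALLOCATION_MBPS  # left for telephony, teleprompter,
                                              # and event scoring: 65 Mbps

print(f"Aggregate capacity: {aggregate} Mbps")
print(f"Non-video headroom: {headroom} Mbps")
```

That remaining 65Mbps is why the QoS guarantee matters: without it, the latency-sensitive telephony and scoring traffic would be competing head-on with bulk video.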
I wanted to append and slightly change my aforementioned theory on server disaggregation and network evolution. Previously it went something like this: “Whenever a network transport is faster than a server bus speed, the peripheral connecting to that bus will go from a parallel connection to a serial one to a shared/packetized one. Printers, hard drives, CPU-to-CPU interconnects, and potentially even memory will follow this pattern. Thus, over time, all of the elements necessary to process an IT workload will be not only interconnected by a common fabric, but more importantly connected with a layer of abstraction that will make any resource available to any workload at any time.”

It still seems to be holding true (even 3-4 years after I made my first slide trying to explain this trend), but I realized in conversation with some sage reporters today that there is another axis to the graph: network speed and capability. The faster and more capable the network, the more disaggregated the server becomes. The faster and more capable the network, the more it consolidates other network types. I need to sit down with a decent Cabernet and mull over whether there is an end state to be reached. But if I had a data center that looked like a rack of CPU, a rack of memory, a rack of storage, a common network linking these physical assets, and a network-enabled hypervisor running on top of this infrastructure providing a layer of abstraction between the physical elements and the operating system(s), that would be interesting. The faster I can reallocate resources against workload, the less aggregate resource I need to support an increasingly sporadic workload profile.
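That last claim, that pooled resources need less total capacity than dedicated ones, is the classic statistical-multiplexing argument, and a toy simulation makes it concrete. Everything here is an illustrative assumption (20 workloads, independent demand uniform between 0 and 10 units), not a measurement.

```python
# Toy illustration of the pooling argument: with sporadic, uncorrelated
# workloads, the peak of the pooled demand is far below the sum of the
# individual peaks, so a shared pool of disaggregated resources needs less
# total capacity than dedicated per-server sizing. All numbers are
# illustrative assumptions.
import random

random.seed(42)
NUM_WORKLOADS = 20
NUM_SAMPLES = 1000

# Each workload's demand fluctuates independently between 0 and 10 units.
samples = [
    [random.uniform(0, 10) for _ in range(NUM_WORKLOADS)]
    for _ in range(NUM_SAMPLES)
]

# Dedicated sizing: each workload gets enough capacity for its own peak.
per_workload_peaks = [max(s[i] for s in samples) for i in range(NUM_WORKLOADS)]
dedicated_capacity = sum(per_workload_peaks)

# Pooled sizing: the shared pool only needs the peak of the *summed* demand.
pooled_capacity = max(sum(s) for s in samples)

print(f"Dedicated sizing: {dedicated_capacity:.0f} units")
print(f"Pooled sizing:    {pooled_capacity:.0f} units")
```

The gap between the two numbers is exactly the "less aggregate resource" dividend, and it grows as workloads get more sporadic and less correlated, which is why fast reallocation over a common fabric pays off.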
Also, couple this with witty comment #2: “The value of virtualization is compounded by the number of resources consolidated into the virtualized data center,” and we have a scenario that drives branch consolidation, virtual desktop infrastructure, and other forms of resource consolidation into the data center.

Now, there are still some networking firms out there chasing a speeds-and-feeds game. Some have not recognized the value of innovation. Some still go for the largest routing table, CAM table, buffer pool, millions-of-packets-per-second measurement, or Gbps backplane speed comparison. Some hire third parties that were strong advocates of Gigabit Token Ring to champion their product marketing efforts and contrive tests to show how fast/great/cool/neat/gee-whiz/wow/smart they are. Some wrap their products in a green flag, parade them through these ‘testing houses,’ and declare victory. Some simply exist. Well, my friends, the data center is a rapidly evolving part of the network, one that demands focus, innovation, and efficiency, not just existence. So, to commemorate a great author who sadly passed away today: “Blow the dust off the clock. Your watches are behind the times. Throw open the heavy curtains which are so dear to you - you do not even suspect that the day has already dawned outside.”
You may have read some recent news about Sun making FCoE available as part of their OpenSolaris project. Just today, QLogic announced that they are now shipping the industry’s first Converged Network Adapter supporting FCoE. And EMC announced they have taken another step closer to offering FCoE on their CLARiiON line. Expect to see even more announcements in the coming weeks and months from our other FCoE partners, enabling customers to deploy a complete FCoE solution.
Mario Apicella, senior analyst for the InfoWorld Test Center, recently posted the results of his comprehensive test of the Cisco Nexus 5020. The article provides good insight into the Nexus 5020 in particular and the Nexus family in general. Of particular interest are his experiences testing Priority Flow Control (PFC) and configuring NX-OS.
The trend to consolidate data centers is well in progress, or even into the home stretch, for most companies and organizations. A 2007 Gartner survey noted that 92% of respondents had a data center consolidation planned, in progress, or completed.

So what about the software applications themselves? These have been much more distributed than data centers, having homes on the desktop and in branch offices, regional offices, and data centers.

New Research

Nemertes Research has published an interesting new report on branch IT architectures, citing that branch office application centralization may also have come close to its limit. The report finds that 67.7% of companies currently store their applications centrally, up from 56% one year ago. Also interesting is the 25% that reported a “hybrid model,” where most applications are centralized while some are still hosted locally. The question there is: how can you further optimize the applications you must keep local (perhaps a retail transaction app, or even basic IT services like Windows Print)? Certainly virtualization can play a big role: either virtualizing the local server(s) you decide to keep in the branch, or skipping them entirely and virtualizing the branch platform to host the remaining local apps directly, a strategy Cisco is driving with the recent addition of virtualization to its WAAS platform.

And then there are software-as-a-service (SaaS) options, which centralize applications even further, into the cloud of a SaaS provider like Salesforce.com, Google, and others.

What all these technologies and solutions really give you as IT leaders are a couple of key benefits: flexibility and business agility. Flexibility so you can choose *what* application goes *where*, based on cost, time management, resiliency requirements, and other criteria, so you’re no longer bound by physical or cost limits.
You also gain much better business agility, because the architectures and solutions you can build with these new application delivery models allow your business to deploy new apps, features, and services much faster than before, from central infrastructure (yours or a provider’s) rather than distributed systems. While these trends toward application centralization, branch virtualization, and SaaS/cloud-based hosting are still in their early years, the direction seems pretty clear for where the majority of architectures and deployment models will go.

Your Thoughts?

Where is your organization with its application deployment and delivery models? Centralizing (and if so, which apps are going home vs. staying out)? What are you still keeping local for remote users? And is SaaS a part of your plans?