Cisco Blogs


Data Center and Cloud

Video Applications and the Network: Friends or Foes?

Everyone knows the pluses and minuses of real-time video applications for business. Plus points include more collaboration, stronger executive communications, and easier-to-digest employee training. Some of the major downsides are a potentially large number of servers (since servers are often mapped to sets of end users rather than load balanced), much bigger bandwidth requirements to remote offices, and/or dedicated proxy servers at those remote offices to receive video streams for distribution to end users, along with the “pre-positioning” of those servers to receive the video each time a live session occurs.

With Cisco’s recent Data Center 3.0 announcement, the company showed how the network can empower the delivery of video end-to-end, from the data center over the WAN to the remote office. Specifically, by leveraging the new “video smarts” in the ACE application switch family, customers can reduce the number of servers required to process and distribute video streams (can you say “load balancing for live video”?). And for the WAN and remote video viewers, customers can leverage new “video smarts” in the WAAS family to send a single live video stream over the WAN and then tee up multiple copies locally to send to end users in each remote office (see the YouTube interview on WAAS video delivery, speaking of video); a rough sketch of that local fan-out idea appears at the end of this post.

A 2006 article written by another vendor and published by SearchNetworking highlighted several of these video delivery challenges, and yet video adoption continues to grow significantly.

So blog back to us: what are your thoughts on deploying live (or on-demand) video apps, and the server- and network-related challenges? Do you have video in production now, or are you thinking about it? What’s your business goal or ROI target?
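To make the WAAS fan-out idea above a bit more concrete, here is a minimal, hypothetical sketch in plain Python sockets (nothing Cisco-specific; the ports, names, and structure are invented for illustration) of the “one live stream over the WAN, many copies on the LAN” pattern: a branch-office relay accepts a single inbound stream and copies each chunk to every locally connected viewer.

```python
# Toy branch-office fan-out relay: one live stream arrives over the WAN,
# and each chunk is copied to every locally connected viewer.
# This is NOT how Cisco WAAS is implemented; it only illustrates the pattern.

import socket
import threading

WAN_LISTEN = ("0.0.0.0", 9000)   # hypothetical port where the single WAN stream arrives
LAN_LISTEN = ("0.0.0.0", 9001)   # hypothetical port where local viewers connect

viewers = []                     # sockets of locally attached viewers
viewers_lock = threading.Lock()

def accept_viewers():
    """Accept local viewer connections and remember them for fan-out."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LAN_LISTEN)
    srv.listen()
    while True:
        conn, _addr = srv.accept()
        with viewers_lock:
            viewers.append(conn)

def relay_wan_stream():
    """Read the single inbound WAN stream and copy every chunk to each viewer."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(WAN_LISTEN)
    srv.listen()
    wan_conn, _addr = srv.accept()
    while True:
        chunk = wan_conn.recv(65536)
        if not chunk:
            break
        with viewers_lock:
            for v in list(viewers):
                try:
                    v.sendall(chunk)
                except OSError:
                    viewers.remove(v)  # drop viewers that have disconnected

if __name__ == "__main__":
    threading.Thread(target=accept_viewers, daemon=True).start()
    relay_wan_stream()
```

The point is simply that only one copy of the stream crosses the WAN link, while the replication happens on the branch LAN where bandwidth is cheap.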

A Cloudy Day

Not so much a comment on the weather as some prognostication around the evolution of cloud computing…

1) Today the term ‘cloud’ doesn’t mean a whole lot. It’s a nice catchy phrase for what many companies have been doing for a long time: build a data center, outsource processing cycles and storage capacity to a variety of consumers, and charge for it. Make sure it is connected to a network so the outsourced service can be accessed ubiquitously from a variety of locations, and allow the compute and storage capacity to be re-purposed.

2) What seems to be changing is the rate of change, the pace or velocity so to speak. To add a consumer to a hosted data center model in the mid-to-late ’90s involved buying a ‘cage’ and filling it with lots of physical stuff: routers, servers, storage arrays, load balancers, switches, firewalls, tape drives, terminal servers, and so on. Deployment time was measured in months, weeks at best, to turn a new service up. Even a simple capacity add required procurement, cabling, electricians, rack mounting, etc. The fastest single activity in the workflow could be measured in days.

3) Time compressed. Server virtualization compressed the time frame in which a ‘server’ (err, VM) could be turned up, cloned, copied, and uniquely provisioned. This put strain on the other, traditionally physical parts of the infrastructure such as storage, load balancers, and security. Those areas have responded with their own forms of virtualization, and there are emerging provisioning platforms for enterprises and service providers that automate some of the monotonous workflow tasks to speed up delivery and thus the efficacy of the entire service. That leaves us where we are today. But what about moving forward?

4) Enterprises will build mini-clouds. As time compresses and workload can be rapidly re-provisioned and re-purposed in an increasingly automated fashion, the aggregate number of CPU cores/sockets and the amount of memory needed to support the peak aggregate workload within the cloud will decrease.

5) Service providers will move into higher-revenue cloud models as they continue to try to extract more revenue per square foot or per kilowatt-hour out of a hosting facility. This will be driven by shareholders and market consolidation, as well as by the facilities that become available to the SPs as they consolidate their own data center infrastructures.

6) Hypervisors will become THE way of defining the abstraction between physical and virtual within a server, and the hypervisor ‘interface’ between the VM and the hypervisor will be standardized. This will allow a VM created on Xen to move to VMware or Hyper-V, and so on. Management capability and system-wide integration will become the key differentiators for this piece of technology.

7) Service providers will scale out their cloud managed application/hosting/hypervisor offerings initially by taking ‘low-hanging fruit’ applications like email, web, and call managers, but will then want to expand into larger enterprise customers and more custom applications. The standardized hypervisor will enable workload portability, and the SPs will try to acquire more customers.

8) IP addressing will move to IPv6, or IPv4 RFCs will be standardized, to allow for a globally unique device/VM ID within the address space and a separate location/provider-sensitive ID, so that workload can be moved from one provider to another ‘in flight’ without changing the client’s host stack or known IP address. Here’s an example from my friend Dino.

9) This will allow workload portability between enterprise clouds and service provider clouds.

10) The SP community will embrace this and start aggressively trying to capture as much footprint as possible so they can fill their data centers to near capacity, giving them maximum efficiency in their operations. This holds to my rule that ‘the value of virtualization is compounded by the number of devices virtualized.’

11) Someone will write a DNS or DNS-coupled workload exchange. This will let the enterprise automate the bidding of workload allocation against some pool of service providers offering compute, storage, and network capacity at a given price. The faster and more seamless the technologies above make the shift of workload from one provider to another, the simpler it is for an exchange or market-based system to be the controlling authority for the distribution of workload, and thus dollars, to the provider most capable of processing it. (A toy sketch of such an exchange follows this list.)

12) Skynet becomes self-aware.
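As a thought experiment for point 11, here is a minimal sketch of what the core of such a workload exchange might look like. It is not a real product or protocol; the Workload and ProviderOffer types, the per-vCPU-hour pricing model, and all of the numbers are invented for illustration.

```python
# Toy 'workload exchange': an enterprise describes a workload, providers
# publish offers, and the exchange awards the workload to the cheapest
# provider that can actually host it. Everything here is hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    vcpus: int
    memory_gb: int
    hours: int

@dataclass
class ProviderOffer:
    name: str
    free_vcpus: int
    free_memory_gb: int
    price_per_vcpu_hour: float   # assumed pricing model for this sketch

    def can_host(self, w: Workload) -> bool:
        return self.free_vcpus >= w.vcpus and self.free_memory_gb >= w.memory_gb

    def bid(self, w: Workload) -> float:
        return w.vcpus * w.hours * self.price_per_vcpu_hour

def award(workload: Workload, offers: list[ProviderOffer]) -> Optional[ProviderOffer]:
    """Return the capable provider with the lowest bid, or None if nobody can host it."""
    capable = [o for o in offers if o.can_host(workload)]
    return min(capable, key=lambda o: o.bid(workload)) if capable else None

if __name__ == "__main__":
    job = Workload(vcpus=8, memory_gb=32, hours=24)
    offers = [
        ProviderOffer("sp-east",  free_vcpus=64, free_memory_gb=256, price_per_vcpu_hour=0.05),
        ProviderOffer("sp-west",  free_vcpus=16, free_memory_gb=64,  price_per_vcpu_hour=0.04),
        ProviderOffer("sp-small", free_vcpus=4,  free_memory_gb=16,  price_per_vcpu_hour=0.01),
    ]
    winner = award(job, offers)
    print(winner.name if winner else "no capable provider")  # sp-west: cheapest offer that fits
```

A real exchange would obviously need to fold SLAs, trust, data gravity, and migration cost into the bid, but the mechanism, a market that clears workload against published capacity and price, is the same.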

A Model for Open Platform Development

I was discussing the Apple victory this weekend with a lot of friends. Personally, I think the mobile platform war was just won, or darn close to it, and even naysayers must admit it was a major offensive that opened the battle on a new front (think Battle of Normandy, 1944). What I really appreciate, though, is the brilliance of the development model and what was retained in the ‘closed’ portions of the platform, which to me is a masterstroke. The application acquisition, commercial transaction, and catalog are all maintained as part of Apple’s infrastructure. Subtly, but potentially most important for ongoing development and adoption, is the retention of the application upgrade cycle, version control so to speak: Apple built the upgrade model centrally, into a core part of the mobile app delivery platform. Most importantly, Apple is masterful at consumerizing IT, making technology fun, cool, and approachable. There are many lessons we can learn from this, lessons that go back to Cisco’s core as a company: why configure static routes when dynamic routing protocols make things simpler and easier; why run four routing protocols when you can run one for multiple network protocols; and so on. In the end, revolutionary developments, market entrances, and new operating models for open platform development like this set a good bar for the whole technology industry. So while applauding Apple’s success and style in this announcement and achievement, I also want to make sure we take away some very positive lessons about platform development and IT simplification.

dg

Q&A with John McCool on Virtualization in the Network

Jim Duffy from Network World just posted a solid interview with John McCool, our SVP of the Data Center Technology Group here at Cisco (I sorta work for John, so be nice in the comments, okay?). It’s a really good snapshot of how we are approaching optimizing the network for future, broader deployments of virtualization, especially some of the advanced capabilities virtualization enables such as disaster recovery, VM portability/mobility, and VM segmentation. John also gives some good leading indicators of how our network platforms are evolving and what the future holds for key investments such as our Catalyst 6500 Series and Cisco Nexus family of data center switches.

‘Green’ Business, not just DC

Sitting here in Data Center land, I often write about how we can drive energy efficiency in the data center, of course. But there was a nice article on TechSoup today showcasing some comments from our VP of Green Engineering, Paul Marcoux, about the potential for collaboration technologies and Unified Communications to drive a tremendous reduction in carbon emissions, potentially greater than that of your data centers alone. Employee commutes, both from home to the office and from office to office, plus the ever-present flight out to meet customers and ‘hit the field/road’, create a tremendous cost center and carbon footprint. Some of those costs the business has to bear; others the employee does. I am not going to be so bold as to advocate permanent telecommuting for all work types, but there have been some efforts to create work spaces that are shared by multiple businesses, located along mass transit lines, and linked back to the main HQ or campus locations in the area with collaboration technologies like TelePresence. In many cases the participating companies then subsidize mass transit fares for the employee base as well.

Collaboration technologies also benefit from virtualization technologies in the data center, which allow more efficient implementation and lower the cost of deploying and operating these ever more critical technologies.