Reduction in the complexity of deploying and managing services, accelerating new service introduction, and reducing capital/operational expenditure overhead are key priorities for network operators today. These priorities are in part driven by the need to generate more revenue per user. But competitive pressures and increasing demand from consumers are also pushing them to experiment with new and innovative services. These services may require unique capabilities that are specific to a given network operator and in addition may require the ability to tailor service characteristics on a per-consumer basis. This evolved service delivery paradigm mandates that the network operator have the ability to integrate policy enforcement alongside the deployment of services, applications, and content, while maintaining optimal use of available network capacity and resources.
Existing service-related features and resources, such as service appliances, integrated router service blades, servers, content caches, and applications, will continue to be utilized. In addition, new technology such as Cisco onePK and Linux Containers will be integrated to enable a much broader service delivery capability. Network operators will most likely opt to distribute the full set of resources across their entire network footprint, so flexible access to them will become increasingly important. Real-world deployments will demand better utilization of dormant service function capacity, and the delivery of specific types of services from different locations at different times depending upon customer demand. (For example, some services may require deployment close to the subscriber edge whilst others may come from one or more regional locations within the transport network or data centers.)
Meeting these challenges cost-effectively and flexibly will require new approaches in service delivery that enable:
Non-disruptive service introduction
- Allow new services to be introduced without significant disruption to the existing network topology and already deployed services, reducing time to market (TTM) across design, upgrade, testing, and deployment.
Network & services infrastructure decoupling
- The ability to distribute service delivery locations throughout the network and to deploy Service Nodes independently of the physical network topology, thereby breaking the one-to-one mapping between customer attachment points and service delivery locations.
Service resource pooling
- Service resources may be locally or centrally pooled and shared by all edge routers, providing flexibility and streamlined capacity planning of services.
- Effective use of centralized and distributed service functions regardless of physical location to leverage the economies of resource scale.
- The ability for services to be protected by the underlying traffic redirection technology, as well as integration of Fast Reroute (FRR) functionality with application/service failover triggers for rerouting traffic.
- Routing of individual service flows based on real-time network conditions and service delivery and/or network policies.
- Policy options to manage all services and service delivery locations centrally, while enabling the services infrastructure to respond to current network conditions independently.
- Providing a real-time view of all service, application, and content resources to the network operator.
- As service functions and resources may be deployed independently of the underlying physical topology, services may be expanded or contracted based upon consumer demand. This includes an ability to add or remove services from VRFs on demand.
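The pooling model above can be made concrete with a small sketch. The code below is a toy illustration only, not a real Cisco API: the `ServiceNode` and `ServicePool` names, fields, and selection policy are all hypothetical. It shows the core idea of a shared pool: each new flow is steered to the least-utilized node that still offers the required service function, so capacity is shared across all edge routers rather than tied to one attachment point.

```python
# Toy sketch of service resource pooling. All names here are hypothetical,
# chosen for illustration; a real deployment would use the operator's
# orchestration and traffic-redirection systems.
from dataclasses import dataclass


@dataclass
class ServiceNode:
    name: str
    services: set        # service functions this node can deliver
    capacity: int        # maximum concurrent flows
    active_flows: int = 0

    @property
    def utilization(self) -> float:
        return self.active_flows / self.capacity


class ServicePool:
    """A pool of service nodes shared by all edge routers."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def select(self, service):
        """Steer a new flow to the least-utilized node offering `service`.

        Returns the chosen node, or None if no node has spare capacity.
        """
        candidates = [n for n in self.nodes
                      if service in n.services and n.active_flows < n.capacity]
        if not candidates:
            return None
        node = min(candidates, key=lambda n: n.utilization)
        node.active_flows += 1
        return node


# A subscriber-edge cache and a regional multi-function node share one pool.
pool = ServicePool([
    ServiceNode("edge-cache-1", {"cache"}, capacity=2),
    ServiceNode("regional-fw-1", {"firewall", "cache"}, capacity=4),
])

print(pool.select("cache").name)  # → edge-cache-1 (both idle; first least-utilized wins)
```

Because selection is driven by current utilization rather than topology, the same mechanism naturally expands or contracts service capacity with demand: adding a node to the pool immediately makes it a candidate, with no change at the customer attachment points.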
A plethora of technologies can be applied to serve these approaches. These will be explored in future posts to this blog.
Tags: architect, capacity planning, Cisco, decoupling, delivery of services, deployment, engineers, extensibility, infrastructure, innovative services, Linux Containers, network topology, onePK, resource pooling, Servers, service appliances, service delivery, services, technology