The survey results are in, and they confirm that service providers are ready to go mainstream with IP over DWDM (IPoDWDM). More than half of the respondents plan to deploy IPoDWDM within a year, and over 85% expect to deploy it eventually, looking beyond 2012. Infonetics Research recently carried out a global survey on packet-optical convergence technologies for routers, including IPoDWDM, OTN, and others. It published the findings for its clients on July 16th under the ‘Carrier Routing Switching and Ethernet’ Continuous Research Service.
For us, the motivation for developing IPoDWDM was simple. By integrating colored optics as interfaces within routers, providers could reduce the need for external optical-electrical-optical (OEO) transponder racks, cross-connects, and SONET/SDH switching layers. This dramatic network simplification results in fewer devices in the network, lowering maintenance, power, cooling, and rack-space operational expenses. Additionally, routers now have direct visibility into the Layer 1 optical transport (DWDM) network, enabling innovative resiliency features such as Proactive Protection.
I just read a blog post by Eve Griliches of ACG Research that highlighted some recent challenges to the progress of a single, industry-wide MPLS-TP standard being developed by the IETF/ITU Joint Working Team. I tend to agree with her that these recent events are a problem and an undesirable pothole on what is a very promising highway toward a standardized next-generation packet network. Let me provide a bit more background.
First, the world is shifting from TDM to IP and therefore (of course) the telecommunications infrastructure is also moving from TDM to IP equipment. The industry showed us their understanding of that when T-MPLS was put to rest and an almost revolutionary event happened – the IETF and the ITU joined forces to create a Joint Working Team to define how MPLS is used in a transport network. That was back in 2008.
However, what happened recently is disappointing. At the last ITU-T meeting, strong efforts were made to resurrect T-MPLS on the final day, after some of the key member countries had already gone home. The justification offered by some members was based on a few pilot deployments and older interoperability results for T-MPLS between two vendors. The result is a potential paralysis of future work, with a possible outcome of two different development tracks, creating significant expense for both service providers and equipment vendors.
T-MPLS is based on Y.1731 Ethernet OAM, which is quite different from MPLS OAM. The bottom line is that T-MPLS is not interoperable with MPLS and will create an expensive lack of interoperability between T-MPLS domains and MPLS domains. A key reason the Joint Working Team terminated work on T-MPLS in 2008 was to align OAM with MPLS, ensuring a single OAM infrastructure that protects investments and promotes interoperability.
To make matters worse, some customers have told me that certain vendors are still claiming to them that T-MPLS equals, and interoperates with, MPLS-TP. Additionally, assertions are being made that upgrading from T-MPLS to MPLS-TP will be painless and trivial. These claims are simply not true.
In speaking with Service Providers, the conversation around cloud often moves away from the technical challenges of delivering a large-scale multi-tenant on-demand offering, and into business considerations such as the readiness of customers to consume cloud services and the prioritization of cloud services to be offered.
Cisco is working with SPs and SP customers to help deliver successful offerings and advance the cloud market.
For SPs, it’s important to understand who the offering is being built for, what revenue opportunity it enables, and what ultimately needs to be delivered in order to meet the demands of the customer.
Cloud customers need to understand how the cloud will impact their business. Concerns including policy compliance, end-to-end security, quality of service, and transitioning to (and from) the cloud will need to be addressed.
The buzz associated with the 2010 FIFA World Cup may have already peaked, but the data on how sports fans viewed video from the tournament’s numerous games leave us with some noteworthy usage statistics to consider.
ESPN estimates that out-of-home viewing and usage of non-TV platforms adds an amazing 47 percent to ESPN’s daily World Cup TV average audience. Moreover, there are some interesting regional differences in the way fans in the U.S. consumed the World Cup events on television.
The U.S. Eastern time zone gets the greatest audience lift from out-of-home TV viewing (18%), while the Mountain and Pacific time zones have the greatest percent of time-shifted recorded viewing (16% and 13% respectively).
ESPN also estimates that 132 million people consumed World Cup related content across all ESPN platforms — that’s more than two out of five Americans. Of that total 132 million people, 90% watched TV, 27% used the Internet, 11% listened to Radio, 7% used mobile and 2% read ESPN The Magazine.
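As a quick sanity check on the figures above, the platform shares and the "more than two out of five Americans" claim can be worked through in a few lines of Python. The 2010 U.S. population figure below (roughly 309 million) is an assumption for illustration, not a number from ESPN's report; the percentages can sum to well over 100% because people used multiple platforms.

```python
# Sanity check of ESPN's reported World Cup reach figures.
TOTAL_REACH = 132_000_000        # people reached across all ESPN platforms (from the article)
US_POPULATION_2010 = 309_000_000 # assumed round figure for 2010, for illustration only

# Platform shares reported by ESPN; shares overlap because one person
# may have used several platforms, so they sum to more than 100%.
platform_share = {
    "TV": 0.90,
    "Internet": 0.27,
    "Radio": 0.11,
    "Mobile": 0.07,
    "ESPN The Magazine": 0.02,
}

# Convert each share into an absolute audience size.
platform_audience = {name: round(TOTAL_REACH * share)
                     for name, share in platform_share.items()}

reach_fraction = TOTAL_REACH / US_POPULATION_2010
print(f"Share of Americans reached: {reach_fraction:.0%}")  # ~43%, i.e. more than 2 in 5
for name, audience in platform_audience.items():
    print(f"{name}: {audience / 1_000_000:.1f}M")
```

Under the assumed population figure, the TV share alone works out to about 118.8 million people, and the overall reach to roughly 43% of Americans, consistent with the "more than two out of five" characterization.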
Online Activities Reach Record Levels
Soccer fans’ visits to ESPN3.com were highlighted by the USA vs. Algeria game on June 23, which drew the largest U.S. audience ever for a single sporting event on the Web. ESPNSoccernet received more visits that day than on any other day in its history, and ESPN Mobile had its most-trafficked day to date for World Cup content.
Overall online content consumption reached some impressive new highs. World Cup content on ESPN.com (including ESPN Soccernet and ESPN Deportes/copa-mundial) delivered 87.5 million visits and 305.9 million page views from June 11-27. Based on the last reported estimate, 26.4 million video starts came from World Cup highlights, news, and analysis content on the ESPN.com site.
“Ironically, it appears that the most critical factors to the success of cloud computing projects in the enterprise hinge on human factors, not technical ones. That’s because cloud computing is all about connecting IT technologies to business processes, in a way that reflects the business imperatives and organizational structure of those who are leveraging cloud.”
This struck a chord with me, as many service provider customers are raising a similar concern: “If we move applications into the SP public cloud, and we virtualize the computing, storage, and network, one mistake by an operator could impact a lot of customers.”
We’ve been developing our solutions to help reduce this risk, in much the same way that the IP NGN is virtualized and simplified with policies and adapts automatically to change. We have added more cloud intelligence into our solutions. For example, in our recent CRS-3 launch we introduced the Network Positioning System (NPS) and Cloud VPNs, both of which were designed with this in mind. With those two capabilities, we aim to reduce the critical risk factors that are key to making the adoption of cloud computing a reality.

Traditional business processes are not “cloud-like”: they’re not on-demand, not near-real-time, not dynamic, and often tied directly to a fixed set of known assets rather than abstracted from the physical world. These new features in the CRS-3 automate some of the manual touch points where human factors can interfere with the ability of a business process to be cloud-like. That lets human factors come into play where they are needed: in making the business decisions that machines just can’t make.