
It’s easy to forget about TV in summer. Sunshine, holidays and a drop-off in content mean viewers are less likely to tune in than at other times of year.

So providers should be working harder than ever to retain their viewers. Research from Sanford Bernstein shows that pay-TV subscriptions are falling: they dropped by 1.4 per cent in the third quarter of 2015, compared with a 0.9 per cent gain in the same quarter a year earlier.

And winning customers back is expensive: acquiring a new viewer costs far more than retaining an existing one. Competition is fierce, and if you lose customers, they have more choice than ever about where to go instead.

Cuddle the cord cutters

Know your customers

But there are things you can do now. You know more about your customers’ viewing habits than they do, and you can use that to your advantage. One way is to offer more relevant content.

With that insight, you can suggest content that viewers may not be aware of, helping them get the most out of their service and keeping them loyal.

You should blend elements of personalization into the customer’s viewing experience. For example, provide subtle viewing suggestions through their electronic program guide.

Flexible models

Subscription packages ought to make viewing content across devices easier, as everything is in one place. Fragmented viewing habits, where customers have a number of pay-as-you-go deals at the same time, can land them with the extra cost of paying to watch content on their phones.

Remind customers that they can access your content from anywhere and on any device and ensure your services are optimized for all platforms.

Analyze this

Offering services across devices also means you can see customers’ habits, preferences, likes and dislikes with greater accuracy. That may mean offering similar content, or box sets of a show when that series ends. It may mean offering different sports content in the off-season.

It may even mean offering payment breaks and more flexible pricing and bundling that reflect how the viewer uses your service, in order to keep them as a customer, and reduce the risk of the dreaded churn.

When it comes to the seasonal drop off, analytics can also be used on a higher level.

Look for trends. Is it always the same people who leave and come back? What types of customers are demographically most likely to leave your service during the summer? Once you start asking those questions, you can start targeting specific ‘at risk’ customer groups.

As consumers become increasingly savvy in the way they pay for their TV services, it’s up to you to become equally intelligent and flexible with the services you offer them.

Make sure that the summer slowdown doesn’t turn into a permanent drop in your subscriber numbers.

Find out more

To find out how you can improve your services and maintain a loyal subscriber base, take a look at Cisco’s Service Provider Video Solutions.

Authors

Adam Davies

Technical Leader, Engineering

Service Provider, Video Solutions


Software engineering and developer communities are driving the market for cloud consumption and leading every industry into a new era of software-defined disruption. Elastic, flexible, agile development is no longer in question as the way to innovate and reduce time to market. Open source software plays a key role in the digital transformation to cloud native, and understanding how your business strategy needs to address this next disruption in software development is crucial to the success of your business.

The Cloud Native Computing Foundation (CNCF) has defined cloud native as:

  • Containerized
  • Distributed Management and Orchestration
  • Micro-services Architecture

The first two aspects make perfect sense given the current maturity of development, virtualization, and cloud deployment experience. However, the third aspect is very much at the root of how digital transformation will explode over the next several years.

Micro-services Architecture Defined

A micro-services architecture is a software architecture style in which complex applications are composed of small, independent processes communicating with each other through language-agnostic APIs. These services are small, highly decoupled, and each focuses on doing one small task.

[Figure: micro-services architecture]

The figure above further decomposes the architecture into four key sub-systems:

  • Application Composition – How the application is composed of individual services, and the API interface requirements. Most application architectures consist of common software patterns, which can be further decomposed into application services focused on individual tasks.
  • Application Delivery – How the application is deployed must be kept separate from how it is composed. Application portability is a key business requirement, and one of the more reliable ways to achieve it is to decouple the application code from the underlying deployment target. This can be accomplished by:
    • Deploying the application into different environments (dev, test, prod), each of which can consist of different targets (laptop, server, bare metal, private cloud, or public cloud)
    • Deploying to different locations (data center(s), availability zones, geo-location constraints)
    • Continuous Integration and Continuous Delivery of the application services across environments, locations, and hybrid models
  • Providing governance, security, networking, and application policy intent frameworks
  • Providing a common, single control plane for running the services, policy management, and operational support
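To make the composition idea concrete, here is a minimal sketch of one such service: a single small task behind a language-agnostic HTTP/JSON API, written with only the Python standard library (the service name and endpoints are invented for illustration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceQuoteService(BaseHTTPRequestHandler):
    """One microservice, one small task: quoting a price."""

    def do_GET(self):
        if self.path == "/health":              # liveness probe for the orchestrator
            body = {"status": "ok"}
        elif self.path.startswith("/quote"):    # the single task this service owns
            body = {"sku": "demo", "price": 9.99}
        else:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Any HTTP client, in any language, can consume this API.
    HTTPServer(("", 8080), PriceQuoteService).serve_forever()
```

Because the service speaks plain HTTP/JSON and keeps no local state, it can be moved unchanged between the environments and locations listed above – exactly the decoupling that application delivery requires.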

The Tale of Two Towers

When you’re beginning a journey, it’s important to recognize how you arrived at where you are today, the lessons learned and, in retrospect, what you would have done differently. The stack of today is shown below:

[Figure: the stack of today]

This stack is very much about a metadata, system-of-record mindset. Orchestration and management are all about automating the infrastructure and OSS systems. There is no running code or executable services: this model is all about templates and scripts that must be executed in a specific sequence to ensure minimal configuration drift, and all the integration, orchestration, and deployment complexity must be handled in the application code itself.

The container stack for cloud native development takes a different approach based on all the great experiences and lessons we have learned over the last decade. This stack is shown below:

[Figure: the container stack]

The cloud native stack will consist of:

  • Micro-services architecture
  • Distributed cluster and workflow orchestrators and managers
  • Containerization – a file format driven by admins that becomes the new metadata
  • Infrastructure – scale-out infrastructure with lightweight Linux and HDFS services

Developer Experience Matters

Shipped is a modern, simple developer experience for cloud native. The project addresses developer needs in the build and deploy phases, as well as operations needs in the run phase (monitoring and metering). Shipped leverages another open source project, Mantl, for multi-cloud/data center deployments, providing a full container platform that supports Kubernetes and Mesos side by side.

[Figure: Shipped and Mantl]

The Mantl components are shown below.

[Figure: Mantl components]

Mantl is an open source, end-to-end, integrated stack for running container workloads across multiple clouds. Mantl includes deployment automation, assurance, and monitoring. We designed the project to be pluggable and to grow into a hybrid platform supporting application development and data services. With Mantl, enterprise-grade networking (L2-4 and overlay), security (secrets, AAA, network), and storage (persistent, object, and ephemeral) capabilities are built in.

Mantl addresses a common problem in application orchestration: multi-orchestrator capabilities. There are several use cases, and different types of orchestrators to address them. Mantl’s design is extensible and today supports Mesos/Marathon, Kubernetes, and Docker Swarm in any combination. What is important in a multi-orchestrator model is unified service discovery and load balancing to enable multi-cloud deployments – customer choice.

If you are in Toronto at CloudNativeDay, stop by for demos of Mantl and Shipped, as well as FD.io and Calico for enhanced networking and security without compromise.

Authors

Kenneth Owens

Chief Technical Officer, Cloud Infrastructure Services


While no one has yet built a general purpose Quantum Computer (QC) capable of breaking the public key cryptography in use on the Internet, that possibility is now considered a realistic threat to long-term security.  As research into the design of a QC has intensified (including public access to a small implementation), so has the need to develop standards for ‘postquantum secure’ encryption algorithms that would resist its cryptanalytic capabilities. Along with other cryptographers, we are looking at ways to engineer postquantum security into Internet protocols; here are some of our thoughts.

On the Internet, confidentiality and authenticity are provided by cryptographic protocols such as TLS, IPsec, and SSH, each of which consists of multiple cryptographic components. If a QC existed, then an attacker could attempt to use it to break those protocols with any of three tactics: 1) by directly recovering the keys that encrypt the traffic, 2) by extracting the encryption keys from the key establishment mechanism, or 3) by forging authentication credentials and then posing as a legitimate party. The first case is the easiest to defend against; while it is speculated that a QC might be able to recover a 128-bit symmetric key, it is generally agreed that it would not be able to recover a 256-bit symmetric key (source: Arxiv.org and Wikipedia). Current standards and implementations mostly support 256-bit keys, so moving to that key size is easy.
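The reason those two key sizes differ so sharply is Grover’s algorithm, which gives a quantum computer only a quadratic speedup over brute-force key search:

```latex
% Grover search over a k-bit keyspace takes roughly 2^{k/2} quantum operations:
W(k) \approx \sqrt{2^{k}} = 2^{k/2}
% k = 128:  W \approx 2^{64}   (conceivably within reach of a large QC)
% k = 256:  W \approx 2^{128}  (far beyond any foreseeable machine)
```

A 128-bit key leaves about 2^64 quantum operations of work, while a 256-bit key still leaves about 2^128 – which is why doubling the symmetric key size is considered a sufficient defense against this tactic.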

However, it is more difficult to defend against the other tactics. When a TLS client talks to a server, for instance, public key cryptography is used to establish the traffic-encryption keys. This negotiation is designed so that an attacker cannot recover those keys; however, the methods we use today (RSA, DH, ECDH) are all known to be vulnerable to algorithms that would run on a QC. An attacker could store today’s encrypted traffic, including the negotiation, until a QC is available. At that time, they could break the key establishment used in the negotiation, recover the traffic keys, and then decrypt the traffic. This threat to long-term confidentiality is especially important for sensitive data, and for cryptography that will be in the field for a long time, such as hardware-dependent implementations and embedded software.

While the need to adopt a key exchange method that is secure against a QC is apparent, it is not clear what method is suitable.  There are a number of candidates for postquantum key exchanges:

  • The oldest such method is code-based cryptography, such as the McEliece cryptosystem. It is well trusted because it has survived years of cryptanalysis, but it suffers from the drawback that its public keys are huge; it would require that a megabyte of data be exchanged in each negotiation.
  • Lattice cryptosystems are newer (source: Wikipedia); they are believed to be secure against a QC, but their exact security is unknown.  There are a range of such systems; the ones with more compact public keys (including the NewHope system described below) are based on Lattices with more structure, and some people worry that this extra structure might make the problem easier.
  • There are Elliptic Curve Isogeny-based systems; however these systems are not nearly as well studied as one would like.

There is a risk with more recently designed key exchange methods: they might not even be secure against a classical, non-quantum computer. The history of public key cryptography shows that it often took fifteen years or so before all of the best attacks against a new crypto algorithm were discovered; early estimates of key sizes were wrong, and early implementations were insecure as a result. As we move towards postquantum security, we need to avoid the pitfall of selecting an algorithm and key size too soon.

This problem can be solved by using redundant key establishment, that is, by redesigning the negotiation so that two distinct mechanisms are used in parallel, in such a way that the negotiation is secure if either of the mechanisms is secure. This technique is the cryptographic equivalent to a RAID-1 mirrored disk. For instance, to establish a TLS session, we can perform both a lattice-based key exchange and a standard elliptic curve based one, with the TLS messages carrying the messages from both methods.  As elliptic curve cryptography has small key sizes, this redundancy does not add much data overhead.
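As a minimal sketch of what that combination might look like (our illustration, not any standard’s exact construction), both shared secrets can be fed into a single HKDF-style key derivation, so an attacker must break both exchanges to recover the traffic key:

```python
import hashlib
import hmac

def combine_shared_secrets(ss_ecdh: bytes, ss_lattice: bytes,
                           transcript_hash: bytes) -> bytes:
    """Derive one traffic key from two independent shared secrets.

    HKDF-style extract-then-expand (RFC 5869) over the concatenated
    secrets: the output is secret as long as EITHER input is secret --
    the cryptographic analogue of a RAID-1 mirror.
    """
    ikm = ss_ecdh + ss_lattice                                         # input keying material
    prk = hmac.new(transcript_hash, ikm, hashlib.sha256).digest()      # HKDF-Extract
    return hmac.new(prk, b"traffic key\x01", hashlib.sha256).digest()  # HKDF-Expand (1 block)
```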

The Internet Key Exchange (IKE)

Along with our colleague Panos Kampanakis, we proposed a redundant key establishment technique for IKE that uses symmetric cryptography. In this draft, a Postquantum Preshared Key (PPK) acts as a symmetric shared secret, and this value is included in all of the IKE key derivation processes, so that even if all of the public key cryptography were vulnerable to a QC, the keys established by IKE would still be secure.  The benefit of this approach is that we can have very high confidence that the symmetric cryptography will be postquantum secure.  On the down side, the distribution of shared secret keys is difficult; this technique is more applicable to Virtual Private Networks than to World Wide Web Security.   The goals (but not the detailed mechanisms) have been adopted by the IETF IPsec Maintenance and Extensions Working Group (IPsec ME WG) – you should participate in the working group discussions to help and encourage standards for postquantum security.
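The principle behind the PPK can be sketched in a few lines (illustrative only – the draft specifies exactly where the PPK enters the IKEv2 key schedule):

```python
import hashlib
import hmac

def mix_in_ppk(ppk: bytes, sk_d: bytes) -> bytes:
    # Fold the Postquantum Preshared Key into an IKE-derived key using a
    # PRF (HMAC-SHA256 here). Even if a quantum computer breaks the
    # (EC)DH exchange behind sk_d, the output stays secret as long as
    # the symmetric PPK does.
    return hmac.new(ppk, sk_d, hashlib.sha256).digest()
```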

Experimenting with postquantum key establishment

Google has recently announced that they are starting an experiment with post-quantum cryptography for the Web; it uses a compact Lattice cryptosystem commonly known as “New Hope” along with redundant key establishment. This New Hope system was developed by a group of European researchers, and tries to strike a balance between usability and security.

Google’s experimental implementation is available only in the Canary variant of the Chrome web browser, which is intended for developers and early adopters who understand that the browser might be ‘prone to breakage’. This makes it a great platform for experimenting with postquantum security. Unlike a typical beta version, Canary is entirely independent of the stable version of Chrome; the two can be installed side by side. Another positive aspect of this experiment is the fact that Google is explicitly not trying to create a de-facto standard; in fact, they ‘plan to discontinue this experiment within two years, hopefully by replacing it with something better’.

Getting real-world experience with postquantum key establishment is a great idea. Of course, it addresses only one piece of the problem; a full solution would also need to address authentication, for instance by using postquantum secure digital signatures in X.509 certificates. A QC would be able to efficiently recover the private keys from the signature methods currently used on the Internet. This threat might not seem as critical as the corresponding attacks on key establishment, because the attacker would not be able to retrospectively exploit a signature-algorithm vulnerability to decrypt previously recorded data. However, the problem with signatures is that to have postquantum secure certificates, the entire certificate infrastructure needs to be updated, including the certificate authorities and root certificates (if those can be broken, then the attacker could issue any certificate he wants). Because of the large amount of infrastructure that would need to be updated, there is important work to be done in developing techniques and standards for postquantum signatures as well. (Look for a post from Cisco on this topic in the near future.)

To follow or participate in the development of postquantum secure cryptography techniques and standards, you can attend the upcoming 4th ETSI Workshop on Quantum-Safe Cryptography. The IRTF Crypto Forum Research Group (CFRG) mailing list is another place where these discussions are taking place, as is the IPsec ME WG. Postquantum cryptography is important to long-term security, so we encourage you to be involved in its development and adoption.


Authors

David McGrew

Cisco Fellow

Security


Where you attended school in Romania used to make a world of difference. Students in the city schools had access to updated facilities, digital content and opportunities to engage with thought leaders, while students in rural schools had limited resources.

In 2013, the European Commission identified Romanian education as a possible barrier to economic growth. Its report cited quality and access shortcomings in secondary and tertiary education. On basic skills, it ranked Romania among the worst European Union (EU) performers.


In response to this report, the Romanian Ministry of Education embarked on a digital transformation that would provide equal learning opportunities for all. The goal was to establish connected classrooms where learning is possible anywhere, anytime, with any device – and they did.

Read the full case study to learn how the Romanian Ministry of Education used Cisco technology to improve student motivation by 90% and exam scores by 10%.

Authors

Sangita Patel

Cisco SD-WAN, Routing, Cloud Networking Marketing Lead


Giving Back

It’s a numbers game for good!

Every Cisco office. Every country. Every year. That’s the multiplier for how Cisco employees invest their working hours to give back to their local communities. And it’s all about local needs, too!

In Ljubljana, home of our Slovenian office, we used one of our giving-back days to make one of THEIR days a little more special for kids with special needs: we visited the local zoo with residents of CUDV Draga, an education and care center for children, adolescents and adults with intellectual disabilities.

I would like to share some more numbers with you from that day:

  • 1 kg – the weight of an ostrich egg
  • 5 m – how tall an average giraffe is
  • 40 years – how long a snake can live
  • 11 – how many employees in Slovenia participated
  • 25 – how many kids’ lives we touched
  • 60 – how many hours in total were volunteered
  • 5 days – how much time each Cisco employee can take off to give back
  • 1 day – the time it took to make a difference

Let me tell you a little about our day. With employees, kids and companions, we were more than 30 in total, and the day was just perfect for this kind of outdoor activity. From the very first moment, the kids were delighted that strangers (to them) would spend half a day with them.

First we went to the “ZOO classroom,” where the kids could touch a snake and pet a dove. Afterwards a guide took us around to the animals (tigers, snakes, … and, most exciting for the kids, the elephants).

If that wasn’t fun enough, at the end of our time together we went to lunch and gave the kids small presents. They were very joyful: they gave us high-fives, hugs and lots of “thank you”s, and some of them even gave speeches in front of us.

I really wish everyone could be more thankful for the small things they receive every day; that’s something we can certainly learn from these kids. Every time I meet them, they give me a special energy to appreciate them and the people close to me, and to remember how lucky we are to be who we are.

I am a virtual systems engineer at Cisco, specializing in Collaboration Solutions. I’m always learning: attending weekly trainings and getting instant feedback from my colleagues. I get to use Cisco technology for my job, and also to help my customers! Plus, this is only my second year at Cisco, so I have the resources of the Early in Career network as well!

Team Giving Back

But collaborating in this way with my team was extra special. This was also my second (official) giving-back day at Cisco, and I am proud to be part of an initiative that the company fully supports and even encourages.

Want to join a team that gives back to their local community? See openings here.


Authors

Mitja Rakar

Account Systems Engineer

Sales Systems Engineers Italy


Telemetry and analytics are all the rage in networking today. The industry is abuzz with the potential unlocked by streaming vast quantities of operational data.  But what’s so special about “model-driven” telemetry?

Models enable networking devices to precisely describe their capabilities to the outside world: what kind of data they expose (e.g. interface statistics, configuration options), the data types (string, integer, etc.), any restrictions on the data (optional or required), and even what kind of operations are supported on the data. The data model is like a contract: an agreement to obey instructions that conform to the model and return data according to the rules of the model. This kind of contract is pure gold when it comes to writing applications. You can explore data models offline using standard tools, automatically generate libraries and code, and write applications that will interact with the router in a predictable way.

Model-driven telemetry (MDT) leverages models in two ways. First, MDT is fully configurable using telemetry YANG models. You can precisely specify what data to stream, where to send it, and with what encoding and transport, using just the models – no CLI required. If you’re ready to dig into the details, check out our step-by-step xrdocs tutorials on configuring MDT with the OpenConfig YANG telemetry model, the IOS XR native telemetry model, and even YDK.

The second way that MDT leverages YANG data models is in specifying the data to be streamed. The router measures a staggering number of parameters all the time – how do you tell it which subset you want? With CLI, you memorized show commands. With SNMP, you requested a MIB (to learn more about how telemetry improves on SNMP, check out this blog and watch this video). With MDT, you specify the YANG model that contains the data you want. Practically speaking, that means retrieving the supported models from the router in real time using a NETCONF <get-schema> operation, or fetching them from GitHub and exploring them offline with tools like pyang. All published YANG operational models can be configured for streaming, but here’s a sampling of commonly used (and extensively tested) ones:

[Table: commonly used YANG operational models]
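To make the <get-schema> step concrete, here is a minimal sketch using the open source ncclient Python library (the device address and credentials are placeholders):

```python
from ncclient import manager

# Placeholder address and credentials -- substitute your own device.
with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    # YANG models appear as "module=..." entries in the hello capabilities.
    for cap in m.server_capabilities:
        if "module=" in cap:
            print(cap)

    # Retrieve the full text of one advertised model with <get-schema>.
    reply = m.get_schema("openconfig-interfaces")
    print(reply.xml[:500])  # raw reply XML containing the YANG source
```

The same model text can then be saved locally and explored with pyang, exactly as you would with a copy fetched from GitHub.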

So, in summary, MDT is special because it inherits the power of models, making it easier to define, consume and subscribe to the data you want. The goal of telemetry is to get as much data off the box as fast as possible.  By modeling that data with YANG, MDT ensures that those vast quantities of data are truly usable.

If you want to learn more about MDT, attend our Cisco Knowledge Network webinar on September 6th. Register today!

Authors

Shelly Cadora

Technical Marketing Engineer


When people think about digital disruption in educational institutions, the automatic reaction is to think about the impact technology is having on teaching and learning, including blended and online learning.

However, institutions are increasingly realizing that everyone in the organization – not just academic staff – needs to adapt their work practices to available technology. Digital collaboration technologies are seen as critical to a broad range of future roles, not just remote workers, and they can be used in different ways depending on whether an employee is anchored to a workstation or highly mobile.


The infographic below and the Workplace Revolution report were developed by Cisco to highlight why so many institutions are now focused on embedding digital collaboration technologies in their workplace. In fact, some institutions are well on their way, like Griffith University and Central Queensland University, two institutions that have invested in creating genuine workplaces of the future (and present).

Digital collaboration technologies give organizations the ability to access information, share, review, make decisions and implement in a collaborative way – all characteristics of a contemporary digital institution.

Check out the Workplace Revolution report here, and read on for some of the report’s highlights.

[Infographic: The Workplace Revolution]

Authors

Reg Johnson

General Manager, Education

Cisco Australia and New Zealand


There are two questions I am asked from time to time. The first is related to my job: “OK, I understand what’s involved in product development, but why do ‘service development’? Don’t you just have really smart people turn up and do their [professional services] ‘stuff’?” The second is, quite simply, “Why Cisco for professional services?”

The answer to the first question is that we invest up front to help the organization scale and deliver faster, with greater customer value. The answer to the second is similar: we invest more up front in planning to ensure rapid execution and delivery of higher business value for our customers. To illustrate, I’ll use the following video. (It’s worth watching!)

This video shows how a railway tunnel was “inserted” underneath a major highway (the A12 towards Arnhem in the Netherlands) over a three-day weekend (yes, three days – not a typo!) by Dutch construction company Heijmans. The busy national road was shut over the weekend but was back up and running by Monday morning. Now, if your experience of roadworks is anything like mine, you would probably expect this to be a six-month project – yet this amazing Dutch company completed the task in three days! You can read more here about this amazing feat of engineering.

Let’s now discuss the connection between the approach used to build this tunnel and Cisco Services.

Continue reading “Go Slow to Go Fast: Why Cisco Services Invests More Up Front”

Authors

Stephen Speirs

No Longer at Cisco


Over the last few years, companies have been disrupting established businesses with a software-centric model – Amazon, Netflix, Uber, and AirBnB are a few examples. These companies disrupted their industries by leveraging the agility and speed of change that software can deliver, and now even the largest companies are trying to become software companies. This is why software methodologies are critical to your business strategy today: companies are realizing that innovation and agility drive competitive advantage. As your software strategy takes shape, it’s important to consider several aspects of innovation and agility, as well as some areas to avoid.

With software strategy now front and center, it’s important to understand open source. Open source technology and strategy are at the core of business software transformation and innovation, for several reasons worth understanding. The first is the pace of technology innovation and the speed of adoption that a community of developers can deliver in open source; it is very difficult for a single organization to keep up with that rate of change. The second is the power of community support and contribution quality: rigorous review of the code helps ensure reliability and production readiness. The last reason I’ll mention is the transparency and openness of the community: the project’s roadmap is open, pull requests can be submitted by anyone, and acceptance decisions and issues are completely visible. These reasons have made open source central to a company’s business strategy for digital transformation.

Open source concerns to consider

[Figure: the happy path]

However, open source has a few downsides that your strategy needs to take into consideration. Open source projects like to take the “happy path” – an ideal world where everything works and the project can function in a self-contained bubble. Unfortunately, as we know all too well, things happen in the real world that we need to be prepared for. Here are just a few of the more critical areas that you should build into your open source strategy (a short sketch after the list shows what filling these gaps typically looks like):

  • Fault Management – The ability to monitor project-related systems, detect issues, log them with understandable errors, and expose an API for external monitoring tools to pull the data is critical to reliability and availability.
  • Performance Management – The ability to understand how your software will perform and scale under stress is critical to the enterprise. Most open source projects do little to no performance or scale testing, leaving the enterprise to profile the software, manage the underlying infrastructure, and ensure scalability.
  • Security – This one always surprises me, as most enterprise software developers are very security conscious and understand why security is important to their application. Threat modeling, vulnerability management, and secure-SDLC best practices are not part of most open source projects; as a result, the enterprise is at great risk when using open source and needs to fill this gap.
  • Metering – Since open source software is not for sale, it’s obvious why projects do not consider the metering statistics that would be necessary to build a commercial offering. Unfortunately, enterprises are for-profit and need to charge for use of their overall service, of which the open source project is just one part of the application architecture.
  • Integration – This is just bad software engineering. We know that components need to be connected to existing services – back office or operations. Open source projects should have an API to support basic integration patterns, but they do not; this is left to the developer or business architect. In addition, continuous integration with updates and compatibility is always an afterthought.
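To make the metering and fault-management gaps concrete, here is a minimal sketch of the kind of shim an enterprise typically ends up writing around an open source component (every name here is hypothetical):

```python
import logging
import time
from collections import Counter

log = logging.getLogger("shim")
usage = Counter()  # per-tenant call counts: the raw input to metering/chargeback

def metered(tenant: str, operation, *args, **kwargs):
    """Wrap any library call with metering, latency logging, and fault logging."""
    usage[tenant] += 1
    start = time.monotonic()
    try:
        return operation(*args, **kwargs)
    except Exception:
        # Fault management: surface an understandable error instead of
        # letting the happy-path assumption hide the failure.
        log.exception("operation failed for tenant %s", tenant)
        raise
    finally:
        log.info("tenant=%s latency=%.3fs", tenant, time.monotonic() - start)

# usage: result = metered("acme", some_open_source_call, arg1, arg2)
```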

Building an open source pattern for success

[Figure: the road to success]

While these concerns are important to address, with the proper strategy and organizational rigor a pattern for success can be defined. The following principles provide guidelines for building an open source program in your organization.

  • Measurable – It’s important to define aspects of the program that can be measured. Overall success should be measured by adoption or by increased sales/product views; however, that takes time, so it’s better to start with the number of contributors, the number of forks, community impact (number of “likes”), or ecosystem reach (number of projects incorporating your project).
  • Organizational Impact – Contributing to open source sounds great, and it is; however, you need to understand the organizational impacts. They affect not only the make-up of the software development team, but also technical considerations around intellectual property decisions, code support, and liability (see the legal implications below).
  • Happy-path items need to be addressed in community – As a community, we need to take action to create better testing practices, adopt system integration and testing practices, and define a common set of security and performance issues that projects need to address.
  • Legal implications – It’s important to consult with your legal team, or seek outside counsel with open source expertise, to develop a program that protects your company and the employees contributing code. There are several existing programs to model yours after.

For more information, follow us on Twitter @CiscoOpenSource and check our website often for valuable resources and information. If you’re at @linuxcon #cloudnativeday in Toronto, come have a short stack with me and The New Stack at Cloud Native Day. See the #pancakeoverlordrobot print some @usemantl flapjacks and join us for a discussion about the great forces shaping the new galaxies of the container and open source ecosystem.

Authors

Kenneth Owens

Chief Technical Officer, Cloud Infrastructure Services