Virtualization is an overloaded word. Originally it described a form of hardware emulation, but these days almost any form of software abstraction gets called “virtualization”. In the context of the CMTS, the term cloud native describes the upcoming evolution of the CMTS more accurately than virtualization does.
Cloud native is becoming popular as a way to describe the cloud deployment approach used by the large web companies. In a nutshell, the cloud native approach puts the application at the center of the universe, not the network. This is a different view from ETSI NFV’s, because in the cloud native world the “network function” is just an application that, as long as it roughly conforms to the “12-factor app” guidelines, can be deployed, scaled and protected the same way any other application can.
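To make one of those guidelines concrete, here is a minimal sketch (not drawn from any actual CMTS code) of factor III, “store config in the environment”: the same build can then be deployed unchanged to any environment. The variable names are purely illustrative.

```python
import os

# Hypothetical 12-factor style configuration: settings come from environment
# variables rather than baked-in config files, so the same build runs
# unchanged in dev, staging and production. All names here are illustrative.
DB_URL = os.environ.get("DB_URL", "postgres://localhost/dev")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")
WORKER_COUNT = int(os.environ.get("WORKER_COUNT", "4"))

if __name__ == "__main__":
    print(f"db={DB_URL} log={LOG_LEVEL} workers={WORKER_COUNT}")
```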
How does the virtual CMTS fit into the cloud native picture? In the rush to label anything and everything as virtual, a key point is often missed…
Virtualization is a means to an end and not a goal in itself.
The end goal is to achieve:
- Service velocity through DevOps and CI/CD (continuous integration/continuous deployment)
- High availability
- Flexible and rapid scaling
The cloud native world has a framework to achieve all of the above, and virtualization is only a part of it. In fact, the form of virtualization used in those cloud native deployments, containers, is a very lightweight one.
Another term frequently associated with cloud native is “micro-services”. The key concept behind a micro-service is the ability to create small, well-contained software packages that can be upgraded and scaled separately. It’s common to deploy each micro-service in its own container.
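As a rough illustration (a hypothetical sketch, not part of any product), a micro-service is typically a small process exposing its own API plus a health probe that the container orchestrator can poll to decide when to restart or replace it:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Tiny illustrative micro-service: one well-contained process with its
    own API (/status) and a /healthz probe an orchestrator can poll."""

    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
        elif self.path == "/status":
            body = b'{"service": "example", "version": "1.0.0"}'
        else:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # In a container deployment this process would be the container's single
    # entry point, upgraded and scaled independently of its peers.
    HTTPServer(("", 8080), Handler).serve_forever()
```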
Many other concepts in cloud native can be traced back to multi-processor designs and mainframe architectures of the 1970s. Well-modularized software is the ABC of software architecture, and containers have been around for a while. Given that, what is the key innovation in the cloud native approach?
One might argue that it is the ability to build a cost-effective, high-availability system from low-availability components. A cloud native system is fundamentally a load-sharing distributed system: if any component fails, its load can be moved somewhere else. Scaling up and down uses the same load-sharing and distribution machinery, and in that respect a failure is simply a case of a “forced scale down”. Software upgrades use the same infrastructure as well: as the old version is scaled down, the new version is scaled up, so upgrades can occur without service interruption.
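The following sketch shows the idea; the Pool class and its scale_up/scale_down methods are entirely hypothetical. A failure is handled by the same code path as a scale down, and a rolling upgrade is just a scale up of the new version paired with a scale down of the old one.

```python
import itertools
import random

class Pool:
    """Illustrative load-sharing pool: failure handling, scaling and
    upgrades all go through the same scale_up / scale_down path."""

    def __init__(self, version, size):
        self._ids = itertools.count()
        self.workers = []
        self.scale_up(version, size)

    def scale_up(self, version, count):
        self.workers += [f"{version}-{next(self._ids)}" for _ in range(count)]

    def scale_down(self, worker):
        # Drain one worker; its load is redistributed across the rest.
        self.workers.remove(worker)

    def on_failure(self, worker):
        # A failure is simply a "forced scale down" of the failed worker.
        self.scale_down(worker)

def rolling_upgrade(pool, new_version):
    # Add a new-version worker, then retire an old-version one, repeating
    # until the whole pool runs the new version; capacity never drops.
    for old_worker in list(pool.workers):
        pool.scale_up(new_version, 1)
        pool.scale_down(old_worker)

if __name__ == "__main__":
    pool = Pool("v1", 4)
    pool.on_failure(random.choice(pool.workers))  # crash == forced scale down
    rolling_upgrade(pool, "v2")
    print(pool.workers)  # every remaining worker now runs v2
```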
Once we combine reliable in-service software upgrades with micro-services, a virtuous cycle begins: because upgrades are less risky, an operator can upgrade more frequently, which means smaller code changes between upgrades, which reduces the upgrade risk even further. Eventually feature velocity increases because software changes can be phased in more quickly.
The cloud native environment also encourages in-production testing. Because the overall system is highly resilient, the staging phase for new software can be much shorter. Some companies take the concept even further and deliberately inject random faults (e.g. forced software crashes) into a live production system to make sure it really does recover from anything that gets thrown at it.
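A toy version of that fault-injection idea (in the spirit of tools such as Netflix’s Chaos Monkey; the replica names and restore_capacity helper are made up for illustration) could look like this:

```python
import itertools
import random
import time

_new_ids = itertools.count(100)

def restore_capacity(replicas, target):
    # What the orchestrator does after a crash: bring the pool back to size.
    while len(replicas) < target:
        replicas.append(f"replica-{next(_new_ids)}")

def chaos_round(replicas, target):
    victim = random.choice(replicas)
    print(f"chaos: killing {victim}")
    replicas.remove(victim)            # simulate an unplanned crash
    restore_capacity(replicas, target)
    assert len(replicas) == target, "system failed to recover"

if __name__ == "__main__":
    target = 4
    replicas = [f"replica-{i}" for i in range(target)]
    for _ in range(3):
        chaos_round(replicas, target)
        time.sleep(0.1)
```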
We believe the virtual CMTS can be broken into micro-services and deployed in cloud native form, bringing the reliability, scalability and feature velocity seen in the web and business application space to the virtual CMTS as well.
Look for more on this topic – we’re excited about making the CMTS “cloud ready”!