
Kubernetes' claim to fame as the top container cluster management platform has been pretty solid since at least 2016, when TechRepublic ranked platforms using metrics like search queries and Stack Overflow activity. Follow that up with a survey the Cloud Native Computing Foundation conducted in the summer of 2017, and it's pretty clear that developers are choosing Kubernetes in droves to manage multi-host container installations.

But why do developers love Kubernetes so much? What do they get out of it?

And who’s managing all these Kubernetes clusters today? Will that stay the same going forward as the technology scales?

These become important questions as Kubernetes and the microservices-based architectures it enables mature and become more widely used by enterprises.

Why Devs Love Kubernetes

Maximizing iterations. Velocity. The latency between when a developer writes a line of code and when a customer actually uses it.

No matter what terminology you prefer, that's the one key performance indicator that matters to a developer. Since agile methodologies toppled older waterfall approaches to software development, developers have figured out that most ideas are bad, and that the best way to separate the ideas that introduce innovative change from the rest is to try them out with customers as quickly as possible. Microservices-based applications are composed of smaller, less-coupled components than the monolithic designs that came before them.

That makes it easier to ship incremental changes more often, see whether a new feature resonates with customers, and quickly move on to the next one if it doesn't. More iterations, more innovation is the way to think about it.

This is why developers love Kubernetes. By putting those smaller components in containers, which can be spun up in seconds compared to the 10 to 15 minutes it can take to launch a virtual machine, it becomes much easier to introduce change into a deployed application. Kubernetes then spreads those containers across multiple hosts for redundancy and throughput, giving the application resiliency.
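To make that concrete, here's a minimal sketch of what "containers spread across multiple hosts" looks like in practice, using the official Kubernetes Python client to declare a three-replica Deployment. The names, container image, and kubeconfig setup are illustrative assumptions, not details from any particular environment.

```python
# A minimal sketch, assuming the official `kubernetes` Python client
# (pip install kubernetes) and a kubeconfig pointing at an existing cluster.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

# Declare a Deployment with three replicas; the Kubernetes scheduler places
# the resulting Pods across the cluster's worker nodes as resources allow,
# and replaces any Pod that dies, which is where the resiliency comes from.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),  # placeholder name
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # placeholder container image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Rolling out a new version of that component is then just a matter of updating the image tag and letting Kubernetes replace the Pods incrementally, which is exactly the fast iteration loop described above.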

Who’s Managing Kubernetes Today?

But there are two problems with the current state of Kubernetes deployments, one organizational and one educational. The organizational problem is that developers increasingly reside in line-of-business teams, because just about every company has turned itself into a software company in recent years, recognizing that the easiest way to introduce change into any marketplace is with software. A developer on a line-of-business team values speed above all else and has typically found that the traditional internal IT team that would ordinarily host applications isn't yet up to speed on Kubernetes deployments.

And that's the second problem, the educational one. IT teams sitting in cost centers typically have limited budgets and spend much of their time managing the older, monolithic applications that still run a lot of enterprises today. Getting this group the time and money to learn Kubernetes is a daunting task unless a lot of automation comes along with it to lower the learning curve.

The result is that, today, most Kubernetes deployments sit out on public clouds and are managed by those line-of-business development teams. Longtime cloud analyst Krish Subramanian recently ran an open-port analysis of various open source solutions running on public IP addresses and found an overwhelming number of Kubernetes deployments running on public clouds. In an enterprise environment where it typically takes many tickets and many days to get a single virtual machine, let alone the multiple virtual machines on top of which a Kubernetes cluster can run, what choice do developers really have right now?

Scaling Kubernetes Management in the Future

But return to that key performance indicator: developers would much rather spend their time writing code to try out on customers than managing their own Kubernetes cluster on the public cloud. As we've seen, though, they have little choice, since their cousins in IT don't yet have a mechanism to curate a Kubernetes distribution from its open source trunk, make the key configuration decisions, and map it to their hardware.

This is why the Cisco Container Platform (CCP) is so important. It addresses the issues both of these audiences face, enabling Kubernetes management to scale over the long term.

For the IT department, it lowers the Kubernetes learning curve by providing the same curated, close-to-trunk distribution that Google uses in its own hosted Kubernetes solution, GKE. CCP presents IT staff with a wizard-like interface that makes it possible to spin up a Kubernetes cluster in minutes without detailed knowledge of the underlying configuration choices. CCP can also optionally integrate network configuration with an existing APIC controller, so network administrators don't even have a new tool to learn.

Developers, meanwhile, can quit managing clusters and put that time back into writing code, which is what they do best anyway. CCP gives them assurance that their Kubernetes cluster will be as close to trunk as GKE, and when something goes wrong, Google provides second-level support behind Cisco TAC, which beats scrolling through pages of forum posts trying to find someone else in the open source community who has hit the same problem.

Summing Up

There's no doubt that the popularity of Kubernetes is rooted in developer productivity, but to unlock even more developer time, developers have to get out of the business of managing their own clusters on public clouds wherever possible. CCP gives IT teams the ability to meet the demands of those line-of-business developer teams in an approachable, supportable way. Longer term, that lets developers and IT teams each do what they do best and gives a corporation more chances at finding innovation for the markets it serves.

Author

Pete Johnson

Principal Architect

Global Partner Organization