Find previous blogs in Julio’s DevOps series.

If you followed my previous posts on deployments for on-premises and public Cloud environments, you are now quite familiar with how each of them works. Today I would like to compare the two, based on our recent experience working with both.


You might have noticed some differences between the experience of an on-prem deployment and one in the Cloud.

The main one is, of course, that it is easier to work on a managed Kubernetes cluster deployed in the Cloud. You do not need to be concerned with any management aspect; everything is taken care of for you by the Cloud provider. But this also means you do not really learn much about how the underlying infrastructure is built. Maybe you are not interested in knowing more, but if you really want to customize your cluster and maximize its benefits, it is great to have an on-prem setup to get your hands on.

Then you may have noticed it is also easier to deploy an Ingress resource in a public Cloud environment, like Google Cloud Platform (GCP). Their Google Kubernetes Engine (GKE) setup already includes an Ingress controller, so you do not need to install one yourself, as we did during our on-prem Learning Lab. With GCP you just apply the Ingress resource and their controller manages it for you.
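To make this concrete, here is a minimal sketch of such an Ingress resource. The host, service name, and port are placeholders for your own application; on GKE, applying a manifest like this is enough for the built-in controller to provision a load balancer:

```yaml
# Minimal Ingress sketch -- host and service names are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service   # your existing Service
            port:
              number: 80
```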

In terms of connectivity, it is also easier with a Cloud provider, because you do not need to deal with any kind of port mapping in a gateway (like your home router in the on-prem example). You simply get a public IP address for your Ingress, and your services can use it straight away. You can see this when you try the Operations in Public Cloud learning lab.
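For example, once your Ingress is applied, you can read the assigned public IP straight from its status (the resource and file names below are placeholders):

```shell
# Apply your Ingress resource
kubectl apply -f ingress.yml

# Watch until the ADDRESS column shows the public IP assigned by GCP
kubectl get ingress myapp-ingress --watch

# Or extract just the IP from the Ingress status
kubectl get ingress myapp-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

No router configuration, no NAT rules: the address is reachable from the Internet as soon as the Cloud provider finishes provisioning the load balancer.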


Beyond the aspects mentioned above, you can see how similar it is to work on a Kubernetes cluster on-prem and in the Cloud.

All the main elements are the same in both environments:

  • Application code
  • Application architecture
  • Build files (e.g. Dockerfile)
  • Images (see note below)
  • Image publishing process
  • Deployment manifests (e.g. Kubernetes .yml files)
  • Name resolution (e.g. DNS/DDNS)
  • Ingress connectivity (e.g. the Ingress resource)
  • Platform CLI (e.g. kubectl)
  • Package manager (e.g. Helm)

Note: The images are not exactly the same in our specific case, but only because we decided to build an affordable MiniDC on RPi boards, which use the ARM architecture. As you can imagine, any real on-prem Data Center will be built on standard architecture systems, and the images will be exactly the same.
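As a quick illustration of that portability, a deployment manifest like the sketch below (names, image, and replica count are placeholders) can be applied unchanged to an on-prem cluster or to a managed one like GKE:

```yaml
# Hypothetical Deployment -- names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myrepo/myapp:1.0   # same image works anywhere the architecture matches
        ports:
        - containerPort: 8080
```

Nothing in it is specific to any provider, which is exactly why the knowledge transfers so well between environments.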

DevOps Series 6 meme

You Can Leverage Common Knowledge in Both Environments

In summary, I would say they are quite similar, and most importantly: in both environments you can leverage all the common knowledge you acquired previously. Isn't that great? No matter where you deploy your applications, you can reuse everything you know!

Congratulations! With this post we have completed our first section on DevOps. See you in my next post, where we will discuss how to package our new application for easy deployments. Stay tuned!

If you have any questions or comments, please let me know in the comments section below, or reach out on Twitter or LinkedIn.

Stay connected with Cisco DevNet on social!

Twitter @CiscoDevNet | Facebook | LinkedIn

Visit the new Developer Video Channel


Julio Gomez

Programmability Lead, EMEAR

Systems Engineers