
That banana-sized cluster we built in Part 4 of this DevOps series is so cool that you might want to take it with you on the road. And although bringing it on holiday with your family is definitely not recommended, there might be other occasions when having your cluster with you could be… well… joyful!

Let’s say you would like to show it to a colleague at the office, or bring it to an event and brag about it. Its current architecture connects to the outside world using just one Ethernet cable from the USB-powered switch to your home router, so you could think of just unplugging it, putting everything in a case, and, once you arrive at your destination, connecting it to the network there, crossing your fingers and hoping that everything works.

Unfortunately this won’t work… and for good reasons.

A) At home you were configuring your upstream home router to map <WAN_IP>:<WAN_port> to <LAN_IP>:<LAN_port> for each microservice that needed to be accessible from the Internet. At the office you have no control at all over the upstream router, so you cannot perform this kind of configuration anymore. And without it your application will not be accessible from the Internet.

B) The IP addresses we used for your cluster nodes were based on the addresses available in your home LAN segment (i.e. 192.168.1.0/24). Remember we could not use DHCP for our master and worker nodes, so the moment you connect your cluster to the office LAN segment, it will have a different addressing scheme that will not accept your pre-defined static IPs.

But what’s the point of having something as cool as a banana-sized cluster and not being able to show it off? There has to be a solution… And there is.

Making it happen

For point A) we will have to solve 2 challenges:

  1. How to get traffic from the Internet to your cluster.
  2. How to fan out traffic arriving at your cluster, so that it goes to the required specific microservice.

For challenge #1 there are several online services that offer this capability of forwarding traffic to a private local environment:

  • ngrok: the most famous and reliable one (it even has a GUI!) but its free-tier does not support custom domain names, and needs you to install an agent in your cluster node
  • localtunnel: it also needs you to install a local agent in your cluster node
  • localhost.run: agent-less but does not allow for custom domain names
  • serveo: agent-less and allows for custom domain names in the free tier… our choice!

Custom domain names will be really important for our setup (more on this later). Unfortunately serveo is a little bit unreliable, so be ready for some service interruptions… but what’s life without some risks?

Let’s use serveo to create reverse SSH tunnels for every microservice that needs to be accessible from the Internet. Specifically, for our example application (myhero) you need the following services to be reachable: ui, spark and app. For each one of them you will need to create a tunnel, specifying:

  • The domain name (e.g. ui_julio.serveo.net) and port (e.g. 80) you would like to use to access your microservice.
  • The destination IP address and port, which need to be reachable from the node where the tunnel is created (in this case your master node). Since every Kubernetes NodePort service can be reached from any of the master/worker nodes, you may use the IP address of any of your nodes. And for the port you would use port 80, as per the available services:
$ kubectl get services
NAME         TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)        AGE
kubernetes   ClusterIP 10.43.0.1     <none>       443/TCP        3d17h
myhero-app   NodePort  10.43.188.136 <none>       80:31522/TCP   3d17h
myhero-data  NodePort  10.43.18.39   <none>       80:31747/TCP   3d17h
myhero-mosca NodePort  10.43.111.11  <none>       1883:30704/TCP 3d17h
myhero-spark NodePort  10.43.188.95  <none>       80:32753/TCP   3d17h
myhero-ui    NodePort  10.43.8.88    <none>       80:32728/TCP   3d17h

From your master node you will have to create one tunnel for each microservice, specifying the <URL>:<port> (e.g. app_julio:80) and port 80 on your master node as <dest_IP>:<port> (e.g. 192.168.1.100:80):

ssh -R ui_julio:80:192.168.1.100:80 serveo.net
ssh -R spark_julio:80:192.168.1.100:80 serveo.net
ssh -R app_julio:80:192.168.1.100:80 serveo.net

These commands will create 3 tunnels from the serveo servers to your cluster master node, so that all traffic going to the following URLs is sent to port 80 in your master node:

  • ui_julio.serveo.net
  • spark_julio.serveo.net
  • app_julio.serveo.net
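Since serveo tunnels can drop, it helps to script their creation so you can bring all three back with a single command. A minimal sketch, using the tunnel names and master-node IP from the example above; the helper function is hypothetical, and the `-f -N -o ServerAliveInterval=60` options are an addition (background the tunnel, skip the remote shell, and detect a dead connection within a minute):

```shell
# Hypothetical helper: build the ssh command for one serveo tunnel.
tunnel_cmd() {
  echo "ssh -f -N -o ServerAliveInterval=60 -R ${1}:80:${2}:80 serveo.net"
}

MASTER_IP=192.168.1.100
for name in ui_julio spark_julio app_julio; do
  # Prints one command per tunnel; pipe to sh (or run directly) to open them.
  tunnel_cmd "$name" "$MASTER_IP"
done
```

Running the loop prints the three `ssh -R` commands, one per tunnel, ready to execute.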

(Please note you will have to modify your myhero_ui and myhero_spark Kubernetes manifests to use these specific URLs before deploying your app.)
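That URL swap can be done with a quick `sed` over the manifests. A sketch on a scratch file, since the real manifest filenames and the placeholder URL below are assumptions, not the actual myhero manifest contents:

```shell
# Demo on a scratch copy; in practice run sed against your real
# myhero_ui / myhero_spark manifests with their actual URLs.
printf 'app_server: http://app.example.com\n' > /tmp/myhero_ui_demo.yml
sed -i 's|http://app.example.com|http://app_julio.serveo.net|g' /tmp/myhero_ui_demo.yml
cat /tmp/myhero_ui_demo.yml   # → app_server: http://app_julio.serveo.net
```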

But wait… that means you will be sending traffic going to three different microservices towards the same destination IP (192.168.1.100) and port (80). How will your cluster be able to determine what traffic should go to each specific microservice?

Julio Gomez DevOps blog part 15

And that takes us exactly to challenge #2: how can we fan out traffic going to a single <dest_IP>:<port> towards different microservices? The answer is ingress. As we did before, you will need to define an ingress resource that specifies where traffic should go depending on the destination URL. That definition needs to include the URL for each destination microservice, and it is applied when the ingress resource is created. This is why we were so interested in custom domain names: without them we would have to update the ingress definition every time the tunnels were reset. Second challenge solved!
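A minimal sketch of such an ingress resource, fanning out by hostname to the three services. The service names come from the kubectl output above; note that Kubernetes validates the host field as a DNS name, which does not allow underscores, so hyphenated tunnel names (ui-julio instead of ui_julio) are assumed here on both the serveo and the ingress side:

```yaml
# Sketch only: adjust hosts and service names to your own setup.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myhero-ingress
spec:
  rules:
  - host: ui-julio.serveo.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myhero-ui
            port:
              number: 80
  - host: spark-julio.serveo.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myhero-spark
            port:
              number: 80
  - host: app-julio.serveo.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myhero-app
            port:
              number: 80
```

Apply it with `kubectl apply -f` and the ingress controller will route each incoming Host header to the matching service.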

For point B) we will need to find a way to isolate our cluster, so that it does not depend on the upstream router’s IP addressing scheme. Ideally you would like your cluster LAN segment to be private and not shared with the office environment, and the best way to accomplish this is… adding a router to our setup! It will route between your private LAN segment and the office one, allowing you to manage your cluster IP addresses independently from the office network.

Of course, the main requirement for this router is to do what we need it to do, but also… to be tiny. There are many different options, but I chose this one (40 g, 5×5×2 cm).

The Ethernet cable previously going from your cluster switch to the home router will now be connected to your new tiny router, and the WAN port of the tiny router will go to the upstream router.

The great thing about this setup is that, as long as the cluster LAN segment does not overlap with the upstream router’s LAN subnet, it will always work, no matter where you are!
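If you want to check that condition quickly when plugging in somewhere new, here is a tiny sketch. It assumes plain /24 networks on both sides (typical for home and small-office routers), and the upstream address shown is a made-up example of whatever the office DHCP hands to your tiny router’s WAN port:

```shell
# Hypothetical pre-flight check: do the cluster LAN and the upstream LAN
# share the same /24? Compares the first three octets only.
same_24() {
  [ "${1%.*}" = "${2%.*}" ]
}

CLUSTER_NET=192.168.1.0   # the cluster LAN behind the tiny router
UPSTREAM_IP=10.0.0.57     # example address received on the WAN port
if same_24 "$CLUSTER_NET" "$UPSTREAM_IP"; then
  echo "Overlap: renumber the cluster LAN on the tiny router"
else
  echo "No overlap: good to go"
fi
```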

Everything is ready! You can now take your cluster with you on any occasion. Isn’t that the best news?


See you in my next post, stay tuned! If you have any questions or comments, please let me know in the comments section below, or reach me on Twitter or LinkedIn.


Authors

Julio Gomez

Programmability Lead, EMEAR

Systems Engineers