Kubernetes networking is, for the most part, intra-cluster: it enables communication between pods within a single cluster.
The most fundamental service Kubernetes networking provides is a flat L3 domain: every pod can reach every other pod via IP, without NAT (Network Address Translation).
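That property is easy to demonstrate: reaching a peer pod needs nothing more than a plain TCP dial to its pod IP. Here is a minimal Go sketch; the peer address and port are made up for illustration, but on a flat L3 network this works from inside any pod, across nodes:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.244.1.17:8080 is a hypothetical peer pod IP and port. On a flat L3
	// network this dial works across nodes, with no NAT or port mapping.
	peer := "10.244.1.17:8080"
	conn, err := net.DialTimeout("tcp", peer, 2*time.Second)
	if err != nil {
		fmt.Println("peer unreachable:", err)
		return
	}
	defer conn.Close()
	// Because there is no NAT, the peer sees this pod's real IP as the source.
	fmt.Println("connected; local address the peer sees:", conn.LocalAddr())
}
```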
The flat L3 domain is the building block upon which more sophisticated communication services, like a service mesh, are built.
For a service mesh to function, its control plane must be able to reach each of the proxies over a flat L3 network, and each proxy must be able to reach every other proxy the same way.
This all “just works” within a single Kubernetes cluster, precisely because of the flat L3-ness of Kubernetes intra-cluster networking.
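To make that reachability assumption concrete, here is a sketch of a control-plane-style loop using client-go: list the meshed pods and dial each proxy directly by its pod IP. The `sidecar=enabled` label selector is hypothetical, and 15000 is the admin port Istio's Envoy sidecars typically expose; adjust both for a real mesh:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Runs in-cluster; assumes RBAC permission to list pods.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// "sidecar=enabled" is a hypothetical label marking meshed pods.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "sidecar=enabled"})
	if err != nil {
		panic(err)
	}

	for _, p := range pods.Items {
		// Dial each proxy directly by its pod IP, relying on flat L3.
		addr := net.JoinHostPort(p.Status.PodIP, "15000")
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s unreachable: %v\n", p.Name, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s reachable at %s\n", p.Name, addr)
	}
}
```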
Multi-cluster communication
But what if you need workloads running in more than one cluster to communicate?
If you are lucky, all of your clusters share a common, flat L3. This may be true in an on-prem situation, but often is not. It will almost never be true in a multi-cloud/hybrid cloud situation.
The solution often proposed involves maintaining a complicated set of L7 gateway servers.
This architecture introduces a great deal of administrative complexity: the gateways must be federated together, connectivity between them must be established and maintained, and L7 static routes must be kept up to date. As the number of clusters grows, this becomes increasingly challenging.
What if we could get a set of workloads, no matter where they are running, to share a common flat L3 domain?
The green pods could reach each other over a flat L3 domain.
The red pods could reach each other over a flat L3 domain.
The red/green pod could reach both the red pods and the green pods, over the red and green flat L3 domains respectively.
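From inside a pod, each vL3 attachment simply shows up as an additional network interface alongside the cluster network's eth0. A small Go sketch makes this visible; the nsm-1/nsm-2 interface names and the addresses in the trailing comment are illustrative, not guaranteed:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Enumerate this pod's network interfaces and their addresses. A pod
	// attached to one or more vL3s would show an extra interface per vL3.
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		addrs, _ := iface.Addrs()
		fmt.Printf("%-8s %v\n", iface.Name, addrs)
	}
	// Hypothetical output for the red/green pod:
	//   lo       [127.0.0.1/8]
	//   eth0     [10.244.2.9/24]    <- cluster network
	//   nsm-1    [172.16.1.4/16]    <- red vL3
	//   nsm-2    [172.17.1.4/16]    <- green vL3
}
```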
This points the way to a solution for stretching a single service mesh, with a single control plane, across workloads running in different clusters, clouds, and premises:
An instance of Istio could be run over the red vL3, and a separate Istio instance over the green vL3.
The red pods could then access the red Istio instance.
The green pods could access the green Istio instance.
The red/green pod could access both the red and the green Istio instances.
The same could be done with the service mesh of your choice (such as Linkerd, Consul, or Kuma).
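As a sketch of what attaching such a workload might look like, the Go snippet below builds a pod carrying both Istio's standard sidecar-injection label and NSM's client annotation requesting two vL3 attachments. The network service names (red-vl3, green-vl3) and the exact annotation value format are assumptions to be checked against the NSM documentation:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "red-green-client",
			// Standard Istio sidecar-injection label.
			Labels: map[string]string{"sidecar.istio.io/inject": "true"},
			Annotations: map[string]string{
				// NSM's client annotation. The two network service names
				// and the comma-separated URL format are assumptions;
				// verify against the NSM docs for your release.
				"networkservicemesh.io": "kernel://red-vl3/nsm-1,kernel://green-vl3/nsm-2",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/app:latest", // placeholder image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```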
Network Service Mesh benefits
Network Service Mesh itself does not provide traditional L7 services. It provides the complementary service of a flat L3 domain that individual workloads can connect to, so that a traditional service mesh can do what it does *better* and more *easily* across a broader span.
Network Service Mesh also enables other beneficial and interesting patterns. It allows for multi-service mesh: the ability for a single pod to connect to more than one service mesh simultaneously.
And it allows for a “multi-corp extranet”: it is sometimes desirable for applications from multiple companies to communicate with one another on a common service mesh. Network Service Mesh has sophisticated identity federation and admission policy features that let one company selectively admit another company’s workloads into its service mesh.
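As a rough illustration of trust-domain-based admission (NSM expresses its real admission policies through its own mechanisms, such as Open Policy Agent rules, so treat this purely as a sketch of the idea), the Go function below admits a workload only if its SPIFFE ID belongs to an allowed trust domain:

```go
package main

import (
	"fmt"
	"net/url"
)

// admit is an illustrative stand-in for an admission policy: it accepts a
// workload only if the trust domain in its SPIFFE ID is on the allowlist.
func admit(spiffeID string, allowedTrustDomains map[string]bool) bool {
	id, err := url.Parse(spiffeID)
	if err != nil || id.Scheme != "spiffe" {
		return false
	}
	// For spiffe://partner.example/ns/prod/sa/app, Host is the trust domain.
	return allowedTrustDomains[id.Host]
}

func main() {
	allowed := map[string]bool{
		"companya.example": true, // our own trust domain
		"companyb.example": true, // federated partner, admitted selectively
	}
	fmt.Println(admit("spiffe://companyb.example/ns/prod/sa/billing", allowed)) // true
	fmt.Println(admit("spiffe://unknown.example/ns/prod/sa/app", allowed))      // false
}
```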
Learn More
Network Service Mesh (NSM) is a CNCF project. To learn more, check out Network Service Mesh—an introduction and the NSM documentation, and follow the project on Twitter.
If you would like to get involved, you can check the communication channels as well as the GitHub repositories.
Cisco will be at KubeCon + CloudNativeCon Europe 2022 this May in the beautiful city of Valencia, Spain. Come visit the Cisco booth or join us virtually. Learn more about Cisco at KubeCon.
We’d love to hear what you think. Ask a question or leave a comment below.