Note to reader: Time stamps embedded in the text below in [ ] are there to help you navigate to the related section of the full interview video included in this blog.
There’s still a great deal of confusion around API gateways: how and when to use them and what benefit they bring. In a world where microservices drive application flexibility, API gateways are fundamental to delivering the services, products, and operational agility of modern applications. Whether an application is enterprise-facing or customer-facing, API gateways help developers expose service functionality in a way that is simple, automated, and agile.
In this episode of Cloud Unfiltered, we talked to Vik Gamov, Principal Developer Advocate at Kong Inc., about the nature, use, importance, and possible evolution path of API gateways in modern app development.
What is an API gateway and why do we need it?
Applications do much of the heavy lifting in the life of every business and customer across every sector. Whether it’s banking and financial services, anywhere-and-any-device access to your favorite streaming service, social media platforms, or much more, microservices make third-party provider integration possible. Developers needed a way to make all those back-end requests simple and orderly while minimizing latency, and the API gateway was born.
According to Vik, an API gateway is essentially the implementation of a gateway pattern: it gives you access to something and performs other tasks along the way [01:02]. The API gateway is the key to managing and monitoring the hundreds of APIs that provide access to the different microservices connected to applications.
“With the gateway, your application doesn’t have to check every ID and every possible application in the authorization logic,” said Vik. “The API gateway does the heavy lifting so your application doesn’t have to do it.”
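As a minimal sketch of that idea, here is what offloading authentication to the gateway could look like in Kong’s declarative configuration format. The service name, route path, and upstream URL are hypothetical, and the `key-auth` plugin is just one of several authentication plugins Kong offers:

```yaml
# Kong declarative configuration (kong.yml) -- hypothetical names.
_format_version: "3.0"
services:
  - name: orders
    url: http://orders.internal:8080   # the upstream microservice
    routes:
      - name: orders-route
        paths:
          - /orders
plugins:
  # The gateway enforces API-key authentication before any request
  # reaches the orders service, so the service carries no auth code.
  - name: key-auth
    service: orders
```

With this in place, a request to `/orders` without a valid API key is rejected at the gateway and never touches the application.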
Vik explained in more detail how microservices fit into the picture and why they make something like an API gateway so necessary right now. “All these services need to connect from the outside. There must be a mechanism to figure out where they’re coming from, how to communicate with them, and what pieces of those are going to go to what services.”
There are always new microservices being added to an application as part of the development process to make the app work better for different people who want to do different things. Adding those microservices and the APIs connecting them to third-party integrations must also happen in an orderly fashion.
Vik explained how a microservice developer may want to move ahead with new features in an API. The challenge is that service level agreements (SLAs) with your API consumers mean you cannot just make the switch.
He explained how it’s possible to use some of the request information to support different versions of the client. Routing can be based on headers that direct a request to version 1, 2, or 3 of a service, giving you backward compatibility without slowing you down. “You can continue to innovate in the development of services, but the API gateway allows you to have the flexibility and control over how the traffic will run in the services,” explained Vik.
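The header-based routing described above can be sketched in Kong’s declarative configuration. The service names and the `x-api-version` header are hypothetical; the point is that two versions of a service can live behind the same path:

```yaml
# Kong declarative configuration (kong.yml) -- hypothetical names.
_format_version: "3.0"
services:
  - name: catalog-v1
    url: http://catalog-v1.internal:8080
    routes:
      - name: catalog-default       # requests without a version header
        paths:
          - /catalog
  - name: catalog-v2
    url: http://catalog-v2.internal:8080
    routes:
      - name: catalog-v2
        paths:
          - /catalog
        headers:                    # only requests carrying this header match
          x-api-version:
            - "2"
```

Existing clients keep hitting v1 unchanged, while clients that opt in with `x-api-version: 2` are routed to the new version.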
The evolution and origin of the API and microservices
Although we’re well into the digital age, we’re still constantly dealing with the technical debt of monolithic applications that have a tough time being compatible with what the cloud can offer. These monolithic applications use libraries to link everything together [05:58].
According to Vik, microservices came about from the need to design these services in a better way and introduce them faster. Developers often use canary deployments (named for the early-warning canaries once used in coal mines), which let them introduce a microservice, check for errors, and back off quickly if they find any. “This all made it easier for developers to weigh risk in a better way,” said Vik.
Evolution of API Gateways
API gateways have continued to grow, with developers and vendors contributing to making them more useful in wrangling countless APIs and their microservices [07:10]. But before true API gateways existed, developers used web servers like Nginx or Apache as a kind of load balancer to front these applications.
Vik explained that weighted “canary” deployments are among the things you want to accomplish with what would become API gateways. “You want to have some type of weighted logic where, say, 50 percent of requests will go to this one, 50 percent will go to version two.”
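The 50/50 split Vik describes can be sketched with a weighted upstream in Kong’s declarative configuration. The upstream, service, and target names are hypothetical:

```yaml
# Kong declarative configuration (kong.yml) -- hypothetical names.
_format_version: "3.0"
upstreams:
  - name: payments-upstream
    targets:
      - target: payments-v1.internal:8080
        weight: 50                  # half the traffic stays on v1
      - target: payments-v2.internal:8080
        weight: 50                  # half goes to the v2 canary
services:
  - name: payments
    host: payments-upstream         # route through the weighted upstream
    routes:
      - name: payments-route
        paths:
          - /payments
```

Shifting more traffic to v2, or backing off entirely, is then just a matter of adjusting the weights rather than redeploying either service.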
This led to a line of thinking about how systems like Kubernetes Ingress need to work, which grew into something called the Gateway API. According to Vik, this is something a Kubernetes special interest group (SIG) is working on.
The idea, Vik explained, is that the Gateway API supersedes Ingress functionality by providing gateway-like capabilities to any system fronting your services that run on Kubernetes.
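As an illustration of what that looks like in practice, here is a minimal Gateway API route. All names and the port are hypothetical; the `HTTPRoute` resource attaches to a `Gateway` and plays the role an `Ingress` rule used to play, but with richer matching and traffic-splitting options:

```yaml
# Kubernetes Gateway API -- hypothetical names and port.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-route
spec:
  parentRefs:
    - name: example-gateway        # the Gateway fronting the cluster
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /checkout
      backendRefs:
        - name: checkout           # an ordinary Kubernetes Service
          port: 8080
```

Because the route is decoupled from any one controller, different gateway vendors can implement the same resource, which is exactly the standard-interface idea discussed below.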
“This thinking strengthens applications for developers and helps vendors support them in seeing things in a gateway-like approach rather than a simple HTTP proxy.”
The power of API gateways and service mesh
While many people are still figuring out API gateways, they must also contend with a service mesh, which adds monitoring, security, and reliability functionality to applications at the platform layer. Vik explained how the combination of an API gateway and a service mesh is powerful in providing most of the data and information about how services are running.
“One of the biggest problems you have in something like Kubernetes, which is great, is the need for a way to make decisions based on all the data and the telemetry,” said Vik. “The combination also frees the developer from having to create a lot of things like encryption and telemetry.”
Choosing Among a World of Solutions
Today, developers must contend with countless options for service meshes and gateways. Even Kubernetes is developing its own gateway. As with all technology evolution, the question soon arises of whether there should be standardization.
With API gateways and service meshes, Vik takes a pragmatic approach [10:10]. “I love the standard things, but I’m also a fan of competing standards. In this case, I think there should at least be a standard interface all developers need to code against,” said Vik. “But (in the bigger picture) we need to stop looking at overall standardization and the rest should be based on implementation.”
Vik used Ingress in Kubernetes as an example. He feels it didn’t go very far in defining the possible functionality at the edge or gateway level, which is something the Gateway API will fix. “The biggest thing is that developers must weigh it out and decide what needs to be part of the core,” he emphasized.
Why all the confusion?
Despite the differences in these API gateway and service mesh solutions, there is a lot of overlap where the technologies borrow ideas from each other [16:00]. Vik pointed to aspects like the separation of the data plane and the control plane, and hybrid deployment across them, which come from the service mesh idea. He believes all those options are why many people get confused about the role of the API gateway, the role of the service mesh, and how the two will talk to each other.
Vik sees answers in a holistic approach where developers can keep writing applications in different programming languages and still keep their decisions simple. “I’ve used a popular Google app with everything written in different languages to show how the code never changed,” said Vik. “I just took their containers and wrapped them into YAML as an example, and it just works. It shows that without changing the code, you can lift and shift your existing container workloads and move them into a service mesh world.”
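The “wrap it in YAML” step Vik describes can be as small as labeling a namespace. In Kuma, for example, a namespace label asks the control plane to inject an Envoy sidecar into every pod deployed there, with no application code changes; the namespace name below is hypothetical:

```yaml
# Hypothetical namespace; the label tells Kuma's control plane to
# inject a sidecar proxy into every pod scheduled in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-app
  labels:
    kuma.io/sidecar-injection: enabled
```

Existing Deployments redeployed into this namespace pick up the mesh’s mTLS and telemetry automatically, which is the lift-and-shift effect described above.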
How Do We Get Started on the API Gateway Journey?
While the conversation with Vik went a long way to clearing up API gateways, many people still don’t know where to start. Vik used an example from his own professional experiences before and after the start of the Cloud Native Computing Foundation’s Kuma Project [22:20].
According to Vik, Kuma aims to be a universal mesh, which means it doesn’t have a strong dependency on Kubernetes; regardless of deployment point, you can run it on VMs or even Windows.
“One thing that frustrated us as engineers at Kong, before we started our own project around Kuma, was the existing solutions were very convoluted,” Vik explained. “Our idea was to take the driver technology and make it simple so it only focuses on policies required for the application to support users that just want to get started with gateway APIs.”
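The focus on simple, application-level policies can be illustrated with one of Kuma’s policy types in universal (non-Kubernetes) mode. The mesh, policy, and service names below are hypothetical:

```yaml
# Kuma TrafficPermission policy (universal mode) -- hypothetical names.
# Allows traffic from the frontend service to the backend service;
# everything else in the mesh stays subject to the default rules.
type: TrafficPermission
mesh: default
name: allow-frontend-to-backend
sources:
  - match:
      kuma.io/service: frontend
destinations:
  - match:
      kuma.io/service: backend
```

A policy like this is the whole unit of configuration: no proxy internals, just a statement of which service may talk to which.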
We’d love to hear what you think. Ask a question or leave a comment below.
And stay connected with Cisco DevNet on social!