
Note to reader: Time stamps embedded in the text below in [  ] are there to help you navigate to the related section of the full interview video included in this blog.  

APIs are a big subject area for the Cloud Native developer community. One aspect in particular, API quality, is coming more into focus. In this episode of my Cloud Unfiltered podcast I talk to Matt LeRay, co-founder and CTO of Speedscale, about why APIs are so important, and why API testing is critical.  

APIs and the return of service-based architectures

APIs are not new, so why are cloud-native companies suddenly prioritizing them? According to Matt, a big reason is that “we’ve all become obsessed with service based architectures again” [1:06]. He notes advantages to the return of this approach, including that organizational structure matches the technology, because teams and individuals can work within their own containers. 

APIs come into play because they hold these structures together. In a perfect world, APIs would work seamlessly within these structures. However, according to Matt, “the problem is [that] in the real world, you get contract changes, and the drift that happens between them” [1:33]. In addition, edge cases, in which APIs are used in ways they were not designed for, contribute to the drift. Keeping on top of the drift between services is why API testing is so important.  

APIs and quality

In order to test APIs, there have to be standards against which APIs are judged. Many organizations don’t have them. Within a company, divisions may each have their own ideas about how APIs should be used, but there are no organization-wide standards.  

Matt attributes some of this lack of quality to cloud-native companies thinking more about infrastructure than the application level. “[The] concept of load testing is incredibly difficult in cloud-native architecture, and people have kind of just thrown it to the side and they load test in production” [3:15]. 

But customers aren’t going to tolerate broken APIs. Each organization and its development teams have to define the standards and stick with them.  

Automating API testing

While organizations can decide to prioritize API quality and develop standards, they still face the perception of API testing as a manual process. “That is what we are used to,” Matt says. “There are great tools, but they’re designed for one kind of modality, which is, ‘I’m a developer, and I know what’s going to happen’” [7:28]. 

Matt challenges cloud-native companies to take what they have learned from Kubernetes and apply it to API testing, namely the tenet that “automation is key.” 

Automating testing is important because:   

  • You don’t want to rely on manual testing before your components get to the pipeline, and if you did, there’s a high likelihood that it wouldn’t work.  
  • As your development teams grow, you might be using internal and external APIs, and different kinds of service meshes: things that need to come together and work every time.  

How Speedscale tests APIs

Matt discussed how Speedscale is automating API testing and, in his words, treating testing “like cattle instead of pets” [7:47]. 

Traffic Replay 

Speedscale’s Traffic Replay tool copies what’s going on in production, constantly refreshes the entire testing suite, and continuously tests using what is currently happening. The tool matches the mindset of the people building the application and creates a constantly updated automated framework.  

Matt walked through the entirety of Speedscale’s testing process step-by-step:  

  1. Take a “snapshot”: Speedscale makes a recording of the entire set of interactions happening in production. This includes the interactions of internal and external users and downstream dependencies as well.  
  2. Move snapshot into CI pipeline: Every time a merge request or pull request is submitted, Speedscale will run the entire battery of tests from the production environment.  
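
The two steps above amount to a record-and-replay loop: capture real production interactions, then replay them against every proposed change. As a rough sketch of that idea (the function and field names below are illustrative assumptions, not Speedscale’s actual API):

```python
# Hypothetical sketch of the snapshot-and-replay idea described above.
# None of these names come from Speedscale; they are illustrative only.
import json
from dataclasses import dataclass

@dataclass
class RecordedCall:
    method: str
    path: str
    request_body: str
    response_body: str
    status: int

def take_snapshot(production_log: str) -> list[RecordedCall]:
    """Step 1: turn captured production traffic into a replayable snapshot."""
    return [RecordedCall(**entry) for entry in json.loads(production_log)]

def replay_in_ci(snapshot, call_service) -> list[str]:
    """Step 2: on every PR, replay the snapshot against the new build
    and report any responses that drifted from production behavior."""
    failures = []
    for call in snapshot:
        status, body = call_service(call.method, call.path, call.request_body)
        if status != call.status or body != call.response_body:
            failures.append(f"{call.method} {call.path}: drift detected")
    return failures

# Usage, with a stubbed service standing in for the new build:
log = json.dumps([{"method": "GET", "path": "/users/1",
                   "request_body": "", "response_body": '{"id": 1}',
                   "status": 200}])
snapshot = take_snapshot(log)
failures = replay_in_ci(snapshot, lambda m, p, b: (200, '{"id": 1}'))
print(failures)  # [] -> no drift, safe to merge
```

The point of the pattern is that the test suite is generated from real traffic rather than hand-written, which is what keeps it current as the services drift.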

According to Matt, “when you submit a PR and it’s accepted, it means more than just someone read the code, it means that it’s actually going to work in production with the new software” [11:38]. 

How do we simplify API testing?

API quality will improve with more automated testing, but to encourage more testing in general, it also needs to be simpler. Matt mentioned that some of Speedscale’s customers are trying to make this work easier by figuring out how to do things like automated test containers. He describes the systems as a “sort of edge compute where every developer gets their own version, and they don’t have to understand it anymore.”  

He likens automated test containers to what Terraform did for Infrastructure as Code.  

“They’re taking that concept and saying, ‘people can get the benefits of production or the benefits of developing in that system without actually having to understand the infrastructure,’” Matt says [17:50]. 

Maturity will lead to standardization

Within the cloud-native community, standardization and definitions about what makes a “good” or “poor” API will come with time and maturity. In addition, according to Matt, setting up measurements and KPIs will be a part of standardization as well.  

“What I’m focusing on is ‘how do we give people the knowledge and the tools to build measurements?’” Matt says. “Just showing people a dashboard that says, ‘hey, this build granted this latency,’ is stunningly useful” [25:21]. 

Matt ended the conversation by stating that APIs need “Golden Signals,” like those found in Google’s SRE Handbook. “You need more than the Golden Signals,” Matt says, “but it’s a good starting point; we’re figuring that out for APIs” [26:36]. 
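
For context, Google’s Golden Signals are latency, traffic, errors, and saturation. As a minimal illustration of the kind of measurements Matt is talking about, two of them can be computed from recorded API calls (the data layout here is an assumption made for the example, not anything from the interview):

```python
# Illustrative computation of two Golden Signals (latency and errors)
# over a batch of recorded API calls. The input format is assumed.
import math

def latency_p99(latencies_ms: list) -> float:
    """99th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies_ms)
    idx = math.ceil(0.99 * len(ordered)) - 1
    return ordered[idx]

def error_rate(status_codes: list) -> float:
    """Fraction of calls that returned a 5xx server error."""
    errors = sum(1 for code in status_codes if code >= 500)
    return errors / len(status_codes)

# Example: four recorded calls as (latency_ms, status_code) pairs.
calls = [(120, 200), (95, 200), (300, 500), (110, 200)]
print(latency_p99([c[0] for c in calls]))   # 300
print(error_rate([c[1] for c in calls]))    # 0.25
```

Tracking numbers like these per build is what turns “good API” from a matter of opinion into something a pipeline can enforce.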
 


We’d love to hear what you think. Ask a question or leave a comment below.
And stay connected with Cisco DevNet on social!

LinkedIn | Twitter @CiscoDevNet | Facebook | Developer Video Channel



Authors

Michael Chenetz

Technical Marketing Engineer

Cloud