
As discussed in my previous post, we will now start getting some hands-on experience with different FaaS-on-Kubernetes providers. Our first engine will be OpenFaaS, a popular option in the community.

Let’s get started!

The first thing you will need to do is install the OpenFaaS CLI on your workstation. Once this is done, you can use it to build and deploy functions. For example, on macOS you would install it with:

brew install faas-cli
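
If you are not on macOS, the OpenFaaS project also publishes an installation script for the CLI. As a quick sketch (assuming the script location has not changed since this was written), on Linux you could run:

curl -sSL https://cli.openfaas.com | sudo sh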

The easiest way to install OpenFaaS itself is to use arkade, again on macOS:

sudo curl -SLsf https://dl.get-arkade.dev/ | sudo sh
arkade install openfaas --load-balancer
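
arkade prints the gateway and login instructions at the end of the installation. If you scrolled past them, you should be able to display them again with arkade's info command (a small convenience, assuming your arkade version supports it):

arkade info openfaas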

Using the ‘--load-balancer’ option will ask our k8s cloud provider to give us an externally accessible IP address for the ‘gateway-external’ service (it might take a couple of minutes):

$ kubectl get svc gateway-external -n openfaas
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
gateway-external   LoadBalancer   10.31.244.182   35.241.131.142   8080:32433/TCP   114s

We need to assign that IP address to the required URL variable:

export OPENFAAS_URL=http://35.241.131.142:8080
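
If you prefer not to copy the address by hand, you can build the variable straight from the service status. This is just a sketch using a standard kubectl jsonpath query; adjust it if your cloud provider exposes a hostname instead of an IP:

export OPENFAAS_URL=http://$(kubectl get svc gateway-external -n openfaas -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):8080
echo $OPENFAAS_URL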

You might also want to check that all the pods in the openfaas namespace are running and ready:

kubectl get pods -n openfaas
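
If you would rather wait for the core components to be ready than eyeball the pod list, something along these lines should do (a sketch relying on standard kubectl commands; the gateway is the key deployment):

kubectl rollout status -n openfaas deploy/gateway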

Time now to log in from your local workstation to the OpenFaaS deployment:

PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
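
A quick way to confirm the login worked is to ask the CLI about the gateway it is talking to; if everything is wired up correctly, faas-cli should report the remote gateway version alongside the client version:

faas-cli version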

The OpenFaaS runtime engine is now set up and you are ready to start deploying your functions to it! For reference, function pods will be deployed in a separate namespace named openfaas-fn.

For our first test let’s use something simple like figlet, a small program that renders large ASCII letters out of a provided message. Deploying it is as simple as running the following command:

faas-cli store deploy figlet

You may check it has been deployed with:

faas-cli list

Now let’s see it working:

echo "Hello Cisco" | faas-cli invoke figlet

It’s working! But… what happened? Well, basically the deploy command created a k8s deployment with a single replica in the openfaas-fn namespace:

$ kubectl get deployment -n openfaas-fn
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
figlet   1/1     1            1           40m

And every time you run a message through it the number of invocations grows:

$ faas-cli list
Function                Invocations     Replicas
figlet                  2               1

You can also see how the number of pod replicas scales up and down based on the workload; the sketch below shows one way to try it.
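
To see it in action you can generate some load against figlet and, in a second terminal, watch the replica count of its deployment. This is only a rough sketch; the exact scaling behavior depends on the default alerting rules OpenFaaS ships with:

for i in $(seq 1 1000); do echo "scale me" | faas-cli invoke figlet > /dev/null; done

kubectl get deployment figlet -n openfaas-fn --watch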

Please feel free to explore other apps available in the store:

faas-cli store list

Hopefully you are now excited and want to start deploying your own code as functions! If that’s the case, you may use the OpenFaaS CLI to find templates for the most common programming languages by running the following command:

faas-cli template store list

To download them to a local template folder you just need to run:

faas-cli template pull

With that in place you can see the available template options and start creating your functions:

$ faas-cli new --list
Languages available as templates:
- csharp
- dockerfile
- go
- java11
- python
- node

Let’s create a simple one using the node template:

faas-cli new callme --lang node

This will create a callme.yml manifest and a new folder named callme with the template code for your new function. Before anything else, let’s edit the manifest and prepend your Docker Hub user ID to the resulting image name, so that the image can be published correctly later. It should look similar to this:

image: juliocisco/callme:latest
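
For context, the generated manifest is quite small. Yours may differ slightly depending on your faas-cli version, but it should look roughly like this:

$ cat callme.yml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  callme:
    lang: node
    handler: ./callme
    image: juliocisco/callme:latest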

If you take a look at the handler.js file in the callme folder, you will notice that the template code just returns a status: “done” message. For your own function you would include your code here, but this is good enough for our demo.
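
For reference, the handler generated by the node template is just a few lines. Your copy may vary a bit with the template version, but it should be close to this:

$ cat callme/handler.js
"use strict"

module.exports = (context, callback) => {
    callback(undefined, {status: "done"});
}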

The first thing you will need to do is build the container image that includes your code. Please make sure you have Docker running on your workstation, as the build process runs locally.

faas-cli build -f callme.yml

With that done, you can now publish the image to your repo (Docker Hub by default):

faas-cli push -f callme.yml

And finally, you need to create a new deployment in your k8s cluster using the published image:

faas-cli deploy -f callme.yml
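
As a side note, once you start iterating on the function you do not have to run the three steps separately: faas-cli also provides an up command that chains build, push and deploy in one go:

faas-cli up -f callme.yml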

Your new function is now deployed in OpenFaaS! Let’s invoke it and see if it works: we will pass it some input (e.g. today’s date) and it should answer with the status: “done” message.

$ date | faas-cli invoke -f callme.yml callme
{"status":"done"}

It works!

This function is now available to the outside world through the HTTP endpoint exposed via the LoadBalancer IP. Let’s give it a try:

$ curl -X GET $OPENFAAS_URL/function/callme
{"status":"done"}

Nice!

You can of course point your browser to the same URL as well.

As you can see, OpenFaaS is easy to deploy, very k8s friendly with its own namespaces and function deployments, and a great starting point thanks to the templates you can use to deploy your own code.

See you in my next post, where we will continue exploring other serverless engines you can run on Kubernetes. Stay tuned!

Any questions or comments, please let me know in the comments section below, on Twitter, or on LinkedIn.


Author

Julio Gomez
Programmability Lead, EMEAR, Systems Engineers