Elastic Event-Driven Applications

mmussett
4 min read · Mar 4, 2021


I’m writing this article to show how easily your event-driven applications can scale using KEDA.

“So, what’s KEDA, and why would I need it?”

Well, KEDA is a Kubernetes-based event-driven autoscaler component that can easily scale your deployments based on the number of events needing to be processed.

KEDA is very flexible and can scale your deployments up and down based on key metrics in your infrastructure, such as message queue depth.

As an example, I will take you through deploying a typical event-driven application and applying a KEDA configuration that scales the application up or down based on the depth of an AWS SQS queue.

First, you’re going to need an application. I’ve got a simple TIBCO BusinessEvents application I’m using. It’s configured to consume messages from an AWS SQS queue.

Building a container-based application from TIBCO BusinessEvents is very simple. Using the build_image tool from BE_TOOLS, a container image can be created.

./build_image.sh -i app -a /Users/mmussett/src/be/BE6/AWSDemo/build -s /Users/mmussett/src/be/BE6/AWSDemo/installer   -t awssqsdemo:1.0

Tag and push the image to your favourite container registry…

docker tag awssqsdemo:1.0 mmussett/awssqsdemo:latest
docker push mmussett/awssqsdemo:latest

Deploy your application to Kubernetes…

kind: Namespace
apiVersion: v1
metadata:
  name: be-apps
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-config
  namespace: be-apps
data:
  AWS_ACCESS_KEY_ID: #key
  AWS_SECRET_ACCESS_KEY: #secret
  AWS_REGION: #region
  AWS_ROLE_ARN: #arn
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awssqsdemo
  namespace: be-apps
  labels:
    app: awssqsdemo
spec:
  selector:
    matchLabels:
      app: awssqsdemo
  template:
    metadata:
      labels:
        app: awssqsdemo
    spec:
      containers:
      - name: awssqsdemo
        image: mmussett/awssqsdemo:latest
        imagePullPolicy: Always
        envFrom:
          - configMapRef:
              name: aws-config
      restartPolicy: Always

$ kubectl apply -f all-in-one.yaml

Now comes the interesting part: deploy KEDA to your cluster and configure your application for scaling.

kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.1.0/keda-2.1.0.yaml

You need to create a TriggerAuthentication and a ScaledObject. The TriggerAuthentication is referenced by the trigger’s authenticationRef and contains the credentials the scaler needs in order to scrape queue metrics. The ScaledObject links your deployment to the correct scaler and provides the parameters needed to trigger a horizontal pod scale event.

apiVersion: v1
kind: Secret
metadata:
  name: aws-secrets
  namespace: be-apps
data:
  AWS_ACCESS_KEY_ID: #key
  AWS_SECRET_ACCESS_KEY: #secret
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-aws-credentials
  namespace: be-apps
spec:
  secretTargetRef:
  - parameter: awsAccessKeyID     # Required.
    name: aws-secrets             # Required.
    key: AWS_ACCESS_KEY_ID        # Required.
  - parameter: awsSecretAccessKey # Required.
    name: aws-secrets             # Required.
    key: AWS_SECRET_ACCESS_KEY    # Required.
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: aws-sqs-queue-scaledobject
  namespace: be-apps
spec:
  scaleTargetRef:
    name: awssqsdemo
  minReplicaCount: 0
  maxReplicaCount: 2
  pollingInterval: 5 # Optional. Default: 30 seconds
  cooldownPeriod: 30 # Optional. Default: 300 seconds
  triggers:
  - type: aws-sqs-queue
    authenticationRef:
      name: keda-trigger-auth-aws-credentials
    metadata:
      queueURL: https://sqs.eu-west-1.amazonaws.com/1234567890/mmussett-test-queue
      queueLength: '50'
      awsRegion: "eu-west-1"
      identityOwner: pod
      awsRoleArn: arn:aws:iam::1234567890:role/K8sDev

Apply our KEDA configuration to our cluster…

kubectl apply -f keda.yaml

Our SQS queue-based autoscaler is now configured and running.

Let’s take a look at the KEDA’s ScaledObject in more detail…

The scaleTargetRef.name must be set to the name of the resource you are scaling; in our demo that’s the name of our Deployment, awssqsdemo.

The triggers section defines the type of scaler trigger we are using. In this instance it’s type: aws-sqs-queue.

Each trigger has its own metadata configuration that must be set in order for the trigger to work.

The important value is the queueLength parameter, which is passed to the scaler to calculate the number of pods to scale to.

So for example, if a single pod can consume messages at a rate of 10 per second, we set the queueLength to 10. Our scaler will use the actual message queue depth to calculate the number of pods to scale to.
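To make that calculation concrete, here is a minimal sketch of the scaling rule KEDA’s HPA effectively applies: the desired replica count is roughly the queue depth divided by queueLength, rounded up and clamped between minReplicaCount and maxReplicaCount. The function name and code are illustrative, not KEDA’s actual implementation.

```python
import math

def desired_replicas(queue_depth: int, queue_length_target: int,
                     min_replicas: int, max_replicas: int) -> int:
    """Approximate the scaling rule: ceil(queue depth / target), clamped."""
    if queue_depth == 0:
        # With minReplicaCount: 0, KEDA can scale the deployment to zero.
        return min_replicas
    desired = math.ceil(queue_depth / queue_length_target)
    return max(min_replicas, min(desired, max_replicas))

# With queueLength: '50', minReplicaCount: 0, maxReplicaCount: 2 as in our ScaledObject:
print(desired_replicas(0, 50, 0, 2))    # empty queue -> 0, scaled to zero
print(desired_replicas(75, 50, 0, 2))   # 75 messages -> 2 pods
print(desired_replicas(500, 50, 0, 2))  # 2, capped at maxReplicaCount
```

Notice how maxReplicaCount acts as a hard ceiling: however deep the queue gets, the deployment never exceeds two pods.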

There are a number of other tuning arguments available to you, such as minReplicaCount, maxReplicaCount, and pollingInterval.

Advanced settings allow you finer control of the Horizontal Pod Autoscaler, such as cooldownPeriod, which determines how long to wait after the last active trigger before scaling your resources back to minReplicaCount.
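The cooldownPeriod decision boils down to a simple check: only once the trigger has been inactive for at least that many seconds does KEDA drop the deployment back to minReplicaCount. A hedged sketch of that logic (names and timestamps are illustrative):

```python
def should_scale_to_min(last_active_ts: float, now: float,
                        cooldown_period: float) -> bool:
    """Return True once the trigger has been inactive for at least
    cooldown_period seconds, i.e. it is safe to scale back down."""
    return (now - last_active_ts) >= cooldown_period

# cooldownPeriod: 30 as in our ScaledObject
print(should_scale_to_min(last_active_ts=100.0, now=120.0, cooldown_period=30))  # False: only 20s idle
print(should_scale_to_min(last_active_ts=100.0, now=135.0, cooldown_period=30))  # True: 35s idle
```

A short cooldown like our 30 seconds makes the demo responsive; in production you would typically keep it longer to avoid flapping between zero and one replica.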

KEDA currently supports a long list of scalers; in the coming months I hope to add support for TIBCO Messaging to that list.

Currently supported scalers

Hopefully this short article highlights the benefits of using KEDA in your EDA deployments where you need elasticity based on demand.
