AWS Fargate (for ECS) is a serverless platform for running container workloads without the complexity of managing EC2 instances running Docker, or the steeper learning curve of EKS.
TIBCO BusinessWorks Container Edition coupled with AWS Fargate allows less adventurous or inexperienced users to quickly move their on-premises BusinessWorks resources to AWS with little fuss.
In this Medium post and Github repository I will show you how easy it is to quickly build and deploy a simple BusinessWorks Container Edition application to AWS Fargate.
I’m a big fan of automation tooling, especially when it saves both time and effort. That’s why I’ve provided Terraform scripts that will take the donkey work out of creating the AWS infrastructure necessary to run BusinessWorks Container images on AWS Fargate.
The Github repository contains a sample BusinessWorks project implementing a simple demo API that we will deploy to AWS Fargate.
You can download a trial version of BusinessWorks Container Edition here if you want to explore the TripStatus demo API or create your own.
Building the BWCE Container Image for deployment
In order to build any BusinessWorks Container Edition (BWCE) image we need to start with a pre-built runtime base image provided by TIBCO.
You’ll need Docker installed on your machine, and building the runtime is as simple as:
Copy the bwce-runtime-<version>.zip file to the <BWCE-HOME>/docker/resources/bwce-runtime folder
In a terminal, go to <BWCE-HOME>/docker and run: docker build -t <TAGNAME> .
e.g. docker build -t tibco/bwce:latest .
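Once the build completes you can confirm the base runtime image is available locally (this assumes you used the tibco/bwce:latest tag from the example above):
$ docker images tibco/bwce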
Once we have the base runtime image we can create our BWCE container image using either the TIBCO Business Studio IDE or Maven with the BWCE Maven Plugin found here.
Our generated EAR is combined with the BWCE runtime image to give you the complete container image ready for deployment.
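If you go down the Maven route, generating the EAR is typically just a standard package run from the application’s parent project. Treat this as a sketch, as the exact goals depend on how the BWCE Maven Plugin is configured in your POM:
$ mvn clean package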
In the package folder I’ve included the Dockerfile and the BWCE EAR that do this for you:
FROM tibco/bwce:latest
LABEL maintainer="mmussett@tibco.com"
LABEL version="1.0.0"
LABEL description="Trip Status API"
ADD TripStatusAPI_1.0.0.ear /
EXPOSE 8080
EXPOSE 7777
Build the target container image by running:
$ docker build -t tripstatus:latest .
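Optionally, you can sanity-check the image locally before pushing it anywhere. The port mapping below matches the EXPOSE 8080 in the Dockerfile and assumes nothing else is listening on that port; run the curl from a second terminal:
$ docker run --rm -p 8080:8080 tripstatus:latest
$ curl http://localhost:8080/trips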
Pushing the Container Image to Amazon Elastic Container Registry
Once we have the container image built we need to push it to Amazon Elastic Container Registry (ECR) so that the image can be pulled by Fargate.
Using the AWS CLI, creating the repository is as simple as running:
$ aws ecr create-repository --repository-name tripstatus
The URI of the repository can be found by running:
$ aws ecr describe-repositories | jq '.repositories[] | {name: .repositoryName, uri: .repositoryUri}'
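The output will look something like this, with your own account ID and region in the URI:
{
  "name": "tripstatus",
  "uri": "696093067220.dkr.ecr.eu-west-1.amazonaws.com/tripstatus"
}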
You’ll need the URI to tag your local image before you can push it to ECR.
jq is a great utility for processing JSON output and can be found here.
Tag the image by running:
$ docker tag tripstatus:latest <<repositoryUri>>
Example…
docker tag tripstatus:latest 696093067220.dkr.ecr.eu-west-1.amazonaws.com/tripstatus
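Before you can push, Docker also needs to be authenticated against your ECR registry. With AWS CLI v2 that looks like the following (substitute your own registry address; older CLI versions use the aws ecr get-login command instead):
$ aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 696093067220.dkr.ecr.eu-west-1.amazonaws.com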
Finally, push the image with:
$ docker push <<image>>
Example…
docker push 696093067220.dkr.ecr.eu-west-1.amazonaws.com/tripstatus:latest
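You can double-check that the image has landed in the repository with:
$ aws ecr describe-images --repository-name tripstatus | jq '.imageDetails[].imageTags'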
Using Terraform to provision our AWS infrastructure
The AWS architecture we’re going to create using Terraform scripts will provide us with a new VPC configured with an Application Load Balancer fronting ECS, running across two Availability Zones in our Region.
From here all the ‘heavy lifting’ is done for us by Terraform. If you’re not familiar with Terraform, it gives us the ability to configure cloud infrastructure using code, or Infrastructure as Code (IaC) as it’s known.
Terraform is a great way of building infrastructure as it removes the burden of navigating the complex AWS CLI syntax and flags.
Provided in the Github repository are the Terraform scripts.
I’ve extracted configuration items into variables that reside in the variables.tf file, which you’ll need to edit prior to running Terraform.
The variables.tf file contains information such as the AWS credentials Terraform needs to connect (key and secret) as well as the image URL that ECS will pull from. I’ll avoid going into further detail here as each variable has a description of its purpose.
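One step worth calling out: the first time you use the working directory you’ll need to initialise it so Terraform can download the AWS provider plugins:
$ terraform init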
To preview the infrastructure Terraform will provision, you just need to run:
$ terraform plan
All being well, Terraform will create its execution plan and inform you of what will be created.
To actually apply that plan you just need to run:
$ terraform apply
It will take a few minutes for Terraform to complete execution and for AWS ECS to bring up the containers. At the end you will have a BusinessWorks Container application running on AWS Fargate.
We can test the results of our labour using the cURL tool. First we need the Application Load Balancer address, which can be found by running:
$ aws elbv2 describe-load-balancers | jq '.LoadBalancers[] | {DNS: .DNSName}'
{
  "DNS": "mm-bwce-fargate-lb-1588994854.eu-west-1.elb.amazonaws.com"
}
We can now use the DNS of the ALB to call our API endpoint:
$ curl -X GET --header 'Accept: application/json' 'http://mm-bwce-fargate-lb-1588994854.eu-west-1.elb.amazonaws.com:8080/trips'
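Once you’ve finished experimenting, you can tear down everything Terraform created (and stop incurring AWS charges) with:
$ terraform destroy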
The intention of this article was to show how easy it is to deploy container workloads to the AWS Fargate service. I hope you find it useful if you’re taking your first steps in the world of containers.