Using AWS SecretsManager for Configuration Management with TIBCO BusinessEvents 6

mmussett
5 min read · Jan 15, 2021
Photo by Kristina Flour on Unsplash

Like most configuration management tools, AWS Secrets Manager gives you centralised control over the passwords and other secrets used by your applications.

TIBCO BusinessEvents 6 now supports integration with AWS Secrets Manager, allowing your application to retrieve its configuration values at container runtime.

TIBCO BusinessEvents provides Global Variables as an easy way of setting configuration values throughout your project. When a BusinessEvents project is deployed, all occurrences of the global variable name are replaced with the provided global variable value.

Global Variable values can also be overridden, normally by setting them on the engine command line using either the “--propVar” or “-p” flag argument.
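
For example, here’s a hedged sketch of overriding a single value on the engine command line (the engine invocation, file paths and Global Variable name are illustrative, not taken from this project):

be-engine --propVar Messaging/SQS/queueName=my-dev-queue \
  -c /path/to/AWSDemo.cdd /path/to/AWSDemo.ear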

With the new integration with AWS Secrets Manager, you now have the ability to override the project’s Global Variables using values stored as Secret Key/Value pairs within a Secret.

In AWS Secrets Manager we create a new Secret called “dev/tibco/be”. At runtime, our BusinessEvents application will reference any Secret Key/Value pairs we define there, outside of the application development phase.

AWS Secrets Manager Secret named “dev/tibco/be”
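
If you prefer the AWS CLI to the console, here’s a hedged sketch of creating the same Secret (the Key/Value content shown is a hypothetical placeholder):

aws secretsmanager create-secret \
  --name dev/tibco/be \
  --secret-string '{"Messaging/SQS/queueName": "my-dev-queue"}'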

Each Global Variable we wish to override at runtime will need to have a Secret Key/Value defined. As BusinessEvents supports a hierarchical group structure for Global Variables, the group names need to be “flattened” when defining the Key. Using a / between Key terms allows us to represent the group hierarchy within the Key:

Global Variable group hierarchy within TIBCO BusinessEvents Studio
Secret Key/Value used by our BusinessEvents Application
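
To make the flattening concrete (the group and variable names here are hypothetical), a Global Variable named queueName sitting under the group hierarchy Messaging/SQS in Studio would be keyed in the Secret as:

Messaging/SQS/queueName = my-dev-queue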

To use AWS Secrets Manager, you must deploy your BusinessEvents application within a container architecture. Tooling is provided to create your container using the be-tools repository found in the TIBCOSoftware GitHub organisation (https://github.com/TIBCOSoftware/be-tools):
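
If you haven’t already, grabbing the repository is a simple clone:

git clone https://github.com/TIBCOSoftware/be-tools.git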

Building our BusinessEvents Application Container Image

Creating a container image for your BusinessEvents application is straightforward to do. You need to bring to the party:

  • Your Application EAR file generated from your BE application project.
  • Your Application CDD file.
  • Access to either a local BE installation or the BE installer (Linux variant).
  • A download of the be-tools repository.
  • Optionally, any additional JAR files you may need.

I’ll not go too far into the container build process; I’ll save that for another Medium post to follow shortly after this one!

I’ve marshalled all my artefacts into a single directory, e.g.:
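
As a hypothetical layout (the file names are illustrative, based on the artefact list above):

build/
├── AWSDemo.ear
├── AWSDemo.cdd
└── lib/          (optional additional JARs)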

We then run the be-tools container image builder script with a command line like the following example:

./build_image.sh -i app \
  -a /Users/mmussett/src/be/BE6/AWSDemo/build \
  -s /Users/mmussett/src/be/BE6/AWSDemo/installer \
  -t awssqsdemo:1.0 \
  --gv-provider aws

The be-tools image builder actually uses Docker BuildKit under the covers for improved docker build performance.
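
If your Docker version doesn’t enable BuildKit by default, you can switch it on in your shell before running the build (this is a standard Docker environment variable, nothing be-tools specific):

export DOCKER_BUILDKIT=1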

All the magic is wrapped up in the build_image.sh script; its command-line arguments are:

Usage: build_image.sh

[-i/--image-type] : Type of the image to build ("app"|"rms"|"teagent"|"s2ibuilder") [required]
Note: For s2ibuilder image usage refer to be-tools wiki.

[-a/--app-location] : Path to BE application where cdd, ear & optional supporting jars are present [required if --image-type is "app"]

[-s/--source] : Path to BE_HOME or TIBCO installers (BusinessEvents, Activespaces or FTL) are present (default "../../")

[-t/--tag] : Name and optionally a tag in the 'name:tag' format [optional]

[-d/--docker-file] : Dockerfile to be used for generating image [optional]

[--gv-provider] : Name of GV provider to be included in the image ("consul"|"http"|"custom") [optional]
Note: This flag is ignored if --image-type is "teagent"

[--disable-tests] : Disables docker unit tests on created image (applicable only for "app" and "s2ibuilder" image types) [optional]

[-h/--help] : Print the usage of script [optional]

NOTE : supply long options with '='

As you can see, we’re using “--gv-provider aws” as our GV provider value; there’s also support for API-based providers using “http”, as well as Consul.

Our built docker image comes in at around 738MB:
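
You can check the size for yourself once the build completes, using the repository and tag supplied above:

docker images awssqsdemo:1.0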

Running our containerised BusinessEvents application

We can run our BusinessEvents application using the normal docker run command line.

We need to tell the container how to connect to AWS Secrets Manager by passing in the following environment variables:

-e AWS_ACCESS_KEY_ID
-e AWS_SECRET_ACCESS_KEY
-e AWS_REGION
-e AWS_DEFAULT_REGION
-e AWS_ROLE_ARN
-e AWS_SM_SECRET_ID

For our application, the entire command line looks something like this:

docker run \
  -e AWS_ACCESS_KEY_ID=AKIA3WYHJAA64EHABC \
  -e AWS_SECRET_ACCESS_KEY=32sdf4v65xEKVfFw0p7AaLNJaa3xcv9UNgXe \
  -e AWS_REGION=eu-west-1 \
  -e AWS_DEFAULT_REGION=eu-west-1 \
  -e AWS_ROLE_ARN=arn:aws:iam::123456789012:role/TIBCO/Administrator \
  -e AWS_SM_SECRET_ID=dev/tibco/be \
  -p 8108:8108 --name=awssqsdemo awssqsdemo:1.0

Running our container:

We can see from the log output of the BE Engine that the Global Variable values have been overridden with the Secret Key/Value strings from AWS Secrets Manager:
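
To follow the engine log output from the running container yourself:

docker logs -f awssqsdemo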

In summary, BusinessEvents 6.0 allows you to utilise DevOps best practices for integrated Configuration Management, increasing agility and reducing risk by maintaining configuration outside of the application.

I do hope this article provides new insight into how TIBCO BusinessEvents 6 supports DevOps and your cloud adoption initiatives.

If you have any further questions or would like to discuss anything related to TIBCO BusinessEvents 6 please do reach out to me.
