Java Lambda function — Terraform deployment

Nicolas Vogt
4 min read · May 15, 2021

How to use IaC to publish Java code on AWS Lambda


You will find a whole literature on this subject, but most of it takes for granted that you are comfortable with the specifics of Java compilation. I used Java during my studies, but then I took a systems engineering job and, for lack of practice, everything vanished from my head.

Now, as a cloud architect, I have to deal with all kinds of languages and find ways to automate and secure their cloud deployments. What you will find here is a basic example of how to deploy a Java Lambda function on AWS with Terraform.

Code

First of all, you need to write your Lambda function. The piece of information that was hardest to find concerns the structure of the code. You will have to lay it out like this:

- src/
| - main/
| | - java/
| | | - my_package_name/
| | | | - my_sub1/
| | | | | myHandler.java
- pom.xml

This structure is very important; otherwise, the compiler will not find your class(es).

Your class will look like this:
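Here is a minimal sketch of such a class, assuming the package com.example described below and the aws-lambda-java-core library; the event type and return value are arbitrary choices:

package com.example;

import java.time.LocalDate;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Minimal handler: reads MY_ENV and prints it with the current date.
public class Handler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        String myEnv = System.getenv("MY_ENV");
        String message = myEnv + " - " + LocalDate.now();
        System.out.println(message);
        return message;
    }
}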

This function retrieves a value from the environment variable MY_ENV and prints it concatenated with the current date. It is a little more than a hello world, but not by much, so you should be fine.

What matters here is that you import the AWS libraries and define a class that implements RequestHandler; mine is called Handler. In this class, you then define the method that will be the target of your Lambda function; mine is called handleRequest.

To tie this back to the naming convention and the directory structure mentioned earlier, my file has to be Handler.java and my package is com.example. So here is what my directory structure looks like:

- src/
| - main/
| | - java/
| | | - com/
| | | | - example/
| | | | | Handler.java
- pom.xml

That should be enough code to start with; now let's compile it.

Compile

According to the AWS documentation, there are two ways to compile: Gradle or Maven. I chose Maven because it was the one mostly used in my organization, so I will assume we compile with it for the rest of this section.

Maven needs a pom.xml as its configuration file in order to know what to do. This file must sit at the root of the directory structure. Here is the configuration file you will need to compile the code above:
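Below is a minimal sketch of that pom.xml, assuming Java 11, the aws-lambda-java-core dependency, and the maven-shade-plugin; the group and artifact identifiers are placeholders:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>my-lambda</artifactId>
  <version>1.0.0</version>
  <name>java-my-lambda</name>
  <description>Sample Java Lambda function</description>

  <properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
  </properties>

  <dependencies>
    <!-- Provides the RequestHandler and Context interfaces -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-lambda-java-core</artifactId>
      <version>1.2.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- Bundles the code and its dependencies into a single jar -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.4</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>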

Basically, all you need to do here is to set an <artifactId>, a <version>, a <name>, and a <description>.

We also need to tell Maven to retrieve the libraries we are using; this is what the <dependencies> block defines, plus the maven-shade plugin that wraps everything up into a single package.

If you don’t have the Maven binary installed yet, run:

apt-get install maven

All you have to do now is compile with the following command:

mvn package

This should create a target/ directory with a jar file inside.

Easy, isn’t it? Okay, let’s Terraform it.

Publish

First you will need to set up a provider. I usually place it in a providers.tf file but you don’t have to.
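A minimal sketch, assuming a var.region variable that we will declare later:

provider "aws" {
  region = var.region
}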

This specifies the region where our deployment will be published. I set it as a variable, but you can use a fixed value if all your deployments go to the same region. If you need to assume a role in order to create resources, you can set up your provider like this:
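A sketch using the provider's assume_role block; the role ARN below is a placeholder:

provider "aws" {
  region = var.region

  assume_role {
    # Placeholder ARN: replace with the deployment role of your account
    role_arn = "arn:aws:iam::123456789012:role/terraform-deployment"
  }
}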

In the main.tf file, we will start to define our resources:
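Here is a sketch of that first part, using the templatefile() function to render the pom and a null_resource with a local-exec provisioner to run the Maven build; the resource and variable names are my own choices:

# Render pom.xml from the template with our input values
resource "local_file" "pom" {
  filename = "${path.module}/pom.xml"
  content = templatefile("${path.module}/templates/pom.tpl", {
    artifact    = var.artifact
    version     = var.lambda_version   # "version" is reserved as a variable name, hence lambda_version
    description = var.description
  })
}

# Run the Maven build; taint this resource to force a rebuild
resource "null_resource" "build" {
  depends_on = [local_file.pom]

  provisioner "local-exec" {
    command     = "mvn package"
    working_dir = path.module
  }
}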

Terraform will look for a template file at templates/pom.tpl and replace the artifact, version, and description variables with the input parameters.

To turn the pom.xml file into a template, rename it to pom.tpl, move it into a templates/ folder, and change the following tags to:

<artifactId>${artifact}</artifactId>
<version>${version}</version>
<name>java-${artifact}</name>
<description>${description}</description>

One thing you probably already know is that Terraform is stateful. Once the resource is created it is not changed until we modify its definition. This means that the compilation will only run once because it will not see any configuration change. You will have to taint your resource if you want to run this process every time you apply your Terraform plan.
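Assuming the build runs in a null_resource named build, as in the sketch above, that would be:

terraform taint null_resource.build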

Then we define our role:
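A sketch of such a role, with a trust policy that lets the Lambda service assume it; the role name is an assumption:

resource "aws_iam_role" "lambda" {
  name = "${var.artifact}-role"

  # Lets the Lambda service assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}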

This is a minimalist role that suits our example; if you need extra permissions to access external resources, grant them here.

Now for the major part, the definition of our Lambda function:
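A sketch, assuming the java11 runtime, the Handler class from above, and the shaded jar produced by mvn package; memory size, timeout, and the jar path are assumptions:

resource "aws_lambda_function" "java" {
  function_name = var.artifact
  role          = aws_iam_role.lambda.arn
  runtime       = "java11"
  handler       = "com.example.Handler::handleRequest"
  memory_size   = 512
  timeout       = 30

  # The shaded jar built by `mvn package`; the name follows the pom's artifact and version
  filename         = "${path.module}/target/${var.artifact}-${var.lambda_version}.jar"
  source_code_hash = filebase64sha256("${path.module}/target/${var.artifact}-${var.lambda_version}.jar")

  environment {
    variables = {
      MY_ENV = var.my_env
    }
  }

  depends_on = [null_resource.build]
}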

The source_code_hash argument takes the file and hashes it before sending it to AWS Lambda. This way, you don’t have to zip your file and push it as you would in the management console; everything is done inline.

The rest has to do with the log group and permissions:
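A sketch of that wiring; the retention period and the scope of the logging policy are my own choices:

# The function's log group; 14-day retention is an arbitrary choice
resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/${aws_lambda_function.java.function_name}"
  retention_in_days = 14
}

# Minimal permissions for the function to write its logs
resource "aws_iam_role_policy" "logging" {
  name = "${var.artifact}-logging"
  role = aws_iam_role.lambda.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["logs:CreateLogStream", "logs:PutLogEvents"]
      Resource = "${aws_cloudwatch_log_group.lambda.arn}:*"
    }]
  })
}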

And finally, we decide to trigger it by cron every day, so we need to declare an EventBridge resource to handle the schedule and invoke the Lambda.
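A sketch of that schedule, reusing the resource names from above; the rate expression and statement id are assumptions:

# A daily schedule; adjust the expression to your needs
resource "aws_cloudwatch_event_rule" "daily" {
  name                = "${var.artifact}-daily"
  schedule_expression = "rate(1 day)"
}

resource "aws_cloudwatch_event_target" "lambda" {
  rule = aws_cloudwatch_event_rule.daily.name
  arn  = aws_lambda_function.java.arn
}

# Allow EventBridge to invoke the function
resource "aws_lambda_permission" "events" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.java.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.daily.arn
}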

Before running it, we need to declare our variables:
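A sketch of the matching variables.tf; the names and defaults are my own choices (version is reserved as a variable name, hence lambda_version):

variable "region" {
  type    = string
  default = "eu-west-1"
}

variable "artifact" {
  type = string
}

variable "lambda_version" {
  type    = string
  default = "1.0.0"
}

variable "description" {
  type    = string
  default = ""
}

variable "my_env" {
  type    = string
  default = "hello"
}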

Now it is your turn. Define your variables in a terraform.tfvars file, and then run:

terraform init
terraform plan -out .tfplan
terraform apply .tfplan

That should do the job.

Note that the next time you run it, you will have to either apply with the -replace option or taint the build resource in order to force a new build.
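With the resource names used in the sketches above, and keeping the plan-then-apply workflow, that would be for example:

terraform plan -replace="null_resource.build" -out .tfplan
terraform apply .tfplan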

Hope this quick guide helped you, I wish you a very good evening and see you next time!
