Terraform, and infrastructure as code in general, is an essential tool in the DevOps arsenal. It provides the means to create a stable pipeline of environments through which developers can safely test their applications and – if handled correctly – offers the potential for fast, one-click deployments of full environments with minimal effort. Getting to this point, however, can require careful planning. Terraform is a simple, flexible and powerful tool – it doesn’t take much learning to get started with it – but if you don’t think ahead, your code can quickly become an over-complicated mess.
For the past two years, I have worked on projects creating cloud platforms with Terraform and it is safe to say that I have made a fair few mistakes. Thankfully, I’ve also picked up a few useful techniques. I’ll be sharing some of what I’ve learnt while keeping things generic as far as cloud providers are concerned; these concepts can be applied to any project using Terraform.
From here on out I will assume that you have a grasp of the core concepts of Terraform – basic commands and syntax, writing modules, basic state management. It’s not essential for everything I’ll talk about, but for some topics it will certainly help.
Everything in a module
You might already have found it useful to group certain sets of resources into modules. Probably the most obvious example is a virtual machine with its associated network interfaces, public IP addresses and so on, but the same applies to common patterns for most, if not all, of the services you will come across. Neatly organising your Terraform into modules is a huge step towards a maintainable, easy-to-use codebase – but don’t stop there.
Having created a squeaky clean collection of useful modules, you might then put together everything you need for your environment in a directory, fill in the variables required by your modules, run “terraform init” and be well on your way to your first environment of many. Catch yourself at this point: hard-coding is not your friend. Parameterise everything and turn your environment code into one big module.
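As a rough sketch of what this might look like (the module path and variable names here are purely illustrative), the root configuration declares its inputs and passes them straight through to the environment module:

variable "environment_name" {}
variable "vm_count" {}

module "environment" {
  source = "./modules/environment"

  environment_name = "${var.environment_name}"
  vm_count         = "${var.vm_count}"
}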
From this point, you can create as many environments as you like by calling the module with different inputs, or better yet…
Use the -var-file flag
terraform plan -var-file=tfvars/devTest.tfvars
It’s possible to declare variables in your working directory, just as one would inside a module, and populate them with values from a file by adding the -var-file flag when generating your plan. In the above example, Terraform will look for a file called “devTest.tfvars” in a directory called “tfvars”, located in the current working directory.
If you’ve been paying attention, you might have just spotted a handy way to cut down on boiler-plate code: by calling your environment module as described above, then passing in all of your environment-specific variables with tfvars files. In this way, you can build all of your environments from the same place, switching between them by specifying a different tfvars file.
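As a sketch, a tfvars file for the devTest environment might then contain nothing more than the values those root variables expect (again, the variable names are just examples):

# tfvars/devTest.tfvars
environment_name = "devTest"
vm_count         = 2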
Note at this point that each environment needs its own state file, so that you can run Terraform for one environment without accidentally destroying another’s infrastructure. Backend blocks cannot interpolate variables directly, so declare a partial backend configuration and supply the environment-specific state file path when you initialise. For example:
terraform {
  backend "local" {
    # path deliberately omitted – it is passed in per environment at init time
  }
}
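The missing path can then be supplied when initialising each environment (the path shown here is purely an example):

terraform init -backend-config="path=path/to/devTest.tfstate"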
This is all well and good, but as your platform grows in size and complexity, as platforms so often do, you may find yourself maintaining tfvars files containing many, many values. In my opinion, the solution is to move as much of this data as possible into a module from which environment-specific values can be looked up. Some kind of…
Lookups module
When you know certain facts about your environments in advance (IP ranges, for example), it can be tempting, and sometimes even helpful, to hard-code these values in some way rather than pass them in with variable files. However, if this is not handled carefully it can become difficult to keep track of where everything is defined, so keep all of this data in one place and you will always know where to look.
One way to achieve this is by creating a module called, for example, “lookups” containing only outputs like the following:
output "ip_range" {
  value = {
    "devTest" = "192.168.0.0/30"
    "preProd" = "192.168.0.4/30"
    "Prod"    = "192.168.0.8/30"
  }
}
The above output could then be referenced from your environment module to obtain an IP range based on an environment name like so:
ip_range = "${lookup(module.lookups.ip_range, var.env_name)}"
Here, var.env_name should be set to one of the three keys shown in the output above.
It is important to note that the lookups module should not be used in place of a Terraform remote state lookup when referring to resources in your environment from outside of the environment module (for example, when adding a new VM to the environment network), as this can lead to some unpleasant behaviour from Terraform. Within the environment module itself, however, these values must be defined somewhere, and the lookups module can act as a single source of truth. It can also be used to store static values for external dependencies, such as subscription IDs or the IP addresses of outside services.
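For comparison, a remote state lookup from outside the environment module might look something like the sketch below. It assumes a local backend and an output named network_id exposed by the environment’s state, and uses the older 0.11-style syntax to match the examples above; newer Terraform versions read these values through an outputs attribute.

data "terraform_remote_state" "environment" {
  backend = "local"

  config {
    path = "path/to/devTest.tfstate"
  }
}

# for example, when attaching a new VM to the environment's network:
network_id = "${data.terraform_remote_state.environment.network_id}"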
As with many other technologies, Terraform can be used in many different ways to achieve the same goal, and these techniques are by no means a definitive approach to environment creation. That said, I have personally found these ideas very helpful, and I hope that you do too.