What Is Terraform? A Beginner's Guide to the Most Famous Infrastructure-as-Code Tool
Table of contents
- What is Infrastructure-as-Code?
- Key Concepts of Terraform
- Benefits of Using Terraform
- Getting Started with Terraform
- Installation and Setup
- Basic Syntax and Semantics - Our First Project
- Remote State Management
- Using Modules
- Module and Resource Outputs
- HCL and Expression Functions
- Meta Arguments
- Dynamic Blocks
- Managing Multiple Environments
- Testing and Validation
- Manually Removing or Importing Resources to Terraform's State
- Best Practices for Using Terraform
- Terraform vs. Other Infrastructure Management Tools
- Valuable Resources for Terraform
- Conclusion
- Frequently Asked Questions (FAQ)
- Related Reads
Infrastructure-as-Code (IaC) tools are everywhere. They are used for managing and provisioning all kinds of infrastructure resources. They allow you to define infrastructure as code, making it easier to manage, version control, and automate.
The article covers the fundamentals of Terraform, including key concepts, the benefits of using Terraform, and a comparison of Terraform with other infrastructure management tools.
The article also provides guidance on getting started with Terraform, including installation and setup, creating a first project, and advanced Terraform concepts such as using modules and managing multiple environments. Additionally, the article highlights best practices for using Terraform, including respecting key concepts, code structuring, naming conventions, and code styling.
The article concludes by emphasizing the importance of Terraform as a widely-used and battle-tested IaC tool for managing infrastructure resources in cloud computing environments.
What is Infrastructure-as-Code?
Before the rise of the cloud, the hardware your application ran on was, as the word says, "hard" to change.
This has changed now. Scaling applications either vertically (having a bigger server) or horizontally (adding more servers that run your application) can be done in the blink of an eye. You're also able to provision infrastructure all around the globe without any upfront costs. You don't need it anymore? Delete it in seconds and never worry about it again.
All of this led to the obvious necessity that those infrastructure resources need to be managed programmatically. And that's exactly where IaC tools step in.
They allow you to define every infrastructure resource as code, meaning that your repository now not only contains your application code, but also every infrastructure resource that's needed to run it.
Terraform is the most prominent and arguably the most flexible IaC tool out there.
If you're interested in reading more about the fundamentals of the history of IaC on AWS, have a look at one of our previous blog articles: Infrastructure as Code on AWS - An Introduction.
The Declarative and Imperative Approaches
IaC tools are either declarative or imperative.
Declarative: this approach solely defines the intended state of the infrastructure. Nothing indicates how this state is achieved; there is no list of involved steps. Terraform as well as CloudFormation are declarative tools. Declarative tools use a domain-specific language (DSL), often defined as YAML or JSON (CloudFormation) or, in Terraform's case, HCL.
Imperative: on the other hand, imperative approaches explicitly list the commands that are required to reach the final state of the infrastructure. Mostly Object-Oriented languages like TypeScript are used for the imperative approach.
One benefit of imperative approaches is that, if the tool supports your application's language, you don't need to learn another one to create your infrastructure. This also means that the application code can merge with the infrastructure code, which helps to leverage platform specifics. But it can also tightly couple your application to the cloud provider.
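To make the contrast concrete, here's a minimal sketch (the bucket name and commands are illustrative): the same S3 bucket, described declaratively in Terraform's HCL versus created imperatively, step by step, with the AWS CLI.

# Declarative (Terraform/HCL): describe only the desired end state;
# the tool figures out the steps.
resource "aws_s3_bucket" "assets" {
  bucket = "my-example-assets"
}

# Imperative (shell): spell out each step yourself, e.g.
#   aws s3api create-bucket --bucket my-example-assets
#   aws s3api put-bucket-tagging --bucket my-example-assets \
#     --tagging 'TagSet=[{Key=env,Value=dev}]'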
Key Concepts of Terraform
Terraform is arguably the most flexible IaC tool out there. To understand its inner workings and to use it properly, we need to learn about its key concepts first. Then we'll look at why its declarative style and its long list of supported providers bring many benefits. Lastly, we'll compare Terraform to other well-known tools.
Terraform Follows a Declarative Approach
We've learned before that Terraform follows a declarative approach, which means only the desired state of the infrastructure is defined. Terraform figures out how to reach that state.
Providers for Multi-Platform Support
Terraform does not only support AWS. It supports a large range of cloud providers (among them AWS, GCP, and Azure) and other services (like NewRelic) through provider plugins. This enables you to build a multi-cloud, or rather multi-platform, environment without having to use multiple tools.
Terraform Keeps Track of Your Infrastructure State
Every time you execute Terraform and it modifies your infrastructure by creating, updating, or deleting resources, it will save the current state.
This means it remembers which resources currently exist and how they're configured. This enables Terraform to determine the changes necessary to reach the desired state and therefore ensure consistency and accuracy.
This state can be tracked remotely, for example in an S3 bucket. With this, you can run Terraform from different places with the same results.
Modules to Create Abstractions and Blueprints
Terraform modules allow you to create blueprints for your infrastructure and encapsulate configurations. This allows the creation of complex infrastructure setups that are hidden behind an abstraction layer. Modules also enable you to reuse those setups in different places.
Previewing Upcoming Changes
Terraform provides you with two important commands: plan and apply.
plan - this will preview the changes that Terraform would execute. It helps to prevent unintended changes and allows for safer and more controlled updates. A previewed plan can be saved to local storage and executed afterward exactly as previewed (see the example after this list).
apply - this will execute all changes that Terraform indicates as necessary to achieve the desired state of the infrastructure.
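For example, a plan can be written to a file and later applied exactly as previewed:

# Save the execution plan to a local file
terraform plan -out=tfplan

# Apply exactly that plan; no re-evaluation happens in between
terraform apply tfplan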
Benefits of Using Terraform
Using Terraform comes with many benefits:
Multi-Platform Support - you're not bound to a single platform but can leverage multiple of them in a single place. This enables you to build robust, provider-independent solutions without a single point of failure.
Improved Scalability - Terraform can handle infrastructure deployments of any scale, making it a great fit for large-scale enterprises. Thanks to its flexibility and modules, new environments can be created quickly and existing ones can be scaled fast.
Reduced Risks - Terraform's ability to preview upcoming changes makes it less likely to execute unwanted changes and therefore introduce downtimes or infrastructure bugs. As with other IaC tools, defining infrastructure via code drastically eases the creation of identical environments.
Better Collaboration - Terraform can version control its state, making it easy to track change histories or restore previous states. Its modular design helps to create large-scale resource setups without forfeiting manageability.
Enhanced Automation - Terraform allows the automation of any infrastructure-related task, including updates of existing resources.
All of those benefits will ultimately result in increased productivity.
Getting Started with Terraform
Knowing the fundamentals is important. But knowledge gets lost quickly if you don't apply it. This is especially true for software development.
Installation and Setup
If you're running macOS you can quickly install Terraform via homebrew:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
For other operating systems, have a look at HashiCorp's installation guide for Terraform.
Run terraform --version to check that Terraform is properly installed.
> terraform --version
Terraform v1.3.9
on darwin_arm64
💡 Terraform allows for version pinning, both for Terraform itself and for the providers. Because of this, it's often helpful to install a Terraform version manager like tfenv that allows installing, switching between, and managing multiple versions.
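A typical tfenv workflow looks like this (the version number is just an example):

# Install and activate a specific Terraform version
tfenv install 1.3.9
tfenv use 1.3.9

# Or pin the version per project; tfenv picks this file up automatically
echo "1.3.9" > .terraform-version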
Basic Syntax and Semantics - Our First Project
Getting started with Terraform and AWS is very simple. In this example, we'll create a simple Node.js Lambda function.
As a precondition, we need two things:
- the AWS CLI has to be installed
- your credentials have to be configured
Verify this by running aws sts get-caller-identity. It should return your identity details, including your AWS account's unique 12-digit number.
> aws sts get-caller-identity
{
"UserId": "AIDAQJICDY7KWY752N5EI",
"Account": "012345678901",
"Arn": "arn:aws:iam::012345678901:user/awsfundamentals"
}
After this has worked successfully, let's create a new directory and our first Terraform file.
mkdir terraform-lambda-example
cd terraform-lambda-example
touch main.tf
Open main.tf in your favorite editor and add the following code.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "=4.57.1"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~>2.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
By this, we're defining that...
- we want to use the exact version 4.57.1 of the AWS Terraform provider.
- we want to use Terraform's archive provider with the latest 2.x version.
- we want to use us-east-1 as the region where we want to create our infrastructure resources.
Let's now create an empty Lambda handler function and package it directly into a ZIP file we can deploy later on.
mkdir dist
echo "exports.handler = async (event) => {};" > dist/handler.js
Preconditions are fulfilled now. We can now create the necessary infrastructure via Terraform.
resource "aws_iam_role" "role" {
name = "lambda-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
}
]
})
}
resource "aws_iam_role_policy_attachment" "attachment" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
role = aws_iam_role.role.name
}
data "archive_file" "dist" {
type = "zip"
source_dir = "dist"
output_path = "dist.zip"
}
resource "aws_lambda_function" "lambda" {
function_name = "my-function"
filename = data.archive_file.dist.output_path
source_code_hash = data.archive_file.dist.output_base64sha256
role = aws_iam_role.role.arn
handler = "index.handler"
runtime = "nodejs18.x"
}
What did we do here?
- we're creating an IAM role for our Lambda function.
- we're attaching the AWS-managed policy AWSLambdaBasicExecutionRole to our created role.
- we create a ZIP archive of our index file.
- we create a Lambda function that uses our previously created IAM role and deploys the code we put into our ZIP archive.
We're now already good to go. Let's apply our infrastructure! But before we do this, we need to initialize our providers via terraform init.
terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Finding latest version of hashicorp/archive...
- Installing hashicorp/aws v4.57.1...
- Installed hashicorp/aws v4.57.1 (signed by HashiCorp)
- Installing hashicorp/archive v2.3.0...
- Installed hashicorp/archive v2.3.0 (signed by HashiCorp)
[...]
You'll now notice that Terraform created a lock file that contains hashes of the used providers. This ensures that the configuration can only be executed with the specified versions: if providers were republished due to a compromise of the HashiCorp repository, the apply command would be rejected.
Let's have a look at the upcoming changes that Terraform will create by running terraform plan.
The output will look something like this:
terraform plan
# aws_iam_role.role will be created
+ resource "aws_iam_role" "role" {...}
# aws_iam_role_policy_attachment.attachment will be created
+ resource "aws_iam_role_policy_attachment" "attachment" {...}
# aws_lambda_function.lambda will be created
+ resource "aws_lambda_function" "lambda" {...}
Plan: 3 to add, 0 to change, 0 to destroy.
The + indicator shows that those resources will be created by Terraform. Let's now apply our changes via terraform apply. Terraform will preview the changes again and will then prompt you before applying them.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_iam_role.role: Creating...
aws_iam_role.role: Creation complete after 1s [id=lambda-role]
aws_iam_role_policy_attachment.attachment: Creating...
aws_lambda_function.lambda: Creating...
aws_iam_role_policy_attachment.attachment: Creation complete after 1s [id=lambda-role]
aws_lambda_function.lambda: Still creating... [10s elapsed]
aws_lambda_function.lambda: Creation complete after 16s [id=my-function]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
That's it. Our function and role were created and are now available at AWS.
aws lambda list-functions \
--query 'Functions[].{Name:FunctionName}' \
--output table --region us-east-1
-----------------
| ListFunctions |
+---------------+
| Name |
+---------------+
| my-function |
+---------------+
As we've successfully applied our infrastructure, we'll now find terraform.tfstate in our root directory. This file contains the current state of the infrastructure.
Let's complete this small tutorial by removing all of our resources by running Terraform's destroy command.
terraform destroy
Plan: 0 to add, 0 to change, 3 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_iam_role_policy_attachment.attachment: Destroying... [id=lambda-role]
aws_lambda_function.lambda: Destroying... [id=my-function]
aws_iam_role_policy_attachment.attachment: Destruction complete after 0s
aws_lambda_function.lambda: Destruction complete after 0s
aws_iam_role.role: Destroying... [id=lambda-role]
aws_iam_role.role: Destruction complete after 1s
Destroy complete! Resources: 3 destroyed.
That's it. We went through the full cycle.
After we've gone through the very fundamentals, let's visit some more advanced concepts in the following sections.
Remote State Management
Our example has one major downside: it's not ready for collaboration, as we didn't push our state to any remote location.
Terraform supports remote states. S3 buckets, for example, are commonly used for this. Terraform will then sync the state with the bucket.
terraform {
  # [...]
  backend "s3" {
    bucket = "my-state-bucket"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}
If versioning is enabled on the bucket, every change of the state is kept as a version, so you can trace back changes or recover from broken states more easily.
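As a sketch, assuming the state bucket itself is managed in a separate Terraform composition (using the AWS provider v4 style from our example), enabling versioning looks like this:

resource "aws_s3_bucket" "state" {
  bucket = "my-state-bucket"
}

# Keep every version of terraform.tfstate so broken states can be recovered
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}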
Using Modules
Terraform modules allow you to encapsulate and reuse infrastructure configurations, which makes it easier to manage complex environments. This also helps to share and therefore respect best practices.
Have a look at the graphic below: we have two independent modules that define blueprints for...
a frontend distribution, built via CloudFront, IAM, and S3
an ECS-based service using Fargate, IAM, and an Elastic Load Balancer.
For simplicity, let's focus on a smaller example: we only want a generic blueprint for a module that creates a public S3 bucket.
Let's create a directory modules/public-s3 and add a new file s3.tf in there with the following configuration code:
resource "aws_s3_bucket" "example_bucket" {
bucket = var.bucket_name
acl = "private"
}
resource "aws_s3_bucket_policy" "example_bucket_policy" {
bucket = aws_s3_bucket.example_bucket.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "AllowPublicRead"
Effect = "Allow"
Principal = "*"
Action = "s3:GetObject"
Resource = "${aws_s3_bucket.example_bucket.arn}/*"
}
]
})
}
This example module creates an S3 bucket with a private ACL and a bucket policy that allows public read access to objects in the bucket.
What's to notice here: the bucket name references a variable we didn't define yet: bucket_name.
Let's define this variable in a new file variables.tf:
variable "bucket_name" {
type = string
description = "The name of the S3 bucket to create."
}
Now we've got our first usable blueprint. Let's make use of it in our main.tf file.
# [...]

module "example" {
  source      = "./modules/public-s3"
  bucket_name = "my-public-bucket"
}
This example uses the public-s3 module we created earlier, passing in a value for the bucket_name variable.
Module and Resource Outputs
Each cloud platform has its own schema for uniquely identifying resources. Those identifiers are often used to couple components.
AWS for example uses Amazon Resource Names (ARNs). So if you, for example, want to grant access to a DynamoDB table respecting the least privilege principle (only grant permissions that are necessary so that the user or service can fulfill its tasks), this corresponding ARN needs to be placed in the corresponding IAM policy.
An example of such an ARN looks like the following:
arn:aws:dynamodb:us-west-2:123456789012:table/my-table
This ARN contains the name of the table, its corresponding region, and your account's unique 12-digit identifier. We don't want to hardcode it into any other module that depends on this table.
That's why Terraform offers outputs. They let you reference attributes of a resource, like its ARN, that are only known after the resource has been created, without hardcoding them anywhere.
Terraform will then automatically build a dependency graph, so it knows which resources have to be created first because they are referenced by other resources.
Let's have a look at the example:
The IAM policy references the ARN of the DynamoDB table.
The IAM role relies on the IAM policy.
The Lambda function needs to have the IAM role attached.
So our dependency graph starts at the Lambda function and ends with the DynamoDB table. The table needs to be created first, and the Lambda function last.
Terraform will be automatically aware of this.
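Expressed in Terraform, that chain could look like the following sketch (resource names are illustrative):

resource "aws_dynamodb_table" "my_table" {
  name         = "my-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

# Reference the table's ARN instead of hardcoding it
resource "aws_iam_policy" "table_access" {
  name = "my-table-access"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem"]
        Resource = aws_dynamodb_table.my_table.arn
      }
    ]
  })
}

# Expose the ARN as a module output so other compositions can consume it
output "table_arn" {
  value = aws_dynamodb_table.my_table.arn
}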
HCL and Expression Functions
Terraform files are written in HashiCorp's configuration language, HCL (HashiCorp Configuration Language). HCL includes expression functions that offer powerful features to dynamically create resources and values, manipulate data structures, and perform simple or complex calculations in your Terraform code.
Included expression functions are, for example:
- regex(pattern, string): applies a regular expression to the given string and returns the matching substrings (combine it with can() to get a true/false check).
- join(separator, list): returns a string that is the concatenation of the elements in the given list, separated by the specified separator.
You can find the whole list of expressions in the Terraform documentation.
Expression functions help to make your Terraform code more dynamic and flexible. This reduces the amount of repetitive or manual work.
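A small sketch of how such functions compose inside a locals block (all names are illustrative):

locals {
  environments = ["dev", "staging", "prod"]

  # join: "dev,staging,prod"
  env_csv = join(",", local.environments)

  # can + regex: true only if the string matches the pattern
  valid_name = can(regex("^[a-z0-9-]+$", "my-bucket"))

  # format: "bucket-dev"
  first_bucket = format("bucket-%s", local.environments[0])
}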
Meta Arguments
Generally, a resource block in Terraform results in a single infrastructure resource. But sometimes we want to create several similar objects without defining them multiple times.
Meta arguments like for_each can help with that. Let's have a look at our previous example of the public-s3 bucket module and extend our variables.tf.
variable "bucket_names" {
type = set(string)
default = []
}
Instead of providing a single string, we're now expecting a set of strings. That will help us create not just one bucket but multiple. Let's extend our aws_s3_bucket next:
resource "aws_s3_bucket" "example_bucket" {
for_each = var.bucket_names
bucket = each.value
acl = "private"
}
We're now looping through all the values of our string set via for_each and providing each value as the corresponding bucket name via each.value.
module "example" {
source = "./modules/public-s3"
bucket_name = toset(["first-bucket","second-bucket"])
}
Finally, we pass the values in our main.tf. Terraform will then take care of creating multiple buckets.
The for_each meta argument isn't limited to plain strings: we can also provide a map of complex objects that carry more values than just a name, as the sketch below shows.
Looking back at our earlier example, that's how we're creating multiple ECS-based microservices.
Dynamic Blocks
With for_each we can dynamically create multiple resources. This is also possible for blocks within our resources by combining for_each with dynamic blocks.
For example, we can attach multiple EBS volumes to an EC2 instance based on a variable:
variable "ebs_volumes" {
type = list(map(string))
default = [
{
size = "100"
type = "io1"
iops = "500"
}
]
}
resource "aws_instance" "ec2_instance" {
# [...]
dynamic "block_device" {
for_each = var.ebs_volumes
content {
device_name = "/dev/sdf"
volume_size = block_device.value.size
volume_type = block_device.value.type
iops = lookup(block_device.value, "iops", null)
}
}
}
We're now providing a list of complex objects (maps in Terraform) to our EC2 instance configuration. We loop over all those values via for_each and pass each field's value to the EBS block configuration.
We also make use of the lookup function, which either extracts the value for a given key (in our case, the IOPS via iops) or returns a fallback value (null).
If we passed an empty list, the dynamic block would produce nothing and therefore wouldn't add any EBS volume.
Managing Multiple Environments
Terraform allows you to easily manage multiple environments, e.g. development, staging, and production, thanks to its module approach.
You can either use different workspaces or separate state files. This also helps to deploy your environments into different AWS regions without having to configure those regions individually. Each region can receive an exact duplicate of the main region.
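A quick sketch of the workspace approach (the staging.tfvars file is a hypothetical per-environment variable file):

# Create a new workspace; Terraform switches to it and keeps a separate state per workspace
terraform workspace new staging

# List all workspaces and show which one is active
terraform workspace list

# Apply with environment-specific variables; inside the configuration,
# terraform.workspace holds the active name, e.g. name = "my-app-${terraform.workspace}"
terraform apply -var-file="staging.tfvars"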
Testing and Validation
Terraform comes with helpers to validate your configurations upfront. This includes plugins for syntax validation and linting in your IDE (there are VSCode and IntelliJ plugins, to name the most famous ones), as well as additional tools for automated testing.
Advanced tools like Infracost go even further and provide you with additional information like the expected monthly costs of the desired infrastructure. This can be included in your GitHub Actions to preview how much additional cost a given pull request will introduce.
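Two of the built-in helpers can run locally or in CI before any change is applied:

# Fail if any file is not formatted canonically (non-zero exit code in CI)
terraform fmt -check -recursive

# Check syntax and internal consistency without touching any infrastructure
terraform validate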
Manually Removing or Importing Resources to Terraform's State
Sometimes, there's already existing infrastructure that was created manually. Regardless of the reason for this, we may want to or have to keep it.
Nevertheless, going forward we don't want to manage this infrastructure via the AWS console, but via Terraform.
This can be done by importing the existing resource into the Terraform state.
To import an existing S3 bucket into your Terraform state, you can use the terraform import command. It allows you to import existing resources into your Terraform state so that Terraform can manage them going forward.
Let's do this via another example featuring S3 buckets.
1. Create a new bucket manually via the AWS Console or via the AWS CLI.
2. As we've already got a resource definition for our S3 bucket from the for_each example, we don't need to configure an additional resource; we only extend our set of strings with the name of our new bucket. Generally, you need to define the resource upfront in Terraform so you can reference it in the import.
3. Next, run the terraform import command to import the existing S3 bucket into your Terraform state. It requires two arguments:
- the resource address, in our case module.example.aws_s3_bucket.example_bucket["my-third-bucket"]
- the ID of the existing resource, in this case the name of the S3 bucket
terraform import 'module.example.aws_s3_bucket.example_bucket["my-third-bucket"]' my-third-bucket
4. After importing the resource, run the terraform plan command to verify that Terraform has imported the resource correctly.
5. Finally, run the terraform apply command to apply any remaining changes to your infrastructure.
After we've imported the bucket into our state, we can continue to manage the bucket solely via code.
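The opposite direction, which the section title also mentions, works via terraform state rm: Terraform forgets the resource but leaves it untouched at AWS. A sketch with our bucket from above:

# Remove the bucket from the state only; the bucket itself keeps existing
terraform state rm 'module.example.aws_s3_bucket.example_bucket["my-third-bucket"]'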
Best Practices for Using Terraform
A powerful tool always comes with best practices to get the most out of it. So it is with Terraform. We'll explore the most important ones so your infrastructure setup stays maintainable now and in the future.
Using Version Constraints
In Terraform, you can specify version constraints for your provider plugins and modules. These constraints are used to ensure that the correct version of the plugin or module is used when you apply your Terraform configuration.
Among them are:
- = (equals sign): this pin ensures that only the specified version of the provider or module is used. For example, =2.1.0 means that only version 2.1.0 is allowed.
- >= (greater than or equal to): this allows any version that is equal to or greater than the specified version. For example, >=2.1.0 means that version 2.1.0 and any later versions are allowed.
- <= (less than or equal to): this allows any version that is equal to or less than the specified version. For example, <=2.1.0 means that version 2.1.0 and any earlier versions are allowed.
- ~> (pessimistic constraint): this allows only the rightmost version component to increment. For example, ~>2.1.0 allows 2.1.1 and 2.1.9, but not 2.2.0. We used this earlier for our archive provider (~>2.0).
It's best practice to pin your Terraform version and providers so that you don't implicitly run into major changes. Since Terraform manages your infrastructure, automated continuous deployment and delivery pipelines can introduce any change the underlying IAM policies allow, so an unexpected provider upgrade can have far-reaching consequences.
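Put together, pinning Terraform itself and a provider could look like this (the version numbers are just examples):

terraform {
  # Allow only 1.3.x releases of Terraform itself
  required_version = "~> 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Exact pin: upgrades only happen deliberately
      version = "=4.57.1"
    }
  }
}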
Conventions for Code Structuring and Resource Naming
Terraform offers a lot of flexibility regarding the structuring of your code. With an ever-growing project and no predefined set of rules that everybody respects, a project can quickly become a burden to maintain and extend.
There's no universal advice on how to do things, but it's important to rely on a common set of rules that the whole team respects.
As we've seen with the previous examples, generally good advice is to not put everything into main.tf but to make use of modules.
- main.tf - uses the blueprints that are defined in your modules.
- variables.tf - declares the variables you use in your modules. The values are passed from your main.tf file.
- locals.tf - contains a fixed set of local values that are used across the modules.
More good advice:
Keep the number of managed resources per state file small rather than large - an enterprise-scale infrastructure ecosystem in a single Terraform composition can take a long time to apply, and many parallel changes can cause a lot of side effects. This means slicing your Terraform compositions so that infrastructure components that don't depend on each other are separated.
Keep modules as generic as possible - this ensures high reusability and less later pain if there is a need for refactoring and adaptions. Anything that's hardcoded in the module will create unavoidable changes in every composition that uses the blueprint.
Strict naming convention - stick to a solid naming convention that benefits readability for humans.
Don't hardcode infrastructure outputs - don't hardcode outputs like resource identifiers but use Terraform's output feature to reference them dynamically. That not only prevents human errors but also prevents conflicts due to concurrent creation or modification of resources that depend on each other, as Terraform will be aware of that beforehand.
This is a non-exhaustive list, but a good starting point.
Terraform vs. Other Infrastructure Management Tools
Terraform is not the only famous player in the area of IaC tools. There are many more. Among them are AWS CloudFormation and the Cloud Development Kit (CDK), as well as Pulumi and Serverless Framework.
AWS CloudFormation
AWS CloudFormation is the AWS-native IaC tool developed by Amazon. Resources are defined via YAML files and the resulting resources are managed in a CloudFormation Stack.
---
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyS3Bucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: "my-s3-bucket"
      AccessControl: "Private"
This stack keeps track of the created resources (and therefore the current state) and the necessary changes. CloudFormation can perform a rollback if conflicts arise while a stack update is in progress. This means you're generally not left with an update that only applied a subset of all changes and aborted in the middle of the process.
To create the CloudFormation stack for the template above, you only need the AWS CLI installed. Afterward, you can run:
aws cloudformation create-stack --stack-name sample --template-body file://template.yml
Like Terraform, AWS CloudFormation follows a declarative approach by only defining the desired state of the infrastructure.
AWS Cloud Development Kit
The AWS Cloud Development Kit (CDK) is an abstraction layer on top of AWS CloudFormation. It supports multiple prominent programming languages, including TypeScript, Python, Java, and C#.
Due to its object-oriented approach, meaning that you can use your well-known and loved programming language, it's easier (or at least feels easier) to get started.
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

const app = new cdk.App();
// Resources must live inside a stack
const stack = new cdk.Stack(app, 'MyStack');

const myS3Bucket = new s3.Bucket(stack, 'MyS3Bucket', {
  bucketName: 'my-s3-bucket',
  accessControl: s3.BucketAccessControl.PRIVATE
});

app.synth();
CDK also offers a lot of high-level abstractions (Level-2 and Level-3 constructs) that encapsulate one or multiple infrastructure components and can simply be used without knowing too much about their internal structure.
By running cdk deploy, CDK will deploy your infrastructure to your AWS account, using AWS CloudFormation under the hood.
Pulumi
Pulumi is an open-source IaC platform that also allows developers to follow an object-oriented approach and use their favorite programming language.
Contrary to CDK, Pulumi supports multiple cloud providers, including not only AWS but also Azure and Google Cloud Platform.
import * as pulumi from '@pulumi/pulumi';
import * as aws from '@pulumi/aws';

const bucket = new aws.s3.Bucket('my-s3-bucket', {
  acl: 'private',
  bucket: 'my-s3-bucket',
});

export const bucketName = bucket.bucket;
To deploy this infrastructure, use the Pulumi CLI and run pulumi up.
As with Terraform, Pulumi also provides features like infrastructure testing, a preview mode, and rollbacks, and it can keep your state remote.
Serverless Framework
Serverless Framework (SF) is another open-source framework for declarative IaC that is widely known and supports popular cloud providers. SF offers a very high abstraction level and encapsulates a lot of otherwise tedious tasks into very few lines of configuration.
Serverless Framework, as the name already suggests, focuses on serverless architectures. For AWS, this means architectures that are powered by AWS Lambda. It has great support for closely related components like AWS API Gateway (including all of its types: REST, HTTP, and WebSocket).
For AWS, SF also uses CloudFormation under the hood to manage the infrastructure.
The template files can be created via TypeScript or YAML.
service: my-service

provider:
  name: aws
  runtime: nodejs18.x

functions:
  example:
    handler: index.handler
    events:
      - http:
          path: /{proxy+}
          method: ANY
The example above accomplishes a lot with just those few lines: it creates and deploys a Lambda function that resides in index.js, creates an API Gateway, and exposes all routes for all HTTP methods to the Lambda integration.
Valuable Resources for Terraform
Terraform is not just a single tool, but a huge, community-driven ecosystem. This includes, among others:
private and public registries for module blueprints
additional tools that enhance Terraform even further
a nearly endless list of community resources
... and much more.
You can find a great curated list of resources on Github at awesome-terraform.
Additionally, there are many other great blog posts out there that offer hands-on guidance for Terraform, especially for beginners. One well-written example is Spacelift's beginner guide for creating IAM users and policies.
Conclusion
In conclusion, Terraform is a powerful Infrastructure-as-Code tool that simplifies the process of managing infrastructure resources. By using Terraform, you can define your infrastructure as code, automate the process of creating and updating resources, and version control your infrastructure configurations.
Terraform provides many benefits, including increased productivity, better collaboration, enhanced automation, reduced errors and risk, multi-cloud support, and improved scalability.
In addition to the basic features, Terraform also has many advanced concepts, including modules, managing multiple environments, remote state management, expression functions, testing, and validation.
Overall, Terraform provides a versatile and flexible solution for managing infrastructure as code, and it is a must-have tool for any organization that wants to streamline its infrastructure management process and increase efficiency.
Frequently Asked Questions (FAQ)
What is Terraform?
Terraform is an open-source infrastructure-as-code tool that allows you to define and manage your infrastructure in a declarative way.

What are the benefits of using Terraform?
Terraform provides several benefits, including the ability to manage infrastructure in a repeatable and consistent way, to version control infrastructure changes, and to collaborate with team members on infrastructure changes.

What kind of infrastructure can I manage with Terraform?
Terraform can manage a wide range of infrastructure, including cloud resources like virtual machines, databases, and load balancers, as well as on-premises resources like servers and networking equipment.

How does Terraform work?
Terraform works by defining infrastructure in a configuration file, which is then used to create or modify resources in the target environment. Terraform uses a provider model to support different cloud providers and infrastructure technologies.

Is Terraform difficult to learn?
Terraform has a learning curve, but it is generally considered approachable for beginners. The official documentation and community resources can help you get started.

What are some best practices for using Terraform?
Best practices for using Terraform include using version control, testing infrastructure changes before applying them, using modules to organize and reuse infrastructure code, and following security and compliance best practices.
Related Reads
If you found this article on what Terraform is interesting, you might also enjoy these: