Introduction to Terraform

May 29, 2018 in UTILITY
aws terraform
10 min read

Up until now I have been using the AWS-provided cli to manage their resources, but what if I also want to use Google Cloud? I would need to download their tools as well as learn their configuration syntax. Terraform is a fantastic tool that gives us a consistent configuration syntax for managing many different providers. There are around 200 providers, roughly 80 of which are supported directly by HashiCorp. Of course, you still need to understand all of the provider-specific terminology: EC2 for AWS, instances for Google Cloud.

I have worked almost exclusively with AWS so I will be using AWS for my examples on how to use Terraform.

Requirements

Give the docs linked above a read if you haven’t already and you’ll be better off.

Steps I’m going to cover

  1. “Install” Terraform
  2. Syntax
  3. Terraform files
  4. Terraform commands

Let’s roll

“Install” Terraform

The reason why Install is in quotes is that Terraform is just a binary that needs to be added to our path. Visit The Downloads Page and download the zip file for your OS.

wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip


Unzip it

unzip terraform_0.11.7_linux_amd64.zip

Now there will be an executable called ‘terraform’ in the directory.

Depending on which directory you are in you likely want to move this into a bin folder that is in your path. I am going to put mine in /usr/local/bin/ and then test to make sure terraform is working.

sudo mv terraform /usr/local/bin/

terraform -v
Terraform v0.11.7

While working with AWS, Terraform gathers credentials in all of the same ways that the aws cli does. If you've already set up your AWS credentials then you are good to go. If not, click on the AWS Credentials link in the requirements.
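For reference, the same environment variables and shared credentials file that the aws cli reads will work here too. A minimal sketch (the profile name and key values are placeholders):

export AWS_PROFILE=default
# or export the keys directly:
export AWS_ACCESS_KEY_ID=<your access key id>
export AWS_SECRET_ACCESS_KEY=<your secret access key>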

Syntax

Like programming languages, Terraform has strings, lists, and maps. These data types are commonly used as the values for keys, ex:

string = "value"

list = ["value1", "value2", "value3"]

map = {
  "key1" = "value1"
  "key2" = "value2"
  "key3" = "value3"
}

The basic format for most of the terraform resources follows this pattern:

argument1 "argument2" "argument3" {
  key1 = "value1"
}

ex:

provider "aws" {
  region = "ca-central-1"
}

resource "aws_vpc" "dev_vpc" {
  cidr_block = "10.0.0.0/24"
}

argument1: Describes which type of Terraform block to use. Valid options are:

  • provider: Does not use argument3. Used to describe which provider terraform should use
  • resource: Used to create a resource within a specified provider
  • data: Used to gather information on a resource within a provider, read-only

argument2: Used to specify the provider or resource type for the block.

  • provider: “aws” OR “google” OR “github”
  • resource: “aws_instance” OR “google_compute_instance” OR “github_repository”
  • data: “aws_availability_zones” OR “google_compute_image” OR “github_ip_ranges”

argument3: Unique identifier for the resource. Used for outputs or resource interpolation (more on this in a different post).

key1: Used to specify which key we are configuring on the resource. Some keys are required, some are optional.

value1: The value for key1.

If you use an AWS resource type, for example, you need a corresponding aws provider, so you can't set up an aws_vpc if you have not previously configured an AWS provider. If you only have a single provider of a given type, it will be used by default for all of the associated resources. You can have multiple providers of the same type by naming them (outside of the scope of this post). You can also use multiple different providers, say AWS and Google Cloud, in the same deployment: maybe you have your DNS on Route53, but you are deploying your instances on Google.
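That mixed deployment could look something like the sketch below. This is only an illustration: the project ID, hosted zone ID, and IP address are hypothetical placeholders, and the resource arguments are trimmed to the minimum.

provider "aws" {
  region = "ca-central-1"
}

provider "google" {
  project = "my-project"                   # hypothetical project ID
  region  = "northamerica-northeast1"
}

# The instance lives in Google Cloud...
resource "google_compute_instance" "web" {
  name         = "web"
  machine_type = "f1-micro"
  zone         = "northamerica-northeast1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
    access_config {}                       # assigns a public IP
  }
}

# ...while its DNS record lives in Route53 on AWS.
resource "aws_route53_record" "web" {
  zone_id = "Z1234567890ABC"               # hypothetical hosted zone ID
  name    = "web.example.com"
  type    = "A"
  ttl     = "300"
  records = ["203.0.113.10"]               # placeholder IP for illustration
}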

This is the very basic syntax that you should know. You can also perform interpolation on your resources (outside of the scope of this post).

Terraform files

We should now have a grasp of the basic concepts of Terraform, but we can't actually deploy anything yet. We need to put the above code into a file. I like to put provider and state information into a file called main.tf. One of the nice things about Terraform is that it reads all .tf files in the current directory, so you can logically separate your code but only need to run plan once, and you can read information from (interpolate) resources in the other files. Because it concatenates all of the .tf files, you need to ensure that your resource unique identifiers are, well, unique.
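For example, a layout like this (hypothetical file names) behaves exactly the same as one big file, since Terraform concatenates everything in the directory:

.
├── main.tf       # provider and state configuration
├── vpc.tf        # networking resources
└── instances.tf  # compute resources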

As I said, I am going to put the code provided above in a file called main.tf. You can download it here if you haven't already to follow along.

main.tf

provider "aws" {
  region = "ca-central-1"
}

resource "aws_vpc" "dev_vpc" {
  cidr_block = "10.0.0.0/24"
}

Terraform commands

Now that we have the above code, we need to run terraform init. What this command does is look in your .tf files, find which providers you are using, and download them to a folder called .terraform.

If you don't run terraform init you'll see something like:

1 error(s) occurred:

* provider.aws: no suitable version installed
  version requirements: "(any version)"
  versions installed: none

terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (1.20.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 1.20"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
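Following that advice, pinning the provider version is a one-line addition to the provider block. A sketch using the constraint suggested in the output above:

provider "aws" {
  region  = "ca-central-1"
  version = "~> 1.20"   # any 1.x release from 1.20 up, but not 2.0
}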

The next command that we run is terraform plan. What this does is parse through all of the .tf files, make sure that variables (we haven't explored these yet) are defined and the syntax is correct, and, if you have previously deployed infrastructure, query it to see what changes, if any, need to be made. Finally it will show you what changes it will be performing.

terraform plan

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_vpc.dev_vpc
      id:                               <computed>
      assign_generated_ipv6_cidr_block: "false"
      cidr_block:                       "10.0.0.0/24"
      default_network_acl_id:           <computed>
      default_route_table_id:           <computed>
      default_security_group_id:        <computed>
      dhcp_options_id:                  <computed>
      enable_classiclink:               <computed>
      enable_classiclink_dns_support:   <computed>
      enable_dns_hostnames:             <computed>
      enable_dns_support:               "true"
      instance_tenancy:                 <computed>
      ipv6_association_id:              <computed>
      ipv6_cidr_block:                  <computed>
      main_route_table_id:              <computed>

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Notice the little message at the bottom; we'll go over plan files later. For now the important thing to note is the + symbol beside our vpc, which means we are adding a resource. The other options are ~, which means modify in place; -, which means destroy; and -/+, which means destroy and then create the same resource. Some resources can be modified in place while others need to be destroyed and re-created. Because of this, it is imperative that you pay attention to the plan.

Since we are only adding a vpc I am going to move on to actually deploying this with

terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_vpc.dev_vpc
      id:                               <computed>
      assign_generated_ipv6_cidr_block: "false"
      cidr_block:                       "10.0.0.0/24"
      default_network_acl_id:           <computed>
      default_route_table_id:           <computed>
      default_security_group_id:        <computed>
      dhcp_options_id:                  <computed>
      enable_classiclink:               <computed>
      enable_classiclink_dns_support:   <computed>
      enable_dns_hostnames:             <computed>
      enable_dns_support:               "true"
      instance_tenancy:                 <computed>
      ipv6_association_id:              <computed>
      ipv6_cidr_block:                  <computed>
      main_route_table_id:              <computed>

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_vpc.dev_vpc: Creating...
  assign_generated_ipv6_cidr_block: "" => "false"
  cidr_block:                       "" => "10.0.0.0/24"
  default_network_acl_id:           "" => "<computed>"
  default_route_table_id:           "" => "<computed>"
  default_security_group_id:        "" => "<computed>"
  dhcp_options_id:                  "" => "<computed>"
  enable_classiclink:               "" => "<computed>"
  enable_classiclink_dns_support:   "" => "<computed>"
  enable_dns_hostnames:             "" => "<computed>"
  enable_dns_support:               "" => "true"
  instance_tenancy:                 "" => "<computed>"
  ipv6_association_id:              "" => "<computed>"
  ipv6_cidr_block:                  "" => "<computed>"
  main_route_table_id:              "" => "<computed>"
aws_vpc.dev_vpc: Creation complete after 9s (ID: vpc-e717ba8f)

On newer versions of Terraform, apply performs a plan again and then has you confirm that you want to deploy it. I went ahead and deployed it, and we can see that it created a vpc for us and gave us the vpc ID in the console output.

In Terraform, if you want to do multi-line comments you wrap the lines in /* and */. In the example file that you have already downloaded, comment out the first resource and uncomment the 'Update in place' resource. What I have done is update the existing vpc to have a tag "Name" with a value of "dev_vpc". Since you can add, modify, or remove tags at will, it updates the resource in place, denoted by the ~.
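If you don't have the example file handy, the two blocks presumably look something like this (a sketch reconstructed from the plan output below, in 0.11-era syntax):

/*
resource "aws_vpc" "dev_vpc" {
  cidr_block = "10.0.0.0/24"
}
*/

# 'Update in place'
resource "aws_vpc" "dev_vpc" {
  cidr_block = "10.0.0.0/24"

  tags {
    Name = "dev_vpc"
  }
}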

terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_vpc.dev_vpc
      tags.%:    "0" => "1"
      tags.Name: "" => "dev_vpc"

Plan: 0 to add, 1 to change, 0 to destroy.

I am not actually going to apply the change, as this is just to show the differences in the plan. Now make sure the first 2 'aws_vpc' resource blocks are commented out and uncomment 'Destroy and re-create'. I changed the cidr of the vpc, which cannot be updated in place, so Terraform has to first destroy the vpc and then re-create it with the new cidr.
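Again as a sketch of what that block presumably contains (the new cidr matches the plan output below):

# 'Destroy and re-create'
resource "aws_vpc" "dev_vpc" {
  cidr_block = "10.10.0.0/16"

  tags {
    Name = "dev_vpc"
  }
}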

terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ aws_vpc.dev_vpc (new resource required)
      id:                               "vpc-e717ba8f" => <computed> (forces new resource)
      assign_generated_ipv6_cidr_block: "false" => "false"
      cidr_block:                       "10.0.0.0/24" => "10.10.0.0/16" (forces new resource)
      default_network_acl_id:           "acl-bda92ad5" => <computed>
      default_route_table_id:           "rtb-219d0049" => <computed>
      default_security_group_id:        "sg-eb288080" => <computed>
      dhcp_options_id:                  "dopt-a7448bce" => <computed>
      enable_classiclink:               "" => <computed>
      enable_classiclink_dns_support:   "" => <computed>
      enable_dns_hostnames:             "false" => <computed>
      enable_dns_support:               "true" => "true"
      instance_tenancy:                 "default" => <computed>
      ipv6_association_id:              "" => <computed>
      ipv6_cidr_block:                  "" => <computed>
      main_route_table_id:              "rtb-219d0049" => <computed>
      tags.%:                           "0" => "1"
      tags.Name:                        "" => "dev_vpc"

Plan: 1 to add, 0 to change, 1 to destroy.

If you were to apply this plan, it would destroy the vpc before re-creating it. If you have other resources in this vpc, chances are Terraform won't actually be able to delete it, but you need to be careful and plan out your networking ahead of time for production networks.

I am just going to mention that after you did the initial apply, there is now a file called terraform.tfstate in your folder, which is how Terraform keeps track of the resources it has created.
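If you are curious what it is tracking, terraform state list shows the resources recorded in that file; with our single vpc it should print something like:

terraform state list
aws_vpc.dev_vpc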

If you are new to Terraform this is a fair bit of information to absorb so go through it all another time until you have these basic concepts down.
