I have recently had the pleasure of getting more intimate with Terraform, the go-to tool for reining in your cloud infrastructure if you want to be pseudo-independent of a specific cloud provider. My main encounters have been during office hours with AWS as the prime target, an interesting beast in itself since I had previously only used Cloudformation when spinning up serverless projects.

At work I have the “benefit” of being presented with an existing codebase built with Terraform. It being quite a large project, things easily get confusing when you start getting into multi-tier cloud architecture and have to map out where all the different dependencies between services are. Quite the mouthful, one could say.

Usually I learn best by getting my hands dirty, which led me to take control of my own environments at Digitalocean, my favourite service for getting small projects up and running and for stimulating my own ops side for around five years now.

## What is Terraform?

In essence (to avoid going down a rabbit hole), it’s a tool with which you can take control of your cloud environments by writing code. Writing code is something at least I’m comfortable with, so that’s a good thing! It also allows you to keep track of the environment state in different ways, which makes it easier and safer to collaborate within teams, both local and remote. Once a state is achieved, any further change we make gets presented as a plan of what the state will look like after the change is applied. Neat, right?
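To give a small, hypothetical taste of what that code looks like (the resource name and values here are made up for illustration, not part of anything we build later), a server definition in Terraform’s HCL is just a declarative block describing the desired state:

```hcl
# A hypothetical droplet definition: Terraform compares this desired
# state against what actually exists and presents a plan of the
# differences before changing anything.
resource "digitalocean_droplet" "example" {
  image  = "ubuntu-18-04-x64"
  name   = "example"
  region = "lon1"
  size   = "s-1vcpu-1gb"
}
```

Nothing happens when you write this; only planning and applying touches real infrastructure, which is exactly what makes collaboration safer.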

## Getting started

These steps assume that there are infrastructure parts already present in our Digitalocean account that need to be imported. If this weren’t the case, we could do exactly the same things but skip any import steps and jump straight to the planning and applying stages.

### Bootstrapping the project

It feels good to keep these things separate from our application codebases, so we will create an infrastructure repository with git to contain our Terraform code.

mkdir infrastructure && cd infrastructure
git init


I like to have a clear structure, so having a subfolder to work in for Digitalocean as a cloud provider seems like a neat thing.

mkdir digitalocean && cd digitalocean


Here we create a file called main.tf and put the following content in it:

variable "do_token" {}

provider "digitalocean" {
  token = "${var.do_token}"
}

The do_token variable declaration lets us inject our access token when running our Terraform code, which in turn allows Terraform to make all the needed changes. The rest simply configures everything Terraform needs to communicate with Digitalocean; you could see it as setting up a client.

Run the following, which will download and install whatever Terraform needs to use the Digitalocean provider:

terraform init

While we’re at it, let’s create a file called terraform.tfvars and add the following content:

do_token = "your access token from digitalocean"

*This is a helper file which Terraform loads by itself, mapping the defined values to the expected input variables whenever it runs; treat it as you would a dotenv file in a regular project. Version control an example file with all fields empty and gitignore the real .tfvars file for security reasons.*

Now we are ready for the interesting stuff!

### Mapping out what we have

(Mostly doctl will be used for retrieving information about our current infrastructure, so hop over to the doctl GitHub page and install it!)

First we start with the largest parts, namely droplets. To see the ones we have, a quick doctl command can be issued:

doctl compute droplet list
ID         Name    Public IPv4    Private IPv4    Public IPv6    Memory    VCPUs    Disk    Region    Image               Status    Tags
1234578    www1    xx.xx.xx.xx    xx.xx.xx.xx                    1024      1        25      lon1      Ubuntu 16.10 x64    active
1234579    www2    xx.xx.xx.xx    xx.xx.xx.xx                    1024      1        25      lon1      Ubuntu 18.04 x64    active

Based on our findings here we need to write code for two droplets. The two are quite similar, so we can start from the same snippet for both. Copy and paste the following right below the provider in our main.tf file. This is a very basic droplet and there are other options; please check the [detailed page](https://www.terraform.io/docs/providers/do/r/droplet.html) for more information and use what you need.
# define a digitalocean droplet resource with a name of www2
# (this name is for Terraform to keep track of things)
resource "digitalocean_droplet" "www2" {
  image       = "ubuntu-18-04-x64"
  name        = "www2"
  region      = "lon1"
  size        = "s-1vcpu-1gb"
  resize_disk = false
}

**Image**

Run doctl compute image list --public and find your image in the list. It is probably the same as in the droplet list above, but lowercased and with spaces replaced by dashes. This was the case with my droplet running Ubuntu 18.04 (www2), but I could not find an image for my www1 droplet running 16.10. After some investigation it seems that Digitalocean has removed it; in that case a blank string should be the image value above, or else the droplet will be detected as having changes later!

**Name and region**

Just fill in the same values as the droplet list shows.

**Size**

Once again we have to run a command to find this value (or slug, as Digitalocean calls it). Run doctl compute size list, find the line which matches your memory, disk and VCPUs, and put its slug (the first column in the command results) as the value of this field.

**Resize disk**

Just set this to false; it controls whether a disk will be wiped and resized when upgrading to a size with larger capacity. When importing, this will be set to an empty string by default, and when we run terraform plan it would be converted to true, which we don’t want right now.

Run the import command for each droplet; Terraform will quickly give feedback if everything is OK.

terraform import digitalocean_droplet.www2 12345679
....
digitalocean_droplet.www2: Refreshing state... (ID: 12345679)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Then run the plan command to see if Terraform is picking everything up correctly.

terraform plan --out=plan
....
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ digitalocean_droplet.www2
      resize_disk: "" => "false"

Plan: 0 to add, 1 to change, 0 to destroy.

Great! Everything seems to be working! Terraform has successfully imported our droplet state and wants to update the unassigned resize_disk value to false. Let’s apply the plan.

terraform apply "plan"
....
digitalocean_droplet.www2: Modifying... (ID: 12345679)
  resize_disk: "" => "false"
digitalocean_droplet.www2: Modifications complete after 1s (ID: 12345679)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Huge success!

Another resource that we would obviously like to import is our SSH key (since we are responsible developers who think about security), which we added to our Digitalocean account earlier. Once again we can run a doctl command to retrieve any registered SSH keys.

doctl compute ssh-key list
....
ID        Name           FingerPrint
123456    SkeletonKey    xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx

Quickly jot down a few lines of code in main.tf that represent this:

resource "digitalocean_ssh_key" "skeleton_key" {
  name       = "SkeletonKey"
  public_key = "${file("${path.root}/../keys/skeleton_key.pub")}"
}

**Public key**

The main thing to discuss here. In this project it has been decided to add our public SSH keys to the repository under a keys folder at root level. As for the slightly strange syntax: file(...) reads the content of a file by path, and path.root gives us the Terraform project root for ease of navigation.

Do the import almost the same way as for our droplet:

terraform import digitalocean_ssh_key.skeleton_key 123456
....
digitalocean_ssh_key.skeleton_key: Importing from ID "123456"...
digitalocean_ssh_key.skeleton_key: Import complete!
  Imported digitalocean_ssh_key (ID: 123456)
digitalocean_ssh_key.skeleton_key: Refreshing state... (ID: 123456)

Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Run a plan to see what we get; hopefully nothing should want to be updated!

terraform plan --out=plan
....
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

digitalocean_droplet.www2: Refreshing state... (ID: 12345679)
digitalocean_ssh_key.skeleton_key: Refreshing state... (ID: 123456)

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

Huzzah! We have now taken control of our infrastructure on Digitalocean, albeit a small one right now. But with this newly found confidence we can surely find new ways to use their service. :)

The beauty of this, now that our key is managed by Terraform, is that we can create new droplets in Terraform and reference the key... IN CODE! No need for dashboards or copying files over scp. Just write some magic code and run commands. Done.

Since we are brave adventurers, let’s try this out with a new droplet, adding the SSH key from the get-go, and let’s throw a floating IP into the ring! Add the following to main.tf:

resource "digitalocean_droplet" "www3" {
  image       = "ubuntu-18-04-x64"
  name        = "www3"
  region      = "lon1"
  size        = "s-1vcpu-1gb"
  resize_disk = false
  # reference the ssh key we created earlier
  ssh_keys    = ["${digitalocean_ssh_key.skeleton_key.fingerprint}"]
}

resource "digitalocean_floating_ip" "www_floating_ip" {
  droplet_id = "${digitalocean_droplet.www3.id}"
  region     = "${digitalocean_droplet.www3.region}"
}

output "www_floating_ip" {
  value = "${digitalocean_floating_ip.www_floating_ip.ip_address}"
}


The droplet is more or less the same configuration as before, just with a changed name and the SSH key added. The floating IP resource then references the new droplet to get a target id and region. The output block returns a variable with the given name and, as its value, whatever the actual floating IP turns out to be, so that we can use it after applying the plan.
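As a side note, once a plan with an output has been applied, the value can be read back from the state at any time with Terraform’s output command, which is handy in scripts (the IP shown will of course be whatever your apply produced):

```
# Print a single output value from the current state
terraform output www_floating_ip
```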

Now! Plan and apply!

terraform plan --out=plan
....
Terraform will perform the following actions:
+ digitalocean_droplet.www3
+ digitalocean_floating_ip.www_floating_ip
Plan: 2 to add, 0 to change, 0 to destroy.
...
terraform apply "plan"
....
digitalocean_droplet.www3: Creating...
digitalocean_droplet.www3: Still creating... (10s elapsed)
digitalocean_droplet.www3: Still creating... (20s elapsed)
digitalocean_droplet.www3: Creation complete after 24s (ID: 123456710)
digitalocean_floating_ip.www_floating_ip: Creating...
digitalocean_floating_ip.www_floating_ip: Creation complete after 16s (ID: xxx.xxx.xxx.xxx)
Outputs:

www_floating_ip = xxx.xxx.xxx.xxx



Woho! Now let’s try to access it through our regular terminal!

ssh root@xxx.xxx.xxx.xxx
....
root@www3:~#


Cheers to us!

Now, if we grow tired of our new droplet and want to delete it and the floating IP, we just delete the code we added, hit plan and then apply.
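As an aside: if we ever wanted to tear down everything Terraform manages, not just the pieces we remove from the code, there is a dedicated command for that too. It presents the same kind of plan and asks for confirmation before touching anything, so treat it with respect:

```
# Destroy every resource tracked in the state (asks for confirmation)
terraform destroy
```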

terraform plan --out=plan
- digitalocean_droplet.www3
- digitalocean_floating_ip.www_floating_ip
Plan: 0 to add, 0 to change, 2 to destroy.
....
terraform apply "plan"
digitalocean_floating_ip.www_floating_ip: Destroying... (ID: xxx.xxx.xxx.xxx)
digitalocean_floating_ip.www_floating_ip: Still destroying... (ID: xxx.xxx.xxx.xxx, 10s elapsed)
digitalocean_floating_ip.www_floating_ip: Destruction complete after 12s
digitalocean_droplet.www3: Destroying... (ID: 123456710)
digitalocean_droplet.www3: Still destroying... (ID: 123456710, 10s elapsed)
digitalocean_droplet.www3: Destruction complete after 12s

Apply complete! Resources: 0 added, 0 changed, 2 destroyed.


Real magic in the works here, creating and destroying with such precision. Almost makes me religious.

And that wraps up this really long post, which probably contains lots of mistakes that the helpful people of the internet will find and correct me on. Good times!

Until the next time and Terraform safely!