
Terraform to provision VMs on Oracle Cloud (IaC)

With this article, I intend to document the process I followed to provision two test virtual machines on Oracle Cloud Infrastructure in my pursuit of learning IaC.

Terraform is an IaC tool that lets you build, change, and version infrastructure safely and efficiently. This includes low-level components like compute instances, storage, and networking, as well as high-level components like DNS entries and SaaS features. Terraform configuration files (.tf) are declarative, meaning that they describe the end state of your infrastructure. You do not need to write step-by-step instructions to create resources because Terraform handles the underlying logic. Terraform builds a resource graph to determine resource dependencies and creates or modifies non-dependent resources in parallel. This allows Terraform to provision resources efficiently.
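
As a tiny illustration of the declarative style, a block like the one below describes what should exist, not how to create it; Terraform works out the API calls and their ordering for you (the names and OCID here are just placeholders):

```hcl
# Declare the desired end state: a VCN with this CIDR block should exist.
# Terraform figures out whether to create, update, or leave it alone.
resource "oci_core_vcn" "example" {
  cidr_block     = "10.0.0.0/24"
  compartment_id = "ocid1.tenancy.oc1..example"
  display_name   = "ExampleVCN"
}
```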

Pre-requisites

  • Install Terraform if you have not done so already.

Terraform is also available for arm and arm64. Do you have a spare Raspberry Pi?!

  • Unzip and move the terraform binary to a directory covered by your $PATH variable, e.g. on a Linux distro:

$ sudo mv terraform /usr/local/bin
  • Confirm the installation:

$ terraform -v
Terraform v1.3.3
on linux_arm64
  • Create RSA keys to authenticate to OCI:

$ openssl genrsa -out oci-priv.pem 4096
$ chmod 600 oci-priv.pem
$ openssl rsa -pubout -in oci-priv.pem -out oci-pub.pem
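
If you want to sanity-check the fingerprint OCI will display after you upload the public key, you can compute it locally: it is the colon-separated MD5 digest of the DER-encoded public key. A sketch, using a throwaway key (substitute oci-priv.pem to check your real one):

```shell
# The OCI API key fingerprint is the MD5 of the DER-encoded public key,
# printed in colon-separated form. Demo with a disposable key:
openssl genrsa -out demo-priv.pem 4096 2>/dev/null
fp=$(openssl rsa -pubout -outform DER -in demo-priv.pem 2>/dev/null | openssl md5 -c | awk '{print $NF}')
echo "$fp"
```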

Copy and paste the public key to your account on OCI:

  • From your user avatar, go to My Profile.
  • Click API Keys under the Resources tab on the left panel.
  • Click Add Public Key.
  • Select Paste a public key.
  • Paste the public key.
  • Click Add.

OCI will now display the values you need to authenticate (user OCID, tenancy OCID, fingerprint, and region); take note of them as you will need them in the next steps. Oracle suggests saving them to a configuration file in your home directory, but for this testing phase we can use them directly in the code.
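
For reference, the configuration file OCI suggests looks roughly like this (all values below are placeholders); in this walkthrough we pass the same fields straight to the Terraform provider block instead:

```ini
[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=<your_key_fingerprint>
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=<your_region_identifier>
key_file=~/.oci/oci-priv.pem
```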

Prepping up

  • Create SSH keys to connect to your compute instances, using ssh-keygen on Linux:

$ ssh-keygen -t rsa -b 4096

Follow the prompts and note the path to the public key. You will need to pass this to the instance using Terraform.
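
If you prefer to skip the prompts (handy when scripting), the output path and passphrase can be supplied on the command line; the filename here is just an example:

```shell
# Non-interactive variant: write the keypair to a named file with no passphrase.
ssh-keygen -t rsa -b 4096 -f ./tf-compute-key -N "" -q
ls tf-compute-key tf-compute-key.pub
```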
  • Create a new directory named tf-compute, cd tf-compute and nano provider.tf as follows:

provider "oci" {
  tenancy_ocid     = "tenancy_ocid_from_the_previous_step"
  user_ocid        = "user_ocid_from_the_previous_step"
  private_key_path = "absolute_path_to_the_private_key" # readlink -f oci-priv.pem on Linux
  fingerprint      = "fingerprint_from_the_previous_step"
  region           = "region_identifier_from_the_previous_step"
}
  • nano availability-domains.tf with the following:

data "oci_identity_availability_domains" "ads" {
  compartment_id = var.cid
}
  • nano vars.tf as follows:

variable "instance_count" {
  default = 2
}

variable "instance_names" {
  type    = list(string)
  default = ["tf-debian11-0", "tf-debian11-1"]
}

variable "cid" {
  default = "ocid1.tenancy.oc1.........."
}
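
Should you later want different values without editing vars.tf, a terraform.tfvars file in the same directory overrides the defaults. A sketch, assuming a third instance:

```hcl
instance_count = 3
instance_names = ["tf-debian11-0", "tf-debian11-1", "tf-debian11-2"]
```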
  • nano outputs.tf with the following:

# The "name" of the availability domain to be used for the compute instance.
output "name-of-first-availability-domain" {
  value = data.oci_identity_availability_domains.ads.availability_domains[0].name
}

output "public-ip" {
  value = oci_core_instance.tf-instances[*].public_ip
}
  • nano compute.tf as follows:

# amd64 instances
resource "oci_core_instance" "tf-instances" {
  count               = var.instance_count
  availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
  compartment_id      = var.cid
  shape               = "VM.Standard.E2.1.Micro"
  shape_config {
    memory_in_gbs = 1
    ocpus         = 1
  }
  source_details {
    #source_id = "ocid1.image.oc1.ap-hyderabad-1.aaaaaaaaammbtmhmaozuu7gqqlyz3zftzfnvamc5n3paxv4qpyynf5obwzsa" # ubuntu 22.04 minimal platform image
    source_id               = "ocid1.image.oc1.ap-hyderabad-1.aaaaaaaaamfe7qoh45kgzg5xsbtyldsqav2duul7uorak7lprwrersdwtp2a" # debian 11 minimal custom image
    source_type             = "image"
    boot_volume_size_in_gbs = "50"
  }

  display_name = element(var.instance_names, count.index)
  create_vnic_details {
    assign_public_ip = true
    subnet_id        = oci_core_subnet.nevins_subnet.id
  }
  metadata = {
    ssh_authorized_keys = file("/path/to/.ssh/oracle-arm64-india.pub")
  }
  preserve_boot_volume = false
}
  • nano security.tf as follows (this ensures that a security list is added to the subnet allowing SSH from the home network only):

You may need to automate this to update the security list when the IP address on the home router changes.

resource "oci_core_security_list" "nevins_security_list" {
  compartment_id = var.cid
  vcn_id         = oci_core_vcn.nevins_vcn.id
  display_name   = "NevinsSecurityList"
  ingress_security_rules {
    protocol = "6"
    source   = "current_home_ip_address/32"
    tcp_options {
      min = "22"
      max = "22"
    }
  }
}
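
One way to automate updating the rule when the home IP changes (a sketch I have not applied here) is the hashicorp/http data source, which can look up your current public IP at plan time; checkip.amazonaws.com is one service that returns it as plain text:

```hcl
# Assumes the hashicorp/http provider is declared in required_providers.
data "http" "home_ip" {
  url = "https://checkip.amazonaws.com"
}

# Then, inside the ingress rule, chomp() strips the trailing newline:
#   source = "${chomp(data.http.home_ip.response_body)}/32"
# (older versions of the http provider call this attribute `body`)
```

Each terraform apply would then refresh the rule to whatever your router's address is at that moment.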
  • nano subnet.tf as follows:

resource "oci_core_subnet" "nevins_subnet" {
  availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
  cidr_block          = "10.0.0.0/24"
  display_name        = "NevinsSubnet"
  compartment_id      = var.cid
  vcn_id              = oci_core_vcn.nevins_vcn.id
  route_table_id      = oci_core_vcn.nevins_vcn.default_route_table_id
  security_list_ids   = [oci_core_vcn.nevins_vcn.default_security_list_id, oci_core_security_list.nevins_security_list.id]
  dhcp_options_id     = oci_core_vcn.nevins_vcn.default_dhcp_options_id
  dns_label           = "nevinssubnet"
}
  • nano vcn.tf as follows:
resource "oci_core_vcn" "nevins_vcn" {
    cidr_block     = "10.0.0.0/24"
    compartment_id = var.cid
    display_name   = "NevinsVCN"
    dns_label      = "nevinsvcn"
}

resource "oci_core_internet_gateway" "nevins_internet_gateway" {
  compartment_id = var.cid
  display_name   = "NevinsIG"
  vcn_id         = oci_core_vcn.nevins_vcn.id
}

resource "oci_core_default_route_table" "nevins_route_table" {
  manage_default_resource_id = oci_core_vcn.nevins_vcn.default_route_table_id

  route_rules {
    destination       = "0.0.0.0/0"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_internet_gateway.nevins_internet_gateway.id
  }
}

Deploy

Initialize Terraform:

$ terraform init

Plan (do not make changes but see an overview):

$ terraform plan

Execute:

$ terraform apply

The output will display the public IP addresses of the virtual machines. You can now SSH to the instances using the private key, e.g.:

$ ssh -i <path to the priv key> debian@<public ip>

Thank you!

Source

P.S

I used a custom OS image as I think the minimal platform image on OCI is not minimal! You can read about it on my previous blog post.

This post is licensed under CC BY 4.0 by the author.