How to get the private IP of an EC2 instance dynamically and put it in /etc/hosts

Solution 1

I had a similar need for a database cluster (some sort of poor man's Consul alternative), and I ended up using the following Terraform file:

variable "cluster_member_count" {
  description = "Number of members in the cluster"
  default = "3"
}
variable "cluster_member_name_prefix" {
  description = "Prefix to use when naming cluster members"
  default = "cluster-node-"
}
variable "aws_keypair_privatekey_filepath" {
  description = "Path to SSH private key to SSH-connect to instances"
  default = "./secrets/aws.key"
}

# EC2 instances
resource "aws_instance" "cluster_member" {
  count = "${var.cluster_member_count}"
  # ...
}

# Bash command to populate the /etc/hosts file on each instance
resource "null_resource" "provision_cluster_member_hosts_file" {
  count = "${var.cluster_member_count}"

  # Changes to any instance of the cluster requires re-provisioning
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster_member.*.id)}"
  }
  connection {
    type = "ssh"
    host = "${element(aws_instance.cluster_member.*.public_ip, count.index)}"
    user = "ec2-user"
    private_key = "${file(var.aws_keypair_privatekey_filepath)}"
  }
  provisioner "remote-exec" {
    inline = [
      # Adds all cluster members' IP addresses to /etc/hosts (on each member)
      "echo '${join("\n", formatlist("%v", aws_instance.cluster_member.*.private_ip))}' | awk 'BEGIN{ print \"\\n\\n# Cluster members:\" }; { print $0 \" ${var.cluster_member_name_prefix}\" NR-1 }' | sudo tee -a /etc/hosts > /dev/null",
    ]
  }
}

One rule is that each cluster member gets named with the cluster_member_name_prefix Terraform variable followed by the count index (starting at 0): cluster-node-0, cluster-node-1, etc.

This will add the following lines to each "aws_instance.cluster_member" instance's /etc/hosts file (the exact same lines, in the same order, on every member):

# Cluster members:
10.0.1.245 cluster-node-0
10.0.1.198 cluster-node-1
10.0.1.153 cluster-node-2

In my case, the null_resource that populates the /etc/hosts file was triggered by an EBS volume attachment, but a "${join(",", aws_instance.cluster_member.*.id)}" trigger should work just fine too.
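For reference, a volume-attachment-based trigger could look like this (a sketch, assuming an aws_volume_attachment resource named cluster_member_volume that is not shown above):

resource "null_resource" "provision_cluster_member_hosts_file" {
  count = "${var.cluster_member_count}"

  # Hypothetical trigger: re-provision whenever any member's EBS
  # volume attachment changes
  triggers {
    cluster_volume_attachment_ids = "${join(",", aws_volume_attachment.cluster_member_volume.*.id)}"
  }

  # ... (same connection and remote-exec provisioner as above)
}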

Also, for local development, I added a local-exec provisioner that writes each instance's IPs to a local cluster_ips.txt file:

resource "null_resource" "write_resource_cluster_member_ip_addresses" {
  depends_on = ["aws_instance.cluster_member"]

  provisioner "local-exec" {
    command = "echo '${join("\n", formatlist("instance=%v ; private=%v ; public=%v", aws_instance.cluster_member.*.id, aws_instance.cluster_member.*.private_ip, aws_instance.cluster_member.*.public_ip))}' | awk '{print \"node=${var.cluster_member_name_prefix}\" NR-1 \" ; \" $0}' > \"${path.module}/cluster_ips.txt\""
    # Output is:
    # node=cluster-node-0 ; instance=i-03b1f460318c2a1c3 ; private=10.0.1.245 ; public=35.180.50.32
    # node=cluster-node-1 ; instance=i-05606bc6be9639604 ; private=10.0.1.198 ; public=35.180.118.126
    # node=cluster-node-2 ; instance=i-0931cbf386b89ca4e ; private=10.0.1.153 ; public=35.180.50.98
  }
}

And with the following shell command I can add them to my local /etc/hosts file (piped through sudo tee, since /etc/hosts is owned by root):

awk -F'[;=]' '{ print $8 " " $2 " #" $4 }' cluster_ips.txt | sudo tee -a /etc/hosts > /dev/null

Example:

35.180.50.32 cluster-node-0 # i-03b1f460318c2a1c3
35.180.118.126 cluster-node-1 # i-05606bc6be9639604
35.180.50.98 cluster-node-2 # i-0931cbf386b89ca4e

Solution 2

Terraform provisioners expose a self syntax for getting data about the resource being created.

If you were just interested in the private IP address of the instance being created, you could use ${self.private_ip} to get at it.
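For example (a minimal sketch in the same Terraform 0.11-style syntax; the resource name and the my-node hostname are just illustrative):

resource "aws_instance" "example" {
  # ... (AMI, instance type, etc.)

  provisioner "remote-exec" {
    inline = [
      # self refers to the attributes of this very instance;
      # append via sudo tee since /etc/hosts is root-owned
      "echo '${self.private_ip} my-node' | sudo tee -a /etc/hosts",
    ]
  }
}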

Unfortunately, if you need the IP addresses of multiple sub-resources (e.g. ones created by using the count meta-attribute), then you will need to do this outside of the resource's provisioner by using the null_resource provider.

The null_resource provider docs show a good use case for this:

resource "aws_instance" "cluster" {
  count = 3
  ...
}

resource "null_resource" "cluster" {
  # Changes to any instance of the cluster requires re-provisioning
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
  }

  # Bootstrap script can run on any instance of the cluster
  # So we just choose the first in this case
  connection {
    host = "${element(aws_instance.cluster.*.public_ip, 0)}"
  }

  provisioner "remote-exec" {
  # Bootstrap script called with private_ip of each node in the cluster
    inline = [
      "bootstrap-cluster.sh ${join(" ", aws_instance.cluster.*.private_ip)}",
    ]
  }
}

but in your case you probably want something like:

resource "aws_instance" "ceph-cluster" {
  ...
}

resource "null_resource" "ceph-cluster" {
  # Changes to any instance of the cluster requires re-provisioning
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.ceph-cluster.*.id)}"
  }

  connection {
    host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
  }

  provisioner "remote-exec" {
      inline = [
        "cat /etc/hosts",
        "cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
        "cp -arp  ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
        "chmod 700 ~/.ssh/ceph_rsa",
        "echo 'IdentityFile    ~/.ssh/ceph_rsa' >> ~/.ssh/config",
        "echo 'User            ubuntu' >> ~/.ssh/config",
        "echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
        "echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
      ]
  }
}

Solution 3

This could be a piece of cake with Terraform/Sparrowform. No need for null_resources, and a minimum of fuss:

Bootstrap infrastructure

$ terraform apply

Prepare a Sparrowform provision scenario to insert ALL nodes' public IPs / DNS names into every node's /etc/hosts file:

$ cat sparrowfile

#!/usr/bin/env perl6

use Sparrowform;

my @hosts = (
  "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4",
  "::1         localhost localhost.localdomain localhost6 localhost6.localdomain6"
);

# Iterate over all resources in the Terraform state
for tf-resources() -> $r {
  my $rd = $r[1]; # resource data
  next unless $rd<public_ip>;
  next unless $rd<public_dns>;
  # Skip the node currently being provisioned
  next if $rd<public_ip> eq input_params('Host');
  push @hosts, $rd<public_ip> ~ ' ' ~ $rd<public_dns>;
}

file '/etc/hosts', %(
  action  => 'create',
  content => @hosts.join("\n")
);

Give it a run; Sparrowform will execute the scenario on every node:

$ sparrowform --bootstrap --ssh_private_key=~/.ssh/aws.key --ssh_user=ec2-user

PS. Disclosure: I am the tool author.

Comments

  • negabaro, almost 2 years ago:

    I would like to create multiple EC2 instances using Terraform and write the private IP addresses of the instances to /etc/hosts on every instance.

    Currently I am trying the following code, but it's not working:

    resource "aws_instance" "ceph-cluster" {
      count = "${var.ceph_cluster_count}"
      ami           = "${var.app_ami}"
      instance_type = "t2.small"
      key_name      = "${var.ssh_key_name}"
    
      vpc_security_group_ids = [
        "${var.vpc_ssh_sg_ids}",
        "${aws_security_group.ceph.id}",
      ]
    
      subnet_id                   = "${element(split(",", var.subnet_ids), count.index)}"
    
      associate_public_ip_address = "true"
      // TODO: temporarily hard-coding the IAM role
      //iam_instance_profile        = "${aws_iam_instance_profile.app_instance_profile.name}"
      iam_instance_profile        = "${var.iam_role_name}"
    
      root_block_device {
        delete_on_termination = "true"
        volume_size           = "30"
        volume_type           = "gp2"
      }
    
      connection {
        user        = "ubuntu"
        private_key = "${file("${var.ssh_key}")}"
        agent = "false"
      }
    
      provisioner "file" {
        source      = "../../../scripts"
        destination = "/home/ubuntu/"
      }
    
      tags {
        Name = "${var.infra_name}-ceph-cluster-${count.index}"
        InfraName = "${var.infra_name}"
      }
    
      provisioner "remote-exec" {
          inline = [
            "cat /etc/hosts",
            "cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
            "cp -arp  ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
            "chmod 700 ~/.ssh/ceph_rsa",
            "echo 'IdentityFile    ~/.ssh/ceph_rsa' >> ~/.ssh/config",
            "echo 'User            ubuntu' >> ~/.ssh/config",
            "echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
            "echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
          ]
      }
    
    }
    
    
    aws_instance.ceph-cluster.*.private_ip
    

    I would like to take the values of the above expression and put them in /etc/hosts.