Automate Elasticsearch deployment in GCP with Terraform
by Calle Engene, July 2020

1.1. Virtual machines

First of all, we need the VMs running Elasticsearch:
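
A minimal sketch of what such a resource can look like (the variable names, image and machine type are illustrative assumptions; the subnet and service account are defined in sections 1.2 and 1.3):

resource "google_compute_instance" "elastic" {
  count        = var.elastic_node_count         # e.g. 3
  name         = "my-elastic-instance-${count.index + 1}"
  machine_type = var.elastic_machine_type       # assumed, e.g. "n1-standard-4"
  zone         = var.elastic_zones[count.index] # spread across zones, see section 1.3

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11" # assumed image
      type  = "pd-ssd"
      size  = 200 # GB SSD drive, as described below
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.elastic.self_link
    # no access_config block, so the VM gets no external IP
  }

  service_account {
    email  = google_service_account.elastic.email
    scopes = ["cloud-platform"]
  }

  # the startup script we come back to in section 2
  metadata_startup_script = file("./startup-elastic.sh")
}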

There is a lot happening here. You can choose how many machines you’d like to deploy by iterating over the above snippet with the count argument. The most interesting part is the startup script, ./startup-elastic.sh, which we will get back to in section 2.

We also need a new instance for running Kibana:
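
And a corresponding sketch for Kibana (the machine type and the startup-kibana.sh script name are assumptions):

resource "google_compute_instance" "kibana" {
  name         = "my-kibana-instance"
  machine_type = var.kibana_machine_type # Kibana needs less muscle than the data nodes
  zone         = var.kibana_zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11" # assumed image
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.elastic.self_link # same subnet, no external IP
  }

  service_account {
    email  = google_service_account.elastic.email
    scopes = ["cloud-platform"]
  }

  metadata_startup_script = file("./startup-kibana.sh") # hypothetical Kibana bootstrap script
}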

So, this is the heart of what we need: the actual GCE (Google Compute Engine) VMs running Elasticsearch and Kibana, written in HashiCorp Configuration Language.

You can choose whatever settings suit you in the var file. This configuration will deploy VMs with a 200 GB SSD drive in their own subnet, using a service account that has the minimal required rights. There are no external IPs, so we have to figure out another way of accessing the VMs.

My tfvars file looks something like this:
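
Something along these lines, with example values throughout:

project              = "my-gcp-project"
region               = "europe-west1"
elastic_node_count   = 3
elastic_machine_type = "n1-standard-4"
elastic_zones        = ["europe-west1-d", "europe-west1-d", "europe-west1-c"]
kibana_machine_type  = "n1-standard-2"
kibana_zone          = "europe-west1-d"
subnet_cidr          = "10.0.0.0/24"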

So far so good, but we need more than just VMs, right?
Let’s set up IAM and give the service account used by the VMs the correct permissions.

1.2. Permissions

To be able to create a backup of our data in GCS (Google Cloud Storage), the service account will need some fine-grained permissions.

First, we create a custom role with the permissions the VMs need to back up to GCS. Then we create a service account and bind that role to it. Last of all, we generate a key for that service account. We will save this key in the Elasticsearch keystore and use it to create a backup repository in GCS.
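
A sketch of those three steps; the permission list is the kind of fine-grained set GCS snapshot repositories need, and the role and account names are illustrative:

resource "google_project_iam_custom_role" "elastic_backup" {
  role_id = "elasticBackup"
  title   = "Elasticsearch GCS backup"
  permissions = [
    "storage.buckets.get",
    "storage.objects.create",
    "storage.objects.delete",
    "storage.objects.get",
    "storage.objects.list",
  ]
}

resource "google_service_account" "elastic" {
  account_id   = "elastic-vm"
  display_name = "Service account for the Elasticsearch VMs"
}

resource "google_project_iam_member" "elastic_backup" {
  project = var.project
  role    = google_project_iam_custom_role.elastic_backup.id
  member  = "serviceAccount:${google_service_account.elastic.email}"
}

# the key that goes into the Elasticsearch keystore
resource "google_service_account_key" "elastic" {
  service_account_id = google_service_account.elastic.name
}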

1.3. Network

We also need some networking infrastructure for security.

We start by creating a separate VPC (Virtual Private Cloud) and subnet for the VMs and infrastructure. Here we specify the IP range that you can give to the machines.
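
For example, using the subnet_cidr variable from the tfvars example above:

resource "google_compute_network" "elastic" {
  name                    = "elastic-vpc"
  auto_create_subnetworks = false # we manage the subnet ourselves
}

resource "google_compute_subnetwork" "elastic" {
  name          = "elastic-subnet"
  network       = google_compute_network.elastic.id
  region        = var.region
  ip_cidr_range = var.subnet_cidr # e.g. "10.0.0.0/24"
}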

Since the VMs don’t have external IPs, they cannot access the internet to download any software. To get around this problem, we open a NAT gateway with a router so they can download what they need. Once the software is installed, we can remove the NAT gateway.
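
A Cloud Router plus NAT gateway can be sketched like this (names are illustrative):

resource "google_compute_router" "nat" {
  name    = "elastic-router"
  region  = var.region
  network = google_compute_network.elastic.id
}

resource "google_compute_router_nat" "nat" {
  name                               = "elastic-nat"
  router                             = google_compute_router.nat.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}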

In GCP, if you want to access your VPC network from e.g. App Engine or Cloud Functions, you’ll need a VPC connector. In my setup, I host an API on GAE that invokes the Elasticsearch internal load balancer via the VPC connector.
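
The connector is a small resource; the /28 range below is an assumption and just needs to be an unused block in your VPC:

resource "google_vpc_access_connector" "elastic" {
  name          = "elastic-connector"
  region        = var.region
  network       = google_compute_network.elastic.name
  ip_cidr_range = "10.8.0.0/28" # must not overlap existing subnets
}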

An Elasticsearch cluster with more than one node needs a load balancer to distribute the requests. To put the VMs behind a load balancer, we need to create instance groups. For redundancy, we put the VMs in the same region but in different zones, so a problem in one zone won’t affect the others. In this example we’re deploying three VMs: my-elastic-instance-1 and -2 are in zone d, while my-elastic-instance-3 is in zone c.
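
Since unmanaged instance groups are zonal, we need one group per zone. The zones here assume the europe-west1 values from the tfvars example:

resource "google_compute_instance_group" "elastic_d" {
  name = "elastic-group-d"
  zone = "europe-west1-d"
  instances = [
    google_compute_instance.elastic[0].self_link,
    google_compute_instance.elastic[1].self_link,
  ]
}

resource "google_compute_instance_group" "elastic_c" {
  name      = "elastic-group-c"
  zone      = "europe-west1-c"
  instances = [google_compute_instance.elastic[2].self_link]
}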

Since we also want to access Kibana through a load balancer, we’ll create an instance group for that too.
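
That one is a single-member group:

resource "google_compute_instance_group" "kibana" {
  name      = "kibana-group"
  zone      = var.kibana_zone
  instances = [google_compute_instance.kibana.self_link]
}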

Now, all we need is the load balancer with health-checks and forwarding rules.
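
Sketched here as an internal TCP load balancer for the Elasticsearch groups (Kibana would get the same treatment on port 5601); all names are illustrative:

resource "google_compute_health_check" "elastic" {
  name = "elastic-health-check"
  tcp_health_check {
    port = 9200 # Elasticsearch HTTP port
  }
}

resource "google_compute_region_backend_service" "elastic" {
  name                  = "elastic-backend"
  region                = var.region
  protocol              = "TCP"
  load_balancing_scheme = "INTERNAL"
  health_checks         = [google_compute_health_check.elastic.id]

  backend {
    group = google_compute_instance_group.elastic_d.self_link
  }
  backend {
    group = google_compute_instance_group.elastic_c.self_link
  }
}

resource "google_compute_forwarding_rule" "elastic" {
  name                  = "elastic-forwarding-rule"
  region                = var.region
  load_balancing_scheme = "INTERNAL"
  backend_service       = google_compute_region_backend_service.elastic.id
  network               = google_compute_network.elastic.id
  subnetwork            = google_compute_subnetwork.elastic.id
  ports                 = ["9200"]
}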

1.4. Firewall rules

And of course, the part that I always forget: the firewall rules. We need to allow internal communication between the nodes in the subnet, and allow the load balancer and health checks to communicate with the VMs.
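
For example, assuming ports 9200/9300 for Elasticsearch HTTP and transport and 5601 for Kibana; the health-check source ranges are the ones Google documents:

resource "google_compute_firewall" "internal" {
  name    = "allow-internal"
  network = google_compute_network.elastic.name

  allow {
    protocol = "tcp"
    ports    = ["9200", "9300", "5601"]
  }

  source_ranges = [var.subnet_cidr] # traffic from inside the subnet
}

resource "google_compute_firewall" "health_checks" {
  name    = "allow-health-checks"
  network = google_compute_network.elastic.name

  allow {
    protocol = "tcp"
    ports    = ["9200", "5601"]
  }

  # Google's documented health-check and internal LB probe ranges
  source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]
}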