
How to setup and install Elasticsearch: From a single node to a cluster of nodes

Paul Namuag


Elasticsearch is a complex piece of software, not directly comparable to mainstream database systems, whether RDBMS or NoSQL. It functions primarily as a search engine and can also serve as a document database for storing valuable analytics data.

Elasticsearch is commonly deployed as part of the Elastic Stack (Elasticsearch, Logstash, Beats, and Kibana). It can also be used with external visualization tools such as Grafana or Knowi. However, we’ll not cover these tools in this blog.

Instead, we will focus on setting up a single node and then transitioning to a 3-node cluster, with default security using SSL/TLS socket for communication.

Elasticsearch offers various ways to achieve this installation and setup, but the process can be tedious and difficult if you stumble into problems, especially with recent version releases: there have been several vital changes, and some settings may not work with your current configuration.

Here, we will guide you through installing Elasticsearch 8.x (version 8.10.2 in the transcripts below) on an Ubuntu 20.04.3 LTS (Focal Fossa) environment.

For installation under a Debian environment, you can reference the official documentation.

I have the following server nodes for this example, ready to be set up with Elasticsearch.

Server nodes:
1st node: pupnode170 (192.168.40.170)
2nd node: pupnode171 (192.168.40.171)
3rd node: pupnode172 (192.168.40.172)

Linux OS Version:

root@pupnode170:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

The steps will be laid out in order, so it’s easier to follow. At times, you will need to revisit and repeat previous steps.

Let’s get started.

Setting up your single node installation

In this section, we will set up the server nodes on which Elasticsearch will be installed.

Let’s begin setting up the first node with host IP 192.168.40.170. Just follow the steps in order, running the commands as stated.

1. Set up the GPG key required for the repository

Open up your shell and ensure that your OS user has sudo privileges, or run as the root user.

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

2. Install the necessary dependency tools

$ sudo apt update && sudo apt-get install apt-transport-https

3. Set up the repository

$ echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

4. Install Elasticsearch

$ sudo apt-get update && sudo apt-get install elasticsearch

This will install the Elasticsearch package and the result will look like this:

root@pupnode170:~# sudo apt-get update && sudo apt-get install elasticsearch
Get:1 https://artifacts.elastic.co/packages/8.x/apt stable InRelease [10.4 kB]
Get:2 https://artifacts.elastic.co/packages/8.x/apt stable/main amd64 Packages [66.2 kB]
Hit:3 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:4 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Fetched 76.6 kB in 1s (70.8 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  elasticsearch
0 upgraded, 1 newly installed, 0 to remove and 247 not upgraded.
Need to get 607 MB of archives.
After this operation, 1247 MB of additional disk space will be used.
Get:1 https://artifacts.elastic.co/packages/8.x/apt stable/main amd64 elasticsearch amd64 8.10.2 [607 MB]
Fetched 607 MB in 12s (49.7 MB/s)
Selecting previously unselected package elasticsearch.
(Reading database ... 72714 files and directories currently installed.)
Preparing to unpack .../elasticsearch_8.10.2_amd64.deb ...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Unpacking elasticsearch (8.10.2) ...
Setting up elasticsearch (8.10.2) ...
--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : lfmBlynaPj4UbJ9v1FFU

If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.

You can complete the following actions at any time:

Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.

Generate an enrollment token for Kibana instances with
 '/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.

Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.

-------------------------------------------------------------------------------------------------
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service

If you look closely, Elasticsearch generated a password for the elastic superuser in the installation output.

…
The generated password for the elastic built-in superuser is : lfmBlynaPj4UbJ9v1FFU
…

Make sure you save this password; it will be needed to set up the other two nodes in the cluster later on.

Alternatively, you can reset the password, but we’ll use the generated one for our example here.

5. Get ready to start Elasticsearch

According to the installation log, you can proceed by following the series of commands provided.

Simply run the following commands:

## reload the daemon
sudo systemctl daemon-reload
## allow elasticsearch service to start whenever system boots up
sudo systemctl enable elasticsearch.service
## start the elasticsearch service
sudo systemctl start elasticsearch.service

Technically, the default setup that Elasticsearch provides is good enough to start a single-node deployment.

You’ll notice that the default /etc/elasticsearch/elasticsearch.yml will have the following:

root@pupnode170:~# cat /etc/elasticsearch/elasticsearch.yml | sed -e 's/#.*$//' -e '/^$/d' -e '/^$/N;/^\n$/D' | sed '/^\t*$/d'
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["pupnode170"]
http.host: 0.0.0.0

We’ll explain later what these settings mean. You can use the default configuration without any changes just to play with Elasticsearch.

For a production setup, you should adjust accordingly to your needs.

To verify your setup, you can run the following:

root@pupnode170:~# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:lfmBlynaPj4UbJ9v1FFU https://localhost:9200/?pretty
{
  "name" : "pupnode170",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "cICM6Xv3QmGuER9XktiUpQ",
  "version" : {
    "number" : "8.10.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "6d20dd8ce62365be9b1aca96427de4622e970e9e",
    "build_date" : "2023-09-19T08:16:24.564900370Z",
    "build_snapshot" : false,
    "lucene_version" : "9.7.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Setting up your Elasticsearch 3-node cluster

An Elasticsearch cluster relies on quorum-based decision-making to remain functional.

The quorum is based on the voting configuration, and only primary-eligible nodes get a vote. When we say primary, we mean nodes with the master role (enabled by default when node.roles is not explicitly specified); only such nodes are eligible for quorum-based voting.

For an election to proceed and successfully elect an eligible primary, at least half of the votes + 1 must remain alive and participating in the cluster.

This means that if there are three or four primary-eligible nodes, the cluster can tolerate one of them being unavailable.

If there are two or fewer primary-eligible nodes, they must all remain available.
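As a quick sanity check, the quorum arithmetic above can be worked out directly in the shell (the node counts here are illustrative):

```shell
# For n primary-eligible nodes, a quorum of floor(n/2) + 1 votes must stay available.
for n in 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "nodes=$n quorum=$quorum can_lose=$(( n - quorum ))"
done
```

For our three-node cluster, the quorum is 2, so losing a single node still leaves an electable majority.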
With that being said, let’s begin editing the configuration file. At this point, ensure you have your first node’s configuration set correctly.

6. Generate the CA (certificate authority) and PEM files needed for the TLS security configuration

On the first node (192.168.40.170), let’s generate the CA files and the HTTP and transport security files needed for the TLS configuration. To do that, go to the /usr/share/elasticsearch directory and perform the following sequence of actions.

Generate the certificate authority

This will be the certificate used to sign the certificates generated for the transport and HTTP SSL connections.

root@pupnode170:/usr/share/elasticsearch# bin/elasticsearch-certutil ca

By default, this generates a file named elastic-stack-ca.p12. You will be asked to supply a password. For simplicity, let’s leave it empty, but for security, especially in a production environment, make sure you add a password.

Generate our transport certificate using X.509

root@pupnode170:/usr/share/elasticsearch# bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

The --ca argument takes the certificate authority generated earlier, which is the file /usr/share/elasticsearch/elastic-stack-ca.p12.

When generating this certificate, you will again be asked for a password; leave it empty and just hit enter. This produces a file named (by default) elastic-certificates.p12.

Certificates will be written to /usr/share/elasticsearch/elastic-certificates.p12.

Again, this file will be used for the transport SSL connection.

Generate our http certificate per node

The HTTP certificate is used for RESTful transactions and helps keep your environment secure. This means that all nodes involved (pupnode170, pupnode171, and pupnode172), with their respective hostnames and IP addresses, will be configured with their own certificates.

root@pupnode170:/usr/share/elasticsearch# bin/elasticsearch-certutil http

This command will ask a series of questions, as shown below:

…
Generate a CSR? [y/N]N
….
Use an existing CA? [y/N]y
….

Answer N and y respectively, since we already have our CA and there’s no need to generate a new CSR. Next, supply the path to the certificate authority that we generated earlier with the first command.

Please enter the full pathname to the Certificate Authority that you wish to
use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
(.jks) or PEM (.crt, .key, .pem) format.
CA Path: /usr/share/elasticsearch/elastic-stack-ca.p12

Since there’s no need for a password in this exercise, leave it empty and hit enter.

Reading a PKCS12 keystore requires a password.
It is possible for the keystore's password to be blank,
in which case you can simply press <ENTER> at the prompt
Password for elastic-stack-ca.p12:

It will ask for the duration of validity of your certificate. Let’s set it to 365D, for example:

For how long should your certificate be valid? [5y] 365D
….

Next, and this is very important, it will ask whether to generate a certificate per node. Enter y and complete the series of prompts for each of your nodes. For example:

Generate a certificate per node? [y/N]y

## What is the name of node #1?

This name will be used as part of the certificate file name, and as a
descriptive name within the certificate.

You can use any descriptive name that you like, but we recommend using the name
of the Elasticsearch node.

node #1 name: pupnode170

## Which hostnames will be used to connect to pupnode170?

These hostnames will be added as "DNS" names in the "Subject Alternative Name"
(SAN) field in your certificate.


You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.

If you wish to use a wildcard certificate (for example *.es.example.com) you
can enter that here.

Enter all the hostnames that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.

pupnode170


You entered the following hostnames.

 - pupnode170

Is this correct [Y/n]Y

## Which IP addresses will be used to connect to pupnode170?

If your clients will ever connect to your nodes by numeric IP address, then you
can list these as valid IP "Subject Alternative Name" (SAN) fields in your
certificate.

If you do not have fixed IP addresses, or not wish to support direct IP access
to your cluster then you can just press <ENTER> to skip this step.

Enter all the IP addresses that you need, one per line.
When you are done, press <ENTER> once more to move on to the next step.

192.168.40.170

You entered the following IP addresses.

 - 192.168.40.170

Is this correct [Y/n]Y

## Other certificate options

The generated certificate will have the following additional configuration
values. These values have been selected based on a combination of the
information you have provided above and secure defaults. You should not need to
change these values unless you have specific requirements.

Key Name: pupnode170
Subject DN: CN=pupnode170
Key Size: 2048

Do you wish to change any of these options? [y/N]N
Generate additional certificates? [Y/n]y

## What is the name of node #2?

This name will be used as part of the certificate file name, and as a
descriptive name within the certificate.

You can use any descriptive name that you like, but we recommend using the name
of the Elasticsearch node.

node #2 name: pupnode171
...

Now answer the same prompts for the rest of your nodes; in our case, pupnode171 and pupnode172 remain in the sequence.

Lastly, if you do not need to add any more hosts, answer n (or “no”) and proceed to generate a PKCS#12 keystore file. Leave the password empty, as shown below:

Generate additional certificates? [Y/n]n

## What password do you want for your private key(s)?

Your private key(s) will be stored in a PKCS#12 keystore file named "http.p12".
This type of keystore is always password protected, but it is possible to use a
blank password.

If you wish to use a blank password, simply press <enter> at the prompt below.
Provide a password for the "http.p12" file:  [<ENTER> for none]

## Where should we save the generated files?

A number of files will be generated including your private key(s),
public certificate(s), and sample configuration options for Elastic Stack products.

These files will be included in a single zip archive.

What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip]

Zip file written to /usr/share/elasticsearch/elasticsearch-ssl-http.zip

Once completed, unzip the file. If done correctly, you should see files similar to the following:

root@pupnode170:/usr/share/elasticsearch# ls -R elasticsearch-ssl-http.zip elasticsearch kibana/ ./elastic-certificates.p12 ./elastic-stack-ca.p12
./elastic-certificates.p12  ./elastic-stack-ca.p12  elasticsearch-ssl-http.zip

elasticsearch:
pupnode170  pupnode171  pupnode172

elasticsearch/pupnode170:
README.txt  http.p12  sample-elasticsearch.yml

elasticsearch/pupnode171:
README.txt  http.p12  sample-elasticsearch.yml

elasticsearch/pupnode172:
README.txt  http.p12  sample-elasticsearch.yml

kibana/:
README.txt  elasticsearch-ca.pem  sample-kibana.yml

Take into account that we will be using these files for our nodes:

root@pupnode170:/usr/share/elasticsearch# ls -1 /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elasticsearch/pupnode17[0-2]/http.p12 /usr/share/elasticsearch/kibana/elasticsearch-ca.pem
/usr/share/elasticsearch/elastic-certificates.p12
/usr/share/elasticsearch/elasticsearch/pupnode170/http.p12
/usr/share/elasticsearch/elasticsearch/pupnode171/http.p12
/usr/share/elasticsearch/elasticsearch/pupnode172/http.p12
/usr/share/elasticsearch/kibana/elasticsearch-ca.pem

We will copy these files (each according to its hostname) in the following sections.

7. Copy the generated certificates and files to /etc/elasticsearch/certs directory

First, we’ll back up the old /etc/elasticsearch/certs directory and re-create it. On our first node (pupnode170), run:

$ mv /etc/elasticsearch/certs{,.old}
$ mkdir /etc/elasticsearch/certs

Then copy the files as follows:

$ cp /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elasticsearch/pupnode170/http.p12 /usr/share/elasticsearch/kibana/elasticsearch-ca.pem /etc/elasticsearch/certs

Then make sure the files have the correct owner and group, with suitably restrictive permissions:

$ chown -R root.elasticsearch /etc/elasticsearch/certs
$ chmod -R o-rwx,g+r /etc/elasticsearch/certs

8. Update the keystore entries with the correct passwords

Since we left the password empty, we need to update these three parameter settings under xpack.security:

  • xpack.security.http.ssl.keystore.secure_password
  • xpack.security.transport.ssl.keystore.secure_password
  • xpack.security.transport.ssl.truststore.secure_password

If you need to know the current passwords, the commands below will show each stored value:

$ /usr/share/elasticsearch/bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password

$ /usr/share/elasticsearch/bin/elasticsearch-keystore show xpack.security.transport.ssl.keystore.secure_password

$ /usr/share/elasticsearch/bin/elasticsearch-keystore show xpack.security.transport.ssl.truststore.secure_password

Now, since we left the password empty, update the xpack.security settings by running the commands below. Leave each value empty and hit enter, since no password is set. Otherwise, if you set a password during certificate generation, make sure you have it recorded somewhere and enter it here.

Take note: your Elasticsearch service won’t start if you supply a wrong password, or if a password was set but your new certificate files have none. The value must match exactly what was set; Elasticsearch is very strict about these settings.

root@pupnode170:~# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password

root@pupnode170:~# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password

root@pupnode170:~# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
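To double-check which secure settings now exist in the keystore, you can list its entries with the same tool (this assumes the default deb-package path used throughout this post):

```shell
# List all settings stored in the Elasticsearch keystore
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore list
```

The output should include the three secure_password entries above, typically alongside the keystore.seed entry created at install time.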

9. Edit /etc/elasticsearch/elasticsearch.yml and add the required nodes

After editing, the host pupnode170 (192.168.40.170) should have configuration settings similar to the following:

root@pupnode170:~# cat /etc/elasticsearch/elasticsearch.yml
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/elastic-certificates.p12
  truststore.path: certs/elastic-certificates.p12
http.host: 0.0.0.0
cluster.initial_master_nodes: ["192.168.40.170","192.168.40.171", "192.168.40.172"]
network.host: [_local_, "192.168.40.170"]
discovery.seed_hosts: ["192.168.40.170:9300", "192.168.40.171:9300", "192.168.40.172:9300"]
node.name: pupnode170
network.publish_host: 192.168.40.170

The files certs/http.p12 and certs/elastic-certificates.p12 (relative to the /etc/elasticsearch directory) are assigned to xpack.security.http.ssl.keystore.path, xpack.security.transport.ssl.keystore.path, and xpack.security.transport.ssl.truststore.path, respectively.

Take note that the defaults are:

transport.port: 9300-9400
network.host: _local_
http.port: 9200-9300

The http.host and transport.host settings default to the value assigned to network.host.

In this example setup, I am using network.host as the base value for http.host and transport.host.
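If you prefer to be explicit rather than relying on these defaults, the ports can also be pinned in /etc/elasticsearch/elasticsearch.yml. The snippet below is just a sketch that restates the defaults for our first node:

```yaml
# Explicit equivalents of the defaults discussed above (first node)
network.host: [_local_, "192.168.40.170"]
http.port: 9200        # chosen from the default 9200-9300 range
transport.port: 9300   # chosen from the default 9300-9400 range
```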

The discovery.seed_hosts setting lists the seed hosts for the discovery process, by which the cluster formation module finds other nodes to form a cluster with.

This process runs when you start an Elasticsearch node, or when a node believes the primary node has failed, and it continues until the primary is found or a new primary is elected.

The discovery process starts with a list of seed addresses from one or more seed host providers, together with the addresses of any primary-eligible nodes in the last-known cluster.

Take note that in our example, I did not set cluster.name, which defaults to elasticsearch. If you want to change the cluster name, add cluster.name: <your-desired-cluster-name> to /etc/elasticsearch/elasticsearch.yml.

10. Save your superuser password to a keystore file and restart the Elasticsearch service on the first node

Remember that we have our superuser password recorded earlier during the installation:

>>> The generated password for the elastic built-in superuser is : lfmBlynaPj4UbJ9v1FFU

Now it’s time to save it to your keystore file.

$ echo "lfmBlynaPj4UbJ9v1FFU" > /etc/elasticsearch/certs/keystore.tmp
$ chmod 600 /etc/elasticsearch/certs/keystore.tmp
$ chown elasticsearch.elasticsearch /etc/elasticsearch/certs/keystore.tmp
$ sudo systemctl set-environment ES_KEYSTORE_PASSPHRASE_FILE=/etc/elasticsearch/certs/keystore.tmp

Now that we’re done, restart your Elasticsearch service.

$ systemctl restart elasticsearch.service

11. Check the logs and verify that it’s running correctly

By using the _cat/nodes REST request, we can verify the status of the nodes as shown below:

root@pupnode170:~# curl --cacert /etc/elasticsearch/certs/elasticsearch-ca.pem -uelastic:lfmBlynaPj4UbJ9v1FFU   https://192.168.40.170:9200/_cat/nodes?v
ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
192.168.40.170           37          93   1    0.03    0.08     0.03 cdfhilmrstw *      pupnode170

Check that the ports are in use and listening on the node’s network IP.

root@pupnode170:~# ss -a4l6ipn|sed -n '1p;/java/p'
Netid State   Recv-Q  Send-Q             Local Address:Port   Peer Address:Port Process
tcp   LISTEN  0       4096     [::ffff:192.168.40.170]:9300              *:*     users:(("java",pid=74497,fd=365))
tcp   LISTEN  0       4096          [::ffff:127.0.0.1]:9300              *:*     users:(("java",pid=74497,fd=364))
tcp   LISTEN  0       4096                       [::1]:9300           [::]:*     users:(("java",pid=74497,fd=363))
tcp   LISTEN  0       4096     [::ffff:192.168.40.170]:9200              *:*     users:(("java",pid=74497,fd=369))
tcp   LISTEN  0       4096          [::ffff:127.0.0.1]:9200              *:*     users:(("java",pid=74497,fd=368))
tcp   LISTEN  0       4096                       [::1]:9200           [::]:*     users:(("java",pid=74497,fd=367))

Alternatively, you can use the old-fashioned netstat to check the ports and processes used by Elasticsearch.
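For instance, a netstat invocation along these lines should show the same listeners (netstat is provided by the net-tools package on Ubuntu, which may need to be installed first):

```shell
# Show listening TCP sockets on the Elasticsearch HTTP and transport ports
sudo netstat -tlnp | grep -E ':9200|:9300'
```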

Then check the log file, whose name is the cluster.name value you assigned.

Since we are using the default cluster.name in our example, the command below lets you follow the last five lines of your Elasticsearch node logs:

$ tail -5f /var/log/elasticsearch/elasticsearch.log

Setting up your second node out of the three-cluster nodes

In this part, we are going to set up the second node, which is pupnode171 (192.168.40.171).

Some of the previous steps apply here as well. Go to the 2nd node, then install Elasticsearch by following the next steps:

12. Repeat steps 1 – 4 for your second node and configure your Elasticsearch server

Right after you’re done with step 4, copy over the files from the first node’s /etc/elasticsearch/certs/ directory.

Take note that the password generated by Elasticsearch during installation on the next node (the second node in this context) does not matter, since we’re going to sync its keystores and certificates with the first node.

In other words, you can skip the remaining post-install actions, as long as you copy the files into the /etc/elasticsearch/certs/ directory on your second node (pupnode171).

At this point, on our second node pupnode171, we will use the scp command for convenience to copy the files from the source to the target node. First, make sure you set the root password:

root@pupnode171:~# passwd
New password:
Retype new password:
passwd: password updated successfully

In your /etc/ssh/sshd_config, make sure these options are enabled, as follows:

PermitRootLogin yes
PasswordAuthentication yes

You can check whether these options are currently disabled with the command below:

root@pupnode172:~# egrep -rn 'PermitRootLogin|PasswordAuthentication' /etc/ssh/sshd_config
34:#PermitRootLogin prohibit-password
58:PasswordAuthentication no
80:# PasswordAuthentication.  Depending on your PAM configuration,
82:# the setting of "PermitRootLogin without-password".
84:# PAM authentication, then enable this but set PasswordAuthentication

You can then use the command below to enable it:

$ sed -i 's|PasswordAuthentication no|PasswordAuthentication yes|g' /etc/ssh/sshd_config
$ sed -i '35 i PermitRootLogin yes' /etc/ssh/sshd_config

Just make sure to restart sshd daemon:

$ systemctl restart sshd

13. Copy the certs and PEM files generated to /etc/elasticsearch/certs from the first node to the second node

First backup the old certificate directory in your second node (pupnode171) and create a new one.

$ mv /etc/elasticsearch/certs{,.old}
$ mkdir /etc/elasticsearch/certs

Next, from the first node (pupnode170), copy the files to the second node (pupnode171) by doing the following:

root@pupnode170:/usr/share/elasticsearch# scp \
  /usr/share/elasticsearch/elastic-certificates.p12 \
  /usr/share/elasticsearch/kibana/elasticsearch-ca.pem \
  /usr/share/elasticsearch/elasticsearch/pupnode171/http.p12 \
  192.168.40.171:/etc/elasticsearch/certs/

Then on the second node (192.168.40.171), set the proper permissions and ownership of the files:

chown -R root.elasticsearch /etc/elasticsearch/certs
chmod -R o-rwx,g+r /etc/elasticsearch/certs

14. On the second node, apply changes to the config file /etc/elasticsearch/elasticsearch.yml

Make sure that the cluster.initial_master_nodes, network.host, and discovery.seed_hosts variables are set correctly.

As mentioned earlier, http.host and transport.host will default to the value assigned on network.host.

This default is only appropriate if the IP or hostname this node binds to matches your http.host and transport.host requirements.

Otherwise, refer to the Elasticsearch documentation for more details about these settings.

At this stage, you have to set the proper variables as shown below:

root@pupnode171:/etc/elasticsearch/certs#  cat /etc/elasticsearch/elasticsearch.yml | sed -e 's/#.*$//' -e '/^$/d' -e '/^$/N;/^\n$/D' | sed '/^\t*$/d'
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/elastic-certificates.p12
  truststore.path: certs/elastic-certificates.p12
http.host: 0.0.0.0
cluster.initial_master_nodes: ["192.168.40.170","192.168.40.171", "192.168.40.172"]
network.host: [_local_, "192.168.40.171"]
discovery.seed_hosts: ["192.168.40.170:9300", "192.168.40.171:9300", "192.168.40.172:9300"]
node.name: pupnode171
network.publish_host: 192.168.40.171

Make sure that the xpack.security.http.ssl.keystore.path, xpack.security.transport.ssl.keystore.path, and xpack.security.transport.ssl.truststore.path are set to the correct path of the files.

15. Change the keystore passwords in the second node

Since we did not set any passwords while generating our certificates and PEM files, we now need to reflect this in the xpack.security settings. Just as in step #8, empty the three xpack.security secure passwords by running the commands below:

root@pupnode171:~# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
Setting xpack.security.http.ssl.keystore.secure_password already exists. Overwrite? [y/N]y
Enter value for xpack.security.http.ssl.keystore.secure_password:

root@pupnode171:~# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
Setting xpack.security.transport.ssl.keystore.secure_password already exists. Overwrite? [y/N]y
Enter value for xpack.security.transport.ssl.keystore.secure_password:

root@pupnode171:~# /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Setting xpack.security.transport.ssl.truststore.secure_password already exists. Overwrite? [y/N]y
Enter value for xpack.security.transport.ssl.truststore.secure_password:

Take note that in this exercise we left the password empty: answer y to overwrite, then hit enter since no password is set.

16. Create a keystore file containing the password that was generated by Elasticsearch during the installation of the first node

Just as in step #10, we have to store the password from the first node. To do that, run this sequence of commands:

$ echo "lfmBlynaPj4UbJ9v1FFU" > /etc/elasticsearch/certs/keystore.tmp
$ chmod 600 /etc/elasticsearch/certs/keystore.tmp
$ chown elasticsearch.elasticsearch /etc/elasticsearch/certs/keystore.tmp
$ sudo systemctl set-environment ES_KEYSTORE_PASSPHRASE_FILE=/etc/elasticsearch/certs/keystore.tmp

17. Start Elasticsearch on the second node

$ sudo /bin/systemctl daemon-reload
$ sudo /bin/systemctl enable elasticsearch.service
$ sudo systemctl start elasticsearch.service

18. Monitor the logs

Go to /var/log/elasticsearch/ and choose the file <cluster.name>.log, for example:

root@pupnode171:/var/log/elasticsearch# tail -5f elasticsearch.log
[2022-06-07T21:27:10,712][INFO ][o.e.h.AbstractHttpServerTransport] [pupnode171] publish_address {192.168.40.171:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}, {192.168.40.171:9200}
[2022-06-07T21:27:10,712][INFO ][o.e.n.Node               ] [pupnode171] started
[2022-06-07T21:27:10,765][INFO ][o.e.i.g.DatabaseNodeService] [pupnode171] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2022-06-07T21:27:11,010][INFO ][o.e.i.g.DatabaseNodeService] [pupnode171] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2022-06-07T21:27:12,781][INFO ][o.e.i.g.DatabaseNodeService] [pupnode171] successfully loaded geoip database file [GeoLite2-City.mmdb]

Setting up the third node of your three-node cluster

Lastly, let’s set up the third node.

This will be much easier since we are done with the first node (pupnode170) and second node (pupnode171).

Technically, the steps done for node 192.168.40.171 will be repeated for the third node, i.e. pupnode172 (192.168.40.172) in this exercise.

All you need to do is apply steps 9–14. Make sure the IP addresses, hostnames, and paths (check step #13 for the scp command) have been replaced correctly and have the correct format and values. For example, in your /etc/elasticsearch/elasticsearch.yml file, the following should be set (at this point using IP 192.168.40.172):

network.host: [_local_, "192.168.40.172"]
node.name: pupnode172  
network.publish_host: 192.168.40.172
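
A quick way to catch a forgotten edit when cloning the configuration from another node is to grep for the expected publish address. The following helper is our own sketch, not part of Elasticsearch:

```shell
# check_publish_host: confirm an elasticsearch.yml sets network.publish_host
# to the expected IP. A tiny guard (our own sketch) against forgetting to
# update copied settings when cloning a node's config.
check_publish_host() {
  local conf="$1" ip="$2"
  if grep -q "^network.publish_host: $ip$" "$conf"; then
    echo "ok"
  else
    echo "network.publish_host in $conf is not $ip"
    return 1
  fi
}

# e.g. check_publish_host /etc/elasticsearch/elasticsearch.yml 192.168.40.172
```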

Verifying the Elasticsearch cluster nodes (the three-node cluster)

At this point, you should have a running 3-node cluster that is ready to function as a production cluster. We’ll use the Elasticsearch /_cat/nodes API to verify the cluster:

root@pupnode172:~# curl --cacert /etc/elasticsearch/certs/elasticsearch-ca.pem -uelastic:lfmBlynaPj4UbJ9v1FFU https://192.168.40.170:9200/_cat/nodes?v

ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
192.168.40.171           41          96   0    0.00    0.00     0.00 cdfhilmrstw -      pupnode171
192.168.40.172           28          96  27    0.84    0.33     0.15 cdfhilmrstw -      pupnode172
192.168.40.170           23          94   0    0.03    0.03     0.00 cdfhilmrstw *      pupnode170

You can run the same command against any of the nodes in the cluster, i.e. the first, second, or third node, to verify the health of the cluster. This is where you can use the generated file /etc/elasticsearch/certs/elasticsearch-ca.pem, which is also intended for your Kibana once you have set it up. You can use this file to make requests to your Elasticsearch nodes over a secure connection.

The output above shows that the current elected master is host pupnode170 (the first node); notice that the master column marks the first node with an asterisk.
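
If you want to script against this output, the master marker sits in the ninth whitespace-separated column and the node name in the tenth. For example, with the sample rows above saved to a file, awk can pull out the elected master:

```shell
# Save the data rows of the /_cat/nodes output above, then extract the name
# of the node whose "master" column (field 9) holds the asterisk.
cat > /tmp/cat_nodes.txt <<'EOF'
192.168.40.171           41          96   0    0.00    0.00     0.00 cdfhilmrstw -      pupnode171
192.168.40.172           28          96  27    0.84    0.33     0.15 cdfhilmrstw -      pupnode172
192.168.40.170           23          94   0    0.03    0.03     0.00 cdfhilmrstw *      pupnode170
EOF
awk '$9 == "*" { print $10 }' /tmp/cat_nodes.txt   # prints: pupnode170
```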

The node.role column shows the default roles of the three-node cluster, since we did not specify or filter the roles these nodes should be assigned. The role letters mean the following:

  • c (cold node)
  • d (data node)
  • f (frozen node)
  • h (hot node)
  • i (ingest node)
  • l (machine learning node)
  • m (master-eligible node)
  • r (remote cluster client node)
  • s (content node)
  • t (transform node)
  • v (voting-only node)
  • w (warm node)
  • – (coordinating node only).

Check the Elasticsearch documentation regarding these node roles.
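
If you want to expand a node.role string such as cdfhilmrstw into readable names, a small shell helper (our own sketch, following the mapping listed above) can do it:

```shell
# decode_roles: expand the node.role letter codes from /_cat/nodes into
# human-readable role names, per the mapping listed above.
# Helper sketch only; not an Elasticsearch tool.
decode_roles() {
  local letters="$1" out="" c i
  for (( i = 0; i < ${#letters}; i++ )); do
    c="${letters:$i:1}"
    case "$c" in
      c) out="$out cold" ;;
      d) out="$out data" ;;
      f) out="$out frozen" ;;
      h) out="$out hot" ;;
      i) out="$out ingest" ;;
      l) out="$out ml" ;;
      m) out="$out master-eligible" ;;
      r) out="$out remote_cluster_client" ;;
      s) out="$out content" ;;
      t) out="$out transform" ;;
      v) out="$out voting_only" ;;
      w) out="$out warm" ;;
      -) out="$out coordinating_only" ;;
    esac
  done
  echo "${out# }"
}

decode_roles "cdfhilmrstw"
# prints: cold data frozen hot ingest ml master-eligible remote_cluster_client content transform warm
```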

Using the enrollment token to join the cluster

Elasticsearch introduced this approach in version 8.x: you can form a cluster by using enrollment tokens. You can also use it to have a node join an existing cluster. During the first node's installation, the output log reveals the command to use:

$ /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node

Simply run the command above on the first node to create a token. For example:

root@pupnode170:~# /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
 eyJ2ZXIiOiI4LjIuMiIsImFkciI6WyIxOTIuMTY4LjQwLjE3MDo5MjAwIl0sImZnciI6IjdjNTM3OTI4MjBhZDdhZjdiNWRjNzlkYzJmZTBlZjUyOTA2MDljNDM1ZDVkMjg5MjUxY2NlY2I2YjIwMGI2ZmMiLCJrZXkiOiJJd2hzUUlFQmkwc1AwOGhvNXQzajpjSTNISDUzTlRkcWRxdDd0Q0x4dG1BIn0=
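
The enrollment token itself is just base64-encoded JSON. Decoding it (purely for inspection; you never need to do this) shows what the joining node receives: the Elasticsearch version (ver), the first node's HTTP address (adr), the CA certificate fingerprint (fgr), and a temporary credential (key):

```shell
# Decode the enrollment token generated above. It is base64-encoded JSON
# carrying the version (ver), node address (adr), CA certificate
# fingerprint (fgr), and a temporary API key (key).
TOKEN="eyJ2ZXIiOiI4LjIuMiIsImFkciI6WyIxOTIuMTY4LjQwLjE3MDo5MjAwIl0sImZnciI6IjdjNTM3OTI4MjBhZDdhZjdiNWRjNzlkYzJmZTBlZjUyOTA2MDljNDM1ZDVkMjg5MjUxY2NlY2I2YjIwMGI2ZmMiLCJrZXkiOiJJd2hzUUlFQmkwc1AwOGhvNXQzajpjSTNISDUzTlRkcWRxdDd0Q0x4dG1BIn0="
echo "$TOKEN" | base64 -d
```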

Now, in the nodes that you want to join or create a cluster, you only need to run the following example command (at this point, in the second node):

 root@pupnode171:~#  /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token eyJ2ZXIiOiI4LjIuMiIsImFkciI6WyIxOTIuMTY4LjQwLjE3MDo5MjAwIl0sImZnciI6IjdjNTM3OTI4MjBhZDdhZjdiNWRjNzlkYzJmZTBlZjUyOTA2MDljNDM1ZDVkMjg5MjUxY2NlY2I2YjIwMGI2ZmMiLCJrZXkiOiJJd2hzUUlFQmkwc1AwOGhvNXQzajpjSTNISDUzTlRkcWRxdDd0Q0x4dG1BIn0=

 This node will be reconfigured to join an existing cluster, using the enrollment token that you provided.
 This operation will overwrite the existing configuration. Specifically:
   - Security auto configuration will be removed from elasticsearch.yml
   - The [certs] config directory will be removed
   - Security auto configuration related secure settings will be removed from the elasticsearch.keystore
 Do you want to continue with the reconfiguration process [y/N]y

Elasticsearch will set up the required private and public keys and the certificates needed for cluster and client communication on your behalf.

It will also generate unique private and public keys in /etc/elasticsearch/certs (except for http_ca.crt, which serves as the base certificate for creating the private PKCS #12 files), in contrast to our earlier approach of simply copying them from the first node.

Just make sure that you have set up the following parameters in your configuration file /etc/elasticsearch/elasticsearch.yml:

  • cluster.initial_master_nodes
  • http.host
  • transport.host
  • network.host
  • discovery.seed_hosts
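
As a sketch of how these parameters might look for the joining node in this exercise (values assumed from our hosts; adjust them to your own environment):

```yaml
# Hypothetical fragment for the joining node (pupnode172) in this exercise.
cluster.initial_master_nodes: ["pupnode170"]
network.host: [_local_, "192.168.40.172"]
network.publish_host: 192.168.40.172
http.host: 0.0.0.0
transport.host: 0.0.0.0
discovery.seed_hosts: ["192.168.40.170", "192.168.40.171", "192.168.40.172"]
```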

The steps we followed earlier, including the verification process, apply here as well.

Conclusion

The setup and installation of a single Elasticsearch node is quite simple. However, if you want a cluster of at least three nodes, it can become a more tedious and complicated experience, especially if you lack the experience and knowledge to manage it. There are helpful resources, though, such as this guide and the official documentation.

We hope you found this guide helpful and that you can now set up and install a 3-node Elasticsearch cluster – not only for a testing environment, but for a production environment too!

Stay on top of all things Elasticsearch by subscribing to our newsletter below.

Follow us on LinkedIn and Twitter for more great content in the coming weeks. Stay tuned!
