❓ Create a user called elastic
View Solution (click to reveal)
As root, create a user called elastic and set the password (I used something simple; you might not want to do this if your server is accessible from the internet or by other users). Add that user to the wheel group to allow them to use sudo.
[root@centos8streams ~]# useradd -m -c "elastic training" elastic
[root@centos8streams ~]# echo "Password1!" | passwd --stdin elastic
[root@centos8streams ~]# usermod -aG wheel elastic
❓ Set the nofile limits to the recommended size
View Solution (click to reveal)
https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html
Elasticsearch uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Elasticsearch to 65,536 or higher.
vi /etc/security/limits.conf
elasticsearch - nofile 65536
elastic - nofile 65536
(the elasticsearch line covers a dedicated elasticsearch user; elastic is the user we created above)
This change will only take effect the next time the elastic user opens a new session.
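To confirm the limit took effect, open a new session as the elastic user and check both limits (a quick sketch; the exact value is whatever you set in limits.conf):

```shell
# Run in a NEW session for the elastic user - limits.conf is only
# applied at session start (e.g. via 'su - elastic' or a fresh ssh login).
ulimit -Sn   # soft limit on open file descriptors
ulimit -Hn   # hard limit on open file descriptors
```

Once the node is running, the node stats API (GET _nodes/stats/process) also reports max_file_descriptors, which is a useful cross-check.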
❓ Set the vm.max_map_count to the minimum required value
View Solution (click to reveal)
https://www.elastic.co/guide/en/elasticsearch/guide/master/_file_descriptors_and_mmap.html https://www.elastic.co/guide/en/elasticsearch/reference/master/_maximum_map_count_check.html
Continuing from the previous point, to use mmap effectively, Elasticsearch also requires the ability to create many memory-mapped areas. The maximum map count check checks that the kernel allows a process to have at least 262,144 memory-mapped areas and is enforced on Linux only.
Temporary fix
sysctl -w vm.max_map_count=262144
Permanent fix
vi /etc/sysctl.conf
vm.max_map_count=262144
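To check that the kernel picked up the value (after sysctl -w, sysctl -p as root, or a reboot), read it back; a minimal sketch:

```shell
# The live kernel value can be read straight from /proc -
# this should print 262144 once the change has been applied.
cat /proc/sys/vm/max_map_count
```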
Note that this covers just the Elasticsearch component, not Kibana, Logstash or Beats.
Here we are installing the tarball.
Download the version that matches the exam (at time of writing it is still 7.2)
Go here : https://www.elastic.co/downloads/elasticsearch
Then go to "Not the version you're looking for? View past releases".
Find the version that matches: eg. https://www.elastic.co/downloads/past-releases/elasticsearch-7-2-1
Grab the LINUX tarball - right click and copy link
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.1-linux-x86_64.tar.gz
Extract the files
$ tar xf elasticsearch-7.2.1-linux-x86_64.tar.gz
Rename the folder
$ mv elasticsearch-7.2.1 elasticsearch
Note that we aren't using a service here, as we want to test the startup manually.
$ cd
$ cd elasticsearch
$ ./bin/elasticsearch --pidfile mypidfile
In another terminal session, run the following curl command.
[elastic@centos8streams ~]$ curl localhost:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1617885668 12:41:08 elasticsearch green 1 1 0 0 0 0 0 0 - 100.0%
This shows the cluster health.
We are looking for status = green
To look at node health
[elastic@centos8streams ~]$ curl localhost:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 11 95 1 0.00 0.00 0.00 mdi * centos8streams.preview.local
In another terminal session:
[elastic@centos8streams elasticsearch]$ ls
bin config data jdk lib LICENSE.txt logs modules mypidfile NOTICE.txt plugins README.textile
[elastic@centos8streams elasticsearch]$ pkill -F mypidfile
❓ Using the previously installed application we will update the two main configuration files:
elasticsearch.yml and jvm.options
See: https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html
For v7.2, see: https://www.elastic.co/guide/en/elasticsearch/reference/7.2/modules-node.html - there are differences between 7.12 (current) and 7.2 (exam).
[elastic@centos8streams config]$ pwd
/home/elastic/elasticsearch/config
[elastic@centos8streams config]$ ls
elasticsearch.keystore elasticsearch.yml jvm.options log4j2.properties role_mapping.yml roles.yml users users_roles
[elastic@centos8streams config]$
The configuration is written in YAML. The settings below are shown in dotted notation, but they can also be written as YAML dictionaries.
The following elasticsearch.yml settings are worth noting - but there are many more:
- cluster.name: The name of the cluster (most likely the same for all your nodes)
- node.name: The name of this node - needs to be unique
- node.master: true/false - master-eligible role
- node.data: true/false - data role
- node.ingest: true/false - ingest role
- node.attr.some_attribute: node attributes to tag nodes (such as hot/warm architecture)
- path.repo: The filesystem path to be used as the snapshot repository
- network.host: Interfaces for Elasticsearch to bind to (typically local and/or site)
- discovery.seed_hosts: List of nodes to 'seed', i.e. ping on startup (normally these are the master-eligible nodes)
- cluster.initial_master_nodes: List of master-eligible node names for bootstrapping the cluster and preventing split-brain
- xpack.security.enabled: true/false
- xpack.security.transport.ssl.enabled: true/false
- xpack.security.transport.ssl.verification_mode: full/certificate - full verifies the certificate and that it matches the node's hostname/IP; certificate verifies the certificate only
- xpack.security.transport.ssl.keystore.path: Path to keystore file for transport network encryption
- xpack.security.transport.ssl.truststore.path: Path to truststore file for transport network encryption
- xpack.security.http.ssl.keystore.path: Path to keystore file for http network encryption
- xpack.security.http.ssl.truststore.path: Path to truststore file for http network encryption
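Pulled together, a minimal elasticsearch.yml combining several of these settings might look like the following (the node names, attribute and addresses are illustrative only, not from the exercise):

```yaml
cluster.name: testcluster01          # same value on every node in the cluster
node.name: node01                    # must be unique per node
node.master: true                    # 7.x boolean role flags
node.data: true
node.ingest: false
node.attr.datacenter: uksouth        # arbitrary tag, e.g. for hot/warm routing
network.host: [_local_, _site_]
discovery.seed_hosts: ["node01", "node02"]
cluster.initial_master_nodes: ["node01"]
```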
The most important jvm.options are the following:
- -Xms initial heap size
- -Xmx max heap size
❓ Set the JVM heap space to 2 gig
View Solution (click to reveal)
Edit the jvm.options file
[elastic@centos8streams config]$ cd /home/elastic/elasticsearch/config
[elastic@centos8streams config]$ vi jvm.options
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms2g
-Xmx2g
This would be done on each affected node, followed by a restart of Elasticsearch on that node.
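A quick sanity check of the edit - Elastic recommends Xms and Xmx be set to the same value. This sketch assumes the tarball layout used above (override JVM_OPTS for other installs):

```shell
# Verify that -Xms and -Xmx are both set and equal in jvm.options.
JVM_OPTS="${JVM_OPTS:-/home/elastic/elasticsearch/config/jvm.options}"
xms=$(grep -E '^-Xms' "$JVM_OPTS" 2>/dev/null | tail -n1)
xmx=$(grep -E '^-Xmx' "$JVM_OPTS" 2>/dev/null | tail -n1)
echo "initial heap: ${xms#-Xms}, max heap: ${xmx#-Xmx}"
if [ -n "$xms" ] && [ "${xms#-Xms}" = "${xmx#-Xmx}" ]; then
  echo "heap sizes match"
else
  echo "heap sizes missing or mismatched" >&2
fi
```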
❓ Set the cluster to the following
- cluster name: testcluster01
- node name: node01
- node roles: master, data
- node attributes: datacenter = uksouth
View Solution (click to reveal)
Edit the elasticsearch.yml file
[elastic@centos8streams config]$ cd /home/elastic/elasticsearch/config
[elastic@centos8streams config]$ vi elasticsearch.yml
Set the following:
cluster.name: testcluster01
node.name: node01
node.master: true
node.data: true
node.ingest: false
node.attr.datacenter: uksouth
node.ingest defaults to true, so you need to explicitly set it to false.
This would be done on each affected node, followed by a restart of Elasticsearch on that node.
Using the API, pull the node settings back.
Here I have used jq to pretty-print the results. You may want to install it on your nodes (it is probably not available in the exam environment).
[elastic@centos8streams config]$ curl -s -X GET http://localhost:9200/_nodes/settings | jq .
{
"_nodes": {
"total": 1,
"successful": 1,
"failed": 0
},
"cluster_name": "testcluster01",
"nodes": {
"1XCLS3R4ROG1GASp5xM4GQ": {
"name": "node01",
"transport_address": "127.0.0.1:9300",
"host": "127.0.0.1",
"ip": "127.0.0.1",
"version": "7.2.1",
"build_flavor": "default",
"build_type": "tar",
"build_hash": "fe6cb20",
"roles": [
"master",
"data"
],
"attributes": {
"ml.machine_memory": "1905602560",
"xpack.installed": "true",
"ml.max_open_jobs": "20",
"datacenter": "uksouth"
},
"settings": {
"pidfile": "mypidfile",
"cluster": {
"name": "testcluster01"
},
"node": {
"name": "node01",
"attr": {
"datacenter": "uksouth",
"xpack": {
"installed": "true"
},
"ml": {
"machine_memory": "1905602560",
"max_open_jobs": "20"
}
},
"data": "true",
"ingest": "false",
"master": "true"
},
"path": {
"logs": "/home/elastic/elasticsearch/logs",
"home": "/home/elastic/elasticsearch"
},
"client": {
"type": "node"
},
"http": {
"type": "security4",
"type.default": "netty4"
},
"transport": {
"type": "security4",
"features": {
"x-pack": "true"
},
"type.default": "netty4"
}
}
}
}
}
To secure a cluster you need to do a few things:
- Create CA and node certificates
- Enable SSL on the Transport Network
- Enable SSL on the HTTP Network
- Change the built-in user default passwords
❓ Create the CA file and node certificate.
View Solution (click to reveal)
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html
Use: ./bin/elasticsearch-certutil ca
[elastic@centos8streams elasticsearch]$ pwd
/home/elastic/elasticsearch
[elastic@centos8streams elasticsearch]$ ./bin/elasticsearch-certutil ca
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG (file:/home/elastic/elasticsearch/lib/tools/security-cli/bcprov-jdk15on-1.61.jar) to constructor sun.security.provider.Sun()
WARNING: Please consider reporting this to the maintainers of org.bouncycastle.jcajce.provider.drbg.DRBG
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.
Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority
By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key
If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key
Please enter the desired output file [elastic-stack-ca.p12]:
Enter password for elastic-stack-ca.p12 :
Ignore the reflective WARNINGS. Accept the defaults.
Use: ./bin/elasticsearch-certutil cert --ca ./elastic-stack-ca.p12 --name node01 --dns centos8streams.preview.local --ip 172.30.5.202
[elastic@centos8streams elasticsearch]$ ./bin/elasticsearch-certutil cert --ca ./elastic-stack-ca.p12 --name node01 --dns centos8streams.preview.local --ip 172.30.5.202
(the same reflective access WARNINGS appear again - ignore them)
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'cert' mode generates X.509 certificate and private keys.
* By default, this generates a single certificate and key for use
on a single instance.
* The '-multiple' option will prompt you to enter details for multiple
instances and will generate a certificate and key for each one
* The '-in' option allows for the certificate generation to be automated by describing
the details of each instance in a YAML file
* An instance is any piece of the Elastic Stack that requires a SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
* All certificates generated by this tool will be signed by a certificate authority (CA).
* The tool can automatically generate a new CA for you, or you can provide your own with the
-ca or -ca-cert command line options.
By default the 'cert' mode produces a single PKCS#12 output file which holds:
* The instance certificate
* The private key for the instance certificate
* The CA certificate
If you specify any of the following options:
* -pem (PEM formatted output)
* -keep-ca-key (retain generated CA key)
* -multiple (generate multiple certificates)
* -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files
Enter password for CA (elastic-stack-ca.p12) :
Please enter the desired output file [node01.p12]:
Enter password for node01.p12 :
Certificates written to /home/elastic/elasticsearch/node01.p12
This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
Ignore the reflective WARNINGS. Accept the defaults.
This file can be your keystore and truststore.
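To inspect what a PKCS#12 bundle like node01.p12 contains, openssl can list it. A sketch using a throwaway self-signed certificate (substitute the real file and its password):

```shell
# Create a throwaway key and self-signed cert, then bundle them as PKCS#12 -
# this stands in for the node01.p12 that elasticsearch-certutil produced.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=demo-node' -keyout demo-key.pem -out demo-cert.pem 2>/dev/null
openssl pkcs12 -export -inkey demo-key.pem -in demo-cert.pem \
  -out demo.p12 -passout pass:changeme
# List the certificates held in the bundle (private keys suppressed by -nokeys)
openssl pkcs12 -in demo.p12 -nokeys -passin pass:changeme 2>/dev/null |
  grep -c 'BEGIN CERTIFICATE'
```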
❓ The Transport Network is the network between nodes within a cluster (or between clusters). On a single-node cluster you do not need this. On a production cluster you WILL need this.
View Solution (click to reveal)
v7.2 https://www.elastic.co/guide/en/elasticsearch/reference/7.2/encrypting-internode.html
Edit the elasticsearch.yml file and set
cluster.initial_master_nodes: ["node01"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: ${node.name}.p12
xpack.security.transport.ssl.truststore.path: ${node.name}.p12
${node.name} is a variable that Elasticsearch expands to the node's name.
If you set a password when creating the node certificate, you need to do the following; it will prompt you for the password each time.
$ ./bin/elasticsearch-keystore create
$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
$ ./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Restart Elasticsearch afterwards.
❓ Several accounts need their passwords changed
View Solution (click to reveal)
Use: ./bin/elasticsearch-setup-passwords
auto - Uses randomly generated passwords
interactive - Uses passwords entered by a user
[elastic@centos8streams elasticsearch]$ pwd
/home/elastic/elasticsearch
[elastic@centos8streams elasticsearch]$ ./bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]
❓ This is the HTTPS layer used to access port 9200, e.g. when using curl or Python. It is advised to change the default user passwords before doing this. (see previous section)
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html
v7.2 https://www.elastic.co/guide/en/elasticsearch/reference/7.2/configuring-tls.html#tls-http
View Solution (click to reveal)
You should have created the CA certs and keystores before (see previous sections)
Edit the elasticsearch.yml file and set
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: ${node.name}.p12
xpack.security.http.ssl.truststore.path: ${node.name}.p12
Restart Elasticsearch afterwards.
⚠️ ⚠️ ⚠️ IMPORTANT NOTE: from here on it is assumed you have a working kibana node to work from the "development console" and that you have imported the sample data.
❓ To do this section you need to:
- create roles and users
You will see an error like the one below if you have a Basic licence.
The following role parameters are not available under a Basic licence:
"field_security" : { ... }, # field level security
"query": "..." # document level security
Error message:
{
"error": {
"root_cause": [
{
"type": "security_exception",
"reason": "current license is non-compliant for [field and document level security]",
"license.expired.feature": "field and document level security"
}
],
"type": "security_exception",
"reason": "current license is non-compliant for [field and document level security]",
"license.expired.feature": "field and document level security"
},
"status": 403
}
- a role called flights_all for read-only access on the Flight sample data
- the role should have cluster monitor access
- a user called flight_reader_all that has the role applied
- the user password should be flight123
View Solution (click to reveal)
https://www.elastic.co/guide/en/elasticsearch/reference/7.2/built-in-roles.html
https://www.elastic.co/guide/en/elasticsearch/reference/7.2/security-privileges.html
PUT _security/role/flights_all
{
"cluster": [ "monitor" ],
"indices": [
{
"names": ["kibana_sample_data_flights"],
"privileges": ["read","view_index_metadata", "monitor"]
}
]
}
PUT _security/user/flight_reader_all
{
"password": "flight123",
"roles": [ "kibana_user", "flights_all" ],
"full_name": "flights all",
"email": "fa@abc.com"
}
Test the user access:
- Logout as elastic
- Login to Kibana as flight_reader_all with password flight123
- Go to the dev console and see what indices you have access to.
Check that we can access the index stats (monitor) and only the index we have allowed access to.
GET _cat/indices
green open kibana_sample_data_flights R1AptZYHTrivEUEmtftubg 1 0 13059 0 6.6mb 6.6mb
Check that we can query the index. Get the document count:
GET kibana_sample_data_flights/_count
{
"count" : 13059,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
}
}
- a role called flights_australia for read-only access on the Flight data that only allows access to documents with a Destination Country of Australia
- the following fields are allowed to be displayed: Flight Number, Country of Origin and City of Origin
- a user called flight_reader_au that should have the role applied to it
View Solution (click to reveal)
# test your query first
POST kibana_sample_data_flights/_search
{
"query": {
"match": {
"DestCountry": "AU"
}
}
}
PUT _security/role/flights_australia
{
"indices": [
{
"names": [
"kibana_sample_data_flights"
],
"privileges": [
"read"
],
"field_security": {
"grant": ["FlightNum", "OriginCountry", "OriginCityName"]
},
"query": {
"match": {
"DestCountry": "AU"
}
}
}
]
}
Create a user for that role:
PUT _security/user/flight_reader_au
{
"password": "flight123",
"roles": [ "flights_australia" ],
"full_name": "flights australia",
"email": "fau@abc.com"
}
Test:
- Logout as elastic
- Login to Kibana as flight_reader_au
- Go to the dev console and see what indices you have access to.
#TODO: write a query to count the number of documents accessible
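One way to answer the TODO above - with document-level security active on this role, the count reflects only the Australia-bound flights the role can see, not the full 13,059 documents:

```
GET kibana_sample_data_flights/_count
```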