Those of you running Elasticsearch/Logstash/Kibana might have found out that, by default, there is nothing out of the box to protect your installation. Elasticsearch allows all indices to be deleted, viewed or modified. This is perfectly fine for a local installation (for development or testing purposes). But what do you do if there is sensitive data you would like to hide or restrict access to?

There are of course some plug-ins out there, like elasticsearch-http-basic, or enterprise-grade solutions such as Shield. But we will choose a simpler way, making use of iptables, Nginx and Kibana.

Hopefully the next few paragraphs will shed light on some tricks one can use to make life easier.

Let’s assume Elasticsearch is installed and fully functional. The next thing we are going to do is install Nginx. You can refer to the documentation page and follow the instructions for adding the repositories to your distribution. If Nginx is already in your repos, just type:

Debian/Ubuntu: apt-get install nginx

RHEL/CentOS: yum install nginx
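
To check that the installation went fine, you can ask Nginx for its version (any reasonably recent version will do for this setup):

nginx -v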

After installation we can nail down the config file. In /etc/nginx/conf.d/default.conf, in the server section, set:

listen       198.51.100.100:80;
server_name  example.com;

Next, we will hide our example.com:9200 behind the Nginx server with Basic Authentication, on a more suitable port (:80 in our case):

location ~ ^/es.*$ {
    proxy_pass http://localhost:9200;
    rewrite ^/es(.*) /$1 break;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
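
In other words, the rewrite strips the /es prefix before the request reaches Elasticsearch. Assuming the example.com name from above, the mapping looks roughly like this:

http://example.com/es                  ->  http://localhost:9200/
http://example.com/es/_cluster/health  ->  http://localhost:9200/_cluster/health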

Now we only need to prepare the /etc/nginx/.htpasswd file for our users.

By running htpasswd -c /etc/nginx/.htpasswd <your username> you will be asked to type the password; the -c option stands for creating a new file. On every subsequent run the -c is not needed anymore.
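
The htpasswd utility ships with the apache2-utils (Debian/Ubuntu) or httpd-tools (RHEL/CentOS) package. As an example, a session for two users (es_admin and kibana_user are just placeholder names) could look like this:

htpasswd -c /etc/nginx/.htpasswd es_admin
htpasswd /etc/nginx/.htpasswd kibana_user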

For Kibana, we should add one more section to the Nginx default.conf:

location / {
    proxy_pass http://localhost:5601;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}

assuming one has Kibana 4 installed; 5601 is its default port.
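
Once Nginx has picked up the new config, a request without credentials should be rejected straight away:

curl -I http://example.com/
# HTTP/1.1 401 Unauthorized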

So, what do we get? The entire (and quite small) Nginx config:

server {
    listen       198.51.100.100:80;
    server_name  example.com;
    
    location / {
        proxy_pass http://localhost:5601;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location ~ ^/es.*$ {
        proxy_pass http://localhost:9200;
        rewrite ^/es(.*) /$1 break;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
   
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
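
Before restarting anything, it does not hurt to let Nginx validate the syntax of what we just wrote:

nginx -t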

Please keep in mind that we have not done any performance optimization (let’s leave that task to the reader); all the settings are left at their defaults.

In the Kibana config config/kibana.yml a few settings must be changed:

# The host to bind the server to.
host: "localhost"
...

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
kibana_elasticsearch_username: <your username>
kibana_elasticsearch_password: <your password>
....

Now we can restart Kibana and Nginx and check our setup.
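
Assuming both run as system services (the exact service names depend on how Kibana was installed), something along these lines will do:

service kibana restart    # or: systemctl restart kibana
service nginx restart     # or: systemctl restart nginx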

Using curl -u <user>:<password> 'http://example.com/es', we should get a response along these lines:

{
  "status" : 200,
  "name" : "Glob",
  "cluster_name" : "ES",
  "version" : {
    "number" : "1.6.0",
    "build_hash" : "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",
    "build_timestamp" : "2015-06-09T13:36:34Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
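
From here on, any Elasticsearch API call can go through the same proxied path, for example (the index pattern logstash-* is just an assumption, substitute one of your own indices):

curl -u <user>:<password> 'http://example.com/es/_cat/indices?v'
curl -u <user>:<password> 'http://example.com/es/logstash-*/_search?size=1'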

All that is left to do is to set up iptables. And again we will choose the simplest way (for the sake of easier maintenance and administration):

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]

# Accept everything on the lo interface
-A INPUT -i lo -j ACCEPT
# Accept all ICMP requests
-A INPUT -p icmp -j ACCEPT
# Make sure that all new SSH connection attempts go through
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
# Accept all already ESTABLISHED and RELATED connections 
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Accept connection attempts only from our trusted production networks
# and only on selected ports (for ES): 9300-9400
-A INPUT -m state --state NEW -m tcp -p tcp -s 198.51.100.0/24 --dport 9300:9400 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.0.2.0/24 --dport 9300:9400 -j ACCEPT
# Accept all connection attempts on port 80
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
# Reject everything else for INPUT and FORWARD 
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
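
These rules are in iptables-save format, so they can be loaded in one go (the file name below is just an example; where the rules should live to survive a reboot differs per distribution, e.g. /etc/sysconfig/iptables on RHEL/CentOS or the iptables-persistent package on Debian/Ubuntu):

iptables-restore < /etc/iptables.rules
iptables -L -n -v    # verify the active rules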

That is basically it. We do not need to reinvent the wheel here; just by following some simple and straightforward practices, we can work wonders.