How to use the official docker elasticsearch container?
Solution 1
I recommend using docker-compose (which makes a lot of things much easier) with the following configuration.
Configuration (for development)
The configuration starts three services: Elasticsearch itself plus extra development utilities, Kibana and the head plugin (these can be omitted if you don't need them).
In the same directory you will need three files:
- docker-compose.yml
- elasticsearch.yml
- kibana.yml
with the following contents:
docker-compose.yml

    version: '2'
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
        container_name: elasticsearch_540
        environment:
          - http.host=0.0.0.0
          - transport.host=0.0.0.0
          - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
        volumes:
          - esdata:/usr/share/elasticsearch/data
          - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
        ports:
          - 9200:9200
          - 9300:9300
        ulimits:
          memlock:
            soft: -1
            hard: -1
          nofile:
            soft: 65536
            hard: 65536
        mem_limit: 2g
        cap_add:
          - IPC_LOCK
      kibana:
        image: docker.elastic.co/kibana/kibana:5.4.0
        container_name: kibana_540
        environment:
          - SERVER_HOST=0.0.0.0
        volumes:
          - ./kibana.yml:/usr/share/kibana/config/kibana.yml
        ports:
          - 5601:5601
      headPlugin:
        image: mobz/elasticsearch-head:5
        container_name: head_540
        ports:
          - 9100:9100
    volumes:
      esdata:
        driver: local
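As an aside, the official 5.x Elasticsearch image also accepts elasticsearch.yml settings passed as environment variables, so a stripped-down single-node service could be sketched without the bind-mounted config file. The fragment below is illustrative only (service name and settings mirror the compose file above, not a tested setup):

```yaml
# Sketch: passing elasticsearch.yml settings via environment
# entries instead of bind-mounting a config file (5.x image).
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - cluster.name=chimeo-docker-cluster
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
```

The bind-mount approach from the answer keeps all settings in one file, which is easier to track in version control; the environment approach avoids an extra file for quick experiments.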
elasticsearch.yml

    cluster.name: "chimeo-docker-cluster"
    node.name: "chimeo-docker-single-node"
    network.host: 0.0.0.0
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    http.cors.allow-headers: "Authorization"
kibana.yml

    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://elasticsearch:9200
    elasticsearch.username: elastic
    elasticsearch.password: changeme
    xpack.monitoring.ui.container.elasticsearch.enabled: true
Running
With the above three files in the same directory, and that directory as your current working directory, run (may require sudo, depending on how your docker-compose is set up):

    docker-compose up
It will start up and you will see logs from the three services: elasticsearch_540, kibana_540 and head_540.

After the initial startup your Elastic cluster will be available over HTTP on port 9200 and over the transport protocol on port 9300. Validate that the cluster started with the following curl:

    curl -u elastic:changeme http://localhost:9200/_cat/health
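The _cat/health output is positional text, which makes it easy to check from a script. A small sketch of extracting the cluster status from such a line (the sample line below is illustrative, not captured from a real cluster):

```shell
# Illustrative _cat/health line; the leading fields are:
# epoch timestamp cluster status node.total node.data shards ...
health_line='1493715602 10:20:02 chimeo-docker-cluster yellow 1 1 5 5 0 0 5 0 - 50.0%'

# The cluster status (green/yellow/red) is the 4th column
status=$(echo "$health_line" | awk '{print $4}')
echo "$status"
```

In practice you would pipe the live output straight through, e.g. `curl -su elastic:changeme http://localhost:9200/_cat/health | awk '{print $4}'`. A single node carrying indices with replicas typically reports yellow, since the replica shards have nowhere to go.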
Then you can view and play with your cluster using either Kibana (with credentials elastic / changeme):

http://localhost:5601/

or the head plugin:

http://localhost:9100/?base_uri=http://localhost:9200&auth_user=elastic&auth_password=changeme
Solution 2
Your container is auto-exiting because of insufficient virtual memory. By default Elasticsearch requires vm.max_map_count to be at least 262144, but if you run sysctl vm.max_map_count you will see it is around 65530. Increase it with:

    sysctl -w vm.max_map_count=262144

and run the container again (docker run <image-id>); then your container should stay up and you should be able to access Elasticsearch on port 9200 or 9300.

Edit: check this link https://www.elastic.co/guide/en/elasticsearch/reference/5.0/vm-max-map-count.html#vm-max-map-count
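Note that sysctl -w only lasts until the next reboot. To make the setting permanent, it can be added to the sysctl configuration (a sketch; the exact file may vary by distribution, e.g. some systems use a drop-in under /etc/sysctl.d/):

```conf
# /etc/sysctl.conf -- persist the mmap count Elasticsearch needs
vm.max_map_count=262144
```

After editing, reload with sudo sysctl -p (or reboot) and confirm with sysctl vm.max_map_count.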
CommonSenseCode
Updated on July 05, 2022

Comments
-
CommonSenseCode almost 2 years
I have the following Dockerfile:
    FROM docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    RUN elasticsearch
    EXPOSE 80
I think the 3rd line is never reached. When I try to access the docker container from my local machine through 172.17.0.2:9300 I get nothing. What am I missing? I want to access Elasticsearch from the local host machine.
-
CommonSenseCode almost 7 years: Many thanks slawek for writing such a complete guide, I'll try to make it run later.
-
CommonSenseCode almost 7 years: Thanks (and sorry for the late answer), it is working at http://localhost:9100/. So if I wanted to put some info inside the Elasticsearch container, how do I proceed? Any pointers appreciated.
-
slawek almost 7 years: I recommend starting here. Just keep in mind that all the REST calls have to be run like the health check, curl -u elastic:changeme http://localhost:9200/_cat/health, i.e. with credentials and on port 9200 (if you didn't change the defaults). So creating an index would be curl -u elastic:changeme -XPUT http://localhost:9200/customer and indexing a document curl -u elastic:changeme -XPUT 'localhost:9200/customer/external/1?pretty' -H 'Content-Type: application/json' -d '{ "name": "John Doe" }'.
-
code_blue over 6 years: The Elasticsearch node doesn't show up when using the head plugin; I can see it from Kibana though. Also, in case I want to add multiple nodes to the cluster, should I just replicate the elasticsearch service in docker-compose?
-
slawek over 6 years: Weird. Are you sure you didn't change the default password? Or the http.cors settings from elasticsearch.yml? Maybe mobz/elasticsearch-head:5 has changed in the meantime. Not sure, because on my machine the compose still works. As for a multi-node setup, I have not played with it, but there is an example setup in the Elasticsearch docs, so something similar should work. They use unicast to point the second node to the first one.
-
MMT over 6 years: It may be silly but... am I right that the only purpose of head_540 is to provide a GUI for Elasticsearch?
-
slawek over 6 years: Yes, you're correct. The head plugin is just for convenience, for people used to having it available in previous Elastic versions; it is just a web GUI for managing indices. The same could be said about Kibana. They are not required, just useful for seeing graphically that things are working. In fact, I'm using Cerebro more often than head these days.
-
Lars C. Magnusson over 5 years: Thank you! I tried with environment variables in the compose file but I couldn't get it working. But with your help it's working :)