How to forward Docker container logs to ELK?
Docker Compose has the logging keyword:

```yaml
logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"
```
If you know where to go from there, go for it.
If not, I can advise you to look into the gelf
logging driver for Docker and the Logstash gelf input plugin.
If you were, for instance, to use this basic ELK-stack-in-containers setup, you would update its docker-compose file and add the port mapping "12201:12201/udp"
to the logstash service.
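In that stack's compose file, the logstash service would then look roughly like this (a sketch; the build and volumes lines stand in for whatever the stack already defines):

```yaml
logstash:
  build: logstash/
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"         # existing tcp input
    - "12201:12201/udp"   # new: gelf input, udp
```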
Edit the input section of logstash.conf to:

```
input {
  tcp {
    port => 5000
  }
  gelf {
    # the gelf input listens on udp/12201 by default
  }
}
```
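Not part of the original answer, but a quick way to sanity-check that the gelf input is reachable before wiring up the Docker logging driver is to hand-craft a GELF message yourself. This sketch assumes Logstash is listening on udp/12201 on localhost and uses the GELF 1.1 wire format (zlib-compressed JSON); the custom field name is my own:

```python
import json
import socket
import zlib

def send_gelf(host, port, short_message, **extra_fields):
    """Send one GELF 1.1 message over UDP (zlib-compressed JSON)."""
    msg = {
        "version": "1.1",              # required by the GELF spec
        "host": socket.gethostname(),  # name of the sending host
        "short_message": short_message,
    }
    # additional fields must be prefixed with an underscore per the spec
    msg.update({"_" + k: v for k, v in extra_fields.items()})
    payload = zlib.compress(json.dumps(msg).encode("utf-8"))
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return msg  # returned so the payload can be inspected

# example: send_gelf("127.0.0.1", 12201, "hello", container="redis")
```

If the message shows up in Kibana, the gelf input and port mapping are correct and any remaining problem is in the Docker logging-driver configuration.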
Then configure your containers to use the gelf logging driver
(not syslog) with the option gelf-address=udp://ip_of_logstash:12201
(instead of syslog-address).
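Per service, that change is just a different driver and option in the compose file; a minimal sketch (yourapp and ip_of_logstash are placeholders you must fill in yourself):

```yaml
services:
  yourapp:
    image: your/image
    logging:
      driver: gelf
      options:
        gelf-address: "udp://ip_of_logstash:12201"
```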
The only remaining magic you will have to take care of is how Docker will find the IP address or hostname of the Logstash container. You could solve that through docker-compose service naming, Docker links, or just manually.
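For the manual route, a standard way to look up a running container's IP is docker inspect with a Go template (the container name logstash_1 here is a placeholder):

```shell
docker inspect \
  -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
  logstash_1
```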
Docker and ELK are powerful and flexible, but therefore also big and complex beasts. Prepare to put in some serious time reading and experimenting.
Don't be afraid to open new (and preferably very specific) questions as you come across them while exploring all this.
ndarkness
Updated on September 18, 2022

Comments
-
ndarkness over 1 year
I would like to know the easiest way to forward my Docker container logs to an ELK server; the solutions I have tried so far, after searching the internet, didn't work at all.
Basically I have a Docker image that I run using docker-compose. This container does not log anything locally (it is composed of different services, but none of them are Logstash or the like), yet I can see logging through docker logs -tf imageName or docker-compose logs. Since I am starting the containers with Compose, I cannot make use (or at least I don't know how) of the --log-driver option of docker. Thus I was wondering if someone may enlighten me a bit regarding how to forward those logs to an ELK container, for example.
Thanks in advance,
Regards
SOLUTION:
Thanks to madeddie I was able to solve my issue in the following way; note that I used the basic ELK-stack-in-containers setup which madeddie suggested in his post.
First, I updated the docker-compose.yml file of my container to add entries for the logging reference, as madeddie told me. I included an entry per service; a snippet of my docker-compose looks like this:

```yaml
version: '2'
services:
  mosquitto:
    image: ansi/mosquitto
    ports:
      - "1883:1883"   # Public access to MQTT
      - "12202:12202" # mapping logs
    logging:
      driver: gelf
      options:
        gelf-address: udp://localhost:12202
  redis:
    image: redis
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"   # No public access to Redis
      - "12203:12203" # mapping logs
    volumes:
      - /home/dockeruser/redis-data:/data
    logging:
      driver: gelf
      options:
        gelf-address: udp://localhost:12203
```
Secondly, I had to use a different port number per service in order to be able to forward the logs.
Finally, I updated the docker-compose.yml file of my ELK container to map each of the UDP ports where I was sending my logs to the one that Logstash listens on:

```yaml
logstash:
  build: logstash/
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
    - "12202:12201/udp" # mapping mosquitto logs
    - "12203:12201/udp" # mapping redis logs
```
This configuration, plus adding the gelf {} entry in logstash.conf, made it work. It is also important to properly set up the IP address of the Docker service. Regards!
-
madeddie almost 8 years
This is an interesting solution: you now have a port forward on all other containers to the logstash container. Normally you would not have all those 1220* ports on the other containers, only 12201/udp on the logstash one, and the logging would point to gelf-address: udp://ip_of_dockerhost:12201 (if all the containers are running on the same host, then 172.17.0.1 would work as ip_of_dockerhost; otherwise use the main IP address of the host the logstash container is running on).
-
ndarkness almost 8 years
I tried to do so, but while running docker-compose up I saw a complaint that the port was already in use.
-
madeddie almost 8 years
Yes, because you open a port for logstash on all your containers, while you should only open it for the logstash container. There is no need for all the other containers to listen for logstash connections; therefore, they don't need to open a port for it.
-
ndarkness over 7 years
I don't understand it completely: do you mean that my logging option should be at the same level as the services tag, instead of per container?
-
madeddie over 7 years
The logging keyword doesn't actually open a port to listen on; it configures where Docker connects to. I'm talking about the port mappings you make on each container: you can only open a port once, but you shouldn't open a port for logging on each container, just the logstash one. The problem lies in your usage of "localhost" as the host of the gelf endpoint; instead of localhost you should use the IP of the machine where the logstash container is running, or 172.17.0.1 if all the containers are on the same host, or use "logstash" if all containers are started with the same compose file.
-
ndarkness over 7 years
I know what you mean now. I think I tried that before getting the error that the port was already in use, but I can give it a second try. Thanks again
-
ndarkness over 7 years
@madeddie I am facing a small issue with that elk stack: I can see that Kibana or Elasticsearch reboots around every 2 hours or so. Have you seen this behaviour as well? If I set my images to the latest instead of the built ones, would this stack still work?
-
ndarkness over 7 years
@madeddie In order to have less load on my server, I wanted to change the port forwarding; however, I cannot manage to make it work. I added a line like gelf-address: udp://172.17.0.1:12201 in each service, but I get an error when I start up the services because the port is in use:

```
ERROR: for redis  driver failed programming external connectivity on endpoint ttnbackend_redis_1: Bind for 0.0.0.0:12201 failed: port is already allocated
ERROR: for broker  driver failed programming external connectivity on endpoint ttnbackend_broker_1: Bind for 0.0.0.0:12201 failed: port is already allocated
```
-
madeddie over 7 years
Do you also have ports configured per service? The gelf-address doesn't actually bind a port.
-
ndarkness over 7 years
Yes, I have ports per service.
-
madeddie over 7 years
Do you configure the 12201 port per service? Because you shouldn't.
-
ndarkness over 7 years
I think I did. Should I only add a line like gelf-address: udp://172.17.0.1? Then it complains that it needs a port.
-
madeddie over 7 years
The gelf line needs a port; I meant a "ports" configuration.
-
ndarkness over 7 years
Nowadays I keep using "the port forwarding" solution...
-
-
madeddie almost 8 years
I can't help you with Filebeat (I don't use it myself), but I'll add a sample config snippet for Logstash. Although I expect you to read up on how to use the ELK stack, because that's a bit beyond the scope of this question (and also enough to fill a week of introductory training :))
-
ndarkness almost 8 years
Thanks again, it is fine if you can share the config snippet for Logstash. I have downloaded a sebp/elk image where ELK is set up, and I am planning to forward the syslog to that container.
-
ndarkness almost 8 years
I have noticed that I can only have one of my services attached to one port, due to the fact that it is TCP. Can I use UDP too for the syslog?
-
madeddie almost 8 years
UDP can also only be opened by one listening process. The syslog-address: "tcp://192.168.0.42:123" option means "send logs to this syslog server", so it doesn't itself set up and listen for syslog messages.
-
ndarkness almost 8 years
Thanks again, I will try it tomorrow. As a quick thought: if I set my docker-compose.yml like this, would it work since now I forward the logs?

```yaml
yourapp:
  image: your/image
  ports:
    - "80:80"
  links:
    - elk
elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
    - "5000:5000"
```
-
madeddie almost 8 years
No, my explanation covered how to use syslog or gelf; the container you're using is not configured to receive either, so you're basically forwarding your logs nowhere at the moment. Once you configure gelf, open that port on the elk container, and configure your containers to send their logs there, it should work.
-
ndarkness almost 8 years
Thanks again. I changed the compose file to use port 5000 in the gelf address, since "my" elk container should listen there with the config I posted before? I will try with your elk container tomorrow morning once I am fresh again! Thanks
-
madeddie almost 8 years
Yeah, the elk container listens on port 5000, but it's currently not configured to expect gelf traffic on port 5000; it actually expects 'lumberjack' type logs there, i.e. logstash-forwarder type: https://github.com/spujadas/elk-docker/blob/master/01-lumberjack-input.conf. If you're serious about using this, I really suggest reading the documentation (minimally the getting started guides) for Elasticsearch, Logstash, Kibana and Docker logging.
-
ndarkness almost 8 years
I am trying with your suggested elk image and I don't see traffic in Kibana either. Something I don't understand is why logstash.conf says to use tcp as input, while my gelf driver uses udp to send the data.
-
ndarkness almost 8 years
I have some traffic from just one of my services; I had to enable port 12201 in my service as well to be able to log. However, I cannot reuse that port for the other services... Do you know if I can reuse that port for the other services, or do I need a different one per service?
-
ndarkness almost 8 years
I finally managed it. I will mark your solution as the one that solved it and update my post with the current implementation.