How to Install Elasticsearch, Logstash, and Kibana (the ELK Stack) Using Docker and Docker Compose
Introduction
What’s the ELK stack?
The ELK stack, composed of Elasticsearch, Logstash, and Kibana, is a powerful open-source solution for searching, analyzing, and visualizing log data in real time.
- Elasticsearch is a distributed, RESTful search and analytics engine capable of swiftly handling large volumes of data.
- Logstash functions as a data processing pipeline, ingesting data from various sources, transforming it, and sending it to Elasticsearch.
- Kibana provides a web interface for visualizing this data, enabling users to create dashboards, charts, and graphs for in-depth data exploration.
The ELK stack is widely used for log management, real-time monitoring, security analytics, and operational intelligence, turning complex data into actionable insights.
Prerequisites
- Install Docker and Docker Compose. Follow the official installation guides for macOS, Windows, or Linux, depending on your operating system.
- Create your working environment as shown in the steps below.
Also, get the code from my GitHub Repo: https://github.com/Here2ServeU/elk-docker-install.
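Before moving on, confirm that Docker and Docker Compose are available on your machine. A quick check from the terminal (version output will vary by setup):
# Verify the Docker Engine is installed and the daemon is running
docker --version
docker info

# Verify Docker Compose is installed (newer setups may use "docker compose version")
docker-compose --version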
Step One: Create a Directory for the ELK Stack
Create a directory to hold your Docker Compose file and any necessary configuration files.
Use the following commands to create a directory and name it t2s-stack. Then, move into it.
mkdir t2s-stack
cd t2s-stack
Step Two: Create a Docker Compose File
Create a docker-compose.yml file in the t2s-stack directory with the following content:
version: '3.7'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:7.14.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
      - "9600:9600"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:7.14.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  esdata:
    driver: local
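Because YAML is sensitive to indentation, it is worth validating the file before moving on. Docker Compose can parse and print the resolved configuration for you:
# Run from the t2s-stack directory; prints the parsed configuration or reports syntax errors
docker-compose config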
There are four major sections to the docker-compose.yml file:
Section 1: Version
It specifies the version of the Docker Compose file format we would like to use. In our demo, we are using version 3.7.
Section 2: Services
This section defines the different services that will make up our application. We need three for our ELK stack: Elasticsearch, Logstash, and Kibana.
Each service has the following:
- image, which pulls the desired image from Docker Hub.
- container_name, which names the container as desired.
- environment, which sets environment variables for the container (discovery.type, xpack.security.enabled, bootstrap.memory_lock, and ES_JAVA_OPTS); see the check after this list.
- ulimits, which sets resource limits.
- volumes to mount a volume to persist data.
- ports, which map ports on the host to ports in the container.
- networks, which connects the container to the specified network.
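As a quick check of what these settings do, once the stack is running (Step Four below) you can ask Elasticsearch whether bootstrap.memory_lock took effect and what heap size ES_JAVA_OPTS applied. These are standard Elasticsearch node APIs; the exact JSON returned depends on your node:
# Should report "mlockall" : true for our single node
curl "http://localhost:9200/_nodes?filter_path=**.mlockall&pretty"

# Shows the heap limit applied by ES_JAVA_OPTS (-Xms512m -Xmx512m)
curl "http://localhost:9200/_nodes/jvm?filter_path=**.heap_max_in_bytes&pretty"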
Section 3: Networks
This section defines the network(s) the services will use. For this project, we are using elk as our network.
We also use the bridge driver, which creates an isolated network shared by the three containers.
Section 4: Volumes
Last, we define the volume(s) the services will use. Here we declare esdata and specify the local driver, so Elasticsearch data persists across container restarts.
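After the stack is up, you can see these resources with the Docker CLI. Compose prefixes their names with a project name derived from the directory, so the names below assume the t2s-stack prefix and may differ slightly on your machine:
# List networks and volumes created by Compose
docker network ls
docker volume ls

# Inspect the elk network and the esdata volume (names assume the t2s-stack project prefix)
docker network inspect t2s-stack_elk
docker volume inspect t2s-stack_esdata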
Step Three: Configure Logstash Pipeline
Next, create the Logstash pipeline configuration. Inside the t2s-stack directory, create a directory named logstash/pipeline, and inside it, a file named logstash.conf with the following content:
input {
  beats {
    port => 5000
  }
}

filter {
  # Add your filters here
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
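This pipeline listens for Beats traffic on port 5000, which the Compose file publishes on the host. As a sketch of how data could reach it, a Filebeat instance running on the Docker host might point its Logstash output at that port; the log path below is a placeholder for whatever files you actually want to ship:
# filebeat.yml (minimal sketch, not part of this stack)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log   # replace with the logs you want to ship

output.logstash:
  hosts: ["localhost:5000"]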
Step Four: Start the ELK Stack
From the t2s-stack directory, run the following command to start the ELK stack:
docker-compose up -d
The above command will start up your Docker Compose application in detached mode.
docker-compose: This is the Docker Compose command-line tool used to manage multi-container Docker applications defined in a docker-compose.yml file.
up: This subcommand tells Docker Compose to create and start containers based on the services defined in the docker-compose.yml file.
-d: This flag stands for “detached mode”.
When you run docker-compose up with the -d flag, it starts the containers in the background and returns control to your terminal.
Without the -d flag, the command would run in the foreground, showing logs and output from the containers in the terminal.
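Once the containers are up, you can check their status and follow their logs; Elasticsearch in particular may take a minute or so to become ready:
# Show the state of the three containers
docker-compose ps

# Follow the logs of a single service (Ctrl+C to stop following)
docker-compose logs -f elasticsearch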
Step Five: Verify the Installation
Elasticsearch: Open your web browser and go to http://localhost:9200. You should see a JSON response with information about your Elasticsearch node.
Kibana: Open your web browser and go to http://localhost:5601. You should see the Kibana dashboard.
Logstash: Logstash will be running and waiting for input on port 5000.
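If you prefer the command line, the same checks can be done with curl. The Logstash monitoring API listens on port 9600, which the Compose file also publishes:
# Elasticsearch node info (JSON with cluster name, version, and tagline)
curl http://localhost:9200

# Kibana status endpoint (returns HTTP 200 once Kibana is ready)
curl -I http://localhost:5601/api/status

# Logstash node and pipeline info from the monitoring API
curl "http://localhost:9600/_node/pipelines?pretty"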
Step Six: Stop the ELK Stack
To stop the ELK stack, run the following command:
docker-compose down
This command does the opposite of the docker-compose up command we discussed earlier.
It stops and removes the resources that docker-compose up created. Specifically, it does the following:
- Stopping containers: the command stops all the running containers defined in the Docker Compose file.
- Removing containers: the command removes all the stopped containers.
- Removing networks: the command removes the networks created by Docker Compose unless they are external networks defined in the Compose file.
- Removing volumes: The -v flag (docker-compose down -v) will also remove the volumes defined in the Compose file. This is useful for cleaning up persistent data.
- Removing images: the --rmi flag (docker-compose down --rmi all) will also remove the images used by the services.
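Putting these flags together, a full cleanup of this demo (containers, network, data volume, and images) would look like the command below; only run it once you no longer need the indexed data:
# Stop the stack and remove containers, the elk network, the esdata volume, and the pulled images
docker-compose down -v --rmi all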
Conclusion
Yes, you have just deployed an ELK stack using Docker Compose. Happy Coding. God bless you!