Note that the limits must be changed on the host; they cannot be changed from within a container. A volume or bind-mount could be used to access this directory and the snapshots from outside the container. The Docker image for ELK I recommend using is this one. The certificates are assigned to hostname *, which means that they will work if you are using a single-part (i.e. dot-free) hostname to reference the server. In this 2-part post, I will be walking through a way to deploy the Elasticsearch, Logstash, Kibana (ELK) stack. In part 1 of the post, I will be walking through the steps to deploy Elasticsearch and Kibana to the Docker swarm. The next thing we wanted to do was collect the log data from the systems into the ELK stack… For instance, with the default configuration files in the image, replace the contents of 02-beats-input.conf (for Beats emitters) as required. If the container stops and its logs include the message max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144], then the limits on mmap counts are too low; see Prerequisites. View On GitHub; Welcome to (pfSense/OPNsense) + Elastic Stack. Here are a few pointers to help you troubleshoot your containerised ELK. Note – The log-emitting Docker container must have Filebeat running in it for this to work. In terms of permissions, Elasticsearch data is created by the image's elasticsearch user, with UID 991 and GID 991. Today we are going to learn how to aggregate Docker container logs and analyse them centrally using the ELK stack. By default, the stack will be running Logstash with the default Logstash configuration file. Now that we have the ELK stack up and running, we can go play with the Filebeat service. To avoid issues with permissions, it is therefore recommended to install Kibana plugins as kibana, using the gosu command (see below for an example, and references for further details).
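For illustration, a minimal 02-beats-input.conf enabling the Beats input over SSL/TLS might look like the following sketch. Port 5044 is the stack's Beats port; the certificate and key paths are assumptions based on the logstash-beats.crt naming used in this document, and may differ in your build:

```conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}
```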
Logstash's configuration auto-reload option was introduced in Logstash 2.3 and enabled in the images with tags es231_l231_k450 and es232_l232_k450. In Logstash version 2.4.x, the private keys used by Logstash with the Beats input are expected to be in PKCS#8 format. As from version 5, if Elasticsearch is no longer starting (i.e. the start-up counter reaches its limit and the container exits), see the troubleshooting pointers below. This is where the ELK stack comes into the picture. The Elastic Stack (aka ELK) is the current go-to stack for centralised structured logging for your organisation. Certificate and key locations (e.g. ssl_certificate, ssl_key) are set in Logstash's input plugin configuration files. While the most common installation setup is Linux and other Unix-based systems, a less-discussed scenario is using Docker. By default, if no tag is indicated (or if using the tag latest), the latest version of the image will be pulled. Note – Alternatively, when using Filebeat on a Windows machine, instead of using the certificate_authorities configuration option, the certificate from logstash-beats.crt can be installed in Windows' Trusted Root Certificate Authorities store. Applies to tags: es500_l500_k500 and later. Define the index pattern, and on the next step select the @timestamp field as your Time Filter. One way to do this is to mount a Docker named volume using Docker's -v option; this mounts the named volume elk-data to /var/lib/elasticsearch (and automatically creates the volume if it doesn't exist; you could also pre-create it manually using docker volume create elk-data). The /var/backups directory is registered as the snapshot repository (using the path.repo parameter in the elasticsearch.yml configuration file). The stack can be installed on a variety of operating systems and in various setups.
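The named-volume approach described above can be sketched as follows; the port mappings are this stack's defaults, and mounting over /var/lib/elasticsearch follows the description above (adjust the volume and container names to taste):

```shell
# Optionally pre-create the named volume (docker run would also create it)
docker volume create elk-data

# Run the ELK container with Elasticsearch data persisted in the named volume
sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
  -v elk-data:/var/lib/elasticsearch \
  --name elk sebp/elk
```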
LS_HEAP_SIZE: Logstash heap size (default: "500m"). LS_OPTS: Logstash options (default: "--auto-reload" in images with tags es231_l231_k450 and es232_l232_k450, "" in latest; see Breaking changes). NODE_OPTIONS: Node options for Kibana (default: "--max-old-space-size=250"). MAX_MAP_COUNT: limit on mmap counts (default: system default). If you browse to http://<your-host>:9200/_search?pretty&size=1000 (e.g. http://localhost:9200/_search?pretty&size=1000 for a local native instance of Docker) you'll see that Elasticsearch has indexed the entry. You can now browse to Kibana's web interface at http://<your-host>:5601 (e.g. http://localhost:5601 for a local native instance of Docker). Warning – This setting is system-dependent: not all systems allow this limit to be set from within the container, and you may need to set it from the host before starting the container (see Prerequisites). If you haven't got any logs yet and want to manually create a dummy log entry for test purposes (for instance to see the dashboard), first start the container as usual (sudo docker run ... or docker-compose up ...). The following environment variables can be used to override the defaults used to start up the services: TZ: the container's time zone (see the list of valid time zones). Restrict access to the ELK services to authorised hosts/networks only, as described in e.g. Elastic Security: Deploying Logstash, ElasticSearch, Kibana "securely" on the Internet. Logs are received from log-emitting clients (e.g. Filebeat) over a secure (SSL/TLS) connection. If you are using Filebeat, its version is the same as the version of the ELK image/stack. This web page documents how to use the sebp/elk Docker image, which provides a convenient centralised log server and log management web interface, by packaging Elasticsearch, Logstash, and Kibana, collectively known as ELK. Having created the index pattern, you will now be able to analyse your data on the Kibana Discover page. First off, we will use the ELK stack, which has become in a few years a credible alternative to other monitoring solutions (Splunk, SaaS…). To disable SSL/TLS (e.g. in a demo environment), see Disabling SSL/TLS. To set the min and max values separately, see ES_JAVA_OPTS below.
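As a sketch, several of the variables listed above could be overridden at start-up like so (the values shown are illustrative assumptions, not recommendations):

```shell
sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
  -e TZ=Europe/Paris \
  -e ES_HEAP_SIZE="1g" \
  -e LS_HEAP_SIZE="750m" \
  -e MAX_MAP_COUNT=262144 \
  --name elk sebp/elk
```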
You can report issues with this image using GitHub's issue tracker (please avoid raising issues as comments on Docker Hub, if only for the fact that the notification system is broken at the time of writing, so there's a fair chance that I won't see them for a while). Whilst this avoids accidental data loss, it also means that things can become messy if you're not managing your volumes properly. To pull this image from the Docker registry, open a shell prompt and enter sudo docker pull sebp/elk. Note – This image has been built automatically from the source files in the source Git repository on GitHub. Having said that, and as demonstrated in the instructions below, Docker can be an extremely easy way to set up the stack. Incorrect proxy settings are another common source of problems. The idea of having to start all those processes manually can be a pain. Moreover, if you had different developers working on such a project, they would each have to set things up according to their operating system (macOS, Linux or Windows). This would make the development environment different for developers on a case-by-case basis and increase the… Out of the box, the image's pipelines.yml configuration file defines a default pipeline, made of the files located in /etc/logstash/conf.d. Run with Docker Compose: to get the default distributions of Elasticsearch and Kibana up and running in Docker, you can use Docker Compose. For instance, the image containing Elasticsearch 1.7.3, Logstash 1.5.5, and Kibana 4.1.2 (which is the last image using the Elasticsearch 1.x and Logstash 1.x branches) bears the tag E1L1K4, and can therefore be pulled using sudo docker pull sebp/elk:E1L1K4. Everything is already pre-configured with a privileged username and password. Finally, access Kibana by entering http://localhost:5601 in your browser.
Before starting the ELK Docker containers, we will have to increase virtual memory limits by typing the following command: sudo sysctl -w vm.max_map_count=262144. The point of increasing this limit is to prevent Elasticsearch, and with it the entire ELK stack, from failing. In this case, the host's limits on open files (as displayed by ulimit -n) must be increased (see File Descriptors in the Elasticsearch documentation); and Docker's ulimit settings must be adjusted, either for the container (using docker run's --ulimit option or Docker Compose's ulimits configuration option) or globally. I highly recommend reading up on using Filebeat. As this feature created a resource leak prior to Logstash 2.3.3 (see https://github.com/elastic/logstash/issues/5235), the --auto-reload option was removed as from the es233_l232_k451-tagged image (see https://github.com/spujadas/elk-docker/issues/41). A Dockerfile like the following will extend the base image and install the GeoIP processor plugin (which adds information about the geographical location of IP addresses); you can then build the new image (see the Building the image section above) and run the container in the same way as you did with the base image. To check if Logstash is authenticating using the right certificate, check for errors in Logstash's output. With Docker for Mac, the amount of RAM dedicated to Docker can be set using the UI: see How to increase docker-machine memory Mac (Stack Overflow). For more (non-Docker-specific) information on setting up an Elasticsearch cluster, see the Life Inside a Cluster section of the Elasticsearch definitive guide. If you're using Compose then run sudo docker-compose build elk, which uses the docker-compose.yml file from the source repository to build the image. logstash-beats.crt is the name of the file containing Logstash's self-signed certificate.
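The sysctl change above only lasts until the next reboot; a common way (not specific to this image) to make it permanent is to persist it in a sysctl drop-in file on the host, e.g.:

```shell
# One-off change, effective immediately
sudo sysctl -w vm.max_map_count=262144

# Persist across reboots (the drop-in file name is an arbitrary choice)
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system
```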
Go to the root of the cloned directory (i.e. the directory that contains the Dockerfile). If you're using the vanilla docker command, then run sudo docker build -t <repository-name> ., where <repository-name> is the repository name to be applied to the image, which you can then use to run the image with the docker run command. Elasticsearch alone needs at least 2GB of RAM to run. In version 5, before starting Filebeat for the first time, you would run a command (replacing elk with the appropriate hostname) to load the default index template in Elasticsearch. In version 6 however, the filebeat.template.json template file has been replaced with a fields.yml file, which is used to load the index manually by running filebeat setup --template as per the official Filebeat instructions. You may for instance see that Kibana's web interface (which is exposed as port 5601 by the container) is published at an address like 192.168.99.100:32770, which you can now go to in your browser. So, what is the ELK Stack? Applies to tags: es231_l231_k450, es232_l232_k450. Filebeat collects logs (e.g. from log files, or from the syslog daemon) and sends them to our instance of Logstash. Logs forwarded by Beats (e.g. as produced by Filebeat; see Forwarding logs with Filebeat) will be indexed with a <beatname>- prefix. When reporting an issue, please provide as much relevant information (e.g. logs, configuration files, what you were expecting and what you got instead, any troubleshooting steps that you took, what is working) as possible for me to act on it. The image can also be built for ARM64 (e.g. Raspberry Pi). Setting these environment variables avoids potentially large heap dumps if the services run out of memory. If Docker runs inside a VM (e.g. using Boot2Docker or Vagrant), port forwarding may need to be set up. The troubleshooting guidelines below only apply to running a container using the ELK Docker image. For a sandbox environment used for development and testing, Docker is one of the easiest and most efficient ways to set up the stack.
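Putting the build steps above together, a typical clone-build-run session might look like this sketch (the my-elk tag is a hypothetical name; the URL is the project's source repository mentioned in this document):

```shell
git clone https://github.com/spujadas/elk-docker.git
cd elk-docker
sudo docker build -t my-elk .
sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk my-elk
```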
Perhaps surprisingly, ELK is being increasingly used on Docker for production environments as well, as reflected in this survey I conducted a while ago. Of course, a production ELK stack entails a whole set of different considerations that involve cluster setups, resource configurations, and various other architectural elements. There is still much debate on whether deploying ELK on Docker is a viable solution for production environments (resource consumption and networking are the main concerns), but it is definitely a cost-efficient method when setting up in development. Specific version combinations of Elasticsearch, Logstash and Kibana can be pulled by using tags. You can then run the built image with sudo docker-compose up. Logstash's settings are defined by its configuration files. All done: the ELK stack is now up and running as a daemon, in a minimal configuration. This results in three Docker containers running in parallel (for Elasticsearch, Logstash and Kibana), port forwarding set up, and a data volume for persisting Elasticsearch data. Check that you started the container with the right ports open (e.g. 5601 for Kibana). Elasticsearch's transport interface is not published by default; use the -p 9300:9300 option with the docker command above to publish it. By default, when starting a container, all three of the ELK services (Elasticsearch, Logstash, Kibana) are started. For further information on snapshot and restore operations, see the official documentation on Snapshot and Restore. The first run takes more time, as the nodes have to download the images. If a proxy is defined for Docker, ensure that connections to localhost are not proxied (e.g. by using a no_proxy setting). Fixed UIDs and GIDs are now assigned to Elasticsearch (both the UID and GID are 991), Logstash (992), and Kibana (993). Another example is max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]. Elasticsearch is a search and analytics engine. Here we will use the well-known ELK stack (Elasticsearch, Logstash, Kibana).
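A minimal docker-compose.yml matching the single-container set-up described in this document might look like the following sketch (the elk-data volume name is an assumption; the ports are the stack's defaults):

```yaml
version: "3"
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"   # Kibana web interface
      - "9200:9200"   # Elasticsearch HTTP interface
      - "5044:5044"   # Logstash Beats input
    volumes:
      - elk-data:/var/lib/elasticsearch
volumes:
  elk-data:
```

With this file in place, docker-compose up -d brings the stack up and docker-compose ps shows its state.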
If you're using Docker Compose to manage your Docker services (and if not, you really should, as it will make your life much easier!), a command can be given to start Elasticsearch only. Note that if the container is to be started with Elasticsearch disabled and Logstash enabled, then you need to make sure that the configuration file for Logstash's Elasticsearch output plugin (/etc/logstash/conf.d/30-output.conf) points to a host belonging to the Elasticsearch cluster rather than localhost (which is the default in the ELK image, since by default Elasticsearch and Logstash run together). Install Filebeat on the host you want to collect and forward logs from (see the References section for links to detailed instructions). Certificate-based server authentication requires log-producing clients to trust the server's root certificate authority's certificate, which can be an unnecessary hassle in zero-criticality environments (e.g. demo environments, sandboxes). The name of Logstash's home directory in the image is stored in the LOGSTASH_HOME environment variable (which is set to /opt/logstash in the base image). Note that this variable is only used to test if Elasticsearch is up when starting up the services. To collect multiline log entries (e.g. stack traces) as a single event using Filebeat, you may want to consider Filebeat's multiline option, which was introduced in Beats 1.1.0, as a handy alternative to altering Logstash's configuration files to use Logstash's multiline codec. This blog is the first of a series of blogs, setting the foundation of using Thingsboard, the ELK stack and Docker. CLUSTER_NAME: the name of the Elasticsearch cluster (default: automatically resolved when the container starts if Elasticsearch requires no user authentication). To harden this image, at the very least you would want to restrict access to the ELK services. X-Pack, which is now bundled with the other ELK services, may be useful for implementing enterprise-grade security in the ELK stack.
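As a sketch, an overwritten 30-output.conf pointing Logstash at a remote Elasticsearch node rather than localhost could look like this (es-cluster-host is a placeholder for a real cluster member's hostname):

```conf
output {
  elasticsearch {
    hosts => ["es-cluster-host:9200"]
  }
}
```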
The popular open source project Docker has completely changed service delivery by allowing DevOps engineers and developers to use software containers to house and deploy applications within single Linux instances automatically. For a Beats client (e.g. Filebeat), sending logs to hostname elk will work; elk.mydomain.com will not (it will produce an error along the lines of x509: certificate is valid for *, not elk.mydomain.com), and neither will an IP address such as 192.168.0.1 (expect x509: cannot validate certificate for 192.168.0.1 because it doesn't contain any IP SANs). Alternatively, to implement authentication in a simple way, a reverse proxy (e.g. nginx) can be used in front of the ELK services. Connecting containers over Docker's default bridge network using links is a deprecated legacy feature of Docker which may eventually be removed. Configuring the ELK stack: this project was built so that you can test and use built-in features under Elastic Security, like detections, signals, … Therefore, the CLUSTER_NAME environment variable can be used to specify the name of the cluster and bypass the (failing) automatic resolution. To make Logstash use the generated certificate to authenticate to a Beats client, extend the ELK image to overwrite the certificate and private key files as required. The ELK Stack is a collection of three open-source products: Elasticsearch, Logstash, and Kibana. Note – The rest of this document assumes that the exposed and published ports share the same number. pfSense/OPNsense + ELK. Elasticsearch not having enough time to start up with the default image settings: in that case, set the ES_CONNECT_RETRY environment variable to a value larger than 30. You can install the stack locally or on a remote machine, or set up the different components using Docker. Create a docker-compose.yml file for the Elastic Stack, then run docker-compose up -d && docker-compose ps. This may have unintended side effects on plugins that rely on Java.
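A sketch of the reverse-proxy approach for simple authentication, here using nginx with HTTP basic auth in front of Kibana. The file location and the htpasswd file are assumptions for illustration, not part of the ELK image:

```conf
# Assumed location: /etc/nginx/conf.d/kibana.conf
server {
    listen 80;

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd.users;  # created with htpasswd
        proxy_pass           http://localhost:5601;      # Kibana's default port
    }
}
```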
I'm not gonna tell you everything about Elasticsearch here, but I want to help you get Elasticsearch up and running with ease using Docker-ELK. You should see the change in the Logstash image name. From es234_l234_k452 to es241_l240_k461: add --auto-reload to LS_OPTS. It collects, ingests, and stores your services' logs (and metrics) while making them searchable, aggregatable, and observable. You can change this behaviour by overwriting the elasticsearch, logstash and kibana files in /etc/logrotate.d. ES_JAVA_OPTS: additional Java options for Elasticsearch (default: ""). To build the Docker image from the source files, first clone the Git repository and go to the root of the cloned directory (i.e. the directory that contains the Dockerfile). ES_HEAP_SIZE: Elasticsearch heap size (default is 256MB min, 1G max). This shows that only one node is up at the moment, and the yellow status indicates that all primary shards are active, but not all replica shards are active. The file can be added e.g. by ADD-ing it to a custom Dockerfile that extends the base image, or by bind-mounting it at runtime. After starting the ELK services, the container will run the script at /usr/local/bin/elk-post-hooks.sh if it exists and is executable. Start the first node using the usual docker command on the host. Now create a basic elasticsearch-slave.yml file, and start a second node using it. Note that Elasticsearch's port is not published to the host's port 9200, as it was already published by the initial ELK container. The MAX_MAP_COUNT environment variable can be set (e.g. using docker's -e option) to make Elasticsearch set the limits on mmap counts at start-up time. Elastic Stack (ELK) on Docker: run the latest version of the Elastic Stack with Docker and Docker Compose. This is the most frequent reason for Elasticsearch failing to start since Elasticsearch version 5 was released. Deploying the ELK stack with Docker Compose. Logstash runs as the user logstash.
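A hedged sketch of what the elasticsearch-slave.yml described above might contain, assuming the first node's routable IP address is 192.168.0.1 (an illustrative value) and a Zen-discovery-era version of Elasticsearch:

```yaml
# Listen on all interfaces so the node is reachable from the master's host
network.host: 0.0.0.0
# Point discovery at the first (master) node's routable address
discovery.zen.ping.unicast.hosts: ["192.168.0.1"]
```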
The following environment variables may be used to selectively start a subset of the services. ELASTICSEARCH_START: if set and set to anything other than 1, then Elasticsearch will not be started. The following example brings up a three node cluster and Kibana so you can see how things work. In the sample configuration file, make sure that you replace elk in elk:5044 with the hostname or IP address of the ELK-serving host. To build the image for ARM64 (e.g. Raspberry Pi), run the following command. Note – The OSS version of the image cannot be built for ARM64. An even more optimal way to distribute Elasticsearch, Logstash and Kibana across several nodes or hosts would be to run only the required services on the appropriate nodes or hosts (e.g. Kibana on a dedicated host). If your log-emitting client doesn't seem to be able to reach Logstash, see the following references: How to increase docker-machine memory Mac; Elasticsearch's documentation on virtual memory; https://docs.docker.com/installation/windows/; https://docs.docker.com/installation/mac/; https://docs.vagrantup.com/v2/networking/forwarded_ports.html; http://localhost:9200/_search?pretty&size=1000; deprecated legacy feature of Docker which may eventually be removed; Elastic Security: Deploying Logstash, ElasticSearch, Kibana "securely" on the Internet; IP address of the ELK stack in the subject alternative name field; as per the official Filebeat instructions; https://github.com/elastic/logstash/issues/5235; https://github.com/spujadas/elk-docker/issues/41; How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04; gosu, simple Go-based setuid+setgid+setgroups+exec; 5044 (Logstash Beats interface, receives logs from Beats such as Filebeat – see the References). There are various ways of integrating ELK with your Docker environment. You can configure that file to suit your purposes and ship any type of data into your stack. Alternatively, you could install Filebeat (either on your host machine or as a container) and have Filebeat forward logs into the stack.
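A minimal filebeat.yml along the lines of the sample configuration mentioned above might look like this sketch. Replace elk with your ELK host; the input syntax shown is for recent Filebeat versions (older versions use filebeat.prospectors), and the certificate path is an assumption:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/syslog
      - /var/log/auth.log

output.logstash:
  hosts: ["elk:5044"]
  # Trust Logstash's self-signed certificate (path is an assumption)
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]
```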
As an example, start an ELK container as usual on one host, which will act as the first master; let's assume that the host is called elk-master.example.com. First, I will download and install Metricbeat. Next, I'm going to configure the metricbeat.yml file to collect metrics on my operating system and ship them to the Elasticsearch container. Last but not least, I'll start Metricbeat (again, on Mac only). After a second or two, you will see a Metricbeat index created in Elasticsearch, and its pattern identified in Kibana.
To use your own certificates, extend the ELK image to overwrite the configuration files, certificate files (*.crt) and private key files (*.key) as required; see Docker's documentation on volumes, and on bind-mounting in particular. Auto-reload is enabled in later versions of the image by adding the --config.reload.automatic command-line option to LS_OPTS. Note – There is a known situation where SELinux denies access to the mounted volume when running in enforcing mode. If the container finds an executable /usr/local/bin/elk-pre-hooks.sh, it will run it before starting the ELK services.
Logstash's settings are defined by the configuration files (logstash.yml, jvm.options, pipelines.yml) located in /opt/logstash/config, and Logstash's plugin management script (logstash-plugin), like Kibana's (kibana-plugin), is located in the bin subdirectory. The path.repo parameter is predefined as /var/backups in elasticsearch.yml, to facilitate back-ups (see Snapshot and restore). Logs are rotated daily and are deleted after a week, using logrotate. ES_HEAP_DISABLE and LS_HEAP_DISABLE: disable HeapDumpOnOutOfMemoryError for Elasticsearch and Logstash respectively if non-zero (default: HeapDumpOnOutOfMemoryError is enabled); when disabled, heap dumps are not produced if the services run out of memory. To set the min and max heap sizes to different values, e.g. 512MB and 2g, use ES_JAVA_OPTS (e.g. -Xms512m -Xmx2g). As from Logstash 2.4.0, a PKCS#8-formatted private key is expected. As from version 5, port 5000 is no longer exposed; see the repository page for the complete list of ports that are exposed. This image initially used Oracle JDK 8 as a base image, which is no longer available as a Ubuntu package. When referencing Elasticsearch's URL from a client machine, the IP address must be routable from that machine (e.g. the host's public IP address or a routed private IP address, but not the Docker-assigned internal 172.x.x.x address).
In this video, I have written a systemd unit file for managing Filebeat as a service, which forwards syslog and authentication logs over a secure (SSL/TLS) connection. Running docker stack deploy -c docker-stack.yml elk will start the services in the stack; the first deployment takes more time as the nodes have to download the images, and once the services have started you can begin to verify that everything is running. You can stop the container with ^C and start it again with sudo docker start elk. If the "waiting for Elasticsearch to be up" counter goes up to 30 and the container exits, Elasticsearch did not start in time; note that the ES_CONNECT_RETRY variable is only used to test if Elasticsearch is up when starting up the services. The code for this can be found on our GitHub page. Our next step is to forward some data into the stack, create alerts and dashboards based on the data, and define the index pattern, selecting the @timestamp field as the Time Filter. The ELK stack is a collection of three open-source projects, Elasticsearch, Logstash and Kibana: Elasticsearch stores data and lets you search it quickly and in near real time, Logstash processes logs sent by client applications, and Kibana lets you explore and visualise the data. See the Starting services selectively section to start only part of the services, and the Building the image section if you want to build the image yourself.