
Collect Elasticsearch Logs and Metrics for Non-Kubernetes environments

This page explains how to collect logs and metrics from Elasticsearch in non-Kubernetes environments.

We use Telegraf for Elasticsearch metric collection and a Sumo Logic Installed Collector for collecting Elasticsearch logs. The diagram below illustrates the components of Elasticsearch collection in a non-Kubernetes environment. Telegraf runs on the same system as Elasticsearch and gathers Elasticsearch metrics, which the Sumo Logic output plugin sends to Sumo Logic. Elasticsearch logs, on the other hand, are collected by a Sumo Logic Local File Source.

This section provides instructions for configuring logs and metrics collection for the Sumo Logic App for Elasticsearch. Follow the instructions below to set up logs and metrics collection:

  1. Configure Metrics Collection

    1. Configure a Hosted Collector
    2. Configure an HTTP Logs and Metrics Source
    3. Install Telegraf
    4. Configure and start Telegraf
  2. Configure Logs Collection

    1. Configure logging in Elasticsearch
    2. Configure Sumo Logic Installed Collector

Configure Metrics Collection

  1. Configure a Hosted Collector
    To create a new Sumo Logic hosted collector, perform the steps in the Configure a Hosted Collector section of the Sumo Logic documentation.

  2. Configure an HTTP Logs and Metrics Source
    Create a new HTTP Logs and Metrics Source in the hosted collector created above by following these instructions. Make a note of the HTTP Source URL.

  3. Install Telegraf
    Use the following steps to install Telegraf.

  4. Configure and start Telegraf
As part of collecting metrics data from Telegraf, we will use the elasticsearch input plugin to get data from Elasticsearch and the Sumo Logic output plugin to send data to Sumo Logic.

    Create or modify telegraf.conf and copy and paste the text below:
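
Below is a minimal telegraf.conf sketch matching the parameters described next. Every value marked CHANGE_ME is a placeholder you must replace; the servers URL assumes Elasticsearch listens locally on port 9200, and the db_system tag mirrors the fields used for log collection later on this page:

    [[inputs.elasticsearch]]
      ## Elasticsearch HTTP endpoint(s) to poll for stats
      servers = ["http://localhost:9200"]
      [inputs.elasticsearch.tags]
        environment = "ENV_CHANGE_ME"
        component = "database"
        db_system = "elasticsearch"
        db_cluster = "CLUSTER_NAME_CHANGE_ME"

    [[outputs.sumologic]]
      ## HTTP Logs and Metrics Source URL from step 2
      url = "HTTP_SOURCE_URL_CHANGE_ME"
      data_format = "prometheus"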

Please enter values for the following parameters (marked CHANGE_ME above):

  • In the input plugins section, that is [[inputs.elasticsearch]]:
    • servers - The URL of the Elasticsearch server. Please see this doc for more information on additional parameters for configuring the Elasticsearch input plugin for Telegraf.
    • In the tags section, which is [inputs.elasticsearch.tags]:
      • environment - This is the deployment environment where the Elasticsearch cluster identified by the value of servers resides. For example: dev, prod, or QA. While this value is optional, we highly recommend setting it.
      • db_cluster - Enter a name to identify this Elasticsearch cluster. This cluster name will be shown in the Sumo Logic dashboards.
  • In the output plugins section, that is [[outputs.sumologic]]:
    • url - This is the HTTP Source URL created in step 2. Please see this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin.

Here’s an explanation for additional values set by this Telegraf configuration.

  • data_format = "prometheus" - In the output plugins section, that is [[outputs.sumologic]]: metrics are sent to Sumo Logic in the Prometheus format.
  • component = "database" - In the input plugins section, that is [[inputs.elasticsearch]]: this value is used by Sumo Logic apps to identify application components.
  • For all other parameters, please see this doc for more properties that can be configured in the Telegraf agent globally.
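
Once telegraf.conf is updated, validate the configuration and restart Telegraf. A typical sequence on a systemd-based host, assuming a package install with the default configuration path, is:

    # Run the inputs once and print the gathered metrics to stdout to verify the config
    telegraf --config /etc/telegraf/telegraf.conf --test

    # Restart the Telegraf service so it picks up the new configuration
    sudo systemctl restart telegraf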

At this point, Elasticsearch metrics should start flowing into Sumo Logic.

Configure Logs Collection

This section provides instructions for configuring log collection for Elasticsearch running in a non-Kubernetes environment for the Sumo Logic App for Elasticsearch.

By default, Elasticsearch logs are stored in a log file. 

Local log files can be collected via Installed Collectors. An Installed Collector requires you to allow outbound traffic to Sumo Logic endpoints for collection to work. For detailed requirements for Installed Collectors, see this page.

  1. Configure logging in Elasticsearch
    Elasticsearch writes logs to local text files via Log4j 2. Log verbosity is controlled per logger; to change it, set a logger's level to one of:

  • trace (very detailed information, useful for development/debugging)
  • debug (detailed information, but less verbose than trace)
  • info (moderately verbose, suitable for production environments) - this is the default value
  • warn / error (only important or critical messages are logged)

All logging settings are located in log4j2.properties.
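
For example, to increase verbosity you can set logger levels in log4j2.properties. The fragment below is a sketch to merge into your existing file, not a replacement for it; the discovery logger shown is just one illustrative package:

    # Raise the overall log level
    rootLogger.level = debug

    # Enable debug logging for a specific Elasticsearch package (illustrative)
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug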

By default, Elasticsearch logs are stored in /var/log/elasticsearch/<clustername>.log. The default directory for log files is set by the path.logs property in elasticsearch.yml.

Logs from the Elasticsearch log file can be collected via a Sumo Logic Installed collector and a Local File Source as explained in the next section.

  2. Configuring a Collector
    To collect logs directly from the Elasticsearch machine, configure an Installed Collector.

  3. Configuring a Source

    For an Installed Collector
    To collect logs directly from your Elasticsearch machine, use an Installed Collector and a Local File Source. 

    1. Add a Local File Source.
    2. Configure the Local File Source fields as follows:
  • Name. (Required)
  • Description. (Optional)
  • File Path (Required). Enter the path to your Elasticsearch log file. The file is typically located at /var/log/elasticsearch/<clustername>.log. If you are using a customized path, check the path.logs property in elasticsearch.yml for this information.
  • Source Host. Sumo Logic uses the hostname assigned by the OS unless you enter a different hostname.
  • Source Category. Enter any string to tag the output collected from this Source, such as elasticsearch/Logs. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details see Best Practices.)
  • Fields. Set the following fields:
    • component = database
    • db_system = elasticsearch
    • db_cluster = <Your_Elasticsearch_Cluster_Name>
    • environment = <Environment_Name>, such as Dev, QA or Prod.

  3. Configure the Advanced section:

  • Enable Timestamp Parsing. Select Extract timestamp information from log file entries.
  • Time Zone. Choose the option, Ignore time zone from the log file and instead use, and then select your Elasticsearch Server’s time zone.
  • Timestamp Format. The timestamp format is automatically detected.
  • Encoding. Select UTF-8 (Default).
  • Enable Multiline Processing. Select Detect messages spanning multiple lines.
    • Infer Boundaries - Detect message boundaries automatically.
  4. Click Save.
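
If you manage sources as JSON rather than through the UI (for example, with local configuration file management for Installed Collectors), the equivalent Local File Source might look like the sketch below; the name, path, and field values are illustrative and should match what you chose above:

    {
      "api.version": "v1",
      "sources": [
        {
          "sourceType": "LocalFile",
          "name": "Elasticsearch",
          "pathExpression": "/var/log/elasticsearch/<clustername>.log",
          "category": "elasticsearch/Logs",
          "fields": {
            "component": "database",
            "db_system": "elasticsearch",
            "db_cluster": "<Your_Elasticsearch_Cluster_Name>",
            "environment": "<Environment_Name>"
          },
          "automaticDateParsing": true,
          "timeZone": "Etc/UTC",
          "forceTimeZone": true,
          "encoding": "UTF-8",
          "multilineProcessingEnabled": true,
          "useAutolineMatching": true
        }
      ]
    }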

At this point, Elasticsearch logs should start flowing into Sumo Logic.