We use the Telegraf Operator for Elasticsearch metric collection and a Sumo Logic Installed Collector for collecting Elasticsearch logs. The diagram below illustrates the components of Elasticsearch collection in a non-Kubernetes environment. Telegraf runs on the same system as Elasticsearch to obtain Elasticsearch metrics and uses the Sumo Logic output plugin to send the metrics to Sumo Logic. Logs from Elasticsearch, on the other hand, are sent to a Sumo Logic Local File Source.
This section provides instructions for configuring metrics and logs collection for the Sumo Logic App for Elasticsearch. Follow the instructions below to set up logs and metrics collection:
Configure Metrics Collection
- Configure a Hosted Collector
- Configure an HTTP Logs and Metrics Source
- Install Telegraf
- Configure and start Telegraf
Configure Logs Collection
- Configure logging in Elasticsearch
- Configure Sumo Logic Installed Collector
Configure Metrics Collection
Configure a Hosted Collector
To create a new Sumo Logic hosted collector, perform the steps in the Configure a Hosted Collector section of the Sumo Logic documentation.
Configure an HTTP Logs and Metrics Source
Create a new HTTP Logs and Metrics Source in the hosted collector created above by following these instructions. Make a note of the HTTP Source URL.
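Once you have the HTTP Source URL, a quick way to confirm the endpoint is reachable is to post a test message to it. This is a sketch; the URL below is a placeholder, not a real endpoint:

```shell
# Placeholder: substitute the HTTP Source URL you noted above.
SUMO_HTTP_URL="https://endpoint.collection.sumologic.com/receiver/v1/http/XXXX"

# Sumo Logic returns HTTP 200 when the message is accepted.
curl -s -o /dev/null -w "%{http_code}\n" -X POST "$SUMO_HTTP_URL" -d "test message"
```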
Install Telegraf
Use the following steps to install Telegraf.
Configure and start Telegraf
To collect metrics data with Telegraf, we will use the elasticsearch input plugin to get data from Elasticsearch and the Sumo Logic output plugin to send data to Sumo Logic.
Create or modify telegraf.conf and copy and paste the text below:
```toml
[[inputs.elasticsearch]]
  servers = ["http://<USER_CHANGE_ME>:<PASS_CHANGE_ME>@localhost:9200"]
  http_timeout = "5s"
  local = true
  cluster_health = true
  cluster_stats = true
  cluster_stats_only_from_master = true
  indices_include = ["_all"]
  indices_level = "cluster"
  node_stats = ["indices", "os", "process", "jvm", "thread_pool", "fs", "transport", "http"]
  [inputs.elasticsearch.tags]
    environment = "dev_CHANGE_ME"
    component = "database"
    db_system = "elasticsearch"
    db_cluster = "elasticsearch_on_premise_CHANGE_ME"

[[outputs.sumologic]]
  url = "<URL Created in Step 3_CHANGEME>"
  data_format = "prometheus"
```
Enter values for the parameters marked with CHANGE_ME in the configuration above:
- In the input plugins section, that is [[inputs.elasticsearch]]:
  - servers - The URL of the Elasticsearch server. See this doc for more information on additional parameters for configuring the Elasticsearch input plugin for Telegraf.
- In the tags section, that is [inputs.elasticsearch.tags]:
  - environment - The deployment environment where the Elasticsearch cluster identified by the value of servers resides. For example dev, prod, or QA. While this value is optional, we highly recommend setting it.
  - db_cluster - Enter a name to identify this Elasticsearch cluster. This cluster name will be shown in the Sumo Logic dashboards.
- In the output plugins section, that is [[outputs.sumologic]]:
  - url - The HTTP Source URL created in step 3. See this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin.
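Before starting Telegraf, you can confirm that the user, password, and host you set in servers are valid by querying the Elasticsearch cluster health API directly (the credentials below are placeholders):

```shell
# Placeholders: use the same user, password, and host as in `servers`.
curl -s -u "<USER_CHANGE_ME>:<PASS_CHANGE_ME>" "http://localhost:9200/_cluster/health?pretty"
# A JSON response containing "cluster_name" and "status" confirms the endpoint and credentials work.
```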
Here's an explanation of additional values set by this Telegraf configuration:
- data_format - "prometheus" - In the output plugins section, that is [[outputs.sumologic]]. Metrics are sent in the Prometheus format to Sumo Logic.
- component - "database" - In the input plugins section, that is [[inputs.elasticsearch]]. This value is used by Sumo Logic apps to identify application components.
- For all other parameters, see this doc for more properties that can be configured in the Telegraf agent globally.
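After the placeholders are filled in, Telegraf's built-in test mode can run the inputs once and print the gathered metrics to stdout without sending anything to Sumo Logic. The config path below assumes a default Linux install:

```shell
# Gathers metrics from [[inputs.elasticsearch]] once, prints them, and exits.
telegraf --config /etc/telegraf/telegraf.conf --test
```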
At this point, Elasticsearch metrics should start flowing into Sumo Logic.
Configure Logs Collection
This section provides instructions for configuring log collection for Elasticsearch running in a non-Kubernetes environment for the Sumo Logic App for Elasticsearch.
By default, Elasticsearch logs are stored in a log file.
Local log files can be collected via Installed Collectors. The Installed Collector requires you to allow outbound traffic to Sumo Logic endpoints for collection to work. For detailed requirements for Installed Collectors, see this page.
Configure logging in Elasticsearch
Elasticsearch supports logging via local text log files. Elasticsearch uses Log4j 2 for logging, which provides six levels of verbosity. To select a level, set the logger level to one of:
- trace (the most detailed information, useful for development/debugging)
- debug (a lot of information, useful for troubleshooting)
- info (moderately verbose, ideal for production environments) - this is the default value
- warn (potentially harmful situations)
- error (error events that may still allow the application to run)
- fatal (severe errors that cause the application to abort)
Logging settings are located in log4j2.properties. Log levels can also be adjusted through logger.* settings in elasticsearch.yml.
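For example, the verbosity of an individual Elasticsearch logger can be raised with a logger.* override. This is a sketch that assumes a package-based install with the main config at /etc/elasticsearch/elasticsearch.yml:

```shell
# Append a logger override (example logger name), then restart for it to take effect.
echo 'logger.org.elasticsearch.index: debug' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch
```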
By default, Elasticsearch logs are stored in /var/log/elasticsearch/<Clustername>.log. The default directory for log files is set by the path.logs property in the elasticsearch.yml file.
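If you are unsure where your installation writes its logs, the effective location can be checked directly. The paths below assume a package-based install:

```shell
# path.logs in elasticsearch.yml controls where log files are written.
grep -E '^\s*path\.logs' /etc/elasticsearch/elasticsearch.yml

# List the log files in the default directory.
ls -l /var/log/elasticsearch/
```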
Configuring a Collector
To collect logs directly from the Elasticsearch machine, configure an Installed Collector.
Configuring a Source
For an Installed Collector
To collect logs directly from your Elasticsearch machine, use an Installed Collector and a Local File Source.
- Add a Local File Source.
- Configure the Local File Source fields as follows:
- Name. (Required)
- Description. (Optional)
- File Path (Required). Enter the path to the Elasticsearch log file. The files are typically located in /var/log/elasticsearch/<clustername>.log. If you are using a customized path, check the path.logs setting in the elasticsearch.yml file for this information.
- Source Host. Sumo Logic uses the hostname assigned by the OS unless you enter a different hostname.
- Source Category. Enter any string to tag the output collected from this Source, such as elasticsearch/Logs. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details see Best Practices.)
- Fields. Set the following fields:
- component = database
- db_system = elasticsearch
- db_cluster = <Your_Elasticsearch_Cluster_Name>
- environment = <Environment_Name>, such as Dev, QA or Prod.
Configure the Advanced section:
- Enable Timestamp Parsing. Select Extract timestamp information from log file entries.
- Time Zone. Choose the option, Ignore time zone from the log file and instead use, and then select your Elasticsearch Server’s time zone.
- Timestamp Format. The timestamp format is automatically detected.
- Encoding. Select UTF-8 (Default).
- Enable Multiline Processing. Select Detect messages spanning multiple lines.
- Infer Boundaries. Select Detect message boundaries automatically.
At this point, Elasticsearch logs should start flowing into Sumo Logic.