
Collect PostgreSQL Logs and Metrics for Non-Kubernetes environments

We use Telegraf for PostgreSQL metric collection and the Sumo Logic Installed Collector for collecting PostgreSQL logs. The diagram below illustrates the components of PostgreSQL collection in a non-Kubernetes environment for each database server. Telegraf runs on the same system as PostgreSQL and uses the PostgreSQL Extensible input plugin to obtain PostgreSQL metrics and the Sumo Logic output plugin to send the metrics to Sumo Logic. PostgreSQL logs are sent to a Local File Source on an Installed Collector.

This section provides instructions for configuring log and metric collection for the Sumo Logic App for PostgreSQL. Follow the instructions below to set up collection for a given node in a PostgreSQL cluster:

  1. Configure Metrics Collection

    1. Configure a Hosted Collector
    2. Configure an HTTP Logs and Metrics Source
    3. Install Telegraf
    4. Configure and start Telegraf
    5. (Optional) Collecting metrics from multiple databases
  2. Configure Logs Collection 

    1. Configure logging in PostgreSQL
    2. Configure an Installed Collector
    3. Configuring a Local File Source

Step 1 Configure Metrics Collection 

1. Configure a Hosted Collector

To create a new Sumo Logic hosted collector, perform the steps in the Configure a Hosted Collector section of the Sumo Logic documentation.

2. Configure an HTTP Logs and Metrics Source

Create a new HTTP Logs and Metrics Source in the hosted collector created above by following these instructions. Make a note of the HTTP Source URL.


 

3. Install Telegraf

Use the following steps to install Telegraf on each database server node.

4. Configure and start Telegraf

As part of collecting metrics data with Telegraf, we will use the PostgreSQL Extensible input plugin to get data from PostgreSQL and the Sumo Logic output plugin to send data to Sumo Logic.

Create or modify telegraf.conf in /etc/telegraf/telegraf.d/ and copy and paste the text from this file.

Enter values for the following parameters (marked with CHANGE_ME) in the downloaded file:

  • In the input plugins section, which is [[inputs.postgresql_extensible]]:
    • address - Specify the db user, db name, and password used for connecting to the database. This is the user you created for monitoring the PostgreSQL database in Step 1. For example: "host=localhost dbname=postgres user=postgres password=mypassword sslmode=disable"
    • In the tags section, which is [inputs.postgresql_extensible.tags]:
      • environment - This is the deployment environment where the PostgreSQL cluster resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it.
      • db_cluster - Enter a name to identify this PostgreSQL cluster. This cluster name will be shown in the Sumo Logic dashboards. For example: analytics-dbcluster, webapp-dbcluster
  • In the output plugins section, which is [[outputs.sumologic]]:
    • url - This is the HTTP Source URL created in step 2. Please see this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin.
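Putting these values together, the relevant sections of telegraf.conf might look like the following sketch. The credentials, tag values, and URL shown are placeholders, not values to use verbatim:

```toml
[[inputs.postgresql_extensible]]
  ## Placeholder credentials -- use the monitoring user you created
  address = "host=localhost dbname=postgres user=postgres password=mypassword sslmode=disable"
  [inputs.postgresql_extensible.tags]
    environment = "prod"                 # placeholder: dev, prod, or qa
    db_cluster = "analytics-dbcluster"   # placeholder cluster name
    component = "database"               # do not modify
    db_system = "postgresql"             # do not modify

[[outputs.sumologic]]
  ## Placeholder -- use the HTTP Source URL you noted earlier
  url = "https://collectors.sumologic.com/receiver/v1/http/CHANGE_ME"
  data_format = "prometheus"             # do not modify
```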

Here’s an explanation of additional values set by this Telegraf configuration. Please do not modify these values, as doing so will cause the Sumo Logic apps to not function correctly.

  • data_format = "prometheus" - In the output plugins section, which is [[outputs.sumologic]]. This indicates that metrics should be sent in the Prometheus format to Sumo Logic.
  • component = "database" - In the tags section, which is [inputs.postgresql_extensible.tags]. This value is used by Sumo Logic apps to identify application components.
  • db_system = "postgresql" - In the tags section, which is [inputs.postgresql_extensible.tags]. This value identifies the database system.

For other optional parameters like databases and max_lifetime, please refer to this plugin documentation for configuring the postgresql_extensible input plugin for Telegraf.
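For illustration, those optional parameters are set inside the same input section; the values below are examples only, not recommendations:

```toml
[[inputs.postgresql_extensible]]
  address = "host=localhost dbname=postgres user=postgres password=mypassword sslmode=disable"
  ## Example only: limit collection to these databases
  databases = ["postgres", "app_db"]
  ## Example only: maximum age of a connection before it is closed
  max_lifetime = "0s"
```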

Here is an example sample_telegraf.conf file.

For all other parameters, please see this doc for more properties that can be configured in the Telegraf agent globally.

Once you have finalized your telegraf.conf file, you can start or reload the Telegraf service using the instructions in this doc.

At this point, PostgreSQL metrics should start flowing into Sumo Logic.

5. (Optional) Collecting metrics from multiple databases

If you want to monitor multiple databases, you can copy and paste the text from this file and create another [[inputs.postgresql_extensible]] section. This section contains only those queries that are meant to be run for each database.

Here is an example sample_telegraf_multiple.conf
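As a sketch, a second input section pointing at an additional database might look like this. The database name, credentials, and tag values are placeholders, and the per-database queries from the referenced file are only indicated in outline:

```toml
## Second input section for another database on the same server
[[inputs.postgresql_extensible]]
  address = "host=localhost dbname=app_db user=postgres password=mypassword sslmode=disable"
  [inputs.postgresql_extensible.tags]
    environment = "prod"
    db_cluster = "analytics-dbcluster"
    component = "database"
    db_system = "postgresql"
  ## The per-database queries from the referenced file go here, e.g.:
  # [[inputs.postgresql_extensible.query]]
  #   sqlquery = "SELECT * FROM pg_stat_user_tables"
  #   withdbname = false
  #   tagvalue = ""
```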

Step 2 Configure Logs Collection for each database server

Perform the steps outlined below for each PostgreSQL database server. 

1. Configure logging in PostgreSQL

Prerequisites:

  • Locate your local PostgreSQL postgresql.conf configuration file in the database data_directory. For more information, see the PostgreSQL File Locations documentation. By default, it’s located at /var/lib/pgsql/<version>/data/postgresql.conf. You can run the SHOW config_file command inside your server’s psql shell to get the location.
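For example, from a shell on the database server (assuming the postgres OS user exists and psql is on the PATH), you can ask the running server directly:

```shell
# Print the path of the active configuration file
sudo -u postgres psql -tAc "SHOW config_file;"
```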

After determining the location of the configuration file, modify the logging parameters in postgresql.conf:

  1. Connect to the database server (using SSH) in a terminal window.
  2. Open the postgresql.conf configuration file.
  3. Under the ERROR REPORTING AND LOGGING section of the file, use the following config parameters. For more information on these parameters, click here.
log_destination = 'stderr'
logging_collector = on
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_truncate_on_rotation = off
log_rotation_age = 1d
log_min_duration_statement = 250
log_connections = on
log_duration = on
log_hostname = on
log_timezone = 'UTC'
log_min_messages = 'WARNING'
log_line_prefix = '%m [%p] %q%u@%d '

4. Save the postgresql.conf file and restart the PostgreSQL server:

sudo service postgresql restart
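The log_filename value set above uses strftime escapes. As a quick sketch (not part of the PostgreSQL setup itself), you can preview the file name a given pattern produces with the date command, which expands the same escapes:

```shell
# Preview the file name produced by
# log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
date +"postgresql-%Y-%m-%d_%H%M%S.log"
```

This can help confirm which files the Local File Source path expression in the next steps needs to match.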

2. Configure an Installed Collector 

To add an Installed Collector, perform the steps as defined on the page Configure an Installed Collector.

3. Configuring a Local File Source 

To add a Local File Source for PostgreSQL, do the following:

  1. Add a Local File Source in the installed collector configured in the previous step.
  2. Configure the Local File Source fields as follows:
  3. Name. (Required)
  4. Description. (Optional)
  5. File Path (Required). Enter the path to your log file. By default, PostgreSQL log files are located in /var/lib/pgsql/<version>/data/log/*.log.
  6. Source Host. Sumo Logic uses the hostname assigned by the OS unless you enter a different hostname.
  7. Source Category. Enter any string to tag the output collected from this Source, such as PostgreSQL/Logs. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details see Best Practices.)
  8. Fields. Set the following fields:
    • component = database
    • db_system = postgresql
    • db_cluster = <Your_Postgresql_Cluster_Name>. For example: analytics-dbcluster, webapp-dbcluster
    • environment = <Environment_Name>. For example: dev, prod, or qa.
  9. Configure the Advanced section:
    • Enable Timestamp Parsing. Select Extract timestamp information from log file entries.
    • Time Zone. Use the timezone from log file option.
    • Timestamp Format. The timestamp format is automatically detected.
    • Encoding. Select UTF-8 (Default).
    • Enable Multiline Processing. Detect messages spanning multiple lines
      • Select Infer Boundaries - Detect message boundaries automatically
  10. Click Save.

Here’s a sample source.json:

{
  "api.version":"v1",
  "source":{
    "name":"PostgreSQL_Logs_Source",
    "category":"/PostgreSQL/logs",
    "automaticDateParsing":true,
    "multilineProcessingEnabled":true,
    "useAutolineMatching":true,
    "forceTimeZone":false,
    "filters":[],
    "cutoffTimestamp":0,
    "encoding":"UTF-8",
    "fields":{
      "environment":"dev",
      "db_cluster":"analytics_cluster",
      "component":"database",
      "db_system":"postgresql"
    },
    "pathExpression":"/var/lib/pgsql/12/data/log/*.log",
    "blacklist":[],
    "sourceType":"LocalFile"
  }
}


At this point, PostgreSQL logs should start flowing into Sumo Logic.

Sample Log Messages


2021-04-01 08:30:20.002 UTC [11916] postgres@postgres LOG:  connection authorized: user=postgres database=postgres