Sumo Logic

Collect MySQL Logs and Metrics (Non-Kubernetes)

This page provides instructions for configuring log and metric collection for the Sumo Logic App for MySQL in non-Kubernetes environments.

Collection overview

We use Telegraf for MySQL metric collection and a Sumo Logic Installed Collector for collecting MySQL logs.

The diagram below illustrates the components of the MySQL collection in a non-Kubernetes environment for each database server. Telegraf runs on the same host as MySQL, and uses the MySQL input plugin to obtain MySQL metrics, and the Sumo Logic output plugin to send the metrics to Sumo Logic. MySQL logs are sent to a Sumo Logic Local File source on an Installed Collector.

non-k8s.png

Step 1: Configure Fields in Sumo Logic

Create the following Fields in Sumo Logic prior to configuring collection. This ensures that your logs and metrics are tagged with relevant metadata, which is required by the app dashboards. For information on setting up Fields, see the Fields topic.

  • component
  • environment
  • db_system
  • db_cluster

Step 2: Configure metric collection

  1. Configure a Hosted Collector. For instructions, see Configure a Hosted Collector.
  2. Configure an HTTP Logs and Metrics Source. For instructions, see HTTP Logs and Metrics Source. Make a note of the HTTP Source URL.
  3. Install Telegraf. For instructions, see Install Telegraf.
  4. Configure and start Telegraf. To collect metrics, Telegraf uses the MySQL input plugin to get data from MySQL and the Sumo Logic output plugin to send the data to Sumo Logic.
  5. Create or modify the telegraf.conf file, and copy the following into the relevant sections.

     [[inputs.mysql]]
       servers = ["user:passwd@tcp(127.0.0.1:3306)/?tls=false"]
       table_schema_databases = []
       gather_slave_status = true
       gather_global_variables = true
       gather_table_io_waits = true
       gather_table_lock_waits = true
       gather_index_io_waits = true
       gather_event_waits = true
       gather_file_events_stats = true
       gather_perf_events_statements = true
       [inputs.mysql.tags]
         environment = "prod"
         component = "database"
         db_system = "mysql"
         db_cluster = "your_mysql_cluster_name"

     [[outputs.sumologic]]
       url = "<URL Created in Step 2>"
       data_format = "prometheus"

     [agent]
       interval = "60s"
       flush_interval = "60s"

  6. Follow the instructions in Setting values in telegraf.conf below to configure the settings in the .conf file.
  7. After updating the telegraf.conf file, start or reload the telegraf service using the instructions in the Telegraf documentation.
  8. At this point, MySQL metrics should start flowing into Sumo Logic.

Setting values in telegraf.conf

Make the following updates to telegraf.conf.

  1. In the [[inputs.mysql]] section, set servers to the URL of your MySQL server. For information about additional input plugin configuration options, see the Readme for the MySQL input plugin.
  2. Configure the metrics to collect by uncommenting the following lines. For more information, see this section of the Readme. 
    [[inputs.mysql]]
     table_schema_databases = []
     gather_slave_status = true
     gather_global_variables = true
     gather_table_io_waits = true
     gather_table_lock_waits = true
     gather_index_io_waits = true
     gather_event_waits = true
     gather_file_events_stats = true
     gather_perf_events_statements = true
  3. In the [inputs.mysql.tags] section:
    1. environment. Specify the deployment environment where the MySQL cluster identified by the value of servers resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it.
    2. db_cluster. Enter a name to uniquely identify the MySQL cluster. This cluster name will be shown in the Sumo Logic dashboards.
  4. In the [[outputs.sumologic]] section, set url to the HTTP Source URL created in Step 2. For information about additional output plugin configuration options, see Configure Telegraf Output Plugin for Sumo Logic.
  5. In the [agent] section, set interval and flush_interval to "60s" to collect metrics every 60 seconds.

There are additional configuration options that you should not modify, as changing them will prevent the MySQL app from functioning correctly. The settings you should not modify are:

  • data_format = "prometheus", in the output plugins section, causes metrics to be sent in Prometheus format to Sumo Logic.
  • component = "database", in the input plugins section, is used by the Sumo Logic app to identify application components.
  • db_system = "mysql", in the input plugins section, identifies the database system.

For information about properties that can be configured globally in the Telegraf agent, see the Configuration documentation for Telegraf. 
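To make the data_format = "prometheus" setting concrete, here is a sketch of roughly what one metric line in Prometheus exposition format looks like when Telegraf attaches the tags configured above as labels. The metric name mysql_queries and the sample value are illustrative, not a guaranteed schema:

```python
# Illustrative sketch (not Sumo Logic or Telegraf code): render one metric
# in Prometheus exposition format, with tags serialized as labels.

def prometheus_line(name, labels, value):
    """Render one metric in Prometheus exposition format."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = prometheus_line(
    "mysql_queries",  # hypothetical metric name for illustration
    {"db_cluster": "your_mysql_cluster_name", "db_system": "mysql",
     "component": "database", "environment": "prod"},
    42,
)
print(line)
# mysql_queries{component="database",db_cluster="your_mysql_cluster_name",db_system="mysql",environment="prod"} 42
```

Because the tags travel with every metric as labels, the app dashboards can filter by db_cluster and environment.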

Step 3: Configure log collection

This section provides instructions for configuring collection of logs for MySQL running on a non-Kubernetes environment. 

MySQL logs are stored in log files.

Sumo Logic supports collecting logs via a local log file. Local log files can be collected by Sumo Logic Installed Collectors, which requires you to allow outbound traffic to Sumo Logic endpoints for collection to work.

Configure MySQL to log to a local file(s)

MySQL logs written to a log file can be collected via the Local File Source of a Sumo Logic Installed Collector.
To configure the MySQL log file(s), locate your local my.cnf configuration file in the database directory.

  1. Open my.cnf in a text editor.
  2. Set the following parameters in the [mysqld] section:

    [mysqld]

    log_error = /var/log/mysql/error.log

    slow_query_log = 1
    slow_query_log_file = /var/log/mysql/mysql-slow.log
    long_query_time = 2


    Notes

    Error Logs. By default, error logs are enabled and are written to the file specified by the log_error key.

    Slow Query Logs. slow_query_log=1 enables logging of slow queries to the file specified by slow_query_log_file. Setting long_query_time=2 will cause queries that take more than 2 seconds to execute to be logged. The default value of long_query_time is 10 seconds.

    General Query Logs. We don't recommend enabling general_log for performance reasons. These logs are not used by the Sumo Logic MySQL App.

  3. Save the my.cnf file.
  4. Restart the MySQL server:
    sudo mysql.server restart
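As a quick sanity check, the [mysqld] fragment above is simple enough to parse with Python's stdlib configparser (a sketch only; real my.cnf files can use features configparser does not handle, such as !include directives or bare option flags):

```python
import configparser

# The [mysqld] fragment from the step above, inlined as a string for illustration.
MY_CNF = """
[mysqld]
log_error = /var/log/mysql/error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
"""

parser = configparser.ConfigParser()
parser.read_string(MY_CNF)
mysqld = parser["mysqld"]

# Confirm the slow query log is enabled and the threshold is 2 seconds.
assert mysqld["slow_query_log"] == "1"
assert mysqld.getint("long_query_time") == 2
print(mysqld["slow_query_log_file"])  # /var/log/mysql/mysql-slow.log
```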

Configure an Installed Collector

Set up an Installed Collector. 

Configuring a Local File Source for error logs

To add a Local File Source for MySQL error logs

  1. Add a Local File Source in the Installed Collector configured in the previous step. Configure the Local File Source fields as follows:
  2. Name. (Required)
  3. Description. (Optional)
  4. File Path (Required). Enter the path to your error.log. The file is typically located at /var/log/mysql/error.log. If you are using a customized path, check the my.cnf file for this information.
  5. Source Host. Sumo Logic uses the hostname assigned by the OS unless you enter a different host name.
  6. Source Category. Enter any string to tag the output collected from this Source, such as Prod/MySQL/Error. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see Best Practices.)
  7. Fields. Set the following fields. For more information, see Fields.
    • component = database
    • db_system = mysql
    • db_cluster = <your_mysql_cluster_name>
    • environment = <Environment_Name>, such as dev, qa, or prod.
      fields.png
  8. In the Advanced Options for Logs section:
    1. Enable Timestamp Parsing. Select "Extract timestamp information from log file entries".
    2. Time Zone. Select "Use time zone from log file, if none is detected use Use Collector Default".
    3. Timestamp Format. Select "Automatically detect the format".
    4. Encoding. Select "UTF-8" (Default).
    5. Enable Multiline Processing
      • Detect Messages Spanning Multiple Lines. True
      • Infer Boundaries - Detect message boundaries automatically. False
      • Boundary Regex.
        • If error messages start like 2021-05-11T10:12:55.421100Z, use the following boundary regex: ^\d{4}-\d{2}-\d{2}T\d{1,2}:\d{1,2}:\d{1,2}.*
        • If error messages start like 210511 11:20:40, use the following boundary regex: ^\d{6}\s+\d{1,2}:\d{1,2}:\d{1,2}.*
  9. Click Save.

At this point, MySQL error logs should start flowing into Sumo Logic.
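A quick way to confirm that a boundary regex matches your error log format is to test it locally against sample lines. This sketch uses Python's re module with the two regexes given above; the sample log lines are illustrative:

```python
import re

# The two boundary regexes from the Advanced Options step above.
ISO_BOUNDARY = r"^\d{4}-\d{2}-\d{2}T\d{1,2}:\d{1,2}:\d{1,2}.*"
LEGACY_BOUNDARY = r"^\d{6}\s+\d{1,2}:\d{1,2}:\d{1,2}.*"

# Sample first lines in the two MySQL error log timestamp styles.
iso_line = "2021-05-11T10:12:55.421100Z 0 [Note] Server startup complete"
legacy_line = "210408 00:00:19 [Note] /usr/sbin/mysqld: ready for connections."
continuation = "    at some continuation line"

assert re.match(ISO_BOUNDARY, iso_line)
assert re.match(LEGACY_BOUNDARY, legacy_line)
# Continuation lines match neither regex, so the collector's multiline
# processing folds them into the preceding message.
assert not re.match(ISO_BOUNDARY, continuation)
assert not re.match(LEGACY_BOUNDARY, continuation)
```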

Configuring a Local File Source for slow query logs

To add a Local File Source for MySQL slow query log

  1. Add a Local File Source in the Installed Collector configured in the previous step. Configure the Local File Source fields as follows:
  2. Name. (Required)
  3. Description. (Optional)
  4. File Path (Required). Enter the path to your mysql-slow.log. The file is typically located at /var/log/mysql/mysql-slow.log. If you are using a customized path, check the my.cnf file for this information.
  5. Source Host. Sumo Logic uses the hostname assigned by the OS unless you enter a different host name.
  6. Source Category. Enter any string to tag the output collected from this Source, such as Prod/MySQL/Slow. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see Best Practices.)
  7. Fields. Set the following fields. For more information, see Fields.
    • component = database
    • db_system = mysql
    • db_cluster = <your_mysql_cluster_name>
    • environment = <Environment_Name>, such as dev, qa, or prod.
      fields.png
  8. In the Advanced Options for Logs section:
    1. Enable Timestamp Parsing. Select "Extract timestamp information from log file entries".
    2. Time Zone. Select "Use time zone from log file, if none is detected use Use Collector Default".
    3. Timestamp Format. Select "Automatically detect the format".
    4. Encoding. Select "UTF-8" (Default).
    5. Enable Multiline Processing
      • Detect Messages Spanning Multiple Lines. True
      • Infer Boundaries - Detect message boundaries automatically. False
      • Boundary Regex.  ^#\sTime:\s.*
  9. Click Save.

At this point, MySQL slow query logs should start flowing into Sumo Logic.
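The slow query boundary regex works because every slow query log entry begins with a "# Time:" header. This sketch checks that behavior against an illustrative entry (the query and values are invented for the example):

```python
import re

# Boundary regex for slow query logs from the step above: each logical
# message starts at a "# Time:" header line.
SLOW_BOUNDARY = r"^#\sTime:\s.*"

sample_entry = [
    "# Time: 2021-05-11T10:12:55.421100Z",
    "# User@Host: appuser[appuser] @ localhost [127.0.0.1]",
    "# Query_time: 3.152937  Lock_time: 0.000123 Rows_sent: 10  Rows_examined: 50000",
    "SET timestamp=1620727975;",
    "SELECT * FROM orders WHERE status = 'open';",
]

# Only the header line starts a new message; the collector folds the
# remaining lines into the same multiline log message.
starts = [bool(re.match(SLOW_BOUNDARY, line)) for line in sample_entry]
print(starts)  # [True, False, False, False, False]
```

Because the whole entry arrives as one message, the dashboard queries can parse Query_time, User@Host, and the SQL statement out of a single log event.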

Sample log message

210408 00:00:19 [Note] /usr/sbin/mysqld: ready for connections.

Sample query

This sample query is from the Logs panel of the MySQL - Logs dashboard.

Query String - "Top 10 Slow Queries by Average Execution Time"

db_system=mysql db_cluster={{db_cluster}} "User@Host" "Query_time"  
| parse regex "(?<query_block># User@Host:[\S\s]+?SET timestamp=\d+;[\S\s]+?;)" multi
| parse regex field=query_block "# User@Host: \S+?\[(?<user>\S*?)\] @ (?<host_name>\S+)\s\[(?<ip_addr>\S*?)\]" nodrop // Pttrn1-vrtn1
| parse regex field=query_block "# User@Host: \S+?\[(?<user>\S*?)\] @\s+\[(?<ip_addr>\S*?)\]\s+Id:\s+(?<Id>\d{1,10})" nodrop // Pttrn1-vrtn2
| parse regex field=query_block "# User@Host: \S+?\[(?<user>\S*?)\] @ (?<host_name>\S+)\s\[(?<ip_addr>\S*?)\]\s+Id:\s+(?<Id>\d{1,10})" // Pttrn1-vrtn3
| parse regex field=query_block "Schema: (?<schema>(?:\S*|\s)?)\s*Last_errno[\s\S]+?Query_time:\s+(?<query_time>[\d.]*)\s+Lock_time:\s+(?<lock_time>[\d.]*)\s+Rows_sent:\s+(?<rows_sent>[\d.]*)\s+Rows_examined:\s+(?<rows_examined>[\d.]*)\s+Rows_affected:\s+(?<rows_affected>[\d.]*)\s+Rows_read:\s+(?<rows_read>[\d.]*)\n" nodrop // Pttrn2-vrtn1
| parse regex field=query_block "Schema: (?<schema>(?:\S*|\s)?)\s*Last_errno[\s\S]+?\s+Killed:\s+\d+\n" nodrop // Pttrn2-vrtn2
| parse regex field=query_block "Query_time:\s+(?<query_time>[\d.]*)\s+Lock_time:\s+(?<lock_time>[\d.]*)\s+Rows_sent:\s+(?<rows_sent>[\d]*)\s+Rows_examined:\s+(?<rows_examined>[\d]*)\s+Rows_affected:\s+(?<rows_affected>[\d]*)\s+" nodrop // Pttrn2-vrtn3
| parse regex field=query_block "Query_time:\s+(?<query_time>[\d.]*)\s+Lock_time:\s+(?<lock_time>[\d.]*)\s+Rows_sent:\s+(?<rows_sent>[\d]*)\s+Rows_examined:\s+(?<rows_examined>[\d]*)" // Pttrn2-vrtn4
| parse regex field=query_block "# Bytes_sent:\s+(?<bytes_sent>\d*)\s+Tmp_tables:\s+(?<tmp_tables>\d*)\s+Tmp_disk_tables:\s+(?<temp_disk_tables>\d*)\s+Tmp_table_sizes:\s+(?<tmp_table_sizes>\d*)\n" nodrop // Pttrn3-vrtn1
| parse regex field=query_block "# Bytes_sent:\s+(?<bytes_sent>\d*)\n" nodrop // Pttrn3-vrtn2
| parse regex field=query_block "SET timestamp=(?<set_timestamp>\d*);(?:\\n|\n)(?<sql_cmd>[\s\S]*);" nodrop
| fields -query_block
| avg(query_time) as avg_time, sum(query_time) as total_time, min(query_time) as min_time, max(query_time) as max_time, avg(rows_examined) as avg_rows_examined, avg(rows_sent) as avg_rows_sent, avg(Lock_Time) as avg_lock_time, count as frequency group by sql_cmd, db_cluster
| sort by avg_time | limit 10
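To see what the first User@Host parse regex in this query extracts, here is a local sketch applying an equivalent pattern to a sample header line. Note that Sumo Logic's `(?<name>...)` named-group syntax becomes `(?P<name>...)` in Python; the sample user, host, and IP values are invented for illustration:

```python
import re

# Python equivalent of the query's Pttrn1-vrtn1 parse regex:
# "# User@Host: \S+?\[(?<user>\S*?)\] @ (?<host_name>\S+)\s\[(?<ip_addr>\S*?)\]"
USER_HOST = re.compile(
    r"# User@Host: \S+?\[(?P<user>\S*?)\] @ (?P<host_name>\S+)\s\[(?P<ip_addr>\S*?)\]"
)

header = "# User@Host: appuser[appuser] @ dbhost01 [10.0.0.12]"
m = USER_HOST.search(header)
print(m.group("user"), m.group("host_name"), m.group("ip_addr"))
# appuser dbhost01 10.0.0.12
```

The query's nodrop-chained variations exist because MySQL emits this header in slightly different shapes (with or without a host name, with or without a trailing Id), and each variation captures whichever shape is present.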