
Collect Metrics and Logs for Amazon Redshift ULM

This page has instructions for setting up collection of logs and metrics for the Amazon Redshift ULM app.

Step 1. Plan source categories

Before you configure the log and metric sources for the Amazon Redshift app, decide on the source category you will assign to each.  A hierarchical approach allows you to make use of wildcards when you perform searches. For example:

  • For the AWS S3 source for Amazon Redshift Audit logs collection, you could specify a source category of AWS/Redshift/Audit.
  • For the AWS CloudTrail source for Redshift CloudTrail Events, you could specify a source category of AWS/CloudTrail.
  • For the AWS CloudWatch source for Redshift metrics, you could specify a source category of AWS/Metric/Redshift.
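With a hierarchy like the examples above, a single wildcard scopes a search to all Redshift-related data, and it will also match any categories you add later under the same prefix. The query below is a minimal illustration that assumes the example source categories listed above:

 _sourceCategory=AWS/Redshift/*
 | count by _sourceCategory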

Step 2. Enable Amazon Redshift Audit logging

In this step, you enable audit logging for Amazon Redshift. 

  1. Enable database audit logging. For background information, see Database Audit Logging. For instructions on enabling audit logging, see Configuring Auditing Using the Console.
  2. Enable user activity logging. For the user activity log, you must also enable the enable_user_activity_logging database parameter. If you enable only the audit logging feature, but not the associated parameter, the database audit logs will log information for only the connection log and user log, but not for the user activity log. The enable_user_activity_logging parameter is disabled (false) by default, but you can set it to true to enable the user activity log. For more information, see Amazon Redshift Parameter Groups.

    Create a new parameter group with the required parameter values, and then modify the cluster to use the new parameter group. If you want to use an existing parameter group instead, you can modify it with the AWS CLI, as shown in the sketch below. For more information, see Amazon Redshift Parameter Groups.
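If you prefer the AWS CLI to the console, the commands below are a minimal sketch of the same steps. The parameter group name (redshift-audit-params) and cluster identifier (my-redshift-cluster) are placeholders; enable_user_activity_logging is a static parameter, so the cluster must be rebooted before it takes effect.

 # Placeholder names: redshift-audit-params, my-redshift-cluster
 aws redshift create-cluster-parameter-group \
   --parameter-group-name redshift-audit-params \
   --parameter-group-family redshift-1.0 \
   --description "Parameter group with user activity logging enabled"

 # Enable the user activity log parameter
 aws redshift modify-cluster-parameter-group \
   --parameter-group-name redshift-audit-params \
   --parameters ParameterName=enable_user_activity_logging,ParameterValue=true

 # Attach the parameter group to the cluster and reboot so the static parameter takes effect
 aws redshift modify-cluster \
   --cluster-identifier my-redshift-cluster \
   --cluster-parameter-group-name redshift-audit-params
 aws redshift reboot-cluster --cluster-identifier my-redshift-cluster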

Step 3. Configure AWS S3 source for Amazon Redshift Audit logs collection

  1. Configure a Hosted Collector.
  2. To your Hosted Collector, add an AWS S3 Source.
    1. Name. Enter a name to display for the new Source.
    2. Description. Enter an optional description.
    3. S3 Region. Select the Amazon Region for your Redshift Audit Log S3 bucket.
    4. Bucket Name. Enter the exact name of your Redshift Audit Log S3 bucket.
    5. Path Expression. Enter the string that matches the S3 objects you'd like to collect. You can use a wildcard (*) in this string. (DO NOT use a leading forward slash. See Amazon Path Expressions.)
    6. Source Category. AWS/Redshift/Audit
    7. Access Key ID and Secret Access Key. Enter your Amazon Access Key ID and Secret Access Key.
    8. Scan Interval. Use the default of 5 minutes. Alternately, enter the frequency Sumo Logic will scan your S3 bucket for new data.
    9. Enable Timestamp Parsing. Select the checkbox.
    10. Time Zone. Select Ignore time zone from log file and instead use, and select UTC.
    11. Timestamp Format. Select Automatically detect the format.
    12. Enable Multiline Processing. Select the checkbox, and select Infer Boundaries.
    13. Click Save.
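After you save the source and the first scan interval elapses, a quick search against the new source category confirms that audit log data is arriving. The query below assumes the example source category from Step 1:

 _sourceCategory=AWS/Redshift/Audit
 | count by _sourceName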

Step 4. Configure AWS CloudTrail source for Redshift CloudTrail Events 

  1. Configure a Hosted Collector.
  2. To your Hosted Collector, add an AWS CloudTrail Source.
    1. Name. Enter a name to display for the new Source.
    2. Description. Enter an optional description.
    3. S3 Region. Select the Amazon Region for your CloudTrail Redshift S3 bucket.
    4. Bucket Name. Enter the exact name of your CloudTrail Redshift S3 bucket.
    5. Path Expression. Enter the string that matches the S3 objects you'd like to collect. You can use a wildcard (*) in this string. (DO NOT use a leading forward slash. See Amazon Path Expressions.) The S3 bucket name is not part of the path, so don't include the bucket name when you set the Path Expression.
    6. Source Category. Enter a source category. For example, AWS/CloudTrail.
    7. Access Key ID and Secret Access Key. Enter your Amazon Access Key ID and Secret Access Key.
    8. Scan Interval. Use the default of 5 minutes. Alternately, enter the frequency Sumo Logic will scan your S3 bucket for new data.
    9. Enable Timestamp Parsing. Select the checkbox.
    10. Time Zone. Select Ignore time zone from log file and instead use, and select UTC.
    11. Timestamp Format. Select Automatically detect the format.
    12. Enable Multiline Processing. Select the checkbox, and select Infer Boundaries.
    13. Click Save.
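To verify that Redshift CloudTrail events are being collected, filter on the Redshift event source and group by event name. This is a minimal sketch that assumes the example source category above; it uses the json operator to extract eventName from each CloudTrail record:

 _sourceCategory=AWS/CloudTrail "redshift.amazonaws.com"
 | json field=_raw "eventName" as eventName
 | count by eventName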

Step 5. Configure AWS CloudWatch source for Redshift metrics

  1. Configure a Hosted Collector.
  2. To your Hosted Collector, add an Amazon CloudWatch Source for Metrics.
    1. Name. Enter a name to display for the new Source.
    2. Description. Enter an optional description.
    3. Regions. Select your Amazon Regions for Amazon Redshift.
    4. Namespaces. Select AWS/Redshift.
    5. Source Category. Enter a source category. For example, AWS/Metric/Redshift.
    6. Access Key ID and Secret Access Key. Enter your Amazon Access Key ID and Secret Access Key.
    7. Scan Interval. Use the default of 5 minutes, or enter the frequency Sumo Logic will scan your CloudWatch Sources for new data.
    8. Click Save.
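Once the CloudWatch source begins polling, you can query Redshift metrics from the Metrics tab. The selector below is a minimal example that assumes the source category above; CPUUtilization and ClusterIdentifier are standard metric and dimension names in the AWS/Redshift namespace:

 _sourceCategory=AWS/Metric/Redshift metric=CPUUtilization
 | avg by ClusterIdentifier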

Sample log messages  

Amazon Redshift Connection Audit Log Sample

 dir="ltr">authenticated |Mon, 21 May 2018 01:38:01:601|::ffff:127.0.0.1 |32828 |15523|dev |rdsdb |password 
 |0| | |0| | | |dir="ltr">authentication failure |Mon, 21 May 2018 05:20:10:123|::ffff:10.11.12.16 |66790 
 |98031|vendor |himanshu |password |0|TLSv1.2 |ECDHE-RSA-AES256-SHA384 |0| | | |
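The connection log is pipe-delimited. The query below is a sketch that assumes the standard Amazon Redshift connection log column order (event, record time, remote host, remote port, pid, database, user name, authentication method); adjust the field names if your layout differs:

 _sourceCategory=AWS/Redshift/Audit
 | parse regex "^(?<event>[^|]*)\|(?<recordTime>[^|]*)\|(?<remoteHost>[^|]*)\|(?<remotePort>[^|]*)\|(?<pid>[^|]*)\|(?<dbName>[^|]*)\|(?<userName>[^|]*)\|(?<authMethod>[^|]*)\|"
 | count by event, userName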

Amazon Redshift User Activity Audit Log Sample

 '2018-05-21T06:00:09Z UTC [ db=prod_sales user=duc pid=99753 userid=95 xid=6728324 ]' LOG: create table SumoProdbackUp.organization as 
 (select * from SumoProd.simpleuser)
'2018-05-21T06:00:09Z UTC [ db=vendor user=ankit pid=36616 userid=53 xid=2956702 ]' LOG: DELETE FROM SumoProd.employee WHERE id = 38;
'2018-05-21T06:20:09Z UTC [ db=dev user=himanshu pid=64458 userid=35 xid=5143208 ]' LOG: drop user testuser3

Amazon CloudTrail Redshift Log Sample

 dir="ltr">{"eventVersion":"1.04","userIdentity":{"type":"IAMUser","principalId":"AIDA1234567890WUABG5Q",
 "arn":"arn:aws:iam::951234567838:user/Nitin","accountId":"951234567838","accessKeyId":"ASIA12345678UPV5IWTQ",
 "userName":"Nitin","sessionContext":{"attributes":{"mfaAuthenticated":"true","creationDate":"2018-05-11T14:08:12Z"}},
 "invokedBy":"signin.amazonaws.com"},"eventTime":"2018-05-11T17:37:06Z","eventSource":"redshift.amazonaws.com","eventName":
 "RebootCluster","awsRegion":"us-west-1","sourceIPAddress":"114.140.11.57","userAgent":"signin.amazonaws.com","requestParameters":
 {"clusterIdentifier":"sumologicdevbi"},"responseElements":{"nodeType":"dc2.large","preferredMaintenanceWindow":"mon:10:00-mon:10:30",
 "clusterStatus":"rebooting","clusterCreateTime":"Mar 13, 2018 4:49:17 AM","vpcId":"vpc-4333942c","enhancedVpcRouting":false,
 "endpoint":{"port":5439,"address":"sumologicdev-bi.cklqobrc1234.us-west-1.redshift.amazonaws.com"},"masterUsername":
 "sumologicdevbi","clusterSecurityGroups":[],"pendingModifiedValues":{},"dBName":"sumologicdevbi","availabilityZone":
 "us-west-1c","clusterVersion":"1.0","encrypted":false,"publiclyAccessible":true,"tags":[],"clusterParameterGroups":
 [{"clusterParameterStatusList":[{"parameterApplyStatus":"pending-reboot","parameterName":"spectrum_enable_enhanced_vpc_routing"},
 {"parameterApplyStatus":"pending-reboot","parameterName":"enable_user_activity_logging"},{"parameterApplyStatus":"pending-reboot",
 "parameterName":"max_cursor_result_set_size"},{"parameterApplyStatus":"pending-reboot","parameterName":"query_group"},
 {"parameterApplyStatus":"pending-reboot","parameterName":"datestyle"},{"parameterApplyStatus":"pending-reboot","parameterName":
 "extra_float_digits"},{"parameterApplyStatus":"pending-reboot","parameterName":"search_path"},{"parameterApplyStatus":"pending-reboot",
 "parameterName":"statement_timeout"},{"parameterApplyStatus":"pending-reboot","parameterName":"wlm_json_configuration"},{"parameterApplyStatus":
 "pending-reboot","parameterName":"require_ssl"},{"parameterApplyStatus":"pending-reboot","parameterName":"use_fips_ssl"}],"parameterGroupName":
 "auditclusterparamgroup","parameterApplyStatus":"pending-reboot"}],"allowVersionUpgrade":true,"automatedSnapshotRetentionPeriod":1,
 "numberOfNodes":1,"vpcSecurityGroups":[{"status":"active","vpcSecurityGroupId":"sg-1234d441"}],"iamRoles":[{"iamRoleArn":
 "arn:aws:iam::951234567838:role/RedshiftS3ReadOnly","applyStatus":"in-sync"}],"clusterIdentifier":"sumologicdevbi",
 "clusterSubnetGroupName":"redshift"},"requestID":"ec7759c5-5541-11e8-947b-614ed503d341","eventID":"4b0a0389-b04e-4553-8946-e71d0c3cfd46",
 "eventType":"AwsApiCall","recipientAccountId":"951234567838"}

Query Sample

Top Users

dir="ltr">_sourceCategory=*/AWS/Redshift/Audit LOG
| parse regex "^\'(?<time>\d+\-\d+\-\d+T\d+:\d+:\d+Z \w+)\s+\[\s*(?<parameters>[^\]]+)\]\'\s+LOG:\s+(?<sqlstatement>[^;]*)"
| parse field=parameters "db=* user=* pid=* userid=* xid=*" as dbName, username, pid, userid, xid nodrop
| parse regex field = sqlstatement "^/\*(?:.|[\r\n])*?\*/\s+(?<command>\w+)\s*" nodrop
| parse regex field = sqlstatement "^(?<command>\w+)\s*" nodrop
| count as eventCount by username
| top 10 username by eventCount, username asc