
Collect Metrics and Logs for AWS Elastic Load Balancing ULM - Classic

Collect Metrics for AWS Elastic Load Balancer Classic

To collect AWS Elastic Load Balancing Metrics, perform the following tasks:

  1. Configure a Hosted Collector.
  2. Add a CloudWatch Source for Metrics.
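
Before adding the CloudWatch Source, you can verify that your account is actually publishing Classic ELB metrics to CloudWatch. Below is a minimal boto3 sketch, assuming the standard AWS/ELB namespace and a placeholder region; it simply lists whatever ELB metrics CloudWatch currently holds.

import boto3

# List the Classic ELB metrics currently available in CloudWatch.
# AWS/ELB is the standard namespace for Classic Load Balancer metrics.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is a placeholder

paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="AWS/ELB"):
    for metric in page["Metrics"]:
        dimensions = {d["Name"]: d["Value"] for d in metric["Dimensions"]}
        print(metric["MetricName"], dimensions)

If this prints metrics such as RequestCount or Latency, the CloudWatch Source will have data to collect.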

Collect Logs for AWS Elastic Load Balancer Classic

Prerequisites

  • Enable Elastic Load Balancing logging in your AWS account, using the Sumo Logic instructions below. Logging is not enabled in AWS ELB by default. For more information, see the AWS ELB documentation.
  • Grant access to an IAM user by following these Sumo Logic instructions.
  • Confirm that logs are being delivered to the Amazon S3 bucket (one way to check is sketched below).
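
To confirm delivery, note that Classic ELB writes access logs under AWSLogs/<account ID>/elasticloadbalancing/<region>/... in your bucket. A minimal boto3 sketch; the bucket name and account ID are placeholders.

import boto3

# Check for recently delivered ELB access log objects in the S3 bucket.
# Classic ELB delivers logs under AWSLogs/<account-id>/elasticloadbalancing/<region>/...
s3 = boto3.client("s3")
response = s3.list_objects_v2(
    Bucket="my-elb-logs-bucket",                          # placeholder bucket name
    Prefix="AWSLogs/123456789012/elasticloadbalancing/",  # placeholder account ID
    MaxKeys=10,
)
for obj in response.get("Contents", []):
    print(obj["LastModified"], obj["Key"])

If nothing is listed after a full emit interval has elapsed, revisit the logging configuration below.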

To enable logging in AWS

  1. In the AWS Management Console, choose EC2 > Load Balancers.
  2. Under Access Logs, click Edit.
  3. In the Configure Access Logs dialog box, click Enable Access Logs, then choose an Interval and S3 bucket. This is the S3 bucket from which Sumo Logic will collect logs.
  4. Click Save.
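
If you prefer to script these console steps, the same setting can be applied through the Classic ELB API. A minimal boto3 sketch, with placeholder load balancer and bucket names (the bucket must already exist and allow ELB to write to it):

import boto3

# Enable access logging on a Classic Load Balancer, mirroring the console steps above.
elb = boto3.client("elb")  # "elb" is the Classic Load Balancer API client
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",             # placeholder load balancer name
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs-bucket",  # placeholder; must already exist
            "S3BucketPrefix": "elb-logs",          # optional key prefix
            "EmitInterval": 60,                    # publishing interval in minutes (5 or 60)
        }
    },
)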

Configure a Collector

Configure a Hosted Collector.
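
This step can also be scripted. The sketch below uses the Sumo Logic Collector Management API via the requests library; the collector name is a placeholder, the endpoint shown varies by deployment, and you authenticate with an access ID/key pair generated in Sumo Logic.

import requests

# Create a Hosted Collector through the Collector Management API.
response = requests.post(
    "https://api.sumologic.com/api/v1/collectors",  # endpoint varies by deployment
    auth=("<accessId>", "<accessKey>"),             # credentials from the Sumo Logic UI
    json={"collector": {
        "collectorType": "Hosted",
        "name": "aws-elb-classic",                  # placeholder collector name
        "description": "ELB Classic logs and metrics",
    }},
)
response.raise_for_status()
print(response.json()["collector"]["id"])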

Configure an ELB Source

When you create an AWS Source, you associate it with a Hosted Collector. Before creating the Source, identify the Hosted Collector you want to use, or create a new Hosted Collector. For instructions, see Configure a Hosted Collector.

These configuration instructions apply to log collection from all AWS Source types. Select the correct Source type for your Source in Step 3.

  1. In Sumo Logic, select Manage Data > Collection > Collection.
  2. On the Collectors page, click Add Source next to a Hosted Collector, either an existing one or one you have created for this purpose.
  3. Select your AWS Source type. 
  4. Enter a name to display for the new Source. Description is optional.
  5. For Bucket Name, enter the exact name of your organization's S3 bucket. Be sure to double-check the name as it appears in AWS.
  6. For Path Expression, enter the string that matches the S3 objects you'd like to collect. You can use one wildcard (*) in this string. Recursive path expressions use a single wildcard and do NOT use a leading forward slash (see the sketch after these steps for how a single wildcard matches). See About Amazon Path Expressions for details.
  7. Collection should begin. Choose or enter how far back you'd like to begin collecting historical logs. You can either:
    • Choose a predefined value from the dropdown list, ranging from "Now" to "72 hours ago" to "All Time", or
    • Enter a relative value. To enter a relative value, click the Collection should begin field and press the delete key on your keyboard to clear the field. Then, enter a relative time expression, for example, -1w. You can define when you want collection to begin in terms of months (M), weeks (w), days (d), hours (h), and minutes (m).
  8. For Source Category, enter any string to tag the output collected from this Source. (Category metadata is stored in a searchable field called _sourceCategory.)
  9. For Key ID, enter the AWS Access Key ID number granted to Sumo Logic. (See Granting access to an S3 bucket for more information.)
  10. For Secret Key, enter the AWS Secret Access Key Sumo Logic should use to access the S3 bucket. (See Granting access to an S3 bucket for more information.)
  11. For Scan Interval, use the default of 5 minutes. Alternatively, enter the frequency at which Sumo Logic will scan your S3 bucket for new data. To learn more about Scan Interval considerations, see About setting the S3 Scan Interval.
  12. Set any of the following under Advanced:
  • Enable Timestamp Parsing. This option is selected by default. If it's deselected, no timestamp information is parsed at all.
    • Time Zone. There are two options for Time Zone. You can use the time zone present in your log files, and then choose an option in case time zone information is missing from a log message. Or, you can have Sumo Logic completely disregard any time zone information present in logs by forcing a time zone. It's very important to have the proper time zone set, no matter which option you choose. If the time zone of logs can't be determined, Sumo Logic assigns them UTC; if the rest of your logs are from another time zone, your search results will be affected.
    • Timestamp Format. By default, Sumo Logic will automatically detect the timestamp format of your logs. However, you can manually specify a timestamp format for a Source. See Timestamps, Time Zones, Time Ranges, and Date Formats for more information.
  • Enable Multiline Processing. This option is enabled by default. See Collecting Multiline Logs for details on multiline processing and its options. Use this option if you're working with multiline messages (for example, Log4j logs or exception stack traces). Deselect it to avoid unnecessary processing when collecting single-message-per-line files (for example, Linux system.log). Choose one of the following:
    • Infer Boundaries. Enable when you want Sumo Logic to automatically attempt to determine which lines belong to the same message. If you deselect the Infer Boundaries option, you will need to enter a regular expression in the Boundary Regex field to use for detecting the entire first line of multiline messages.
    • Boundary Regex. You can specify the boundary between messages using a regular expression. Enter a regular expression that matches the entire first line of every multiline message in your log files.
  13. Create any Processing Rules you'd like for the AWS Source.
  14. When you are finished configuring the Source, click Save.
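
As noted in the Path Expression step above, a single wildcard matches recursively, including across "/" separators, and the expression has no leading forward slash. The illustration below uses Python's fnmatchcase, whose * behaves the same way for this purpose; the account ID and object keys are made-up examples.

from fnmatch import fnmatchcase

# Illustration only: one * in a path expression matches recursively,
# crossing "/" separators; there is no leading forward slash.
expression = "AWSLogs/123456789012/elasticloadbalancing/*"  # placeholder account ID

keys = [
    "AWSLogs/123456789012/elasticloadbalancing/us-east-1/2017/11/06/file.log",
    "AWSLogs/123456789012/cloudtrail/us-east-1/2017/11/06/file.json.gz",
]
for key in keys:
    print(key, "->", "collected" if fnmatchcase(key, expression) else "skipped")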

Sample Log Message

2017-11-06T23:20:38 stag-www-lb 250.38.201.246:56658 10.168.203.134:23662 0.007731 0.214433 0.000261 404 200 3194 123279 "GET https://stag-www.sumologic.net:443/json/v2/searchquery/3E7959EC4BA8AAC5/messages/raw?offset=29&length=15&highlight=true&_=1405591692470 HTTP/1.1" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:23.0) Gecko/20131011 Firefox/23.0" ECDHE-RSA-CAMELLIA256-SHA384 TLSv1.2
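
The Query Sample below extracts fields from this format with Sumo Logic's parse operator. As a cross-check, here is a rough Python sketch that splits the same sample line into the same field names; shlex handles the quoted request and user-agent fields, and the field list mirrors the parse statement rather than an official grammar.

import shlex

# Split one Classic ELB access log line into named fields. The quoted
# request and user-agent fields are handled by shlex.split.
line = ('2017-11-06T23:20:38 stag-www-lb 250.38.201.246:56658 '
        '10.168.203.134:23662 0.007731 0.214433 0.000261 404 200 3194 123279 '
        '"GET https://stag-www.sumologic.net:443/json/v2/searchquery/3E7959EC4BA8AAC5/messages/raw?offset=29&length=15&highlight=true&_=1405591692470 HTTP/1.1" '
        '"Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:23.0) Gecko/20131011 Firefox/23.0" '
        'ECDHE-RSA-CAMELLIA256-SHA384 TLSv1.2')

names = ["datetime", "ELB_Server", "client", "backend",
         "request_processing_time", "backend_processing_time",
         "response_processing_time", "elb_status_code", "backend_status_code",
         "received_bytes", "sent_bytes", "request", "user_agent",
         "ssl_cipher", "ssl_protocol"]

fields = dict(zip(names, shlex.split(line)))
print(fields["elb_status_code"], fields["client"].split(":")[0])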

Query Sample

4XX ELB Status by Client Location

_sourceCategory=*elb* 
| parse "* * * * * * * * * * * \"*\" \"*\" * *" as datetime, ELB_Server, client, backend, request_processing_time, backend_processing_time, response_processing_time, elb_status_code, backend_status_code, received_bytes, sent_bytes, request,user_agent,ssl_cipher,ssl_protocol
| parse field=request "* *://*:*/* HTTP" as method, protocol, domain, server_port, path nodrop
| parse field=client "*:*" as clientIP, port nodrop
| parse field=backend "*:*" as backendIP, backend_port nodrop
| fields - request, client, backend
| where (elb_status_code matches "4*")
| lookup latitude, longitude, country_code, country_name, region, city, postal_code, area_code, metro_code from geo://default on ip = clientIP
| count by latitude, longitude, country_code, country_name, region, city, postal_code, area_code, metro_code
| sort _count