
Deploy with Terraform

This page explains how to deploy the AWS Observability solution using a Terraform script.

To set up the AWS Observability solution using Terraform, complete the following steps:

Additional parameter overrides for sources and app content are available in the Appendix.


About the Solution script

The AWS Observability solution script is organized into the following groups of files and folders:

  • Main Configuration file: main.auto.tfvars

  • The Resource Creation file main.tf internally invokes two modules: 

    • app-module: This module sets up all the AWS Observability apps and associated content, such as fields, field extraction rules, metric rules, monitors, and the Explore hierarchy, in your Sumo Logic account.

    • source-module: This module sets up the hosted collector, sources (for logs and metrics), and the tags on Sumo Logic sources that the solution requires.

  • System Files:

    • versions.tf: Provides the Terraform block that specifies the required provider version and required Terraform version for this configuration. See Lock and Upgrade Provider Versions for more information.

    • providers.tf: Declares the providers that this configuration requires so Terraform can install and use them. See Providers Configuration Language for more information.

    • variables.tf: Provides parameters for a Terraform module, allowing aspects of the module to be customized without altering the module's own source code, and allowing modules to be shared between different configurations. See Input Variables for more information.

    • output.tf: Provides specific return values for a Terraform module. See Output Values for more information.
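As an illustration, a versions.tf for a configuration like this one typically has the following shape. This is a sketch only; the version constraints shown are placeholders, and the repository ships the authoritative pins:

```hcl
# Illustrative sketch of a versions.tf; the constraint values are
# placeholders, not the repository's actual pins.
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    sumologic = {
      source = "SumoLogic/sumologic"
    }
    aws = {
      source = "hashicorp/aws"
    }
  }
}
```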

Step 1: Set up the Terraform environment

Before you run the Terraform script, perform the following actions on a server machine of your choice:

  1. Install Terraform version 0.13.0 or later.
    To check the installed Terraform version, run the following command:

$ terraform --version

  2. Install the latest version of curl.

  3. Install Python version 3.7 or later.

  4. Install the latest version of the jq command-line JSON parser. This is required to run the fields.sh script.
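The version prerequisites above can be checked with a small script. The sketch below is a hypothetical helper, not part of the repository; it compares dot-separated version strings with `sort -V` and assumes the tools print versions in their usual formats:

```shell
#!/bin/sh
# Hypothetical pre-flight check; not part of the repository.
# version_ge succeeds when $1 >= $2 (numeric, dot-separated versions).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# `terraform --version` prints a first line like "Terraform v1.5.0".
tf_version="$(terraform --version 2>/dev/null | head -n 1 | sed 's/^Terraform v//')"
if version_ge "${tf_version:-0}" "0.13.0"; then
  echo "Terraform version OK: ${tf_version}"
else
  echo "Terraform 0.13.0 or later is required" >&2
fi

# The same helper works for the Python 3.7 requirement.
py_version="$(python3 --version 2>/dev/null | sed 's/^Python //')"
if version_ge "${py_version:-0}" "3.7"; then
  echo "Python version OK: ${py_version}"
else
  echo "Python 3.7 or later is required" >&2
fi
```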

Step 2: Configure the Terraform script

  1. Clone the repository https://github.com/SumoLogic/sumologic-solution-templates:

$ git clone https://github.com/SumoLogic/sumologic-solution-templates

  2. Initialize the Terraform working directory by navigating to the directory sumologic-solution-templates/aws-observability-terraform and running terraform init:

$ terraform init

This installs the required Terraform providers: Null, Sumo Logic, AWS, Time, and Random.

  3. As part of configuring the AWS Observability solution, fields need to be created in Sumo Logic. A script is provided to import any fields that already exist in Sumo Logic into the Terraform configuration. To run it, navigate to the sumologic-solution-templates/aws-observability-terraform folder and do the following:
    • Set the following environment variables using the commands below:
      export SUMOLOGIC_ENV="YOUR_SUMOLOGIC_DEPLOYMENT"
      export SUMOLOGIC_ACCESSID="YOUR_SUMOLOGIC_ACCESS_ID"
      export SUMOLOGIC_ACCESSKEY="YOUR_SUMOLOGIC_ACCESS_KEY"

    • Run fields.sh using this command:
      $ sh fields.sh

  4. Navigate to the sumologic-solution-templates/aws-observability-terraform folder. Set values for the following mandatory variables in the main.auto.tfvars file.
Parameter Description

sumologic_environment: Enter au, ca, de, eu, jp, us2, in, fed, or us1. See Sumo Logic Endpoints and Firewall Security for more information on Sumo Logic deployments.

sumologic_access_id: Sumo Logic Access ID. See Create an access key in the Access Keys topic for more information.

sumologic_access_key: Sumo Logic Access Key used for Sumo Logic API calls. See Sumo Logic Access Key for more information.

sumologic_organization_id: Your organization ID, which you can find on the Preferences page in the Sumo Logic UI. For more information, see the Preferences Page topic. The org ID is used to configure the IAM Role for Sumo Logic AWS Sources.

aws_account_alias: The name/alias for the AWS environment from which you are collecting data. This name appears in the Sumo Logic Explorer view, metrics, and logs. Leave this blank if you are going to deploy the solution in multiple AWS accounts. Do not include special characters in the alias.
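Putting the variables above together, a filled-in main.auto.tfvars might look like the following sketch; every value is a placeholder to replace with your own:

```hcl
# Sketch of main.auto.tfvars; all values below are placeholders.
sumologic_environment     = "us2"
sumologic_access_id       = "YOUR_SUMOLOGIC_ACCESS_ID"
sumologic_access_key      = "YOUR_SUMOLOGIC_ACCESS_KEY"
sumologic_organization_id = "YOUR_SUMOLOGIC_ORGANIZATION_ID"

# Leave blank ("") if deploying the solution in multiple AWS accounts;
# do not include special characters.
aws_account_alias = "production"
```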

Step 3: Determine which AWS Account/Regions to Deploy

You have three options for configuring the AWS account/region. This section explains how to reference the AWS account profile(s) you set up in your AWS account(s) in the providers.tf file, which Terraform uses to authenticate with your AWS account(s).

Option 1: Deploy to a single AWS account and region 

To deploy the AWS Observability Solution for one AWS account and region combination based on an AWS account profile defined in the AWS CLI, configure providers in the providers.tf file.

In the providers.tf file, create a provider for the AWS region you want to monitor AWS services for. This provider will be associated with a profile from the AWS CLI that is associated with an AWS account.

The Terraform script uses "us-east-1" and the active AWS CLI profile by default. If you want to use a different region or another AWS CLI profile, change the providers.tf file as shown below. Provide an alias that tells Terraform how to identify this account-region combination.

Task Configuration
Example collection setup for the us-east-2 region and for the production AWS account profile.
# Region us-east-2, for AWS Account profile production
provider "aws" {
  profile = "production"
  region  = "us-east-2"
  alias   = "production-us-east-2"
}
Option 2: Deploy to Multiple Regions within an AWS account

Use this option to install the AWS Observability Solution for multiple regions within a given AWS account. To do so, add providers for each AWS region in the providers.tf as shown below.

1. Given that we have multiple providers, we need to provide an alias that tells Terraform how to identify each account-region combination. 

Task Configuration
Comment out the existing provider information in providers.tf.

Comment out the following using #:

#provider "aws" {
#  region = "us-east-1"
  #
  # Below properties should be added when you would like to onboard more than one region and account
  # More Information regarding AWS Profile can be found at -
  #
  # Access configuration
  #
  # profile = <Provide a profile as setup in AWS CLI>
  #
  # Terraform alias
  #
  # alias = <Provide a terraform alias for the aws provider. For eg :- production-us-east-1>
#}

Note: Do not change or remove the provider "sumologic" section:

provider "sumologic" {
  environment = var.sumologic_environment
  access_id   = var.sumologic_access_id
  access_key  = var.sumologic_access_key
}

 

Add a provider for each region, replacing the placeholder content that matches your AWS CLI account profile, AWS region of choice and an alias that tells Terraform how to identify this account-region combination.

Note: The AWS CLI Account profile will be the same across all regions.

# AWS Account profile <AWS_PROFILE_NAME>, Region <REGION>, Alias <ALIAS>
provider "aws" {
  profile = "<AWS_PROFILE_NAME>"
  region  = "<REGION>"
  alias   = "<ALIAS>"
}
Example provider configuration for a production AWS account profile in us-east-1 and us-east-2 regions with the appropriate aliases.
# AWS Account profile production, Region us-east-1, Alias production-us-east-1
provider "aws" {
  profile = "production"
  region  = "us-east-1"
  alias   = "production-us-east-1"
}
# AWS Account profile production, Region us-east-2, Alias production-us-east-2
provider "aws" {
  profile = "production"
  region  = "us-east-2"
  alias   = "production-us-east-2"
}

2. To see output messages that show the deployment progress, add output code in the output.tf file for each module you added to main.tf in the earlier step.

Task Configuration
Add this output code to the output.tf file for each module added in the earlier step, replacing the placeholder module name:
output "<ALIAS>" {
  value       = module.<ALIAS>
  description = "All outputs related to collection and sources."
}
Example output configuration for modules with module names production-us-east-1 and production-us-east-2:
output "production-us-east-1" {
  value       = module.production-us-east-1
  description = "All outputs related to collection and sources."
}
 
output "production-us-east-2" {
  value       = module.production-us-east-2
  description = "All outputs related to collection and sources."
}

Option 3: Deploy to multiple AWS accounts and regions

Use this option to install the AWS Observability Solution for multiple accounts and regions.

Add providers for each AWS account/region combination and configure outputs as shown below:

1. Given that we have multiple providers, we need to provide an alias that tells Terraform how to identify each account-region combination.

Task Configuration
Comment out the existing provider information in providers.tf.

Comment out the following using # :

#provider "aws" {
#  region = "us-east-1"
  #
  # Below properties should be added when you would like to onboard more than one region and account
  # More Information regarding AWS Profile can be found at -
  #
  # Access configuration
  #
  # profile = <Provide a profile as setup in AWS CLI>
  #
  # Terraform alias
  #
  # alias = <Provide a terraform alias for the aws provider. For eg :- production-us-east-1>
#}

Note: Do not change or remove the provider "sumologic" section.

provider "sumologic" {
  environment = var.sumologic_environment
  access_id   = var.sumologic_access_id
  access_key  = var.sumologic_access_key
}
Add a provider code sample for each account-region combination, replacing the placeholder content for your AWS CLI account profile, AWS region of choice, and an alias that tells Terraform how to identify this account-region combination:

Note: The AWS CLI account profile is the same across all regions of a given AWS account.
# Region <REGION>, AWS Account profile <AWS_PROFILE_NAME>, Alias <ALIAS>
provider "aws" {
  profile = "<AWS_PROFILE_NAME>"
  region  = "<REGION>"
  alias   = "<ALIAS>"
}
Example provider configuration for the production AWS account profile in the us-east-1 and us-east-2 regions and a development AWS account profile in the us-west-1 region with the appropriate aliases.
# Region us-east-1, AWS Account profile production
provider "aws" {
  profile = "production"
  region  = "us-east-1"
  alias   = "production-us-east-1"
}
# Region us-east-2, AWS Account profile production
provider "aws" {
  profile = "production"
  region  = "us-east-2"
  alias   = "production-us-east-2"
}
# Region us-west-1, AWS Account profile development
provider "aws" {
  profile = "development"
  region  = "us-west-1"
  alias   = "development-us-west-1"
}

2. To see output messages that show the provisioning progress, add output code in the output.tf file for each module you added to main.tf in the earlier step.

Task Configuration
Add this output code to the output.tf file for each module added in the earlier step, replacing the placeholder module name.
output "<ALIAS>" {
  value       = module.<ALIAS>
  description = "All outputs related to collection and sources."
}
Example output configuration for modules with module names production-us-east-1, production-us-east-2 and development-us-west-1.
output "production-us-east-1" {
  value       = module.production-us-east-1
  description = "All outputs related to collection and sources."
}
 
output "production-us-east-2" {
  value       = module.production-us-east-2
  description = "All outputs related to collection and sources."
}
 
output "development-us-west-1" {
  value       = module.development-us-west-1
  description = "All outputs related to collection and sources."
}

Step 4: Configure Providers in the main.tf file

Configure providers for collection using the Terraform source-module.

Task Configuration
Comment out the existing module "collection-module" section present in the main.tf.

Comment out the following code with # :

#module "collection-module" {
#  source = "./source-module"
 
#  aws_account_alias         = var.aws_account_alias
#  sumologic_organization_id  = var.sumologic_organization_id
#   access_id                 = var.sumologic_access_id
#   access_key                = var.sumologic_access_key
#   environment               = var.sumologic_environment  
#}

Note: Do not change the module "sumo-module" section.

module "sumo-module" {
  source                   = "./app-modules"
  access_id                = var.sumologic_access_id
  access_key               = var.sumologic_access_key
  environment              = var.sumologic_environment
  JSON_file_directory_path = dirname(path.cwd)
}
Add this module code sample in the main.tf file for the first region provider configured in the providers.tf file, replacing the placeholder content with provider aliases.

The aws_account_alias differs depending on single or multiple accounts. See the following examples.
module "<ALIAS>" {
  source = "./source-module"
  providers = { aws = aws.<ALIAS> }
 
  aws_account_alias = <var.aws_account_alias OR "account alias">
  sumologic_organization_id = var.sumologic_organization_id
  access_id    = var.sumologic_access_id
  access_key   = var.sumologic_access_key
  environment  = var.sumologic_environment 
 
}
Single Account

Example module configuration for single account and multiple region config:

Here we have a production AWS account profile in us-east-1 and us-east-2 regions with the provider aliases production-us-east-1 and production-us-east-2.

Note:

  • Since this is a single account, we can use the global account alias which is defined in var.aws_account_alias.
  • A collector is created per account. If you deploy the solution for multiple regions in the same account, use the same collector that was created for the first region of the account for each subsequent region module, as shown in the code.
module "production-us-east-1" {
  source = "./source-module"
  providers = { aws = aws.production-us-east-1 }
 
  aws_account_alias = var.aws_account_alias
  sumologic_organization_id = var.sumologic_organization_id
  access_id    = var.sumologic_access_id
  access_key   = var.sumologic_access_key
  environment  = var.sumologic_environment 
}
 
module "production-us-east-2" {
  source = "./source-module"
  providers = { aws = aws.production-us-east-2 }
 
  aws_account_alias = var.aws_account_alias
  sumologic_organization_id = var.sumologic_organization_id
  access_id    = var.sumologic_access_id
  access_key   = var.sumologic_access_key
  environment  = var.sumologic_environment  
  
# Use the same collector created for the first region of the production account.
  sumologic_existing_collector_details = {
    create_collector = false
    collector_id = module.production-us-east-1.sumologic_collector["collector"].id
  }
}
Multiple Accounts

Example module configuration for configuring multiple accounts in multiple regions:

Here we have a production AWS account profile in us-east-1 and us-east-2 regions with the provider aliases production-us-east-1 and production-us-east-2 and a development AWS account profile in us-west-1 with the provider alias development-us-west-1.


Note:

  • In this case the AWS Account alias for each account/region module needs to be specified within the module via the parameter aws_account_alias. The variable var.aws_account_alias is not used in this case.
  • A collector is created per account. If you deploy the solution for multiple regions in the same account, you should use the same collector that was created for the first region of the account for each subsequent region module as shown in the code.
module "production-us-east-1" {
  source = "./source-module"
  providers = { aws = aws.production-us-east-1 }
 
  aws_account_alias = "production-us-east-1"
  sumologic_organization_id = var.sumologic_organization_id
  access_id    = var.sumologic_access_id
  access_key   = var.sumologic_access_key
  environment  = var.sumologic_environment 
}
 
module "production-us-east-2" {
  source = "./source-module"
  providers = { aws = aws.production-us-east-2 }
 
  aws_account_alias = "production-us-east-1"
  sumologic_organization_id = var.sumologic_organization_id
  access_id    = var.sumologic_access_id
  access_key   = var.sumologic_access_key
  environment  = var.sumologic_environment 

# Use the same collector created for the first region of the production account.
  sumologic_existing_collector_details = {
    create_collector = false
    collector_id = module.production-us-east-1.sumologic_collector["collector"].id
  }
}
 
module "development-us-west-1" {
  source = "./source-module"
  providers = { aws = aws.development-us-west-1}
 
  aws_account_alias = "development-us-west-1"
  sumologic_organization_id = var.sumologic_organization_id
  access_id    = var.sumologic_access_id
  access_key   = var.sumologic_access_key
  environment  = var.sumologic_environment 
}

Step 5: Override Default Parameter Values

By default, all other parameters are set up to automatically collect logs and metrics and to install apps and monitors. If needed, you can configure or override additional collection and app parameters in the sumologic-solution-templates/aws-observability-terraform/main.tf file. To perform overrides, see Override Collection Parameters and Override Content Parameters.

Step 6: Deploy the AWS Observability Solution

Deploy the AWS Observability Solution using the Sumo Logic Terraform Script.

Navigate to the directory sumologic-solution-templates/aws-observability-terraform and execute the following commands:

$ terraform validate
$ terraform plan
$ terraform apply

Step 7: Share the Dashboards

Once the AWS Observability solution has been set up, an "AWS Observability Apps" folder of dashboards will be available in the personal library of the user that the Sumo Logic access keys belong to. Share this folder with the appropriate members of your Sumo Logic account to give them access. See Share Content to learn how to share folders.

Uninstalling the Solution

To uninstall the AWS Observability solution deployed using Terraform, navigate to the directory sumologic-solution-templates/aws-observability-terraform and execute the command:

$ terraform destroy

This will destroy all resources and configuration previously set up.

Appendix

Override Source Parameters

Source Parameters define how collectors and their sources are set up in Sumo Logic. If needed, override the desired parameter in the module that you defined earlier for each AWS account and region in the sumologic-solution-templates/aws-observability-terraform/main.tf file. 

The examples below demonstrate the following overrides:

  • Example 1 overrides the cloudtrail_source_details parameter to collect CloudTrail logs from a user-provided S3 bucket in which CloudTrail logs are already stored. The default behavior is to create a new S3 bucket, forward CloudTrail logs to it, and collect CloudTrail logs from that newly created bucket.

  • Example 2 overrides the auto_enable_access_logs variable to skip automatic access log enablement for an Application Load Balancer resource. By default, it is set to "Both", which automatically enables access logging for new and existing ALB resources.

Default Example
module "collection-module" {
 source = "./source-module"
 aws_account_alias         = var.aws_account_alias
 sumologic_organization_id = var.sumologic_organization_id
 access_id    = var.sumologic_access_id
 access_key   = var.sumologic_access_key
 environment  = var.sumologic_environment 
}
Override Example 1: Override the cloudtrail_source_details parameter

Override the cloudtrail_source_details parameter to collect CloudTrail logs from a user-provided S3 bucket. CloudTrail logs in this case are already stored in the user-provided S3 bucket.

module "collection-module" {
 source = "./source-module"
 aws_account_alias         = var.aws_account_alias
 sumologic_organization_id = var.sumologic_organization_id
 access_id    = var.sumologic_access_id
 access_key   = var.sumologic_access_key
 environment  = var.sumologic_environment 
 # Enable Collection of Cloudtrail logs
 collect_cloudtrail_logs   = true
 # Collect Cloudtrail logs, from user provided s3 bucket
 # Don't create a s3 bucket, use bucket details provided by the user. Don't force destroy bucket
 cloudtrail_source_details = {
   source_name     = "CloudTrail Logs us-east-1"
   source_category = "aws/observability/cloudtrail/logs"
   description     = "This source is created using Sumo Logic terraform AWS Observability module to collect AWS cloudtrail logs."
   bucket_details = {
       create_bucket        = false
       bucket_name          = "aws-observability-logs"
       path_expression      = "AWSLogs/*/CloudTrail/*/*"
       force_destroy_bucket = false
   }
   fields = {}
 }
}

Override Example 2: Override the auto_enable_access_logs parameter

Set the auto_enable_access_logs parameter to "None" to skip automatic access log enablement for an Application Load Balancer.

module "collection-module" {
 source = "./source-module"
 aws_account_alias         = var.aws_account_alias
 sumologic_organization_id = var.sumologic_organization_id
 access_id    = var.sumologic_access_id
 access_key   = var.sumologic_access_key
 environment  = var.sumologic_environment 
 auto_enable_access_logs = "None"
}

 

The following table provides a list of all source parameters and their default values. See the sumologic-solution-templates/aws-observability-terraform/source-module/variables.tf file for complete code.

Configure collection of CloudWatch metrics
collect_cloudwatch_metrics

Description

Select the kind of CloudWatch Metrics Source to create

Options available are:

  • "CloudWatch Metrics Source" - Creates Sumo Logic AWS CloudWatch Metrics Sources.
  • "Kinesis Firehose Metrics Source" (Recommended) -  Creates a Sumo Logic AWS Kinesis Firehose for Metrics Source. Note: This new source has cost and performance benefits over the CloudWatch Metrics Source and is therefore recommended.
  • "None" - Skips the installation of both Sumo Logic metrics sources.

Default Value

"Kinesis Firehose Metrics Source"
Default JSON
collect_cloudwatch_metrics = "Kinesis Firehose Metrics Source"
cloudwatch_metrics_source_details

Description

Provide details for the Sumo Logic Cloudwatch Metrics source. If not provided, then defaults will be used.

  • limit_to_namespaces - Enter a comma-delimited list of the namespaces to be used by the AWS CloudWatch metrics source.

See this list of AWS services that publish CloudWatch metrics for details.

Default Value

{
 "bucket_details": {
   "bucket_name": "aws-observability-random-id",
   "create_bucket": true,
   "force_destroy_bucket": true
 },
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS Cloudwatch metrics.",
 "fields": {},
 "limit_to_namespaces": [
   "AWS/ApplicationELB",
   "AWS/ApiGateway",
   "AWS/DynamoDB",
   "AWS/Lambda",
   "AWS/RDS",
   "AWS/ECS",
   "AWS/ElastiCache",
   "AWS/ELB",
   "AWS/NetworkELB",
   "AWS/SQS",
   "AWS/SNS"
 ],
 "source_category": "aws/observability/cloudwatch/metrics",
 "source_name": "CloudWatch Metrics (Region)"
}
Override Example JSON

The following override example collects only the DynamoDB and Lambda namespaces with source_category set to "aws/observability/cloudwatch/metrics/us-east-1":

cloudwatch_metrics_source_details = {
 "bucket_details": {
   "bucket_name": "",
   "create_bucket": true,
   "force_destroy_bucket": true
 },
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS Cloudwatch metrics.",
 "fields": {},
 "limit_to_namespaces": [
   "AWS/DynamoDB",
   "AWS/Lambda"
  ],
 "source_category": "aws/observability/cloudwatch/metrics/us-east-1",
 "source_name": "CloudWatch Metrics us-east-1"
}
cloudwatch_metrics_source_url

Description

Use this parameter if you are already collecting CloudWatch metrics and want to use an existing Sumo Logic collector source. You need to provide the URL of the existing Sumo Logic CloudWatch Metrics Source. If the URL is for an AWS CloudWatch Metrics source, the "account" and "accountid" metadata fields will be added to the Source. If the URL is for the Kinesis Firehose for Metrics source, the "account" field will be added to the Source. For information on how to determine the URL, see View or Download Source JSON Configuration.

Default Value

""
Override Example JSON

The following is a default example:

cloudwatch_metrics_source_url=""

The following is a specific Source URL example:

collect_cloudwatch_metrics = "Kinesis Firehose Metrics Source"
cloudwatch_metrics_source_url="https://api.sumologic.com/api/v1/collectors/1234/sources/9876"
Configure collection of Elastic Load Balancer Access Logs

Amazon Elastic Load Balancing offers several types of load balancers. AWS Observability supports access log collection for Application Load Balancers only.

collect_elb_logs

Description

You have the following options:

  • true - Ingest Load Balancer logs into Sumo Logic. Creates a Sumo Logic Log Source that collects application load balancer logs from an existing bucket or a new bucket.
    • If true, configure "elb_source_details" to ingest load balancer logs.
  • false - You are already ingesting load balancer logs into Sumo Logic.

When enabling ALB logs (setting to true), you need to provide elb_source_details with configuration information including the bucket name and path expression.

Default Value

true
Example JSON
collect_elb_logs = true
elb_source_details

Description

Provide details for the Sumo Logic ELB source. If not provided, then defaults will be used.

To enable collection of application load balancer logs, set collect_elb_logs to true and provide configuration information for the bucket. Use the default value code and replace default values.

  • If create_bucket is false, provide the name of an existing S3 bucket where you would like to store load balancer logs. If this is empty, a new bucket will be created in the region.
  • If create_bucket is true, the script creates a bucket. The bucket name has to be unique; this is achieved internally by generating a random ID and appending it to the "aws-observability-" prefix.
  • path_expression - Required if the existing bucket above is already configured to receive ALB access logs. If this is blank, Sumo Logic will store logs under the path expression *AWSLogs/*/elasticloadbalancing/*/*

Default Value

{
 "source_name": "Elb Logs (Region)",
 "source_category": "aws/observability/alb/logs",
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS ELB logs.", 
 "bucket_details": {
   "bucket_name": "aws-observability-random-id",
   "create_bucket": true,
   "force_destroy_bucket": true,
   "path_expression": "*AWSLogs/<ACCOUNT-ID>/elasticloadbalancing/<REGION-NAME>/*"
 },
 "fields": {}
}
Override Example JSON

The following override example uses the bucket "example-loadbalancer-logs" with path expression "*AWSLogs/*/elasticloadbalancing/*/*":

# Enable Collection of ALB Access logs source
collect_elb_logs   = true
# Collect ALB Access logs, from user provided s3 bucket
# Don't create a s3 bucket, use bucket details provided by the user. Don't force destroy bucket
elb_source_details = {
 source_name     = "Elb Logs us-east-1"
 source_category = "aws/observability/alb/logs"
 description     = "This source is created using the Sumo Logic terraform AWS Observability module to collect AWS ELB logs."
 bucket_details = {
     create_bucket        = false
     bucket_name          = "example-loadbalancer-logs"
     path_expression      = "*AWSLogs/*/elasticloadbalancing/*/*"
     force_destroy_bucket = false
 }
 fields = {}
}
auto_enable_access_logs

Description

Enable Application Load Balancer (ALB) Access logging. You have the following options:

  • New - Automatically enables access logging for newly created ALB resources to collect logs for ALB resources. This does not affect ALB resources already collecting logs.
  • Existing - Automatically enables access logging for existing ALB resources to collect logs for ALB resources.
  • Both - Automatically enables access logging for new and existing ALB resources.
  • None - Skips automatic access logging enablement for ALB resources.

Default Value

"Both"
Override Example JSON

Example JSON for newly created ALB resources only.

auto_enable_access_logs="New"
elb_log_source_url

Description

Required if you are already collecting ALB logs. Provide the existing Sumo Logic ALB Source API URL. The account, accountid, region and namespace fields will be added to the Source. For information on how to determine the URL, see View or Download Source JSON Configuration.

Default Value

""
Example JSON

The following is a default example: 

elb_log_source_url=""
 

The following is a specific Source URL example:

collect_elb_logs = true
elb_log_source_url="https://api.sumologic.com/api/v1/collectors/1234/sources/9879"
Configure collection of CloudTrail logs
collect_cloudtrail_logs

Description

Create a Sumo Logic CloudTrail Logs Source. You have the following options:

  • true - Ingest CloudTrail logs into Sumo Logic. Creates a Sumo Logic CloudTrail Log Source that collects CloudTrail logs from an existing bucket or a new bucket.
    • If true, configure "cloudtrail_source_details" to ingest CloudTrail logs
  • false - You are already ingesting CloudTrail logs into Sumo Logic.

When enabling CloudTrail logs (setting to true), you need to provide cloudtrail_source_details with configuration information.

Default Value

true
Example JSON
collect_cloudtrail_logs = true
cloudtrail_source_details

Description

Provide details for the Sumo Logic CloudTrail source. If not provided, then defaults will be used.

To enable, set collect_cloudtrail_logs to true and provide configuration information for the bucket. Use the default value code and replace default values.

  • If create_bucket is false, provide the name of an existing S3 bucket where you would like to store CloudTrail logs. If this is empty, a new bucket will be created in the region.
  • If create_bucket is true, the script creates a bucket. The bucket name has to be unique; this is achieved internally by generating a random ID and appending it to the "aws-observability-" prefix.
  • path_expression - Required if the existing bucket above is already configured to receive CloudTrail logs. If this is blank, Sumo Logic will store logs under the path expression AWSLogs/*/CloudTrail/*/*

Default Value

{
 "bucket_details": {
   "bucket_name": "aws-observability-<random-id>",
   "create_bucket": true,
   "force_destroy_bucket": true,
   "path_expression": "AWSLogs/<ACCOUNT-ID>/CloudTrail/<REGION-NAME>/*"
 },
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS cloudtrail logs.",
 "fields": {},
 "source_category": "aws/observability/cloudtrail/logs",
 "source_name": "CloudTrail Logs (Region)"
}
Override Example JSON

The following override example uses the bucket "aws-observability-logs" with the path expression "AWSLogs/*/CloudTrail/*/*":

# Enable Collection of Cloudtrail logs
collect_cloudtrail_logs   = true
# Collect Cloudtrail logs, from user provided s3 bucket
# Don't create a s3 bucket, use bucket details provided by the user. Don't force destroy bucket
cloudtrail_source_details = {
 source_name     = "CloudTrail Logs us-east-1"
 source_category = "aws/observability/cloudtrail/logs"
 description     = "This source is created using Sumo Logic terraform AWS Observability module to collect AWS cloudtrail logs."
 bucket_details = {
     create_bucket        = false
     bucket_name          = "aws-observability-logs"
     path_expression      = "AWSLogs/*/CloudTrail/*/*"
     force_destroy_bucket = false
 }
 fields = {}
}
cloudtrail_source_url

Description

Required if you are already collecting CloudTrail logs. Provide the existing Sumo Logic CloudTrail Source API URL. The account field will be added to the Source. For information on how to determine the URL, see View or Download Source JSON Configuration.

Default Value

""
Example JSON

The following is a default example: 

cloudtrail_source_url = ""

The following is a specific Source URL example:

collect_cloudtrail_logs = true
cloudtrail_source_url="https://api.sumologic.com/api/v1/collectors/1234/sources/9877"
Configure collection of CloudWatch logs
collect_cloudwatch_logs

Description

Select the type of Sumo Logic CloudWatch Logs Sources to create. You have the following options:

  • "Lambda Log Forwarder" - Creates a Sumo Logic CloudWatch Log Source that collects CloudWatch logs via a Lambda function.
  • "Kinesis Firehose Log Source" - Creates a Sumo Logic Kinesis Firehose Log Source to collect CloudWatch logs.
  • "None" - Skips installation of both sources.

Default Value

"Kinesis Firehose Log Source"
Default JSON
collect_cloudwatch_logs = "Kinesis Firehose Log Source"
cloudwatch_logs_source_details

Description

Provide details for the Sumo Logic CloudWatch Logs source. If not provided, defaults will be used.

For bucket_details (used with the Kinesis Firehose Log Source):

  • If create_bucket is false, provide the name of an existing S3 bucket where you would like to store CloudWatch logs. If this is empty, a new bucket will be created.
  • If create_bucket is true, the script creates a bucket. The bucket name must be unique; this is achieved internally by generating a random ID and appending it to the "aws-observability-" prefix.

For lambda_log_forwarder_config (used with Lambda Log Forwarder):

  • email_id: Provide an email address to receive alerts. You will receive a confirmation email after the deployment completes; follow the instructions in that email to validate the address.
  • include_log_group_info: Set to true to include logGroup/logStream values in logs. For AWS Lambda logs, this must be set to true.
  • log_format: For Lambda, set this to "Others".
  • log_stream_prefix: Enter a comma-separated list of logStream name prefixes to filter by logStream. Note that this is separate from the logGroup: it sends only the matching logStreams within the subscribed CloudWatch logGroup(s), and the logGroup(s) still need to be subscribed to the created Lambda function.
  • workers: Number of Lambda function invocations for CloudWatch Logs source Dead Letter Queue processing.

Default Value

{
 "bucket_details": {
   "bucket_name": "aws-observability-random-id",
   "create_bucket": true,
   "force_destroy_bucket": true
 },
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS Cloudwatch Logs.",
 "fields": {},
 "lambda_log_forwarder_config": {
   "email_id": "",
   "include_log_group_info": true,
   "log_format": "Others",
   "log_stream_prefix": [],
   "workers": 4
 },
 "source_category": "aws/observability/cloudwatch/logs",
 "source_name": "CloudWatch Logs (Region)"
}
Override Default JSON

The following override example sets the bucket name to aws-observability-cw-logs and the email ID to bob@company.com:

cloudwatch_logs_source_details = {
 "bucket_details": {
   "bucket_name": "aws-observability-cw-logs",
   "create_bucket": true,
   "force_destroy_bucket": true
 },
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS Cloudwatch Logs.",
 "fields": {},
 "lambda_log_forwarder_config": {
   "email_id": "bob@company.com",
   "include_log_group_info": true,
   "log_format": "Others",
   "log_stream_prefix": [],
   "workers": 4
 },
 "source_category": "aws/observability/cloudwatch/logs",
 "source_name": "CloudWatch Logs (Region)"
}
cloudwatch_logs_source_url

Description

Required if you are already collecting AWS Lambda CloudWatch logs. Provide the existing Sumo Logic AWS Lambda CloudWatch Source API URL. The account, accountid, region and namespace fields will be added to the Source. For information on how to determine the URL, see View or Download Source JSON Configuration.

Default Value

""
Default JSON

The following is a default example: 

cloudwatch_logs_source_url = ""

The following is a specific Source URL example:

collect_cloudwatch_logs = "Kinesis Firehose Log Source"
cloudwatch_logs_source_url="https://api.sumologic.com/api/v1/collectors/1234/sources/9878" 
auto_enable_logs_subscription

Description

Subscribe log groups to Sumo Logic Lambda Forwarder. You have the following options:

  • New - Automatically subscribes new log groups to send logs to Sumo Logic.
  • Existing - Automatically subscribes existing log groups to send logs to Sumo Logic.
  • Both - Automatically subscribes new and existing log groups.
  • None - Skips automatic subscription.

Default Value

"Both"
Override Example JSON
auto_enable_logs_subscription="New"
auto_enable_logs_subscription_options

Description

filter - Enter a regex to match logGroups, for AWS Lambda only. The regex is matched against the logGroup name. See Configuring Parameters.

Default Value

{
 "filter": "lambda"
}
Override Example JSON

The following example includes all log groups whose names match "lambda-cloudwatch-logs":

auto_enable_logs_subscription_options = {
 "filter": "lambda-cloudwatch-logs"
}
collect_root_cause_data

Description

Select the Sumo Logic Root Cause Explorer Source.

You have the following options:

  • Inventory Source - Creates a Sumo Logic Inventory Source used by Root Cause Explorer.
  • Xray Source - Creates a Sumo Logic AWS X-Ray Source that collects X-Ray trace metrics from your AWS account.
  • Both - Installs both the Inventory and Xray sources.
  • None - Skips installation of both sources.

Default Value

"Both"
Override Example JSON
collect_root_cause_data = "Inventory Source"
inventory_source_details

Description

Provide details for the Sumo Logic AWS Inventory source. If not provided, then defaults will be used.

Default Value

{
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS inventory metadata.",
 "fields": {},
 "limit_to_namespaces": [
   "AWS/ApplicationELB",
   "AWS/ApiGateway",
   "AWS/DynamoDB",
   "AWS/Lambda",
   "AWS/RDS",
   "AWS/ECS",
   "AWS/ElastiCache",
   "AWS/ELB",
   "AWS/NetworkELB",
   "AWS/SQS",
   "AWS/SNS",
   "AWS/AutoScaling"
 ],
 "source_category": "aws/observability/inventory",
 "source_name": "AWS Inventory (Region)"
}
Override Example JSON

The following override example limits collection to the DynamoDB and Lambda namespaces:

inventory_source_details = {
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS inventory metadata.",
 "fields": {},
 "limit_to_namespaces": [
   "AWS/DynamoDB",
   "AWS/Lambda"   
 ],
 "source_category": "aws/observability/inventory",
 "source_name": "AWS Inventory (Region)"
}
xray_source_details

Description

Provide details for the Sumo Logic AWS X-Ray source. If not provided, defaults will be used.

Default Value

{
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS Xray metrics.",
 "fields": {},
 "source_category": "aws/observability/xray",
 "source_name": "AWS Xray (Region)"
}
Override Example JSON

The following override example overrides the source_name:

xray_source_details = {
 "description": "This source is created using Sumo Logic terraform AWS Observability module to collect AWS Xray metrics.",
 "fields": {},
 "source_category": "aws/observability/xray",
 "source_name": "Xray us-west-2"
}
sumologic_existing_collector_details

Description

Provide an existing Sumo Logic collector ID. See View or Download Source JSON Configuration.

If provided, all of the configured sources will be created within that collector.

If kept empty, a new collector will be created and all configured sources will be created within it.

Default Value

{
 "collector_id": "",
 "create_collector": true
}
Default JSON
sumologic_existing_collector_details = {
 "collector_id": "",
 "create_collector": true
}
Override Example JSON
# Use the same collector created for module production-us-east-1 for the new source module.
sumologic_existing_collector_details = {
  create_collector = false
  collector_id     = module.production-us-east-1.sumologic_collector["collector"].id
}
sumologic_collector_details

Description

Provide details for the Sumo Logic collector. If not provided, then defaults will be used.

A collector will be created if any new source is created and collector_id in sumologic_existing_collector_details is empty.

Default Value

{
 "collector_name": "AWS Observability (AWS Account Alias) (Account ID)",
 "description": "This collector is created using Sumo Logic terraform AWS Observability module.",
 "fields": {}
}
Override Example JSON

The following override example creates a collector named "AWS Observability Prod".

# The following example creates a collector with the name and description provided in the collector_name and description parameters.
sumologic_collector_details = {
 "collector_name": "AWS Observability Prod",
 "description": "This collector is created using Sumo Logic terraform AWS Observability module.",
 "fields": {}
}
existing_iam_details

Description

Provide an existing AWS IAM role ARN that grants access to AWS S3 buckets, the AWS CloudWatch Metrics API, and Sumo Logic Inventory data. If kept empty, a new IAM role will be created with the required permissions.

For more details on permissions, see the IAM policy .tmpl files in the /source-module/templates folder.

Default Value

{
 "create_iam_role": true,
 "iam_role_arn": ""
}
Default JSON
existing_iam_details = {
 "create_iam_role": true,
 "iam_role_arn": ""
}
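
An override for the case where a suitable IAM role already exists might look like the following sketch. The role ARN shown is a placeholder, not a value from this solution; replace it with the ARN of your own role:

```hcl
# Reuse an existing IAM role instead of creating a new one.
# The ARN below is a placeholder; substitute the ARN of a role that grants
# the S3, CloudWatch Metrics, and Inventory permissions described above.
existing_iam_details = {
  "create_iam_role": false,
  "iam_role_arn": "arn:aws:iam::123456789012:role/SumoLogicAWSObservability"
}
```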
wait_for_seconds

Description

Used to delay Sumo Logic source creation, in seconds. The delay gives the newly created IAM role time to propagate in AWS before the sources attempt to use it.

If the AWS IAM role was created outside of the module, the value can be decreased to 1 second.

Default Value

180
Default JSON
wait_for_seconds = 180
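
For example, if the IAM role referenced in existing_iam_details was created ahead of time, the delay can be shortened:

```hcl
# The IAM role already exists and has propagated in AWS,
# so the delay before source creation can be reduced to 1 second.
wait_for_seconds = 1
```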

Override App Content Parameters

As needed, override the app content parameters to configure how the AWS Observability app dashboards and alerts are installed in your Sumo Logic account. Enter the overrides in the sumologic-solution-templates/aws-observability-terraform/main.tf file. 

The following is an example of the default value and override for app parameters:

Default Example

Parameters take their default values, as listed in the parameter table below.

This installs the following:

  • Apps: AWS EC2, AWS Application Load Balancer, Amazon RDS, AWS API Gateway, AWS Lambda, AWS DynamoDB, AWS ECS, Amazon ElastiCache and AWS NLB
    • Default location: “AWS Observability Apps” Personal folder in Sumo Logic
  • Alerts for the AWS Observability Solution
    • Default location: “AWS Observability Monitors” folder of the Monitors folder
module "sumo-module" {
 source                   = "./app-modules"
 access_id                = var.sumologic_access_id
 access_key               = var.sumologic_access_key
 environment              = var.sumologic_environment
 json_file_directory_path = dirname(path.cwd)
}
Override Example

This example overrides the apps folder name and the monitors folder name, enables alerts for API Gateway and EC2, and sends notifications by email:

module "sumo-module" {
 source                   = "./app-modules"
 access_id                = var.sumologic_access_id
 access_key               = var.sumologic_access_key
 environment              = var.sumologic_environment
 json_file_directory_path = dirname(path.cwd)
 apps_folder_name         = "AWS Observability Apps myCustom"
 monitors_folder_name     = "AWS Observability Monitors myCustom"
 apigateway_monitors_disabled  = false
 ec2metrics_monitors_disabled  = false
 email_notifications = [
   {
     connection_type       = "Email",
     recipients            = ["abc@example.com"],
     subject               = "Monitor Alert: {{TriggerType}} on {{Name}}",
     time_zone             = "PST",
     message_body          = "Triggered {{TriggerType}} Alert on {{Name}}: {{QueryURL}}",
     run_for_trigger_types = ["Critical", "ResolvedCritical"]
   }
 ]
}

The following table lists all app parameters and their default values. See the sumologic-solution-templates/aws-observability-terraform/app-module/main.auto.tfvars file for complete code.

| Parameter | Description | Default |
| --- | --- | --- |
| access_id | Sumo Logic Access ID. See Access Keys for information. Ignore this setting if you entered it in Source Parameters. | Ignore if already configured in the main.auto.tfvars file. |
| access_key | Sumo Logic Access Key. See Access Keys for information. Ignore this setting if you entered it in Source Parameters. | Ignore if already configured in the main.auto.tfvars file. |
| environment | Enter au, ca, de, eu, jp, us2, in, fed, or us1. See Sumo Logic Endpoints and Firewall Security for information. Ignore this setting if you entered it in Source Parameters. | Ignore if already configured in the main.auto.tfvars file. |
| apps_folder_name | Name of the folder under your Personal folder where all the apps will be installed. | "AWS Observability Apps" |
| monitors_folder_name | Name of the folder where all the monitors will be installed, under the Personal folder of the user whose access keys you entered. | "AWS Observability Monitors" |
| alb_monitors_disabled | Indicates whether the ALB app monitors should be disabled. | true |
| apigateway_monitors_disabled | Indicates whether the API Gateway app monitors should be disabled. | true |
| dynamodb_monitors_disabled | Indicates whether the DynamoDB app monitors should be disabled. | true |
| ec2metrics_monitors_disabled | Indicates whether the EC2 Metrics app monitors should be disabled. | true |
| ecs_monitors_disabled | Indicates whether the ECS app monitors should be disabled. | true |
| elasticache_monitors_disabled | Indicates whether the ElastiCache app monitors should be disabled. | true |
| lambda_monitors_disabled | Indicates whether the Lambda app monitors should be disabled. | true |
| nlb_monitors_disabled | Indicates whether the NLB app monitors should be disabled. | true |
| rds_monitors_disabled | Indicates whether the RDS app monitors should be disabled. | true |
| group_notifications | Indicates whether individual items that meet trigger conditions should be grouped. | true |
| email_notifications | Email notifications to be sent by the alert. | [ ] |
| connection_notifications | Connection notifications to be sent by the alert. | [ ] |
| parent_folder_id | The folder ID to install the apps into. This ID is generated automatically; do not enter a value. A folder with the name provided in apps_folder_name will be created. If the folder ID is empty, apps are installed in the Personal folder. | Ignore this parameter. |
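
The email_notifications override earlier in this section shows the shape of a notification block. A connection_notifications entry follows the same pattern as notifications in the Sumo Logic Terraform provider; the sketch below assumes an existing webhook connection, and the connection_id value is a placeholder:

```hcl
# Hypothetical example: send Critical alerts to an existing webhook connection.
# The connection_id value is a placeholder; use the ID of a connection
# already configured in your Sumo Logic account.
connection_notifications = [
  {
    connection_type       = "Webhook",
    connection_id         = "1234abcd",
    payload_override      = "{\"eventType\": \"{{Name}}\"}",
    run_for_trigger_types = ["Critical", "ResolvedCritical"]
  }
]
```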