
Snowflake SQL API Source

The Snowflake SQL API source executes SQL queries with aggregation fields and translates the results into metrics. This source collects only metrics; it does not currently collect any log data.

Data collected

Customers can provide custom SQL queries for the source to execute and a configuration to translate the results to custom metrics data.

Setup

Vendor configuration

To collect metric data from the Snowflake SQL API, you must have an authorized Snowflake account. We suggest setting up a dedicated user account with the correct permissions to access the SQL tables containing the data you want to collect.

  1. Create a user account with the correct permissions for accessing the SQL tables you plan to query.
  2. Take note of your admin account identifier, following the instructions in the Snowflake documentation. The identifier should look something like this: wp00000.us-east-2.aws.
  3. Take note of the database name you plan to query.
  4. Optional additional information such as a role, warehouse, or schema name can also be configured with the source.
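Before configuring the source, it can help to confirm that the account identifier, credentials, and permissions from the steps above work together. Below is a minimal connectivity sketch, assuming the snowflake-connector-python package and illustrative values for every credential:

# Sanity-check the dedicated account before configuring the source.
# All values below are illustrative; substitute your own.
import snowflake.connector

conn = snowflake.connector.connect(
    user="sumo_collector",            # dedicated user from step 1 (illustrative name)
    password="<password>",
    account="wp00000.us-east-2.aws",  # account identifier from step 2
    database="SNOWFLAKE",             # database from step 3
    warehouse="COMPUTE_WH",           # optional warehouse from step 4 (illustrative)
)
cur = conn.cursor()
cur.execute("select count(*) from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY")
print(cur.fetchone())
conn.close()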

Source configuration

When you create a Snowflake SQL API source, you add it to a Hosted Collector. Before creating the source, identify the Hosted Collector you want to use or create a new Hosted Collector. For instructions, see Configure a Hosted Collector and Source.

To configure the Snowflake SQL API Source:

  1. Classic UI. In the main Sumo Logic menu, select Manage Data > Collection > Collection.
    New UI. In the Sumo Logic top menu, select Configuration, and then under Data Collection select Collection. You can also click the Go To... menu at the top of the screen and select Collection.
  2. On the collectors page, click Add Source next to a Hosted Collector.
  3. Search for and select Snowflake SQL API.
  4. Enter a Name to display for the source in the Sumo Logic web application. The description is optional.
  5. (Optional) For Source Category, enter any string to tag the output collected from the source. Category metadata is stored in a searchable field called _sourceCategory.
  6. (Optional) Fields. Click the +Add Field link to define the fields you want to associate. Each field needs a name (key) and value.
    • A green circle with a check mark is shown when the field exists in the Fields table schema.
    • An orange triangle with an exclamation point is shown when the field doesn't exist in the Fields table schema. In this case, an option to automatically add the nonexistent fields to the Fields table schema is provided. If a field is sent to Sumo Logic that does not exist in the Fields schema, it is ignored (dropped).
  7. In Snowflake Username, enter your Snowflake account username.
  8. In Snowflake Password, enter the Snowflake account password associated with your user.
  9. In Snowflake Account Identifier, enter your Snowflake account identifier obtained from the vendor configuration above. The identifier should look something like this: wp00000.us-east-2.aws.
  10. In Snowflake Database, enter your Snowflake database. A separate source is required for each database you want to query.
  11. In SQL Statement Metric Configuration, upload a JSON file containing the SQL queries to execute, their polling interval, and additional configuration for translating the results to metrics.
  12. (Optional) In Snowflake Role, provide a database role if required.
  13. (Optional) In Snowflake Warehouse, provide a database warehouse name if required.
  14. (Optional) In Snowflake Schema, provide a database schema name if required.
  15. When you are finished configuring the source, click Save.

Metrics JSON Configuration

This source requires you to upload a JSON configuration containing the SQL queries to execute, the polling interval for how often to execute each query, and additional configuration for translating the results into metrics using the Prometheus format.

note

This metrics JSON file is not the entire source configuration. It is a separate JSON config file that tells the C2C which queries to execute, how often, and how to translate the results into metric data. You can use the JSON schema provided below to validate that the file you upload matches the structure the C2C expects, with any JSON schema validator website such as https://jsonschema.dev.

note

The metric format used by this source is Prometheus. All metric names and labels must follow the Prometheus naming convention. The source validates this on startup and will stop if validation fails.
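You can run the same naming check locally before uploading. A small sketch, assuming Python and using the naming patterns from the schema below, with metrics.json as an illustrative file name:

# Pre-upload check of metric and label names against the Prometheus
# naming patterns used in the schema below.
import json
import re

METRIC_NAME = re.compile(r"^[a-zA-Z_:][a-zA-Z0-9_:]*$")
LABEL_NAME = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")

with open("metrics.json") as f:  # your metrics configuration file
    config = json.load(f)

for entry in config:
    for metric in entry["metrics"]:
        assert METRIC_NAME.match(metric["name"]), f"bad metric name: {metric['name']}"
        for label in metric["labels"]:
            assert LABEL_NAME.match(label), f"bad label name: {label}"
print("all metric and label names are valid")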

Here is the schema expected from this JSON file:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Sumo Logic Snowflake SQL API Metric Schema",
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "sql": {
        "type": "string",
        "description": "The SQL statement used to fetch the data",
        "examples": [
          "select DATABASE_NAME, SCHEMA_NAME, USER_NAME, SUM(EXECUTION_TIME) from QUERY_HISTORY where start_time >= DATEADD(MINUTE, -10, GETDATE()) group by 1, 2, 3;"
        ]
      },
      "pollingInterval": {
        "type": "string",
        "description": "The frequency of executing the SQL statement.",
        "examples": [
          "10m",
          "30m",
          "1h",
          "24h"
        ]
      },
      "metrics": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": {
              "type": "string",
              "description": "A custom metric name following the Prometheus naming convention",
              "pattern": "^[a-zA-Z_:][a-zA-Z0-9_:]*$",
              "examples": [
                "snowflake_query_bytes_scanned_sum",
                "snowflake_query_execution_time_sum",
                "snowflake_total_elapsed_time_sum"
              ]
            },
            "value": {
              "type": "string",
              "description": "The name of a column in your SQL statement containing the aggregated numerical metric value.",
              "examples": [
                "SUM(EXECUTION_TIME)"
              ]
            },
            "labels": {
              "type": "array",
              "items": {
                "type": "string",
                "description": "The name of a column in your SQL statement used as a metric label. This must match your SQL statement.",
                "pattern": "^[a-zA-Z_][a-zA-Z0-9_]*$",
                "examples": [
                  "DATABASE_NAME",
                  "SCHEMA_NAME",
                  "USER_NAME"
                ]
              }
            }
          },
          "required": [
            "name",
            "value",
            "labels"
          ]
        }
      }
    },
    "required": [
      "sql",
      "pollingInterval",
      "metrics"
    ]
  }
}
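As a local alternative to a validator website, the check can be scripted. A sketch assuming Python with the third-party jsonschema package and illustrative file names:

# Validate the metrics configuration file against the schema above.
# Assumes: pip install jsonschema; schema.json holds the schema above.
import json
from jsonschema import validate

with open("schema.json") as f:   # the schema above, saved locally
    schema = json.load(f)
with open("metrics.json") as f:  # your metrics configuration file
    config = json.load(f)

validate(instance=config, schema=schema)  # raises ValidationError on mismatch
print("metrics.json matches the expected schema")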

Here is a small example with one SQL statement that produces multiple metrics. Notice that the WHERE clause in the SQL statement matches the polling interval.

[
  {
    "sql": "select SCHEMA_NAME, EXECUTION_STATUS, ERROR_MESSAGE, QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, WAREHOUSE_SIZE, USER_NAME, SUM(BYTES_SCANNED), SUM(EXECUTION_TIME), SUM(TOTAL_ELAPSED_TIME) from QUERY_HISTORY where start_time >= DATEADD(MINUTE, -10, GETDATE()) group by 1, 2, 3, 4, 5, 6, 7, 8;",
    "pollingInterval": "10m",
    "metrics": [
      {
        "name": "snowflake_query_bytes_scanned_sum",
        "value": "SUM(BYTES_SCANNED)",
        "labels": [
          "SCHEMA_NAME",
          "EXECUTION_STATUS",
          "QUERY_TYPE",
          "WAREHOUSE_NAME",
          "DATABASE_NAME",
          "WAREHOUSE_SIZE",
          "USER_NAME"
        ]
      },
      {
        "name": "snowflake_query_execution_time_sum",
        "value": "SUM(EXECUTION_TIME)",
        "labels": [
          "SCHEMA_NAME",
          "EXECUTION_STATUS",
          "QUERY_TYPE",
          "WAREHOUSE_NAME",
          "DATABASE_NAME",
          "WAREHOUSE_SIZE",
          "USER_NAME"
        ]
      },
      {
        "name": "snowflake_total_elapsed_time_sum",
        "value": "SUM(TOTAL_ELAPSED_TIME)",
        "labels": [
          "SCHEMA_NAME",
          "EXECUTION_STATUS",
          "QUERY_TYPE",
          "WAREHOUSE_NAME",
          "DATABASE_NAME",
          "WAREHOUSE_SIZE",
          "USER_NAME"
        ]
      }
    ]
  }
]
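If you change the pollingInterval, adjust the lookback in the WHERE clause to match, so each poll covers exactly one interval of QUERY_HISTORY. A hypothetical Python helper illustrating the relationship (the source does not rewrite your SQL for you):

# Derive a WHERE-clause lookback from a pollingInterval value.
UNITS = {"m": "MINUTE", "h": "HOUR"}

def lookback_clause(polling_interval: str) -> str:
    amount, unit = int(polling_interval[:-1]), UNITS[polling_interval[-1]]
    return f"start_time >= DATEADD({unit}, -{amount}, GETDATE())"

print(lookback_clause("10m"))  # start_time >= DATEADD(MINUTE, -10, GETDATE())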

JSON schema

Sources can be configured using UTF-8 encoded JSON files with the Collector Management API. See Use JSON to Configure Sources for details. 
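For example, a source of this type can be created programmatically using the parameters described in the tables below. A sketch, assuming Python with the requests package, a placeholder collector ID, and illustrative credentials:

# Create the source via the Collector Management API.
# <accessId>, <accessKey>, <password>, and <collectorId> are placeholders.
import json
import requests

metric_config = [
    {
        "sql": "select DATABASE_NAME, SUM(EXECUTION_TIME) from QUERY_HISTORY where start_time >= DATEADD(MINUTE, -10, GETDATE()) group by 1;",
        "pollingInterval": "10m",
        "metrics": [
            {
                "name": "snowflake_query_execution_time_sum",
                "value": "SUM(EXECUTION_TIME)",
                "labels": ["DATABASE_NAME"],
            }
        ],
    }
]

payload = {
    "source": {
        "schemaRef": {"type": "Snowflake SQL API"},
        "sourceType": "Universal",
        "config": {
            "name": "Snowflake C2C",
            "username": "sumosnowflake",
            "password": "<password>",
            "accountIdentifier": "wp00000.us-east-2.aws",
            "database": "SNOWFLAKE",
            "metricConfig": json.dumps(metric_config),  # stringified JSON
        },
    }
}

resp = requests.post(
    "https://api.sumologic.com/api/v1/collectors/<collectorId>/sources",
    auth=("<accessId>", "<accessKey>"),
    json=payload,
)
resp.raise_for_status()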

Parameter | Type | Value | Required | Description
--------- | ---- | ----- | -------- | -----------
schemaRef | JSON Object | {"type":"Snowflake SQL API"} | Yes | Define the specific schema type.
sourceType | String | "Universal" | Yes | Type of source.
config | JSON Object | Configuration object | Yes | Source type specific values.

Configuration Object

Parameter | Type | Required | Default | Description | Example
--------- | ---- | -------- | ------- | ----------- | -------
name | String | Yes | null | Type a desired name of the source. The name must be unique per collector. This value is assigned to the metadata field _source. | "mySource"
description | String | No | null | Type a description of the source. | "Testing source"
category | String | No | null | Type a category of the source. This value is assigned to the metadata field _sourceCategory. See best practices for details. | "mySource/test"
fields | JSON Object | No | null | JSON map of key-value fields (metadata) to apply to the collector or source. |
username | String | Yes | null | Your Snowflake user account. |
password | String | Yes | null | Your Snowflake user password. |
accountIdentifier | String | Yes | null | Your Snowflake admin account identifier. | wp00000.us-east-2.aws
database | String | Yes | null | Your Snowflake database name. | SNOWFLAKE
metricConfig | String | Yes | null | A stringified JSON of the metrics configuration. | See above documentation for examples.
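Note that metricConfig is not nested JSON but a single JSON string. A short sketch of producing that value from a config file, assuming Python's standard library and metrics.json as an illustrative file name:

# Serialize the metrics configuration to the string form the API expects.
import json

with open("metrics.json") as f:
    metric_config = json.load(f)

print(json.dumps(metric_config))  # the value to place in "metricConfig"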

JSON example

{
  "api.version": "v1",
  "source": {
    "config": {
      "name": "Snowflake C2C",
      "username": "sumosnowflake",
      "password": "*************",
      "accountIdentifier": "wp71650.us-east-2.aws",
      "database": "SNOWFLAKE",
      "schema": "ACCOUNT_USAGE",
      "metricConfig": "[{\"sql\":\"select SCHEMA_NAME, EXECUTION_STATUS, ERROR_MESSAGE, QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, WAREHOUSE_SIZE, USER_NAME, COUNT(QUERY_ID), AVG(QUEUED_OVERLOAD_TIME), SUM(QUEUED_OVERLOAD_TIME), AVG(QUEUED_REPAIR_TIME), SUM(QUEUED_REPAIR_TIME), AVG(QUEUED_PROVISIONING_TIME), SUM(QUEUED_PROVISIONING_TIME), SUM(TOTAL_ELAPSED_TIME), AVG(TOTAL_ELAPSED_TIME), SUM(EXECUTION_TIME), SUM(COMPILATION_TIME), AVG(BYTES_SCANNED), SUM(BYTES_SCANNED), SUM(BYTES_WRITTEN), SUM(BYTES_DELETED), SUM(BYTES_SPILLED_TO_LOCAL_STORAGE), SUM(BYTES_SPILLED_TO_REMOTE_STORAGE), AVG(PERCENTAGE_SCANNED_FROM_CACHE), SUM(PARTITIONS_SCANNED), SUM(ROWS_UNLOADED), SUM(ROWS_DELETED), SUM(ROWS_UPDATED), SUM(ROWS_INSERTED), SUM(COALESCE(ROWS_PRODUCED,0)) from QUERY_HISTORY where start_time >= DATEADD(HOUR, -24, GETDATE()) group by 1, 2, 3, 4, 5, 6, 7, 8;\",\"pollingInterval\":\"1m\",\"metrics\":[{\"name\":\"snowflake_query_bytes_scanned_sum\",\"value\":\"SUM(BYTES_SCANNED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_bytes_deleted_sum\",\"value\":\"SUM(BYTES_DELETED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_bytes_spilled_remote_sum\",\"value\":\"SUM(BYTES_SPILLED_TO_REMOTE_STORAGE)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_bytes_written_sum\",\"value\":\"SUM(BYTES_WRITTEN)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_compilation_time_sum\",\"value\":\"SUM(COMPILATION_TIME)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_execution_time_sum\",\"value\":\"SUM(EXECUTION_TIME)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_partitions_scanned_sum\",\"value\":\"SUM(PARTITIONS_SCANNED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_queued_overload_time_sum\",\"value\":\"SUM(queued_overload_time)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_queued_provisioning_time_sum\",\"value\":\"SUM(queued_provisioning_time)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_queued_repair_time_sum\",\"value\":\"SUM(queued_repair_time)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_inserted_sum\",\"value\":\"SUM(ROWS_INSERTED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_deleted_sum\",\"value\":\"SUM(ROWS_DELETED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_produced_sum\",\"value\":\"SUM(COALESCE(ROWS_PRODUCED,0))\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_unloaded_sum\",\"value\":\"SUM(ROWS_UNLOADED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_updated_sum\",\"value\":\"SUM(ROWS_UPDATED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_total_elapsed_time_sum\",\"value\":\"SUM(TOTAL_ELAPSED_TIME)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]}]}]"
    },
    "schemaRef": {
      "type": "Snowflake SQL API"
    },
    "sourceType": "Universal"
  }
}

Terraform example

resource "sumologic_cloud_to_cloud_source" "crowdstrike-filevantage_source" {
collector_id = sumologic_collector.collector.id
schema_ref = {
type = "Snowflake SQL API"
}
config = jsonencode({
"name": "Snowflake C2C",
"username": "sumosnowflake",
"password": "*************",
"accountIdentifier": "wp71650.us-east-2.aws",
"database": "SNOWFLAKE",
"schema": "ACCOUNT_USAGE",
"metricConfig": "[{\"sql\":\"select SCHEMA_NAME, EXECUTION_STATUS, ERROR_MESSAGE, QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, WAREHOUSE_SIZE, USER_NAME, COUNT(QUERY_ID), AVG(QUEUED_OVERLOAD_TIME), SUM(QUEUED_OVERLOAD_TIME), AVG(QUEUED_REPAIR_TIME), SUM(QUEUED_REPAIR_TIME), AVG(QUEUED_PROVISIONING_TIME), SUM(QUEUED_PROVISIONING_TIME), SUM(TOTAL_ELAPSED_TIME), AVG(TOTAL_ELAPSED_TIME), SUM(EXECUTION_TIME), SUM(COMPILATION_TIME), AVG(BYTES_SCANNED), SUM(BYTES_SCANNED), SUM(BYTES_WRITTEN), SUM(BYTES_DELETED), SUM(BYTES_SPILLED_TO_LOCAL_STORAGE), SUM(BYTES_SPILLED_TO_REMOTE_STORAGE), AVG(PERCENTAGE_SCANNED_FROM_CACHE), SUM(PARTITIONS_SCANNED), SUM(ROWS_UNLOADED), SUM(ROWS_DELETED), SUM(ROWS_UPDATED), SUM(ROWS_INSERTED), SUM(COALESCE(ROWS_PRODUCED,0)) from QUERY_HISTORY where start_time >= DATEADD(HOUR, -24, GETDATE()) group by 1, 2, 3, 4, 5, 6, 7, 8;\",\"pollingInterval\":\"1m\",\"metrics\":[{\"name\":\"snowflake_query_bytes_scanned_sum\",\"value\":\"SUM(BYTES_SCANNED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_bytes_deleted_sum\",\"value\":\"SUM(BYTES_DELETED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_bytes_spilled_remote_sum\",\"value\":\"SUM(BYTES_SPILLED_TO_REMOTE_STORAGE)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_bytes_written_sum\",\"value\":\"SUM(BYTES_WRITTEN)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_compilation_time_sum\",\"value\":\"SUM(COMPILATION_TIME)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_execution_time_sum\",\"value\":\"SUM(EXECUTION_TIME)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_query_partitions_scanned_sum\",\"value\":\"SUM(PARTITIONS_SCANNED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_queued_overload_time_sum\",\"value\":\"SUM(queued_overload_time)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_queued_provisioning_time_sum\",\"value\":\"SUM(queued_provisioning_time)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_queued_repair_time_sum\",\"value\":\"SUM(queued_repair_time)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_inserted_sum\",\"value\":\"SUM(ROWS_INSERTED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_deleted_sum\",\"value\":\"SUM(ROWS_DELETED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\
"]},{\"name\":\"snowflake_rows_produced_sum\",\"value\":\"SUM(COALESCE(ROWS_PRODUCED,0))\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_unloaded_sum\",\"value\":\"SUM(ROWS_UNLOADED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_rows_updated_sum\",\"value\":\"SUM(ROWS_UPDATED)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]},{\"name\":\"snowflake_total_elapsed_time_sum\",\"value\":\"SUM(TOTAL_ELAPSED_TIME)\",\"labels\":[\"SCHEMA_NAME\",\"EXECUTION_STATUS\",\"QUERY_TYPE\",\"WAREHOUSE_NAME\",\"DATABASE_NAME\",\"WAREHOUSE_SIZE\",\"USER_NAME\"]}]}]"

})
}
resource "sumologic_collector" "collector" {
name = "my-collector"
description = "Just testing this"
}

FAQ

info

Click here for more information about Cloud-to-Cloud sources.
