
Confluent Cloud Metrics Source

Confluent is a software company that helps organizations manage, deploy, and scale real-time data infrastructure, enabling businesses to build real-time applications and derive insights from data efficiently. Confluent Cloud is a scalable, fully managed streaming data service based on Apache Kafka®. It offers a web interface called the Cloud Console for managing resources, settings, and billing, along with a local Command Line Interface (CLI) and REST APIs for creating and managing Kafka topics. This integration collects metric data from the Confluent Cloud Metrics API and sends it to Sumo Logic.

Data collected

| Polling Interval | Data |
|:--|:--|
| 5 minutes | Export metric values API |
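
Under the hood, each polling cycle calls Confluent's export endpoint for the configured resources. As a rough illustration, the following Python sketch performs an equivalent call; the endpoint and the `resource.kafka.id` parameter follow Confluent's Metrics API documentation, and the credentials and cluster ID are placeholders.

```python
import requests

# Placeholder Cloud API key credentials (see Vendor configuration below).
CLIENT_ID = "U5XXXYZYGAXXXFRZ"
CLIENT_SECRET = "psYDINXXX...XXXyF"

# Export metric values for one Kafka cluster. The response body contains
# the latest metric values in Prometheus exposition format.
resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export",
    params={"resource.kafka.id": "lkc-12345"},  # placeholder cluster ID
    auth=(CLIENT_ID, CLIENT_SECRET),            # HTTP Basic auth with the API key
    timeout=30,
)
resp.raise_for_status()
print(resp.text)
```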

Setup

Vendor configuration

The Confluent Cloud Metrics source requires a Client ID (API Key ID) and a Client Secret (API Secret) to access the data. To generate them, create a Cloud API key in your Confluent Cloud account; refer to cloud API key generation for instructions.

Source configuration

When you create a Confluent Cloud Metrics source, you add it to a Hosted Collector. Before creating the source, identify the Hosted Collector you want to use or create a new Hosted Collector. For instructions, see Configure a Hosted Collector and Source.

To configure a Confluent Cloud Metrics source:

  1. Classic UI. In the main Sumo Logic menu, select Manage Data > Collection > Collection.
    New UI. In the Sumo Logic top menu, select Configuration, and then under Data Collection, select Collection. You can also click the Go To... menu at the top of the screen and select Collection.
  2. On the Collection page, click Add Source next to a Hosted Collector.
  3. Search for and select Confluent Cloud Metrics.
  4. Enter a Name for the source. The description is optional.
  5. (Optional) For Source Category, enter any string to tag the output collected from the source. Category metadata is stored in a searchable field called _sourceCategory.
  6. (Optional) Fields. Click the +Add button to define the fields you want to associate. Each field needs a name (key) and value.
    • A green circle with a check mark is shown when the field exists in the Fields table schema.
    • An orange triangle with an exclamation point is shown when the field doesn't exist in the Fields table schema. In this case, you are given the option to automatically add the nonexistent fields to the Fields table schema. If a field that does not exist in the Fields schema is sent to Sumo Logic, it is ignored, known as dropped.
  7. API Key ID. Enter the Client ID collected from the vendor configuration. For example, U5XXXYZYGAXXXFRZ.
  8. API Secret. Enter the Client Secret collected from the vendor configuration. For example, psYDINXXXG9eYi9hF/X20SZAI4YEn5IZ0cXXXuZ556WIbKYvHPHSCTXXXyF.
  9. Resource Filters. Select the checkbox for each resource type you want to collect metrics from, and then enter the IDs of the relevant resources to export metrics for.
  10. (Optional) Metrics. Select the checkbox to specify the metrics to export. If no metric is specified, all metrics for the resource will be exported.
  11. (Optional) Processing Rules for Logs. Configure any desired filters, such as allowlist, denylist, hash, or mask, as described in Create a Processing Rule.
  12. When you are finished configuring the source, click Save.

JSON schema

Sources can be configured using UTF-8 encoded JSON files with the Collector Management API. See Use JSON to Configure Sources for details. 
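
As a rough sketch, assuming a Sumo Logic access ID/key pair and a Hosted Collector ID (both placeholders here), the source can be created programmatically by posting a source definition such as the JSON example later on this page:

```python
import json
import requests

SUMO_API = "https://api.sumologic.com/api/v1"  # adjust for your deployment
COLLECTOR_ID = 123456789                       # placeholder Hosted Collector ID

# Load a source definition such as the JSON example later on this page.
with open("confluent_cloud_metrics_source.json") as f:
    source_def = json.load(f)

# Create the source under the Hosted Collector; authentication uses a
# Sumo Logic access ID/key pair over HTTP Basic auth.
resp = requests.post(
    f"{SUMO_API}/collectors/{COLLECTOR_ID}/sources",
    json=source_def,
    auth=("<accessId>", "<accessKey>"),
)
resp.raise_for_status()
print(resp.json())
```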

| Parameter | Type | Value | Required | Description |
|:--|:--|:--|:--|:--|
| schemaRef | JSON Object | `{"type": "Confluent Cloud Metrics"}` | Yes | Define the specific schema type. |
| sourceType | String | `"Universal"` | Yes | Type of source. |
| config | JSON Object | Configuration object | Yes | Source type specific values. |

Configuration Object

| Parameter | Type | Required | Default | Description | Example |
|:--|:--|:--|:--|:--|:--|
| name | String | Yes | null | Type a desired name of the source. The name must be unique per Collector. This value is assigned to the metadata field `_source`. | `"mySource"` |
| description | String | No | null | Type a description of the source. | `"Testing source"` |
| category | String | No | null | Type a category of the source. This value is assigned to the metadata field `_sourceCategory`. See best practices for details. | `"mySource/test"` |
| fields | JSON Object | No | null | JSON map of key-value fields (metadata) to apply to the collector or source. Use the boolean field `_siemForward` to enable forwarding to SIEM. | `{"_siemForward": false, "fieldA": "valueA"}` |
| clientId | String | Yes | null | API Key ID generated from the Cloud API key in your Confluent Cloud account. | `U5XXXYZYGAXXXFRZ` |
| clientSecret | String | Yes | null | API Key Secret generated from the Cloud API key in your Confluent Cloud account. | `psYDINXXXG9eYi9hF/X20SZAI4YEn5IZ0cXXXuZ556WIbKYvHPHSCTXXXyF` |
| resourceKafkaId | Boolean | No | False | Set to true to collect metrics for Kafka cluster IDs. | |
| resourceConnectorId | Boolean | No | False | Set to true to collect metrics for connector IDs. | |
| resourceKSQLId | Boolean | No | False | Set to true to collect metrics for ksqlDB IDs. | |
| resourceSchemaRegistryId | Boolean | No | False | Set to true to collect metrics for Schema Registry IDs. | |
| resourceComputePoolId | Boolean | No | False | Set to true to collect metrics for compute pool IDs. | |
| kafkaId | []String | No | null | The IDs of the Kafka clusters to export metrics for. | |
| connectorId | []String | No | null | The IDs of the connectors to export metrics for. | |
| ksqlId | []String | No | null | The IDs of the ksqlDB applications to export metrics for. | |
| schemaRegistryId | []String | No | null | The IDs of the Schema Registry clusters to export metrics for. | |
| computepoolId | []String | No | null | The IDs of the Flink compute pools to export metrics for. | |
| metric | []String | No | null | The metrics to export. If this parameter is not specified, all metrics for the resource will be exported. | |
| ignoreFailedMetrics | Boolean | No | False | Ignore failed metrics and export only successful metrics if the allowed failure threshold is not breached. If set to true, a StateSet metric (`export_status`) is included in the response to report which metrics succeeded and which failed. | |
| pollingIntervalMin | Integer | Yes | 5 | Time interval (in minutes) after which the source checks for new data from the source API. | |

JSON example

```json
{
  "api.version": "v1",
  "source": {
    "config": {
      "name": "Confluent Cloud Metrics",
      "clientId": "U5XXXYZYGAXXXFRZ",
      "clientSecret": "X2OSZAI4YEn5lZ0cXXXuZ556WlbKYvHPHSCTXXXyFN8dfz",
      "resourceKafkaId": true,
      "resourceConnectorId": false,
      "resourceKSQLId": false,
      "resourceSchemaRegistryId": true,
      "resourceComputePoolId": false,
      "ignoreFailedMetrics": true,
      "pollingIntervalMin": 5,
      "kafkaId": [
        "id1",
        "id2"
      ],
      "schemaRegistryId": [
        "id3",
        "id4"
      ],
      "metric": [
        "example metric"
      ]
    },
    "schemaRef": {
      "type": "Confluent Cloud Metrics"
    },
    "sourceType": "Universal"
  }
}
```

Terraform example

resource "sumologic_cloud_to_cloud_source" "confluent_cloud_metrics_source" {
collector_id = sumologic_collector.collector.id
schema_ref = {
type = "Confluent Cloud Metrics"
}
config = jsonencode({
"name": "Confluent Cloud Metrics",
"clientId": "U5XXXYZYGAXXXFRZ",
"clientSecret": "X2OSZAI4YEn5lZ0cXXXuZ556WlbKYvHPHSCTXXXyFN8dfz",
"resourceKafkaId": true,
"resourceConnectorId": false,
"resourceKSQLId": false,
"resourceSchemaRegistryId": true,
"resourceComputePoolId": false,
"metric": true,
"ignoreFailedMetrics": true,
"pollingIntervalMin": 5,
"kafkaId": [
"id1",
"id2"
],
"schemaRegistryId": [
"id3",
"id4"
],
"metrics": [
"example metric"
]
})
}
resource "sumologic_collector" "collector" {
name = "my-collector"
description = "Just testing this"
}
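
Running `terraform apply` with this configuration creates the collector and attaches the source to it; the source then appears under my-collector on the Collection page.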

FAQ

info

For more information about Cloud-to-Cloud sources, see the Cloud-to-Cloud Integration Framework documentation.
