
Collect logs and metrics for the MongoDB Atlas App

This page explains how to collect logs from MongoDB Atlas and ingest them into Sumo Logic for use with the MongoDB Atlas App predefined dashboards and searches.

Collection overview

Sumo Logic provides a solution that pulls logs and metrics from MongoDB Atlas using API calls. You can configure which log types are collected; the logs and metrics are then forwarded to Sumo Logic's HTTP endpoint.

Configuring collection for MongoDB Atlas includes the following tasks:

  1. Acquire authentication information from the MongoDB Atlas portal
  2. Add a hosted collector and HTTP source
  3. Configure collection for MongoDB Atlas
  4. Configure Webhooks for alerts collection
  5. Advanced configuration
  6. Troubleshooting

Step 1: Get Authentication information from the MongoDB Atlas portal

This section shows you how to acquire MongoDB Atlas portal authentication information.

To acquire authentication information for the MongoDB Atlas portal, do the following:

  1. Generate programmatic API Keys with project owner permissions using the instructions in the Atlas documentation. Then, copy the public key and private key. These serve the same function as a username and API Key, respectively.
  2. Specify the API key Organization Member permissions, under Organization > Access > API Keys, as shown in the following example.

MongoDB_Atlas_Edit_API_Key_dialog.png

  3. Specify Project Data Access Read Only permission, under Project > Access Management > API Keys, as shown in the following example.

MongoDB_Atlas_Access_Management_dialog.png

  4. Go to your project, click Settings, and copy the Project ID, as shown in the following example.

MongoDB_Atlas_Settings_Project-Settings_dialog.png

  5. Go to your organization by using the context drop-down at the top, click Settings, and copy the Organization ID.

MongoDB_Atlas_Settings_OrgID_dialog.png

  6. Enable Database Auditing for the Atlas project whose logs you want to monitor, as described in this Atlas document. Leave Database Auditing set to ON, as shown in the following example.

MongoDB_Atlas_Database_Auditing.png
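Before moving on, you can optionally sanity-check the keys. The sketch below, assuming the `requests` library and placeholder values for the project ID and keys, builds the Digest-auth request the Atlas Administration API expects (the public key acts as the username, the private key as the password):

```python
# Sketch: build a Digest-auth request against the Atlas Administration API
# to verify the programmatic API keys. The <...> values are placeholders
# for what you copied from the Atlas portal.
from requests.auth import HTTPDigestAuth

BASE_URL = "https://cloud.mongodb.com/api/atlas/v1.0"

def build_request(project_id, public_key, private_key):
    """Return the URL and auth object for a simple 'get project' call."""
    url = "{}/groups/{}".format(BASE_URL, project_id)
    auth = HTTPDigestAuth(public_key, private_key)  # public key = username
    return url, auth

url, auth = build_request("<PROJECT_ID>", "<PUBLIC_KEY>", "<PRIVATE_KEY>")
# With real values (and the requests package), a 200 confirms the keys work:
# resp = requests.get(url, auth=auth, timeout=30)
```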

Step 2: Add a Hosted Collector and HTTP Source

This section demonstrates how to add a hosted Sumo Logic collector and an HTTP Logs and Metrics source to collect logs for MongoDB Atlas.

To add a hosted collector and HTTP source, do the following:

  1. Do one of the following:
  • If you already have a Sumo Logic Hosted Collector, identify the one you want to use.
  • Create a new Hosted Collector as described in this document: Configure a Hosted Collector.
  2. Add two HTTP sources, one for logs and another for metrics.
  3. Go to the source you created for ingesting logs, navigate to Timestamp Format > Advanced Options, and click Specify a format.
  4. Enter the following information in the respective fields for the log source:
  • Timestamp Locator: \"created\":(.*)
  • Format: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'
  5. Click Add.
  6. Enter the following information in the respective fields for the metric source:
  • Timestamp Locator: \"created\":(.*)
  • Format: yyyy-MM-dd'T'HH:mm:ss'Z'
  7. Click Add.
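The two format strings above correspond to millisecond- and second-precision ISO 8601 timestamps. For reference, they map to the following Python `strptime` patterns (a quick illustration, not part of the setup):

```python
# Sketch: the Sumo Logic timestamp formats, expressed as strptime patterns.
from datetime import datetime

# Log source format yyyy-MM-dd'T'HH:mm:ss.SSS'Z' (millisecond precision)
log_ts = datetime.strptime("2019-07-03T16:07:40.366Z", "%Y-%m-%dT%H:%M:%S.%fZ")

# Metric source format yyyy-MM-dd'T'HH:mm:ss'Z' (second precision)
metric_ts = datetime.strptime("2019-06-07T01:52:28Z", "%Y-%m-%dT%H:%M:%SZ")

print(log_ts.year, metric_ts.second)
```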

Step 3: Configure collection for MongoDB Atlas 

In this section, we explore the mechanisms available to collect database logs, events, metrics, and alerts from MongoDB Atlas and send them to Sumo Logic, where they are shown in dashboards as part of the MongoDB Atlas App. You can run the Sumo Logic MongoDB Atlas collector in Amazon Web Services (AWS) using the AWS Lambda service, or as a cron job on a Linux machine. Choose the method that is best suited for you:

Deploy the Sumo Logic MongoDB Atlas SAM Application

In this section, you deploy the SAM application, which creates the necessary resources in your AWS account.

To deploy the Sumo Logic MongoDB Atlas SAM Application, do the following:

  1. Go to https://serverlessrepo.aws.amazon.com/applications.
  2. Search for sumologic-mongodb-atlas, select the Show apps that create custom IAM roles or resource policies check box, and click the app link when it appears.

AWS_Serverless_App_Repository_dialog.png

  3. When the Sumo Logic app page appears, click Deploy.

AWS_sumologic-mongdb-atlas_Deploy.png

  4. In the AWS Lambda > Functions > Application Settings panel, specify the following parameters in the corresponding text fields:
  • HTTPLogsEndpoint: Copy and paste the URL for the HTTP Logs source from Step 2.
  • HTTPMetricsEndpoint: Copy and paste the URL for the HTTP Metrics source from Step 2.
  • OrganizationID: Copy and paste the Organization ID from Step 1.
  • ProjectID: Copy and paste the Project ID from Step 1.
  • Private API Key: Copy and paste the Private Key from Step 1.
  • Public API Key: Copy and paste the Public Key from Step 1.

AWS_Lambda_App_Settings.png

  5. Click Deploy.

Configure collection for multiple projects

This section shows you how to configure collection for multiple projects, assuming you are already collecting Atlas data for one project. This task requires that you do the following:

  • Stop the collection of OrgEvents in the second SAM app deployment, because these events are global and are already captured by the first collector.
  • Change the DBNAME so that the state (keys) maintained for bookkeeping in the key-value store does not conflict between deployments.

To configure collection for multiple projects, do the following:

  1. Deploy the MongoDB Atlas SAM application with the configuration for a new project.
  2. After the deployment is complete, change the database name by adding an environment variable (DBNAME) in AWS Lambda, as shown in the following example.

AWS_Env_Variables_panel.png

  3. From the Lambda console, go to the mongodbatlas.yaml file and comment out EVENTS_ORG, as shown in the following example. This prevents the collection of org events, because they are already being collected by the first collector.

AWS_mongodbatlas-yaml_panel.png
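As an illustration of what the override does (the variable name DBNAME comes from this page; the function name and default value are assumptions), the collector can be thought of as resolving its database name like this:

```python
# Sketch: how a DBNAME environment variable can override a default database
# name. The default shown here is illustrative, not the collector's actual one.
import os

def resolve_db_name(default="mongodbatlas"):
    """Use the DBNAME environment variable when set, else a default."""
    return os.environ.get("DBNAME", default)

os.environ["DBNAME"] = "newmongodbatlas"  # what the Lambda env var does
print(resolve_db_name())
```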

Configure script-based collection for MongoDB Atlas

This section shows you how to configure script-based log collection for the Sumo Logic MongoDB Atlas App.

Prerequisites

This task makes the following assumptions:

  • You successfully added a hosted collector and HTTP source, and copied the configuration parameters (ProjectID, OrganizationID, PublicKey, and PrivateKey) from the MongoDB Atlas console, as described in Add a Hosted Collector and HTTP Source.
  • You are logged in to the user account with which you will install the collector. If not, use the following command to switch to that account: sudo su <user_name>

Configure the script on a Linux machine

This task shows you how to install the script on a Linux machine.

To deploy the script on a Linux machine, do the following:

  1. If pip is not already installed, follow the instructions in the pip documentation to download and install it.

  2. Log in to a Linux machine (compatible with either Python 3.7 or Python 2.7).

  3. Do one of the following:

  • For Python 2, run the following command: pip install sumologic-mongodb-atlas
  • For Python 3, run the following command: pip3 install sumologic-mongodb-atlas
  4. Create a mongodbatlas.yaml configuration file in the home directory and fill in the parameters as shown in the following example.

SumoLogic:
  LOGS_SUMO_ENDPOINT: <Paste the URL for the HTTP Logs source from Step 2.>
  METRICS_SUMO_ENDPOINT: <Paste the URL for the HTTP Metrics source from Step 2.>

MongoDBAtlas:
  ORGANIZATION_ID: <Paste the Organization ID from Step 1.>
  PROJECT_ID: <Paste the Project ID from Step 1.>
  PRIVATE_KEY: <Paste the Private Key from Step 1.>
  PUBLIC_KEY: <Paste the Public Key from Step 1.>
  5. Create a cron job to run the collector every 5 minutes, using the crontab -e option. Do one of the following:

  • For Python 2, add the following line to your crontab: 
    */5 * * * *  /usr/bin/python -m sumomongodbatlascollector.main > /dev/null 2>&1

  • For Python 3, add the following line to your crontab: 
    */5 * * * *  /usr/bin/python3 -m sumomongodbatlascollector.main > /dev/null 2>&1
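Before scheduling the cron job, you can sanity-check the configuration file. The sketch below uses the section and key names from the sample above; the helper name `missing_keys` is an assumption, and in practice you would parse the file with PyYAML's `yaml.safe_load` first:

```python
# Sketch: verify a parsed mongodbatlas.yaml config has the keys this page
# describes. In practice, load the dict with yaml.safe_load(open(path)).
REQUIRED = {
    "SumoLogic": ["LOGS_SUMO_ENDPOINT", "METRICS_SUMO_ENDPOINT"],
    "MongoDBAtlas": ["ORGANIZATION_ID", "PROJECT_ID", "PRIVATE_KEY", "PUBLIC_KEY"],
}

def missing_keys(config):
    """Return 'Section.KEY' entries absent from the parsed config dict."""
    missing = []
    for section, keys in REQUIRED.items():
        for key in keys:
            if key not in (config or {}).get(section, {}):
                missing.append("{}.{}".format(section, key))
    return missing

# Example: a config that forgot its PUBLIC_KEY.
example = {
    "SumoLogic": {"LOGS_SUMO_ENDPOINT": "https://...", "METRICS_SUMO_ENDPOINT": "https://..."},
    "MongoDBAtlas": {"ORGANIZATION_ID": "org", "PROJECT_ID": "proj", "PRIVATE_KEY": "key"},
}
print(missing_keys(example))  # ['MongoDBAtlas.PUBLIC_KEY']
```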

Configure collection for multiple projects

This section shows you how to configure collection for multiple projects, assuming you are already collecting Atlas data for one project. This task requires that you do the following:

  • Stop the collection of OrgEvents in the second collector, because these events are global and are already captured by the first collector.
  • Change the DBNAME so that the state (keys) maintained for bookkeeping in the key-value store does not conflict between collectors.

To configure collection for multiple projects, do the following:

  1. Configure the script on a Linux machine, then go to your configuration file.
  2. Change the DBNAME and comment out EVENTS_ORG, as shown in the following example.
SumoLogic:
  LOGS_SUMO_ENDPOINT: <Paste the URL for the HTTP Logs source from step 2.>
  METRICS_SUMO_ENDPOINT: <Paste the URL for the HTTP Metrics source from step 2.>

MongoDBAtlas:
  ORGANIZATION_ID: <Paste the Organization ID from step 1.>
  PROJECT_ID: <Paste the Project ID from step 1.>
  PRIVATE_KEY: <Paste the Private Key from step 1.>
  PUBLIC_KEY: <Paste the Public Key from step 1.>
  LOG_TYPES:
    - DATABASE
    - AUDIT
    - EVENTS_PROJECT
    # - EVENTS_ORG
    - ALERTS

  METRIC_TYPES:
    PROCESS_METRICS:
      - CACHE_DIRTY_BYTES
      - CACHE_USED_BYTES
      - CONNECTIONS
      - CURSORS_TOTAL_OPEN
      - CURSORS_TOTAL_TIMED_OUT
      - DB_STORAGE_TOTAL
      - DB_DATA_SIZE_TOTAL
      - EXTRA_INFO_PAGE_FAULTS
      - GLOBAL_LOCK_CURRENT_QUEUE_TOTAL
      - MEMORY_RESIDENT
      - MEMORY_VIRTUAL
      - MEMORY_MAPPED
      - NETWORK_BYTES_IN
      - NETWORK_BYTES_OUT
      - NETWORK_NUM_REQUESTS
      - OPCOUNTER_CMD
      - OPCOUNTER_QUERY
      - OPCOUNTER_UPDATE
      - OPCOUNTER_DELETE
      - OPCOUNTER_GETMORE
      - OPCOUNTER_INSERT
      - OP_EXECUTION_TIME_READS
      - OP_EXECUTION_TIME_WRITES
      - OP_EXECUTION_TIME_COMMANDS
      - OPLOG_MASTER_LAG_TIME_DIFF
      - OPLOG_SLAVE_LAG_MASTER_TIME
      - QUERY_EXECUTOR_SCANNED
      - QUERY_EXECUTOR_SCANNED_OBJECTS
      - QUERY_TARGETING_SCANNED_PER_RETURNED
      - QUERY_TARGETING_SCANNED_OBJECTS_PER_RETURNED
      - SYSTEM_NORMALIZED_CPU_USER
      - SYSTEM_NORMALIZED_CPU_KERNEL
      - SYSTEM_NORMALIZED_CPU_IOWAIT
      - PROCESS_CPU_USER
      - PROCESS_CPU_KERNEL
      - SYSTEM_NORMALIZED_CPU_STEAL

    DISK_METRICS:
      - DISK_PARTITION_IOPS_READ
      - DISK_PARTITION_IOPS_WRITE
      - DISK_PARTITION_UTILIZATION
      - DISK_PARTITION_LATENCY_READ
      - DISK_PARTITION_LATENCY_WRITE
      - DISK_PARTITION_SPACE_PERCENT_FREE
      - DISK_PARTITION_SPACE_PERCENT_USED

Collection:
  DBNAME: "newmongodbatlas" 

Step 4: Configure Webhooks for alerts collection

This section explains how to configure real-time alert collection using Webhooks.

To configure alert collection with Webhooks, do the following:

  1. Go to the MongoDB Atlas console.
  2. Copy and paste the Logs endpoint from Step 2 to set up the Webhook.

WebHook_config1.png

  3. When configuring an alert, specify the Webhook as shown in the following example, and then click Save.

Webhook_Alert.png
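Atlas delivers each alert to the webhook as an HTTP POST with a JSON body. As a hedged sketch (the endpoint value is a placeholder for the HTTP Logs source URL from Step 2, and the `requests` call is shown only as a comment), a test alert could be forwarded like this:

```python
# Sketch: the shape of a webhook alert delivery. The endpoint is a
# placeholder; the sample fields come from the alert example on this page.
import json

HTTP_LOGS_ENDPOINT = "<HTTP Logs source URL from Step 2>"

def alert_payload(alert):
    """Serialize an Atlas-style alert dict as the JSON body Atlas would POST."""
    return json.dumps(alert)

sample_alert = {
    "eventTypeName": "OUTSIDE_METRIC_THRESHOLD",
    "status": "CLOSED",
    "created": "2019-06-07T01:52:28Z",
}

body = alert_payload(sample_alert)
# To forward it manually (requires the requests package):
# requests.post(HTTP_LOGS_ENDPOINT, data=body,
#               headers={"Content-Type": "application/json"})
```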

Advanced Configuration

This section applies to both AWS Lambda-based collection and script-based collection.

The following table provides a list of variables for MongoDB Atlas that you can optionally define in the configuration file.

Variable  Usage

LOG_TYPES (in MongoDBAtlas section)

  - DATABASE
  - AUDIT
  - EVENTS_PROJECT
  - EVENTS_ORG
  - ALERTS

Remove any of these lines if you do not want to collect logs of that type.

PROCESS_METRICS (in MongoDBAtlas section)

  - CACHE_DIRTY_BYTES
  - CACHE_USED_BYTES
  - CONNECTIONS
  - CURSORS_TOTAL_OPEN
  - CURSORS_TOTAL_TIMED_OUT
  - DB_STORAGE_TOTAL
  - DB_DATA_SIZE_TOTAL
  - EXTRA_INFO_PAGE_FAULTS
  - GLOBAL_LOCK_CURRENT_QUEUE_TOTAL
  - MEMORY_RESIDENT
  - MEMORY_VIRTUAL
  - MEMORY_MAPPED
  - NETWORK_BYTES_IN
  - NETWORK_BYTES_OUT
  - NETWORK_NUM_REQUESTS
  - OPCOUNTER_CMD
  - OPCOUNTER_QUERY
  - OPCOUNTER_UPDATE
  - OPCOUNTER_DELETE
  - OPCOUNTER_GETMORE
  - OPCOUNTER_INSERT
  - OP_EXECUTION_TIME_READS
  - OP_EXECUTION_TIME_WRITES
  - OP_EXECUTION_TIME_COMMANDS
  - OPLOG_MASTER_LAG_TIME_DIFF
  - OPLOG_SLAVE_LAG_MASTER_TIME
  - QUERY_EXECUTOR_SCANNED
  - QUERY_EXECUTOR_SCANNED_OBJECTS
  - QUERY_TARGETING_SCANNED_PER_RETURNED
  - QUERY_TARGETING_SCANNED_OBJECTS_PER_RETURNED
  - SYSTEM_NORMALIZED_CPU_USER
  - SYSTEM_NORMALIZED_CPU_KERNEL
  - SYSTEM_NORMALIZED_CPU_IOWAIT
  - PROCESS_CPU_USER
  - PROCESS_CPU_KERNEL
  - SYSTEM_NORMALIZED_CPU_STEAL

Remove any of these lines if you do not want to collect metrics of that type.

DISK_METRICS (in MongoDBAtlas section)

  - DISK_PARTITION_IOPS_READ
  - DISK_PARTITION_IOPS_WRITE
  - DISK_PARTITION_UTILIZATION
  - DISK_PARTITION_LATENCY_READ
  - DISK_PARTITION_LATENCY_WRITE
  - DISK_PARTITION_SPACE_FREE
  - DISK_PARTITION_SPACE_USED
  - DISK_PARTITION_SPACE_PERCENT_FREE
  - DISK_PARTITION_SPACE_PERCENT_USED

Remove any of these lines if you do not want to collect metrics of that type.

BACKFILL_DAYS (in Collection section)

Number of days before the current date from which event collection starts. If the value is 1, events are fetched from yesterday to today.

PAGINATION_LIMIT (in Collection section)

Number of events to fetch in a single API call.

LOG_FORMAT (in Logging section)

Log format used by the Python logging module to write logs to a file.

ENABLE_LOGFILE (in Logging section)

Set to TRUE to write all logs and errors to a log file.

ENABLE_CONSOLE_LOG (in Logging section)

Enables printing logs to the console.

LOG_FILEPATH (in Logging section)

Path of the log file used when ENABLE_LOGFILE is set to TRUE.

NUM_WORKERS (in Collection section)

Number of threads to spawn for API calls.

MAX_RETRY (in Collection section)

Number of retries to attempt in case of request failure.

BACKOFF_FACTOR (in Collection section)

A backoff factor to apply between attempts after the second try. If the backoff factor is 0.1, then sleep() will sleep for [0.0s, 0.2s, 0.4s, ...] between retries.

TIMEOUT (in Collection section)

Request timeout used by the requests library.

LOGS_SUMO_ENDPOINT (in SumoLogic section)

HTTP source endpoint URL created in Sumo Logic for ingesting logs.

METRICS_SUMO_ENDPOINT (in SumoLogic section)

HTTP source endpoint URL created in Sumo Logic for ingesting metrics.
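The BACKOFF_FACTOR schedule matches the `urllib3`-style formula: no sleep before the second attempt, then backoff_factor * 2^(n-1) seconds before attempt n. A quick illustration:

```python
# Sketch: the retry sleep schedule implied by a backoff factor of 0.1.
def backoff_sleep(retry_number, backoff_factor):
    """Seconds to sleep before the given retry (urllib3-style schedule).
    No sleep before the first retry; then backoff_factor * 2**(n-1)."""
    if retry_number <= 1:
        return 0.0
    return backoff_factor * (2 ** (retry_number - 1))

print([backoff_sleep(n, 0.1) for n in (1, 2, 3, 4)])  # [0.0, 0.2, 0.4, 0.8]
```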

Troubleshooting

This section shows you how to run the function manually and then verify that log messages are being sent from MongoDB Atlas.

To run the function manually, do the following:

  1. Enter one of the following commands:
  • For Python 2, use this command: python -m sumomongodbatlascollector.main
  • For Python 3, use this command: python3 -m sumomongodbatlascollector.main
  2. Check the automatically generated logs in /tmp/sumoapiclient.log to verify whether the function is getting triggered.
  3. If you installed the collector as the root user but run it as a normal user, you will see an error message similar to the following, because the config file is not present in the home directory of the user running the collector. Switch to the root user and run the script again.
Traceback (most recent call last):
 File "/usr/local/lib/python2.7/dist-packages/sumomongodbatlascollector/main.py", line 190, in main
   ns = MongoDBAtlasCollector()
 File "/usr/local/lib/python2.7/dist-packages/sumomongodbatlascollector/main.py", line 29, in __init__
   self.config = Config().get_config(self.CONFIG_FILENAME, self.root_dir, cfgpath)
 File "/usr/local/lib/python2.7/dist-packages/sumomongodbatlascollector/common/config.py", line 22, in get_config
   self.validate_config(self.config)
 File "/usr/local/lib/python2.7/dist-packages/sumomongodbatlascollector/common/config.py", line 34, in validate_config
   raise Exception("Invalid config")
Exception: Invalid config

Sample Log Messages

MongoDB Atlas logs are all in JSON format, and metrics are in Carbon 2.0 format. The MongoDB Atlas App uses five types of logs and two types of metrics. This section provides examples of the log types used by the MongoDB Atlas App.

Database Logs

Sample message

{"msg": "2019-07-03T16:07:40.366+0000 I CONTROL  [initandlisten] MongoDB starting : pid=20104 port=27017 dbpath=/srv/mongodb/M10AWSTestCluster-shard-1-node-2 64-bit host=m10awstestcluster-shard-01-02-snvkl.mongodb.net", "project_id": "5cd0343ff2a30b3880beddb0", "hostname": "m10awstestcluster-shard-01-02-snvkl.mongodb.net", "cluster_name": "m10awstestcluster", "created": "2019-07-03T16:07:40.366+0000"}

For more information, see https://docs.mongodb.com/manual/refe.../log-messages/.
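Because every record is a JSON object, individual fields can be extracted directly. For example, pulling the cluster name and creation time out of the sample above (abbreviated):

```python
# Sketch: extract fields from the database log sample shown above.
import json

sample = ('{"msg": "2019-07-03T16:07:40.366+0000 I CONTROL  [initandlisten] '
          'MongoDB starting", "project_id": "5cd0343ff2a30b3880beddb0", '
          '"hostname": "m10awstestcluster-shard-01-02-snvkl.mongodb.net", '
          '"cluster_name": "m10awstestcluster", '
          '"created": "2019-07-03T16:07:40.366+0000"}')

record = json.loads(sample)
print(record["cluster_name"], record["created"])
```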

Audit Logs

Sample message

{"atype": "authenticate", "ts": {"$date": "2019-07-03T16:08:50.256+0000"}, "local": {"ip": "192.168.253.201", "port": 27017}, "remote": {"ip": "192.168.254.91", "port": 49592}, "users": [{"user": "mms-monitoring-agent", "db": "admin"}], "roles": [{"role": "clusterMonitor", "db": "admin"}], "param": {"user": "mms-monitoring-agent", "db": "admin", "mechanism": "SCRAM-SHA-1"}, "result": 0, "project_id": "5cd0343ff2a30b3880beddb0", "hostname": "m10awstestcluster-shard-01-02-snvkl.mongodb.net", "cluster_name": "m10awstestcluster", "created": "2019-07-03T16:08:50.256+0000"}

For more information, see https://docs.mongodb.com/manual/refe...audit-message/.

Alerts

Sample message

{"alertConfigId": "5cf9c324f2a30b3783a1dd28", "clusterName": "M10AWSTestCluster", "created": "2019-06-07T01:52:28Z", "currentValue": {"number": 5.142857142857142, "units": "RAW"}, "eventTypeName": "OUTSIDE_METRIC_THRESHOLD", "groupId": "5cd0343ff2a30b3880beddb0", "hostnameAndPort": "m10awstestcluster-shard-00-01-snvkl.mongodb.net:27017", "id": "5cf9c35cd5ec1376c01e81e5", "lastNotified": "2019-06-07T02:28:38Z", "links": [{"href": "https://cloud.mongodb.com/api/public...ec1376c01e81e5", "rel": "self"}], "metricName": "CONNECTIONS_PERCENT", "replicaSetName": "M10AWSTestCluster-shard-0", "resolved": "2019-06-07T02:28:29Z", "status": "CLOSED", "typeName": "HOST_METRIC", "updated": "2019-06-07T02:28:29Z"}

For more information, see https://docs.atlas.mongodb.com/refer...et-all-alerts/.

Project Events

Sample message

{"apiKeyId": "5cef76e8c56c9800d21aa96d", "created": "2019-07-29T11:49:56Z", "eventTypeName": "MONGODB_LOGS_DOWNLOADED", "groupId": "5cd0343ff2a30b3880beddb0", "id": "5d3edd64ff7a25581ef2a95b", "isGlobalAdmin": false, "links": [{"href": "https://cloud.mongodb.com/api/atlas/...7a25581ef2a95b", "rel": "self"}], "publicKey": "hpgstoga", "remoteAddress": "111.93.54.106"}

For more information, see https://docs.atlas.mongodb.com/refer...ath-parameters.

Organization Events

Sample message

{"created": "2019-07-22T12:50:32Z", "eventTypeName": "API_KEY_WHITELIST_ENTRY_ADDED", "id": "5d35b118a6f2391aafc77a05", "isGlobalAdmin": false, "links": [{"href": "https://cloud.mongodb.com/api/atlas/...f2391aafc77a05", "rel": "self"}], "orgId": "5cd0343ef2a30b3bc7b8f88e", "remoteAddress": "113.193.231.154", "targetPublicKey": "hpgstoga", "userId": "5cd0343ec56c985c82d3822f", "username": "hpal@sumologic.com", "whitelistEntry": "3.216.123.80"}

For more information, see https://docs.atlas.mongodb.com/reference/api/events-orgs-get-all/.

Metric types

This section provides examples of the metric types utilized by the MongoDB Atlas App.

Process Metrics

Metrics collected

CACHE_DIRTY_BYTES
CACHE_USED_BYTES
CONNECTIONS
CURSORS_TOTAL_OPEN
CURSORS_TOTAL_TIMED_OUT
DB_STORAGE_TOTAL
DB_DATA_SIZE_TOTAL
EXTRA_INFO_PAGE_FAULTS
GLOBAL_LOCK_CURRENT_QUEUE_TOTAL
MEMORY_RESIDENT
MEMORY_VIRTUAL
MEMORY_MAPPED
NETWORK_BYTES_IN
NETWORK_BYTES_OUT
NETWORK_NUM_REQUESTS
OPCOUNTER_CMD
OPCOUNTER_QUERY
OPCOUNTER_UPDATE
OPCOUNTER_DELETE
OPCOUNTER_GETMORE
OPCOUNTER_INSERT
OP_EXECUTION_TIME_READS
OP_EXECUTION_TIME_WRITES
OP_EXECUTION_TIME_COMMANDS
OPLOG_MASTER_LAG_TIME_DIFF
OPLOG_SLAVE_LAG_MASTER_TIME
QUERY_EXECUTOR_SCANNED
QUERY_EXECUTOR_SCANNED_OBJECTS
QUERY_TARGETING_SCANNED_PER_RETURNED
QUERY_TARGETING_SCANNED_OBJECTS_PER_RETURNED
SYSTEM_NORMALIZED_CPU_USER
SYSTEM_NORMALIZED_CPU_KERNEL
SYSTEM_NORMALIZED_CPU_IOWAIT
PROCESS_CPU_USER
PROCESS_CPU_KERNEL
SYSTEM_NORMALIZED_CPU_STEAL

Sample metric

"projectId=5cd0343ff2a30b3880beddb0 hostId=m10awstestcluster-shard-00-00-snvkl.mongodb.net:27016 processId=m10awstestcluster-shard-00-00-snvkl.mongodb.net:27016 metric=ASSERT_REGULAR  units=SCALAR_PER_SECOND cluster_name=m10awstestcluster 0.0 1564207269"

For more information, see https://docs.atlas.mongodb.com/refer...-measurements/.
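Each metric line ends with a numeric value and a Unix timestamp, preceded by key=value tags. A small parser written against the sample above (a sketch, not a full Carbon 2.0 implementation):

```python
# Sketch: split a Carbon 2.0-style metric line into tags, value, timestamp.
def parse_metric_line(line):
    """Return (tags dict, value, timestamp) for a line like the sample above."""
    parts = line.split()
    tags = dict(p.split("=", 1) for p in parts[:-2])
    value = float(parts[-2])
    timestamp = int(parts[-1])
    return tags, value, timestamp

sample = ("projectId=5cd0343ff2a30b3880beddb0 "
          "hostId=m10awstestcluster-shard-00-00-snvkl.mongodb.net:27016 "
          "metric=ASSERT_REGULAR units=SCALAR_PER_SECOND "
          "cluster_name=m10awstestcluster 0.0 1564207269")

tags, value, ts = parse_metric_line(sample)
print(tags["metric"], value, ts)
```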

Disk Metrics

Metrics collected

DISK_PARTITION_IOPS_READ
DISK_PARTITION_IOPS_WRITE
DISK_PARTITION_UTILIZATION
DISK_PARTITION_LATENCY_READ
DISK_PARTITION_LATENCY_WRITE
DISK_PARTITION_SPACE_PERCENT_FREE
DISK_PARTITION_SPACE_PERCENT_USED

Sample metric

projectId=5cd0343ff2a30b3880beddb0 partitionName=nvme1n1 hostId=m10awstestcluster-shard-01-02-snvkl.mongodb.net:27017 processId=m10awstestcluster-shard-01-02-snvkl.mongodb.net:27017 metric=DISK_PARTITION_IOPS_READ  units=SCALAR_PER_SECOND cluster_name=m10awstestcluster 0.0 1564207300

For more information, see https://docs.atlas.mongodb.com/refer...-measurements/.

Query Sample

The query sample provided in this section is from the Recent Audit Events panel of the MongoDB Atlas - Audit dashboard.

_sourceCategory=Labs/MongoDBAtlas/logs AND (_sourceName="mongodb-audit-log.gz" OR _sourceName="mongos-audit-log.gz")
| json "atype", "local.ip", "remote.ip", "users","result", "project_id", "hostname", "cluster_name", "param" as atype, local_ip, remote_ip, users, result, project_id, hostname, cluster_name, param
| json field=param "db", "ns" as database1, database2 nodrop
| parse field=database2 "*.*" as database2, collection nodrop
| if (isBlank(database1), database2, database1) as database
| where atype matches /create|add|grant|drop|remove|revoke|shutdown|rename|update/
| first(_messageTime) as latest_event_date group by atype, project_id, hostname, cluster_name,  database, collection, users, local_ip, remote_ip
| sort by latest_event_date
| formatDate(fromMillis(toLong(latest_event_date)),"MM-dd-yyyy HH:mm:ss:SSS") as latest_event_date
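The `parse field=database2 "*.*"` step splits the audit namespace into database and collection. For reference, the equivalent split in Python:

```python
# Sketch: split an audit-log namespace like 'mydb.mycoll' into its parts,
# mirroring the parse step in the query above.
def split_namespace(ns):
    """Return (database, collection); collection is None if there is no dot."""
    if "." in ns:
        db, collection = ns.split(".", 1)
        return db, collection
    return ns, None

print(split_namespace("admin.system.users"))  # ('admin', 'system.users')
```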