Transaction Operator Examples


For ordered data, there is a group limit of 10,000. The transaction operator uses a least-recently used (LRU) scheme to phase out transactions: when the limit is reached, the transactions included in the results are not the first 10,000 transactions, but the 10,000 most recently used ones. Earlier transactions are evicted and end prematurely, as the following error indicates.

This message is displayed if you use more than 10,000 groups with the Transaction operator:

Group or memory limit exceeded, some transactions may have ended prematurely.

For unordered data, once the group limit of 10,000 is reached, new transactions are ignored.
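Conceptually, the eviction behaves like an LRU cache. The following sketch is illustrative only (it is not Sumo Logic's implementation, and uses a small limit of 3 instead of 10,000) and contrasts the ordered and unordered behaviors described above:

```python
from collections import OrderedDict

def track_ordered(events, limit=3):
    """Ordered data: evict the least-recently-used group at the limit.
    events: iterable of (transaction_id, event)."""
    groups = OrderedDict()          # transaction ID -> list of events
    evicted = []                    # groups that "ended prematurely"
    for tx_id, event in events:
        if tx_id in groups:
            groups.move_to_end(tx_id)                    # most recently used
        elif len(groups) >= limit:
            old_id, _ = groups.popitem(last=False)       # evict the LRU group
            evicted.append(old_id)
        groups.setdefault(tx_id, []).append(event)
    return groups, evicted

def track_unordered(events, limit=3):
    """Unordered data: new transactions are simply ignored at the limit."""
    groups = {}
    for tx_id, event in events:
        if tx_id in groups or len(groups) < limit:
            groups.setdefault(tx_id, []).append(event)
    return groups
```

With the ordered scheme, a burst of new transaction IDs pushes out the groups that have not seen a message recently, which is why some transactions end prematurely rather than the newest ones being dropped.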

Running transaction on sessionID

The transaction operator requires one or more transaction IDs to group related log messages together. For this ID, you can use session IDs, IP addresses, usernames, emails, or any other unique IDs relevant to your query.

In this example, using the browser sessionID as the transaction ID, we define four states: init, countdown_start, countdown_done, and launch:

... | transaction on sessionid
with "Starting session *" as init,
with "Initiating countdown *" as countdown_start,
with "Countdown reached *" as countdown_done,
with "Launch *" as launch
results by transactions showing max(_messagetime),
sum("1") for init as initcount
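Conceptually, this query groups messages by the sessionid field and classifies each message into a named state by wildcard match. The sketch below mirrors that grouping with hypothetical log lines; it is an illustration of the idea, not Sumo Logic internals:

```python
import fnmatch
from collections import defaultdict

# Wildcard patterns mapped to state names, mirroring the query's
# `with "Starting session *" as init` clauses.
STATES = {
    "Starting session *": "init",
    "Initiating countdown *": "countdown_start",
    "Countdown reached *": "countdown_done",
    "Launch *": "launch",
}

def classify(message):
    """Return the state name for the first matching pattern, else None."""
    for pattern, state in STATES.items():
        if fnmatch.fnmatch(message, pattern):
            return state
    return None

def group_transactions(logs):
    """logs: iterable of (sessionid, message). Returns sessionid -> [states]."""
    transactions = defaultdict(list)
    for sessionid, message in logs:
        state = classify(message)
        if state:
            transactions[sessionid].append(state)
    return dict(transactions)
```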

Searching for latency across a distributed system

Let's say we're noticing an issue that could be due to a network problem or perhaps a web server problem. For simplicity's sake, we'll say this system has two internal components (QueryProcessor and Dashboard). The data for the two components are replicated in three databases; requests are load-balanced across these databases.

Running a query similar to:

| parse "] * - " as sessionID
| parse "- * accepting request." as module
| transaction on sessionID with states Frontend, QueryProcessor, Dashboard, DB1, DB2, DB3 in module
results by flow
| count by fromstate, tostate

could produce a Flow Diagram similar to the following, which shows normal latency without issues. The three databases at the end of the Flow Diagram are equally load balanced.
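The `results by flow | count by fromstate, tostate` portion is essentially an edge count over consecutive states within each transaction, and those edge counts are what the Flow Diagram visualizes. A minimal sketch of that aggregation (illustrative only):

```python
from collections import Counter

def count_flow_edges(transactions):
    """transactions: dict of session ID -> ordered list of state names.
    Returns a Counter of (fromstate, tostate) edge counts."""
    edges = Counter()
    for states in transactions.values():
        # Each consecutive pair of states is one edge in the flow.
        for fromstate, tostate in zip(states, states[1:]):
            edges[(fromstate, tostate)] += 1
    return edges
```

With healthy load balancing, the edges into DB1, DB2, and DB3 would carry roughly equal counts; a latency or availability problem on one database shows up as a thinner edge into that node.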

Then say there is an event that causes the average latency to spike to Database 3 between 3:00 AM and 5:00 AM.

This change would be displayed in the Flow Diagram, which shows that the flow to Database 3 is significantly reduced compared with the first Flow Diagram.

Detecting a potential e-commerce failure

In this example, you could track the states of your e-commerce site to see if there is a disconnect or drop off between two states that would indicate a potential failure.

Running a query similar to:

| parse regex "(?<ip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})" nodrop
| transaction on ip
with "*/cart*" as cart,
with "*/shippingInfo*" as shipping,
with "*/billingInfo*" as billing,
with "*Verifying credit card with external service*" as billingVerification,
with "*/confirmation*" as confirmation,
with "*Order shipped*" as ordershipped
results by flow
| count by fromstate, tostate

could produce a Flow Diagram with normal drop-off rates at the different states: cart, shipping, billing, billingVerification, confirmation, and ordershipped.
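The drop-off at each step can be quantified from the per-state transaction counts: for each consecutive pair of funnel states, the fraction of transactions that did not advance. The sketch below uses the state names from the query above; the counts are hypothetical example data, not results from the source:

```python
def dropoff_rates(state_counts, funnel):
    """state_counts: state name -> number of transactions reaching it.
    funnel: ordered list of state names.
    Returns a list of (fromstate, tostate, dropoff_fraction)."""
    rates = []
    for fromstate, tostate in zip(funnel, funnel[1:]):
        reached, advanced = state_counts[fromstate], state_counts[tostate]
        rates.append((fromstate, tostate, 1 - advanced / reached))
    return rates

def worst_step(state_counts, funnel):
    """Return the funnel step with the largest drop-off fraction."""
    return max(dropoff_rates(state_counts, funnel), key=lambda r: r[2])
```

An unusually large drop-off at one step, such as between billingVerification and confirmation, is the signal to start investigating that service.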


Now, if you ran this query and saw results as shown below, where there is a big drop-off at the verification state, you could determine that there is likely a problem with the verification service and start an investigation.


Looking at the state counts in the Aggregates tab confirms the same thing.


Now, if you looked into the log messages for more information about the transactions that succeeded, you would see that they only occur in the first couple of hours of the day. So you can pinpoint that the issue probably started at about 2:00 AM.