FuseQL Examples
Here are some practical examples of how to use FuseQL, the Kloudfuse log search query language.
Count All Logs
- Query Builder: count of all logs, grouped by Everything, limit top 10, 5s timeslice
- Advanced Search: * | timeslice 5s | count by (_timeslice)
This analysis works well for the following use cases:
- Activity Patterns: Examining logs over a time range can help spot patterns in system usage, traffic, or performance. For example, a system may experience higher load during specific hours of the day, so analyzing log volumes over a set period — a day or week — can reveal predictable trends. (A variant query for this use case is sketched after this list.)
- Scaling Decisions: If logs show consistent spikes in traffic or resource usage during certain time ranges, teams may be able to predict when the system requires additional capacity — servers, storage, or network bandwidth — and plan ahead to scale appropriately.
- Impact of Changes or Deployments: After deploying a new feature or making a system update, teams often analyze logs generated at the time of deployment to ensure that the change did not cause any unexpected issues, such as errors or performance degradation. For example, reviewing logs from the past 48 hours can reveal issues resulting from a recent deployment.
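As a rough sketch of the activity-pattern use case, the same count can be bucketed into one-hour intervals to surface daily load trends. This assumes the timeslice interval can be expressed as any number of seconds (3600s for one-hour buckets):
* | timeslice 3600s | count by (_timeslice)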
Count of All Fingerprints
- Query Builder: count of all fingerprints, grouped by Everything, limit top 10, 30s timeslice
- Advanced Search: * | timeslice 30s | count_unique(fingerprint) by (_timeslice)
This analysis works well for the following use cases:
- Identify Unexpected Usage Patterns: By tracking how the variety of user-related fingerprints changes over time, you can also spot unexpected usage patterns. For example, if a specific feature starts generating a wide variety of logs — new queries or interactions — it may indicate that users are adopting the feature in a manner that you didn’t anticipate. This may require further optimization or user support.
- Spot New Problems Early: A sudden increase in the count of different kinds of fingerprints may indicate that new issues are emerging in your system. For example, if new error patterns appear, or previously rare issues become more frequent, tracking the diversity of fingerprints over time helps you detect these problems early and mitigate them before they escalate.
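To focus on the problems-emerging case, the error-level filter used elsewhere in these examples can be combined with the fingerprint count. A minimal sketch, assuming the level facet is populated on your logs:
level="error" | timeslice 30s | count_unique(fingerprint) by (_timeslice)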
Count of All Logs Grouped by Level
- Query Builder: count of all logs, grouped by core:level, limit top 10, 30s timeslice
- Advanced Search: * | timeslice 30s | count by (_timeslice, level)
This analysis works well for the following use cases:
- Spot Spikes in Errors or Warnings: If the count of ERROR or WARN logs increases suddenly, this is a clear signal that something in the system is broken. Whether the cause is a bug in the system, an overload of requests, or a failing component, monitoring the log counts over time, by severity level, helps you quickly detect issues as they arise. This allows you to react proactively, possibly preventing system outages or service degradation.
- Monitor System Usage Trends: INFO logs often provide general operational details, such as how many users are accessing the system, how many transactions are happening, or how many requests are being made. By grouping logs by level over time, you can track normal system behavior, identifying whether the system is performing as expected or if usage has significantly increased.
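For the usage-trend case, the same grouped count can be rolled up into coarser buckets to watch behavior over a longer window. A sketch, assuming the timeslice interval is expressed in seconds (3600s for one-hour buckets):
* | timeslice 3600s | count by (_timeslice, level)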
Count of All Fingerprints Grouped by Source
- Query Builder: count of all fingerprints, grouped by Core:source, limit top 10, 5s timeslice
- Advanced Search: * | timeslice 5s | count_unique(fingerprint) by (_timeslice, source)
This analysis works well for the following use cases:
- Source-Level Diagnosis: Grouping fingerprints by source allows you to understand which parts of your system are generating specific log patterns. For example, if a certain error fingerprint is seen predominantly from a specific service, such as an authentication service, this may indicate that the service itself is the source of the issue. Without grouping by source, you may miss the root cause.
- Resource Allocation and Scaling: If one particular source, like an API gateway or database, generates a disproportionate number of fingerprints, it may indicate a bottleneck or resource contention issue. Understanding this enables more targeted scaling or resource allocation for that part of the system, and helps ensure overall system health.
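In Kubernetes environments, the same breakdown can be run per namespace instead of per source. A sketch, assuming the kube_namespace facet (used in the outlier example later in this page) is available on your logs:
* | timeslice 5s | count_unique(fingerprint) by (_timeslice, kube_namespace)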
Average of a Duration/Number Facet
- Query Builder: avg of @*:duration, grouped by Everything, limit top 10, 5s timeslice
- Advanced Search: * | timeslice 5s | avg(@duration:duration_seconds) by (_timeslice)
This analysis works well for the following use cases:
- Identify Bottlenecks and Latency Trends: If your logs contain durations — response times for API requests, transaction times, query execution times — then calculating the average duration over time helps identify performance trends. For example, if the average duration of an API call is gradually increasing over time, this signals that something in the system is slowing down and requires optimization: database queries taking longer, network latency increasing, and so on.
- Estimate Resource Requirements: Knowing the average duration of specific processes or operations, such as API calls or data processing tasks, helps estimate resource requirements. For example, if the average duration of a batch job is increasing over time, it may indicate that the system requires more CPU or memory resources to handle the load. By calculating averages, teams can plan for future scaling needs and ensure that the system can handle increasing load without performance degradation.
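To see which component is slowing down, the average can also be grouped by source, mirroring the grouping used in the fingerprint example above. A sketch, assuming the same @duration:duration_seconds facet:
* | timeslice 5s | avg(@duration:duration_seconds) by (_timeslice, source)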
Error Rate Formula
- FILTER a: Core:level="error"; count of all logs, grouped by Everything, limit top 10, 5s timeslice
- FILTER b: no filter (Nothing); count of all logs, grouped by Everything, limit top 10, 5s timeslice
- FORMULA: a/b
This analysis works well for the following use cases:
- Failure Detection: A spike in the error rate usually indicates that a system component has failed or is malfunctioning. For example, a sudden rise in errors across the logs could point to a service crash, a network failure, or a hardware issue like a disk failure. Quickly catching these spikes lets teams react faster and bring the system back to normal operation.
- Trend Analysis: Over time, monitoring the error rate helps identify trends that are not immediately apparent. Gradual increases in error rates, even if subtle, can signal an issue that must be addressed — a misconfigured system or slowly degrading performance. Monitoring these trends enables teams to take action before a small issue becomes a major failure.
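In Advanced Search terms, the two filters roughly correspond to the following queries, with the a/b ratio applied as a formula on top of them. This is a sketch, assuming the error filter matches the level facet shown in the other examples:
a: level="error" | timeslice 5s | count by (_timeslice)
b: * | timeslice 5s | count by (_timeslice)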
Anomaly on Count of Error Logs
- Query Builder: filter Core:level="error"; count of all logs, grouped by Everything, 2m timeslice; anomalies (agile-robust, hourly, 1)
This analysis works well for the following use case:
- Anomaly Detection: Identify sudden, sharp spikes in the count of error logs that deviate significantly from the expected range. These spikes may indicate unusual events such as system malfunctions, deployment issues, unexpected traffic surges, or other irregular behaviors. Detecting these anomalies promptly enables teams to investigate and resolve issues quickly, minimizing potential impacts.
In these images, around 8:40, there is a sudden sharp spike in error logs; notice that it breaches the gray band and displays as a red line.
Outlier
level="error" | timeslice 120s | count by (_timeslice, kube_namespace) | outlier (_count) by 120s, model=dbscan, eps=3
- Identify Poor Performance by Source: In this scenario, the error logs are monitored across various sources within a distributed system. The analysis determines that two namespaces are outliers: their error log rates differ significantly from other namespaces. This suggests potential issues within these specific components — increased load, configuration issues, or code changes that may be causing higher-than-normal errors.
This outlier detection allows teams to prioritize investigation into these specific sources, helping to identify and resolve issues before they impact the broader system.
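The grouping field can be swapped for any other facet; for example, the same outlier analysis can run per source rather than per Kubernetes namespace. A minimal variation of the query above, assuming the source facet is populated:
level="error" | timeslice 120s | count by (_timeslice, source) | outlier (_count) by 120s, model=dbscan, eps=3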
Log Math Operator to Scale the Y-Axis Down
This analysis works well for the following use cases:
- Compress wide ranges of values: Make large spikes and small changes comparable on the same chart.
- Reduce the impact of extreme outliers: Reveal subtle trends that extreme values would otherwise hide.
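As a rough illustration with a base-10 logarithm: if a series normally sits around 1,000 events per interval and spikes to 1,000,000, the spike is 1,000 times the baseline and dwarfs everything else on a linear axis; after log scaling, the two points plot at 3 and 6, so both the spike and the smaller interval-to-interval variation stay readable on the same chart.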