How to determine alert criteria when the histogram bucket sizes cannot be determined

Hello Community,

We have a reporting system whose performance in responding to report requests we would like to monitor. We use a histogram to measure the distribution of response latency, but we are having difficulty choosing custom bucket sizes because the size of the requested data varies widely and directly affects the latency. Our original plan was to measure the 90th percentile and use that as the alerting threshold. If we cannot settle on bucket sizes for our report request latency, what would be a good approach to measure performance and configure the alert? A rough sketch of what we have in mind is below. Could anyone give me some insight into this problem? Thanks.
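To make the question concrete, here is a minimal sketch of the kind of instrumentation we have in mind, assuming a Prometheus-style client library (prometheus_client in Python). The metric name, bucket boundaries, threshold, and handler function are placeholders for illustration, not our actual code:

```python
# Minimal sketch, assuming a Prometheus-style setup; names and values below
# are placeholders, not our real instrumentation.
import time
from prometheus_client import Histogram, start_http_server

REPORT_LATENCY = Histogram(
    "report_request_latency_seconds",
    "Latency of report requests",
    # These fixed buckets are exactly what we cannot pin down: depending on
    # the size of the requested data, latency ranges from sub-second to minutes.
    buckets=(0.5, 1, 2.5, 5, 10, 30, 60, 120, 300),
)

def handle_report_request():
    """Placeholder report handler: records how long one request takes."""
    with REPORT_LATENCY.time():
        time.sleep(0.1)  # stand-in for the actual report generation

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for scraping
    while True:
        handle_report_request()
        time.sleep(1)

# The alert expression we originally had in mind (PromQL), assuming the
# histogram above:
#   histogram_quantile(0.90,
#     sum by (le) (rate(report_request_latency_seconds_bucket[5m]))
#   ) > 60    # placeholder threshold: alert when p90 exceeds 60s
```

The part we are stuck on is choosing the `buckets` tuple and the alert threshold, since both depend on how large the requested report is.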
