Issue with Prometheus Query in Grafana

Hello Grafana Community,

I’m currently facing an issue with a Prometheus query in Grafana and would appreciate any insights or suggestions.

I have a metric nginx_request_status_code_total that tracks the number of requests with different status codes. When I query the metric without specifying a time range, I get results as expected. However, when I add a time range, such as [1h], [10m], or [15m], the query returns no data.

Here’s an example of the query that works without a time range:

  sum(nginx_request_status_code_total{status_code="404"})

And here’s the query that does not return data when I add a time range:

  sum(nginx_request_status_code_total{status_code="404"}[1h])

Things I’ve checked:

  • Verified the metric name (nginx_request_status_code_total) and label (status_code="404") are correct.
  • Ensured Prometheus is scraping metrics and Grafana can access Prometheus data sources.
  • Reviewed Prometheus logs for any errors or warnings.

Despite these checks, I’m unable to retrieve data when specifying a time range. Could anyone please advise on what might be causing this issue or suggest additional troubleshooting steps?

Thank you in advance for your help!

Since you are looking for the sum over a period of time: adding a time interval ([1h]) turns the selector into a range vector, and sum() cannot be applied to a range vector directly. You first need to wrap the selector in rate, irate, or increase, and then apply sum over that. Something like this:

  • sum(increase(nginx_request_status_code_total{status_code="404"}[1h]))
  • sum(rate(nginx_request_status_code_total{status_code="404"}[1h]))
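To make the distinction concrete, here is a minimal Python sketch of what increase() and rate() compute over the samples inside a range window. This is a deliberate simplification, not Prometheus internals: real Prometheus also extrapolates to the window boundaries and handles counter resets, which this ignores.

```python
# Simplified model of increase()/rate() over one series' samples in a window.
# Assumption: samples is a list of (timestamp_seconds, counter_value) pairs
# that fall inside the range window, oldest first.

def increase(samples):
    """Growth of the counter across the window (no reset/extrapolation handling)."""
    if len(samples) < 2:
        return 0.0  # like Prometheus, we need at least two samples in the window
    return samples[-1][1] - samples[0][1]

def rate(samples, window_seconds):
    """Per-second average growth over the window."""
    return increase(samples) / window_seconds

# Five scrapes of a 404 counter over one hour
window = [(0, 100), (900, 130), (1800, 155), (2700, 180), (3600, 220)]
print(increase(window))       # 120 requests in the hour
print(rate(window, 3600))     # ~0.033 requests per second
```

sum() then adds these per-series results across all series matching the label selector.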

Additionally, you could try changing the time interval. I have personally seen cases where the time interval was smaller than my scrape interval, so the window contained no samples and the query returned no data.
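That scrape-interval point can be illustrated with a small sketch (again a simplified model of range selection, not Prometheus internals): a range selector only sees samples whose timestamps fall inside the window, so a window shorter than the scrape interval can be empty.

```python
# Simplified model of a PromQL range selector such as metric[15s].
# Assumption: sample_timestamps are the scrape times (in seconds) of one series.

def samples_in_window(sample_timestamps, now, window_seconds):
    """Samples a range selector [window] evaluated at `now` would see."""
    return [t for t in sample_timestamps if now - window_seconds < t <= now]

scrapes = list(range(0, 3601, 60))  # one series scraped every 60s for an hour

# Window shorter than the scrape interval: may contain zero samples -> "No data"
print(samples_in_window(scrapes, now=3630, window_seconds=15))   # []

# Window comfortably larger than the scrape interval: always has samples
print(samples_in_window(scrapes, now=3630, window_seconds=120))  # [3540, 3600]
```

A common rule of thumb is to make the range at least 2-4x the scrape interval so rate()/increase() always have two or more samples to work with.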
