Can't aggregate log counts into hourly buckets

  • What Grafana version and what operating system are you using?
    Cloud (Free) at the moment

  • What are you trying to achieve?
    We have a process that logs a single entry each time it successfully completes a unit of work (error_code = 0), or when an attempt fails (error_code = 4). This can add up to thousands of logs per hour across all machines. We’re parsing those logs and sending them to Loki as JSON.

I’m trying to show how many units are produced per hour across all machines using count_over_time. Is there a better way to do this?
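For reference, each parsed log line looks roughly like this (the timestamp and machine fields are assumptions; only error_code is confirmed above):

```json
{"timestamp": "2024-05-01T12:00:00Z", "machine": "robot-07", "error_code": 0}
```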

  • How are you trying to achieve it?
    count_over_time({service_name="robot-logs"} | json | error_code = 0 [$__auto])
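In case it clarifies the intent, here is a sketch of a variant that quotes the label value and sums the per-stream counts into a fixed hourly window; the sum() wrapper and the [1h] range are assumptions about the aggregation being asked for:

```logql
sum(count_over_time({service_name="robot-logs"} | json | error_code = "0" [1h]))
```

Without the sum(), count_over_time returns one series per log stream rather than a single total across all machines.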

  • What happened?
    (Trace ID: 333680631c5d3b850a95bcdb3885a65b)
  • What did you expect to happen?
    I expected to see a time series showing how many logs with error_code = 0 fall into each hourly bucket.

  • Can you copy/paste the configuration(s) that you are having problems with?

  • Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were.

  • Did you follow any online instructions? If so, what is the URL?