Are alerting false alarms due to misconfiguration or a bug?

Hi folks,

I’m collecting server metrics with Collectd, storing them in Graphite, and using Grafana as a dashboard.
If disk usage on a server climbs above 80%, I’d like to get an alert.

However when I test the alert, I get:

  • An error message “tsdb.HandleRequest() error Request failed status: 403 Forbidden”
  • An error message “Templating init failed [object Object]” in the alert email.

If I turn the alert on anyway, I get false alarms very frequently, usually one as soon as I enable it.

Example Graph Query:
keepLastValue(collectd.mongoserver.df-mongodb-data.percent_bytes-used, 100)
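For context, keepLastValue fills null datapoints by repeating the last seen value (up to a limit), which matters for alerting: a series with null gaps can make a threshold rule see "no data" or a stale spike and fire spuriously. Here is a simplified Python sketch of that fill behavior (the function and series are illustrative, not Graphite's actual implementation):

```python
def keep_last_value(series, limit):
    """Simplified sketch of Graphite's keepLastValue(): fill up to
    `limit` consecutive None datapoints with the last real value."""
    out = []
    last = None
    gap = 0
    for v in series:
        if v is None and last is not None and gap < limit:
            out.append(last)   # carry the previous value forward
            gap += 1
        else:
            out.append(v)
            if v is not None:
                last = v       # remember the latest real datapoint
                gap = 0
    return out

# A disk-usage series with a collection gap; without filling, an
# "avg of last N points > 80" rule can misbehave on the nulls.
raw = [75.0, None, None, 76.0, None]
print(keep_last_value(raw, 100))  # [75.0, 75.0, 75.0, 76.0, 76.0]
```

This is why keepLastValue(100) appears in the query above: it keeps the alert evaluation from tripping over short reporting gaps.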

Example Alert Query:

Example Email Alert:

I’m running the Docker version of Grafana 4.2. My Graphite server is not on port 80 and access is restricted by IP. My graphs in Grafana render fine.

Is there a bug where data source ports aren’t respected somewhere in alerting?
Or have I misconfigured Grafana, or the queries for the graph or the alert?

I don’t care if the graphs embedded in the alert emails are broken; I just want accurate alerts.

Thanks for your help!

Do you use direct or proxy access? Can the Grafana server reach your Graphite server? It looks like it is getting access denied when trying to evaluate the alert rule.
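One quick way to check this is to request the same target from Graphite's /render API directly from the machine (or container) running grafana-server; a 403 there reproduces the alert failure. A hedged Python sketch for building that request (hostname and port are placeholders for your setup):

```python
from urllib.parse import urlencode

# Placeholder -- substitute your Graphite host and non-standard port.
GRAPHITE = "http://graphite.example.com:8080"

def render_url(target, frm="-5min", fmt="json"):
    """Build a Graphite /render request URL for one target."""
    qs = urlencode({"target": target, "from": frm, "format": fmt})
    return f"{GRAPHITE}/render?{qs}"

url = render_url(
    "keepLastValue(collectd.mongoserver.df-mongodb-data.percent_bytes-used, 100)"
)
print(url)

# From the grafana-server host, fetch it and check the status code;
# a 403 here means the IP restriction is blocking grafana-server:
#   import urllib.request
#   print(urllib.request.urlopen(url).status)
```

If the request succeeds from your browser's machine but returns 403 from the Grafana host, the IP restriction is the culprit.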

I use direct access. Yes, grafana can access my graphite server. It is able to draw graphs of the same metric without problems.

Maybe your alert query is using template variables? That is not supported in alerting. It also looks like Graphite cannot be accessed by grafana-server when rendering the PNG image. Note that with direct access the browser talks to Graphite, but alert rules are evaluated by grafana-server itself, so the server needs access too.
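Since alert evaluation (and the email PNG render) happens in grafana-server rather than the browser, switching the data source to proxy access and allowing the Grafana host through Graphite's IP restriction usually resolves the 403. A sketch of the relevant data source settings (the name, URL, and port are placeholders for your setup):

```json
{
  "name": "graphite",
  "type": "graphite",
  "url": "http://graphite.example.com:8080",
  "access": "proxy",
  "basicAuth": false
}
```

With proxy access, all queries, alert evaluations, and image renders go through grafana-server, so only that one host needs to be allowed by Graphite's IP restriction.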