Any way to prevent empty alerts when starting up Grafana?

I am provisioning new clusters via Terraform, each with an installation of Grafana (4.2) on one node that displays monitoring statistics (held in Graphite and InfluxDB) collected from the other nodes via Riemann, collectd, and cAdvisor. Using Wizzy, I import a default dashboard with alerts into the freshly installed Grafana at startup.

I notice that immediately on startup, Grafana fires one empty notification, then quickly sends the OK.

Here’s the config and history of the Root disk usage alert:

My hope is to avoid sending this empty notification to our Slack channel when a new cluster (and its Grafana) first comes online. What is the best way to achieve this?

Some ideas I’ve considered:

  1. Configuring Grafana not to fire an alert when only a very short history (seconds) exists for a particular measure. The alert query averages over the last 15 minutes, but only a few seconds of data exist when this fires.
  2. Some deployment scripting to start Grafana with ‘execute_alerts’ set to false in grafana.ini, then after a few minutes switch execute_alerts back to true and restart Grafana (the relevant grafana.ini snippet is shown after this list).
  3. Other…
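
For reference, this is the grafana.ini setting option 2 refers to; as far as I know it lives in the [alerting] section (the config file path depends on how Grafana was installed):

```ini
[alerting]
# Turn off alert rule execution (no evaluations, so no notifications are sent)
execute_alerts = false
```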

Any help on this would be most appreciated.

Here’s an example for another alert:

Since I received no other suggestions, I went with the following approach:

  1. Do a clean install of Grafana with execute_alerts set to false.
  2. Wait 15 minutes for metrics to start coming in from various newly created VMs.
  3. Set execute_alerts to true in grafana.ini and restart Grafana.

This is scripted via Terraform, using bash scripts that rely on nohup, sleep, and systemctl. It seems to avoid the spurious/empty alerts that Grafana sends to Slack when alerting is enabled immediately after installation.
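
A minimal sketch of that sequence, assuming a package install with the config at /etc/grafana/grafana.ini and the standard grafana-server systemd unit (the path and the 15-minute wait are assumptions; in practice I launch it with nohup from the Terraform provisioner so the wait doesn't block the apply):

```bash
#!/usr/bin/env bash
# Post-install helper: keep alert execution off until metrics have accumulated.
set -euo pipefail

GRAFANA_INI=/etc/grafana/grafana.ini   # assumed config location for a package install

# 1. Disable alert rule execution and (re)start Grafana.
sed -i 's/^;*execute_alerts *=.*/execute_alerts = false/' "$GRAFANA_INI"
systemctl restart grafana-server

# 2. Give the newly created VMs ~15 minutes to start reporting metrics.
sleep 900

# 3. Re-enable alert execution and restart Grafana.
sed -i 's/^execute_alerts *=.*/execute_alerts = true/' "$GRAFANA_INI"
systemctl restart grafana-server
```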

Hey @guydavis, can you help me with this?

How do we control how often an alert is sent to the Slack group?

I mean, does it only notify when the alert condition is first reached, or does it keep re-sending the notification at some time interval?