False Firing Alert – "DatasourceNoData" Alert Triggering Despite Valid Data in Query

We are experiencing an issue where a Grafana alert is firing with the message:

High Alert! Your site is down  
Firing  
Labels: alertname = DatasourceNoData  
Annotations: summary = (!) V4 US SHAREPOINT SITE IS DOWN (!)

However, the associated query is returning valid and recent data, and the Grafana dashboard panel reflects this correctly. The alert appears to be falsely triggering under the assumption that no data is available, even though this is not the case.


Details:

  • Grafana Version: [11.03]
  • Data Source: [ Prometheus ]
  • Panel Type: [Time series]
  • Alert Engine: [Unified Alerting / Legacy]
  • Evaluation Window: e.g., last 5m
  • Evaluation Interval: e.g., 5m
  • Alert Condition:
WHEN last() OF query(A, 5m, now) IS BELOW
![alert msg|690x361](upload://4yNtQy191q1xjbylSWTGJQrVsyM.png)
![Query2|491x500](upload://imZ7u0FkBmL3870igN6TXLcEMef.png)
![Query1|421x500](upload://8ORugShgGr3MWjKE2p5zPcGVEUn.png)
  • “No Data” Behavior Setting: Currently set to “Alerting”
  • Query Behavior: Running the query manually with the same time range used in the alert returns valid results with recent timestamps and numeric values.
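
For reference, a minimal sketch of how the alert query can be replayed against Prometheus at past evaluation timestamps, to check whether any single evaluation would have returned an empty instant-query result (which is what produces DatasourceNoData, even if the dashboard's range query looks continuous). The Prometheus URL, the query expression, and the lookback window are placeholders, not the actual values from our setup:

```python
import time
import requests

PROM_URL = "http://prometheus:9090"   # assumption: local Prometheus endpoint
QUERY = 'up{job="windows_exporter"}'  # placeholder for the actual alert query (A)
STEP = 5 * 60                         # evaluation interval: 5m
LOOKBACK = 60 * 60                    # replay the last hour of evaluations

now = int(time.time())
for ts in range(now - LOOKBACK, now + 1, STEP):
    # Instant query at the exact evaluation timestamp, which is roughly
    # what the alert evaluator sees (unlike the dashboard's range query).
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": QUERY, "time": ts},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    status = "OK" if result else "EMPTY -> would produce DatasourceNoData"
    print(f"{time.strftime('%H:%M:%S', time.localtime(ts))}  {status}")
```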

What We Tried:

  • Verified the query manually against the alert evaluation time range (results returned correctly)
  • Switched “No Data” behavior to “OK” and “Keep Last State” to test different outcomes
  • Confirmed that data is not null and contains valid values
  • Checked for transformations or format issues
  • Cleared and recreated the alert rule
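
For completeness, a sketch of how the stored rule definition can be fetched through Grafana's alerting provisioning API, to confirm that the saved "No Data" state and relative time range match what the UI shows. GRAFANA_URL, API_TOKEN, and RULE_UID are placeholders, and the printed field names follow the provisioning API's rule model as we understand it:

```python
import requests

GRAFANA_URL = "http://grafana:3000"    # assumption: on-prem Grafana base URL
API_TOKEN = "<service-account-token>"  # placeholder
RULE_UID = "<alert-rule-uid>"          # placeholder: UID of the failing rule

resp = requests.get(
    f"{GRAFANA_URL}/api/v1/provisioning/alert-rules/{RULE_UID}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
rule = resp.json()

# These are the values the evaluator actually uses, regardless of what the
# panel or the rule editor appears to display.
print("noDataState:", rule.get("noDataState"))
print("execErrState:", rule.get("execErrState"))
for q in rule.get("data", []):
    print(q.get("refId"), q.get("relativeTimeRange"))
```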

Expected Behavior:
The alert should fire only when no data is genuinely returned or when the threshold condition is met. In this case, the alert fires even though data is available and the threshold is not breached.


Request:
Please advise on the root cause or any known issues related to this behavior. Is there a known bug in the alert engine when evaluating certain data sources or time ranges? Could the alert be triggered by a momentary gap at evaluation time even though the data appears continuous on the dashboard?

Enable alert state history and provide the alert state history logs for that alert, please.


Hi Team,

I have enabled and configured alerting correctly in our on-prem Grafana instance. We are using Prometheus with windows_exporter as the data source.

I am currently facing an issue where an alert is firing falsely (e.g., DatasourceNoData) even though the underlying data is present and visible on the dashboard. This issue has been occurring consistently for the past week.

As part of troubleshooting:

  • I have enabled alert history in the configuration
  • However, I do not see any logs or history records for the alert state transitions (such as firing, resolved, or no data)
  • I am using Unified Alerting on a self-hosted (on-premises) Grafana setup

Could you please help me with the following:

  1. Where exactly can I find the alert history logs for a specific alert in the on-prem version?
  2. Is there any Grafana API or log file from which I can extract the state transition history of an alert? (See the sketch after this list.)
  3. Any known issues or recommendations for false alert triggers when data is actually available?
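
For reference, a sketch of one way state transitions can usually be pulled when the default (annotations-backed) state history is in use, via the annotations HTTP API. GRAFANA_URL and API_TOKEN are placeholders, and the type=alert filter plus the newState/prevState fields are assumptions about the annotation model rather than confirmed details of our instance:

```python
import time
import requests

GRAFANA_URL = "http://grafana:3000"    # assumption: on-prem Grafana base URL
API_TOKEN = "<service-account-token>"  # placeholder

now_ms = int(time.time() * 1000)
resp = requests.get(
    f"{GRAFANA_URL}/api/annotations",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={
        "type": "alert",                     # only alert state-change annotations
        "from": now_ms - 24 * 3600 * 1000,   # last 24h
        "to": now_ms,
        "limit": 100,
    },
    timeout=10,
)
resp.raise_for_status()
for a in resp.json():
    print(a.get("time"), a.get("prevState"), "->", a.get("newState"), a.get("text"))
```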

Please let me know if you need additional configuration details or exports from the alert rule setup.

Thanks,

How did you enable that? Did you follow the documentation properly (Configure alert state history | Grafana documentation)? Please provide reproducible steps for how you enabled it.