Hello!
I’m trying to set up some alerts and I’m not sure if my alert is configured incorrectly or if there is a problem with Grafana.
I have three queries, and they all return the values I would expect on the graph itself. (The 185% is because the 3 is actually 3.25.)
When I look at my alert, the value for the average is not the same as on the graph. I put the other two metrics on the alert so I could see what values they got, and they seem to be right, but the percent metric is always wrong. Sometimes it gets 0, as below, and sometimes it gets a numeric value that is different from what I would expect.
I have my queries summarizing by 24 hours; is that affecting it somehow? Any help would be appreciated.
Hi,
What datasource are you using? It would also be valuable if you could include your queries.
Marcus
We’re using Graphite 0.9.16. Liz will have to answer the question on the queries though.
Here are my queries, or at least the relevant parts, if this is what you’re looking for:
A: alias(averageSeries(timeStack(summarize(xyz, '1h', 'sum', false), '1week', 1, 5)), 'past month average count')
D: alias(timeStack(summarize(xyz, '1h', 'sum', false), '1h', 0, 1), 'current count')
E: alias(asPercent(#D, #A), 'current percent of average (alert metric)')
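In case the #A/#D query references are part of the problem when the alert evaluates, here is what E would look like written out as a single self-contained query (just a sketch, with the A and D expressions substituted in place of the references):

alias(asPercent(timeStack(summarize(xyz, '1h', 'sum', false), '1h', 0, 1), averageSeries(timeStack(summarize(xyz, '1h', 'sum', false), '1week', 1, 5))), 'current percent of average (alert metric)')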
Hi,
I’ve searched the community site and found the following post, which seems similar to your problem: Alerts based on graphite data are evaluating to null values even though the metric returns valid plot points
Could you please try changing your alert’s condition according to that post, i.e. using a time range of (25h, now) and/or ending the range at now-5m, etc.?
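For example, the condition could read along these lines (just a sketch; the avg aggregation and the 100 threshold are placeholders for whatever your rule actually uses):

WHEN avg () OF query(E, 25h, now-5m) IS ABOVE 100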
Marcus
I am experiencing a problem similar to this… I have a computed metric like so:
alias(absolute(offset(divideSeries(#B, #A), -1)), 'percent successful requests')
And plenty of data points come back in the graph, one every 10 seconds, so the solution from the linked issue does not help, @mefraimsson. Still, when the alert evaluates that metric, the query object returns 0 results. Could it be something about calculated metrics?
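For reference, the same metric with the #A/#B references inlined would look something like this (your.success.metric and your.total.metric are placeholders, since the actual #B and #A targets aren’t shown here):

alias(absolute(offset(divideSeries(your.success.metric, your.total.metric), -1)), 'percent successful requests')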