I have included some screenshots, since this is baffling me.
The instance these metrics come from receives a good number of requests every few seconds/minutes, but the stats seem to accumulate and only get registered every 16-18 minutes on average.
This could be a graphite/carbon problem, but I’m not sure, and would like to know if anyone might know what’s going on.
Not all types of instances have this problem, which is also strange.
Incoming HTTP requests: the AWS load balancer sends 2 every 30 seconds, and those are mostly registered fine, but the other requests get lumped together into a single spike of 110k+, though I doubt there are actually that many in the 16-18 minute span.
The DB also seems to make 240k+ calls in a span of 10 seconds, if the spikes are accurate, and almost nothing in between. (I'm guessing the stats aren't being flushed somewhere on my side, since the spikes drop after an instance restart and then keep growing the longer the instance runs?)
Hmmm, good call. I've been applying perSecond to most counts; without it you get the following. #G is the query that all of those spiky metrics share (sometimes with an extra scale()).
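For reference, this is roughly the shape of query I mean (the metric path here is made up, not my actual one):

```
# raw counter: keeps climbing until it's flushed/reset
myapp.http.requests.count

# what I normally graph instead: per-second rate, scaled up to per-minute
scale(perSecond(myapp.http.requests.count), 60)
```

perSecond() turns the ever-increasing counter into a rate, which is why the spikes were hidden from me until now.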
The count should always be steadily increasing; what could cause this?
I have more than one instance logging the same metrics. Does that trip up carbon/graphite?
Oh no, sorry, I meant that this metric should already be increasing-only. I've figured out that carbon-cache doesn't handle receiving identically named metrics from different sources. I assumed it would automatically aggregate them according to the storage-aggregation.conf file, but apparently not.
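One workaround I'm considering is namespacing the metrics per instance so the names never collide, and summing them at query time instead (instance IDs here are hypothetical):

```
# instead of every instance sending the same name:
myapp.http.requests.count

# each instance sends its own branch:
myapp.instances.i-0abc123.http.requests.count
myapp.instances.i-0def456.http.requests.count
```

Then a single graph can combine them with sumSeries(myapp.instances.*.http.requests.count).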
I'm now trying to get carbon-c-relay working in front of my carbon-cache, but it doesn't seem to actually be receiving any metrics.
I've set carbon-c-relay's port to 2003 and carbon-cache's port to 2006, with a cluster defined in carbon-c-relay.conf as cluster carbon forward 127.0.0.1:2006.
But carbon-c-relay doesn’t seem to accept any input…
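For completeness, here's roughly what I have in carbon-c-relay.conf. As far as I understand, a cluster definition alone does nothing; without a match rule routing metrics to it, the relay silently drops everything, so this is the part I'm unsure about:

```
cluster carbon
    forward
        127.0.0.1:2006
    ;

match *
    send to carbon
    stop
    ;
```

If anyone can confirm whether the match block is required (or spot something else wrong), that would help.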