Scrape job not collecting all metrics from CloudWatch

What happened?
The Grafana scrape job for AWS is not collecting all data. For example, I noticed that it stopped collecting CPU usage by the 22nd for some EC2 instances, while for others it collects data only intermittently. At the moment of writing it collects CPU usage for only 1 out of 4 running instances.

What did you expect to happen?
I expect the scrape job to collect all data, meaning I can see the same data in Grafana as I see in CloudWatch/AWS.

Did this work before?
I don’t know. I noticed that for some EC2 instances it did work and then stopped.

How do we reproduce it?

  1. Configure an AWS scrape job and collect the AWS/EC2 aws_ec2_cpuutilization metric. I also noticed that SQS metrics are missing for some of the queues.
  2. Compare the metric data in AWS and Grafana (see the sketch after this list for pulling the same datapoints directly from CloudWatch).
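
A minimal sketch for step 2, assuming boto3 is available; the region and instance ID are hypothetical placeholders, replace them with an instance whose datapoints are missing in Grafana. It pulls CPUUtilization straight from CloudWatch so the result can be compared against what the scrape job exposes:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical values: use the region and an instance that is missing data in Grafana.
REGION = "us-east-1"
INSTANCE_ID = "i-0123456789abcdef0"

cloudwatch = boto3.client("cloudwatch", region_name=REGION)

end = datetime.now(timezone.utc)
start = end - timedelta(hours=3)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=start,
    EndTime=end,
    Period=300,  # 5-minute resolution (default EC2 monitoring)
    Statistics=["Average"],
)

# Print datapoints in time order; gaps here point at CloudWatch itself,
# while gaps that appear only in Grafana point at the scrape job.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```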

Environment (with versions):

  • Grafana version: Grafana v11.3.0-74868 (4a753dd2d5)
  • Operating system:
  • Browser: Version 114.0.5735.198 (Official Build) (64-bit)
  • Datasource version(s) (prometheus, graphite, etc.): AWS CloudWatch
  • Plugins:

Configuration information:

  1. Use the CloudFormation role (a sketch for checking what that role can see is below).
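
A minimal sketch, assuming the scrape job assumes the role created by the CloudFormation stack; the role ARN below is a hypothetical placeholder. It lists every CPUUtilization metric that role can actually see, which helps rule out a permissions or discovery problem on the AWS side:

```python
import boto3

# Hypothetical role ARN: replace with the role created by the CloudFormation stack.
ROLE_ARN = "arn:aws:iam::123456789012:role/GrafanaCloudWatchScrapeRole"
REGION = "us-east-1"

sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="scrape-debug")["Credentials"]

cloudwatch = boto3.client(
    "cloudwatch",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# List every CPUUtilization metric visible to the role; an instance missing
# from this list cannot be picked up by the scrape job either.
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="AWS/EC2", MetricName="CPUUtilization"):
    for metric in page["Metrics"]:
        print(metric["Dimensions"])
```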