Prometheus.exporter.cloudwatch returning empty data

Dear all,

I’d like to configure a static job for prometheus.exporter.cloudwatch to get S3 data.

I’ve followed the documentation and everything seems fine to me:
1.- The alloy user has valid credentials
2.- The prometheus conf for the static job looks OK

My env: Alloy 1.2.0 on CentOS 9.

I added to my alloy.conf something like:

prometheus.exporter.cloudwatch "s3_data" {
    debug = true
    sts_region      = "eu-west-1"
    static "compbiodata" {
        namespace   = "AWS/S3"
        regions     = ["eu-west-1"]
        dimensions = {
                "BucketName" = "bucket_name",
                "StorageType" = "StandardStorage",
        }
        metric {
            name       = "BucketSizeBytes"
            statistics = ["Average"]
            period     = "86400s"
        }
    }
}

bucket_name is an existing bucket name from my account, but ideally I’d like to get ALL buckets, so I also tried:

dimensions = {
  BucketName  = "*",
  StorageType = "*",
}

Alloy starts successfully but no data is pushed to Prometheus; in the logs I see something like:

-----------------------------------------------------
2025/03/31 14:36:35 <GetCallerIdentityResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
  <GetCallerIdentityResult>
    <Arn>arn:aws:iam::awsaccountid:user/custom.cloudwatch.integration</Arn>
    <UserId>AIDARXYZ</UserId>
    <Account>awsaccountid</Account>
  </GetCallerIdentityResult>
  <ResponseMetadata>
    <RequestId>4ec4c575-431a-80cb-a2c3ddb3da09</RequestId>
  </ResponseMetadata>
</GetCallerIdentityResponse>
2025/03/31 14:36:35 DEBUG: Request monitoring/GetMetricStatistics Details:
---[ REQUEST POST-SIGN ]-----------------------------
POST / HTTP/1.1
Host: monitoring.eu-west-1.amazonaws.com
User-Agent: aws-sdk-go/1.53.11 (go1.22.3; linux; amd64)
Content-Length: 385
Authorization: AWS4-HMAC-SHA256 Credential=AKIARXYZ/20250331/eu-west-1/monitoring/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=XYZ
Content-Type: application/x-www-form-urlencoded; charset=utf-8
X-Amz-Date: 20250331T123635Z
Accept-Encoding: gzip

Action=GetMetricStatistics&Dimensions.member.1.Name=BucketName&Dimensions.member.1.Value=awsaccountid-terraform-state-bucket&Dimensions.member.2.Name=StorageType&Dimensions.member.2.Value=StandardStorage&EndTime=2025-03-31T12%3A36%3A35.302Z&MetricName=BucketSizeBytes&Namespace=AWS%2FS3&Period=86400&StartTime=2025-03-30T12%3A36%3A35.302Z&Statistics.member.1=Average&Version=2010-08-01
-----------------------------------------------------
2025/03/31 14:36:35 DEBUG: Response monitoring/GetMetricStatistics Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 200 OK
Content-Length: 338
Content-Type: text/xml
Date: Mon, 31 Mar 2025 12:36:35 GMT
X-Amzn-Requestid: 73e90656-280b-4ad2-8e7c-77215c32b0aa

-----------------------------------------------------
2025/03/31 14:36:35 <GetMetricStatisticsResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
  <GetMetricStatisticsResult>
    <Datapoints/>
    <Label>BucketSizeBytes</Label>
  </GetMetricStatisticsResult>
  <ResponseMetadata>
    <RequestId>73e90656-280b-4ad2-8e7c-77215c32b0aa</RequestId>
  </ResponseMetadata>
</GetMetricStatisticsResponse>                  

I’m playing from the very same box with the Python SDK, and I get data when I run something like:

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": bucket_name},
        {"Name": "StorageType", "Value": storage_type},
    ],
    StartTime=datetime.datetime.now() - datetime.timedelta(days=2),
    EndTime=datetime.datetime.now(),
    Period=86400,
    Statistics=["Average"],
)

Not sure if I’m overlooking something obvious in the conf or missing anything… but the debug flag is not providing enough information for me to understand where the problem is.

Ah, one more thing: I added a scrape section for this, but I guess it is irrelevant to my get-metrics issue.

prometheus.scrape "s3_data" {
  scrape_interval = "3m"
  targets    = prometheus.exporter.cloudwatch.s3_data.targets
  forward_to = [prometheus.remote_write.prom.receiver]
}

any help is appreciated.

Best,

So, a couple of conceptual mistakes here:

1.- There were no datapoints because of the period/length combination; the lookback window was too short.
You can play around with the AWS CLI until you find a good balance:

aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=MYBUCKETNAME Name=StorageType,Value=StandardStorage  --namespace AWS/S3 --statistics Average --period 3600 --start-time 2025-03-27T11:55:50+01:00 --end-time 2025-04-01T12:55:50+02:00
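The intuition behind the period/length balance can be sketched in plain Python (no AWS calls; the helper below is illustrative, not part of any SDK): GetMetricStatistics returns at most one datapoint per full period that fits in the requested time range, and since S3 reports BucketSizeBytes only once a day, a 24-hour range with an 86400 s period leaves room for a single window that can easily miss the daily report.

```python
from datetime import timedelta

def max_datapoints(lookback: timedelta, period_seconds: int) -> int:
    """Upper bound on datapoints GetMetricStatistics can return:
    one per full period that fits inside [StartTime, EndTime]."""
    return int(lookback.total_seconds()) // period_seconds

# Original failing request: 1-day range, 86400 s period -> room for only
# one datapoint, which is missed unless the daily S3 report lands in it.
print(max_datapoints(timedelta(days=1), 86400))  # 1
# The CLI experiment above: ~5-day range, 3600 s period -> plenty of room.
print(max_datapoints(timedelta(days=5), 3600))   # 120
```

A longer lookback relative to the period makes it much more likely that at least one reported datapoint falls inside the window.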

2.- In static mode, using wildcards in dimensions does not work; for that you need a discovery block:

prometheus.exporter.cloudwatch "s3_data" {
    debug      = true
    sts_region = "eu-west-1"
    discovery {
        type    = "AWS/S3"
        regions = ["eu-west-1"]
        metric {
            name       = "BucketSizeBytes"
            statistics = ["Average"]
            period     = "3600s"
        }
    }
}
prometheus.scrape "s3_data" {
  scrape_interval = "3m"
  targets    = prometheus.exporter.cloudwatch.s3_data.targets
  forward_to = [prometheus.remote_write.prom.receiver]
}

Now the logs show some S3 information for the current account; so far cross-account is not working, but that’s another story.

So I’m answering my own question, hoping this will be helpful for someone in the future.
