Loki complains about S3 bucket

Hi,

I’m currently trying to get started with Loki and S3 for storage. I’m using the official helm chart. My values.yaml file looks like:

      resources:
        limits:
          cpu: 250m
          memory: 512Mi
        requests:
          cpu: 250m
          memory: 512Mi

    config:
      schema_config:
        configs:
        - from: 2020-10-24
          store: boltdb-shipper
          object_store: aws
          schema: v11
          index:
            prefix: index_
            period: 24h

      storage_config:
        aws:
          bucketnames: <bucket name>
          region: <region name>
          access_key_id: <access key id>
          secret_access_key: <secret access key>

        boltdb_shipper:
          shared_store: aws

      compactor:
        shared_store: aws

After the pod starts, it logs the following every few minutes, and no labels show
up in Grafana.

level=error ts=2021-04-01T09:09:54.950842366Z caller=flush.go:220 org_id=fake msg="failed to flush user" err="InvalidParameter: 1 validation error(s) found.\n- minimum field size of 1, PutObjectInput.Bucket.\n"

Has anybody seen this before?

+1 experiencing this as well

  • confirmed service account has access to the bucket
  • SA is being used by the pods

Managed to get rid of most of the errors by using

s3: s3://region/bucket

but no idea if it will actually upload to S3… guess I'll wait 24 hrs and see
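For anyone else trying this, a sketch of where that line sits, assuming the same values layout as the original post (placeholders are illustrative, not verified against the chart):

```yaml
config:
  storage_config:
    aws:
      # region/bucket form with no inline credentials, so the AWS SDK
      # falls back to its usual credential chain (env vars, instance
      # profile, or a web-identity service account)
      s3: s3://<region>/<bucket name>
    boltdb_shipper:
      shared_store: aws
```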


I was able to resolve this by adding

storage_config:
  aws:
    s3: s3://<access_key>:<secret_access_key>@<region>/<bucket>

to my config. Works like a charm now :)


Yeah, but hardcoding secrets in a ConfigMap is… kinda sub-par…
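If the chart lets you pass extra args and env vars to the Loki container, one way to keep the keys out of the ConfigMap is Loki's env-var expansion: start Loki with `-config.expand-env=true` and reference `${VAR}` in the config, sourcing the variables from a Kubernetes Secret. A sketch under that assumption (the variable names are whatever you wire up via `secretKeyRef`):

```yaml
storage_config:
  aws:
    bucketnames: <bucket name>
    region: <region name>
    # expanded at startup only when Loki runs with -config.expand-env=true;
    # populate these from a Kubernetes Secret via env / secretKeyRef
    access_key_id: ${AWS_ACCESS_KEY_ID}
    secret_access_key: ${AWS_SECRET_ACCESS_KEY}
```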

I'm using IRSA and the service account works on any other pod that uses it… so it's something specific to Loki.

Going to try a run-as-root securityContext (another suboptimal workaround)
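If IRSA already works for other pods with the same service account, the Loki config shouldn't need static keys at all: with no `access_key_id`/`secret_access_key` set, the AWS SDK inside Loki should fall back to the web-identity token the annotated service account provides. A sketch under that assumption:

```yaml
storage_config:
  aws:
    # no static credentials: let the SDK credential chain pick up
    # the IRSA web-identity token from the service account
    s3: s3://<region>/<bucket name>
  boltdb_shipper:
    shared_store: aws
```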

This topic was automatically closed after 365 days. New replies are no longer allowed.