NoCredentialProviders in Kubernetes Helm using in-cluster MinIO S3

Hi,

I have Loki installed using Helm, and an S3 endpoint from a MinIO deployment in the same cluster.

My issue is that the compactor is not working and keeps throwing NoCredentialProviders errors.

I have tried both the “aws” and the “s3” storage config.
When using the “s3” config, Loki tries to access hosts like chunks.minio.xyz, which is not resolvable and does not exist.
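
For reference, the “s3” variant I tried looked roughly like this (a sketch from memory, using the chart's loki.storage.s3 keys as I understand them; the endpoint and keys are the same placeholders as in the full values file below). My understanding is that without s3ForcePathStyle the AWS SDK uses virtual-hosted-style addressing and prepends the bucket name to the endpoint host, which is where chunks.minio.xyz would come from:

loki:
  storage:
    type: s3
    bucketNames:
      chunks: chunks
      ruler: ruler
      admin: admin
    s3:
      endpoint: http://minio.default.svc.cluster.local:9000
      region: local
      accessKeyId: xyz
      secretAccessKey: xxyyzz
      insecure: true
      # Path-style requests keep the bucket in the URL path instead of the hostname
      s3ForcePathStyle: true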

And when using the “aws” config, I get the error below as soon as the compactor tries to access the object storage.

level=error ts=2024-07-10T02:59:53.000335659Z caller=log.go:216 msg="error running loki"
err="init compactor: failed to init delete store: failed to get s3 object: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors
error initialising module: compactor
github.com/grafana/dskit/modules.(*Manager).initModule
	/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:138
github.com/grafana/dskit/modules.(*Manager).InitModuleServices
	/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:108
github.com/grafana/loki/v3/pkg/loki.(*Loki).Run
	/src/loki/pkg/loki/loki.go:453
main.main
	/src/loki/cmd/loki/main.go:122
runtime.main
	/usr/local/go/src/runtime/proc.go:267
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1650"
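
As I understand it, NoCredentialProviders means the AWS SDK exhausted its entire credential chain (static keys from the config, the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, the shared credentials file, and IAM roles) without finding anything, so the keys from my values apparently never reach the compactor. One fallback would be to inject them as environment variables instead; a sketch, assuming the chart exposes a per-component extraEnv hook:

compactor:
  replicas: 1
  extraEnv:
    # Picked up by the AWS SDK's environment credential provider
    - name: AWS_ACCESS_KEY_ID
      value: xyz
    - name: AWS_SECRET_ACCESS_KEY
      value: xxyyzz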

Full yaml:

loki:
  auth_enabled: false
  storage:
    type: "s3"
    bucketNames:
      chunks: "chunks"
      ruler: "ruler"
      admin: "admin"
    aws: # Also tried s3 here 
      s3: s3://xyz:xxyyzz@minio.default.svc.cluster.local:9000
      endpoint: http://minio.default.svc.cluster.local:9000
      region: local
      accessKeyId: xyz
      secretAccessKey: xxyyzz
      signatureVersion: null
      insecure: true
      s3forcepathstyle: true
      bucketnames: chunks,admin,ruler
    tsdb_shipper:
      active_index_directory: /loki/index
      cache_location: /loki/index_cache
      cache_ttl: 24h
  server:
    grpc_server_max_recv_msg_size: 10485760000
    grpc_server_max_send_msg_size: 10485760000

  ingester_client:
    remote_timeout: 120s
  compactor:
    compaction_interval: 5m
    retention_enabled: true
    retention_delete_delay: 1m
    delete_request_store: s3
  schemaConfig:
    configs:
      - from: 2024-04-01
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h
  common:
    ring:
      kvstore:
        store: memberlist
  memberlist:
    dead_node_reclaim_time: 60s
    gossip_to_dead_nodes_time: 60s
    left_ingesters_timeout: 60s
    abort_if_cluster_join_fails: false
    advertise_addr: eth0
    bind_addr: eth0
    bind_port: 7946
    gossip_interval: 5s
    join_members:
      - loki-memberlist:7946
  ingester:
    chunk_encoding: snappy
  tracing:
    enabled: true
  querier:
    max_concurrent: 20
  query_scheduler:
    max_outstanding_requests_per_tenant: 16384
  limits_config:
    retention_period: 1h
    ingestion_rate_mb: 6144
    split_queries_by_interval: 30m
    max_query_series: 10000
    max_entries_limit_per_query: 10000
    max_query_parallelism: 1000
    max_concurrent_tail_requests: 1000
    volume_enabled: true
    max_streams_per_user: 10000
  frontend:
    max_outstanding_per_tenant: 6144
    compress_responses: true

chunksCache:
  allocatedMemory: 512
gateway:
  service:
    type: NodePort
    nodePort: 30007
  ingress:
    enabled: true
    hosts:
      - host: loki-gateway.default.svc.cluster.local
        paths:
          - path: /
            pathType: Prefix

deploymentMode: Distributed

backend:
  replicas: 0
  persistence:
    enabled: true
    storageClassName: nfs-client
    size: 2Gi
read:
  replicas: 0
write:
  replicas: 0
  persistence:
    enabled: true
    storageClassName: nfs-client
    size: 2Gi

# Chart-bundled MinIO stays disabled; storage comes from the external MinIO deployment
minio:
  enabled: false

ingester:
  replicas: 1
querier:
  replicas: 1
queryFrontend:
  replicas: 1
queryScheduler:
  replicas: 1
distributor:
  replicas: 1
compactor:
  replicas: 1
indexGateway:
  replicas: 1
bloomCompactor:
  replicas: 1
bloomGateway:
  replicas: 1

This has been fixed by using an external S3-compatible storage endpoint. The Helm config below should prove useful for anyone using an Oracle ZFSSA ZS9-2:

loki:
  storage_config:
    aws:
      s3: s3://AccessKey:SecretKey@storage.example.com/s3/v1/export/sharename/bucketname
      endpoint: https://storage.example.com/s3/v1/export/sharename
      region: null
      s3forcepathstyle: true
      insecure: true
      http_config:
        insecure_skip_verify: true
  storage:
    type: s3
    bucketNames:
      chunks: bucketname
    tsdb_shipper:
      active_index_directory: /loki/index
      cache_location: /loki/index_cache
    aws:
      s3: s3://AccessKey:SecretKey@storage.example.com/s3/v1/export/sharename/bucketname
      endpoint: https://storage.example.com/s3/v1/export/sharename
      region: null
      accessKeyId: AccessKey
      secretAccessKey: SecretKey
      signatureVersion: null
      s3forcepathstyle: true
      sse_encryption: false
      insecure: true
      bucketnames: chunks
      http_config:
        insecure_skip_verify: true
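
One caveat with the above: the access key and secret appear inline twice. Assuming the chart also exposes an extraEnvFrom hook per component, they could instead come from a Kubernetes Secret holding AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY entries, with the inline accessKeyId / secretAccessKey lines dropped so the SDK's environment provider supplies them. Roughly (the Secret name here is hypothetical):

compactor:
  extraEnvFrom:
    # Secret created separately, e.g. with kubectl create secret generic;
    # it must contain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY keys
    - secretRef:
        name: loki-s3-credentials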