Need help with Loki 3 and an external MinIO connection

Hello,

I am trying to deploy Loki 3 via the Helm chart and use an external MinIO instance running on a TrueNAS box for storage. I have the following configuration:

storage:
    # Loki requires a bucket for chunks and the ruler. GEL requires a third bucket for the admin API.
    # Please provide these values if you are using object storage.
    bucketNames:
      chunks: loki-chunks
      ruler: loki-ruler
      admin: loki-admin
    type: s3
    s3:
      s3: https://s3.zozoo.io:9000
      endpoint: https://s3.zozoo.io:9000
      region: null
      secretAccessKey: <some secret>
      accessKeyId: <some access key>
      signatureVersion: null
      s3ForcePathStyle: true
      insecure: false
      http_config: {}
      # -- Check https://grafana.com/docs/loki/latest/configure/#s3_storage_config for more info on how to provide a backoff_config
      backoff_config: {}

I have tested access to the service with the provided credentials and was able to read/write/list/delete objects in the buckets, but Loki does not write any chunks to MinIO.

Any suggestions on how to fix this?

  1. Can you share the rest of your configuration, please?
  2. If Loki is not writing chunks to S3, were you able to determine where it is writing to? If it is writing to a different location, then it’s a config issue; otherwise the logs should tell you why it’s not able to write to S3.

Which configuration are we talking about? The above was the storage config from the values.yaml; the rest is the default.

The full storage config looks like this:

storage:
    # Loki requires a bucket for chunks and the ruler. GEL requires a third bucket for the admin API.
    # Please provide these values if you are using object storage.
    bucketNames:
      chunks: loki-chunks
      ruler: loki-ruler
      admin: loki-admin
    type: s3
    s3:
      s3: https://s3.zozoo.io:9000
      endpoint: https://s3.zozoo.io:9000
      region: null
      secretAccessKey: <some secret>
      accessKeyId: <some access key>
      signatureVersion: null
      s3ForcePathStyle: true
      insecure: false
      http_config: {}
      # -- Check https://grafana.com/docs/loki/latest/configure/#s3_storage_config for more info on how to provide a backoff_config
      backoff_config: {}
    gcs:
      chunkBufferSize: 0
      requestTimeout: "0s"
      enableHttp2: true
    azure:
      accountName: null
      accountKey: null
      connectionString: null
      useManagedIdentity: false
      useFederatedToken: false
      userAssignedId: null
      requestTimeout: null
      endpointSuffix: null
    swift: 
      auth_version: null
      auth_url: null
      internal: null
      username: null
      user_domain_name: null
      user_domain_id: null 
      user_id: null
      password: null
      domain_id: null
      domain_name: null
      project_id: null
      project_name: null
      project_domain_id: null
      project_domain_name: null
      region_name: null
      container_name: null
      max_retries: null
      connect_timeout: null
      request_timeout: null
    filesystem:
      chunks_directory: /var/loki/chunks
      rules_directory: /var/loki/rules
      admin_api_directory: /var/loki/admin

This is the config.yaml that Loki ends up with:

auth_enabled: false
chunk_store_config:
  chunk_cache_config:
    background:
      writeback_buffer: 500000
      writeback_goroutines: 1
      writeback_size_limit: 500MB
    default_validity: 0s
    memcached:
      batch_size: 4
      parallelism: 5
    memcached_client:
      addresses: dnssrvnoa+_memcached-client._tcp.logger-loki-chunks-cache.monitoring.svc
      consistent_hash: true
      max_idle_conns: 72
      timeout: 2000ms
common:
  compactor_address: 'http://loki-backend:3100'
  path_prefix: /var/loki
  replication_factor: 3
  storage:
    s3:
      access_key_id: <access keys>
      bucketnames: loki-chunks
      endpoint: https://s3.zozoo.io:9000
      insecure: false
      s3: https://s3.zozoo.io:9000
      s3forcepathstyle: true
      secret_access_key: <some secret key>
frontend:
  scheduler_address: ""
  tail_proxy_url: http://logger-loki-querier.monitoring.svc.cluster.local:3100
frontend_worker:
  scheduler_address: ""
index_gateway:
  mode: simple
limits_config:
  max_cache_freshness_per_query: 10m
  query_timeout: 300s
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  split_queries_by_interval: 15m
  volume_enabled: true
memberlist:
  join_members:
  - loki-memberlist
pattern_ingester:
  enabled: false
query_range:
  align_queries_with_step: true
  cache_results: true
  results_cache:
    cache:
      background:
        writeback_buffer: 500000
        writeback_goroutines: 1
        writeback_size_limit: 500MB
      default_validity: 12h
      memcached_client:
        addresses: dnssrvnoa+_memcached-client._tcp.logger-loki-results-cache.monitoring.svc
        consistent_hash: true
        timeout: 500ms
        update_interval: 1m
ruler:
  storage:
    s3:
      access_key_id: <access keys>
      bucketnames: loki-ruler
      endpoint: https://s3.zozoo.io:9000
      insecure: false
      s3: https://s3.zozoo.io:9000
      s3forcepathstyle: true
      secret_access_key: <some secret key>
    type: s3
runtime_config:
  file: /etc/loki/runtime-config/runtime-config.yaml
schema_config:
  configs:
  - from: "2024-04-01"
    index:
      period: 24h
      prefix: index_
    object_store: 'filesystem'
    schema: v13
    store: tsdb
server:
  grpc_listen_port: 9095
  http_listen_port: 3100
  http_server_read_timeout: 600s
  http_server_write_timeout: 600s
storage_config:
  boltdb_shipper:
    index_gateway_client:
      server_address: dns+loki-backend-headless.monitoring.svc.cluster.local:9095
  hedging:
    at: 250ms
    max_per_second: 20
    up_to: 3
  tsdb_shipper:
    index_gateway_client:
      server_address: dns+loki-backend-headless.monitoring.svc.cluster.local:9095
tracing:
  enabled: false

According to your schema_config, you are still writing chunks to the filesystem: object_store is set to 'filesystem' instead of s3.

Isn’t that how it should be? Written to the filesystem and then synced to the S3 bucket every minute or so?

Never mind, I have changed the testSchemaConfig to use s3 and it is now uploading the data to the S3 bucket. Thank you for pointing out the wrong config.
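For anyone hitting the same issue: in the rendered config this fix corresponds to switching object_store in schema_config from filesystem to s3. A sketch based on the config posted above (dates, prefixes, and schema version kept as in the original):

```yaml
schema_config:
  configs:
  - from: "2024-04-01"
    index:
      period: 24h
      prefix: index_
    # was 'filesystem' -- with s3, chunks are written to the configured MinIO bucket
    object_store: s3
    schema: v13
    store: tsdb
```

With object_store set to filesystem, Loki keeps chunks on the local volume under path_prefix and never uploads them, regardless of what common.storage.s3 points at; the shipper only syncs the index, not the chunks.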