Grafana Loki with Linode Object Storage (S3-compatible)

Hello, I’m deploying Grafana Loki and Promtail to a Kubernetes cluster.

The deployment works, but logs are not being sent to Linode Object Storage. What could be wrong? Here are my Helm values:

test_pod:
  enabled: true
  image: bats/bats:1.8.2
  pullPolicy: IfNotPresent

loki:
  image:
    tag: 2.9.3
  enabled: true
  isDefault: true
  url: http://{{(include "loki.serviceName" .)}}:{{ .Values.loki.service.port }}
  readinessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  datasource:
    jsonData: "{}"
    uid: ""

  auth_enabled: false

  server:
    http_listen_port: 3100

  ingester:
    lifecycler:
      address: 127.0.0.1
      ring:
        kvstore:
          store: inmemory
        replication_factor: 1
      final_sleep: 0s
    chunk_retain_period: 10s
    max_transfer_retries: 3
    chunk_block_size: 2048
    chunk_target_size: 2048
    chunk_idle_period: 1s
    max_chunk_age: 1m


  # -- Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas
  schema_config:
    configs:
      - from: 2020-07-01
        store: boltdb-shipper
        object_store: s3
        schema: v11
        index:
          prefix: index_
          period: 5m #24h
        
  # I had to add a compactor config as well, though it will become necessary anyway
  compactor:
    working_directory: /data/loki/boltdb-shipper-compactor
    shared_store: s3

  # -- Check https://grafana.com/docs/loki/latest/configuration/#storage_config for more info on how to configure storages
  storageConfig:
    boltdb_shipper:
      shared_store: s3
      active_index_directory: /data/loki/boltdb-shipper-active
      cache_location: /data/loki/boltdb-shipper-cache
      cache_ttl: 168h
    aws:
      s3: s3://xyzaaa-my-bucket-loki-staging
      bucketnames: xyzaaa-my-bucket-loki-staging
      access_key_id: <myaccess_key>
      secret_access_key: <myaccess_secret_key>
      # the region is always set to US, even if you chose a different region for the bucket
      region: US
      # use the actual region name in the endpoint below; for example, if you chose ap-south-1, the endpoint is ap-south-1.linodeobjects.com
      endpoint: us-mia-1.linodeobjects.com

  limits_config:
    ingestion_rate_mb: 16
    ingestion_burst_size_mb: 20
    enforce_metric_name: false
    reject_old_samples: false
    reject_old_samples_max_age: 504h  # set high for testing log ingestion from the client

  chunk_store_config:
    max_look_back_period: 0s

  table_manager:
    retention_deletes_enabled: false
    retention_period: 0s


promtail:
  image:
    tag: 2.9.3
  enabled: true
  config:
    logLevel: info
    serverPort: 3101
    clients:
      - url: http://{{ .Release.Name }}:3100/loki/api/v1/push
  dependencies:
    loki:
      enabled: true

Can anybody give me an idea of what might be wrong?

Try changing object_store: s3 to object_store: aws in your schema_config.
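
For reference, applied to the schema_config above that suggestion would look like this (a sketch only; note that boltdb-shipper also expects a 24h index period, so the 5m test value may need to go back to 24h):

  schema_config:
    configs:
      - from: 2020-07-01
        store: boltdb-shipper
        object_store: aws   # changed from s3
        schema: v11
        index:
          prefix: index_
          period: 24h       # boltdb-shipper expects 24h index periods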

Hi tonyswumac, thanks for your answer.

I made many adjustments and it is now working. I’ve put my ‘loki-custom-values.yaml’ below to help anybody who needs it.

chunksCache:
  resources:
    requests:
      memory: "128Mi"
    limits:
      memory: "512Mi"

loki:
  schemaConfig:
    configs:
      - from: "2024-04-01"
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h
  storage_config:
    tsdb_shipper:
      active_index_directory: /var/loki/index
      cache_location: /var/loki/index_cache
      cache_ttl: 24h  # can be increased for better performance over longer query periods; uses more disk space
    aws:
      bucketnames: mybucket-loki-staging
      s3forcepathstyle: false
  pattern_ingester:
    enabled: true
  limits_config:
    allow_structured_metadata: true
    volume_enabled: true
    retention_period: 168h # 7 days retention, default is 672h (28 days)
  storage:
    type: s3
    bucketNames:
      chunks: mybucket-loki-staging
    s3:
      # The s3 URL can be used to specify the endpoint, access key, secret key, and bucket name in one place.
      # This works well for S3-compatible storage, or if you are hosting Loki on-premises and want to use S3 as the storage backend.
      # Use either the s3 URL or the individual fields (endpoint, region, access key, secret key).
      s3: s3://<ACCESS_KEY>:<SECRET_ACCESS_KEY>@us-mia-1.linodeobjects.com/mybucket-loki-staging
  gateway:
    # -- Specifies whether the gateway should be enabled
    enabled: true
  auth_enabled: false

I’m using the following Helm Chart: grafana/loki --version 6.27.0
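
For anyone following along, a typical way to apply a values file like this is shown below (assuming a release named loki in a monitoring namespace; adjust the names to your setup):

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install loki grafana/loki --version 6.27.0 \
  --namespace monitoring --create-namespace \
  -f loki-custom-values.yaml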