I want to store my Loki logs in AWS S3 instead of local storage.

auth_enabled: false
server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: loki-ingester
    ring:
      kvstore:
        store: inmemory

schema_config:
  configs:
    - from: 2020-01-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  aws:
    ddb:
      url: "YOUR_DYNAMODB_URL"
      region: "ap-southeast-1"
      access_key_id: "AKIAXXXXXXXXXXXXXXXX"
      secret_access_key: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

  s3:
    bucket_name: "sl-dev-s3/loki-logs"
    endpoint: "s3-ap-southeast-1.amazonaws.com"
    region: "ap-southeast-1"
    access_key_id: "AXXXXXXXXXXXXXXXXXXX"
    secret_access_key: "OXXXXXXXXXXXXXXXXXXXX"

After using this Loki configuration, I'm getting the error below:

failed parsing config: /etc/loki/loki-local-config.yaml: yaml: unmarshal errors:
line 24: field ddb not found in type aws.StorageConfig
line 30: field s3 not found in type storage.Config.

I just want to store all my logs in S3, not on the VM's local storage. Please let me know what the error in this Loki configuration is. I also don't know whether the DynamoDB config section is needed or not.

You might want to remove the access key from your post and rotate your key right away.

Also, you don’t need the aws.ddb configuration if you aren’t using DynamoDB.

For an S3 bucket, the configuration should look like:

storage_config:
  aws:
    s3: ...
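
Filled in, that might look like the sketch below. The bucket name, region, and credential values here are placeholders to adapt; the field names match Loki's aws storage_config block:

storage_config:
  aws:
    s3: https://s3.ap-southeast-1.amazonaws.com
    bucketnames: your-loki-bucket
    region: ap-southeast-1
    access_key_id: YOUR_ACCESS_KEY
    secret_access_key: YOUR_SECRET_KEY

Prefer an IAM role over static keys where possible, in which case the access_key_id and secret_access_key lines can be dropped.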

Or you can use the common configuration block instead:

common:
  storage:
    s3:
      bucketnames: ...
      region: ...
      endpoint: ...
      sse_encryption: true
      s3forcepathstyle: true

Thanks for your reply. That was a demo AWS access key. I have created a new configuration file for Loki. Can you please check it out?
If there is any mistake, please help me out.

auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /tmp/loki
  storage:
    s3:
      s3: https://s3.ap-southeast-1.amazonaws.com
      bucketnames: ....
      region: ...
      access_key_id: ...
      secret_access_key: .....

  replication_factor: 1
  ring:
    kvstore:
      store: memberlist

schema_config:
  configs:
    - from: "2023-12-31"
      store: tsdb
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h

This configuration is working on my side, but if I try to view any logs older than 24 hours, Grafana shows an error:

too many outstanding requests

Can you please give me some idea of why this error is showing and how I can resolve it?

I’d advise looking over the limits_config section of the Loki configuration and adjusting accordingly.

Here are some we use ourselves, but there are others. I’d also recommend tweaking slowly, perhaps one setting at a time:

limits_config:
  cardinality_limit: 500000
  ingestion_burst_size_mb: 200
  ingestion_rate_mb: 100
  ingestion_rate_strategy: local
  max_concurrent_tail_requests: 100
  max_entries_limit_per_query: 1000000
  max_global_streams_per_user: 1000000
  max_label_name_length: 1024
  max_label_names_per_series: 50
  max_label_value_length: 4096
  max_query_parallelism: 64
  max_query_series: 250000
  per_stream_rate_limit: 100M
  per_stream_rate_limit_burst: 200M
  query_timeout: 10m
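
For the "too many outstanding requests" error specifically, the query scheduler's per-tenant queue size is also worth checking; it is a separate knob from limits_config. A sketch, assuming a Loki version that supports the query_scheduler block (the value is an example to tune, not a recommendation):

query_scheduler:
  max_outstanding_requests_per_tenant: 2048

Queries that split into many sub-queries (e.g. over a long time range) can fill a small queue quickly, which produces exactly that error.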