Hey team,
I am using Loki's singleBinary deployment mode, deployed via the Helm chart (chart version 6.21.0). I recently enabled external storage of type S3. Here is my sample storage configuration:
```yaml
storage:
  type: s3
  bucketNames:
    chunks: "chunks"
    ruler: "ruler"
    admin: "admin"
  s3:
    endpoint: <storage-endpoint.com>/<bucket-name>/
    region: auto
    secretAccessKey: <redacted>
    accessKeyId: <redacted>
    insecure: false
    s3ForcePathStyle: true
    signatureVersion: "v4"
```
So essentially, I am using an existing bucket and trying to create 3 buckets/folders inside it (I may be wrong in my understanding here). I am facing multiple issues:
a. I can see Loki is only creating 1 bucket/folder, named `chunks`, and nothing else.
b. While retention/deletion is working fine, I observed that older objects/folders with different names (I use this bucket as a common bucket for multiple things) are getting deleted.
I suspect the compactor/retention mechanism is deleting other objects in the same bucket that have nothing to do with Loki. Please confirm whether that's the case. I also cannot understand why there is only 1 bucket named "chunks"; I sense some kind of overwriting is happening.
I would recommend using a dedicated S3 bucket for Loki; don't share it with any other application. That said, the compactor does not remove anything that isn't Loki related even if you do share the bucket with other applications, but again, I'd recommend not doing that.
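To illustrate, here is a minimal sketch of the Helm `loki.storage` values with a dedicated bucket (the bucket name `loki-data` and endpoint `s3.example.com` are placeholders, not values from this thread; adjust for your provider):

```yaml
loki:
  storage:
    type: s3
    bucketNames:
      chunks: "loki-data"        # dedicated bucket, used only by Loki
    s3:
      endpoint: s3.example.com   # endpoint is the host only, no bucket path
      region: auto
      accessKeyId: <access-key>
      secretAccessKey: <secret-key>
      s3ForcePathStyle: true
      signatureVersion: "v4"
```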
Can you also share your complete configuration, please? I’d like to see how you configured your indexes.
Thanks @tonyswumac for responding. Here's the config:
```yaml
values:
  loki:
    auth_enabled: false
    commonConfig:
      replication_factor: 2
      path_prefix: /var/loki
    storage:
      type: s3
      bucketNames:
        chunks: "chunks"
        ruler: "ruler"
        admin: "admin"
      s3:
        endpoint: <storage-endpoint.com>/<bucket-name>/
        region: auto
        secretAccessKey: <redacted>
        accessKeyId: <redacted>
        insecure: false
        s3ForcePathStyle: true
        signatureVersion: "v4"
    schemaConfig:
      configs:
        - from: "2024-06-01"
          store: tsdb
          object_store: s3
          schema: v13
          index:
            prefix: index_
            period: "24h"
    compactor:
      retention_enabled: true
      delete_request_store: filesystem
  chunksCache:
    allocatedMemory: 1024
  test:
    enabled: false
  monitoring:
    dashboards:
      enabled: false
    rules:
      enabled: false
      alerting: false
    serviceMonitor:
      enabled: false
    selfMonitoring:
      enabled: false
      grafanaAgent:
        installOperator: false
  lokiCanary:
    enabled: false
  table_manager:
    retention_deletes_enabled: true
    retention_period: 720
  deploymentMode: SingleBinary
  singleBinary:
    replicas: 2
    persistence:
      size: 500Gi
```
First, I would double-check that your index files and chunks are actually being written to your destination S3 bucket.
Second, I am not sure this part of your configuration works:
```yaml
type: s3
bucketNames:
  chunks: "chunks"
  ruler: "ruler"
  admin: "admin"
s3:
  endpoint: <storage-endpoint.com>/<bucket-name>/
```
I am not sure if you are trying to substitute values into the bucket name or something of that nature, but I am fairly sure that doesn't work. I would recommend reading through the storage documentation here (Configuration | Grafana Loki documentation) and adjusting your configuration accordingly.
If you want to configure a bucket for the ruler, I would recommend configuring an S3 destination under `ruler` specifically, like so:
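For example (a sketch; `s3.example.com` and `loki-chunks` are placeholder names, not values from this thread), the endpoint should name only the storage host, and the actual bucket name belongs under `bucketNames`:

```yaml
storage:
  type: s3
  bucketNames:
    chunks: "loki-chunks"      # real bucket name goes here
  s3:
    endpoint: s3.example.com   # host only; do not append /<bucket-name>/
```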
```yaml
ruler:
  <...>
  storage:
    type: s3
    s3:
      bucketnames: {{ ruler_bucket_name }}
      region: {{ aws_region }}
      s3forcepathstyle: true
```
My intention with the below pasted code is to create separate buckets/folders for chunks/ruler/admin under a parent S3 endpoint. For example, assuming the S3 endpoint is 3434343.r2.cloudflarestorage.com/existing-s3-bucket/, the snippet should create existing-s3-bucket/chunks, existing-s3-bucket/ruler, and existing-s3-bucket/admin, with files inside these 3 subfolders.
```yaml
type: s3
bucketNames:
  chunks: "chunks"
  ruler: "ruler"
  admin: "admin"
s3:
  endpoint: <storage-endpoint.com>/<bucket-name>/
```
That's not going to work. Use one S3 bucket for chunks and another S3 bucket for the ruler. I am not sure what you intend to store in admin.
An S3 bucket itself is free; there should be no concern about creating more than one.
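Put together, here is a sketch at the Loki configuration level with one bucket per purpose (the bucket names `loki-chunks` and `loki-ruler` are hypothetical):

```yaml
common:
  storage:
    s3:
      bucketnames: loki-chunks    # bucket for chunks and index
      region: auto
      s3forcepathstyle: true
ruler:
  storage:
    type: s3
    s3:
      bucketnames: loki-ruler     # separate bucket for ruler state
      region: auto
      s3forcepathstyle: true
```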