I am setting up Loki as a PoC on our Kubernetes cluster, but the documentation on how to use boltdb-shipper is not clear to me.
I want to use boltdb-shipper for the index and store it in an S3 bucket.
The configuration documentation page says:

# Which store to use for the index. Either aws, aws-dynamo, gcp, bigtable, bigtable-hashed,
# cassandra, or boltdb.
In examples on the internet I see that “store: boltdb-shipper” is used, which is not consistent with the docs.
I am using the config below, but my index files only reside locally in the ingester pod, in the configured index folder, and are not written to the S3 storage at the documented 15m interval.
schema_config:
  configs:
    - from: 2021-09-10
      store: boltdb
      object_store: aws
      schema: v11
      index:
        prefix: index_
        period: 24h
      chunks:
        prefix: chunks_
        period: 24h

storage_config:
  boltdb:
    # Location of BoltDB index files.
    # CLI flag: -boltdb.dir
    directory: /opt/loki/index

  boltdb_shipper:
    active_index_directory: /opt/loki/index
    cache_location: /opt/loki/index_cache
    shared_store: s3

  aws:
    # S3 or S3-compatible endpoint URL with escaped Key and Secret encoded.
    # If only region is specified as a host, the proper endpoint will be deduced.
    # Use inmemory:///<bucket-name> to use a mock in-memory implementation.
    # CLI flag: -s3.url
    s3: s3://eu-central-1/loki-bucket
I thought it was only logical to have both: the boltdb config lines for storing the index locally in the pod, and the boltdb-shipper config lines to let the Loki ingester know where to ship the index on S3.
But as I described earlier, the index does not appear in my S3 bucket.
The chunks are written to the S3 bucket nicely, in the “fake” folder, so I know for sure the ingester pod has the correct rights to write to the S3 bucket.
Should I use “boltdb-shipper” as the value for the key “store” and delete the boltdb config lines?
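If that is the case, I would guess the relevant sections should look something like the sketch below. This is untested and based only on the boltdb-shipper examples I found, so the exact keys may be off:

```yaml
schema_config:
  configs:
    - from: 2021-09-10
      store: boltdb-shipper   # instead of "boltdb"
      object_store: aws
      schema: v11
      index:
        prefix: index_
        period: 24h           # boltdb-shipper examples all use a 24h index period

storage_config:
  # no "boltdb:" block at all; boltdb_shipper manages the local index files itself
  boltdb_shipper:
    active_index_directory: /opt/loki/index
    cache_location: /opt/loki/index_cache
    shared_store: s3
  aws:
    s3: s3://eu-central-1/loki-bucket
```

My assumption is that boltdb-shipper itself writes the active index into active_index_directory and then ships it to the shared_store, so the separate boltdb block would be redundant. Can anyone confirm?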
Thanks for the help.