Centralized Loki index/chunks storage

Hi, I'm new to Loki. We are trying to set up shared storage for multiple Loki/Grafana instances running on different servers.

Main Query:
I looked at boltdb-shipper, which acts as a shared store for chunks/indexes and can ship to a GCS bucket, and I was wondering:

Would it be possible to ship logs from different on-prem instances of Loki to the same GCS store, and then set up a Loki/Grafana instance on Google Cloud that consumes from this central store to give us a holistic view of all server logs?

Breakdown:
Let’s say we have a GCS bucket named ‘shared-bucket-xyz’.
For simplicity, I am using the term “logs” instead of chunks/indexes, since centralized logs are the ultimate goal.

  1. Can we push logs from a local setup to the remote shared-bucket-xyz? I don’t see any authentication mechanism in the provided configuration options.

  2. Can we push logs from two or more local servers to the same bucket shared-bucket-xyz?

  3. Will the cloud-deployed Loki/Grafana stack be able to consume logs from shared-bucket-xyz?

Configuration Sample provided by Grafana:

schema_config:
  configs:
    - from: 2018-04-15
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

storage_config:
  gcs:
    bucket_name: GCS_BUCKET_NAME

  boltdb_shipper:
    active_index_directory: /loki/index
    shared_store: gcs
    cache_location: /loki/boltdb-cache

I would greatly appreciate any feedback from the community. Thanks!

… we are trying to set up shared storage for multiple Loki/Grafana instances running on different servers.

The first question I would ask is: would it work for you to instead run a single Loki instance and use the agent (Promtail) to send logs from all your remote clusters to it? This is the recommended approach and how we run Loki ourselves.
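With that approach, each remote server runs only Promtail, pointed at the central Loki’s push endpoint. A minimal sketch of such a Promtail config — the hostname, port, and labels are illustrative assumptions, not values from this thread:

```yaml
# Promtail on each remote server, shipping logs to one central Loki.
# The URL and labels below are illustrative assumptions.
clients:
  - url: http://central-loki.example.com:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          host: server-01          # identifies the source server in queries
          __path__: /var/log/*.log # files Promtail should tail
```

Giving each site a distinct `host` (or similar) label lets you filter per-server or view everything together from the one central instance.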

Can we push logs from a local setup to the remote shared-bucket-xyz? I don’t see any authentication mechanism in the provided configuration options.

Yes. Authentication is handled by the GCS SDK we use, which typically picks up credentials from environment variables.
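Concretely, the GCS SDK follows Google’s standard Application Default Credentials lookup, so pointing it at a service-account key file before starting Loki is usually enough. A sketch — the key path is an illustrative assumption:

```shell
# Point the GCS SDK's Application Default Credentials at a
# service-account key file before starting Loki.
# The path below is an illustrative assumption.
export GOOGLE_APPLICATION_CREDENTIALS=/etc/loki/gcs-service-account.json
```

The service account needs read/write access to the bucket; no credentials appear in the Loki config itself.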

Can we push logs from two or more local servers to the same bucket shared-bucket-xyz?

Yes. However, it would be best to run only one table-manager and one compactor; running multiple shouldn’t hurt anything, but it isn’t optimal. Currently, this is hard to do with the single binary, which runs both the table-manager and the compactor by default. We have been talking about adding configs to better handle this, but the work has not been done yet.
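One way to approximate this today is to run those components as dedicated processes on exactly one node, using Loki’s `-target` flag, while the other sites run the rest of the stack. This is a sketch, not a verified deployment — target names can vary between Loki versions, so check the documentation for yours:

```
# On exactly ONE node, run the compactor and table-manager
# as dedicated targets (names may vary by Loki version):
loki -config.file=/etc/loki/config.yaml -target=compactor
loki -config.file=/etc/loki/config.yaml -target=table-manager
```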

Will the cloud-deployed Loki/Grafana stack be able to consume logs from shared-bucket-xyz?

Yes, but be aware there will be some delay in what’s visible in this remote instance, because Loki keeps log data in memory to build sufficiently large chunks before flushing them. The configuration of the Loki instances at the remote sites determines this delay; with the defaults, it could be as long as an hour before chunks are flushed and visible in the central location. It’s possible to change these timings, but it’s not recommended, as forcing smaller chunks can hurt query performance. This is why we generally recommend the first approach I suggested: centralize Loki and send logs to it from everywhere.
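The relevant knobs live in the `ingester` block of the remote sites’ Loki configs. A sketch of the settings involved — the values shown are illustrative assumptions, so verify the defaults for your Loki version before relying on them:

```yaml
# Ingester settings that control how long log data sits in memory
# before being flushed to the shared object store.
# Values are illustrative; check your Loki version's defaults.
ingester:
  chunk_idle_period: 30m      # flush a chunk after this long with no new data
  max_chunk_age: 1h           # flush a chunk once it reaches this age regardless
  chunk_target_size: 1572864  # ~1.5 MB target; smaller chunks flush sooner
                              # but tend to hurt query performance
```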


Can we use boltdb-shipper to store both indexes & chunks inside the bucket? We want to avoid the cost of running Bigtable to store indexes.


Yes, you can!
