Problems moving from Loki-Stack Chart to Loki chart 6.x

Hey all, we're trying to migrate from the loki-stack chart to the loki chart (monolithic / single binary mode) while keeping our existing log data. That data lives on an Azure storage PVC. Using a kustomization patch (roughly sketched below) I was able to force the StatefulSet to use an existingClaim and mount the existing volume; the loki chart doesn't currently support doing that itself. The volume mounts fine, but we cannot see any of the old log data. New data comes in fine from promtail. If we switch BACK to loki-stack, we can see the old data but not the new data?! This makes me think it's some kind of pathing issue. I do see that the loki-stack chart mounts storage at /data and the loki chart mounts at /var/loki, but I'm not sure how that might be messing things up. I would appreciate any suggestions or insight. I've included both config yaml files in case that helps…
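
For reference, here's roughly what that kustomize patch looks like on our side. This is only a sketch: the StatefulSet name, volume name, and PVC name are placeholders for our setup, and it assumes the rendered pod spec already declares a volumes list.

kustomization.yaml:

resources:
  - rendered-loki.yaml                  # hypothetical file holding the rendered chart manifests
patches:
  - target:
      kind: StatefulSet
      name: loki
    patch: |-
      - op: remove
        path: /spec/volumeClaimTemplates
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: storage
          persistentVolumeClaim:
            claimName: loki-azure-data  # hypothetical name of the existing Azure PVC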


loki-stack chart loki.yaml

auth_enabled: false
chunk_store_config:
  max_look_back_period: 0s
compactor:
  retention_enabled: true
  shared_store: filesystem
  working_directory: /data/loki/boltdb-shipper-compactor
frontend:
  max_outstanding_per_tenant: 4096
ingester:
  chunk_block_size: 262144
  chunk_idle_period: 3m
  chunk_retain_period: 1m
  lifecycler:
    ring:
      replication_factor: 1
  max_transfer_retries: 0
  wal:
    dir: /data/loki/wal
limits_config:
  enforce_metric_name: false
  max_entries_limit_per_query: 5000
  max_query_parallelism: 32
  max_query_series: 10000
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: 72h
  split_queries_by_interval: 15m
memberlist:
  join_members:
  - 'loki-stack-memberlist'
query_range:
  parallelise_shardable_queries: true
query_scheduler:
  max_outstanding_requests_per_tenant: 4096
schema_config:
  configs:
  - from: "2020-10-24"
    index:
      period: 24h
      prefix: index_
    object_store: filesystem
    schema: v11
    store: boltdb-shipper
  - from: "2024-07-25"
    index:
      period: 24h
      prefix: index_
    object_store: filesystem
    schema: v13
    store: tsdb
server:
  grpc_listen_port: 9095
  http_listen_port: 3100
storage_config:
  boltdb_shipper:
    active_index_directory: /data/loki/boltdb-shipper-active
    cache_location: /data/loki/boltdb-shipper-cache
    cache_ttl: 24h
    shared_store: filesystem
  filesystem:
    directory: /data/loki/chunks
  tsdb_shipper:
    active_index_directory: /data/tsdb-index
    cache_location: /data/tsdb-cache
table_manager:
  retention_deletes_enabled: true
  retention_period: 168h

loki chart config.yaml

auth_enabled: true
chunk_store_config:
  chunk_cache_config:
    background:
      writeback_buffer: 500000
      writeback_goroutines: 1
      writeback_size_limit: 10MB
    default_validity: 0s
    memcached:
      batch_size: 4
      parallelism: 5
    memcached_client:
      addresses: dnssrvnoa+_memcached-client._tcp.loki-chunks-cache.monitoring.svc
      consistent_hash: true
      max_idle_conns: 72
      timeout: 2000ms
common:
  compactor_address: 'http://loki:3100'
  path_prefix: /var/loki
  replication_factor: 1
  storage:
    filesystem:
      chunks_directory: /var/loki/chunks
      rules_directory: /var/loki/rules
compactor:
  retention_enabled: false
frontend:
  max_outstanding_per_tenant: 4096
  scheduler_address: ""
  tail_proxy_url: ""
frontend_worker:
  scheduler_address: ""
index_gateway:
  mode: simple
ingester:
  chunk_encoding: snappy
limits_config:
  allow_structured_metadata: false
  max_cache_freshness_per_query: 10m
  max_query_parallelism: 32
  max_query_series: 10000
  query_timeout: 300s
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: 72h
  split_queries_by_interval: 15m
  volume_enabled: true
memberlist:
  join_members:
  - loki-memberlist
pattern_ingester:
  enabled: false
querier:
  max_concurrent: 2
query_range:
  align_queries_with_step: true
  cache_results: true
  parallelise_shardable_queries: true
  results_cache:
    cache:
      background:
        writeback_buffer: 500000
        writeback_goroutines: 1
        writeback_size_limit: 500MB
      default_validity: 12h
      memcached_client:
        addresses: dnssrvnoa+_memcached-client._tcp.loki-results-cache.monitoring.svc
        consistent_hash: true
        timeout: 500ms
        update_interval: 1m
query_scheduler:
  max_outstanding_requests_per_tenant: 4096
ruler:
  storage:
    type: local
runtime_config:
  file: /etc/loki/runtime-config/runtime-config.yaml
schema_config:
  configs:
  - from: "2020-10-24"
    index:
      period: 24h
      prefix: index_
    object_store: filesystem
    schema: v11
    store: boltdb-shipper
  - from: "2024-07-25"
    index:
      period: 24h
      prefix: index_
    object_store: filesystem
    schema: v13
    store: tsdb
server:
  grpc_listen_port: 9095
  http_listen_port: 3100
  http_server_read_timeout: 600s
  http_server_write_timeout: 600s
storage_config:
  boltdb_shipper:
    index_gateway_client:
      server_address: ""
  hedging:
    at: 250ms
    max_per_second: 20
    up_to: 3
  tsdb_shipper:
    active_index_directory: /data/tsdb-index
    cache_location: /data/tsdb-cache
    index_gateway_client:
      server_address: ""
tracing:
  enabled: true

A couple of differences I see:

  1. Auth is disabled in one config but enabled in the other. With auth_enabled: true, the org ID supplied on writes determines where your logs end up.
  2. I don't see a storage config in one of your configs.

My recommendation is to attempt the migration with as little change to your configuration as possible. If your intent is to turn on auth, I'd recommend doing that after a successful migration, and you'll want a way to query logs from the default fake org ID as well (a sketch follows below).
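
For instance, once auth is on, anything that was written while auth_enabled was false lives under the default fake tenant, so whatever queries Loki has to send that org ID. A minimal Grafana datasource provisioning sketch (the name and URL are placeholders, not from your setup):

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    url: http://loki-gateway.monitoring.svc      # placeholder; point at your Loki endpoint
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      httpHeaderValue1: fake                     # default tenant ID used while auth was disabled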

Thanks Tony,
I fixed the post so both configs are formatted as code blocks.
I'm looking into the auth difference; it should be the same for both.

I have tried to keep the changes to a minimum, but it's a big leap from the outdated loki-stack chart to the newer loki charts.

The storage config is set in both configs. It was probably hard to find with my poor initial formatting.

New Loki config
storage_config:
  boltdb_shipper:
    index_gateway_client:
      server_address: ""
  hedging:
    at: 250ms
    max_per_second: 20
    up_to: 3
  tsdb_shipper:
    active_index_directory: /data/tsdb-index
    cache_location: /data/tsdb-cache
    index_gateway_client:
      server_address: ""
OLD Loki-Stack config
storage_config:
  boltdb_shipper:
    active_index_directory: /data/loki/boltdb-shipper-active
    cache_location: /data/loki/boltdb-shipper-cache
    cache_ttl: 24h
    shared_store: filesystem
  filesystem:
    directory: /data/loki/chunks
  tsdb_shipper:
    active_index_directory: /data/tsdb-index
    cache_location: /data/tsdb-cache

You are missing the filesystem configuration in the new storage_config, since that's what your index configuration is using as its object store.
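
In the new config that means adding something like this under storage_config. This is only a sketch, and it assumes you keep mounting the existing volume at /data the way the loki-stack chart did, so the directories line up with where the old chunks and index actually live:

storage_config:
  filesystem:
    directory: /data/loki/chunks                  # where the old chart wrote chunks
  boltdb_shipper:
    active_index_directory: /data/loki/boltdb-shipper-active
    cache_location: /data/loki/boltdb-shipper-cache
  tsdb_shipper:
    active_index_directory: /data/tsdb-index
    cache_location: /data/tsdb-cache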

Thanks again Tony,
Looking at the filesystem config took me down a path that made me realize some of my initial upgrade work wasn't done right. I had some really inconsistent pathing when I upgraded from boltdb to tsdb. Luckily, I'm using a test cluster for all this, so I can tear everything down and start over.
I'm going to fix the problems in the initial tsdb upgrade on the loki-stack chart and generate some new logs under boltdb, then switch to tsdb again and let that generate some logs.
THEN, I'll try to move to the new chart with the changes/fixes you suggested and pay closer attention to the path settings (something like the snippet below is what I'm aiming for).
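
For reference, the kind of path consistency I'm aiming for (just a sketch, assuming I keep everything under the /data/loki prefix the old chart used):

storage_config:
  tsdb_shipper:
    active_index_directory: /data/loki/tsdb-index    # was /data/tsdb-index, outside the /data/loki prefix
    cache_location: /data/loki/tsdb-cache            # was /data/tsdb-cache
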
Thanks again.