Hello there, and happy new year!
I discovered that every other day Loki data is not available; I can't say when it started. You can see this in the screenshot. This is not a Grafana bug, as I confirmed the same issue with the `logcli` command.
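For reference, the `logcli` check was just a plain range query over one of the missing windows, something along these lines (the address, selector and time range below are placeholders):

```sh
# placeholder address/selector/time window; the missing day simply returns no entries
logcli --addr=http://observability-loki.observability.svc.cluster.local:3100 \
  query '{namespace="observability"}' \
  --from="2026-01-01T00:00:00Z" --to="2026-01-01T23:59:59Z" \
  --limit=20
```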
The setup is running on Kubernetes and deployed using Helm. I have the exact same setup on another cluster and have no issue there. Here is the config.yaml:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: observability-loki
    meta.helm.sh/release-namespace: observability
  creationTimestamp: "2025-05-30T10:15:32Z"
  labels:
    app.kubernetes.io/instance: observability-loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: loki
    app.kubernetes.io/version: 3.6.3
    helm.sh/chart: loki-6.49.0
    helm.toolkit.fluxcd.io/name: loki
    helm.toolkit.fluxcd.io/namespace: observability
  name: loki
  namespace: observability
data:
  config.yaml: |
    auth_enabled: false
    bloom_build:
      builder:
        planner_address: ""
      enabled: false
    bloom_gateway:
      client:
        addresses: ""
      enabled: false
    common:
      compactor_grpc_address: 'observability-loki.observability.svc.cluster.local:9095'
      path_prefix: /var/loki
      storage:
        s3:
          access_key_id: xxxx
          bucketnames: loki-k8s
          endpoint: https://s3.reg.hosting.net/
          http_config:
            insecure_skip_verify: false
          insecure: false
          region: gra
          s3forcepathstyle: true
          secret_access_key: xxxxxxx
    compactor:
      compaction_interval: 4h
      delete_request_store: s3
      retention_delete_delay: 2h
      retention_enabled: true
    frontend:
      max_outstanding_per_tenant: 10000
      scheduler_address: ""
      tail_proxy_url: ""
    frontend_worker:
      scheduler_address: ""
    index_gateway:
      mode: simple
    limits_config:
      max_cache_freshness_per_query: 10m
      query_timeout: 300s
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      retention_period: 365d
      split_queries_by_interval: 24h
      volume_enabled: true
    memberlist:
      join_members:
      - observability-loki-memberlist.observability.svc.cluster.local
    pattern_ingester:
      enabled: false
    query_range:
      align_queries_with_step: true
    query_scheduler:
      max_outstanding_requests_per_tenant: 10000
    ruler:
      storage:
        s3:
          access_key_id: xxxxxxx
          bucketnames: loki-k8s
          endpoint: https://s3.reg.hosting.net/
          http_config:
            insecure_skip_verify: false
          insecure: false
          region: gra
          s3forcepathstyle: true
          secret_access_key: xxxxxxxxxx
        type: s3
      wal:
        dir: /var/loki/ruler-wal
    runtime_config:
      file: /etc/loki/runtime-config/runtime-config.yaml
    schema_config:
      configs:
      - from: "2022-01-11"
        index:
          period: 24h
          prefix: loki_index_
        object_store: s3
        schema: v12
        store: boltdb-shipper
      - from: "2024-10-25"
        index:
          period: 24h
          prefix: loki_index_
        object_store: s3
        schema: v13
        store: tsdb
    server:
      grpc_listen_port: 9095
      http_listen_port: 3100
      http_server_read_timeout: 600s
      http_server_write_timeout: 600s
    storage_config:
      bloom_shipper:
        working_directory: /var/loki/data/bloomshipper
      boltdb_shipper:
        index_gateway_client:
          server_address: ""
      hedging:
        at: 250ms
        max_per_second: 20
        up_to: 3
      tsdb_shipper:
        index_gateway_client:
          server_address: ""
      use_thanos_objstore: false
    tracing:
      enabled: false
```
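That is the ConfigMap as rendered by Helm; the config the pods actually loaded can be cross-checked against Loki's `/config` endpoint, roughly like this (the service name is an assumption based on the release name above):

```sh
# port-forward the Loki HTTP port and dump the effective runtime config
kubectl -n observability port-forward svc/observability-loki 3100:3100 &
curl -s http://localhost:3100/config | head -n 60
```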
The data is stored on S3-compatible storage. You can see that the last period has more data than usual, but it will disappear soon, as it does every day. The S3 storage looks fine; at least I see the same structure as on the working setup.
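The structure check was nothing more than listing both buckets side by side, along these lines (the second alias/bucket stands in for the working cluster and is a placeholder):

```sh
# compare the top-level layout (index and per-tenant chunk prefixes) of the two buckets
mc ls bucket_loki/loki-k8s-prod/
mc ls other_s3/loki-k8s-other/   # working cluster; alias and bucket name are placeholders
```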
How can I debug this issue? There are so many logs that I can't tell which ones might be relevant to the issue, and at the same time I can't get any logs for the period where the hole is…
If I filter out `level=info` entries (the filter command is sketched just after this excerpt), I get entries like these:
```
2026-01-02 10:49:52.235 error level=error ts=2026-01-02T09:49:52.194116437Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:52.180 error level=error ts=2026-01-02T09:49:52.103852212Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:52.180 error ts=2026-01-02T09:49:52.103822423Z caller=spanlogger.go:152 user=fake level=error msg="failed downloading chunks" err="failed to load chunk 'fake/388474d1f8d49819/19b7d5e365c:19b7dcc4df4:b7530eb1': failed to get s3 object: operation error S3: GetObject, https response error StatusCode: 0, RequestID: , HostID: , canceled, context canceled"
2026-01-02 10:49:52.180 error level=error ts=2026-01-02T09:49:52.103776274Z caller=parallel_chunk_fetch.go:74 msg="error fetching chunks" err="failed to load chunk 'fake/388474d1f8d49819/19b7d5e365c:19b7dcc4df4:b7530eb1': failed to get s3 object: operation error S3: GetObject, https response error StatusCode: 0, RequestID: , HostID: , canceled, context canceled"
2026-01-02 10:49:40.398 received a duplicate entry for ts 1767347379784683107
2026-01-02 10:49:40.280 error level=error ts=2026-01-02T09:49:40.27393468Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:40.280 error ts=2026-01-02T09:49:40.272538717Z caller=spanlogger.go:152 user=fake level=error msg="failed downloading chunks" err="context canceled"
2026-01-02 10:49:40.280 error level=error ts=2026-01-02T09:49:40.272455118Z caller=parallel_chunk_fetch.go:74 msg="error fetching chunks" err="context canceled"
2026-01-02 10:49:40.180 error ts=2026-01-02T09:49:40.157035328Z caller=spanlogger.go:152 user=fake level=error msg="failed downloading chunks" err="context canceled"
2026-01-02 10:49:40.180 error level=error ts=2026-01-02T09:49:40.156982489Z caller=parallel_chunk_fetch.go:74 msg="error fetching chunks" err="context canceled"
2026-01-02 10:49:40.017 received a duplicate entry for ts 1767347379410973117
2026-01-02 10:49:36.332 error level=error ts=2026-01-02T09:49:36.280439344Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:36.332 error ts=2026-01-02T09:49:36.280405645Z caller=spanlogger.go:152 user=fake level=error msg="failed downloading chunks" err="failed to load chunk 'fake/94c32ce365c88b7b/19b7d62ea87:19b7dd0d340:ce03313e': failed to get s3 object: operation error S3: GetObject, https response error StatusCode: 0, RequestID: , HostID: , canceled, context canceled"
2026-01-02 10:49:36.332 error level=error ts=2026-01-02T09:49:36.280376955Z caller=parallel_chunk_fetch.go:74 msg="error fetching chunks" err="failed to load chunk 'fake/94c32ce365c88b7b/19b7d62ea87:19b7dd0d340:ce03313e': failed to get s3 object: operation error S3: GetObject, https response error StatusCode: 0, RequestID: , HostID: , canceled, context canceled"
2026-01-02 10:49:27.032 error level=error ts=2026-01-02T09:49:26.933560987Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:27.032 error ts=2026-01-02T09:49:26.933523168Z caller=spanlogger.go:152 user=fake level=error msg="failed downloading chunks" err="failed to load chunk 'fake/6dbb2d6fade8d1ee/19b7d5d5321:19b7dcbb10a:6df80198': failed to get s3 object: operation error S3: GetObject, https response error StatusCode: 0, RequestID: , HostID: , canceled, context canceled"
2026-01-02 10:49:27.032 error level=error ts=2026-01-02T09:49:26.933476459Z caller=parallel_chunk_fetch.go:74 msg="error fetching chunks" err="failed to load chunk 'fake/6dbb2d6fade8d1ee/19b7d5d5321:19b7dcbb10a:6df80198': failed to get s3 object: operation error S3: GetObject, https response error StatusCode: 0, RequestID: , HostID: , canceled, context canceled"
2026-01-02 10:49:23.680 error level=error ts=2026-01-02T09:49:23.626296112Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:23.632 error level=error ts=2026-01-02T09:49:23.59358619Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:11.432 error level=error ts=2026-01-02T09:49:11.393932758Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:49:02.132 error level=error ts=2026-01-02T09:49:02.071983983Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:48:52.296 error level=error ts=2026-01-02T09:48:52.195231151Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:48:52.195 error level=error ts=2026-01-02T09:48:52.095284457Z caller=errors.go:26 org_id=fake message="closing iterator" error="context canceled"
2026-01-02 10:48:52.195 error ts=2026-01-02T09:48:52.095262027Z caller=spanlogger.go:152 user=fake level=error msg="failed downloading chunks" err="context canceled"
2026-01-02 10:48:52.195 error level=error ts=2026-01-02T09:48:52.095225838Z caller=parallel_chunk_fetch.go:74 msg="error fetching chunks" err="context canceled"
```
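The filtering itself is nothing fancy; it boils down to something like this on the pod logs (the workload name is a placeholder):

```sh
# drop info-level lines from the Loki pods and keep warnings/errors
kubectl logs -n observability deployment/observability-loki --since=1h --all-containers \
  | grep -v 'level=info'
```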
I checked one of the error log entries: it reports that chunk `fake/388474d1f8d49819/19b7d5e365c:19b7dcc4df4:b7530eb1` could not be fetched, but I can find it on the storage:
```
mc ls bucket_loki/loki-k8s-prod/fake/388474d1f8d49819/19b7d5e365c:19b7dcc4df4:b7530eb1
[2026-01-02 09:21:40 CET] 99KiB STANDARD 19b7d5e365c:19b7dcc4df4:b7530eb1
```
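The same object can also be stat'ed or copied out directly to confirm it is actually readable (same alias and bucket as above; the path is quoted because of the colons):

```sh
# stat/copy the exact chunk the error message complains about
mc stat 'bucket_loki/loki-k8s-prod/fake/388474d1f8d49819/19b7d5e365c:19b7dcc4df4:b7530eb1'
mc cp   'bucket_loki/loki-k8s-prod/fake/388474d1f8d49819/19b7d5e365c:19b7dcc4df4:b7530eb1' /tmp/chunk
```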
I don't really understand the write/read path for the data and how it is processed by the various components. Does someone have clues?
thanks