Hi,
I have tried everything I could on the Loki side and can confirm that when I use the API, I can return data that is over a year old.
However, when I use the time picker in Grafana and select “Last 90 days”, I get the following error: “the query time range exceeds the limit (query length: 2160h0m0s, limit: 30d1h)”.
Example of direct access via the API:
From 2024-01-02 00:00:00 UTC → 1704153600000000000
To 2025-01-01 00:00:00 UTC → 1735689600000000000
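For reference, these nanosecond timestamps can be derived with GNU date (a minimal sketch; Unix seconds multiplied by 10^9):

```shell
# Convert UTC dates to the nanosecond epoch values Loki expects.
# 2024-01-02 is the start actually sent in the curl request below.
start_ns=$(( $(date -u -d '2024-01-02 00:00:00' +%s) * 1000000000 ))
end_ns=$(( $(date -u -d '2025-01-01 00:00:00' +%s) * 1000000000 ))
echo "start=$start_ns end=$end_ns"
# → start=1704153600000000000 end=1735689600000000000
```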
# curl -G \
--data-urlencode 'query={job=~".+"}' \
--data-urlencode 'limit=100' \
--data-urlencode 'start=1704153600000000000' \
--data-urlencode 'end=1735689600000000000' \
'http://localhost:3100/loki/api/v1/query_range'
I get the following result (truncated for privacy):
{"status":"success","data":{"resultType":"streams","result":[{"stream":{"client":"client1","detected_level":"info","filename":"/data/ingester/client/server1/web.log","job":"client-logs","...
In the logs I can clearly see the request:
Jul 22 10:54:47 Europa loki[14327]: level=info ts=2025-07-22T10:54:47.059538937Z caller=roundtrip.go:379 org_id=fake traceID=075c8e35899931fd msg="executing query" type=range query="{job=~\".+\"}" start=2024-01-02T00:00:00Z end=2025-01-01T00:00:00Z start_delta=13618h54m47.059534186s end_delta=4858h54m47.059534371s length=8760h0m0s step=126144000 query_hash=453119268
Jul 22 10:54:47 Europa loki[14327]: level=debug ts=2025-07-22T10:54:47.061741905Z caller=shard_resolver.go:123 org_id=fake traceID=075c8e35899931fd bytes=393kB chunks=18 streams=7 entries=1792 msg="queried index" type=single matchers="{job=~\".+\"}" duration=99.08µs from=2024-12-31T22:59:30Z through=2025-01-01T00:00:00Z length=1h0m30s
However, when I use Grafana, it looks like the request does not even reach Loki. If the error above were coming from Loki, we should see something in the debug logs, right?
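For what it's worth, one way to check which side enforces the limit could be to replay the same query through Grafana's data source proxy, so it follows the same path as the dashboard (the uid and token below are placeholders, not values from my setup):

```shell
# Replay the query via Grafana's data source proxy API.
# <LOKI_UID> is the Loki data source uid, <TOKEN> a service-account token.
curl -G \
  -H 'Authorization: Bearer <TOKEN>' \
  --data-urlencode 'query={job=~".+"}' \
  --data-urlencode 'limit=100' \
  --data-urlencode 'start=1704153600000000000' \
  --data-urlencode 'end=1735689600000000000' \
  'http://localhost:3000/api/datasources/proxy/uid/<LOKI_UID>/loki/api/v1/query_range'
```

If this returns the same “exceeds the limit” error, the limit is applied somewhere on the path Grafana uses, not in the Grafana UI itself.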
Here is my Loki config:
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: debug
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  replication_factor: 1
  ring:
    kvstore:
      store: memberlist

memberlist:
  join_members: []

table_manager:
  retention_deletes_enabled: true
  retention_period: 8760h

storage_config:
  azure:
    account_key: ==
    account_name: loki
    container_name: chunks
    use_managed_identity: false
    request_timeout: 0

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

limits_config:
  metric_aggregation_enabled: true
  enable_multi_variant_queries: true
  max_query_length: 0h
  max_query_lookback: 87600h
  volume_enabled: true

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: azure
      schema: v13
      index:
        prefix: index_
        period: 24h

ingester:
  max_chunk_age: 4320h

pattern_ingester:
  enabled: true
  metric_aggregation:
    loki_address: localhost:3100

ruler:
  alertmanager_url: http://localhost:9093

frontend:
  encoding: protobuf

compactor:
  compaction_interval: 2h # or 1h
  working_directory: /data/loki/retention
  retention_enabled: false
  delete_request_store: "azure"
  retention_delete_delay: 5m
  retention_delete_worker_count: 150
  delete_request_cancel_period: 5m
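To double-check that these limits are actually the ones in effect, Loki's /config endpoint dumps the running process's effective configuration; something like this could confirm what it applies:

```shell
# Ask the running Loki for its effective configuration and pull out
# the query-length limits (address matches the config above).
curl -s http://localhost:3100/config | grep -E 'max_query_length|max_query_lookback'
```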
I believe that Loki works as expected, but please let me know if you see something missing or wrong in the config.
Also, is there something specific to set in Grafana to allow queries over more than 30 days?
Versions:
# /usr/share/grafana/bin/grafana -version
grafana version 12.0.2+security-01
# /usr/bin/loki --version
loki, version 3.5.2 (branch: release-3.5.x, revision: 257d2f62)
build user: root@67bdab5ad0d6
build date: 2025-07-10T19:26:46Z
go version: go1.24.1
platform: linux/amd64
tags: netgo
I hope someone can help.
Thanks a lot in advance.