Loki Canary can't tail new logs

I'm trying to set up Loki Canary, following the Loki Canary | Grafana Loki documentation, to monitor the performance of our loki-distributed stack, which uses Promtail agents.

Loki itself, Canary and Promtail are located in the same cluster.

No matter what I try, I can't get Canary to compare its own logs with those it receives back from Loki; it keeps failing with:

Connecting to loki at ws://<query-frontend>.<loki-distributed-namespace>:3100/loki/api/v1/tail?query=%7Bstream%3D%22stdout%22%2Cpod%3D%22loki-canary-jpcrq%22%7D, querying for label 'pod' with value 'loki-canary-jpcrq'
1684995787382100669 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
1684995788381340178 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
timeout tailing new logs (timeout period: 10.00s), will retry in 10 seconds: read tcp 10.42.41.143:41824->10.43.222.194:3100: i/o timeout
1684995797382363949 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
Querying loki for logs with query: http://<query-frontend>.<loki-distributed-namespace>:3100/loki/api/v1/query_range?start=1684982144381336202&end=1684982164381336202&query=%7Bstream%3D%22stdout%22%2Cpod%3D%22loki-canary-jpcrq%22%7D&limit=1000
websocket failed to receive entry 1684995677382905073 within 60.000000 seconds
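
To rule out the Canary binary itself, the same tail request can be reproduced with a small standalone client. The sketch below is just that, a sketch: it assumes the Python `websocket-client` package and keeps the same hostname placeholders and pod name as in the Canary output above.

```python
# Minimal standalone tail check, assuming the Python `websocket-client`
# package (pip install websocket-client). Hostname and pod name are the
# same placeholders/values as in the Canary output above.
import json
from urllib.parse import quote

import websocket  # provided by websocket-client

QUERY = '{stream="stdout",pod="loki-canary-jpcrq"}'
URL = (
    "ws://<query-frontend>.<loki-distributed-namespace>:3100"
    "/loki/api/v1/tail?query=" + quote(QUERY)
)

ws = websocket.create_connection(URL, timeout=15)
try:
    while True:
        # Each frame is a JSON document with a "streams" array of entries.
        frame = json.loads(ws.recv())
        for stream in frame.get("streams", []):
            for ts, line in stream.get("values", []):
                print(ts, line)
except websocket.WebSocketTimeoutException:
    print("no entries received before the timeout, same symptom as the Canary")
finally:
    ws.close()
```

Run against our frontend, this should show whether the websocket ever delivers a frame or times out exactly like the Canary does.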

Canary config:

extraArgs:
  - '-labelname=pod'
  - '-labelvalue=$(POD_NAME)'
extraEnv:
  - name: HOSTNAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

lokiAddress: '<query-frontend>.<loki-distributed-namespace>:3100'
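
As a sanity check that the label name/value pair configured above actually ends up queryable in Loki, the label values endpoint can be hit directly. A small sketch, assuming the Python `requests` package and the same hostname placeholder:

```python
# Check that the 'pod' label configured above is known to Loki and carries
# the Canary pod name. A sketch assuming the Python `requests` package;
# the hostname is the same placeholder used elsewhere in this post.
import requests

BASE = "http://<query-frontend>.<loki-distributed-namespace>:3100"

resp = requests.get(BASE + "/loki/api/v1/label/pod/values", timeout=30)
resp.raise_for_status()
values = resp.json().get("data", [])

print("pod values known to Loki:", len(values))
print("loki-canary-jpcrq present:", "loki-canary-jpcrq" in values)
```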

Logs from the query-frontend:

level=info component=frontend org_id=fake latency=fast query="{stream=\"stdout\",pod=\"loki-canary-jpcrq\"}" query_hash=1991069833 query_type=limited range_type=range length=20s start_delta=29.054708262s end_delta=9.054708419s step=1s duration=17.198964ms status=200 limit=1000 returned_lines=0 throughput=0B total_bytes=0B lines_per_second=0 total_lines=0 total_entries=0 store_chunks_download_time=0s queue_time=0s splits=0 shards=16 cache_chunk_req=0 cache_chunk_hit=0 cache_chunk_bytes_stored=0 cache_chunk_bytes_fetched=0 cache_chunk_download_time=0s cache_index_req=32 cache_index_hit=32 cache_index_download_time=65.625898ms cache_result_req=0 cache_result_hit=0 cache_result_download_time=0s
level=debug caller=logging.go:76 traceID=789b20b9476437a6 orgID=fake msg="GET /loki/api/v1/query_range?start=1684997447570202074&end=1684997467570202074&query=%7Bstream%3D%22stdout%22%2Cpod%3D%22loki-canary-jpcrq%22%7D&limit=1000 (200) 17.502083ms"
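
Since the query-frontend reports returned_lines=0 for the Canary's 20-second window, it seems worth re-issuing the same query_range request over both that narrow window and a much wider one. A rough sketch, again assuming the Python `requests` package and the placeholder hostname:

```python
# Re-run the query the Canary issues, once over a Canary-style 20 s window
# and once over the last hour, to see whether the entries simply are not
# queryable yet at the moment the Canary asks for them. Assumes the Python
# `requests` package; the hostname stays a placeholder.
import time

import requests

BASE = "http://<query-frontend>.<loki-distributed-namespace>:3100"
QUERY = '{stream="stdout",pod="loki-canary-jpcrq"}'


def count_lines(start_ns: int, end_ns: int) -> int:
    resp = requests.get(
        BASE + "/loki/api/v1/query_range",
        params={"query": QUERY, "start": start_ns, "end": end_ns, "limit": 1000},
        timeout=30,
    )
    resp.raise_for_status()
    streams = resp.json()["data"]["result"]
    return sum(len(s["values"]) for s in streams)


now_ns = time.time_ns()
print("last 20s:", count_lines(now_ns - 20 * 10**9, now_ns))
print("last 1h: ", count_lines(now_ns - 3600 * 10**9, now_ns))
```

If the wider window returns lines while the 20-second one stays empty, the entries do arrive but only become queryable later than the Canary expects.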

However, the logs with the pod label can easily be found in Grafana via the same frontend address.

Does anyone have any idea why Canary can’t find the logs via the frontend address?
