Entry too far behind

Hi

We are ingesting logs into Enterprise Logs (Loki) using the Alloy agent; both Alloy and Loki are deployed on a GKE cluster. We started observing "entry too far behind" errors in the Alloy logs as well as in the Loki write pods. I have updated my deployment config to set reject_old_samples: false, but I am still getting the same error.

I would appreciate it if anyone could help.

Below are my configuration details:

loki/production/helm/loki/values.yaml at main · grafana/loki · GitHub

adminApi:
  replicas: 2

backend:
  replicas: 3
  persistence:
    size: 100Gi
    storageClass: null

deploymentMode: SimpleScalable

serviceAccount:
  create: false
  annotations:
    iam.gke.io/gcp-service-account: "masked"

enterprise:
  cluster_name: loki-10
  enabled: true
  externalLicenseName: ge-logs-license
  useExternalLicense: true
  provisioner:
    enabled: false

enterpriseGateway:
  replicas: 3

nameOverride: loki-10

gateway:
  enabled: true
  service:
    annotations:
      cloud.google.com/neg: '{"ingress": true}'

loki:
  runtimeConfig:
    overrides:
      prj-fisv-p-apigee6efac44b:
        ingestion_rate_mb: 100
        ingestion_burst_size_mb: 150
  limits_config:
    reject_old_samples: false
    #reject_old_samples_max_age: 2w
    max_concurrent_tail_requests: 15
    max_global_streams_per_user: 40000
    max_queriers_per_tenant: 0
    max_query_parallelism: 64
    max_query_series: 5000
    min_sharding_lookback: 0s
    query_ready_index_num_days: 30
    split_queries_by_interval: 30m
    tsdb_max_query_parallelism: 1024
  storage:
    bucketNames:
      chunks: bkt-1
      ruler: bkt-1
      admin: bkt-1
    type: gcs
  structuredConfig:
    admin_client:
      storage:
        gcs:
          bucket_name: bkt-1
          service_account: |
            masked
        backend: gcs
    analytics:
      reporting_enabled: false
    query_scheduler:
      max_outstanding_requests_per_tenant: 32768
    querier:
      max_concurrent: 16
    ruler:
      storage:
        gcs:
          bucket_name: bkt-1
          chunk_buffer_size: 0
          enable_http2: true
          request_timeout: 0s
          service_account: |
            masked
        type: gcs
    schema_config:
      configs:
        - from: "2024-04-01"
          index:
            period: 24h
            prefix: index_
          object_store: gcs
          schema: v13
          store: tsdb
    server:
      log_level: info
      grpc_server_max_recv_msg_size: 16777216
      grpc_server_max_send_msg_size: 16777216
    storage_config:
      tsdb_shipper:
        active_index_directory: "/var/loki/tsdb-index"
        cache_location: "/var/loki/tsdb-cache"
      gcs:
        bucket_name: bkt-1
        service_account: |
          masked
        chunk_buffer_size: 0
        enable_http2: true
        request_timeout: 0s

lokiCanary:
  enabled: false

read:
  replicas: 6
  persistence:
    size: 100Gi
    storageClass: premium-rwo

write:
  replicas: 6
  persistence:
    size: 100Gi
    storageClass: premium-rwo

test:
  enabled: false

monitoring:
  serviceMonitor:
    enabled: true
    metricsInstance:
      enabled: false
    interval: 60s
    scrapeTimeout: 15s

I don't see anything obviously wrong. What does your Alloy configuration look like?

Can you also provide some sample logs? Preferably logs from two different Alloy agents, if possible, for comparison.
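
For reference, a minimal Alloy pipeline that ships Kubernetes pod logs to a Loki/GEL gateway usually looks roughly like the sketch below. This is only an illustrative example; the component labels, gateway URL, and tenant ID are placeholders, not values taken from your environment.

// Illustrative sketch only -- the URL and tenant_id are placeholders.
// Discover pods in the cluster.
discovery.kubernetes "pods" {
  role = "pod"
}

// Tail logs from the discovered pods and forward them to the writer.
loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.default.receiver]
}

// Push log entries to the Loki gateway for the given tenant.
loki.write "default" {
  endpoint {
    url       = "http://<gateway-host>/loki/api/v1/push"
    tenant_id = "<tenant>"
  }
}

The parts most relevant to "entry too far behind" are usually the write endpoint and any timestamp handling in the pipeline (for example, loki.process stages), so please include those in particular.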