Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream

Hello. I am trying to use Vector, Loki, and Grafana for log monitoring. In my setup I have Docker applications deployed on different AWS instances and orchestrated by HashiCorp's Nomad.

Everything seems to be working: the logs are aggregated and I can view them. However, there is a delay before the logs arrive, and I am not sure that they are complete.

Looking into the Loki logs, I see many errors like this:

level=warn ts=2022-02-22T16:46:02.488829797Z caller=grpc_logging.go:38 method=/logproto.Pusher/Push duration=1.200396ms err="rpc error: code = Code(429) desc = entry with timestamp 2022-02-22 16:46:01.601284159 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream '{group=\"vector\", job=\"logging\", node=\"brain-worker-dev-2.blinchik.io\", task=\"vector\"}' totaling 1089281B, consider splitting a stream via additional labels or contact your Loki administrator to see if the limt can be increased' for stream: {group=\"vector\", job=\"logging\", node=\"brain-worker-dev-2.blinchik.io\", task=\"vector\"},\ntotal ignored: 1 out of 1" msg="gRPC\n"
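From the error text, this looks like the per-stream ingestion limit, which seems to be separate from the global ingestion_rate_mb (the 3MB/sec in the message appears to be the default). If I read the docs correctly, it could be raised in limits_config with something like the sketch below (assuming a Loki version that has these fields; I have not verified this yet):

limits_config:
  # hypothetical override of the per-stream limit from the error message;
  # the defaults are reportedly 3MB/sec with a 15MB burst
  per_stream_rate_limit: 10MB
  per_stream_rate_limit_burst: 20MB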

Here is my config:

auth_enabled: false
server:
  http_listen_port: {{ range service "loki" }}{{ .Port }}{{ end }}
  grpc_server_max_recv_msg_size: 9194304
  grpc_server_max_send_msg_size: 9194304
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  # Any chunk not receiving new logs in this time will be flushed
  chunk_idle_period: 1h
  # All chunks will be flushed when they hit this age, default is 1h
  max_chunk_age: 1h
  # Loki will attempt to build chunks up to 1.5MB, flushing if chunk_idle_period or max_chunk_age is reached first
  chunk_target_size: 1048576
  # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  chunk_retain_period: 30s
  max_transfer_retries: 0     # Chunk transfers disabled
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks
compactor:
  working_directory: /tmp/loki/boltdb-shipper-compactor
  shared_store: filesystem
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 1024
  ingestion_burst_size_mb: 1024
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
query_range:
  parallelise_shardable_queries: false
  split_queries_by_interval: 0 # 720h # 30d
query_scheduler:
  max_outstanding_requests_per_tenant: 2048
  grpc_client_config:
    # The maximum size in bytes the client can send.
    # CLI flag: -<prefix>.grpc-max-send-msg-size
    max_send_msg_size: 33554432 # 32 MiB, default = 16777216
    max_recv_msg_size: 33554432
frontend_worker:
  grpc_client_config:
    # The maximum size in bytes the client can send.
    # CLI flag: -<prefix>.grpc-max-send-msg-size
    max_send_msg_size: 33554432 # 32 MiB, default = 16777216
    max_recv_msg_size: 33554432
querier:
  max_concurrent: 60
frontend:
  # Maximum number of outstanding requests per tenant per frontend; requests
  # beyond this error with HTTP 429.
  # CLI flag: -querier.max-outstanding-requests-per-tenant
  max_outstanding_per_tenant: 2048 # default = 100
  # Compress HTTP responses.
  # CLI flag: -querier.compress-http-responses
  compress_responses: true # default = false
  # Log queries that are slower than the specified duration. Set to 0 to disable.
  # Set to < 0 to enable on all queries.
  # CLI flag: -frontend.log-queries-longer-than
  log_queries_longer_than: 20s
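
The error also suggests splitting a stream via additional labels. In Vector, my understanding is that this would mean adding more labels on the loki sink so that logs from different containers do not all end up in one stream, roughly like this (the sink name, inputs, endpoint, and the container_name field are only illustrative, not my actual Vector config):

sinks:
  loki_out:
    type: loki
    inputs: ["docker_logs"]
    endpoint: "http://loki.service.consul:3100"
    encoding:
      codec: json
    labels:
      job: logging
      # hypothetical extra label templated from the event to split the stream per container
      container: "{{ container_name }}"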

Any ideas on how I can solve this?
