Invalid batch size 1000, the next query will have 1000 overlapping entries

I used this command to export logs:

/export/loki-script/logcli --ca-cert /etc/nginx/conf/cert/ca.crt --cert /etc/nginx/conf/cert/loki.crt --key /etc/nginx/conf/cert/loki.key --timezone=UTC -q --addr=https://<LOKI_HOST>:3100 query '{app="APPNAME"}' --from="2025-03-21T16:00:00Z" --to="2025-03-22T16:00:00Z" --part-path-prefix=/export/loki-script/app/<APPNAME> --parallel-duration=10m --parallel-max-workers=20 --merge-parts -o raw --forward

but an error occurred:

2025/03/26 15:03:17 Invalid batch size 1000, the next query will have 1000 overlapping entries (there will always be 1 overlapping entry but Loki allows multiple entries to have the same timestamp, so when a batch ends in this scenario the next query will include all the overlapping entries again).  Please increase your batch size to at least 1001 to account for overlapping entryes
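If I read the message correctly, the "batch size" it complains about is logcli's own query batch (the --batch flag, which defaults to 1000) rather than --limit or the parallel flags, so presumably raising it above the number of entries that share a single timestamp would let the query make progress. A sketch of what I think that would look like, where 5000 is only a guess at the longest run of identical timestamps:

/export/loki-script/logcli --ca-cert /etc/nginx/conf/cert/ca.crt --cert /etc/nginx/conf/cert/loki.crt --key /etc/nginx/conf/cert/loki.key --timezone=UTC -q --addr=https://<LOKI_HOST>:3100 query '{app="APPNAME"}' --from="2025-03-21T16:00:00Z" --to="2025-03-22T16:00:00Z" --part-path-prefix=/export/loki-script/app/<APPNAME> --parallel-duration=10m --parallel-max-workers=20 --merge-parts -o raw --forward --batch=5000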

So far I have adjusted --parallel-duration=10m, --parallel-max-workers, and --limit=1000 up and down, but I get the same error either way.
Part of my loki.yml configuration is as follows:

limits_config:
  ingestion_rate_strategy: global
  ingestion_rate_mb: 1024
  ingestion_burst_size_mb: 1024
  max_label_name_length: 1024
  max_label_value_length: 2048
  max_label_names_per_series: 15
  reject_old_samples: false
  reject_old_samples_max_age: 1w
  creation_grace_period: 10m
  max_line_size: 10MB
  max_line_size_truncate: false
  increment_duplicate_timestamp: true
  discover_log_levels: true
  use_owned_stream_count: false
  max_streams_per_user: 100000
  max_global_streams_per_user: 20000
  unordered_writes: true
  per_stream_rate_limit: 1GB
  per_stream_rate_limit_burst: 1GB
  max_chunks_per_query: 2000000
  max_query_series: 5000
  max_query_lookback: 0s
  max_query_length: 100d
  max_query_range: 0s
  max_query_parallelism: 500
  tsdb_max_query_parallelism: 1280
  tsdb_max_bytes_per_shard: 600MB
  tsdb_sharding_strategy: power_of_two
  tsdb_precompute_chunks: false
  cardinality_limit: 100000
  max_streams_matchers_per_query: 1000
  max_concurrent_tail_requests: 200
  max_entries_limit_per_query: 0
  max_cache_freshness_per_query: 10m
  max_metadata_cache_freshness: 1d
  max_stats_cache_freshness: 10m
  max_queriers_per_tenant: 0
  max_query_capacity: 0
  query_ready_index_num_days: 0
  query_timeout: 1h
  split_queries_by_interval: 1d
  split_metadata_queries_by_interval: 10m
  split_recent_metadata_queries_by_interval: 1h
  recent_metadata_query_window: 0s
  split_instant_metric_queries_by_interval: 1h
  split_ingester_queries_by_interval: 0s
  min_sharding_lookback: 0s
  max_query_bytes_read: 0B
  max_querier_bytes_read: 150GB
  volume_enabled: true
  volume_max_series: 1000

Some of the log entries have duplicate timestamps.
How can I solve this problem?
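For reference, a rough way to check how many lines actually share one timestamp might be something like this (assuming the jsonl output exposes a timestamp field and that jq is available; the one-minute window is arbitrary, and --batch is raised as above so the check itself does not hit the same error):

/export/loki-script/logcli --ca-cert /etc/nginx/conf/cert/ca.crt --cert /etc/nginx/conf/cert/loki.crt --key /etc/nginx/conf/cert/loki.key --timezone=UTC -q --addr=https://<LOKI_HOST>:3100 query '{app="APPNAME"}' --from="2025-03-21T16:00:00Z" --to="2025-03-21T16:01:00Z" --limit=100000 --batch=5000 -o jsonl | jq -r .timestamp | sort | uniq -c | sort -rn | head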

Try making the parallel duration bigger, something like --parallel-duration=30m or even 1h.

Also, unless you are in a hurry, setting the max workers smaller might be a better idea, just so you don't overrun your Loki cluster.
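For example, something along these lines, where the 1h duration and 4 workers are only a guess at what your cluster will comfortably handle:

/export/loki-script/logcli --ca-cert /etc/nginx/conf/cert/ca.crt --cert /etc/nginx/conf/cert/loki.crt --key /etc/nginx/conf/cert/loki.key --timezone=UTC -q --addr=https://<LOKI_HOST>:3100 query '{app="APPNAME"}' --from="2025-03-21T16:00:00Z" --to="2025-03-22T16:00:00Z" --part-path-prefix=/export/loki-script/app/<APPNAME> --parallel-duration=1h --parallel-max-workers=4 --merge-parts -o raw --forward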

I have adjusted all three parameters (--parallel-duration=10m, --parallel-max-workers=20, --limit=1000), setting them both larger and smaller, but the result is the same error.