Running multiple services

Hello everyone,

we’ve been testing out Loki with the idea of adopting it to help us with our logs. We’ve read the official documentation on the Grafana website and configured Promtail to send data to a server running Loki as a monolithic service together with Grafana.

The problem occurs when querying a large amount of logs (querying 10 GB of logs takes about 25 seconds), so we figured we need to make Loki work in parallel by running more queriers. Can we configure Loki to run on a single machine, in monolithic mode, but with multiple queriers to speed up the reading process?

Here is our Promtail configuration on the machine sending the data:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:

scrape_configs:
  - job_name: system
    pipeline_stages:
      - json:
          expressions:
            flow: flow
            time: time
      - labels:
          flow:
      - timestamp:
          format: RFC3339Nano
          source: time
    static_configs:
      - targets:
          - localhost
        labels:
          job: CompanyName
          host: ServerName
          __path__: /something/log/services/{service1,service2}.log
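For reference, the json and timestamp stages above expect each log line to be a JSON object containing flow and time fields, roughly like the following (the field names other than flow and time, and all values, are only illustrative):

  {"time": "2021-03-14T09:30:00.123456789Z", "flow": "inbound", "message": "user authenticated"}

The json stage extracts flow and time, the labels stage promotes flow to a label, and the timestamp stage replaces the entry's timestamp with the time value parsed as RFC3339Nano.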

This is the configuration on the machine running Loki (currently in monolithic mode):

auth_enabled: false

target: all

server:
  http_listen_port: 3100
  log_level: debug

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

query_range:
  align_queries_with_step: true
  max_retries: 5
  split_queries_by_interval: 10m
  cache_results: true
  results_cache:
    cache:
      enable_fifocache: true
      fifocache:
        max_size_bytes: 512MB
        validity: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /something/database/loki/boltdb-shipper-active
    cache_location: /something/database/loki/boltdb-shipper-cache
    cache_ttl: 24h  # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /something/database/loki/chunks

compactor:
  working_directory: /something/database/loki/boltdb-shipper-compactor
  shared_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: true
  retention_period: 72h

ruler:
  storage:
    type: local
    local:
      directory: /something/database/loki/rules
  rule_path: /something/database/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true

So, for testing purposes we tried executing the following query in Grafana Explore:
{filename="/something/log/services/authentication.log"} |= "2021-03-14"
and we do get a response, but only after 24-26 seconds. We had tried the same query before adding the split_queries_by_interval setting; we first set it to 24h, figuring it would spawn 4 query sub-processes (because the timeframe was 4 days long) and that the query would therefore be about 4 times faster. Well, I’m writing this because it was not :slight_smile:

Are we missing something? Is there a link to some other, more detailed documentation?

Thanks,
Nikola

How did you get on with this? How did you scale the querying?

Hi,
If I understand you correctly, what you are looking for is the query frontend. You can see it under the frontend_worker_config part of the configuration documentation here. The most important setting is parallelism, which is the number of cores you are giving each querier; you can play with it.
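As a rough, untested sketch (assuming a recent Loki 2.x single binary where the query frontend runs in the same process, and that the server still uses the default gRPC port 9095 since grpc_listen_port is not set; the parallelism and max_concurrent numbers are purely illustrative), the additions to the Loki config could look like:

frontend_worker:
  frontend_address: 127.0.0.1:9095   # default Loki gRPC port; adjust if grpc_listen_port is changed
  parallelism: 4                     # roughly the number of cores dedicated to query work

querier:
  max_concurrent: 8                  # how many sub-queries one querier runs at the same time

Once the frontend worker is connected, the sub-queries produced by split_queries_by_interval can actually run in parallel instead of sequentially, so the split interval starts to matter.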
