Dates and chunks

  • What Grafana version and what operating system are you using?
    Rocky Linux 9.1
    Grafana 7.5.15
  • What are you trying to achieve?
    Background: A standalone Grafana server is connected (via mounted NFS storage) to a FreeBSD ZFS storage server.
    That FreeBSD server runs a syslog-ng service and saves logs from many switches and firewalls.

I wish to display all the logs with the correct date and time, and I would like the chunks to have retention.
I understand that there is an issue with my config, but I can't get it working.

  • How are you trying to achieve it?
    Promtail is reading the logs from the NFS storage and sending them to Loki.
    Loki is the main data source for Grafana.

This is working.
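For reference, the Promtail-to-Loki link lives in Promtail's clients block; the config pasted further down leaves clients: empty, so this is only a minimal sketch with an assumed local Loki address:

clients:
  # Assumption: Loki runs on the same host and listens on 3100 (as in the
  # Loki config below); the push path is Loki's standard ingest endpoint.
  - url: http://127.0.0.1:3100/loki/api/v1/push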

  • What happened?
    On the production Grafana server the chunks took up all the disk space (a retention issue?).
    All the dates are wrong; I guess something is wrong with the config and the default timestamp, the read time, is being used.

  • What did you expect to happen?
    Not that the chunks would eat all the disk space.
    Not that the dates would be wrong.

  • Can you copy/paste the configuration(s) that you are having problems with?


promtail

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:

scrape_configs:
  - job_name: LABEL_LOG
    static_configs:
      - targets:
          - localhost
        labels:
          job: LABEL
          __path__: PATHTOSTORAGE
    pipeline_stages:
      - timestamp:
          source: time
          format: RFC3339Nano

loki

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

limits_config:
  ingestion_rate_strategy: local # Default: global
  max_global_streams_per_user: 5000
  max_query_length: 0h # Default: 721h
  max_query_parallelism: 32 # Old Default: 14
  max_streams_per_user: 0 # Old Default: 10000

schema_config:
  configs:
    - from: 2023-01-20
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

  • Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were.
    No.
  • Did you follow any online instructions? If so, what is the URL?
  1. Look into Loki retention: Retention | Grafana Loki documentation (see the retention sketch at the end of this reply).

  2. For the most part you have to parse your log message at least just enough to get the timestamp out of it, then tell Promtail to use that as the timestamp. From your config you are telling Promtail to use the time label as the source for the timestamp, but I don't see that defined anywhere (see the sketch right below).
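A minimal sketch of what that could look like, assuming the log line starts with an RFC3339 timestamp (the regex and the field name time are assumptions that have to be adjusted to the actual syslog-ng output format):

pipeline_stages:
  # Pull the leading timestamp out of the line into a "time" field.
  # Assumption: the line starts with an RFC3339 timestamp; adjust the
  # expression (and the format below) to match the real log layout.
  - regex:
      expression: '^(?P<time>\S+)\s'
  # Use the extracted value as the entry timestamp instead of the read time.
  - timestamp:
      source: time
      format: RFC3339Nano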

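On the retention point (1 above): with boltdb-shipper, retention is normally handled by the compactor. A minimal sketch, assuming a Loki 2.x filesystem setup like the one pasted above (the working directory and the 30-day period are assumptions):

compactor:
  # Assumption: any writable scratch directory works here.
  working_directory: /tmp/loki/compactor
  shared_store: filesystem
  # Without this the compactor only compacts the index and never deletes chunks.
  retention_enabled: true

limits_config:
  # Assumption: keep 30 days of logs, then let the compactor delete old chunks.
  retention_period: 720h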