Data Retention - Cortex

Hi,
We are currently sending all our metrics to Cortex, which is configured to store data in Cassandra / MinIO.

We came across the following scenario:

  • One of our customers is sending metrics via an exporter to Cortex. Today they stopped the exporter (for some reason), and we observed that we were unable to see the historical data previously sent to Cortex. After some time, when the exporter was back up and running, metrics were reaching the Cortex server again and we could also see the historical data from the beginning.

We are curious to know why the historical metrics were not visible while the exporter was not sending them (we had around 40 days of metrics), in spite of setting:

```
retention_period: 0s
```
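To check whether the data is still readable from the long-term store while the exporter is down, a range query against the Cortex HTTP API over the affected window should help. This is only a sketch: the tenant ID and metric selector are placeholders, and depending on the Cortex version the Prometheus API is served under the /prometheus or the legacy /api/prom prefix; with auth_enabled set, the X-Scope-OrgID header is required.

```
# Sketch: read the last 40 days back from Cortex over its HTTP API.
# "customer-1" and up{job="exporter"} are placeholders.
curl -G 'http://localhost:9009/prometheus/api/v1/query_range' \
  -H 'X-Scope-OrgID: customer-1' \
  --data-urlencode 'query=up{job="exporter"}' \
  --data-urlencode "start=$(date -d '-40 days' +%s)" \
  --data-urlencode "end=$(date +%s)" \
  --data-urlencode 'step=1h'
```

If this returns the old samples, the data is still in Cassandra and the problem is on the query path rather than retention.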

May I know if we are missing any settings? Below is our configuration of cortex.yml:


```
auth_enabled: True

target: all

server:
  grpc_server_max_concurrent_streams: 1000
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600
  http_listen_port: 9009

distributor:
  pool:
    health_check_ingesters: true
  shard_by_all_labels: true

ingester_client:
  grpc_client_config:
    max_recv_msg_size: 104857600
    max_send_msg_size: 104857600

ingester:
  lifecycler:
    final_sleep: 0s
    interface_names:
    - ens192
    join_after: 0s
    num_tokens: 512
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  max_transfer_retries: 0

storage:
  cassandra:
    addresses: XXXXXXXXXXXX
    auth: true
    connect_timeout: 5s
    consistency: LOCAL_ONE
    keyspace: cortex
    num_connections: 2
    password: XXXXXXX
    port: 9042
    query_concurrency: 10
    reconnect_interval: 1s
    timeout: 2s
    username: XXXXXXX
  engine: chunks

blocks_storage:
  backend: filesystem
  bucket_store:
    sync_dir: /data/cortex/tsdb-sync
  filesystem:
    dir: ./data/tsdb
  tsdb:
    dir: /data/cortex/tsdb

compactor:
  data_dir: /data/cortex/compactor
  sharding_ring:
    kvstore:
      store: inmemory

frontend_worker:
  match_max_concurrent: true

ruler:
  enable_api: true
  enable_sharding: false
  storage:
    local:
      directory: /etc/cortex/rules
    type: local

limits:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h

schema:
  configs:
  - chunks:
      period: 168h
      prefix: cortexdev_chunk_
    from: 2020-10-01
    index:
      period: 168h
      prefix: cortexdev_index_
    object_store: cassandra
    schema: v10
    store: cassandra

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

querier:
  active_query_tracker_dir: /data/cortex/querier
  query_ingesters_within: 12h

query_range:
  align_queries_with_step: true
  cache_results: true
  split_queries_by_interval: 24h

frontend:
  log_queries_longer_than: 10s
  max_outstanding_per_tenant: 1000
```
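For what it's worth, the per-period index and chunk tables that the table-manager creates can be listed directly from Cassandra. This is a sketch: the host and credentials are placeholders, and the keyspace name comes from the config above.

```
# Sketch: list the cortexdev_index_* / cortexdev_chunk_* tables
# created by the table-manager. Host and credentials are placeholders.
cqlsh -u XXXXXXX -p XXXXXXX -k cortex cassandra-host 9042 \
  -e 'DESCRIBE TABLES;'
```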


@suresh300567 you are editing the post while I'm reading it :slightly_smiling_face:
Please put your code inside code tags; it improves readability.

Back to the question though: the config seems to be OK. Could you please log in via SSH and post the output of the df -h command here? Thanks
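For example (the Cortex data path is taken from the config above; the Cassandra data directory is an assumption, adjust it to your layout):

```
# Free space on the volumes backing the Cortex and Cassandra data dirs.
# /data/cortex comes from the config; /var/lib/cassandra is an assumption.
df -h /data/cortex /var/lib/cassandra
```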

Hi @anon68762149, thanks for the response. Below is the output:

[screenshot of df -h output]


@suresh300567 it seems as if you have little more than 1% of space used on the volume where Cortex resides. It shouldn't be a problem then.

Yes, it's a new setup.
Is there any other parameter or config we should check?