Freezes and spikes to 100% CPU usage

Roughly every 22 days, CPU and memory usage on the server climbs to 100% and the machine stops responding over the network. This lasts about 3 days, after which the load subsides and everything continues working as if nothing had happened. How do I fix this?
Loki is deployed on a Proxmox machine:
8 CPU cores, 22 GB RAM, 250 GB SSD
Over those 22 days, disk usage grows by only ~15 GB.

My configuration:

ingester:
  chunk_block_size: 262144
  chunk_idle_period: 5m
  chunk_target_size: 1572864
  chunk_retain_period: 336h  # 14 days; flushed chunks stay in ingester memory this long
  chunk_encoding: snappy
  max_chunk_age: 336h  # 14 days; maximum age before a chunk is flushed
  max_transfer_retries: 0
  lifecycler:                 
    join_after: 60s
    observe_period: 5s       
    address: MY_SERVER
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1  
    final_sleep: 0s

common:
  path_prefix: /data/loki
  storage:
    filesystem:
      chunks_directory: /data/loki/chunks
      rules_directory: /data/loki/rules

  replication_factor: 1
  ring:
    instance_addr: MY_SERVER
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:                          
        prefix: index_loki_blob
        period: 24h                 

storage_config:
  index_queries_cache_config:
    memcached:
      batch_size: 100
      parallelism: 100
    memcached_client:
      consistent_hash: true
      host: memcached_host
      service: memcached
  boltdb_shipper:
    active_index_directory: /data/loki/boltdb-shipper-active
    cache_location: /data/loki/boltdb-cache
    cache_ttl: 24h 
    shared_store: filesystem
  filesystem:
    directory: /data/loki/chunks

compactor:
  compaction_interval: 10m
  retention_enabled: true 
  retention_delete_delay: 336h  # 14 days
  retention_delete_worker_count: 150
  delete_request_cancel_period: 336h

limits_config:
  max_entries_limit_per_query: 1000
  max_streams_per_user: 100000
  max_chunks_per_query: 200000
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_strategy: local
  ingestion_rate_mb: 200
  ingestion_burst_size_mb: 1500
  retention_period: 336h  # 14 days
  max_query_lookback: 14d
  max_query_series: 100000
  per_stream_rate_limit: "512MB"
  per_stream_rate_limit_burst: "1024MB"
  max_global_streams_per_user: 524288000
  split_queries_by_interval: 30m

query_range:
  align_queries_with_step: true
  cache_results: true             
  max_retries: 5                   
  parallelise_shardable_queries: false
  results_cache:                
    cache:
      memcached_client:
        consistent_hash: true      
        host: memcached_host       
        max_idle_conns: 16        
        service: memcached        
        timeout: 500ms  
        update_interval: 5m 

chunk_store_config:
  max_look_back_period: 336h 

frontend:
  max_outstanding_per_tenant: 1024  
  compress_responses: true              

ingester_client:
  grpc_client_config:  
    max_send_msg_size: 9663676416  # 9 GiB

query_scheduler:
  max_outstanding_requests_per_tenant: 10000 
  grpc_client_config:
    max_send_msg_size: 9663676416  # 9 GiB

table_manager:
  retention_deletes_enabled: true          
  retention_period: 336h

Kind of hard to answer; there is not a lot of information to go on. Loki exposes various metrics, so I would recommend capturing them and looking specifically at the memory and heap metrics to see where the spike comes from.
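
For example, a minimal Prometheus scrape config (a sketch only; it assumes Prometheus is available and that Loki serves /metrics on its default HTTP port 3100):

scrape_configs:
  - job_name: loki
    static_configs:
      - targets: ['MY_SERVER:3100']  # Loki's built-in Prometheus metrics endpoint

Metrics worth watching include go_memstats_heap_inuse_bytes, process_resident_memory_bytes, and the loki_ingester_memory_chunks / loki_ingester_memory_streams gauges.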

Unrelated: you should disable the table manager if you are using the compactor, since retention should be handled by only one of them.
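
Concretely, that would mean removing the table_manager block and letting the compactor own retention (a sketch of the change, not a drop-in config):

compactor:
  compaction_interval: 10m
  retention_enabled: true  # the compactor now owns retention
  retention_delete_delay: 336h
  retention_delete_worker_count: 150
  delete_request_cancel_period: 336h

# table_manager:                   # remove this block entirely
#   retention_deletes_enabled: true
#   retention_period: 336h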

Could you tell me whether this value is set correctly: retention_delete_delay: 360h, given that my goal is to query logs over a 14-day period? If I instead set it to 2-3 days, as suggested in the documentation, will I still be able to build graphs and query logs over a 14-day range?
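
For what it's worth, my understanding of the semantics (worth verifying against the Loki docs for your version): how long logs stay queryable is governed by retention_period and max_query_lookback in limits_config, while retention_delete_delay only postpones the physical deletion of chunks the compactor has already marked for removal. Under that reading, a 14-day setup could look like this sketch:

limits_config:
  retention_period: 336h    # logs remain queryable for 14 days
  max_query_lookback: 336h  # allow queries to look back the full 14 days

compactor:
  retention_enabled: true
  retention_delete_delay: 2h  # delays physical deletion only; does not shorten the query window

so lowering retention_delete_delay to a couple of hours or days should not affect your ability to query 14 days of logs.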