We are trying to migrate our log collections from one S3 bucket to another, and from monolithic to scalable mode, but we can't get it to work.
We tried to simply copy the files from one bucket to the other and point the Loki config at the new bucket, but Loki doesn't seem to find any files. Any ideas what we can do?
Please provide more information. Any errors in the logs? What does your config look like? Does the new Loki cluster write chunks to where you'd expect it to?
If you are already using S3, you should be able to just change from monolithic to SSD mode. Of course, you'd want to test in a dev environment first and make sure you have all the configuration ironed out.
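In simple scalable deployment (SSD) mode, the split into roles is done with the `-target` flag on the same binary and the same config file. A minimal sketch of what that might look like (config path is illustrative; each target would normally run as its own service or container):

```shell
# Simple scalable mode: separate write, read, and backend instances,
# all pointing at the same config and therefore the same S3 bucket.
loki -config.file=/etc/loki/loki.yaml -target=write    # distributor + ingester path
loki -config.file=/etc/loki/loki.yaml -target=read     # query-frontend + querier path
loki -config.file=/etc/loki/loki.yaml -target=backend  # compactor, ruler, index gateway, etc.
```

The read path is stateless, which is what makes it easy to scale out independently of the writers.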
Thanks for the fast response.
We synced the two buckets via aws s3 sync; it took a decent amount of time.
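For reference, the copy was along these lines (bucket names are placeholders):

```shell
# One-off copy of all objects (chunks + index) from the old bucket to the new one.
# sync is re-runnable: it only transfers objects that are missing or changed.
aws s3 sync s3://old-loki-bucket s3://new-loki-bucket
```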
We tried to edit the example config from the GitHub repo to fit our needs and to match the old config in the important parts, but there is still no data on the new side.
Old Config:
auth_enabled: false
frontend:
  max_outstanding_per_tenant: 4096
server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  http_server_read_timeout: 60s # allow longer time span queries
  http_server_write_timeout: 60s # allow longer time span queries
common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    s3:
      access_key_id: ****
      bucketnames: ****
      endpoint: ****
      insecure: false
      region: ****
      s3forcepathstyle: false
      secret_access_key: ****
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory
querier:
  engine:
    timeout: 600s
query_range:
  parallelise_shardable_queries: true
  results_cache:
    cache:
      memcached_client:
        consistent_hash: true
        addresses: "grafana-memcached:11211"
        max_idle_conns: 16
        timeout: 500ms
        update_interval: 1m
chunk_store_config:
  max_look_back_period: 672h
  chunk_cache_config:
    memcached:
      batch_size: 256
      parallelism: 10
    memcached_client:
      addresses: "grafana-memcached:11211"
query_scheduler:
  max_outstanding_requests_per_tenant: 8192
limits_config:
  query_timeout: 600s
  retention_period: 72h
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  max_cache_freshness_per_query: 10m
  split_queries_by_interval: 24h
  # for big logs tune
  per_stream_rate_limit: 4096M
  per_stream_rate_limit_burst: 8192M
  max_global_streams_per_user: 0
  cardinality_limit: 200000
  ingestion_burst_size_mb: 2000
  ingestion_rate_mb: 10000
  max_entries_limit_per_query: 1000000
  #reject_old_samples: true
  #reject_old_samples_max_age: 168h
  #max_query_series: 100000
  #max_query_parallelism: 2
  max_label_value_length: 20480
  max_label_name_length: 10240
  max_label_names_per_series: 300
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 336h # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3
  index_queries_cache_config:
    memcached:
      batch_size: 100
      parallelism: 100
    memcached_client:
      consistent_hash: true
      addresses: "grafana-memcached:11211"
  # TSDB shipper added 2024-06-19
  tsdb_shipper:
    active_index_directory: /loki/tsdb-index
    cache_location: /loki/tsdb-cache
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
    # TSDB added 2024-06-19
    - from: 2024-06-19
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h
table_manager:
  retention_deletes_enabled: true
  retention_period: 672h
ruler:
  alertmanager_url: http://localhost:9093
# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
analytics:
  reporting_enabled: false
We figured it out. The problem was the tenant: the new setup used a tenant called "docker" by default, while the copied logs belonged to a different tenant that the old Loki had used. We reconfigured it, so now we can use the old files. What I meant by "doesn't work" was that we couldn't see the copied logs in Grafana. Thank you very much for your time, and have a nice rest of the week. We will change the config as you mentioned, and I think the rest is resolved.
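For anyone hitting the same issue: Loki stores objects under a per-tenant prefix in the bucket (with auth_enabled: false the default tenant ID is "fake"), so a quick way to see which tenant the copied data actually belongs to is to list the top-level prefixes (bucket name is a placeholder):

```shell
# Each top-level "directory" in the bucket is a tenant ID (e.g. fake/, docker/)
# alongside any index_* prefixes. The querying tenant must match one of these.
aws s3 ls s3://new-loki-bucket/
```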