caller=server.go:239 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
level=error ts=2021-10-27T14:24:38.121116838Z caller=log.go:106 msg="error running loki" err="mkdir : no such file or directory\nerror
I ran into a number of these issues myself. There is so much variability in how this can be configured (flexibility at the cost of concise, clear documentation). Here is what worked for me, with the key items annotated (you may need to remove the comments). I can't claim this is THE precise way, only that it worked for me.
```yaml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 2

schema_config:
  configs:
    - from: 2020-07-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
        # period 24h is a key setting for AWS S3 use - see docs

# I was forced to add a compactor config as well - though this will become necessary anyway
compactor:
  working_directory: /tmp/loki/compactor
  shared_store: s3

storage_config:
  # boltdb:
  # several samples showed both boltdb and boltdb_shipper; since versions change so fast
  # it's hard to verify proper use, but the latest examples and docs use boltdb_shipper
  # and this worked (as of 2.3 for me)
  boltdb_shipper:
    shared_store: s3  # I have seen both 'aws' and 's3' used here and in other places
    active_index_directory: /tmp/loki/index
    cache_location: /tmp/loki/cache
    # cache_ttl: 168h
  aws:
    s3: s3://usxxxxx2/my-lokidata  # I used an IAM role - you can use creds instead, but the role is more secure since it is attached to this server and not usable from anywhere else
    s3forcepathstyle: true  # added from various examples - need to validate
    sse_encryption: true    # added from various examples - need to validate
    # bucketnames: my-lokidata  # not needed here if the bucket is in the s3 path above

limits_config:
  ingestion_rate_mb: 16
  ingestion_burst_size_mb: 20
  enforce_metric_name: false
  reject_old_samples: false
  reject_old_samples_max_age: 504h  # I have this set high for testing log ingestion from clients

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
```
That sounds like the local folders for that location are not owned by the user running Loki. I assume you are running it as a service (on Linux) - you must also give ownership of the /tmp/loki/ folder to the user assigned in the service. Here is a good link to follow - some sections are outdated, but it is one of the better resources for general config and process info on Grafana/Loki. Just make sure the service is not running as root. Follow that link for the Linux-side setup, but not for the Loki config file.
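As a sketch of what that looks like (the `loki` user name, binary path, and config path here are assumptions - adjust them to your install), a systemd unit that runs Loki as a dedicated non-root user might be:

```ini
# /etc/systemd/system/loki.service - sketch only, not a definitive unit
[Unit]
Description=Loki log aggregation
After=network.target

[Service]
# Run as a dedicated user, NOT root; this user must own /tmp/loki
User=loki
Group=loki
ExecStart=/usr/local/bin/loki -config.file=/etc/loki/loki-config.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After installing the unit, run `sudo chown -R loki:loki /tmp/loki` so the compactor, index, and cache directories from the config are writable by that user, then `sudo systemctl daemon-reload && sudo systemctl restart loki`.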
Yes, the documentation is not clear. It took a lot of trial and error to get the configuration below working with Linode's S3-compatible object storage:
```yaml
# -- Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas
schemaConfig:
  configs:
    - from: 2020-09-07
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: loki_index_
        period: 24h
# -- Check https://grafana.com/docs/loki/latest/configuration/#storage_config for more info on how to configure storages
storageConfig:
  boltdb_shipper:
    shared_store: s3
    active_index_directory: /var/loki/index
    cache_location: /var/loki/cache
    cache_ttl: 168h
  aws:
    s3: s3://
    bucketnames: <bucket-name>
    access_key_id: <access-key>
    secret_access_key: <secret-key>
    # region will always be US even if you have selected any other region
    region: US
    # use the actual region name below. For example, if you used ap-south-1, the endpoint will be ap-south-1.linodeobjects.com
    endpoint: <region-name>.linodeobjects.com
```