schema_config for S3-compatible object storage?

With S3-compatible object storage and without DynamoDB, there is no schema example in the official docs. I tried this and I still get an error:

failed parsing config: /etc/loki/config/config.yaml: yaml: unmarshal errors:
line 42: field storage_config not found in type chunk.PeriodConfig

schema_config:
     configs:
        - from: 2020-05-15
          store: boltdb
          object_store: filesystem
          schema: v11
          index:
            prefix: index_
            period: 168h
          storage_config:

    storage_config:
     boltdb_shipper:
      active_index_directory: /loki/index
      cache_location: /loki/index_cache
      shared_store: s3

     aws:
       s3: s3://access_key:secret_key@s3.private.eu-de.cloud-object-storage.appdomain.cloud/loki-s3
       s3forcepathstyle: true
    chunk_store_config:
      max_look_back_period: 0s
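
(For reference: the unmarshal error points at the line in config.yaml where storage_config sits indented under a schema_config entry, so Loki tries to parse it as a field of chunk.PeriodConfig. Whatever values end up in the fields, storage_config has to be a top-level key - roughly, with the same placeholder paths and bucket as above:)

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    shared_store: s3
  aws:
    s3: s3://access_key:secret_key@s3.private.eu-de.cloud-object-storage.appdomain.cloud/loki-s3
    s3forcepathstyle: true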

Then I tried this:

schema_config:
      configs:
        - from: 2020-05-15
          store: boltdb
          object_store: s3
          schema: v11
          index:
            prefix: index_
            period: 168h

    storage_config:
     boltdb_shipper:
      shared_store: aws

     aws:
       s3: s3://access_key:secret_key@s3.private.eu-de.cloud-object-storage.appdomain.cloud/loki-s3
       s3forcepathstyle: true
      
    chunk_store_config:
      max_look_back_period: 0s

caller=server.go:239 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
level=error ts=2021-10-27T14:24:38.121116838Z caller=log.go:106 msg="error running loki" err="mkdir : no such file or directory\nerror
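
(The empty path in that mkdir error suggests the shipper has nowhere to write: with boltdb_shipper, active_index_directory and cache_location also need to be set - a minimal sketch, paths are placeholders:)

storage_config:
  boltdb_shipper:
    shared_store: s3
    active_index_directory: /loki/index
    cache_location: /loki/index_cache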

I ran into a number of these issues myself - there seems to be so much variability in how to do this (flexibility at the cost of concise, clear documentation). Here's what worked for me, with key items annotated (you may need to remove the comments). I can't say this is THE precise way, only that it worked for me.

auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 2

schema_config:
  configs:
    - from: 2020-07-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h  # period 24h is a key setting for AWS S3 use - see docs
# I was forced to add a compactor config as well - though this will become necessary anyway
compactor:
  working_directory: /tmp/loki/compactor
  shared_store: s3

storage_config:
#  boltdb:   
#    several samples showed both boltdb and shipper, since versions change so fast it's hard to verify proper use but latest examples and docs state shipper and this worked (as of 2.3 for me) 
  boltdb_shipper:  
    shared_store: s3  # I have seen both 'aws' and 's3' used here and other places 
    active_index_directory: /tmp/loki/index
    cache_location: /tmp/loki/cache
#    cache_ttl: 168h

  aws:
    s3: s3://usxxxxx2/my-lokidata  # I used IAM ROLE - you can put using creds but this is the more secure as the role is attached to this server and not usable from anywhere else
    s3forcepathstyle: true  # added from various examples - need to validate
    sse_encryption: true    # added from various examples - need to validate
#    bucketnames: my-lokidata  # not needed here if bucket in the s3 path above

limits_config:
  ingestion_rate_mb: 16
  ingestion_burst_size_mb: 20
  enforce_metric_name: false
  reject_old_samples: false
  reject_old_samples_max_age: 504h  # I have this set high for testing log ingestion from client

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
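
(If an IAM role is not an option, the aws block can also carry static credentials directly instead of embedding them in the s3 URL - a sketch using the same field names that appear in the Linode example further down; the values are placeholders:)

storage_config:
  aws:
    bucketnames: my-lokidata
    region: <region>
    access_key_id: <access-key>
    secret_access_key: <secret-key>
    s3forcepathstyle: true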

Hope this helps.

Thank you for your answer, but when I use /tmp…

err="mkdir /tmp/loki: read-only file system\nerror creating index client\

That sounds like the local folders for that location are not owned by the user running Loki. I assume you are running it as a service (on Linux, I assume) - you must also give ownership of the /tmp/loki/ folder to the user assigned in the service. Here is a good link to follow - some sections are outdated, but it is one of the better resources for general config and process info on Grafana/Loki. Just make sure you are not running as root in the service. Follow that link for the Linux-side setup, but not for the Loki config file.

caller=log.go:106 msg="error running loki" err="mkdir /loki/index: read-only file system\nerror creating index clien

I use the official loki-distributed chart.

what OS is the install on?

Go to the tmp dir and run a listing of the contents (ls -l or ll) to show ownership, then paste a screenshot or the contents here.

Official image, from the official chart loki-distributed

I'm asking what operating system - or are you saying you are deploying with a Docker image or Helm?

If I use a volumeClaim, it might be a good idea to point the index directory at it :grinning_face_with_smiling_eyes:

As a StatefulSet, the mount point is /var/loki.

The working setup:

  boltdb_shipper:
    shared_store: s3
    active_index_directory: /var/loki/index
    cache_location: /var/loki/cache

Yes, the documentation is not clear. It took a lot of trial and error to get the configuration below working with Linode's S3-compatible object storage:

  # -- Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas
  schemaConfig:
    configs:
    - from: 2020-09-07
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

  # -- Check https://grafana.com/docs/loki/latest/configuration/#storage_config for more info on how to configure storages
  storageConfig:
    boltdb_shipper:
      shared_store: s3
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
    aws:
      s3: s3://
      bucketnames:  <bucket-name>
      access_key_id: <access-key>
      secret_access_key: <secret-key>
      # region will always be US even if you have selected any other region
      region: US
      # use the actual region name below. For example, if you have used ap-south-1 the endpoint will be ap-south-1.linodeobjects.com
      endpoint: <region-name>.linodeobjects.com
