Trying to send Loki logs to Azure

Hello!

I have a working Loki setup sending logs to S3, but I want to experiment with storing them in Azure instead and am running into trouble. Loki creates a loki_cluster_seed.json file in the container, so I know my credentials are correct, but nothing else is created there and I cannot view my logs in Grafana. There are no errors in Loki's stdout.
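
For what it's worth, this is roughly how I check the container contents; it assumes the az CLI is installed and logged in, and uses the account and container names from the config below:

az storage blob list \
  --account-name crossenterprisestorage \
  --container-name loki \
  --account-key "$AZURE_ACCOUNT_KEY" \
  --output table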

Is there something obvious wrong with my configuration?

docker-compose.yml

version: '3.7'

services:
  loki:
    image: grafana/loki:2.8.2
    logging:
      driver: json-file
      options:
        max-file: '3'
        max-size: 10m
    restart: unless-stopped
    command: -config.file=/etc/loki/loki.yml -config.expand-env=true
    ports:
      - "3100:3100"
    volumes:
      - ./loki/loki.dev.yml:/etc/loki/loki.yml:rw
    environment:
      - AZURE_ACCOUNT_KEY=${AZURE_ACCOUNT_KEY} # list-form entries keep quotes literally, so the value is left unquoted
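
Since -config.expand-env=true makes Loki expand ${AZURE_ACCOUNT_KEY} from the container environment, it is worth checking what the container actually sees. A quick way, assuming the grafana/loki image ships a BusyBox shell, is to print the value's length rather than the secret itself:

# Should match the real key length; if it is off by two, stray quotes made it into the value.
docker compose exec loki sh -c 'printf %s "$AZURE_ACCOUNT_KEY" | wc -c'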

loki.dev.yml

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

ingester:
  wal:
    enabled: true
    dir: /tmp/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1MB (1048576 bytes), flushing earlier if chunk_idle_period or max_chunk_age is hit first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: azure
      schema: v11
      index:
        prefix: index_
        period: 24h

common:
  storage:
    azure:
      account_name: ${AZURE_ACCOUNT_NAME:-crossenterprisestorage}
      account_key: ${AZURE_ACCOUNT_KEY}
      container_name: ${AZURE_CONTAINER_NAME:-loki}
      request_timeout: 0

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    resync_interval: 5s
    cache_ttl: 24h
    shared_store: azure

compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: azure

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: true
  retention_period: 4320h # 180 days

query_scheduler:
  max_outstanding_requests_per_tenant: 2048

query_range:
  parallelise_shardable_queries: false
  split_queries_by_interval: 0
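
One note on the ingester comments above: with chunk_idle_period and max_chunk_age both at 1h, chunks only reach the object store when they are flushed, so the container can stay empty (apart from the seed and index files) for up to an hour even when ingestion works. A quick end-to-end test is to push one line and force a flush by hand; this is only a sketch against Loki's push and flush endpoints, and the smoke-test label is made up:

# Push one test line (timestamp in nanoseconds, GNU date).
curl -s -H 'Content-Type: application/json' -X POST http://localhost:3100/loki/api/v1/push \
  --data "{\"streams\":[{\"stream\":{\"job\":\"smoke-test\"},\"values\":[[\"$(date +%s%N)\",\"hello azure\"]]}]}"

# Ask the ingester to flush all in-memory chunks to the object store.
curl -s -X POST http://localhost:3100/flush

# Query the line back (query_range defaults to the last hour).
curl -s -G http://localhost:3100/loki/api/v1/query_range --data-urlencode 'query={job="smoke-test"}'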

docker logs loki

level=warn ts=2023-05-11T13:40:20.62871622Z caller=loki.go:286 msg="per-tenant timeout not configured, using default engine timeout (\"5m0s\"). This behavior will change in the next major to always use the default per-tenant timeout (\"5m\")."
level=info ts=2023-05-11T13:40:20.629857054Z caller=main.go:108 msg="Starting Loki" version="(version=2.8.2, branch=HEAD, revision=9f809eda7)"
level=info ts=2023-05-11T13:40:20.630784599Z caller=server.go:323 http=[::]:3100 grpc=[::]:9096 msg="server listening on addresses"
level=warn ts=2023-05-11T13:40:20.649612737Z caller=experimental.go:20 msg="experimental feature in use" feature="Azure Blob Storage"
level=warn ts=2023-05-11T13:40:20.650100123Z caller=cache.go:114 msg="fifocache config is deprecated. use embedded-cache instead"
level=warn ts=2023-05-11T13:40:20.650122233Z caller=experimental.go:20 msg="experimental feature in use" feature="In-memory (FIFO) cache - chunksembedded-cache"
level=warn ts=2023-05-11T13:40:20.650241083Z caller=experimental.go:20 msg="experimental feature in use" feature="Azure Blob Storage"
level=warn ts=2023-05-11T13:40:20.650313325Z caller=experimental.go:20 msg="experimental feature in use" feature="Azure Blob Storage"
level=info ts=2023-05-11T13:40:20.650522749Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2023-05-11T13:40:20.650883426Z caller=table_manager.go:262 msg="query readiness setup completed" duration=1.443µs distinct_users_len=0
level=info ts=2023-05-11T13:40:20.650929984Z caller=shipper.go:131 msg="starting index shipper in RW mode"
level=info ts=2023-05-11T13:40:20.657554658Z caller=shipper_index_client.go:78 msg="starting boltdb shipper in RW mode"
level=warn ts=2023-05-11T13:40:20.657775043Z caller=experimental.go:20 msg="experimental feature in use" feature="Azure Blob Storage"
level=warn ts=2023-05-11T13:40:20.658101061Z caller=experimental.go:20 msg="experimental feature in use" feature="Azure Blob Storage"
level=info ts=2023-05-11T13:40:20.659359847Z caller=worker.go:112 msg="Starting querier worker using query-scheduler and scheduler ring for addresses"
level=info ts=2023-05-11T13:40:20.660788131Z caller=table_manager.go:166 msg="handing over indexes to shipper"
level=info ts=2023-05-11T13:40:20.660847543Z caller=mapper.go:47 msg="cleaning up mapped rules directory" path=/rules
level=info ts=2023-05-11T13:40:20.669308981Z caller=module_service.go:82 msg=initialising module=cache-generation-loader
level=info ts=2023-05-11T13:40:20.669354724Z caller=module_service.go:82 msg=initialising module=server
level=info ts=2023-05-11T13:40:20.669521124Z caller=module_service.go:82 msg=initialising module=memberlist-kv
level=info ts=2023-05-11T13:40:20.669519496Z caller=module_service.go:82 msg=initialising module=query-frontend-tripperware
level=info ts=2023-05-11T13:40:20.669563306Z caller=module_service.go:82 msg=initialising module=ring
level=info ts=2023-05-11T13:40:20.669607902Z caller=ring.go:263 msg="ring doesn't exist in KV store yet"
level=info ts=2023-05-11T13:40:20.66967712Z caller=client.go:255 msg="value is nil" key=collectors/ring index=1
level=info ts=2023-05-11T13:40:20.669705451Z caller=module_service.go:82 msg=initialising module=usage-report
level=info ts=2023-05-11T13:40:20.669736343Z caller=module_service.go:82 msg=initialising module=ingester-querier
level=info ts=2023-05-11T13:40:20.669738886Z caller=module_service.go:82 msg=initialising module=query-scheduler
level=info ts=2023-05-11T13:40:20.669755597Z caller=module_service.go:82 msg=initialising module=store
level=info ts=2023-05-11T13:40:20.669788617Z caller=module_service.go:82 msg=initialising module=ingester
level=info ts=2023-05-11T13:40:20.669789452Z caller=module_service.go:82 msg=initialising module=distributor
level=info ts=2023-05-11T13:40:20.669771683Z caller=module_service.go:82 msg=initialising module=compactor
level=info ts=2023-05-11T13:40:20.66981575Z caller=ingester.go:416 msg="recovering from checkpoint"
level=info ts=2023-05-11T13:40:20.669825786Z caller=module_service.go:82 msg=initialising module=ruler
level=info ts=2023-05-11T13:40:20.669858443Z caller=ruler.go:499 msg="ruler up and running"
level=info ts=2023-05-11T13:40:20.669855209Z caller=lifecycler.go:547 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2023-05-11T13:40:20.669879187Z caller=recovery.go:40 msg="no checkpoint found, treating as no-op"
level=info ts=2023-05-11T13:40:20.669882313Z caller=ring.go:263 msg="ring doesn't exist in KV store yet"
level=info ts=2023-05-11T13:40:20.669915701Z caller=client.go:255 msg="value is nil" key=collectors/scheduler index=1
level=info ts=2023-05-11T13:40:20.669924952Z caller=ingester.go:432 msg="recovered WAL checkpoint recovery finished" elapsed=129.206µs errors=false
level=info ts=2023-05-11T13:40:20.669943034Z caller=ingester.go:438 msg="recovering from WAL"
level=info ts=2023-05-11T13:40:20.669946627Z caller=lifecycler.go:576 msg="instance not found in ring, adding with no tokens" ring=distributor
level=info ts=2023-05-11T13:40:20.669953238Z caller=basic_lifecycler.go:261 msg="instance not found in the ring" instance=ff2d4f6a2bdb ring=compactor
level=info ts=2023-05-11T13:40:20.669982854Z caller=basic_lifecycler_delegates.go:63 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2023-05-11T13:40:20.669989604Z caller=basic_lifecycler.go:261 msg="instance not found in the ring" instance=ff2d4f6a2bdb ring=scheduler
level=info ts=2023-05-11T13:40:20.670028814Z caller=basic_lifecycler_delegates.go:63 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2023-05-11T13:40:20.670036722Z caller=ring.go:263 msg="ring doesn't exist in KV store yet"
level=info ts=2023-05-11T13:40:20.670095061Z caller=client.go:255 msg="value is nil" key=collectors/compactor index=1
level=info ts=2023-05-11T13:40:20.670111956Z caller=client.go:255 msg="value is nil" key=collectors/compactor index=2
level=info ts=2023-05-11T13:40:20.670117804Z caller=ingester.go:454 msg="WAL segment recovery finished" elapsed=321.956µs errors=false
level=info ts=2023-05-11T13:40:20.67012749Z caller=ingester.go:402 msg="closing recoverer"
level=info ts=2023-05-11T13:40:20.670123942Z caller=client.go:255 msg="value is nil" key=collectors/compactor index=3
level=info ts=2023-05-11T13:40:20.670127977Z caller=scheduler.go:616 msg="waiting until scheduler is JOINING in the ring"
level=info ts=2023-05-11T13:40:20.670136074Z caller=ingester.go:410 msg="WAL recovery finished" time=340.067µs
level=info ts=2023-05-11T13:40:20.670160465Z caller=wal.go:156 msg=started component=wal
level=info ts=2023-05-11T13:40:20.670176094Z caller=lifecycler.go:547 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2023-05-11T13:40:20.670250289Z caller=lifecycler.go:576 msg="instance not found in ring, adding with no tokens" ring=ingester
level=info ts=2023-05-11T13:40:20.67030423Z caller=lifecycler.go:416 msg="auto-joining cluster after timeout" ring=ingester
level=info ts=2023-05-11T13:40:20.670311672Z caller=lifecycler.go:416 msg="auto-joining cluster after timeout" ring=distributor
level=info ts=2023-05-11T13:40:20.670422919Z caller=compactor.go:332 msg="waiting until compactor is JOINING in the ring"
level=info ts=2023-05-11T13:40:20.850278896Z caller=scheduler.go:620 msg="scheduler is JOINING in the ring"
level=info ts=2023-05-11T13:40:20.850402716Z caller=scheduler.go:630 msg="waiting until scheduler is ACTIVE in the ring"
level=info ts=2023-05-11T13:40:20.860918175Z caller=compactor.go:336 msg="compactor is JOINING in the ring"
level=info ts=2023-05-11T13:40:20.867436476Z caller=compactor.go:346 msg="waiting until compactor is ACTIVE in the ring"
level=info ts=2023-05-11T13:40:20.867453449Z caller=compactor.go:350 msg="compactor is ACTIVE in the ring"
level=info ts=2023-05-11T13:40:21.039256583Z caller=scheduler.go:634 msg="scheduler is ACTIVE in the ring"
level=info ts=2023-05-11T13:40:21.03932548Z caller=module_service.go:82 msg=initialising module=query-frontend
level=info ts=2023-05-11T13:40:21.039345987Z caller=module_service.go:82 msg=initialising module=querier
level=info ts=2023-05-11T13:40:21.039414864Z caller=loki.go:499 msg="Loki started"
level=info ts=2023-05-11T13:40:24.040325468Z caller=worker.go:209 msg="adding connection" addr=127.0.0.1:9096
level=info ts=2023-05-11T13:40:24.040890879Z caller=scheduler.go:681 msg="this scheduler is in the ReplicationSet, will now accept requests."
level=info ts=2023-05-11T13:40:25.667408558Z caller=table_manager.go:223 msg="syncing tables"
level=info ts=2023-05-11T13:40:25.667458212Z caller=table_manager.go:262 msg="query readiness setup completed" duration=1.851µs distinct_users_len=0
level=info ts=2023-05-11T13:40:25.868443526Z caller=compactor.go:411 msg="this instance has been chosen to run the compactor, starting compactor"
level=info ts=2023-05-11T13:40:25.86855971Z caller=compactor.go:440 msg="waiting 10m0s for ring to stay stable and previous compactions to finish before starting compactor"
level=info ts=2023-05-11T13:40:30.657396825Z caller=table_manager.go:223 msg="syncing tables"
level=info ts=2023-05-11T13:40:30.657434962Z caller=table_manager.go:262 msg="query readiness setup completed" duration=1.91µs distinct_users_len=0
level=info ts=2023-05-11T13:40:31.047477783Z caller=frontend_scheduler_worker.go:107 msg="adding connection to scheduler" addr=127.0.0.1:9096
level=info ts=2023-05-11T13:40:35.657383422Z caller=table_manager.go:223 msg="syncing tables"
level=info ts=2023-05-11T13:40:35.657424861Z caller=table_manager.go:262 msg="query readiness setup completed" duration=1.919µs distinct_users_len=0
level=info ts=2023-05-11T13:40:40.657392451Z caller=table_manager.go:223 msg="syncing tables"
level=info ts=2023-05-11T13:40:40.657437625Z caller=table_manager.go:262 msg="query readiness setup completed" duration=1.841µs distinct_users_len=0
level=info ts=2023-05-11T13:40:45.657394938Z caller=table_manager.go:223 msg="syncing tables"
level=info ts=2023-05-11T13:40:45.65744311Z caller=table_manager.go:262 msg="query readiness setup completed" duration=1.83µs distinct_users_len=0

Did you get it working with Azure?

Hi @fredrikcarlbom
Did it work?

I use shared_store: azure as well, but I have the azure block, with account name, container name, and account key, located under storage_config, roughly like the sketch below.

Also, the locations for index and cache start with /var, like /var/loki/index and /var/loki/cache.

I use the Loki distributed deployment.
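
A trimmed sketch of what I mean, assuming boltdb-shipper; account name and key redacted, paths from my setup:

storage_config:
  azure:
    account_name: <redacted>
    account_key: <redacted>
    container_name: loki
  boltdb_shipper:
    active_index_directory: /var/loki/index
    cache_location: /var/loki/cache
    shared_store: azure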