First of all, I want to apologize if this is a duplicate, but over the last couple of days I’ve searched extensively for a solution to my problem. I’m trying to deploy Grafana Loki on AWS Elastic Kubernetes Service using Terraform, with S3 as the storage backend. I’m fairly new to Kubernetes and AWS, so I’m sure I’m overlooking something simple, but I can’t get Loki to save logs in S3. Loki starts up normally, but it doesn’t create anything in the S3 bucket, unlike what I saw in the “Deploying the Loki Helm on AWS” video from the official YouTube channel.
I’ll share my files below. If anyone could take a look at them, I’d be very grateful!
Hey, sure!
This is the loki-values.yaml I use. From my understanding, everything under loki is Loki configuration and the rest are settings for the pod. There are no errors from the writer. I’ll attach the logs generated from starting Loki.
level=info ts=2024-11-20T20:24:47.817499343Z caller=main.go:103 msg="Starting Loki" version="(version=2.6.1, branch=HEAD, revision=6bd05c9a4)"
level=info ts=2024-11-20T20:24:47.818082731Z caller=server.go:288 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2024-11-20T20:24:47.81875431Z caller=modules.go:736 msg="RulerStorage is not configured in single binary mode and will not be started."
level=warn ts=2024-11-20T20:24:47.835554115Z caller=experimental.go:20 msg="experimental feature in use" feature="In-memory (FIFO) cache - chunksfifocache"
level=info ts=2024-11-20T20:24:47.838381745Z caller=table_manager.go:252 msg="query readiness setup completed" duration=3.56µs distinct_users_len=0
level=info ts=2024-11-20T20:24:47.838555526Z caller=shipper.go:124 msg="starting index shipper in RW mode"
level=info ts=2024-11-20T20:24:47.840084808Z caller=shipper_index_client.go:79 msg="starting boltdb shipper in RW mode"
level=info ts=2024-11-20T20:24:47.845702757Z caller=worker.go:112 msg="Starting querier worker using query-scheduler and scheduler ring for addresses"
ts=2024-11-20T20:24:47.846469437Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve loki-memberlist: lookup loki-memberlist on 10.100.0.10:53: no such host"
level=info ts=2024-11-20T20:24:47.847276529Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-11-20T20:24:47.847338619Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-11-20T20:24:47.84886251Z caller=modules.go:761 msg="RulerStorage is nil. Not starting the ruler."
level=info ts=2024-11-20T20:24:47.854790403Z caller=module_service.go:82 msg=initialising module=server
level=info ts=2024-11-20T20:24:47.85741054Z caller=module_service.go:82 msg=initialising module=query-frontend-tripperware
level=info ts=2024-11-20T20:24:47.857931707Z caller=module_service.go:82 msg=initialising module=memberlist-kv
level=info ts=2024-11-20T20:24:47.85814035Z caller=module_service.go:82 msg=initialising module=store
level=info ts=2024-11-20T20:24:47.858193871Z caller=module_service.go:82 msg=initialising module=ring
level=info ts=2024-11-20T20:24:47.858321293Z caller=ring.go:263 msg="ring doesn't exist in KV store yet"
level=info ts=2024-11-20T20:24:47.858398084Z caller=module_service.go:82 msg=initialising module=ingester-querier
level=info ts=2024-11-20T20:24:47.858491395Z caller=module_service.go:82 msg=initialising module=usage-report
level=info ts=2024-11-20T20:24:47.859152754Z caller=module_service.go:82 msg=initialising module=compactor
level=info ts=2024-11-20T20:24:47.859423298Z caller=ring.go:263 msg="ring doesn't exist in KV store yet"
level=info ts=2024-11-20T20:24:47.85954905Z caller=module_service.go:82 msg=initialising module=distributor
level=info ts=2024-11-20T20:24:47.859825684Z caller=module_service.go:82 msg=initialising module=ingester
level=info ts=2024-11-20T20:24:47.859864654Z caller=ingester.go:401 msg="recovering from checkpoint"
level=info ts=2024-11-20T20:24:47.860036217Z caller=recovery.go:39 msg="no checkpoint found, treating as no-op"
level=info ts=2024-11-20T20:24:47.860220809Z caller=module_service.go:82 msg=initialising module=query-scheduler
level=info ts=2024-11-20T20:24:47.860370802Z caller=ring.go:263 msg="ring doesn't exist in KV store yet"
level=info ts=2024-11-20T20:24:47.860550994Z caller=basic_lifecycler.go:261 msg="instance not found in the ring" instance=loki-0 ring=compactor
level=info ts=2024-11-20T20:24:47.860798227Z caller=basic_lifecycler_delegates.go:63 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2024-11-20T20:24:47.861096782Z caller=lifecycler.go:547 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2024-11-20T20:24:47.861154212Z caller=lifecycler.go:576 msg="instance not found in ring, adding with no tokens" ring=distributor
level=info ts=2024-11-20T20:24:47.861323025Z caller=lifecycler.go:416 msg="auto-joining cluster after timeout" ring=distributor
level=info ts=2024-11-20T20:24:47.861604538Z caller=basic_lifecycler.go:261 msg="instance not found in the ring" instance=loki-0 ring=scheduler
level=info ts=2024-11-20T20:24:47.861657639Z caller=basic_lifecycler_delegates.go:63 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2024-11-20T20:24:47.861618759Z caller=ingester.go:417 msg="recovered WAL checkpoint recovery finished" elapsed=1.765065ms errors=false
level=info ts=2024-11-20T20:24:47.861974784Z caller=ingester.go:423 msg="recovering from WAL"
level=info ts=2024-11-20T20:24:47.861950304Z caller=scheduler.go:617 msg="waiting until scheduler is JOINING in the ring"
level=info ts=2024-11-20T20:24:47.86237185Z caller=scheduler.go:621 msg="scheduler is JOINING in the ring"
level=info ts=2024-11-20T20:24:47.862005214Z caller=compactor.go:307 msg="waiting until compactor is JOINING in the ring"
level=info ts=2024-11-20T20:24:47.862624903Z caller=compactor.go:311 msg="compactor is JOINING in the ring"
level=info ts=2024-11-20T20:24:47.862538051Z caller=ingester.go:439 msg="WAL segment recovery finished" elapsed=2.684007ms errors=false
level=info ts=2024-11-20T20:24:47.862793586Z caller=ingester.go:387 msg="closing recoverer"
level=info ts=2024-11-20T20:24:47.862814206Z caller=ingester.go:395 msg="WAL recovery finished" time=2.960152ms
level=info ts=2024-11-20T20:24:47.862924657Z caller=lifecycler.go:547 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2024-11-20T20:24:47.863329083Z caller=lifecycler.go:576 msg="instance not found in ring, adding with no tokens" ring=ingester
level=info ts=2024-11-20T20:24:47.863499155Z caller=lifecycler.go:416 msg="auto-joining cluster after timeout" ring=ingester
level=info ts=2024-11-20T20:24:47.863718688Z caller=wal.go:156 msg=started component=wal
level=info ts=2024-11-20T20:24:48.863476132Z caller=compactor.go:321 msg="waiting until compactor is ACTIVE in the ring"
level=info ts=2024-11-20T20:24:48.863650854Z caller=compactor.go:325 msg="compactor is ACTIVE in the ring"
level=info ts=2024-11-20T20:24:48.863487702Z caller=scheduler.go:631 msg="waiting until scheduler is ACTIVE in the ring"
level=info ts=2024-11-20T20:24:48.86406724Z caller=scheduler.go:635 msg="scheduler is ACTIVE in the ring"
level=info ts=2024-11-20T20:24:48.864189441Z caller=module_service.go:82 msg=initialising module=query-frontend
level=info ts=2024-11-20T20:24:48.864375704Z caller=module_service.go:82 msg=initialising module=querier
level=info ts=2024-11-20T20:24:48.864556517Z caller=loki.go:374 msg="Loki started"
level=info ts=2024-11-20T20:24:48.940617848Z caller=memberlist_client.go:563 msg="joined memberlist cluster" reached_nodes=1
level=info ts=2024-11-20T20:24:51.864477531Z caller=scheduler.go:682 msg="this scheduler is in the ReplicationSet, will now accept requests."
level=info ts=2024-11-20T20:24:51.864591223Z caller=worker.go:209 msg="adding connection" addr=172.31.32.17:9095
level=info ts=2024-11-20T20:24:53.864022088Z caller=compactor.go:386 msg="this instance has been chosen to run the compactor, starting compactor"
level=info ts=2024-11-20T20:24:53.864212791Z caller=compactor.go:413 msg="waiting 10m0s for ring to stay stable and previous compactions to finish before starting compactor"
level=info ts=2024-11-20T20:24:58.864805787Z caller=frontend_scheduler_worker.go:101 msg="adding connection to scheduler" addr=172.31.32.17:9095
level=info ts=2024-11-20T20:25:47.84842321Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-11-20T20:25:47.848504991Z caller=table_manager.go:134 msg="uploading tables"
You are defining the same S3 bucket twice. I’d suggest using either common.storage (my approach) or storage_config, but not both. The key is that object_store under your index configuration in schema_config needs to point at a valid object store. You can see an example in the Helm chart’s documentation here: helm-charts/charts/loki-distributed at main · grafana/helm-charts · GitHub
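Very roughly, the common.storage shape I mean looks like the sketch below. The bucket name, region and schema date are placeholders, and where this nests in your values file depends on the chart you use (for example under loki.config or loki.structuredConfig):

    # Sketch only: bucket, region and the schema "from" date are placeholders.
    common:
      path_prefix: /var/loki            # local dir for WAL and boltdb-shipper state
      storage:
        s3:
          bucketnames: my-loki-bucket   # placeholder, use your bucket
          region: eu-west-1             # placeholder, use your region
          s3forcepathstyle: false
    schema_config:
      configs:
        - from: "2024-01-01"            # placeholder date
          store: boltdb-shipper
          object_store: s3              # must match the storage defined above
          schema: v12
          index:
            prefix: index_
            period: 24h
    storage_config:
      boltdb_shipper:
        shared_store: s3                # index uploads go to the same bucket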
Hit up your Loki container’s /config endpoint if you are not sure what the run-time configuration looks like.
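Something along these lines works (replace the namespace and service name with whatever your deployment uses):

    # in one terminal: forward Loki's HTTP port
    kubectl -n <namespace> port-forward svc/loki 3100:3100
    # in another: dump the effective config and check the storage sections
    curl -s http://localhost:3100/config | grep -A 10 storage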
Can you confirm the chunk files are written to the container’s filesystem? If so, then it’s a config error. If not, your problem may be elsewhere.
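For example (the chunks path is an assumption; it follows from the path_prefix / filesystem directory in your config, and /var/loki/chunks or /data/loki/chunks are common defaults):

    # pod name loki-0 is taken from your logs; adjust namespace and path as needed
    kubectl -n <namespace> exec loki-0 -- ls -lah /var/loki/chunks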