Loki write pods show Running but stay at 0/1 Ready

Using the grafana/loki Helm chart, version 2.9.4.
Describing the "loki-write" pod shows this error:
Readiness probe failed: HTTP probe failed with statuscode: 503

Pod Logs
level=warn ts=2024-02-15T10:51:57.029896906Z caller=loki.go:288 msg=“global timeout not configured, using default engine timeout ("5m0s"). This behavior will change in the next major to always use the default global timeout ("5m").”
level=info ts=2024-02-15T10:51:57.030304056Z caller=main.go:108 msg=“Starting Loki” version=“(version=2.9.4, branch=HEAD, revision=f599ebc535)”
level=info ts=2024-02-15T10:51:57.032124314Z caller=server.go:322 http=[::]:3100 grpc=[::]:9095 msg=“server listening on addresses”
level=info ts=2024-02-15T10:51:57.033473345Z caller=memberlist_client.go:434 msg=“Using memberlist cluster label and node name” cluster_label= node=loki-write-0-9870f6b5
level=info ts=2024-02-15T10:51:57.035595807Z caller=memberlist_client.go:540 msg=“memberlist fast-join starting” nodes_found=1 to_join=4
level=warn ts=2024-02-15T10:51:57.040549097Z caller=experimental.go:20 msg=“experimental feature in use” feature=“In-memory (FIFO) cache - embedded-cache”
level=warn ts=2024-02-15T10:51:57.040847045Z caller=cache.go:127 msg=“fifocache config is deprecated. use embedded-cache instead”
level=warn ts=2024-02-15T10:51:57.040865858Z caller=experimental.go:20 msg=“experimental feature in use” feature=“In-memory (FIFO) cache - chunksembedded-cache”
ts=2024-02-15T10:51:57.041347535Z caller=memberlist_logger.go:74 level=warn msg=“Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host”
level=warn ts=2024-02-15T10:51:57.041375131Z caller=memberlist_client.go:560 msg=“memberlist fast-join finished” joined_nodes=0 elapsed_time=5.782673ms
level=info ts=2024-02-15T10:51:57.041392593Z caller=memberlist_client.go:573 msg=“joining memberlist cluster” join_members=loki-memberlist
level=info ts=2024-02-15T10:51:57.04306219Z caller=shipper.go:165 index-store=boltdb-shipper-2022-01-11 msg=“starting index shipper in WO mode”
level=info ts=2024-02-15T10:51:57.043128947Z caller=table_manager.go:136 index-store=boltdb-shipper-2022-01-11 msg=“uploading tables”
level=info ts=2024-02-15T10:51:57.044092928Z caller=table_manager.go:240 index-store=boltdb-shipper-2022-01-11 msg=“loading table loki_index_19759”
ts=2024-02-15T10:51:57.045608411Z caller=memberlist_logger.go:74 level=warn msg=“Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host”
level=warn ts=2024-02-15T10:51:57.045628606Z caller=memberlist_client.go:595 msg=“joining memberlist cluster: failed to reach any nodes” retries=0 err=“1 error occurred:\n\t* Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host\n\n”
level=info ts=2024-02-15T10:51:57.050633925Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19759”
level=info ts=2024-02-15T10:51:57.050679725Z caller=table.go:334 msg=“finished handing over table loki_index_19759”
level=info ts=2024-02-15T10:51:57.050720083Z caller=table_manager.go:240 index-store=boltdb-shipper-2022-01-11 msg=“loading table loki_index_19760”
level=info ts=2024-02-15T10:51:57.054868048Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19760”
level=info ts=2024-02-15T10:51:57.054907067Z caller=table.go:334 msg=“finished handing over table loki_index_19760”
level=info ts=2024-02-15T10:51:57.054946996Z caller=table_manager.go:240 index-store=boltdb-shipper-2022-01-11 msg=“loading table loki_index_19761”
level=info ts=2024-02-15T10:51:57.06477385Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19761”
level=info ts=2024-02-15T10:51:57.064818572Z caller=table.go:334 msg=“finished handing over table loki_index_19761”
level=info ts=2024-02-15T10:51:57.064877719Z caller=shipper_index_client.go:76 index-store=boltdb-shipper-2022-01-11 msg=“starting boltdb shipper in WO mode”
level=info ts=2024-02-15T10:51:57.064928199Z caller=table_manager.go:171 index-store=boltdb-shipper-2022-01-11 msg=“handing over indexes to shipper”
level=info ts=2024-02-15T10:51:57.064980058Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19759”
level=info ts=2024-02-15T10:51:57.064988186Z caller=table.go:334 msg=“finished handing over table loki_index_19759”
level=info ts=2024-02-15T10:51:57.065032446Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19760”
level=info ts=2024-02-15T10:51:57.065041125Z caller=table.go:334 msg=“finished handing over table loki_index_19760”
level=info ts=2024-02-15T10:51:57.06507504Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19761”
level=info ts=2024-02-15T10:51:57.065083137Z caller=table.go:334 msg=“finished handing over table loki_index_19761”
level=info ts=2024-02-15T10:51:57.072178847Z caller=module_service.go:82 msg=initialising module=runtime-config
level=info ts=2024-02-15T10:51:57.072436338Z caller=module_service.go:82 msg=initialising module=server
level=info ts=2024-02-15T10:51:57.072576305Z caller=module_service.go:82 msg=initialising module=memberlist-kv
level=info ts=2024-02-15T10:51:57.072614901Z caller=module_service.go:82 msg=initialising module=ring
level=info ts=2024-02-15T10:51:57.072658215Z caller=ring.go:273 msg=“ring doesn’t exist in KV store yet”
level=info ts=2024-02-15T10:51:57.072709625Z caller=module_service.go:82 msg=initialising module=index-gateway-ring
level=info ts=2024-02-15T10:51:57.072844672Z caller=ring.go:273 msg=“ring doesn’t exist in KV store yet”
level=info ts=2024-02-15T10:51:57.072895746Z caller=module_service.go:82 msg=initialising module=analytics
level=info ts=2024-02-15T10:51:57.073297944Z caller=module_service.go:82 msg=initialising module=distributor
level=info ts=2024-02-15T10:51:57.073383554Z caller=basic_lifecycler.go:297 msg=“instance not found in the ring” instance=loki-write-0 ring=distributor
level=info ts=2024-02-15T10:51:57.073853542Z caller=module_service.go:82 msg=initialising module=store
level=info ts=2024-02-15T10:51:57.073887923Z caller=module_service.go:82 msg=initialising module=ingester
level=info ts=2024-02-15T10:51:57.07392535Z caller=ingester.go:431 msg=“recovering from checkpoint”
ts=2024-02-15T10:51:58.311941445Z caller=memberlist_logger.go:74 level=warn msg=“Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host”
level=warn ts=2024-02-15T10:51:58.31197951Z caller=memberlist_client.go:595 msg=“joining memberlist cluster: failed to reach any nodes” retries=1 err=“1 error occurred:\n\t* Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host\n\n”
ts=2024-02-15T10:52:01.143354149Z caller=memberlist_logger.go:74 level=warn msg=“Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host”
level=warn ts=2024-02-15T10:52:01.143957881Z caller=memberlist_client.go:595 msg=“joining memberlist cluster: failed to reach any nodes” retries=2 err=“1 error occurred:\n\t* Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host\n\n”
level=info ts=2024-02-15T10:52:01.831472489Z caller=flush.go:167 msg=“flushing stream” user=fake fp=3773c507425f1226 immediate=true num_chunks=1 labels=“{app="grafana-agent", container="config-reloader", filename="/var/log/pods/logging_loki-logs-8kmzh_8e5ea927-444d-4979-b91d-1ce2293dd788/config-reloader/0.log", job="logging/grafana-agent", namespace="logging", node_name="ip-10-2-3-196.ap-south-1.compute.internal", pod="loki-logs-8kmzh", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.836001684Z caller=flush.go:167 msg=“flushing stream” user=fake fp=dfccd71b78dbb619 immediate=true num_chunks=2 labels=“{app="aws-node", container="aws-node", filename="/var/log/pods/kube-system_aws-node-vp95k_a4c925e0-72d8-40f9-8ff4-b852c55d2d88/aws-node/0.log", job="kube-system/aws-node", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="aws-node-vp95k", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.836769451Z caller=flush.go:167 msg=“flushing stream” user=fake fp=477618df9220f3f4 immediate=true num_chunks=1 labels=“{app="shield", container="shield", filename="/var/log/pods/suremdm-hotfix_shield-f967fd974-58bss_152b371a-bc15-497e-b41a-5c850814d9cb/shield/0.log", job="suremdm-hotfix/shield", namespace="suremdm-hotfix", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="shield-f967fd974-58bss", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.836880388Z caller=flush.go:167 msg=“flushing stream” user=fake fp=2acb7d79030eb58c immediate=true num_chunks=1 labels=“{app="promtail", container="promtail", filename="/var/log/pods/logging_promtail-new-2875r_4fba1f0c-310c-4bbb-9192-2354a833a7f3/promtail/0.log", job="logging/promtail", namespace="logging", node_name="ip-10-2-4-14.ap-south-1.compute.internal", pod="promtail-new-2875r", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.836947887Z caller=flush.go:167 msg=“flushing stream” user=fake fp=dd14019d76b898c9 immediate=true num_chunks=2 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="csi-snapshotter", filename="/var/log/pods/kube-system_ebs-csi-controller-75c5d8d557-5rpnd_896c9bb8-5344-4c23-b26c-e59641c9640b/csi-snapshotter/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="ebs-csi-controller-75c5d8d557-5rpnd", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.837004611Z caller=flush.go:167 msg=“flushing stream” user=fake fp=c37dfea386e48a6f immediate=true num_chunks=1 labels=“{instance="devicerouterservice-deployment-7ffbf4bcf7-qh8p2", job="sidecar-DRS"}”
level=info ts=2024-02-15T10:52:01.837061903Z caller=flush.go:167 msg=“flushing stream” user=fake fp=6262c32ea661bef3 immediate=true num_chunks=1 labels=“{app="aws-node", container="aws-vpc-cni-init", filename="/var/log/pods/kube-system_aws-node-km8pz_84141ecb-8b86-44b1-be05-67788bd401a5/aws-vpc-cni-init/0.log", job="kube-system/aws-node", namespace="kube-system", node_name="ip-10-2-4-14.ap-south-1.compute.internal", pod="aws-node-km8pz", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.837122793Z caller=flush.go:167 msg=“flushing stream” user=fake fp=2fdc6aa78b5c10e4 immediate=true num_chunks=1 labels=“{app="browserapi", container="browserapi", filename="/var/log/pods/suremdm-hotfix_browserapi-deployment-7b585578cc-jzfsg_cf2ca282-69e1-45a0-917f-e0d38c9253ca/browserapi/0.log", job="suremdm-hotfix/browserapi", namespace="suremdm-hotfix", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="browserapi-deployment-7b585578cc-jzfsg", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.837177453Z caller=flush.go:167 msg=“flushing stream” user=fake fp=0994b0be9694f072 immediate=true num_chunks=1 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="ebs-plugin", filename="/var/log/pods/kube-system_ebs-csi-controller-75c5d8d557-5rpnd_896c9bb8-5344-4c23-b26c-e59641c9640b/ebs-plugin/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="ebs-csi-controller-75c5d8d557-5rpnd", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.837236025Z caller=flush.go:167 msg=“flushing stream” user=fake fp=09cf27a938743cd5 immediate=true num_chunks=2 labels=“{app="suremdminsights4", container="suremdminsights4", filename="/var/log/pods/suremdm-hotfix_suremdminsights4-99bff58f6-8425j_85afa4c4-7290-4899-bcda-43b895609fef/suremdminsights4/0.log", job="suremdm-hotfix/suremdminsights4", namespace="suremdm-hotfix", node_name="ip-10-2-4-14.ap-south-1.compute.internal", pod="suremdminsights4-99bff58f6-8425j", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.837746819Z caller=flush.go:167 msg=“flushing stream” user=fake fp=e1683a77f738371c immediate=true num_chunks=2 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="csi-snapshotter", filename="/var/log/pods/kube-system_ebs-csi-controller-75c5d8d557-mw6tr_5b0e325a-47c3-472c-9ac7-1cd5196c6f12/csi-snapshotter/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-151.ap-south-1.compute.internal", pod="ebs-csi-controller-75c5d8d557-mw6tr", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.84019268Z caller=flush.go:167 msg=“flushing stream” user=fake fp=17a1d532bfcc2823 immediate=true num_chunks=2 labels=“{app="cluster-autoscaler", container="cluster-autoscaler", filename="/var/log/pods/kube-system_cluster-autoscaler-b7966f84c-wk9ms_80370b5e-28c5-4caa-9f9c-2fbbf2cf5fa6/cluster-autoscaler/0.log", job="kube-system/cluster-autoscaler", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="cluster-autoscaler-b7966f84c-wk9ms", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.840879231Z caller=flush.go:167 msg=“flushing stream” user=fake fp=f460043dc368aeab immediate=true num_chunks=1 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="csi-provisioner", filename="/var/log/pods/kube-system_ebs-csi-controller-75c5d8d557-5rpnd_896c9bb8-5344-4c23-b26c-e59641c9640b/csi-provisioner/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="ebs-csi-controller-75c5d8d557-5rpnd", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.841103756Z caller=flush.go:167 msg=“flushing stream” user=fake fp=ae68464a38ac4748 immediate=true num_chunks=2 labels=“{app="grafana", container="grafana", filename="/var/log/pods/grafana_grafana-67b945584f-csn85_1aff72b7-4490-416c-be20-9c646d821d71/grafana/0.log", job="grafana/grafana", namespace="grafana", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="grafana-67b945584f-csn85", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.841300915Z caller=flush.go:167 msg=“flushing stream” user=fake fp=e9b07d7f4df6a5ea immediate=true num_chunks=2 labels=“{app="aws-node", container="aws-node", filename="/var/log/pods/kube-system_aws-node-vp95k_a4c925e0-72d8-40f9-8ff4-b852c55d2d88/aws-node/0.log", job="kube-system/aws-node", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="aws-node-vp95k", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.841452137Z caller=flush.go:167 msg=“flushing stream” user=fake fp=8b929d3431e7a57e immediate=true num_chunks=1 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="csi-resizer", filename="/var/log/pods/kube-system_ebs-csi-controller-75c5d8d557-mw6tr_5b0e325a-47c3-472c-9ac7-1cd5196c6f12/csi-resizer/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-151.ap-south-1.compute.internal", pod="ebs-csi-controller-75c5d8d557-mw6tr", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.841595914Z caller=flush.go:167 msg=“flushing stream” user=fake fp=fe1f29ff23b5e150 immediate=true num_chunks=1 labels=“{app="browserapi", container="sidecar", filename="/var/log/pods/suremdm-hotfix_browserapi-deployment-6558cd87d9-4br9h_c5689adc-f888-4cc0-b711-15bfbafb4038/sidecar/0.log", job="suremdm-hotfix/browserapi", namespace="suremdm-hotfix", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="browserapi-deployment-6558cd87d9-4br9h", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.841814998Z caller=flush.go:167 msg=“flushing stream” user=fake fp=44ec002382da7edb immediate=true num_chunks=1 labels=“{app="aws-node", container="aws-eks-nodeagent", filename="/var/log/pods/kube-system_aws-node-9xqng_25574064-4d29-4808-a2cf-21cb59b17ff7/aws-eks-nodeagent/0.log", job="kube-system/aws-node", namespace="kube-system", node_name="ip-10-2-3-196.ap-south-1.compute.internal", pod="aws-node-9xqng", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.841960386Z caller=flush.go:167 msg=“flushing stream” user=fake fp=9cf41529203cd47f immediate=true num_chunks=1 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="ebs-plugin", filename="/var/log/pods/kube-system_ebs-csi-node-4tqm5_cc720ec6-5c90-4228-a778-ff7a0e8da61f/ebs-plugin/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="ebs-csi-node-4tqm5", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.84227539Z caller=flush.go:167 msg=“flushing stream” user=fake fp=411ffeef2671e7ae immediate=true num_chunks=1 labels=“{app="loki", component="gateway", container="nginx", filename="/var/log/pods/logging_loki-gateway-756f47464-hdvbt_0d86abac-63ca-4f21-bcb4-f67264208d56/nginx/0.log", job="logging/loki", namespace="logging", node_name="ip-10-2-4-69.ap-south-1.compute.internal", pod="loki-gateway-756f47464-hdvbt", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.842457703Z caller=flush.go:167 msg=“flushing stream” user=fake fp=d45296a50382fc65 immediate=true num_chunks=1 labels=“{cluster="loki", container="write", filename="/var/log/pods/logging_loki-write-0_aa1f8742-dc7d-4b43-a269-220c473fee3e/write/0.log", job="logging/write", name="write", namespace="logging", pod="loki-write-0", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.842608849Z caller=flush.go:167 msg=“flushing stream” user=fake fp=dc597964740d3dcd immediate=true num_chunks=1 labels=“{app="aws-node", container="aws-node", filename="/var/log/pods/kube-system_aws-node-5wfkl_857a7278-5897-45f6-a141-ef9400b55c8a/aws-node/0.log", job="kube-system/aws-node", namespace="kube-system", node_name="ip-10-2-5-151.ap-south-1.compute.internal", pod="aws-node-5wfkl", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.842746409Z caller=flush.go:167 msg=“flushing stream” user=fake fp=dc757a26d4829067 immediate=true num_chunks=1 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="liveness-probe", filename="/var/log/pods/kube-system_ebs-csi-node-wd2rn_74fa80ee-58ce-4fed-9f79-0524beaada32/liveness-probe/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-151.ap-south-1.compute.internal", pod="ebs-csi-node-wd2rn", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.842782418Z caller=flush.go:167 msg=“flushing stream” user=fake fp=4f57f6a51f38dd17 immediate=true num_chunks=1 labels=“{app="browserapi", container="browserapi", filename="/var/log/pods/suremdm-hotfix_browserapi-deployment-7b585578cc-d7k5x_62dd6e0f-b23b-4054-8520-5dc048c6abf2/browserapi/0.log", job="suremdm-hotfix/browserapi", namespace="suremdm-hotfix", node_name="ip-10-2-3-196.ap-south-1.compute.internal", pod="browserapi-deployment-7b585578cc-d7k5x", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.842902945Z caller=flush.go:167 msg=“flushing stream” user=fake fp=f3ad5fd69aed7138 immediate=true num_chunks=2 labels=“{app="prometheus-node-exporter", component="metrics", container="node-exporter", filename="/var/log/pods/prometheus_prometheus-1687165428-prometheus-node-exporter-j5sb2_c0493693-bb53-419b-b608-ed7f7c052628/node-exporter/0.log", job="prometheus/prometheus-node-exporter", namespace="prometheus", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="prometheus-1687165428-prometheus-node-exporter-j5sb2", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.842947781Z caller=flush.go:167 msg=“flushing stream” user=fake fp=8da5a96828b43d3a immediate=true num_chunks=1 labels=“{app="prometheus-node-exporter", component="metrics", container="node-exporter", filename="/var/log/pods/prometheus_prometheus-1687165428-prometheus-node-exporter-lld2p_2cb14a62-9df6-4a32-aa7a-72b9739c386c/node-exporter/0.log", job="prometheus/prometheus-node-exporter", namespace="prometheus", node_name="ip-10-2-4-69.ap-south-1.compute.internal", pod="prometheus-1687165428-prometheus-node-exporter-lld2p", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.843072704Z caller=flush.go:167 msg=“flushing stream” user=fake fp=4546d1ee1380aa1d immediate=true num_chunks=2 labels=“{app="aws-node", container="aws-vpc-cni-init", filename="/var/log/pods/kube-system_aws-node-vp95k_a4c925e0-72d8-40f9-8ff4-b852c55d2d88/aws-vpc-cni-init/0.log", job="kube-system/aws-node", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="aws-node-vp95k", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.84310211Z caller=flush.go:167 msg=“flushing stream” user=fake fp=8ccacd6d8aebca00 immediate=true num_chunks=1 labels=“{app="browserapi", container="sidecar", filename="/var/log/pods/suremdm-hotfix_browserapi-deployment-7b585578cc-d7k5x_62dd6e0f-b23b-4054-8520-5dc048c6abf2/sidecar/0.log", job="suremdm-hotfix/browserapi", namespace="suremdm-hotfix", node_name="ip-10-2-3-196.ap-south-1.compute.internal", pod="browserapi-deployment-7b585578cc-d7k5x", stream="stdout"}”
level=info ts=2024-02-15T10:52:01.843219054Z caller=flush.go:167 msg=“flushing stream” user=fake fp=d37649e8177211e1 immediate=true num_chunks=3 labels=“{app="zebraprinterconnector", container="zebraprinterconnector", filename="/var/log/pods/suremdm-hotfix_zebraprinterconnector-85f867fd47-dz4mv_0a162775-0c05-43a0-80b3-dc1438c6ffb8/zebraprinterconnector/0.log", job="suremdm-hotfix/zebraprinterconnector", namespace="suremdm-hotfix", node_name="ip-10-2-4-14.ap-south-1.compute.internal", pod="zebraprinterconnector-85f867fd47-dz4mv", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.843247823Z caller=flush.go:167 msg=“flushing stream” user=fake fp=473f4b12b2bb6491 immediate=true num_chunks=1 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="node-driver-registrar", filename="/var/log/pods/kube-system_ebs-csi-node-4tqm5_cc720ec6-5c90-4228-a778-ff7a0e8da61f/node-driver-registrar/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="ebs-csi-node-4tqm5", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.843394406Z caller=flush.go:167 msg=“flushing stream” user=fake fp=221be83872f02876 immediate=true num_chunks=1 labels=“{app="aws-ebs-csi-driver", component="csi-driver", container="liveness-probe", filename="/var/log/pods/kube-system_ebs-csi-node-4tqm5_cc720ec6-5c90-4228-a778-ff7a0e8da61f/liveness-probe/0.log", job="kube-system/aws-ebs-csi-driver", namespace="kube-system", node_name="ip-10-2-5-136.ap-south-1.compute.internal", pod="ebs-csi-node-4tqm5", stream="stderr"}”
level=info ts=2024-02-15T10:52:01.843797655Z caller=flush.go:167 msg=“flushing stream” user=fake fp=da9778f1ec96c722 immediate=true num_chunks=1 labels=“{app="coredns", container="coredns", filename="/var/log/pods/kube-system_coredns-666b594bc6-g7xv7_99d6b014-04a7-4d83-8640-ac1249e83356/coredns/0.log", job="kube-system/coredns", namespace="kube-system", node_name="ip-10-2-5-151.ap-south-1.compute.internal", pod="coredns-666b594bc6-g7xv7", stream="stdout"}”
ts=2024-02-15T10:52:06.898246988Z caller=memberlist_logger.go:74 level=warn msg=“Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host”
level=warn ts=2024-02-15T10:52:06.89827878Z caller=memberlist_client.go:595 msg=“joining memberlist cluster: failed to reach any nodes” retries=3 err=“1 error occurred:\n\t* Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host\n\n”
ts=2024-02-15T10:52:15.277364738Z caller=memberlist_logger.go:74 level=warn msg=“Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host”
level=warn ts=2024-02-15T10:52:15.27739389Z caller=memberlist_client.go:595 msg=“joining memberlist cluster: failed to reach any nodes” retries=4 err=“1 error occurred:\n\t* Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host\n\n”
level=info ts=2024-02-15T10:52:32.680551503Z caller=memberlist_client.go:592 msg=“joining memberlist cluster succeeded” reached_nodes=3 elapsed_time=35.639158712s
level=info ts=2024-02-15T10:52:57.043457216Z caller=table_manager.go:136 index-store=boltdb-shipper-2022-01-11 msg=“uploading tables”
level=info ts=2024-02-15T10:52:57.043525596Z caller=index_set.go:86 msg=“uploading table loki_index_19759”
level=info ts=2024-02-15T10:52:57.065616658Z caller=table_manager.go:171 index-store=boltdb-shipper-2022-01-11 msg=“handing over indexes to shipper”
level=info ts=2024-02-15T10:52:57.066127349Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19759”
level=info ts=2024-02-15T10:52:57.066138357Z caller=table.go:334 msg=“finished handing over table loki_index_19759”
level=info ts=2024-02-15T10:52:57.066165334Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19760”
level=info ts=2024-02-15T10:52:57.06617031Z caller=table.go:334 msg=“finished handing over table loki_index_19760”
level=info ts=2024-02-15T10:52:57.066189464Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19761”
level=info ts=2024-02-15T10:52:57.066194235Z caller=table.go:334 msg=“finished handing over table loki_index_19761”
level=info ts=2024-02-15T10:53:57.065569058Z caller=table_manager.go:171 index-store=boltdb-shipper-2022-01-11 msg=“handing over indexes to shipper”
level=info ts=2024-02-15T10:53:57.065637252Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19759”
level=info ts=2024-02-15T10:53:57.065645011Z caller=table.go:334 msg=“finished handing over table loki_index_19759”
level=info ts=2024-02-15T10:53:57.065671759Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19760”
level=info ts=2024-02-15T10:53:57.065677004Z caller=table.go:334 msg=“finished handing over table loki_index_19760”
level=info ts=2024-02-15T10:53:57.065829172Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19761”
level=info ts=2024-02-15T10:53:57.065841886Z caller=table.go:334 msg=“finished handing over table loki_index_19761”
level=info ts=2024-02-15T10:54:57.065345819Z caller=table_manager.go:171 index-store=boltdb-shipper-2022-01-11 msg=“handing over indexes to shipper”
level=info ts=2024-02-15T10:54:57.065461688Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19761”
level=info ts=2024-02-15T10:54:57.065471591Z caller=table.go:334 msg=“finished handing over table loki_index_19761”
level=info ts=2024-02-15T10:54:57.065500604Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19759”
level=info ts=2024-02-15T10:54:57.065505747Z caller=table.go:334 msg=“finished handing over table loki_index_19759”
level=info ts=2024-02-15T10:54:57.065533152Z caller=table.go:318 msg=“handing over indexes to shipper loki_index_19760”
level=info ts=2024-02-15T10:54:57.065537878Z caller=table.go:334 msg=“finished handing over table loki_index_19760”

values.yaml

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h  # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3
  aws:
    s3: s3://:@ap-south-1
    bucketnames:

schema_config:
  configs:
    - from: 2020-07-01
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: index_
        period: 24h

loki:
  auth_enabled: false
  server:
    http_listen_port: 3100
  commonConfig:
    replication_factor: 1

nodeSelector:
  environment: loki

memberlist:
  service:
    publishNotReadyAddresses: true

This is the problem: your write pods are not able to resolve the memberlist address:

level=warn ts=2024-02-15T10:51:57.045628606Z caller=memberlist_client.go:595 msg=“joining memberlist cluster: failed to reach any nodes” retries=0 err=“1 error occurred:\n\t* Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host\n\n”

How to resolve

The memberlist address needs to be something that resolves to all writers and readers. For example, if you are using the simple scalable mode, you should set up service discovery for all your read and write pods, say:

loki-read.something.local
loki-write.something.local

And then you’d configure memberlist with them like so:

memberlist:
  join_members:
  - dns+loki-read.something.local:7946
  - dns+loki-write.something.local:7946
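In Kubernetes these join addresses are typically backed by headless Services. Since the failing lookup in your logs is for loki-memberlist, here is a minimal sketch of such a headless Service; the namespace and selector labels below are assumptions, so adjust them to match your chart's pod labels:

apiVersion: v1
kind: Service
metadata:
  name: loki-memberlist          # name referenced by join_members
  namespace: logging             # assumed release namespace
spec:
  clusterIP: None                # headless: DNS returns every member pod IP
  publishNotReadyAddresses: true # lets pods join before they pass readiness
  selector:
    app.kubernetes.io/name: loki # assumed label; must select read and write pods
  ports:
    - name: memberlist
      port: 7946
      targetPort: 7946

With a Service like this in place, join_members could point at dns+loki-memberlist.logging.svc.cluster.local:7946.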

Same issue even after following your instructions.

Write pod log below:

level=warn ts=2024-02-18T07:05:33.787200633Z caller=memberlist_client.go:595 msg=“joining memberlist cluster: failed to reach any nodes” retries=5 err=“1 error occurred:\n\t* Failed to resolve loki-memberlist: lookup loki-memberlist on 172.20.0.10:53: no such host\n\n”

Pod describe:
Readiness probe failed: HTTP probe failed with statuscode: 503

Configs used:
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h  # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3
  aws:
    s3: s3://@ap-south-1
    bucketnames:

schema_config:
  configs:
    - from: 2020-07-01
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: index_
        period: 24h

loki:
  auth_enabled: false
  server:
    http_listen_port: 3100
  commonConfig:
    replication_factor: 1

memberlist:
  join_members:

Please provide some docs that can help with configuring Loki in an AWS environment.

You should get rid of the http:// part and use one of the service discovery methods. See About Grafana Mimir DNS service discovery | Grafana Mimir documentation for how service discovery addresses are written. You can also search for Loki configuration examples; there are many online.
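On the AWS side, here is a minimal sketch of the S3 storage block. The bucket name below is a placeholder, and the credential fields are only needed if you are not using an IAM role:

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    shared_store: s3
  aws:
    region: ap-south-1
    bucketnames: my-loki-chunks     # placeholder; use your real bucket name
    s3forcepathstyle: false
    # access_key_id: <key>          # only if not using an IAM role
    # secret_access_key: <secret>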

level=error ts=2024-02-26T05:15:56.207196871Z caller=flush.go:143 org_id=fake msg=“failed to flush” err=“failed to flush chunks: store put chunk: RequestError: send request failed\ncaused by: Put "https://chunks.s3.dummy.amazonaws.com/fake/87dd4daa8ec3804e/18d89a60860%3A18d89a6800f%3A7460c366\”: dial tcp: lookup chunks.s3.dummy.amazonaws.com on 172.20.0.10:53: no such host, num_chunks: 4, labels: {app="devicerouterservice", container="devicerouterservice", filename="/var/log/pods/suremdm-hotfix_devicerouterservice-deployment-5b55c89d99-nbf8g_379479cc-6c53-45dd-a056-7d4758517c2c/devicerouterservice/0.log", job="suremdm-hotfix/devicerouterservice", loglevel="Info", namespace="suremdm-hotfix", node_name="ip-10-2-3-196.ap-south-1.compute.internal", pod="devicerouterservice-deployment-5b55c89d99-nbf8g", stream="stdout"}"

Getting this error even after providing the proper AWS credentials.