msg="error getting ingester clients" err="empty ring"

Hey, I am facing this problem: msg="error getting ingester clients" err="empty ring"
This is my Loki Helm chart values config, using Simple Scalable mode.

fullnameOverride: "test-loki"
nameOverride: "test-loki"
deploymentMode: SimpleScalable

######################################################################################################################
#
# Base Loki Configs including kubernetes configurations and configurations for Loki itself,
# see below for more specifics on Loki's configuration.
#
######################################################################################################################
loki:
  configStorageType: ConfigMap
  nodeSelector: 
    node_pool: "node-test"

  # Should authentication be enabled
  auth_enabled: false

  commonConfig:
    replication_factor: 1
    ring:
      kvstore:
        store: inmemory
  # -- Check https://grafana.com/docs/loki/latest/configuration/#server for more info on the server configuration.
  server:
    http_listen_port: 3100
    grpc_listen_port: 9095
    http_server_read_timeout: 600s
    http_server_write_timeout: 600s
  # -- Limits config
  limits_config:
    volume_enabled: true
    allow_structured_metadata: true
  tracing:
    enabled: true
  ingester:
    chunk_encoding: snappy
 
  # -- Storage config. Providing this will automatically populate all necessary storage configs in the templated config.
  # storage:
  #   # Loki requires a bucket for chunks and the ruler. GEL requires a third bucket for the admin API.
  #   bucketNames:
  #     chunks: chunks
  #     ruler: ruler
  #     admin: admin
  #   type: s3
  #   s3:
  #     s3: "http://loki:loki-secret@test-loki-minio.default.svc.cluster.local:9000"
  #     endpoint: <endpoint>
  #     region: us-east-1
  #     secretAccessKey: loki-secret
  #     accessKeyId: loki
  #     insecure: true
  #   gcs:
  #     chunkBufferSize: 0
  #     requestTimeout: "0s"
  #     enableHttp2: true
  #   filesystem:
  #     chunks_directory: /var/loki/chunks
  #     rules_directory: /var/loki/rules
  #     admin_api_directory: /var/loki/admin

  # -- Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas
  schemaConfig:
    configs:
      - from: "2024-06-01"
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h

  memcached:
    chunk_cache:
      enabled: true
      host: ""
      service: "memcached-client"
      batch_size: 256
      parallelism: 10
    results_cache:
      enabled: false
      host: ""
      service: "memcached-client"
      timeout: "500ms"
      default_validity: "12h"

######################################################################################################################
#
# Gateway and Ingress
#
# By default this chart will deploy an Nginx container to act as a gateway which handles routing of traffic
# and can also do auth.
#
# If you prefer, you can disable this and use a k8s Ingress for the incoming routing instead.
#
######################################################################################################################

# Configuration for the gateway
gateway:
  enabled: true
  replicas: 1
  containerPort: 8080
  # -- Enable logging of 2xx and 3xx HTTP requests
  verboseLogging: true

  deploymentStrategy:
    type: RollingUpdate

  nodeSelector: 
    node_pool: "node-test"
  affinity: {}

  # Gateway service configuration
  service:
    port: 80
    type: ClusterIP

######################################################################################################################
#
# Simple Scalable Deployment (SSD) Mode
#
# For small to medium size Loki deployments up to around 1 TB/day
#
######################################################################################################################

# Configuration for the write pod(s)
write:
  replicas: 1
  nodeSelector: 
    node_pool: "node-test"
  affinity: {}
# --  Configuration for the read pod(s)
read:
  replicas: 1
  nodeSelector: 
    node_pool: "node-test"
  affinity: {}
# --  Configuration for the backend pod(s)
backend:
  replicas: 1
  nodeSelector: 
    node_pool: "node-test"
  affinity: {}

######################################################################################################################
#
# Subchart configurations
#
######################################################################################################################
minio:
  enabled: true
  replicas: 1
  # Minio requires 2 to 16 drives for erasure code (drivesPerNode * replicas)
  # https://docs.min.io/docs/minio-erasure-code-quickstart-guide
  # Since we only have 1 replica, that means 2 drives must be used.
  drivesPerNode: 2
  rootUser: enterprise-logs
  rootPassword: supersecret
  buckets:
    - name: chunks
      policy: none
      purge: false
    - name: ruler
      policy: none
      purge: false
    - name: admin
      policy: none
      purge: false
  persistence:
    size: 5Gi
    annotations: {}
  resources: {}
    # requests:
    #   cpu: 100m
    #   memory: 128Mi
  nodeSelector: 
    node_pool: "node-test"

# Testing

test:
  enabled: false
lokiCanary:
  enabled: false

# Zero out replica counts of other deployment modes
singleBinary:
  replicas: 0

ingester:
  replicas: 0
querier:
  replicas: 0
queryFrontend:
  replicas: 0
queryScheduler:
  replicas: 0
distributor:
  replicas: 0
compactor:
  replicas: 0
indexGateway:
  replicas: 0
bloomCompactor:
  replicas: 0
bloomGateway:
  replicas: 0
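One thing worth noting about the ring config above: in SimpleScalable mode the write, read, and backend components run as separate pods, and an `inmemory` kvstore keeps the ring local to each process, so other components can end up seeing an empty ring. As far as I can tell the chart's default KV store for the ring is memberlist, so a sketch of that override would be (not a verified fix, just what I'd try first):

```yaml
loki:
  commonConfig:
    replication_factor: 1
    # Sketch only: memberlist lets all pods share one ring.
    # Alternatively, drop the `ring:` override entirely and
    # keep the chart's default.
    ring:
      kvstore:
        store: memberlist
```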

Have you found any solution? I am also using Loki with Azure Blob Storage, and my application logs are not being sent to Loki.
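For the Azure case, this is a rough sketch of what the storage block in this chart can look like (field names as I understand them from the chart's values file; the account name and key are placeholders, not real values):

```yaml
loki:
  storage:
    type: azure
    bucketNames:
      chunks: chunks
      ruler: ruler
      admin: admin
    azure:
      accountName: <storage-account-name>  # placeholder
      accountKey: <storage-account-key>    # placeholder
```

Note that `schemaConfig.configs[].object_store` would then also need to be `azure` rather than `s3`.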

No, I'm still facing the problem.