Errors moving from Monolith to SSD

Hi Everyone,

I successfully implemented Loki in monolithic mode with six Linux VMs behind a reverse proxy, with an S3 object store and Enterprise Grafana connected (all on-prem).

I then scaled to 9 VMs and changed to target: read / target: write in the YAML files, but now I cannot connect to the previously configured tenants in our S3 bucket. I'm getting errors on the dashboards and at the data source. Here is the write-node config (the read nodes differ only in the target line):

auth_enabled: true
target: write

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /loki
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
  replication_factor: 1
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules

memberlist:
  join_members:
    - loki:7946

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: loki_index
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    cache_ttl: 168h
    shared_store: s3
  aws:
    s3: s3://id:key@hostname:port/bucketname
    s3forcepathstyle: true
    insecure: true

limits_config:
  retention_period: 24h
  per_tenant_override_config: /etc/overrides.yaml
  ingestion_rate_mb: 1024
  ingestion_burst_size_mb: 1024
  max_query_series: 100000

compactor:
  retention_enabled: true
  retention_delete_delay: 30m
  working_directory: /loki/compactor
  shared_store: s3
  compaction_interval: 5m

distributor:
  ring:
    kvstore:
      store: memberlist

ruler:
  alertmanager_url: http://localhost:9093
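
For context, the per-tenant settings live in the overrides file referenced above; it's just a map of tenant IDs to limit overrides, roughly like this (the tenant ID and values here are placeholders, not our real ones):

overrides:
  tenant-a:
    retention_period: 744h
    ingestion_rate_mb: 10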


The Loki data source was (and still is) configured to point at the reverse proxy, and it had been connecting to individual tenants based on tenant ID.
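
Tenant access goes through the usual X-Scope-OrgID header, so a quick check against the proxy looks roughly like this (hostname and tenant ID are placeholders):

curl -H "X-Scope-OrgID: tenant-a" "http://<reverse-proxy>/loki/api/v1/labels"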

Any thoughts based on the above? I know the config is pretty bare-bones; I'm working through what to include in it as I go.

Thanks!
Christine