Loki fails to upload chunks to MinIO bucket

Loki Fails to Upload Chunks to MinIO Due to Unsupported Characters in Object Name

Setup

I have the following setup running on my development machine:

  • Loki
  • Alloy
  • Grafana
  • MinIO

Loki Configuration

Below is the configuration I am using for Loki:

# This is a complete configuration to deploy Loki backed by an S3-compatible API
# like MinIO for storage.
# Index files will be written locally at /loki/index and, eventually, will be shipped to the storage via tsdb-shipper.

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: info
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 127.0.0.1
  path_prefix: C:/To_Delete/Loki/loki
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
  - from: 2020-05-15
    store: tsdb
    object_store: s3
    schema: v13
    index:
      prefix: index_
      period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: C:/To_Delete/Loki/loki/index
    cache_location: C:/To_Delete/Loki/loki/index_cache
    cache_ttl: 1m
  aws:
    s3: http://<username>:<password>@localhost.:1313/loki
    s3forcepathstyle: true

pattern_ingester:
  enabled: true
  metric_aggregation:
    loki_address: localhost:3100

ruler:
  alertmanager_url: http://localhost:9093

frontend:
  encoding: protobuf

Issue

With this configuration, Loki successfully uploads loki_cluster_seed.json to the loki bucket in MinIO. However, it fails to upload chunks.

Error in Loki Console

level=info ts=2025-04-14T18:39:38.1000413Z caller=flush.go:304 component=ingester msg="flushing stream" user=fake fp=4dd7babe8f8bd469 immediate=false num_chunks=3 total_comp="870 B" avg_comp="290 B" total_uncomp="1.8 kB" avg_uncomp="615 B" full=3 labels="{level=\"INFO\", service_name=\"unknown_service\"}"
level=error ts=2025-04-14T18:39:38.1000413Z caller=flush.go:261 component=ingester loop=9 org_id=fake msg="failed to flush" retries=4 err="failed to flush chunks: store put chunk: XMinioInvalidObjectName: Object name contains unsupported characters.\n\tstatus code: 400, request id: 1836435E55EA2654, host id: 90fa0046e429f497c6ff9f067da5dfc2de025f495215b3d6c189ead5fd156cbf, num_chunks: 3, labels: {level=\"INFO\", service_name=\"unknown_service\"}"

Error in MinIO Audit Logs

{
  "version": "1",
  "deploymentid": "52da1725-21cc-4b83-a2c5-07223cd84fd9",
  "time": "2025-04-14T18:13:58.6487694Z",
  "api": {
    "name": "PutObject",
    "bucket": "loki",
    "object": "fake/4dd7babe8f8bd469/1962e1e0ffb:1962e1e1016:65f2d648",
    "status": "Bad Request",
    "statusCode": 400
  },
  "requestID": "183641F7E777EB78",
  "userAgent": "aws-sdk-go/1.55.6 (go1.23.6; windows; amd64)",
  "requestPath": "/loki/fake/4dd7babe8f8bd469/1962e1e0ffb:1962e1e1016:65f2d648",
  "requestHost": "localhost.:1313",
  "accessKey": "minioadmin"
}

Suspected Cause

The error seems to be caused by the object name containing unsupported characters, specifically the colon (:).
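
To take Loki out of the picture, a minimal PutObject against the same endpoint with a colon in the key should reproduce the error. Below is a sketch in Go using aws-sdk-go v1 (the same SDK shown in the audit log's user agent); the endpoint, bucket, and minioadmin credentials are copied from the audit log above and may need adjusting for your environment:

package main

import (
	"bytes"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Endpoint, bucket, and credentials mirror the MinIO audit log above.
	sess := session.Must(session.NewSession(&aws.Config{
		Endpoint:         aws.String("http://localhost:1313"),
		Region:           aws.String("us-east-1"), // required by the SDK; MinIO ignores it
		Credentials:      credentials.NewStaticCredentials("minioadmin", "minioadmin", ""),
		S3ForcePathStyle: aws.Bool(true),
	}))

	// Same key shape Loki used for the failed chunk:
	// <tenant>/<fingerprint>/<from>:<through>:<checksum>
	_, err := s3.New(sess).PutObject(&s3.PutObjectInput{
		Bucket: aws.String("loki"),
		Key:    aws.String("fake/4dd7babe8f8bd469/1962e1e0ffb:1962e1e1016:65f2d648"),
		Body:   bytes.NewReader([]byte("test")),
	})
	fmt.Println(err) // expect XMinioInvalidObjectName if MinIO rejects the colons
}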

Question

  • Is there a way to configure Loki to avoid using colons in object names?
  • Alternatively, is there a workaround or fix for this issue?

Any help or guidance would be greatly appreciated!

Thanks in advance!

Try changing object_store: s3 to object_store: aws in your schema_config.

I still see the same issue after updating object_store to aws. Please let me know if any additional information is required.

Don’t see anything else obviously wrong. What error message do you see from loki logs?

Attaching the Loki logs:

level=info ts=2025-04-14T18:39:38.1000413Z caller=flush.go:304 component=ingester msg="flushing stream" user=fake fp=4dd7babe8f8bd469 immediate=false num_chunks=3 total_comp="870 B" avg_comp="290 B" total_uncomp="1.8 kB" avg_uncomp="615 B" full=3 labels="{level=\"INFO\", service_name=\"unknown_service\"}"
level=error ts=2025-04-14T18:39:38.1000413Z caller=flush.go:261 component=ingester loop=9 org_id=fake msg="failed to flush" retries=4 err="failed to flush chunks: store put chunk: XMinioInvalidObjectName: Object name contains unsupported characters.\n\tstatus code: 400, request id: 1836435E55EA2654, host id: 90fa0046e429f497c6ff9f067da5dfc2de025f495215b3d6c189ead5fd156cbf, num_chunks: 3, labels: {level=\"INFO\", service_name=\"unknown_service\"}"

Maybe try changing your S3 configuration a bit, like so:

aws:
  s3: http://<username>:<password>@localhost:1313
  bucketnames: loki
  s3forcepathstyle: true

If that still doesn’t work, check MinIO logs and see if it has any indication of where the unsupported characters come from.

Also, I just noticed that in your original configuration there is a dot right after localhost and before the colon. Not sure if that was a typo; if not, it could be your problem, too.

Thanks for your reply.

I have tried changing the S3 configuration, but no luck.

Also, I just noticed that in your original configuration there is a dot right after localhost and before the colon. Not sure if that was a typo; if not, it could be your problem, too.

To answer this: I followed the MinIO blog on configuring Loki and the Grafana docs. The dot in the S3 address for MinIO is used because there is no need to specify an AWS region.

I used actual S3 instead of MinIO and it worked properly; Loki was able to upload chunks to S3. The object name was fake/4dd7babe8f8bd469/1962e1e0ffb:1962e1e1016:65f2d648.

It looks like MinIO doesn't support such object names (ones containing : in the name). If you look closely at the blog I shared, the filenames there are base64-encoded. I observed the same thing when I used filesystem as the storage for Loki.
The only difference I can see from the blog is that I used tsdb instead of boltdb. Does Loki with tsdb as the store and s3 as the object_store not base64-encode the object name? Or is there some configuration to enable encoding? Or am I missing something?
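
For reference, the base64 alphabet never produces a colon, which would explain why the encoded names from the blog (and from filesystem storage) are safe. A quick illustration; whether Loki's filesystem store encodes exactly this way is an assumption on my part:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The last segment of the chunk key Loki tried to write to MinIO.
	key := "1962e1e0ffb:1962e1e1016:65f2d648"

	// The standard base64 alphabet (A-Z, a-z, 0-9, +, /, = padding)
	// contains no colon, so an encoded name sidesteps the
	// unsupported-character check.
	fmt.Println(base64.StdEncoding.EncodeToString([]byte(key)))
	// Output: MTk2MmUxZTBmZmI6MTk2MmUxZTEwMTY6NjVmMmQ2NDg=
}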

I can confirm the object name in S3 does contain colons. I don't use MinIO, so I can't confirm whether it's an issue or not; I'll have to test later.

A quick search seems to indicate there is an issue with Windows and object names containing colons, but nothing beyond that. S3 object names do allow colons, so MinIO should support them as well, since it claims to be S3-compatible.
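
If the Windows angle is the culprit, one cheap check is whether the key contains any character Windows forbids in file names, presumably because MinIO maps object names onto paths on its backend drive (that mapping is my assumption for why a Windows-hosted MinIO would reject keys that real S3 accepts):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Characters Windows does not allow in file names ('/' excluded,
	// since it is the object key separator). On a Windows-hosted MinIO,
	// a key containing any of these plausibly fails the object-name check.
	const windowsReserved = `<>:"\|?*`

	key := "fake/4dd7babe8f8bd469/1962e1e0ffb:1962e1e1016:65f2d648"
	fmt.Println(strings.ContainsAny(key, windowsReserved)) // true: the colons
}

If that holds, moving MinIO onto a Linux filesystem (for example a container or WSL2) should make the same keys work.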