"Connection reset by peer" with Grafana-Loki-Alloy setup

I got Grafana up and running in Docker Compose without any problems, but I'm running out of ideas when it comes to adding Loki to it. My end goal is to at least collect OTEL logs, but I keep getting "connection reset by peer" with every config I've tried. I've tried the OTEL Collector and Promtail, and I'm now on Alloy.

Here’s the compose file:

services:
  grafana:
    image: grafana/grafana-enterprise
    container_name: grafana
    restart: unless-stopped
    environment:
     - GF_SERVER_ROOT_URL=https://grafana.retracted.com/
    ports:
     - '3005:3000'
    volumes:
      - grafana-storage:/var/lib/grafana
    networks:
      - grafana
      
  alloy:
    image: grafana/alloy:latest
    container_name: grafana-alloy
    volumes:
      - ./config.alloy:/etc/alloy/config.alloy
    pull_policy: always
    environment:
      LOKI_HOST: http://192.168.1.222:3006
    command:
      - run
      - /etc/alloy/config.alloy
      - --storage.path=/var/lib/alloy/data
      - --server.http.listen-addr=0.0.0.0:12345
    ports:
      - "12345:12345"
    depends_on:
      - loki
    networks:
      - grafana

  loki:
    image: grafana/loki:latest
    container_name: grafana-loki
    ports:
      - "3006:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - grafana

networks:
    grafana:

volumes:
  grafana-storage: {}

Here’s config.alloy:

// This file serves as an example Alloy configuration to interact with the
// Docker Compose environment.
//
// This configuration works whether you are running Alloy locally or within the
// Docker Compose environment when the `alloy` profile is enabled.

logging {
	level = "debug"

	// Forward internal logs to the local Loki instance.
	write_to = [loki.relabel.alloy_logs.receiver]
}

loki.relabel "alloy_logs" {
	rule {
		target_label = "instance"
		replacement = constants.hostname
	}

	rule {
		target_label = "job"
		replacement = "integrations/self"
	}

	forward_to = [loki.write.loki.receiver]
}

tracing {

	// Forward internal spans to the local Loki instance.
	write_to = [otelcol.exporter.otlphttp.loki.input]
}

loki.write "loki" {
	endpoint {
		url = string.format(
			"%s/loki/api/v1/push",
			coalesce(sys.env("LOKI_HOST"), "http://192.168.1.222:3006"),
		)
	}
}

otelcol.exporter.otlphttp "loki" {
	client {
		endpoint = coalesce(sys.env("LOKI_HOST"), "http://192.168.1.222:3006/otlp")

		tls {
			insecure = true
		}
	}
}

And here’s loki-config.yaml:

auth_enabled: false

server:
  http_listen_port: 3006
  grpc_listen_port: 9096
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 127.0.0.1
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  allow_structured_metadata: true

ruler:
  enable_alertmanager_discovery: true
  enable_api: true

analytics:
  reporting_enabled: false

pattern_ingester:
  enabled: true

Here are a few lines from the Alloy logs:

ts=2025-04-04T22:54:43.088717564Z level=warn msg="error sending batch, will retry" component_path=/ component_id=loki.write.loki component=client host=192.168.1.111:3006 status=-1 tenant="" error="Post \"http://192.168.1.111:3006/loki/api/v1/push\": read tcp 192.168.96.4:42398->192.168.1.111:3006: read: connection reset by peer"
ts=2025-04-04T22:54:41.997823688Z level=info msg="now listening for http traffic" service=http addr=0.0.0.0:12345
ts=2025-04-04T22:54:43.876040787Z level=warn msg="error sending batch, will retry" component_path=/ component_id=loki.write.loki component=client host=192.168.1.111:3006 status=-1 tenant="" error="Post \"http://192.168.1.111:3006/loki/api/v1/push\": read tcp 192.168.96.4:42414->192.168.1.111:3006: read: connection reset by peer"
2025-04-04 22:54:46.954913832 +0000 UTC m=+5.297341701 write error: lokiWriter failed to forward entry, channel was blocked
ts=2025-04-04T22:54:46.95492389Z level=debug msg="Preparing to make HTTP request" component_path=/ component_id=otelcol.exporter.otlphttp.loki url=http://192.168.1.111:3006/v1/traces
ts=2025-04-04T22:54:46.95618606Z level=info msg="Exporting failed. Will retry the request after interval." component_path=/ component_id=otelcol.exporter.otlphttp.loki error="failed to make an HTTP request: Post \"http://192.168.1.111:3006/v1/traces\": read tcp 192.168.96.4:42428->192.168.1.111:3006: read: connection reset by peer" interval=5.702270822s

Sorry for the long post!

The error is quite clear: your Alloy container can't reach your Loki container.

  1. Make sure Loki is actually working. Use its HTTP API to push a log line and query it back, directly against the Loki container, to verify basic functionality (see the sketch below).
  2. Make sure the Alloy container has connectivity to the Loki container on the right port.
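
For the first point, something like this works as a rough sketch (not from the original post; with the compose file above Loki is published on host port 3006, so adjust the address to your setup):

# Push a test log line to Loki (the timestamp must be in nanoseconds)
curl -v -H "Content-Type: application/json" -X POST \
  http://192.168.1.222:3006/loki/api/v1/push \
  --data '{"streams":[{"stream":{"job":"curl-test"},"values":[["'"$(date +%s%N)"'","hello from curl"]]}]}'

# Query the same line back
curl -G http://192.168.1.222:3006/loki/api/v1/query_range \
  --data-urlencode 'query={job="curl-test"}'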

Thanks for the quick reply! I tried sending HTTP GET requests to Loki's endpoints, but the connection times out. How can this be, when I can reach Grafana and Alloy just fine even though they are all in the same Compose file and configured similarly?

Keep in mind that each container has its own network namespace. That means a request from the host may fail while the same request from inside a container works perfectly. So when you want to verify Alloy, make that verification request from inside the Alloy container, for example as shown below.
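
A sketch of that check, assuming the container and service names from the compose file above, and that curl is available in the Alloy image:

# Check Loki from inside the Alloy container, using the compose service name.
# On the compose network the published host port (3006) is irrelevant; only
# the port Loki listens on inside its own container matters.
docker exec -it grafana-alloy curl -v http://loki:3100/ready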

Alright, here are the tests I've done so far ("personal PC" meaning a computer outside the Docker host, but still on the internal network):

  • Personal PC → Grafana: OK
  • Personal PC → Alloy: OK
  • Personal PC → Loki: Connection timeout
  • Alloy container → Loki (cURL requests): Connection reset by peer
  • Alloy container → Grafana container (cURL /api/health): OK

I can't test requests FROM the Loki container, as it's a distroless image.
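
(One workaround, as a sketch rather than something I've run: start a throwaway curl container that shares the Loki container's network namespace, so the request effectively comes from inside it.)

# curlimages/curl uses curl as its entrypoint; --network container:<name>
# joins the existing container's network namespace.
docker run --rm --network container:grafana-loki curlimages/curl -v http://localhost:3100/ready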

I'm still unable to figure out why Loki is unreachable. The container config is taken from Grafana's own examples, and I don't see anything wrong with it. Any ideas?

Alrighty… two days of debugging for this. The problem was http_listen_port being 3006 instead of 3100. I had set it to 3006 intuitively because, as you know, that's the port I published, and I didn't think about it afterwards. The compose mapping "3006:3100" publishes host port 3006 to container port 3100, so Loki has to keep listening on 3100 inside the container.
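
For reference, the relevant pieces after the fix look roughly like this (Loki listens on its default 3100 inside the container; the compose file keeps publishing it on host port 3006):

# loki-config.yaml — in-container listen port back to the default
server:
  http_listen_port: 3100
  grpc_listen_port: 9096

# docker-compose.yaml — the published mapping stays as-is: host 3006 -> container 3100
ports:
  - "3006:3100"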

Now, however, Alloy returns these errors (it can reach Loki now, so reachability is solved):

ts=2025-04-05T19:43:48.699072647Z level=debug msg="Preparing to make HTTP request" component_path=/ component_id=otelcol.exporter.otlphttp.loki url=http://192.168.1.111:3100/v1/traces
ts=2025-04-05T19:43:48.707265312Z level=error msg="Exporting failed. Dropping data." component_path=/ component_id=otelcol.exporter.otlphttp.loki error="not retryable error: Permanent error: rpc error: code = Unimplemented desc = error exporting items, request to http://192.168.1.111:3100/v1/traces responded with HTTP Status Code 404" dropped_items=13

You are trying to ingest traces into Loki, but Loki is a log store, not a trace store. Get familiar with the signal types: metrics, logs, and traces.
Each signal type needs its own backend; Grafana (the company, not the tool) offers Tempo for traces.
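
If the goal is to keep exporting Alloy's internal traces, a minimal sketch (assuming a Tempo service named `tempo` is added to the same compose network with its OTLP HTTP receiver listening on 4318 — none of that exists in the compose file above) would point the exporter at Tempo instead of Loki:

tracing {
	// Forward internal spans to Tempo rather than Loki.
	write_to = [otelcol.exporter.otlphttp.tempo.input]
}

otelcol.exporter.otlphttp "tempo" {
	client {
		// Hypothetical endpoint: requires a `tempo` container with the OTLP
		// HTTP receiver enabled, which is not part of this compose file.
		endpoint = "http://tempo:4318"

		tls {
			insecure = true
		}
	}
}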