I want to run Grafana Alloy as three separate containers: one acting as an HTTP receiver, one as a processor, and one as a writer

This setup runs Grafana Alloy as three separate containers, each dedicated to a specific stage of the telemetry pipeline:

  1. HTTP Receiver – This container ingests telemetry data over HTTP (e.g., OTLP or Prometheus remote write) from various sources.
  2. Processor – This container handles transformation, enrichment, or filtering of the received data. It decouples ingestion from processing for greater scalability and flexibility.
  3. Writer – This container exports the processed telemetry data to its final destination(s), such as Grafana Cloud, Loki, Tempo, or other observability backends.

This architecture promotes better separation of concerns, improved scalability, and easier maintenance by isolating responsibilities across multiple containers.

You can kinda do this by chaining loki.source.api and loki.write, but I wouldn't recommend it unless you have a good reason to. One issue you'll quickly run into: with any sort of centralized processing, either the processing layer has to carry all the knowledge about every source feeding it, or your data needs to be highly uniform.
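
That said, if you do want to try it, here's a minimal sketch of the chain, one Alloy config per container. The hostnames (`processor`, `writer`), the port, and the `static_labels` stage are placeholders, and this only covers logs, since `loki.source.api` speaks the Loki push API:

```alloy
// receiver.alloy: ingests logs over HTTP and forwards them downstream
loki.source.api "ingest" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 3100
  }
  forward_to = [loki.write.to_processor.receiver]
}

loki.write "to_processor" {
  endpoint {
    // the processor container's loki.source.api push endpoint (hostname is hypothetical)
    url = "http://processor:3100/loki/api/v1/push"
  }
}

// processor.alloy: receives from the receiver, transforms, forwards to the writer
loki.source.api "from_receiver" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 3100
  }
  forward_to = [loki.process.enrich.receiver]
}

loki.process "enrich" {
  // placeholder stage; a real pipeline would parse/filter/relabel here
  stage.static_labels {
    values = { pipeline = "central" }
  }
  forward_to = [loki.write.to_writer.receiver]
}

loki.write "to_writer" {
  endpoint {
    url = "http://writer:3100/loki/api/v1/push"
  }
}

// writer.alloy: receives processed logs and ships them to the backend
loki.source.api "from_processor" {
  http {
    listen_address = "0.0.0.0"
    listen_port    = 3100
  }
  forward_to = [loki.write.backend.receiver]
}

loki.write "backend" {
  endpoint {
    // replace with your actual Loki / Grafana Cloud endpoint
    url = "https://loki.example.com/loki/api/v1/push"
  }
}
```

Each `loki.write` in one container points at the `/loki/api/v1/push` endpoint exposed by the `loki.source.api` in the next. If you need metrics or traces as well, you'd likely be looking at the otelcol.* components instead, which support a similar receiver → processor → exporter chain over OTLP.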