Grafana Loki: storing log data in an S3 bucket on AWS Fargate

Hello. I am running Loki in an AWS ECS Fargate cluster. I am trying to store log data in an S3 bucket, but I am not sure how to supply the config below through environment variables. This is the config for the S3 bucket integration:

storage_config:
  tsdb_shipper:
    active_index_directory: /data/tsdb-index
    cache_location: /data/tsdb-cache
    shared_store: aws
  aws:
    s3: s3://<access_key>:<secret_key>@<region>
    bucketnames: <bucket_name>
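If I understand Loki’s -config.expand-env=true flag correctly, the credentials part could reference environment variables instead of hard-coded values, roughly like this (the variable names are just placeholders I made up):

# Loki started with: -config.file=/etc/loki/loki.yaml -config.expand-env=true
storage_config:
  aws:
    s3: s3://${LOKI_S3_ACCESS_KEY}:${LOKI_S3_SECRET_KEY}@${LOKI_S3_REGION}
    bucketnames: ${LOKI_S3_BUCKET}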

I can only make changes in the ECS task definition, and I am not sure how to pass this configuration data as environment variables in the ECS task definition.

Please suggest.

I am using these links for reference:

Your question is more specific to ECS. AWS ECS, unlike Kubernetes, does not have the functionality of a config map (something that’s been requested for many years now; see [ECS] [Volumes]: Ability to create config "volume" and mount it into container as file · Issue #56 · aws/containers-roadmap · GitHub). Since you likely can’t rely on AWS adding it, you have a few options:

  1. Use EFS. Write the configuration to an EFS file share and mount it inside the containers as a read-only volume.

  2. Put your configuration file on S3 and deploy a sidecar container along with your Loki containers. The sidecar container’s sole responsibility would be to pull the configuration file from S3 and put it on the desired path on the file system (see the sketch after this list).

  3. Place the configuration on the ECS host (obviously doesn’t work if you use Fargate). This of course requires you to have access to the host.
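To make option 2 a bit more concrete, here is a rough sketch of what the task definition could look like, written as CloudFormation-style YAML (the container names, config bucket, and image tags are made up, and the same fields exist in the raw JSON task definition). The task role would also need s3:GetObject on the config bucket, and any environment variables referenced via ${...} in loki.yaml would go under Environment on the Loki container.

LokiTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: loki
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: "512"
    Memory: "1024"
    Volumes:
      - Name: loki-config          # ephemeral volume shared between the two containers
    ContainerDefinitions:
      # Sidecar: copies the config file from S3 into the shared volume, then exits.
      - Name: loki-config-sidecar
        Image: amazon/aws-cli:latest
        Essential: false
        Command: ["s3", "cp", "s3://my-config-bucket/loki.yaml", "/etc/loki/loki.yaml"]
        MountPoints:
          - SourceVolume: loki-config
            ContainerPath: /etc/loki
      # Loki only starts once the sidecar has finished successfully.
      - Name: loki
        Image: grafana/loki:2.9.4
        Essential: true
        Command: ["-config.file=/etc/loki/loki.yaml", "-config.expand-env=true"]
        DependsOn:
          - ContainerName: loki-config-sidecar
            Condition: SUCCESS
        MountPoints:
          - SourceVolume: loki-config
            ContainerPath: /etc/loki
            ReadOnly: true
        PortMappings:
          - ContainerPort: 3100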

None of the solutions above is perfect, and depending on your CI/CD they each have unique caveats. Personally we do #3, because we use Terraform, and it’s pretty easy to just use SSM + Ansible to place the configuration file on the host. See GitHub - tonyswu/terraform-ssm-ansible-example as an example.


Hi @tonyswumac … Thanks for your reply. So, another question. So far my setup is as follows:
I have several services running on an ECS Fargate cluster. In order to implement the Grafana stack, I decided to run Loki and Grafana on the same ECS cluster. The Fluentd log collector is already running on ECS Fargate and sending logs without any issue.

However, considering such complications, I can also set up Loki and Grafana on a separate EKS (Kubernetes) cluster within the same AZ and VPC.

What is the best infra model for such a case? What’s your experience, considering these facts?

Thanks in advance.

I’d recommend running your Loki and Grafana stack on a separate cluster, regardless of the container platform you decide to use.

Any reasons for that recommendation?

Technically, an Amazon ECS cluster is only a logical grouping of tasks or services, so Grafana and Loki may end up on the same host even when they are deployed to different ECS Fargate clusters.

If it’s FARGATE then yes, there is less reason to run a separate cluster. But the other side of the coin is also true: since the concept of an ECS cluster doesn’t really have any fundamental impact on how the containers behave, there also isn’t a good reason not to be a bit more liberal with the logical separation of clusters.

If it’s EC2 mode then it makes more sense to run a separate cluster (I was mostly thinking of EC2 mode because that’s what we do, and kinda forgot the OP is running FARGATE).
