Is there any option to provide Loki S3 credentials using environment variables? I'm using HashiCorp Vault and thinking about how to securely pass the keys, but it seems no option is secure enough.
We don’t use EKS or any cloud, everything is on-prem.
Take a look at Use environment variables in the configuration in the Loki docs.
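In short (a minimal sketch, not a complete values file; the variable names and the `loki-secrets` secret are examples): you start Loki with `-config.expand-env=true` and reference environment variables with `${...}` in the storage config. In the Helm chart, `extraArgs` and `extraEnv` are set per component (e.g. `singleBinary`, `write`, `read`, `backend`):

```yaml
# Illustrative sketch only: enable env expansion via the component's
# extraArgs, then reference ${VARS} in the Loki config.
loki:
  loki:
    storage:
      s3:
        accessKeyId: "${LOKI_S3_ACCESS_KEY_ID}"
        secretAccessKey: "${LOKI_S3_SECRET_ACCESS_KEY}"
  singleBinary:
    extraArgs:
      - '-config.expand-env=true'
    extraEnv:
      - name: LOKI_S3_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: loki-secrets   # example secret name
            key: accessKeyId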
Are you running Kubernetes on-prem? If not, what configuration management system do you use?
EDIT
As you have tagged your post with Helm, I assume you use something Kubernetes-like…
This might help https://community.grafana.com/t/storing-s3-accesskeyid-and-secretaccesskey-securely
/EDIT
Yes, I'm running on-prem. What information is missing here?
I've followed the link you sent and tried to improve my YAML:
```yaml
loki:
  loki:
    enabled: true
    auth_enabled: false
    storage:
      bucketNames:
        chunks: loki
        ruler: loki
        admin: loki
      type: s3
      s3:
        accessKeyId: "${GRAFANA-LOKI-S3-ACCESKEYID}"
        secretAccessKey: "${GRAFANA-LOKI-S3-SECRETACCESSKEY}"
        region: eu-west-1
    commonConfig:
      replication_factor: 1
    limits_config:
      retention_period: 48h
      retention_stream:
        - selector: '{namespace="monitoring"}'
          priority: 1
          period: 24h
        - selector: '{namespace="loki"}'
          priority: 2
          period: 24h
  ingress:
    enabled: true
    ingressClassName: "nginx"
    paths:
      write:
        - /api/prom/push
        - /loki/api/v1/push
      read:
        - /api/prom/tail
        - /loki/api/v1/tail
        - /loki/api
        - /api/prom/rules
        - /loki/api/v1/rules
        - /prometheus/api/v1/rules
        - /prometheus/api/v1/alerts
      singleBinary:
        - /api/prom/push
        - /loki/api/v1/push
        - /api/prom/tail
        - /loki/api/v1/tail
        - /loki/api
        - /api/prom/rules
        - /loki/api/v1/rules
        - /prometheus/api/v1/rules
        - /prometheus/api/v1/alerts
    hosts:
      - loki.domain.local
  write:
    replicas: 1
    persistence:
      storageClass: "netapp-storage"
  read:
    replicas: 1
    persistence:
      storageClass: "netapp-storage"
  backend:
    persistence:
      storageClass: "netapp-storage"
    extraArgs:
      - '-config.expand-env=true'
    extraEnv:
      - name: GRAFANA-LOKI-S3-ACCESKEYID
        valueFrom:
          secretKeyRef:
            name: loki-secrets
            key: grafana-loki-s3-accessKeyId
      - name: GRAFANA-LOKI-S3-SECRETACCESSKEY
        valueFrom:
          secretKeyRef:
            name: loki-secrets
            key: grafana-loki-s3-secretAccessKey
  singleBinary:
    persistence:
      storageClass: "netapp-storage"
    replicas: 1
```
and here is an example of my secret:
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: loki-secrets
data:
  grafana-loki-s3-accessKeyId: =
  grafana-loki-s3-secretAccessKey: ==
```
(I censored my keys.)
But after the Helm deployment, the keys are added and then removed, with the secret showing:
Controlled By: GrafanaAgent loki
example: https://i.imgur.com/tikfFe2.png
Edit:
It seems that after helm upgrade, the secret is forced to update and is re-applied. But now the backend container restarts over and over with this log:
failed parsing config: missing closing brace
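One likely cause (an inference from the expansion syntax, not a confirmed diagnosis): `-config.expand-env=true` uses shell-style `${VAR}` expansion, where a hyphen inside the braces acts as the "default value" operator (`${name-default}`), so names like `${GRAFANA-LOKI-S3-ACCESKEYID}` do not parse as plain variable references. Renaming the variables to use only underscores usually avoids this, e.g.:

```yaml
# Sketch: underscore-only env var names are safe for ${...} expansion.
loki:
  storage:
    s3:
      accessKeyId: "${GRAFANA_LOKI_S3_ACCESSKEYID}"
      secretAccessKey: "${GRAFANA_LOKI_S3_SECRETACCESSKEY}"
backend:
  extraEnv:
    - name: GRAFANA_LOKI_S3_ACCESSKEYID
      valueFrom:
        secretKeyRef:
          name: loki-secrets
          key: grafana-loki-s3-accessKeyId
```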
Edit2:
When I try to create the secret first and then install Loki, I get this error:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: Secret "loki-secrets" in namespace "loki" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "loki"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "loki"
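Helm refuses to adopt resources it did not create. Two common ways out: delete the pre-created secret and let the chart render it, or add the ownership metadata that the error message lists so Helm can adopt the existing secret. A sketch of the latter, with the values taken directly from the error message:

```yaml
# Sketch: ownership metadata Helm checks before adopting an existing Secret.
apiVersion: v1
kind: Secret
metadata:
  name: loki-secrets
  namespace: loki
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: loki
```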
Assume I've installed a HashiCorp Vault server. Now I can map the keys inside the pod; what would be the best practice to load them?
I’ve tried to add this:
```yaml
singleBinary:
  persistence:
    storageClass: "netapp-storage"
  replicas: 1
  extraArgs:
    - ' &&'
    - 'source /vault/secrets/observability-loki-s3aws'
  podAnnotations:
    vault.hashicorp.com/agent-inject: 'true'
    vault.hashicorp.com/role: 'loki'
    vault.hashicorp.com/agent-inject-secret-observability-loki-s3aws: 'internal/data/observability/loki/s3aws'
    vault.hashicorp.com/agent-inject-template-observability-loki-s3aws: |
      {{ with secret "internal/data/observability/loki/s3aws" -}}
      export AWS_STORAGE_ACCESS_KEY="{{ .Data.data.access_key_id }}" &&
      export AWS_STORAGE_SECRET_KEY="{{ .Data.data.secret_access_key }}" &&
      echo "Done!!" ~~~
      {{- end }}
```
But Loki doesn't run this script when the pod starts.
@b0b, hope you can help here.
I can make some educated guesses.
We are not using the Vault Agent Injector, so I have no personal experience with it, but it looks like the secrets are put in a file under /vault/secrets/.
If the rendered secrets file is in a format the AWS SDK can use (more info), then you could configure the pod with `AWS_CONFIG_FILE=<path_to_secrets_file>`.
I think you would then leave out all the AWS config from the Loki config, so no grafana-loki-s3-accessKeyId or grafana-loki-s3-secretAccessKey or anything like that. The AWS SDK, which Loki uses for S3 connections, should take care of the S3 auth using the config file set by AWS_CONFIG_FILE.
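For reference, the AWS SDK's shared config/credentials files are plain INI files; a minimal example with placeholder values:

```ini
; Minimal AWS-style shared credentials file (placeholder values)
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEY
```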
Something like that. We no longer use AWS_STORAGE_ACCESS_KEY or AWS_STORAGE_SECRET_KEY in our Loki config. Instead we use service accounts. This looks like what we are doing.
Anyway, there are many ways to do this. You can definitely do it like you are attempting to do it. I just don’t know exactly how to configure it as I’m not using the same method.
So from what you're saying, currently you support only IAM via EKS (based on the profile you provided; the service account is the EKS way of managing permissions between services).
And about AWS_CONFIG_FILE: if a file in the AWS format is provided at a specific path, and AWS_CONFIG_FILE is set to that path, the AWS SDK will use it?
If so, it can be a great solution (though native secret support would be better).
That is what I would expect to happen. I have not tried it.
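One way to sanity-check the rendered file locally before wiring it into the pod; this is illustrative only and uses Python's `configparser` rather than the AWS SDK itself:

```python
import configparser
import os
import tempfile

# A file in the shared-credentials layout the Vault template would render
# (placeholder values, not real keys).
content = """[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecret
"""

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(content)
    path = f.name

# The AWS SDK would read this via AWS_SHARED_CREDENTIALS_FILE; here we only
# confirm the INI structure parses and the expected keys are present.
creds = configparser.ConfigParser()
creds.read(path)
print(creds["default"]["aws_access_key_id"])  # AKIAEXAMPLE
os.unlink(path)
```

If the rendered file under /vault/secrets/ parses like this, the SDK should be able to pick it up.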
So I made it work: the Loki & Promtail charts using credentials stored in HashiCorp Vault.
Here's my values.yaml in case it can help someone (singleBinary is commented out since we don't need it right now):
```yaml
loki:
  loki:
    enabled: true
    auth_enabled: false
    storage:
      bucketNames:
        chunks: loki
        ruler: loki
        admin: loki
      type: s3
      s3:
        region: eu-west-1
    commonConfig:
      replication_factor: 1
    limits_config:
      retention_period: 48h
      retention_stream:
        - selector: '{namespace="monitoring"}'
          priority: 1
          period: 24h
        - selector: '{namespace="loki"}'
          priority: 2
          period: 24h
  ingress:
    enabled: true
    ingressClassName: "nginx"
    paths:
      write:
        - /api/prom/push
        - /loki/api/v1/push
      read:
        - /api/prom/tail
        - /loki/api/v1/tail
        - /loki/api
        - /api/prom/rules
        - /loki/api/v1/rules
        - /prometheus/api/v1/rules
        - /prometheus/api/v1/alerts
      singleBinary:
        - /api/prom/push
        - /loki/api/v1/push
        - /api/prom/tail
        - /loki/api/v1/tail
        - /loki/api
        - /api/prom/rules
        - /loki/api/v1/rules
        - /prometheus/api/v1/rules
        - /prometheus/api/v1/alerts
    hosts:
      - loki.dc-infra.local
  write:
    replicas: 1
    persistence:
      storageClass: "netapp-storage"
    extraArgs:
      - '-config.expand-env=true'
    podAnnotations:
      vault.hashicorp.com/agent-inject: 'true'
      vault.hashicorp.com/role: 'loki'
      vault.hashicorp.com/agent-inject-secret-observability-loki-s3aws: 'internal/data/observability/loki/s3aws'
      vault.hashicorp.com/agent-inject-template-observability-loki-s3aws: |
        {{ with secret "internal/data/observability/loki/s3aws" -}}
        [default]
        aws_access_key_id={{ .Data.data.access_key_id }}
        aws_secret_access_key={{ .Data.data.secret_access_key }}
        {{- end }}
    extraEnv:
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: "/vault/secrets/observability-loki-s3aws"
  read:
    replicas: 1
    persistence:
      storageClass: "netapp-storage"
    extraArgs:
      - '-config.expand-env=true'
    podAnnotations:
      vault.hashicorp.com/agent-inject: 'true'
      vault.hashicorp.com/role: 'loki'
      vault.hashicorp.com/agent-inject-secret-observability-loki-s3aws: 'internal/data/observability/loki/s3aws'
      vault.hashicorp.com/agent-inject-template-observability-loki-s3aws: |
        {{ with secret "internal/data/observability/loki/s3aws" -}}
        [default]
        aws_access_key_id={{ .Data.data.access_key_id }}
        aws_secret_access_key={{ .Data.data.secret_access_key }}
        {{- end }}
    extraEnv:
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: "/vault/secrets/observability-loki-s3aws"
  backend:
    replicas: 1
    persistence:
      storageClass: "netapp-storage"
    extraArgs:
      - '-config.expand-env=true'
    podAnnotations:
      vault.hashicorp.com/agent-inject: 'true'
      vault.hashicorp.com/role: 'loki'
      vault.hashicorp.com/agent-inject-secret-observability-loki-s3aws: 'internal/data/observability/loki/s3aws'
      vault.hashicorp.com/agent-inject-template-observability-loki-s3aws: |
        {{ with secret "internal/data/observability/loki/s3aws" -}}
        [default]
        aws_access_key_id={{ .Data.data.access_key_id }}
        aws_secret_access_key={{ .Data.data.secret_access_key }}
        {{- end }}
    extraEnv:
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: "/vault/secrets/observability-loki-s3aws"
  # singleBinary:
  #   persistence:
  #     storageClass: "netapp-storage"
  #   replicas: 0
  #   extraArgs:
  #     - '-config.expand-env=true'
  #   podAnnotations:
  #     vault.hashicorp.com/agent-inject: 'true'
  #     vault.hashicorp.com/role: 'loki'
  #     vault.hashicorp.com/agent-inject-secret-observability-loki-s3aws: 'internal/data/observability/loki/s3aws'
  #     vault.hashicorp.com/agent-inject-template-observability-loki-s3aws: |
  #       {{ with secret "internal/data/observability/loki/s3aws" -}}
  #       [default]
  #       aws_access_key_id={{ .Data.data.access_key_id }}
  #       aws_secret_access_key={{ .Data.data.secret_access_key }}
  #       {{- end }}
  #   extraEnv:
  #     - name: AWS_SHARED_CREDENTIALS_FILE
  #       value: "/vault/secrets/observability-loki-s3aws"
promtail:
  enabled: false
  config:
    logLevel: info
    clients:
      - url: http://loki-gateway.loki.svc.cluster.local/loki/api/v1/push
```
For a better reference, here is the dependencies section from my Chart.yaml:
```yaml
dependencies:
  - name: loki
    condition: loki.enabled
    repository: https://grafana.github.io/helm-charts
    version: 5.10.0
  - name: promtail
    condition: promtail.enabled
    repository: https://grafana.github.io/helm-charts
    version: 6.14.1
```
Thanks for the resources.