How to disable encryption of Grafana Loki logs fed to S3

This could be a very dumb question to start off with, so I apologise in advance, but skimming through the documentation I didn't find a way to control (in config) the encryption of logs being fed to S3 buckets. I have a setup where Grafana Loki logs are fed to S3 (collected by Fluent Bit from pods, since all of this is deployed in EKS). I have absolutely no problem viewing logs via the Grafana UI, and logs are properly stored in S3 as well, but when I download files from within the bucket they appear encrypted. Is there a config flag I missed, is there more I need to do to get rid of these encrypted logs, or is there really nothing that can be done in this situation? I hope I have shared enough information to present the situation/question, but in case it's not clear please feel free to ask. Thanks in advance for any help!

I tried to play around with some config items like sse_encryption: false, but it didn't seem to have any effect. I also tried toggling the insecure flag, but I believe that has more to do with TLS.
The downloaded file from S3 looks like the attached screenshot.

The encryption happens on S3 (server side); when you download an object, S3 decrypts it transparently, so it should come back unencrypted. The chunks are compressed though, so you might try decompressing them.

If you need raw logs, I'd recommend using API calls with a script and exporting them that way.
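As a rough sketch of that approach, the script below queries Loki's `query_range` HTTP API and flattens the returned streams into plain `(timestamp, line)` pairs. The `LOKI_URL` and the example LogQL query are assumptions you'd adjust to your own setup.

```python
import json
import urllib.parse
import urllib.request

# Assumption: adjust this to wherever your Loki gateway/service is reachable.
LOKI_URL = "http://localhost:3100"


def query_range(query, start_ns, end_ns, limit=1000):
    """Call Loki's /loki/api/v1/query_range endpoint and return parsed JSON.

    start_ns/end_ns are Unix timestamps in nanoseconds, as Loki expects.
    """
    params = urllib.parse.urlencode({
        "query": query,
        "start": start_ns,
        "end": end_ns,
        "limit": limit,
    })
    url = f"{LOKI_URL}/loki/api/v1/query_range?{params}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def extract_lines(resp):
    """Flatten a query_range response into a list of (timestamp, line) pairs."""
    lines = []
    for stream in resp.get("data", {}).get("result", []):
        for ts, line in stream["values"]:
            lines.append((ts, line))
    return lines


if __name__ == "__main__":
    # Hypothetical query: export everything from a given namespace label.
    resp = query_range('{namespace="default"}', 1700000000000000000, 1700003600000000000)
    for ts, line in extract_lines(resp):
        print(ts, line)
```

This gives you the decoded log lines directly, without having to understand Loki's on-disk chunk format at all.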

Thanks for your response @tonyswumac. The screenshot I attached in the post is what I see in a log I downloaded locally from S3, at which point it should already have been decrypted, but unfortunately it's not, which made me think that some encryption is probably happening on the Loki end.

loki:
  enabled: true
  isDefault: true
  url: http://{{(include "loki.serviceName" .)}}:{{ .Values.loki.service.port }}
  readinessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  datasource:
    jsonData: {}
    uid: ""
  env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
        secretKeyRef:
          name: iamlokis3
          key: AWS_ACCESS_KEY_ID
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
        secretKeyRef:
          name: iamlokis3
          key: AWS_SECRET_ACCESS_KEY
  config:
      schema_config:
        configs:
          - from: 2022-01-01
            store: boltdb-shipper
            object_store: s3
            schema: v11
            index:
              prefix: loki_index_
              period: 24h
      storage_config:
        aws:
          s3: s3://region/bucket-name
          s3forcepathstyle: true
          bucketnames: bucket-name
          region: region
          insecure: false
          sse_encryption: false
        boltdb_shipper:
          shared_store: s3
          cache_ttl: 24h

I am not aware of any built-in encryption happening in Loki, and you don't even have SSE enabled for S3, so you are probably looking at the binary form of compressed data. Try decompressing it; Loki uses gzip by default.
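One caveat worth checking before assuming encryption: a downloaded chunk object may simply not be a plain `.gz` file, since Loki's chunk format wraps the gzip-compressed blocks in its own headers. A minimal sketch to test this locally (the file path is an assumption):

```python
import gzip

# The two magic bytes that begin every gzip stream.
GZIP_MAGIC = b"\x1f\x8b"


def try_gunzip(path):
    """Return the decompressed bytes if the file is a plain gzip stream,
    otherwise None.

    Note: Loki chunk objects are generally NOT plain gzip files; only the
    log blocks inside them are gzip-compressed. If this returns None, the
    file is most likely Loki's binary chunk format, not encrypted data.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != GZIP_MAGIC:
        return None
    return gzip.decompress(data)


if __name__ == "__main__":
    # Hypothetical path to a file downloaded from the bucket.
    result = try_gunzip("downloaded-chunk")
    print("plain gzip" if result is not None else "not a plain gzip stream")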

Hi @tonyswumac, thanks for getting back on this thread yet again. The SSE flag that I use in the configuration doesn't seem to have any effect anyway. The bucket does encrypt objects under it. The index part of the Loki data is gzipped (the objects have a 'gz' type in S3), but the logs under the fake/ directory aren't compressed and, on download, should ideally already be decrypted, but they are not (as I shared in the previous screenshots). So I'm not sure what's causing this encryption.


Not entirely sure. If you have SSE enabled on S3, perhaps try enabling it in your Loki configuration as well.
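For reference, recent Loki versions configure server-side encryption through an `sse` block under `storage_config.aws` rather than the older `sse_encryption` boolean. A hedged sketch (the key ID is a placeholder, and whether you need `SSE-S3` or `SSE-KMS` depends on how the bucket is set up):

```yaml
storage_config:
  aws:
    s3: s3://region/bucket-name
    bucketnames: bucket-name
    region: region
    sse:
      type: SSE-KMS            # or SSE-S3, matching the bucket's encryption setting
      kms_key_id: <your-kms-key-id>   # placeholder; only needed for SSE-KMS
```

Note that this only controls how Loki asks S3 to encrypt objects at rest; downloads through the AWS SDK or console are decrypted transparently either way, so it would not by itself explain unreadable downloads.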

Also, I want to take a step back: why are you looking at the binary chunks in the first place if you can read the logs from Loki?