Avoiding 502 on Elasticsearch with self-signed certificates and grafana/grafana-oss:8.4.1

Hi team,

I'm writing about a very specific problem we hit while deploying Grafana to production, and I'd like to know whether you have tips on how to address it.

I currently work with a team that collects metric data from many remote sources and lets users define their own Grafana dashboards in GitLab, deploying them via a standard image (in this case: grafana/grafana-oss:8.4.1). The deployment is quite security-oriented: the only thing users can modify in the Docker image is the set of GF_* environment variables that configure the Grafana server (apart from pointing to the JSON files for the dashboards or the datasource definitions).
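For context, a minimal sketch of what such a locked-down deployment looks like. Grafana maps any `grafana.ini` setting to an environment variable of the form `GF_<Section>_<Key>`; the hostnames, paths, and values below are illustrative, not our real configuration:

```shell
# Hypothetical invocation: users may only supply GF_* variables and
# point the image at JSON/YAML definitions; everything else is fixed.
docker run -d \
  -e GF_SERVER_ROOT_URL=https://grafana.example.internal/ \
  -e GF_PATHS_PROVISIONING=/etc/grafana/provisioning \
  -v ./provisioning:/etc/grafana/provisioning:ro \
  grafana/grafana-oss:8.4.1
```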

Given this setup, we are having difficulty connecting to an Elasticsearch (ES) deployment on a server with a self-signed certificate. Namely:

  • We can connect from the container to the ES with wget --no-check-certificate. Mind that this is the BusyBox version of wget, and the user permissions are quite restricted (so we cannot add system-wide certificates).
  • Sadly, when we try to connect to the ES deployment from Grafana, with every option (Skip TLS Verify, or a certificate provided through the UI), all we get is a 502 timeout reply. We are baffled by this; we suspect something in the JS runtime, or in the libraries used, blocks the certificate or the skip-verify option from actually being applied. Please note that this happens both for datasources defined beforehand and for new ones.
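For reference, this is roughly the provisioned-datasource shape we are testing with (a sketch: the URL, index pattern, and ES version are placeholders for our real values):

```yaml
# /etc/grafana/provisioning/datasources/es.yaml
apiVersion: 1
datasources:
  - name: ES
    type: elasticsearch
    access: proxy
    url: https://es.example.internal:9200   # self-signed certificate
    database: "[metrics-]YYYY.MM.DD"
    jsonData:
      timeField: "@timestamp"
      esVersion: "7.10.0"
      tlsSkipVerify: true   # this is the option that appears to be ignored
```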

Unfortunately, we do not know what could be blocking this, so we thought we would write to ask for tips or ideas on how to address the problem.

Beyond what we have already tried, we found some online suggestions about changing Go environment variables, but we are not sure whether these could really be the issue, nor what the precise values should be.
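The Grafana backend is written in Go, and Go's standard HTTP client honors the usual proxy environment variables, so these are the ones typically suggested. The hostnames below are illustrative placeholders, not real values:

```shell
# Proxy used for outbound HTTP(S) requests from the Go backend
export HTTP_PROXY=http://proxy.example.internal:3128
export HTTPS_PROXY=http://proxy.example.internal:3128
# Comma-separated hosts that should bypass the proxy entirely
export NO_PROXY=localhost,127.0.0.1,es.example.internal
```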

Thanks in advance for any tip or suggestion :slight_smile:

Cheers :slight_smile:

Hey, so a quick update: in our case the solution was simply to bypass the HTTP proxy that was set up for the Docker image, by adding the datasource's host to the no-proxy list. The 502 was the proxy timing out, not a TLS problem at all.
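Concretely, the fix amounts to something like this (hostnames are placeholders for our real proxy and ES endpoints):

```shell
# Keep the mandated proxy for everything else, but exempt the ES host
# so Grafana's backend connects to it directly.
docker run -d \
  -e HTTP_PROXY=http://proxy.example.internal:3128 \
  -e HTTPS_PROXY=http://proxy.example.internal:3128 \
  -e NO_PROXY=es.example.internal \
  grafana/grafana-oss:8.4.1
```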