Get the details of a status=400 when someone issues a bad query


We have dashboards whose panels build queries against a Postgres datasource.

If executing such a query results in a bad request to Postgres, the user sees an error.

Is there any way for us to have this server error surfaced somewhere so we can see its details and know that it’s happening? In the logs I see that it dumps out a record with status=400 on it, but the details are not present.

    May 23 19:37:53 grafana.staging.internal grafana-stack_grafana_1/01680e235309[750]: logger=context userId=3 orgId=1 uname=levi t=2023-05-23T19:37:53.7643862Z level=info msg="Request Completed" method=POST path=/api/ds/query status=400 remote_addr= time_ms=38 duration=38.212141ms size=229 referer="%22%3A%5B%7B%22type%22%3A%22groupBy%22%2C%22property%22%3A%7B%22type%22%3A%22string%22%7D%7D%5D%2C%22limit%22%3A50%7D%2C%22rawQuery%22%3Atrue%7D%5D%2C%22range%22%3A%7B%22from%22%3A%22now-1h%22%2C%22to%22%3A%22now%22%7D%7D&orgId=1" handler=/api/ds/query


Noticed that there was support for OTEL instrumentation. Is this only on the receiving end for Grafana? Was curious if Grafana emits any trace information from someone running into an error in the UI for any given dashboard.

I would increase the log level first, so you may get more details in the logs.
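That’s the [log] section of grafana.ini (or the equivalent GF_LOG_LEVEL environment variable):

    [log]
    level = debug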

Of course you can enable tracing (e.g. OTEL tracing) - the error occurred on the backend side, so you should see it in the trace.
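A minimal sketch of the relevant grafana.ini section, assuming a recent Grafana version with OTLP tracing support and a collector listening on the default gRPC port 4317:

    [tracing.opentelemetry.otlp]
    address = localhost:4317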

There is also Assess dashboard usage | Grafana documentation, which shows an error count per dashboard.


Thanks for your reply. We did enable this so we could send to Honeycomb using OpenTelemetry | Honeycomb, but Grafana seemed to want to use gRPC, and we weren’t sure of the setting that would let us set the HTTP headers to authenticate.

We had the log level set to debug.


Why would this even happen in the first place - that a table is deleted? And how would what you plan to do help with this?

So I would chain another OTEL collector:

    [Grafana] --(grpc)-->
       [middle OTEL collector: grpc receiver / http exporter] --(http)-->
          [Honeycomb]

It was a simplified example.

A more realistic one: if a query is driven by a dataset, and that dataset contains a value you interpolate into your query that isn’t escaped or the like, the same thing can happen.
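For example (hypothetical table and variable names), a value containing a quote, interpolated unescaped, produces invalid SQL and hence the 400:

    -- panel query, with $name coming from another dataset
    SELECT * FROM users WHERE name = '$name';
    -- if $name resolves to: O'Brien
    -- the query becomes:  ... WHERE name = 'O'Brien';  -- syntax error, 400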

So having that context about what happened would be valuable.


The table may not be deleted; the search path may just be misconfigured.


@jangaraj thanks for the advice…I’ll see what information we might be able to obtain from the telemetry if I can get that chained properly. Thanks!

Is there a telemetry config file to set up that chain, or more documentation about the available options?

Thanks for all the info @jangaraj

Was able to get this working. It seems Grafana’s gRPC export doesn’t support setting authentication headers, so we just stood up a sidecar collector and leveraged the otlphttp exporter to get the data into Honeycomb, since we can set the headers there.

For completeness of the thread, here is our collector.yaml:

    receivers:
      otlp:
        protocols:
          grpc:

    exporters:
      otlphttp:
        endpoint: ""
        headers:
          "x-honeycomb-team": "${LIBHONEY_API_KEY}"
          "x-honeycomb-dataset": "${LIBHONEY_DATASET}"
      logging:

    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlphttp, logging]

Then in our docker-compose.yaml
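The original compose snippet didn’t survive the thread; a minimal sketch of the sidecar wiring, assuming the stock otel/opentelemetry-collector image, the default OTLP gRPC port 4317, and hypothetical service names (GF_TRACING_OPENTELEMETRY_OTLP_ADDRESS is the env-var form of the grafana.ini [tracing.opentelemetry.otlp] address setting):

    services:
      otel-collector:
        image: otel/opentelemetry-collector
        volumes:
          - ./collector.yaml:/etc/otelcol/config.yaml
        environment:
          - LIBHONEY_API_KEY
          - LIBHONEY_DATASET

      grafana:
        image: grafana/grafana
        environment:
          # point Grafana's OTLP gRPC export at the sidecar
          - GF_TRACING_OPENTELEMETRY_OTLP_ADDRESS=otel-collector:4317
        depends_on:
          - otel-collector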
