How to access logs from in-cluster Loki in OpenShift?

  • What Grafana version and what operating system are you using?
    Grafana: 9.5.2 or preferably 11.2.0
    OpenShift: 4.14/4.15
    OpenShift Loki-operator 5.9.5

  • What are you trying to achieve?
    Authenticate against the in-cluster LokiStack and retrieve logs into Grafana, while keeping log access separated per namespace admin for multitenancy.

  • How are you trying to achieve it?
    Following the docs or this forum post’s solution.

  • What happened?
    Got authentication working and was able to explore “Audit” and “Infrastructure” logs, but nothing from “Application” or the Prometheus metrics.

  • What did you expect to happen?
    To also have access to the “Application” logs and, preferably, the Prometheus metrics.

  • Can you copy/paste the configuration(s) that you are having problems with?
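    Roughly the following data source provisioning (a sketch, not the exact YAML; the gateway route hostname is an assumption, and only the application tenant is shown):

```yaml
apiVersion: 1
datasources:
  - name: Loki (Application)
    type: loki
    access: proxy
    # LokiStack gateway route; hostname is an assumption --
    # check `oc get route -n openshift-logging` for the real one
    url: https://logging-loki-openshift-logging.apps.example.com/api/logs/v1/application
    jsonData:
      # Forward the logged-in user's OAuth token so the gateway can
      # enforce per-namespace access (the multitenancy requirement above)
      oauthPassThru: true
```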

  • Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were.

In the Grafana v11.2.0 pod I found 7 lines like this:
logger=sqlstore.transactions t=2024-09-11T07:18:48.236990538Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"

Also found this set of logs multiple times:
logger=authn.service t=2024-09-11T07:19:02.849602701Z level=warn msg="Failed to authenticate request" client=auth.client.session error="user token not found"
logger=context userId=0 orgId=0 uname= t=2024-09-11T07:19:02.849654018Z level=warn msg=Unauthorized error="user token not found" remote_addr=redacted traceID=
logger=context userId=0 orgId=0 uname= t=2024-09-11T07:19:02.849687394Z level=info msg="Request Completed" method=GET path=/api/live/ws status=401 remote_addr= time_ms=0 duration=649.773µs size=40 referer= handler=/api/live/ws status_source=server

In the Grafana UI:
Forbidden (user=, verb=get, resource=prometheuses, subresource=api). Probably a misconfiguration somewhere down the line.
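For the Prometheus side, that Forbidden message is the cluster denying get on the prometheuses/api resource, i.e. whatever identity Grafana forwards has no RBAC on the monitoring query endpoint. A minimal sketch of one common fix, assuming Grafana authenticates as a grafana service account in a grafana namespace (both names are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: grafana-cluster-monitoring-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # cluster-monitoring-view ships with OpenShift and grants
  # get on prometheuses/api, which is what the error asks for
  name: cluster-monitoring-view
subjects:
  - kind: ServiceAccount
    name: grafana      # assumption: the identity whose token Grafana sends
    namespace: grafana # assumption: the namespace Grafana runs in
```

If user tokens are forwarded instead of a service account token, the subject would be the relevant user or group instead.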

In case anyone was wondering, the problem was with the ClusterLogForwarder object: since it existed, the pipeline with the outputRef "default" also needed an inputRef for "application".
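
For anyone hitting the same wall, the relevant part of the ClusterLogForwarder looks roughly like this (a sketch; the pipeline name is a placeholder):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: all-logs-to-lokistack   # placeholder name
      inputRefs:
        - application               # this was the missing entry
        - infrastructure
        - audit
      outputRefs:
        - default                   # the in-cluster LokiStack
```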