Unable to fetch labels from Loki (Failed to call resource)

We are working on a PLG (Promtail, Loki and Grafana) stack setup, with Loki deployed in distributed microservices mode. Promtail and Loki are working as expected, but when we try to add Loki as a data source in Grafana we get the error below (screenshot attached). Can you please help with this?

Unable to fetch labels from Loki (Failed to call resource)

Error logs:

logger=sqlstore t=2022-07-07T07:47:38.451264825Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-07-07T08:14:24.575931983Z level=info msg="Request Completed" method=GET path=/api/live/ws status=0  time_ms=1 duration=1.072052ms size=0 referer= traceID=00000000000000000000000000000000
logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-07-07T08:14:32.576690448Z level=error msg="Failed to call resource" error="<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found<
logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-07-07T07:40:14.056975972Z level=info msg="Request Completed" method=GET path=/api/live/ws status=0  time_ms=1 duration=1.176239ms size=0 referer= traceID=00000000000000000000000000000000
logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-07-07T07:40:26.484892541Z level=error msg="Failed to call resource" error="<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" traceID=00000000000000000000000000000000
logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-07-07T07:40:26.484957144Z level=error msg="Request Completed" method=GET path=/api/datasources/1/resources/labels status=500  time_ms=16 duration=16.982728ms size=83

Could you please format this nicely? I can't sort out what is what.

@yosiasz

Please find the latest logs below, in a proper format:

logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-07-18T12:13:58.745370165Z level=error msg="Failed to call resource" error="Get \"http://loki.ingress.eval.lp.shoot.live.k8s-hana.ondemand.com:3100/api/datasources/1/resources/labels/loki/api/v1/labels?start=1658145828524000000&end=1658146428524000000\": dial tcp 18.196.216.18:3100: i/o timeout" traceID=00000000000000000000000000000000
logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-07-18T12:13:58.745463247Z level=error msg="Request Completed" method=GET path=/api/datasources/1/resources/labels status=500 remote_addr=165.1.238.31 time_ms=10003 duration=10.003021789s size=83 referer=https://grafana.ingress.eval.lp.shoot.live.k8s-hana.ondemand.com/datasources/edit/P982945308D3682D1 traceID=00000000000000000000000000000000

Please consider these logs for your investigation, and let me know if you need any other details. We have tried a lot of things per the documentation but still have no luck; not sure what we are missing.

Use the HTTP URL http://loki:3100 instead of localhost.
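Not part of the original reply, just a hedged illustration: when Grafana runs in the same Kubernetes namespace as Loki, the data source URL should point at Loki's service DNS name rather than localhost. A quick sanity check from a throwaway pod (the service name loki and port 3100 are assumptions; adjust them to your release):

# Temporary pod that hits Loki's readiness and labels endpoints.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'curl -s http://loki:3100/ready; curl -s http://loki:3100/loki/api/v1/labels'

If both calls answer, http://loki:3100 is the URL to put in the Grafana data source.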


@akshaypathickal did you find the cause of this? It looks like the community is very interested in this topic: there are almost 2K views!

Did any of the suggestions in Troubleshooting | Grafana Loki documentation help?


Are you sure your Grafana and your Loki can communicate? Did you try a telnet to the host and port?
Do you have a firewall?
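For what it's worth, a sketch of how to check this from inside the Grafana pod on Kubernetes; the pod name, namespace and Loki address are placeholders, and many Grafana images ship BusyBox wget/nc rather than telnet:

# Probe Loki's readiness endpoint from the Grafana pod.
kubectl -n <namespace> exec -it <grafana-pod> -- wget -qO- -T 5 http://loki:3100/ready
# Or, if nc is available, just test the TCP connection:
kubectl -n <namespace> exec -it <grafana-pod> -- nc -zv loki 3100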


Was this issue resolved? I am experiencing it as well: I can curl Loki from my Grafana server fine, but testing the data source in the Grafana UI fails.

Could you explain why this works?

I am also facing this same issue.

Same error here, with Loki Helm chart version v2.7.0.

First, which service should I use?

root@node52:~# kubectl -n loki get svc
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
loki-canary           ClusterIP   10.233.54.61    <none>        3500/TCP            4m18s
loki-gateway          ClusterIP   10.233.23.155   <none>        80/TCP              4m18s
loki-grafana          NodePort    10.233.62.203   <none>        80:31973/TCP        4m18s
loki-memberlist       ClusterIP   None            <none>        7946/TCP            4m18s
loki-read             ClusterIP   10.233.28.192   <none>        3100/TCP,9095/TCP   4m18s
loki-read-headless    ClusterIP   None            <none>        3100/TCP,9095/TCP   4m18s
loki-write            ClusterIP   10.233.22.25    <none>        3100/TCP,9095/TCP   4m18s
loki-write-headless   ClusterIP   None            <none>        3100/TCP,9095/TCP   4m18s
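Not stated explicitly in this thread, but with the service layout above the loki-gateway service is usually the single entry point that routes reads and writes to the right components, so it is the natural target for the Grafana data source; treat this as an assumption to verify against your chart's documentation. A quick check, using the loki namespace from the output above:

# Query the labels endpoint through the gateway (port 80, so no :3100).
kubectl run curl-gw --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://loki-gateway.loki.svc.cluster.local/loki/api/v1/labels

If this returns your labels, set the Grafana data source URL to http://loki-gateway.loki.svc.cluster.local.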

Same deal with the distributed version. Folks, what's up?

I ran into this exact issue today. I am using Promtail running locally on my Windows computer together with Loki. Both Promtail and Loki were running with zero issues or errors, but Grafana still showed “failed to call resource”.

What worked for me (and has worked once before) is:

  1. Shut down Loki and Promtail.
  2. Delete the Loki storage folder (for me /tmp/*), which deletes all chunks, index files and so on.
  3. Restart Loki and Promtail and wait for a while; Promtail needs to scrape and send to Loki, which is not the fastest due to my setup.
  4. Reload Grafana. Remove the Loki data source and re-add it.
  5. Done.

It makes no sense to me but perhaps others will find this works for them too.
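For completeness, a rough sketch of those steps on a Linux-style setup; the /tmp/loki path is only a stand-in for wherever your storage_config actually points:

# 1. Stop Loki and Promtail first (services, binaries or containers).
# 2. Wipe Loki's local storage: chunks, index files, WAL.
rm -rf /tmp/loki/*    # assumption: replace with your own storage paths
# 3. Start Loki, then Promtail, and give Promtail time to scrape and push logs.
# 4. In Grafana, remove the Loki data source and add it back.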

Same issue here. I upgraded Grafana from 8.3.11 (where it worked fine) to 9.3.6 and started seeing this issue.

The only trace of this issue in the logs is the following, which does not say much:

logger=context userId=1 orgId=1 uname=admin t=2023-02-21T14:45:04.833177993Z level=error msg="Request Completed" method=GET path=/api/datasources/4/resources/labels status=500 remote_addr=89.79.107.126 time_ms=1 duration=1.809072ms size=51 referer=https://logs.theguru.co/datasources/edit/7Rn3jrJVk handler=/api/datasources/:id/resources/*

Loki needs the OrgId header set
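A hedged aside on that: when Loki runs with auth_enabled: true (multi-tenant mode), every query must carry a tenant ID in the X-Scope-OrgID header, otherwise requests are rejected. A quick command-line check, where the gateway URL and the tenant name fake are placeholders for your setup:

# Without the header a multi-tenant Loki typically answers "no org id";
# with it, the labels come back.
curl -s -H "X-Scope-OrgID: fake" http://loki-gateway/loki/api/v1/labels

In Grafana the same header can be added under the data source's Custom HTTP Headers section.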
