Credential error - local loki instance with GCS storage

I am trying to run a local instance of loki with GCS storage and I keep getting a credentials error.

Error Message:

```
level=info ts=2020-12-09T17:20:23.745736266Z caller=main.go:128 msg="Starting Loki" version="(version=2.0.0, branch=HEAD, revision=6978ee5d7)"
level=error ts=2020-12-09T17:20:23.793801502Z caller=log.go:149 msg="error running loki" err="google: could not find default credentials. See for more information.\nerror creating index client\n\t/src/loki/vendor/(*Loki).initStore\n\t/src/loki/pkg/loki/modules.go:287\n(*Manager).initModule\n\t/src/loki/vendor/(*Manager).InitModuleServices\n\t/src/loki/vendor/(*Loki).Run\n\t/src/loki/pkg/loki/loki.go:204\nmain.main\n\t/src/loki/cmd/loki/main.go:130\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373\nerror initialising module: store\n(*Manager).initModule\n\t/src/loki/vendor/(*Manager).InitModuleServices\n\t/src/loki/vendor/(*Loki).Run\n\t/src/loki/pkg/loki/loki.go:204\nmain.main\n\t/src/loki/cmd/loki/main.go:130\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373"
```

Config file:

```yaml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-12-09
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

storage_config:
  gcs:
    bucket_name: my-gcs-bucket
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/boltdb-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: gcs

compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: gcs

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /loki/rules
  rule_path: /loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
```

This seems to be an issue with the node pool in GKE. Check whether there is a default service account on the node by running `gcloud auth list`, and whether it is allowed GCS writes in the OAuth scope setup.
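A sketch of those checks; `my-cluster` and `my-pool` are placeholder names you would replace with your own:

```shell
# Show the accounts gcloud knows about; the active one is marked with '*'.
gcloud auth list

# For a GKE node pool, also confirm the node OAuth scopes allow GCS writes,
# e.g. devstorage.read_write or cloud-platform.
gcloud container node-pools describe my-pool \
  --cluster=my-cluster \
  --format="value(config.oauthScopes)"
```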


I think this is related to another question I had asked here: Boltdb-shipper - Can we push indexes/chunks from local instance of loki to GCS Bucket

I will try setting up the Google client and see whether it picks up credentials from my system environment variables. Since I am running a local instance of loki, I probably don’t have any GKE-related setup, right?

Anyways, I really appreciate your feedback @rajatvig :slight_smile:

@rajatvig so I looked at `gcloud auth list`. It’s pointing to the correct service account.

I’ll explain a bit more about my setup:

  1. I have created a GCS bucket on Google Cloud
  2. I am running the loki Docker image on my laptop
  3. I have set up the Google Cloud SDK on my laptop
  4. I am not running a Kubernetes instance on Google or on my laptop
  5. I have set up my service account; it’s using my default account
  6. I have set GOOGLE_APPLICATION_CREDENTIALS to point to the credentials JSON file
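Step 6 can be sanity-checked before starting loki. This is just a sketch; `check_gcp_creds` is a hypothetical helper, not part of loki or gcloud:

```shell
# Verify that GOOGLE_APPLICATION_CREDENTIALS is set and points at a readable
# key file, since the Google client will fail at startup otherwise.
check_gcp_creds() {
  local f="${GOOGLE_APPLICATION_CREDENTIALS:-}"
  if [ -z "$f" ]; then
    echo "GOOGLE_APPLICATION_CREDENTIALS is not set"
    return 1
  elif [ ! -r "$f" ]; then
    echo "credentials file not readable: $f"
    return 1
  fi
  echo "using credentials: $f"
}

# Usage (the key path is a placeholder):
#   export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/loki-sa.json"
#   check_gcp_creds
```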

With the above setup I am getting a slightly different error message. It’s now complaining about the table manager:

```
level=error ts=2020-12-16T01:21:30.419385759Z caller=log.go:149 msg="error running loki" err="google: could not find default credentials. See for more information.\nerror initialising module: table-manager"
```

Command to run loki:

```
docker run -d -v $(pwd):/mnt/config --name=loki --network=host -p 3200:3100 grafana/loki:2.0.0 -config.file=/mnt/config/loki-config.yaml
```


If I pass the environment variable to the docker run command using `--env`, then loki tries to find the credentials JSON file inside the container and obviously fails, because the file is on my local host, not inside the container.
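For illustration, an invocation along these lines fails; the key path `/home/me/keys/loki-sa.json` is a placeholder, and the point is that it only exists on the host:

```shell
# Broken: GOOGLE_APPLICATION_CREDENTIALS is set inside the container, but
# the key file it names was never mounted, so loki cannot open it.
docker run -d \
  --name=loki \
  --network=host \
  -v "$(pwd)":/mnt/config \
  -e GOOGLE_APPLICATION_CREDENTIALS=/home/me/keys/loki-sa.json \
  grafana/loki:2.0.0 \
  -config.file=/mnt/config/loki-config.yaml
```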

If you are running Loki in Docker on your local machine pointing to a GCS bucket, the credentials file needs to be made available to the container via a volume mount, or it will not load.


How do you do that? I can’t find any docs.

Usually the GCS creds are in a JSON file. You need to mount this file with a volume mount, similar to what you did with `/mnt/config` in your example, then set the environment variable to point to where the file is mounted inside the container.
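Putting that together with the earlier command, something like this should work; the host key path `/home/me/keys/loki-sa.json` and the container mount point `/etc/gcp/` are assumptions you would substitute with your own:

```shell
# Mount the service-account key into the container (read-only) and point
# GOOGLE_APPLICATION_CREDENTIALS at the *container-side* path.
docker run -d \
  --name=loki \
  --network=host \
  -v "$(pwd)":/mnt/config \
  -v /home/me/keys/loki-sa.json:/etc/gcp/loki-sa.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/etc/gcp/loki-sa.json \
  grafana/loki:2.0.0 \
  -config.file=/mnt/config/loki-config.yaml
```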