Setting up datasources for grafana provisioning

  • What Grafana version and what operating system are you using?
    Linux, Grafana v9.2.0

  • What are you trying to achieve?
    Retrieving data from an InfluxDB database running in a Docker container, to display stats on a provisioned dashboard

  • How are you trying to achieve it?
    By setting up multiple containers with a docker-compose file that generate, collect, store, and display the data

  • What happened?
    The provisioned data source uses InfluxQL and does not let me retrieve data from my database, so my dashboard is completely empty.

  • What did you expect to happen?

I wanted to get the data from my InfluxDB database and display it in the provisioned dashboard, but some information seems to be missing from the .yml file: in particular an API token to communicate with InfluxDB, the name of the organisation, and the query language to use (here Flux).
I couldn't find any of this information in the Grafana provisioning tutorial.

  • Can you copy/paste the configuration(s) that you are having problems with?
# config file version
apiVersion: 1

# list of datasources to insert/update depending
# on what's available in the database
datasources:
  # <string, required> name of the datasource. Required
- name: InfluxDB
  # <string, required> datasource type. Required
  type: influxdb
  # <string, required> access mode. direct or proxy. Required
  access: proxy
  # <int> org id. will default to orgId 1 if not specified
  orgId: 1
  # <string> url
  url: http://influxdb:8086
  # <string> database password, if used
  password: admin1234
  # <string> database user, if used
  user: admin
  # <string> database name, if used
  database:
  # <bool> enable/disable basic auth
  basicAuth: true
#  withCredentials:
  # <bool> mark as default datasource. Max one per org
  isDefault: true
  # <map> fields that will be converted to json and stored in json_data
  jsonData:
    timeInterval: "5s"
#     graphiteVersion: "1.1"
#     tlsAuth: false
#     tlsAuthWithCACert: false
#  # <string> json object of data that will be encrypted.
#  secureJsonData:
#    tlsCACert: "..."
#    tlsClientCert: "..."
#    tlsClientKey: "..."
  version: 1
  # <bool> allow users to edit datasources from the UI.
  editable: true
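For comparison, here is a sketch of what a Flux-based provisioning file for InfluxDB 2.x could look like. The organization, default bucket, and query language go under `jsonData`, and the API token under `secureJsonData`. The org and bucket names and the token below are assumptions taken from the telegraf service in the compose file further down; adjust them to your actual setup:

```yaml
apiVersion: 1

datasources:
  - name: InfluxDB_Flux
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    jsonData:
      version: Flux               # query language: Flux instead of InfluxQL
      organization: earthWatch    # assumed from DOCKER_INFLUXDB_INIT_ORG
      defaultBucket: telegraf     # assumed from DOCKER_INFLUXDB_INIT_BUCKET
      timeInterval: "5s"
    secureJsonData:
      # token assumed from DOCKER_INFLUXDB_INIT_ADMIN_TOKEN in the compose file
      token: 60b42f6f12a91425b4fc02d1dd4e44eff9231f737171da79a993055c3aa367ab
    isDefault: true
    editable: true
```

Note that with a token, the `user`/`password`/`basicAuth` fields from the InfluxQL-style config are not needed.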

Docker-compose file

version: "3"

services:
  zookeeper:
    image: "bitnami/zookeeper:latest"
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: confluentinc/cp-kafka:6.1.1
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - '9092:9092'
    expose:
      - '29092'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_MIN_INSYNC_REPLICAS: '1'
      KAFKA_CLUSTER_ENV_VAR_NAME: 'KAFKA_CLUSTER'


  init-kafka:
    image: confluentinc/cp-kafka:6.1.1
    container_name: init-kafka
    depends_on:
      - kafka
      - zookeeper
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server kafka:29092 --list

      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists --topic general-events --replication-factor 1 --partitions 1

      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server kafka:29092 --list

      kafka-console-consumer --topic general-events --from-beginning --bootstrap-server kafka:29092
      "
    
  influxdb:
    image: influxdb
    container_name: influxdb
    hostname: influxdb
    volumes:
      - influxdb-storage:/var/lib/influxdb2:rw
    env_file:
      - .env
    entrypoint: ["./entrypoint.sh"]
    ports:
      - ${DOCKER_INFLUXDB_INIT_PORT}:8086

  telegraf:
    image: telegraf
    depends_on:
      - influxdb
      - kafka
    container_name: telegraf
    links:
      - influxdb
    restart: on-failure
    env_file:
      - .env
    environment: 
      - DOCKER_INFLUXDB_INIT_ORG=earthWatch
      - DOCKER_INFLUXDB_INIT_BUCKET=telegraf
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=60b42f6f12a91425b4fc02d1dd4e44eff9231f737171da79a993055c3aa367ab
    volumes:
      - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:rw
  
  grafana:
    image: grafana/grafana-oss:9.2.0
    depends_on:
      - influxdb
    env_file:
      - .env
    links:
      - influxdb
    container_name: grafana
    ports:
      - ${GRAFANA_PORT}:3000
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
      - ./grafana/dashboards/:/var/lib/grafana/dashboards/

  java:
    image: openjdk:15
    depends_on:
      - init-kafka
    container_name: data-source
    volumes:
      - ./earthWatch.jar:/usr/src/earthWatch.jar
    command: bash -c "java -jar /usr/src/earthWatch.jar"

volumes:
  grafana_data:
  influxdb-storage:
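The compose file reads several variables from a .env file that isn't shown. A hypothetical minimal version is sketched below; only the variable names referenced above (and the standard DOCKER_INFLUXDB_INIT_* variables honoured by the official influxdb image) are assumed, and all values are placeholders:

```shell
# .env — placeholder values, adapt to your setup
DOCKER_INFLUXDB_INIT_MODE=setup
DOCKER_INFLUXDB_INIT_USERNAME=admin
DOCKER_INFLUXDB_INIT_PASSWORD=admin1234
DOCKER_INFLUXDB_INIT_ORG=earthWatch
DOCKER_INFLUXDB_INIT_BUCKET=telegraf
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=60b42f6f12a91425b4fc02d1dd4e44eff9231f737171da79a993055c3aa367ab
DOCKER_INFLUXDB_INIT_PORT=8086
GRAFANA_PORT=3000
```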

  • Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were.
logger=sqlstore.transactions t=2022-11-10T13:38:38.276714299Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
logger=cleanup t=2022-11-10T13:48:38.273883042Z level=info msg="Completed cleanup jobs" duration=34.954637ms
logger=sqlstore.transactions t=2022-11-10T13:53:38.088183624Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0

I would use a more robust database (MySQL, PostgreSQL) for Grafana's internal storage instead of the default SQLite. They handle concurrent access much better.
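Grafana's database settings can be overridden with GF_DATABASE_* environment variables. A sketch of a compose fragment using PostgreSQL, assuming a hypothetical `grafana-db` service and placeholder credentials:

```yaml
  grafana-db:
    image: postgres:15
    environment:
      POSTGRES_DB: grafana
      POSTGRES_USER: grafana
      POSTGRES_PASSWORD: changeme      # placeholder credential
    volumes:
      - grafana_db_data:/var/lib/postgresql/data

  grafana:
    image: grafana/grafana-oss:9.2.0
    depends_on:
      - grafana-db
    environment:
      GF_DATABASE_TYPE: postgres
      GF_DATABASE_HOST: grafana-db:5432
      GF_DATABASE_NAME: grafana
      GF_DATABASE_USER: grafana
      GF_DATABASE_PASSWORD: changeme   # must match POSTGRES_PASSWORD above

volumes:
  grafana_db_data:
```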