Grafana Agent usage reporting: sending to a self-hosted Grafana URL

  • What Grafana version and what operating system are you using?
    Grafana OSS v10, hosted in Azure AKS
    Grafana Agent v0.34, installed on Red Hat 8 servers

  • What are you trying to achieve?
    Grafana Agent usage reporting currently sends to https://stats.grafana.org/agent-usage-report,
    and I would like it to send to the AKS-hosted Grafana instead.

  • How are you trying to achieve it?
    I could not find a configuration option that overrides the usage-report URL. The only related
    control I can see is the -disable-reporting command-line flag (sketched below), which disables
    reporting entirely rather than redirecting it.
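
    A minimal sketch, assuming Grafana Agent static mode with the default config location; note that
    this stops the usage reports altogether rather than pointing them at another endpoint:

    # -disable-reporting is a static-mode command-line flag; adjust -config.file to your install
    grafana-agent -config.file=/etc/grafana-agent.yaml -disable-reporting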

  • What happened?
    Usage reports keep going to https://stats.grafana.org/agent-usage-report and fail with client
    timeouts, because the server has restricted internet access (log excerpt below).

  • What did you expect to happen?
    Grafana Agent usage reporting sends successfully to https://stats.grafana.org/agent-usage-report.

  • Can you copy/paste the configuration(s) that you are having problems with?

  • Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were.

The server does not have full internet access, so the client timeouts are expected:
grafana-agent[1969898]: ts=2023-06-18T07:21:19.130326648Z caller=reporter.go:129 level=info msg="failed to report usage" err="5 errors: Post \"https://stats.grafana.org/agent-usage-report\": context deadline exceeded (Client.Timeout exceeded while awaiting headers); Post \"https://stats.grafana.org/agent-usage-report\": EOF; Post \"https://stats.grafana.org/agent-usage-report\": context deadline exceeded (Client.Timeout exceeded while awaiting headers); Post \"https://stats.grafana.org/agent-usage-report\": EOF; Post \"https://stats.grafana.org/agent-usage-report\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
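
For reference, the blocked egress can be confirmed from the RHEL host itself; a quick sketch, with a hypothetical 5-second cap standing in for the agent's client timeout:

# hypothetical 5s cap; the agent POSTs to this endpoint, so -X POST mirrors the failing call
curl -v --max-time 5 -X POST https://stats.grafana.org/agent-usage-report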

  • Did you follow any online instructions? If so, what is the URL?
    I could not find any related instructions.

Here is my current grafana-agent configuration:

integrations:
  node_exporter:
    enabled: true
    # disable unused collectors
    disable_collectors:
      - ipvs # high cardinality on kubelet
      - btrfs
      - infiniband
      - xfs
      - zfs
    # exclude dynamic interfaces
    netclass_ignored_devices: "^(veth.*|cali.*|[a-f0-9]{15})$"
    netdev_device_exclude: "^(veth.*|cali.*|[a-f0-9]{15})$"
    # disable tmpfs
    filesystem_fs_types_exclude: "^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|tmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$"
    # drop extensive scrape statistics
    metric_relabel_configs:
    - action: drop
      regex: node_scrape_collector_.+
      source_labels: [__name__]
    relabel_configs:
    - replacement: testvm01.com
      target_label: instance
  prometheus_remote_write:
  - basic_auth:
      password: admin
      username: admin
    url: http://prometheus.com:9090/api/v1/write
  agent:
    enabled: true
    relabel_configs:
    - action: replace
      source_labels:
      - agent_hostname
      target_label: instance
    - action: replace
      target_label: job
      replacement: "integrations/agent-check"
    metric_relabel_configs:
    - action: keep
      regex: (prometheus_target_.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
      source_labels:
      - __name__

logs:
  configs:
  - clients:
    - url: http://loki.com:3100/loki/api/v1/push
    name: integrations
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
    - job_name: integrations/node_exporter_journal_scrape
      journal:
        max_age: 24h
        labels:
          instance: testvm01.com
          job: integrations/node_exporter
      relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
      - source_labels: ['__journal__boot_id']
        target_label: 'boot_id'
      - source_labels: ['__journal__transport']
        target_label: 'transport'
      - source_labels: ['__journal_priority_keyword']
        target_label: 'level'
    - job_name: integrations/node_exporter_direct_scrape
      static_configs:
      - targets:
        - localhost
        labels:
          instance: testvm01.com
          __path__: /var/log/{syslog,messages}
          job: integrations/node_exporter

metrics:
  configs:
  - name: integrations
    remote_write:
    - basic_auth:
        password: 1234abc455
        username: admin
      url: http://prometheus.com:9090/api/v1/write
    scrape_configs:
  global:
    scrape_interval: 60s
  wal_directory: /tmp/grafana-agent-wal
