Grafana-agent trying to scrape some static endpoints

I took the default configuration from the documentation on installing grafana-agent in static mode, then changed it to read secrets from environment variables and run in debug mode. Those parts all seem to work: I can see agent and node_exporter metrics in the Grafana Cloud instance.

I’m trying to scrape a custom exporter running on localhost:8080. I added a job for it under metrics.configs, in the same config as the integrations scrape_configs. With the agent in debug mode I have been tailing the journal logs, and I never see it actually attempt to scrape the solana-exporter.
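The tailing amounts to something like this (the `grafana-agent` unit name is assumed from the Ubuntu package; add `-f` to follow live):

```shell
# Dump the agent's journal and filter for any mention of the scrape job;
# print a marker line instead of silently exiting non-zero when nothing matches.
journalctl -u grafana-agent --no-pager 2>/dev/null \
  | grep -i solana \
  || echo "no solana lines in the journal"
```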

I do see log lines telling me the agent picks the job up from the config, but, I think, it never turns it into a valid scrape:

Feb 23 03:12:41 [hostname here] grafana-agent[1860593]: ts=2024-02-23T03:12:41.27045376Z caller=manager.go:287 level=debug agent=prometheus instance=b2ab4d993b48b18b4d24abd693c65b13 component="discovery manager" msg="Starting provider" provider=static/0 subs=map[solana-exporter:{}]

I’m new to prometheus and grafana-agent config, but not new to unix. This would be trivial with the Datadog agent, but debugging grafana-agent has been frustratingly difficult. Any context or pointers in the right direction would be very much appreciated!

Agent Details

Installed from the ubuntu repo.

grafana-agent --version
agent, version v0.39.1 (branch: HEAD, revision: 7dbb39c7)
  build user:       root@e5705ffdf6fa
  build date:       2024-01-19T11:55:16Z
  go version:       go1.21.4
  platform:         linux/amd64
  tags:             netgo,builtinassets,promtail_journal_enabled

Configuration

Located at /etc/grafana-agent.yaml

# For a full configuration reference, see: https://grafana.com/docs/agent/latest/configuration/.
# Parts from: https://storage.googleapis.com/cloud-onboarding/agent/config/config.yaml
server:
  log_level: debug

integrations:
  agent:
    enabled: true
    relabel_configs:
      - action: replace
        source_labels:
          - agent_hostname
        target_label: instance
      - action: replace
        target_label: job
        replacement: "integrations/agent-check"
    metric_relabel_configs:
      - action: keep
        regex: (prometheus_target_.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
        source_labels:
          - __name__
  node_exporter:
    enabled: true
    include_exporter_metrics: true
    #disable_collectors:
    #  - "mdadm"
    # Not enabled by default per the docs
    enable_collectors:
      - "systemd"
    systemd_enable_restarts_metrics: true
    # Collects metrics from any files in this directory matching the glob *.prom.
    # The files must use the prometheus exposition text file format as documented:
    #    https://prometheus.io/docs/instrumenting/exposition_formats
    textfile_directory: /etc/grafana-agent/textfiles.d
  prometheus_remote_write:
  - basic_auth:
      password: ${GRAFANA_AGENT_PROMETHEUS_REMOTE_WRITE_PASSWORD}
      username: ${GRAFANA_AGENT_PROMETHEUS_REMOTE_WRITE_USERNAME}
    url: ${GRAFANA_AGENT_PROMETHEUS_REMOTE_WRITE_URL}

logs:
  configs:
  - clients:
    - basic_auth:
        password: ${GRAFANA_AGENT_LOGS_PASSWORD}
        username: ${GRAFANA_AGENT_LOGS_USERNAME}
      url: ${GRAFANA_AGENT_LOGS_URL}
    name: integrations
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
      # Add here any snippet that belongs to the `logs.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
metrics:
  configs:
  - name: integrations
    remote_write:
    - basic_auth:
        password: ${GRAFANA_AGENT_METRICS_REMOTE_WRITE_PASSWORD}
        username: ${GRAFANA_AGENT_METRICS_REMOTE_WRITE_USERNAME}
      url: ${GRAFANA_AGENT_METRICS_REMOTE_WRITE_URL}
    scrape_configs:
      - job_name: solana-exporter
        metrics_path: /metrics
        static_configs:
        - labels: {env: prod, network: mainnet}
          targets: ['localhost:8080']

      # Add here any snippet that belongs to the `metrics.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
  global:
    scrape_interval: 1m
  wal_directory: /var/lib/grafana-agent

Hello! Have you tried looking at the Agent’s localhost:12345/metrics endpoint? The metrics there will tell you how the various parts of the Agent are working.

I understand that the agent is working. I can’t get it to scrape metrics on localhost. Do you know how to get it to scrape metrics on localhost:8080? That is what I’m trying to get working.

When an Agent isn’t working, most of the time you’d need to check various metrics instead of logs. Do the metrics in localhost:12345/metrics suggest that there are issues with the scrape?
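As a concrete starting point, you could pull the Agent’s self-metrics and filter for the scrape-pool and discovery series (12345 is the Agent’s default HTTP port; the exact metric set on your build may vary):

```shell
# Query the agent's own metrics endpoint and keep only the series that
# describe target discovery and scrape pools; fall back to a marker line
# if the endpoint is unreachable or nothing matches.
curl -s http://localhost:12345/metrics \
  | grep -E 'prometheus_sd_discovered_targets|prometheus_target_scrape_pool' \
  || echo "agent metrics endpoint not reachable or no scrape-pool metrics"
```

A `prometheus_sd_discovered_targets` entry for `solana-exporter` with value 0, or no scrape-pool series for the job at all, would point at the config rather than the exporter.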

Flow mode has much better documentation, and the Agent dev team is actively working on it. I’d suggest using Flow if possible. The scrape metrics for the Flow component are documented here:

There are also scrape-related metrics which are remote-written directly (as opposed to being exposed on the Agent’s metrics endpoint). Those are metrics such as the “up” metric, documented here:

They are similar to the sorts of metrics Prometheus provides:

You can also find example dashboards and alerts here:

When an Agent isn’t working, most of the time you’d need to check various metrics instead of logs. Do the metrics in localhost:12345/metrics suggest that there are issues with the scrape?

There is no scrape. The debug logs show no scrape being attempted at all, so the configuration is clearly not correct. I’m trying to get help on what the configuration needs to be so that a scrape is at least attempted. Metrics won’t help until the agent actually tries to scrape, which the debug logs show it is not doing.