Grafana Agent scraping interval and metrics

Hello,
I configured Grafana Agent on a Linux EC2 instance and used its metrics to build a dashboard.
I'd like to ask a couple of things about Grafana Agent:

  1. In my grafana-agent.yaml I set scrape_interval to 1m.
    However, in Grafana, if I use 1m as the rate interval (e.g. this query: avg (sum by (cpu) (rate(node_cpu_seconds_total{mode!="idle"}[1m]))) * 100) I get "No data".
    If I set the scrape interval to 30s instead, the query returns data.
    So, my question is: is this behavior normal? Do I have to set a 30s scrape interval in order to use a 1m range with the rate operator?
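For context, my understanding is that rate() needs at least two samples inside the range window, so with a 1m scrape interval a [1m] window usually holds only one sample. A sketch of the same query with a wider window (just an illustration, not something I have verified against this setup):

```promql
# Illustration only: with scrape_interval: 1m, a 2m window
# should contain the two samples rate() needs per series.
avg (sum by (cpu) (rate(node_cpu_seconds_total{mode!="idle"}[2m]))) * 100
```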

  2. Grafana Agent scrapes a lot of metrics, but I only need about 10 of them (CPU, RAM, disk, load): can I filter them?
    Should I specify the metrics I want in the configuration file?
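To make question 2 concrete, here is a sketch of what I had in mind: a metric_relabel_configs block on the node_exporter integration that keeps only the series I need and drops everything else. The metric names below are examples, not a confirmed final list:

```yaml
# Hypothetical sketch: keep only a handful of node_exporter series.
# The metric names in the regex are examples, not my actual list.
integrations:
  node_exporter:
    enabled: true
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: "node_cpu_seconds_total|node_memory_MemAvailable_bytes|node_memory_MemTotal_bytes|node_filesystem_avail_bytes|node_filesystem_size_bytes|node_load1|node_load5|node_load15"
        action: keep
```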

Right now, here is my grafana-agent.yaml (please ignore the logs section):

server:
  log_level: warn

metrics:
  # wal_directory: '/var/lib/grafana-agent'
  global:
    scrape_interval: 30s
    remote_write:
      - url: https://mimir.subcom.it:443/api/v1/push
        headers:
          X-Scope-OrgID: subcomgagent
  configs:

logs:
  configs:
    - name: default
      clients:
        - url: https://loki.mimir.subcom.it:443/loki/api/v1/push
          tenant_id: diego31
      positions:
        filename: /tmp/positions.yaml
      scrape_configs:
        - job_name: secure
          pipeline_stages:
            - regex:
                expression: '(?P<timestamp>\w{3} ( |1|2|3)\d{1} \d{2}:\d{2}:\d{2}) .*'
            - timestamp:
                source: timestamp
                format: "Jan 2 15:04:05" # May 7 03:35:01 ansible systemd: Started Session 598194 of user root
                location: "Europe/Rome"
          static_configs:
            - targets: [localhost]
              labels:
                job: secure
                path: /var/log/secure
                host: ansible

integrations:
  agent:
    enabled: true
  node_exporter:
    enabled: true
    include_exporter_metrics: true
    disable_collectors:
      - "mdadm"

Thank you in advance