Prometheus shows wrong values when alerts are triggered

I have created alerts using node-exporter, Prometheus, and Grafana.

Query used for CPU:

100 * (1 - avg(rate(node_cpu_seconds_total{mode="idle", instance="custom:port", job="nodeexporter"}[1m])))
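
For reference, the same calculation broken down per instance, instead of averaged across every core and every instance the selector matches, would look roughly like this (a sketch only; the 5m window and the label values are placeholders):

100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle", job="nodeexporter"}[5m])))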

When an alert triggers, it is sent correctly, but when I compare it with the corresponding GCP alert the values are different. For example, GCP shows 45% CPU while Grafana shows 80%.

Note: I have checked and verified that the GCP alerts report the correct values.

How do I solve this problem of the Grafana alerts showing wrong values, and how can I get the exact values?

Steps I have tried:

  1. Changed scrape_interval to 5s.
  2. Changed the CPU query.
  3. Checked that Prometheus is collecting the same data that it shows in Grafana (see the sketch after this list).
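
For step 3, the check I can run directly in the Prometheus expression browser is the raw selector plus a sample count over the rate window, to confirm both the data and how many points rate() actually sees (a minimal sketch, reusing the selector from the alert query):

node_cpu_seconds_total{mode="idle", job="nodeexporter"}

count_over_time(node_cpu_seconds_total{mode="idle", job="nodeexporter"}[1m])

With a 15s scrape interval the 1m window only contains about four samples per series, so changing scrape_interval also changes how smooth the rate() result is.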

prometheus.yml file:

global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:

rule_files:

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["customip:port"]
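
The config above only shows the "prometheus" job, while the CPU query filters on job="nodeexporter", so that job is presumably scraped by an additional entry that is not shown here. A minimal sketch of what that entry would look like (the target is a placeholder; 9100 is node-exporter's default port):

  # additional entry under scrape_configs
  - job_name: "nodeexporter"
    static_configs:
      # placeholder target; replace with the actual node-exporter host:port
      - targets: ["customip:9100"]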