How do I solve a "connection refused" error on a Prometheus scrape target?

I have successfully added my API metrics endpoint as a scrape target in my Grafana-Loki K8s deployment. When I check the state of the target in the Prometheus UI (via kubectl port-forward service/loki-prometheus-server 80), the target is reported as down with the error "connection refused", as below:

[Screenshot: Prometheus UI targets page showing the target as DOWN with "connection refused"]

I verified that the metrics endpoint is indeed up and that metrics are available by issuing the following command:

kubectl port-forward service/metrics-clusterip 80

Executing a call to http://localhost:80/metrics subsequently returns the metrics payload as expected.
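
For reference, the check looked roughly like this (the sample output is illustrative, not the actual payload):

# forward local port 80 to the service, then query the metrics endpoint from a second terminal
kubectl port-forward service/metrics-clusterip 80
curl http://localhost:80/metrics
# typical Prometheus text-format output, e.g.:
# # HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# # TYPE process_cpu_seconds_total counter
# process_cpu_seconds_total 1.23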

This is my ServiceMonitor configuration:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: reg
  namespace: loki
  labels:
    app: reg
    release: loki
spec:
  selector:
    matchLabels:
      app: reg
      release: loki
  endpoints:
    - port: reg
      path: /metrics
      interval: 15s
  namespaceSelector:
    matchNames:
      - "labs"

And my Deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reg
  namespace: labs
  labels:
    app: reg
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reg
      release: loki
  template:
    metadata:
      labels:
        app: reg
        release: loki
    spec:
      containers:
        - name: reg
          image: xxxxxx/sre-ops:dev-latest
          imagePullPolicy: Always
          ports:
            - name: reg
              containerPort: 80           
          resources:
            limits:
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 128Mi
      nodeSelector:
        kubernetes.io/hostname: xxxxxxxxxxxx     
      imagePullSecrets:
        - name: xxxx
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-clusterip
  namespace: labs
  labels:
    app: reg
    release: loki
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '80'
    prometheus.io/scrape: "true"
spec:
  type: ClusterIP
  selector:
    app: reg
    release: loki
  ports:
  - port: 80
    targetPort: reg
    protocol: TCP
    name:  reg
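
To confirm that the Service selector actually matches the running pod and that the endpoint object is populated (a sketch; the label selector is taken from the manifests above):

kubectl -n labs get pods -l app=reg,release=loki -o wide
kubectl -n labs get endpoints metrics-clusterip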

Part of the ConfigMap for the Grafana-Loki deployment:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    component: "server"
    app: prometheus
    release: loki
    chart: prometheus-15.5.4
    heritage: Helm
  name: loki-prometheus-server
  namespace: loki
data:
  alerting_rules.yml: |
    {}
  alerts: |
    {}
  prometheus.yml: |
    global:
      evaluation_interval: 1m
      scrape_interval: 1m
      scrape_timeout: 10s
    rule_files:
    - /etc/config/recording_rules.yml
    - /etc/config/alerting_rules.yml
    - /etc/config/rules
    - /etc/config/alerts
    scrape_configs:
    - job_name: kubernetes-service-endpoints
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scrape
      - action: drop
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scrape_slow
      - action: replace
        regex: (https?)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scheme
        target_label: __scheme__
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_service_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_service_annotation_prometheus_io_param_(.+)
        replacement: __param_$1
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_service_name
        target_label: service
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: node
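
If the indentation of the rendered prometheus.yml is ever in doubt, it can be validated from inside the Prometheus pod with promtool (a sketch; the deployment and container names are assumptions based on the chart defaults):

kubectl -n loki exec deploy/loki-prometheus-server -c prometheus-server -- \
  promtool check config /etc/config/prometheus.yml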

For context, Prometheus is scraping metrics from a .NET Core 5 API, and the API exposes metrics on the same port as the API itself (port 80). The configuration on the client side is simple (and working as expected):

public class Startup
{
    
    public void ConfigureServices(IServiceCollection services)
    {
        .....
         
        services.AddSingleton<MetricReporter>();
        
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        
        app.UseRouting();

        // global cors policy
        app.UseCors(x => x
            .AllowAnyOrigin()
            .AllowAnyMethod()
            .AllowAnyHeader());

        app.UseAuthentication();
        app.UseAuthorization();
        //place before app.UseEndpoints() to avoid losing some metrics
        app.UseMetricServer();
        app.UseMiddleware<ResponseMetricMiddleware>();
        app.UseEndpoints(endpoints => endpoints.MapControllers());

    }
}

Versions

Prometheus: 2.34.0
Helm chart: prometheus-15.5.4 (release: loki)

What am I missing?

Do you have any network policies? Prometheus pods have to be allowed to scrape pods in namespace labs.
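
A quick way to check from the target namespace (sketch):

kubectl -n labs get networkpolicy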

Hi @b0b, I have another instance in the same namespace and it is being scraped OK. The only difference is that that particular pod uses a Prometheus exporter sidecar container.

Can you curl http://localhost:80/metrics if you connect to the pod with

$ kubectl port-forward service/metrics-clusterip 80:reg

Yes.

I ran kubectl port-forward service/metrics-clusterip 80:reg and I am also able to get the metrics when I curl http://localhost:80/metrics.

I’m out of ideas… Since the output is redacted I can’t be sure: is the IP of the target endpoint the IP of the right pod? The labels suggest it is, but I can’t tell for certain.

I actually have an issue with the IPs.
UPDATE
After further analysis I have found that I can wget successfully from the Prometheus pod to the MongoDB instance (which has a prometheus-exporter sidecar):

 wget mongodb-metrics.labs.svc:9216
Connecting to mongodb-metrics.labs.svc:9216 (10.XXX.XX.X:9216)
wget: can't open 'index.html': File exists

I ran wget again for my API and noticed something confusing:

/prometheus $ wget metrics-clusterip.labs.svc:9216
Connecting to metrics-clusterip.labs.svc:9216 (10.XXX.XX.XXX:9216)
wget: can't connect to remote host (10.XXX.XX.XX): Connection refused

The pod IP (10.XXX.XX.XXX:9216) that appears when I wget from the Prometheus pod is different from the value I get when I run the command below:

kubectl get ep -o wide
NAME                       ENDPOINTS                               AGE
metrics-clusterip          10.XXX.XX.XX:9216                       15h
mongodb-metrics            10.XXX.XXX.XXX:9216                     85d
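
One way to compare the endpoint address with the actual pod IP (a sketch; the label selector is assumed from the Deployment above):

kubectl -n labs get pods -l app=reg,release=loki -o jsonpath='{.items[*].status.podIP}'
kubectl -n labs get endpoints metrics-clusterip -o jsonpath='{.subsets[*].addresses[*].ip}'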

https://community.grafana.com/t/dial-tcp-127-0-0-1-connect-connection-refused-changing-state-to-alerting/51515/15?u=denmely
