Kubernetes: monitoring a single namespace

Hi,

I would like to monitor applications running in a single Kubernetes namespace, which includes:

  • Prometheus metrics endpoints
  • pod logs (written to the container’s stdout)
  • OpenTelemetry traces

Looking at the Helm chart (k8s-monitoring 0.2.2 · grafana/grafana) I don’t see any values I can set to confine the metrics scraping/log gathering to a single namespace. I guess gathering traces is a push mechanism, so that is not an issue.

What options do I have for confining log gathering/metrics scraping to a single Kubernetes namespace?

Thanks!

Hi rombert!

I am one of the authors of the k8s-monitoring Helm chart. I just created an example of filtering to a specific list of namespaces. You can see that example here: https://github.com/grafana/k8s-monitoring-helm/tree/main/examples/specific-namespace

Also, pick up version 0.2.3, which fixes some namespace-handling issues I found while writing up that example.

Let me know if that works out, or if there’s anything else you might need!

-Pete Wall

Thanks a lot @petewall, this looks good! I tried deploying the k8s-monitoring Helm chart to my target cluster, but there is a CRD conflict with the kube-prometheus stack that I was not able to solve immediately. I am using FluxCD, which may complicate things a bit.

I will keep trying and let you know.

Thanks!

Ah, it seems values.prometheus-operator-crds.enabled is what I wanted to check.


Yep! Setting that to false tells the chart to skip deploying the CRDs.

One more question, @petewall. Since I am not using the CRDs, I tried to manually configure the Grafana Agent to scrape metrics from pods.

I added the following to my Helm values:

prometheus.scrape "pospai_pods" {
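  // Scrapes every pod discovered by discovery.kubernetes.pods, cluster-wide (no namespace filtering).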
  targets = discovery.kubernetes.pods.targets
  forward_to = [prometheus.relabel.podmonitors.receiver]
}

This seems to work, but I see in the logs that it tries to scrape metrics from pods outside the configured namespace. I guess this is expected, since the extraMetricRelabelingRules parameter is not taken into account here.

Even when changing the scrape target to

prometheus.scrape "pospai_pods" {
  targets = discovery.kubernetes.pods.targets

  rule {
    source_labels = ["namespace"]
    regex = "^$|app-pospai"
    action = "keep"
  }

  forward_to = [prometheus.relabel.podmonitors.receiver]
}

I still see pods from other namespaces getting scraped. Can you suggest a way to exclude pods from other namespaces from being scraped?

Thanks again!

Gotcha!

So, instead of putting a rule in the prometheus.scrape component, you’ll need to add a discovery.relabel component with that rule. Also, the namespace label won’t be set at that stage, so you’ll use the __meta_kubernetes_namespace label instead. You can see all of the meta labels that are set by discovery.kubernetes.pods here: discovery.kubernetes | Grafana Agent documentation.
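
Roughly, it could look something like this (just a sketch reusing the component names and the app-pospai namespace from your snippet above, so adjust to match your setup):

discovery.relabel "pospai_pods" {
  targets = discovery.kubernetes.pods.targets

  // Keep only targets whose pod lives in the app-pospai namespace.
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    regex = "app-pospai"
    action = "keep"
  }
}

prometheus.scrape "pospai_pods" {
  targets    = discovery.relabel.pospai_pods.output
  forward_to = [prometheus.relabel.podmonitors.receiver]
}

Filtering at the discovery stage means the agent never attempts to scrape the out-of-namespace pods at all, rather than scraping them and dropping the samples afterwards.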

Reading through this doc might be helpful: https://github.com/grafana/k8s-monitoring-helm/blob/fix/doc-improvements/charts/k8s-monitoring/docs/ScrapeApplicationMetrics.md