Kubernetes (microk8s) monitoring on Ubuntu with Alloy

Hi all, I am new to Grafana Stack and am trying to get a test environment up and running using Alloy, Prometheus and Grafana on a PC running Ubuntu.

Apologies for the links: as a new user I am only allowed to include two, so I could not post otherwise. I have added an "a" at the start of the extras as a workaround. All links are to the Grafana website.

I already have metrics from the machine itself flowing using this tutorial: ahttps://grafana.com/docs/alloy/latest/tutorials/send-metrics-to-prometheus/
which is working great. I am now trying to extend the Alloy configuration to enable Kubernetes monitoring of the microk8s instance.

I have set up Alloy following these instructions,
ahttps://grafana.com/docs/alloy/latest/set-up/install/linux/
which runs Alloy under systemd.

I have tried to pull metrics using these instructions ahttps://grafana.com/docs/alloy/latest/collect/prometheus-metrics/#collect-metrics-from-kubernetes-pods

and also tried to pull metrics from “prometheus.exporter.cadvisor” using the instructions here: ahttps://grafana.com/docs/alloy/latest/reference/components/prometheus/prometheus.exporter.cadvisor/
I followed the security section (“Add to Docker group (recommended)”) and set things up using the example code here: ahttps://grafana.com/docs/alloy/latest/reference/components/prometheus/prometheus.exporter.cadvisor/#component-configuration

I am not sure why I am not getting pod-level metrics. I am also not sure whether Alloy running on Ubuntu and reading from the Kubernetes instance is a valid configuration; in most examples I see, Alloy is running in a pod inside Kubernetes itself.

I am getting some metrics, but they seem to be primarily concerned with the Kubernetes instance as a whole, like container_memory_usage_bytes and container_cpu_usage_seconds_total, which just show one graph under the name of the machine itself.

When I run the command “users” in a terminal I don’t see an alloy user; I don’t think one was ever created during the Alloy installation.
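(For what it's worth, `users` only lists currently logged-in users, so a service account would never show up there even if it exists. A small sketch of how to check the account database instead — the helper function name here is made up for illustration:)

```shell
# `users` only shows active login sessions; service accounts such as
# "alloy" never appear there. Query the passwd database instead.
user_exists() {
  getent passwd "$1" > /dev/null 2>&1
}

if user_exists alloy; then
  echo "alloy account exists"
else
  echo "no alloy account found"
fi
```

If the account does exist, the `User=` line in the systemd unit (e.g. via `systemctl cat alloy`) shows which user the service actually runs as.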

Would I need to run Alloy as root in this scenario? The link on that page, ahttps://grafana.com/docs/alloy/latest/reference/configure/nonroot/,
is dead, so that's no help.

I am just new to this stuff so I would appreciate any guidance.

Are you deploying Alloy on the host machine and trying to monitor the Kubernetes cluster that’s running there (outside-in)? Or are you deploying Alloy from within the Kubernetes cluster and monitoring it from the inside?

I have Alloy installed on the host machine. I am already monitoring the host, and I am trying to extend the config to monitor Kubernetes as well, if that's possible. So outside-in.

If Alloy is running on the host of your Kubernetes nodes, then you need to make sure you mount the relevant host directories into Alloy in order to get container metrics. You can see the node-exporter Helm chart as an example: helm-charts/charts/prometheus-node-exporter/templates/daemonset.yaml at main · prometheus-community/helm-charts · GitHub
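(Roughly, the part of that daemonset that matters is the volume mounts. An abridged sketch from memory, not the exact chart contents: the host's /proc, /sys, and root filesystem are mounted read-only into the container, and the exporter is pointed at those paths instead of the container's own:)

```yaml
# Abridged sketch of the node-exporter daemonset pattern (not the
# literal chart): host filesystems mounted read-only, exporter args
# pointed at the mounted paths.
containers:
  - name: node-exporter
    args:
      - --path.procfs=/host/proc
      - --path.sysfs=/host/sys
      - --path.rootfs=/host/root
    volumeMounts:
      - name: proc
        mountPath: /host/proc
        readOnly: true
      - name: sys
        mountPath: /host/sys
        readOnly: true
      - name: root
        mountPath: /host/root
        readOnly: true
volumes:
  - name: proc
    hostPath:
      path: /proc
  - name: sys
    hostPath:
      path: /sys
  - name: root
    hostPath:
      path: /
```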

I am trying to collect metrics using this guide

Collect Prometheus metrics | Grafana Alloy documentation

Which uses

discovery.kubernetes "<DISCOVERY_LABEL>" {
  role = "pod"
}
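(For reference, the discovery component on its own just lists every pod in the cluster as a target. A common refinement — sketch only, the component labels are made up, and it assumes a prometheus.remote_write "metrics_service" is defined elsewhere in the config — is to filter the discovered targets before scraping:)

```alloy
// Sketch: keep only pods that opt in via the conventional
// prometheus.io/scrape annotation, then scrape just those.
discovery.kubernetes "pods" {
  role = "pod"
}

discovery.relabel "annotated_pods" {
  targets = discovery.kubernetes.pods.targets

  rule {
    source_labels = ["__meta_kubernetes_pod_annotation_prometheus_io_scrape"]
    regex         = "true"
    action        = "keep"
  }
}

prometheus.scrape "annotated_pods" {
  targets    = discovery.relabel.annotated_pods.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
```

Even with filtering, scraping only works for pods that actually expose a Prometheus metrics endpoint.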

Will this give me the metrics for the individual pods, or do I need to use the node exporter instead?

I am not familiar with reading Helm charts, sorry; I don't know which section of it you are referring to.

My current Alloy configuration is as follows; it is mostly a copy-and-paste from the Grafana documentation mentioned in my first post:

//k8s pods
discovery.kubernetes "k8s_pods" {
  role = "pod"
  kubeconfig_file = "/etc/alloy/config"
}

prometheus.scrape "k8s_pod_scraper" {
  targets    = discovery.kubernetes.k8s_pods.targets
  forward_to = [prometheus.relabel.k8s_pod_metrics.receiver]
}

prometheus.relabel "k8s_pod_metrics" {
  rule {
    action        = "replace"
    source_labels = ["__meta_kubernetes_pod_name"]
    target_label  = "pod_name"
  }

  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

//k8s services
discovery.kubernetes "k8s_services" {
  role = "service"
  kubeconfig_file = "/etc/alloy/config"
}

prometheus.scrape "k8s_service_scraper" {
  targets    = discovery.kubernetes.k8s_services.targets
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

//k8s cadvisor
prometheus.exporter.cadvisor "docker" {
  docker_host       = "unix:///var/run/docker.sock"
}

prometheus.scrape "cadvisor" {
  targets    = prometheus.exporter.cadvisor.docker.targets
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

//Local system
prometheus.exporter.unix "local_system" {
  enable_collectors = ["logind"]
}

prometheus.scrape "unix_scraper" {
  targets         = prometheus.exporter.unix.local_system.targets
  forward_to      = [prometheus.relabel.filter_metrics.receiver]
  scrape_interval = "10s"
}

prometheus.relabel "filter_metrics" {
  rule {
    action        = "drop"
    source_labels = ["env"]
    regex         = "dev"
  }

  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

prometheus.remote_write "metrics_service" {
  endpoint {
    url = "http://192.168.0.161:9090/api/v1/write"

    // basic_auth {
    //   username = "admin"
    //   password = "admin"
    // }
  }
}

livedebugging {
  enabled = true
}

My question really boils down to:

  • Is outside-in monitoring of Kubernetes with Alloy possible?
  • What special considerations are there for that configuration?

Outside-in monitoring of Kubernetes is possible if you are just doing poke tests, but if you are looking to collect pod and node metrics you have to do it from within the cluster. My understanding of a general metrics-collection setup for Kubernetes is:

  1. A node-exporter-type agent per node.
  2. A singleton Prometheus (or similar) service that pulls from the node exporters and forwards to a backend (Mimir, Thanos, etc.).
  3. The singleton Prometheus can also pull service metrics based on service discovery.

None of this can be easily done from outside.
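(For illustration, a rough sketch of what the in-cluster approach looks like in Alloy terms: Alloy deployed inside the cluster with a service account allowed to list nodes, scraping each kubelet's cAdvisor endpoint for per-pod container metrics. The paths are the standard in-cluster service-account ones, and it reuses the prometheus.remote_write "metrics_service" component from the config earlier in the thread; treat this as a sketch, not a tested config:)

```alloy
// Sketch: in-cluster Alloy scraping per-pod container metrics from
// each kubelet's cAdvisor endpoint.
discovery.kubernetes "nodes" {
  role = "node"
}

discovery.relabel "cadvisor" {
  targets = discovery.kubernetes.nodes.targets

  // Kubelet targets default to /metrics; point them at cAdvisor.
  rule {
    action       = "replace"
    replacement  = "/metrics/cadvisor"
    target_label = "__metrics_path__"
  }
}

prometheus.scrape "kubelet_cadvisor" {
  targets           = discovery.relabel.cadvisor.output
  scheme            = "https"
  bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"

  tls_config {
    insecure_skip_verify = true
  }

  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
```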