Cannot see any traces from Alloy in Grafana

I am trying to use Grafana Alloy with Grafana Beyla enabled, hoping it can send traces to Grafana Tempo.
With this setup, Alloy succeeds in sending logs to Loki. However, I cannot see any traces in Grafana, and there is no Service Graph either. I have been stuck here for days and have not found any useful logs. Any guidance would be appreciated, thank you! :smiley:

Helm Charts:

Grafana

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-hm-grafana
  namespace: production-hm-argo-cd
  labels:
    app.kubernetes.io/name: hm-grafana
spec:
  project: production-hm
  sources:
    - repoURL: https://grafana.github.io/helm-charts
      # https://artifacthub.io/packages/helm/grafana/grafana
      targetRevision: 8.8.5
      chart: grafana
      helm:
        releaseName: hm-grafana
        values: |
          # https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
          ---
          sidecar:
            dashboards:
              enabled: true
              searchNamespace: ALL
          datasources:
            datasources.yaml:
              apiVersion: 1
              datasources:
                - name: hm-prometheus
                  type: prometheus
                  isDefault: true
                  url: http://hm-prometheus-kube-pr-prometheus.production-hm-prometheus:9090
                  access: proxy
                - name: hm-loki
                  type: loki
                  isDefault: false
                  url: http://hm-loki-gateway.production-hm-loki:80
                  access: proxy
                - name: hm-tempo
                  type: tempo
                  isDefault: false
                  url: http://hm-tempo-query-frontend.production-hm-tempo:3100
                  access: proxy
                  # https://grafana.com/docs/grafana/next/datasources/tempo/configure-tempo-data-source/#example-file
                  jsonData:
                    tracesToLogsV2:
                      datasourceUid: 'hm-loki'
                      spanStartTimeShift: '-1h'
                      spanEndTimeShift: '1h'
                      tags: ['job', 'instance', 'pod', 'namespace']
                      filterByTraceID: false
                      filterBySpanID: false
                      customQuery: true
                      query: 'method="$${__span.tags.method}"'
                    tracesToMetrics:
                      datasourceUid: 'hm-prometheus'
                      spanStartTimeShift: '-1h'
                      spanEndTimeShift: '1h'
                      tags: [{ key: 'service.name', value: 'service' }, { key: 'job' }]
                      queries:
                        - name: 'Sample query'
                          query: 'sum(rate(traces_spanmetrics_latency_bucket{$$__tags}[5m]))'
                    serviceMap:
                      datasourceUid: 'hm-prometheus'
                    nodeGraph:
                      enabled: true
                    search:
                      hide: false
                    traceQuery:
                      timeShiftEnabled: true
                      spanStartTimeShift: '-1h'
                      spanEndTimeShift: '1h'
                    spanBar:
                      type: 'Tag'
                      tag: 'http.path'
                    streamingEnabled:
                      search: true
    - repoURL: git@github.com:hongbo-miao/hongbomiao.com.git
      targetRevision: main
      path: kubernetes/argo-cd/applications/production-hm/grafana/kubernetes-manifests
  destination:
    namespace: production-hm-grafana
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
    automated:
      prune: true

Tempo

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-hm-tempo
  namespace: production-hm-argo-cd
  labels:
    app.kubernetes.io/name: hm-tempo
spec:
  project: production-hm
  source:
    repoURL: https://grafana.github.io/helm-charts
    # https://artifacthub.io/packages/helm/grafana/tempo-distributed
    targetRevision: 1.31.0
    chart: tempo-distributed
    helm:
      releaseName: hm-tempo
      values: |
        # https://github.com/grafana/helm-charts/blob/main/charts/tempo-distributed/values.yaml
        # https://grafana.com/docs/tempo/latest/setup/operator/object-storage/
        ---
        tempo:
          structuredConfig:
            # https://grafana.com/docs/tempo/latest/traceql/#stream-query-results
            stream_over_http_enabled: true
        gateway:
          enabled: false
        serviceAccount:
          create: true
          name: hm-tempo
          annotations:
            eks.amazonaws.com/role-arn: arn:aws:iam::272394222652:role/TempoRole-hm-tempo
        storage:
          admin:
            backend: s3
            s3:
              endpoint: s3.amazonaws.com
              region: us-west-2
              bucket: production-hm-tempo-admin-bucket
          trace:
            backend: s3
            s3:
              endpoint: s3.amazonaws.com
              region: us-west-2
              bucket: production-hm-tempo-trace-bucket
        traces:
          otlp:
            http:
              enabled: true
            grpc:
              enabled: true
        metricsGenerator:
          enabled: true
          config:
            processor:
              # https://grafana.com/docs/tempo/latest/operations/traceql-metrics/
              local_blocks:
                filter_server_spans: false
            storage:
              remote_write:
                - url: http://hm-prometheus-kube-pr-prometheus.production-hm-prometheus:9090/api/v1/write
        global_overrides:
          metrics_generator_processors:
            - local-blocks
            - service-graphs
            - span-metrics
  destination:
    namespace: production-hm-tempo
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
    automated:
      prune: true

Alloy

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-hm-alloy
  namespace: production-hm-argo-cd
  labels:
    app.kubernetes.io/name: hm-alloy
spec:
  project: production-hm
  source:
    repoURL: https://grafana.github.io/helm-charts
    # https://artifacthub.io/packages/helm/grafana/alloy
    targetRevision: 0.11.0
    chart: alloy
    helm:
      releaseName: hm-alloy
      values: |
        # https://github.com/grafana/alloy/blob/main/operations/helm/charts/alloy/values.yaml
        ---
        alloy:
          # For "beyla.ebpf", see https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/
          stabilityLevel: public-preview
          extraEnv:
            - name: LOKI_URL
              value: http://hm-loki-gateway.production-hm-loki:80/loki/api/v1/push
            - name: TEMPO_ENDPOINT
              value: hm-tempo-distributor.production-hm-tempo:4317
          configMap:
            content: |-
              // https://grafana.com/docs/alloy/latest/configure/kubernetes/
              // https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/
              logging {
                level = "info"
                format = "logfmt"
              }

              // local.file_match discovers files on the local filesystem using glob patterns and the doublestar library. It returns an array of file paths.
              local.file_match "node_logs" {
                path_targets = [{
                    // Monitor syslog to scrape node-logs
                    __path__  = "/var/log/syslog",
                    job       = "node/syslog",
                    node_name = sys.env("HOSTNAME"),
                    cluster   = "hm-eks-cluster",
                }]
              }

              // loki.source.file reads log entries from files and forwards them to other loki.* components.
              // You can specify multiple loki.source.file components by giving them different labels.
              loki.source.file "node_logs" {
                targets    = local.file_match.node_logs.targets
                forward_to = [loki.write.default.receiver]
              }

              // discovery.kubernetes allows you to find scrape targets from Kubernetes resources.
              // It watches cluster state and ensures targets are continually synced with what is currently running in your cluster.
              discovery.kubernetes "pod" {
                role = "pod"
              }

              // discovery.relabel rewrites the label set of the input targets by applying one or more relabeling rules.
              // If no rules are defined, then the input targets are exported as-is.
              discovery.relabel "pod_logs" {
                targets = discovery.kubernetes.pod.targets

                // Label creation - "namespace" field from "__meta_kubernetes_namespace"
                rule {
                  source_labels = ["__meta_kubernetes_namespace"]
                  action = "replace"
                  target_label = "namespace"
                }

                // Label creation - "pod" field from "__meta_kubernetes_pod_name"
                rule {
                  source_labels = ["__meta_kubernetes_pod_name"]
                  action = "replace"
                  target_label = "pod"
                }

                // Label creation - "container" field from "__meta_kubernetes_pod_container_name"
                rule {
                  source_labels = ["__meta_kubernetes_pod_container_name"]
                  action = "replace"
                  target_label = "container"
                }

                // Label creation -  "app" field from "__meta_kubernetes_pod_label_app_kubernetes_io_name"
                rule {
                  source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
                  action = "replace"
                  target_label = "app"
                }

                // Label creation -  "job" field from "__meta_kubernetes_namespace" and "__meta_kubernetes_pod_container_name"
                // Concatenate values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name
                rule {
                  source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
                  action = "replace"
                  target_label = "job"
                  separator = "/"
                  replacement = "$1"
                }

                // Label creation - "container" field from "__meta_kubernetes_pod_uid" and "__meta_kubernetes_pod_container_name"
                // Concatenate values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log
                rule {
                  source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
                  action = "replace"
                  target_label = "__path__"
                  separator = "/"
                  replacement = "/var/log/pods/*$1/*.log"
                }

                // Label creation -  "container_runtime" field from "__meta_kubernetes_pod_container_id"
                rule {
                  source_labels = ["__meta_kubernetes_pod_container_id"]
                  action = "replace"
                  target_label = "container_runtime"
                  regex = "^(\\S+):\\/\\/.+$"
                  replacement = "$1"
                }
              }

              // loki.source.kubernetes tails logs from Kubernetes containers using the Kubernetes API.
              loki.source.kubernetes "pod_logs" {
                targets    = discovery.relabel.pod_logs.output
                forward_to = [loki.process.pod_logs.receiver]
              }

              // loki.process receives log entries from other Loki components, applies one or more processing stages,
              // and forwards the results to the list of receivers in the component's arguments.
              loki.process "pod_logs" {
                stage.static_labels {
                    values = {
                      cluster = "hm-eks-cluster",
                    }
                }
                forward_to = [loki.write.default.receiver]
              }

              // loki.source.kubernetes_events tails events from the Kubernetes API and converts them
              // into log lines to forward to other Loki components.
              loki.source.kubernetes_events "cluster_events" {
                job_name   = "integrations/kubernetes/eventhandler"
                log_format = "logfmt"
                forward_to = [
                  loki.process.cluster_events.receiver,
                ]
              }

              // loki.process receives log entries from other loki components, applies one or more processing stages,
              // and forwards the results to the list of receivers in the component's arguments.
              loki.process "cluster_events" {
                forward_to = [loki.write.default.receiver]
                stage.static_labels {
                  values = {
                    cluster = "hm-eks-cluster",
                  }
                }
                stage.labels {
                  values = {
                    kubernetes_cluster_events = "job",
                  }
                }
              }

              loki.write "default" {
                endpoint {
                  url = env("LOKI_URL")
                }
              }

              // https://grafana.com/docs/tempo/latest/configuration/grafana-alloy/automatic-logging/
              // https://grafana.com/docs/tempo/latest/configuration/grafana-alloy/service-graphs/
              // https://grafana.com/docs/tempo/latest/configuration/grafana-alloy/span-metrics/
              // https://grafana.com/blog/2024/05/21/how-to-use-grafana-beyla-in-grafana-alloy-for-ebpf-based-auto-instrumentation/
              beyla.ebpf "default" {
                attributes {
                  kubernetes {
                    enable = "true"
                  }
                }
                discovery {
                  services {
                    exe_path = "http"
                    open_ports = "80"
                  }
                }
                output {
                  traces = [otelcol.processor.batch.default.input]
                }
              }

              otelcol.receiver.otlp "default" {
                grpc {}
                http {}
                output {
                  metrics = [otelcol.processor.batch.default.input]
                  logs    = [otelcol.processor.batch.default.input]
                  traces = [
                    otelcol.connector.servicegraph.default.input,
                    otelcol.connector.spanlogs.default.input,
                    otelcol.connector.spanmetrics.default.input,
                  ]
                }
              }

              // otelcol.connector.servicegraph builds service graph metrics from the incoming spans.
              otelcol.connector.servicegraph "default" {
                dimensions = ["http.method", "http.target"]
                output {
                  metrics = [otelcol.processor.batch.default.input]
                }
              }

              // otelcol.connector.spanlogs derives log lines from spans (root spans only here).
              otelcol.connector.spanlogs "default" {
                roots = true
                output {
                  logs = [otelcol.processor.batch.default.input]
                }
              }

              // otelcol.connector.spanmetrics generates RED-style metrics from spans.
              otelcol.connector.spanmetrics "default" {
                dimension {
                  name = "http.method"
                  default = "GET"
                }
                dimension {
                  name = "http.target"
                }
                aggregation_temporality = "DELTA"
                histogram {
                  explicit {
                    buckets = ["50ms", "100ms", "250ms", "1s", "5s", "10s"]
                  }
                }
                metrics_flush_interval = "15s"
                namespace = "traces_spanmetrics"
                output {
                  metrics = [otelcol.processor.batch.default.input]
                }
              }

              // otelcol.processor.batch batches telemetry before it is exported.
              otelcol.processor.batch "default" {
                output {
                  metrics = [otelcol.exporter.otlp.hm_tempo.input]
                  logs    = [otelcol.exporter.otlp.hm_tempo.input]
                  traces  = [otelcol.exporter.otlp.hm_tempo.input]
                }
              }

              // otelcol.exporter.otlp sends the batched telemetry to the Tempo distributor over OTLP gRPC.
              otelcol.exporter.otlp "hm_tempo" {
                client {
                  endpoint = env("TEMPO_ENDPOINT")
                  tls {
                    insecure = true
                    insecure_skip_verify = true
                  }
                }
              }
  destination:
    namespace: production-hm-alloy
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
    automated:
      prune: true

Alloy’s graph

All components are healthy:

Logs

Log from one of the Alloy pods

It is long, so I put it at Alloy log · GitHub

I saw this line inside:

2025/01/30 08:59:35 ERROR Unable to load eBPF watcher for process events component=discover.ProcessWatcher interval=5s error="loading and assigning BPF objects: field BeylaKprobeSysBind: program beyla_kprobe_sys_bind: map watch_events: map create: operation not permitted (MEMLOCK may be too low, consider rlimit.RemoveMemlock)"

However, I am not sure how to resolve it. I am using Amazon EKS.
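
Based on that message, my current guess is that the Alloy container does not have enough privileges for Beyla's eBPF programs to create BPF maps. Below is a minimal sketch of the Helm values change I am considering; I am assuming the chart's alloy.securityContext field is the right knob here, so please treat it as a guess rather than a verified fix:

alloy:
  securityContext:
    # Assumption: Beyla's eBPF probes need elevated privileges (or at least
    # capabilities such as BPF/PERFMON/SYS_PTRACE plus a higher memlock limit),
    # which is what the MEMLOCK error above seems to point at.
    privileged: true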

I would expect other traces to still appear in Grafana even if the traces from Beyla are not working properly. (I could be wrong.)
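
To rule Beyla out, my plan is to push a few synthetic spans straight into Alloy's OTLP receiver and see whether they show up in Tempo. A rough sketch of what I intend to run (the pod name is a placeholder, and I am assuming telemetrygen is installed locally via go install github.com/open-telemetry/opentelemetry-collector-contrib/cmd/telemetrygen@latest):

# Forward the OTLP gRPC port of one Alloy pod (replace the pod name with a real one)
kubectl port-forward --namespace production-hm-alloy pod/hm-alloy-xxxxx 4317:4317

# In another terminal, send 10 test spans to the forwarded port
telemetrygen traces --otlp-endpoint localhost:4317 --otlp-insecure --traces 10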

tempo-distributor pod log

level=warn ts=2025-01-30T06:36:10.194320099Z caller=main.go:133 msg="-- CONFIGURATION WARNINGS --"
level=warn ts=2025-01-30T06:36:10.19437038Z caller=main.go:139 msg="Inline, unscoped overrides are deprecated. Please use the new overrides config format."
level=info ts=2025-01-30T06:36:10.197225405Z caller=main.go:121 msg="Starting Tempo" version="(version=v2.7.0, branch=HEAD, revision=b0da6b481)"
level=info ts=2025-01-30T06:36:10.197899194Z caller=server.go:248 msg="server listening on addresses" http=[::]:3100 grpc=[::]:9095
ts=2025-01-30T06:36:10Z level=info msg="OTel Shim Logger Initialized" component=tempo
level=info ts=2025-01-30T06:36:10.413004915Z caller=memberlist_client.go:446 msg="Using memberlist cluster label and node name" cluster_label=hm-tempo.production-hm-tempo node=hm-tempo-distributor-6f579f694c-665x8-51b758e5
level=info ts=2025-01-30T06:36:10.414030557Z caller=module_service.go:82 msg=starting module=internal-server
level=info ts=2025-01-30T06:36:10.414226259Z caller=module_service.go:82 msg=starting module=server
level=info ts=2025-01-30T06:36:10.414342161Z caller=module_service.go:82 msg=starting module=memberlist-kv
level=info ts=2025-01-30T06:36:10.414361461Z caller=module_service.go:82 msg=starting module=overrides
level=info ts=2025-01-30T06:36:10.414402831Z caller=module_service.go:82 msg=starting module=ring
level=info ts=2025-01-30T06:36:10.414436302Z caller=module_service.go:82 msg=starting module=metrics-generator-ring
level=info ts=2025-01-30T06:36:10.414457312Z caller=module_service.go:82 msg=starting module=usage-report
level=warn ts=2025-01-30T06:36:10.414641774Z caller=runtime_config_overrides.go:97 msg="Overrides config type mismatch" err="per-tenant overrides config type does not match static overrides config type" config_type=new static_config_type=legacy
level=error ts=2025-01-30T06:36:10.498151365Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:10.498190246Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=info ts=2025-01-30T06:36:10.498203496Z caller=memberlist_client.go:563 msg="memberlist fast-join starting" nodes_found=0 to_join=0
level=warn ts=2025-01-30T06:36:10.498217166Z caller=memberlist_client.go:583 msg="memberlist fast-join finished" joined_nodes=0 elapsed_time=83.858295ms
level=info ts=2025-01-30T06:36:10.498238626Z caller=memberlist_client.go:595 phase=startup msg="joining memberlist cluster" join_members=dns+hm-tempo-gossip-ring:7946
level=info ts=2025-01-30T06:36:10.498317557Z caller=ring.go:316 msg="ring doesn't exist in KV store yet"
level=info ts=2025-01-30T06:36:10.498360498Z caller=ring.go:316 msg="ring doesn't exist in KV store yet"
level=info ts=2025-01-30T06:36:10.498495459Z caller=module_service.go:82 msg=starting module=distributor
ts=2025-01-30T06:36:10Z level=warn msg="Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks." component=tempo documentation=https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
ts=2025-01-30T06:36:10Z level=info msg="Starting GRPC server" component=tempo endpoint=0.0.0.0:4317
ts=2025-01-30T06:36:10Z level=warn msg="Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks." component=tempo documentation=https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
ts=2025-01-30T06:36:10Z level=info msg="Starting HTTP server" component=tempo endpoint=0.0.0.0:4318
level=info ts=2025-01-30T06:36:10.498914304Z caller=app.go:208 msg="Tempo started"
level=error ts=2025-01-30T06:36:10.509656008Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:10.509682688Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=warn ts=2025-01-30T06:36:10.509699598Z caller=memberlist_client.go:629 phase=startup msg="joining memberlist cluster" attempts=1 max_attempts=10 err="found no nodes to join"
level=error ts=2025-01-30T06:36:11.536326569Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:11.536359541Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=warn ts=2025-01-30T06:36:11.53637399Z caller=memberlist_client.go:629 phase=startup msg="joining memberlist cluster" attempts=2 max_attempts=10 err="found no nodes to join"
level=error ts=2025-01-30T06:36:15.080790386Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:15.080823057Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=warn ts=2025-01-30T06:36:15.080835127Z caller=memberlist_client.go:629 phase=startup msg="joining memberlist cluster" attempts=3 max_attempts=10 err="found no nodes to join"
level=info ts=2025-01-30T06:36:19.988200041Z caller=memberlist_client.go:602 phase=startup msg="joining memberlist cluster succeeded" reached_nodes=7 elapsed_time=9.489949855s
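
One more thing I plan to verify is whether anything actually reaches Tempo at all, in case the problem is only on the Grafana query side. The service names and port below come from my values above; the API path and metric name are my understanding of what Tempo exposes, so this is just a sketch:

# Does the query path return any stored traces?
kubectl port-forward --namespace production-hm-tempo service/hm-tempo-query-frontend 3100:3100
curl "http://localhost:3100/api/search?limit=20"

# Has the distributor received any spans at all?
kubectl port-forward --namespace production-hm-tempo service/hm-tempo-distributor 3100:3100
curl -s http://localhost:3100/metrics | grep tempo_distributor_spans_received_total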