Labels scraped by prometheus.operator.podmonitors are dropped when exporting via otelcol.exporter.otlphttp

We have an Azure Kubernetes Service (AKS) cluster hosting a few applications. Alongside these, we have Prometheus-native PodMonitors and ServiceMonitors that define the scrape targets and configurations.

We are using Grafana Alloy to discover and scrape these PodMonitor/ServiceMonitor targets via the prometheus.operator.podmonitors component, together with rules inside it that add a few labels such as cluster, region, and environment. The intention is to forward the scraped metrics to an Elasticsearch Cloud deployment using the otelcol.exporter.otlphttp Alloy component.

Below is a snippet of the Alloy config:

```alloy
prometheus.operator.podmonitors "podmonitors" {
  forward_to = [otelcol.receiver.prometheus.default.receiver, prometheus.remote_write.thanos.receiver]

  scrape {
    default_scrape_interval = "30s"
    default_scrape_timeout  = "20s"
  }

  rule {
    target_label = "cluster"
    replacement  = sys.env("CLUSTER_ID")
  }

  rule {
    target_label = "environment"
    replacement  = sys.env("ENVIRONMENT")
  }

  rule {
    target_label = "region"
    replacement  = sys.env("REGION")
  }
}

otelcol.receiver.prometheus "default" {
  output {
    metrics = [otelcol.exporter.otlphttp.elastic.input]
  }
}

otelcol.exporter.otlphttp "elastic" {
  client {
    endpoint = "https://ELASTIC.apm.westeurope.azure.elastic-cloud.com:443"
    auth     = otelcol.auth.headers.creds.handler
  }
}

otelcol.auth.headers "creds" {
  header {
    key   = "Authorization"
    value = "ApiKey REDACTED"
  }
}
```
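For diagnosis, one thing I could try (a sketch only; otelcol.exporter.debug and its verbosity argument are standard Alloy components, but the "inspect" label is my own) is to tap the receiver's output with a debug exporter, so the attributes can be inspected in Alloy's logs after the Prometheus-to-OTLP conversion but before anything reaches Elastic:

```alloy
// Hypothetical tap: fan the converted OTLP metrics out to a debug exporter
// in addition to the real one, so each metric and its attributes are logged.
otelcol.receiver.prometheus "default" {
  output {
    metrics = [
      otelcol.exporter.otlphttp.elastic.input,
      otelcol.exporter.debug.inspect.input,
    ]
  }
}

// Logs every metric data point, including its attributes, at full detail.
otelcol.exporter.debug "inspect" {
  verbosity = "detailed"
}
```

If the cluster/environment/region attributes show up in the debug output, that would point at the Elastic side rather than the Alloy pipeline.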

The targets are scraped correctly, which I verified by port-forwarding Alloy locally and checking the endpoint below:

http://localhost:8090/component/prometheus.operator.podmonitors.podmonitors

Below is a screenshot of the target for a Kong API gateway application, where the extra labels (cluster, environment, region) can be seen as expected.

But the issue is that when I look up metrics such as kong_http_requests_total or kong_bandwidth_bytes in Elastic, these labels do not show up there.

By contrast, the labels are correctly pushed to Thanos. As you can see in the Alloy config, I am forwarding the metrics to both Thanos and Elastic.

Is this an issue with Alloy or with Elastic?