Grafana Agent stops scraping kubernetes-pods job at scrape_interval=60s

We’re experiencing an issue with our Grafana Agent (Prometheus) scrape configuration. With scrape_interval set to 15s or 30s everything works fine, but as soon as we increase it to 60s the agent stops scraping the kubernetes-pods job from the configuration below. Is there anything else we need to change in the configuration to make 60s work? We initially deployed the agent when the 15s scrape interval was still the default setting.
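
To be explicit, the only value we change between the working and the failing setups is the global scrape interval; everything else in the ConfigMap stays the same:

# 15s and 30s both work; with 60s the kubernetes-pods job stops being scraped.
prometheus:
  global:
    scrape_interval: 60s

The full ConfigMap: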

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-agent
  namespace: ${NAMESPACE}
data:
  agent.yaml: |
    server:
      http_listen_port: 12345
    prometheus:
      wal_directory: /tmp/grafana-agent-wal
      global:
        scrape_interval: 60s
        external_labels:
          cluster: cloud
      configs:
      - name: integrations
        remote_write:
        - url: ${REMOTE_WRITE_URL}
          basic_auth:
            username: ${REMOTE_WRITE_USERNAME}
            password: ${REMOTE_WRITE_PASSWORD}
        scrape_configs:
        - job_name: kubernetes-pods
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - action: keep
              regex: .*-metrics
              source_labels:
                - __meta_kubernetes_pod_container_port_name
          metric_relabel_configs:
            - source_labels: [__name__]
              regex: 'nginx_ingress_controller_(request_duration_seconds_bucket|response_duration_seconds_bucket|request_duration_seconds_count|response_duration_seconds_sum|upstream_latency_seconds_sum|upstream_latency_seconds_count|ssl_expire_time_seconds|nginx_process_connections|requests)'
              action: keep
...
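
In case it helps narrow things down, one variant we could try is pinning the interval on the job itself rather than only in the global block. This is just a sketch and assumes the agent honours a per-job scrape_interval at the scrape_config level the same way upstream Prometheus does:

# Sketch only: per-job override (assumes scrape_interval is accepted at the
# scrape_config level, as in upstream Prometheus).
scrape_configs:
- job_name: kubernetes-pods
  scrape_interval: 60s   # override for this job only
  kubernetes_sd_configs:
    - role: pod

If the global value is being ignored for some reason, an override like this would at least tell us whether the problem is the 60s value itself or where it is set.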