Not getting alerts

  alerting:
    policies.yaml:
      policies:
        - orgId: 1
          receiver: email
          group_by:
          - grafana_folder
          - alertname
    contactpoints.yaml:
      contactPoints:
      - name: email
        org_id: 1
        receivers:
        - uid: "email"
          type: email
          settings:
            addresses: |
              me@mydomain.dk
              myfriend@hisdomain.vn
            singleEmail: true
          disableResolveMessage: false

I have configured the policies and contact points as above. Using the "Test" button on the contact point sends the expected [FIRING:1] TestAlert Grafana email, and we both receive it.

My alert is configured as:

kind: PrometheusRule
apiVersion: monitoring.coreos.com/v1
metadata:
  name: argocd-alerts
spec:
  "groups":
  - "name": "argo-cd"
    "rules":
    - "alert": "ArgoCdAppOutOfSync"
      "annotations":
        "description": {{`"The application {{ $labels.dest_server }}/{{ $labels.project }}/{{ $labels.name }} is out of sync with the sync status {{ $labels.sync_status }} for the past 15m."`}}
        "summary": "An ArgoCD Application is Out Of Sync."
      "expr": |
        sum(
          argocd_app_info{
            job=~".*",
            sync_status!="Synced"
          }
        ) by (job, dest_server, project, name, sync_status)
        > 0
      "for": "15m"
      "labels":
        "severity": "warning"

It shows up fine in the Grafana UI and is registered as "firing".

However, neither of us ever receives any notifications from firing alerts, not for the rule above nor for any other.

What am I doing wrong?

Figured it out…
It is not at all clear or obvious, but Grafana's built-in Alertmanager (and therefore the provisioned contact points and notification policies above) is not used for these alerts at all. Rules defined as PrometheusRule resources are evaluated by Mimir or Prometheus, which send notifications through their own Alertmanager, so the receivers have to be configured there instead.
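For anyone hitting the same wall: since the alerts are evaluated by Prometheus/Mimir, the email receiver has to live in that Alertmanager's own configuration rather than in Grafana's contact points. A minimal sketch of an equivalent `alertmanager.yml`, where the SMTP relay, sender address, and credentials are placeholders to replace with your own:

```yaml
# Sketch only: the smarthost, from address, and credentials are hypothetical.
route:
  receiver: email
  group_by: [alertname]
receivers:
  - name: email
    email_configs:
      - to: "me@mydomain.dk, myfriend@hisdomain.vn"
        from: "alerts@mydomain.dk"          # hypothetical sender
        smarthost: "smtp.mydomain.dk:587"   # hypothetical SMTP relay
        auth_username: "alerts@mydomain.dk"
        auth_password: "changeme"
        send_resolved: true
```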