NGalert / Grafana 8 alert feature: How to export / import alerts as yml/json?

Hi @andi20002000, this is because the output from GET includes the folder name:
{ "<folder name>": [ { <alert rule object> } ] }

As soon as I removed the folder part, I managed to create the alert rule with a curl POST:
{ <alert rule object> }

I haven’t managed to post more than one alert rule at a time yet, though.
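
For a single rule group, the whole round trip can be done with curl and jq. A minimal sketch (the host and folder name are placeholders, and it assumes GRAFANA_TOKEN holds an API key with editor permissions):

curl -s -H "Authorization: Bearer ${GRAFANA_TOKEN}" \
     'https://example.com/api/ruler/grafana/api/v1/rules' \
     | jq '.["<folder name>"][0]' > rule-group.json

curl -X POST \
     -H "Authorization: Bearer ${GRAFANA_TOKEN}" \
     -H "Content-Type: application/json" \
     'https://example.com/api/ruler/grafana/api/v1/rules/<folder name>' \
     -d @rule-group.json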

Inspired by @andi20002000, we came up with a slightly more automated solution.

Makefile

check-grafana-token:
ifndef GRAFANA_TOKEN
	$(error GRAFANA_TOKEN is required)
endif

download-grafana-alerts: check-grafana-token
	curl -X GET \
		-H "Authorization: Bearer ${GRAFANA_TOKEN}" \
		'https://example.com/api/ruler/grafana/api/v1/rules' \
		| jq > './grafana/alerts/alerts.json'

upload-grafana-alerts: check-grafana-token
	./upload-grafana-alerts.sh

upload-grafana-alerts.sh

#!/bin/bash

# Path to the JSON exported by `make download-grafana-alerts`.
ALERTS_JSON_PATH=./grafana/alerts/alerts.json
# Number of rule groups under the "folder-name" folder.
NUMBER_OF_ALERTS=$(jq -c '.["folder-name"] | length' "${ALERTS_JSON_PATH}")

for ((i = 0; i < NUMBER_OF_ALERTS; i++)); do
  # Drop the exported rule UID so Grafana creates the rule instead of rejecting a duplicate.
  ALERT_OBJECT=$(jq -c --arg i "$i" '.["folder-name"][($i | tonumber)] | del(.rules[0].grafana_alert.uid)' "${ALERTS_JSON_PATH}")
  ALERT_NAME=$(jq -c --arg i "$i" '.["folder-name"][($i | tonumber)].name' "${ALERTS_JSON_PATH}")
  echo "Creating ${ALERT_NAME}..."
  curl -X POST \
    -H "Authorization: Bearer ${GRAFANA_TOKEN}" \
    -H "Content-Type: application/json" \
    'https://example.com/api/ruler/grafana/api/v1/rules/folder-name' \
    -d "${ALERT_OBJECT}"
  echo ""
done
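
The del(.rules[0].grafana_alert.uid) step is what makes re-posting work: it drops the exported rule UID so Grafana assigns a fresh one on creation. Note it only strips the UID from the first rule of each group; a group with several rules would presumably need del(.rules[].grafana_alert.uid) instead.

Usage is then, assuming GRAFANA_TOKEN holds an API key with editor permissions:

GRAFANA_TOKEN=<api key> make download-grafana-alerts
GRAFANA_TOKEN=<api key> make upload-grafana-alerts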

+1, must-have feature! We’re stuck on the 8.2.1 release until we can manage alarms as code in a JSON file, as we could before.

Hi guys, I made a little script to export alerts from a dashboard. Try it.


Is it possible to persist alerts in JSON in the new Grafana 9.0, or is this still not possible? This is a must-have feature for IaC.


Has anyone already tested v9.1? There is a new feature regarding provisioning of alert rules in the release notes: What's new in Grafana v9.1 | Grafana documentation

I tested it and it works for me. But the Helm charts are not yet updated to v9.1 and do not include a sidecar to provision alert rules.

Is any solution for import and export available now?
Or can you explain how to use this script?

Is there any possibility to provision alerts now, with a newer version of Grafana?

Thanks

It would be nice if we could get an update on this topic from the team :smiley:

I know that this is not a solution but only a workaround.
In my team we have dashboards and Grafana provisioned from code. I tried to set up an alert in the UI, pull it from the Grafana API as JSON, convert it to YAML, and provision the alert rule as a file. In the end I found this hard, and the provisioned alert didn’t show the alert data query or details about the alert. From reading the forums, the Grafana team has other priorities for now, and maybe import/export will be implemented in the future.

For now, with Grafana 9.1.6, there is an option to disable Grafana’s new alerting system and use the old one.

When you run Grafana as a Docker image, you need to provide the ini file via an environment variable:

Docker environment variable:

            - Name: GF_PATHS_CONFIG            # Grafana Docker env variable naming the ini file
              Value: /etc/grafana/grafana.ini  # path to the ini file mounted at container startup; it overrides the default grafana.ini values

and in the ini file there is an option to disable the new Grafana alerting:

force_migration = true

[alerting]
enabled = true

[unified_alerting]
enabled = false
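
For example, a sketch of running this with plain Docker (the image tag and mount path are assumptions):

docker run -d \
  -e GF_PATHS_CONFIG=/etc/grafana/grafana.ini \
  -v "$PWD/grafana.ini:/etc/grafana/grafana.ini" \
  grafana/grafana:9.1.6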

Any update on this? Will they enable provisioning of alerts, given that the next major release will remove legacy alerting?

Issue on GitHub

They are working on it :smiley:


I guess the main problem is writing the data section of the rule. Here is how it can be obtained from the UI (see the YAML sketch after this list):

  1. Go to the alert edit page.
  2. Open the network tab in the browser developer tools.
  3. Click save.
    1. There will be a POST request to /api/ruler/grafana/api/v1/rules/…
  4. Copy the “data” array from the request payload.
  5. Convert the JSON to YAML.
  6. Paste it as-is into the data section of your provisioning/alerting file.
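
A minimal sketch of what the resulting provisioning file can look like (file provisioning of alert rules exists from v9.1; the folder, group, rule title, and UIDs below are placeholders, and only the first query of the data array is shown):

apiVersion: 1
groups:
  - orgId: 1
    name: my-rule-group
    folder: my-folder
    interval: 60s
    rules:
      - uid: my-alert-rule-1
        title: My alert rule
        condition: C              # refId of the condition query
        for: 5m
        noDataState: NoData
        execErrState: Error
        data:
          # the "data" array copied from the request payload goes here,
          # converted from JSON to YAML
          - refId: A
            relativeTimeRange:
              from: 600
              to: 0
            datasourceUid: my-datasource-uid
            model:
              expr: vector(0)
              refId: A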

Hello,

It is very helpful, thanks!

But I cannot understand which provisioning folder to use. I have found two directories:
/etc/grafana/provisioning/alerting and /var/lib/grafana/provisioning/alerting/1/. When I apply an alert as shown in the template, nothing appears in the interface. No errors occur, nothing in the logs, everything seems fine, but the provisioned alerts are not available (I am using unified_alerting).

I forgot to mention that the provisioning path inside grafana.ini is /etc/grafana/provisioning/alerting. The ConfigMap places the alert there, but it is not shown in the interface.

Thank you in advance!

I put alerts into /etc/grafana/provisioning/alerting/default.yaml
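
Grafana reads that file at startup; there is also an admin endpoint that should reload alerting provisioning without a restart (a sketch, assuming basic-auth admin credentials and the default port):

curl -X POST http://admin:admin@localhost:3000/api/admin/provisioning/alerting/reload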


Hello again,

Do you use unified_alerting or legacy_alerting? Do you have additional configuration in your grafana.ini? My configuration for the alerts is as follows:

unified_alerting.enabled = true
alerting.enabled = false

  • Installed via Helm chart, and it is up to date.

Thank you in advance!

Hello,

I’m on Grafana 9.3.1 with unified alerting enabled. I am also struggling to provision alerts. I tried using a sidecar/ConfigMaps, and while no errors are logged, my provisioned alerts don’t appear in the UI. To try to debug this, I created an alert in the UI and then exported it:

curl -XGET  http://some:password@somegrafana/api/v1/provisioning/alert-rules/1jIgdlKVk | jq . > alert-rule.json

I would expect to be able to import the exported rule, but when I delete the rule or delete the Grafana pod (I’m also on Kubernetes) and try to POST back to the API, I get this error:

curl -XPOST  http://some:password@somegrafana/api/v1/provisioning/alert-rules -d @alert-rule.json
{"message":"bad request data","traceID":""} 

There is nothing in the log to expand on this error, and cross-checking the contents of the JSON against the API guide (Alerting Provisioning HTTP API | Grafana documentation), it seems to be OK…

Unless the keys really need to be upper case? I.e., my JSON starts:

{
  "id": 1,
  "uid": "TSKdFlF4k",
  "orgID": 1,

but in the API docs it’s “ID”, “UID”, etc.
Thanks,
Tom

PS. I might have posted on the wrong thread here, which I see is for Grafana 8, but I was led here by Google!

@thopewell it looks like you might be running into the issue described in the related issue below: an invalid for duration returns this non-obvious error mentioning “traceID”. The PR adds better error messages so the user knows what to fix in their request.

What is the for duration in your alert rule?
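
Independent of that, one thing worth checking: the failing POST doesn’t set a Content-Type header, so curl sends the -d payload as application/x-www-form-urlencoded, which can also surface as the generic “bad request data”. A hedged sketch of the retry, which additionally strips the server-assigned id and updated fields (whether those must be removed depends on the Grafana version):

jq 'del(.id, .updated)' alert-rule.json \
  | curl -X POST \
      -H "Content-Type: application/json" \
      http://some:password@somegrafana/api/v1/provisioning/alert-rules \
      -d @-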

Hi @melori.arellano,
Thanks for the reply.
I was able to make progress provisioning my alert rules via YAML / ConfigMap.
Doing a quick test now, I’m still getting the error. I configured a very simple alert in the UI, exported it to JSON, and then tried to POST it back. The JSON I get is below. I think I originally used a for of 0, but in this test I used the default 5m.
Thanks,
Tom

Alert rule JSON:

{
  "id": 3,
  "uid": "3T0zXkhVz",
  "orgID": 1,
  "folderUID": "ODXFj154k",
  "ruleGroup": "test",
  "title": "test",
  "condition": "C",
  "data": [
    {
      "refId": "A",
      "queryType": "",
      "relativeTimeRange": {
        "from": 600,
        "to": 0
      },
      "datasourceUid": "prometheus",
      "model": {
        "editorMode": "code",
        "expr": "vector(0)",
        "hide": false,
        "intervalMs": 1000,
        "legendFormat": "__auto",
        "maxDataPoints": 43200,
        "range": true,
        "refId": "A"
      }
    },
    {
      "refId": "B",
      "queryType": "",
      "relativeTimeRange": {
        "from": 600,
        "to": 0
      },
      "datasourceUid": "-100",
      "model": {
        "conditions": [
          {
            "evaluator": {
              "params": [],
              "type": "gt"
            },
            "operator": {
              "type": "and"
            },
            "query": {
              "params": [
                "B"
              ]
            },
            "reducer": {
              "params": [],
              "type": "last"
            },
            "type": "query"
          }
        ],
        "datasource": {
          "type": "__expr__",
          "uid": "-100"
        },
        "expression": "A",
        "hide": false,
        "intervalMs": 1000,
        "maxDataPoints": 43200,
        "reducer": "last",
        "refId": "B",
        "type": "reduce"
      }
    },
    {
      "refId": "C",
      "queryType": "",
      "relativeTimeRange": {
        "from": 600,
        "to": 0
      },
      "datasourceUid": "-100",
      "model": {
        "conditions": [
          {
            "evaluator": {
              "params": [
                0
              ],
              "type": "gt"
            },
            "operator": {
              "type": "and"
            },
            "query": {
              "params": [
                "C"
              ]
            },
            "reducer": {
              "params": [],
              "type": "last"
            },
            "type": "query"
          }
        ],
        "datasource": {
          "type": "__expr__",
          "uid": "-100"
        },
        "expression": "B",
        "hide": false,
        "intervalMs": 1000,
        "maxDataPoints": 43200,
        "refId": "C",
        "type": "threshold"
      }
    }
  ],
  "updated": "2023-01-04T19:05:31Z",
  "noDataState": "NoData",
  "execErrState": "Error",
  "for": "5m"
}