How to reduce Grafana alert interval below 1s

  • Grafana version 11.1.3

  • Have an alert trigger as soon as possible after a value drops below a threshold; this should occur within milliseconds of the data being written to InfluxDB. Currently our dashboards can auto-refresh at 100ms intervals, so I’m hoping to run the alerting rule that frequently as well.

  • Currently setting unified_alerting.min_interval to 100ms

  • Getting an error at startup saying “Error: ✗ value of setting ‘min_interval’ should be greater than the base interval (10s)” (see the note after this list)

  • I expected the UI to let me set alerts to trigger at 100ms intervals.

  • [unified_alerting]
    # Minimum interval to enforce between rule evaluations. Rules will be adjusted if they are less than this value or if they are not a multiple of the scheduler interval (10s). Higher values can help with resource management as we'll schedule fewer evaluations over time.
    # The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
    min_interval = 100ms
    
  • logger=settings t=2025-10-22T15:14:03.93415226Z level=info msg="Starting Grafana" version=11.1.3 commit=beac3bdbcb34e68b53538cac5734ef90344d3122 branch=HEAD compiled=2025-10-22T15:14:03Z
    Error: ✗ value of setting 'min_interval' should be greater than the base interval (10s)
    
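The way I read the startup error together with the config comment above, min_interval is validated against the scheduler’s fixed 10s base tick, so with stock settings the lowest value that passes validation is the base interval itself:

    [unified_alerting]
    # With the default 10s scheduler base interval, this is the lowest
    # value that passes startup validation.
    min_interval = 10s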

Thanks in advance!

P.S. I know this is liable to cause false positives, and I know this is an insanely quick speed to want to alert at. Alerting very quickly after thresholds are breached is a requirement of the system being built.

Hi,

Quick update: by enabling the feature flag “configurableSchedulerTick” and setting “scheduler_tick_interval” to 1000ms, I can drop the “min_interval” config for alerting to 1000ms. Going any lower than this produces a runtime panic of integer divide by zero (stack trace below).
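For reference, this is roughly what the relevant grafana.ini sections look like for the workaround. The section and key names come from the settings named above, but treat the exact layout as a sketch rather than verified config:

    [feature_toggles]
    # Allows scheduler_tick_interval to be changed from the hard-coded 10s default.
    enable = configurableSchedulerTick

    [unified_alerting]
    # Base tick of the alert scheduler; values below 1s trigger the panic below.
    scheduler_tick_interval = 1000ms
    min_interval = 1000ms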

logger=provisioning.datasources t=2025-10-22T15:57:05.017235731Z level=info msg="inserting datasource from configuration" name=InfluxDB_v2_Flux uid=P5697886F9CA74929
logger=app-registry t=2025-10-22T15:57:05.023986273Z level=info msg="app registry initialized"
t=2025-10-22T15:57:05.023985695Z level=info caller=logger.go:214 time=2025-10-22T15:57:05.023982596Z msg="App initialized" app=playlist
t=2025-10-22T15:57:05.023976862Z level=info caller=logger.go:214 time=2025-10-22T15:57:05.023973905Z msg="App initialized" app=plugins
logger=provisioning.alerting t=2025-10-22T15:57:05.036121351Z level=info msg="starting to provision alerting"
logger=provisioning.alerting t=2025-10-22T15:57:05.036128386Z level=info msg="finished to provision alerting"
logger=provisioning.dashboard t=2025-10-22T15:57:05.036667435Z level=info msg="starting to provision dashboards"
logger=sqlstore.transactions t=2025-10-22T15:57:05.060609267Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
logger=plugin.angulardetectorsprovider.dynamic t=2025-10-22T15:57:05.09809199Z level=info msg="Patterns update finished" duration=98.88483ms
panic: runtime error: integer divide by zero

goroutine 2090 [running]:
github.com/grafana/grafana/pkg/services/ngalert/schedule.(*schedule).processTick(0x17defe48?, {0x386ed00?, 0xc0021ce870?}, 0x17d78400?, {0x17d78400, 0xee08af5d1, 0x142a6b00})
        github.com/grafana/grafana/pkg/services/ngalert/schedule/schedule.go:277 +0x1e7a
github.com/grafana/grafana/pkg/services/ngalert/schedule.(*schedule).schedulePeriodic(0xc0017a4840, {0x386ed00?, 0xc0029b80a0?}, 0xc0026d1c20)
        github.com/grafana/grafana/pkg/services/ngalert/schedule/schedule.go:258 +0x187
github.com/grafana/grafana/pkg/services/ngalert/schedule.(*schedule).Run(0xc0017a4840, {0x386ed00, 0xc0029b80a0})
        github.com/grafana/grafana/pkg/services/ngalert/schedule/schedule.go:180 +0x193
github.com/grafana/grafana/pkg/services/ngalert.(*AlertNG).Run.func3()
        github.com/grafana/grafana/pkg/services/ngalert/ngalert.go:577 +0x2e
golang.org/x/sync/errgroup.(*Group).Go.func1()
        golang.org/x/sync@v0.17.0/errgroup/errgroup.go:93 +0x50
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 225
        golang.org/x/sync@v0.17.0/errgroup/errgroup.go:78 +0x95

Looking through the code in the stack trace, it seems the tick interval is read as seconds and then converted to an int64, so a tick interval of less than a second is truncated to 0 and later used as a divisor.
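A minimal Go sketch of that truncation (not the actual Grafana code; the modulo is just a stand-in for however the scheduler buckets ticks):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        now := time.Now()
        for _, interval := range []time.Duration{10 * time.Second, time.Second, 100 * time.Millisecond} {
            // Duration.Seconds() returns a float64; converting to int64 truncates,
            // so anything under one second becomes 0.
            base := int64(interval.Seconds())
            fmt.Printf("interval=%v -> base=%d\n", interval, base)

            // With base == 0 this modulo panics: "runtime error: integer divide by zero".
            fmt.Println(now.Unix() % base)
        }
    }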

Yeah. That’s usually a task for a streaming approach (e.g. the TICK stack has Kapacitor for that), not for a “slow” pulling approach (which is what Grafana does: it pulls/queries InfluxDB at regular intervals and then reacts based on the query result). So I would say use a more suitable tool for this real-time (&lt;1s) alerting.
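To make the push-vs-pull distinction concrete, here is a hypothetical streaming-style check in Go; the Point type, channel, and threshold are made up for illustration, and a real deployment would use something like Kapacitor instead:

    package main

    import "fmt"

    // Point is a hypothetical ingested sample.
    type Point struct {
        Value float64
    }

    // watch evaluates every sample the moment it arrives; there is no polling
    // interval anywhere, so latency is bounded by ingest, not by a schedule.
    func watch(points <-chan Point, threshold float64, alert func(Point)) {
        for p := range points {
            if p.Value < threshold {
                alert(p)
            }
        }
    }

    func main() {
        points := make(chan Point, 2)
        points <- Point{Value: 2.0}
        points <- Point{Value: 0.5} // drops below threshold -> alert fires immediately
        close(points)
        watch(points, 1.0, func(p Point) { fmt.Println("alert: value", p.Value) })
    }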
