Hi Grafana community!
I’m building an App Plugin (incident-first reliability / RCA style workflow) that correlates metrics, logs, traces, and Kubernetes context into a guided investigation view.
The plugin includes:

- A backend (Go) that aggregates data and builds incident timelines
- A frontend app UI inside Grafana
- A TestData-based “chaos / simulation mode” for safe testing (deterministic, synthetic, real-time streams via SSE)
- Clear labeling of test vs. prod mode (no real telemetry is queried in test mode)
My current challenge
I’m not able to test against real production telemetry, which means:

- I rely on Grafana TestData and synthetic events for most testing
- Some incident lists / lifecycle flows are generated or simulated
- End-to-end behavior is realistic, but the data source is not live production Prometheus/Loki/Tempo
I’ve followed the Grafana plugin docs and TestData guidance, but I want to confirm best practices with experienced plugin authors.
My questions
- Is it acceptable to submit a plugin for review when:
  - Testing is done using Grafana TestData / synthetic sources
  - Production data sources are supported but not yet exercised with real prod traffic?
- Are reviewers generally okay with:
  - Deterministic synthetic incidents
  - An explicit “test/simulation mode” clearly labeled in the UI and docs?
- Is there a recommended staging or non-prod testing pattern for app plugins that require complex observability data, before catalog publication?
I want to make sure I’m aligning with Grafana’s expectations and not over-engineering pre-publish testing.
Any guidance from Grafana team members or plugin authors would be hugely appreciated!
Thanks!