How can Grafana OSS visualize historical metrics when Prometheus only stores real-time data?

Version 12.2.0 (commit: 92f1fba9b4b6700328e99e97328d6639df8ddc3d, branch: release-12.2.0)

Hello Community :waving_hand:,

We are working on an implementation where metrics are fetched from external monitoring platforms such as Datadog, Dynatrace and New Relic.

We are able to retrieve historical data (for example, the last 48 hours) from the source systems even after a period of downtime. However, we are facing a challenge when visualizing this data through Grafana OSS using Prometheus as the metric store.

Below is our current scenario:

  • Our application can fetch past metrics (e.g., last 2 days) from vendor APIs even when the system is restarted after downtime.
  • The data is transformed and forwarded to Prometheus through an OTEL collector.
  • Prometheus accepts and stores only real-time incoming data, not old historical samples.
  • Any backfilled metrics with timestamps older than the current time window are ignored or discarded by Prometheus.
  • Because of this, Grafana dashboards only show the latest data points.
  • Historical ranges do not appear even though we fetched them successfully.
  • The requirement is to visualize older data, even when it was not scraped or stored by Prometheus at the time of occurrence.
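The rejection behaviour described in the bullets above can be sketched as a simplified model (illustrative only, not Prometheus source code; the real TSDB applies this per series inside the head block):

```python
# Illustrative sketch: by default, Prometheus's TSDB rejects any sample older
# than the newest sample already appended for that series. The
# storage.tsdb.out_of_order_time_window setting relaxes that by a fixed
# duration relative to the newest sample.

def tsdb_accepts(sample_ts: float, newest_ts: float,
                 out_of_order_window: float = 0.0) -> bool:
    """Return True if a sample at `sample_ts` (unix seconds) would be appended.

    `newest_ts` is the timestamp of the latest sample in the series;
    `out_of_order_window` mirrors storage.tsdb.out_of_order_time_window.
    """
    return sample_ts >= newest_ts - out_of_order_window

NOW = 1_700_000_000          # some "current" ingestion time
TWO_DAYS = 2 * 24 * 3600

# A backfilled sample from 2 days ago:
print(tsdb_accepts(NOW - TWO_DAYS, NOW))              # → False (default config)
print(tsdb_accepts(NOW - TWO_DAYS, NOW, TWO_DAYS))    # → True  (window = 2d)
```

This is why the historical samples silently disappear: they are valid data, just older than what the default ingestion rule allows.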

We would like your guidance on:

  1. How Grafana OSS can be used to visualize historical metric windows under such conditions.
  2. Whether Grafana is capable of rendering data that is not persisted in Prometheus but fetched dynamically from external APIs.
  3. Any recommended architecture or best practice that supports historic timelines without Prometheus pre-retention.
  4. How others have handled late ingestion or missed time windows for metrics while still allowing historical visualization.
  5. Any native or community-supported methods to overlay historical data fetched post-factum.

We would greatly appreciate insights, recommended patterns, or real-world implementation references from the community. :folded_hands:

Thank you.

It looks like you didn’t configure your Prometheus for this use case.

I guess this will help (maybe together with other config options):

# When out_of_order_time_window is greater than 0, it also affects experimental agent. It allows
# the agent's WAL to accept out-of-order samples that fall within the specified time window relative
# to the timestamp of the last appended sample for the same series.
[ out_of_order_time_window: <duration> | default = 0s ]

Hi @jangaraj , thank you. I tried that out, put the following in our prometheus.yml, and restarted Prometheus. But it didn’t work :frowning: , there has been no change in behaviour. Is there any other way?

storage:
  tsdb:
    out_of_order_time_window: 2d

Check the docs and other config options, and read the error logs… Your target is to configure your Prometheus to accept historic data. It’s not a Grafana issue: Grafana will just read whatever is in Prometheus.
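One way to confirm that samples are being rejected is to ask Prometheus itself via its HTTP query API. A hedged sketch (the address assumes a default local instance; the counter name is an assumption based on common Prometheus builds, so verify it against your version’s /metrics page):

```python
# Build an instant-query URL for Prometheus's HTTP API (/api/v1/query) and
# inspect a rejection counter. If the counter grows while you push historical
# data, the TSDB is discarding it as too old.
from urllib.parse import urlencode

PROMETHEUS = "http://localhost:9090"  # assumption: default local address

def instant_query_url(expr: str, base: str = PROMETHEUS) -> str:
    """Return the URL for an instant query against the Prometheus HTTP API."""
    return f"{base}/api/v1/query?{urlencode({'query': expr})}"

# Counter of samples dropped because their timestamp was too old (name is an
# assumption; check your instance's /metrics page for the exact metric):
url = instant_query_url("prometheus_target_scrapes_sample_out_of_bounds_total")
print(url)

# Fetch it with curl, a browser, or urllib.request.urlopen(url) against a
# running Prometheus and look at result values over time.
```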

Ok @jangaraj . Actually, I am OK with using any other TSDB or any other mechanism besides Prometheus. Are there any features or mechanisms by which we can have this kind of historical data displayed in Grafana on the fly?

OK, so use your favorite mature TSDB that is supported in Grafana and you should be good. IMHO, if you are good with SQL, then some SQL TSDB will be a good choice: you will have mature SQL, which is much more powerful than PromQL. But you will need to sacrifice OTEL, because there is no native SQL exporter.
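The key property of the SQL route is that SQL stores accept whatever timestamp you insert, so two-day-old points are no different from live ones. A minimal sketch using SQLite as a stand-in for a real SQL TSDB such as TimescaleDB or QuestDB (table and metric names are invented for illustration):

```python
# SQLite stand-in for a SQL TSDB: historical timestamps are just ordinary
# column values, so backfilling is a plain INSERT.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metrics (
                    ts    INTEGER NOT NULL,   -- unix seconds
                    name  TEXT    NOT NULL,
                    value REAL    NOT NULL)""")

now = int(time.time())
# Hourly samples covering the last 2 days, as fetched from a vendor API:
rows = [(now - 172800 + i * 3600, "vendor_cpu_usage_percent", 40.0 + i)
        for i in range(48)]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", rows)

# Grafana's SQL data sources issue time-range queries in the same shape:
count, oldest = conn.execute(
    "SELECT COUNT(*), MIN(ts) FROM metrics WHERE ts >= ?",
    (now - 172800,)).fetchone()
print(count)   # → 48: every historical point was stored and is queryable
```

With a real SQL TSDB you would point Grafana’s PostgreSQL/MySQL data source at it and use `$__timeFilter()`-style macros in the panel query.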

Prometheus was designed for real-time, regular data ingestion, not for backfilling, so the default config is prepared for that. But you should be able to enable it.
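If the out-of-order window keeps fighting you, Prometheus also has an offline backfill route: `promtool tsdb create-blocks-from openmetrics`, which turns a text file of timestamped samples into TSDB blocks. A hedged sketch of producing such a file from fetched history (metric name, labels, and file paths are invented for illustration; see the Prometheus backfilling docs for the exact workflow):

```python
# Render historical (timestamp, value) pairs as OpenMetrics text, which
# promtool can convert into TSDB blocks offline.
import time

def to_openmetrics(series_name: str, samples: list,
                   labels: str = "") -> str:
    """Render (timestamp_seconds, value) pairs as OpenMetrics gauge lines."""
    lines = [f"# HELP {series_name} backfilled from an external API",
             f"# TYPE {series_name} gauge"]
    for ts, value in samples:
        lines.append(f"{series_name}{labels} {value} {ts}")
    lines.append("# EOF")          # required terminator for OpenMetrics
    return "\n".join(lines) + "\n"

now = int(time.time())
history = [(now - 172800 + i * 3600, 40.0 + i) for i in range(48)]  # last 2d
text = to_openmetrics("vendor_cpu_usage_percent", history,
                      '{source="datadog"}')

with open("backfill.om", "w") as fh:
    fh.write(text)

# Then, offline (promtool ships with Prometheus):
#   promtool tsdb create-blocks-from openmetrics backfill.om ./blocks
# and move the generated block directories into Prometheus's data directory
# (see the "backfilling" section of the Prometheus storage docs for caveats).
```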

You will need to decide what your “best” solution is, based on your needs, experience, time, budget, …
