InfluxDB and Grafana retrieving fewer rows than inserted

What are you trying to achieve?

I inserted 997 metrics into InfluxDB and want to visualize and count them correctly in Grafana.


How are you trying to achieve it?

  • Inserted 997 data points into the testUTC_1krows bucket in InfluxDB.
  • Queried using Flux in Grafana:
from(bucket: "testUTC_1krows")
|> range(start: 2024-02-07T12:55:00Z, stop: 2024-02-07T15:49:10Z)
|> count()

What happened?

  • The query only returns 505 metrics instead of 997.

What did you expect to happen?

  • The count query in Grafana and InfluxDB should return 997 metrics instead of 505.
  • No data loss or filtering should occur.

Configuration Details

  • InfluxDB Bucket: testUTC_1krows
  • Timezone Used: UTC
  • Grafana Timezone: (Browser Time / UTC)
  • Server Timezone: Confirmed as Etc/UTC (+0000)
  • Command output:
date -u
Sun Mar 16 12:02:40 UTC 2025
timedatectl
Time zone: Etc/UTC (UTC, +0000)

Errors in Grafana UI or Logs?

  • No direct errors, but incorrect count() results.
  • Possible incorrect handling of time ranges or missing data points.

Did you follow any online instructions?

Yes, I followed:

  • InfluxDB count query documentation
  • Grafana Time-Series Data Debugging

Note: the data is stored CSV, not continuous incoming data. Any help would be appreciated. I'd also like to know whether InfluxDB and Grafana work for such data, or only for live incoming data?


If you run the same Flux query in the InfluxDB UI, does it give you a result that differs from Grafana? Is all the data you inserted within the UTC time period that you specify in the query (2024-02-07T12:55:00Z … 2024-02-07T15:49:10Z)?


How? Are you sure that all 997 data points were inserted successfully? Are there any duplicates, which would be deduplicated at the InfluxDB level?
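For context on the deduplication point: InfluxDB merges any two points that share the same measurement, tag set, and timestamp, keeping the later write. With Telegraf's CSV input and no tags configured, duplicated timestamps in the file therefore collapse silently. A minimal Python sketch of that merge rule, with made-up rows:

```python
# Sketch of InfluxDB's duplicate-point rule: points that share
# (measurement, tag set, timestamp) merge into one; last write wins.
def dedupe(points):
    merged = {}
    for measurement, tags, ts, fields in points:
        key = (measurement, tuple(sorted(tags.items())), ts)
        merged[key] = fields  # a later point overwrites the earlier one
    return merged

rows = [
    ("file", {}, "2024-02-07T12:55:00Z", {"value": 1.0}),
    ("file", {}, "2024-02-07T12:55:00Z", {"value": 2.0}),  # duplicate key
    ("file", {}, "2024-02-07T12:55:01Z", {"value": 3.0}),
]
stored = dedupe(rows)
print(len(rows), "rows written,", len(stored), "points stored")
# → 3 rows written, 2 points stored
```

So "batch of 998 metrics written" in the Telegraf log only confirms the write succeeded, not that 998 distinct points were stored.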

2025-03-16T17:10:02Z D! [outputs.influxdb_v2] Wrote batch of 998 metrics in 12.074623ms
2025-03-16T17:10:02Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 1010 metrics
This confirms it, right?

It gives the same answer. I also tried with 2k rows, and that shows only 1459, both in Flux and in Grafana.

How many rows come back when the same query is run outside of Grafana, in the InfluxDB web UI?

Now show us the query you ran

same as grafana :confused:

Please share the flux query



Have a look and let me know if I'm missing anything. This is 2000 rows; it inserts 1997 rows in this case, but when queried between the first and last timestamp it only retrieved 1497.

What are the datetimes on the missing rows?

Also, what do you get if you use a vanilla |> count() instead of what you have?


I'm not able to figure out which rows are missing, but with just count() it is the same :confused:


Now show us how the InfluxDB bucket was populated. What is the data source? Did you use Telegraf to populate it, or Python?

I'm using Telegraf to send the data, and here is the conf file:
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 2500
metric_buffer_limit = 20000
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = "msc-burge-influxdb"
omit_hostname = false

[[outputs.influxdb_v2]]
urls = ["http://141.26.156.220:8086"]
## Token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "University of Koblenz"
## Destination bucket to write into.
bucket = "datapoints"

[[inputs.file]]
files = ["/etc/telegraf/1K_row_data_part_0001.csv", "/etc/telegraf/1K_row_data_part_0002.csv"]
data_format = "csv"
csv_header_row_count = 1
csv_column_names = ["timestamp", "value", "extra_column", "category"]
csv_timestamp_column = "timestamp"
csv_column_types = ["timestamp", "float", "string", "int"]
csv_timestamp_format = "2006-01-02 15:04:05.999Z07:00"

This is for the 1k rows with the timestamps converted to UTC,


and .conf file
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1010
metric_buffer_limit = 1010
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = "msc-burge-influxdb"
omit_hostname = false

[[outputs.influxdb_v2]]
urls = ["http://141.26.156.220:8086"]
## Token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "University of Koblenz"
## Destination bucket to write into.
bucket = "testUTC_1krows"

[[inputs.file]]
files = ["/etc/telegraf/1K_row_data_part_0001UTC.csv"]
data_format = "csv"
csv_header_row_count = 1
csv_column_names = ["timestamp", "value", "extra_column", "category"]
csv_timestamp_column = "timestamp"
csv_column_types = ["timestamp", "float", "string", "int"]
csv_timestamp_format = "2006-01-02T15:04:05Z"
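One thing worth checking: the format 2006-01-02T15:04:05Z parses timestamps at whole-second precision, so any CSV rows that fall within the same second end up with identical timestamps and collapse into a single point, which matches the symptom of count() coming back low. A rough self-contained Python check (the demo data here is made up; for the real check, pass your open CSV file instead of the StringIO):

```python
import csv
import io
from collections import Counter

# Rows sharing a timestamp (at the precision of csv_timestamp_format)
# collapse into one InfluxDB point, so count() drops below the row count.
def duplicate_report(lines):
    rows = list(csv.DictReader(lines))
    counts = Counter(r["timestamp"] for r in rows)
    return len(rows), len(counts)  # rows written vs. distinct timestamps

# Demo data with one duplicated second; for the real check, use e.g.
# open("/etc/telegraf/1K_row_data_part_0001UTC.csv") instead.
demo = io.StringIO(
    "timestamp,value,extra_column,category\n"
    "2024-02-07T12:55:00Z,1.0,a,1\n"
    "2024-02-07T12:55:00Z,2.0,b,2\n"
    "2024-02-07T12:55:01Z,3.0,c,3\n"
)
written, stored = duplicate_report(demo)
print(f"{written} rows written, {stored} points stored")
# → 3 rows written, 2 points stored
```

If the "distinct timestamps" figure for your file is close to 505, duplicate seconds are the cause, and keeping fractional seconds in the data (as in the .999 format of the first config) would avoid the collisions.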

We'd better stick to the 1000-row case to keep it simple, rather than confusing things with the 2k case too. So for 1000 rows it displays just 505.

Are there any values in Influx Data Explorer when you select these?

Yes, the same 505 as the count.

From your CSV files, please extract the missing rows and post them here, hiding any sensitive data.

Could you please guide me on how that can be done? I have run out of options and am confused now.

We need this from you. Alternatively, compare the file on disk to the data in InfluxDB using:

import "csv"

csv.from(file: "/path/to/example.csv")
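Note that csv.from(file: ...) expects annotated CSV, and the path must be readable by the influxd process itself. If that is awkward, a hedged alternative is a small Python diff: export the timestamps InfluxDB actually returns (e.g. via the Data Explorer's CSV download) and compare them with the source file. The data below is a stand-in for the real files:

```python
import csv
import io

# Return source rows whose timestamp is absent from the exported set --
# these are the "missing" points the thread is looking for.
def missing_rows(source_lines, exported_timestamps):
    have = set(exported_timestamps)
    return [r for r in csv.DictReader(source_lines) if r["timestamp"] not in have]

# Demo stand-ins; replace with open("source.csv") and the timestamps
# exported from InfluxDB for the real comparison.
source = io.StringIO(
    "timestamp,value,extra_column,category\n"
    "2024-02-07T12:55:00Z,1.0,a,1\n"
    "2024-02-07T12:55:05Z,2.0,b,2\n"
)
exported = ["2024-02-07T12:55:00Z"]  # timestamps InfluxDB actually returned

missing = missing_rows(source, exported)
for row in missing:
    print(row["timestamp"], row["value"])
```

Inspecting the missing rows this way (do their timestamps repeat? do they share a second?) should show whether the shortfall is deduplication or something else.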