InfluxDB and Grafana retrieving fewer rows than inserted

Does each of these 1000 rows have a unique timestamp in the CSV file?

What does this give you?

from(bucket: "datapoints") //or whatever your bucket name is called
|> range(start: 1970-01-01T00:00:00Z, stop: now())
|> group()
|> count()

@vaishaliburge Given that the query with the group() function gives you 1515 results, does that concur with what you were expecting?

The CSV that you imported appears to have 3 fields: value, extra_column, and category, which appeared as 3 separate tables in your original query. The group() function combines these 3 into one table.
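To make the arithmetic behind that concrete, here is a minimal Python sketch of how a grouped count relates to per-field counts. It assumes (as the numbers in this thread suggest) that each of the three fields stored the same number of points; the field names are taken from the posts above.

```python
# InfluxDB returns one table per field; group() merges them into a single
# table, so count() after group() is the sum of the per-field row counts.
fields = ["value", "extra_column", "category"]  # field names from the thread
rows_per_field = 505                            # hypothetical per-field count

grouped_count = len(fields) * rows_per_field
print(grouped_count)  # 1515, matching the observed result
```

In other words, a grouped count of 1515 across 3 fields implies about 505 points per field, not 1000.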

How is it grouping to a total of 1515? It should be 1000 per field, since I inserted 1000 rows, right? Or am I understanding InfluxDB wrongly? Does it mean only 505 rows per field have been inserted?

My conf, which I recently changed to:
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 500
metric_buffer_limit = 500
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = "msc-burge-influxdb"
omit_hostname = false

Please post the content of your 2 CSV files, redacting any sensitive data. We can't see what is in them remotely; I usually do have telepathic powers, but today they are a bit low :winking_face_with_tongue:

Also, as @grant2 said: after you ingest both CSV files and run a straight raw query with the following in the InfluxDB UI

|> range(start: 1970-01-01T00:00:00Z, stop: now())

and you do not get 1000 rows, then the issue is not Grafana.

Also, maybe Telegraf + InfluxDB is overkill for what you are trying to do here. It seems like putting your .csv into Google Sheets would be easier, and then using the Google Sheets datasource.


Or even CSV + the Infinity datasource (like in this demo) would fit better.


It could be that the files are not static; they always get updated by another system.

Yes, I will try that out and paste some of my CSV data here, or attach it. Give me some time; I will get back with updates. Hopefully something works :slight_smile: :crossed_fingers:

Where is this "Google Sheets" plugin available, in Grafana or InfluxDB? I cannot see it in either of them; any more information on this would be helpful :slight_smile:

It’s not a plugin for InfluxDB. It’s a datasource for Grafana. See my link above from yesterday.

Let’s go back to basics…you have a .CSV file, correct? Does it get updated every hour, every day, etc.? Is your goal to just get that data into Grafana so you can view it, or do you intend to do alerting on the data?

The Infinity datasource (as @ebabeshko mentioned) is perfect for doing just this. More here:

It is a static CSV file from 2024.07.02 till 2024.12.02. I just want to analyse different TSDBs and have better visuals via Grafana, and I'm guessing InfluxDB is not good for stored data. But I will try out the option you suggested :slight_smile:

Here is a piece of data from my CSV:

  1. data:

2024-02-07T11:55:52Z,4.52,NA,6
2024-02-07T12:00:58Z,4.54,NA,6
2024-02-07T12:06:04Z,4.54,NA,6
I converted these to UTC and am trying to insert them into InfluxDB; there are 999 rows in total.

Can you tell anything from this data?
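One quick local check, before blaming InfluxDB or Grafana, is to count duplicate timestamps in the CSV itself. A minimal Python sketch, assuming the column order shown in the sample above (timestamp first); in practice you would read the real file instead of the inline string:

```python
import csv
from collections import Counter
from io import StringIO

# Sample rows from the thread; replace StringIO(data) with open("mydata.csv").
data = """2024-02-07T11:55:52Z,4.52,NA,6
2024-02-07T12:00:58Z,4.54,NA,6
2024-02-07T12:06:04Z,4.54,NA,6
"""

timestamps = Counter(row[0] for row in csv.reader(StringIO(data)))
duplicates = {ts: n for ts, n in timestamps.items() if n > 1}
print(len(timestamps), duplicates)  # 3 unique timestamps, no duplicates
```

If `duplicates` is non-empty for the full file, those rows will overwrite each other on write (given identical tags), which would explain rows going missing.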

As I mentioned earlier, the Infinity plugin would probably fit your needs if you can provide a URL to your CSV file or put the data directly inline:

The problem was fixed when I added a uniq_id tag for every timestamp. I don't know why it overlapped even though I had unique timestamps, but thanks to everyone for the constant support and help. I will write to you all if I encounter a problem with Grafana :slight_smile:

That’s a solution, which doesn’t scale - there will be a problem with high cardinality generally. Of course you want to see a problem, when you have tiny dataset of 1k records. Uniq id indicates a problem with duplicated timeseries - there will be 1k of records inserted, but they will be deduplicatedon the InfluxDB level, so you can query only those deduplicated records - that’s exactly yours symptoms.

As I checked, there are no duplicate times in my CSV file, yet it still caused the issue. So what you are suggesting here is to check my cardinality with this query?
import "influxdata/influxdb/schema"

cardinalityByTag(bucket: "your-bucket")

Did I get you right?
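You can also estimate cardinality straight from the CSV, without querying InfluxDB at all, by counting distinct values per column. A Python sketch assuming the column layout from the sample posted earlier (timestamp, value, extra_column, category; the column names are my guess from the thread):

```python
import csv
from io import StringIO

# Sample rows from the thread; replace StringIO(data) with open("mydata.csv").
data = """2024-02-07T11:55:52Z,4.52,NA,6
2024-02-07T12:00:58Z,4.54,NA,6
2024-02-07T12:06:04Z,4.54,NA,6
"""

columns = ["timestamp", "value", "extra_column", "category"]  # assumed layout
rows = list(csv.reader(StringIO(data)))

# Distinct values per column: a high count in a column used as a tag means
# high series cardinality; a low count in the timestamp column means collisions.
cardinality = {name: len({row[i] for row in rows}) for i, name in enumerate(columns)}
print(cardinality)  # {'timestamp': 3, 'value': 2, 'extra_column': 1, 'category': 1}
```

If the timestamp count comes out lower than the row count on the full file, that is exactly the deduplication problem described above.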

Okay, I checked extra_column, which had NA and a string value with high cardinality. So I removed it, as it was not important; it was just empty. Even after that the results are the same: it retrieves 498 out of the 998 rows inserted :smiling_face_with_tear:

Please provide a reproducible example of what you are doing.



These are the things I tried without the NA column, which had a string value,
and when I checked the cardinality