Does each of these 1000 rows have a unique timestamp in the CSV file?
What does this give you?
from(bucket: "datapoints") //or whatever your bucket name is called
|> range(start: 1970-01-01T00:00:00Z, stop: now())
|> group()
|> count()
@vaishaliburge Given that the query with the group() function gives you 1515 results, does that match what you were expecting?
The csv that you imported appears to have 3 fields: value, extra_column, and category, which appeared as 3 separate tables in your original query. The group() function combines these 3 into one table.
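If you want to see the count per field instead of one combined total, a small variation of the same query should show it (a sketch only - same bucket and time range as above, field names assumed from your csv):
from(bucket: "datapoints")
|> range(start: 1970-01-01T00:00:00Z, stop: now())
|> group(columns: ["_field"]) //one result table per field instead of one merged table
|> count()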
How is it grouping to a total of 1515? It should be 1000 per field since I inserted 1000 rows, right, or am I understanding InfluxDB wrongly? Does it mean only 505 per field have been inserted?
My conf, which I recently changed:
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 500
metric_buffer_limit = 500
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
hostname = "msc-burge-influxdb"
omit_hostname = false
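For context, the CSV ingestion side of the same telegraf.conf would need something along these lines (a rough sketch only - the file path, column names, timestamp format, and output credentials below are placeholders, not the real values):
[[inputs.file]]
  files = ["/path/to/mydata.csv"]
  data_format = "csv"
  csv_header_row_count = 0
  csv_column_names = ["time", "value", "extra_column", "category"]
  csv_timestamp_column = "time"
  csv_timestamp_format = "2006-01-02T15:04:05Z07:00"

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "my-org"
  bucket = "datapoints"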
Please post the content of your 2 csv files, modifying any sensitive data. We can't see what is in them remotely; I usually do have telepathic powers, but today they are a bit low.
Also, as @grant2 said: if, after you ingest both csv files, you do a straight raw query with the following in the InfluxDB UI
|> range(start: 1970-01-01T00:00:00Z, stop: now())
and you do not get 1000 rows, then the issue is not Grafana.
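Spelled out in full, that raw check would be something like this (the bucket name here is just an example - use yours); the raw data view in the UI then shows how many rows actually come back:
from(bucket: "datapoints")
|> range(start: 1970-01-01T00:00:00Z, stop: now())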
Also, maybe Telegraf + InfluxDB are overkill for what you are trying to do here. It seems like putting your .csv into Google Sheets and then using the Google Sheets datasource would be easier.
or even CSV + Infinity (like in this demo) would fit better
It could be that the files are not static and keep getting updated by another system.
Yes, I will try that out and paste some of my csv data here, or attach it. Give me some time; I will get back with updates. Hopefully something works.
Where is this "Google Sheets" plugin available, in Grafana or InfluxDB? I cannot see it in either of them; any more information on this would be helpful.
It's not a plugin for InfluxDB. It's a datasource for Grafana. See my link above from yesterday.
Let's go back to basics… you have a .CSV file, correct? Does it get updated every hour, every day, etc.? Is your goal to just get that data into Grafana so you can view it, or do you intend to do alerting on the data?
The Infinity datasource (as @ebabeshko mentioned) is perfect for doing just this. More here:
It is a static csv file from 2024.07.02 till 2024.12.02. I just want to analyse different TSDBs and have better visuals via Grafana, and I'm guessing InfluxDB is not good for stored data. But I will try out the option you suggested.
Here is a piece of data from my csv:
2024-02-07T11:55:52Z,4.52,NA,6
2024-02-07T12:00:58Z,4.54,NA,6
2024-02-07T12:06:04Z,4.54,NA,6
I converted these to UTC and am trying to insert them into Influx; there are 999 rows in total.
Can you tell anything from this data?
As I mentioned earlier, the Infinity plugin would probably fit your needs if you can provide it a URL to your CSV file or put the data directly inline:
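For instance, the inline data can simply be the CSV content itself - reusing the sample rows you posted above (the header row here is an assumption about your file):
time,value,extra_column,category
2024-02-07T11:55:52Z,4.52,NA,6
2024-02-07T12:00:58Z,4.54,NA,6
2024-02-07T12:06:04Z,4.54,NA,6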
The problem was solved when I added a uniq_id tag for every timestamp. I don't know why it overlapped even though I had unique timestamps. Thanks to all for the constant support and help; I will write to you all if I encounter a problem with Grafana.
That's a solution which doesn't scale - there will generally be a problem with high cardinality. Of course you won't see that problem when you have a tiny dataset of 1k records. Needing a uniq_id indicates a problem with duplicated time series - 1k records are inserted, but they are deduplicated at the InfluxDB level, so you can only query the deduplicated records - that's exactly your symptom.
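In line protocol terms it looks like this (the measurement and field names below are made up just for illustration): two points that belong to the same series - same measurement and tag set - and carry the same timestamp collapse into one, and only the last write survives.
# same measurement, no tags, same timestamp -> same point
csvdata value=4.52,extra_column="NA",category=6i 1700000000000000000
csvdata value=4.54,extra_column="NA",category=6i 1700000000000000000
# after deduplication only the 4.54 point remains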
As I checked, there are no duplicate timestamps in my csv file, yet it still caused the issue. So what you are suggesting here is to check my cardinality with this query?
import "influxdata/influxdb/schema"
cardinalityByTag(bucket: "your-bucket")
Did I get you right?
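Or did you mean the built-in influxdb.cardinality() helper? Something like this sketch (the bucket name is just a placeholder):
import "influxdata/influxdb"
influxdb.cardinality(bucket: "your-bucket", start: 1970-01-01T00:00:00Z)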
Okay, I checked the extra_column field, which had NA plus a string and had high cardinality, so I removed it, as it was not important and effectively empty. Even after that the results are the same: it retrieves 498 out of the 998 rows inserted.
Please provide a reproducible example of what you are doing.