Multiple continuously incoming CSV files - time-series data from .CSV as a data source

Hello everyone,

We are using self-hosted Grafana on a Debian VM.

We are trying to visualize data from dataloggers that push .csv files via FTP. Every minute some data arrives with a timestamp (1 header row and 1 record per file).

So our data source is an SMB or FTP location holding all the incoming .csv files.
Those files need to be consumed: extract the record, keep it as historical data, then delete the consumed file. (This can be done externally if there is no such tool in Grafana.) A simplified example of one incoming file is shown below.
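
For illustration, one incoming file looks roughly like this (simplified; the actual column names and values differ per logger):

```
timestamp,temperature,humidity
2024-05-01 10:23:00,21.4,48.7
```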

I have checked the Infinity plugin as a data source for CSV, but it seems to be a static data source for a one-time import from a CSV file, not a continuous feed like we have.

Can I have some suggestions / directions for achieving something like that?

I have looked for directions online on this subject but didn’t find anything appropriate.

Any comments and suggestions appreciated.

welcome @bambosd

So they get extracted and kept as historical data in a database?

Hello @yosiasz and thank you.

No, at this time we don’t do that; we are looking for a possible solution. I’m just noting that those files need to be consumed because they will add up to several thousand over the course of a few days.

I’m examining whether I have to do this externally with another server / app and load the data into a database, whether there is something ready-made in Grafana, or whether I should contribute a plugin that does this in Grafana.

Ah, gotcha. With Grafana you could use some plugins to aggregate those files and visualize them, but that won’t be an optimal approach. You have tons of options depending on your in-house expertise.

Let’s say we are confident with Python and MySQL.

Perfect - create a cron job that pushes the data to MySQL, following basic ETL principles. Something like the sketch below.
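
A minimal sketch of such a job, assuming the FTP/SMB drop directory is mounted locally at /data/incoming, and with the credentials, database, table, and column names (readings, ts, value1, value2) all as placeholders you would adapt. It uses mysql-connector-python (pip install mysql-connector-python):

```python
#!/usr/bin/env python3
"""Consume incoming datalogger CSV files and load them into MySQL.

Each file is expected to contain one header row and one record.
After a file's record is committed, the file is deleted.
"""
import csv
from pathlib import Path

import mysql.connector

INCOMING_DIR = Path("/data/incoming")  # placeholder: mounted FTP/SMB drop folder


def main() -> None:
    conn = mysql.connector.connect(
        host="localhost",
        user="etl_user",        # placeholder credentials
        password="change-me",
        database="datalogger",  # placeholder database
    )
    cur = conn.cursor()
    for path in sorted(INCOMING_DIR.glob("*.csv")):
        with path.open(newline="") as f:
            reader = csv.reader(f)
            next(reader, None)  # skip the single header row
            for row in reader:  # normally exactly one record per file
                # placeholder table/columns: readings(ts, value1, value2)
                cur.execute(
                    "INSERT INTO readings (ts, value1, value2) VALUES (%s, %s, %s)",
                    (row[0], row[1], row[2]),
                )
        conn.commit()    # commit first so nothing is lost...
        path.unlink()    # ...then delete the consumed file
    cur.close()
    conn.close()


if __name__ == "__main__":
    main()
```

Schedule it every minute with cron (the script path is a placeholder):

```
* * * * * /usr/bin/python3 /opt/etl/consume_csv.py
```

Then point Grafana’s built-in MySQL data source at the readings table and you have your historical time series.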