Hi @fuba77
These 3 lines should do it (group by timestamp, get the distinct usernames in each group, then count them):
|> group(columns: ["_time"])
|> distinct(column: "username")
|> count(column: "_value")
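If it helps to see them in context, a complete query against your own data might look roughly like this (the bucket and measurement names below are placeholders, so substitute your own):

from(bucket: "example-bucket")
    |> range(start: -7d)
    |> filter(fn: (r) => r._measurement == "user_logins")
    |> group(columns: ["_time"])
    |> distinct(column: "username")
    |> count(column: "_value")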
Here's a full example where the distinct regions are counted for each date:
import "csv"
csvData = "
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
#group,false,false,false,false,false,true,true,false
#default,,,,,,,,
,result,table,_start,_stop,_time,region,host,_value
,mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:51:00Z,east,A,15.43
,mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:51:00Z,east,A,65.15
,mean,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:51:00Z,north,B,59.25
,mean,0,2018-05-09T20:50:00Z,2018-05-09T20:52:00Z,2018-05-09T20:50:00Z,east,B,18.67
,mean,0,2018-05-09T20:50:00Z,2018-05-09T20:52:00Z,2018-05-09T20:50:00Z,west,C,52.62
,mean,0,2018-05-09T20:50:00Z,2018-05-09T20:52:00Z,2018-05-09T20:50:00Z,south,C,52.62
,mean,0,2018-05-10T20:50:00Z,2018-05-10T20:52:00Z,2018-05-10T20:51:00Z,east,C,82.16
"
csv.from(csv: csvData)
    |> range(start: 2018-05-08T20:50:00Z, stop: 2018-05-10T20:52:00Z)
    |> group(columns: ["_time"]) // one group per timestamp
    |> distinct(column: "region") // unique region values within each group
    |> count(column: "_value") // number of distinct regions per timestamp
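With the sample data above, each output table (one per _time value) should contain the number of distinct regions seen at that timestamp, roughly:

_time                   _value
2018-05-08T20:51:00Z    2
2018-05-09T20:50:00Z    3
2018-05-10T20:51:00Z    1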