Elasticsearch and counters

Dear experts

I am trying to graph sFlow data stored in Elasticsearch, but I can’t get it to work the way I would like. The challenge I am facing is that the data consists of counters that increase continuously, which does not make for a very useful graph.

Is there any way to calculate the delta in Grafana, or do I need to do that on the Elasticsearch side?

Kind regards,
Patrik

Hi, you need to add a Date Histogram aggregation under Group by, below the Count metric.
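
In raw query DSL terms, that is roughly the query below (just a sketch; the “@timestamp” field name and the 10s interval are assumptions, so adjust them to your data):

# Roughly the query Grafana builds for Count + Date Histogram
# (a sketch; "@timestamp" and the 10s interval are assumptions).
date_histogram_query = {
    "size": 0,
    "aggs": {
        "per_interval": {
            "date_histogram": {"field": "@timestamp", "interval": "10s"}
        }
    },
}
# The Count metric is each returned bucket's doc_count.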

Thank you for your reply. I have selected Date Histogram, but the graph still does not look right to me. I must have missed something.

Attaching a screenshot of the view I have:

And one of the options under the “Group by” menu:
[screenshot: Grafana - New dashboard, “Group by” options]

Kind regards,
Patrik

I thought I had an answer there for a while, but I was wrong.

In Group by > Date Histogram you will have to select the time field of your Elasticsearch documents instead of ‘@timestamp’.

I’m afraid there is no such option. Here’s a sample document, in case it helps:

{
  "_index": "sflow-2019.02.05",
  "_type": "doc",
  "_id": "CBtRvmgBtSdbphhRfi3f",
  "_version": 1,
  "_score": null,
  "_source": {
    "output_packets": "518568",
    "source_id_index": "64",
    "input_broadcast_packets": "0",
    "input_discarded_packets": "0",
    "promiscous_mode": "2",
    "ip_version": "1",
    "output_broadcast_packets": "0",
    "input_octets": "2568584435",
    "output_multicast_packets": "0",
    "sflow_type": "counter_sample",
    "uptime_in_ms": "284588000",
    "interface_index": "64",
    "interface_speed": "0",
    "input_packets": "42335548",
    "host": "192.168.10.19",
    "sub_agent_id": "1",
    "@timestamp": "2019-02-05T15:40:38.321Z",
    "input_errors": "0",
    "agent_ip": "192.168.10.23",
    "sample_seq_number": "10770",
    "output_discarded_packets": "0",
    "interface_status": "3",
    "@version": "1",
    "source_id_type": "0",
    "input_multicast_packets": "0",
    "input_unknown_protocol_packets": "4294967295",
    "output_errors": "0",
    "interface_direction": "0",
    "interface_type": "6",
    "output_octets": "21993196"
  },
  "fields": {
    "@timestamp": [
      "2019-02-05T15:40:38.321Z"
    ]
  },
  "sort": [
    1549381238321
  ]
}

I might have found a solution.

The data (output_octets) is in bytes. To get the throughput I first needed to multiply by 8 to get bits, then divide by 1024*1024 to get Mbit, and finally divide by the interval.

Then I get a value that is close to what I expected. I’m not sure it’s correct, though. Does this look right to you guys?
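
In code form, the calculation I am doing looks like this (the first counter value is taken from the sample document above; the second is made up for illustration):

prev_octets = 21_993_196   # output_octets from the sample document
curr_octets = 35_100_000   # hypothetical reading one sample later
interval_s = 10            # the device exports counters every 10 seconds

delta_bytes = curr_octets - prev_octets            # counter delta
mbit_per_s = delta_bytes * 8 / (1024 * 1024) / interval_s
print(f"{mbit_per_s:.2f} Mbit/s")                  # ~10.00 Mbit/s here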

It turns out there is built-in functionality to specify the interval for the derivative. My device samples every 10 seconds, and entering 10s in the Unit field gave me something close to what I expected. However, it is still around 10x too high.
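
If I understand it correctly, that Unit field corresponds to the unit parameter of Elasticsearch’s derivative pipeline aggregation, so the underlying query should look roughly like this (assuming output_octets is mapped as a number; in my sample document above it is a string):

derivative_query = {
    "size": 0,
    "aggs": {
        "per_interval": {
            "date_histogram": {"field": "@timestamp", "interval": "10s"},
            "aggs": {
                # take the counter's maximum within each bucket...
                "max_octets": {"max": {"field": "output_octets"}},
                # ...then derive the change between buckets; "unit" makes
                # Elasticsearch also return normalized_value, i.e. the
                # change per 10 seconds rather than per bucket
                "octets_rate": {
                    "derivative": {"buckets_path": "max_octets", "unit": "10s"}
                },
            },
        }
    },
}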

Any idea of what I am doing wrong?