Access a specific field (Elasticsearch as data source)

Hello,

I have set up Elasticsearch as a data source and it works correctly. I would like to filter for specific documents and see the value of a specific field, called "count", in the filtered documents.

The Lucene query works correctly and looks like this:

tags:db AND tags:hourly AND tags:NEG AND env.keyword:"env1"

The query filters for documents that are sent every hour and, as you can see in the screenshot below, it works correctly.

Each of these documents has a field named "count" with a value.
Instead of charting the number of documents, I would like to chart the value of the "count" field from each document. How can I do this? Is it possible?

When I switch the "Query type" from "Metrics" to "Logs" I get the following error:

[sse.readDataError] [A] got error: input data must be a wide series but got type long (input refid)

Does anybody have an idea if this is possible?

What does your ES document look like? Please post a sample.

I would like to inspect this document, but when I choose the "Logs" query type instead of "Metrics" I get:

[sse.readDataError] [A] got error: input data must be a wide series but got type long (input refid)

The same happens if I choose "Raw Data".

Any ideas?

Hi, I guess your Elasticsearch query looks like this:
[screenshot]

You can actually change the Count metric type by clicking on Count. You can change it, for example, to Sum, like here:
[screenshot]

I'm not sure you can present every individual value from the documents, but maybe Sum or some other aggregation will suffice (I guess Grafana wouldn't know how to visualize each value, since the number of points varies).
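
If it helps to see what that aggregation does outside Grafana, a request along these lines should return the hourly sum of the "count" field; this is only a sketch, the host and index name are placeholders, and your Lucene filter is reused as a query_string:

# hourly sum of the "count" field, filtered with the same Lucene query
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/your-index-*/_search' -d '
{
  "size": 0,
  "query": {
    "query_string": {
      "query": "tags:db AND tags:hourly AND tags:NEG AND env.keyword:\"env1\""
    }
  },
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" },
      "aggs": {
        "count_value": { "sum": { "field": "count" } }
      }
    }
  }
}'

This is roughly the request Grafana's Elasticsearch data source builds when the metric is set to Sum on the "count" field with a date histogram group-by.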

Does it work for you? If not, you can try playing around with "Logs" in a Table visualization and applying some transformations.

This is how I have set this up. As I said, this results in the count of documents being sent, instead of the value of the "count" field that is present in each sent document.

This is what happens if I set the metric to Sum instead of Count:

These are the fields that are available when I choose Sum. They are not the fields from the Elasticsearch document; this is something else. I would like to access the fields from the document, but I don't know why they aren't available. Maybe the format of the ELK document is wrong?

This is what happens when I choose the Logs query type: no data.

This is what happens when I choose the Raw Data and Raw Document query types.

Where does the problem lie? Is the format of the document incorrect? How can I check this? Please note that Elasticsearch is set as the data source. I can easily access every field of this document in Elasticsearch, but I can't do so in Grafana.
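
For example, querying Elasticsearch directly with the same Lucene filter returns the documents including their "count" field; the host and index name below are placeholders for my setup:

# fetch one matching document straight from Elasticsearch
curl -s -G 'http://localhost:9200/your-index-*/_search' \
  --data-urlencode 'q=tags:db AND tags:hourly AND tags:NEG AND env.keyword:"env1"' \
  --data-urlencode 'size=1'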

Here I tried making a dashboard. The result is somewhat OK because I get one hit every hour, but I don't want to count these documents; I want to view the value inside them.

Here I switched to Logs.

Here I switched to Raw Data.

Here I switched to Raw Document.

I've shown how it looks on my side. Any ideas?

Still waiting for a sample ES doc as JSON.

How can I do this? I cannot view the document using the "Logs" or "Table" tab in Grafana, as I showed in the screenshots. Maybe the document format is not supported?

Here is the document as viewed in Elasticsearch:

{
  "_index": "test-system-monthly",
  "_type": "_doc",
  "_id": "8aa1e768d89731fd3aa2497c43911a4a76543f278c410fd1468485ffbec069b8da17d68fa9e008795a63cbed0729294baf2c1edc51346344ff23d46e482ffac2",
  "_version": 1,
  "_score": 1,
  "_source": {
    "@timestamp": "2024-12-03T11:00:01.331783231Z",
    "@version": "1",
    "TMSTMP": "2024-12-03T12:00:01.432+01:00",
    "count": 0,
    "env": "system_test",
    "kafkaTopic": "nonprod-vms-ams-sys",
    "name": "NEG_ACKS_1h",
    "tags": [
      "NEG_ACKS_1h",
      "db",
      "system_test",
      "neg_acks"
    ],
    "timestamp": "2024-12-03T11:00:01.432Z",
    "vector_index_name": "test-system-monthly",
    "vector_kafka_headers": {},
    "vector_kafka_message_key": null,
    "vector_kafka_offset": 626254999,
    "vector_kafka_partition": 2,
    "vector_kafka_topic": "nonprod-vms-ams-sys",
    "vector_source_type": "kafka"
  },
  "fields": {
    "kafkaTopic": [
      "nonprod-vms-ams-sys"
    ],
    "TMSTMP": [
      "2024-12-03T11:00:01.432Z"
    ],
    "vector_index_name.keyword": [
      "test-system-monthly"
    ],
    "tags.keyword": [
      "NEG_ACKS_1h",
      "db",
      "system_test",
      "neg_acks"
    ],
    "count": [
      0
    ],
    "@version.keyword": [
      "1"
    ],
    "name.keyword": [
      "NEG_ACKS_1h"
    ],
    "vector_source_type.keyword": [
      "kafka"
    ],
    "kafkaTopic.keyword": [
      "nonprod-vms-ams-sys"
    ],
    "env": [
      "system_test"
    ],
    "vector_kafka_offset": [
      626254999
    ],
    "tags": [
      "NEG_ACKS_1h",
      "db",
      "system_test",
      "neg_acks"
    ],
    "@timestamp": [
      "2024-12-03T11:00:01.331Z"
    ],
    "vector_index_name": [
      "test-system-monthly"
    ],
    "vector_kafka_topic.keyword": [
      "nonprod-vms-ams-sys"
    ],
    "vector_kafka_partition": [
      2
    ],
    "env.keyword": [
      "system_test"
    ],
    "name": [
      "NEG_ACKS_1h"
    ],
    "@version": [
      "1"
    ],
    "vector_kafka_topic": [
      "nonprod-vms-ams-sys"
    ],
    "vector_source_type": [
      "kafka"
    ],
    "timestamp": [
      "2024-12-03T11:00:01.432Z"
    ]
  }
}