We have an HBase setup, and its metrics are enabled via the JMX exporter. The generated metrics are in JSON form. Is there any provision in Grafana Alloy to scrape JSON metrics, convert them to Prometheus metrics, and save them to Mimir?
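For context: Alloy can certainly write to Mimir (prometheus.remote_write speaks the remote-write protocol that Mimir ingests), but its prometheus.scrape component expects the Prometheus text exposition format rather than raw JSON, so the JSON output needs a translation step in front of it. Below is a minimal sketch of the Alloy side only, assuming such a translated endpoint already exists; the target address and Mimir URL are placeholders.

// Minimal Alloy sketch, assuming an endpoint that already serves the
// Prometheus text format. The address and Mimir URL below are placeholders.
prometheus.scrape "hbase_regionserver" {
  targets = [
    { "__address__" = "hbase-rs.example.com:9404" },
  ]
  forward_to = [prometheus.remote_write.mimir.receiver]
}

prometheus.remote_write "mimir" {
  endpoint {
    url = "https://mimir.example.com/api/v1/push"
  }
}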
Are the metrics available via a URL like http://example.com/metrics.json? Perhaps you could post a sample here to show how they are structured.
They are available via the JMX exporter at a URL like https://:16030/jmx. These are HBase RegionServer metrics. Below is a sample response section:
{
  "name": "Hadoop:service=HBase,name=RegionServer,sub=Replication",
  "modelerType": "RegionServer,sub=Replication",
  "tag.Context": "regionserver",
  "tag.Hostname": "gbl25111813",
  "Source.ageOfLastShippedOp_num_ops": 0,
  "Source.ageOfLastShippedOp_min": 0,
  "Source.ageOfLastShippedOp_max": 0,
  "Source.ageOfLastShippedOp_mean": 0,
  "Source.ageOfLastShippedOp_25th_percentile": 0,
  "Source.ageOfLastShippedOp_median": 0,
  "Source.ageOfLastShippedOp_75th_percentile": 0,
  "Source.ageOfLastShippedOp_90th_percentile": 0,
  "Source.ageOfLastShippedOp_95th_percentile": 0,
  "Source.ageOfLastShippedOp_98th_percentile": 0,
  "Source.ageOfLastShippedOp_99th_percentile": 0,
  "Source.ageOfLastShippedOp_99.9th_percentile": 0,
  "source.walReaderEditsBufferUsage": 0,
  "source.sizeOfHFileRefsQueue": 0,
  "source.logReadInBytes": 484820,
  "source.2.failedBatches": 0,
  "source.sizeOfLogQueue": 1,
  "source.2.shippedBytes": 0,
  "Sink.ageOfLastAppliedOp_num_ops": 0,
  "Sink.ageOfLastAppliedOp_min": 0,
  "Sink.ageOfLastAppliedOp_max": 0,
  "Sink.ageOfLastAppliedOp_mean": 0,
  "Sink.ageOfLastAppliedOp_25th_percentile": 0,
  "Sink.ageOfLastAppliedOp_median": 0,
  "Sink.ageOfLastAppliedOp_75th_percentile": 0,
  "Sink.ageOfLastAppliedOp_90th_percentile": 0,
  "Sink.ageOfLastAppliedOp_95th_percentile": 0,
  "Sink.ageOfLastAppliedOp_98th_percentile": 0,
  "Sink.ageOfLastAppliedOp_99th_percentile": 0,
  "Sink.ageOfLastAppliedOp_99.9th_percentile": 0,
  "source.shippedOps": 0,
  "source.2.isInitializing": 0,
  "source.2.restartedLogReading": 0,
  "sink.appliedHFiles": 0,
  "source.shippedBytes": 0,
  "Source.2.ageOfLastShippedOp_num_ops": 0,
  "Source.2.ageOfLastShippedOp_min": 0,
  "Source.2.ageOfLastShippedOp_max": 0,
  "Source.2.ageOfLastShippedOp_mean": 0,
  "Source.2.ageOfLastShippedOp_25th_percentile": 0,
  "Source.2.ageOfLastShippedOp_median": 0,
  "Source.2.ageOfLastShippedOp_75th_percentile": 0,
  "Source.2.ageOfLastShippedOp_90th_percentile": 0,
  "Source.2.ageOfLastShippedOp_95th_percentile": 0,
  "Source.2.ageOfLastShippedOp_98th_percentile": 0,
  "Source.2.ageOfLastShippedOp_99th_percentile": 0,
  "Source.2.ageOfLastShippedOp_99.9th_percentile": 0,
  "sink.appliedOps": 0,
  "source.2.shippedBatches": 0,
  "source.failedBatches": 0,
  "source.2.shippedKBs": 0,
  "source.2.shippedHFiles": 0,
  "sink.appliedBatches": 0,
  "source.2.shippedOps": 0,
  "source.2.uncleanlyClosedLogs": 0,
  "source.shippedHFiles": 0,
  "source.2.ignoredUncleanlyClosedLogContentsInBytes": 0,
  "source.2.logEditsRead": 3179,
  "source.uncleanlyClosedLogs": 0,
  "source.closedLogsWithUnknownFileLength": 0,
  "source.repeatedLogFileBytes": 0,
  "source.shippedKBs": 0,
  "source.completedRecoverQueues": 1,
  "source.2.logReadInBytes": 484820,
  "source.2.closedLogsWithUnknownFileLength": 0,
  "source.restartedLogReading": 0,
  "source.failedRecoverQueues": 0,
  "source.2.sizeOfLogQueue": 1,
  "source.2.completedLogs": 3,
  "source.ignoredUncleanlyClosedLogContentsInBytes": 0,
  "source.logEditsRead": 3179,
  "source.2.completedRecoverQueues": 0,
  "source.numInitializing": 0,
  "source.2.oldestWalAge": 303,
  "source.2.logEditsFiltered": 3179,
  "source.logEditsFiltered": 3179,
  "source.2.repeatedLogFileBytes": 0,
  "source.2.sizeOfHFileRefsQueue": 0,
  "source.completedLogs": 4,
  "sink.failedBatches": 0,
  "source.shippedBatches": 0
},
{},
and so on; the response contains many more blocks like this.
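That output is the Hadoop JMX JSON servlet, where each block is one MBean. As far as I know, Alloy's prometheus.scrape cannot parse this JSON directly, so the usual route is to expose the same MBeans in the Prometheus text format first, for example with the Prometheus JMX exporter running as a java agent on the RegionServer, or with the standalone prometheus-community json_exporter, and then let Alloy scrape that endpoint and remote-write to Mimir as in the sketch above. Once the metrics are in Prometheus form, a relabel stage can filter and tag them before they reach Mimir. A hedged sketch reusing the component names from the earlier example; the metric-name pattern and the cluster label value are assumptions about how the translation layer names the converted fields:

// Hypothetical relabel stage between the scrape and remote_write from the
// earlier sketch; the scrape component's forward_to would point here instead.
prometheus.relabel "hbase_replication" {
  forward_to = [prometheus.remote_write.mimir.receiver]

  // Keep only replication-related series (assumed naming after translation).
  rule {
    source_labels = ["__name__"]
    regex         = ".*(replication|ageOfLastShippedOp|ageOfLastAppliedOp).*"
    action        = "keep"
  }

  // Attach a static cluster label so RegionServers can be grouped in Mimir.
  rule {
    target_label = "cluster"
    replacement  = "hbase-prod"
    action       = "replace"
  }
}

With this in place, prometheus.scrape "hbase_regionserver" would set forward_to = [prometheus.relabel.hbase_replication.receiver] so the series pass through the relabel rules before being written to Mimir.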