Hello all,
I’m new here, new to Grafana, new to Grafana Alloy, new to OTLP in general. Please forgive the likely muddle in my reasoning.
Just a bit of background on what we’re trying to do. We have machines that make parts. Different machines make different kinds of parts, and each kind takes a different amount of time to produce. Each machine has a PC attached that collects those timings into a local MS SQL Server database.
We need to send those timings to Grafana Cloud. To do that we installed Alloy on the PC; now we need to code a little Java application that takes data from the local MS SQL Server database and sends it to Alloy. In theory we could also “instrument” our Java application (e.g. expose OTLP-compatible endpoints) and let Alloy scrape data (but not JVM data) from them, but that does not seem the right way to go, because our data (the rows in the SQL database) arrive at variable delays, in a range of about 1 DPM to 60 DPM: we do not know in advance how long a machine will take to make a single part (that is exactly the metric we need to monitor).
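For context, the reading side of that Java application is not the problem: it would be a simple polling loop over JDBC, something along these lines (the database, table and column names below are invented, just to show the shape of it):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.Instant;

public class TimingsReader {
    public static void main(String[] args) throws Exception {
        // Connection string for the Microsoft JDBC driver; credentials and
        // database name are placeholders.
        String url = "jdbc:sqlserver://localhost;databaseName=Timings;encrypt=false";
        Timestamp lastSeen = Timestamp.from(Instant.EPOCH); // in reality, persisted between runs

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT part_kind, duration_seconds, finished_at"
                             + " FROM part_timings WHERE finished_at > ?")) {
            stmt.setTimestamp(1, lastSeen);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    String partKind = rs.getString("part_kind");
                    double durationSeconds = rs.getDouble("duration_seconds");
                    // ...this is where each timing would be handed over to
                    // whatever actually sends it to Alloy.
                }
            }
        }
    }
}

What I don’t know is what goes in the place of that last comment.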
Now I assume this scenario best fits the Metrics for applications path (i.e. OTLP, not Prometheus), but I may be wrong, so please give me your opinion about the correct choice in this case. In my /etc/alloy/config.alloy I’ve appended both the OTLP and the Prometheus receivers, but I did that more out of desperation than out of any real reasoning:
otelcol.receiver.otlp "default" {
  // configures the default grpc endpoint "0.0.0.0:4317"
  grpc { }

  // configures the default http/protobuf endpoint "0.0.0.0:4318"
  http { }

  output {
    metrics = [otelcol.processor.resourcedetection.default.input]
    logs    = [otelcol.processor.resourcedetection.default.input]
    traces  = [otelcol.processor.resourcedetection.default.input]
  }
}
[...]
prometheus.receive_http "api" {
  http {
    listen_address = "127.0.0.1"
    listen_port    = 9308
  }
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
So now I think I have both an OTLP and a Prometheus endpoint listening in my Alloy setup (netstat confirms that), but I do not know how to use either one (neither their respective URLs nor the code I should write), nor which is the correct one I should be trying to use.
Assuming OTLP is the way to go in my case, here is what I came up with after a bit of googling, looking at the API docs, asking AI chatbots, and applying some intuition:
OtlpHttpMetricExporter exporter = OtlpHttpMetricExporter.builder()
    .setEndpoint("http://localhost:????/.../api/v1/write") ...
which lacks the correct port number, the full URL (assuming the part I did write is correct, which it probably is not), the call that actually sends the data point, and a real understanding of what I’m writing.
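For what it’s worth, here is my best guess at a more complete version, assuming the OTLP/HTTP receiver is the one to use: the 4318 port comes from the default http block above, /v1/metrics is (as far as I understand) the standard OTLP/HTTP path for metrics, and the service name, instrument name and attribute name are all made up by me. Please correct whatever is wrong:

import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.exporter.otlp.http.metrics.OtlpHttpMetricExporter;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;
import io.opentelemetry.sdk.resources.Resource;
import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class TimingsExporter {
    public static void main(String[] args) {
        // Exporter pointed at Alloy's OTLP/HTTP receiver: port 4318 is the
        // default from the http {} block, /v1/metrics is the standard path.
        OtlpHttpMetricExporter exporter = OtlpHttpMetricExporter.builder()
                .setEndpoint("http://localhost:4318/v1/metrics")
                .build();

        // A periodic reader collects whatever was recorded and pushes it to
        // Alloy every 30 seconds, so the variable rate of incoming rows
        // should not matter.
        SdkMeterProvider meterProvider = SdkMeterProvider.builder()
                .setResource(Resource.getDefault().toBuilder()
                        .put("service.name", "machine-timings") // made-up name
                        .build())
                .registerMetricReader(PeriodicMetricReader.builder(exporter)
                        .setInterval(Duration.ofSeconds(30))
                        .build())
                .build();

        Meter meter = meterProvider.get("machine-timings");

        // One histogram for "how long did this part take", with the part
        // kind as an attribute (names invented by me).
        DoubleHistogram partDuration = meter
                .histogramBuilder("part_production_duration_seconds")
                .setUnit("s")
                .build();

        // In the real application these values would come from the rows
        // read out of the local MS SQL Server database.
        double durationSeconds = 37.5;
        String partKind = "example-part";
        partDuration.record(durationSeconds,
                Attributes.of(AttributeKey.stringKey("part_kind"), partKind));

        // Flush anything still buffered before exiting.
        meterProvider.shutdown().join(10, TimeUnit.SECONDS);
    }
}

If I understand the docs correctly the dependencies would be the opentelemetry-sdk and opentelemetry-exporter-otlp artifacts, but again, I’m guessing.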
Long story short: can you please give me your opinion on the best way to do this in my case, and an example of how to do it in Java?