How to call Alloy endpoints?

Hello all,

I’m new here, new to Grafana, new to Grafana Alloy, new to OTLP in general. Please forgive the likely muddle in my reasoning.

Just a bit of background on what we’re trying to do. We have machines that make parts. Different machines make different kinds of parts, and they take a different time for each kind. Each machine has a PC attached that collects those timings into a local MS SQL Server database.

We need to send those timings to Grafana Cloud. To do that we installed Alloy on the PC; now we need to write a small Java application that takes the data from the local MS SQL Server database and sends it to Alloy. We could also, in theory, instrument our Java application (i.e. expose OTLP-compatible endpoints) and let Alloy scrape the data (but not the JVM data) via those endpoints, but that does not seem like the correct way to go, because our data (the rows in the SQL database) arrive at variable intervals, in a range of about 1 DPM to 60 DPM: we do not know in advance how long a machine will take to make a single part (that is exactly the metric we need to monitor).
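Reading the rows is the half I think I understand; I imagine a JDBC polling loop along these lines (the database, table and column names below are just placeholders I made up for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;

public class TimingReader {
    public static void main(String[] args) throws SQLException {
        // Connection string for the Microsoft JDBC driver; database, table
        // and column names are placeholders, not our real schema.
        String url = "jdbc:sqlserver://localhost;databaseName=Production;encrypt=false";
        Timestamp lastSeen = Timestamp.from(Instant.EPOCH);

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT part_type, duration_seconds, finished_at "
                             + "FROM part_timings WHERE finished_at > ? ORDER BY finished_at")) {
            ps.setTimestamp(1, lastSeen);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String partType = rs.getString("part_type");
                    double seconds = rs.getDouble("duration_seconds");
                    lastSeen = rs.getTimestamp("finished_at");
                    // TODO: hand (partType, seconds) to whatever pushes it to Alloy
                    System.out.printf("%s took %.1f s%n", partType, seconds);
                }
            }
        }
    }
}

The part I’m missing is what to do with each row once I have it.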

Now I assume this scenario best fits the “Metrics for applications” path (i.e. OTLP, not Prometheus), but I may be wrong, so please give me your opinion about the correct choice in this case. In my /etc/alloy/config.alloy I have appended both the OTLP and the Prometheus receivers, but I did that more out of desperation than out of any real reasoning:

otelcol.receiver.otlp "default" {
	// configures the default grpc endpoint "0.0.0.0:4317"
	grpc { }
	// configures the default http/protobuf endpoint "0.0.0.0:4318"
	http { }

	output {
		metrics = [otelcol.processor.resourcedetection.default.input]
		logs    = [otelcol.processor.resourcedetection.default.input]
		traces  = [otelcol.processor.resourcedetection.default.input]
	}
}

[...]

prometheus.receive_http "api" {
  http {
    listen_address = "127.0.0.1"
    listen_port = 9308
  }
  forward_to = [prometheus.remote_write.metrics_service.receiver]
} 

So now I think I have both OTLP and Prometheus endpoints listening in my Alloy setup (netstat confirms that), but I do not know how to use either one (neither their respective URLs nor the code I should write), nor which of the two is the correct one to use.

Assuming OTLP is the way to go in my case, here is what I came up with, after a bit of googling, looking at the API docs, asking AI chatbots and applying some intuition:

OtlpHttpMetricExporter exporter = OtlpHttpMetricExporter.builder()
                      .setEndpoint("http://localhost:????/.../api/v1/write") ...

which lacks the correct port number, the full URL (assuming the written part is correct, which it probably is not), the call to actually send the data point, and a real understanding of what I’m writing.
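Piecing together the OpenTelemetry Java SDK docs, my best guess at a complete version is the sketch below. I’m assuming the OTLP/HTTP receiver on its default port 4318 (from the comments in my config above) with the /v1/metrics path, the opentelemetry-sdk and opentelemetry-exporter-otlp artifacts on the classpath, and made-up metric and attribute names:

import java.time.Duration;
import java.util.concurrent.TimeUnit;

import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.exporter.otlp.http.metrics.OtlpHttpMetricExporter;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

public class PartTimingExporter {
    public static void main(String[] args) {
        // OTLP over HTTP: the exporter posts to the receiver's /v1/metrics path
        // (port 4318 is the default from the Alloy config comments above).
        OtlpHttpMetricExporter exporter = OtlpHttpMetricExporter.builder()
                .setEndpoint("http://localhost:4318/v1/metrics")
                .build();

        // Periodically export whatever has been recorded since the last cycle.
        SdkMeterProvider meterProvider = SdkMeterProvider.builder()
                .registerMetricReader(
                        PeriodicMetricReader.builder(exporter)
                                .setInterval(Duration.ofSeconds(15))
                                .build())
                .build();

        Meter meter = meterProvider.get("part-timings");
        DoubleHistogram duration = meter.histogramBuilder("part_production_duration")
                .setDescription("Time taken to produce one part")
                .setUnit("s")
                .build();

        // One record() call per row read from the local database
        // (metric and attribute names here are invented).
        duration.record(42.5,
                Attributes.of(AttributeKey.stringKey("part_type"), "gear"));

        // Flush and shut down cleanly before the process exits.
        meterProvider.forceFlush().join(10, TimeUnit.SECONDS);
        meterProvider.shutdown().join(10, TimeUnit.SECONDS);
    }
}

But I have no idea whether this is even the right endpoint to be targeting, or whether I should be using the Prometheus one instead.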

Long story short: can you please provide me with your opinion about the best way to do things in my case and an example of how to do that in Java?


Welcome, @info5fd1!

Why did your team choose this approach of pushing data from MS SQL to Prometheus?

Why not just consolidate the distributed SQL Servers into one centralized SQL Server?

Because we prefer a serverless solution like Grafana Cloud that doesn’t force us to manage a centralized SQL server.


According to the documentation, the URL path would be /api/v1/metrics/write (see prometheus.receive_http | Grafana Alloy documentation). So in your case it would be http://127.0.0.1:9308/api/v1/metrics/write.

Note that because you are only listening on 127.0.0.1, the endpoint can only be reached from the same host, i.e. via localhost.

Also, since you are using remote_write, you don’t necessarily have to run Alloy on the same host: you could have a couple of Alloy agents running behind a load balancer and have all your Java apps send metrics to them. It depends on which is easier for you and whether you have any egress constraints on your app servers.