I’m trying to use the experimental foreach block to read many files from S3 buckets. Each file will be used as the targets for a different Prometheus scrape component, but I can’t find a way to name each iterated component. In this example, they are all named remote.s3.default, so I’m unable to access them.
foreach "s3" {
  var = "pipeline_name"
  collection = ["snmp_christie4415rgb", "qsys_core", "ping",]
  template {
    remote.s3 "default" {
      path = string.format("s3://mybucket/%s.yml", pipeline_name)
      poll_frequency = "1m"
    }
  }
}
prometheus.scrape "christie4415rgb" {
  forward_to = [prometheus.relabel.christie4415rgb.receiver]
  targets = encoding.from_yaml(remote.s3.christie4415rgb.content)
  scrape_interval = "1m"
}
I also tried putting the scrape component inside the foreach block, but then I would need to insert the variable into the forward_to list, and I don’t think it’s possible to construct capsule values. I tried like this:
prometheus.scrape "christie4415rgb" {
  forward_to = [string.format("prometheus.relabel.%s.receiver", pipeline_name)]
  targets = encoding.from_yaml(remote.s3.default.content)
}
And got this error:
string.format("prometheus.relabel.%s.receiver", pipeline_name) should be capsule, got string
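As far as I can tell, forward_to expects receiver capsules, which are direct references to other components, so a string built with string.format can never satisfy it. If I read it right, the reference would have to be written literally inside the template, roughly like this sketch (not my actual config):

prometheus.scrape "default" {
  targets = encoding.from_yaml(remote.s3.default.content)
  // forward_to needs a capsule value: a literal reference to another
  // component's receiver export, not a string containing its name. The
  // prometheus.relabel "default" referenced here would have to live in
  // the same template block.
  forward_to = [prometheus.relabel.default.receiver]
}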
I tried a different approach, and found that the foreach variable also doesn’t work when used as a key for json_path. With this config:
local.file "targets" {
  filename = "targets.json"
}
foreach "default" {
  var = "each"
  collection = json_path(local.file.targets.content, each)
  template {
    prometheus.scrape "default" {
      forward_to = [prometheus.remote_write.local.receiver]
      targets = encoding.from_json(json_path(local.file.targets.content, string.format("$.%s[*]", each)))
    }
  }
}
It shows this error:
Error: test.alloy:7:54: identifier "each" does not exist
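If I understand the error correctly, the line and column point at the collection expression, so presumably the foreach variable (each here) is only available inside the template block, not in collection. Under that assumption, something like the sketch below would at least avoid the unknown identifier, with the collection hard-coded; the key names are made up for illustration and I have not verified it:

foreach "default" {
  var = "each"
  // Hard-coded list of top-level keys from targets.json; `each` is not
  // defined at this point, only inside template.
  collection = ["snmp_christie4415rgb", "qsys_core", "ping"]
  template {
    prometheus.scrape "default" {
      forward_to = [prometheus.remote_write.local.receiver]
      // `each` can be used here to build the per-iteration JSONPath key.
      targets = encoding.from_json(json_path(local.file.targets.content, string.format("$.%s[*]", each)))
    }
  }
}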
Hello, I think that in your case you would need both the scrape and the relabel components inside the foreach, and have the relabel component point to a remote_write component outside of the foreach, like this:
foreach "s3" {
  var = "pipeline_name"
  collection = ["snmp_christie4415rgb", "qsys_core", "ping",]
  template {
    remote.s3 "default" {
      path = string.format("s3://mybucket/%s.yml", pipeline_name)
      poll_frequency = "1m"
    }
    prometheus.scrape "default" {
      forward_to = [prometheus.relabel.default.receiver]
      targets = encoding.from_yaml(remote.s3.default.content)
      scrape_interval = "1m"
    }
    prometheus.relabel "default" {
      forward_to = [prometheus.remote_write.common.receiver]
    }
  }
}

prometheus.remote_write "common" {
  endpoint {
    url = "http://localhost:9009/api/v1/push"
  }
}
Thanks for the suggestion! That won’t work for me because each scrape needs a different set of relabel rules.
I’ve worked out a way to handle this logic in my own script instead: it polls S3, downloads files whose ETag has changed, and writes them to local files, which Alloy picks up with several discovery.file components.
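For reference, the local-file side of that workaround looks roughly like the sketch below; the file path and component names are illustrative, not my exact config:

// One discovery.file / prometheus.scrape pair per pipeline; the script keeps
// the files under /var/lib/alloy/targets up to date from S3.
discovery.file "christie4415rgb" {
  files = ["/var/lib/alloy/targets/snmp_christie4415rgb.yml"]
}

prometheus.scrape "christie4415rgb" {
  targets = discovery.file.christie4415rgb.targets
  // Per-pipeline relabel rules are defined elsewhere in the config.
  forward_to = [prometheus.relabel.christie4415rgb.receiver]
  scrape_interval = "1m"
}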