Hi @mariorodriguez,
thank you very much for your reply.
These are the HotROD app metrics after redeploying, zeroing the counters, and making a few requests. To me, that all looks right?
"route.jaeger.tracer.baggage_restrictions_updates.result_err": 0,
"route.jaeger.tracer.baggage_restrictions_updates.result_ok": 0,
"route.jaeger.tracer.baggage_truncations": 0,
"route.jaeger.tracer.baggage_updates.result_err": 0,
"route.jaeger.tracer.baggage_updates.result_ok": 0,
"route.jaeger.tracer.finished_spans.sampled_delayed": 0,
"route.jaeger.tracer.finished_spans.sampled_n": 0,
"route.jaeger.tracer.finished_spans.sampled_y": 352,
"route.jaeger.tracer.reporter_queue_length": 0,
"route.jaeger.tracer.reporter_spans.result_dropped": 0,
"route.jaeger.tracer.reporter_spans.result_err": 0,
"route.jaeger.tracer.reporter_spans.result_ok": 352,
"route.jaeger.tracer.sampler_queries.result_err": 0,
"route.jaeger.tracer.sampler_queries.result_ok": 0,
"route.jaeger.tracer.sampler_updates.result_err": 0,
"route.jaeger.tracer.sampler_updates.result_ok": 0,
"route.jaeger.tracer.span_context_decoding_errors": 0,
"route.jaeger.tracer.started_spans.sampled_delayed": 0,
"route.jaeger.tracer.started_spans.sampled_n": 0,
"route.jaeger.tracer.started_spans.sampled_y": 353,
"route.jaeger.tracer.throttled_debug_spans": 0,
"route.jaeger.tracer.throttler_updates.result_err": 0,
"route.jaeger.tracer.throttler_updates.result_ok": 0,
"route.jaeger.tracer.traces.sampled_n.state_joined": 0,
"route.jaeger.tracer.traces.sampled_n.state_started": 0,
"route.jaeger.tracer.traces.sampled_y.state_joined": 350,
"route.jaeger.tracer.traces.sampled_y.state_started": 3,
My Grafana Agent deployment looks like this:
---
apiVersion: v1
data:
  agent.yaml: |
    server:
      http_listen_port: 8080
      log_level: debug
    metrics:
      wal_directory: /tmp/wal
      global:
        scrape_interval: 1m
        remote_write:
          - url: http://victoriametrics.victoriametrics:8428/api/v1/write
      configs:
        - name: default
          scrape_configs:
            - job_name: kubernetes_pods
              kubernetes_sd_configs:
                - role: pod
                  selectors:
                    - role: "pod"
                      label: "metrics=grafana-agent"
    logs:
      configs:
        - name: default
          positions:
            filename: /tmp/positions_traces.yaml
          clients:
            - url: http://loki.loki:3100/loki/api/v1/push
    traces:
      configs:
        - name: default
          receivers:
            jaeger:
              protocols:
                grpc:
                thrift_compact:
          remote_write:
            - endpoint: tempo.tempo:55680
              insecure: true
          batch:
            timeout: 5s
            send_batch_size: 100
          automatic_logging:
            backend: logs_instance
            logs_instance_name: default
            spans: true
            processes: true
            roots: true
kind: ConfigMap
metadata:
  name: grafana-agent-full
  namespace: grafana-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-agent-full
  namespace: grafana-agent
  labels:
    io.kompose.service: hotrod
    metrics: "grafana-agent"
    logs: "grafana-agent"
    traces: "grafana-agent"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana-agent
  template:
    metadata:
      labels:
        app: grafana-agent
        metrics: "grafana-agent"
        logs: "grafana-agent"
        traces: "grafana-agent"
    spec:
      containers:
        - args:
            - -config.file=/etc/agent/agent.yaml
          env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          image: grafana/agent:v0.19.0
          imagePullPolicy: IfNotPresent
          name: agent
          ports:
            - containerPort: 8080
              name: http-metrics
            - containerPort: 6831
              name: t-j-t-compact
              protocol: UDP
            - containerPort: 6832
              name: t-j-t-binary
              protocol: UDP
            - containerPort: 14268
              name: t-j-t-http
              protocol: TCP
            - containerPort: 14250
              name: t-j-grpc
              protocol: TCP
            - containerPort: 9411
              name: tempo-zipkin
              protocol: TCP
            - containerPort: 55680
              name: tempo-otlp
              protocol: TCP
            - containerPort: 55678
              name: t-opencensus
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/agent
              name: grafana-agent-full
      serviceAccount: grafana-agent-logs
      tolerations:
        - effect: NoSchedule
          operator: Exists
      volumes:
        - configMap:
            name: grafana-agent-full
          name: grafana-agent-full
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-agent-full
  namespace: grafana-agent
  labels:
    name: grafana-agent-full
spec:
  ports:
    - name: agent-http-metrics
      port: 8080
      targetPort: 8080
    - name: agent-t-j-t-compact
      port: 6831
      protocol: UDP
      targetPort: 6831
    - name: agent-t-j-t-binary
      port: 6832
      protocol: UDP
      targetPort: 6832
    - name: agent-t-j-t-http
      port: 14268
      protocol: TCP
      targetPort: 14268
    - name: agent-t-j-grpc
      port: 14250
      protocol: TCP
      targetPort: 14250
    - name: agent-tempo-zipkin
      port: 9411
      protocol: TCP
      targetPort: 9411
    - name: agent-tempo-otlp
      port: 55680
      protocol: TCP
      targetPort: 55680
    - name: agent-t-opencensus
      port: 55678
      protocol: TCP
      targetPort: 55678
  selector:
    name: grafana-agent-full
The name I actually use for the Service and Deployment is grafana-agent-full. In my original post I had changed that to grafana-agent.
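A quick way to double-check that the grafana-agent-full Service really selects the Agent pod (i.e. that its selector matches the pod labels and it has endpoints behind it) would be something like:
$ kubectl get endpoints grafana-agent-full -n grafana-agent
$ kubectl get pods -n grafana-agent --show-labels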
This is the startup log of the Grafana Agent pod:
$ k logs grafana-agent-full-79f78594bf-2n9fr -n grafana-agent
ts=2021-10-22T11:02:04.385735436Z caller=node.go:85 level=info agent=prometheus component=cluster msg="applying config"
ts=2021-10-22T11:02:04.386045145Z caller=remote.go:180 level=info agent=prometheus component=cluster msg="not watching the KV, none set"
ts=2021-10-22T11:02:04.386639052Z caller=config_watcher.go:135 level=debug agent=prometheus component=cluster msg="waiting for next reshard interval" last_reshard=2021-10-22T11:02:04.386619244Z next_reshard=2021-10-22T11:03:04.386619244Z remaining=59.999998779s
ts=2021-10-22T11:02:04Z level=info caller=traces/traces.go:120 msg="Traces Logger Initialized" component=traces
ts=2021-10-22T11:02:04Z level=info caller=traces/instance.go:122 msg="shutting down receiver" component=traces traces_config=default
ts=2021-10-22T11:02:04Z level=info caller=traces/instance.go:122 msg="shutting down processors" component=traces traces_config=default
ts=2021-10-22T11:02:04Z level=info caller=traces/instance.go:122 msg="shutting down exporters" component=traces traces_config=default
ts=2021-10-22T11:02:04.389982673Z caller=instance.go:301 level=debug agent=prometheus instance=9b6cec8990db03140ef4948dfc33097f msg="initializing instance" name=9b6cec8990db03140ef4948dfc33097f
ts=2021-10-22T11:02:04Z level=info caller=builder/exporters_builder.go:266 msg="Exporter was built." component=traces traces_config=default kind=exporter name=otlp/0
ts=2021-10-22T11:02:04Z level=info caller=builder/exporters_builder.go:93 msg="Exporter is starting..." component=traces traces_config=default kind=exporter name=otlp/0
ts=2021-10-22T11:02:04Z level=info caller=builder/exporters_builder.go:98 msg="Exporter started." component=traces traces_config=default kind=exporter name=otlp/0
ts=2021-10-22T11:02:04Z level=info caller=builder/pipelines_builder.go:207 msg="Pipeline was built." component=traces traces_config=default pipeline_name=traces pipeline_datatype=traces
ts=2021-10-22T11:02:04Z level=info caller=builder/pipelines_builder.go:52 msg="Pipeline is starting..." component=traces traces_config=default pipeline_name=traces pipeline_datatype=traces
ts=2021-10-22T11:02:04Z level=info caller=builder/pipelines_builder.go:63 msg="Pipeline is started." component=traces traces_config=default pipeline_name=traces pipeline_datatype=traces
ts=2021-10-22T11:02:04Z level=info caller=builder/receivers_builder.go:231 msg="Receiver was built." component=traces traces_config=default kind=receiver name=jaeger datatype=traces
ts=2021-10-22T11:02:04Z level=info caller=builder/receivers_builder.go:71 msg="Receiver is starting..." component=traces traces_config=default kind=receiver name=jaeger
ts=2021-10-22T11:02:04Z level=info caller=static/strategy_store.go:201 msg="No sampling strategies provided or URL is unavailable, using defaults" component=traces traces_config=default kind=receiver name=jaeger
ts=2021-10-22T11:02:04Z level=info caller=builder/receivers_builder.go:76 msg="Receiver started." component=traces traces_config=default kind=receiver name=jaeger
ts=2021-10-22T11:02:04.395934568Z caller=manager.go:208 level=debug msg="Applying integrations config changes"
ts=2021-10-22T11:02:04.397510327Z caller=server.go:77 level=info msg="server configuration changed, restarting server"
ts=2021-10-22T11:02:04.399925501Z caller=gokit.go:47 level=info http=[::]:8080 grpc=[::]:9095 msg="server listening on addresses"
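Beyond the startup log, the Agent also exposes its own Prometheus metrics on the http_listen_port (8080 here); counters along the lines of receiver accepted/refused spans and exporter sent/failed spans (the exact metric prefixes vary between Agent versions) should show whether the Jaeger receiver is getting anything and whether it is forwarding to Tempo, e.g.:
$ kubectl port-forward -n grafana-agent deploy/grafana-agent-full 8080:8080 &
$ curl -s http://localhost:8080/metrics | grep -Ei 'accepted_spans|refused_spans|sent_spans'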
Using netcat, I have tried to verify that the port is open (spinning up a temporary troubleshooting container):
$ kubectl run tmp-shell -n grafana-agent --rm -i --tty --image nicolaka/netshoot -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-5.1# nc -v -z -u grafana-agent-full.grafana-agent 6831
Connection to grafana-agent-full.grafana-agent 6831 port [udp/*] succeeded!
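To complement the netcat test from the sending side, one could also dump the HotROD deployment's env to confirm it is pointed at this Service (JAEGER_AGENT_HOST / JAEGER_AGENT_PORT or equivalent); the deployment name and namespace below are placeholders again:
$ kubectl get deploy <hotrod-deployment> -n <hotrod-namespace> -o jsonpath='{.spec.template.spec.containers[*].env}'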
I’m new to Kubernetes so I would not be surprised if this is some simple n00b thing that I have missed…
Cheers!