OpenTelemetry collector with Loki

I have the OTel Collector using the Loki exporter, and the logs are showing up fine in Loki. I see the logs in Loki with several labels and Detected fields. I can query the logs on the labels but not on the Detected fields.

I am interested in querying the logs based on TraceID or traceID, but that doesn't work. What am I doing wrong?

Hi @rupinder10 ,

If the field is not an indexed label, you will have to apply some parsing first. Even then, the result is basically a text filter (but you can click a button instead of typing it in).

Here is an example

Only a few Log labels and a couple of Detected fields

Some parsing applied

Now you can see more Log labels, still the same logs.

Here is the full query as text

{container="hotrod"} | pattern "<time>\t<level>\t<component>\t<status>\t<json_log>\n" | line_format "{{.json_log}}" | json

I can now use the filters like indexed labels. Clicking a filter just adds it to the end of the query, resulting in

{container="hotrod"} | pattern "<time>\t<level>\t<component>\t<status>\t<json_log>\n" | line_format "{{.json_log}}" | json | trace_id="584397635b19c4b6"

Hope that helps. I can elaborate if needed :slight_smile:

1 Like

Thanks. My intention is to configure Trace to Logs. I configured it, but the query used to jump to the logs is {TraceID=""}. Since that doesn't work, the Trace to Logs link doesn't yield anything.

So how do I add the parsing logic, and where do I add it? I am only 3 days into using Loki.

This post might help. It took me a few weeks to get my head around this :slight_smile:

1 Like

@b0b Thanks for that link. But that covers going from logs to traces, i.e. Loki to Tempo. I already have that working. My question is about going from Tempo to Loki. I want that little button (highlighted in yellow) to show the logs associated with that trace.

I didn’t even know about that function :grimacing:

Looks like the spans and logs need shared tags/labels for this to work. I had a look at my traces to see which tags they have.

If I set one of those tags in the Tempo datasource, then the Loki query is pre-filled the way I would expect, but my logs do not share the tags, so nothing is found.
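For reference, a minimal sketch of how that tag list can be provisioned on the Tempo datasource. This is a sketch under assumptions: Grafana provisioning files, a Loki datasource with uid `loki`, and an illustrative shared tag `app` that would have to exist both as a span tag and as a Loki label:

```yaml
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://tempo:3100        # assumed in-cluster Tempo URL
    jsonData:
      tracesToLogs:
        datasourceUid: loki       # uid of your Loki datasource
        tags: ['app']             # illustrative; must exist as span tag AND Loki label
        spanStartTimeShift: '-5m' # widen the log search window around the span
        spanEndTimeShift: '5m'
```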

I tried that. I have a tag named rootContextId on my traces. When I set that up I get an error, and it doesn't let me add it to the Tempo datasource. See below:

I get that error as well. I just ignore it… My changes get saved anyway.

I think some initial data was lost before I got everything to work, so the trace with id: 0 was sent into a black hole. This is just an explanation I have made up for myself. Probably not what happened.

Did you ever work this out? I’m currently in the same position; I can go from Loki to Tempo but not from Tempo to Loki. Would really appreciate any guidance!

Hi @rupinder10,

I am currently trying to set up the OTel Collector with the Loki exporter too. How did you get the logs into Loki? According to the collector's log output the exporter has started, but my pod's logs are not being forwarded to Loki.
My setup is:

[Pod + OTel instrumentation] → [OTel collector w/ Loki exporter] → [Loki] → [Grafana w/ datasource Loki]
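A minimal sketch of the collector wiring I am attempting (the endpoint is assumed to be the in-cluster Loki push URL; receiver and service names may differ in your setup):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  loki:
    endpoint: http://loki:3100/loki/api/v1/push  # assumed in-cluster Loki push URL

service:
  pipelines:
    logs:                   # logs need their own pipeline entry
      receivers: [otlp]
      exporters: [loki]
```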

Thanks for your help!

Hi @rupinder10, I want to add log labels but I am not able to add them in the log label section. Can you help me add labels under the log label section? Do you have any examples of that?

Hi @b0b, I want to add log labels but I am not able to add them in the log label section. Can you help me add labels under the log label section? Do you have any examples of that?

thank you

Hi @lakshmanin999 ,
as far as I know, you need to add these labels in your log shipper agent config (probably Promtail).

I have used grafana-agent, which should use a Promtail-compatible config. I plan on moving to Promtail but have not had time to test this config with Promtail yet.

You basically use a Prometheus relabel_configs configuration for the labels.

      - name: kubernetes_pods
        positions:
          filename: /tmp/positions_pods.yaml
        scrape_configs:
          - job_name: kubernetes_pods
            kubernetes_sd_configs:
              - role: pod
            pipeline_stages:
              - docker: {}
            relabel_configs:
              - source_labels:
                - __meta_kubernetes_pod_controller_name
                target_label: __service__
              - source_labels:
                - __meta_kubernetes_pod_node_name
                target_label: __host__
              - action: labelmap
                regex: __meta_kubernetes_pod_label_(app|project|service)
              - action: replace
                replacement: $1
                source_labels:
                - name
                target_label: job
              - action: replace
                source_labels:
                - __meta_kubernetes_namespace
                target_label: namespace
              - action: replace
                source_labels:
                - __meta_kubernetes_pod_name
                target_label: pod
              - action: replace
                source_labels:
                - __meta_kubernetes_pod_container_name
                target_label: container
              - replacement: /var/log/pods/*$1/*.log
                separator: /
                source_labels:
                - __meta_kubernetes_pod_uid
                - __meta_kubernetes_pod_container_name
                target_label: __path__

More info

In my scenario I'm using OTel (OpenTelemetry) to send logs, traces, and metrics to a Grafana dashboard. Can I use the OTel config YAML to add the labels there? Can you please let me know?

Sorry. I have no idea how that would be done.

1 Like

For what it's worth, I finally got this to work using derived fields and Loki + Jaeger. This is an example of my loki.yaml in my grafana/datasources folder:

    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        uid: loki
        access: proxy
        url: http://loki:3100
        jsonData:
          maxLines: 1000
          derivedFields:
            # Field with internal link pointing to a data source in Grafana.
            # datasourceUid value can be anything, but it should be unique across all defined data source uids.
            - datasourceUid: thisNameNeedsToMatchYourTraceDataSource
              matcherRegex: '\[traceid=(\w+)\]'
              name: traceid
              # url will be interpreted as a query for the datasource
              url: '$${__value.raw}'
              # optional urlDisplayLabel sets a custom display label for the link.
              urlDisplayLabel: 'View Trace'

In the same datasources folder I have a jaeger.yaml with its configuration, using the same uid (thisNameNeedsToMatchYourTraceDataSource):

    datasources:
      - name: Jaeger
        # UID should match the datasourceUid in derivedFields.
        uid: thisNameNeedsToMatchYourTraceDataSource
        type: jaeger
        isDefault: true
        url: http://jaeger:16686/jaeger/ui
        editable: true
        jsonData:
          nodeGraph:
            enabled: true
          tracesToLogsV2:
            spanStartTimeShift: '1s'
            spanEndTimeShift: '6h'
            datasourceUid: loki
            tags: ['job', 'instance']
            filterBySpanID: false
            filterByTraceID: false
            mapTagNamesEnabled: true
            customQuery: true
            query: '{exporter="OTLP"} |= "$${__span.traceId}" | json | line_format "{{.body}}"'
            mappedTags:
              - key: traceid
                value: traceid