Specific log location suddenly disappears in Grafana Explore

We have an issue where the logs from one specific location suddenly disappear.

Below is the configuration file installed on the Windows Server VM:

integrations:
  prometheus_remote_write:
  - basic_auth:
      password: 
      username:
    url: https://prometheus
  agent:
    enabled: true
    relabel_configs:
    - action: replace
      source_labels:
      - agent_hostname
      target_label: instance
    - action: replace
      target_label: job
      replacement: "integrations/agent-check"
    metric_relabel_configs:
    - action: keep
      regex: (prometheus_target_.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
      source_labels:
      - __name__
  # Add here any snippet that belongs to the `integrations` section.
  # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
  windows_exporter:
    enabled: true
    instance: XRF-PROD # must match instance used in logs
    relabel_configs:
    - target_label: job
      replacement: 'integrations/windows_exporter' # must match job used in logs
logs:
  configs:
  - clients:
    - basic_auth:
        password: 
        username:
      url: https://logs
    name: integrations
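    # the positions file records how far each tailed log file has been read,
    # so tailing resumes from the same offset after a restart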
    positions:
      filename: "D:\\tmp\\positions.yaml"
    scrape_configs:
      # Add here any snippet that belongs to the `logs.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
    - job_name: integrations/windows-exporter-application
      windows_events:
        use_incoming_timestamp: true
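        # the bookmark file stores the last event record read, so event
        # collection resumes where it left off after a restart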
        bookmark_path: "C:\\Program Files\\Grafana Agent\\bookmarks-app.xml"
        eventlog_name: "Application"
        labels:
          job: integrations/windows_exporter
          instance: XRF-PROD # must match instance used in windows_exporter
      relabel_configs:
        - source_labels: ['computer']
          target_label: 'agent_hostname'
      pipeline_stages:
        - json:
            expressions:
              source: source
        - labels:
            source:
    - job_name: integrations/windows-exporter-system
      windows_events:
        use_incoming_timestamp: true
        bookmark_path: "C:\\Program Files\\Grafana Agent\\bookmarks-sys.xml"
        eventlog_name: "System"
        labels:
          job: integrations/windows_exporter
          instance: XRF-PROD # must match instance used in windows_exporter
    - job_name: composer-logs-job
      static_configs:
        - targets: [localhost]
          labels:
            instance: XRF-PROD
            job: composer-logs
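            # __path__ is the file to tail; Promtail-style paths also accept glob patterns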
            __path__: "D:\\UNIRITA\\SmartConductor\\plugin\\xrf_composer\\log\\composer.log"
    - job_name: xrf-logs-job
      static_configs:
        - targets: [localhost]
          labels:
            instance: XRF-PROD
            job: xrf-logs
            __path__: "D:\\XRF\\log\\xrf_log.log.0"

metrics:
  configs:
  - name: integrations
    remote_write:
    - basic_auth:
        password: 
        username:
      url: https://prometheus
    scrape_configs:
      # Add here any snippet that belongs to the `metrics.configs.scrape_configs` section.
      # For a correct indentation, paste snippets copied from Grafana Cloud at the beginning of the line.
  global:
    scrape_interval: 60s
  wal_directory: /tmp/grafana-agent-wal

In the Grafana Cloud UI, both log locations are initially visible.

However, after a day or two, the logs from D:\UNIRITA\SmartConductor\plugin\xrf_composer\log\composer.log are no longer visible.

Restarting the Grafana Agent service doesn't resolve the issue; we have to reboot the server for the logs to reappear. But after a while, they are gone again.

Your help is greatly appreciated.

Hi @edhxb! Is it possible that these logs fall outside the time window selected in Explore? The range defaults to the last 6 hours and can be changed with the time picker in the top right-hand corner of the Explore and dashboard UI. Restarting the Grafana Agent service will not have any impact on logs that have already been received and stored in your hosted Loki instance.
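One quick way to check: run the query below in Explore with the range widened to something like the last 7 days (the labels come from the composer-logs-job section of your config):

{job="composer-logs", instance="XRF-PROD"}

If entries appear for the earlier period but stop at some point, that would suggest ingestion stopped on the agent side, rather than the logs merely being hidden by the time picker.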