Dashboard / Integration for reading WildFly logs

Hi,

I'm a complete beginner with Grafana.
I would like to read the WildFly log files in Grafana:

  1. Is there a recommendation for an existing dashboard for this?
  2. How can I integrate it in Grafana?
  3. Which additional Docker containers (besides Grafana and WildFly) do I need?

Can someone help here?

Do you have logging enabled for WildFly?

https://docs.wildfly.org/33/Admin_Guide.html#Logging

Yes, it looks like this:

<subsystem xmlns="urn:jboss:domain:logging:8.0">
            <console-handler name="CONSOLE">
                <level name="DEBUG"/>
                <formatter>
                    <named-formatter name="COLOR-PATTERN"/>
                </formatter>
            </console-handler>
            <periodic-rotating-file-handler name="FILE" autoflush="true">
                <formatter>
                    <named-formatter name="PATTERN"/>
                </formatter>
                <file relative-to="jboss.server.log.dir" path="server.log"/>
                <suffix value=".yyyy-MM-dd"/>
                <append value="true"/>
            </periodic-rotating-file-handler>
            <logger category="com.arjuna">
                <level name="WARN"/>
            </logger>
            <logger category="com.networknt.schema">
                <level name="WARN"/>
            </logger>
            <logger category="io.jaegertracing.Configuration">
                <level name="WARN"/>
            </logger>
            <logger category="org.jboss.as.config">
                <level name="DEBUG"/>
            </logger>
            <logger category="sun.rmi">
                <level name="WARN"/>
            </logger>
            <logger category="org.pac4j">
                <level name="DEBUG"/>
            </logger>
            <logger category="io.buji">
                <level name="DEBUG"/>
            </logger>
            <logger category="org.thymeleaf">
                <level name="OFF"/>
            </logger>
            <root-logger>
                <level name="INFO"/>
                <handlers>
                    <handler name="CONSOLE"/>
                    <handler name="FILE"/>
                </handlers>
            </root-logger>
            <formatter name="PATTERN">
                <pattern-formatter pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/>
            </formatter>
            <formatter name="COLOR-PATTERN">
                <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/>
            </formatter>
        </subsystem>
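Side note: if the goal is to parse these logs downstream (Filebeat, Grafana), WildFly's logging subsystem can also emit JSON via a `json-formatter`, which is much easier to parse than the pattern format. A minimal sketch — the handler and formatter names here are illustrative, and the new handler would still need to be added to the `root-logger`'s `<handlers>` list:

```xml
<!-- Sketch: a second file handler that writes JSON lines; names are illustrative -->
<periodic-rotating-file-handler name="JSON-FILE" autoflush="true">
    <formatter>
        <named-formatter name="JSON"/>
    </formatter>
    <file relative-to="jboss.server.log.dir" path="server-json.log"/>
    <suffix value=".yyyy-MM-dd"/>
    <append value="true"/>
</periodic-rotating-file-handler>
<formatter name="JSON">
    <json-formatter/>
</formatter>
```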

Here is my filebeat.yaml:

## Log files to be monitored
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /opt/jboss/wildfly/standalone/log/*.log  # Path to the WildFly log files
    fields:
      service: wildfly_gixxshare
    fields_under_root: true


## Additional processing steps applied to the log data before it is sent to the output.
processors:
  - timestamp:
      field: "@timestamp"  # The timestamp field in the logs
      layouts:
        - '2006-01-02 15:04:05'  # Timestamp layout used in the logs

output.elasticsearch:
  hosts: ["http://${ELASTICSEARCH_HOSTNAME}"]                   # URL of the Elasticsearch cluster
  username: "elastic"
  password: "${ELASTICSEARCH_PASSWORD}"                         # Password from an environment variable
  index: "wildfly-logs-%{[fields.service]}-%{+yyyy.MM.dd}"      # Index naming scheme

#setup.kibana:
#  host: "http://${GRAFANA_HOSTNAME}"
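A note for anyone comparing this with the docker-compose file below: the Compose file mounts the WildFly logs at `/var/log/wildfly` inside the Filebeat container, so the input path above would not match anything there. A sketch assuming that mount, also using the `filestream` input that replaces the deprecated `log` input in Filebeat 8.x:

```yaml
# Sketch, assuming the Compose mount /var/log/wildfly from the file below
filebeat.inputs:
  - type: filestream
    id: wildfly-logs
    enabled: true
    paths:
      - /var/log/wildfly/*.log   # path as seen inside the Filebeat container
    fields:
      service: wildfly_gixxshare
    fields_under_root: true
```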

Here is my docker-compose.yaml:

version: '3.7'

services:
  
  ########################################################################################################################
  prometheus:
    image: prom/prometheus:v2.42.0
    container_name: prometheus
    volumes:
      - ${ROOT_DIR}/prometheus/prometheus.yaml:/etc/prometheus/prometheus.yaml
      - prometheus_data:/prometheus
    # ports:
    #   - "9090:9090"
    networks:
      - app-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.prometheus.rule=Host(`${PROMOTHEUS_HOSTNAME}`)"
      - "traefik.http.routers.prometheus.entrypoints=web"
      - "traefik.http.services.prometheus.loadbalancer.server.port=${PROMOTHEUS_PORT}"
      - "traefik.http.services.prometheus.loadbalancer.passhostheader=true"
      - "traefik.http.routers.prometheus.middlewares=compresstraefik"
      - "traefik.http.middlewares.compresstraefik.compress=true"
      - "traefik.docker.network=app-network"

  ########################################################################################################################
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - "ELASTIC_PASSWORD=${ELASTICSEARCH_PASSWORD}"             # Sets the password for the `elastic` user
      - "xpack.security.enabled=false"                           # Security is disabled for now -> enable later
     # - "xpack.security.transport.ssl.enabled=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.elasticsearch.rule=Host(`${ELASTICSEARCH_HOSTNAME}`)"
      - "traefik.http.routers.elasticsearch.entrypoints=web"
      - "traefik.http.services.elasticsearch.loadbalancer.server.port=${ELASTICSEARCH_PORT}"
      - "traefik.http.services.elasticsearch.loadbalancer.passhostheader=true"
      - "traefik.http.routers.elasticsearch.middlewares=compresstraefik"
      - "traefik.http.middlewares.compresstraefik.compress=true"
      - "traefik.docker.network=app-network"
    networks:
      - app-network
  
  
  ########################################################################################################################
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.15.0
    container_name: filebeat
    volumes:
      - ${ROOT_DIR}/filebeat/filebeat.yaml:/usr/share/filebeat/filebeat.yaml
      - ${ROOT_DIR}/wildfly_gixxshare/logs:/var/log/wildfly
    networks:
      - app-network
    extra_hosts:
      - "elasticsearch.localhost:192.168.178.93"
    depends_on:
      - elasticsearch
  
  
  ########################################################################################################################
  grafana:
    image: grafana/grafana-enterprise:11.1.4
    container_name: grafana
    volumes:
      - grafana_data:/grafana
      - ${ROOT_DIR}/grafana/datasources.yaml:/etc/grafana_data/datasources.yaml
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
    # ports:
    #   - "3000:3000"
    networks:
      - app-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`${GRAFANA_HOSTNAME}`)"
      - "traefik.http.routers.grafana.entrypoints=web"
      - "traefik.http.services.grafana.loadbalancer.server.port=${GRAFANA_PORT}"
      - "traefik.http.services.grafana.loadbalancer.passhostheader=true"
      - "traefik.http.routers.grafana.middlewares=compresstraefik"
      - "traefik.http.middlewares.compresstraefik.compress=true"
      - "traefik.docker.network=app-network"
    extra_hosts:
      - "prometheus.localhost:192.168.178.93"
      - "elasticsearch.localhost:192.168.178.93"
    depends_on:
      - prometheus
      - elasticsearch

volumes:
  prometheus_data:
  grafana_data:
  elasticsearch_data:

networks:
  app-network:
    external: true

Here is my datasources.yaml:

apiVersion: 1

datasources:
  - name: Elastic
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: '[metrics-]YYYY.MM.DD'
      interval: Daily
      timeField: '@timestamp'
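Two things in this datasource stand out and may be worth checking. Inside the Grafana container, `localhost` resolves to the Grafana container itself, so the URL should use the Compose service name instead. And the index pattern `[metrics-]YYYY.MM.DD` does not match what Filebeat writes. A sketch, assuming the service name `elasticsearch` from the Compose file and the plain `wildfly-logs` index suggested later in this thread:

```yaml
apiVersion: 1

datasources:
  - name: Elastic
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200   # Compose service name, not localhost
    jsonData:
      index: wildfly-logs            # must match the Filebeat output index
      timeField: '@timestamp'
```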

And my env variables:

#####################
# GENERAL
#####################
ROOT_DIR=/Users/myuser/Desktop/Docker/Monitoring


#####################
# PROMETHEUS
#####################
PROMOTHEUS_HOSTNAME=prometheus.localhost
PROMOTHEUS_PORT=9090 

#####################
# ELASTIC SEARCH
#####################
ELASTICSEARCH_HOSTNAME=elasticsearch.localhost
ELASTICSEARCH_PORT=9200
ELASTICSEARCH_PASSWORD=test

#####################
# GRAFANA
#####################
GRAFANA_HOSTNAME=monitoring.localhost
GRAFANA_ADMIN_PASSWORD=admin
GRAFANA_PORT=3000

When I try to connect Elasticsearch in Grafana, I get this error:

Try the full index name in the ES datasource config - don't use wildcards there.

You mean in the filebeat.yaml?

Change from:
index: "wildfly-logs-%{[fields.service]}-%{+yyyy.MM.dd}"

To:
index: "wildfly-logs"  # Index naming scheme

For a beginner you sure have a lot cooking on the grill. Why?

  • wildfly logs
  • filebeat
  • prometheus
  • elasticsearch
  • grafana

What is the relationship between your logs and Elasticsearch?

Start small and work your way up.

I would like to collect the logs from WildFly and create some alerts based on them…

I have a Java EE application and would like to set up monitoring.

What do you mean by "full index name"?

So, can you help?
What information would help to solve this issue?

Sure can, if you answer the questions that were asked of you.

Sure. What do you mean by "what is the relationship between your logs and Elasticsearch"?

As I wrote:
I have a WildFly server and would like to collect those log files, and also create some alerts etc. with Grafana.