Hi community.
I’m new to the Grafana-Loki-Alloy stack. I have a local deployment of the stack in containers, and it’s working correctly.
I want to use the stack to process SystemOut.log files from an IBM WebSphere platform on a monthly basis. Since the processing is monthly, I need to be able to process older log files, ideally maintaining a history of at least 60 days, so I can compare results from one month to the next.
My file structure is similar to:
./alloy ← stack configurations (docker-compose.yaml, config.alloy, loki-config.yaml)
./logs/2026-01/SystemOut*.log ← files to process
Initially, I tried processing the log files separately. Then I switched to a bash script that groups the files, sorts them by date, and generates a single output file for processing, like:
./logs/2026-01/CPE.log
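For context, the grouping step amounts to roughly the following (a simplified sketch sorting by file modification time; the details of the ordering may differ from what you need if your filenames encode the date instead):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Concatenate one month's SystemOut*.log files, oldest first by modification
# time, into a single CPE.log in the same directory.
merge_month() {
  local month_dir="$1"
  # ls -1tr sorts oldest-first by mtime; adjust if filenames encode the date.
  ls -1tr "${month_dir}"/SystemOut*.log | xargs cat > "${month_dir}/CPE.log"
}

# Example: merge_month ./logs/2026-01
```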
In both cases, almost all of the log entries were excluded due to the “timestamp too old” error.
Some examples of logs I’m trying to process are as follows:
[u713] [C-CPE04] [1/13/26 16:23:38:062 CLST] 0000dccb webapp E com.ibm.ws.webcontainer.webapp.WebApp logServletError SRVE0293E: [Servlet Error]-[ListenerNst]: com.filenet.api.exception.EngineRuntimeException: FNRCE0066E: E_UNEXPECTED_EXCEPTION: An unexpected exception occurred. Message was: null
[u713] [C-CPE04] [1/13/26 16:24:18:662 CLST] 000000b3 HardwareInfoC W ASPS0023W: The logical partition on which this node resides has a mode of shared, a type of uncapped, but performance collection is not enabled.
[u712] [C-CPE03] [1/13/26 16:30:24:236 CLST] 00003da9 LocalTranCoor E WLTC0017E: Resources rolled back due to setRollbackOnly() being called.
[u712] [C-CPE03] [1/13/26 16:30:24:236 CLST] 00003da9 webapp E com.ibm.ws.webcontainer.webapp.WebApp logServletError SRVE0293E: [Servlet Error]-[ListenerNst]: com.filenet.api.exception.EngineRuntimeException: FNRCE0066E: E_UNEXPECTED_EXCEPTION: An unexpected exception occurred. Message was: null
[u713] [C-CPE02] [1/13/26 16:32:36:028 CLST] 00006b5c LocalTranCoor E WLTC0017E: Resources rolled back due to setRollbackOnly() being called.
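Each entry header has the shape [host] [JVM] [timestamp] thread logger level class method message. As a quick sanity check of what the third bracketed group (the one my stage.regex below captures as timestamp) actually contains, a sed sketch:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Extract the third bracketed field (the WAS timestamp) from a log header line.
extract_ts() {
  printf '%s\n' "$1" | sed -E 's/^\[[^]]+\][[:space:]]+\[[^]]+\][[:space:]]+\[([^]]+)\].*/\1/'
}

extract_ts '[u713] [C-CPE04] [1/13/26 16:23:38:062 CLST] 0000dccb webapp E ...'
# prints: 1/13/26 16:23:38:062 CLST
```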
My configuration file for Alloy is:
livedebugging {
  enabled = true
}

// ----------------------------------------------------------------------------
// WAS log processing
// ----------------------------------------------------------------------------
loki.source.file "was" {
  targets = [{
    __path__         = "/var/log/*/CPE.log",
    __path_exclude__ = ".*\\.lck$",
  }]
  forward_to = [loki.process.was.receiver]

  file_match {
    enabled     = true
    sync_period = "10s"
  }

  tail_from_end = false
  encoding      = "UTF-8"
}

loki.process "was" {
  stage.regex {
    expression = "/var/log/(?P<year>\\d{4})-(?P<month>\\d{2})/"
    source     = "filename"
  }

  stage.static_labels {
    values = {
      type = "SystemOut",
    }
  }

  stage.labels {
    values = {
      year  = "",
      month = "",
    }
  }

  stage.multiline {
    firstline     = "^\\[([^\\]]+)\\]\\s+\\[([^\\]]+)\\]\\s+\\[\\d{1,2}\\/\\d{1,2}\\/\\d{2}"
    max_lines     = 500
    max_wait_time = "3s"
  }

  // Extract the fields from the log header
  stage.regex {
    expression = "^\\[(?P<host>[^\\]]+)\\]\\s+\\[(?P<jvm>[^\\]]+)\\]\\s+\\[(?P<timestamp>[^\\]]+)\\]\\s+(?P<thread>\\S+)\\s+(?P<category>\\S+)\\s+(?P<level>[A-Z])\\s+(?P<class>\\S+)\\s+(?P<method>[^ ]+)\\s*(?P<message>.*)"
  }

  stage.timestamp {
    source = "timestamp"
    format = "1/2/06 15:04:05:000 MST"
    fallback_formats = [
      "01/02/06 15:04:05:000 MST",
      "1/2/2006 15:04:05:000 MST",
      "01/02/2006 15:04:05:000 MST",
    ]
  }

  stage.labels {
    values = {
      host  = "",
      jvm   = "",
      level = "",
    }
  }

  // Drop empty or very short lines; only applied to logs that already have
  // source_type set (i.e., already classified).
  stage.match {
    selector = "{source_type=~\".+\"}"

    // drop lines that are completely empty or whitespace-only
    stage.drop {
      expression          = "^\\s*$"
      drop_counter_reason = "linea_vacia"
    }

    // drop lines that contain this pattern
    stage.drop {
      expression          = ".*SystemOut O (XML|.get).*"
      drop_counter_reason = "patron detectado"
    }
  }

  // Rate limiting to avoid overload
  stage.limit {
    rate  = 10000
    burst = 20000
    drop  = true
  }

  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
  external_labels = {}
}
My configuration for Loki is:
---
# This is a complete configuration to deploy Loki backed by the filesystem.
# The index will be shipped to the storage via tsdb-shipper.
auth_enabled: false

limits_config:
  allow_structured_metadata: true
  volume_enabled: true
  reject_old_samples: false
  reject_old_samples_max_age: 12w

server:
  http_listen_port: 3100

common:
  ring:
    instance_addr: 0.0.0.0
    kvstore:
      store: inmemory
  replication_factor: 1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules

schema_config:
  configs:
    - from: 2020-05-15
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: /tmp/loki/index
    cache_location: /tmp/loki/index_cache
  filesystem:
    directory: /tmp/loki/chunks

pattern_ingester:
  enabled: true

# Note: We are setting the max chunk age far lower than the default expected value.
# This is due to the fact this scenario is used within the LogCLI demo and we need
# a short flush time, to show how
#   logcli stats --since 24h '{service_name="Delivery World", package_size="Large"}'
# works.
ingester:
  max_chunk_age: 5m  # Should be 2 hours
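As a sanity check on the Loki side (to separate a Loki rejection from something in the Alloy pipeline), a back-dated entry can be pushed straight to the push API. A sketch of my own for building the request body (assumes GNU date and Loki reachable at localhost:3100; the job label is arbitrary):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Epoch nanoseconds for a given date string (GNU date).
ns_ts() {
  echo "$(date -u -d "$1" +%s)000000000"
}

# JSON body for Loki's push API carrying a single back-dated line.
push_body() {
  printf '{"streams":[{"stream":{"job":"backfill-test"},"values":[["%s","%s"]]}]}' \
    "$(ns_ts "$1")" "$2"
}

# Usage (against a running Loki):
#   curl -s -H 'Content-Type: application/json' \
#     -X POST http://localhost:3100/loki/api/v1/push \
#     -d "$(push_body '2026-01-13 16:23:38' 'old test line')"
```

If such a push is accepted, the "timestamp too old" drops are happening before the write, in the pipeline.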
I would greatly appreciate your help in identifying where my problem lies, as I haven't been able to find a solution yet. The example and configurations I've attached are for processing the single unified file. If processing the files separately is advisable instead, I have no problem changing my configuration back.
Thank you in advance for your help.
Regards.
Sergio