Std Logs in Grafana Loki graph

Hi team,

I’m using Grafana Loki to build dashboards, and I need to group the logs by level to create a graph. I used this query to sum the logs by log level:
sum by(severity,level) (count_over_time({source="${source}",zone="$zone", vm_name="$vm_name"} | json | logfmt | drop error, error_details [$__range]))



As you can see, in addition to the log levels, a series named Std Logs always appears on the graph.
Can anyone explain what Std Logs is and how I can remove it from my graph?
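One way to inspect where the extra series comes from (a sketch; it assumes the unlabeled series is made of lines where the json parser finds neither a severity nor a level field, and that Loki treats an absent label as an empty string in label filters):

```logql
{source="${source}", zone="$zone", vm_name="$vm_name"}
  | json
  | severity="" and level=""
```

If the lines this returns are the plain-text exporter logs rather than the JSON ones, that would point to the origin of the extra series.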

You typically wouldn’t chain json and logfmt together. What does your original log look like?

Hi @tonyswumac,

Here is an example of my original logs:

{"severity":"INFO","time":"2022-08-10T12:55:08.569Z","retry":0,"queue":"cronjob:user_status_cleanup_batch","version":0,"queue_namespace":"cronjob","args":,"class":"UserStatusCleanup::BatchWorker","jid":"05098a1647c13891db0559f3","created_at":"2022-08-10T12:55:08.512Z","meta.caller_id":"Cronjob","correlation_id":"9d9c8a296c2479f18a8e8de8ee7b5647","meta.root_caller_id":"Cronjob","meta.feature_category":"users","worker_data_consistency":"always","idempotency_key":"resque:gitlab:duplicate:cronjob:user_status_cleanup_batch:63596f84f2aa01599e5d284b58c5b497d0d152817e7aa09bae73984558ee32bf","size_limiter":"validated","enqueued_at":"2022-08-10T12:55:08.515Z","job_size_bytes":2,"pid":544,"message":"UserStatusCleanup::BatchWorker JID-05098a1647c13891db0559f3: done: 0.051376 sec","job_status":"done","scheduling_latency_s":0.002508,"redis_calls":1,"redis_duration_s":0.000303,"redis_read_bytes":10,"redis_write_bytes":364,"redis_queues_calls":1,"redis_queues_duration_s":0.000303,"redis_queues_read_bytes":10,"redis_queues_write_bytes":364,"db_count":1,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":1,"db_main_count":1,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.038,"db_main_duration_s":0.038,"db_main_replica_duration_s":0.0,"cpu_s":0.004882,"mem_objects":1314,"mem_bytes":108752,"mem_mallocs":275,"mem_total_bytes":161312,"worker_id":"sidekiq_0","rate_limiting_gates":,"duration_s":0.051376,"completed_at":"2022-08-10T12:55:08.568Z","load_balancing_strategy":"primary","db_duration_s":0.038482}
{"severity":"INFO","time":"2022-08-10T12:55:08.588Z","retry":0,"queue":"cronjob:loose_foreign_keys_cleanup","version":0,"queue_namespace":"cronjob","args":,"class":"LooseForeignKeys::CleanupWorker","jid":"78b683d7726e7492ba23a0c4","created_at":"2022-08-10T12:55:08.360Z","meta.caller_id":"Cronjob","correlation_id":"8b816c4b7b53b9f86504322310975691","meta.root_caller_id":"Cronjob","meta.feature_category":"pods","worker_data_consistency":"always","idempotency_key":"resque:gitlab:duplicate:cronjob:loose_foreign_keys_cleanup:2180384c8554b410820afa81ad3479301139c7627524f882c6bca44488742df9","size_limiter":"validated","enqueued_at":"2022-08-10T12:55:08.364Z","job_size_bytes":2,"pid":544,"message":"LooseForeignKeys::CleanupWorker JID-78b683d7726e7492ba23a0c4: done: 0.221251 sec","job_status":"done","scheduling_latency_s":0.002336,"redis_calls":3,"redis_duration_s":0.004359,"redis_read_bytes":12,"redis_write_bytes":667,"redis_queues_calls":1,"redis_queues_duration_s":0.00188,"redis_queues_read_bytes":10,"redis_queues_write_bytes":366,"redis_shared_state_calls":2,"redis_shared_state_duration_s":0.002479,"redis_shared_state_read_bytes":2,"redis_shared_state_write_bytes":301,"db_count":9,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":9,"db_main_count":9,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.077,"db_main_duration_s":0.077,"db_main_replica_duration_s":0.0,"cpu_s":0.024146,"mem_objects":5365,"mem_bytes":305112,"mem_mallocs":1060,"mem_total_bytes":519712,"worker_id":"sidekiq_0","rate_limiting_gates":,"extra.loose_foreign_keys_cleanup_worker.stats":{"over_limit":false,"delete_count_by_table":{},"update_count_by_table":{},"delete_count":0,"update_count":0,"connection":"main"},"duration_s":0.221251,"completed_at":"2022-08-10T12:55:08.588Z","load_balancing_strategy":"primary","db_duration_s":0.179604}

==> /var/log/gitlab/gitlab-exporter/current <==
2022-08-10_12:55:09.61105 127.0.0.1 - - [10/Aug/2022:12:55:09 UTC] "GET /ruby HTTP/1.1" 200 1012
2022-08-10_12:55:09.61108 - → /ruby
2022-08-10_12:55:09.79781 Passing 'exists?' command to redis as is; blind passthrough has been deprecated and will be removed in redis-namespace 2.0 (at /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/sidekiq-6.4.0/lib/sidekiq/api.rb:962:in `block (3 levels) in each')
2022-08-10_12:55:09.82959 127.0.0.1 - - [10/Aug/2022:12:55:09 UTC] "GET /sidekiq HTTP/1.1" 200 74366
2022-08-10_12:55:09.82962 - → /sidekiq

==> /var/log/gitlab/gitlab-rails/production.log <==
Started GET “/-/metrics” for 127.0.0.1 at 2022-08-10 12:55:10 +0000
Processing by MetricsController#index as HTML
Completed 200 OK in 49ms (Views: 0.8ms | ActiveRecord: 0.0ms | Elasticsearch: 0.0ms | Allocations: 930)

==> /var/log/gitlab/gitlab-rails/production_json.log <==
{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2022-08-10T12:55:10.660Z","params":,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.040632,"mem_objects":1883,"mem_bytes":878672,"mem_mallocs":4906,"mem_total_bytes":953992,"pid":1203147,"worker_id":"puma_2","rate_limiting_gates":,"correlation_id":"bac4c49a-d770-42f4-a928-648f6fb2b1b4","db_duration_s":0.0,"view_duration_s":0.00082,"duration_s":0.05052}

==> /var/log/gitlab/postgres-exporter/current <==
2022-08-10_12:55:10.77588 ts=2022-08-10T12:55:10.775Z caller=log.go:168 level=debug msg="Querying PostgreSQL version" server=/var/opt/gitlab/postgresql:5432
2022-08-10_12:55:10.77610 ts=2022-08-10T12:55:10.776Z caller=log.go:168 level=debug msg="Querying pg_setting view" server=/var/opt/gitlab/postgresql:5432
2022-08-10_12:55:10.77790 ts=2022-08-10T12:55:10.777Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_stat_activity
2022-08-10_12:55:10.77934 ts=2022-08-10T12:55:10.779Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_stat_database
2022-08-10_12:55:10.79305 ts=2022-08-10T12:55:10.792Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_postmaster
2022-08-10_12:55:10.79333 ts=2022-08-10T12:55:10.793Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_replication
2022-08-10_12:55:10.79412 ts=2022-08-10T12:55:10.794Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_stat_activity_autovacuum_active
2022-08-10_12:55:10.79560 ts=2022-08-10T12:55:10.795Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_vacuum_queue
2022-08-10_12:55:10.81721 ts=2022-08-10T12:55:10.817Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_total_relation_size
2022-08-10_12:55:10.93308 ts=2022-08-10T12:55:10.932Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_stat_activity_marginalia_sampler
2022-08-10_12:55:10.93953 ts=2022-08-10T12:55:10.939Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_oldest_blocked
2022-08-10_12:55:10.94113 ts=2022-08-10T12:55:10.941Z caller=log.go:168 level=debug msg="Querying namespace" namespace=pg_stat_replication

Could you explain to me why we don’t use JSON and logfmt together?

json is for parsing logs that are JSON-formatted; logfmt is for parsing logs that are key/value formatted. You should parse JSON logs with the json parser and key/value logs with logfmt, not mix the two.
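Following that advice, a possible sketch (it assumes the dashboard variables from the query above, and assumes the JSON-formatted and key/value-formatted streams can each be selected by the same label matchers, which may need refining per source) is to put two separate queries on the panel instead of one combined query:

```logql
# Query A: JSON-formatted logs (e.g. sidekiq, production_json.log)
sum by (severity) (count_over_time({source="${source}", zone="$zone", vm_name="$vm_name"} | json | severity != "" [$__range]))

# Query B: logfmt (key/value) logs (e.g. postgres-exporter)
sum by (level) (count_over_time({source="${source}", zone="$zone", vm_name="$vm_name"} | logfmt | level != "" [$__range]))
```

Each query then applies only the parser that matches its log format, instead of running every line through both parsers.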

Thank you!
With these logs, how can I remove the Std Logs series?
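One possible approach (a sketch, assuming the Std Logs series corresponds to lines that yield no severity or level label after parsing): keep only lines that produced one of those labels before counting, e.g.

```logql
sum by (severity, level) (
  count_over_time(
    {source="${source}", zone="$zone", vm_name="$vm_name"}
      | json
      | severity != "" or level != ""
      [$__range]
  )
)
```

The label filter drops unparsed lines, so they no longer contribute an unlabeled series to the graph.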

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.