How do I count lines by labels?

I have an Apache log and I want to know which URLs are requested most often. So far I have the following:

{filename="/var/log/apache2/other_vhosts_access.log"} | regexp `(?P<domain>[^:]+).* "(?P<method>[A-Z]+) (?P<url>(/[^/? ]+){0,2}).*`

which works fine (the labels have the right values).
I now want the equivalent of the SQL query: select count(*) from ... group by url, domain, method;
I tried the following:

count by(domain, url, method) (rate({filename="/var/log/apache2/other_vhosts_access.log"} | regexp `(?P<domain>[^:]+).* "(?P<method>[A-Z]+) (?P<url>(/[^/? ]+){0,2}).*` [$__range]))

but this does not give me a single value per label tuple, and with

count by(domain, url, method) ({filename="/var/log/apache2/other_vhosts_access.log"} | regexp `(?P<domain>[^:]+).* "(?P<method>[A-Z]+) (?P<url>(/[^/? ]+){0,2}).*`)

I get the error: parse error at line 1, col 155: syntax error: unexpected )

Try something like this (a plain log selector can't be aggregated directly; you need a range function such as count_over_time underneath, which is why the second query fails to parse):

sum by (domain, url, method) (
  count_over_time(
    <YOUR_QUERY>
  [$__interval])
)
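Substituting the query from the question, the full expression would look something like this (same log file and regexp as above; counts log lines per interval, then sums per label tuple):

```logql
sum by (domain, url, method) (
  count_over_time(
    {filename="/var/log/apache2/other_vhosts_access.log"}
      | regexp `(?P<domain>[^:]+).* "(?P<method>[A-Z]+) (?P<url>(/[^/? ]+){0,2}).*`
  [$__interval])
)
```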

You can even slap topk on top:

topk(10, sum by (domain, url, method) (
  count_over_time(
    <YOUR_QUERY>
  [$__interval])
))

If you can share an example log line, I can test it for you in the LogQL Analyzer as well.

The problem is that this is still displayed as a graph over time and not as a list where the time is ignored.

Hey, I’m also trying the same thing. Any luck so far?
I’ve tried this:
count_over_time({filename="/var/log/nginx/access.log"} |~ "(GET|POST) /geoserver/web" [30d])

Refer to this article.

this is what my log looks like:

{"name":"NSPanel","id":"10017b****","data":{"action":"update","deviceid":"10017b****","apikey":"f47a5333-****-****-****-************","userAgent":"device","d_seq":218583,"params":{"temperature":22.6,"humidity":"blank","tempUnit":0},"seq":"166"}}

my LogQL:

count by (name) (count_over_time({service_name="ewelink", ext="WSP_MSG"} | json [$__range]))

Replace $__auto with $__range and set Options.Type from Range to Instant,

and I get what I want: a list instead of time-series data.
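The same trick applies to the Apache example from the start of the thread: run the aggregation as an Instant query over the whole dashboard range and you get a single value per (domain, url, method) tuple instead of a time series. Something like this (assuming the same log file and regexp as in the original question):

```logql
sum by (domain, url, method) (
  count_over_time(
    {filename="/var/log/apache2/other_vhosts_access.log"}
      | regexp `(?P<domain>[^:]+).* "(?P<method>[A-Z]+) (?P<url>(/[^/? ]+){0,2}).*`
  [$__range])
)
```

With Options.Type set to Instant, this behaves like the SQL group-by count from the question, and you can display it in a Table panel.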