[Cassandra] Loki not creating Chunk or Index tables on Cassandra (v2.2.1)

I have a pretty basic setup where I’m trying to write to a Cassandra backend and Loki just isn’t creating any chunks.

Here’s my config:

      schema_config:
        configs:
        - from: 2020-05-15
          store: cassandra
          object_store: cassandra
          schema: v11
          index:
            prefix: loki_index
            period: 360h
          chunks:
            prefix: chunk
            period: 360h
      storage_config:
        cassandra:
          addresses: cassandra-1,cassandra-2,cassandra-3
          auth: false
          keyspace: loki
      chunk_store_config:
        max_look_back_period: 0s

Loki creates the keyspace, but isn’t creating any chunk or index related information:

cqlsh> SELECT * FROM system_schema.keyspaces;

 keyspace_name      | durable_writes | replication
--------------------+----------------+-------------------------------------------------------------------------------------
               loki |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}

(7 rows)
cqlsh> SELECT index_name FROM system_schema.indexes WHERE keyspace_name = 'loki';

 index_name
------------

(0 rows)

Same with tables:

cqlsh> SELECT table_name FROM system_schema.tables WHERE keyspace_name = 'loki';

 table_name
------------

(0 rows)

The only information in the logs relates to the incoming promtail pushes, which show flush failures:

level=error ts=2021-05-21T00:54:23.393836285Z caller=flush.go:220 org_id=fake msg="failed to flush user" err="unconfigured table chunk1251"
level=error ts=2021-05-21T00:54:53.396176119Z caller=flush.go:220 org_id=fake msg="failed to flush user" err="unconfigured table chunk1251"
level=error ts=2021-05-21T00:55:23.431153799Z caller=flush.go:220 org_id=fake msg="failed to flush user" err="unconfigured table chunk1251"
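For context on the error message, the numeric suffix in `chunk1251` comes from Loki’s periodic-table naming scheme: tables are named `<prefix><floor(unix_seconds / period_seconds)>`. A quick sketch (values taken from the config and log timestamps above) confirms the suffix matches the 360h period:

```python
from datetime import datetime, timezone

PERIOD_SECONDS = 360 * 3600  # the 360h period from schema_config above

def table_suffix(ts: datetime) -> int:
    """Compute the periodic-table suffix Loki appends to the table prefix."""
    return int(ts.timestamp()) // PERIOD_SECONDS

# Timestamp taken from the flush error in the log above.
ts = datetime(2021, 5, 21, 0, 54, 23, tzinfo=timezone.utc)
print(f"chunk{table_suffix(ts)}")  # prints "chunk1251", matching the error
```

So the write path is computing the expected table name correctly; the table simply doesn’t exist in Cassandra.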

Any ideas what’s happening here?

I’ve also tried chunk storage on the filesystem with the index managed through Cassandra. I get the same problem, but only with the index in this case; chunks are written to the filesystem fine.

The ingester logs show write attempts, but I don’t see the index or chunks actually being written:

level=info ts=2021-05-21T02:55:18.515576993Z caller=main.go:130 msg="Starting Loki" version="(version=2.2.1, branch=HEAD, revision=babea82ef)"
level=info ts=2021-05-21T02:55:18.515768099Z caller=server.go:229 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2021-05-21T02:55:18.525281477Z caller=events.go:247 module=gocql client=index-read msg=Session.handleNodeUp ip=10.103.249.215 port=9042
level=info ts=2021-05-21T02:55:18.589145021Z caller=events.go:247 module=gocql client=index-write msg=Session.handleNodeUp ip=10.107.95.44 port=9042
level=info ts=2021-05-21T02:55:18.652227181Z caller=events.go:247 module=gocql client=chunks-read msg=Session.handleNodeUp ip=10.97.193.231 port=9042
level=info ts=2021-05-21T02:55:18.675929319Z caller=events.go:247 module=gocql client=chunks-write msg=Session.handleNodeUp ip=10.107.95.44 port=9042
level=info ts=2021-05-21T02:55:18.688575613Z caller=memberlist_client.go:380 msg="Using memberlist cluster node name" name=loki-loki-distributed-ingester-0-f27ff7c2
level=info ts=2021-05-21T02:55:18.690134596Z caller=module_service.go:59 msg=initialising module=server
level=info ts=2021-05-21T02:55:18.69017885Z caller=module_service.go:59 msg=initialising module=store
level=info ts=2021-05-21T02:55:18.690203123Z caller=module_service.go:59 msg=initialising module=memberlist-kv
level=info ts=2021-05-21T02:55:18.690253238Z caller=module_service.go:59 msg=initialising module=ingester
level=info ts=2021-05-21T02:55:18.6903041Z caller=lifecycler.go:521 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2021-05-21T02:55:18.690374396Z caller=loki.go:248 msg="Loki started"
level=info ts=2021-05-21T02:55:18.691046402Z caller=lifecycler.go:550 msg="instance not found in ring, adding with no tokens" ring=ingester
level=info ts=2021-05-21T02:55:18.691144339Z caller=lifecycler.go:397 msg="auto-joining cluster after timeout" ring=ingester
ts=2021-05-21T02:55:18.701284756Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve loki-loki-distributed-memberlist: lookup loki-loki-distributed-memberlist on 10.96.0.10:53: no such host"
ts=2021-05-21T02:55:20.326324127Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve loki-loki-distributed-memberlist: lookup loki-loki-distributed-memberlist on 10.96.0.10:53: no such host"
ts=2021-05-21T02:55:23.351964345Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve loki-loki-distributed-memberlist: lookup loki-loki-distributed-memberlist on 10.96.0.10:53: no such host"
ts=2021-05-21T02:55:29.978929763Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve loki-loki-distributed-memberlist: lookup loki-loki-distributed-memberlist on 10.96.0.10:53: no such host"
ts=2021-05-21T02:55:43.01314624Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve loki-loki-distributed-memberlist: lookup loki-loki-distributed-memberlist on 10.96.0.10:53: no such host"
level=info ts=2021-05-21T02:56:06.430437004Z caller=memberlist_client.go:521 msg="joined memberlist cluster" reached_nodes=2
level=error ts=2021-05-21T02:57:18.70017624Z caller=flush.go:220 org_id=fake msg="failed to flush user" err="unconfigured table chunks_2681"

In case you solved this yourself and forgot to post, or abandoned the issue: you need to enable the table manager in the chart’s configuration (it defaults to disabled as of 2021-07-23), which pre-creates and manages the tables in your datastores.
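For anyone applying this fix, the Helm values change is roughly the following. This is a sketch: the `tableManager.enabled` key is assumed from the chart mentioned above, so check the values reference for your chart version.

```yaml
# values.yaml sketch for the loki-distributed Helm chart
# (key path assumed; verify against your chart version's values)
tableManager:
  enabled: true
```

Once the table manager is running, it should create the `loki_index<N>` and `chunk<N>` tables for the active period, and the "unconfigured table" flush errors should stop.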
