How to connect Grafana Cloud MongoDB plugin to managed Digital Ocean MongoDB instance?

I have tried everything I can think of to get MongoDB visualizations in Grafana Cloud for my managed DigitalOcean MongoDB instance.

I cannot SSH into the managed instance to install an agent, so the Private Datasource Connect feature is out. Thinking it might be a firewall issue, I added all 49 current Grafana Hosted Metrics IPs to my database cluster's Trusted Sources, but that still did not work.
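For what it’s worth, Trusted Sources entries can also be added through the DigitalOcean CLI. This is only a sketch with placeholder values (cluster ID and IP), not an exact record of what I ran:

# Sketch: add one Grafana Hosted Metrics IP to the cluster's Trusted Sources
doctl databases firewalls append <database-cluster-id> --rule ip_addr:<grafana-hosted-metrics-ip>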

I receive this error in Grafana:

The connection can not be established. Error: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: <my-cluster-name>.mongo.ondigitalocean.com:27017, Type: Unknown, Last error: dial tcp <my.ip.address>:27017: i/o timeout }, ] }

The credentials are set in the Authentication section, and my connection string is mongodb+srv://<my-cluster-name>.mongo.ondigitalocean.com/admin?authSource=admin&replicaSet=<my-cluster-name>

Grafana does not accept the mongodb+srv://username:password format.

Surely someone has successfully done this and I’m just doing it incorrectly. The only thing Digital Ocean offers for monitoring is CPU load and memory. I need more than that.

I can connect just fine using the mongosh cli. I just cannot create this datasource in Grafana.
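For reference, the mongosh command that works looks roughly like this (the user shown is DigitalOcean’s default doadmin; the password and cluster name are placeholders):

mongosh "mongodb+srv://doadmin:<password>@<my-cluster-name>.mongo.ondigitalocean.com/admin?tls=true&authSource=admin&replicaSet=<my-cluster-name>"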

Hey! Sorry you’re running into difficulty with this. Can you clarify your end goal? Do you want to connect to your MongoDB to visualize your collections, or do you want to export metrics from your managed MongoDB and visualize those metrics in Grafana?

Here’s a tutorial on how to create an exporter for your MongoDB metrics:

You’ll then need to use Prometheus, Grafana Alloy, or Grafana Cloud’s Metrics Endpoint to push the metrics to Grafana Cloud. Let me know more about your use case and I can help point you in the right direction.

I tried that tutorial. It’s based on the assumption I can ssh into a machine and use localhost as the host. I tried creating an instance within the VPC to pull metrics, but it would never connect. I could only connect using the mongosh cli.

My preferred use case would be to simply use my mongo cluster as a datasource in my MongoDB plugin to visualize collections without the need for another machine running an exporter.

I returned to your suggestion, but with a slight variation. I used the docker container version of the mongodb_exporter, and it’s successfully pulling metrics from my managed mongodb instance.
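Roughly, the exporter container is started like this; the image tag, connection URI, and flags below are placeholders from my notes, so check the mongodb_exporter docs for your version:

# Sketch: run Percona's mongodb_exporter in Docker against the managed cluster
# (URI is a placeholder; flag names can differ between exporter versions)
docker run -d --name mongodb-exporter -p 9216:9216 \
  percona/mongodb_exporter:0.40 \
  --mongodb.uri="mongodb+srv://<user>:<password>@<my-cluster-name>.mongo.ondigitalocean.com/admin?tls=true" \
  --collect-all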

However, when I try to incorporate that new metric data into my grafana-agent.yml config, I get a 401 Unauthorized error, despite the agent working fine before adding the new block for the mongo metrics.

integrations:
  node_exporter:
    enabled: true
  prometheus_remote_write:
  - basic_auth:
      password: "<MY_API_KEY"
      username: "1505309"
    url: https://prometheus-prod-13-prod-us-east-0.grafana.net/api/prom/push
  agent:
    enabled: true
    relabel_configs:
    - action: replace
      source_labels:
      - agent_hostname
      target_label: instance
    - action: replace
      target_label: job
      replacement: "integrations/agent-check"
    metric_relabel_configs:
    - action: keep
      regex: (prometheus_target_sync_length_seconds_sum|prometheus_target_scrapes_.*|prometheus_target_interval.*|prometheus_sd_discovered_targets|agent_build.*|agent_wal_samples_appended_total|process_start_time_seconds)
      source_labels:
      - __name__
logs:
  configs:
  - clients:
    - basic_auth:
        password: "<MY_API_KEY"
        username: "853565"
      url: https://logs-prod-006.grafana.net/loki/api/v1/push
    name: integrations
    positions:
      filename: /tmp/positions.yaml
    scrape_configs:
      - job_name: syslog
        static_configs:
          - targets: [localhost]
            labels:
              job: syslog
              instance: "mongo-gf-exporter"
              env: "dev"
              __path__: /var/log/syslog

metrics:
  configs:
  - name: integrations
    remote_write:
    - basic_auth:
        password: "<MY_API_KEY"
        username: "1505309"
      url: https://prometheus-prod-13-prod-us-east-0.grafana.net/api/prom/push
    scrape_configs:
      - job_name: syslog
        static_configs:
          - targets: ['localhost']
            labels:
              job: syslog
              instance: "mongo-gf-exporter"
              env: "dev"
              __path__: /var/log/syslog
      - job_name: mongodb
        static_configs:
          - targets: ['localhost:9216']
            labels:
              instance: "mongo-gf-exporter"
              env: "dev"
  global:
    scrape_interval: 60s
  wal_directory: /tmp/grafana-agent-wal

This new block is the only change to the grafana-agent.yml file, and it is what causes the failure:

      - job_name: mongodb
        static_configs:
          - targets: ['localhost:9216']
            labels:
              instance: "mongo-gf-exporter"
              env: "dev"

I can see the metrics at http://<my-instance>:9216/metrics, and if I remove that new block the agent works fine with the normal OS metrics from the instance (:9090). What could I be doing wrong? I’ve almost got this working; it’s just this one small error.

I must have had a mixup with my API key. I’ve finally got it working with a new one for the :9216 scraping portion. I’ll mark this as solved!
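For anyone else who hits this: a quick way to sanity-check the key outside the agent is to hit the remote_write endpoint directly with basic auth (the URL and username here are the ones from my config above). A bad key comes straight back as 401 Unauthorized; a valid key should get past auth, even though the request itself then fails for other reasons since there’s no real remote_write payload:

# Sketch: check the Grafana Cloud metrics credentials with curl
# 401 = bad key/username; anything else means auth succeeded
curl -s -o /dev/null -w "%{http_code}\n" \
  -u "1505309:<MY_API_KEY>" \
  -X POST https://prometheus-prod-13-prod-us-east-0.grafana.net/api/prom/push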
