How to send Juniper router telemetry to Grafana

I can see the data from my Juniper router arriving at my Ubuntu Grafana server. I see this using tcpdump, so at least I know it's getting there. What do I need to do next? I don't know what is supposed to read that data and handle it next.


What data source are you using? How are you storing metrics? Have you added the data source in Grafana? Then add a panel, go into edit mode, and add a query.

Thanks @torkel, I haven't successfully gotten that far yet. I see that I can use a variety of data sources in Grafana (CloudWatch, Elasticsearch, Graphite, InfluxDB, etc.), but I don't know whether this has to correspond with what I have configured on my Juniper router. Any guidance or ideas?

Please forgive me, as I'm a network engineer full time and trying to learn more about this stuff at the moment. Any insights you can provide would be helpful.

By "data source" do you mean where the data is coming from and in what format?

…if so, it's a Juniper MX960 configured to send analytics/telemetry data over UDP or gRPC, formatted as gpb or gpb-sdm. Which should I use to send to Grafana, and what mechanism receives these transports/formats on the Grafana side? Is it InfluxDB? Telegraf? Or something else that I must have installed on the Grafana Ubuntu server in order to receive this Juniper data?

Let me know if I should alter these…

agould@lab-960# set services analytics export-profile my-exprt-prfl format ?
Possible completions:
gpb Use gpb format
gpb-sdm Use gpb self-describing-message format
agould@lab-960# set services analytics export-profile my-exprt-prfl transport ?
Possible completions:
grpc Use grpc transport
udp Use UDP transport protocol

…here's what I currently have configured on the MX960…

set services analytics streaming-server my-grafana-srvr remote-address
set services analytics streaming-server my-grafana-srvr remote-port 8094
set services analytics export-profile my-exprt-prfl local-address
set services analytics export-profile my-exprt-prfl local-port 21111
set services analytics export-profile my-exprt-prfl reporting-rate 10
set services analytics export-profile my-exprt-prfl format gpb
set services analytics export-profile my-exprt-prfl transport udp
set services analytics sensor my-sensor-14 server-name my-grafana-srvr
set services analytics sensor my-sensor-14 export-name my-exprt-prfl
set services analytics sensor my-sensor-14 resource /junos/system/linecard/interface/
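If it helps, once that is committed, Junos has an operational-mode command to check whether the sensor is up and exporting (a sketch; the exact output fields vary by Junos release):

```
agould@lab-960> show agent sensors
```

It should list each configured sensor along with its streaming server, export profile, and resource path, which is a quick way to confirm the router side before troubleshooting the collector side.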

I've read that Juniper can send to Open-NTI, which runs in a Docker container, but my coworker says he is having trouble starting that Open-NTI Docker image…

This person mentions that I would need to send my Juniper router telemetry to UDP port 50000, I think, toward the Fluentd collector used in Open-NTI.
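Before any collector is even involved, one quick way to confirm the router's datagrams are actually reaching that port is a tiny UDP listener. This is just a sanity-check sketch (stop the collector first so the port is free; the port number and timeout are illustrative):

```python
import socket

def recv_udp_datagram(port=50000, timeout=5.0):
    """Bind a UDP socket and wait for a single datagram (e.g. one JTI gpb packet)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.settimeout(timeout)
    try:
        data, (src_ip, src_port) = sock.recvfrom(65535)
        return data, src_ip, src_port
    finally:
        sock.close()
```

Calling `recv_udp_datagram()` and printing `len(data)` plus the source address tells you the packets arrive; note the payload is raw gpb, so decoding it needs Juniper's .proto definitions, which is what Open-NTI's Fluentd plugin handles for you.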

Grafana does not store metrics, it queries a time series database to visualize metrics.
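In other words, Grafana needs that time series database registered as a data source. As a sketch (the data source name and database name here are assumptions; Open-NTI pre-wires this for you), the definition Grafana stores for an InfluxDB data source looks roughly like:

```json
{
  "name": "influxdb-juniper",
  "type": "influxdb",
  "access": "proxy",
  "url": "http://localhost:8086",
  "database": "juniper"
}
```

You can create the equivalent through the Data Sources page in the Grafana UI, or by POSTing such a body to Grafana's /api/datasources HTTP endpoint.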

Thanks @torkel, what are some examples of time series databases that I can use for Grafana to query? Are there many different time series databases that Grafana is capable of querying?

Prometheus, Graphite, InfluxDB, etc.

Thanks, so if these are data sources…

Microsoft SQL Server (MSSQL)

…what is "Telegraf"?

Also, I wonder what data source would be able to receive the UDP/gRPC-transported gpb (Google protocol buffer) data that my Juniper router would be sending?

As I understand it, the Juniper router I'm streaming telemetry analytics from is sending over UDP or gRPC in gpb format, and I'm sending it to my Grafana server, but I need to know what to do with it once it arrives there. (I'm loosely referring to it as my "Grafana server"; it's a server with all sorts of stuff loaded on it: Ubuntu with Grafana, and I see InfluxDB, Telegraf, I think Open-NTI, etc.)


Hi Torkel, I guess there is a difference between data sources (Prometheus, Graphite, InfluxDB, etc.) and collectors (which I think is what Fluentd and Telegraf are — they gather the data and write it into one of those databases).

(Copy and paste of a post I just made elsewhere that I wanted to share with you, sir.)
I got telemetry streaming working using this site… I have a couple of MX960s streaming telemetry to the suite of software provided in the Open-NTI project described on this techmocha blog. I think my previous problems were related to conflicting installs, as my coworker and I had loaded individual items and then the Open-NTI suite (which I understand is a Docker container with all the items like Grafana, Fluentd, Chronograf, InfluxDB, etc.). Anyway, we started with a fresh-install Ubuntu virtual machine, loaded only Open-NTI, and it works.
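For anyone following along, the fresh install amounted to cloning the project and bringing the containers up; a rough sketch of what we ran (paths and targets from my memory of the Juniper/open-nti README, so verify against the repo):

```shell
git clone https://github.com/Juniper/open-nti.git
cd open-nti
make start        # launches the container suite (Grafana, InfluxDB, Fluentd, ...)
docker ps         # confirm the containers and their published ports
```

Starting from a clean VM avoided the port conflicts we hit when standalone installs of InfluxDB and Grafana were already holding the ports the containers wanted.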

I do not know or understand all of the inner workings of it at this point, but am quickly learning, even while writing this post. I'm currently using Chronograf, hosted at port 8888, browsing the Data Explorer function and seeing some nice graphs. (I'm wondering if Chronograf is simply an alternative to the Grafana GUI front end; unsure.) There seem to be tons of items to monitor and analyze, and I'm currently sending only the following sensor resource from the MX960, though there are several more that can be sent: /junos/system/linecard/interface/
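If you want to poke at the same data outside Chronograf, InfluxDB's HTTP query API can show what has been written; a sketch (the database name "juniper" is an assumption — run SHOW DATABASES first to confirm what Open-NTI created):

```shell
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=juniper' \
  --data-urlencode 'q=SHOW MEASUREMENTS'
```

The measurement names returned are what you would then reference in a Grafana or Chronograf query.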

I am sending the telemetry from the MX960s using UDP transport and gpb format to port 50000, with source port 21111 (mx960-1) and 21112 (mx960-2). I'm unsure whether I had to use unique source ports, as I wonder if the source IP would have been sufficient to make the streaming sources unique on the Open-NTI server.

Looking at the techmocha pictures, the "docker ps" output on the Linux server, and this newfound techmocha link (see "deconstructed" below), I understand that Fluentd is the collector receiving/ingesting the native streaming form of telemetry from my MX960s on UDP port 50000, and it looks like Fluentd hands that data off to InfluxDB (the actual time series database) on port 8086, which I think happens internally on that server. (I'm not even talking about the other form of JTI telemetry, using OpenConfig and gRPC; I've yet to do that and don't know exactly why I would, but I believe it is ingested using Telegraf. Unsure.)
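For the record, the gRPC/OpenConfig flavor is typically ingested with Telegraf's jti_openconfig_telemetry input plugin rather than Fluentd; a minimal sketch of what that Telegraf config might look like (the router address, gRPC port, sensor path, and database name are illustrative assumptions):

```toml
# Telegraf: collect Juniper JTI over gRPC (OpenConfig) and write to InfluxDB
[[inputs.jti_openconfig_telemetry]]
  servers = ["mx960-1:32767"]
  sample_frequency = "2000ms"
  sensors = ["/interfaces/"]

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "juniper"
```

So the two pipelines end at the same place (InfluxDB, queried by Grafana); only the collector in front differs by transport.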

…the link I followed to deploy the Open-NTI suite…

…interestingly, I just now found this, which apparently is a way of deploying all the components individually…