Advice for Handling 100-150 GB of Data Per Day with GET and GEL Servers

Hello Grafana Community,

I am seeking advice on building an architecture to handle a substantial volume of traces and logs. Here is the proposed setup:

3 servers for GET (Grafana Enterprise Traces): These servers will handle both read and write operations and receive traces from the Alloy agent.
3 servers for GEL (Grafana Enterprise Logs): These servers will collect logs from the Alloy agent.

Both sets of servers will act as data sources for visualization in Grafana.

The raw data volume is expected to be around 100-150 GB per day.

My questions are as follows:

  1. GET Servers: Can I configure all components to perform both read and write operations on each of the 3 GET servers? Additionally, can these servers ingest traces and serve them to Grafana for visualization?
  2. Data Distribution: Will the 3 GET servers be able to manage 100 GB of data, distributed equally (approximately 33 GB per server per day)?
  3. Component Allocation: Should I separate the read and write components onto different servers to better handle this volume of data, or is it feasible to have both operations on the same servers?
  4. Storage: How much blob storage is required to store traces and logs in two separate backends with a 30-day retention period?

I appreciate any insights or suggestions you can provide regarding this architecture.

Thank you in advance for your assistance.

Your setup is solid, but here are a few quick pointers:

  1. GET Servers: You can have all three servers handle both read and write operations, but ensure you have load balancing in front of them and a scalable storage backend. An object-storage backend can serve both paths without major issues, but monitor performance.

  2. Data Distribution: 33 GB per server per day is manageable, but ensure your servers are provisioned for the load: watch CPU, RAM, and disk I/O (see the arithmetic sketch at the end of this post).

  3. Read/Write Separation: It’s ideal to separate read and write operations for better performance at scale, but it’s feasible to combine them if you’re managing load effectively.

  4. Storage: For 100-150 GB/day with 30-day retention, expect roughly 3-4.5 TB of raw data in total (100-150 GB/day × 30 days), split between the trace and log backends. Compression will shrink the on-disk footprint and indexing adds some overhead, so factor both in and consider cloud object storage for scalability; the sketch at the end of this post works through the numbers.

Overall, it’s scalable, but monitor closely and plan for growth!
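
To make the numbers in points 2 and 4 concrete, here is a quick back-of-the-envelope sketch. The 5x compression ratio is purely an assumption for illustration (real ratios depend heavily on your trace and log content), so treat the output as a rough guide rather than a capacity plan:

```python
# Rough sizing arithmetic for the figures discussed above.
# COMPRESSION_RATIO is an assumption; measure it on your own data.

DAILY_INGEST_GB = (100, 150)  # raw volume per day (low/high estimate)
SERVERS = 3                   # monolithic nodes sharing the write path
RETENTION_DAYS = 30
COMPRESSION_RATIO = 5         # ASSUMPTION: varies with real data

for daily_gb in DAILY_INGEST_GB:
    per_server_gb = daily_gb / SERVERS
    # Average sustained ingest per server over 24 h; peaks will be higher.
    ingest_mb_s = per_server_gb * 1024 / 86_400
    raw_tb = daily_gb * RETENTION_DAYS / 1000
    compressed_tb = raw_tb / COMPRESSION_RATIO
    print(
        f"{daily_gb} GB/day -> {per_server_gb:.1f} GB/server/day "
        f"(~{ingest_mb_s:.2f} MB/s average), "
        f"{raw_tb:.1f} TB raw over {RETENTION_DAYS} days, "
        f"~{compressed_tb:.1f} TB if {COMPRESSION_RATIO}x compression holds"
    )
```

Even at the high end, the average write rate is well under 1 MB/s per node, so query load and compaction tend to become the bottleneck before raw ingest does. For blob storage, provisioning for the uncompressed 3-4.5 TB total gives you headroom until you have measured your actual compression ratio.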

Hi @scarletioshub
Sorry for asking again, just to confirm: are you saying that we can use the monolithic mode of Grafana Enterprise Traces, with all components running together, to handle 20-30 GB of data per day?
I just need one clarification: if not 3 nodes, how many monolithic GET servers should I use, and at what sizing, to handle 80 GB of data in total per day?
Could you please also suggest the number of servers and their average sizing for handling logs with GEL?
It would be very helpful.
And if there are any documents about server sizing, please let me know.
