Costa Tsaousis spoke at stackconf 2024 in Berlin on June 19, in a 14:30–15:00 slot. The talk traced the history of Netdata from its inception: the original goals, the architectural bets, and what held up over time.
The starting point was straightforward. When Costa began building Netdata, the goal was high-resolution metrics – per-second granularity, not the 10- or 60-second averages that were standard at the time. That required a fundamentally different collection architecture: lightweight agents that process data locally, real-time visualization that does not depend on a query round-trip to a central database, and auto-detection of services so that adding a new node does not require writing configuration files.
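The two ideas above — per-second sampling with data kept on the node, and auto-detection of services — can be sketched in a few lines. This is not Netdata's actual implementation (its collectors are written in C and Go with far richer probes); it is a minimal illustration, with a hypothetical port-to-service map, of what "probe, then collect locally every second" means:

```python
import socket
import time

# Hypothetical port map -- real collectors use richer, protocol-aware probes.
KNOWN_PORTS = {6379: "redis", 5432: "postgres", 80: "http"}


def detect_services(host="127.0.0.1", ports=KNOWN_PORTS, timeout=0.05):
    """Auto-detection sketch: report which known services answer a TCP probe."""
    found = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found[name] = port
    return found


def collect_loop(sample, ticks=3, interval=1.0):
    """Per-second collection sketch: call sample() every tick, keep results locally."""
    ring = []
    next_tick = time.monotonic()
    for _ in range(ticks):
        ring.append(sample())
        next_tick += interval  # fixed schedule, so drift does not accumulate
        time.sleep(max(0.0, next_tick - time.monotonic()))
    return ring
```

Because the samples never leave the node, the dashboard can read them with no round-trip to a central database — that is the architectural bet the probing-plus-local-loop shape is meant to convey.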
stackconf draws an open-source infrastructure crowd in Berlin – people who run Prometheus, Grafana, Ansible, Terraform, and similar tools. For this audience, the interesting part was not “what does Netdata do” but “why was it built this way.” Costa walked through the decisions that most monitoring tools make differently – centralized vs. distributed, sampling vs. full fidelity, manual configuration vs. auto-detection – and explained what Netdata gained and gave up with each choice.
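The sampling-vs-full-fidelity tradeoff is easy to make concrete with back-of-the-envelope arithmetic. The figures below are illustrative assumptions (2,000 metrics per node, 4 bytes per raw sample, no compression), not Netdata's actual storage numbers:

```python
def raw_bytes_per_day(metrics, interval_s, bytes_per_sample=4):
    """Uncompressed bytes needed to retain one day of samples."""
    samples_per_metric = 86_400 // interval_s  # seconds in a day / interval
    return metrics * samples_per_metric * bytes_per_sample


# Assumed workload: 2,000 metrics on one node, 4-byte samples.
per_second = raw_bytes_per_day(2_000, 1)    # full fidelity
per_minute = raw_bytes_per_day(2_000, 60)   # typical 60-second averages
```

Per-second retention costs 60x the raw storage of 60-second averages (about 660 MiB/day vs. 11 MiB/day under these assumptions) — which is exactly why keeping full fidelity pushes the design toward storing data on each node instead of shipping everything to a central database.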