The only agent that thinks for itself

Autonomous Monitoring with self-learning AI built-in, operating independently across your entire stack.

Unlimited Metrics & Logs
Machine learning & MCP
5% CPU, 150MB RAM
3GB disk, >1 year retention
800+ integrations, zero config
Dashboards, alerts out of the box
> Discover Netdata Agents
Centralized metrics streaming and storage

Aggregate metrics from multiple agents into centralized Parent nodes for unified monitoring across your infrastructure.

Stream from unlimited agents
Long-term data retention
High availability clustering
Data replication & backup
Scalable architecture
Enterprise-grade security
> Learn about Parents
Fully managed cloud platform

Access your monitoring data from anywhere with our SaaS platform. No infrastructure to manage, automatic updates, and global availability.

Zero infrastructure management
99.9% uptime SLA
Global data centers
Automatic updates & patches
Enterprise SSO & RBAC
SOC2 & ISO certified
> Explore Netdata Cloud
Deploy Netdata Cloud in your infrastructure

Run the full Netdata Cloud platform on-premises for complete data sovereignty and compliance with your security policies.

Complete data sovereignty
Air-gapped deployment
Custom compliance controls
Private network integration
Dedicated support team
Kubernetes & Docker support
> Learn about Cloud On-Premises
Powerful, intuitive monitoring interface

Modern, responsive UI built for real-time troubleshooting with customizable dashboards and advanced visualization capabilities.

Real-time chart updates
Customizable dashboards
Dark & light themes
Advanced filtering & search
Responsive on all devices
Collaboration features
> Explore Netdata UI
Monitor on the go

Native iOS and Android apps bring full monitoring capabilities to your mobile device with real-time alerts and notifications.

iOS & Android apps
Push notifications
Touch-optimized interface
Offline data access
Biometric authentication
Widget support
> Download apps

Best energy efficiency

True real-time per-second

100% automated zero config

Centralized observability

Multi-year retention

High availability built-in

Zero maintenance

Always up-to-date

Enterprise security

Complete data control

Air-gap ready

Compliance certified

Millisecond responsiveness

Infinite zoom & pan

Works on any device

Native performance

Instant alerts

Monitor anywhere

80% Faster Incident Resolution
AI-powered troubleshooting from detection, to root cause and blast radius identification, to reporting.
True Real-Time and Simple, even at Scale
Linearly and infinitely scalable full-stack observability that can be deployed even mid-crisis.
90% Cost Reduction, Full Fidelity
Instead of centralizing the data, Netdata distributes the code, eliminating pipelines and complexity.
Control Without Surrender
SOC 2 Type 2 certified with every metric kept on your infrastructure.
Integrations

800+ collectors and notification channels, auto-discovered and ready out of the box.

800+ data collectors
Auto-discovery & zero config
Cloud, infra, app protocols
Notifications out of the box
> Explore integrations
Real Results
46% Cost Reduction

Reduced monitoring costs by 46% while cutting staff overhead by 67%.

— Leonardo Antunez, Codyas

Zero Pipeline

No data shipping. No central storage costs. Query at the edge.

From Our Users
"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

No Query Language

Point-and-click troubleshooting. No PromQL, no LogQL, no learning curve.

Enterprise Ready
67% Less Staff, 46% Cost Cut

Enterprise efficiency without enterprise complexity—real ROI from day one.

— Leonardo Antunez, Codyas

SOC 2 Type 2 Certified

Zero data egress. Only metadata reaches the cloud. Your metrics stay on your infrastructure.

Full Coverage
800+ Collectors

Auto-discovered and configured. No manual setup required.

Any Notification Channel

Slack, PagerDuty, Teams, email, webhooks—all built-in.

Built for the People Who Get Paged
Because 3am alerts deserve instant answers, not hour-long hunts.
Every Industry Has Rules. We Master Them.
See how healthcare, finance, and government teams cut monitoring costs 90% while staying audit-ready.
Monitor Any Technology. Configure Nothing.
Install the agent. It already knows your stack.
From Our Users
"A Rare Unicorn"

Netdata gives more than you invest in it. A rare unicorn that obeys the Pareto rule.

— Eduard Porquet Mateu, TMB Barcelona

99% Downtime Reduction

Reduced website downtime by 99% and cloud bill by 30% using Netdata alerts.

— Falkland Islands Government

Real Savings
30% Cloud Cost Reduction

Optimized resource allocation based on Netdata alerts cut cloud spending by 30%.

— Falkland Islands Government

46% Cost Cut

Reduced monitoring staff by 67% while cutting operational costs by 46%.

— Codyas

Real Coverage
"Plugin for Everything"

Netdata has agent capacity or a plugin for everything, including Windows and Kubernetes.

— Eduard Porquet Mateu, TMB Barcelona

"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

Real Speed
Troubleshooting in 30 Seconds

From 2-3 minutes to 30 seconds—instant visibility into any node issue.

— Matthew Artist, Nodecraft

20% Downtime Reduction

20% less downtime and 40% budget optimization from out-of-the-box monitoring.

— Simon Beginn, LANCOM Systems

Pay per Node. Unlimited Everything Else.

One price per node. Unlimited metrics, logs, users, and retention. No per-GB surprises.

Free tier—forever
No metric limits or caps
Retention you control
Cancel anytime
> See pricing plans
What's Your Monitoring Really Costing You?

Most teams overpay by 40-60%. Let's find out why.

Expose hidden metric charges
Calculate tool consolidation
Customers report 30-67% savings
Results in under 60 seconds
> See what you're really paying
Your Infrastructure Is Unique. Let's Talk.

Because monitoring 10 nodes is different from monitoring 10,000.

On-prem & air-gapped deployment
Volume pricing & agreements
Architecture review for your scale
Compliance & security support
> Start a conversation
Monitoring That Sells Itself

Deploy in minutes. Impress clients in hours. Earn recurring revenue for years.

30-second live demos close deals
Zero config = zero support burden
Competitive margins & deal protection
Response in 48 hours
> Apply to partner
Per-Second Metrics at Homelab Prices

Same engine, same dashboards, same ML. Just priced for tinkerers.

Community: Free forever · 5 nodes · non-commercial
Homelab: $90/yr · unlimited nodes · fair usage
> Start monitoring your lab—free
$1,000 Per Referral. Unlimited Referrals.

Your colleagues get 10% off. You get 10% commission. Everyone wins.

10% of subscriptions, up to $1,000 each
Track earnings inside Netdata Cloud
PayPal/Venmo payouts in 3-4 weeks
No caps, no complexity
> Get your referral link
Cost Proof
40% Budget Optimization

"Netdata's significant positive impact" — LANCOM Systems

Calculate Your Savings

Compare vs Datadog, Grafana, Dynatrace

Savings Proof
46% Cost Reduction

"Cut costs by 46%, staff by 67%" — Codyas

30% Cloud Bill Savings

"Reduced cloud bill by 30%" — Falkland Islands Gov

Enterprise Proof
"Better Than Combined Alternatives"

"Better observability with Netdata than combining other tools." — TMB Barcelona

Real Engineers, <24h Response

DPA, SLAs, on-prem, volume pricing

Why Partners Win
Demo Live Infrastructure

One command, 30 seconds, real data—no sandbox needed

Zero Tickets, High Margins

Auto-config + per-node pricing = predictable profit

Homelab Ready
"Absolutely Incredible"

"We tested every monitoring system under the sun." — Benjamin Gabler, CEO Rocket.Net

76k+ GitHub Stars

3rd most starred monitoring project

Worth Recommending
Product That Delivers

Customers report 40-67% cost cuts, 99% downtime reduction

Zero Risk to Your Rep

Free tier lets them try before they buy

Never Fight Fires Alone

Docs, community, and expert help—pick your path to resolution.

Learn.netdata.cloud docs
Discord, Forums, GitHub
Premium support available
> Get answers now
60 Seconds to First Dashboard

One command to install. Zero config. 850+ integrations documented.

Linux, Windows, K8s, Docker
Auto-discovers your stack
> Read our documentation
See Netdata in Action

Watch real-time monitoring in action—demos, tutorials, and engineering deep dives.

Product demos and walkthroughs
Real infrastructure, not staged
> Start with the 3-minute tour
Level Up Your Monitoring
Real problems. Real solutions. 112+ guides from basic monitoring to AI observability.
76,000+ Engineers Strong
615+ contributors. 1.5M daily downloads. One mission: simplify observability.
Per-Second. 90% Cheaper. Data Stays Home.
Side-by-side comparisons: costs, real-time granularity, and data sovereignty for every major tool.

See why teams switch from Datadog, Prometheus, Grafana, and more.

> Browse all comparisons
Edge-Native Observability, Born Open Source
Per-second visibility, ML on every metric, and data that never leaves your infrastructure.
Founded in 2016
615+ contributors worldwide
Remote-first, engineering-driven
Open source first
> Read our story
Promises We Publish—and Prove
12 principles backed by open code, independent validation, and measurable outcomes.
Open source, peer-reviewed
Zero config, instant value
Data sovereignty by design
Aligned pricing, no surprises
> See all 12 principles
Edge-Native, AI-Ready, 100% Open
76k+ stars. Full ML, AI, and automation—GPLv3+, not premium add-ons.
76,000+ GitHub stars
GPLv3+ licensed forever
ML on every metric, included
Zero vendor lock-in
> Explore our open source
Build Real-Time Observability for the World
Remote-first team shipping per-second monitoring with ML on every metric.
Remote-first, fully distributed
Open source (76k+ stars)
Challenging technical problems
Your code on millions of systems
> See open roles
Talk to a Netdata Human in <24 Hours
Sales, partnerships, press, or professional services—real engineers, fast answers.
Discuss your observability needs
Pricing and volume discounts
Partnership opportunities
Media and press inquiries
> Book a conversation
Your Data. Your Rules.
On-prem data, cloud control plane, transparent terms.
Trust & Scale
76,000+ GitHub Stars

One of the most popular open-source monitoring projects

SOC 2 Type 2 Certified

Enterprise-grade security and compliance

Data Sovereignty

Your metrics stay on your infrastructure

Validated
University of Amsterdam

"Most energy-efficient monitoring solution" — ICSOC 2023, peer-reviewed

ADASTEC (Autonomous Driving)

"Doesn't miss alerts—mission-critical trust for safety software"

Community Stats
615+ Contributors

Global community improving monitoring for everyone

1.5M+ Downloads/Day

Trusted by teams worldwide

GPLv3+ Licensed

Free forever, fully open source agent

Why Join?
Remote-First

Work from anywhere, async-friendly culture

Impact at Scale

Your work helps millions of systems

Compliance
SOC 2 Type 2

Audited security controls

GDPR Ready

Data stays on your infrastructure

Blog

The reality of Netdata’s long-term metrics storage database

Balancing Performance with Historical Data Insights
by Netdata Team · October 12, 2020

The perception that Netdata is only capable of short-term metrics storage is a myth. It’s a pervasive myth we still see in blog posts and through community engagement, despite it being false for more than a year.

However, like all myths, this one on metrics storage began with a kernel of truth. When Netdata first flourished as an open-source project in 2017 and 2018, the default metrics database was RAM-only. You could configure this database’s size, but for many users, that size was limited by the amount of RAM they were willing to allocate for metrics storage. We also kept the default value low to ensure Netdata worked efficiently on all hardware and a variety of operating systems.

The database engine, which we first released in May 2019 as part of v1.15 of the Netdata Agent, solved the initial lack of long-term metrics storage in Netdata. This release revolutionized Netdata’s database storage solution, allowing every node to use both RAM and disk to efficiently store days, weeks, or months’ worth of per-second data.

To rewrite a myth, one needs to present another version of the story. How about this? Through an innovative metrics storage database and a distributed data architecture, Netdata has a versatile, scalable, and cost-effective solution for long-term metrics storage.

‘Spilling’ the beans on database storage

The database engine is a time-series database with a few twists to make it ideal for distributed, scalable, long-term storage of highly granular metrics.

When the Netdata Agent collects metrics from its system, it stores the most recently collected metric values in memory. Each dimension gets its own 4096-byte page, and Netdata’s many collectors keep filling these pages with consecutive values. Together, these pages form the page cache.

It takes about 17 minutes for Netdata to fill a single page, given that it collects metrics every second and every metric value requires 4 bytes (4096 bytes / 4 bytes per value = 1024 values, or roughly 17 minutes at per-second granularity). When the page fills up, it’s still too “hot” and “dirty” to be evicted from the cache, but the database engine already begins spilling it to disk. Once the page has been written to disk (at a rate low enough not to interfere with the host system and applications), it becomes a candidate for eviction.

The database engine runs an orthogonal process for evicting pages from the page cache. It looks for the least recently used page, and if that page has already been spilled to disk, and is thus marked “clean,” the database engine evicts it from the cache, freeing a little space in memory.
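To make the fill, spill, and evict cycle concrete, here is a toy Python model of the page cache as described above. The 4096-byte pages, 4-byte values, and the clean-before-eviction rule come from this post; the class names and the simplified LRU logic are purely illustrative and are not Netdata’s actual implementation (the real database engine lives inside the C agent).

# A toy model of the dbengine page-cache lifecycle -- illustrative only, not Netdata's real code.
# From the text: pages hold 4096 bytes, each value takes 4 bytes, a full page is spilled
# to disk, and only pages already written to disk ("clean") may be evicted.

PAGE_SIZE = 4096
VALUE_SIZE = 4
VALUES_PER_PAGE = PAGE_SIZE // VALUE_SIZE   # 1024 values, about 17 minutes at 1s granularity

class Page:
    def __init__(self, dimension):
        self.dimension = dimension
        self.values = []
        self.clean = False                  # True once the page has been written to disk

    @property
    def full(self):
        return len(self.values) >= VALUES_PER_PAGE

class PageCache:
    def __init__(self, max_pages):
        self.max_pages = max_pages
        self.pages = []                     # oldest pages first (approximates LRU order)
        self.current = {}                   # dimension -> page currently being filled

    def collect(self, dimension, value):
        page = self.current.get(dimension)
        if page is None or page.full:
            page = Page(dimension)
            self.current[dimension] = page
            self.pages.append(page)
        page.values.append(value)
        if page.full:
            self.spill(page)                # finished page: start writing it to disk
        self.evict_if_needed()

    def spill(self, page):
        # Stand-in for the real flush: Netdata groups pages into extents,
        # lz4-compresses them, and appends them to a datafile.
        page.clean = True

    def evict_if_needed(self):
        # Evict the least recently used page that is already clean.
        while len(self.pages) > self.max_pages:
            victim = next((p for p in self.pages if p.clean), None)
            if victim is None:              # nothing clean yet; keep every hot page
                break
            self.pages.remove(victim)

cache = PageCache(max_pages=2)
for second in range(3 * VALUES_PER_PAGE):   # ~51 minutes of one metric at 1s granularity
    cache.collect("system.cpu.user", 0.42)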

Back to the disk. The database engine organizes many pages into a single extent, an immutable set of 4 KiB blocks with each block containing exactly one page. It also aligns extents at 4 KiB boundaries to enable direct I/O access, minimizing system interference and ensuring efficient, high-performance I/O requests.

The extent is compressed using a low-overhead algorithm (lz4), given a header and trailer, and stored in a datafile, like datafile-1-0000000391.ndf. The extent’s header stores details like the compression algorithm type, number of completed pages of data inside the extent, arrays for start time and sampling rate, and much more. The trailer stores a checksum of the extent.

The database engine also creates a few metadata files (.njf and .mlf), which contain the information required to resurface metrics stored on disk.

Resurfacing becomes useful when you want to view historical metrics, stored on disk, for troubleshooting or root cause analysis. As you scrub backward in time in Netdata, the dashboard queries the database engine for historical metrics, which then fills the in-memory page cache with the requested pages. By resurfacing historical metrics into memory, you get a much smoother (and less I/O intensive) experience when you interact with charts.

When the datafiles and journalfiles exceed the default or user-defined disk space quota, the database engine removes the oldest datafiles and journalfiles, along with any of their data or metadata that still resides in the in-memory cache.

[Diagram: the journey of a metric value through the database engine.]

Let’s say you have a system that collects 2,000 metrics every second. Given a compression ratio of 80%, which we’ve found is pretty standard for production systems, the database engine can store a year’s worth of per-second metrics using 48GiB of disk space.

That’s 63,072,000,000 valuable points of data for a few dollars’ worth of disk space.
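A quick back-of-envelope check of those numbers, sketched in Python (the 2,000 metrics per second, 4 bytes per value, and 80% compression ratio come straight from the paragraph above; the rest is plain arithmetic):

# Back-of-envelope estimate of dbengine disk usage, using the figures from the text.

METRICS_PER_SECOND = 2_000        # metrics collected every second
BYTES_PER_VALUE = 4               # each stored value takes 4 bytes
COMPRESSION_RATIO = 0.80          # typical lz4 savings on production systems, per the text
SECONDS_PER_YEAR = 365 * 24 * 3600

points_per_year = METRICS_PER_SECOND * SECONDS_PER_YEAR
disk_bytes = points_per_year * BYTES_PER_VALUE * (1 - COMPRESSION_RATIO)

print(f"points stored per year: {points_per_year:,}")           # 63,072,000,000
print(f"disk needed per year:   {disk_bytes / 2**30:.0f} GiB")   # ~47 GiB, in line with the ~48 GiB above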

Netdata’s time-series database is fully capable of long-term metrics storage. That much should be clear. But what’s even more important is why Netdata’s solution is so much more versatile than other monitoring solutions.

Low cost, high scalability, total ownership

The database engine’s architecture delivers more than highly compressed metrics.

Let’s say you have an infrastructure with 100 nodes, a mixture of VMs and bare metal. With other monitoring solutions, you’re streaming all those metrics to a data lake in the cloud. If you expand to 200 nodes, your costs just doubled, and that’s even at the low-resolution, 10-second granularity that other solutions offer.

Centralizing metrics makes scaling your infrastructure monitoring difficult. Even worse, the metrics aren’t yours anymore. Good luck downloading your data and migrating to another platform.

The database engine’s most powerful feature is distributed metrics, stored locally on each node. By abandoning the expensive data lake and using the disk space available on each node, you can keep costs down. There’s no extra expense when you jump to 200 nodes, or even 1,000. That means you can scale your infrastructure as you see fit without worrying about whether your monitoring stack can keep up.

Even better, you maintain complete control of your metrics. With Netdata Cloud, you can view and interact with your metrics in a single pane of glass while storing all the data on your distributed nodes. When you view or navigate a dashboard in Netdata Cloud, it makes a request to your node to stream those metrics to your browser on-demand.

This is why Netdata Cloud doesn’t store any metrics from our users’ nodes. Doing so would undermine the entire purpose of the distributed data model and the elegance of the database engine itself. Plus, it would worsen the experience for our most active users.

Change your metrics retention policy

If you want to store more metrics on a given node, you only need to change a single configuration setting. The dbengine multihost disk space setting dictates how much disk space, in MiB, you want to allocate for long-term metrics storage.

The default setting is 256 MiB. For a system collecting 2,000 metrics every second with 80% compression, that’s roughly two days of metrics at 1s granularity.

[global]
    dbengine multihost disk space = 256

If you want four days, double the setting to 512. Eight days? 1024.

Not sure what you want? Or do you need some more information about how much disk space and RAM a given setting will require? We have a calculator for that. Enter the metrics retention you’d like, tweak the other inputs, and see a recommended setting for dbengine multihost disk space. You can even calculate the database engine’s size when streaming multiple child nodes to a single parent.
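If you'd rather do the arithmetic yourself, here is a rough Python version of the same estimate. It assumes the 4 bytes per value and roughly 80% compression used throughout this post; the calculator accounts for more factors, such as RAM usage and streaming child nodes, so treat this as an approximation.

# Rough retention estimate for a given "dbengine multihost disk space" setting.
# Assumes 4 bytes per stored value and ~80% compression, as discussed above;
# the official calculator is more precise.

def retention_days(disk_space_mib, metrics_per_second, compression_ratio=0.80):
    bytes_per_second_on_disk = metrics_per_second * 4 * (1 - compression_ratio)
    seconds_of_retention = disk_space_mib * 2**20 / bytes_per_second_on_disk
    return seconds_of_retention / 86_400

print(f"{retention_days(256, 2_000):.1f} days")     # default 256 MiB -> ~1.9 days
print(f"{retention_days(512, 2_000):.1f} days")     # 512 MiB         -> ~3.9 days
print(f"{retention_days(1024, 2_000):.1f} days")    # 1024 MiB        -> ~7.8 days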

Myth-busting with our community

Rewriting myths is a community effort. We’re trying to do our part with this post and our documentation, but we encourage our community to spread this new story and, whenever you find the myth in the wild, to help us debunk it.

If you have questions about Netdata’s long-term metrics storage or the database engine’s intricacies, feel free to post in our community forum. A large chunk of the engineering and product team are active there and are ready to engage if you have questions.

We even created a thread specific to this blog post. We’re especially curious to hear about how you might be using Netdata’s distributed data model in your work’s monitoring stack. The more we know about how Netdata is used in the wild, so to speak, the better we can not only squash the existing myths, but also create new stories that better reflect Netdata’s rapidly-changing reality.