The only agent that thinks for itself

Autonomous Monitoring with self-learning AI built-in, operating independently across your entire stack.

Unlimited Metrics & Logs
Machine learning & MCP
5% CPU, 150MB RAM
3GB disk, >1 year retention
800+ integrations, zero config
Dashboards, alerts out of the box
> Discover Netdata Agents
Centralized metrics streaming and storage

Aggregate metrics from multiple agents into centralized Parent nodes for unified monitoring across your infrastructure.

Stream from unlimited agents
Long-term data retention
High availability clustering
Data replication & backup
Scalable architecture
Enterprise-grade security
> Learn about Parents
Fully managed cloud platform

Access your monitoring data from anywhere with our SaaS platform. No infrastructure to manage, automatic updates, and global availability.

Zero infrastructure management
99.9% uptime SLA
Global data centers
Automatic updates & patches
Enterprise SSO & RBAC
SOC2 & ISO certified
> Explore Netdata Cloud
Deploy Netdata Cloud in your infrastructure

Run the full Netdata Cloud platform on-premises for complete data sovereignty and compliance with your security policies.

Complete data sovereignty
Air-gapped deployment
Custom compliance controls
Private network integration
Dedicated support team
Kubernetes & Docker support
> Learn about Cloud On-Premises
Powerful, intuitive monitoring interface

Modern, responsive UI built for real-time troubleshooting with customizable dashboards and advanced visualization capabilities.

Real-time chart updates
Customizable dashboards
Dark & light themes
Advanced filtering & search
Responsive on all devices
Collaboration features
> Explore Netdata UI
Monitor on the go

Native iOS and Android apps bring full monitoring capabilities to your mobile device with real-time alerts and notifications.

iOS & Android apps
Push notifications
Touch-optimized interface
Offline data access
Biometric authentication
Widget support
> Download apps

Best energy efficiency

True real-time per-second

100% automated zero config

Centralized observability

Multi-year retention

High availability built-in

Zero maintenance

Always up-to-date

Enterprise security

Complete data control

Air-gap ready

Compliance certified

Millisecond responsiveness

Infinite zoom & pan

Works on any device

Native performance

Instant alerts

Monitor anywhere

80% Faster Incident Resolution
AI-powered troubleshooting from detection, to root cause and blast radius identification, to reporting.
True Real-Time and Simple, even at Scale
Linearly and infinitely scalable full-stack observability that can be deployed even mid-crisis.
90% Cost Reduction, Full Fidelity
Instead of centralizing the data, Netdata distributes the code, eliminating pipelines and complexity.
Control Without Surrender
SOC 2 Type 2 certified with every metric kept on your infrastructure.
Integrations

800+ collectors and notification channels, auto-discovered and ready out of the box.

800+ data collectors
Auto-discovery & zero config
Cloud, infra, app protocols
Notifications out of the box
> Explore integrations
Real Results
46% Cost Reduction

Reduced monitoring costs by 46% while cutting staff overhead by 67%.

— Leonardo Antunez, Codyas

Zero Pipeline

No data shipping. No central storage costs. Query at the edge.

From Our Users
"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

No Query Language

Point-and-click troubleshooting. No PromQL, no LogQL, no learning curve.

Enterprise Ready
67% Less Staff, 46% Cost Cut

Enterprise efficiency without enterprise complexity—real ROI from day one.

— Leonardo Antunez, Codyas

SOC 2 Type 2 Certified

Zero data egress. Only metadata reaches the cloud. Your metrics stay on your infrastructure.

Full Coverage
800+ Collectors

Auto-discovered and configured. No manual setup required.

Any Notification Channel

Slack, PagerDuty, Teams, email, webhooks—all built-in.

Built for the People Who Get Paged
Because 3am alerts deserve instant answers, not hour-long hunts.
Every Industry Has Rules. We Master Them.
See how healthcare, finance, and government teams cut monitoring costs 90% while staying audit-ready.
Monitor Any Technology. Configure Nothing.
Install the agent. It already knows your stack.
From Our Users
"A Rare Unicorn"

Netdata gives more than you invest in it. A rare unicorn that obeys the Pareto rule.

— Eduard Porquet Mateu, TMB Barcelona

99% Downtime Reduction

Reduced website downtime by 99% and cloud bill by 30% using Netdata alerts.

— Falkland Islands Government

Real Savings
30% Cloud Cost Reduction

Optimized resource allocation based on Netdata alerts cut cloud spending by 30%.

— Falkland Islands Government

46% Cost Cut

Reduced monitoring staff by 67% while cutting operational costs by 46%.

— Codyas

Real Coverage
"Plugin for Everything"

Netdata has agent capacity or a plugin for everything, including Windows and Kubernetes.

— Eduard Porquet Mateu, TMB Barcelona

"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

Real Speed
Troubleshooting in 30 Seconds

From 2-3 minutes to 30 seconds—instant visibility into any node issue.

— Matthew Artist, Nodecraft

20% Downtime Reduction

20% less downtime and 40% budget optimization from out-of-the-box monitoring.

— Simon Beginn, LANCOM Systems

Pay per Node. Unlimited Everything Else.

One price per node. Unlimited metrics, logs, users, and retention. No per-GB surprises.

Free tier—forever
No metric limits or caps
Retention you control
Cancel anytime
> See pricing plans
What's Your Monitoring Really Costing You?

Most teams overpay by 40-60%. Let's find out why.

Expose hidden metric charges
Calculate tool consolidation
Customers report 30-67% savings
Results in under 60 seconds
> See what you're really paying
Your Infrastructure Is Unique. Let's Talk.

Because monitoring 10 nodes is different from monitoring 10,000.

On-prem & air-gapped deployment
Volume pricing & agreements
Architecture review for your scale
Compliance & security support
> Start a conversation
Monitoring That Sells Itself

Deploy in minutes. Impress clients in hours. Earn recurring revenue for years.

30-second live demos close deals
Zero config = zero support burden
Competitive margins & deal protection
Response in 48 hours
> Apply to partner
Per-Second Metrics at Homelab Prices

Same engine, same dashboards, same ML. Just priced for tinkerers.

Community: Free forever · 5 nodes · non-commercial
Homelab: $90/yr · unlimited nodes · fair usage
> Start monitoring your lab—free
$1,000 Per Referral. Unlimited Referrals.

Your colleagues get 10% off. You get 10% commission. Everyone wins.

10% of subscriptions, up to $1,000 each
Track earnings inside Netdata Cloud
PayPal/Venmo payouts in 3-4 weeks
No caps, no complexity
> Get your referral link
Cost Proof
40% Budget Optimization

"Netdata's significant positive impact" — LANCOM Systems

Calculate Your Savings

Compare vs Datadog, Grafana, Dynatrace

Savings Proof
46% Cost Reduction

"Cut costs by 46%, staff by 67%" — Codyas

30% Cloud Bill Savings

"Reduced cloud bill by 30%" — Falkland Islands Gov

Enterprise Proof
"Better Than Combined Alternatives"

"Better observability with Netdata than combining other tools." — TMB Barcelona

Real Engineers, <24h Response

DPA, SLAs, on-prem, volume pricing

Why Partners Win
Demo Live Infrastructure

One command, 30 seconds, real data—no sandbox needed

Zero Tickets, High Margins

Auto-config + per-node pricing = predictable profit

Homelab Ready
"Absolutely Incredible"

"We tested every monitoring system under the sun." — Benjamin Gabler, CEO Rocket.Net

76k+ GitHub Stars

3rd most starred monitoring project

Worth Recommending
Product That Delivers

Customers report 40-67% cost cuts, 99% downtime reduction

Zero Risk to Your Rep

Free tier lets them try before they buy

Never Fight Fires Alone

Docs, community, and expert help—pick your path to resolution.

Learn.netdata.cloud docs
Discord, Forums, GitHub
Premium support available
> Get answers now
60 Seconds to First Dashboard

One command to install. Zero config. 850+ integrations documented.

Linux, Windows, K8s, Docker
Auto-discovers your stack
> Read our documentation
See Netdata in Action

Watch real-time monitoring in action—demos, tutorials, and engineering deep dives.

Product demos and walkthroughs
Real infrastructure, not staged
> Start with the 3-minute tour
Level Up Your Monitoring
Real problems. Real solutions. 112+ guides from basic monitoring to AI observability.
76,000+ Engineers Strong
615+ contributors. 1.5M daily downloads. One mission: simplify observability.
Per-Second. 90% Cheaper. Data Stays Home.
Side-by-side comparisons: costs, real-time granularity, and data sovereignty for every major tool.

See why teams switch from Datadog, Prometheus, Grafana, and more.

> Browse all comparisons
Edge-Native Observability, Born Open Source
Per-second visibility, ML on every metric, and data that never leaves your infrastructure.
Founded in 2016
615+ contributors worldwide
Remote-first, engineering-driven
Open source first
> Read our story
Promises We Publish—and Prove
12 principles backed by open code, independent validation, and measurable outcomes.
Open source, peer-reviewed
Zero config, instant value
Data sovereignty by design
Aligned pricing, no surprises
> See all 12 principles
Edge-Native, AI-Ready, 100% Open
76k+ stars. Full ML, AI, and automation—GPLv3+, not premium add-ons.
76,000+ GitHub stars
GPLv3+ licensed forever
ML on every metric, included
Zero vendor lock-in
> Explore our open source
Build Real-Time Observability for the World
Remote-first team shipping per-second monitoring with ML on every metric.
Remote-first, fully distributed
Open source (76k+ stars)
Challenging technical problems
Your code on millions of systems
> See open roles
Talk to a Netdata Human in <24 Hours
Sales, partnerships, press, or professional services—real engineers, fast answers.
Discuss your observability needs
Pricing and volume discounts
Partnership opportunities
Media and press inquiries
> Book a conversation
Your Data. Your Rules.
On-prem data, cloud control plane, transparent terms.
Trust & Scale
76,000+ GitHub Stars

One of the most popular open-source monitoring projects

SOC 2 Type 2 Certified

Enterprise-grade security and compliance

Data Sovereignty

Your metrics stay on your infrastructure

Validated
University of Amsterdam

"Most energy-efficient monitoring solution" — ICSOC 2023, peer-reviewed

ADASTEC (Autonomous Driving)

"Doesn't miss alerts—mission-critical trust for safety software"

Community Stats
615+ Contributors

Global community improving monitoring for everyone

1.5M+ Downloads/Day

Trusted by teams worldwide

GPLv3+ Licensed

Free forever, fully open source agent

Why Join?
Remote-First

Work from anywhere, async-friendly culture

Impact at Scale

Your work helps millions of systems

Compliance
SOC 2 Type 2

Audited security controls

GDPR Ready

Data stays on your infrastructure

Blog

Extending Netdata's anomaly detection training window

Enhancing Anomaly Detection with Extended Historical Data
by Andrew Maguire · February 2, 2023

We have been busy under the hood of the Netdata agent, introducing new capabilities that let you extend the “training window” used by its native anomaly detection.

This blog post discusses one of these improvements, which helps you reduce “false positives” by effectively extending the training window via the new (beautifully named) number of models per dimension configuration parameter.

Background

One of the most important considerations for our native anomaly detection is the overhead of running the training and scoring computations required to train thousands of models (one per metric) and produce anomaly bits every second based on those trained models.

📑 Read more about our approach to machine learning or how our anomaly detection actually works.

It is in this context that most of our design and implementation decisions have been made. Unlike many tools that rely on having your raw data in their cloud, we can't just take the kitchen-sink approach (nor should we - it tends not to work that well in practice) and throw all the data at some wrapper around something like Facebook Prophet.

Instead, we have to be a bit more grown up about the design choices we face and the trade-offs involved. Typically, this comes down to thinking more deeply about the ingredients we use and how we put them together; increasingly, this is becoming the most important aspect of actually using machine learning within a product.

Our journey so far in building native anomaly detection into the core of Netdata touches on aspects of this.

Training window

One very important consideration has been the amount of data to train on at any one time. We need to be careful not to train on too large a chunk of data at once, since reading, pre-processing, and then training on that data could have too noticeable an impact on CPU overhead.

On the other hand, we also need to be careful not to train on too little data, since the model might then fail to learn the “normal” patterns of the metric well enough to detect anomalies.

So we started as simply as possible: by default, we continually train and retrain on the most recent 4 hours of metrics data. Users can easily extend this if they want the model to learn over a longer window, but as a default we felt this gave the best balance between a reasonable window of time in which to learn “normal” patterns and keeping the overhead of the training step itself as close to negligible as possible.
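For concreteness, the training window is expressed as a number of samples via the maximum num samples to train setting in the [ml] section of netdata.conf (the same setting used in the proposed defaults later in this post). A minimal sketch, assuming per-second collection (update_every = 1), where 4 hours corresponds to 4 x 60 x 60 = 14400 samples:

[ml]
    # default window: 4 hours of per-second samples (4 * 60 * 60 = 14400)
    maximum num samples to train = 14400
    # to learn over a longer window, increase the sample count, e.g. 28800 for 8 hours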

False positives

While we and many in the community have found Netdata's existing anomaly detection capabilities very useful, especially in spotting sudden changes when things go wrong, there can of course still be “false positives”: behaviors or patterns that are simply rare enough that they do not occur often in any given 4-hour window, and so are never learned as “normal”.

For example, this could be common but irregular workloads (e.g. user-driven activity) or cron jobs that happen only a few times a day, or at irregularly spaced intervals. They would still be considered normal behavior on the system, but to a model trained only on the last 4 hours, during which the pattern never occurred, they of course look anomalous.

So it has been clear from the start that we needed some way to extend the amount of training data used, while still being very careful not to increase the training overhead on the system.

📑 If you are curious about some of our thinking here, feel free to check out this GitHub discussion where we explored various ideas and approaches.

To get the best of both worlds we decided that instead of just throwing away previous models once a new one has been trained, we should hold on to them and then use them when scoring in addition to the most recently trained model.

Each trained model is essentially a compressed representation of the data it was trained on, so using more “reference” models during scoring should let us capture a wider range of “normal” patterns on a system without any additional training overhead. We just need a little more space to store them (almost negligible, given the k-means models we use are really just a few numbers) and accept a very small performance impact during scoring (which, as we cover below, we have also been careful to incur only when needed).

What has changed?

To this end, we have introduced a new ML parameter called number of models per dimension (you can read more about it here), which controls the number of trained models used during scoring.

To illustrate this approach, below is some pseudo code of how the trained models are actually used in producing anomaly bits (which give you an “anomaly rate” over any window of time) each second.

# preprocess recent observations into a "feature vector"
latest_feature_vector = preprocess_data([recent_data])

# loop over each trained model
for model in models:
    # if any model considers the recent feature vector "normal", stop scoring
    if model.score(latest_feature_vector) < dimension_anomaly_score_threshold:
        anomaly_bit = 0
        break
else:
    # the loop finished without a break, meaning all models agree the feature
    # vector is anomalous, so netdata sets the anomaly bit
    anomaly_bit = 1

The aim here is to use those additional models only when we need to “double check” whether some potentially anomalous-looking recent data should indeed be flagged as such, based on a wider and more representative set of models.

So, essentially, once one model suggests a feature vector looks anomalous, we check all saved models, and only when they all agree does the anomaly bit finally get set to 1, signalling that Netdata considers the most recent feature vector unlike anything seen across all the models checked (which together span a wider training window).
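To make the scoring logic concrete, here is a small, self-contained sketch of the “all models must agree” idea using scikit-learn's KMeans, with the distance to the nearest cluster center standing in for the anomaly score. This is illustrative only, not the agent's actual implementation: the helper names (make_feature_vectors, anomaly_bit), the threshold, and the synthetic data are all invented for the example.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

def make_feature_vectors(raw, lag=6):
    # turn a raw series into lagged "feature vectors" of recent values
    return np.array([raw[i - lag:i] for i in range(lag, len(raw))])

# pretend we kept three models, each trained on a different (synthetic) window of data
windows = [rng.normal(0, 1, 1000) for _ in range(3)]
models = [KMeans(n_clusters=2, n_init=10).fit(make_feature_vectors(w)) for w in windows]

def anomaly_bit(latest_feature_vector, models, threshold):
    for model in models:
        # anomaly score = distance to the nearest cluster center
        score = np.min(np.linalg.norm(model.cluster_centers_ - latest_feature_vector, axis=1))
        if score < threshold:
            return 0  # any single model calling it "normal" is enough
    return 1          # only unanimous agreement flags it as anomalous

print(anomaly_bit(rng.normal(0, 1, 6), models, threshold=5.0))   # expected: 0 (normal)
print(anomaly_bit(rng.normal(10, 1, 6), models, threshold=5.0))  # expected: 1 (anomalous)

As in the pseudo code above, the loop short-circuits as soon as any model considers the vector normal, so the additional models only add scoring work when the data already looks suspicious.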

Impact on anomaly rates

Below is the typical impact of this change with the proposed new defaults (discussed later in this post). In this setup, both nodes are running the same workloads.

We can see that the overall node anomaly rate (blue line) for the ml-demo-ml-enabled-newconf node is consistently below that of the current default, represented by ml-demo-ml-enabled (red line).

In the highlighted period to the right of the chart we triggered a true anomaly on both nodes. You can see both nodes “react” with an increased node anomaly rate as we would hope.

[Chart: node anomaly rate with the new defaults vs. the current defaults on two nodes running identical workloads]

Some main takeaways here are that:

  1. Using more models during scoring will tend to suppress the overall node anomaly rate a little.
  2. When anomalies are detected, the overall node anomaly rate still rises, though it will tend to be a little lower than before.
  3. When something is considered anomalous by Netdata, we can have more confidence that it really is a strange, previously unseen pattern, regardless of whether it is actually something you need to react to.
  4. Overall, this should help reduce “false positives”.

Next steps

To begin with, the default for number of models per dimension is 1, so the new functionality collapses to the previous behavior of using only the single most recently trained model.

Dogfooding

The plan from here is to dogfood further internally and with the wider community while we work on two other “foundational” pieces of ML functionality:

  1. [Feat]: have ml work on any update_every - the ability to have anomaly detection work across all metrics regardless of their update_every. This will greatly increase default ML coverage across metrics.
  2. [Feat]: persist trained ML models to db - save trained ML models to disk so that they survive agent restarts or machine reboots; currently, in such cases, all training has to restart from scratch because models are kept only in RAM.

Once these two features have been implemented, and dogfooded internally and by early adopters in the community, we will move forward with this “update ml defaults and readme” PR to update Netdata's ML config defaults to something like the following.

The aim of the new defaults is that roughly the last 24 hours of data will be covered by the trained models, taking advantage of all the foundations laid to date.

[ml]
    # train on 6 hours
    maximum num samples to train = 21600
    # train every 3 hours
    train every = 10800
    number of models per dimension = 9
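As a quick sanity check on these numbers (assuming per-second collection, i.e. update_every = 1): 21600 samples is a 6-hour training window, retraining every 10800 seconds produces a new model every 3 hours, and keeping 9 models per dimension means the oldest retained model was trained on data from roughly 24-30 hours ago, so collectively the models span roughly the last day of behavior.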

Try it yourself!

As of Netdata v1.38, the new number of models per dimension parameter is available. You could try a configuration like the one above to see what impact it has on the anomaly rate across your infrastructure, and whether it helps reduce false positives in Netdata's Anomaly Advisor tab.
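If you want to try this, the [ml] settings live in netdata.conf. A minimal sketch, assuming a standard install with the config directory at /etc/netdata and systemd managing the service (adjust paths and the restart command to your setup):

cd /etc/netdata
# add the [ml] settings above to netdata.conf (the bundled edit-config helper or any editor works)
sudo ./edit-config netdata.conf
# restart the agent so the new ML settings take effect
sudo systemctl restart netdata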

We love feedback!

We’d love to hear any and all feedback you have about this feature. This is very much an initial iteration, and we hope to continually improve both the ML under the hood of the agent and the overall user experience as users share their thoughts with us.

🚧 Note: This functionality is still under active development. We dogfood it internally and among early adopters within the Netdata community. If you would like to get involved and help us with some feedback, email us at [email protected], create a thread in the Netdata Community Forums, join the 🤖-ml-powered-monitoring channel of the Netdata Discord, or open a discussion on GitHub if that's more your thing.

Learn more

If you’d like to dive deeper and learn a little more about exactly how it all works, please feel free to check out some of the resources below.