The only agent that thinks for itself

Autonomous Monitoring with self-learning AI built-in, operating independently across your entire stack.

Unlimited Metrics & Logs
Machine learning & MCP
5% CPU, 150MB RAM
3GB disk, >1 year retention
800+ integrations, zero config
Dashboards, alerts out of the box
> Discover Netdata Agents
Centralized metrics streaming and storage

Aggregate metrics from multiple agents into centralized Parent nodes for unified monitoring across your infrastructure.

Stream from unlimited agents
Long-term data retention
High availability clustering
Data replication & backup
Scalable architecture
Enterprise-grade security
> Learn about Parents
Fully managed cloud platform

Access your monitoring data from anywhere with our SaaS platform. No infrastructure to manage, automatic updates, and global availability.

Zero infrastructure management
99.9% uptime SLA
Global data centers
Automatic updates & patches
Enterprise SSO & RBAC
SOC2 & ISO certified
> Explore Netdata Cloud
Deploy Netdata Cloud in your infrastructure

Run the full Netdata Cloud platform on-premises for complete data sovereignty and compliance with your security policies.

Complete data sovereignty
Air-gapped deployment
Custom compliance controls
Private network integration
Dedicated support team
Kubernetes & Docker support
> Learn about Cloud On-Premises
Powerful, intuitive monitoring interface

Modern, responsive UI built for real-time troubleshooting with customizable dashboards and advanced visualization capabilities.

Real-time chart updates
Customizable dashboards
Dark & light themes
Advanced filtering & search
Responsive on all devices
Collaboration features
> Explore Netdata UI
Monitor on the go

Native iOS and Android apps bring full monitoring capabilities to your mobile device with real-time alerts and notifications.

iOS & Android apps
Push notifications
Touch-optimized interface
Offline data access
Biometric authentication
Widget support
> Download apps

Best energy efficiency

True real-time per-second

100% automated zero config

Centralized observability

Multi-year retention

High availability built-in

Zero maintenance

Always up-to-date

Enterprise security

Complete data control

Air-gap ready

Compliance certified

Millisecond responsiveness

Infinite zoom & pan

Works on any device

Native performance

Instant alerts

Monitor anywhere

80% Faster Incident Resolution
AI-powered troubleshooting from detection, to root cause and blast radius identification, to reporting.
True Real-Time and Simple, even at Scale
Linearly and infinitely scalable full-stack observability that can be deployed even mid-crisis.
90% Cost Reduction, Full Fidelity
Instead of centralizing the data, Netdata distributes the code, eliminating pipelines and complexity.
Control Without Surrender
SOC 2 Type 2 certified with every metric kept on your infrastructure.
Integrations

800+ collectors and notification channels, auto-discovered and ready out of the box.

800+ data collectors
Auto-discovery & zero config
Cloud, infra, app protocols
Notifications out of the box
> Explore integrations
Real Results
46% Cost Reduction

Reduced monitoring costs by 46% while cutting staff overhead by 67%.

— Leonardo Antunez, Codyas

Zero Pipeline

No data shipping. No central storage costs. Query at the edge.

From Our Users
"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

No Query Language

Point-and-click troubleshooting. No PromQL, no LogQL, no learning curve.

Enterprise Ready
67% Less Staff, 46% Cost Cut

Enterprise efficiency without enterprise complexity—real ROI from day one.

— Leonardo Antunez, Codyas

SOC 2 Type 2 Certified

Zero data egress. Only metadata reaches the cloud. Your metrics stay on your infrastructure.

Full Coverage
800+ Collectors

Auto-discovered and configured. No manual setup required.

Any Notification Channel

Slack, PagerDuty, Teams, email, webhooks—all built-in.

Built for the People Who Get Paged
Because 3am alerts deserve instant answers, not hour-long hunts.
Every Industry Has Rules. We Master Them.
See how healthcare, finance, and government teams cut monitoring costs 90% while staying audit-ready.
Monitor Any Technology. Configure Nothing.
Install the agent. It already knows your stack.
From Our Users
"A Rare Unicorn"

Netdata gives more than you invest in it. A rare unicorn that obeys the Pareto rule.

— Eduard Porquet Mateu, TMB Barcelona

99% Downtime Reduction

Reduced website downtime by 99% and cloud bill by 30% using Netdata alerts.

— Falkland Islands Government

Real Savings
30% Cloud Cost Reduction

Optimized resource allocation based on Netdata alerts cut cloud spending by 30%.

— Falkland Islands Government

46% Cost Cut

Reduced monitoring staff by 67% while cutting operational costs by 46%.

— Codyas

Real Coverage
"Plugin for Everything"

Netdata has agent capacity or a plugin for everything, including Windows and Kubernetes.

— Eduard Porquet Mateu, TMB Barcelona

"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

Real Speed
Troubleshooting in 30 Seconds

From 2-3 minutes to 30 seconds—instant visibility into any node issue.

— Matthew Artist, Nodecraft

20% Downtime Reduction

20% less downtime and 40% budget optimization from out-of-the-box monitoring.

— Simon Beginn, LANCOM Systems

Pay per Node. Unlimited Everything Else.

One price per node. Unlimited metrics, logs, users, and retention. No per-GB surprises.

Free tier—forever
No metric limits or caps
Retention you control
Cancel anytime
> See pricing plans
What's Your Monitoring Really Costing You?

Most teams overpay by 40-60%. Let's find out why.

Expose hidden metric charges
Calculate tool consolidation
Customers report 30-67% savings
Results in under 60 seconds
> See what you're really paying
Your Infrastructure Is Unique. Let's Talk.

Because monitoring 10 nodes is different from monitoring 10,000.

On-prem & air-gapped deployment
Volume pricing & agreements
Architecture review for your scale
Compliance & security support
> Start a conversation
Monitoring That Sells Itself

Deploy in minutes. Impress clients in hours. Earn recurring revenue for years.

30-second live demos close deals
Zero config = zero support burden
Competitive margins & deal protection
Response in 48 hours
> Apply to partner
Per-Second Metrics at Homelab Prices

Same engine, same dashboards, same ML. Just priced for tinkerers.

Community: Free forever · 5 nodes · non-commercial
Homelab: $90/yr · unlimited nodes · fair usage
> Start monitoring your lab—free
$1,000 Per Referral. Unlimited Referrals.

Your colleagues get 10% off. You get 10% commission. Everyone wins.

10% of subscriptions, up to $1,000 each
Track earnings inside Netdata Cloud
PayPal/Venmo payouts in 3-4 weeks
No caps, no complexity
> Get your referral link
Cost Proof
40% Budget Optimization

"Netdata's significant positive impact" — LANCOM Systems

Calculate Your Savings

Compare vs Datadog, Grafana, Dynatrace

Savings Proof
46% Cost Reduction

"Cut costs by 46%, staff by 67%" — Codyas

30% Cloud Bill Savings

"Reduced cloud bill by 30%" — Falkland Islands Gov

Enterprise Proof
"Better Than Combined Alternatives"

"Better observability with Netdata than combining other tools." — TMB Barcelona

Real Engineers, <24h Response

DPA, SLAs, on-prem, volume pricing

Why Partners Win
Demo Live Infrastructure

One command, 30 seconds, real data—no sandbox needed

Zero Tickets, High Margins

Auto-config + per-node pricing = predictable profit

Homelab Ready
"Absolutely Incredible"

"We tested every monitoring system under the sun." — Benjamin Gabler, CEO Rocket.Net

76k+ GitHub Stars

3rd most starred monitoring project

Worth Recommending
Product That Delivers

Customers report 40-67% cost cuts, 99% downtime reduction

Zero Risk to Your Rep

Free tier lets them try before they buy

Never Fight Fires Alone

Docs, community, and expert help—pick your path to resolution.

Learn.netdata.cloud docs
Discord, Forums, GitHub
Premium support available
> Get answers now
60 Seconds to First Dashboard

One command to install. Zero config. 850+ integrations documented.

Linux, Windows, K8s, Docker
Auto-discovers your stack
> Read our documentation
See Netdata in Action

Watch real-time monitoring in action—demos, tutorials, and engineering deep dives.

Product demos and walkthroughs
Real infrastructure, not staged
> Start with the 3-minute tour
Level Up Your Monitoring
Real problems. Real solutions. 112+ guides from basic monitoring to AI observability.
76,000+ Engineers Strong
615+ contributors. 1.5M daily downloads. One mission: simplify observability.
Per-Second. 90% Cheaper. Data Stays Home.
Side-by-side comparisons: costs, real-time granularity, and data sovereignty for every major tool.

See why teams switch from Datadog, Prometheus, Grafana, and more.

> Browse all comparisons
Edge-Native Observability, Born Open Source
Per-second visibility, ML on every metric, and data that never leaves your infrastructure.
Founded in 2016
615+ contributors worldwide
Remote-first, engineering-driven
Open source first
> Read our story
Promises We Publish—and Prove
12 principles backed by open code, independent validation, and measurable outcomes.
Open source, peer-reviewed
Zero config, instant value
Data sovereignty by design
Aligned pricing, no surprises
> See all 12 principles
Edge-Native, AI-Ready, 100% Open
76k+ stars. Full ML, AI, and automation—GPLv3+, not premium add-ons.
76,000+ GitHub stars
GPLv3+ licensed forever
ML on every metric, included
Zero vendor lock-in
> Explore our open source
Build Real-Time Observability for the World
Remote-first team shipping per-second monitoring with ML on every metric.
Remote-first, fully distributed
Open source (76k+ stars)
Challenging technical problems
Your code on millions of systems
> See open roles
Talk to a Netdata Human in <24 Hours
Sales, partnerships, press, or professional services—real engineers, fast answers.
Discuss your observability needs
Pricing and volume discounts
Partnership opportunities
Media and press inquiries
> Book a conversation
Your Data. Your Rules.
On-prem data, cloud control plane, transparent terms.
Trust & Scale
76,000+ GitHub Stars

One of the most popular open-source monitoring projects

SOC 2 Type 2 Certified

Enterprise-grade security and compliance

Data Sovereignty

Your metrics stay on your infrastructure

Validated
University of Amsterdam

"Most energy-efficient monitoring solution" — ICSOC 2023, peer-reviewed

ADASTEC (Autonomous Driving)

"Doesn't miss alerts—mission-critical trust for safety software"

Community Stats
615+ Contributors

Global community improving monitoring for everyone

1.5M+ Downloads/Day

Trusted by teams worldwide

GPLv3+ Licensed

Free forever, fully open source agent

Why Join?
Remote-First

Work from anywhere, async-friendly culture

Impact at Scale

Your work helps millions of systems

Compliance
SOC 2 Type 2

Audited security controls

GDPR Ready

Data stays on your infrastructure

Blog

Using Pandas In Python: Data Analysis & Performance Insights

Extending Monitoring to Environmental Data for Insightful Correlations
by Andrew Maguire · October 19, 2022

netdata-pandas

Netdata just got a Pandas collector.

Pandas is the de facto standard for reading and processing most types of structured data in Python. So if you have some CSV, JSON, or XML data containing metrics you'd like to monitor, either locally or via an HTTP endpoint, chances are you can now do this easily with the Pandas collector, without having to develop your own custom collector as you might have in the past.

Let’s take a look at a realistic example where we have an HTTP API that returns JSON from which we would like to extract some metrics.

Monitoring weather data

We will use the awesome free API from Open-Meteo and the Pandas collector to pull today's temperature forecasts across a range of cities and store the mean, min, and max for each city in Netdata.

With the Pandas collector a user just needs to define a sequence of df_steps as part of their collector configuration. Below is the configuration used in this example. We will focus mostly on the df_steps parameter as that’s really where all the logic lives.

# example pulling some hourly temperature data
temperature:
    name: "temperature"
    update_every: 3
    chart_configs:
      - name: "temperature_by_city"
        title: "Temperature By City"
        family: "temperature.today"
        context: "temperature"
        type: "line"
        units: "Celsius"
        df_steps: >
          pd.DataFrame.from_dict(
            {city: requests.get(
                f'https://api.open-meteo.com/v1/forecast?latitude={lat}&longitude={lng}&hourly=temperature_2m'
                ).json()['hourly']['temperature_2m'] 
            for (city,lat,lng) 
            in [
                ('dublin', 53.3441, -6.2675),
                ('athens', 37.9792, 23.7166),
                ('london', 51.5002, -0.1262),
                ('berlin', 52.5235, 13.4115),
                ('paris', 48.8567, 2.3510),
                ]
            }
            );                                                         # use dictionary comprehension to make multiple requests;
          df.describe();                                               # get aggregate stats for each city;
          df.transpose()[['mean', 'max', 'min']].reset_index();        # just take mean, min, max;
          df.rename(columns={'index':'city'});                         # some column renaming;
          df.pivot(columns='city').mean().to_frame().reset_index();    # force to be one row per city;
          df.rename(columns={0:'degrees'});                            # some column renaming;
          pd.concat([df, df['city']+'_'+df['level_0']], axis=1);       # add new column combining city and summary measurement label;
          df.rename(columns={0:'measurement'});                        # some column renaming;
          df[['measurement', 'degrees']].set_index('measurement');     # just take two columns we want;
          df.sort_index();                                             # sort by city name;
          df.transpose();                                              # transpose so it's just one wide row;

To make developing your own df_steps as easy as possible, we have created this Google Colab notebook that lets you iterate and build up your code step by step, printing the output of each step along the way. There are some more examples in this notebook, so feel free to duplicate it to work on your own use case.
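If you'd rather experiment locally, the idea is easy to sketch in plain Python. The snippet below is a rough illustration only, not the collector's actual implementation: it evaluates each ';'-separated expression in turn, binding df to the result of the previous step, and prints every intermediate DataFrame (the real collector also makes requests available, as in the example above).

import pandas as pd

# Rough sketch only (not the collector's actual code): run each ';'-separated
# expression in turn, with `df` bound to the previous step's result, and
# print every intermediate DataFrame so you can inspect it.
df_steps = """
pd.DataFrame({'dublin': [14.0, 13.9, 14.0], 'athens': [17.8, 17.9, 17.7]});
df.describe();
df.transpose()[['mean', 'max', 'min']]
"""

df = None
for i, step in enumerate(s.strip() for s in df_steps.split(';') if s.strip()):
    df = eval(step, {'pd': pd, 'df': df})
    print(f'--- step {i} ---')
    print(df)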

Step by step

Each step needs to result in a Pandas DataFrame. This is a common pattern in data pipelining whereby we chain a series of transformations together, each step taking in a dataframe and outputting a transformed dataframe.
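Written as ordinary Python with a made-up toy frame, the pattern is just repeated reassignment of df, each step taking the previous DataFrame and returning a new one (illustrative only):

import pandas as pd

df = pd.DataFrame({'dublin': [14.0, 13.9], 'athens': [17.8, 17.9]})   # starting data
df = df.describe()                                                    # stats per column
df = df.transpose()[['mean', 'max', 'min']].reset_index()             # keep mean/min/max per city
df = df.rename(columns={'index': 'city'})                             # tidy column name
print(df)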

First, we loop over a number of API calls to pull hourly temperature forecasts for each city into a starting DataFrame.

pd.DataFrame.from_dict(
    {city: requests.get(
        f'https://api.open-meteo.com/v1/forecast?latitude={lat}&longitude={lng}&hourly=temperature_2m'
        ).json()['hourly']['temperature_2m'] 
    for (city,lat,lng) 
    in [
        ('dublin', 53.3441, -6.2675),
        ('athens', 37.9792, 23.7166),
        ('london', 51.5002, -0.1262),
        ('berlin', 52.5235, 13.4115),
        ('paris', 48.8567, 2.3510),
        ]
    }
  ) 
# =
#      dublin  athens  london  berlin  paris
# 0      14.0    17.8    12.5     7.9    9.1
# 1      14.0    17.7    12.6     7.3    8.0
# 2      13.9    17.9    12.6     6.9    6.1
# 3      14.0    17.7    12.8     6.1    5.8
# 4      14.0    17.6    12.7     5.9    5.7
# ..      ...     ...     ...     ...    ...
# 163    13.2    19.3    15.5    11.7   15.1
# 164    12.8    19.0    15.0    11.5   14.0
# 165    12.6    18.6    14.6    11.1   12.6
# 166    12.8    18.3    14.4    10.6   11.8
# 167    13.3    18.0    14.3    10.2   11.0
# 
# [168 rows x 5 columns] 

Next we aggregate this data to get summary statistics per city.

df.describe() 
# =
#            dublin      athens      london      berlin       paris
# count  168.000000  168.000000  168.000000  168.000000  168.000000
# mean    12.008929   19.459524   12.513690   10.798214   12.059524
# std      2.442361    4.037315    3.044617    3.286672    4.046204
# min      6.600000   12.200000    5.200000    5.700000    4.800000
# 25%     10.675000   16.700000    9.775000    7.900000    8.475000
# 50%     12.800000   18.900000   12.550000   10.400000   11.750000
# 75%     13.900000   23.125000   14.900000   13.700000   15.825000
# max     15.300000   26.200000   18.900000   17.600000   19.300000 

The next two steps filter down to the metrics we want, reshape the frame, and rename a column.

df.transpose()[['mean', 'max', 'min']].reset_index() 
# =
#     index       mean   max   min
# 0  dublin  12.008929  15.3   6.6
# 1  athens  19.459524  26.2  12.2
# 2  london  12.513690  18.9   5.2
# 3  berlin  10.798214  17.6   5.7
# 4   paris  12.059524  19.3   4.8 

df.rename(columns={'index':'city'}) 
# =
#      city       mean   max   min
# 0  dublin  12.008929  15.3   6.6
# 1  athens  19.459524  26.2  12.2
# 2  london  12.513690  18.9   5.2
# 3  berlin  10.798214  17.6   5.7
# 4   paris  12.059524  19.3   4.8 

Now that we have a table with the data we want, the next steps are about reshaping it into a single “wide” row, as that is what the collector expects to result from the last step.

df.pivot(columns='city').mean().to_frame().reset_index() 
# =
#    level_0    city          0
# 0     mean  athens  19.459524
# 1     mean  berlin  10.798214
# 2     mean  dublin  12.008929
# 3     mean  london  12.513690
# 4     mean   paris  12.059524
# 5      max  athens  26.200000
# 6      max  berlin  17.600000
# 7      max  dublin  15.300000
# 8      max  london  18.900000
# 9      max   paris  19.300000
# 10     min  athens  12.200000
# 11     min  berlin   5.700000
# 12     min  dublin   6.600000
# 13     min  london   5.200000
# 14     min   paris   4.800000

df.rename(columns={0:'degrees'}) 
# =
#    level_0    city    degrees
# 0     mean  athens  19.459524
# 1     mean  berlin  10.798214
# 2     mean  dublin  12.008929
# 3     mean  london  12.513690
# 4     mean   paris  12.059524
# 5      max  athens  26.200000
# 6      max  berlin  17.600000
# 7      max  dublin  15.300000
# 8      max  london  18.900000
# 9      max   paris  19.300000
# 10     min  athens  12.200000
# 11     min  berlin   5.700000
# 12     min  dublin   6.600000
# 13     min  london   5.200000
# 14     min   paris   4.800000

pd.concat([df, df['city']+'_'+df['level_0']], axis=1) 
# =
#    level_0    city    degrees            0
# 0     mean  athens  19.459524  athens_mean
# 1     mean  berlin  10.798214  berlin_mean
# 2     mean  dublin  12.008929  dublin_mean
# 3     mean  london  12.513690  london_mean
# 4     mean   paris  12.059524   paris_mean
# 5      max  athens  26.200000   athens_max
# 6      max  berlin  17.600000   berlin_max
# 7      max  dublin  15.300000   dublin_max
# 8      max  london  18.900000   london_max
# 9      max   paris  19.300000    paris_max
# 10     min  athens  12.200000   athens_min
# 11     min  berlin   5.700000   berlin_min
# 12     min  dublin   6.600000   dublin_min
# 13     min  london   5.200000   london_min
# 14     min   paris   4.800000    paris_min

df.rename(columns={0:'measurement'}) 
# =
#    level_0    city    degrees  measurement
# 0     mean  athens  19.459524  athens_mean
# 1     mean  berlin  10.798214  berlin_mean
# 2     mean  dublin  12.008929  dublin_mean
# 3     mean  london  12.513690  london_mean
# 4     mean   paris  12.059524   paris_mean
# 5      max  athens  26.200000   athens_max
# 6      max  berlin  17.600000   berlin_max
# 7      max  dublin  15.300000   dublin_max
# 8      max  london  18.900000   london_max
# 9      max   paris  19.300000    paris_max
# 10     min  athens  12.200000   athens_min
# 11     min  berlin   5.700000   berlin_min
# 12     min  dublin   6.600000   dublin_min
# 13     min  london   5.200000   london_min
# 14     min   paris   4.800000    paris_min

df[['measurement', 'degrees']].set_index('measurement') 
# =
#                degrees
# measurement           
# athens_mean  19.459524
# berlin_mean  10.798214
# dublin_mean  12.008929
# london_mean  12.513690
# paris_mean   12.059524
# athens_max   26.200000
# berlin_max   17.600000
# dublin_max   15.300000
# london_max   18.900000
# paris_max    19.300000
# athens_min   12.200000
# berlin_min    5.700000
# dublin_min    6.600000
# london_min    5.200000
# paris_min     4.800000 

Next we sort the data.

df.sort_index() 
# =
#                degrees
# measurement           
# athens_max   26.200000
# athens_mean  19.459524
# athens_min   12.200000
# berlin_max   17.600000
# berlin_mean  10.798214
# berlin_min    5.700000
# dublin_max   15.300000
# dublin_mean  12.008929
# dublin_min    6.600000
# london_max   18.900000
# london_mean  12.513690
# london_min    5.200000
# paris_max    19.300000
# paris_mean   12.059524
# paris_min     4.800000

And finally, we do one last transpose to go from a long format to a wide format of one row, where each column is a metric we want Netdata to collect.

df.transpose() 
# =
# measurement  athens_max  athens_mean  athens_min  berlin_max  berlin_mean  \
# degrees            26.2    19.459524        12.2        17.6    10.798214   
# 
# measurement  berlin_min  dublin_max  dublin_mean  dublin_min  london_max  \
# degrees             5.7        15.3    12.008929         6.6        18.9   
# 
# measurement  london_mean  london_min  paris_max  paris_mean  paris_min  
# degrees         12.51369         5.2       19.3   12.059524        4.8   

This row is then converted internally by the collector into a Python dictionary of key-value pairs.

{'athens_max': 26.2, 'athens_mean': 19.45952380952381, 'athens_min': 12.2, 'berlin_max': 17.6, 'berlin_mean': 10.798214285714286, 'berlin_min': 5.7, 'dublin_max': 15.3, 'dublin_mean': 12.008928571428571, 'dublin_min': 6.6, 'london_max': 18.9, 'london_mean': 12.513690476190478, 'london_min': 5.2, 'paris_max': 19.3, 'paris_mean': 12.059523809523808, 'paris_min': 4.8}
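
For illustration, here is a hypothetical sketch (not necessarily the collector's internal code) of how such a one-row wide frame maps to that dictionary; the frame below is a truncated stand-in for the real result.

import pandas as pd

# Hypothetical sketch: turning a one-row wide DataFrame into a flat
# {metric: value} dictionary.
wide = pd.DataFrame({'athens_max': [26.2], 'athens_mean': [19.46], 'athens_min': [12.2]},
                    index=['degrees'])
print(wide.loc['degrees'].to_dict())
# {'athens_max': 26.2, 'athens_mean': 19.46, 'athens_min': 12.2}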

And that’s it; this should end up in a chart in Netdata like the one below.

[Screenshot: the resulting temperature chart in Netdata]

Try it yourself

Pandas is a truly amazing library that can usually accomplish almost any data processing task, so if you have some custom data you would like to monitor with Netdata but do not quite feel ready to develop your own custom collector, give the Pandas collector a go!
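
As a starting point, here is a hypothetical minimal configuration sketch that follows the same schema as the example above; the job name, file path, and column name are made up for illustration.

# hypothetical example: monitoring one column from a local csv file
my_csv_metrics:
    name: "my_csv_metrics"
    update_every: 5
    chart_configs:
      - name: "sensor_temperature"
        title: "Sensor Temperature"
        family: "sensors"
        context: "sensor_temperature"
        type: "line"
        units: "Celsius"
        df_steps: >
          pd.read_csv('/path/to/metrics.csv');                  # read the local csv file;
          df[['sensor_temp']].mean().to_frame().transpose();    # one wide row holding the mean value;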

If you haven’t already, sign up now for a free Netdata account!

We’d love to hear from you: if you have any questions, complaints, or feedback, please reach out to us on Discord or GitHub.