The only agent that thinks for itself

Autonomous Monitoring with self-learning AI built-in, operating independently across your entire stack.

Unlimited Metrics & Logs
Machine learning & MCP
5% CPU, 150MB RAM
3GB disk, >1 year retention
800+ integrations, zero config
Dashboards, alerts out of the box
> Discover Netdata Agents
Centralized metrics streaming and storage

Aggregate metrics from multiple agents into centralized Parent nodes for unified monitoring across your infrastructure.

Stream from unlimited agents
Long-term data retention
High availability clustering
Data replication & backup
Scalable architecture
Enterprise-grade security
> Learn about Parents
Fully managed cloud platform

Access your monitoring data from anywhere with our SaaS platform. No infrastructure to manage, automatic updates, and global availability.

Zero infrastructure management
99.9% uptime SLA
Global data centers
Automatic updates & patches
Enterprise SSO & RBAC
SOC2 & ISO certified
> Explore Netdata Cloud
Deploy Netdata Cloud in your infrastructure

Run the full Netdata Cloud platform on-premises for complete data sovereignty and compliance with your security policies.

Complete data sovereignty
Air-gapped deployment
Custom compliance controls
Private network integration
Dedicated support team
Kubernetes & Docker support
> Learn about Cloud On-Premises
Powerful, intuitive monitoring interface

Modern, responsive UI built for real-time troubleshooting with customizable dashboards and advanced visualization capabilities.

Real-time chart updates
Customizable dashboards
Dark & light themes
Advanced filtering & search
Responsive on all devices
Collaboration features
> Explore Netdata UI
Monitor on the go

Native iOS and Android apps bring full monitoring capabilities to your mobile device with real-time alerts and notifications.

iOS & Android apps
Push notifications
Touch-optimized interface
Offline data access
Biometric authentication
Widget support
> Download apps

Best energy efficiency

True real-time per-second

100% automated zero config

Centralized observability

Multi-year retention

High availability built-in

Zero maintenance

Always up-to-date

Enterprise security

Complete data control

Air-gap ready

Compliance certified

Millisecond responsiveness

Infinite zoom & pan

Works on any device

Native performance

Instant alerts

Monitor anywhere

80% Faster Incident Resolution
AI-powered troubleshooting from detection, to root cause and blast radius identification, to reporting.
True Real-Time and Simple, even at Scale
Linearly and infinitely scalable full-stack observability that can be deployed even mid-crisis.
90% Cost Reduction, Full Fidelity
Instead of centralizing the data, Netdata distributes the code, eliminating pipelines and complexity.
Control Without Surrender
SOC 2 Type 2 certified with every metric kept on your infrastructure.
Integrations

800+ collectors and notification channels, auto-discovered and ready out of the box.

800+ data collectors
Auto-discovery & zero config
Cloud, infra, app protocols
Notifications out of the box
> Explore integrations
Real Results
46% Cost Reduction

Reduced monitoring costs by 46% while cutting staff overhead by 67%.

— Leonardo Antunez, Codyas

Zero Pipeline

No data shipping. No central storage costs. Query at the edge.

From Our Users
"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

No Query Language

Point-and-click troubleshooting. No PromQL, no LogQL, no learning curve.

Enterprise Ready
67% Less Staff, 46% Cost Cut

Enterprise efficiency without enterprise complexity—real ROI from day one.

— Leonardo Antunez, Codyas

SOC 2 Type 2 Certified

Zero data egress. Only metadata reaches the cloud. Your metrics stay on your infrastructure.

Full Coverage
800+ Collectors

Auto-discovered and configured. No manual setup required.

Any Notification Channel

Slack, PagerDuty, Teams, email, webhooks—all built-in.

Built for the People Who Get Paged
Because 3am alerts deserve instant answers, not hour-long hunts.
Every Industry Has Rules. We Master Them.
See how healthcare, finance, and government teams cut monitoring costs 90% while staying audit-ready.
Monitor Any Technology. Configure Nothing.
Install the agent. It already knows your stack.
From Our Users
"A Rare Unicorn"

Netdata gives more than you invest in it. A rare unicorn that obeys the Pareto rule.

— Eduard Porquet Mateu, TMB Barcelona

99% Downtime Reduction

Reduced website downtime by 99% and cloud bill by 30% using Netdata alerts.

— Falkland Islands Government

Real Savings
30% Cloud Cost Reduction

Optimized resource allocation based on Netdata alerts cut cloud spending by 30%.

— Falkland Islands Government

46% Cost Cut

Reduced monitoring staff by 67% while cutting operational costs by 46%.

— Codyas

Real Coverage
"Plugin for Everything"

Netdata has agent capacity or a plugin for everything, including Windows and Kubernetes.

— Eduard Porquet Mateu, TMB Barcelona

"Out-of-the-Box"

So many out-of-the-box features! I mostly don't have to develop anything.

— Simon Beginn, LANCOM Systems

Real Speed
Troubleshooting in 30 Seconds

From 2-3 minutes to 30 seconds—instant visibility into any node issue.

— Matthew Artist, Nodecraft

20% Downtime Reduction

20% less downtime and 40% budget optimization from out-of-the-box monitoring.

— Simon Beginn, LANCOM Systems

Pay per Node. Unlimited Everything Else.

One price per node. Unlimited metrics, logs, users, and retention. No per-GB surprises.

Free tier—forever
No metric limits or caps
Retention you control
Cancel anytime
> See pricing plans
What's Your Monitoring Really Costing You?

Most teams overpay by 40-60%. Let's find out why.

Expose hidden metric charges
Calculate tool consolidation
Customers report 30-67% savings
Results in under 60 seconds
> See what you're really paying
Your Infrastructure Is Unique. Let's Talk.

Because monitoring 10 nodes is different from monitoring 10,000.

On-prem & air-gapped deployment
Volume pricing & agreements
Architecture review for your scale
Compliance & security support
> Start a conversation
Monitoring That Sells Itself

Deploy in minutes. Impress clients in hours. Earn recurring revenue for years.

30-second live demos close deals
Zero config = zero support burden
Competitive margins & deal protection
Response in 48 hours
> Apply to partner
Per-Second Metrics at Homelab Prices

Same engine, same dashboards, same ML. Just priced for tinkerers.

Community: Free forever · 5 nodes · non-commercial
Homelab: $90/yr · unlimited nodes · fair usage
> Start monitoring your lab—free
$1,000 Per Referral. Unlimited Referrals.

Your colleagues get 10% off. You get 10% commission. Everyone wins.

10% of subscriptions, up to $1,000 each
Track earnings inside Netdata Cloud
PayPal/Venmo payouts in 3-4 weeks
No caps, no complexity
> Get your referral link
Cost Proof
40% Budget Optimization

"Netdata's significant positive impact" — LANCOM Systems

Calculate Your Savings

Compare vs Datadog, Grafana, Dynatrace

Savings Proof
46% Cost Reduction

"Cut costs by 46%, staff by 67%" — Codyas

30% Cloud Bill Savings

"Reduced cloud bill by 30%" — Falkland Islands Gov

Enterprise Proof
"Better Than Combined Alternatives"

"Better observability with Netdata than combining other tools." — TMB Barcelona

Real Engineers, <24h Response

DPA, SLAs, on-prem, volume pricing

Why Partners Win
Demo Live Infrastructure

One command, 30 seconds, real data—no sandbox needed

Zero Tickets, High Margins

Auto-config + per-node pricing = predictable profit

Homelab Ready
"Absolutely Incredible"

"We tested every monitoring system under the sun." — Benjamin Gabler, CEO Rocket.Net

76k+ GitHub Stars

3rd most starred monitoring project

Worth Recommending
Product That Delivers

Customers report 40-67% cost cuts, 99% downtime reduction

Zero Risk to Your Rep

Free tier lets them try before they buy

Never Fight Fires Alone

Docs, community, and expert help—pick your path to resolution.

Learn.netdata.cloud docs
Discord, Forums, GitHub
Premium support available
> Get answers now
60 Seconds to First Dashboard

One command to install. Zero config. 850+ integrations documented.

Linux, Windows, K8s, Docker
Auto-discovers your stack
> Read our documentation
See Netdata in Action

Watch real-time monitoring in action—demos, tutorials, and engineering deep dives.

Product demos and walkthroughs
Real infrastructure, not staged
> Start with the 3-minute tour
Level Up Your Monitoring
Real problems. Real solutions. 112+ guides from basic monitoring to AI observability.
76,000+ Engineers Strong
615+ contributors. 1.5M daily downloads. One mission: simplify observability.
Per-Second. 90% Cheaper. Data Stays Home.
Side-by-side comparisons: costs, real-time granularity, and data sovereignty for every major tool.

See why teams switch from Datadog, Prometheus, Grafana, and more.

> Browse all comparisons
Edge-Native Observability, Born Open Source
Per-second visibility, ML on every metric, and data that never leaves your infrastructure.
Founded in 2016
615+ contributors worldwide
Remote-first, engineering-driven
Open source first
> Read our story
Promises We Publish—and Prove
12 principles backed by open code, independent validation, and measurable outcomes.
Open source, peer-reviewed
Zero config, instant value
Data sovereignty by design
Aligned pricing, no surprises
> See all 12 principles
Edge-Native, AI-Ready, 100% Open
76k+ stars. Full ML, AI, and automation—GPLv3+, not premium add-ons.
76,000+ GitHub stars
GPLv3+ licensed forever
ML on every metric, included
Zero vendor lock-in
> Explore our open source
Build Real-Time Observability for the World
Remote-first team shipping per-second monitoring with ML on every metric.
Remote-first, fully distributed
Open source (76k+ stars)
Challenging technical problems
Your code on millions of systems
> See open roles
Talk to a Netdata Human in <24 Hours
Sales, partnerships, press, or professional services—real engineers, fast answers.
Discuss your observability needs
Pricing and volume discounts
Partnership opportunities
Media and press inquiries
> Book a conversation
Your Data. Your Rules.
On-prem data, cloud control plane, transparent terms.
Trust & Scale
76,000+ GitHub Stars

One of the most popular open-source monitoring projects

SOC 2 Type 2 Certified

Enterprise-grade security and compliance

Data Sovereignty

Your metrics stay on your infrastructure

Validated
University of Amsterdam

"Most energy-efficient monitoring solution" — ICSOC 2023, peer-reviewed

ADASTEC (Autonomous Driving)

"Doesn't miss alerts—mission-critical trust for safety software"

Community Stats
615+ Contributors

Global community improving monitoring for everyone

1.5M+ Downloads/Day

Trusted by teams worldwide

GPLv3+ Licensed

Free forever, fully open source agent

Why Join?
Remote-First

Work from anywhere, async-friendly culture

Impact at Scale

Your work helps millions of systems

Compliance
SOC 2 Type 2

Audited security controls

GDPR Ready

Data stays on your infrastructure

Blog

Unlock the Secrets of Kernel Memory Usage

Diving Deep into Kernel Memory for System Optimization
by Satyadeep Ashwathnarayana · May 4, 2023


The mem.kernel chart in Netdata provides insight into the memory usage of various kernel subsystems and mechanisms. By understanding its dimensions and their technical details, you can monitor your system’s kernel memory usage, identify potential issues or inefficiencies, and gain valuable insight into the performance of your kernel and memory subsystem.
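
If you want to cross-check what the chart shows, the same counters are exposed by the kernel in /proc/meminfo. Below is a minimal Python sketch (assuming a Linux host and the standard /proc/meminfo field names) that prints the five dimensions discussed in this post; the helper name is just illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch: read the kernel-memory counters behind the
mem.kernel dimensions straight from /proc/meminfo (Linux only).
Values in /proc/meminfo are reported in kB."""

FIELDS = ("Slab", "VmallocUsed", "KernelStack", "PageTables", "Percpu")

def kernel_memory_kb(path="/proc/meminfo"):
    values = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in FIELDS:
                values[key] = int(rest.split()[0])  # first token after ':' is the size in kB
    return values

if __name__ == "__main__":
    for name, kb in kernel_memory_kb().items():
        print(f"{name:12s} {kb / 1024:8.1f} MiB")
```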


Slab

The slab allocator is a memory management mechanism introduced by Jeff Bonwick in 1994 to manage memory allocation for kernel objects. The main purpose of the slab allocator is to reduce memory fragmentation and improve the speed of memory allocation/deallocation. The slab allocator groups objects of the same size into “slabs” and caches the objects to speed up future allocations.

The slab allocator consists of three main components:

  • Cache: A cache is a collection of slabs that store objects of the same type and size.

  • Slab: A slab is a contiguous block of memory that contains multiple instances of the same object type. It can be in one of three states: full (no free objects), partial (some free objects), or empty (all objects are free).

  • Object: An object is an instance of a kernel data structure, such as inode, dentry, or buffer_head.

When the kernel requires a new object, it checks if there is a free object in the corresponding cache. If not, it allocates a new slab and populates it with objects.
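
To see which caches dominate slab usage on a particular host, you can read /proc/slabinfo (root is usually required) or run slabtop. The sketch below assumes the slabinfo 2.1 column layout (name, active_objs, num_objs, objsize, ...) and simply ranks caches by approximate footprint; treat it as illustrative, since it ignores per-slab overhead.

```python
#!/usr/bin/env python3
"""Sketch: rank slab caches by approximate memory footprint via /proc/slabinfo.
Needs root on most systems; `slabtop -o` gives a similar one-shot view."""

def top_slab_caches(n=10, path="/proc/slabinfo"):
    caches = []
    with open(path) as f:
        for line in f:
            if line.startswith(("slabinfo", "#")):   # skip the two header lines
                continue
            fields = line.split()
            name, num_objs, objsize = fields[0], int(fields[2]), int(fields[3])
            caches.append((num_objs * objsize, name))  # bytes ~ objects * object size
    return sorted(caches, reverse=True)[:n]

if __name__ == "__main__":
    for nbytes, name in top_slab_caches():
        print(f"{name:30s} {nbytes / (1 << 20):8.1f} MiB")
```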

How to interpret Slab patterns

  • A steady or moderate increase in Slab memory usage is normal, as the kernel caches data structures for better performance.

  • A sudden spike or continuous growth in Slab memory usage might indicate a memory leak or excessive caching, which could impact system performance or cause out-of-memory issues.

Physical servers, VMs and containers

  • Hardware devices and drivers may require kernel objects, which can increase Slab memory usage.

  • In VMs, hardware emulation and additional drivers may lead to increased Slab memory usage compared to physical systems.

  • When running containers, keep in mind they use kernel objects for various purposes, such as network or storage management, which can increase Slab memory usage.

VmallocUsed

The vmalloc (virtual memory allocator) is a kernel mechanism that allows allocation of non-contiguous physical memory regions that are mapped into a contiguous virtual address space. The main purpose of vmalloc is to allocate large memory regions when there is not enough contiguous physical memory available.

Vmalloc uses a technique called “paging” to map non-contiguous physical memory to a contiguous virtual address space. It breaks the memory into fixed-size chunks called “pages” and uses page tables to keep track of the mapping between virtual and physical addresses. When the kernel requests memory via vmalloc, it searches for available physical pages, allocates them, and maps them to contiguous virtual addresses.
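
When VmallocUsed grows unexpectedly, /proc/vmallocinfo (root only) lists each vmalloc area together with its size and the caller that created it. A rough sketch, assuming the usual "<address-range> <size-in-bytes> <caller> ..." line layout, groups usage by caller:

```python
#!/usr/bin/env python3
"""Sketch: attribute vmalloc usage to kernel callers via /proc/vmallocinfo.
Root only; the line layout is assumed and includes non-vmalloc mappings
(e.g. ioremap), so use it as a rough pointer, not an exact accounting."""

from collections import Counter

def vmalloc_by_caller(path="/proc/vmallocinfo"):
    totals = Counter()
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 3:
                continue
            size, caller = int(fields[1]), fields[2]
            totals[caller] += size
    return totals

if __name__ == "__main__":
    for caller, nbytes in vmalloc_by_caller().most_common(10):
        print(f"{caller:50s} {nbytes / (1 << 20):8.1f} MiB")
```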

How to interpret VmallocUsed patterns

  • A low to moderate VmallocUsed value is normal for most systems, as the kernel typically uses vmalloc for specific purposes when contiguous memory is not available.

  • A high VmallocUsed value, especially if it grows continuously, could indicate an issue with memory fragmentation, a memory leak, or excessive use of non-contiguous memory allocations.

Physical servers, VMs and containers

  • Hardware with large address spaces, such as NUMA systems, may require more extensive use of vmalloc, impacting the VmallocUsed metric.

  • VMs may have different memory allocation characteristics, which could affect the usage of vmalloc. For example, the hypervisor may have limited contiguous memory available, causing the kernel to use vmalloc more frequently.

  • Container runtimes or the workloads running inside containers might allocate large memory regions, increasing VmallocUsed. Host systems with limited contiguous memory that run many containers may also see increased VmallocUsed.

KernelStack

A kernel stack is a memory region allocated for each task (or thread) executed by the kernel. When the kernel is executing a task, it uses the kernel stack to store temporary data, function call information, and local variables. Each task has its own kernel stack, which is usually of a fixed size (e.g., 4KB, 8KB, or 16KB).

Kernel stacks are essential for task management and context switching. When the kernel switches from one task to another, it saves the current task’s state (including register values and stack pointer) and loads the state of the next task. This allows the kernel to resume the execution of the next task from where it left off.
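
A quick sanity check is to relate the KernelStack counter to the number of threads on the system. The sketch below assumes a 16 KiB kernel stack per thread (the x86-64 default; other architectures differ) and counts threads from /proc/<pid>/status:

```python
#!/usr/bin/env python3
"""Sketch: compare KernelStack from /proc/meminfo with the thread count,
assuming a fixed per-thread kernel stack size (arch-dependent)."""

import os

ASSUMED_STACK_KB = 16  # assumption: typical x86-64 value; check your architecture

def kernel_stack_kb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("KernelStack:"):
                return int(line.split()[1])
    return 0

def thread_count():
    total = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Threads:"):
                        total += int(line.split()[1])
                        break
        except OSError:
            continue  # process exited while we were scanning
    return total

if __name__ == "__main__":
    kb, threads = kernel_stack_kb(), thread_count()
    print(f"KernelStack: {kb} kB ~ {kb // ASSUMED_STACK_KB} stacks; threads seen: {threads}")
```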

How to interpret KernelStack patterns

  • KernelStack memory usage depends on the number of tasks or threads the kernel is managing. In general, a moderate and stable KernelStack value is normal.

  • A sudden increase or continuous growth in KernelStack memory usage might suggest an issue with task management, such as too many threads being spawned or a memory leak in the kernel stacks.

Physical servers, VMs and containers

  • The number of kernel tasks and threads depends on the hardware and the workload. A system with more CPU cores or devices may require more kernel threads, increasing KernelStack memory usage.

  • In VMs, the hypervisor and additional virtual devices may generate more kernel tasks and threads, leading to increased KernelStack memory usage.

  • Container runtimes and the workloads running inside the containers might generate additional kernel tasks and threads, increasing KernelStack memory usage. Also, running multiple containers on a single host might increase the number of kernel tasks and threads, impacting KernelStack memory usage.

PageTables

Page tables are hierarchical data structures used by the Memory Management Unit (MMU) in a processor to translate virtual addresses into physical memory addresses. The MMU uses a technique called “paging” to break memory into fixed-size chunks called “pages”. Page tables keep track of the mapping between virtual and physical addresses for each page.

There are usually multiple levels of page tables, with each level containing a set of entries pointing to the next level. The final level contains the actual mapping between virtual and physical addresses. The number of levels depends on the architecture and the size of the virtual address space.

In x86-64 architecture, there are four levels of page tables: PGD (Page Global Directory), PUD (Page Upper Directory), PMD (Page Middle Directory), and PTE (Page Table Entry). Each entry in the page table contains information about the corresponding page, such as its physical address, access permissions, and status flags.
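
On Linux you can also attribute page-table memory to individual processes: each /proc/<pid>/status exposes a VmPTE field with the kilobytes of page tables that process owns. A small sketch that ranks the biggest contributors:

```python
#!/usr/bin/env python3
"""Sketch: find the processes contributing most to PageTables by reading
the VmPTE field of /proc/<pid>/status (page-table memory per process, in kB)."""

import os

def page_table_hogs(n=10):
    usage = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            name, vmpte = "?", 0
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Name:"):
                        name = line.split(None, 1)[1].strip()
                    elif line.startswith("VmPTE:"):
                        vmpte = int(line.split()[1])
            if vmpte:
                usage.append((vmpte, int(pid), name))
        except OSError:
            continue  # process exited mid-scan
    return sorted(usage, reverse=True)[:n]

if __name__ == "__main__":
    for kb, pid, name in page_table_hogs():
        print(f"{pid:>7d} {name:20s} {kb:>8d} kB of page tables")
```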

How to interpret PageTables patterns

  • PageTables memory usage is related to the number of mappings between virtual and physical memory addresses. A moderate and stable value is normal.

  • A sudden increase or continuous growth in PageTables memory usage could indicate an issue with memory mapping, such as a large number of small memory allocations or a memory leak in the page table entries.

Physical servers, VMs and containers

  • Hardware with larger address spaces or more devices may require more extensive memory mapping, affecting PageTables memory usage.

  • VMs running on hypervisors with hardware-assisted virtualization (e.g., Intel EPT or AMD NPT) may have different memory mapping behavior, impacting PageTables memory usage.

  • Running containers with isolated memory namespaces may increase the number of memory mappings, affecting PageTables memory usage. Also, container runtimes or workloads with a large number of small memory allocations might increase PageTables memory usage.

PerCPU

Per-CPU allocations are a mechanism used by the Linux kernel to allocate memory that is specific to a particular CPU core. This is useful for optimizing performance in multi-core systems, as it reduces the need for synchronization between cores and minimizes cache contention. Per-CPU allocations are primarily used for frequently accessed data structures, counters, and buffers.

The per-CPU allocator provides each CPU core with its own copy of a variable or data structure. This allows each core to access and modify its copy without needing to lock or synchronize with other cores. As a result, the performance impact of cache coherency and contention is reduced, leading to better scalability in multi-core systems.

When you create a per-CPU variable, the kernel allocates memory for each CPU core in the system, usually from a dedicated per-CPU memory pool. The size of the allocated memory depends on the size of the variable or data structure, as well as any padding required to ensure proper alignment for cache line boundaries. The PerCPU dimension in the mem.kernel chart represents the amount of memory allocated to the per-CPU allocator, excluding the cost of metadata.
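
Because per-CPU memory naturally scales with core count, it is often more useful to look at it per CPU than in absolute terms. A minimal sketch, assuming the Percpu field is present in /proc/meminfo (it is on reasonably recent kernels):

```python
#!/usr/bin/env python3
"""Sketch: express the Percpu counter from /proc/meminfo as overhead per
online CPU, so growth can be judged relative to core count."""

import os

def percpu_kb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Percpu:"):
                return int(line.split()[1])
    return 0

if __name__ == "__main__":
    kb = percpu_kb()
    cpus = os.cpu_count() or 1
    print(f"Percpu: {kb} kB total, ~{kb / cpus:.0f} kB per CPU across {cpus} CPUs")
```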

How to interpret PerCPU patterns

  • PerCPU memory usage depends on the number of CPU cores and the amount of per-CPU data structures allocated. A stable and proportional value relative to the number of cores is normal.

  • A sudden increase or continuous growth in PerCPU memory usage might suggest an issue with per-CPU data structures, such as a memory leak or excessive per-CPU allocations.

Physical servers, VMs and containers

  • Systems with more CPU cores will have higher PerCPU memory usage due to the per-CPU data structures allocated for each core.

  • In VMs, the number of virtual CPU cores and the underlying physical CPU cores may affect PerCPU memory usage. Additionally, the hypervisor’s handling of per-CPU data structures may influence this metric.

Conclusion

In conclusion, the mem.kernel chart in Netdata provides valuable insights into the memory usage of various kernel subsystems and mechanisms. By understanding the technical details of each dimension - Slab, VmallocUsed, KernelStack, PageTables, and PerCPU - you can effectively monitor your system’s kernel memory usage and identify potential issues or inefficiencies.

Interpreting these metrics requires considering the specific context of your system, including the hardware, the environment (such as running on a VM or in a Kubernetes cluster), and the expected behavior. In general, look for the following patterns:

  • Sudden spikes or drops in any of these dimensions, which could indicate an issue or an unexpected change in the system’s behavior.

  • Continuous growth in any of these dimensions, which might suggest a memory leak or excessive resource usage.

  • Disproportionately high values compared to the system’s hardware resources, workload, or historical trends, which could indicate inefficiencies that need to be investigated further.