It’s Monday morning. You open your laptop, ready to finish that urgent report.
You’ve got Chrome with 15 tabs, Spotify playing in the background, Zoom running, and a few Word documents open.
At first, everything works fine — until suddenly, your system slows to a crawl.
The cursor freezes, the fan starts roaring like a jet engine, and even a simple click takes forever to respond.
You open the Task Manager and see something strange — CPU usage is low, but your disk light is blinking nonstop.
Your laptop isn’t busy working — it’s busy swapping data between memory and disk.
What’s happening here?
You’ve just experienced a classic case of thrashing — a condition where your computer spends more time moving data around than actually running programs.
Thrashing occurs when the system spends more time transferring pages between main memory (RAM) and secondary storage (disk) than executing instructions.
This constant back-and-forth movement of pages leads to frequent page faults and causes a sharp drop in CPU efficiency.
The Cycle of Thrashing
- Too Many Processes (High Multiprogramming Level)
When multiple programs are loaded into memory simultaneously, each process gets fewer memory frames than it actually needs.
- Insufficient Frames
Due to limited memory allocation, pages that will be needed again soon are replaced too often — increasing the number of page faults.
- Poor Page Replacement Behavior
As the system tries to manage memory aggressively, it keeps swapping pages in and out, further slowing performance.
This loop of low CPU utilization → adding more processes → even more page faults keeps repeating — and that continuous cycle is known as thrashing.
Techniques to Handle Thrashing in Operating System
Now that we know what thrashing is and how it happens, let’s look at how it can be detected and controlled.
Operating systems use intelligent techniques to maintain performance and prevent excessive paging.
The two most common methods are the Working Set Model and Page Fault Frequency (PFF).
1. Working Set Model
The Working Set Model is built around the Locality of Reference concept — it assumes that at any given moment, a process actively uses only a specific set of pages called its working set.
In simple words, this model tracks which pages a process is currently using so that enough frames can be assigned to hold that data in memory.
Here’s how it works:
- If enough frames are provided to fit the process’s working set → very few page faults occur.
- If the allocated frames are less than the working set size → the process starts experiencing frequent page faults, leading to thrashing.
The working-set size of a process (denoted WSSᵢ) is the number of distinct pages it referenced in the last Δ memory accesses, where Δ represents the window size or observation interval.
The total memory demand across all processes is calculated as:
D = Σ WSSᵢ
Now, based on the value of D:
- If D > m, where m is the number of available physical frames → the system experiences thrashing.
- If D ≤ m, the memory demand is within limits → no thrashing occurs.
The accuracy of this method depends heavily on the choice of Δ (window size):
- A Δ that is too large may span several localities at once, inflating the apparent memory demand unnecessarily.
- A Δ that is too small might not capture the entire current locality, leading to too few frames being allocated.
In short: The Working Set Model helps the OS allocate memory dynamically and avoid thrashing by ensuring that each process gets just enough frames for its active locality.
2. Page Fault Frequency (PFF)
The Page Fault Frequency (PFF) technique provides a direct way to control thrashing by constantly monitoring the rate of page faults for each process.
Instead of tracking localities, this method focuses on how frequently a process is causing page faults — and adjusts the memory allocation accordingly.
How It Works:
- The system defines an upper and lower threshold for acceptable page fault rates.
- If a process’s fault rate exceeds the upper limit, it means the process needs more frames — so the OS allocates additional memory to it.
- If the fault rate drops below the lower limit, it means the process has more memory than necessary — so the OS can reclaim some frames and assign them elsewhere.
- If there are no free frames available, the OS may suspend one or more processes temporarily and redistribute frames to active ones.
This approach keeps the system balanced — maintaining efficient CPU usage while avoiding excessive swapping. These intelligent strategies help the operating system optimize performance, minimize page faults, and prevent thrashing from crippling system speed.
The Locality Model in Thrashing
To truly understand thrashing in an operating system, it’s essential to first grasp the concept of locality of reference — one of the core principles behind efficient memory management.
In simple terms, locality refers to a set of memory pages that a program frequently accesses during a short period of execution. Think of it like your daily workflow — you keep only the tools you use often on your desk, while the rest stay in the drawer.
For example, when a function runs, it repeatedly uses certain instructions, local variables, and data structures. These together form that function’s locality — the pages it actively needs at that time. When too many processes compete for memory, their localities can no longer coexist in RAM, and the system slows down dramatically.
Now, here’s how this affects system performance:
- When enough memory frames are available to store a process’s current locality, the program runs smoothly with very few page faults.
- When allocated frames are fewer than the size of the locality, the system keeps removing pages that are needed again soon — resulting in frequent page faults.
Over time, as multiple processes compete for limited memory space, their active localities start overlapping and can’t all fit in RAM.
That’s when thrashing begins — the operating system constantly swaps pages in and out, drastically reducing performance.
In short:
Thrashing = too much swapping + too little processing.
Why Does Thrashing Occur?
Let’s imagine your system’s memory (RAM) as a small workspace.
If you put too many files (processes) on the desk, you’ll constantly shuffle papers just to find what you need — instead of doing the actual work.
That’s exactly what happens inside your computer!
Here are the main causes of thrashing:
- Insufficient Physical Memory (RAM)
When the number of active processes exceeds the available memory, the OS starts using disk space (swap area). But disks are much slower than RAM — leading to thrashing.
- High Degree of Multiprogramming
When too many processes are loaded in memory at the same time, each one demands pages that can’t all fit in RAM.
- Poor Page Replacement Policy
If the operating system keeps replacing pages that will soon be needed again, it increases page faults — and that means more swapping.
- Large Working Set Size
Each process has a working set (the set of pages it’s actively using). If all working sets together exceed the total memory, thrashing begins.
How Does Thrashing Affect System Performance?
Thrashing doesn’t just make your system slow — it practically stops useful work from happening.
Here’s what goes on behind the scenes:
- Page faults increase drastically.
- The disk I/O (input/output) shoots up because pages are constantly read and written.
- The CPU utilization drops — even though the system seems “busy.”
- The response time becomes so high that users feel the system is frozen.
In short:
More paging = more waiting = less performance.
How Operating System Detects Thrashing
Most modern operating systems monitor CPU utilization and page fault rate to detect thrashing.
- If CPU usage is low but page faults are very high, it’s a clear sign of thrashing.
- The system may automatically reduce the degree of multiprogramming (suspend some processes) to control it.
This mechanism is part of load control and memory management strategies inside OS kernels.
How to Prevent or Control Thrashing
Now that we know what causes thrashing, let’s talk about how to prevent it.
- Reduce the Degree of Multiprogramming
Run fewer processes at the same time. The OS can temporarily suspend some processes to free up memory.
- Use Better Page Replacement Algorithms
Algorithms like LRU (Least Recently Used) or the Working Set Model help keep the pages that are actually needed in memory.
- Increase Physical Memory (RAM)
More RAM means fewer swaps between disk and memory.
- Adjust the Working Set Size
The OS can dynamically allocate frames to processes based on their needs, using a local replacement policy.
- Use Efficient Virtual Memory Techniques
Systems like Linux and Windows use smart paging techniques to reduce thrashing by balancing memory demand and availability.
Real-World Example of Thrashing
Imagine you’re using your laptop with just 4 GB RAM and open Chrome with 10 tabs, plus Photoshop, a video editor, and a few background apps.
The system runs out of RAM and starts using the hard drive as swap space (virtual memory).
Now, as you switch between apps, the OS keeps swapping pages in and out — and your system freezes for seconds at a time.
That’s thrashing in real life!
Key Differences: Paging vs Thrashing
| Aspect | Paging | Thrashing |
|---|---|---|
| Purpose | Normal memory management process | Undesirable condition |
| CPU Utilization | High | Very Low |
| Disk I/O | Moderate | Extremely High |
| Performance | Stable | Degraded |
| Cause | Controlled swapping | Excessive swapping |
C Code Example: Simulating Thrashing in Operating System
Understanding what thrashing is can be tricky without a practical example.
Here’s a simple C program that simulates thrashing using paging and FIFO (First-In-First-Out) page replacement.
This example will help you visualize how excessive paging slows down system performance.
C Code — Thrashing Simulation
```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TOTAL_FRAMES 4      // Number of frames in memory
#define TOTAL_PAGES 10      // Number of pages in the process
#define PAGE_REFERENCES 20  // Total page requests

// Check whether a page is already resident in one of the memory frames
int isInFrames(int frames[], int page) {
    for (int i = 0; i < TOTAL_FRAMES; i++) {
        if (frames[i] == page) return 1;
    }
    return 0;
}

int main() {
    int frames[TOTAL_FRAMES];
    int pageReferences[PAGE_REFERENCES];
    int pageFaults = 0;

    srand((unsigned)time(NULL));

    // Initialize memory frames as empty
    for (int i = 0; i < TOTAL_FRAMES; i++) {
        frames[i] = -1;
    }

    // Generate a random page reference sequence
    printf("Page Reference Sequence: ");
    for (int i = 0; i < PAGE_REFERENCES; i++) {
        pageReferences[i] = rand() % TOTAL_PAGES;
        printf("%d ", pageReferences[i]);
    }
    printf("\n\n");

    printf("Simulating thrashing using FIFO page replacement...\n\n");

    int nextFrame = 0;  // Index of the frame to replace next (FIFO order)
    for (int i = 0; i < PAGE_REFERENCES; i++) {
        int page = pageReferences[i];
        if (!isInFrames(frames, page)) {
            // Page fault: load the page into the oldest frame
            frames[nextFrame] = page;
            nextFrame = (nextFrame + 1) % TOTAL_FRAMES;
            pageFaults++;
        }

        // Display the current frame contents
        printf("Page: %d | Frames: ", page);
        for (int j = 0; j < TOTAL_FRAMES; j++) {
            if (frames[j] != -1)
                printf("%d ", frames[j]);
            else
                printf("- ");
        }
        printf("\n");
    }

    printf("\nTotal Page Faults: %d\n", pageFaults);

    // Crude thrashing heuristic: more than half the references faulted
    if (pageFaults > (PAGE_REFERENCES / 2)) {
        printf("Thrashing Detected! The system is spending more time swapping pages than executing processes.\n");
    } else {
        printf("No Thrashing Detected. System memory usage is efficient.\n");
    }
    return 0;
}
```
How This Code Demonstrates Thrashing in Operating System
This simple simulation helps explain what thrashing is in a very practical way:
- It generates a random sequence of page requests to simulate a process working with memory pages.
- It uses a FIFO page replacement strategy to replace pages in frames.
- It counts page faults — a high rate of faults is a sign of thrashing.
- It prints results showing how thrashing affects performance.
This program shows how low memory availability and frequent page replacement cause the system to spend most of its time swapping pages instead of executing processes — which is the essence of thrashing.
What Kind of Questions You Can Face in an Interview About Thrashing in Operating System
When you’re preparing for interviews — especially for embedded systems, operating system development, or memory management roles — a clear understanding of thrashing is important.
Interviewers often test both your conceptual knowledge and your ability to apply it in real scenarios.
Here’s a curated list of real interview questions you might face related to thrashing in operating system:
Basic Concept Questions
- What is thrashing in an operating system?
- What causes thrashing?
- How does thrashing affect CPU utilization and system performance?
- Can thrashing happen in embedded systems? Why or why not?
Advanced Questions
- Explain the locality of reference and its relation to thrashing.
- How does the working set model prevent thrashing?
- What is Page Fault Frequency (PFF) and how does it help reduce thrashing?
- How does thrashing differ from normal paging?
- How do page replacement algorithms affect thrashing?
Practical & Scenario-Based Questions
- If your system shows low CPU utilization but high disk I/O, how would you investigate thrashing?
- Given a memory constraint in a system, how would you design your application to avoid thrashing?
- How would you simulate thrashing in C for a demonstration?
- Can you explain a real-time scenario where thrashing impacted performance? How did you solve it?
Pro Tip for Interviews
When answering these questions, it’s best to start with a clear definition, then explain using examples (real-time or code-based) and finish with prevention strategies such as the Working Set Model or Page Fault Frequency technique.
Interviewers love concise answers that show both theory and practical knowledge.
Quick Recap
Here’s a short summary before we wrap up:
- Thrashing happens when excessive paging occurs.
- It makes CPU utilization drop and slows down the entire system.
- Main causes: less memory, too many processes, poor page replacement.
- Solutions: limit processes, improve paging algorithms, or add more RAM.
Final Thoughts
So, next time your computer suddenly turns slow even though the CPU isn’t overloaded, remember — it might not be your processor’s fault. It could be thrashing behind the scenes.
Understanding what thrashing is helps you tune performance and design better memory management systems — especially if you’re working in operating systems, embedded systems, or performance optimization.
Frequently Asked Questions (FAQ) on Thrashing in Operating System
1. What is Thrashing in Operating System?
Thrashing occurs when the system spends more time swapping pages between RAM and disk (virtual memory) than executing actual processes.
This excessive paging reduces system performance, increases page faults, and makes the computer extremely slow.
2. What causes thrashing in an operating system?
Thrashing is usually caused by:
- Having too many processes running at once (high degree of multiprogramming).
- Insufficient physical memory (RAM) to handle all active processes.
- A poor page replacement policy that keeps replacing pages that are needed again soon.
- Large working set sizes that don’t fit in available memory.
3. How does thrashing affect system performance?
When thrashing occurs, the operating system spends most of its time moving data between main memory and disk.
As a result:
- CPU utilization drops drastically.
- Response time increases.
- System throughput becomes very low.
In short, the computer feels like it’s “busy doing nothing.”
4. What is the difference between paging and thrashing?
| Aspect | Paging | Thrashing |
|---|---|---|
| Purpose | Normal memory management process | Undesirable condition caused by excessive paging |
| CPU Utilization | High | Very Low |
| Disk I/O | Moderate | Extremely High |
| System Performance | Stable | Severely degraded |
In short: Paging is normal; thrashing is a performance disaster due to too much paging.
5. How can we prevent or reduce thrashing?
To prevent thrashing, you can:
- Reduce the number of active processes (lower multiprogramming level).
- Use better page replacement algorithms like LRU (Least Recently Used).
- Increase physical memory (RAM) if possible.
- Apply Working Set Model or Page Fault Frequency (PFF) techniques to manage memory intelligently.
6. What is the Working Set Model in thrashing?
The Working Set Model is based on the idea of locality of reference.
It defines the set of pages a process actively uses (its working set).
If the system allocates enough memory frames to hold this working set, thrashing won’t occur.
But if the working set exceeds available memory, page faults rise and thrashing begins.
7. How does Page Fault Frequency (PFF) help control thrashing?
Page Fault Frequency (PFF) monitors how often a process generates page faults.
If the page fault rate becomes too high, the OS allocates more frames to that process.
If it’s too low, the OS can reclaim some memory.
This dynamic adjustment helps maintain balance and avoid thrashing.
8. Is thrashing permanent or temporary?
Thrashing is a temporary condition — it occurs when system memory demand suddenly exceeds available physical memory.
Once processes are reduced or additional memory is made available, the system can recover and return to normal performance.
9. How does the operating system detect thrashing?
Operating systems detect thrashing by monitoring:
- Page fault rate — a sharp increase indicates excessive paging.
- CPU utilization — if it drops while page faults rise, it’s a sign of thrashing.
When detected, the OS can automatically suspend some processes or adjust memory allocation to restore performance.
10. Why is understanding thrashing important for developers?
Understanding thrashing helps developers write memory-efficient applications and design better-performing systems.
By managing memory wisely, developers can prevent slowdowns and ensure smooth multitasking — especially in embedded systems or real-time environments.
Mr. Raj Kumar is a highly experienced Technical Content Engineer with 7 years of dedicated expertise in the intricate field of embedded systems. At Embedded Prep, Raj is at the forefront of creating and curating high-quality technical content designed to educate and empower aspiring and seasoned professionals in the embedded domain.
Throughout his career, Raj has honed a unique skill set that bridges the gap between deep technical understanding and effective communication. His work encompasses a wide range of educational materials, including in-depth tutorials, practical guides, course modules, and insightful articles focused on embedded hardware and software solutions. He possesses a strong grasp of embedded architectures, microcontrollers, real-time operating systems (RTOS), firmware development, and various communication protocols relevant to the embedded industry.
Raj is adept at collaborating closely with subject matter experts, engineers, and instructional designers to ensure the accuracy, completeness, and pedagogical effectiveness of the content. His meticulous attention to detail and commitment to clarity are instrumental in transforming complex embedded concepts into easily digestible and engaging learning experiences. At Embedded Prep, he plays a crucial role in building a robust knowledge base that helps learners master the complexities of embedded technologies.