Unit II: CPU Scheduling
- ⭐1. Introduction to CPU Scheduling
- ⭐2. Types of Scheduling
- ⭐3. Scheduling Algorithms
- ⭐4. Scheduling Criteria
- ⭐5. CPU Scheduler: Preemptive vs. Non-Preemptive Scheduling
- ⭐6. Dispatcher
- ⭐7. First Come, First Serve (FCFS) Scheduling
- ⭐8. Shortest Job First (SJF) Scheduling
- ⭐9. Non-Preemptive Priority Scheduling
- 9.1 Non-Preemptive Priority Scheduling
- 9.2 Example of Non-Preemptive Priority Scheduling
- 9.3 Step-by-Step Execution
- 9.4 Gantt Chart for Non-Preemptive Priority Scheduling
- 9.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- 9.6 Final Calculations
- 9.7 Observations in Non-Preemptive Priority Scheduling
- 9.8 Real-Life Example of Non-Preemptive Priority Scheduling:
- ⭐10. Preemptive Priority Scheduling
- 10.1 Introduction to Preemptive Priority Scheduling
- 10.2 Example for Preemptive Priority Scheduling
- 10.3 Step-by-Step Execution
- 10.4 Gantt Chart for Preemptive Priority Scheduling
- 10.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- 10.6 Final Calculations
- 10.7 Observations in Preemptive Priority Scheduling
- 10.8 Real-Life Example of Preemptive Priority Scheduling:
- ⭐11. Round Robin (RR) Scheduling
- 11.1 Introduction to Round Robin Scheduling
- 11.2 How Round Robin Works (Step-by-Step Execution)
- 11.3 Gantt Chart for Round Robin Scheduling
- 11.4 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- 11.5 Final Calculations
- 11.6 Advantages of Round Robin Scheduling
- 11.7 Disadvantages of Round Robin Scheduling
- 11.8 When to Use Round Robin?
- ⭐12. Multilevel Feedback Queue (MLFQ) Scheduling
- 12.1 Introduction to Multilevel Feedback Queue Scheduling
- 12.2 How MLFQ Works
- 12.3 Example of MLFQ Scheduling
- 12.4 Step-by-Step Execution
- 12.5 Gantt Chart for MLFQ Scheduling
- 12.6 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- 12.7 Final Calculations
- 12.8 Advantages of Multilevel Feedback Queue Scheduling
- 12.9 Disadvantages of Multilevel Feedback Queue Scheduling
- 12.10 When to Use MLFQ?
- ⭐13. Multiprocessor Scheduling
- ⭐14. Real-Time Scheduling
- ⭐15. Thread Scheduling
⭐1. Introduction to CPU Scheduling
- CPU scheduling is the process of selecting which process will run next on the CPU.
- It is a key part of multiprogramming, ensuring efficient CPU utilization.
- The CPU Scheduler decides the order of execution of processes.
⭐2. Types of Scheduling
2.1 Long-Term Scheduling (Job Scheduling)
- Determines which processes enter the system for execution.
- Manages the degree of multiprogramming (number of processes in memory).
- Runs infrequently (seconds/minutes).
2.2 Short-Term Scheduling (CPU Scheduling)
- Decides which process gets the CPU next from the ready queue.
- Runs frequently (milliseconds).
- Impacts system performance directly.
2.3 Medium-Term Scheduling
- Temporarily removes processes from memory (swapping).
- Reduces system load and improves performance.
- Used in time-sharing systems.
⭐3. Scheduling Algorithms
3.1 First-Come, First-Served (FCFS)
- The first process that arrives is executed first (like a queue).
- Non-preemptive (once running, it cannot be stopped).
- Disadvantage: "Convoy effect" – a small process may wait a long time if a large process comes first.
3.2 Shortest Job Next (SJN) / Shortest Job First (SJF)
- Process with smallest execution time runs first.
- Preemptive (Shortest Remaining Time First – SRTF) or Non-preemptive.
- Best for batch processing.
3.3 Round Robin (RR)
- Each process gets a fixed time slice (time quantum).
- If the process does not finish, it goes back to the queue.
- Preemptive (ensures fair CPU allocation).
3.4 Priority Scheduling
- Each process is assigned a priority; higher priority runs first.
- Can be Preemptive or Non-Preemptive.
- Disadvantage: Starvation (low-priority processes may never execute).
3.5 Multilevel Queue Scheduling
- Divides processes into different queues (foreground, background).
- Each queue has a different scheduling algorithm.
- Foreground queue: Uses Round Robin (interactive tasks).
- Background queue: Uses FCFS (batch tasks like printing).
3.6 Multilevel Feedback Queue Scheduling
- Similar to Multilevel Queue Scheduling, but processes can move between queues based on behavior.
- Ensures fairness.
⭐4. Scheduling Criteria
- CPU Utilization:
- Percentage of time CPU is busy.
- Goal: Maximize CPU usage (keep CPU working as much as possible).
- Throughput:
- Number of processes completed per unit time.
- Goal: Increase throughput.
- Turnaround Time:
- Time from process arrival to completion.
- Formula: Turnaround Time (TAT) = Completion Time - Arrival Time
- Goal: Minimize turnaround time.
- Waiting Time:
- Time a process spends in the ready queue before execution.
- Formula: Waiting Time (WT) = Turnaround Time - Burst Time
- Goal: Minimize waiting time.
- Response Time:
- Time from arrival to first execution.
- Important in interactive systems.
- Goal: Minimize response time.
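To make these formulas concrete, here is a minimal Python sketch that computes turnaround time, waiting time, and their averages from a small made-up set of completion, arrival, and burst times (the process names and values are illustrative only):

```python
# Minimal sketch: turnaround time (TAT) and waiting time (WT) from
# completion, arrival, and burst times (illustrative values only).

processes = {
    # name: (arrival_time, burst_time, completion_time)
    "P1": (0, 5, 5),
    "P2": (1, 3, 8),
}

tats, wts = [], []
for name, (arrival, burst, completion) in processes.items():
    tat = completion - arrival   # Turnaround Time = Completion Time - Arrival Time
    wt = tat - burst             # Waiting Time = Turnaround Time - Burst Time
    tats.append(tat)
    wts.append(wt)
    print(f"{name}: TAT = {tat} ms, WT = {wt} ms")

print(f"Average TAT = {sum(tats) / len(tats)} ms")
print(f"Average WT  = {sum(wts) / len(wts)} ms")
```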
⭐5. CPU Scheduler: Preemptive vs. Non-Preemptive Scheduling
5.1 Preemptive Scheduling
- A running process can be interrupted and moved back to the ready queue.
- Used in time-sharing and real-time systems.
- Example Algorithms: Round Robin, SJF (Preemptive), Priority (Preemptive).
5.2 Non-Preemptive Scheduling
- Once a process starts execution, it cannot be interrupted until completion.
- Used in batch and non-time-critical systems.
- Example Algorithms: FCFS, SJF (Non-Preemptive), Priority (Non-Preemptive).
⭐6. Dispatcher
6.1 Functions of Dispatcher:
- Context Switching: Saves the state of the old process and loads the new one.
- Switching to User Mode: Ensures the process runs with user permissions.
- Jumping to the Right Location: Starts execution of the selected process.
6.2 Dispatch Latency
- The time taken by the dispatcher to stop one process and start another.
- Goal: Minimize dispatch latency to improve CPU efficiency.
⭐7. First Come, First Serve (FCFS) Scheduling
7.1 Introduction to FCFS Scheduling
- First Come, First Serve (FCFS) is the simplest CPU scheduling algorithm.
- The process that arrives first is executed first, like a queue (FIFO - First In, First Out).
- It is a non-preemptive scheduling algorithm (once a process starts execution, it cannot be interrupted until completion).
- Imagine a ticket counter where people stand in a queue.
- The first person in line gets the ticket first, then the second, and so on.
- Similarly, in FCFS, the first process in the queue gets CPU time first.
7.2 Characteristics of FCFS Scheduling
- Simple to implement (easy to use with a queue structure).
- Non-preemptive (processes run until completion).
- Can cause long waiting times for shorter processes if a longer process arrives first.
- Not ideal for interactive systems because it doesn’t allow quick process switching.
7.3 How FCFS Works (Step-by-Step)
- Processes arrive in a queue.
- The CPU picks the first process and starts execution.
- Once the process finishes, the next process in the queue starts.
- This continues until all processes are completed.
7.4 Example of FCFS Scheduling
Given Processes:
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 ms | 5 ms |
P2 | 1 ms | 3 ms |
P3 | 2 ms | 8 ms |
P4 | 3 ms | 6 ms |
Gantt Chart (Execution Order)
| P1 (0-5) | P2 (5-8) | P3 (8-16) | P4 (16-22) |
Calculating Waiting Time (WT) & Turnaround Time (TAT)
- Waiting Time (WT) = Turnaround Time - Burst Time
- Turnaround Time (TAT) = Completion Time - Arrival Time
Process | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
---|---|---|---|
P1 | 5 ms | 5 - 0 = 5 ms | 5 - 5 = 0 ms |
P2 | 8 ms | 8 - 1 = 7 ms | 7 - 3 = 4 ms |
P3 | 16 ms | 16 - 2 = 14 ms | 14 - 8 = 6 ms |
P4 | 22 ms | 22 - 3 = 19 ms | 19 - 6 = 13 ms |
Average Waiting Time:
- (0 + 4 + 6 + 13) / 4 = 23 / 4 = 5.75 ms
Average Turnaround Time:
- (5 + 7 + 14 + 19) / 4 = 45 / 4 = 11.25 ms
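The table above can also be reproduced programmatically. Below is a minimal Python sketch of FCFS for the processes in section 7.4; the structure and variable names are just for illustration, not production scheduler code:

```python
# Minimal FCFS sketch: run processes in arrival order, each to completion.

processes = [
    # (name, arrival_time, burst_time) -- the example from section 7.4
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
    ("P4", 3, 6),
]

time = 0
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    time = max(time, arrival)   # CPU may sit idle until the process arrives
    time += burst               # non-preemptive: run to completion
    tat = time - arrival        # Turnaround Time = Completion Time - Arrival Time
    wt = tat - burst            # Waiting Time = Turnaround Time - Burst Time
    print(f"{name}: completion = {time} ms, TAT = {tat} ms, WT = {wt} ms")
```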
7.5 Advantages of FCFS Scheduling
✅ Fair Scheduling: Every process gets a turn based on arrival.
✅ Good for Long Jobs: No process gets interrupted once started.
7.6 Disadvantages of FCFS Scheduling
❌ High Waiting Time: If a process with a long burst time arrives first, all other processes have to wait.
❌ Not Suitable for Interactive Systems: Doesn’t allow fast switching between tasks.
Example of Convoy Effect:
- Suppose a huge truck is moving on a one-lane road, and many small cars are behind it.
- The small cars cannot pass until the truck finishes its journey (like shorter processes waiting behind longer ones in FCFS).
7.7 When to Use FCFS?
- Batch processing where all jobs are of similar size.
- Simple environments where process order doesn’t matter.
- Non-interactive systems where preemption is not required.
⭐8. Shortest Job First (SJF) Scheduling
8.1 Introduction to SJF Scheduling
- Shortest Job First (SJF) is a CPU scheduling algorithm that selects the process with the smallest burst time first.
- It is also known as the Shortest Next CPU Burst algorithm.
- It is an optimal algorithm because it gives the minimum average waiting time.
- Imagine a hospital emergency room.
- Patients with minor injuries (short treatment time) are treated first, while serious cases (longer treatment) wait.
- This ensures faster treatment for the majority of patients.
8.2 Types of SJF Scheduling
- Non-Preemptive SJF (Standard SJF)
- The CPU is assigned to the shortest job first, and it cannot be interrupted until it completes.
- Other processes must wait even if a shorter process arrives later.
- Example:
- If five customers order food at a restaurant, the chef prepares the quickest orders first, even if a large order came in earlier.
- Preemptive SJF (Shortest Remaining Time First - SRTF)
- If a new process with a shorter burst time arrives, the current process is interrupted and replaced.
- The CPU always executes the shortest remaining job first.
- Example:
- A pizza shop starts preparing a large pizza, but a customer orders a small burger.
- The shop pauses the pizza and makes the burger first because it takes less time.
8.3 Example for Non-Preemptive SJF
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 ms | 8 ms |
P2 | 1 ms | 4 ms |
P3 | 2 ms | 9 ms |
P4 | 3 ms | 5 ms |
Execution order: at time 0 only P1 has arrived, so it runs to completion (non-preemptive); the remaining jobs then run shortest-first: P1 (0-8) → P2 (8-12) → P4 (12-17) → P3 (17-26).
Process | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
---|---|---|---|
P1 | 8 ms | 8 - 0 = 8 ms | 8 - 8 = 0 ms |
P2 | 12 ms | 12 - 1 = 11 ms | 11 - 4 = 7 ms |
P3 | 26 ms | 26 - 2 = 24 ms | 24 - 9 = 15 ms |
P4 | 17 ms | 17 - 3 = 14 ms | 14 - 5 = 9 ms |
8.4 Example for Preemptive SJF (SRTF)
Given Processes:
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 ms | 8 ms |
P2 | 1 ms | 4 ms |
P3 | 2 ms | 9 ms |
P4 | 3 ms | 5 ms |
Step-by-Step Execution:
- At time 0, P1 is the only process, so it starts execution.
- At time 1, P2 arrives (burst time 4). Since 4 < P1’s remaining time (7), P1 is preempted, and P2 starts execution.
- At time 2, P3 arrives (burst time 9). Since P2 is shorter, P3 waits.
- At time 3, P4 arrives (burst time 5). P2 is still the shortest, so it continues.
- At time 5, P2 completes, and the next shortest process (P4) starts execution.
- At time 10, P4 completes, and the next shortest process (P1) resumes.
- At time 17, P1 completes (it had 7 ms remaining), and P3 (the longest) starts execution.
- At time 26, P3 completes, and all processes are finished.
Gantt Chart for Preemptive SJF
| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |
Calculating Waiting Time (WT) & Turnaround Time (TAT)
- Waiting Time (WT) = Turnaround Time - Burst Time
- Turnaround Time (TAT) = Completion Time - Arrival Time
Process | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
---|---|---|---|
P1 | 17 ms | 17 - 0 = 17 ms | 17 - 8 = 9 ms |
P2 | 5 ms | 5 - 1 = 4 ms | 4 - 4 = 0 ms |
P3 | 26 ms | 26 - 2 = 24 ms | 24 - 9 = 15 ms |
P4 | 10 ms | 10 - 3 = 7 ms | 7 - 5 = 2 ms |
Final Calculations
Average Waiting Time:
- (9 + 0 + 15 + 2) / 4 = 26 / 4 = 6.5 ms
Average Turnaround Time:
- (17 + 4 + 24 + 7) / 4 = 52 / 4 = 13 ms
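The SRTF schedule above can be reproduced with a simple 1 ms tick simulation. This is a minimal sketch (it assumes burst times are known in advance, exactly as the algorithm requires), not an OS implementation:

```python
# Minimal SRTF (preemptive SJF) sketch: every millisecond, run the arrived
# process with the least remaining burst time.

processes = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # name: (arrival, burst)
remaining = {name: burst for name, (_, burst) in processes.items()}
completion = {}
time = 0

while remaining:
    ready = [n for n in remaining if processes[n][0] <= time]
    if not ready:                                      # nothing has arrived yet
        time += 1
        continue
    current = min(ready, key=lambda n: remaining[n])   # shortest remaining time wins
    remaining[current] -= 1
    time += 1
    if remaining[current] == 0:
        completion[current] = time
        del remaining[current]

for name, (arrival, burst) in processes.items():
    tat = completion[name] - arrival
    print(f"{name}: completion = {completion[name]} ms, TAT = {tat} ms, WT = {tat - burst} ms")
```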
8.5 Advantages of SJF Scheduling
✅ Optimized CPU Utilization: Keeps the CPU busy with the shortest processes.
✅ Best for batch processing: Works well in systems where job completion time is known.
8.6 Disadvantages of SJF Scheduling
❌ Difficult to Implement: The OS must know the burst time of each process in advance, which is not always possible.
❌ Not Ideal for Interactive Systems: In a real-time system, new processes must execute quickly, but SJF may delay them.
8.7 When to Use SJF?
🔹 Scientific computing where quick execution is needed.
🔹 Non-interactive systems where user experience is not a priority.
⭐9. Non-Preemptive Priority Scheduling
9.1 Non-Preemptive Priority Scheduling
- In Priority Scheduling, each process is assigned a priority number.
- Lower priority number = Higher priority (e.g., Priority 1 is higher than Priority 3).
- In Non-Preemptive Priority Scheduling:
- The CPU selects the highest-priority process first.
- Once a process starts execution, it cannot be interrupted until it completes.
- If two processes have the same priority, FCFS (First Come, First Serve) is used.
9.2 Example of Non-Preemptive Priority Scheduling
Given Processes:
Process | Arrival Time | Burst Time | Priority |
---|---|---|---|
P1 | 0 ms | 5 ms | 3 |
P2 | 1 ms | 3 ms | 1 |
P3 | 2 ms | 8 ms | 4 |
P4 | 3 ms | 6 ms | 2 |
- Lower priority number = Higher priority.
- The OS selects the highest-priority process that has arrived.
9.3 Step-by-Step Execution
- At time 0, P1 arrives and starts execution (since it's the only process available).
- At time 1, P2 arrives with priority 1 (highest priority so far).
- At time 2, P3 arrives, but its priority (4) is lower than P2.
- At time 3, P4 arrives, and its priority (2) is higher than P1 and P3 but lower than P2.
- Execution order based on priority:
- P2 (priority 1) starts first.
- P4 (priority 2) executes after P2.
- P1 (priority 3) executes after P4.
- P3 (priority 4) executes last.
9.4 Gantt Chart for Non-Preemptive Priority Scheduling
| P1 (0-5) | P2 (5-8) | P4 (8-14) | P3 (14-22) |
9.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- Waiting Time (WT) = Turnaround Time - Burst Time
- Turnaround Time (TAT) = Completion Time - Arrival Time
Process | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
---|---|---|---|
P1 | 5 ms | 5 - 0 = 5 ms | 5 - 5 = 0 ms |
P2 | 8 ms | 8 - 1 = 7 ms | 7 - 3 = 4 ms |
P3 | 22 ms | 22 - 2 = 20 ms | 20 - 8 = 12 ms |
P4 | 14 ms | 14 - 3 = 11 ms | 11 - 6 = 5 ms |
9.6 Final Calculations
Average Waiting Time:
- (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25 ms
Average Turnaround Time:
- (5 + 7 + 20 + 11) / 4 = 43 / 4 = 10.75 ms
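The schedule above can be reproduced with a minimal Python sketch of non-preemptive priority scheduling (lower number = higher priority, ties broken by arrival time); the structure and names are illustrative only:

```python
# Minimal non-preemptive priority sketch: whenever the CPU is free, pick the
# arrived process with the lowest priority number and run it to completion.

processes = [
    # (name, arrival_time, burst_time, priority) -- example from section 9.2
    ("P1", 0, 5, 3),
    ("P2", 1, 3, 1),
    ("P3", 2, 8, 4),
    ("P4", 3, 6, 2),
]

time = 0
pending = list(processes)
while pending:
    ready = [p for p in pending if p[1] <= time]
    if not ready:
        time = min(p[1] for p in pending)         # CPU idle: jump to the next arrival
        continue
    job = min(ready, key=lambda p: (p[3], p[1]))  # lowest priority number, then FCFS
    name, arrival, burst, _ = job
    time += burst                                 # non-preemptive: run to completion
    tat = time - arrival
    print(f"{name}: completion = {time} ms, TAT = {tat} ms, WT = {tat - burst} ms")
    pending.remove(job)
```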
9.7 Observations in Non-Preemptive Priority Scheduling
✔ Non-preemptive nature means once a process starts, it cannot be interrupted.
✔ Good for batch processing systems, where important jobs must finish first.
❌ Disadvantages:
- Starvation: Low-priority processes may never get CPU time if high-priority processes keep arriving.
- Not suitable for real-time systems where tasks must be switched dynamically.
9.8 Real-Life Example of Non-Preemptive Priority Scheduling:
- Hospital Emergency Room:
- A patient with a heart attack (priority 1) is treated first.
- A patient with a broken arm (priority 2) is treated after.
- A patient with a mild fever (priority 4) waits longer.
⭐10. Preemptive Priority Scheduling
10.1 Introduction to Preemptive Priority Scheduling
- Preemptive Priority Scheduling selects the process with the highest priority (lowest priority number) for execution.
- If a new process with a higher priority arrives, the currently running process is preempted (paused) and replaced by the new one.
- It ensures important tasks execute quickly, but low-priority processes may suffer from starvation.
10.2 Example for Preemptive Priority Scheduling
Given Processes:
Process | Arrival Time | Burst Time | Priority |
---|---|---|---|
P1 | 0 ms | 10 ms | 3 |
P2 | 2 ms | 1 ms | 1 |
P3 | 3 ms | 2 ms | 4 |
P4 | 5 ms | 3 ms | 2 |
- Lower priority number = Higher priority.
- The OS preempts a lower-priority process if a higher-priority process arrives.
10.3 Step-by-Step Execution
- At time 0, P1 arrives and starts execution (it's the only process available).
- At time 2, P2 arrives with higher priority (1), so P1 is preempted, and P2 executes.
- At time 3, P2 completes (its burst time is only 1 ms). P3 arrives, but since its priority (4) is lower than P1's (3), P1 resumes execution.
- At time 5, P4 arrives with priority 2 (higher than P1), so P1 is preempted again, and P4 starts execution.
- At time 8, P4 completes, and P1 resumes execution with 6 ms remaining.
- At time 14, P1 completes, and P3 (the last remaining process) starts execution.
- At time 16, P3 completes, and all processes are finished.
10.4 Gantt Chart for Preemptive Priority Scheduling
| P1 (0-2) | P2 (2-3) | P1 (3-5) | P4 (5-8) | P1 (8-14) | P3 (14-16) |
10.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- Waiting Time (WT) = Turnaround Time - Burst Time
- Turnaround Time (TAT) = Completion Time - Arrival Time
Process | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
---|---|---|---|
P1 | 14 ms | 14 - 0 = 14 ms | 14 - 10 = 4 ms |
P2 | 3 ms | 3 - 2 = 1 ms | 1 - 1 = 0 ms |
P3 | 16 ms | 16 - 3 = 13 ms | 13 - 2 = 11 ms |
P4 | 8 ms | 8 - 5 = 3 ms | 3 - 3 = 0 ms |
10.6 Final Calculations
Average Waiting Time:
- (4 + 0 + 11 + 0) / 4 = 15 / 4 = 3.75 ms
Average Turnaround Time:
- (14 + 1 + 13 + 3) / 4 = 31 / 4 = 7.75 ms
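The schedule above can be reproduced with a minimal 1 ms tick sketch of preemptive priority scheduling: at every tick the highest-priority arrived process is re-selected, so a newly arrived higher-priority process preempts the running one. The structure is illustrative only:

```python
# Minimal preemptive priority sketch: every millisecond, run the arrived
# process with the lowest priority number (highest priority).

processes = {"P1": (0, 10, 3), "P2": (2, 1, 1), "P3": (3, 2, 4), "P4": (5, 3, 2)}
remaining = {name: burst for name, (_, burst, _) in processes.items()}
completion = {}
time = 0

while remaining:
    ready = [n for n in remaining if processes[n][0] <= time]
    if not ready:
        time += 1
        continue
    current = min(ready, key=lambda n: processes[n][2])   # lowest priority number wins
    remaining[current] -= 1
    time += 1
    if remaining[current] == 0:
        completion[current] = time
        del remaining[current]

for name, (arrival, burst, _) in processes.items():
    tat = completion[name] - arrival
    print(f"{name}: completion = {completion[name]} ms, TAT = {tat} ms, WT = {tat - burst} ms")
```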
10.7 Observations in Preemptive Priority Scheduling
✔ Preemption ensures quick response for urgent processes.
✔ Better for real-time and interactive systems.
❌ Disadvantages:
- Starvation: Lower-priority processes may never execute if higher-priority processes keep arriving.
- Frequent context switching increases overhead.
10.8 Real-Life Example of Preemptive Priority Scheduling:
- Emergency Room Treatment:
- A patient with a heart attack (priority 1) gets immediate treatment, even if a patient with a broken arm (priority 3) is already being treated.
- The doctor pauses the broken arm treatment and attends to the emergency first.
⭐11. Round Robin (RR) Scheduling
11.1 Introduction to Round Robin Scheduling
- Round Robin (RR) is one of the most popular CPU scheduling algorithms, mainly used in time-sharing and multitasking systems.
- Each process gets a fixed time slice (time quantum) to execute.
- If a process does not finish in its time quantum, it is moved to the end of the queue, and the next process gets the CPU.
- It ensures fair CPU allocation and reduces waiting time for all processes.
- Imagine a circular queue of students taking turns using a single computer.
- Each student gets 5 minutes to use the computer.
- If they are not done, they go to the end of the line and wait for their next turn.
11.2 How Round Robin Works (Step-by-Step Execution)
Example of Round Robin Scheduling
Given Processes (Time Quantum = 3 ms):
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 ms | 5 ms |
P2 | 1 ms | 3 ms |
P3 | 2 ms | 8 ms |
P4 | 3 ms | 6 ms |
Step-by-Step Execution
- At time 0, P1 starts execution. Since P1’s burst time (5 ms) is more than the time quantum (3 ms), it executes for 3 ms, and the remaining 2 ms is moved to the end of the queue.
- At time 3, P2 starts execution. Since P2’s burst time (3 ms) is exactly the time quantum, it completes execution.
- At time 6, P3 starts execution. It executes for 3 ms, and the remaining 5 ms is moved to the queue.
- At time 9, P4 starts execution. It executes for 3 ms, and the remaining 3 ms is moved to the queue.
- At time 12, P1 resumes execution and completes in 2 ms.
- At time 14, P3 resumes execution for 3 ms and still has 2 ms left.
- At time 17, P4 resumes execution and completes in 3 ms.
- At time 20, P3 resumes execution and completes in 2 ms.
11.3 Gantt Chart for Round Robin Scheduling
| P1 (0-3) | P2 (3-6) | P3 (6-9) | P4 (9-12) | P1 (12-14) | P3 (14-17) | P4 (17-20) | P3 (20-22) |
11.4 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- Waiting Time (WT) = Turnaround Time - Burst Time
- Turnaround Time (TAT) = Completion Time - Arrival Time
Process | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
---|---|---|---|
P1 | 14 ms | 14 - 0 = 14 ms | 14 - 5 = 9 ms |
P2 | 6 ms | 6 - 1 = 5 ms | 5 - 3 = 2 ms |
P3 | 22 ms | 22 - 2 = 20 ms | 20 - 8 = 12 ms |
P4 | 20 ms | 20 - 3 = 17 ms | 17 - 6 = 11 ms |
11.5 Final Calculations
Average Waiting Time:
- (9 + 2 + 12 + 11) / 4 = 34 / 4 = 8.5 ms
Average Turnaround Time:
- (14 + 5 + 20 + 17) / 4 = 56 / 4 = 14 ms
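The Round Robin schedule above can be reproduced with a minimal Python sketch (time quantum = 3 ms). It assumes, as in the trace above, that processes arriving during a quantum join the ready queue ahead of the preempted process:

```python
from collections import deque

# Minimal Round Robin sketch with a 3 ms time quantum.

QUANTUM = 3
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]  # (name, arrival, burst)
remaining = {name: burst for name, _, burst in processes}
arrivals = deque(sorted(processes, key=lambda p: p[1]))  # not yet in the ready queue
queue = deque()
time = 0

def admit(upto):
    """Move every process that has arrived by time `upto` into the ready queue."""
    while arrivals and arrivals[0][1] <= upto:
        queue.append(arrivals.popleft()[0])

admit(time)
while queue or arrivals:
    if not queue:                      # CPU idle: jump to the next arrival
        time = arrivals[0][1]
        admit(time)
    name = queue.popleft()
    run = min(QUANTUM, remaining[name])
    time += run
    remaining[name] -= run
    admit(time)                        # newcomers enter before the preempted process
    if remaining[name] > 0:
        queue.append(name)             # quantum expired: back to the end of the queue
    else:
        print(f"{name} completes at {time} ms")
```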
11.6 Advantages of Round Robin Scheduling
✅ Prevents Starvation: All processes get a chance to execute.
✅ Better for Time-Sharing Systems: Ensures smooth multitasking.
11.7 Disadvantages of Round Robin Scheduling
❌ Increased Average Turnaround Time: If the time quantum is too small, processes switch too frequently, causing delays.
❌ Choosing the Right Time Quantum is Critical:
- If too small: Too many context switches (inefficient).
- If too large: Acts like First Come, First Serve (FCFS), which increases waiting time.
11.8 When to Use Round Robin?
- Multitasking and Time-Sharing Systems where all processes need equal CPU time.
- Interactive Systems where responsiveness is important.
- Gaming Servers where multiple users require CPU time fairly.
⭐12. Multilevel Feedback Queue (MLFQ) Scheduling
12.1 Introduction to Multilevel Feedback Queue Scheduling
- Multilevel Feedback Queue (MLFQ) is an advanced CPU scheduling algorithm that allows processes to move between different priority queues.
- It improves CPU utilization by giving priority to shorter processes and adjusting process priority dynamically.
- It is a preemptive scheduling algorithm.
- A teacher prioritizes students who ask shorter questions first.
- If a student takes too long, they are asked to wait while others get a chance.
- Similarly, in MLFQ, shorter processes get higher priority, but longer processes are eventually executed.
12.2 How MLFQ Works
- The system maintains multiple queues, each with different priority levels.
- Short processes start in a high-priority queue.
- If a process exceeds its time quantum, it is moved to a lower-priority queue.
- Lower-priority queues use longer time slices or FCFS scheduling.
Example of Queue Structure:
Queue Level | Priority | Scheduling Algorithm |
---|---|---|
Q1 | Highest | Round Robin (TQ = 2ms) |
Q2 | Medium | Round Robin (TQ = 4ms) |
Q3 | Lowest | FCFS (No Preemption) |
12.3 Example of MLFQ Scheduling
Given Processes:
Process | Arrival Time | Burst Time |
---|---|---|
P1 | 0 ms | 5 ms |
P2 | 1 ms | 3 ms |
P3 | 2 ms | 8 ms |
Time Quantum:
- Queue Q1 (Highest Priority): TQ = 2ms
- Queue Q2 (Medium Priority): TQ = 4ms
- Queue Q3 (Lowest Priority): FCFS (No time limit)
12.4 Step-by-Step Execution
- At time 0, P1 arrives and starts in Q1.
- P1 executes for 2 ms (its Q1 quantum), has 3 ms remaining, and is moved down to Q2.
- P2 arrives at time 1 and P3 at time 2; both enter Q1 and wait while P1 uses the CPU.
- At time 2, P2 (next in Q1) executes for 2 ms, has 1 ms remaining, and is moved down to Q2.
- At time 4, P3 executes for 2 ms in Q1, has 6 ms remaining, and is moved down to Q2.
- At time 6, Q1 is empty, so the scheduler serves Q2. P1 executes for its remaining 3 ms (within Q2's 4 ms quantum) and completes at time 9.
- At time 9, P2 executes for its remaining 1 ms in Q2 and completes at time 10.
- At time 10, P3 executes for 4 ms in Q2, has 2 ms remaining, and is moved down to Q3.
- At time 14, P3 executes in Q3 (FCFS) for its remaining 2 ms and completes at time 16.
12.5 Gantt Chart for MLFQ Scheduling
| P1 in Q1 (0-2) | P2 in Q1 (2-4) | P3 in Q1 (4-6) | P1 in Q2 (6-9) | P2 in Q2 (9-10) | P3 in Q2 (10-14) | P3 in Q3 (14-16) |
12.6 Calculating Waiting Time (WT) & Turnaround Time (TAT)
- Waiting Time (WT) = Turnaround Time - Burst Time
- Turnaround Time (TAT) = Completion Time - Arrival Time
Process | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
---|---|---|---|
P1 | 9 ms | 9 - 0 = 9 ms | 9 - 5 = 4 ms |
P2 | 10 ms | 10 - 1 = 9 ms | 9 - 3 = 6 ms |
P3 | 16 ms | 16 - 2 = 14 ms | 14 - 8 = 6 ms |
12.7 Final Calculations
Average Waiting Time:
- (4 + 6 + 6) / 3 = 16 / 3 ≈ 5.33 ms
Average Turnaround Time:
- (9 + 9 + 14) / 3 = 32 / 3 ≈ 10.67 ms
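The MLFQ trace above can be reproduced with a minimal Python sketch of the three-queue setup (Q1: RR with 2 ms, Q2: RR with 4 ms, Q3: FCFS). It assumes, matching the trace, that new processes always enter Q1 and that a process keeps the CPU for its whole quantum before the scheduler chooses again:

```python
from collections import deque

# Minimal MLFQ sketch: three queues, demotion on an expired quantum.

QUANTA = [2, 4, None]                    # None = FCFS (run to completion)
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]   # (name, arrival, burst)
remaining = {name: burst for name, _, burst in processes}
arrivals = deque(sorted(processes, key=lambda p: p[1]))
queues = [deque(), deque(), deque()]     # Q1 (highest priority) .. Q3 (lowest)
time = 0

def admit(upto):
    """New processes always enter the highest-priority queue, Q1."""
    while arrivals and arrivals[0][1] <= upto:
        queues[0].append(arrivals.popleft()[0])

admit(time)
while arrivals or any(queues):
    level = next((i for i, q in enumerate(queues) if q), None)
    if level is None:                    # CPU idle: jump to the next arrival
        time = arrivals[0][1]
        admit(time)
        continue
    name = queues[level].popleft()
    quantum = QUANTA[level]
    run = remaining[name] if quantum is None else min(quantum, remaining[name])
    time += run
    remaining[name] -= run
    admit(time)
    if remaining[name] == 0:
        print(f"{name} completes at {time} ms")
    else:
        queues[min(level + 1, 2)].append(name)   # used its full quantum: demote
```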
12.8 Advantages of Multilevel Feedback Queue Scheduling
✅ Shorter processes execute faster, reducing response time.
✅ Prevents Starvation: If a process waits too long, it is moved to a higher-priority queue.
12.9 Disadvantages of Multilevel Feedback Queue Scheduling
❌ Choosing time quantum is difficult: Incorrect values can cause inefficiency.
❌ More CPU Overhead: Frequent context switching between queues.
12.10 When to Use MLFQ?
- Operating systems that handle multiple types of processes (background, interactive).
- Gaming and real-time systems where high-priority tasks need immediate execution.
- Cloud computing and multitasking environments where different tasks require different CPU time allocations.
⭐13. Multiprocessor Scheduling
13.1 Introduction to Multiprocessor Scheduling
- Multiprocessor scheduling deals with scheduling multiple processes across multiple processors (CPUs).
- It is more complex than single-processor scheduling because it must consider load balancing, processor affinity, and synchronization.
- It is used in modern computers, servers, and supercomputers to improve performance.
- A laptop with a quad-core processor can run four tasks at the same time (e.g., a web browser, video player, game, and antivirus scan).
13.2 Types of Multiprocessor Systems
- Asymmetric Multiprocessing (AMP)
- One CPU acts as the master and assigns tasks to other CPUs (slaves).
- The master CPU controls scheduling and manages the system.
- Simpler but can cause bottlenecks at the master CPU.
- Example: Older computers where one core manages all system tasks, while others handle computations.
- Symmetric Multiprocessing (SMP)
- All CPUs are equal and share the workload.
- The operating system manages scheduling and assigns tasks dynamically.
- Used in modern computers and servers for better efficiency.
- Example: A Windows PC with an 8-core processor, where each core runs different tasks independently.
13.3 Scheduling in Multiprocessor Systems
Load Balancing
- Ensures equal distribution of tasks across all processors.
- Prevents some CPUs from being overloaded while others are idle.
- Types of Load Balancing:
- Push Migration: The OS moves tasks from an overloaded CPU to a less busy one.
- Pull Migration: An idle CPU pulls tasks from a busy CPU.
Processor Affinity
- If a process runs on a particular CPU, it prefers to stay on that CPU for efficiency.
- Reduces the overhead of moving processes between CPUs.
- Types of Affinity:
- Soft Affinity: The OS tries to keep a process on the same CPU but can move it if necessary.
- Hard Affinity: The OS forces a process to stay on a specific CPU.
- Example: A video editing software running on CPU 3 will continue running there for better cache performance.
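As a concrete illustration of hard affinity, Python exposes the Linux-only calls os.sched_getaffinity and os.sched_setaffinity. The sketch below pins the calling process to CPUs 0 and 1; the CPU numbers are arbitrary examples, and these calls are not available on Windows or macOS:

```python
import os

# Hard-affinity sketch (Linux only): pin the calling process to CPUs 0 and 1.

pid = 0  # 0 means "the calling process"
print("Allowed CPUs before:", os.sched_getaffinity(pid))

os.sched_setaffinity(pid, {0, 1})   # the OS will now schedule this process only on CPU 0 or 1
print("Allowed CPUs after: ", os.sched_getaffinity(pid))
```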
Multicore Scheduling
- A single processor contains multiple cores that can execute tasks independently.
- Challenges:
- Some threads must communicate between cores (e.g., in parallel programming).
- Cache sharing can lead to performance issues.
- Example: A gaming PC with 16 CPU cores, where some cores handle graphics while others manage background processes.
Real-Time Multiprocessor Scheduling
- Used in time-sensitive applications like air traffic control, robotics, and medical devices.
- Ensures that critical tasks meet strict deadlines.
13.4 Advantages of Multiprocessor Scheduling
✅ Better Resource Utilization: Prevents CPU idling.
✅ Improves Multitasking: More processes can run at once.
✅ Enhances Reliability: If one CPU fails, others continue working.
13.5 Disadvantages of Multiprocessor Scheduling
❌ Synchronization Issues: Multiple processors accessing shared memory can cause conflicts.
❌ Expensive Hardware: More CPUs increase costs.
13.6 When to Use Multiprocessor Scheduling?
- Supercomputers for scientific research and simulations.
- Cloud Computing Servers for handling multiple user requests.
- AI & Machine Learning Systems for processing large datasets.
⭐14. Real-Time Scheduling
14.1 Introduction to Real-Time Scheduling
- Real-time scheduling is used in systems where tasks must be completed within strict time limits (deadlines).
- Failure to meet the deadline can lead to serious consequences in critical applications.
- Used in aircraft systems, medical devices, robotics, and industrial automation.
- A self-driving car must process sensor data in real-time.
- If the braking system is delayed, it can cause an accident.
- A real-time scheduling algorithm ensures that critical tasks always run on time.
14.2 Types of Real-Time Systems
Hard Real-Time Systems
- Strict deadlines must always be met.
- Failure to meet the deadline = System failure.
- Used in life-critical systems where delays are unacceptable.
- Examples:
- ✅ Pacemakers (must send electrical pulses on time).
- ✅ Airbag Systems in cars (must deploy immediately in an accident).
- ✅ Industrial Robots (must work with precise timing).
Soft Real-Time Systems
- Deadlines should be met, but occasional delays are acceptable.
- System performance degrades if deadlines are missed, but it does not fail.
- Examples:
- ✅ Video Streaming (minor delays cause buffering but do not crash the system).
- ✅ Online Gaming (small delays can cause lag but do not stop the game).
- ✅ Stock Market Trading (fast execution is important but not life-threatening).
14.3 Scheduling Algorithms for Real-Time Systems
Rate Monotonic Scheduling (RMS) – Fixed Priority
- Assigns higher priority to tasks with shorter execution periods.
- Tasks run at regular intervals (periodic).
- Used for predictable workloads.
- Example:
- A temperature sensor in an industrial plant checks temperature every 5ms.
- Since it runs frequently, it gets higher priority.
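RMS also comes with a classic sufficient schedulability test (the Liu & Layland bound): n periodic tasks are guaranteed to meet their deadlines under RMS if total utilization U = Σ(Ci / Ti) ≤ n(2^(1/n) − 1). A minimal Python sketch with made-up task values:

```python
# Liu & Layland bound for Rate Monotonic Scheduling (illustrative task values).

tasks = [
    # (execution_time_ms, period_ms) -- shorter period = higher RMS priority
    (1, 5),    # e.g. a sensor read every 5 ms
    (2, 10),
    (3, 30),
]

n = len(tasks)
utilization = sum(c / t for c, t in tasks)
bound = n * (2 ** (1 / n) - 1)

print(f"U = {utilization:.3f}, bound = {bound:.3f}")
print("Guaranteed schedulable under RMS" if utilization <= bound
      else "Bound exceeded: needs further analysis")
```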
Earliest Deadline First (EDF) – Dynamic Priority
- The task with the closest deadline gets the highest priority.
- More flexible than RMS but requires complex calculations.
- Example:
- A navigation system in an airplane must process incoming flight data before the next set of sensor readings arrives.
- The earliest deadline (next sensor reading) gets the CPU first.
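A minimal 1 ms tick sketch of EDF: at every tick, the ready task with the nearest absolute deadline runs. The task names, release times, and deadlines below are made-up illustrations:

```python
# Minimal EDF sketch: always run the ready task with the earliest deadline.

tasks = [
    # (name, release_time_ms, execution_time_ms, absolute_deadline_ms)
    ("sensor_fusion", 0, 3, 10),
    ("brake_check",   1, 2,  5),
    ("logging",       2, 4, 30),
]

remaining = {name: exe for name, _, exe, _ in tasks}
time = 0
while remaining:
    ready = [t for t in tasks if t[0] in remaining and t[1] <= time]
    if not ready:
        time += 1
        continue
    name = min(ready, key=lambda t: t[3])[0]   # earliest deadline first
    remaining[name] -= 1                       # run one 1 ms tick, then re-evaluate
    time += 1
    if remaining[name] == 0:
        deadline = next(t[3] for t in tasks if t[0] == name)
        print(f"{name} finished at {time} ms (deadline {deadline} ms)")
        del remaining[name]
```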
Least Slack Time First (LSTF) – Dynamic Priority
- Slack Time = Time remaining until the deadline - Remaining Execution Time
- The task with the least slack time is scheduled first.
- Example:
- A video streaming system where frames must be displayed at precise times.
- Frames with the least remaining time before the next frame is needed are processed first.
14.4 Challenges in Real-Time Scheduling
❌ Resource Management: CPU, memory, and I/O devices must be allocated efficiently.
❌ Interrupt Handling: Quick response to external events (e.g., sensor inputs).
❌ Power Consumption: Must be optimized in battery-operated devices like medical implants.
14.5 Advantages of Real-Time Scheduling
✅ Guaranteed Response Time: Ensures critical tasks complete before the deadline.
✅ Efficient Resource Utilization: CPU and memory are allocated optimally.
✅ Predictable Performance: Essential for life-critical systems.
14.6 Disadvantages of Real-Time Scheduling
❌ Complex Implementation: Requires advanced scheduling algorithms.
❌ High Resource Usage: May require dedicated processors for real-time tasks.
❌ Difficult to Scale: Works best for small, well-defined tasks.
14.7 When to Use Real-Time Scheduling?
🔹 Embedded Systems (e.g., smartwatches, medical devices).
🔹 Autonomous Vehicles (e.g., self-driving cars, drones).
🔹 Industrial Automation (e.g., robotic arms in factories).
🔹 Military & Aerospace Systems (e.g., missile guidance systems).
⭐15. Thread Scheduling
15.1 Introduction to Thread Scheduling
- Thread scheduling decides which thread gets CPU time when multiple threads exist.
- It works inside a process, where multiple threads share the same resources (memory, files, etc.).
- Helps in efficient multitasking and improves CPU utilization.
Example:
- In a web browser, one thread handles user input (clicking buttons), another downloads files, and another displays content.
- Thread scheduling ensures smooth execution of all these tasks.
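A minimal Python sketch of that browser example: a background thread simulates a download while the main thread keeps handling (pretend) user input. When each thread actually gets the CPU is decided by the OS thread scheduler, not by this code; all names and timings are illustrative:

```python
import threading
import time

def download_file():
    # Pretend download: sleeping stands in for waiting on the network.
    for percent in range(0, 101, 25):
        time.sleep(0.1)
        print(f"[downloader] {percent}% complete")

downloader = threading.Thread(target=download_file)
downloader.start()

for _ in range(5):                  # the "UI" keeps responding in the meantime
    print("[ui] handling user input")
    time.sleep(0.08)

downloader.join()                   # wait for the background thread to finish
print("[ui] download finished, page updated")
```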
15.2 Types of Thread Scheduling
Process vs. Thread Scheduling
Process Scheduling | Thread Scheduling |
---|---|
Deals with entire processes | Deals with individual threads within a process |
More overhead (context switching is costly) | Less overhead (threads share memory) |
Used for system-level multitasking | Used for efficient application performance |
User-Level Thread Scheduling
- The user program manages thread scheduling, not the OS.
- Uses a thread library (e.g., POSIX Threads, Java Threads).
- Fast but cannot take advantage of multiple processors.
- Example:
- A game engine handles enemy movement, physics, and sound using threads managed by the game itself.
Kernel-Level Thread Scheduling
- The operating system manages scheduling, not the user program.
- Threads are assigned to different CPU cores for execution.
- Slower than user-level scheduling but supports multicore processing.
- Example:
- In Linux, the kernel schedules system threads to optimize performance on a multi-core CPU.
15.3 Multithreading Models
- Defines how user threads map to kernel threads.
Many-to-One Model
- Many user threads → One kernel thread
- Simple but cannot use multiple CPUs.
- Used in older systems.
- Example:
- Green Threads in older Java versions.
One-to-One Model
- Each user thread → One kernel thread
- Supports parallel execution on multiple CPUs.
- Used in modern OS like Windows, Linux, macOS.
Many-to-Many Model
- Multiple user threads → Multiple kernel threads
- Balances flexibility and efficiency.
- Used in hybrid thread libraries.
15.4 Thread Scheduling Algorithms
Threads are scheduled using different policies.
Preemptive Scheduling
- The OS switches threads forcefully to ensure fairness.
- Used in multitasking operating systems.
- Example: A music player pauses while the CPU processes a high-priority notification.
Non-Preemptive Scheduling
- A thread runs until it completes or voluntarily yields.
- Used in simpler scheduling models.
- Example: A single-threaded calculator runs until the user closes it.
15.5 Thread Priorities
- Threads have priority values to determine execution order.
- Higher priority threads run before lower priority ones.
- Dynamic priority adjustment prevents starvation of low-priority threads.
- A real-time video rendering thread gets higher priority than a background file backup thread.
15.6 Advantages of Thread Scheduling
✅ Efficient CPU Utilization: Threads run in parallel, making use of multi-core processors.
✅ Better Responsiveness: UI remains active while background tasks run.
15.7 Disadvantages of Thread Scheduling
❌ More Complex than Process Scheduling: Requires careful deadlock prevention.
❌ CPU Starvation Risk: If thread priorities are not balanced, low-priority threads may never run.
15.8 When to Use Thread Scheduling?
🔹 Gaming & Multimedia (Graphics rendering, AI, and input handling).
🔹 Cloud Computing (Efficiently handling user requests in servers).
🔹 Mobile Applications (Background sync while using other features).