

Unit II: CPU Scheduling


Contents
  ⭐1. Introduction to CPU Scheduling
  ⭐2. Types of Scheduling
    2.1 Long-Term Scheduling (Job Scheduling)
    2.2 Short-Term Scheduling (CPU Scheduling)
    2.3 Medium-Term Scheduling
  ⭐3. Scheduling Algorithms
    3.1 First-Come, First-Served (FCFS)
    3.2 Shortest Job Next (SJN) / Shortest Job First (SJF)
    3.3 Round Robin (RR)
    3.4 Priority Scheduling
    3.5 Multilevel Queue Scheduling
    3.6 Multilevel Feedback Queue Scheduling
  ⭐4. Scheduling Criteria
  ⭐5. CPU Scheduler: Preemptive vs. Non-Preemptive Scheduling
    5.1 Preemptive Scheduling
    5.2 Non-Preemptive Scheduling
  ⭐6. Dispatcher
    6.1 Functions of Dispatcher
    6.2 Dispatch Latency
  ⭐7. First Come, First Serve (FCFS) Scheduling
    7.1 Introduction to FCFS Scheduling
    7.2 Characteristics of FCFS Scheduling
    7.3 How FCFS Works (Step-by-Step)
    7.4 Example of FCFS Scheduling
    7.5 Advantages of FCFS Scheduling
    7.6 Disadvantages of FCFS Scheduling
    7.7 When to Use FCFS?
  ⭐8. Shortest Job First (SJF) Scheduling
    8.1 Introduction to SJF Scheduling
    8.2 Types of SJF Scheduling
    8.3 Example for Non-Preemptive SJF
    8.4 Example for Preemptive SJF (SRTF)
    8.5 Advantages of SJF Scheduling
    8.6 Disadvantages of SJF Scheduling
    8.7 When to Use SJF?
  ⭐9. Non-Preemptive Priority Scheduling
    9.1 Non-Preemptive Priority Scheduling
    9.2 Example of Non-Preemptive Priority Scheduling
    9.3 Step-by-Step Execution
    9.4 Gantt Chart for Non-Preemptive Priority Scheduling
    9.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)
    9.6 Final Calculations
    9.7 Observations in Non-Preemptive Priority Scheduling
    9.8 Real-Life Example of Non-Preemptive Priority Scheduling
  ⭐10. Preemptive Priority Scheduling
    10.1 Introduction to Preemptive Priority Scheduling
    10.2 Example for Preemptive Priority Scheduling
    10.3 Step-by-Step Execution
    10.4 Gantt Chart for Preemptive Priority Scheduling
    10.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)
    10.6 Final Calculations
    10.7 Observations in Preemptive Priority Scheduling
    10.8 Real-Life Example of Preemptive Priority Scheduling
  ⭐11. Round Robin (RR) Scheduling
    11.1 Introduction to Round Robin Scheduling
    11.2 How Round Robin Works (Step-by-Step Execution)
    11.3 Gantt Chart for Round Robin Scheduling
    11.4 Calculating Waiting Time (WT) & Turnaround Time (TAT)
    11.5 Final Calculations
    11.6 Advantages of Round Robin Scheduling
    11.7 Disadvantages of Round Robin Scheduling
    11.8 When to Use Round Robin?
  ⭐12. Multilevel Feedback Queue (MLFQ) Scheduling
    12.1 Introduction to Multilevel Feedback Queue Scheduling
    12.2 How MLFQ Works?
    12.3 Example of MLFQ Scheduling
    12.4 Step-by-Step Execution
    12.5 Gantt Chart for MLFQ Scheduling
    12.6 Calculating Waiting Time (WT) & Turnaround Time (TAT)
    12.7 Final Calculations
    12.8 Advantages of Multilevel Feedback Queue Scheduling
    12.9 Disadvantages of Multilevel Feedback Queue Scheduling
    12.10 When to Use MLFQ?
  ⭐13. Multiprocessor Scheduling
    13.1 Introduction to Multiprocessor Scheduling
    13.2 Types of Multiprocessor Systems
    13.3 Scheduling in Multiprocessor Systems
    13.4 Advantages of Multiprocessor Scheduling
    13.5 Disadvantages of Multiprocessor Scheduling
    13.6 When to Use Multiprocessor Scheduling?
  ⭐14. Real-Time Scheduling
    14.1 Introduction to Real-Time Scheduling
    14.2 Types of Real-Time Systems
    14.3 Scheduling Algorithms for Real-Time Systems
    14.4 Challenges in Real-Time Scheduling
    14.5 Advantages of Real-Time Scheduling
    14.6 Disadvantages of Real-Time Scheduling
    14.7 When to Use Real-Time Scheduling?
  ⭐15. Thread Scheduling
    15.1 Introduction to Thread Scheduling
    15.2 Types of Thread Scheduling
    15.3 Multithreading Models
    15.4 Thread Scheduling Algorithms
    15.5 Thread Priorities
    15.6 Advantages of Thread Scheduling
    15.7 Disadvantages of Thread Scheduling
    15.8 When to Use Thread Scheduling?

⭐1. Introduction to CPU Scheduling


  • CPU scheduling is the process of selecting which process will run next on the CPU.
  • It is a key part of multiprogramming, ensuring efficient CPU utilization.
  • The CPU Scheduler decides the order of execution of processes.
Example: If multiple users are running programs on a computer, the OS must decide which program gets CPU time first to ensure fairness and efficiency.

2. Types of Scheduling


CPU scheduling occurs at different levels within the OS.

2.1 Long-Term Scheduling (Job Scheduling)

  • Determines which processes enter the system for execution.
  • Manages the degree of multiprogramming (number of processes in memory).
  • Runs infrequently (seconds/minutes).
Example: A system that decides which user jobs should be loaded into memory for execution.

2.2 Short-Term Scheduling (CPU Scheduling)

  • Decides which process gets the CPU next from the ready queue.
  • Runs frequently (milliseconds).
  • Impacts system performance directly.
Example: If multiple applications (browser, music player, Word) are running, the short-term scheduler decides which one gets CPU time next.

2.3 Medium-Term Scheduling

  • Temporarily removes processes from memory (swapping).
  • Reduces system load and improves performance.
  • Used in time-sharing systems.
Example: A background process (like a system update) may be temporarily swapped out to let a user’s program run faster.

3. Scheduling Algorithms


Different algorithms decide how CPU time is allocated to processes.

3.1 First-Come, First-Served (FCFS)

  • The first process that arrives is executed first (like a queue).
  • Non-preemptive (once running, it cannot be stopped).
  • Disadvantage: "Convoy effect" – a small process may wait a long time if a large process comes first.
Example: A printer queue where the first document submitted is printed first.

3.2 Shortest Job Next (SJN) / Shortest Job First (SJF)

  • Process with smallest execution time runs first.
  • Preemptive (Shortest Remaining Time First – SRTF) or Non-preemptive.
  • Best for batch processing.
Example: A university server processing student assignments – smaller assignments are processed first.

3.3 Round Robin (RR)

  • Each process gets a fixed time slice (time quantum).
  • If the process does not finish, it goes back to the queue.
  • Preemptive (ensures fair CPU allocation).
Example: Multiplayer online games – each player’s action is processed in turns.

3.4 Priority Scheduling

  • Each process is assigned a priority; higher priority runs first.
  • Can be Preemptive or Non-Preemptive.
  • Disadvantage: Starvation (low-priority processes may never execute).
Example: Real-time systems like air traffic control prioritize emergency landings over normal flights.

3.5 Multilevel Queue Scheduling

  • Divides processes into different queues (foreground, background).
  • Each queue has a different scheduling algorithm.
Example:
  • Foreground queue: Uses Round Robin (interactive tasks).
  • Background queue: Uses FCFS (batch tasks like printing).

3.6 Multilevel Feedback Queue Scheduling

  • Similar to Multilevel Queue Scheduling, but processes can move between queues based on behavior.
  • Ensures fairness.
Example: A process taking too long in an interactive queue moves to a lower-priority queue.

4. Scheduling Criteria


To evaluate CPU scheduling algorithms, we use different performance criteria:

  • CPU Utilization:

    • Percentage of time CPU is busy.
    • Goal: Maximize CPU usage (keep CPU working as much as possible).

  • Throughput:

    • Number of processes completed per unit time.
    • Goal: Increase throughput.
  • Turnaround Time:

    • Time from process arrival to completion.
    • Formula:
      Turnaround Time = Completion Time - Arrival Time
    • Goal: Minimize turnaround time.
  • Waiting Time:

    • Time a process spends in the ready queue before execution.
    • Formula:
      Waiting Time = Turnaround Time - Burst Time
    • Goal: Minimize waiting time.
  • Response Time:

    • Time from arrival to first execution.
    • Important in interactive systems.
    • Goal: Minimize response time.
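
As a quick illustration, here is a minimal Python sketch (not part of the original notes) that applies the turnaround-time and waiting-time formulas above to some made-up per-process figures:

```python
# Hypothetical per-process data: arrival, burst, and completion times in ms.
processes = {
    "P1": {"arrival": 0, "burst": 5, "completion": 5},
    "P2": {"arrival": 1, "burst": 3, "completion": 8},
}

for name, p in processes.items():
    turnaround = p["completion"] - p["arrival"]   # TAT = Completion Time - Arrival Time
    waiting = turnaround - p["burst"]             # WT  = Turnaround Time - Burst Time
    print(f"{name}: TAT = {turnaround} ms, WT = {waiting} ms")
```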

5. CPU Scheduler: Preemptive vs. Non-Preemptive Scheduling


The CPU scheduler decides when a process gets the CPU.

5.1 Preemptive Scheduling

  • A running process can be interrupted and moved back to the ready queue.
  • Used in time-sharing and real-time systems.
  • Example Algorithms: Round Robin, SJF (Preemptive), Priority (Preemptive).
Example: In an online exam system, the OS pauses background tasks (like updates) if the user is actively answering questions.

5.2 Non-Preemptive Scheduling

  • Once a process starts execution, it cannot be interrupted until completion.
  • Used in batch and non-time-critical systems.
  • Example Algorithms: FCFS, SJF (Non-Preemptive), Priority (Non-Preemptive).
Example: A movie streaming app buffers a full video without interruptions.

6. Dispatcher


The dispatcher is the component that gives control of the CPU to a selected process.

6.1 Functions of Dispatcher:

  • Context Switching: Saves the state of the old process and loads the new one.
  • Switching to User Mode: Ensures the process runs with user permissions.
  • Jumping to the Right Location: Starts execution of the selected process.

6.2 Dispatch Latency

  • The time taken by the dispatcher to stop one process and start another.
  • Goal: Minimize dispatch latency to improve CPU efficiency.
Example: A laptop switching between an open Word document and a web browser.

7. First Come, First Serve (FCFS) Scheduling


7.1 Introduction to FCFS Scheduling

  • First Come, First Serve (FCFS) is the simplest CPU scheduling algorithm.
  • The process that arrives first is executed first, like a queue (FIFO - First In, First Out).
  • It is a non-preemptive scheduling algorithm (once a process starts execution, it cannot be interrupted until completion).
Example:
  • Imagine a ticket counter where people stand in a queue.
  • The first person in line gets the ticket first, then the second, and so on.
  • Similarly, in FCFS, the first process in the queue gets CPU time first.

7.2 Characteristics of FCFS Scheduling

  • Simple to implement (easy to use with a queue structure).
  • Non-preemptive (processes run until completion).
  • Can cause long waiting times for shorter processes if a longer process arrives first.
  • Not ideal for interactive systems because it doesn’t allow quick process switching.

7.3 How FCFS Works (Step-by-Step)

  • Processes arrive in a queue.
  • The CPU picks the first process and starts execution.
  • Once the process finishes, the next process in the queue starts.
  • This continues until all processes are completed.

7.4 Example of FCFS Scheduling

Given Processes:

Process   Arrival Time   Burst Time
P1        0 ms           5 ms
P2        1 ms           3 ms
P3        2 ms           8 ms
P4        3 ms           6 ms

Gantt Chart (Execution Order)

|  P1  |  P2  |  P3  |  P4  |
0      5      8      16     22

Calculating Waiting Time

Waiting Time (WT) = Turnaround Time - Burst Time
Turnaround Time (TAT) = Completion Time - Arrival Time

Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        5 ms              5 - 0 = 5 ms            5 - 5 = 0 ms
P2        8 ms              8 - 1 = 7 ms            7 - 3 = 4 ms
P3        16 ms             16 - 2 = 14 ms          14 - 8 = 6 ms
P4        22 ms             22 - 3 = 19 ms          19 - 6 = 13 ms

Average Waiting Time:

(0 + 4 + 6 + 13) / 4 = 5.75 ms

Average Turnaround Time:

(5 + 7 + 14 + 19) / 4 = 11.25 ms
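
The same result can be reproduced programmatically. The following is a small, illustrative Python sketch of FCFS (process names and times taken from the example above); it simply serves processes in arrival order and applies the TAT/WT formulas:

```python
# A minimal FCFS simulation sketch (illustrative, not from the notes).
# Each process is (name, arrival_time, burst_time); times in ms.
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]

def fcfs(procs):
    clock, rows = 0, []
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):  # serve in arrival order
        clock = max(clock, arrival)          # CPU may idle until the process arrives
        completion = clock + burst
        turnaround = completion - arrival    # TAT = Completion Time - Arrival Time
        waiting = turnaround - burst         # WT  = Turnaround Time - Burst Time
        rows.append((name, completion, turnaround, waiting))
        clock = completion
    return rows

rows = fcfs(procs)
for name, ct, tat, wt in rows:
    print(f"{name}: CT={ct}  TAT={tat}  WT={wt}")
print("Avg WT  =", sum(r[3] for r in rows) / len(rows))   # 5.75 for this data
print("Avg TAT =", sum(r[2] for r in rows) / len(rows))   # 11.25 for this data
```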

7.5 Advantages of FCFS Scheduling

Easy to Implement: Uses a simple queue structure.
Fair Scheduling: Every process gets a turn based on arrival.
Good for Long Jobs: No process gets interrupted once started.

7.6 Disadvantages of FCFS Scheduling

Convoy Effect: Short processes wait too long behind long processes.
High Waiting Time: If a process with a long burst time arrives first, all other processes have to wait.
Not Suitable for Interactive Systems: Doesn’t allow fast switching between tasks.

Example of Convoy Effect:

  • Suppose a huge truck is moving on a one-lane road, and many small cars are behind it.
  • The small cars cannot pass until the truck finishes its journey (like shorter processes waiting behind longer ones in FCFS).

7.7 When to Use FCFS?

  • Batch processing where all jobs are of similar size.
  • Simple environments where process order doesn’t matter.
  • Non-interactive systems where preemption is not required.

8. Shortest Job First (SJF) Scheduling


8.1 Introduction to SJF Scheduling

  • Shortest Job First (SJF) is a CPU scheduling algorithm that selects the process with the smallest burst time first.
  • It is also known as Shortest Next CPU Burst.
  • It is optimal in the sense that it gives the minimum average waiting time for a given set of processes, provided the burst times are known in advance.
Example:
  • Imagine a hospital emergency room.
  • Patients with minor injuries (short treatment time) are treated first, while serious cases (longer treatment) wait.
  • This ensures faster treatment for the majority of patients.

8.2 Types of SJF Scheduling

  • Non-Preemptive SJF (Standard SJF)
    • The CPU is assigned to the shortest job first, and it cannot be interrupted until it completes.
    • Other processes must wait even if a shorter process arrives later.
    • Example:
      • If five customers order food at a restaurant, the chef prepares the quickest orders first, even if a large order came in earlier.
  • Preemptive SJF (Shortest Remaining Time First - SRTF)
    • If a new process with a shorter burst time arrives, the current process is interrupted and replaced.
    • The CPU always executes the shortest remaining job first.
    • Example:
      • A pizza shop starts preparing a large pizza, but a customer orders a small burger.
      • The shop pauses the pizza and makes the burger first because it takes less time.

8.3 Example for Non-Preemptive SJF

Process   Arrival Time   Burst Time
P1        0 ms           8 ms
P2        1 ms           4 ms
P3        2 ms           9 ms
P4        3 ms           5 ms

At time 0, P1 is the only process that has arrived, so it runs first and, being non-preemptive, runs to completion. After that, the scheduler always picks the shortest burst among the processes that have already arrived: P2 (4 ms), then P4 (5 ms), then P3 (9 ms).

Gantt Chart (Execution Order for Non-Preemptive SJF)

|  P1  |  P2  |  P4  |  P3  |
0      8      12     17     26

Calculating Waiting Time

Waiting Time (WT) = Turnaround Time - Burst Time
Turnaround Time (TAT) = Completion Time - Arrival Time

Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        8 ms              8 - 0 = 8 ms            8 - 8 = 0 ms
P2        12 ms             12 - 1 = 11 ms          11 - 4 = 7 ms
P3        26 ms             26 - 2 = 24 ms          24 - 9 = 15 ms
P4        17 ms             17 - 3 = 14 ms          14 - 5 = 9 ms

Average Waiting Time:
(0 + 7 + 15 + 9) / 4 = 7.75 ms
Average Turnaround Time:
(8 + 11 + 24 + 14) / 4 = 14.25 ms
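
For reference, here is a minimal Python sketch of non-preemptive SJF (illustrative only) that reproduces the schedule above: at every scheduling decision it picks the shortest burst among the processes that have already arrived and lets it run to completion.

```python
# A minimal non-preemptive SJF sketch (illustrative).
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]  # (name, arrival, burst)

def sjf_nonpreemptive(procs):
    remaining = list(procs)
    clock, schedule = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]        # processes that have arrived
        if not ready:                                           # CPU idles until next arrival
            clock = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])   # shortest burst among ready
        clock += burst                                          # runs to completion
        schedule.append((name, clock, clock - arrival, clock - arrival - burst))
        remaining.remove((name, arrival, burst))
    return schedule

for name, ct, tat, wt in sjf_nonpreemptive(procs):
    print(f"{name}: CT={ct}  TAT={tat}  WT={wt}")
```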

8.4 Example for Preemptive SJF (SRTF)

Given Processes:

Process   Arrival Time   Burst Time
P1        0 ms           8 ms
P2        1 ms           4 ms
P3        2 ms           9 ms
P4        3 ms           5 ms

Step-by-Step Execution:

  1. At time 0, P1 is the only process, so it starts execution.
  2. At time 1, P2 arrives (burst time 4). Since 4 < P1’s remaining time (7), P1 is preempted, and P2 starts execution.
  3. At time 2, P3 arrives (burst time 9). Since P2 is shorter, P3 waits.
  4. At time 3, P4 arrives (burst time 5). P2 is still the shortest, so it continues.
  5. At time 5, P2 completes, and the next shortest process (P4) starts execution.
  6. At time 10, P4 completes, and P1 (7 ms remaining) resumes.
  7. At time 17, P1 completes, and P3 (the longest) starts execution.
  8. At time 26, P3 completes, and all processes are finished.

Gantt Chart for Preemptive SJF

|  P1  |  P2  |  P4  |  P1  |  P3  |
0      1      5      10     17     26

Calculating Waiting Time (WT) & Turnaround Time (TAT)

  • Waiting Time (WT) = Turnaround Time - Burst Time
  • Turnaround Time (TAT) = Completion Time - Arrival Time
Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        17 ms             17 - 0 = 17 ms          17 - 8 = 9 ms
P2        5 ms              5 - 1 = 4 ms            4 - 4 = 0 ms
P3        26 ms             26 - 2 = 24 ms          24 - 9 = 15 ms
P4        10 ms             10 - 3 = 7 ms           7 - 5 = 2 ms

Final Calculations

Average Waiting Time:

(9 + 0 + 15 + 2) / 4 = 6.5 ms

Average Turnaround Time:

(17 + 4 + 24 + 7) / 4 = 13 ms
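
A similar Python sketch for preemptive SJF (SRTF), again illustrative only: it simulates one millisecond at a time and always runs the arrived process with the least remaining burst, which reproduces the completion times above.

```python
# A minimal preemptive SJF / SRTF sketch (illustrative).
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # name: (arrival, burst)

remaining = {name: burst for name, (arrival, burst) in procs.items()}
completion, clock = {}, 0
while remaining:
    ready = [n for n in remaining if procs[n][0] <= clock]
    if not ready:                                      # CPU idle until the next arrival
        clock += 1
        continue
    current = min(ready, key=lambda n: remaining[n])   # shortest remaining time first
    remaining[current] -= 1                            # run for 1 ms, then re-decide
    clock += 1
    if remaining[current] == 0:
        completion[current] = clock
        del remaining[current]

for name, ct in completion.items():
    arrival, burst = procs[name]
    tat = ct - arrival
    print(f"{name}: CT={ct}  TAT={tat}  WT={tat - burst}")
```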

8.5 Advantages of SJF Scheduling

Efficient: Gives the minimum average waiting time compared to other algorithms.
Optimized CPU Utilization: Keeps the CPU busy with the shortest processes.
Best for batch processing: Works well in systems where job completion time is known.

8.6 Disadvantages of SJF Scheduling

Starvation Problem: If many short processes keep arriving, longer processes may never execute.
Difficult to Implement: The OS must know the burst time of each process in advance, which is not always possible.
Not Ideal for Interactive Systems: In a real-time system, new processes must execute quickly, but SJF may delay them.

8.7 When to Use SJF?

🔹 Batch processing systems where job times are known.
🔹 Scientific computing where quick execution is needed.
🔹 Non-interactive systems where user experience is not a priority.

9. Non-Preemptive Priority Scheduling


9.1 Non-Preemptive Priority Scheduling

  • In Priority Scheduling, each process is assigned a priority number.
  • Lower priority number = Higher priority (e.g., Priority 1 is higher than Priority 3).
  • In Non-Preemptive Priority Scheduling:
    • The CPU selects the highest-priority process first.
    • Once a process starts execution, it cannot be interrupted until it completes.
    • If two processes have the same priority, FCFS (First Come, First Serve) is used.

9.2 Example of Non-Preemptive Priority Scheduling

Given Processes:

Process   Arrival Time   Burst Time   Priority
P1        0 ms           5 ms         3
P2        1 ms           3 ms         1
P3        2 ms           8 ms         4
P4        3 ms           6 ms         2
  • Lower priority number = Higher priority.
  • The OS selects the highest-priority process that has arrived.

9.3 Step-by-Step Execution

  • At time 0, P1 arrives and starts execution (it is the only process available). Because scheduling is non-preemptive, P1 runs to completion even though higher-priority processes arrive while it runs.
  • At time 1, P2 arrives with priority 1 (the highest priority), but it must wait for P1 to finish.
  • At time 2, P3 arrives with priority 4 (the lowest priority).
  • At time 3, P4 arrives with priority 2.
  • At time 5, P1 completes. From then on, the scheduler always picks the highest-priority waiting process:
    • P2 (priority 1) runs from 5 to 8.
    • P4 (priority 2) runs from 8 to 14.
    • P3 (priority 4) runs last, from 14 to 22.

9.4 Gantt Chart for Non-Preemptive Priority Scheduling

|  P1  |  P2  |  P4  |  P3  |
0      5      8      14     22

9.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)

  • Waiting Time (WT) = Turnaround Time - Burst Time
  • Turnaround Time (TAT) = Completion Time - Arrival Time
Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        5 ms              5 - 0 = 5 ms            5 - 5 = 0 ms
P2        8 ms              8 - 1 = 7 ms            7 - 3 = 4 ms
P3        22 ms             22 - 2 = 20 ms          20 - 8 = 12 ms
P4        14 ms             14 - 3 = 11 ms          11 - 6 = 5 ms

9.6 Final Calculations

Average Waiting Time:

(0 + 4 + 12 + 5) / 4 = 5.25 ms

Average Turnaround Time:

(5 + 7 + 20 + 11) / 4 = 10.75 ms
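
A minimal Python sketch of non-preemptive priority scheduling (illustrative only), using the same convention that a lower priority number means higher priority; it reproduces the schedule above.

```python
# A minimal non-preemptive priority scheduling sketch (illustrative).
# Lower priority number = higher priority; (name, arrival, burst, priority).
procs = [("P1", 0, 5, 3), ("P2", 1, 3, 1), ("P3", 2, 8, 4), ("P4", 3, 6, 2)]

remaining = list(procs)
clock = 0
while remaining:
    ready = [p for p in remaining if p[1] <= clock]
    if not ready:
        clock = min(p[1] for p in remaining)            # CPU idles until the next arrival
        continue
    # Pick the highest priority (lowest number); ties broken by arrival time (FCFS).
    name, arrival, burst, prio = min(ready, key=lambda p: (p[3], p[1]))
    clock += burst                                       # runs to completion once started
    tat = clock - arrival
    print(f"{name}: CT={clock}  TAT={tat}  WT={tat - burst}")
    remaining.remove((name, arrival, burst, prio))
```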

9.7 Observations in Non-Preemptive Priority Scheduling

Higher-priority processes execute first, reducing turnaround time for important tasks.
Non-preemptive nature means once a process starts, it cannot be interrupted.
Good for batch processing systems, where important jobs must finish first.

Disadvantages:

  • Starvation: Low-priority processes may never get CPU time if high-priority processes keep arriving.
  • Not suitable for real-time systems where tasks must be switched dynamically.

9.8 Real-Life Example of Non-Preemptive Priority Scheduling:

  • Hospital Emergency Room:
    • A patient with a heart attack (priority 1) is treated first.
    • A patient with a broken arm (priority 2) is treated after.
    • A patient with a mild fever (priority 4) waits longer.

10. Preemptive Priority Scheduling


10.1 Introduction to Preemptive Priority Scheduling

  • Preemptive Priority Scheduling selects the process with the highest priority (lowest priority number) for execution.
  • If a new process with a higher priority arrives, the currently running process is preempted (paused) and replaced by the new one.
  • It ensures important tasks execute quickly, but low-priority processes may suffer from starvation.

10.2 Example for Preemptive Priority Scheduling

Given Processes:

Process   Arrival Time   Burst Time   Priority
P1        0 ms           10 ms        3
P2        2 ms           1 ms         1
P3        3 ms           2 ms         4
P4        5 ms           3 ms         2
  • Lower priority number = Higher priority.
  • The OS preempts a lower-priority process if a higher-priority process arrives.

10.3 Step-by-Step Execution

  • At time 0, P1 arrives and starts execution (it's the only process available).
  • At time 2, P2 arrives with higher priority (1), so P1 is preempted, and P2 executes.
  • At time 3, P2 completes (its burst is only 1 ms) and P1 resumes. P3 also arrives at time 3, but its priority (4) is lower than P1's (3), so it waits.
  • At time 5, P4 arrives with priority 2 (higher than P1), so P1 is preempted again, and P4 starts execution.
  • At time 8, P4 completes, and P1 resumes execution with 6 ms of its burst remaining.
  • At time 14, P1 completes, and P3 (the last remaining process) starts execution.
  • At time 16, P3 completes, and all processes are finished.

10.4 Gantt Chart for Preemptive Priority Scheduling

|  P1  |  P2  |  P1  |  P4  |  P1  |  P3  |
0      2      3      5      8      14     16

10.5 Calculating Waiting Time (WT) & Turnaround Time (TAT)

  • Waiting Time (WT) = Turnaround Time - Burst Time
  • Turnaround Time (TAT) = Completion Time - Arrival Time
Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        14 ms             14 - 0 = 14 ms          14 - 10 = 4 ms
P2        3 ms              3 - 2 = 1 ms            1 - 1 = 0 ms
P3        16 ms             16 - 3 = 13 ms          13 - 2 = 11 ms
P4        8 ms              8 - 5 = 3 ms            3 - 3 = 0 ms

10.6 Final Calculations

Average Waiting Time:
(4 + 0 + 11 + 0) / 4 = 3.75 ms
Average Turnaround Time:
(14 + 1 + 13 + 3) / 4 = 7.75 ms
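
A minimal Python sketch of preemptive priority scheduling (illustrative only): simulating one millisecond at a time and re-selecting the highest-priority arrived process on every tick is what makes the algorithm preemptive.

```python
# A minimal preemptive priority scheduling sketch (illustrative).
# Lower priority number = higher priority; name: (arrival, burst, priority).
procs = {"P1": (0, 10, 3), "P2": (2, 1, 1), "P3": (3, 2, 4), "P4": (5, 3, 2)}

remaining = {name: burst for name, (arrival, burst, prio) in procs.items()}
clock, completion = 0, {}
while remaining:
    ready = [n for n in remaining if procs[n][0] <= clock]
    if not ready:
        clock += 1
        continue
    current = min(ready, key=lambda n: procs[n][2])  # re-evaluated every ms, so preemptive
    remaining[current] -= 1
    clock += 1
    if remaining[current] == 0:
        completion[current] = clock
        del remaining[current]

for name, ct in completion.items():
    arrival, burst, _ = procs[name]
    tat = ct - arrival
    print(f"{name}: CT={ct}  TAT={tat}  WT={tat - burst}")
```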

10.7 Observations in Preemptive Priority Scheduling

Higher-priority processes execute first, reducing waiting time for critical tasks.
Preemption ensures quick response for urgent processes.
Better for real-time and interactive systems.
Disadvantages:
  • Starvation: Lower-priority processes may never execute if higher-priority processes keep arriving.
  • Frequent context switching increases overhead.

10.8 Real-Life Example of Preemptive Priority Scheduling:

  • Emergency Room Treatment:
    • A patient with a heart attack (priority 1) gets immediate treatment, even if a patient with a broken arm (priority 3) is already being treated.
    • The doctor pauses the broken arm treatment and attends to the emergency first.

11. Round Robin (RR) Scheduling


11.1 Introduction to Round Robin Scheduling

  • Round Robin (RR) is one of the most popular CPU scheduling algorithms, mainly used in time-sharing and multitasking systems.
  • Each process gets a fixed time slice (time quantum) to execute.
  • If a process does not finish in its time quantum, it is moved to the end of the queue, and the next process gets the CPU.
  • It ensures fair CPU allocation and reduces waiting time for all processes.
Example:
  • Imagine a circular queue of students taking turns using a single computer.
  • Each student gets 5 minutes to use the computer.
  • If they are not done, they go to the end of the line and wait for their next turn.

11.2 How Round Robin Works (Step-by-Step Execution)

Example of Round Robin Scheduling

Given Processes:

Process   Arrival Time   Burst Time
P1        0 ms           5 ms
P2        1 ms           3 ms
P3        2 ms           8 ms
P4        3 ms           6 ms

Time Quantum (TQ) = 3 ms

Step-by-Step Execution

  • At time 0, P1 starts execution. Since P1’s burst time (5 ms) is more than the time quantum (3 ms), it executes for 3 ms, and the remaining 2 ms is moved to the end of the queue.
  • At time 3, P2 starts execution. Since P2’s burst time (3 ms) is exactly the time quantum, it completes execution.
  • At time 6, P3 starts execution. It executes for 3 ms, and the remaining 5 ms is moved to the queue.
  • At time 9, P4 starts execution. It executes for 3 ms, and the remaining 3 ms is moved to the queue.
  • At time 12, P1 resumes execution and completes in 2 ms.
  • At time 14, P3 resumes execution for 3 ms and still has 2 ms left.
  • At time 17, P4 resumes execution and completes in 3 ms.
  • At time 20, P3 resumes execution and completes in 2 ms.

11.3 Gantt Chart for Round Robin Scheduling

|  P1  |  P2  |  P3  |  P4  |  P1  |  P3  |  P4  |  P3  |
0      3      6      9      12     14     17     20     22

11.4 Calculating Waiting Time (WT) & Turnaround Time (TAT)

  • Waiting Time (WT) = Turnaround Time - Burst Time
  • Turnaround Time (TAT) = Completion Time - Arrival Time
Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        14 ms             14 - 0 = 14 ms          14 - 5 = 9 ms
P2        6 ms              6 - 1 = 5 ms            5 - 3 = 2 ms
P3        22 ms             22 - 2 = 20 ms          20 - 8 = 12 ms
P4        20 ms             20 - 3 = 17 ms          17 - 6 = 11 ms

11.5 Final Calculations

Average Waiting Time:
(9 + 2 + 12 + 11) / 4 = 8.5 ms
Average Turnaround Time:
(14 + 5 + 20 + 17) / 4 = 14 ms
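
A minimal Python sketch of Round Robin with TQ = 3 ms (illustrative only), using a FIFO ready queue; it reproduces the completion times above. The convention assumed here is that processes arriving during a time slice join the queue before the preempted process.

```python
# A minimal Round Robin sketch (illustrative) with TQ = 3 ms.
from collections import deque

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]  # (name, arrival, burst)
quantum = 3

arrivals = deque(sorted(procs, key=lambda p: p[1]))
ready = deque()
remaining = {name: burst for name, _, burst in procs}
clock = 0

def admit(now):
    # Move newly arrived processes into the ready queue.
    while arrivals and arrivals[0][1] <= now:
        ready.append(arrivals.popleft()[0])

admit(clock)
while ready or arrivals:
    if not ready:                        # CPU idle until the next arrival
        clock = arrivals[0][1]
        admit(clock)
        continue
    name = ready.popleft()
    run = min(quantum, remaining[name])
    clock += run
    remaining[name] -= run
    admit(clock)                         # arrivals during the slice enter before the preempted process
    if remaining[name] > 0:
        ready.append(name)               # unfinished, so back to the end of the queue
    else:
        arrival = next(a for n, a, _ in procs if n == name)
        print(f"{name}: CT={clock}  TAT={clock - arrival}")
```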

11.6 Advantages of Round Robin Scheduling

Fair CPU Allocation: Every process gets an equal time slice (TQ).
Prevents Starvation: All processes get a chance to execute.
Better for Time-Sharing Systems: Ensures smooth multitasking.

11.7 Disadvantages of Round Robin Scheduling

Higher Context Switching Overhead: More frequent process switching leads to CPU overhead.
Increased Average Turnaround Time: If the time quantum is too small, processes switch too frequently, causing delays.
Choosing the Right Time Quantum is Critical:
  • If too small: Too many context switches (inefficient).
  • If too large: Acts like First Come, First Serve (FCFS), which increases waiting time.

11.8 When to Use Round Robin?

  • Multitasking and Time-Sharing Systems where all processes need equal CPU time.
  • Interactive Systems where responsiveness is important.
  • Gaming Servers where multiple users require CPU time fairly.

12. Multilevel Feedback Queue (MLFQ) Scheduling


12.1 Introduction to Multilevel Feedback Queue Scheduling

  • Multilevel Feedback Queue (MLFQ) is an advanced CPU scheduling algorithm that allows processes to move between different priority queues.
  • It improves CPU utilization by giving priority to shorter processes and adjusting process priority dynamically.
  • It is a preemptive scheduling algorithm.
Example:
  • A teacher prioritizes students who ask shorter questions first.
  • If a student takes too long, they are asked to wait while others get a chance.
  • Similarly, in MLFQ, shorter processes get higher priority, but longer processes are eventually executed.

12.2 How MLFQ Works?

  • The system maintains multiple queues, each with different priority levels.
  • Short processes start in a high-priority queue.
  • If a process exceeds its time quantum, it is moved to a lower-priority queue.
  • Lower-priority queues use longer time slices or FCFS scheduling.

Example of Queue Structure:

Queue Level   Priority   Scheduling Algorithm
Q1            Highest    Round Robin (TQ = 2 ms)
Q2            Medium     Round Robin (TQ = 4 ms)
Q3            Lowest     FCFS (No Preemption)

12.3 Example of MLFQ Scheduling

Given Processes:

Process   Arrival Time   Burst Time
P1        0 ms           5 ms
P2        1 ms           3 ms
P3        2 ms           8 ms

Time Quantum:

  • Queue Q1 (Highest Priority): TQ = 2ms
  • Queue Q2 (Medium Priority): TQ = 4ms
  • Queue Q3 (Lowest Priority): FCFS (No time limit)

12.4 Step-by-Step Execution

  • At time 0, P1 arrives and starts in Q1.
    • P1 uses its full 2 ms quantum (remaining time = 3 ms) and is demoted to Q2.
  • At time 2, P2 (which arrived at 1 ms) runs in Q1.
    • P2 uses its full 2 ms quantum (remaining time = 1 ms) and is demoted to Q2.
  • At time 4, P3 (which arrived at 2 ms) runs in Q1.
    • P3 uses its full 2 ms quantum (remaining time = 6 ms) and is demoted to Q2.
  • At time 6, Q1 is empty, so the scheduler serves Q2 (TQ = 4 ms).
    • P1 runs for its remaining 3 ms and completes at time 9.
  • At time 9, P2 runs for its remaining 1 ms and completes at time 10.
  • At time 10, P3 runs in Q2 for its full 4 ms quantum (remaining time = 2 ms) and is demoted to Q3.
  • At time 14, P3 runs in Q3 (FCFS) for its remaining 2 ms and completes at time 16.

12.5 Gantt Chart for MLFQ Scheduling

|  P1  |  P2  |  P3  |  P1  |  P2  |  P3  |  P3  |
0      2      4      6      9      10     14     16

12.6 Calculating Waiting Time (WT) & Turnaround Time (TAT)

  • Waiting Time (WT) = Turnaround Time - Burst Time
  • Turnaround Time (TAT) = Completion Time - Arrival Time
Process   Completion Time   Turnaround Time (TAT)   Waiting Time (WT)
P1        9 ms              9 - 0 = 9 ms            9 - 5 = 4 ms
P2        10 ms             10 - 1 = 9 ms           9 - 3 = 6 ms
P3        16 ms             16 - 2 = 14 ms          14 - 8 = 6 ms

12.7 Final Calculations

Average Waiting Time:

(4 + 6 + 6) / 3 ≈ 5.33 ms

Average Turnaround Time:

(9 + 9 + 14) / 3 ≈ 10.67 ms
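
A minimal Python sketch of MLFQ (illustrative only), under the same assumption used in the worked example: a process that uses its full quantum is demoted to the next queue, and the scheduler always serves the highest non-empty queue. For simplicity the sketch does not preempt a running slice when a new process enters a higher queue (which does not affect this particular example).

```python
# A minimal MLFQ sketch (illustrative): three queues, new processes enter Q1;
# a process that uses its full quantum is demoted to the next queue (Q3 is FCFS).
from collections import deque

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]   # (name, arrival, burst)
quanta = [2, 4, None]                                # TQ for Q1, Q2; None = run to completion
queues = [deque(), deque(), deque()]
remaining = {name: burst for name, _, burst in procs}
arrivals = deque(sorted(procs, key=lambda p: p[1]))
clock = 0

def admit(now):
    while arrivals and arrivals[0][1] <= now:
        queues[0].append(arrivals.popleft()[0])      # new processes start in Q1

admit(clock)
while any(queues) or arrivals:
    level = next((i for i, q in enumerate(queues) if q), None)
    if level is None:                                # nothing ready: idle until next arrival
        clock = arrivals[0][1]
        admit(clock)
        continue
    name = queues[level].popleft()
    tq = quanta[level]
    run = remaining[name] if tq is None else min(tq, remaining[name])
    clock += run
    remaining[name] -= run
    admit(clock)
    if remaining[name] == 0:
        print(f"{name}: CT={clock}")
    else:
        queues[min(level + 1, 2)].append(name)       # demoted after using its full quantum
```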

12.8 Advantages of Multilevel Feedback Queue Scheduling

Flexible Scheduling: Adjusts priorities dynamically.
Shorter processes execute faster, reducing response time.
Prevents Starvation: If a process waits too long, it is moved to a higher-priority queue.

12.9 Disadvantages of Multilevel Feedback Queue Scheduling

Complex Implementation: Requires multiple queues and scheduling rules.
Choosing time quantum is difficult: Incorrect values can cause inefficiency.
More CPU Overhead: Frequent context switching between queues.

12.10 When to Use MLFQ?

  • Operating systems that handle multiple types of processes (background, interactive).
  • Gaming and real-time systems where high-priority tasks need immediate execution.
  • Cloud computing and multitasking environments where different tasks require different CPU time allocations.

13. Multiprocessor Scheduling


13.1 Introduction to Multiprocessor Scheduling

  • Multiprocessor scheduling deals with scheduling multiple processes across multiple processors (CPUs).
  • It is more complex than single-processor scheduling because it must consider load balancing, processor affinity, and synchronization.
  • It is used in modern computers, servers, and supercomputers to improve performance.
Example:
  • A laptop with a quad-core processor can run four tasks at the same time (e.g., a web browser, video player, game, and antivirus scan).

13.2 Types of Multiprocessor Systems

  • Asymmetric Multiprocessing (AMP)
    • One CPU acts as the master and assigns tasks to other CPUs (slaves).
    • The master CPU controls scheduling and manages the system.
    • Simpler but can cause bottlenecks at the master CPU.
    • Example: Older computers where one core manages all system tasks, while others handle computations.
  • Symmetric Multiprocessing (SMP)
    • All CPUs are equal and share the workload.
    • The operating system manages scheduling and assigns tasks dynamically.
    • Used in modern computers and servers for better efficiency.
    • Example: A Windows PC with an 8-core processor, where each core runs different tasks independently.

13.3 Scheduling in Multiprocessor Systems

Load Balancing

  • Ensures equal distribution of tasks across all processors.
  • Prevents some CPUs from being overloaded while others are idle.
  • Types of Load Balancing:
    • Push Migration: The OS moves tasks from an overloaded CPU to a less busy one.
    • Pull Migration: An idle CPU pulls tasks from a busy CPU.

Processor Affinity

  • If a process runs on a particular CPU, it prefers to stay on that CPU for efficiency.
  • Reduces the overhead of moving processes between CPUs.
  • Types of Affinity:
    • Soft Affinity: The OS tries to keep a process on the same CPU but can move it if necessary.
    • Hard Affinity: The OS forces a process to stay on a specific CPU.
  • Example: A video editing software running on CPU 3 will continue running there for better cache performance.
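
On Linux, hard affinity can be set from user space; the following tiny Python sketch (Linux-specific, illustrative only) pins the calling process to CPU 0 using os.sched_setaffinity:

```python
# Illustrative sketch of hard processor affinity on Linux.
import os

pid = 0                                   # 0 means "the calling process"
print("Allowed CPUs before:", os.sched_getaffinity(pid))
os.sched_setaffinity(pid, {0})            # pin this process to CPU 0 only
print("Allowed CPUs after:", os.sched_getaffinity(pid))
```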

Multicore Scheduling

  • A single processor contains multiple cores that can execute tasks independently.
  • Challenges:
    • Some threads must communicate between cores (e.g., in parallel programming).
    • Cache sharing can lead to performance issues.
  • Example: A gaming PC with 16 CPU cores, where some cores handle graphics while others manage background processes.

Real-Time Multiprocessor Scheduling

  • Used in time-sensitive applications like air traffic control, robotics, and medical devices.
  • Ensures that critical tasks meet strict deadlines.

13.4 Advantages of Multiprocessor Scheduling

Increases Performance: Multiple CPUs execute tasks faster.
Better Resource Utilization: Prevents CPU idling.
Improves Multitasking: More processes can run at once.
Enhances Reliability: If one CPU fails, others continue working.

13.5 Disadvantages of Multiprocessor Scheduling

Complex Scheduling Algorithms: OS must efficiently distribute tasks.
Synchronization Issues: Multiple processors accessing shared memory can cause conflicts.
Expensive Hardware: More CPUs increase costs.

13.6 When to Use Multiprocessor Scheduling?

  • Supercomputers for scientific research and simulations.
  • Cloud Computing Servers for handling multiple user requests.
  • AI & Machine Learning Systems for processing large datasets.

14. Real-Time Scheduling


14.1 Introduction to Real-Time Scheduling

  • Real-time scheduling is used in systems where tasks must be completed within strict time limits (deadlines).
  • Failure to meet the deadline can lead to serious consequences in critical applications.
  • Used in aircraft systems, medical devices, robotics, and industrial automation.
Example:
  • A self-driving car must process sensor data in real-time.
  • If the braking system is delayed, it can cause an accident.
  • A real-time scheduling algorithm ensures that critical tasks always run on time.

14.2 Types of Real-Time Systems

Hard Real-Time Systems

  • Strict deadlines must always be met.
  • Failure to meet the deadline = System failure.
  • Used in life-critical systems where delays are unacceptable.
  • Examples:
    • Pacemakers (must send electrical pulses on time).
    • Airbag Systems in cars (must deploy immediately in an accident).
    • Industrial Robots (must work with precise timing).

Soft Real-Time Systems

  • Deadlines should be met, but occasional delays are acceptable.
  • System performance degrades if deadlines are missed, but it does not fail.
  • Examples:
    • Video Streaming (minor delays cause buffering but do not crash the system).
    • Online Gaming (small delays can cause lag but do not stop the game).
    • Stock Market Trading (fast execution is important but not life-threatening).

14.3 Scheduling Algorithms for Real-Time Systems

Rate Monotonic Scheduling (RMS) – Fixed Priority

  • Assigns higher priority to tasks with shorter execution periods.
  • Tasks run at regular intervals (periodic).
  • Used for predictable workloads.
  • Example:
    • A temperature sensor in an industrial plant checks temperature every 5ms.
    • Since it runs frequently, it gets higher priority.

Earliest Deadline First (EDF) – Dynamic Priority

  • The task with the closest deadline gets the highest priority.
  • More flexible than RMS but requires complex calculations.
  • Example:
    • A navigation system in an airplane must process incoming flight data before the next set of sensor readings arrives.
    • The earliest deadline (next sensor reading) gets the CPU first.

Least Slack Time First (LSTF) – Dynamic Priority

  • Slack Time = Deadline - Remaining Execution Time
  • The task with the least slack time is scheduled first.
  • Example:
    • A video streaming system where frames must be displayed at precise times.
    • Frames with the least remaining time before the next frame is needed are processed first.
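
A tiny illustrative Python sketch of how EDF and LSTF would each pick the next task from a ready set (the task names and times below are made up):

```python
# Each task: (name, remaining execution time, absolute deadline), times in ms.
ready = [("sensor", 2, 10), ("display", 4, 9), ("logger", 1, 20)]

now = 0
edf_choice = min(ready, key=lambda t: t[2])                  # earliest absolute deadline
lstf_choice = min(ready, key=lambda t: (t[2] - now) - t[1])  # least slack = (deadline - now) - remaining

print("EDF runs:", edf_choice[0])    # 'display' (deadline 9 is the earliest)
print("LSTF runs:", lstf_choice[0])  # 'display' (slack 9 - 0 - 4 = 5 is the smallest)
```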

14.4 Challenges in Real-Time Scheduling

Meeting Deadlines: Ensuring that all critical tasks complete on time.
Resource Management: CPU, memory, and I/O devices must be allocated efficiently.
Interrupt Handling: Quick response to external events (e.g., sensor inputs).
Power Consumption: Must be optimized in battery-operated devices like medical implants.

14.5 Advantages of Real-Time Scheduling

Guaranteed Response Time: Ensures critical tasks complete before the deadline.
Efficient Resource Utilization: CPU and memory are allocated optimally.
Predictable Performance: Essential for life-critical systems.

14.6 Disadvantages of Real-Time Scheduling

Complex Implementation: Requires advanced scheduling algorithms.
High Resource Usage: May require dedicated processors for real-time tasks.
Difficult to Scale: Works best for small, well-defined tasks.

14.7 When to Use Real-Time Scheduling?

🔹 Embedded Systems (e.g., smartwatches, medical devices).
🔹 Autonomous Vehicles (e.g., self-driving cars, drones).
🔹 Industrial Automation (e.g., robotic arms in factories).
🔹 Military & Aerospace Systems (e.g., missile guidance systems).


15. Thread Scheduling


15.1 Introduction to Thread Scheduling

  • Thread scheduling decides which thread gets CPU time when multiple threads exist.
  • It works inside a process, where multiple threads share the same resources (memory, files, etc.).
  • Helps in efficient multitasking and improves CPU utilization.

Example:

  • In a web browser, one thread handles user input (clicking buttons), another downloads files, and another displays content.
  • Thread scheduling ensures smooth execution of all these tasks.

15.2 Types of Thread Scheduling

Process vs. Thread Scheduling

Process Scheduling                             Thread Scheduling
Deals with entire processes                    Deals with individual threads within a process
More overhead (context switching is costly)    Less overhead (threads share memory)
Used for system-level multitasking             Used for efficient application performance

User-Level Thread Scheduling

  • The user program manages thread scheduling, not the OS.
  • Uses a thread library (e.g., POSIX Threads, Java Threads).
  • Fast but cannot take advantage of multiple processors.
  • Example: 
    • A game engine handles enemy movement, physics, and sound using threads managed by the game itself.

Kernel-Level Thread Scheduling

  • The operating system manages scheduling, not the user program.
  • Threads are assigned to different CPU cores for execution.
  • Slower than user-level scheduling but supports multicore processing.
  • Example: 
    • In Linux, the kernel schedules system threads to optimize performance on a multi-core CPU.

15.3 Multithreading Models

  • Defines how user threads map to kernel threads.

Many-to-One Model

  • Many user threads → One kernel thread
  • Simple but cannot use multiple CPUs.
  • Used in older systems.
  • Example: 
    • Green Threads in older Java versions.

One-to-One Model

  • Each user thread → One kernel thread
  • Supports parallel execution on multiple CPUs.
  • Used in modern OS like Windows, Linux, macOS.

Many-to-Many Model

  • Multiple user threads → Multiple kernel threads
  • Balances flexibility and efficiency.
  • Used in hybrid thread libraries.

15.4 Thread Scheduling Algorithms

Threads are scheduled using different policies.

Preemptive Scheduling

  • The OS switches threads forcefully to ensure fairness.
  • Used in multitasking operating systems.
  • Example: A music player pauses while the CPU processes a high-priority notification.

Non-Preemptive Scheduling

  • A thread runs until it completes or voluntarily yields.
  • Used in simpler scheduling models.
  • Example: A single-threaded calculator runs until the user closes it.
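
A small illustrative sketch using Python's threading module: with kernel-level preemptive scheduling, the OS decides how the two threads below interleave; the program does not control the order. (Note that CPython's GIL means only one thread executes Python bytecode at a time, but the OS still preempts and switches between threads.)

```python
# Two threads whose output interleaving is decided by the OS thread scheduler.
import threading

def worker(name, steps=3):
    for i in range(steps):
        print(f"{name}: step {i}")   # order of interleaving depends on the OS scheduler

t1 = threading.Thread(target=worker, args=("downloader",))
t2 = threading.Thread(target=worker, args=("ui-handler",))
t1.start(); t2.start()
t1.join(); t2.join()
```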

15.5 Thread Priorities

  • Threads have priority values to determine execution order.
  • Higher priority threads run before lower priority ones.
  • Dynamic priority adjustment prevents starvation of low-priority threads.
Example:
  • A real-time video rendering thread gets higher priority than a background file backup thread.

15.6 Advantages of Thread Scheduling

Faster Execution: Reduces overhead compared to process switching.
Efficient CPU Utilization: Threads run in parallel, making use of multi-core processors.
Better Responsiveness: UI remains active while background tasks run.

15.7 Disadvantages of Thread Scheduling

Thread Synchronization Issues: Multiple threads accessing shared resources can cause data inconsistency.
More Complex than Process Scheduling: Requires careful deadlock prevention.
CPU Starvation Risk: If thread priorities are not balanced, low-priority threads may never run.

15.8 When to Use Thread Scheduling?

🔹 Web Browsers (Loading multiple tabs simultaneously).
🔹 Gaming & Multimedia (Graphics rendering, AI, and input handling).
🔹 Cloud Computing (Efficiently handling user requests in servers).
🔹 Mobile Applications (Background sync while using other features).


