Unit I: Foundations of Algorithm - CSE408 Design and Analysis of Algorithms (B.Tech CSE Notes)


Unit I: Foundations of Algorithm


1. Algorithms

  • Definition:
    An algorithm is a step-by-step procedure to solve a problem in a finite number of steps.
  • Characteristics of a Good Algorithm:
    • Input: Takes zero or more inputs.
    • Output: Produces at least one output.
    • Definiteness: Each step is clear and well-defined.
    • Finiteness: The algorithm must end after a finite number of steps.
    • Effectiveness: Each step should be simple enough to be performed exactly.
  • Example:
    Algorithm to find the sum of two numbers:
    1. Start
    2. Read two numbers, A and B
    3. Compute the sum: SUM = A + B
    4. Print SUM
    5. Stop


2. Fundamentals of Algorithmic Problem Solving

2.1 Basic Algorithm Design Techniques

  1. Brute Force:

    • Tries all possible solutions and picks the best.
    • Example: Linear Search, Bubble Sort
  2. Divide and Conquer:

    • Breaks a problem into smaller subproblems, solves them, and combines results.
    • Example: Merge Sort, Quick Sort
  3. Greedy Method:

    • Makes the best choice at each step, assuming it leads to an optimal solution.
    • Example: Dijkstra’s Algorithm, Prim’s Algorithm
  4. Dynamic Programming:

    • Breaks the problem into overlapping subproblems and stores results to avoid redundant calculations.
    • Example: Fibonacci Series, Knapsack Problem
  5. Backtracking:

    • Tries all possibilities and backtracks when a solution is not feasible.
    • Example: N-Queens Problem, Sudoku Solver
  6. Branch and Bound:

    • Like backtracking, but it prunes unnecessary paths to improve efficiency.
    • Example: Travelling Salesman Problem

2.2 Analyzing Algorithms

  • Why Analyze?
    • To determine the efficiency of an algorithm in terms of time and space complexity.
  • Factors Affecting Performance:
    • Number of operations performed
    • Size of input data
    • Type of operations (arithmetic, comparisons, loops)


3. Fundamental Data Structures

3.1 Linear Data Structures

Arrays:

  • Stores elements in contiguous memory locations.
  • Example: int arr[5] = {1, 2, 3, 4, 5}

Real-Life Uses of Arrays:

  • Databases → Storing and managing structured data.
  • Games → Storing player scores, positions, inventory, etc.
  • Graphics Processing → Image representation as pixel grids.
  • Sorting & Searching → Algorithms like Bubble Sort, Binary Search.
  • Memory Management → Used in OS for process scheduling.

Types of Arrays

  • One-Dimensional (1D) Array
    • A list of elements stored in a single row or column.
    • Example: int arr[5] = {10, 20, 30, 40, 50};
  • Two-Dimensional (2D) Array
    • Represents tabular data (rows × columns).
    • Example (Matrix of size 3×3): int matrix[3][3] = {{1, 2, 3}, {4, 5, 6},{7, 8, 9}};
  • Multi-Dimensional Array (3D, 4D, etc.)
    • Higher-dimensional arrays used in advanced applications like 3D modeling.
    • Example (3D array): int cube[3][3][3]; 

Array Operations


(i) Insertion in an Array
  • Definition: Adding an element at a specific index.
  • Time Complexity:
    • Best Case: O(1) (if inserted at the end)
    • Worst Case: O(n) (if inserted at the beginning or middle, as shifting is needed)
  • Logic of Insertion:
    • Check if the array has space (for static arrays).
    • Shift elements to the right from the insertion index.
    • Insert the new element at the given index.
  • Example Explanation (Insert 50 at index 2 in arr[5] = {10, 20, 30, 40})
    • Initial Array: {10, 20, 30, 40}
    • Shift elements: {10, 20, __, 30, 40}
    • Insert 50 → {10, 20, 50, 30, 40}

(ii) Deletion in an Array

  • Definition: Removing an element from a given index.
  • Time Complexity:
    • Best Case: O(1) (if deleting the last element).
    • Worst Case: O(n) (if deleting the first element, as shifting is needed).
  • Logic of Deletion:
    • Check if the array is empty.
    • Find the index of the element to delete.
    • Shift elements to the left to overwrite the deleted value.
  • Example Explanation (Delete element at index 1 in arr[5] = {10, 20, 30, 40})
    • Initial Array: {10, 20, 30, 40}
    • Shift elements: {10, 30, 40, __}
    • Final Array: {10, 30, 40}

(iii) Searching in an Array


(A) Linear Search
  • Definition: Sequentially checking each element until a match is found.
  • Time Complexity:
    • Best Case: O(1) (if element is at the first index).
    • Worst Case: O(n) (if the element is not present).
  • Logic of Linear Search:
    • Start from the first element.
    • Compare each element with the target.
    • If found, return index; else, continue.
    • If the end is reached, return -1 (not found).
  • Example Explanation (Find 30 in arr[5] = {10, 20, 30, 40, 50})
    • Compare 10 → No
    • Compare 20 → No
    • Compare 30 → Found at index 2
(B) Binary Search (Only for Sorted Arrays)
  • Definition: Efficient searching by repeatedly dividing the search range in half.
  • Time Complexity: O(log n)
  • Logic of Binary Search:
    • Set low = 0 and high = size - 1.
    • Find mid = (low + high) / 2.
    • If arr[mid] == key, return index.
    • If arr[mid] > key, search in the left half (high = mid - 1).
    • If arr[mid] < key, search in the right half (low = mid + 1).
    • Repeat until low > high (element not found).
  • Example Explanation (Find 30 in sorted arr[5] = {10, 20, 30, 40, 50})
    • Step 1: low = 0, high = 4, mid = 2 → arr[2] == 30 (Found at index 2).

(iv) Sorting in an Array

(A) Bubble Sort (Simple but Slow)
  • Logic:
    1. Compare adjacent elements and swap if they are in the wrong order.
    2. Move the largest element to the end in each pass.
    3. Repeat for the remaining elements.
  • Example Explanation (Bubble Sort on arr[5] = {5, 3, 8, 6, 2})
    • Pass 1: (Move largest element to last)
      • Compare 5 & 3 → Swap → {3, 5, 8, 6, 2}
      • Compare 5 & 8 → No Swap → {3, 5, 8, 6, 2}
      • Compare 8 & 6 → Swap → {3, 5, 6, 8, 2}
      • Compare 8 & 2 → Swap → {3, 5, 6, 2, 8} ✅ (Largest number 8 moved to last)
    • Pass 2: (Move second largest number to second last)
      • Compare 3 & 5 → No Swap → {3, 5, 6, 2, 8}
      • Compare 5 & 6 → No Swap → {3, 5, 6, 2, 8}
      • Compare 6 & 2 → Swap → {3, 5, 2, 6, 8} ✅ (6 placed correctly)
    • Pass 3:
      • Compare 3 & 5 → No Swap → {3, 5, 2, 6, 8}
      • Compare 5 & 2 → Swap → {3, 2, 5, 6, 8} ✅
    • Pass 4:
      • Compare 3 & 2 → Swap → {2, 3, 5, 6, 8} ✅ (Fully sorted)
  • Final Sorted Array: {2, 3, 5, 6, 8}
    • Time Complexity: O(n²) (because of nested loops).
    • Best Case: O(n) (if already sorted, using an early exit when a pass makes no swaps).

(B) Quick Sort (Efficient Divide & Conquer)
  • Logic:
    1. Pick a pivot element.
    2. Partition the array so that smaller elements are on the left and larger elements are on the right.
    3. Recursively apply Quick Sort to both partitions.
  • Example Explanation (Quick Sort on arr[5] = {10, 80, 30, 90, 40})
    • Step 1: Select Pivot → Choose 40 as pivot
      • Compare 10 < 40 → Left ✅
      • Compare 80 > 40 → Right ✅
      • Compare 30 < 40 → Left ✅
      • Compare 90 > 40 → Right ✅
      • Swap Pivot (40) with first element in the right part (80)
      • Partitioned Array: {10, 30, 40, 90, 80} (40 is now in its correct position)
    • Step 2: Recursively apply Quick Sort on left and right parts.
      • Left Subarray {10, 30} → Sorted as {10, 30}
      • Right Subarray {90, 80} → Sorted as {80, 90}
  • Final Sorted Array: {10, 30, 40, 80, 90}
    • Time Complexity: O(n log n) (best and average case).
    • Worst Case: O(n²) (if always picking a bad pivot).

(v) Merging Two Arrays
  • Logic:
    1. Start from the first elements of both arrays.
    2. Compare and add the smaller one to the new array.
    3. If one array is exhausted, copy the remaining elements.
  • Time Complexity: O(n + m), for arrays of lengths n and m.
  • Example Explanation (Merge arr1[3] = {2, 5, 7} and arr2[4] = {1, 3, 6, 8})
    • Start with empty merged array {}
    • Compare 2 & 1 → 1 is smaller → {1}
    • Compare 2 & 3 → 2 is smaller → {1, 2}
    • Compare 5 & 3 → 3 is smaller → {1, 2, 3}
    • Compare 5 & 6 → 5 is smaller → {1, 2, 3, 5}
    • Compare 7 & 6 → 6 is smaller → {1, 2, 3, 5, 6}
    • Compare 7 & 8 → 7 is smaller → {1, 2, 3, 5, 6, 7}
    • Only 8 is left, add it → {1, 2, 3, 5, 6, 7, 8}
  • Final Merged Sorted Array: {1, 2, 3, 5, 6, 7, 8}

Linked Lists:

  • A Linked List is a linear data structure where elements are not stored in contiguous memory locations.
  • Instead of using an index (like arrays), each element (called a node) contains:
    • Data (actual value).
    • Pointer (address of the next node).
  • Unlike arrays, linked lists grow dynamically and do not require pre-allocation of memory.

Uses of Linked Lists

Linked lists are widely used in various applications due to their flexibility and dynamic memory allocation.

Real-World Applications of Linked Lists
  • Dynamic Memory Allocation

    • Unlike arrays, linked lists allow efficient insertion/deletion without shifting elements.
  • Implementing Stacks & Queues

    • Stacks (LIFO) and Queues (FIFO) can be implemented using linked lists instead of arrays.
  • Undo Functionality (Text Editors, Graphics Editors)

    • Linked lists help maintain a history of actions (Undo/Redo).
  • Navigation Systems (Web Browsers, File Explorers)

    • Doubly Linked Lists allow forward and backward traversal (e.g., browser history).
  • Graph Adjacency List Representation

    • Used to store edges of graphs dynamically (efficient for sparse graphs).
  • Memory Management (Garbage Collection)

    • Uses linked lists to track free memory blocks.

Types of Linked Lists

(i) Singly Linked List
  • Each node has one pointer (to the next node).
  • Traversal possible in one direction only.
  • Example Structure:
    [10 | *] → [20 | *] → [30 | *] → NULL
    ('NULL' represents the end of the list)
(ii) Doubly Linked List
  • Each node has two pointers:
    • Next Pointer: Points to the next node.
    • Previous Pointer: Points to the previous node.
  • Allows traversal in both directions.
  • Example Structure:
    NULL ← [10 | * | *] ↔ [20 | * | *] ↔ [30 | * | *] → NULL
(iii) Circular Linked List
  • Last node points back to the first node (making it circular).
  • Can be Singly Circular or Doubly Circular.
  • Example Structure (Singly Circular Linked List):
    [10 | *] → [20 | *] → [30 | *] → (back to the first node, 10)
(iv) Circular Doubly Linked List
  • Similar to a Doubly Linked List, but the last node connects back to the first.
  • Example Structure:
    [10 | * | *] ↔ [20 | * | *] ↔ [30 | * | *] ↔ (back to 10)

Operations on Linked Lists

(i) Insertion in a Linked List
  • Definition: Adding a node at the beginning, middle, or end of the linked list.
  • Time Complexity:
    • Best Case: O(1) (inserting at the beginning).
    • Worst Case: O(n) (inserting at the end, since the list must be traversed when no tail pointer is kept).
  • Logic of Insertion at the Beginning
    • Create a new node.
    • Set newNode.next = head.
    • Update head to point to newNode.
  • Logic of Insertion at the End
    • Traverse to the last node.
    • Set lastNode.next = newNode.
  • Logic of Insertion in the Middle (At a Specific Index)
    • Traverse to the position-1 node.
    • Set newNode.next = currentNode.next.
    • Set currentNode.next = newNode.
  • Example Explanation (Insert 50 at the beginning in Linked List: 10 → 20 → 30 → NULL)
    • New Node: [50 | *]
    • 50.next points to 10
    • Updated Linked List: 50 → 10 → 20 → 30 → NULL
(ii) Deletion in a Linked List
  • Definition: Removing a node from beginning, middle, or end.
  • Time Complexity:
    • Best Case: O(1) (deleting from the beginning).
    • Worst Case: O(n) (deleting the last node).
  • Logic of Deletion from Beginning
    • Store head in a temp variable.
    • Move head to the next node.
    • Delete the old head.
  • Logic of Deletion from End
    • Traverse to the second-last node.
    • Set secondLast.next = NULL.
  • Logic of Deletion from Middle (Specific Index)
    • Traverse to the node before the target.
    • Set prevNode.next = targetNode.next.
    • Delete the target node.
  • Example Explanation (Delete first node in Linked List: 10 → 20 → 30 → NULL)
    • Old head = 10
    • Move head to next node (20)
    • Updated Linked List: 20 → 30 → NULL
(iii) Searching in a Linked List
  • Definition: Finding a node with a given value.
  • Time Complexity: O(n) (as we need to traverse the list).
  • Logic of Searching
    • Start from head.
    • Compare each node’s data with the key.
    • If match found, return position.
    • If end reached, return not found.
  • Example Explanation (Search for 20 in Linked List: 10 → 20 → 30 → NULL)
    • Compare 10 → No Match
    • Compare 20 → Match Found at position 2
(iv) Reversing a Linked List
  • Definition: Changing the direction of pointers to reverse the order of elements.
  • Time Complexity: O(n).
  • Logic of Reversing a Linked List
    • Initialize prev = NULL, current = head.
    • Traverse through the list and reverse the next pointer of each node.
    • Update head to the last node.
  • Example Explanation (Reverse Linked List: 10 → 20 → 30 → NULL)
    • Step 1: 10.next = NULL
    • Step 2: 20.next = 10
    • Step 3: 30.next = 20
    • Updated List: 30 → 20 → 10 → NULL
(v) Merging Two Linked Lists
  • Definition: Combining two sorted linked lists into a single sorted list.
  • Time Complexity: O(n).
  • Logic of Merging Two Sorted Linked Lists
    • Compare head nodes of both lists.
    • Add the smaller node to the result list.
    • Move pointer forward in the list from which node was taken.
    • Repeat until all nodes are merged.
  • Example Explanation (Merge List1: 10 → 30, List2: 20 → 40)
    • Compare 10 & 20 → Take 10
    • Compare 30 & 20 → Take 20
    • Compare 30 & 40 → Take 30
    • Only 40 is left, add it
    • Final Merged List: 10 → 20 → 30 → 40

Stacks:

  • A Stack is a linear data structure that follows the LIFO (Last In, First Out) principle.
  • This means the last element added is the first one removed.
  • Example Analogy:
    • Think of a stack of plates in a cafeteria. You add plates to the top and remove plates from the top.
  • Key Features of Stacks:
    • Insertion & Deletion happen at the same end (Top).
    • Fast operations (O(1) for Push & Pop).
    • Used in recursion, expression evaluation, undo/redo, etc.

Uses of Stacks

  • Function Calls (Recursion)
    • The system uses a stack to manage function calls (Call Stack).
    • When a function is called, it is pushed onto the stack.
    • When it completes, it is popped from the stack.
  • Expression Evaluation (Postfix, Prefix, Infix)
    • Used in evaluating mathematical expressions and converting between notations (e.g., Infix to Postfix).
  • Undo & Redo Operations
    • Used in text editors, graphics editors, and browsers to track changes.
  • Backtracking Algorithms (Maze, DFS)
    • Used in Depth-First Search (DFS) and solving puzzles like N-Queens.
  • Balancing Parentheses & Syntax Checking
    • Used in parsing code to ensure balanced parentheses {[()]}.
  • Reversing Data
    • Used to reverse strings, arrays, and linked lists.
  • Browser History Navigation
    • Forward and backward navigation is implemented using stacks.

Types of Stacks

  • Static Stack (Implemented Using Arrays)
    • Uses a fixed-size array to store elements.
    • Fast access but has a fixed limit.
    • Memory needs to be pre-allocated.
  • Dynamic Stack (Implemented Using Linked List)
    • Uses linked list nodes to store elements.
    • No size limitation (memory allocated dynamically).
    • Efficient memory usage but slightly slower due to pointer management.

Operations on Stacks

(i) Push Operation (Insertion in Stack)
  • Definition: Adding an element to the top of the stack.
  • Time Complexity: O(1) (constant time).
  • Logic of Push Operation
    • Check if the stack is full (for an array-based stack).
    • Increment the top pointer to point to the next empty position.
    • Store the new element at the top position.
  • Example Explanation (Push 50 into Stack: {10, 20, 30})
    • Initial Stack: Top = 2, Stack = {10, 20, 30}
    • Increment Top → 3
    • Insert 50 at Top
    • Updated Stack: {10, 20, 30, 50}
(ii) Pop Operation (Deletion in Stack)
  • Definition: Removing an element from the top of the stack.
  • Time Complexity: O(1) (constant time).
  • Logic of Pop Operation
    • Check if the stack is empty (to avoid underflow).
    • Access the top element and store it in a temporary variable.
    • Decrement the top pointer to remove the element.
  • Example Explanation (Pop from Stack: {10, 20, 30, 50})
    • Initial Stack: Top = 3, Stack = {10, 20, 30, 50}
    • Remove 50 (Top Element)
    • Decrement Top → 2
    • Updated Stack: {10, 20, 30}
(iii) Peek Operation (Accessing the Top Element)
  • Definition: Retrieves the top element without removing it.
  • Time Complexity: O(1)
  • Logic of Peek Operation
    • Check if the stack is empty.
    • Return the top element.
  • Example Explanation (Peek Stack: {10, 20, 30})
    • Top Element is 30 → Return 30
(iv) Checking If the Stack Is Empty (Underflow Condition)
  • Definition: Determines if the stack contains any elements.
  • Time Complexity: O(1)
  • Logic to Check Empty Stack
    • If Top == -1, stack is empty.
    • Otherwise, it is not empty.
  • Example Explanation (Check Stack: {10, 20, 30})
    • Top = 2 → Stack not empty ✅
(v) Checking If the Stack Is Full (Overflow Condition)
  • Definition: Determines if the stack has reached its maximum capacity.
  • Time Complexity: O(1)
  • Logic to Check Full Stack (Array Implementation)
    • If Top == (size - 1), stack is full.
    • Otherwise, it is not full.
  • Example Explanation (Stack Size = 3, Stack: {10, 20, 30})
    • Top = 2 (Size-1) → Stack Full
(vi) Reversing a String Using Stack
  • Definition: Using a stack to reverse a given string.
  • Time Complexity: O(n)
  • Logic to Reverse a String
    • Push all characters of the string onto the stack.
    • Pop each character and append it to a new string.
  • Example Explanation (Reverse "HELLO")
    • Push: H → E → L → L → O
    • Pop: O → L → L → E → H
    • Reversed String: "OLLEH" ✅
(vii) Checking Balanced Parentheses Using Stack
  • Definition: Uses stack to check if an expression has balanced brackets.
  • Time Complexity: O(n)
  • Logic of Balanced Parentheses Check
    • Push opening brackets ((, {, [) onto the stack.
    • For closing brackets (), }, ]), check the top of the stack.
    • If matching opening bracket is found, pop it.
    • At the end, if stack is empty, brackets are balanced.
  • Example Explanation (Check "{[()]}")
    • Push: {[(
    • Pop: ( (matches ))
    • Pop: [ (matches ])
    • Pop: { (matches })
    • Stack is Empty → Balanced ✅
(viii) Implementing Two Stacks in One Array
  • Definition: Store two stacks in a single array without wasting space.
  • Time Complexity: O(1)
  • Logic of Two Stacks in One Array
    • Stack 1 grows from left to right.
    • Stack 2 grows from right to left.
    • If top1 and top2 cross, stack is full.
  • Example Explanation (Array of Size 6, Stack1 = {10, 20}, Stack2 = {50, 40})
    • {10, 20, __, __, 40, 50}

Queues:

  • A Queue is a linear data structure that follows the FIFO (First In, First Out) principle.
  • This means that the first element added is the first one to be removed.
  • Example Analogy:
    • Think of a queue at a bus stop. The person who arrives first gets on the bus first.
  • Key Features of Queues:
    • Insertion happens at the rear (enqueue operation).
    • Deletion happens at the front (dequeue operation).
    • Efficient operations (O(1) for both enqueue & dequeue).

Uses of Queues

  • Process Scheduling (Operating System)
    • CPU scheduling uses queues to manage processes (e.g., Round Robin scheduling).
  • Handling Requests in Servers
    • Web servers use queues to process multiple requests in order.
  • Print Job Management
    • Printers use queues to print documents in the order they were added.
  • Call Center & Customer Service
    • Incoming customer calls are queued and answered in order.
  • Graph Algorithms (BFS - Breadth-First Search)
    • Queues are used in BFS to explore nodes level by level.
  • Data Buffering (Streaming & Real-Time Processing)
    • Data streams (e.g., YouTube, live streaming) use queues to process data sequentially.
  • Network Packet Scheduling
    • Routers use queues to process packets efficiently before transmission.

Types of Queues

  • Simple Queue (Linear Queue)
    • Basic queue structure where:
      • Insertion happens at the rear.
      • Deletion happens at the front.
    • Problem: After dequeuing, empty spaces are not reused.
  • Circular Queue
    • Overcomes the limitation of a simple queue by making the rear wrap around when it reaches the end.
    • Efficient memory utilization.
  • Double-Ended Queue (Deque)
    • Allows insertion and deletion from both ends.
    • Types of Deque:
      • Input-restricted Deque: Insertion at one end, deletion at both ends.
      • Output-restricted Deque: Deletion at one end, insertion at both ends.
  • Priority Queue
    • Each element has a priority value, and the element with the highest priority is dequeued first.
    • Used in Dijkstra’s Algorithm, the A* Search Algorithm, CPU scheduling, etc.

Operations on Queues

(i) Enqueue Operation (Insertion in Queue)
  • Definition: Adding an element at the rear of the queue.
  • Time Complexity: O(1) (constant time).
  • Logic of Enqueue Operation
    • Check if the queue is full (overflow condition).
    • Increment rear pointer to the next available position.
    • Insert the new element at the rear.
  • Example Explanation (Enqueue 50 into Queue: {10, 20, 30})
    • Initial Queue: Front = 0, Rear = 2, Queue = {10, 20, 30}
    • Increment Rear → 3
    • Insert 50 at Rear
    • Updated Queue: {10, 20, 30, 50}
(ii) Dequeue Operation (Deletion from Queue)
  • Definition: Removing an element from the front of the queue.
  • Time Complexity: O(1) (constant time).
  • Logic of Dequeue Operation
    • Check if the queue is empty (underflow condition).
    • Retrieve the front element.
    • Increment the front pointer to remove the element.
  • Example Explanation (Dequeue from Queue: {10, 20, 30, 50})
    • Initial Queue: Front = 0, Rear = 3, Queue = {10, 20, 30, 50}
    • Remove 10 (Front Element)
    • Increment Front → 1
    • Updated Queue: {__, 20, 30, 50} (Front at 20)
(iii) Peek Operation (Accessing the Front Element)
  • Definition: Retrieves the front element without removing it.
  • Time Complexity: O(1)
  • Logic of Peek Operation
    • Check if the queue is empty.
    • Return the front element.
  • Example Explanation (Peek Queue: {20, 30, 50})
    • Front Element = 20 → Return 20
(iv) Checking If the Queue Is Empty (Underflow Condition)
  • Definition: Determines if the queue contains any elements.
  • Time Complexity: O(1)
  • Logic to Check Empty Queue
    • If Front == Rear + 1, queue is empty.
    • Otherwise, it is not empty.
  • Example Explanation (Check Queue: {20, 30, 50})
    • Front = 1, Rear = 3 → Queue not empty ✅
(v) Checking If the Queue Is Full (Overflow Condition)
  • Definition: Determines if the queue has reached its maximum capacity.
  • Time Complexity: O(1)
  • Logic to Check Full Queue (Array Implementation)
    • If Rear == Size - 1, queue is full.
    • Otherwise, it is not full.
  • Example Explanation (Queue Size = 4, Queue: {10, 20, 30, 50})
    • Rear = 3 (Size-1) → Queue Full ✅
(vi) Circular Queue Implementation
  • Problem with Simple Queue: Once elements are dequeued, unused spaces are wasted.
  • Solution: Circular Queue makes the rear wrap around.
  • Logic of Circular Queue
    • If Rear reaches the end, wrap it around using (Rear + 1) % size.
    • If Front reaches the end, wrap it around using (Front + 1) % size.
  • Example Explanation (Circular Queue of Size 5, Enqueue 50)
    • Initial Queue: {20, 30, 40, __, __}
    • Rear = 2 → Move Rear to 3
    • Insert 50
    • Updated Queue: {20, 30, 40, 50, __}
(vii) Implementing Two Queues in One Array
  • Definition: Store two queues in a single array without wasting space.
  • Time Complexity: O(1)
  • Logic of Two Queues in One Array
    • Queue 1 grows from left to right.
    • Queue 2 grows from right to left.
    • If rear1 and rear2 cross, queues are full.
  • Example Explanation (Array of Size 6, Queue1 = {10, 20}, Queue2 = {50, 40})
    • {10, 20, __, __, 40, 50}

3.2 Graphs and Trees

Graph:

  • A graph is a non-linear data structure consisting of nodes (vertices) and edges that connect them.
  • Unlike arrays, linked lists, stacks, and queues, graphs are used to represent complex relationships like networks, social connections, and maps.

Key Terms in Graphs

  • Vertex (Node): A point in the graph.
  • Edge: A connection between two vertices.
  • Adjacent Vertices: Two vertices connected by an edge.
  • Degree of a Vertex: The number of edges connected to a vertex.
  • Path: A sequence of edges connecting two vertices.
  • Connected Graph: A graph where every vertex is reachable from any other vertex.
  • Cycle: A path where the start and end vertex are the same.

Uses of Graphs

  • Social Networks
    • Users are represented as nodes, and friendships are edges.
  • Google Maps & GPS Navigation
    • Cities are nodes, and roads are edges.
    • Dijkstra’s Algorithm finds the shortest route.
  • Web Crawling (Google Search Engine)
    • Websites are nodes, and hyperlinks are edges.
  • Network Routing (Internet, Telecommunication)
    • Routers and switches form a graph network.
  • AI & Machine Learning (Graph Neural Networks)
    • Representing knowledge graphs and recommendation systems.
  • Dependency Graphs (Task Scheduling, Compilation)
    • Used in project management (e.g., dependencies in software builds).
  • Computer Vision & Image Processing
    • Pixels and object relationships are modeled as graphs.

Types of Graphs

(i) Based on Direction of Edges
  • Undirected Graph
    • Edges have no direction (e.g., friendship connections).
    • Example: {A—B—C} (A is connected to B, and B is connected to C).
  • Directed Graph (Digraph)
    • Edges have a direction (e.g., one-way roads, follower relationships on social media).
    • Example: {A → B → C} (A points to B, B points to C).
(ii) Based on Edge Weights
  • Unweighted Graph
    • All edges have equal weight.
    • Used in basic network connections.
  • Weighted Graph
    • Each edge has a weight or cost (e.g., road distances, network bandwidth).
    • Example: {A → B (5), B → C (2)} (A to B has weight 5, B to C has weight 2).
(iii) Special Types of Graphs
  • Cyclic Graph
    • A graph that contains cycles (paths that return to the starting node).
    • Example: {A → B → C → A}
  • Acyclic Graph
    • A graph without cycles (e.g., family trees).
  • Tree (Special Type of Graph)
    • A connected acyclic graph.
  • Bipartite Graph
    • A graph where vertices can be divided into two sets, and edges only connect nodes from different sets.

Graph Representation

(i) Adjacency Matrix (2D Array Representation)
  • Definition: A matrix representation where 1 represents an edge and 0 represents no edge.
  • Space Complexity: O(V²)
  • Best for: Dense graphs (many edges).
Example Explanation (Graph with 3 Nodes: A, B, C, where every pair of vertices is connected)

  A --- B
   \    |
    \   |
      C

Adjacency Matrix:

        A  B  C
    A [ 0, 1, 1 ]
    B [ 1, 0, 1 ]
    C [ 1, 1, 0 ]
(ii) Adjacency List (Linked List Representation)
  • Definition: Each node stores a list of its adjacent nodes.
  • Space Complexity: O(V + E)
  • Best for: Sparse graphs (fewer edges).

Example Explanation (Same Graph as Above)

A → {B, C}
B → {A, C}
C → {A, B}

Operations on Graphs

(i) Graph Traversal (Exploring Nodes & Edges)

(A) Breadth-First Search (BFS)
  • Explores all neighbors before moving to deeper levels.
  • Uses a queue (FIFO).
  • Example: Finding shortest path in an unweighted graph.
  • Logic of BFS
    • Start at the source node, mark it as visited.
    • Add it to a queue.
    • While the queue is not empty:
      • Remove a node.
      • Visit all unvisited neighbors and add them to the queue.
  • Example Explanation (BFS on Graph: {A → B, A → C, B → D})
    • Start at A → Visit B, C → Visit D
    • BFS Order: A → B → C → D ✅
(B) Depth-First Search (DFS)
  • Explores as far as possible along a branch before backtracking.
  • Uses a stack (LIFO) (or recursion).
  • Example: Detecting cycles in a graph.
  • Logic of DFS
    • Start at the source node, mark it as visited.
    • Recursively visit all unvisited neighbors.
    • If no neighbors left, backtrack.
  • Example Explanation (DFS on Graph: {A → B, A → C, B → D})
    • Start at A → Go to B → Go to D → Backtrack to A, Visit C
    • DFS Order: A → B → D → C ✅
(ii) Shortest Path Algorithms

(A) Dijkstra’s Algorithm (Greedy Approach)
  • Finds the shortest path from a source node to all other nodes.
  • Uses a priority queue.
  • Time Complexity: O((V + E) log V)
  • Logic of Dijkstra’s Algorithm
    • Assign 0 to the source node and infinity to all others.
    • Select the node with the smallest distance.
    • Update distances of its neighbors.
    • Repeat until all nodes are processed.
  • Example Explanation (Graph: A → B (4), A → C (2), B → C (1), B → D (5))
    • Start at A (0) → Update B (4), C (2)
    • Select C (2) → Update B (3)
    • Select B (3) → Update D (8) ✅
(iii) Detecting Cycles in Graphs
  • DFS-based cycle detection in directed graphs.
  • Union-Find algorithm in undirected graphs.

Tree:

  • A Tree is a non-linear hierarchical data structure that consists of nodes connected by edges.
  • Unlike graphs, a tree is acyclic and has a single root node.
  • It follows a parent-child relationship, where each node connects to one or more child nodes.
Key Properties of Trees
  • Root Node: The topmost node in a tree (has no parent).
  • Parent Node: A node that has child nodes.
  • Child Node: A node derived from another node.
  • Leaf Node: A node with no children.
  • Degree of a Node: The number of children a node has.
  • Height of a Tree: The longest path from the root to a leaf.
  • Depth of a Node: The number of edges from the root to that node.
  • Subtree: A tree formed by a node and its descendants.
Uses of Trees
  • Hierarchical Data Representation
    • Example: File Systems, Organization charts, Family Trees.
  • Binary Search Trees (BST) for Fast Searching
    • Used in databases and indexing for quick lookups.
  • Trie (Prefix Tree) for Auto-suggestions & Dictionary Implementation
    • Used in search engines, predictive text input.
  • Heap Data Structure for Priority Queues
    • Used in Dijkstra’s Algorithm, CPU Scheduling, and Memory Management.
  • Expression Trees for Compilers
    • Convert expressions like (A + B) * C into tree-based computations.
  • Artificial Intelligence & Decision Trees
    • Used in AI, machine learning algorithms, and game trees (like Chess & Tic-Tac-Toe).
  • Networking & Routing Protocols
    • Used in network spanning trees and routing (like Spanning Tree Protocol in networking).
Types of Trees
  • General Tree
    • A tree where each node can have any number of children.
  • Binary Tree
    • A tree where each node has at most 2 children (Left & Right).
    • Types of Binary Trees:
      • Full Binary Tree: Every node has 0 or 2 children.
      • Complete Binary Tree: All levels are filled except possibly the last.
      • Perfect Binary Tree: All leaf nodes are at the same level, and each node has exactly 2 children.
      • Balanced Binary Tree: Height difference between left and right subtrees is at most 1.
  • Binary Search Tree (BST)
    • A sorted binary tree where:
      • Left child < Parent < Right child
      • Allows fast searching, insertion, and deletion (O(log n) time complexity).
  • AVL Tree (Self-Balancing BST)
    • A BST with an extra balance factor (height difference of at most 1).
    • Used in databases where search speed is crucial.
  • B-Tree (Generalized BST for Databases)
    • A self-balancing search tree that allows multiple children per node.
    • Used in databases and file systems for efficient search.
  • Heap (Binary Heap Tree)
    • A complete binary tree used for priority queues.
    • Min-Heap: Parent node ≤ Children.
    • Max-Heap: Parent node ≥ Children.
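The min-heap behaviour described above can be sketched with Python's standard-library `heapq` module, which maintains a list as a binary min-heap (variable names here are illustrative):

```python
import heapq

# Min-heap via heapq: the smallest element is always at index 0.
tasks = []
heapq.heappush(tasks, 30)
heapq.heappush(tasks, 10)
heapq.heappush(tasks, 20)

print(heapq.heappop(tasks))  # → 10 (smallest element comes out first)
```

A max-heap can be simulated by pushing negated values and negating them again on pop, since `heapq` only provides a min-heap.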
Operations on Trees

(i) Tree Traversal (Visiting All Nodes in a Tree)
  • Depth-First Traversal (DFS)
    • Inorder (LNR - Left, Node, Right) → Used in Binary Search Trees.
    • Preorder (NLR - Node, Left, Right) → Used for copying trees.
    • Postorder (LRN - Left, Right, Node) → Used in deleting trees.
    • Logic of Inorder Traversal (LNR)
      • Visit the left subtree recursively.
      • Visit the current node.
      • Visit the right subtree recursively.
    • Example Explanation (Inorder on BST: {50, 30, 70, 20, 40, 60, 80})
      • Left subtree {20, 30, 40}
      • Root node {50}
      • Right subtree {60, 70, 80}
      • Inorder Traversal Output: {20, 30, 40, 50, 60, 70, 80}
  • Breadth-First Traversal (BFS or Level-Order Traversal)
    • Visits nodes level by level using a queue (FIFO).
    • Example: Used to find shortest paths in unweighted graphs (Dijkstra’s Algorithm extends the idea to weighted graphs).
    • Logic of Level-Order Traversal (BFS)
      • Enqueue the root node.
      • While queue is not empty:
        • Dequeue a node and visit it.
        • Enqueue its left and right children.
    • Example Explanation (BFS on Tree: {50, 30, 70, 20, 40, 60, 80})
      • Visit Level 1: {50}
      • Visit Level 2: {30, 70}
      • Visit Level 3: {20, 40, 60, 80}
      • BFS Traversal Output: {50, 30, 70, 20, 40, 60, 80}
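The inorder (LNR) and level-order (BFS) walkthroughs above can be sketched in Python on the same example tree (the `Node` class and function names are illustrative, not from the course text):

```python
from collections import deque

class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def inorder(node, out):
    """LNR: left subtree, then the node itself, then the right subtree."""
    if node:
        inorder(node.left, out)
        out.append(node.value)
        inorder(node.right, out)

def level_order(root):
    """BFS: visit nodes level by level using a FIFO queue."""
    out, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        out.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return out

# Build the example BST {50, 30, 70, 20, 40, 60, 80}
root = Node(50)
root.left, root.right = Node(30), Node(70)
root.left.left, root.left.right = Node(20), Node(40)
root.right.left, root.right.right = Node(60), Node(80)

result = []
inorder(root, result)
print(result)             # → [20, 30, 40, 50, 60, 70, 80]
print(level_order(root))  # → [50, 30, 70, 20, 40, 60, 80]
```

Note that inorder traversal of a BST always produces the keys in sorted order, which is why it is the traversal of choice for Binary Search Trees.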
(ii) Insertion in a Binary Search Tree (BST)
  • Definition: Adding a node while maintaining BST properties.
  • Time Complexity: O(log n) (if balanced).
  • Logic of Insertion in BST
    • If tree is empty, set root = new node.
    • If new value < root, go left.
    • If new value > root, go right.
    • Repeat recursively until insertion point is found.
  • Example Explanation (Insert 35 in BST: {50, 30, 70, 20, 40})
    • 35 < 50 → Go left
    • 35 > 30 → Go right
    • Insert 35 as right child of 30 ✅
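The recursive insertion logic can be sketched in Python (illustrative names; note that with {50, 30, 70, 20, 40} already in the tree, 35 descends left of 50, right of 30, then left of 40, since 40 is already 30's right child):

```python
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(node, value):
    """Descend recursively until an empty subtree is found."""
    if node is None:
        return Node(value)              # empty spot: place the new node here
    if value < node.value:
        node.left = insert(node.left, value)
    elif value > node.value:
        node.right = insert(node.right, value)
    return node                         # duplicates are ignored

# Build BST {50, 30, 70, 20, 40}, then insert 35
root = None
for v in (50, 30, 70, 20, 40, 35):
    root = insert(root, v)

print(root.left.right.left.value)  # → 35 (left child of 40)
```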
(iii) Deletion in a Binary Search Tree (BST)
  • Definition: Removing a node while maintaining BST properties.
  • Time Complexity: O(log n) on a balanced tree (O(n) in the worst case).
  • Logic of Deletion in BST
    • If node has no children, remove it.
    • If node has one child, replace it with its child.
    • If node has two children, replace it with inorder successor (smallest in right subtree).
  • Example Explanation (Delete 30 from BST: {50, 30, 70, 20, 40})
    • 30 has two children (20, 40)
    • Find inorder successor (40)
    • Replace 30 with 40 ✅
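The three deletion cases can be sketched in Python; in the two-children case the node's value is replaced by its inorder successor (the smallest key in the right subtree), which is then deleted recursively. Function names are illustrative:

```python
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(node, value):
    if node is None:
        return Node(value)
    if value < node.value:
        node.left = insert(node.left, value)
    else:
        node.right = insert(node.right, value)
    return node

def delete(node, value):
    """Remove `value` from the subtree, preserving BST ordering."""
    if node is None:
        return None
    if value < node.value:
        node.left = delete(node.left, value)
    elif value > node.value:
        node.right = delete(node.right, value)
    else:
        if node.left is None:           # no children, or right child only
            return node.right
        if node.right is None:          # left child only
            return node.left
        succ = node.right               # two children: find inorder
        while succ.left:                # successor = smallest key in
            succ = succ.left            # the right subtree
        node.value = succ.value
        node.right = delete(node.right, succ.value)
    return node

root = None
for v in (50, 30, 70, 20, 40):
    root = insert(root, v)
root = delete(root, 30)
print(root.left.value)  # → 40 (30 replaced by its inorder successor)
```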
(iv) Searching in a Binary Search Tree (BST)
  • Definition: Finding a node in the tree.
  • Time Complexity: O(log n) on a balanced tree (O(n) in the worst case).
  • Logic of Searching in BST
    • If root == key, return root.
    • If key < root, search in left subtree.
    • If key > root, search in right subtree.
  • Example Explanation (Search 40 in BST: {50, 30, 70, 20, 40})
    • 40 < 50 → Go left
    • 40 > 30 → Go right
    • Found 40 ✅
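The search logic above maps directly onto a short recursive Python sketch (illustrative names):

```python
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def search(node, key):
    """Compare against the current node and descend one side only."""
    if node is None or node.value == key:
        return node                     # found, or not present (None)
    if key < node.value:
        return search(node.left, key)
    return search(node.right, key)

# BST {50, 30, 70, 20, 40}
root = Node(50)
root.left, root.right = Node(30), Node(70)
root.left.left, root.left.right = Node(20), Node(40)

print(search(root, 40) is not None)  # → True  (40 found: left, then right)
print(search(root, 99) is not None)  # → False (99 is not in the tree)
```

Because each comparison discards an entire subtree, the number of comparisons is bounded by the height of the tree.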


4. Fundamentals of the Analysis of Algorithm Efficiency

4.1 Measuring Input Size

  • The input size affects the efficiency of an algorithm.
  • Examples of input size:
    • Number of elements in an array (for sorting/searching problems)
    • Number of vertices/edges (for graph problems)

4.2 Units for Measuring Running Time

  • Primitive Operations: Basic steps like addition, multiplication, comparison.
  • Execution Time Estimation: Count the number of steps required.

4.3 Order of Growth

  • Represents how the running time grows as input size increases.
  • Common Orders of Growth:
    • Constant: O(1) → Accessing an element in an array
    • Logarithmic: O(log n) → Binary Search
    • Linear: O(n) → Linear Search
    • Quadratic: O(n²) → Bubble Sort
    • Exponential: O(2ⁿ) → Recursive Fibonacci
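The difference between linear and logarithmic growth can be made concrete by counting steps; a sketch (the step-counting helpers are illustrative, not standard functions):

```python
def linear_search_steps(arr, key):
    """O(n): examine elements one by one, counting comparisons."""
    steps = 0
    for x in arr:
        steps += 1
        if x == key:
            break
    return steps

def binary_search_steps(arr, key):
    """O(log n): halve the search range each step (arr must be sorted)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == key:
            break
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1024))                # n = 1024, sorted
print(linear_search_steps(data, 1023))  # → 1024 (grows linearly with n)
print(binary_search_steps(data, 1023))  # at most 11, since log₂(1024) = 10
```

Doubling n adds roughly one step for binary search but doubles the steps for linear search, which is exactly what the orders of growth O(log n) and O(n) predict.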

4.4 Worst-Case, Best-Case, and Average-Case Efficiencies

  1. Worst Case (O):
    • Maximum time taken by an algorithm.
    • Example: Searching an element not present in an array (O(n) in Linear Search).
  2. Best Case (Ω):
    • Minimum time taken.
    • Example: Searching for the first element in an array (O(1) in Linear Search).
  3. Average Case (Θ):
    • Expected running time over all inputs.
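The three cases for Linear Search can be demonstrated by counting how many elements are examined before the search stops (a minimal sketch; the comparison-counting return value is an illustrative device):

```python
def linear_search(arr, key):
    """Return the number of elements examined before stopping."""
    for i, x in enumerate(arr):
        if x == key:
            return i + 1          # comparisons made to find the key
    return len(arr)               # key absent: every element was checked

data = [7, 3, 9, 1, 5]

print(linear_search(data, 7))   # → 1 (best case: key is first, Ω(1))
print(linear_search(data, 5))   # → 5 (key is last)
print(linear_search(data, 42))  # → 5 (worst case: key absent, O(n))
```

On average, a key that is present and equally likely to be anywhere is found after about n/2 comparisons, giving the Θ(n) average case.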


5. Asymptotic Notations and Basic Efficiency Classes

5.1 Asymptotic Notations

  • Used to describe the efficiency of an algorithm.
  1. Big-O Notation (O)
    • Upper bound (worst-case performance).
    • Example: O(n²) for Bubble Sort
  2. Big-Omega Notation (Ω)
    • Lower bound (best-case performance).
    • Example: Ω(n log n) for Quick Sort (best case).
  3. Big-Theta Notation (Θ)
    • Tight bound (the function is bounded above and below by the same rate).
    • Example: Θ(n log n) for Merge Sort

5.2 Useful Properties of Asymptotic Notations

  1. Transitivity:
    • If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
  2. Additivity:
    • f(n) + g(n) = O(max(f(n), g(n))).
  3. Multiplicative Property:
    • If f(n) = O(g(n)), then c × f(n) = O(g(n)) for any constant c > 0.

5.3 Using Limits for Comparing Orders of Growth

  • To compare two functions f(n) and g(n), use limits:
    • lim (n → ∞) [f(n) / g(n)] = 0 → f(n) grows slower than g(n).
    • lim (n → ∞) [f(n) / g(n)] = c (0 < c < ∞) → f(n) and g(n) grow at the same rate.
    • lim (n → ∞) [f(n) / g(n)] = ∞ → f(n) grows faster than g(n).
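As a worked example of the limit rule, comparing f(n) = log n with g(n) = n (using L'Hôpital's rule, since both tend to infinity):

```latex
\lim_{n \to \infty} \frac{\log n}{n}
  = \lim_{n \to \infty} \frac{1/n}{1}   % L'Hôpital's rule
  = 0
\quad\Rightarrow\quad \log n \text{ grows slower than } n,
```

which confirms the ordering of the logarithmic and linear efficiency classes in Section 4.3.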

