Unit II: Uninformed & Informed Search Strategies - INT428 Artificial Intelligence | B.Tech CSE Notes PDF | FineNotes4U


Unit II: Uninformed & Informed Search Strategies


⭐Uninformed Search Strategies:


Contents
  1. 1) Breadth-First Search (BFS)
    1. 1.1 What is Breadth-First Search (BFS)?
    2. 1.2 How BFS Works (Step-by-Step Process)
    3. 1.3 Example of BFS
    4. 1.4 Characteristics of BFS
    5. 1.5 Advantages of BFS
    6. 1.6 Disadvantages of BFS
    7. 1.7 Applications of BFS in AI
    8. 1.8 Conclusion
  2. 2) Depth-First Search (DFS)
    1. 2.1 What is Depth-First Search (DFS)?
    2. 2.2 How DFS Works (Step-by-Step Process)
    3. 2.3 Example of DFS
    4. 2.4 Characteristics of DFS
    5. 2.5 Advantages of DFS
    6. 2.6 Disadvantages of DFS
    7. 2.7 Applications of DFS in AI
    8. 2.8 Conclusion
  3. 3) Bi-Directional Search
    1. 3.1 What is Bi-Directional Search?
    2. 3.2 How Bi-Directional Search Works (Step-by-Step Process)
    3. 3.3 Example of Bi-Directional Search
    4. 3.4 Characteristics of Bi-Directional Search
    5. 3.5 Advantages of Bi-Directional Search
    6. 3.6 Disadvantages of Bi-Directional Search
    7. 3.7 Applications of Bi-Directional Search in AI
    8. 3.8 Conclusion
  4. 4) Iterative Deepening Search (IDS)
    1. 4.1 What is Iterative Deepening Search (IDS)?
    2. 4.2 Why Use Iterative Deepening Search?
    3. 4.3 How Iterative Deepening Search Works (Step-by-Step Process)
    4. 4.4 Example of Iterative Deepening Search
    5. 4.5 Characteristics of Iterative Deepening Search
    6. 4.6 Advantages of Iterative Deepening Search
    7. 4.7 Disadvantages of Iterative Deepening Search
    8. 4.8 Applications of Iterative Deepening Search in AI
    9. 4.9 Conclusion
  5. 1) Heuristic Functions
    1. 1.1 What is a Heuristic Function?
    2. 1.2 Why Use a Heuristic Function?
    3. 1.3 Properties of a Good Heuristic Function
    4. 1.4 Examples of Heuristic Functions
    5. 1.5 Types of Heuristic Functions
    6. 1.6 Applications of Heuristic Functions in AI
    7. 1.7 Conclusion
  6. 2) Generate and Test
    1. 2.1 What is Generate and Test?
    2. 2.2 How Generate and Test Works (Step-by-Step Process)
    3. 2.3 Example of Generate and Test
    4. 2.4 Characteristics of Generate and Test
    5. 2.5 Advantages of Generate and Test
    6. 2.6 Disadvantages of Generate and Test
    7. 2.7 Applications of Generate and Test in AI
    8. 2.8 Conclusion
  7. 3) Hill Climbing
    1. 3.1 What is Hill Climbing?
    2. 3.2 Why is it Called "Hill Climbing"?
    3. 3.3 How Hill Climbing Works (Step-by-Step Process)
    4. 3.4 Example of Hill Climbing
    5. 3.5 Types of Hill Climbing
    6. 3.6 Challenges in Hill Climbing
    7. 3.7 Advantages of Hill Climbing
    8. 3.8 Disadvantages of Hill Climbing
    9. 3.9 Applications of Hill Climbing in AI
    10. 3.10 Conclusion
  8. 4) Simulated Annealing
    1. 4.1 What is Simulated Annealing?
    2. 4.2 Why is it Called Simulated Annealing?
    3. 4.3 How Simulated Annealing Works (Step-by-Step Process)
    4. 4.4 Example of Simulated Annealing
    5. 4.5 Characteristics of Simulated Annealing
    6. 4.6 Advantages of Simulated Annealing
    7. 4.7 Disadvantages of Simulated Annealing
    8. 4.8 Applications of Simulated Annealing in AI
    9. 4.9 Conclusion
  9. 5) Best-First Search
    1. 5.1 What is Best-First Search?
    2. 5.2 Why Use Best-First Search?
    3. 5.3 How Best-First Search Works (Step-by-Step Process)
    4. 5.4 Example of Best-First Search
    5. 5.5 Heuristic Function in Best-First Search
    6. 5.6 Types of Best-First Search
    7. 5.7 Characteristics of Best-First Search
    8. 5.8 Advantages of Best-First Search
    9. 5.9 Disadvantages of Best-First Search
    10. 5.10 Applications of Best-First Search in AI
    11. 5.11 Conclusion
  10. 6) A* Algorithm
    1. 6.1 What is the A* Algorithm?
    2. 6.2 Why is the A* Algorithm Important?
    3. 6.3 A* Algorithm Formula
    4. 6.4 How A* Algorithm Works (Step-by-Step Process)
    5. 6.5 Example of A* Algorithm
    6. 6.6 Heuristic Functions Used in A*
    7. 6.7 Characteristics of A* Algorithm
    8. 6.8 Advantages of A* Algorithm
    9. 6.9 Disadvantages of A* Algorithm
    10. 6.10 Applications of A* Algorithm in AI
    11. 6.11 Conclusion
  11. 7) Constraint Satisfaction
    1. 7.1 What is Constraint Satisfaction?
    2. 7.2 Key Components of CSP
    3. 7.3 Types of Constraint Satisfaction Problems
    4. 7.4 Solving Constraint Satisfaction Problems
    5. 7.5 Example of Constraint Satisfaction Problems
    6. 7.6 Characteristics of CSP
    7. 7.7 Advantages of Constraint Satisfaction
    8. 7.8 Disadvantages of Constraint Satisfaction
    9. 7.9 Applications of Constraint Satisfaction in AI
    10. 7.10 Conclusion

1) Breadth-First Search (BFS)

1.1 What is Breadth-First Search (BFS)?

  • BFS is a graph-based search algorithm used to explore all possible paths from a starting point (initial state) to a goal state.
  • It expands all nodes at the current level before moving to the next level.
  • It follows the First-In, First-Out (FIFO) principle using a queue data structure.

1.2 How BFS Works (Step-by-Step Process)

  1. Start from the Initial State (Root Node).
  2. Add the initial state to a queue.
  3. Expand the first node in the queue (dequeue it) and generate its child nodes.
  4. Add all unvisited child nodes to the queue.
  5. Repeat the process until:
    • The goal state is found, OR
    • The queue is empty (meaning no solution exists).
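
The queue-driven loop above can be sketched in Python. The graph dictionary and the trick of storing whole paths in the queue are illustrative choices, not from the notes:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns the shortest path from start to goal, or None."""
    queue = deque([[start]])        # FIFO queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()      # dequeue the oldest (shallowest) node
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:          # add unvisited children
                visited.add(child)
                queue.append(path + [child])
    return None                     # queue empty: no solution exists

# The city graph from the example in section 1.3
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(graph, "A", "G"))  # ['A', 'C', 'G']
```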

1.3 Example of BFS

Example 1: Finding the Shortest Path in a Graph

Consider a graph with nodes representing cities and edges representing roads between them. We want to find the shortest route from City A to City G.
      A
     / \
    B   C
   /|   |\
  D E   F G

Step-by-step BFS traversal from A to G:

  1. Start at A, add its neighbors B and C to the queue.
  2. Expand B (next in queue), add its neighbors D and E.
  3. Expand C, add its neighbors F and G.
  4. The goal node G is found. Path: A → C → G.
BFS finds the shortest path in an unweighted graph.

Example 2: Solving a Maze

  • BFS is used in pathfinding algorithms like Google Maps and game AI.
  • If an AI agent needs to navigate a maze, BFS ensures it finds the shortest route to the exit.

1.4 Characteristics of BFS

| Characteristic | Description |
| --- | --- |
| Search Order | Explores all nodes level by level. |
| Data Structure Used | A queue (FIFO) to store nodes. |
| Completeness | Guaranteed to find a solution if one exists. |
| Optimality | Finds the shortest path in an unweighted graph. |
| Time Complexity | O(b^d), where b is the branching factor and d is the depth. |
| Space Complexity | O(b^d), since all frontier nodes are stored in memory. |

1.5 Advantages of BFS

✔ Finds the shortest path in an unweighted graph.
✔ Guaranteed to find a solution if one exists.
✔ Systematic and complete (explores all possibilities).

1.6 Disadvantages of BFS

✖ Consumes a lot of memory (stores all nodes at a level).
✖ Slow for deep search spaces (complex problems with large depths).
✖ Not efficient for problems where the solution is deep in the search tree.

1.7 Applications of BFS in AI

Shortest Path Finding – Google Maps, GPS navigation.
Social Networking – Finding connections on Facebook, LinkedIn.
AI in Games – Chess, Pac-Man AI movement.
Web Crawling – Search engines like Google use BFS to index web pages.
Network Broadcasting – Sending messages across a network.

1.8 Conclusion

  • BFS is a systematic, complete, and optimal search strategy for unweighted graphs.
  • It is widely used in AI applications like navigation, game development, and social network analysis.
  • However, its high memory usage makes it inefficient for problems with deep search trees.

2) Depth-First Search (DFS)

2.1 What is Depth-First Search (DFS)?

  • DFS is a graph-based search algorithm used to explore possible solutions by going as deep as possible before backtracking.
  • It follows the Last-In, First-Out (LIFO) principle using a stack (either explicitly or through recursion).
  • Unlike Breadth-First Search (BFS), DFS explores a complete branch before moving to another.

2.2 How DFS Works (Step-by-Step Process)

  1. Start from the Initial State (Root Node).
  2. Push the initial state onto a stack.
  3. Expand the top node (pop it from the stack) and generate its child nodes.
  4. Push all unvisited child nodes onto the stack.
  5. Repeat until:
    • The goal state is found, OR
    • The stack is empty (meaning no solution exists).
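
The stack-based loop above can be written as a short Python sketch; the graph dictionary is the example graph from the next section, and the implementation details are illustrative:

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack; returns a path or None.
    The path found is not necessarily the shortest."""
    stack = [[start]]               # LIFO stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()          # expand the most recently pushed node
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                stack.append(path + [child])
    return None                     # stack empty: no solution exists

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs(graph, "A", "G"))  # ['A', 'C', 'G'] -- C was pushed last, so it is popped first
```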

2.3 Example of DFS

Example 1: Searching in a Graph

Consider a graph with nodes representing cities and edges representing roads. We want to find a path from City A to City G using DFS.
      A
     / \
    B   C
   /|   |\
  D E   F G

Step-by-step DFS traversal from A to G:

  1. Start at A, push its neighbors B and C onto the stack.
  2. Pop C, push its neighbors F and G onto the stack.
  3. Pop G (goal found). Path: A → C → G.
DFS finds a solution, but it may not be the shortest path.

Example 2: Solving a Maze

  • DFS is used in game AI and pathfinding.
  • Imagine a rat in a maze looking for cheese. DFS helps the rat explore one complete path before trying another.

2.4 Characteristics of DFS

| Characteristic | Description |
| --- | --- |
| Search Order | Explores one branch completely before backtracking. |
| Data Structure Used | A stack (LIFO) or recursion. |
| Completeness | Not guaranteed to find a solution if cycles exist. |
| Optimality | Does not guarantee the shortest path. |
| Time Complexity | O(b^d), where b is the branching factor and d is the depth. |
| Space Complexity | O(b·d), less memory than BFS. |

2.5 Advantages of DFS

✔ Uses less memory than BFS.
✔ Efficient for deep solutions (e.g., puzzles, mazes).
✔ Works well when a solution is expected to be deep.

2.6 Disadvantages of DFS

✖ May get stuck in infinite loops (if cycles exist).
✖ Does not guarantee the shortest path.
✖ Not good for very deep search trees (risk of stack overflow).

2.7 Applications of DFS in AI

Solving Puzzles – Sudoku, maze-solving, word search.
AI in Games – Chess, decision trees.
Web Crawling – Search engines use DFS to index web pages.
Route Planning – Navigation in robotics and AI.
Topological Sorting – Used in scheduling and dependency resolution.

2.8 Conclusion

  • DFS is useful for deep search problems, but not always optimal.
  • It is memory-efficient compared to BFS but can get stuck in infinite loops.
  • Used in AI applications like games, pathfinding, and web crawling.

3) Bi-Directional Search

3.1 What is Bi-Directional Search?

  • Bi-Directional Search is a graph-based search algorithm that searches from both the start state and the goal state simultaneously until the two searches meet in the middle.
  • Instead of searching the entire space like BFS or DFS, it divides the problem into two smaller searches, making it more efficient.
  • It uses two search frontiers:
    • One expands forward from the start state.
    • One expands backward from the goal state.

3.2 How Bi-Directional Search Works (Step-by-Step Process)

  1. Initialize two search trees: One from the start state and one from the goal state.
  2. Expand the nodes from both trees simultaneously.
  3. Continue expanding until a common node is found, connecting the two paths.
  4. Merge both paths to form the complete solution.
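
The four steps above can be sketched with two breadth-first frontiers that grow toward each other. The parent maps and the path-stitching logic are illustrative choices (the notes do not prescribe an implementation), and the sketch assumes an undirected graph:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Bidirectional BFS; returns a shortest path or None.
    Assumes edges can be followed in both directions (undirected graph)."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}  # for path reconstruction
    fwd_q, bwd_q = deque([start]), deque([goal])

    def expand(queue, parent, other_parent):
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                if nbr in other_parent:   # the two frontiers meet here
                    return nbr
                queue.append(nbr)
        return None

    while fwd_q and bwd_q:
        meet = expand(fwd_q, fwd_parent, bwd_parent)
        if meet is None:
            meet = expand(bwd_q, bwd_parent, fwd_parent)
        if meet is not None:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

# Undirected version of the example graph
graph = {"A": ["B", "C"], "B": ["A", "D", "E"], "C": ["A", "F", "G"],
         "D": ["B"], "E": ["B"], "F": ["C"], "G": ["C"]}
print(bidirectional_search(graph, "A", "G"))  # ['A', 'C', 'G']
```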

3.3 Example of Bi-Directional Search

Example 1: Finding the Shortest Path Between Two Cities

  • Suppose we want to find the shortest route from City A to City G using Bi-Directional Search.
      A
     / \
    B   C
   /|   |\
  D E   F G

Step-by-step process:

  1. Start BFS from A and G simultaneously.
  2. Expand nodes from both sides:
    • From A → B, C
    • From G → F
  3. Continue expanding:
    • From B → D, E
    • From C → F
  4. Since F is found from both sides, the paths merge at F, forming the shortest route:
    A → C → F → G.
Bi-Directional Search finds the shortest path faster than BFS.

Example 2: Solving a Maze Faster

  • In maze-solving AI, Bi-Directional Search can explore both from start and exit to quickly find the connecting path.
  • Instead of covering the entire maze, it meets in the middle, saving time and memory.
3.4 Characteristics of Bi-Directional Search

| Characteristic | Description |
| --- | --- |
| Search Order | Expands from both the start and the goal. |
| Data Structure Used | Two queues (FIFO for BFS) or two stacks (LIFO for DFS). |
| Completeness | Guaranteed to find a solution if one exists. |
| Optimality | Finds the shortest path in an unweighted graph. |
| Time Complexity | O(b^(d/2)), better than O(b^d) for BFS. |
| Space Complexity | O(b^(d/2)), as only half the search space is stored. |
3.5 Advantages of Bi-Directional Search

✔ Faster than BFS and DFS – Searches only half the space.
✔ Uses less memory compared to BFS.
✔ Guaranteed to find the shortest path in unweighted graphs.
3.6 Disadvantages of Bi-Directional Search

✖ Difficult to implement – Requires maintaining two simultaneous searches.
✖ Needs the goal state in advance – Not useful for problems where the goal is unknown.
✖ Merging paths can be complex if the search spaces do not align perfectly.

3.7 Applications of Bi-Directional Search in AI

Pathfinding Algorithms – Used in Google Maps, GPS navigation.
Game AI – Finding shortest paths in game environments.
Robot Navigation – Optimizing routes for autonomous robots.
Artificial Intelligence Planning – Used in AI planning for automated decision-making.

3.8 Conclusion

  • Bi-Directional Search is one of the fastest search algorithms for finding shortest paths.
  • It is more efficient than BFS and DFS in large search spaces.
  • However, it is complex to implement and requires knowing the goal state beforehand.

4) Iterative Deepening Search (IDS)

4.1 What is Iterative Deepening Search (IDS)?

  • Iterative Deepening Search (IDS) is a search strategy that combines the benefits of Depth-First Search (DFS) and Breadth-First Search (BFS).
  • It repeatedly runs a depth-limited DFS, increasing the depth limit by one in each iteration until the goal is found.
  • IDS is useful when the depth of the solution is unknown.
4.2 Why Use Iterative Deepening Search?

  • DFS alone is not complete (may get stuck in infinite loops).
  • BFS alone is memory-intensive (stores all nodes in memory).
  • IDS provides the completeness of BFS and the memory efficiency of DFS.

4.3 How Iterative Deepening Search Works (Step-by-Step Process)

  1. Set a depth limit (initially 0).
  2. Run Depth-First Search (DFS) with this depth limit.
  3. If the goal is found, stop.
  4. If the goal is not found, increase the depth limit by 1.
  5. Repeat DFS with the new depth limit until the goal is found or no more nodes are left.
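
The outer loop over increasing depth limits can be sketched directly from the steps above; the recursive helper and the `max_depth` cutoff are illustrative choices, not from the notes:

```python
def depth_limited_dfs(graph, node, goal, limit, path):
    """DFS that never descends more than `limit` edges below `node`."""
    if node == goal:
        return path
    if limit == 0:
        return None                 # depth limit reached, back up
    for child in graph.get(node, []):
        result = depth_limited_dfs(graph, child, goal, limit - 1, path + [child])
        if result is not None:
            return result
    return None

def ids(graph, start, goal, max_depth=20):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit, [start])
        if result is not None:
            return result           # found at the shallowest possible depth
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(ids(graph, "A", "G"))  # ['A', 'C', 'G'], found at depth limit 2
```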

4.4 Example of Iterative Deepening Search

Example: Finding a Target Node in a Tree

Consider this tree where we want to find node G:
      A
     / \
    B   C
   /|   |\
  D E   F G

Step-by-step IDS traversal from A to G (Depth-Limited DFS at each level):

  • Depth Limit 0: Only explore A (Goal not found).
  • Depth Limit 1: Explore A → B, C (Goal not found).
  • Depth Limit 2: Explore A → B → D, E and A → C → F, G. Goal found at depth 2.
Iterative Deepening finds the solution with minimal memory usage.
4.5 Characteristics of Iterative Deepening Search

| Characteristic | Description |
| --- | --- |
| Search Order | Expands depth by depth like BFS but uses DFS at each level. |
| Data Structure Used | Recursion (DFS) with a depth limit. |
| Completeness | Guaranteed to find a solution if one exists. |
| Optimality | Finds the shortest path in an unweighted graph. |
| Time Complexity | O(b^d) (similar to BFS). |
| Space Complexity | O(b·d) (better than BFS, similar to DFS). |
4.6 Advantages of Iterative Deepening Search

✔ Memory-efficient (uses less memory than BFS).
✔ Guaranteed to find the shortest path like BFS.
✔ Complete and optimal for unweighted graphs.
4.7 Disadvantages of Iterative Deepening Search

✖ Recomputes nodes multiple times, making it slower than BFS in some cases.
✖ Not efficient for deep search trees with large branching factors.
✖ Not suitable for weighted graphs (A* or Uniform Cost Search is better).

4.8 Applications of Iterative Deepening Search in AI

Pathfinding Algorithms – Used in Google Maps, GPS navigation.
AI in Games – Chess AI uses IDS to evaluate possible moves.
Robot Navigation – Used in autonomous robots for decision-making.
Artificial Intelligence Planning – Helps AI plan actions efficiently.

4.9 Conclusion

  • Iterative Deepening Search is a balance between DFS and BFS.
  • It is efficient in memory usage while still finding the shortest path.
  • Used in AI applications like games, robotics, and navigation.

⭐Comparison of Uninformed Search Strategies



| Feature | Breadth-First Search (BFS) | Depth-First Search (DFS) | Bi-Directional Search | Iterative Deepening Search (IDS) |
| --- | --- | --- | --- | --- |
| Definition | Explores all nodes at the current depth level before going deeper. | Explores as deep as possible before backtracking. | Searches forward from the start and backward from the goal simultaneously. | Combines DFS with increasing depth limits to balance time and memory. |
| Search Type | Uninformed | Uninformed | Uninformed | Uninformed |
| Data Structure Used | Queue (FIFO – First In, First Out) | Stack (LIFO – Last In, First Out) or recursion | Two queues (one for forward search, one for backward search) | Stack (DFS) with a depth limit |
| Completeness | ✅ Yes, always finds a solution if one exists. | ❌ No, may get stuck in infinite loops. | ✅ Yes, if both searches meet at a common node. | ✅ Yes, finds a solution if one exists. |
| Optimality | ✅ Yes, guarantees the shortest path if all edges have equal cost. | ❌ No, does not guarantee the shortest path. | ✅ Yes, finds the shortest path if both searches progress equally. | ✅ Yes, finds the shortest path like BFS. |
| Time Complexity | O(b^d) (exponential; b is the branching factor, d is the depth) | O(b^d) (can be deep and inefficient) | O(b^(d/2)), much faster than BFS and DFS | O(b^d), similar to BFS but with repeated computation |
| Space Complexity | O(b^d) (stores all nodes at each level) | O(b·d) (stores only the path) | O(b^(d/2)), requires two search spaces | O(b·d), much better than BFS |
| Memory Usage | ❌ High (stores all nodes at each level) | ✅ Low (stores only the path) | ✅ Efficient (stores fewer nodes than BFS) | ✅ Low (memory-efficient like DFS) |
| Speed | ❌ Slow (explores all possibilities level by level) | ✅ Fast (reaches deep states quickly) | ✅ Faster than BFS and DFS (searches from both sides) | ✅ Efficient (DFS-style exploration but avoids deep searches) |
| Best Used For | Finding the shortest path in unweighted graphs | Exploring deep trees, solving puzzles | Pathfinding in large graphs (e.g., social networks, navigation) | Finding shortest paths while using less memory |
| Handles Large State Spaces? | ❌ No, consumes too much memory. | ✅ Yes, but may go too deep without finding a solution. | ✅ Yes, reduces the search space significantly. | ✅ Yes, memory-efficient approach. |
| Handles Infinite Loops? | ❌ No, unless cycle detection is used. | ❌ No, may enter infinite recursion. | ✅ Yes, meets in the middle, avoiding deep searches. | ✅ Yes, because depth increases gradually. |
| Example 1: Pathfinding in a Maze | Explores all possible moves level by level until it finds the goal. | Goes deep into one path before backtracking. | Searches from start and exit until both meet. | Uses DFS with increasing depth to find the shortest path. |
| Example 2: AI in Games (Chess) | Finds all possible moves step by step. | Goes deep into a single move sequence before checking others. | Searches from the start and goal moves, meeting in the middle. | Increases the search depth until the best move is found. |
| Example 3: Web Crawling (Search Engines) | Visits web pages level by level. | Crawls deeply into one section before backtracking. | Crawls from multiple directions at once. | Starts shallow and gradually goes deeper. |

⭐Informed Search Strategies:


1) Heuristic Functions

1.1 What is a Heuristic Function?

  • A heuristic function is a method used in Informed Search Strategies to estimate the cost or distance from a given state to the goal state.
  • It provides additional knowledge to improve the efficiency of the search algorithm.
  • Helps AI systems prioritize more promising paths instead of exploring blindly like in Uninformed Search Strategies.

1.2 Why Use a Heuristic Function?

  • Speeds up search algorithms by guiding them in the right direction.
  • Reduces the number of nodes explored, making the search more efficient.
  • Helps in finding approximate solutions for complex problems where exact solutions are too costly.

1.3 Properties of a Good Heuristic Function

A good heuristic function should have the following properties:
| Property | Description |
| --- | --- |
| Admissibility | The heuristic never overestimates the actual cost to reach the goal. |
| Consistency (Monotonicity) | The heuristic satisfies the triangle inequality: h(n) ≤ c(n, n′) + h(n′) for every successor n′ of n. |
| Efficiency | The heuristic is cheap to compute and does not slow down the search process. |

1.4 Examples of Heuristic Functions

Example 1: Heuristic for Pathfinding Problems

  • Manhattan Distance (Used in Grid-based problems)

    • Formula: h(n) = |x1 − x2| + |y1 − y2|
    • Example: Used in 2D grid navigation (e.g., Google Maps, Robotics).
  • Euclidean Distance (Straight-Line Distance)

    • Formula: h(n) = √((x1 − x2)² + (y1 − y2)²)
    • Example: Used in GPS navigation, autonomous vehicles.
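
Both distance heuristics translate directly into short functions; the point-tuple representation is an illustrative choice:

```python
import math

def manhattan(p, q):
    """Grid distance: only horizontal and vertical steps are allowed."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance between two points."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```

Note that Euclidean distance never exceeds Manhattan distance, which is why both are admissible for grid movement but estimate different step costs.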

Example 2: Heuristic for Solving the 8-Puzzle Problem

  • Hamming Distance: Counts the number of misplaced tiles compared to the goal state.
  • Manhattan Distance: Measures how far each tile is from its correct position.
Manhattan Distance is a better heuristic because it provides a more accurate estimate of the moves needed.
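
The two 8-puzzle heuristics can be compared in a few lines. The flat-tuple board encoding (0 for the blank tile) and the sample state are assumptions made for the sketch:

```python
def hamming(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_tiles(state, goal, width=3):
    """Sum over all tiles of the grid distance to each tile's goal position."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // width - goal_idx // width) \
               + abs(idx % width - goal_idx % width)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 shifted one cell right
print(hamming(state, goal))           # 2
print(manhattan_tiles(state, goal))   # 2
```

On this board the two heuristics agree; on boards where tiles are far from home, Manhattan distance gives a larger (and still admissible) estimate, which is why it guides the search better.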

1.5 Types of Heuristic Functions

  • Admissible Heuristic
    • Does not overestimate the cost to reach the goal.
    • Guarantees finding the optimal solution.
    • Example: Manhattan Distance in Grid Search Problems.
  • Inadmissible Heuristic
    • May overestimate the actual cost.
    • Can lead to faster solutions but may not be optimal.
    • Example: Multiplying Manhattan Distance by 1.5 in a pathfinding problem.
  • Relaxed Problem Heuristic
    • Simplifies the problem by removing some constraints.
    • Example: In the 8-puzzle game, a heuristic that assumes tiles can move anywhere freely.

1.6 Applications of Heuristic Functions in AI

Pathfinding Algorithms – Used in Google Maps, GPS navigation.
Game AI – Used in Chess, Pac-Man, and other strategy-based games.
Robot Navigation – Helps autonomous robots find efficient paths.
Puzzle Solving – Used in Sudoku, 8-Puzzle, and Rubik’s Cube solvers.

1.7 Conclusion

  • Heuristic functions make search algorithms smarter and more efficient by guiding them towards the goal instead of searching blindly.
  • Good heuristics help AI solve problems faster and with fewer resources.
  • Used in many AI applications like robotics, navigation, and games.

2) Generate and Test

2.1 What is Generate and Test?

  • Generate and Test is a simple trial-and-error search strategy used in Artificial Intelligence (AI) to find a solution.
  • It generates possible solutions and tests each one to check if it meets the goal conditions.
  • It is useful when no direct method is available to find a solution.

2.2 How Generate and Test Works (Step-by-Step Process)

  1. Generate a possible solution.
  2. Test if the solution satisfies the goal condition.
  3. If the solution is correct, stop.
  4. If not, generate a new solution and repeat the process.
  5. Continue until a valid solution is found or all possibilities are exhausted.
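
The generate/test loop above is only a few lines of code. The 3-digit-code problem below is a toy example invented for the sketch, not one of the examples in the notes:

```python
import itertools

def generate_and_test(candidates, is_goal):
    """Try each generated candidate in turn until one passes the test."""
    for candidate in candidates:      # generate phase
        if is_goal(candidate):        # test phase
            return candidate
    return None                       # all possibilities exhausted

# Toy problem: find the first 3-digit combination whose digits sum to 24
codes = itertools.product(range(10), repeat=3)
print(generate_and_test(codes, lambda c: sum(c) == 24))  # (6, 9, 9)
```

Because the generator enumerates every candidate systematically, this version is exhaustive; a random generator (as in the password-cracking example) would trade that guarantee for simplicity.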

2.3 Example of Generate and Test

Example 1: Solving a Puzzle (8-Puzzle Game)

  • AI generates different moves by shifting tiles.
  • It tests if the current arrangement matches the goal state.
  • If the puzzle is solved, the process stops; otherwise, another move is tested.

Example 2: Password Cracking

  • A hacker generates random passwords.
  • Each password is tested to see if it unlocks the system.
  • If the correct password is found, the process stops.

Example 3: Route Planning

  • AI generates different travel routes.
  • It tests if the selected route is the shortest and most efficient.
  • The process continues until the best route is found.

2.4 Characteristics of Generate and Test

| Characteristic | Description |
| --- | --- |
| Search Type | Trial-and-error method. |
| Data Structure Used | No specific structure; relies on solution generation. |
| Completeness | Complete only if the generator systematically enumerates every candidate in a finite space. |
| Optimality | ❌ May not always find the best solution. |
| Time Complexity | Depends on the number of possible solutions. |
| Space Complexity | Depends on the number of solutions stored. |

2.5 Advantages of Generate and Test

✔ Simple to implement – No complex algorithms needed.
✔ Works well for small problems.
✔ Useful when there is no direct method to find the solution.

2.6 Disadvantages of Generate and Test

✖ Inefficient for large problems – Too many possibilities to test.
✖ No guarantee of finding the best solution.
✖ Wastes time on invalid solutions.

2.7 Applications of Generate and Test in AI

Puzzle Solving – Used in Sudoku, 8-puzzle, and crosswords.
Route Optimization – Used in Google Maps for finding best routes.
AI in Games – Helps AI analyze different moves in games like Chess.
Scheduling and Planning – AI generates possible schedules and tests feasibility.

2.8 Conclusion

  • Generate and Test is a simple AI strategy that works by trial and error.
  • It is useful for small problems but inefficient for complex problems.
  • AI systems often combine Generate and Test with heuristic techniques to improve efficiency.

3) Hill Climbing

3.1 What is Hill Climbing?

  • Hill Climbing is an informed search algorithm that continuously moves toward a better state based on a heuristic function.
  • It is a type of local search algorithm that does not backtrack once it moves to a better state.
  • Goal: To reach the highest point (optimal solution) by taking small steps towards improvement.

3.2 Why is it Called "Hill Climbing"?

  • Imagine climbing a hill in foggy conditions where you can only see nearby steps.
  • You keep moving up (toward better solutions) until you reach the peak (best solution).

3.3 How Hill Climbing Works (Step-by-Step Process)

  1. Start with an initial state (solution).
  2. Evaluate the current state using a heuristic function.
  3. Generate neighboring states (possible next moves).
  4. Move to the best neighboring state (higher value in maximization problems, lower value in minimization problems).
  5. Repeat the process until no better neighboring state exists.
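
The five steps above can be sketched as steepest-ascent hill climbing. The one-dimensional toy landscape and the step function are assumptions made for illustration:

```python
def hill_climb(start, neighbors, score, max_steps=1000):
    """Steepest-ascent hill climbing: always move to the best neighbor;
    stop when no neighbor improves on the current state."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=score)  # evaluate all neighbors
        if score(best) <= score(current):          # no uphill move left
            return current
        current = best                             # climb (no backtracking)
    return current

# Toy landscape: maximize f(x) = -(x - 3)^2, which peaks at x = 3
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(0, step, f))  # 3
```

On a landscape with several peaks the same loop would stop at whichever local maximum it climbs first, which is exactly the limitation described in section 3.6.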

3.4 Example of Hill Climbing

Example 1: Solving the "Traveling Salesman Problem"

  • A salesman must visit N cities and return to the starting point using the shortest route.
  • Hill Climbing:
    • Start with a random route.
    • Swap two cities to create a new route.
    • Choose the route with the shortest total distance.
    • Repeat until no shorter route is found.

Example 2: Optimizing AI in Games

  • Chess AI evaluates possible moves and selects the move that gives the highest advantage.

3.5 Types of Hill Climbing

  • Simple Hill Climbing
    • Evaluates one neighboring state at a time.
    • Moves to the first better state found.
    • Limitation: May not find the best solution.
  • Steepest-Ascent Hill Climbing
    • Evaluates all neighboring states and moves to the best one.
    • Better than Simple Hill Climbing but requires more computation.
  • Stochastic Hill Climbing
    • Randomly selects one neighbor and moves if it improves the solution.
    • Less efficient but can avoid local maxima.

3.6 Challenges in Hill Climbing

  • Local Maxima
    • A local maximum is a peak that is not the highest in the overall search space.
    • Solution: Use Random Restart (restart the search from a new random point).
  • Plateau
    • A flat area in the search space where all neighboring states have the same value.
    • Solution: Introduce small random jumps to escape the plateau.
  • Ridges
    • A path with steep slopes on both sides, making it hard to move forward.
    • Solution: Use Simulated Annealing or a different search approach.

3.7 Advantages of Hill Climbing

✔ Simple and easy to implement.
✔ Requires less memory than other search strategies.
✔ Efficient for optimization problems.

3.8 Disadvantages of Hill Climbing

✖ Can get stuck in local maxima.
✖ Not guaranteed to find the best solution.
✖ May struggle in complex search spaces with many plateaus and ridges.

3.9 Applications of Hill Climbing in AI

Robot Path Planning – Helps robots find the shortest path to a destination.
Machine Learning – Used for optimizing model parameters.
AI in Games – Helps AI evaluate the best move in games like Chess.
Scheduling Problems – Used in exam or work schedule optimization.
Image Processing – Used in image enhancement algorithms.

3.10 Conclusion

  • Hill Climbing is a simple and efficient algorithm for optimization problems.
  • It works well for small search spaces but may get stuck in local maxima.
  • Best suited for AI applications like robotics, game AI, and machine learning.

4) Simulated Annealing

4.1 What is Simulated Annealing?

  • Simulated Annealing (SA) is an advanced local search algorithm used for optimization problems.
  • It is an improved version of Hill Climbing, but unlike Hill Climbing, it allows occasional moves to worse states to escape local maxima.
  • Inspired by annealing in metallurgy, where materials are heated and slowly cooled to achieve a stable structure.

4.2 Why is it Called Simulated Annealing?

  • In metal cooling, atoms move freely at high temperatures and settle into a stable, low-energy state as the temperature decreases.
  • Similarly, in Simulated Annealing, the algorithm starts with a high probability of accepting bad moves, and this probability decreases over time, allowing it to settle into an optimal solution.

4.3 How Simulated Annealing Works (Step-by-Step Process)

  1. Start with an initial solution.
  2. Set a high "temperature" (T).
  3. Generate a neighboring solution (small change to the current solution).
  4. Evaluate the new solution:
    • If it is better, accept it.
    • If it is worse, accept it with some probability (controlled by temperature).
  5. Reduce the temperature gradually (cooling schedule).
  6. Repeat steps 3-5 until the system cools down (T → 0) or a stopping condition is met.
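
The steps above can be sketched as a minimization loop. The acceptance rule exp(−Δ/T) and the geometric cooling schedule are the standard choices; the one-dimensional toy problem, starting temperature, and cooling rate are assumptions made for the sketch:

```python
import math, random

def simulated_annealing(start, neighbor, cost, t0=10.0, cooling=0.95, t_min=1e-3):
    """Minimize `cost`: always accept better moves, accept worse moves
    with probability exp(-delta / T), and cool T geometrically."""
    current, t = start, t0
    best = current
    while t > t_min:
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate          # accept (possibly a worse move)
        if cost(current) < cost(best):
            best = current               # remember the best state seen
        t *= cooling                     # cooling schedule
    return best

# Toy problem: minimize f(x) = (x - 7)^2 over the integers
random.seed(42)                          # for reproducibility
f = lambda x: (x - 7) ** 2
move = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, move, f))
```

Early on (high T) the loop wanders freely, even uphill in cost; as T shrinks, exp(−Δ/T) for a worse move approaches zero and the loop degenerates into plain hill climbing near the optimum.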

4.4 Example of Simulated Annealing

Example 1: Solving the "Traveling Salesman Problem"

  • The problem: A salesman must visit N cities and return to the start, using the shortest route.
  • Simulated Annealing Approach:
    • Start with a random route.
    • Swap two cities to create a new route.
    • If the new route is shorter, accept it.
    • If it is longer, accept it with some probability, based on temperature.
    • Gradually reduce the probability of accepting worse solutions.
    • The process stops when the temperature is near zero, and the best route is selected.

Example 2: Optimizing AI in Games

  • AI in chess or other strategy-based games can use Simulated Annealing to find better moves by exploring different possibilities without getting stuck in local optima.

4.5 Characteristics of Simulated Annealing

| Characteristic | Description |
| --- | --- |
| Search Type | A local search algorithm with a probabilistic element. |
| Uses a Heuristic? | ✅ Yes, it evaluates solutions with a heuristic (cost) function. |
| Escapes Local Maxima? | ✅ Yes, by allowing occasional bad moves. |
| Completeness | ✅ Can find an optimal solution if properly tuned. |
| Optimality | ✅ Can find the best solution with a sufficiently slow cooling schedule. |
| Time Complexity | Depends on the cooling schedule and the number of iterations. |
| Space Complexity | O(1), since only the current solution is stored at a time. |

4.6 Advantages of Simulated Annealing

✔ Can escape local maxima, plateaus, and ridges (better than Hill Climbing).
✔ Finds near-optimal solutions for complex problems.
✔ Uses very little memory (O(1) space complexity).
✔ Works well for large search spaces.

4.7 Disadvantages of Simulated Annealing

✖ Slow convergence – May take a long time to find the best solution.
✖ Performance depends on the cooling schedule – If not properly tuned, it may not find the optimal solution.
✖ May accept too many bad moves if the temperature is not reduced correctly.

4.8 Applications of Simulated Annealing in AI

Traveling Salesman Problem (TSP) – Optimizing delivery routes.
AI in Games – Helps AI players find better strategies.
Machine Learning – Used for hyperparameter tuning in neural networks.
Robot Motion Planning – Optimizing robot paths in navigation.
Scheduling Problems – Creating efficient exam or work schedules.

4.9 Conclusion

  • Simulated Annealing is a powerful optimization algorithm that overcomes the limitations of Hill Climbing by accepting occasional bad moves.
  • It is widely used in AI applications like games, scheduling, and robotics.
  • Proper tuning of the cooling schedule is crucial for finding the best solution efficiently.

5) Best-First Search

5.1 What is Best-First Search?

  • Best-First Search is an informed search algorithm that expands the most promising node first, using a heuristic function to evaluate states.
  • It combines ideas from Depth-First Search (DFS) and Breadth-First Search (BFS), but prioritizes nodes based on their estimated cost to reach the goal.
  • Uses a priority queue (instead of a simple queue or stack) to select the best node to expand next.

5.2 Why is Best-First Search Important?

  • Faster than uninformed search (like BFS or DFS) because it does not explore unnecessary paths.
  • Uses heuristic information to guide the search efficiently.
  • Works well for complex search problems like pathfinding and AI decision-making.

5.3 How Best-First Search Works (Step-by-Step Process)

  1. Start with an initial state (root node).
  2. Use a heuristic function to evaluate each node (estimate how close it is to the goal).
  3. Insert nodes into a priority queue, with the most promising node at the front.
  4. Expand the best node (remove from the queue) and generate its child nodes.
  5. Repeat the process until the goal node is reached or the queue is empty.
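
The steps above can be sketched with Python's `heapq` as the priority queue. The small tree and the h(n) values below are assumptions for illustration (h(G) = 0 because G is the goal):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy Best-First sketch: graph maps node -> list of neighbours,
    h maps node -> estimated cost to the goal."""
    frontier = [(h[start], start)]         # priority queue ordered by h(n)
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)  # expand the most promising node
        if node == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nb in graph[node]:
            if nb not in came_from:
                came_from[nb] = node
                heapq.heappush(frontier, (h[nb], nb))
    return None                            # queue empty: goal unreachable

# Assumed tree and heuristic values.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': [], 'E': [], 'F': [], 'G': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 9, 'E': 9, 'F': 2, 'G': 0}
print(greedy_best_first(graph, h, 'A', 'G'))  # ['A', 'C', 'G']
```

Note how C (h = 1) is expanded before B (h = 2), so the search heads straight for the goal.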

5.4 Examples of Best-First Search

Example 1: Finding the Shortest Path in a Graph

Consider a graph where we want to find the shortest path from A to G.
      A
     / \
    B   C
   /|   |\
  D E   F G

Step-by-step Best-First Search traversal from A to G:

  • Start at A, evaluate B and C.
  • If C is closer to G, expand C first.
  • Continue expanding the best nodes until G is reached.
Best-First Search prioritizes nodes leading to the goal, making it more efficient than BFS.

Example 2: AI in Pathfinding (Google Maps)

  • Suppose you want to travel from New York to Los Angeles.
  • Best-First Search will evaluate different routes and prioritize highways with the shortest estimated travel time.
5.5 Heuristic Functions Used in Best-First Search
  • The heuristic function h(n) estimates the cost from node n to the goal.
  • Common heuristics used:
    • Manhattan Distance (for grid-based navigation).
    • Euclidean Distance (for pathfinding in maps).
    • Hamming Distance (for solving puzzles).
5.6 Types of Best-First Search

  • Greedy Best-First Search
    • Selects the node that appears closest to the goal, based only on h(n).
    • Formula: f(n) = h(n)
    • Problem: May get stuck in local optima and not find the best solution.
  • A* Algorithm (Best-First with Cost Consideration)
    • Uses both actual cost (g(n)) and heuristic (h(n)).
    • Formula: f(n) = g(n) + h(n)
    • Solves the limitations of Greedy Best-First Search and finds the shortest path.
5.7 Characteristics of Best-First Search

Characteristic | Description
Search Type | Informed search using a heuristic function.
Data Structure Used | Priority queue for node selection.
Completeness | ✅ Complete in finite search spaces if a solution exists.
Optimality | ❌ Greedy Best-First is not optimal, but A* is.
Time Complexity | O(b^d) (depends on the heuristic).
Space Complexity | O(b^d) (stores all expanded nodes).
5.8 Advantages of Best-First Search

Faster than BFS and DFS because it prioritizes good paths.
Efficient for large search spaces (e.g., pathfinding, AI decision-making).
Can be improved with A* for optimality.
5.9 Disadvantages of Best-First Search

Greedy Best-First is not always optimal (may take a wrong path).
High memory usage (stores all visited nodes).
Performance depends on the heuristic function (a bad heuristic can mislead the search).

5.10 Applications of Best-First Search in AI

Google Maps & GPS Navigation – Finds the shortest route.
AI in Games (Chess, Pac-Man) – AI evaluates the best moves.
Robot Path Planning – Robots find the fastest route in an environment.
Web Crawling & Search Engines – Helps search engines rank pages efficiently.

5.11 Conclusion

  • Best-First Search is a powerful AI algorithm that uses heuristics to prioritize the best paths first.
  • Greedy Best-First is fast but not always optimal, while A* Search finds the best solution.
  • It is widely used in AI applications like navigation, robotics, and game AI.

6) A* Algorithm

6.1 What is the A* Algorithm?

  • A* (A-Star) is an informed search algorithm used for finding the shortest path in AI and computer science.
  • It combines the advantages of Best-First Search and Uniform Cost Search by considering both:
    • g(n) → The actual cost from the start node to the current node.
    • h(n) → The estimated cost from the current node to the goal (heuristic function).
  • It is widely used in navigation, robotics, and AI-based decision-making.

6.2 Why is the A* Algorithm Important?

  • More efficient than BFS and DFS because it prioritizes the best paths first.
  • Finds the shortest path while using a heuristic to speed up the search.
  • Widely used in AI applications like Google Maps, games, and robotics.

6.3 A* Algorithm Formula

f(n)=g(n)+h(n)
Where:
  • f(n) = Total estimated cost of the path through node n.
  • g(n) = Actual cost from the start node to n.
  • h(n) = Estimated cost from n to the goal (heuristic function).
A* selects the node with the lowest f(n); with an admissible heuristic this guarantees an optimal solution.

6.4 How the A* Algorithm Works (Step-by-Step Process)

  1. Start with the initial state (start node).
  2. Put it in a priority queue, sorted by f(n).
  3. Expand the node with the lowest f(n).
  4. Generate its neighboring nodes.
  5. Update g(n) and calculate f(n) for each neighbor.
  6. Repeat the process until the goal node is reached.
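
The steps above can be sketched as follows. The example graph, its edge costs, and the heuristic values are assumptions chosen so that h(n) never overestimates the true remaining cost (i.e., it is admissible):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* sketch. graph maps node -> {neighbour: edge_cost};
    h maps node -> admissible estimate of remaining cost to the goal."""
    open_set = [(h[start], start)]        # priority queue ordered by f(n)
    g = {start: 0}                        # g(n): actual cost from start to n
    came_from = {start: None}
    while open_set:
        _, node = heapq.heappop(open_set) # expand the node with lowest f(n)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1], g[goal]
        for nb, cost in graph[node].items():
            tentative = g[node] + cost    # updated g(n) for the neighbour
            if nb not in g or tentative < g[nb]:
                g[nb] = tentative
                came_from[nb] = node
                heapq.heappush(open_set, (tentative + h[nb], nb))  # f = g + h
    return None, float('inf')

# Assumed example: two routes from A to G, the cheaper one via C.
graph = {'A': {'B': 1, 'C': 4}, 'B': {'G': 7}, 'C': {'G': 2}, 'G': {}}
h = {'A': 5, 'B': 6, 'C': 2, 'G': 0}
path, cost = a_star(graph, h, 'A', 'G')
print(path, cost)  # ['A', 'C', 'G'] 6
```

Unlike Greedy Best-First Search, the queue is ordered by g(n) + h(n), so the cheaper route via C (total cost 6) beats the route via B (total cost 8).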

6.5 Example of the A* Algorithm

Example 1: Finding the Shortest Path in a Grid (Google Maps)

Imagine an AI trying to find the best route from point A to point G on a map:
      A
     / \
    B   C
   /|   |\
  D E   F G

Using A*:

  • The algorithm calculates f(n) = g(n) + h(n) for each node.
  • Expands the node with the smallest f(n) (choosing the best path).
  • The AI reaches G efficiently, avoiding unnecessary paths.
A* ensures the shortest and most efficient path.

Example 2: Solving the 8-Puzzle Problem

  • g(n) = Number of moves taken so far.
  • h(n) = Number of misplaced tiles, or the sum of each tile's Manhattan distance from its goal position.
  • A* finds the fastest way to solve the puzzle by minimizing f(n).
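
Both heuristic choices for the 8-puzzle can be computed directly. The puzzle state below (a flat 9-tuple, 0 for the blank) is an assumed example, two moves from the goal:

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles out of place (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h(n) = sum of each tile's horizontal + vertical distance from its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        # Row distance + column distance on the 3x3 board.
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # bottom row: 7 and 8 shifted one cell right
print(misplaced_tiles(state, goal))   # 2
print(manhattan(state, goal))         # 2
```

Both heuristics are admissible for the 8-puzzle, but Manhattan distance is usually the stronger (larger, hence more informative) of the two.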

6.6 Heuristic Functions Used in A*

  • Manhattan Distance: Used in grid-based pathfinding.
    • Formula: h(n) = |x1 - x2| + |y1 - y2|
  • Euclidean Distance: Used for straight-line pathfinding.
    • Formula: h(n) = √((x1 - x2)² + (y1 - y2)²)
Choosing the right heuristic is key to A*'s efficiency.
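
Both formulas translate directly to code; the coordinates are assumed to be (x, y) tuples:

```python
import math

def manhattan_distance(a, b):
    """Grid-based heuristic: |x1 - x2| + |y1 - y2|."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean_distance(a, b):
    """Straight-line heuristic: sqrt((x1 - x2)^2 + (y1 - y2)^2)."""
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

print(manhattan_distance((0, 0), (3, 4)))  # 7
print(euclidean_distance((0, 0), (3, 4)))  # 5.0
```

Euclidean distance never exceeds Manhattan distance, so both are admissible on a 4-connected grid, but Manhattan is the tighter (better) estimate there.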

6.7 Characteristics of the A* Algorithm

Characteristic | Description
Search Type | Informed search using a heuristic function.
Uses Heuristic? | ✅ Yes, considers both g(n) and h(n).
Optimality | ✅ Guaranteed to find the shortest path if h(n) is admissible.
Completeness | ✅ Always finds a solution if one exists.
Time Complexity | O(b^d) (depends on branching factor and depth).
Space Complexity | O(b^d) (stores all expanded nodes).

6.8 Advantages of the A* Algorithm

Guaranteed to find the optimal solution.
Faster than BFS and DFS in large search spaces.
Flexible and works well for real-world problems.

6.9 Disadvantages of the A* Algorithm

Requires more memory (stores all visited nodes).
Performance depends on the heuristic function.
Can be slow for large search spaces.

6.10 Applications of the A* Algorithm in AI

Google Maps & GPS Navigation – Finds the shortest route.
AI in Games (Pac-Man, Chess) – AI selects the best moves.
Robot Path Planning – Helps robots navigate efficiently.
Network Routing (Internet Protocols) – Optimizes data packet transmission.

6.11 Conclusion

  • A* is one of the best pathfinding algorithms, balancing cost and heuristic information.
  • It guarantees finding the shortest path if the heuristic is properly chosen.
  • Widely used in AI applications like navigation, robotics, and games.

7) Constraint Satisfaction

7.1 What is Constraint Satisfaction?

  • Constraint Satisfaction Problem (CSP) is a type of problem where a solution must satisfy a set of constraints (rules or conditions).
  • It is used in AI for problems like Sudoku, scheduling, and map coloring.
  • The goal is to find a valid solution that meets all given constraints.

7.2 Key Components of CSP

  • Variables
    • The set of elements that need to be assigned values.
    • Example: In Sudoku, the empty cells are the variables.
  • Domains
    • The possible values that a variable can take.
    • Example: In Sudoku, each cell has a domain of {1,2,3,4,5,6,7,8,9}.
  • Constraints
    • Rules that limit which values can be assigned to variables.
    • Example: In Sudoku, no two numbers in the same row, column, or 3×3 box can be the same.

7.3 Types of Constraint Satisfaction Problems

  • Unary Constraints
    • Constraints that affect a single variable.
    • Example: A student must be assigned a morning exam slot.
  • Binary Constraints
    • Constraints that involve two variables.
    • Example: Two students cannot be assigned the same exam room.
  • Higher-Order Constraints
    • Constraints that involve three or more variables.
    • Example: In map coloring, a country must have a different color than all its neighboring countries.

7.4 Solving Constraint Satisfaction Problems

  • Backtracking Algorithm
    • A brute force method that assigns values to variables and backtracks if a constraint is violated.
    • Example: Assigning colors to a map, checking constraints, and undoing if conflicts occur.
  • Forward Checking
    • Eliminates inconsistent values before assigning a variable.
    • Example: If a variable is assigned Red, remove Red from neighboring variables' domains.
  • Constraint Propagation (Arc Consistency)
    • Reduces the search space by enforcing constraints early.
    • Example: If only one number is possible for a Sudoku cell, assign it immediately.
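
The backtracking method described above can be sketched for the map coloring problem. This is a minimal illustration: the four-region map (based on part of Australia) and the three-color domain are assumed, and forward checking is omitted for brevity:

```python
def backtrack(assignment, variables, domains, neighbors):
    """Plain backtracking for map coloring: assign a color, check
    constraints, and undo the assignment if no color works."""
    if len(assignment) == len(variables):
        return assignment                  # every variable assigned: solved
    var = next(v for v in variables if v not in assignment)
    for color in domains[var]:
        # Constraint check: no already-colored neighbor may share this color.
        if all(assignment.get(nb) != color for nb in neighbors[var]):
            assignment[var] = color
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]            # undo and try the next color
    return None                            # no color works: backtrack

# Assumed map: WA, NT, SA all touch each other; Q touches NT and SA.
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q'], 'Q': ['NT', 'SA']}
variables = list(neighbors)
domains = {v: ['Red', 'Green', 'Blue'] for v in variables}
solution = backtrack({}, variables, domains, neighbors)
print(solution)  # {'WA': 'Red', 'NT': 'Green', 'SA': 'Blue', 'Q': 'Red'}
```

Forward checking would additionally prune each neighbor's domain as soon as a color is assigned, failing earlier when a domain becomes empty.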

7.5 Example of Constraint Satisfaction Problems

Example 1: Sudoku Puzzle

  • Variables: Empty cells.
  • Domain: {1,2,3,4,5,6,7,8,9}.
  • Constraints: No duplicate numbers in a row, column, or 3×3 box.
  • Solution Method: Backtracking with forward checking.

Example 2: Map Coloring Problem

  • Variables: Countries on a map.
  • Domain: {Red, Green, Blue}.
  • Constraints: Adjacent countries cannot have the same color.
  • Solution Method: Constraint Propagation (Arc Consistency).

Example 3: Exam Scheduling

  • Variables: Exam time slots.
  • Domain: Available time slots.
  • Constraints:
    • No student can have two exams at the same time.
    • Limited number of rooms.

7.6 Characteristics of CSP

Characteristic | Description
Search Type | Uses constraint-based filtering instead of blind search.
Uses Heuristic? | ✅ Yes, uses constraints to reduce the search space.
Completeness | ✅ Guaranteed to find a solution if one exists.
Optimality | ❌ Not guaranteed (depends on constraints).
Time Complexity | Depends on constraints (usually NP-Hard).
Space Complexity | O(n) (depends on variables and constraints).

7.7 Advantages of Constraint Satisfaction

Efficient for structured problems like scheduling and planning.
Eliminates unnecessary search paths early.
Guarantees a valid solution if one exists.

7.8 Disadvantages of Constraint Satisfaction

Difficult to model complex problems.
Time-consuming for large problems.
Some problems require approximation methods if constraints are too strict.

7.9 Applications of Constraint Satisfaction in AI

Scheduling Problems – Exam timetables, airline scheduling.
Puzzle Solving – Sudoku, Crossword puzzles.
Map Coloring – Ensuring neighboring regions have different colors.
Robot Motion Planning – Ensuring safe movement in an environment.

7.10 Conclusion

  • Constraint Satisfaction is a powerful AI technique used in structured decision-making problems.
  • It is widely applied in scheduling, planning, and optimization tasks.
  • Uses heuristics to reduce the search space, making it more efficient than brute-force approaches.

⭐Comparison of Informed Search Strategies



Feature | Heuristic Functions | Generate and Test | Hill Climbing | Simulated Annealing | Best-First Search | A* Algorithm | Constraint Satisfaction
Definition | A function that estimates the cost of reaching the goal. | A trial-and-error method to find a solution. | A search method that moves towards better solutions. | A search that allows worse moves to escape local maxima. | A heuristic-based search that selects the best node first. | A search that combines cost and heuristic information. | A method to find solutions while satisfying constraints.
Search Type | Not a search algorithm but guides search strategies. | Trial and error. | Local search. | Local search with randomness. | Informed search. | Informed search. | Constraint-based search.
Uses Heuristic? | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Uses constraints instead.
Search Strategy | Guides search algorithms. | Randomly generates solutions and tests them. | Moves towards better solutions but may get stuck. | Uses probability to avoid getting stuck. | Chooses the most promising node. | Considers both cost and heuristic for best path. | Uses constraints to eliminate invalid solutions.
Completeness | ✅ If used with a complete algorithm. | ❌ Not guaranteed. | ❌ May stop at local maxima. | ✅ Can find a solution with enough time. | ✅ If heuristic is good. | ✅ Always finds a solution if one exists. | ✅ Always finds a solution if one exists.
Optimality | ❌ Not always optimal. | ❌ No guarantee. | ❌ May stop at local maxima. | ✅ Can find an optimal solution if properly tuned. | ❌ Not always optimal. | ✅ Finds the best solution if heuristic is admissible. | ❌ May not be optimal if constraints limit the solution space.
Time Complexity | Depends on search algorithm. | High (if many solutions must be tested). | O(b^d) (depends on branching factor and depth). | O(n log n) (depends on cooling schedule). | O(b^d) (depends on branching factor). | O(b^d) (efficient if heuristic is good). | Depends on constraints (usually NP-Hard).
Space Complexity | Depends on search algorithm. | O(n) (stores all solutions tested). | O(b*d) (depends on depth). | O(1) (only stores current state). | O(b^d) (stores expanded nodes). | O(b^d) (high memory use). | O(n) (depends on number of constraints).
Handles Local Maxima? | ❌ No | ❌ No | ❌ No, gets stuck at local maxima. | ✅ Yes, can escape local maxima. | ❌ No, may select a bad path. | ✅ Yes, considers cost and heuristic. | ❌ No local maxima, uses constraints.
Example | Manhattan Distance in GPS navigation. | Password guessing by testing all possibilities. | Robot moving towards a target using a direct path. | AI optimizing chess moves by considering different possibilities. | Pac-Man AI choosing the nearest food pellet. | Google Maps route finding using shortest path. | Solving Sudoku by eliminating impossible values.
Best Used For | Guiding search algorithms efficiently. | Small problems where brute force is possible. | Optimization problems with simple constraints. | Complex problems requiring exploration of multiple solutions. | AI games, navigation, and decision-making. | Pathfinding, robotics, and AI planning. | Scheduling, puzzle solving, and rule-based decision-making.


