Unit II: Uninformed & Informed Search Strategies
⭐Uninformed Search Strategies:
- 1) Breadth-First Search (BFS)
- 2) Depth-First Search (DFS)
- 3) Bi-Directional Search
- 3.1 What is Bi-Directional Search?
- 3.2 How Bi-Directional Search Works (Step-by-Step Process)
- 3.3 Example of Bi-Directional Search
- 3.4 Characteristics of Bi-Directional Search
- 3.5 Advantages of Bi-Directional Search
- 3.6 Disadvantages of Bi-Directional Search
- 3.7 Applications of Bi-Directional Search in AI
- 3.8 Conclusion
- 4) Iterative Deepening Search (IDS)
- 4.1 What is Iterative Deepening Search (IDS)?
- 4.2 Why Use Iterative Deepening Search?
- 4.3 How Iterative Deepening Search Works (Step-by-Step Process)
- 4.4 Example of Iterative Deepening Search
- 4.5 Characteristics of Iterative Deepening Search
- 4.6 Advantages of Iterative Deepening Search
- 4.7 Disadvantages of Iterative Deepening Search
- 4.8 Applications of Iterative Deepening Search in AI
- 4.9 Conclusion
- 1) Heuristic Functions
- 2) Generate and Test
- 3) Hill Climbing
- 3.1 What is Hill Climbing?
- 3.2 Why is it Called "Hill Climbing"?
- 3.3 How Hill Climbing Works (Step-by-Step Process)
- 3.4 Example of Hill Climbing
- 3.5 Types of Hill Climbing
- 3.6 Challenges in Hill Climbing
- 3.7 Advantages of Hill Climbing
- 3.8 Disadvantages of Hill Climbing
- 3.9 Applications of Hill Climbing in AI
- 3.10 Conclusion
- 4) Simulated Annealing
- 4.1 What is Simulated Annealing?
- 4.2 Why is it Called Simulated Annealing?
- 4.3 How Simulated Annealing Works (Step-by-Step Process)
- 4.4 Example of Simulated Annealing
- 4.5 Characteristics of Simulated Annealing
- 4.6 Advantages of Simulated Annealing
- 4.7 Disadvantages of Simulated Annealing
- 4.8 Applications of Simulated Annealing in AI
- 4.9 Conclusion
- 5) Best-First Search
- 5.1 What is Best-First Search?
- 5.2 Why Use Best-First Search?
- 5.3 How Best-First Search Works (Step-by-Step Process)
- 5.4 Example of Best-First Search
- 5.5 Heuristic Function in Best-First Search
- 5.6 Types of Best-First Search
- 5.7 Characteristics of Best-First Search
- 5.8 Advantages of Best-First Search
- 5.9 Disadvantages of Best-First Search
- 5.10 Applications of Best-First Search in AI
- 5.11 Conclusion
- 6) A* Algorithm
- 6.1 What is the A* Algorithm?
- 6.2 Why is the A* Algorithm Important?
- 6.3 A* Algorithm Formula
- 6.4 How A* Algorithm Works (Step-by-Step Process)
- 6.5 Example of A* Algorithm
- 6.6 Heuristic Functions Used in A*
- 6.7 Characteristics of A* Algorithm
- 6.8 Advantages of A* Algorithm
- 6.9 Disadvantages of A* Algorithm
- 6.10 Applications of A* Algorithm in AI
- 6.11 Conclusion
- 7) Constraint Satisfaction
- 7.1 What is Constraint Satisfaction?
- 7.2 Key Components of CSP
- 7.3 Types of Constraint Satisfaction Problems
- 7.4 Solving Constraint Satisfaction Problems
- 7.5 Example of Constraint Satisfaction Problems
- 7.6 Characteristics of CSP
- 7.7 Advantages of Constraint Satisfaction
- 7.8 Disadvantages of Constraint Satisfaction
- 7.9 Applications of Constraint Satisfaction in AI
- 7.10 Conclusion
1) Breadth-First Search (BFS)
1.1 What is Breadth-First Search (BFS)?
- BFS is a graph-based search algorithm used to explore all possible paths from a starting point (initial state) to a goal state.
- It expands all nodes at the current level before moving to the next level.
- It follows the First-In, First-Out (FIFO) principle using a queue data structure.
1.2 How BFS Works (Step-by-Step Process)
- Start from the Initial State (Root Node).
- Add the initial state to a queue.
- Expand the first node in the queue (dequeue it) and generate its child nodes.
- Add all unvisited child nodes to the queue.
- Repeat the process until:
- The goal state is found, OR
- The queue is empty (meaning no solution exists).
1.3 Example of BFS
Example 1: Finding the Shortest Path in a Graph
Step-by-step BFS traversal from A to G:
- Start at A, add its neighbors B and C to the queue.
- Expand B (next in queue), add its neighbors D and E.
- Expand C, add its neighbors F and G.
- The goal node G is found. Path: A → C → G.
Example 2: Solving a Maze
- BFS is used in pathfinding algorithms like Google Maps and game AI.
- If an AI agent needs to navigate a maze, BFS ensures it finds the shortest route to the exit.
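The level-by-level expansion and the A→G traversal above can be sketched in Python (the adjacency-list graph and node names are illustrative, mirroring the example):

```python
from collections import deque

def bfs(graph, start, goal):
    """Return the shortest path from start to goal in an unweighted graph."""
    queue = deque([[start]])          # FIFO queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()        # dequeue the oldest path
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:  # skip already-visited nodes
                visited.add(child)
                queue.append(path + [child])
    return None                       # queue empty: no solution exists

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(bfs(graph, "A", "G"))           # ['A', 'C', 'G']
```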
1.4 Characteristics of BFS
Characteristic | Description |
---|---|
Search Order | Explores all nodes level by level. |
Data Structure Used | Uses a queue (FIFO) to store nodes. |
Completeness | ✅ Guaranteed to find a solution if one exists. |
Optimality | ✅ Finds the shortest path in an unweighted graph. |
Time Complexity | O(b^d) (where b is the branching factor, d is depth). |
Space Complexity | O(b^d) (requires storing all nodes in memory). |
1.5 Advantages of BFS
✔ Guaranteed to find a solution if one exists.
✔ Systematic and complete (explores all possibilities).
1.6 Disadvantages of BFS
✖ Slow for deep search spaces (complex problems with large depths).
✖ Not efficient for problems where the solution is deep in the search tree.
1.7 Applications of BFS in AI
✅ Social Networking – Finding connections on Facebook, LinkedIn.
✅ AI in Games – Chess, Pac-Man AI movement.
✅ Web Crawling – Search engines like Google use BFS to index web pages.
✅ Network Broadcasting – Sending messages across a network.
1.8 Conclusion
- BFS is a systematic, complete, and optimal search strategy for unweighted graphs.
- It is widely used in AI applications like navigation, game development, and social network analysis.
- However, its high memory usage makes it inefficient for problems with deep search trees.
2) Depth-First Search (DFS)
2.1 What is Depth-First Search (DFS)?
- DFS is a graph-based search algorithm used to explore possible solutions by going as deep as possible before backtracking.
- It follows the Last-In, First-Out (LIFO) principle using a stack (either explicitly or through recursion).
- Unlike Breadth-First Search (BFS), DFS explores a complete branch before moving to another.
2.2 How DFS Works (Step-by-Step Process)
- Start from the Initial State (Root Node).
- Push the initial state onto a stack.
- Expand the top node (pop it from the stack) and generate its child nodes.
- Push all unvisited child nodes onto the stack.
- Repeat until:
- The goal state is found, OR
- The stack is empty (meaning no solution exists).
2.3 Example of DFS
Example 1: Searching in a Graph
Step-by-step DFS traversal from A to G:
- Start at A, push its neighbors B and C onto the stack.
- Pop C, push its neighbors F and G onto the stack.
- Pop G (goal found). Path: A → C → G.
Example 2: Solving a Maze
- DFS is used in game AI and pathfinding.
- Imagine a rat in a maze looking for cheese. DFS helps the rat explore one complete path before trying another.
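The stack-based procedure above can be sketched in Python; the graph below is illustrative and matches the A→G traversal in Example 1:

```python
def dfs(graph, start, goal):
    """Iterative DFS with an explicit stack (LIFO); returns a path or None."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()            # pop the most recently pushed path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:  # push unvisited children
                stack.append(path + [child])
    return None                       # stack empty: no solution exists

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(dfs(graph, "A", "G"))           # ['A', 'C', 'G']
```

Because C is pushed after B, it sits on top of the stack and is expanded first, reproducing the traversal in Example 1.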
2.4 Characteristics of DFS
Characteristic | Description |
---|---|
Search Order | Explores one branch completely before backtracking. |
Data Structure Used | Uses a stack (LIFO) or recursion. |
Completeness | ❌ Not guaranteed to find a solution if cycles exist. |
Optimality | ❌ Does not guarantee the shortest path. |
Time Complexity | O(b^d) (where b is the branching factor, d is depth). |
Space Complexity | O(b*d) (less memory than BFS). |
2.5 Advantages of DFS
✔ Efficient for deep solutions (e.g., puzzles, mazes).
✔ Works well when a solution is expected to be deep.
2.6 Disadvantages of DFS
✖ Not good for very deep search trees (risk of stack overflow).
2.7 Applications of DFS in AI
✅ AI in Games – Chess, decision trees.
✅ Web Crawling – Search engines use DFS to index web pages.
✅ Route Planning – Navigation in robotics and AI.
✅ Topological Sorting – Used in scheduling and dependency resolution.
2.8 Conclusion
- DFS is useful for deep search problems, but not always optimal.
- It is memory-efficient compared to BFS but can get stuck in infinite loops.
- Used in AI applications like games, pathfinding, and web crawling.
3) Bi-Directional Search
3.1 What is Bi-Directional Search?
- Bi-Directional Search is a graph-based search algorithm that searches from both the start state and the goal state simultaneously until the two meet in the middle.
- Instead of searching the entire space like BFS or DFS, it divides the problem into two smaller searches, making it more efficient.
- It uses two search frontiers:
- One expands forward from the start state.
- One expands backward from the goal state.
3.2 How Bi-Directional Search Works (Step-by-Step Process)
- Initialize two search trees: One from the start state and one from the goal state.
- Expand the nodes from both trees simultaneously.
- Continue expanding until a common node is found, connecting the two paths.
- Merge both paths to form the complete solution.
3.3 Example of Bi-Directional Search
Example 1: Finding the Shortest Path Between Two Cities
- Suppose we want to find the shortest route from City A to City G using Bi-Directional Search.
Step-by-step process:
- Start BFS from A and G simultaneously.
- Expand nodes from both sides:
- From A → B, C
- From G → F
- Continue expanding:
- From B → D, E
- From C → F
- Since F is found from both sides, the paths merge at F, forming the shortest route:
A → C → F → G.
Example 2: Solving a Maze Faster
- In maze-solving AI, Bi-Directional Search can explore both from start and exit to quickly find the connecting path.
- Instead of covering the entire maze, it meets in the middle, saving time and memory.
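The meet-in-the-middle idea can be sketched in Python, assuming an undirected graph stored as an adjacency dict (the graph below reproduces the A → C → F → G example):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from both ends of an undirected graph; merge paths where they meet."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}  # for path reconstruction
    fwd_q, bwd_q = deque([start]), deque([goal])

    def expand(queue, parent, other_parent):
        node = queue.popleft()
        for nb in graph.get(node, []):
            if nb not in parent:
                parent[nb] = node
                if nb in other_parent:      # frontiers meet at this node
                    return nb
                queue.append(nb)
        return None

    while fwd_q and bwd_q:
        meet = expand(fwd_q, fwd_parent, bwd_parent)
        if meet is None:
            meet = expand(bwd_q, bwd_parent, fwd_parent)
        if meet is not None:                # merge the two half-paths
            path, n = [], meet
            while n is not None:            # walk back to the start
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]
            while n is not None:            # walk forward to the goal
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

graph = {"A": ["B", "C"], "B": ["A", "D", "E"], "C": ["A", "F"],
         "D": ["B"], "E": ["B"], "F": ["C", "G"], "G": ["F"]}
print(bidirectional_search(graph, "A", "G"))   # ['A', 'C', 'F', 'G']
```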
3.4 Characteristics of Bi-Directional Search
Characteristic | Description |
---|---|
Search Order | Expands from both the start and goal. |
Data Structure Used | Uses two queues (FIFO for BFS) or two stacks (LIFO for DFS). |
Completeness | ✅ Guaranteed to find a solution if one exists. |
Optimality | ✅ Finds the shortest path in an unweighted graph. |
Time Complexity | O(b^(d/2)), better than O(b^d) in BFS. |
Space Complexity | O(b^(d/2)), as only half the search space is stored. |
3.5 Advantages of Bi-Directional Search
✔ Uses less memory compared to BFS.
✔ Guaranteed to find the shortest path in unweighted graphs.
3.6 Disadvantages of Bi-Directional Search
✖ Needs the goal state in advance – Not useful for problems where the goal is unknown.
✖ Merging paths can be complex if the search spaces do not align perfectly.
3.7 Applications of Bi-Directional Search in AI
✅ Game AI – Finding shortest paths in game environments.
✅ Robot Navigation – Optimizing routes for autonomous robots.
✅ Artificial Intelligence Planning – Used in AI planning for automated decision-making.
3.8 Conclusion
- Bi-Directional Search is one of the fastest search algorithms for finding shortest paths.
- It is more efficient than BFS and DFS in large search spaces.
- However, it is complex to implement and requires knowing the goal state beforehand.
4) Iterative Deepening Search (IDS)
4.1 What is Iterative Deepening Search (IDS)?
- Iterative Deepening Search (IDS) is a search strategy that combines the benefits of Depth-First Search (DFS) and Breadth-First Search (BFS).
- It repeatedly runs Depth-Limited Search (DFS with a depth cutoff), increasing the depth limit by one in each iteration until the goal is found.
- IDS is useful when the depth of the solution is unknown.
4.2 Why Use Iterative Deepening Search?
- DFS alone is not complete (may get stuck in infinite loops).
- BFS alone is memory-intensive (stores all nodes in memory).
- IDS provides the completeness of BFS and the memory efficiency of DFS.
4.3 How Iterative Deepening Search Works (Step-by-Step Process)
- Set a depth limit (initially 0).
- Run Depth-First Search (DFS) with this depth limit.
- If the goal is found, stop.
- If the goal is not found, increase the depth limit by 1.
- Repeat DFS with the new depth limit until the goal is found or no more nodes are left.
4.4 Example of Iterative Deepening Search
Example: Finding a Target Node in a Tree
Step-by-step IDS traversal from A to G (Depth-Limited DFS at each level):
- Depth Limit 0: Only explore A (goal not found).
- Depth Limit 1: Explore A → B, C (goal not found).
- Depth Limit 2: Explore A → B → D, E and A → C → F, G → Goal found at depth 2.
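The iteration over increasing depth limits can be sketched in Python (the graph is illustrative, matching the A→G example above; `max_depth` is an assumed safety cap):

```python
def depth_limited_dfs(graph, node, goal, limit, path):
    """DFS that refuses to descend more than `limit` edges below `node`."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        found = depth_limited_dfs(graph, child, goal, limit - 1, path + [child])
        if found:
            return found
    return None

def ids(graph, start, goal, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(ids(graph, "A", "G"))   # ['A', 'C', 'G'] — found at depth limit 2
```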
4.5 Characteristics of Iterative Deepening Search
Characteristic | Description |
---|---|
Search Order | Expands nodes depth by depth like BFS but uses DFS at each level. |
Data Structure Used | Uses recursion (DFS) and a depth limit. |
Completeness | ✅ Guaranteed to find a solution if one exists. |
Optimality | ✅ Finds the shortest path in an unweighted graph. |
Time Complexity | O(b^d) (similar to BFS). |
Space Complexity | O(b*d) (better than BFS, similar to DFS). |
4.6 Advantages of Iterative Deepening Search
✔ Guaranteed to find the shortest path like BFS.
✔ Complete and optimal for unweighted graphs.
4.7 Disadvantages of Iterative Deepening Search
✖ Not efficient for deep search trees with large branching factors.
✖ Not suitable for weighted graphs (A* or Uniform Cost Search is better).
4.8 Applications of Iterative Deepening Search in AI
✅ AI in Games – Chess AI uses IDS to evaluate possible moves.
✅ Robot Navigation – Used in autonomous robots for decision-making.
✅ Artificial Intelligence Planning – Helps AI plan actions efficiently.
4.9 Conclusion
- Iterative Deepening Search is a balance between DFS and BFS.
- It is efficient in memory usage while still finding the shortest path.
- Used in AI applications like games, robotics, and navigation.
⭐Comparison of Uninformed Search Strategies
Feature | Breadth-First Search (BFS) | Depth-First Search (DFS) | Bi-Directional Search | Iterative Deepening Search (IDS) |
---|---|---|---|---|
Definition | Explores all nodes at the current depth level before going deeper. | Explores as deep as possible before backtracking. | Searches forward from the start and backward from the goal simultaneously. | Combines DFS with increasing depth limits to balance time and memory. |
Search Type | Uninformed | Uninformed | Uninformed | Uninformed |
Uses Data Structure | Queue (FIFO – First In, First Out) | Stack (LIFO – Last In, First Out) or Recursion | Two Queues (One for forward search, one for backward search) | Stack (DFS) with depth limit |
Completeness | ✅ Yes, always finds a solution if one exists. | ❌ No, may get stuck in infinite loops (in some cases). | ✅ Yes, if both searches meet at a common node. | ✅ Yes, finds a solution if one exists. |
Optimality | ✅ Yes, guarantees the shortest path if all edges have equal cost. | ❌ No, does not guarantee the shortest path. | ✅ Yes, finds the shortest path if both searches progress equally. | ✅ Yes, finds the shortest path like BFS. |
Time Complexity | O(b^d) (Exponential, where b is the branching factor and d is depth). | O(b^d) (Can be deep and inefficient). | O(b^(d/2)), much faster than BFS and DFS. | O(b^d), similar to BFS but with repeated computation. |
Space Complexity | O(b^d) (Stores all nodes at each level). | O(b*d) (Stores only the path). | O(b^(d/2)), requires two search spaces. | O(b*d), much better than BFS. |
Memory Usage | ❌ High (Stores all nodes at each level). | ✅ Low (Stores only the path). | ✅ Efficient (Stores fewer nodes than BFS). | ✅ Low (Memory-efficient like DFS). |
Speed | ❌ Slow (Explores all possibilities level by level). | ✅ Fast (Reaches deep states quickly). | ✅ Faster than BFS and DFS (Searches from both sides). | ✅ Efficient (DFS-style exploration but avoids deep searches). |
Best Used For | Finding the shortest path in unweighted graphs. | Exploring deep trees, solving puzzles. | Pathfinding in large graphs (e.g., social networks, navigation). | Finding shortest paths while using less memory. |
Handles Large State Spaces? | ❌ No, consumes too much memory. | ✅ Yes, but may go too deep without finding a solution. | ✅ Yes, reduces search space significantly. | ✅ Yes, memory-efficient approach. |
Handles Infinite Loops? | ❌ No, unless cycle detection is used. | ❌ No, may enter infinite recursion. | ✅ Yes, meets in the middle, avoiding deep searches. | ✅ Yes, because depth increases gradually. |
Example 1: Pathfinding in a Maze | Explores all possible moves level by level until it finds the goal. | Goes deep into one path before backtracking. | Searches from start and exit until both meet. | Uses DFS with increasing depth to find the shortest path. |
Example 2: AI in Games (Chess) | Finds all possible moves step by step. | Goes deep into a single move sequence before checking others. | Searches from the start and goal moves, meeting in the middle. | Increases the search depth until the best move is found. |
Example 3: Web Crawling (Search Engines) | Visits web pages level by level. | Crawls deeply into one section before backtracking. | Crawls from multiple directions at once. | Starts shallow and gradually goes deeper. |
⭐Informed Search Strategies:
1) Heuristic Functions
1.1 What is a Heuristic Function?
- A heuristic function is a method used in Informed Search Strategies to estimate the cost or distance from a given state to the goal state.
- It provides additional knowledge to improve the efficiency of the search algorithm.
- Helps AI systems prioritize more promising paths instead of exploring blindly like in Uninformed Search Strategies.
1.2 Why Use a Heuristic Function?
- Speeds up search algorithms by guiding them in the right direction.
- Reduces the number of nodes explored, making the search more efficient.
- Helps in finding approximate solutions for complex problems where exact solutions are too costly.
1.3 Properties of a Good Heuristic Function
Property | Description |
---|---|
Admissibility | The heuristic never overestimates the actual cost to reach the goal. |
Consistency (Monotonicity) | The heuristic satisfies the triangle inequality: h(n) ≤ c(n, n′) + h(n′) for every successor n′, so the estimate never drops faster than the actual step cost. |
Efficiency | The heuristic is easy to compute and does not slow down the search process. |
1.4 Examples of Heuristic Functions
Example 1: Heuristic for Pathfinding Problems
Manhattan Distance (Used in Grid-based problems)
- Formula: h(n) = |x₁ − x₂| + |y₁ − y₂|
- Example: Used in 2D grid navigation (e.g., Google Maps, Robotics).
Euclidean Distance (Straight-Line Distance)
- Formula: h(n) = √((x₁ − x₂)² + (y₁ − y₂)²)
- Example: Used in GPS navigation, autonomous vehicles.
Example 2: Heuristic for Solving the 8-Puzzle Problem
- Hamming Distance: Counts the number of misplaced tiles compared to the goal state.
- Manhattan Distance: Measures how far each tile is from its correct position.
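The heuristics named in these examples can be sketched in Python (the coordinates and puzzle states below are illustrative; tiles are stored as a flat tuple with 0 as the blank):

```python
import math

def manhattan(p, q):
    """Grid distance: horizontal steps plus vertical steps."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def hamming(state, goal):
    """Number of misplaced tiles (blank excluded) for the 8-puzzle."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

print(manhattan((1, 2), (4, 6)))   # 7
print(euclidean((0, 0), (3, 4)))   # 5.0
print(hamming((1, 2, 3, 4, 5, 6, 8, 7, 0),
              (1, 2, 3, 4, 5, 6, 7, 8, 0)))   # 2 misplaced tiles
```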
1.5 Types of Heuristic Functions
- Admissible Heuristic
- Does not overestimate the cost to reach the goal.
- Guarantees finding the optimal solution.
- Example: Manhattan Distance in Grid Search Problems.
- Inadmissible Heuristic
- May overestimate the actual cost.
- Can lead to faster solutions but may not be optimal.
- Example: Multiplying Manhattan Distance by 1.5 in a pathfinding problem.
- Relaxed Problem Heuristic
- Simplifies the problem by removing some constraints.
- Example: In the 8-puzzle game, a heuristic that assumes tiles can move anywhere freely.
1.6 Applications of Heuristic Functions in AI
✅ Game AI – Used in Chess, Pac-Man, and other strategy-based games.
✅ Robot Navigation – Helps autonomous robots find efficient paths.
✅ Puzzle Solving – Used in Sudoku, 8-Puzzle, and Rubik’s Cube solvers.
1.7 Conclusion
- Heuristic functions make search algorithms smarter and more efficient by guiding them towards the goal instead of searching blindly.
- Good heuristics help AI solve problems faster and with fewer resources.
- Used in many AI applications like robotics, navigation, and games.
2) Generate and Test
2.1 What is Generate and Test?
- Generate and Test is a simple trial-and-error search strategy used in Artificial Intelligence (AI) to find a solution.
- It generates possible solutions and tests each one to check if it meets the goal conditions.
- It is useful when no direct method is available to find a solution.
2.2 How Generate and Test Works (Step-by-Step Process)
- Generate a possible solution.
- Test if the solution satisfies the goal condition.
- If the solution is correct, stop.
- If not, generate a new solution and repeat the process.
- Continue until a valid solution is found or all possibilities are exhausted.
2.3 Example of Generate and Test
Example 1: Solving a Puzzle (8-Puzzle Game)
- AI generates different moves by shifting tiles.
- It tests if the current arrangement matches the goal state.
- If the puzzle is solved, the process stops; otherwise, another move is tested.
Example 2: Password Cracking
- A hacker generates random passwords.
- Each password is tested to see if it unlocks the system.
- If the correct password is found, the process stops.
Example 3: Route Planning
- AI generates different travel routes.
- It tests if the selected route is the shortest and most efficient.
- The process continues until the best route is found.
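The generate-then-test loop itself is tiny; a minimal Python sketch follows, using the password example (the `secret` value and candidate range are toy values for illustration):

```python
def generate_and_test(test, candidates):
    """Try candidates one by one; return the first that passes the test."""
    for candidate in candidates:   # generate a possible solution
        if test(candidate):        # test it against the goal condition
            return candidate
    return None                    # all possibilities exhausted

# Toy example: "crack" a 3-digit numeric code by exhaustive generation.
secret = 427
found = generate_and_test(lambda n: n == secret, range(1000))
print(found)   # 427
```

In practice the generator is rarely exhaustive like this; it is usually combined with heuristics to propose promising candidates first, as the conclusion below notes.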
2.4 Characteristics of Generate and Test
Characteristic | Description |
---|---|
Search Type | Trial-and-error method. |
Data Structure Used | No specific structure, relies on solution generation. |
Completeness | ✅ Guaranteed to find a solution if one exists, provided the generator can enumerate all candidates. |
Optimality | ❌ May not always find the best solution. |
Time Complexity | Depends on the number of possible solutions. |
Space Complexity | Depends on the number of solutions stored. |
2.5 Advantages of Generate and Test
✔ Works well for small problems.
✔ Useful when there is no direct method to find the solution.
2.6 Disadvantages of Generate and Test
✖ No guarantee of finding the best solution.
✖ Wastes time on invalid solutions.
2.7 Applications of Generate and Test in AI
✅ Puzzle Solving – Used in Sudoku, 8-puzzle, and crosswords.
✅ Route Optimization – Used in Google Maps for finding best routes.
✅ AI in Games – Helps AI analyze different moves in games like Chess.
✅ Scheduling and Planning – AI generates possible schedules and tests feasibility.
2.8 Conclusion
- Generate and Test is a simple AI strategy that works by trial and error.
- It is useful for small problems but inefficient for complex problems.
- AI systems often combine Generate and Test with heuristic techniques to improve efficiency.
3) Hill Climbing
3.1 What is Hill Climbing?
- Hill Climbing is an informed search algorithm that continuously moves toward a better state based on a heuristic function.
- It is a type of local search algorithm that does not backtrack once it moves to a better state.
- Goal: To reach the highest point (optimal solution) by taking small steps towards improvement.
3.2 Why is it Called "Hill Climbing"?
- Imagine climbing a hill in foggy conditions where you can only see nearby steps.
- You keep moving up (toward better solutions) until you reach the peak (best solution).
3.3 How Hill Climbing Works (Step-by-Step Process)
- Start with an initial state (solution).
- Evaluate the current state using a heuristic function.
- Generate neighboring states (possible next moves).
- Move to the best neighboring state (higher value in maximization problems, lower value in minimization problems).
- Repeat the process until no better neighboring state exists.
3.4 Example of Hill Climbing
Example 1: Solving the "Traveling Salesman Problem"
- A salesman must visit N cities and return to the starting point using the shortest route.
- Hill Climbing:
- Start with a random route.
- Swap two cities to create a new route.
- Choose the route with the shortest total distance.
- Repeat until no shorter route is found.
Example 2: Optimizing AI in Games
- Chess AI evaluates possible moves and selects the move that gives the highest advantage.
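The climb-until-stuck loop can be sketched in Python; the one-dimensional score function and ±1 neighbor moves below are illustrative (steepest-ascent variant):

```python
def hill_climb(score, start, neighbors):
    """Steepest-ascent hill climbing: move to the best neighbor until stuck."""
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):   # no better neighbor: local maximum
            return current
        current = best                      # climb to the better state

score = lambda x: -(x - 3) ** 2             # single peak at x = 3
neighbors = lambda x: [x - 1, x + 1]        # small steps left/right
print(hill_climb(score, start=10, neighbors=neighbors))   # 3
```

With a single-peaked score this finds the global maximum; with several peaks it would stop at whichever peak is reachable from `start`, which is exactly the local-maximum problem described in 3.6.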
3.5 Types of Hill Climbing
- Simple Hill Climbing
- Evaluates one neighboring state at a time.
- Moves to the first better state found.
- Limitation: May not find the best solution.
- Steepest-Ascent Hill Climbing
- Evaluates all neighboring states and moves to the best one.
- Better than Simple Hill Climbing but requires more computation.
- Stochastic Hill Climbing
- Randomly selects one neighbor and moves if it improves the solution.
- Less efficient but can avoid local maxima.
3.6 Challenges in Hill Climbing
- Local Maxima
- A local maximum is a peak that is not the highest in the overall search space.
- Solution: Use Random Restart (restart the search from a new random point).
- Plateau
- A flat area in the search space where all neighboring states have the same value.
- Solution: Introduce small random jumps to escape the plateau.
- Ridges
- A path with steep slopes on both sides, making it hard to move forward.
- Solution: Use Simulated Annealing or a different search approach.
3.7 Advantages of Hill Climbing
✔ Requires less memory than other search strategies.
✔ Efficient for optimization problems.
3.8 Disadvantages of Hill Climbing
✖ Can get stuck in local maxima.
✖ Not guaranteed to find the best solution.
✖ May struggle in complex search spaces with many plateaus and ridges.
3.9 Applications of Hill Climbing in AI
✅ Machine Learning – Used for optimizing model parameters.
✅ AI in Games – Helps AI evaluate the best move in games like Chess.
✅ Scheduling Problems – Used in exam or work schedule optimization.
✅ Image Processing – Used in image enhancement algorithms.
3.10 Conclusion
- Hill Climbing is a simple and efficient algorithm for optimization problems.
- It works well for small search spaces but may get stuck in local maxima.
- Best suited for AI applications like robotics, game AI, and machine learning.
4) Simulated Annealing
4.1 What is Simulated Annealing?
- Simulated Annealing (SA) is an advanced local search algorithm used for optimization problems.
- It is an improved version of Hill Climbing, but unlike Hill Climbing, it allows occasional moves to worse states to escape local maxima.
- Inspired by annealing in metallurgy, where materials are heated and slowly cooled to achieve a stable structure.
4.2 Why is it Called Simulated Annealing?
- In metal cooling, atoms move freely at high temperatures and settle into a stable, low-energy state as the temperature decreases.
- Similarly, in Simulated Annealing, the algorithm starts with a high probability of accepting bad moves, and this probability decreases over time, allowing it to settle into an optimal solution.
4.3 How Simulated Annealing Works (Step-by-Step Process)
- Start with an initial solution.
- Set a high "temperature" (T).
- Generate a neighboring solution (small change to the current solution).
- Evaluate the new solution:
- If it is better, accept it.
- If it is worse, accept it with some probability (controlled by temperature).
- Reduce the temperature gradually (cooling schedule).
- Repeat steps 3-5 until the system cools down (T → 0) or a stopping condition is met.
4.4 Example of Simulated Annealing
Example 1: Solving the "Traveling Salesman Problem"
- The problem: A salesman must visit N cities and return to the start, using the shortest route.
- Simulated Annealing Approach:
- Start with a random route.
- Swap two cities to create a new route.
- If the new route is shorter, accept it.
- If it is longer, accept it with some probability, based on temperature.
- Gradually reduce the probability of accepting worse solutions.
- The process stops when the temperature is near zero, and the best route is selected.
Example 2: Optimizing AI in Games
- AI in chess or other strategy-based games can use Simulated Annealing to find better moves by exploring different possibilities without getting stuck in local optima.
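The accept-worse-moves-with-falling-probability loop can be sketched in Python; the cost function, starting point, and cooling parameters below are illustrative choices, not a tuned implementation:

```python
import math
import random

def simulated_annealing(cost, start, neighbor, t0=10.0, cooling=0.95, steps=500):
    """Minimize cost; accept worse moves with probability exp(-delta / T)."""
    current = start
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Better moves are always accepted; worse ones probabilistically.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        t *= cooling                 # geometric cooling schedule
        if t < 1e-6:                 # system has "cooled down"
            break
    return current

random.seed(0)                       # fixed seed so the run is repeatable
cost = lambda x: (x - 3) ** 2        # minimum at x = 3
neighbor = lambda x: x + random.choice([-1, 1])
best = simulated_annealing(cost, start=20, neighbor=neighbor)
print(best)
```

Early on (high T) almost any move is accepted, letting the search roam; as T shrinks, it behaves more and more like pure hill climbing around the best region found.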
4.5 Characteristics of Simulated Annealing
Characteristic | Description |
---|---|
Search Type | A local search algorithm with a probabilistic element. |
Uses a Heuristic? | ✅ Yes, it evaluates solutions based on a heuristic function. |
Escapes Local Maxima? | ✅ Yes, by allowing occasional bad moves. |
Completeness | ✅ Reaches a solution given enough iterations and a slow enough cooling schedule. |
Optimality | ✅ Can find the best solution with a slow cooling schedule. |
Time Complexity | Depends on the cooling schedule and the number of iterations allowed. |
Space Complexity | O(1) (since it only stores one current solution at a time). |
4.6 Advantages of Simulated Annealing
✔ Finds near-optimal solutions for complex problems.
✔ Uses very little memory (O(1) space complexity).
✔ Works well for large search spaces.
4.7 Disadvantages of Simulated Annealing
✖ Performance depends on cooling schedule – If not properly tuned, it may not find the optimal solution.
✖ May accept too many bad moves if temperature is not reduced correctly.
4.8 Applications of Simulated Annealing in AI
✅ AI in Games – Helps AI players find better strategies.
✅ Machine Learning – Used for hyperparameter tuning in neural networks.
✅ Robot Motion Planning – Optimizing robot paths in navigation.
✅ Scheduling Problems – Creating efficient exam or work schedules.
4.9 Conclusion
- Simulated Annealing is a powerful optimization algorithm that overcomes the limitations of Hill Climbing by accepting occasional bad moves.
- It is widely used in AI applications like games, scheduling, and robotics.
- Proper tuning of the cooling schedule is crucial for finding the best solution efficiently.
5) Best-First Search
5.1 What is Best-First Search?
- Best-First Search is an informed search algorithm that selects the most promising node first, using a heuristic function to evaluate states.
- It is a combination of Depth-First Search (DFS) and Breadth-First Search (BFS) but prioritizes nodes based on their estimated cost to reach the goal.
- Uses a priority queue (instead of a simple queue or stack) to select the best node to expand next.
5.2 Why Use Best-First Search?
- Faster than uninformed search (like BFS or DFS) because it does not explore unnecessary paths.
- Uses heuristic information to guide the search efficiently.
- Works well for complex search problems like pathfinding and AI decision-making.
5.3 How Best-First Search Works (Step-by-Step Process)
- Start with an initial state (root node).
- Use a heuristic function to evaluate each node (estimate how close it is to the goal).
- Insert nodes into a priority queue, with the most promising node at the front.
- Expand the best node (remove from the queue) and generate its child nodes.
- Repeat the process until the goal node is reached or the queue is empty.
5.4 Example of Best-First Search
Example 1: Finding the Shortest Path in a Graph
Step-by-step Best-First Search traversal from A to G:
- Start at A, evaluate B and C.
- If C is closer to G, expand C first.
- Continue expanding the best nodes until G is reached.
Example 2: AI in Pathfinding (Google Maps)
- Suppose you want to travel from New York to Los Angeles.
- Best-First Search will evaluate different routes and prioritize highways with the shortest estimated travel time.
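The priority-queue procedure above can be sketched in Python as greedy best-first search; the graph and the heuristic values h(n) below are assumed estimates for illustration:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the smallest heuristic h(n) first."""
    frontier = [(h[start], [start])]       # priority queue of (h, path)
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)  # pop the most promising node
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], path + [child]))
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
h = {"A": 4, "B": 3, "C": 1, "D": 5, "E": 4, "F": 2, "G": 0}  # assumed estimates
print(greedy_best_first(graph, h, "A", "G"))   # ['A', 'C', 'G']
```

Because h(C) < h(B), C is expanded before B, which is exactly the "expand C first" behavior in Example 1.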
5.5 Heuristic Function in Best-First Search
- The heuristic function h(n) estimates the cost from node n to the goal.
- Common heuristics used:
- Manhattan Distance (for grid-based navigation).
- Euclidean Distance (for pathfinding in maps).
- Hamming Distance (for solving puzzles).
5.6 Types of Best-First Search
- Greedy Best-First Search
- Selects the node that appears closest to the goal, based only on h(n).
- Formula: f(n) = h(n)
- Problem: May get stuck in local optima and not find the best solution.
- A* Algorithm (Best-First with Cost Consideration)
- Uses both actual cost (g(n)) and heuristic (h(n)).
- Formula: f(n) = g(n) + h(n)
- Solves the limitations of Greedy Best-First Search and finds the shortest path.
5.7 Characteristics of Best-First Search
Characteristic | Description |
---|---|
Search Type | Informed search using a heuristic function. |
Data Structure Used | Uses a priority queue for node selection. |
Completeness | ✅ Guaranteed to find a solution if one exists. |
Optimality | ❌ Greedy Best-First is not optimal, but A* is. |
Time Complexity | O(b^d) (depends on the heuristic). |
Space Complexity | O(b^d) (stores all expanded nodes). |
5.8 Advantages of Best-First Search
✔ Efficient for large search spaces (e.g., pathfinding, AI decision-making).
✔ Can be improved with A* for optimality.
5.9 Disadvantages of Best-First Search
✖ High memory usage (stores all visited nodes).
✖ Performance depends on the heuristic function (a bad heuristic can mislead the search).
5.10 Applications of Best-First Search in AI
✅ AI in Games (Chess, Pac-Man) – AI evaluates the best moves.
✅ Robot Path Planning – Robots find the fastest route in an environment.
✅ Web Crawling & Search Engines – Helps search engines rank pages efficiently.
5.11 Conclusion
- Best-First Search is a powerful AI algorithm that uses heuristics to prioritize the best paths first.
- Greedy Best-First is fast but not always optimal, while A* Search finds the optimal solution.
- It is widely used in AI applications like navigation, robotics, and game AI.
6) A* Algorithm
6.1 What is the A* Algorithm?
- A* (A-Star) is an informed search algorithm used for finding the shortest path in AI and computer science.
- It combines the advantages of Best-First Search and Uniform Cost Search by considering both:
- g(n) → The actual cost from the start node to the current node.
- h(n) → The estimated cost from the current node to the goal (heuristic function).
- It is widely used in navigation, robotics, and AI-based decision-making.
6.2 Why is the A* Algorithm Important?
- More efficient than BFS and DFS because it prioritizes the best paths first.
- Finds the shortest path while using a heuristic to speed up the search.
- Widely used in AI applications like Google Maps, games, and robotics.
6.3 A* Algorithm Formula
f(n) = g(n) + h(n)
- f(n) = Total estimated cost of the path through node n.
- g(n) = Actual cost from the start node to n.
- h(n) = Estimated cost from n to the goal (heuristic function).
6.4 How the A* Algorithm Works (Step-by-Step Process)
- Start with the initial state (start node).
- Put it in a priority queue, sorted by f(n).
- Expand the node with the lowest f(n).
- Generate its neighboring nodes.
- Update g(n) and calculate f(n) for each neighbor.
- Repeat the process until the goal node is reached.
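The steps above can be sketched as a minimal A* implementation. The graph, edge costs, and h(n) values are hypothetical, with h chosen to be admissible (it never overestimates the remaining cost):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).

    graph: dict mapping a node to a list of (neighbour, edge_cost) pairs.
    h: dict of heuristic estimates (assumed admissible).
    Returns (path, total_cost) or None.
    """
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g(n) found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g   # better route found: update g(n)
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None

# Hypothetical weighted graph and admissible heuristic.
graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 5)],
    'C': [('G', 3)],
    'D': [('G', 2)],
    'G': [],
}
h = {'A': 5, 'B': 4, 'C': 3, 'D': 2, 'G': 0}
print(a_star(graph, h, 'A', 'G'))  # (['A', 'B', 'C', 'G'], 6)
```

Note how the search finds the route through B and C (cost 6) rather than the direct A-C edge (cost 4 + 3 = 7), because it ranks nodes by g(n) + h(n), not by the heuristic alone.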
6.5 Example of the A* Algorithm
Example 1: Finding the Shortest Path in a Grid (Google Maps)
Using A*:
- The algorithm calculates f(n) = g(n) + h(n) for each node.
- Expands the node with the smallest f(n) (choosing the best path).
- The AI reaches G efficiently, avoiding unnecessary paths.
Example 2: Solving the 8-Puzzle Problem
- g(n) = Number of moves taken so far.
- h(n) = Number of misplaced tiles, or the Manhattan distance of each tile from its goal position (two distinct heuristics).
- A* finds the fastest way to solve the puzzle by minimizing f(n).
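The two 8-puzzle heuristics can be written out directly. Boards are represented as flat 9-tuples with 0 as the blank; the sample state is hypothetical:

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles not in their goal position (blank ignored)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal, width=3):
    """h(n) = total grid distance of each tile from its goal position."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)  # where this tile belongs
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)   # tiles 5 and 8 are out of place
print(misplaced_tiles(state, goal))      # 2
print(manhattan_distance(state, goal))   # 2
```

Manhattan distance dominates the misplaced-tiles count (it is always at least as large), so it usually guides A* to the solution with fewer expansions while remaining admissible.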
6.6 Heuristic Functions Used in A*
- Manhattan Distance: Used in grid-based pathfinding.
- Formula: h(n) = |x1 − x2| + |y1 − y2|
- Euclidean Distance: Used for straight-line pathfinding.
- Formula: h(n) = √((x1 − x2)² + (y1 − y2)²)
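Both distance heuristics are one-liners over (x, y) coordinates; the sample points below are illustrative:

```python
import math

def manhattan(p, q):
    """Manhattan distance: |x1 - x2| + |y1 - y2| (grid movement, no diagonals)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Euclidean distance: straight-line length between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```

On the same pair of points, Manhattan distance is always greater than or equal to Euclidean distance, which is why the choice depends on whether movement is restricted to a grid.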
6.7 Characteristics of the A* Algorithm
Characteristic | Description |
---|---|
Search Type | Informed search using a heuristic function. |
Uses Heuristic? | ✅ Yes, considers both g(n) and h(n). |
Optimality | ✅ Guaranteed to find the shortest path if h(n) is admissible. |
Completeness | ✅ Always finds a solution if one exists. |
Time Complexity | O(b^d) (depends on branching factor and depth). |
Space Complexity | O(b^d) (stores all expanded nodes). |
6.8 Advantages of the A* Algorithm
✔ Faster than BFS and DFS in large search spaces.
✔ Flexible and works well for real-world problems.
6.9 Disadvantages of the A* Algorithm
✖ Performance depends on the heuristic function.
✖ Can be slow for large search spaces.
6.10 Applications of the A* Algorithm in AI
✅ AI in Games (Pac-Man, Chess) – AI selects the best moves.
✅ Robot Path Planning – Helps robots navigate efficiently.
✅ Network Routing (Internet Protocols) – Optimizes data packet transmission.
6.11 Conclusion
- A* is one of the best pathfinding algorithms, balancing cost and heuristic information.
- It guarantees finding the shortest path if the heuristic is properly chosen.
- Widely used in AI applications like navigation, robotics, and games.
7) Constraint Satisfaction
7.1 What is Constraint Satisfaction?
- Constraint Satisfaction Problem (CSP) is a type of problem where a solution must satisfy a set of constraints (rules or conditions).
- It is used in AI for problems like Sudoku, scheduling, and map coloring.
- The goal is to find a valid solution that meets all given constraints.
7.2 Key Components of CSP
- Variables
- The set of elements that need to be assigned values.
- Example: In Sudoku, the empty cells are the variables.
- Domains
- The possible values that a variable can take.
- Example: In Sudoku, each cell has a domain of {1,2,3,4,5,6,7,8,9}.
- Constraints
- Rules that limit which values can be assigned to variables.
- Example: In Sudoku, no two numbers in the same row, column, or 3×3 box can be the same.
7.3 Types of Constraint Satisfaction Problems
- Unary Constraints
- Constraints that affect a single variable.
- Example: A student must be assigned a morning exam slot.
- Binary Constraints
- Constraints that involve two variables.
- Example: Two students cannot be assigned the same exam room.
- Higher-Order Constraints
- Constraints that involve three or more variables.
- Example: In Sudoku, all nine cells in a row must contain different digits (a constraint over nine variables at once).
7.4 Solving Constraint Satisfaction Problems
- Backtracking Algorithm
- A brute force method that assigns values to variables and backtracks if a constraint is violated.
- Example: Assigning colors to a map, checking constraints, and undoing if conflicts occur.
- Forward Checking
- Eliminates inconsistent values before assigning a variable.
- Example: If a variable is assigned Red, remove Red from neighboring variables' domains.
- Constraint Propagation (Arc Consistency)
- Reduces the search space by enforcing constraints early.
- Example: If only one number is possible for a Sudoku cell, assign it immediately.
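The first two methods can be combined in a short sketch: backtracking search with forward checking for a binary not-equal CSP such as map colouring. The four-region map and its adjacencies are hypothetical:

```python
def solve_csp(variables, domains, neighbours, assignment=None):
    """Backtracking with forward checking for a not-equal CSP
    (e.g. map colouring: adjacent regions must differ)."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                     # every variable assigned: solved
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Check the constraint against already-assigned neighbours.
        if all(assignment.get(n) != value for n in neighbours[var]):
            # Forward checking: remove this value from unassigned neighbours' domains.
            pruned = {n: [x for x in domains[n] if x != value]
                      for n in neighbours[var] if n not in assignment}
            if all(pruned[n] for n in pruned):  # backtrack early if a domain empties
                result = solve_csp(variables, {**domains, **pruned}, neighbours,
                                   {**assignment, var: value})
                if result:
                    return result
    return None  # no value works: backtrack

# Hypothetical map: A-B, A-C, B-C, and C-D are adjacent.
variables = ['A', 'B', 'C', 'D']
domains = {v: ['Red', 'Green', 'Blue'] for v in variables}
neighbours = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B', 'D'], 'D': ['C']}
solution = solve_csp(variables, domains, neighbours)
print(solution)
```

Without forward checking, the same search would only discover a conflict when it tried to assign the conflicting variable; pruning domains early lets it backtrack before wasting work.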
7.5 Example of Constraint Satisfaction Problems
Example 1: Sudoku Puzzle
- Variables: Empty cells.
- Domain: {1,2,3,4,5,6,7,8,9}.
- Constraints: No duplicate numbers in a row, column, or 3×3 box.
- Solution Method: Backtracking with forward checking.
Example 2: Map Coloring Problem
- Variables: Countries on a map.
- Domain: {Red, Green, Blue}.
- Constraints: Adjacent countries cannot have the same color.
- Solution Method: Constraint Propagation (Arc Consistency).
Example 3: Exam Scheduling
- Variables: Exam time slots.
- Domain: Available time slots.
- Constraints:
- No student can have two exams at the same time.
- Limited number of rooms.
7.6 Characteristics of CSP
Characteristic | Description |
---|---|
Search Type | Uses constraint-based filtering instead of blind search. |
Uses Heuristic? | ✅ Yes, uses constraints to reduce search space. |
Completeness | ✅ Guaranteed to find a solution if one exists. |
Optimality | ❌ Not applicable in the usual sense; CSP solving seeks any assignment that satisfies all constraints, not the cheapest one. |
Time Complexity | Depends on constraints (usually NP-Hard). |
Space Complexity | O(n) (depends on variables and constraints). |
7.7 Advantages of Constraint Satisfaction
✔ Eliminates unnecessary search paths early.
✔ Guarantees a valid solution if one exists.
7.8 Disadvantages of Constraint Satisfaction
✖ Time-consuming for large problems.
✖ Some problems require approximation methods if constraints are too strict.
7.9 Applications of Constraint Satisfaction in AI
✅ Puzzle Solving – Sudoku, Crossword puzzles.
✅ Map Coloring – Ensuring neighboring regions have different colors.
✅ Robot Motion Planning – Ensuring safe movement in an environment.
7.10 Conclusion
- Constraint Satisfaction is a powerful AI technique used in structured decision-making problems.
- It is widely applied in scheduling, planning, and optimization tasks.
- Uses heuristics to reduce the search space, making it more efficient than brute-force approaches.
⭐Comparison of Informed Search Strategies
Feature | Heuristic Functions | Generate and Test | Hill Climbing | Simulated Annealing | Best-First Search | A* Algorithm | Constraint Satisfaction |
---|---|---|---|---|---|---|---|
Definition | A function that estimates the cost of reaching the goal. | A trial-and-error method to find a solution. | A search method that moves towards better solutions. | A search that allows worse moves to escape local maxima. | A heuristic-based search that selects the best node first. | A search that combines cost and heuristic information. | A method to find solutions while satisfying constraints. |
Search Type | Not a search algorithm but guides search strategies. | Trial and error. | Local search. | Local search with randomness. | Informed search. | Informed search. | Constraint-based search. |
Uses Heuristic? | ✅ Yes | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Uses constraints instead. |
Search Strategy | Guides search algorithms. | Randomly generates solutions and tests them. | Moves towards better solutions but may get stuck. | Uses probability to avoid getting stuck. | Chooses the most promising node. | Considers both cost and heuristic for best path. | Uses constraints to eliminate invalid solutions. |
Completeness | ✅ If used with a complete algorithm. | ❌ Not guaranteed. | ❌ May stop at local maxima. | ✅ Can find a solution with enough time. | ✅ If heuristic is good. | ✅ Always finds a solution if one exists. | ✅ Always finds a solution if one exists. |
Optimality | ❌ Not always optimal. | ❌ No guarantee. | ❌ May stop at local maxima. | ✅ Can find an optimal solution if properly tuned. | ❌ Not always optimal. | ✅ Finds the best solution if heuristic is admissible. | ❌ May not be optimal if constraints limit the solution space. |
Time Complexity | Depends on search algorithm. | High (if many solutions must be tested). | O(b^d) (depends on branching factor and depth). | O(n log n) (depends on cooling schedule). | O(b^d) (depends on branching factor). | O(b^d) (efficient if heuristic is good). | Depends on constraints (usually NP-Hard). |
Space Complexity | Depends on search algorithm. | O(n) (stores all solutions tested). | O(b*d) (depends on depth). | O(1) (only stores current state). | O(b^d) (stores expanded nodes). | O(b^d) (high memory use). | O(n) (depends on number of constraints). |
Handles Local Maxima? | ❌ No | ❌ No | ❌ No, gets stuck at local maxima. | ✅ Yes, can escape local maxima. | ❌ No, may select a bad path. | ✅ Yes, considers cost and heuristic. | ❌ No local maxima, uses constraints. |
Example | Manhattan Distance in GPS navigation. | Password guessing by testing all possibilities. | Robot moving towards a target using a direct path. | AI optimizing chess moves by considering different possibilities. | Pac-Man AI choosing the nearest food pellet. | Google Maps route finding using shortest path. | Solving Sudoku by eliminating impossible values. |
Best Used For | Guiding search algorithms efficiently. | Small problems where brute force is possible. | Optimization problems with simple constraints. | Complex problems requiring exploration of multiple solutions. | AI games, navigation, and decision-making. | Pathfinding, robotics, and AI planning. | Scheduling, puzzle solving, and rule-based decision-making. |