Artificial Intelligence

Course Content

Introduction: Uninformed search strategies, Greedy best-first search, And-Or search, Uniform cost search, A* search, Memory-bounded heuristic search, Local and evolutionary searches (9 Lectures)

Constraint Satisfaction Problems: Backtracking search for CSPs, Local search for CSPs (3 Lectures)

Adversarial Search: Optimal Decision in Games, The minimax algorithm, Alpha-Beta pruning, Expectimax search (4 Lectures)

Knowledge and Reasoning: Propositional Logic, Reasoning patterns in propositional logic; First-order logic: syntax, semantics, Inference in first-order logic, unification and lifting, backward chaining, resolution (9 Lectures)

Planning: Situation Calculus, Deductive planning, STRIPS, sub-goals, Partial-order planner (3 Lectures)

Bayesian Network, Causality, and Uncertain Reasoning: Probabilistic models, directed and undirected models, inferencing, causality, Introduction to Probabilistic reasoning (6 lectures)

Reinforcement Learning: MDP, Policy, Q-value, Passive RL, Active RL, Policy Search (8 Lectures)

Exams
  • 50% internal

    • 30% for 2 quizzes (multiple and surprise quizzes possible); the best (N-1) will be taken

      • Quiz 1: on 9th Feb

      • Assignment 1:

    • 20% Assignments (7 to 10 days)

  • 50% Main

Material

Reference video

Lecture 1: (11/01/2025)

Class Recording

AI Development of fundamental model

Narrow AI is created to solve one given problem, for example a chatbot. Artificial General Intelligence (AGI) is a theoretical application of generalised Artificial Intelligence to any domain, solving any problem that requires AI.

Optimisation criteria: throughput, accuracy, error

Turing Machine

Lecture 2: (12/01/2025)

Class Recording

Rational Agent: an agent that generates the best outcome

Environment parameters can also be a reason for the best outcome changing.

Input data will be text, audio, video, and signals

Vision and Image Processing

OCR, face detection, vehicle counting

NLP:

Speech to text, and vice versa

Robotics:

Algorithms Bias ( Issue in Algo)

  • Pre-existing => algorithm level

  • Technical => hardware related

  • Emergent Bias => Unknown things like covid

AI Ethics and Safety ⇒ Safety and security

Types of AI Safety ⇒ Testing, monitoring, adversarial robustness

AI Safety and Optimisers ⇒ Reward, policy

Agent-based system

Sensors => Percepts (p)

Actuators => output (action) (a)

Agent = Architecture(hardware) + program(f)

Case Study: AI based system for Road Safety (IntelliSYS)

Why a new system is required:

  • Faults in the previous system

  • Optimise

Market Opportunities

  • IT is getting deeper into the automobile sector

Methodology

Result analysis => output and severity

Motivation

  • Road accidents are increasing day by day

  • Traffic jams

Need => increasing number of road accidents

Who needs it => government and transport agencies

Cause

Stressed drivers along with environmental behaviour

High pay for more trips, which creates stress

Testing Data

Testing on Ola and Uber because we get driver data from these companies

Objectives:

  • Stress model

  • Behaviour model

Hardware:

  • GPS

  • Camera

  • Mobile phone

  • Sensors

Steps to Create Model

Collect data ⇒ driver and road information

Stress assessment ⇒ high, mid, low

Estimate behaviour ⇒ impact on driving => aggressive, drowsy, normal

Find relation ⇒ sometimes a stressed driver can still drive well

Recommendation ⇒ drive or rest ⇒ remove them from the system

Application

  • Fleet management

  • Insurance service

  • Hiring cars

  • Transport

  • Driving training

  • ADAS

Outcome

  • Benefit for Driver

  • Benefit for government and Agency

Challenges

  • Unmeasurable features

  • Data is not that good

  • Accuracy while measuring stress

  • Social challenges => privacy

Other Case Study: Automatic vacuum-cleaner

Lecture 3: (18/01/2025)

Class Recording

Route Recommendation

Case study: Automatic car

Intelligent Agent

What is an intelligent agent => Rationality => Performance (safety) => Environment type (continuous imaging) => Agent types

Percepts: input through sensors

Actuators => action

Rational Agents:

Foundation

Consequentialism

Utilitarianism

Rationality is an ideal

Rationality

Maximise the expected outcome, not the actual outcome

Bounded rationality

Can make mistakes

In a bank system => number of attacks / time / customer count

Environment type

Agent Type

  • Simple reflex agent

  • Model based

  • Goal based

  • Utility based

Lecture 4: (19/01/2025)

Class Recording

Simple Reflex Agent

a = f(p): no state maintained, no memory, static response

The interaction is a sequence p0, a0, p1, a1, p2, a2, …, pt, at, …

  • Uses only built-in knowledge in the form of rules that select actions based only on the current percept. This is typically very fast!

  • The agent doesn't know about the performance measure, but well-designed rules can lead to good performance.

  • The agent needs no memory and ignores all past percepts

Example:- A simple vacuum cleaner that uses rules based on its current sensor input.
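The rule-based, memoryless behaviour described above can be sketched in Python (a hypothetical two-square vacuum world with locations A and B; the percept format is an assumption, not from the notes):

```python
# Sketch of a simple reflex vacuum agent: the action depends only on the
# current percept (location, is_dirty) -- no memory, no internal state.
def reflex_vacuum_agent(percept):
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", True)))   # Suck
print(reflex_vacuum_agent(("A", False)))  # Right
```

Because the same percept always produces the same action, the agent cannot tell whether a square was already cleaned on an earlier step.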

Model-based Agent

Needs memory, dynamic; we maintain state here

Temporal problem: location and time affect the response, like the price in Ola.

a = f(p, s), where s = T(s′, a) and s′ is the previous state

The interaction is a sequence: s0, a0, p1, s1, a1, p2, s2, a2, p3, …, pt, st, at, …

  • Maintains a state variable to keep track of aspects of the environment that cannot currently be observed, i.e. it has memory and knows how the environment reacts to actions (called the transition function)

  • The state is updated using the percept.

  • There is now more info for the rules to make better decisions.

Example:- A vacuum cleaner that remembers where it has already cleaned.
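A sketch of such a memory-keeping vacuum agent (the square names and percept format are made up for illustration):

```python
# Sketch of a model-based vacuum agent: it keeps internal state (the set of
# squares it has already cleaned) and uses it to decide where to go next.
class ModelBasedVacuum:
    def __init__(self, squares):
        self.squares = squares
        self.cleaned = set()          # internal state updated from percepts

    def act(self, percept):
        location, dirty = percept
        if dirty:
            self.cleaned.add(location)
            return "Suck"
        self.cleaned.add(location)
        remaining = [s for s in self.squares if s not in self.cleaned]
        return f"MoveTo({remaining[0]})" if remaining else "Stop"

agent = ModelBasedVacuum(["A", "B"])
print(agent.act(("A", True)))   # Suck
print(agent.act(("A", False)))  # MoveTo(B)
print(agent.act(("B", False)))  # Stop
```

Unlike the reflex agent, the same percept ("A", False) can lead to different actions depending on the remembered state.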

Goal-based Agent

The agent has to reach a defined goal state

A planning agent uses search algorithms to reach the goal

Performance measure => cost to reach the goal

  • The agent has the task to reach a defined goal state and is then finished.

  • The agent needs to move towards the goal. A special type is a planning agent that uses a search algorithm to plan a sequence of actions that leads to the goal.

  • Performance measure: the cost to reach the goal.

The interaction is a sequence: s0, a0, p1, s1, a1, p2, s2, a2, …, s_goal, with an associated cost

s0: initial state, a0: first action, p1: first percept

Example:- Solving a puzzle. What action gets me closer to the solution?

Utility-based Agent

desirability of each possible state

Performance measure => expected utility over time

  • The agent uses a utility function to evaluate the desirability of each possible state. This is typically expressed as the reward of being in a state, R(s).

  • Choose actions to stay in desirable states.

  • Performance measure: The discounted sum of expected utility over time.

The interaction is a sequence: s0, a0, p1, s1, a1, p2, s2, a2, …, with rewards

Example:- An Autonomous Mars rover prefers states where its battery is not critically low.
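The discounted sum of rewards and the rover's state preference can be sketched as follows (the reward values, state names, and discount factor are invented for illustration):

```python
# Sketch: a utility-based agent scores states with R(s); its performance
# measure is the discounted sum of rewards over time (gamma = discount).
def discounted_return(rewards, gamma=0.9):
    return sum(r * gamma**t for t, r in enumerate(rewards))

# The rover prefers states where its battery is not critically low:
R = {"battery_ok": 1.0, "battery_low": -5.0}

def choose_action(actions, next_state):
    # pick the action leading to the most desirable next state
    return max(actions, key=lambda a: R[next_state[a]])

next_state = {"drive": "battery_low", "recharge": "battery_ok"}
print(choose_action(["drive", "recharge"], next_state))  # recharge
print(round(discounted_return([1, 1, 1]), 2))            # 2.71
```

With gamma < 1, rewards far in the future count less, which is what "discounted sum of expected utility over time" means.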

Agents that Learn

The learning element modifies the agent program(reflex-based, goal based or utility based) to improve its performance.

  • How is the agent currently performing?

  • Updates how the performance element chooses actions.

  • Generate actions for exploration

Smart Thermostat:-

What type of agent is it? It changes the temperature when you are too cold/warm.

Answer:- ???

Lecture 5: (25/01/2025)

Class Recording

Coding Class - NO Lecture

Lecture 6: (01/02/2025)

Class Recording


Lecture 6-8: Solving Problems by Searching

1. Introduction to Search Strategies

  • Overview of problem-solving through search.

2. Problem Solving and Search

  • Rational agents performing sequences of actions.

  • Agents using internal models for decision-making.

3. Contents Overview

  • Search problems.

  • Tree search.

  • Search space.

  • Uninformed search.

  • Informed search.

4. Understanding Search Problems

  • Designing goal-based agents in known, fully observable, and deterministic environments.

  • Example environments and agent decision-making.

5. Goal-Based Agents

  • Performance measure and cost-based evaluation.

  • Introduction to planning agents using search.

6. Planning for Search Problems

  • State space representation (discrete, atomic states).

  • Phases of solving search problems:

    • Search/Planning: Finding action sequences to reach a goal.

    • Execution: Running the plan in a known environment.

7. Definition of a Search Problem

  • Key components:

    • Initial state.

    • Actions (available moves).

    • Transition model.

    • Goal state.

    • Path cost.

8. Transition Function and Available Actions

  • Example grid-based transition function.

  • Set of possible actions (N, E, S, W).

9. Search Problem Examples

  • Romania Vacation Problem: Pathfinding between cities.

  • Vacuum World: Cleaning robot simulation.

  • Sliding-Tile Puzzle: Sequential search problem.

  • Robot Motion Planning: Continuous search space for robotic movements.

10. Solving Search Problems

  • Constructing search trees.

  • Finding optimal solutions.

11. Challenges in Search Models

  • State transition models may contain redundant paths.

  • Cycles and redundant paths increase complexity.

Lecture 7: (02/02/2025)

Class Recording

12. Search Tree Representation

  • Expanding nodes and paths in a tree structure.

  • Breaking cycles to prevent infinite loops.

  • Redundant path elimination for efficiency.

13. Differences: Traditional Tree Search vs. AI Search

  • Traditional tree search:

    • Predefined, fits in memory.

    • No cycles or redundant paths.

  • AI-based search:

    • Large state spaces, not stored entirely.

    • Requires on-the-fly tree expansion.

    • Memory-efficient cycle detection.

14. Tree Search Algorithm

  1. Initialize the frontier with the starting state.

  2. Expand nodes iteratively.

  3. Check goal states.

  4. Apply transition model to explore new states.

15. Tree Search Examples

  • Expanding nodes from different starting positions.

  • Cycle detection and handling.

16. Search Strategies and Properties

  • Search evaluation criteria:

    • Completeness: Guarantees finding a solution.

    • Optimality: Finds the least-cost solution.

    • Time Complexity: Computational cost.

    • Space Complexity: Memory requirements.

17. Time and Space Complexity of Search Strategies

  • Trade-offs in AI search.

  • Handling large state spaces efficiently.

Lecture 8: (08/02/2025)

Class Recording

Coding Class - No Lecture

Lecture 9: (09/02/2025)

Class Recording

Quiz 1, so no lecture

Lecture 10: (15/02/2025)

Class Recording

First order Logic

To implement these ideas we need mathematical logic.

Before talking about FOL, we first need to understand Propositional Logic. Propositional Logic (also called Propositional Calculus or Boolean Logic) is a branch of logic that deals with propositions and their relationships. A proposition is a declarative statement that is either true (T) or false (F), but not both.

The problem with Propositional Logic (PL) is that it cannot express relationships between objects or handle variables and quantifiers. This is why we need First-Order Logic (FOL) for more complex reasoning.

Propositional Logic in AI represents knowledge using true/false statements but lacks object relationships. First-Order Logic (FOL) extends it by adding quantifiers, predicates, and variables for richer reasoning.

  • Syntax:

    • Constants (a,b,c): Specific objects.

    • Variables (x,y,z): General objects.

    • Predicates (P(x)): Properties of objects.

    • Functions (f(x)): Maps objects to objects.

    • Quantifiers:

      • Universal (∀x): True for all x.

      • Existential (∃x): True for some x.

  • Example:

    • "All humans are mortal" → ∀x(Human(x) → Mortal(x))
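Over a finite domain, a universally quantified FOL sentence can be checked by brute force; a minimal Python sketch (the domain and predicate sets are made up for illustration):

```python
# Sketch: checking ∀x (Human(x) → Mortal(x)) over a finite domain by
# testing the implication for every element of the domain.
domain = ["socrates", "plato", "zeus"]
human  = {"socrates", "plato"}
mortal = {"socrates", "plato"}

def implies(p, q):
    # material implication: P → Q is ¬P ∨ Q
    return (not p) or q

forall_human_mortal = all(implies(x in human, x in mortal) for x in domain)
print(forall_human_mortal)  # True: every human in the domain is mortal
```

Note that "zeus" does not falsify the sentence: a non-human satisfies the implication vacuously.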

Propositional Logic

Propositional Logic is a formal system that deals with true or false statements (propositions) using logical connectives.

  • Example:

    • "If it rains, the ground is wet." → Rain → WetGround

Combining Propositional Logic and First-Order Logic (FOL) with all variables explained:

Logical Components:

  1. Constants (Specific Objects) – John, 5, Delhi

  2. Variables (General Objects) – x, y, z

  3. Predicates (Properties/Relations) – Student(x), Married(x), Father(x, y)

  4. Functions (Maps objects to objects) – Father(John) = Mike

  5. Logical Connectives – ¬ (Negation), ∧ (And), ∨ (Or), → (Implication), ↔ (Biconditional)

  6. Quantifiers (Generalisation/Existence):

    • Universal (∀x) → True for all x.

    • Existential (∃x) → True for some x.

Example

Negation (¬P) – "Not P"

Conjunction (P ∧ Q) – "P and Q"

Disjunction (P ∨ Q) – "P or Q"

Implication (P → Q) – "If P then Q"

Biconditional (P ↔ Q) – "P if and only if Q"
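These five connectives map directly onto Python boolean operations; a small sketch printing their truth table (the dictionary labels are just for display):

```python
# Sketch: the five propositional connectives as Python boolean expressions,
# evaluated over all four truth assignments for P and Q.
def connectives(p, q):
    return {
        "¬P":  not p,
        "P∧Q": p and q,
        "P∨Q": p or q,
        "P→Q": (not p) or q,   # material implication
        "P↔Q": p == q,         # biconditional
    }

for p in (True, False):
    for q in (True, False):
        print(p, q, connectives(p, q))
```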

Using Functions & Quantifiers Together

  • Example: "Every student has a unique ID."

  • FOL: ∀x(Student(x) → ∃y(ID(x) = y))

Limitations

  1. Lack of Expressiveness – Cannot represent relationships between objects.

    • Example: "All humans are mortal" cannot be expressed directly.

  2. No Quantification – Cannot handle general statements about multiple objects.

    • Example: Cannot express "Some students like math."

  3. Scalability Issues – Requires too many propositions for complex scenarios.

    • Example: Representing family relationships needs separate propositions for each person.

  4. No Variable Support – Cannot generalize statements using variables.

    • Example: Cannot define "If a person is a parent, they have a child."

What Does "Truth" Stand for in First-Order Logic (FOL)?

In First-Order Logic (FOL), truth refers to whether a given statement (formula) is true or false under a particular interpretation in a given model.

Why Do We Need "Truth" in FOL?

  1. To Validate Logical Statements

    • Helps determine if a formula correctly describes a real-world scenario.

    • Example: If "All humans are mortal" is true in our model, then "Socrates is mortal" should also be true.

  2. To Build Knowledge-Based Systems

    • AI, databases, and formal verification systems rely on FOL to ensure logical correctness.

  3. For Inference and Reasoning

    • Allows systems to deduce new facts from given facts and rules.

    • Example: If we know "John is a student" and "All students study", we can infer "John studies."

  4. To Define Models for Theories

    • Mathematics and computer science use FOL to define structures (e.g., numbers, graphs).

    • Example: A true statement in number theory (e.g., "For every x, x + 0 = x") must hold for all numbers.

Key Concepts:

  1. Interpretation – Assigns meaning to constants, predicates, and functions.

  2. Model (ℳ) – A structure where all statements are true.

  3. Satisfiability – A formula is true if there exists a model where it holds.

Example:

  • Domain: People

  • Interpretation:

    • Father(John, Mike) means "John is the father of Mike."

    • ∀x (Father(x, Mike) → Male(x)) means "Everyone who is Mike's father is male."

If ℳ satisfies this formula, it is true in that model.
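Model satisfaction can be checked mechanically for a finite interpretation; a sketch using the Father/Male example (the domain and relations are hand-built for illustration):

```python
# Sketch: evaluating ∀x (Father(x, Mike) → Male(x)) in a tiny model.
# The interpretation assigns Father and Male as explicit sets.
domain = {"John", "Mike", "Anna"}
father = {("John", "Mike")}          # Father(John, Mike) holds
male   = {"John", "Mike"}

def satisfies():
    # the implication holds for x whenever Father(x, Mike) is false,
    # or Father(x, Mike) is true and Male(x) is also true
    return all((x, "Mike") not in father or x in male for x in domain)

print(satisfies())  # True: this model satisfies the formula
```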

State space

(Prof. Notes in Next PPT)

A state space is the set of all possible states a system can be in, used for search problems in AI. It includes:

  1. States (S) – Possible configurations.

  2. Initial State (S_0) – Starting point.

  3. Actions (A) – Possible moves between states.

  4. Transition Function (T) – Defines how actions change states.

  5. Goal State (S_g) – Desired outcome.

Example: 8-Puzzle Problem

  • States: All tile arrangements.

  • Actions: Move tiles left, right, up, or down.

  • Goal: Reach the correct tile order.
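The 8-puzzle state space above can be sketched concretely: a state as a tuple of 9 tiles (0 for the blank), plus actions and a transition function (the start position below is made up):

```python
# Sketch of the 8-puzzle state space: states, actions(s), and result(s, a).
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}  # blank displacement

def actions(state):
    i = state.index(0)               # position of the blank
    acts = []
    if i >= 3: acts.append("up")
    if i <= 5: acts.append("down")
    if i % 3 != 0: acts.append("left")
    if i % 3 != 2: acts.append("right")
    return acts

def result(state, action):
    # transition function: slide the neighbouring tile into the blank
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)  # one move away from the goal
print(actions(start))                 # ['up', 'left', 'right']
print(result(start, "right") == GOAL)  # True
```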

Lecture 11: (16/02/2025)

Class Recording

Uninformed Search Strategies

Search algorithms that explore the state space without domain-specific knowledge.

  • Blind search: No information about the goal except goal test.

  • Explores all possible paths systematically.

  • Examples: Breadth-First Search (BFS), Depth-First Search (DFS).

Examples:

  1. BFS: Finding the shortest path in a maze.

  2. DFS: Exploring all possible moves in a tic-tac-toe game.

Breadth-First Search (BFS)

An uninformed search algorithm that explores all nodes at the current depth before moving to the next level.

  • Uses a queue (FIFO) for traversal.

  • Guarantees shortest path in an unweighted graph.

  • Time Complexity: O(b^d), where b is the branching factor and d is the depth.

Examples:

  1. Finding the shortest route between two cities in a road network.

  2. Solving a word ladder puzzle by transforming one word to another in the fewest steps.

Key Terms in BFS and Search Algorithms

  1. Node.state – Represents the current state of the node in the problem space.

  2. Node.parent – Stores the previous node (parent) from which this node was reached.

  3. Node.action – The action taken to move from the parent node to the current node.

  4. Node.path_cost – The total cost from the start node to the current node.

  5. Node.depth – The level of the node in the search tree (distance from the root).

Example (Pathfinding in a Grid)

  • Start: Node(A, parent=None, action=None, path_cost=0, depth=0)

  • After moving right: Node(B, parent=A, action="Move Right", path_cost=1, depth=1)

  • After moving down: Node(C, parent=B, action="Move Down", path_cost=2, depth=2)

These terms help track paths and reconstruct solutions in search problems!
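The node fields above can be sketched as a small Python class, with path reconstruction following parent pointers back to the root (the states A, B, C and actions are the ones from the grid example):

```python
# Sketch of search-node bookkeeping: state, parent, action, path_cost, depth.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent, self.action = state, parent, action
        self.path_cost = path_cost
        self.depth = 0 if parent is None else parent.depth + 1

def path(node):
    # reconstruct the solution path by walking parent pointers to the root
    states = []
    while node is not None:
        states.append(node.state)
        node = node.parent
    return list(reversed(states))

a = Node("A")
b = Node("B", parent=a, action="Move Right", path_cost=1)
c = Node("C", parent=b, action="Move Down", path_cost=2)
print(path(c), c.depth, c.path_cost)  # ['A', 'B', 'C'] 2 2
```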

Queue in BFS with Terminology

A queue is a FIFO (First-In-First-Out) data structure used in BFS to explore nodes level by level.

Terminology in BFS Queue:

  1. Queue.front – The first element (dequeued first).

  2. Queue.rear – The last element (new elements are enqueued here).

  3. Queue.enqueue(node) – Adds a node to the rear.

  4. Queue.dequeue() – Removes and returns the front node.

  5. Queue.is_empty() – Checks if the queue is empty.

Example (BFS on Graph: A β†’ B β†’ C β†’ D β†’ E)

  1. Initial State: Queue = [A]

  2. Dequeue A β†’ Enqueue B, C β†’ Queue = [B, C]

  3. Dequeue B β†’ Enqueue D β†’ Queue = [C, D]

  4. Dequeue C β†’ Enqueue E β†’ Queue = [D, E]

  5. **Dequeue D β†’ Queue = [E]` (Goal Reached 🎯)

This queue ensures BFS explores nodes in order without missing any!

Breadth-First Search (BFS) Algorithm

Step-by-Step Explanation:

  1. Initialize:

    • Create a queue and enqueue the starting node.

    • Maintain a set (or list) of visited nodes to avoid cycles.

  2. Processing Nodes:

    • While the queue is not empty:

      • Dequeue the front node and mark it as visited.

      • Check if it's the goal node; if yes, return success.

      • Enqueue all unvisited neighboring nodes.

  3. Repeat Until Goal is Found:

    • Continue dequeuing and processing nodes level by level.

    • If the queue is empty and the goal is not found, return failure.


Pseudocode for BFS
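A minimal runnable Python version of the step-by-step procedure above (the graph A to E is a made-up adjacency dict matching the queue trace earlier):

```python
# Sketch of BFS: FIFO frontier, visited set via parent map, goal test,
# and path reconstruction on success.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([start])        # 1. initialize the queue
    parent = {start: None}           # also serves as the visited set
    while frontier:                  # 2. process nodes while queue not empty
        node = frontier.popleft()
        if node == goal:             # goal test: reconstruct and return path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for nbr in graph.get(node, []):
            if nbr not in parent:    # enqueue only unvisited neighbours
                parent[nbr] = node
                frontier.append(nbr)
    return None                      # 3. queue empty, goal not found: failure

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'D']
```

Marking nodes as visited when they are enqueued (rather than dequeued) keeps each node in the queue at most once.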

Time Complexity for BFS:

O(b^d)
  • b = Branching factor (number of children per node on average).

  • d = Depth of the shallowest goal node (or height in a tree).

Why O(b^d)?

  • Level 0 (root): 1 node

  • Level 1: b nodes

  • Level 2: bΒ² nodes

  • Level d: b^d nodes

Since BFS explores all nodes up to depth d, the total nodes visited is:

1 + b + b^2 + ... + b^d = O(b^d)

Applying to Example

  • b = 2 (each node has ≤ 2 children).

  • d = 2 (max depth).

  • Nodes visited ≈ O(2^2) = O(4).

Thus, BFS worst-case time complexity is O(b^d), which grows exponentially with depth!
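The node-count arithmetic behind this can be checked directly with the geometric sum:

```python
# Sketch: total nodes explored by BFS up to depth d is 1 + b + b^2 + ... + b^d.
def nodes_to_depth(b, d):
    return sum(b**i for i in range(d + 1))

print(nodes_to_depth(2, 2))   # 7  (1 + 2 + 4, matching the b=2, d=2 example)
print(nodes_to_depth(10, 6))  # 1111111: exponential growth with depth
```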

| Feature | Best-First Search (BeFS) | Breadth-First Search (BFS) |
| --- | --- | --- |
| Search Strategy | Expands the most promising node (based on a heuristic function). | Expands all nodes level by level (FIFO). |
| Data Structure | Priority queue (ordered by heuristic value). | Queue (FIFO - First In, First Out). |
| Uses Heuristic? | ✅ Yes, uses h(n) (estimated cost to goal). | ❌ No, explores blindly. |
| Completeness | ❌ Not always (depends on heuristic). | ✅ Yes, always finds a solution (if one exists). |
| Optimality | ❌ Not guaranteed (can get stuck in local minima). | ✅ Yes, finds the shortest path in an unweighted graph. |
| Time Complexity | O(b^d) in the worst case, but better with a good heuristic. | O(b^d) (exponential in depth). |
| Space Complexity | O(b^d) (stores fewer nodes than BFS in some cases). | O(b^d) (stores all nodes at a level). |
| When to Use? | When a good heuristic is available (e.g., A* uses BeFS with cost). | When searching for the shortest path in an unweighted graph. |

Example Use Cases:

  • BeFS: Used in AI pathfinding, like Google Maps with heuristics.

  • BFS: Used in shortest path problems in unweighted graphs, social networks.

Lecture 12: (22/02/2025)

Class Recording

Coding Class - No Lecture

Lecture 13: (23/02/2025)

Class Recording

Practical class on search topics

Lecture 14: (01/03/2025)

Class Recording

Iterative Deepening Search (IDS)

Greedy best-first search

Lecture 15: (02/03/2025)

Class Recording

Knowledge-based Agent

Lecture 16: (08/03/2025)

Class Recording

Logic to Represent Knowledge

Sentence

Validity and Satisfiability

Tautology: Either the sun rises from the east, or it does not.

Satisfiable: Some IITians are entrepreneurs.

Unsatisfiable: A student is both a topper and not a topper at the same time

Real Time Example

True → False = False. Examples: Submit assignment ⇒ Full marks; Go to work ⇒ Get paid. Here "Full marks" must be true whenever "Submit assignment" is true, and "Get paid" must be true whenever "Go to work" is true.

False → True = True. Examples: Rain ⇒ Take umbrella; Studied 10 hours ⇒ Pass exam.

Taking an umbrella can happen whether or not it rains, and passing the exam can happen whether or not one studied 10 hours: an implication with a false premise is true regardless of the conclusion.
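These truth-table cases follow from the standard definition P → Q ≡ ¬P ∨ Q; a quick Python check:

```python
# Sketch: material implication defined as ¬P ∨ Q, verifying the two cases
# discussed above.
def implies(p, q):
    return (not p) or q

# True → False is the only false case (assignment submitted, no full marks):
print(implies(True, False))   # False
# False → True is true (no rain, umbrella taken anyway):
print(implies(False, True))   # True
print(implies(False, False))  # True: a false premise makes the implication true
```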

Logically equivalent knowledge-based agent

Lecture 17: (09/03/2025)

Class Recording

Lecture 18: (16/03/2025)

Class Recording

Lecture 19: (22/03/2025)

Class Recording

Lecture 20: (23/03/2025)

Class Recording

Lecture 21: (29/03/2025)

Class Recording

Lecture 22: (30/03/2025)

Class Recording


Lecture 23: (05/04/2025)

Class Recording

Coding Class - No Lecture
