Artificial Intelligence
Course Content
Course Content Introduction: Uninformed search strategies, Greedy best-first search, And-Or search, Uniform cost search, A* search, Memory-bounded heuristic search, Local and evolutionary searches (9 Lectures)
Constraint Satisfaction Problems: Backtracking search for CSPs, Local search for CSPs (3 Lectures)
Adversarial Search: Optimal Decision in Games, The minimax algorithm, Alpha-Beta pruning, Expectimax search (4 Lectures)
Knowledge and Reasoning: Propositional Logic, Reasoning Patterns in propositional logic; First order logic: syntax, semantics, Inference in First order logic, unification and lifting, backward chaining, resolution (9 Lectures)
Planning: Situation Calculus, Deductive planning, STRIPS, sub-goals, Partial order planner (3 Lectures)
Bayesian Network, Causality, and Uncertain Reasoning: Probabilistic models, directed and undirected models, inferencing, causality, Introduction to Probabilistic reasoning (6 Lectures)
Reinforcement Learning: MDP, Policy, Q-value, Passive RL, Active RL, Policy Search (8 Lectures)
Exams
50% internal
30% for 2 Quizzes (multiple quizzes, including surprise quizzes); the best (N-1) will be taken
Quiz 1: on 9th Feb
Assignment 1:
20% Assignments (7 to 10 days each)
50% Main
Lecture 1: (11/01/2025)
AI: Development of fundamental models
Narrow AI is created to solve one given problem, for example, a chatbot. Artificial General Intelligence (AGI) is a theoretical form of generalised AI that can operate in any domain, solving any problem that requires intelligence.

Optimisation: throughput, accuracy, error reduction
Turing Machine
Lecture 2: (12/01/2025)
Rational Agent: an agent that generates the best outcome
Environment parameters can also change what counts as the best outcome.
Input data will be: Text, audio, video and signals

Vision and Image Processing
OCR, face detection, vehicle counting
NLP:
Speech to text, and vice versa
Robotics:
Algorithmic Bias (issues in algorithms)
Pre-existing => algorithm level
Technical => hardware related
Emergent bias => unknown factors, like COVID
AI Ethics and Safety => safety and security
Types of AI Safety => testing, monitoring, adversarial robustness
AI Safety and optimisers => reward, policy
Agent-based system

Sensors => percepts (p)
Actuators => output (action) (a)
Agent = Architecture(hardware) + program(f)
Case Study: AI based system for Road Safety (IntelliSYS)
Why is a new system required:
Failures in previous systems
Optimise
Market Opportunities
IT is going deeper into the automobile sector
Methodology
Result analysis => output and severity
Motivation
Road accidents are increasing day by day
Traffic jams
Needs => increasing number of road accidents
Who needs it => government and transport agencies
Cause
Stressed drivers along with environmental behaviour
High pay for more trips, which creates stress
Testing Data
Testing on Ola and Uber, because we can get driver data from these companies
Objectives:
Stress model
Behaviour model
Hardware:
GPS
Camera
Mobile phone
Sensors
Steps to Create Model
Collect data => driver and road information
Stress assessment => high, mid, low
Estimate behaviour => impact on driving => aggressive, drowsy, normal
Find relation => sometimes a driver under stress can still drive well
Recommendation => drive or rest => remove them from the system

Application
Fleet management
Insurance service
Hiring cars
Transport
Driving training
ADAS
Outcome
Benefit for Driver
Benefit for government and Agency
Challenges
Unmeasurable features
Data is not that good
Accuracy while measuring stress
Social challenges => privacy
Other Case Study: Automatic vacuum-cleaner

Lecture 3: (18/01/2025)
Route Recommendation
Case study: Automatic car
Intelligent Agent
What is an intelligent agent => Rationality => Performance (safety) => Environment type (continuous imaging) => Agent types
Percepts: input through sensors
Actuators => action
Rational Agents:
Foundations
Consequentialism
Utilitarianism
Rationality is an ideal
Rationality
Maximises expected outcome, not actual outcome
Bounded
Can make mistakes
In a bank system => number of attacks / time / customer count
Environment type
Agent Type
Simple reflex agent
Model based
Goal based
Utility based
Lecture 4: (19/01/2025)
Simple Reflex Agent
a = f(p); no state maintained, no memory, static response
The interaction is a sequence: p0, a0, p1, a1, p2, a2, …, pt, at, …
Uses only built-in knowledge in the form of rules that select an action based only on the current percept. This is typically very fast!
The agent doesn't know about the performance measure, but well-designed rules can lead to good performance.
The agent needs no memory and ignores all past percepts
Example:- A simple vacuum cleaner that uses rules based on its current sensor input.
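A minimal Python sketch of such a rule-based agent (the two-location vacuum world and the rule set are illustrative assumptions, not from the lecture):

```python
# Simple reflex agent for a two-location vacuum world.
# The action depends only on the current percept (location, status);
# no state is stored between calls.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "MoveRight"
    return "MoveLeft"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # MoveRight
```

Note that the agent would loop forever between clean squares: with no memory, it cannot know the whole world is already clean.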

Model-based Agent
Needs memory; dynamic; we maintain state here
Temporal problem: location and time affect the response, like the price in Ola.
a = f(p, s) where s = T(s', a), s' is the previous state
The interaction is a sequence: s0, a0, p1, s1, a1, p2, s2, a2, p3, …, pt, st, at, …
Maintains a state variable to keep track of aspects of the environment that can't be currently observed, i.e. it has memory and knows how the environment reacts to actions (called the transition function).
The state is updated using the percept.
There is now more info for the rules to make better decisions.
Example:- A vacuum cleaner that remembers where it has already cleaned.
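A sketch of the same vacuum world with memory added (the `ModelBasedVacuum` class and its two locations "A"/"B" are hypothetical illustrations):

```python
# Model-based vacuum agent for a two-location world ("A" and "B").
# Unlike the simple reflex agent, it keeps internal state: the set of
# locations it already knows to be clean.

class ModelBasedVacuum:
    def __init__(self):
        self.cleaned = set()  # internal state s, updated from percepts

    def step(self, percept):
        location, status = percept
        if status == "Dirty":
            self.cleaned.add(location)  # it will be clean after sucking
            return "Suck"
        self.cleaned.add(location)      # observed clean: remember it
        for target in ("A", "B"):
            if target not in self.cleaned:
                return f"MoveTo({target})"
        return "NoOp"                   # everything known clean: idle

agent = ModelBasedVacuum()
print(agent.step(("A", "Dirty")))  # Suck
print(agent.step(("A", "Clean")))  # MoveTo(B)
print(agent.step(("B", "Clean")))  # NoOp
```

Once both locations are known clean the agent idles, which the memoryless reflex agent cannot do.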

Goal-based Agent
The agent has to reach a defined goal state
Planning agents use search algorithms to reach the goal
Performance measure => cost to reach the goal
The agent has the task to reach a defined goal state and is then finished.
The agent needs to move towards the goal. A special type is a planning agent that uses a search algorithm to plan a sequence of actions that leads to the goal.
Performance measure: the cost to reach the goal.
The interaction is a sequence:- s0, a0, p1, s1, a1, p2, s2, a2, …, s_goal => cost
s0:- initial state, a0:- action, p1:- percept
Example:- Solving a puzzle. What action gets me closer to the solution?
Utility-based Agent
desirability of each possible state
Performance measure => expected utility over time
The agent uses a utility function to evaluate the desirability of each possible state. This is typically expressed as the reward of being in a state, R(s).
Choose actions to stay in desirable states.
Performance measure: The discounted sum of expected utility over time.
The interaction is a sequence:- s0, a0, p1, s1, a1, p2, s2, a2, … => reward
Example:- An Autonomous Mars rover prefers states where its battery is not critically low.
Agents that Learn
The learning element modifies the agent program(reflex-based, goal based or utility based) to improve its performance.
How is the agent currently performing?
Updates how the performance element chooses actions.
Generate actions for exploration
Smart Thermostat:-
What type of agent is it?:- It changes the temperature when you are too cold/warm.
Answer:- ???
Lecture 5: (25/01/2025)
Coding Class - No Lecture
Lecture 6: (01/02/2025)
Lecture 6-8: Solving Problems by Searching
1. Introduction to Search Strategies
Overview of problem-solving through search.
2. Problem Solving and Search
Rational agents performing sequences of actions.
Agents using internal models for decision-making.
3. Contents Overview
Search problems.
Tree search.
Search space.
Uninformed search.
Informed search.
4. Understanding Search Problems
Designing goal-based agents in known, fully observable, and deterministic environments.
Example environments and agent decision-making.
5. Goal-Based Agents
Performance measure and cost-based evaluation.
Introduction to planning agents using search.
6. Planning for Search Problems
State space representation (discrete, atomic states).
Phases of solving search problems:
Search/Planning: Finding action sequences to reach a goal.
Execution: Running the plan in a known environment.
7. Definition of a Search Problem
Key components:
Initial state.
Actions (available moves).
Transition model.
Goal state.
Path cost.
8. Transition Function and Available Actions
Example grid-based transition function.
Set of possible actions (N, E, S, W).
9. Search Problem Examples
Romania Vacation Problem: Pathfinding between cities.
Vacuum World: Cleaning robot simulation.
Sliding-Tile Puzzle: Sequential search problem.
Robot Motion Planning: Continuous search space for robotic movements.
10. Solving Search Problems
Constructing search trees.
Finding optimal solutions.
11. Challenges in Search Models
State transition models may contain redundant paths.
Cycles and redundant paths increase complexity.
Lecture 7: (02/02/2025)
12. Search Tree Representation
Expanding nodes and paths in a tree structure.
Breaking cycles to prevent infinite loops.
Redundant path elimination for efficiency.
13. Differences: Traditional Tree Search vs. AI Search
Traditional tree search:
Predefined, fits in memory.
No cycles or redundant paths.
AI-based search:
Large state spaces, not stored entirely.
Requires on-the-fly tree expansion.
Memory-efficient cycle detection.
14. Tree Search Algorithm
Initialize the frontier with the starting state.
Expand nodes iteratively.
Check goal states.
Apply transition model to explore new states.
15. Tree Search Examples
Expanding nodes from different starting positions.
Cycle detection and handling.
16. Search Strategies and Properties
Search evaluation criteria:
Completeness: Guarantees finding a solution.
Optimality: Finds the least-cost solution.
Time Complexity: Computational cost.
Space Complexity: Memory requirements.
17. Time and Space Complexity of Search Strategies
Trade-offs in AI search.
Handling large state spaces efficiently.
Lecture 8: (08/02/2025)
Coding Class - No Lecture
Lecture 9: (09/02/2025)
Quiz 1: No classes
Lecture 10: (15/02/2025)
First-Order Logic
To implement these ideas we need mathematical logic.
Before talking about FOL, first understand Propositional Logic. Propositional Logic (also called Propositional Calculus or Boolean Logic) is a branch of logic that deals with propositions and their relationships. A proposition is a declarative statement that is either true (T) or false (F), but not both.
The problem with Propositional Logic (PL) is that it cannot express relationships between objects or handle variables and quantifiers. This is why we need First-Order Logic (FOL) for more complex reasoning.

Propositional Logic in AI represents knowledge using true/false statements but lacks object relationships. First-Order Logic (FOL) extends it by adding quantifiers, predicates, and variables for richer reasoning.
Syntax:
Constants (a,b,c): Specific objects.
Variables (x,y,z): General objects.
Predicates (P(x)): Properties of objects.
Functions (f(x)): Maps objects to objects.
Quantifiers:
Universal (∀x): True for all x.
Existential (∃x): True for some x.
Example:
"All humans are mortal" → ∀x(Human(x) → Mortal(x))

Propositional Logic
Propositional Logic is a formal system that deals with true or false statements (propositions) using logical connectives.
Example:
"If it rains, the ground is wet." β RainβWetGround
Combining Propositional Logic and First-Order Logic (FOL) with all variables explained:
Logical Components:
Constants (Specific Objects) β John, 5, Delhi
Variables (General Objects) β x, y, z
Predicates (Properties/Relations) β Student(x), Married(x), Father(x, y)
Functions (Maps objects to objects) β Father(John) = Mike
Logical Connectives Β¬ (Negation), β§ (And), β¨ (Or), β (Implication), β (Biconditional)
Quantifiers (GΓ©nΓ©ralisation/Existence):
Universal (βx) β True for all x.
Existential (βx) β True for some x.
Example
Negation (Β¬P) β "Not P"
Conjunction (P β§ Q) β "P and Q"
Disjunction (P β¨ Q) β "P or Q"
Implication (P β Q) β "If P then Q
Biconditional (P β Q) β "P if and only if Q"DDS
Using Functions & Quantifiers Together
Example:"Every student has a unique ID."FOL:βx(Student(x)ββy(ID(x)=y))
Limitations
Lack of Expressiveness → cannot represent relationships between objects.
Example: "All humans are mortal" cannot be expressed directly.
No Quantification → cannot handle general statements about multiple objects.
Example: Cannot express "Some students like math."
Scalability Issues → requires too many propositions for complex scenarios.
Example: Representing family relationships needs separate propositions for each person.
No Variable Support → cannot generalize statements using variables.
Example: Cannot define "If a person is a parent, they have a child."

What Does "Truth" Stand for in First-Order Logic (FOL)?
In First-Order Logic (FOL), truth refers to whether a given statement (formula) is true or false under a particular interpretation in a given model.
Why Do We Need "Truth" in FOL?
To Validate Logical Statements
Helps determine if a formula correctly describes a real-world scenario.
Example: If "All humans are mortal" is true in our model, then "Socrates is mortal" should also be true.
To Build Knowledge-Based Systems
AI, databases, and formal verification systems rely on FOL to ensure logical correctness.
For Inference and Reasoning
Allows systems to deduce new facts from given facts and rules.
Example: If we know "John is a student" and "All students study", we can infer "John studies."
To Define Models for Theories
Mathematics and computer science use FOL to define structures (e.g., numbers, graphs).
Example: A true statement in number theory (e.g., "For every x, x + 0 = x") must hold for all numbers.
Key Concepts:
Interpretation → assigns meaning to constants, predicates, and functions.
Model (ℳ) → a structure (interpretation) in which all the given statements are true.
Satisfiability → a formula is satisfiable if there exists a model in which it holds.
Example:
Domain: People
Interpretation:
Father(John, Mike) means "John is the father of Mike."
∀x (Father(x, Mike) → Male(x)) means "Everyone who is Mike's father is male."
If ℳ satisfies this formula, it is true in that model.
State space
(Prof. Notes in Next PPT)
A state space is the set of all possible states a system can be in, used for search problems in AI. It includes:
States (S) → possible configurations.
Initial State (S0) → starting point.
Actions (A) → possible moves between states.
Transition Function (T) → defines how actions change states.
Goal State (Sg) → desired outcome.
Example: 8-Puzzle Problem
States: All tile arrangements.
Actions: Move tiles left, right, up, or down.
Goal: Reach the correct tile order.
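The 8-puzzle transition function can be sketched as follows (encoding a state as a 9-tuple with 0 for the blank is an assumption of this sketch):

```python
# Transition function for the 8-puzzle: a state is a tuple of 9 tiles
# (0 = blank), and an action slides the blank up/down/left/right.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def successors(state):
    """Yield (action, next_state) pairs reachable from `state`."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Reject moves that would slide the blank off the grid edge.
        if action == "Up" and row == 0: continue
        if action == "Down" and row == 2: continue
        if action == "Left" and col == 0: continue
        if action == "Right" and col == 2: continue
        target = blank + delta
        s = list(state)
        s[blank], s[target] = s[target], s[blank]  # swap blank with tile
        yield action, tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)  # blank in the centre: 4 moves
for action, nxt in successors(start):
    print(action, nxt)
```

With the blank in the centre all four actions apply; in a corner only two do, which is exactly the set A of available actions varying by state.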
Lecture 11: (16/02/2025)
Uninformed Search Strategies
Search algorithms that explore the state space without domain-specific knowledge.
Blind search: No information about the goal except goal test.
Explores all possible paths systematically.
Examples: Breadth-First Search (BFS), Depth-First Search (DFS).
Examples:
BFS: Finding the shortest path in a maze.
DFS: Exploring all possible moves in a tic-tac-toe game.
Breadth-First Search (BFS)
An uninformed search algorithm that explores all nodes at the current depth before moving to the next level.
Uses a queue (FIFO) for traversal.
Guarantees shortest path in an unweighted graph.
Time Complexity: O(b^d), where b is the branching factor and d is the depth.
Examples:
Finding the shortest route between two cities in a road network.
Solving a word ladder puzzle by transforming one word to another in the fewest steps.
Key Terms in BFS and Search Algorithms
Node.state → represents the current state of the node in the problem space.
Node.parent → stores the previous node (parent) from which this node was reached.
Node.action → the action taken to move from the parent node to the current node.
Node.path_cost → the total cost from the start node to the current node.
Node.depth → the level of the node in the search tree (distance from the root).
Example (Pathfinding in a Grid)
Start:
Node(A, parent=None, action=None, path_cost=0, depth=0)
After moving right:
Node(B, parent=A, action="Move Right", path_cost=1, depth=1)
After moving down:
Node(C, parent=B, action="Move Down", path_cost=2, depth=2)
These terms help track paths and reconstruct solutions in search problems!
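A minimal `Node` class with these fields, plus path reconstruction via parent links (the names follow the terms above; the grid example is illustrative):

```python
# Minimal search-tree node matching the fields described above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost
        self.depth = 0 if parent is None else parent.depth + 1

def solution(node):
    """Walk parent links back to the root to reconstruct the action path."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

a = Node("A")
b = Node("B", parent=a, action="Move Right", path_cost=1)
c = Node("C", parent=b, action="Move Down", path_cost=2)
print(c.depth, solution(c))  # 2 ['Move Right', 'Move Down']
```

Storing only the parent pointer keeps memory low: the full path is recovered on demand once the goal node is found.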
Queue in BFS with Terminology
A queue is a FIFO (First-In-First-Out) data structure used in BFS to explore nodes level by level.
Terminology in BFS Queue:
Queue.front → the first element (dequeued first).
Queue.rear → the last element (new elements are enqueued here).
Queue.enqueue(node) → adds a node to the rear.
Queue.dequeue() → removes and returns the front node.
Queue.is_empty() → checks if the queue is empty.
Example (BFS on a graph with edges A → B, A → C, B → D, C → E)
Initial State:
Queue = [A]
Dequeue A → Enqueue B, C → Queue = [B, C]
Dequeue B → Enqueue D → Queue = [C, D]
Dequeue C → Enqueue E → Queue = [D, E]
Dequeue D → Queue = [E] (Goal Reached)
This queue ensures BFS explores nodes in order without missing any!
Breadth-First Search (BFS) Algorithm
Step-by-Step Explanation:
Initialize:
Create a queue and enqueue the starting node.
Maintain a set (or list) of visited nodes to avoid cycles.
Processing Nodes:
While the queue is not empty:
Dequeue the front node and mark it as visited.
Check if it's the goal node; if yes, return success.
Enqueue all unvisited neighboring nodes.
Repeat Until Goal is Found:
Continue dequeuing and processing nodes level by level.
If the queue is empty and the goal is not found, return failure.
Pseudocode for BFS
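A runnable Python version of the steps above (the example graph is assumed for illustration, matching the A → B, C trace from earlier):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns the shortest path (fewest edges)
    from start to goal, or None if the goal is unreachable."""
    frontier = deque([start])  # FIFO queue of nodes to expand
    parent = {start: None}     # parent links; also serves as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:       # goal test on dequeue
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]  # reverse: root-to-goal order
        for neighbour in graph[node]:
            if neighbour not in parent:   # skip already-visited nodes
                parent[neighbour] = node
                frontier.append(neighbour)
    return None  # queue empty, goal never found

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'D']
```

Marking nodes visited when they are enqueued (not dequeued) avoids putting the same node on the queue twice.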
Time Complexity for BFS:
b = Branching factor (number of children per node on average).
d = Depth of the shallowest goal node (or height in a tree).
Why O(b^d)?
Level 0 (root): 1 node
Level 1: b nodes
Level 2: b² nodes
Level d: b^d nodes
Since BFS explores all nodes up to depth d, the total number of nodes visited is 1 + b + b² + … + b^d = O(b^d).
Applying to Example
b = 2 (each node has ≤ 2 children).
d = 2 (max depth).
Nodes visited ≈ O(2²) = O(4).
Thus, BFS worst-case time complexity is O(b^d), which grows exponentially with depth!
Best-First Search (BeFS) vs. Breadth-First Search (BFS)
Search Strategy: BeFS expands the most promising node (based on a heuristic function); BFS expands all nodes level by level (FIFO).
Data Structure: BeFS uses a priority queue (ordered by heuristic value); BFS uses a plain queue (FIFO - First In, First Out).
Uses Heuristic?: BeFS yes, it uses h(n) (estimated cost to goal); BFS no, it explores blindly.
Completeness: BeFS not always (depends on the heuristic); BFS yes, it always finds a solution (if one exists).
Optimality: BeFS not guaranteed (can get stuck in local minima); BFS yes, it finds the shortest path in an unweighted graph.
Time Complexity: BeFS O(b^d) in the worst case, but better with a good heuristic; BFS O(b^d) (exponential in depth).
Space Complexity: BeFS O(b^d) (stores fewer nodes than BFS in some cases); BFS O(b^d) (stores all nodes at a level).
When to Use?: BeFS when a good heuristic is available (e.g., A* uses BeFS with cost); BFS when searching for the shortest path in an unweighted graph.
Example Use Cases:
BeFS: Used in AI pathfinding, like Google Maps with heuristics.
BFS: Used in shortest path problems in unweighted graphs, social networks.
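A sketch of greedy best-first search with a priority queue (the graph and the heuristic values h(n) below are invented for illustration):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node with
    the lowest heuristic value h(n). Fast, but not guaranteed optimal."""
    frontier = [(h[start], start, [start])]  # priority queue keyed on h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                heapq.heappush(frontier,
                               (h[neighbour], neighbour, path + [neighbour]))
    return None

# Illustrative graph and heuristic (assumed, not from the notes).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(greedy_best_first(graph, h, "A", "D"))  # ['A', 'C', 'D']
```

Swapping the priority key from h(n) to g(n) + h(n) (path cost so far plus the heuristic) turns this into A* search.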
Lecture 12: (22/02/2025)
Coding Class - No Lecture
Lecture 13: (23/02/2025)
Practical Class with Search topics
Lecture 14: (01/03/2025)
IDS (Iterative Deepening Search)
Greedy best-first search

Lecture 15: (02/03/2025)
Knowledge-based Agent
Lecture 16: (08/03/2025)
Class Recording
Logic to Represent Knowledge
Sentence
Validity and Satisfiability
Tautology: Either the sun rises from the east, or it does not.
Satisfiable: Some IITians are entrepreneurs.
Unsatisfiable: A student is both a topper and not a topper at the same time.
Real Time Example
True → False = False
Submit assignment → Full marks
Go to work → Get paid
Here, Full marks is true if Submit assignment is true; likewise, Get paid is true if Go to work is true.
False → True = True
Rain → Take umbrella
Studied 10 hours → Pass exam
Take umbrella can happen whether or not it rains; Pass exam can happen whether or not you studied 10 hours.
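The implication rows above can be checked with a tiny truth-table sketch:

```python
# Truth table for implication P → Q:
# false only when P is true and Q is false, true in every other case.
def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"{p!s:5} → {q!s:5} = {implies(p, q)}")
```

The only False row is P=True, Q=False, i.e. "submitted the assignment but did not get full marks".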
Logical Equivalence
Knowledge-based agent
Lecture 17: (09/03/2025)
Class Recording
Lecture 18: (16/03/2025)
Class Recording
Lecture 19: (22/03/2025)
Class Recording
Lecture 20: (23/03/2025)
Class Recording
Lecture 21: (29/03/2025)
Class Recording
Lecture 22: (30/03/2025)
Class Recording
Lecture 23: (05/04/2025)
Class Recording
Coding Class - No Lecture
