
9. Artificial Intelligence and Neural Networks (ACtE09)

9.1 Introduction to AI and Intelligent Agents:

  • Concept of Artificial Intelligence (AI): AI refers to the creation of machines or systems that can perform tasks typically requiring human intelligence, such as learning, reasoning, and decision-making.
  • AI Perspectives: AI can be viewed from various perspectives, including symbolic AI (based on rule-based systems and logic) and sub-symbolic AI (based on neural networks and machine learning).
  • History of AI: The development of AI spans several decades, beginning in the 1950s with the work of Alan Turing, and progressing through stages of symbolic logic, expert systems, and modern deep learning.
  • Applications of AI: AI is used in a variety of fields, including autonomous vehicles, robotics, natural language processing (NLP), healthcare diagnostics, finance, and entertainment (e.g., game AI).
  • Foundations of AI: AI's foundation includes areas like search algorithms, knowledge representation, machine learning, and reasoning.
  • Introduction to Agents: An intelligent agent is an entity that perceives its environment and takes actions to achieve specific goals, often with some degree of autonomy.
  • Structure of Intelligent Agent: Agents consist of sensors (to perceive the environment), actuators (to take actions), and an internal model to process information.
  • Properties of Intelligent Agents: An intelligent agent should be autonomous, reactive, proactive, and capable of learning and adapting to its environment.
  • PEAS Description of Agents: PEAS stands for Performance measure (how the agent’s performance is measured), Environment (the context in which the agent operates), Actuators (tools for acting on the environment), and Sensors (methods for perceiving the environment).
  • Types of Agents:
    • Simple Reflex: These agents react to the current percept with predefined condition-action rules (a minimal sketch follows this list).
    • Model-Based: They maintain an internal model of the world to make decisions.
    • Goal-Based: Agents that act to achieve specific goals.
    • Utility-Based: These agents evaluate actions based on a utility function that measures the desirability of outcomes.
  • Environment Types:
    • Deterministic: The environment behaves in a predictable manner.
    • Stochastic: The environment includes elements of chance, making outcomes uncertain.
    • Static: The environment remains unchanged during decision-making.
    • Dynamic: The environment can change while the agent is deliberating.
    • Fully Observable: The agent's sensors give it access to the complete state of the environment.
    • Partially Observable: Some parts of the environment are hidden from the agent's sensors.
    • Single Agent: The environment contains only one agent.
    • Multi-Agent: Multiple agents exist within the environment, interacting with each other.
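
To make the agent vocabulary above concrete, here is a minimal sketch of a simple reflex agent in Python. It uses the classic two-square vacuum-cleaner world; the square names, percept format, and condition-action rules are illustrative assumptions, not part of the syllabus.

```python
# A minimal simple reflex agent in the two-square vacuum world.
# The environment, percepts, and rules below are illustrative assumptions.

def reflex_vacuum_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

# Simulate a few steps: the dict below plays the role of the environment.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
for _ in range(4):
    action = reflex_vacuum_agent((location, world[location]))
    print(f"At {location} ({world[location]}) -> {action}")
    if action == "Suck":
        world[location] = "Clean"
    else:
        location = "B" if action == "Right" else "A"
```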

9.2 Problem Solving and Searching Techniques:

  • Problem Definition: Problem-solving involves representing a problem as a state space and applying algorithms to find a solution. The goal is to explore the state space efficiently.
  • Problem Formulation: This involves identifying the initial state, goal state, and the actions available to move from one state to another.
  • Well-defined Problems: Problems with clear goals, rules, and constraints.
  • Constraint Satisfaction Problem (CSP): These problems require satisfying a set of constraints (e.g., Sudoku puzzles) to reach a solution.
  • Uninformed Search Techniques:
    • Depth First Search (DFS): Explores one path as deep as possible before backtracking.
    • Breadth First Search (BFS): Explores all nodes at a given depth before moving deeper (BFS and DFS sketches follow this list).
    • Depth Limited Search: A variant of DFS with a predefined depth limit.
    • Iterative Deepening Search: A combination of DFS and BFS that increases the depth limit gradually.
    • Bidirectional Search: Searches from both the initial state and the goal state, meeting in the middle.
  • Informed Search Techniques:
    • Greedy Best First Search: Expands the node that appears closest to the goal according to the heuristic alone, ignoring the cost already incurred.
    • A* Search: Combines uniform-cost and greedy search by expanding the node with the lowest f(n) = g(n) + h(n), where g(n) is the cost to reach the node and h(n) is the estimated cost from it to the goal (a sketch follows this list).
    • Hill Climbing: Moves to the highest neighboring node, but can get stuck at local optima.
    • Simulated Annealing: A probabilistic search technique that avoids local optima by occasionally allowing worse solutions.
  • Game Playing: Involves strategies for optimal decision-making in adversarial settings (e.g., chess).
  • Adversarial Search Techniques:
    • Mini-max Search: Used for two-player zero-sum games; the maximizing player chooses the move that is best even when the opponent replies with the move that minimizes the maximizer's score.
    • Alpha-Beta Pruning: An optimization of mini-max that skips branches which cannot affect the final decision (a combined sketch follows this list).
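
The uninformed strategies above are easiest to compare in code. Below is a minimal sketch of BFS and DFS over a toy graph represented as an adjacency dictionary; the graph itself and the node names are illustrative.

```python
# BFS and DFS over a graph given as an adjacency dict (an illustrative toy graph).
from collections import deque

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"], "G": []}

def bfs(start, goal):
    """Breadth-first: explore all nodes at one depth before going deeper."""
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()          # FIFO queue -> shallowest node first
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])

def dfs(start, goal, path=None, visited=None):
    """Depth-first: follow one path as deep as possible, then backtrack."""
    path = path or [start]
    visited = visited or {start}
    if path[-1] == goal:
        return path
    for nxt in graph[path[-1]]:
        if nxt not in visited:
            visited.add(nxt)
            result = dfs(start, goal, path + [nxt], visited)
            if result:
                return result

print(bfs("S", "G"))   # fewest edges: ['S', 'B', 'G']
print(dfs("S", "G"))   # depends on expansion order: ['S', 'A', 'C', 'G']
```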
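
For informed search, a minimal A* sketch follows. The edge costs and heuristic values are illustrative, with the heuristic chosen to be admissible (it never overestimates the true remaining cost).

```python
# A* on a weighted graph: expand the node with the lowest f(n) = g(n) + h(n).
# The graph, edge costs, and heuristic values here are illustrative assumptions.
import heapq

edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}   # admissible heuristic estimates

def a_star(start, goal):
    frontier = [(h[start], 0, [start])]          # (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        for nxt, cost in edges[node]:
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, path + [nxt]))

print(a_star("S", "G"))   # (['S', 'A', 'B', 'G'], 6): the cheapest path
```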
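
Finally, a sketch of mini-max with alpha-beta pruning on a hand-built depth-2 game tree; the nested-list tree encoding and the leaf scores are choices made for this sketch.

```python
# Minimax with alpha-beta pruning on a hand-made game tree (illustrative values).
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, int):          # leaf: a static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # opponent will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Depth-2 tree: MAX to move, each sublist is a MIN node over leaf scores.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))   # 3: max over the three MIN values (3, 2, 2)
```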

9.3 Knowledge Representation:

  • Knowledge Representations and Mappings: Methods to model and structure knowledge so that a machine can reason about it.
  • Approaches to Knowledge Representation: Includes methods like semantic networks, frames, and logic-based systems.
  • Issues in Knowledge Representation: Challenges like dealing with uncertainty, incompleteness, and the complexity of representing human knowledge.
  • Semantic Nets: A network of nodes and edges that represent concepts and their relationships.
  • Frames: Data structures that represent stereotypical situations, e.g., a “person” frame might contain attributes like name, age, and address.
  • Propositional Logic (PL): A logical system that uses propositions (simple statements) and logical connectives like AND, OR, and NOT to form complex statements.
    • Inference using Resolution: A refutation technique: to prove a statement, its negation is added to the knowledge base and clauses are resolved until a contradiction (the empty clause) is derived (a sketch follows this list).
  • First-Order Predicate Logic (FOPL): An extension of propositional logic that adds predicates, quantifiers, and variables to express statements about objects and their relations.
    • Rules of Inference: Procedures for deriving new statements from given ones, e.g., modus ponens, universal instantiation.
  • Bayes' Rule and Its Use: The rule P(H|E) = P(E|H) · P(H) / P(E) for updating the probability of a hypothesis H after observing evidence E (a worked example follows this list).
  • Bayesian Networks: A graphical model for representing probabilistic relationships among variables.
  • Reasoning in Belief Networks: Inference processes within belief networks, where the goal is to update beliefs based on observed evidence.
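
As a concrete instance of resolution in propositional logic, here is a minimal refutation prover; representing clauses as frozensets of string literals with "~" marking negation is an encoding chosen for this sketch.

```python
# Propositional resolution by refutation: clauses are frozensets of literals,
# negation is written with a leading "~". The KB below is illustrative.
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (may include the empty clause)."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            out.append((c1 - {lit}) | (c2 - {neg}))
    return out

def resolution_entails(kb, query):
    """KB |= query iff KB plus the negated query derives the empty clause."""
    neg_q = query[1:] if query.startswith("~") else "~" + query
    clauses = set(kb) | {frozenset([neg_q])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:               # empty clause: contradiction found
                    return True
                new.add(frozenset(r))
        if new <= clauses:              # nothing new: query is not entailed
            return False
        clauses |= new

# KB: (~P or Q) and P -- does it entail Q? (modus ponens via resolution)
kb = [frozenset(["~P", "Q"]), frozenset(["P"])]
print(resolution_entails(kb, "Q"))   # True
```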
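
And a worked example of Bayes' rule on the standard diagnostic-test setup; all the probabilities below are illustrative numbers.

```python
# Bayes' rule on a diagnostic-test example (the numbers are illustrative):
# P(Disease) = 0.01, sensitivity P(Pos|Disease) = 0.95,
# false-positive rate P(Pos|~Disease) = 0.05.
p_d = 0.01
p_pos_given_d = 0.95
p_pos_given_not_d = 0.05

# Total probability of a positive test (the denominator P(E)).
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Posterior: P(Disease | Pos) = P(Pos | Disease) * P(Disease) / P(Pos)
posterior = p_pos_given_d * p_d / p_pos
print(f"P(Disease | Positive) = {posterior:.3f}")   # ≈ 0.161, despite a 95% test
```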

9.4 Expert System and Natural Language Processing:

  • Expert Systems: AI systems designed to emulate human expertise in specific domains, such as medical diagnosis or legal advice.
  • Architecture of an Expert System: Consists of a knowledge base, an inference engine, and a user interface (a minimal sketch follows this list).
  • Knowledge Acquisition: The process of gathering and structuring knowledge for use in an expert system.
  • Declarative Knowledge vs Procedural Knowledge: Declarative knowledge refers to facts and data, while procedural knowledge refers to the methods or procedures for solving tasks.
  • Development of Expert Systems: Involves defining the problem, gathering knowledge, and implementing the system.
  • Natural Language Processing (NLP) Terminology: NLP involves analyzing and generating human language with computers. Key terms include syntax, semantics, and pragmatics.
  • Natural Language Understanding and Generation: NLP systems must first understand the input and then generate appropriate responses.
  • Steps of Natural Language Processing: Tokenization, part-of-speech tagging, parsing, semantic analysis, and generation (the first two steps are sketched after this list).
  • Applications of NLP: Examples include translation, sentiment analysis, and question answering.
  • NLP Challenges: Challenges like ambiguity, polysemy, and context dependence.
  • Machine Vision Concepts: Techniques for enabling machines to interpret visual information.
  • Machine Vision Stages: Image acquisition, pre-processing, feature extraction, and recognition.
  • Robotics: The study and design of robots that perform tasks autonomously or semi-autonomously.
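
A toy version of the expert-system architecture above: a set of facts (the knowledge base) and a forward-chaining inference engine that fires rules until nothing new can be concluded. The medical-flavoured rules are invented for illustration only.

```python
# A toy rule-based expert system: a knowledge base of facts plus a
# forward-chaining inference engine. Rules and facts are illustrative.
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]
facts = {"has_fever", "has_cough", "short_of_breath"}

def forward_chain(rules, facts):
    """Fire every rule whose conditions hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(rules, facts))
# adds 'suspect_flu' and then 'refer_to_doctor' to the original facts
```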
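
The first two NLP steps, tokenization and part-of-speech tagging, can be sketched with the standard library alone; the tiny tag lexicon below is an illustrative stand-in for a real statistical tagger.

```python
# The first steps of the NLP pipeline on one sentence, standard library only.
import re

sentence = "The robot sees the red ball."

# 1. Tokenization: split the raw text into word and punctuation tokens.
tokens = re.findall(r"\w+|[^\w\s]", sentence.lower())

# 2. Part-of-speech tagging: here a toy dictionary lookup.
lexicon = {"the": "DET", "robot": "NOUN", "sees": "VERB",
           "red": "ADJ", "ball": "NOUN", ".": "PUNCT"}
tagged = [(tok, lexicon.get(tok, "UNK")) for tok in tokens]
print(tagged)
# [('the', 'DET'), ('robot', 'NOUN'), ('sees', 'VERB'),
#  ('the', 'DET'), ('red', 'ADJ'), ('ball', 'NOUN'), ('.', 'PUNCT')]
```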

9.5 Machine Learning:

  • Introduction to Machine Learning: Machine learning involves building algorithms that allow computers to learn from data and make predictions or decisions.
  • Concepts of Learning: Machine learning systems learn by recognizing patterns in data.
  • Supervised, Unsupervised, and Reinforcement Learning:
    • Supervised Learning: The system learns from labeled data and makes predictions based on that.
    • Unsupervised Learning: The system identifies patterns in unlabeled data.
    • Reinforcement Learning: The system learns by interacting with an environment and receiving rewards or punishments.
  • Inductive Learning (Decision Tree): A method of learning by creating decision trees that map input features to output labels.
  • Statistical-based Learning (Naive Bayes Model): A probabilistic classifier that applies Bayes' theorem under the assumption that features are conditionally independent given the class (a sketch follows this list).
  • Fuzzy Learning: Deals with reasoning that is approximate rather than fixed and exact, often used in situations with uncertainty.
  • Fuzzy Inference System: A system for mapping inputs to outputs using fuzzy membership functions and rules (a minimal sketch follows this list).
  • Genetic Algorithm: A search heuristic that mimics natural evolution through selection, crossover, mutation, and fitness evaluation (a minimal sketch follows this list).
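
A from-scratch sketch of the Naive Bayes model on a toy weather dataset; the features, the dataset, and the use of add-one (Laplace) smoothing are illustrative choices.

```python
# A from-scratch Naive Bayes classifier on a toy weather dataset:
# P(class | x) ∝ P(class) · Π_i P(x_i | class), with Laplace smoothing.
from collections import Counter, defaultdict

data = [  # ((outlook, humidity), play?)
    (("sunny", "high"), "no"), (("sunny", "normal"), "yes"),
    (("rainy", "high"), "no"), (("overcast", "high"), "yes"),
    (("overcast", "normal"), "yes"), (("rainy", "normal"), "yes"),
]

class_counts = Counter(label for _, label in data)
feat_counts = defaultdict(Counter)   # (feature index, label) -> value counts
values = defaultdict(set)            # feature index -> all values seen
for features, label in data:
    for i, v in enumerate(features):
        feat_counts[(i, label)][v] += 1
        values[i].add(v)

def predict(features):
    scores = {}
    for label, n in class_counts.items():
        score = n / len(data)        # prior P(class)
        for i, v in enumerate(features):
            # Laplace-smoothed likelihood P(x_i | class).
            score *= (feat_counts[(i, label)][v] + 1) / (n + len(values[i]))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict(("sunny", "high")))    # "no" on this toy data
```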
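
A minimal fuzzy inference sketch: triangular membership functions, two rules, and a weighted-average defuzzification step (a Sugeno-style simplification). The temperature ranges and output speeds are invented for illustration.

```python
# Fuzzy inference for a fan controller; all sets and rules are illustrative.
def triangular(x, a, b, c):
    """Membership rises from a, peaks at b, and falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    cold = triangular(temp_c, -10, 0, 20)   # degree to which it is "cold"
    hot = triangular(temp_c, 15, 35, 50)    # degree to which it is "hot"
    # Rules: IF cold THEN speed 10%; IF hot THEN speed 90%.
    weights, outputs = [cold, hot], [10.0, 90.0]
    total = sum(weights)
    if total == 0:
        return 50.0                          # no rule fires: fall back to middle
    return sum(w * o for w, o in zip(weights, outputs)) / total

print(fan_speed(10))   # mostly "cold": low fan speed
print(fan_speed(30))   # mostly "hot": high fan speed
```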
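
And a minimal genetic algorithm on the OneMax problem (maximize the number of 1-bits in a bitstring); the population size, mutation rate, and generation count are arbitrary illustrative settings.

```python
# A minimal genetic algorithm on OneMax; hyperparameters are illustrative.
import random

random.seed(0)
LENGTH, POP, GENS, MUT = 20, 30, 40, 0.02
fitness = sum                                 # fitness = number of 1s

def tournament(pop):
    """Selection: keep the fitter of two randomly chosen individuals."""
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, LENGTH)     # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [1 - g if random.random() < MUT else g for g in child]  # mutation
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best), best)                    # usually 20 (all ones) or close
```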

9.6 Neural Networks:

  • Biological Neural Networks vs Artificial Neural Networks (ANN): Biological networks consist of neurons in the human brain, while ANNs are computational models that simulate these networks.
  • McCulloch-Pitts Neuron: A simple model of a biological neuron, based on binary inputs and outputs.
  • Mathematical Model of ANN: Involves representing neurons and their connections with mathematical equations.
  • Activation Functions: Functions that determine a neuron's output based on its input.
  • Architectures of Neural Networks: Refers to the organization of layers in an ANN (input layer, hidden layers, and output layer).
  • The Perceptron: The simplest type of neural network, consisting of a single layer of weighted inputs feeding a threshold unit (a training sketch follows this list).
  • The Learning Rate: A parameter that controls how quickly the network adjusts its weights during training.
  • Gradient Descent: An optimization technique that minimizes the error by adjusting the weights in the direction of the negative gradient.
  • The Delta Rule: A learning rule that updates each weight in proportion to the prediction error and the corresponding input: Δw = η · (t − y) · x.
  • Hebbian Learning: A learning principle based on the idea that neurons that fire together wire together.
  • Adaline Network: The Adaptive Linear Neuron, a single-layer network with linear activation trained by the delta (least-mean-squares) rule, suited to linear regression problems.
  • Multilayer Perceptron Neural Networks: A type of neural network with multiple layers of neurons that allows for more complex decision boundaries.
  • Backpropagation Algorithm: A method for training multilayer networks by propagating the output error backward through the layers to compute the weight updates (a sketch follows this list).
  • Hopfield Neural Network: A recurrent network used for pattern recognition and associative memory; patterns are stored with Hebbian weights and recalled by iterative state updates (a sketch follows this list).
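
A single perceptron trained on the logical AND function using the delta-style update w ← w + η(t − y)x; the learning rate, epoch count, and the trick of encoding the bias as a constant third input are illustrative choices.

```python
# A perceptron learning AND; hyperparameters and encoding are illustrative.
def step(z):
    return 1 if z >= 0 else 0

# Inputs with a constant bias term appended; targets implement logical AND.
samples = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 1), 0), ((1, 1, 1), 1)]
w = [0.0, 0.0, 0.0]
eta = 0.1

for epoch in range(20):
    for x, target in samples:
        y = step(sum(wi * xi for wi, xi in zip(w, x)))
        error = target - y
        # Delta-style update: shift each weight toward reducing the error.
        w = [wi + eta * error * xi for wi, xi in zip(w, x)]

for x, target in samples:
    y = step(sum(wi * xi for wi, xi in zip(w, x)))
    print(x[:2], "->", y, "(target", str(target) + ")")
```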
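
A compact backpropagation sketch: a two-layer sigmoid network trained by batch gradient descent to learn XOR. The layer sizes, learning rate, iteration count, and random seed are illustrative, and NumPy is assumed to be available.

```python
# A two-layer MLP trained by backpropagation on XOR (illustrative settings).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)          # hidden activations
    Y = sigmoid(H @ W2 + b2)          # network output

    # Backward pass: propagate the error through the sigmoid derivatives.
    dY = (Y - T) * Y * (1 - Y)        # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)    # hidden-layer delta

    # Gradient-descent weight updates (negative gradient direction).
    W2 -= eta * H.T @ dY
    b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH
    b1 -= eta * dH.sum(axis=0)

print(np.round(Y.ravel(), 2))         # typically close to [0, 1, 1, 0]
```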
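
And a tiny Hopfield network that stores one bipolar pattern with the Hebbian outer-product rule and recalls it from a corrupted cue; the pattern and the number of update sweeps are illustrative.

```python
# A tiny Hopfield network: store one +1/-1 pattern with Hebbian learning,
# then recover it from a corrupted cue. The pattern is illustrative.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1])          # the stored memory
n = len(pattern)

# Hebbian learning: units that fire together get a positive weight.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                             # no self-connections

# Start from a corrupted version of the pattern (two flipped bits).
state = pattern.copy()
state[0] *= -1
state[3] *= -1

# Asynchronous updates: each unit takes the sign of its weighted input.
for _ in range(3):                                 # a few sweeps suffice here
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print((state == pattern).all())                    # True: the memory is recalled
```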