
Introduction to Operating Systems


Ayush Prashar

Date: 8th November, 2024


    Agenda

    • Introduction to OS
    • Importance of OS
    • Key functions of OS
    • Goals of OS
    • Types of OS
    • Process
    • Threads
    • Process vs Threads
    • Multithreading
    • Multitasking
    • Multithreading vs Multitasking

    Introduction to Operating Systems

    An operating system (OS) is a vital component of a computer system: it manages hardware resources and provides a platform for application software. It acts as an interface between the user and the computer hardware, ensuring efficient resource management and hiding hardware complexity from the user. Without an OS, applications would be overly complex and bulky, because each one would need to handle direct hardware interaction itself. Repeating the same resource-allocation code across every application would also violate the "Don't Repeat Yourself" (DRY) principle, a fundamental software-development concept that aims to reduce redundancy by giving every piece of knowledge in a system a single, unambiguous representation.

    The importance of an Operating System (OS) 

    1. Bulky and Complex Applications

    Without an OS, every application would need to contain its own code to interact directly with hardware, such as managing memory, performing I/O operations, and controlling devices like printers and network cards. This would result in the following:

    • Redundant Code: Every application would need to include low-level code to communicate with the CPU, memory, and other hardware devices. This would make each application extremely bulky because many applications would include similar code for these common tasks.
    • Increased Development Time: Developers would have to write complex and low-level code for each hardware interaction, significantly increasing development time. For example, a word processing application would need to include code to interact with disk storage, memory allocation, and the display—tasks that are generally abstracted by the OS.
    • Higher Chance of Errors: Writing code to directly manage hardware increases the likelihood of bugs and errors. Without an OS to handle resource allocation and management, applications would be responsible for managing these themselves, which could lead to improper resource usage, system crashes, or unpredictable behavior.

    By providing a layer of abstraction, the OS removes the need for applications to manage hardware directly, allowing them to be smaller, less complex, and easier to develop.

    2. Resource Exploitation

    In the absence of an OS, applications would not have a centralized system for managing resources like CPU time, memory, and I/O devices. This can lead to several problems:

    • Monopoly Over Resources: A single application could exploit system resources without limitations. For example, if there is no operating system to manage CPU scheduling, a single application could monopolize the CPU, preventing other applications from running efficiently. This would make it impossible to run multiple programs simultaneously in a fair and efficient manner.
    • No Efficient Multitasking: Multitasking, which allows multiple programs to run concurrently, would not be possible without the OS managing the allocation of system resources. Applications would have to manually handle when and how they give up resources, leading to inefficiency, as one misbehaving program could easily halt the entire system.
    • Inefficient Hardware Utilization: Without the OS to manage devices and peripherals (e.g., printers, network interfaces), multiple applications may try to access the same device simultaneously, leading to conflicts, deadlocks, or inefficient use of hardware. The OS coordinates access to these devices, ensuring that resources are shared in a controlled manner.

    The OS prevents this kind of resource exploitation by managing resource allocation across all applications, ensuring that each application gets its fair share of CPU time, memory, and I/O bandwidth.

    3. Memory Protection

    Memory is a critical resource, and the OS provides isolation and protection between different applications to ensure system stability and security. Without the OS, memory management would have the following issues:

    • No Memory Isolation: Each application would directly access and manage its own memory, which could lead to overlapping memory regions. If one application accidentally or intentionally writes data into another application’s memory space, it could corrupt that application’s data or cause it to crash.
    • Faulty Applications Affecting the System: If one application behaves unexpectedly or crashes, it could overwrite key system areas in memory, affecting other applications or even causing the entire system to crash. Without an OS to protect memory boundaries, one faulty application could bring down the entire system.
    • Security Vulnerabilities: Without memory protection, malicious software could easily access the memory of other applications, leading to security breaches. Sensitive data (e.g., passwords, user data) stored in memory by one application could be accessed and exploited by another application.

    To prevent these issues, the OS manages memory by providing virtual memory. This ensures that each application operates within its own isolated memory space, and it cannot access or interfere with the memory allocated to other applications. The OS also performs memory allocation and deallocation as needed, preventing memory leaks and ensuring that applications don’t consume more memory than necessary.

    Key functions of an OS

    1. Resource Management

    The OS manages all the hardware and software resources of a computer system. The key resources include the CPU, memory, files, and I/O devices like printers, disk drives, and network interfaces. Resource management involves the following:

    • CPU Management: The OS schedules tasks to efficiently use the CPU. It uses algorithms (e.g., round-robin, priority scheduling) to decide which process should run and for how long. This ensures maximum CPU utilization and fairness among processes, preventing one process from hogging the CPU.
    • Memory Management: The OS allocates memory to various programs and ensures that no two programs interfere with each other’s memory. It keeps track of each byte of memory in the system, allocating or deallocating memory space as required by different programs. Techniques like paging and segmentation allow the OS to manage memory more efficiently, helping avoid fragmentation and making sure memory is fully utilized.
    • File System Management: The OS manages file storage on disks, organizing data into files and directories. It controls access to files, ensuring proper permissions are enforced, and handles tasks like reading, writing, creating, and deleting files.
    • Device Management: The OS manages device communication via drivers, acting as an intermediary between hardware devices and the applications. It handles input/output requests, queues them, and ensures that devices like printers, storage disks, and network interfaces function correctly.

    2. Abstraction of Hardware Complexity

    Operating systems provide abstraction to hide the complexity of the underlying hardware from applications and users. This means that developers don’t need to write low-level code to interact directly with hardware components (e.g., managing disk sectors, or memory addresses). Instead, the OS provides a simple interface to perform tasks like reading files, allocating memory, or communicating with devices.

    This abstraction simplifies software development and enables portability, as developers can write programs without worrying about the specifics of the hardware platform the program will run on. For instance:

    • File Abstraction: Instead of directly managing disk sectors, the OS presents files and directories as abstractions, allowing users to read and write files with simple commands like open(), read(), and write().
    • Memory Abstraction: Applications don’t need to know the exact memory addresses of physical memory. Instead, the OS provides virtual memory, where each program operates in its own memory space, and the OS translates these virtual addresses to actual physical memory.
    • Device Abstraction: Applications communicate with devices using high-level commands rather than dealing with hardware-specific details. For example, printing a document involves calling the OS's print function rather than interacting with the printer's hardware directly.
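The file abstraction described above is visible from any high-level language. Here is a minimal Python sketch (the function name is illustrative) in which the program never touches disk sectors or device registers; it only asks the OS to open, write, and read a file:

```python
import os
import tempfile

def file_roundtrip(text):
    """Write and then read back a file through the OS's file abstraction.

    open(), write(), and read() are thin wrappers over system calls;
    the OS decides which disk sectors actually hold the bytes.
    """
    path = os.path.join(tempfile.mkdtemp(), "note.txt")
    with open(path, "w") as f:   # the OS hands back a file handle
        f.write(text)            # the file system layer does the real work
    with open(path) as f:
        return f.read()          # retrieve the bytes, wherever they live on disk
```

The application deals only in paths, handles, and strings; sectors, free-block lists, and device drivers stay hidden behind the OS interface.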

    3. Process Isolation and Protection

    The OS ensures that multiple processes running on a system are isolated from one another, meaning that the actions of one process do not affect the behavior of other processes. This protection is crucial for system stability, security, and ensuring fair resource sharing. The key points of process isolation and protection include:

    • Memory Protection: Each process gets its own address space, and the OS ensures that processes cannot access each other’s memory unless explicitly allowed (e.g., shared memory). This prevents memory corruption and improves security.
    • Process Privileges: The OS defines different levels of access control, ensuring that unauthorized processes or users cannot access critical system resources or other users’ data.
    • Preemptive Multitasking: In a multitasking system, the OS enforces isolation by giving each process a limited time to use the CPU (time-slicing) and switching between processes. This prevents one process from monopolizing the CPU.
    • Access Control: The OS enforces user-level access permissions on files, devices, and other resources. For example, only authorized users can modify certain system files, ensuring data security.

    4. User Interface

    The OS provides a user interface (UI) that allows users to interact with the computer system. There are two primary types of user interfaces:

    • Command-Line Interface (CLI): In a CLI, users interact with the system by typing commands into a terminal. Examples include Linux shells like Bash or the Windows Command Prompt. CLIs are powerful for experienced users who need to perform complex tasks through commands and scripting.
    • Graphical User Interface (GUI): In a GUI, users interact with the system via visual elements like windows, icons, and menus. GUIs are more intuitive for everyday users, allowing them to interact with the OS by clicking, dragging, and dropping rather than typing commands. Examples include Windows, macOS, and GNOME or KDE on Linux.

    Through the UI, the OS enables users to launch applications, manage files, configure settings, and interact with hardware devices. The OS also ensures that user interactions (via mouse, keyboard, or touch) are efficiently translated into system operations.

    Some concepts you need to know before we move ahead

    1. Multiprogramming

    Multiprogramming is a technique used by the operating system (OS) to improve CPU utilization by loading multiple programs (or processes) into memory at the same time. The basic idea is to keep the CPU busy by switching between jobs whenever one job has to wait, such as when it needs to perform input/output (I/O) operations.

    • In a multiprogramming environment, several jobs are loaded into the computer's memory, and they reside in the ready state. These jobs are either waiting to be executed or waiting for their turn to access the CPU.
    • Job Scheduling: The OS selects which job to execute based on a scheduling algorithm (e.g., First-Come-First-Served, Shortest Job Next, etc.). When the current job gets stuck (e.g., waiting for a file to be read from the disk), the OS switches to another job that is ready to execute.
    • CPU Utilization: Instead of letting the CPU sit idle while waiting for an I/O operation to complete, multiprogramming allows the CPU to work on another job. This technique reduces idle time and makes better use of the CPU, increasing throughput (the number of jobs processed in a given time).
    • I/O and CPU Overlap: One key aspect of multiprogramming is the overlap of I/O and CPU tasks. While one job is performing I/O, another job can use the CPU. This overlap ensures that no CPU time is wasted, leading to a more efficient system.
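The I/O and CPU overlap can be made concrete with a toy model. The sketch below assumes each job is a single CPU burst followed by a single I/O wait, and that in the multiprogrammed case the CPU starts the next job's burst the instant the previous job blocks (an idealised best case, not a real scheduler):

```python
def sequential_makespan(bursts):
    """Single-process OS: jobs run one at a time and the CPU sits idle
    through every I/O wait.  bursts: list of (cpu_ticks, io_ticks)."""
    return sum(cpu + io for cpu, io in bursts)

def multiprogrammed_makespan(bursts):
    """Idealised multiprogramming: as soon as a job blocks on I/O, the
    CPU runs the next job's burst; a job completes after its I/O ends."""
    cpu_clock = 0    # CPU bursts run back-to-back with no idle gaps
    finish = 0       # completion time of the latest-finishing job
    for cpu, io in bursts:
        cpu_clock += cpu
        finish = max(finish, cpu_clock + io)
    return finish
```

For three jobs with bursts (3, 4), (2, 3), and (4, 1) ticks, the sequential schedule takes 17 ticks while the overlapped one finishes in 10, because every I/O wait is hidden behind another job's computation.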

    Benefits of Multiprogramming:

    • Reduces idle time by switching to other processes.
    • By keeping the CPU busy while waiting for I/O operations, multiprogramming improves the overall throughput of the system.

    Challenges:

    • With multiple programs in memory, the OS needs to manage memory allocation efficiently to ensure that no program interferes with others.
    • Managing multiple processes, ensuring they don’t overwrite each other’s memory, and deciding which to execute requires complex algorithms.

    2. Context Switching

    Context switching is the process by which the OS saves the state (or "context") of a currently running process and restores the state of another process. This allows the CPU to switch from one process to another, enabling multitasking.

    • Every process has its own state, which includes the contents of the CPU registers, the program counter (indicating the next instruction to be executed), and other execution-related information stored in a data structure called the Process Control Block (PCB).
    • When Context Switching Happens: Context switching occurs in several situations:
      • When a running process is blocked (e.g., waiting for I/O).
      • When the OS decides to allocate CPU time to another process (preemption in time-sharing systems).
      • When an interrupt occurs (e.g., hardware requesting attention).
    • Saving and Restoring State: When context switching occurs, the OS saves the state of the current process in its PCB. Then, it loads the state of the next process to be executed. The CPU now starts executing this new process from where it left off last time.

    Benefits of Context Switching:

    • Context switching allows multiple processes to run seemingly at the same time (concurrent execution).
    • By switching to other ready processes when one process is blocked, context switching minimizes CPU idle time.

    Overhead:

    • Although context switching enables multitasking, it incurs some overhead. Switching from one process to another requires time to save and load states, which could slow down the system if context switches occur too frequently.
    • Each process requires its own memory and resources, so frequent switching may lead to performance degradation in memory-constrained systems.
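The save-and-restore cycle above can be sketched as a toy simulation. The `PCB` fields and the `context_switch` helper below are simplified illustrations, not a real kernel's data structures; the "CPU" is just a dictionary of register values:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block: the state saved on a context switch."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(cpu, current, nxt):
    """Save `current`'s CPU state into its PCB, then load `nxt`'s state."""
    current.registers = dict(cpu)   # 1. save the outgoing process's context
    current.state = "ready"
    cpu.clear()                     # 2. restore the incoming process's context
    cpu.update(nxt.registers)
    nxt.state = "running"
    return nxt
```

The two copy steps are exactly the overhead the section describes: the work of a context switch is pure bookkeeping and contributes nothing to any process's actual computation.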

    3. Time-Sharing

    Time-sharing is an extension of multiprogramming, where the CPU is shared between multiple processes in such a way that users experience the system as though their processes are running simultaneously. Time-sharing systems allocate a small time slice or time quantum to each process, rapidly switching between them to create the illusion of parallel execution.

    • Time Quantum: In time-sharing systems, each process is allocated a fixed amount of CPU time, called a time quantum. Once the process has used its allocated time, the OS performs a context switch to give the CPU to another process.
    • User Interaction: Time-sharing is essential in multi-user systems where multiple users interact with the system simultaneously. It ensures that each user's programs respond quickly, even when multiple users are working on the same machine.

    Benefits of Time-Sharing:

    • Time-sharing allows systems to respond quickly to user inputs, even in environments where multiple users or applications are running simultaneously.
    • The CPU is shared fairly among all processes, ensuring no single process monopolizes the CPU.

    Challenges:

    • The choice of time quantum is critical. If it's too short, the system spends too much time on context switching. If it's too long, users may experience delays in their processes.
    • Time-sharing systems have higher overhead due to the frequent context switches, but this overhead is usually justified by the improved user experience.
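Time-sharing with a fixed quantum can be sketched as a simple round-robin simulation (the function below is an illustrative model, ignoring context-switch cost itself):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Time-sharing sketch: each process runs for at most `quantum` ticks,
    then is preempted and sent to the back of the ready queue.
    bursts: {pid: remaining_cpu_ticks}.
    Returns the (pid, ticks_run) slices in dispatch order."""
    ready = deque(bursts.items())
    schedule = []
    while ready:
        pid, remaining = ready.popleft()
        ran = min(quantum, remaining)
        schedule.append((pid, ran))
        if remaining > ran:
            ready.append((pid, remaining - ran))   # preempted: re-queued
    return schedule
```

The length of the returned schedule is the number of dispatches: shrinking the quantum makes each process respond sooner but lengthens the schedule, i.e. more context switches, which is precisely the trade-off noted above.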

    Goals of an OS

    1. Maximum CPU Utilization

    The goal of maximum CPU utilization means that the OS tries to keep the CPU as busy as possible to ensure it is not sitting idle. CPUs are the most critical and expensive resources in a system, and any idle time is wasted processing power. The OS aims to manage tasks in a way that minimizes the CPU's downtime.

    • Multiprogramming: The OS uses multiprogramming, where multiple jobs or processes are loaded into memory, allowing the CPU to switch between them when one process is waiting for input/output (I/O) operations. This ensures that the CPU always has something to process.
    • Context Switching: When a process is blocked (e.g., waiting for I/O), the OS can save its state (context) and switch the CPU to execute another ready process. This technique reduces CPU idle time and keeps it consistently busy.
    • Time-sharing: In systems with time-sharing, the OS assigns each process a small time slice (quantum) of CPU time, rapidly switching between processes to give the illusion of simultaneous execution (multitasking). This increases CPU utilization, especially in multi-user environments.
    • Higher Throughput: With better CPU utilization, the system can process more jobs in a given time frame, improving overall performance.
    • Efficient Resource Use: Ensuring that the CPU is not idle leads to efficient use of system resources, reducing delays and bottlenecks in processing.

    2. Avoid Process Starvation

    Process starvation occurs when low-priority processes are repeatedly bypassed in favor of high-priority ones, causing them to wait indefinitely for CPU access. The OS aims to avoid starvation by ensuring that all processes, regardless of their priority, are eventually executed. This promotes fairness and balanced resource allocation.

    • Multilevel Feedback Queues: These scheduling systems allow processes to move between different priority levels based on their behavior (e.g., CPU-bound vs. I/O-bound). This dynamic adjustment ensures that no process remains stuck in a low-priority queue for too long.
    • CPU Time Limits: The OS can forcibly stop a high-priority process after it consumes a certain amount of CPU time, allowing lower-priority processes to run and avoiding long delays for those processes.
    • Every process is guaranteed a chance to run, leading to equitable resource distribution among tasks.
    • Preventing starvation ensures that all users and applications get a fair share of the CPU, reducing delays and improving responsiveness for interactive processes.

    3. High-Priority Job Execution

    In many systems, certain tasks or processes are more critical than others and must be executed quickly. High-priority jobs, such as system-level tasks or real-time applications (e.g., flight control systems, emergency services), should not be delayed. The OS prioritizes these jobs to ensure they are executed ahead of less critical tasks.

    • In a priority scheduling algorithm, each process is assigned a priority level, and the OS always selects the highest-priority task for execution. This ensures that urgent jobs are executed first.
    • When a high-priority job arrives, the OS can interrupt the execution of a lower-priority process and allocate CPU time to the more critical task. This prevents delays in time-sensitive processes.
    • For hardware events (e.g., I/O device requests), the OS uses interrupts to prioritize system-level tasks. When an interrupt is triggered, the CPU suspends the current process to handle the more critical interrupt service routine, ensuring that important tasks are attended to immediately.
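The priority-scheduling idea can be sketched with a heap-based ready queue. This is a simplified, non-preemptive model (all jobs arrive at once; the job names are made up for illustration):

```python
import heapq

def priority_schedule(jobs):
    """Dispatch jobs highest-priority first (lower number = more urgent),
    as a simple non-preemptive priority scheduler would.
    jobs: list of (name, priority) pairs."""
    heap = [(priority, name) for name, priority in jobs]
    heapq.heapify(heap)                 # the ready queue, ordered by priority
    order = []
    while heap:
        _, name = heapq.heappop(heap)   # always pick the most urgent job
        order.append(name)
    return order
```

A real scheduler would additionally preempt a running job when a more urgent one arrives, and would need one of the anti-starvation measures from the previous section so that low-priority jobs eventually run.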

    From these goals, we can define the different types of operating systems.

    Types of OS with examples 

    1. Single Process Operating System

    A Single Process Operating System can execute only one process at a time. In such systems, the CPU is fully devoted to the execution of one task before moving on to another. These systems were prevalent in early computing systems.

    • Example: MS-DOS (Microsoft Disk Operating System)
      • It was one of the earliest operating systems used in personal computers. MS-DOS could execute only one program at a time, without the ability to multitask.

    Problems:

    • Process Starvation: Since only one process can be executed, higher-priority jobs may not be executed promptly if another process is already running.
    • Low CPU Utilization: During Input/Output (I/O) operations, the CPU often remains idle, waiting for the process to finish its I/O task.

    2. Batch Processing Operating System

    In a Batch Processing Operating System, jobs are collected and grouped into batches before being executed sequentially. This type of OS does not interact with the user while the job is being processed, making it efficient for executing similar jobs in groups.

    • Example: Early Systems using Punch Cards
      • Users submitted jobs (often using punch cards) to the operator, who would group them based on their requirements and execute them in batches.

    Drawbacks:

    • No Priority Scheduling: There’s no ability to assign priorities to jobs, leading to possible starvation of high-priority tasks.
    • Inefficient CPU Utilization: Similar to single-process OS, the CPU might remain idle during I/O operations, as jobs in a batch often wait for I/O tasks.

    3. Multiprogramming Operating System

    Multiprogramming Operating Systems improve upon batch processing by allowing multiple jobs to reside in memory at the same time. When one job is waiting for I/O operations, the CPU can switch to another ready job, reducing idle time.

    • How It Works: When a job requires I/O, it is moved to the waiting state, and the CPU switches to execute the next ready job. The state of the current process is saved in a Process Control Block (PCB) to be resumed later. This technique is known as Context Switching.

    Benefits:

    • Reduced CPU Idle Time: Multiple jobs in memory ensure that the CPU always has something to execute, minimizing idle time.

    Limitations:

    • No Real-Time Responsiveness: Multiprogramming maximizes CPU utilization, but it does not guarantee responsiveness for interactive users, since it lacks time-sharing capabilities.

    4. Multitasking Operating System

    Multitasking Operating Systems extend multiprogramming by incorporating Time Sharing. In this system, a small time slice or quantum is allotted to each job, and the OS frequently switches between jobs, giving the illusion of simultaneous execution.

    • Context Switching: The OS switches between tasks frequently, saving and loading the state of each process to ensure they continue execution smoothly.

    Benefits:

    • High-Priority Execution: These systems typically incorporate priority-based scheduling, ensuring that high-priority tasks are executed first.
    • Improved Responsiveness: Multiple applications can be executed concurrently without significant delays, providing a smooth user experience.

    5. Multiprocessing Operating System

    Multiprocessing Operating Systems are designed to use multiple CPUs, allowing several processes to be executed simultaneously. In such systems, tasks are distributed among different CPUs, improving throughput and reliability.

    • CPU Scheduling: Jobs are distributed among available CPUs, increasing system efficiency. Tasks can be executed in parallel, which enhances performance.

    Benefits:

    • Better Reliability: If one CPU fails, other CPUs can continue executing tasks, improving system reliability.
    • Parallelism: Multiple CPUs allow for true parallel processing, where different tasks can be processed at the same time.
    • Reduced Starvation: Since tasks are distributed across multiple CPUs, the likelihood of a process waiting for too long is minimized.

    6. Distributed Operating System

    A Distributed Operating System uses multiple independent computers, often referred to as nodes or sites, connected via a network to share resources and execute tasks. These systems are also known as Loosely Coupled Systems.

    • Multiple Systems: Each system has its own memory and CPU, but they work together, appearing to the user as a single, unified system. The processors communicate with one another through high-speed buses or communication lines (e.g., telephone lines).

    Advantages:

    • Scalability: New systems can be added to handle different tasks, allowing for scalability as demand grows.
    • Resource Sharing: Resources (e.g., memory, processors) are shared across systems, improving overall efficiency.
    • Fault Tolerance: If one site fails, the rest of the system can continue operating, ensuring service continuity.
    • Reduced Processing Delays: Distributing the load across multiple systems reduces processing delays and improves overall system responsiveness.

    7. Real-Time Operating System (RTOS)

    A Real-Time Operating System (RTOS) is designed for applications that require immediate responses, where the correctness of the operation depends not only on the result but also on the time it is delivered. These systems are used in critical environments where a delay in response can have serious consequences.

    • Tight Time Boundaries: Tasks must be executed within predefined time limits, ensuring that the system meets stringent timing requirements.

    Types of RTOS:

    1. Hard Real-Time Systems:
    • These systems guarantee that critical tasks are completed on time. Any delay in execution could lead to system failure.
    • Example: Air Traffic Control Systems, Weapon Systems, Industrial Control Systems.
    • Hard real-time systems do not use secondary storage and often store data in ROM (Read-Only Memory). Virtual memory is typically absent.
    2. Soft Real-Time Systems:
    • These systems prioritize critical tasks but allow less critical tasks to have some flexibility in their response time.
    • Example: Multimedia Systems, Virtual Reality, Scientific Research Systems.
    • Soft real-time systems are less restrictive compared to hard real-time systems. They allow for more variation in timing but still prioritize the execution of time-sensitive tasks.

    Benefits:

    • Accuracy and Speed: Tasks are executed within strict time limits, which is crucial for systems requiring real-time responses (e.g., medical imaging, robotics).

    Process

    A process is essentially a program that is under execution in a computer’s main memory (RAM). A program becomes a process when it is loaded into the memory and starts running.

    • Program vs. Process: A program is a static set of instructions, typically stored on disk (e.g., .exe files), while a process is the active execution of that program in memory.
      • Example: When you click on an executable file like myProgram.exe, the operating system loads it into memory and starts executing it, turning the program into a process.

    Key Characteristics of a Process:

    1. Process Control Block (PCB): Each process has a PCB containing important details like process ID, program counter, registers, memory limits, and priority.
    2. Execution in RAM: The process resides in RAM, where the OS allocates necessary resources (CPU time, memory).
    3. Process States:
      • New: The process is being created.
      • Running: Instructions are being executed by the CPU.
      • Waiting: The process is waiting for an event (e.g., I/O completion).
      • Terminated: The process has finished execution.

    Process Isolation:

    • Processes are isolated from one another, meaning that one process cannot interfere with another's memory or data. This ensures protection and security between processes.

    Threads

    A thread is often called a lightweight process, and it represents a single sequence of execution within a process.

    • Process vs. Thread: While a process can have multiple threads of execution, each thread within a process shares the same memory space and resources but operates independently.
      • Example: In a web browser, one thread handles the UI, another handles loading web pages, and another might handle network communication, all running in parallel.

    Key Characteristics of Threads:

    1. Independent Execution: Threads execute independently from one another, even though they share resources like memory.
    2. Shared Resources: Multiple threads within the same process share memory space, file handles, and other resources. This is efficient but also means that there is no isolation between threads.
    3. Lightweight: Threads are more lightweight than processes because they do not require separate memory allocations for each one (unlike processes).

    Use Case:

    • Threads are typically used to perform parallelism within a single application. For example, in a text editor, you may have one thread handling text input, another for spell-checking, and another for saving the file—all running simultaneously without slowing down the editor.
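The text-editor scenario above can be sketched directly with Python's `threading` module (the task names are illustrative). Because all three threads belong to one process, they can write into the same dictionary without any copying or IPC:

```python
import threading

def run_editor_tasks():
    """Three threads of one process write into the same dictionary:
    they share the process's memory space."""
    results = {}

    def task(name):
        results[name] = f"{name} done"   # shared memory, no IPC needed

    names = ["input", "spellcheck", "autosave"]
    threads = [threading.Thread(target=task, args=(name,)) for name in names]
    for t in threads:
        t.start()
    for t in threads:
        t.join()   # wait for every thread to finish
    return results
```

Contrast this with separate processes, where each task would have its own isolated memory and would need an explicit IPC mechanism to report results.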

    Difference Between Process and Thread

    1. Process: A process is heavy-weight and resource-intensive, as it requires its own memory space and system resources to execute.
       Thread: A thread is light-weight, sharing the process's resources, making it less resource-intensive compared to a process.

    2. Process: Switching between processes involves interaction with the operating system, making process context switching slower and more complex.
       Thread: Thread switching occurs within a process and does not need to interact with the operating system directly, making it faster.

    3. Process: In a multi-processing environment, each process has its own memory space, files, and resources, and does not share these resources with other processes.
       Thread: All threads of a process share the same memory space, open files, and resources, reducing redundancy.

    4. Process: If one process is blocked (e.g., waiting for I/O), other processes are unaffected and continue to execute; however, a single-threaded process can do nothing else until it is unblocked.
       Thread: If one thread is blocked, another thread in the same process can continue to execute, providing better concurrency and resource utilization.

    5. Process: Multiple processes without threads tend to consume more resources, such as memory and CPU cycles, due to the need to allocate separate memory spaces.
       Thread: Multiple threads within a process share memory and resources, leading to more efficient resource usage.

    6. Process: Each process runs independently, with no access to another process's memory or data unless inter-process communication (IPC) mechanisms are used.
       Thread: Threads within the same process can read, write, and modify each other's data, allowing for shared communication but also increasing the risk of synchronization issues.

    Multithreading

    Multithreading refers to the execution of multiple threads concurrently within a single process. Each thread has its own path of execution, allowing a program to handle multiple tasks simultaneously.

    Key Concepts in Multithreading:

    1. Concurrency: Multiple threads run at the same time (though in reality, threads may take turns on the CPU if there are fewer CPUs than threads).
    2. Parallelism: When multiple CPUs (or cores) are available, threads can be truly executed in parallel, taking advantage of modern multi-core processors.
    3. Thread Scheduling: The OS schedules threads for execution based on their priority. Each thread is given a time slice to execute, and the OS switches between threads as needed.

    Example of Multithreading:

    • In a multithreaded web browser, one thread handles user input (clicks, scrolls), another downloads content in the background, and another renders the page. This makes the browser fast and responsive, even when multiple tasks are running.

    Benefits of Multithreading:

    • Increased Responsiveness: Programs can handle multiple tasks without freezing or slowing down.
    • Efficient Resource Use: Threads share memory, avoiding the overhead of creating new processes.
    • Parallelism: In a multi-core CPU environment, threads can be executed simultaneously, improving performance.

    Multitasking 

    Multitasking refers to the ability of an operating system (OS) to execute multiple tasks or processes at the same time. A task in this context is typically a process or program that the computer is executing. The goal of multitasking is to make effective use of the CPU, ensuring that it is not idle when there are multiple tasks waiting to be executed.

    Types of Multitasking

    There are two primary types of multitasking:

    1. Preemptive Multitasking
    2. Cooperative Multitasking

We will learn about these in detail in later classes; for now, here is a brief explanation of each:

    Preemptive Multitasking

In preemptive multitasking, the operating system controls the execution of tasks and decides which task runs at a given time. The OS allocates a time slice (also called a quantum) to each task; once the time slice expires, the OS preempts (interrupts) the task and switches to another one.
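A real preemptive scheduler lives inside the kernel, but its core idea can be simulated in a few lines. This sketch (task names and durations are invented) models round-robin scheduling: each task gets one quantum of CPU, then is forcibly moved to the back of the ready queue.

```python
from collections import deque

def round_robin(tasks: dict[str, int], quantum: int) -> list[str]:
    """Simulate preemptive scheduling: each task runs for at most `quantum`
    time units, then is preempted and moved to the back of the ready queue."""
    ready = deque(tasks.items())          # (name, remaining_time) pairs
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        timeline.append(name)             # this task gets the CPU for one slice
        remaining -= quantum
        if remaining > 0:                 # not finished: preempt and requeue
            ready.append((name, remaining))
    return timeline

# Three tasks needing 3, 1 and 2 units of CPU, with a quantum of 1 unit.
print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))
# ['A', 'B', 'C', 'A', 'C', 'A']
```

Notice that task A never gets to monopolize the CPU: after each quantum it is interrupted so B and C can make progress, which is the defining property of preemption.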

    Cooperative Multitasking

    In cooperative multitasking, each task voluntarily yields control of the CPU to the OS. The OS does not forcibly preempt tasks; instead, tasks are expected to cooperate by giving up control periodically so that other tasks can run.
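Cooperative multitasking can be sketched with Python generators, where `yield` plays the role of a task voluntarily giving the CPU back to the scheduler (the scheduler and task functions here are illustrative, not a real OS API):

```python
def task(name: str, steps: int):
    """A cooperative task: it voluntarily yields control after each step."""
    for i in range(steps):
        yield f"{name}:{i}"       # yield == 'hand the CPU back to the scheduler'

def cooperative_scheduler(tasks) -> list:
    """Run tasks in turn; a task only loses the CPU when it chooses to yield."""
    trace = []
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            trace.append(next(current))   # let the task run until it yields
            queue.append(current)          # it cooperated, so requeue it
        except StopIteration:
            pass                           # task finished; drop it
    return trace

print(cooperative_scheduler([task("X", 2), task("Y", 1)]))
# ['X:0', 'Y:0', 'X:1']
```

The weakness of this model is visible in the code: if a task never yields, the scheduler never regains control and every other task starves, which is why preemptive multitasking replaced cooperative multitasking in mainstream desktop operating systems.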

    Multitasking vs. Multithreading

    Multitasking and multithreading both refer to performing multiple tasks simultaneously, but there are key differences between the two.

    Multitasking:

    • Definition: Multitasking involves executing multiple processes at the same time, where each process is isolated and has its own memory and resources.
    • Concept: It allows multiple programs (processes) to run concurrently on the CPU. The OS uses process context switching to switch between processes, which may involve switching memory address spaces and other resources.
    • Example: Running a text editor, web browser, and a media player simultaneously.

    Multithreading:

    • Definition: Multithreading refers to multiple threads of execution running within the same process. These threads share the same memory and resources.
    • Concept: Threads in the same process can execute tasks in parallel without the overhead of context switching between separate processes.
    • Example: In a video editing application, one thread might handle video rendering, another thread handles audio processing, and yet another handles user input.
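The memory-isolation difference between the two models can be observed directly. This POSIX-only sketch (it uses `os.fork`, so it will not run on Windows; the `append_item` helper is illustrative) shows a thread's write landing in the parent's list while a child process's write stays in the child's own copy of memory:

```python
import os
import threading

data = []

def append_item() -> None:
    data.append("x")       # mutates whichever copy of 'data' this task can see

# A thread shares the parent's memory: its append is visible after join().
t = threading.Thread(target=append_item)
t.start(); t.join()
print(data)                # ['x']

# A forked child gets its own copy of the address space (POSIX-only),
# so its append never reaches the parent's list.
pid = os.fork()
if pid == 0:               # child process
    append_item()
    os._exit(0)            # exit immediately; don't fall through to parent code
os.waitpid(pid, 0)         # parent waits for the child to finish
print(data)                # still ['x']: the child changed only its own copy
```

To get the child's result back, the parent would need an inter-process communication (IPC) mechanism such as a pipe or shared memory, exactly as the multitasking definition above notes.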

| Feature | Multitasking | Multithreading |
| --- | --- | --- |
| Definition | Execution of multiple processes simultaneously | Execution of multiple threads within a single process |
| Context Switching | OS switches between processes (slower) | OS switches between threads of the same process (faster) |
| Memory Sharing | Processes have separate memory and resources | Threads share memory and resources |
| Overhead | More overhead due to process creation and switching | Less overhead, lightweight switching |
| Isolation | Processes are isolated from each other | Threads are not isolated from each other |
| Example | Running a text editor and a browser at the same time | Browser with multiple tabs performing different tasks |

    Thread Context Switching vs. Process Context Switching

    When switching between tasks, either threads or processes, the OS needs to save the current state and load the state of the new task. This process is called context switching.

    Thread Context Switching:

    • The OS switches between different threads within the same process.
    • What is saved?: Registers, program counter, and stack pointer.
    • What is not switched?: Memory address space (since threads share the same memory).
    • Speed: Faster, as memory space remains the same.
    • CPU Cache: The CPU’s cache is preserved during thread context switching, making it more efficient.

    Process Context Switching:

    • The OS switches between different processes.
    • What is saved?: Registers, program counter, stack pointer, and memory address space.
    • What is switched?: The entire memory address space, which can be time-consuming.
    • Speed: Slower, as the CPU must flush its cache, and memory space must be reloaded.
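The cost difference is easy to feel with a rough micro-benchmark. This POSIX-only sketch (it uses `os.fork`; absolute numbers depend entirely on the machine) compares creating and reaping many short-lived threads against many short-lived processes; on most systems the process loop is noticeably slower because each child needs its own address space:

```python
import os
import threading
import time

def noop() -> None:
    pass

N = 200

# Cost of creating/joining N short-lived threads (shared address space).
start = time.perf_counter()
for _ in range(N):
    t = threading.Thread(target=noop)
    t.start(); t.join()
thread_time = time.perf_counter() - start

# Cost of creating/reaping N short-lived processes via fork (POSIX-only):
# each child gets its own address space, which is the extra work.
start = time.perf_counter()
for _ in range(N):
    pid = os.fork()
    if pid == 0:           # child: do nothing and exit immediately
        os._exit(0)
    os.waitpid(pid, 0)     # parent: wait for the child
process_time = time.perf_counter() - start

print(f"threads:   {thread_time:.4f}s")
print(f"processes: {process_time:.4f}s")
```

This measures creation and teardown rather than context switching itself, but the same asymmetry (per-process address-space work vs. lightweight per-thread state) drives both costs.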

| Context Switching | Thread Context Switching | Process Context Switching |
| --- | --- | --- |
| Switching Scope | Switches between threads within the same process | Switches between completely separate processes |
| Memory Address Space | Memory space is shared, so it’s not switched | Memory space is switched, as each process has its own |
| Speed | Fast | Slow |
| CPU Cache | CPU’s cache state is preserved | CPU’s cache state is flushed |