
OS internals

  • Writer: Abhilasha
  • Jul 14, 2024
  • 9 min read

Privilege Separation

  • Concept: Modern operating systems separate user applications (untrusted) from critical operating system components (trusted) to enhance stability and security.

  • Implementation: IA-32 processors use protection rings, where kernel mode (most privileged) is in ring 0 and user mode (least privileged) is in ring 3.

  • Access Mechanism: User applications switch to kernel mode using system calls to access OS services, which are handled with specific instructions or interrupts.

System Calls

  • Purpose: User applications request OS services like file operations or network communication via system calls.

  • Invocation: Invoking a system call involves switching from user mode to kernel mode, executing the requested operation, and then returning control to the user application.

  • Example: Interaction with system resources such as files or network sockets requires system calls for access and management.

Processes

  • Definition: A process represents an instance of a program in execution, managed by the OS.

  • Attributes: Each process has a unique process ID and its own address space containing program code, shared libraries, data, and runtime stack.

  • Multiprogramming: OS supports multiprogramming, allowing multiple processes to appear to run simultaneously by time-sharing CPU resources.

Threads

  • Basic Unit: Threads are units of CPU execution within a process.

  • Characteristics: Threads within a process share the same code, data, and OS resources but have unique execution contexts including CPU registers and execution stacks.

  • Concurrency: Multiple threads within a process can execute tasks concurrently, such as handling network communication or user interface updates.

Handles

  • Resource Management: The OS tracks active resources like processes, threads, files, and network connections in kernel data structures, exposing them to programs through identifiers called handles.

  • Purpose: Handles provide processes with unique identifiers to access and manipulate system resources while enforcing access controls and tracking resource usage.

CPU Scheduling

  • Objective: OS schedules CPU time among multiple threads to optimize CPU utilization and responsiveness.

  • Mechanism: Scheduler policies determine which threads execute and for how long, considering factors like I/O waits and computational needs.

  • Context Switch: OS performs context switches to suspend and resume threads, saving and restoring their execution contexts including CPU registers and program state.


Privilege separation in modern operating systems is crucial for maintaining system stability and security. Here’s how it works:

Privilege Separation

Concept: To protect critical components of the operating system from potentially harmful actions by user applications, modern OSs enforce privilege separation. This means separating the execution of user applications (untrusted) from the OS kernel (trusted).

Implementation:

  1. Protection Rings:

  • The IA-32 processor architecture defines four privilege levels, known as protection rings.

  • Kernel mode, where the OS kernel executes (most privileged), is typically in Ring 0.

  • User mode, where user applications execute (least privileged), is generally in Ring 3.

  2. Execution Modes:

  • User Mode (Ring 3): User applications run here. They have limited access to system resources and cannot execute privileged instructions directly.

  • Kernel Mode (Ring 0): The OS kernel runs here. It has unrestricted access to system resources, including hardware and privileged instructions.

  3. Access Mechanism:

  • User applications need to access OS services or critical components (like hardware) that require privileged access.

  • This is achieved through system calls, which allow applications to transition from user mode to kernel mode briefly to perform privileged operations.

User Mode vs. Kernel Mode

  • User Mode (Ring 3):

  • User applications execute here.

  • They interact with the OS through system calls to request services like file operations or network communication.

  • User mode provides limited access to system resources to protect against unauthorized access or modifications.

  • Kernel Mode (Ring 0):

  • The OS kernel executes in this mode.

  • It has full access to system resources and can execute privileged instructions directly.

  • Kernel mode handles system-wide tasks such as memory management, process scheduling, and device drivers.

  • Access to kernel mode is tightly controlled to prevent unauthorized access or modifications that could compromise system stability or security.

Example System Call Flow

  1. User Application Request:

  • A user application needs to read a file from disk.

  • It initiates a system call (e.g., read()).

  2. Transition to Kernel Mode:

  • The processor switches from user mode (Ring 3) to kernel mode (Ring 0) upon encountering the system call instruction.

  • The user-mode execution context (registers, program counter) is saved so the system state can be restored intact when the call completes.

  3. System Call Handling:

  • The OS kernel receives the system call request.

  • It verifies permissions, accesses necessary data structures, and performs the requested file read operation in kernel space.

  4. Return to User Mode:

  • After completing the operation, the kernel prepares to return control to the user application.

  • The processor switches back to user mode, restoring the user application’s execution context.
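The flow above can be sketched from user space with Python's os module, whose functions are thin wrappers over the corresponding system calls (the temporary file here is just scratch data for the demonstration):

```python
import os
import tempfile

# Setup: create a scratch file so there is something to read.
fd_tmp, path = tempfile.mkstemp()
os.write(fd_tmp, b"hello from the kernel")
os.close(fd_tmp)

# 1. User application request: os.open/os.read wrap the open()/read() syscalls.
fd = os.open(path, os.O_RDONLY)
# 2-3. The CPU switches to kernel mode; the kernel verifies permissions and
#      copies file data into our buffer in kernel space...
data = os.read(fd, 64)
# 4. ...then control returns to user mode with the result.
os.close(fd)
os.remove(path)

print(data)  # the bytes the kernel read on our behalf
```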


System calls are fundamental mechanisms that allow user applications to request services from the operating system kernel. Here’s a breakdown of how they work:

System Calls Explanation

  1. User Application Request:

  • When a user application needs to perform tasks that require privileged access or interaction with hardware, it initiates a system call.

  • Examples include file operations (open, read, write, close), network communication (send, receive), process management (fork, exec), and device control.

  2. Low-level API:

  • System calls define the interface between user applications and the operating system kernel at a low level.

  • They provide a standardized way for applications to request OS services regardless of the underlying hardware or specific OS implementation.

  3. Invoking a System Call:

  • To invoke a system call, the application executes a specific instruction or software interrupt that triggers a transition from user mode to kernel mode.

  • This transition involves several steps:

  • Saving User Mode Context: Before switching to kernel mode, the current state of user mode registers (including the program counter and stack pointer) is saved. This ensures that the application can resume execution correctly after the system call.

  • Changing Execution Mode: The processor switches from user mode (where applications run) to kernel mode (where the OS kernel executes).

  • Initializing Kernel Stack: A new stack frame is set up in kernel mode to handle the execution of the system call.

  • Invoking the System Call Handler: The kernel identifies the requested system call based on parameters passed by the application (e.g., system call number).

  • Servicing the Request: The kernel executes the appropriate code to fulfill the requested operation, accessing kernel-reserved memory and resources as needed.

  4. Returning to User Mode:

  • After servicing the system call, the kernel prepares to return control to the user application.

  • The kernel restores the saved user mode context, including register values and the program counter.

  • Control then returns to the instruction immediately following the system call instruction in the application’s code.

  • The application continues its execution with the result of the system call operation (if applicable).

Example Scenario

  • Scenario: An application wants to open a file to read its contents.

  • Steps:

  1. The application executes the open() system call.

  2. This triggers a software interrupt or special instruction recognized by the processor.

  3. The processor switches from user mode to kernel mode.

  4. The kernel's system call handler receives the request, verifies permissions, and accesses filesystem data structures.

  5. If permitted, the file is opened and a file descriptor is returned to the application.

  6. The kernel prepares to return to user mode, restoring the saved user mode context.

  7. Control returns to the application code immediately after the open() system call, which can now proceed to read from or write to the opened file.
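When the kernel's checks in step 4 fail, the system call returns an error code instead of a file descriptor; in Python this surfaces as an OSError carrying the kernel's errno value. A minimal sketch, using a deliberately nonexistent path:

```python
import errno
import os

# Ask the kernel to open a path that does not exist. The open() system call
# fails inside the kernel, and Python raises FileNotFoundError (a subclass
# of OSError) carrying the kernel's ENOENT error code.
try:
    os.open("/no/such/path-for-this-demo", os.O_RDONLY)
    failed = False
    err = None
except FileNotFoundError as e:
    failed = True
    err = e.errno

print(failed, err == errno.ENOENT)
```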



Processes Explanation

  1. Definition of a Process:

  • A process is an instance of a program that is currently being executed in memory by the operating system.

  • Each process represents the execution of a specific program, including its instructions, data, and resources.

  2. Operating System Management:

  • The operating system manages processes throughout their lifecycle, including creation, suspension (temporary halt), and termination.

  • These management tasks ensure efficient utilization of system resources and proper execution of programs.

  3. Multiprogramming Capability:

  • Most modern operating systems support multiprogramming, which allows multiple processes to appear as if they are executing simultaneously.

  • This capability is achieved through efficient scheduling of CPU time among competing processes, enabling concurrent execution and responsiveness.

  4. Process Creation:

  • When a program is executed, the operating system creates a new process.

  • Each process is assigned a unique identifier called a Process ID (PID), which distinguishes it from other processes running concurrently.

  • The process also receives its own address space, which is a virtual memory area reserved for its execution.

  5. Process Address Space:

  • The process address space serves as a container that holds various components necessary for program execution:

  • Code: The executable instructions of the program.

  • Shared Libraries: Dynamically linked libraries that provide common functions and resources.

  • Dynamic Data: Variables and data structures that are dynamically allocated during runtime.

  • Runtime Stack: Memory region used for function call management, local variables, and control information.

  6. Threads within a Process:

  • A process typically includes at least one thread of execution.

  • Threads are units of CPU utilization and are characterized by their own program counter, stack, and register set.

  • Multiple threads within a process share the same code, data, and resources, allowing for concurrent execution and efficient task management.

Example Scenario

  • Scenario: Launching a web browser application.

  • Steps:

  1. The user launches the web browser program.

  2. The operating system creates a new process for the web browser.

  3. The process is assigned a unique Process ID (PID) and an initial address space.

  4. Within the process, threads are created to handle various tasks (e.g., user interface, network communication, rendering).

  5. The process address space is populated with the browser's code, shared libraries, dynamic data, and runtime stacks.

  6. The operating system schedules CPU time for the browser process, allowing it to execute concurrently with other processes.

  7. When the browser is closed, the operating system terminates its process, freeing up resources and ending its execution.
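The creation-and-termination lifecycle above can be observed directly: launching a child process yields a fresh PID, distinct from the parent's. A minimal sketch using Python's subprocess module (the child is a tiny one-liner that just reports its own PID):

```python
import os
import subprocess
import sys

# Parent process: ask the OS to create a child process running a tiny program.
child = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    stdout=subprocess.PIPE,
    text=True,
)
reported_pid = int(child.stdout.read())  # the PID the child sees for itself
child.wait()  # let the OS reclaim the child's resources

# The PID the OS handed back to the parent matches what the child saw,
# and differs from the parent's own PID.
print(reported_pid == child.pid, reported_pid != os.getpid())
```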


Threads Explanation

  1. Basic Unit of CPU Utilization:

  • A thread is the smallest unit of execution that can be scheduled by the operating system.

  • It represents a single sequence of execution instructions within a process.

  2. Thread Characteristics:

  • Thread ID: Each thread within a process is identified by a unique Thread ID (TID). This identifier distinguishes it from other threads.

  • CPU Register Set: Threads maintain their own set of CPU registers, including the program counter (PC) and stack pointer (SP). These registers store the current execution state.

  • Execution Stack(s): Each thread has its own execution stack(s), typically including a user stack for function calls and a kernel stack for handling interrupts and exceptions.

  3. Shared Resources:

  • Despite having individual execution contexts (registers, stacks), threads within the same process share the following resources:

  • Code: Threads execute within the same executable code segment of the process.

  • Data: They have access to the process's global variables and static data.

  • Address Space: Threads share the same virtual address space, meaning they can access the same memory locations.

  • Operating System Resources: Threads utilize common resources managed by the operating system, such as file handles, network connections, and synchronization mechanisms.

  4. Concurrency in Threads:

  • Multiple threads within a process can execute concurrently, appearing to perform tasks simultaneously.

  • For example, one thread might handle user input while another updates a graphical user interface or communicates over a network.

  • Concurrent threads enable efficient multitasking and responsiveness in applications, as they can handle multiple operations simultaneously.

Example Scenario

  • Scenario: A word processing application.

  • Threads in Action:

  1. Main Thread: Manages the application's user interface, processing user input and displaying content.

  2. Background Thread: Handles periodic autosaving of documents to disk, operating concurrently with the main thread.

  3. Printing Thread: Initiates and manages printing tasks in the background while allowing the user to continue editing documents.

  4. Spell Checking Thread: Runs asynchronously to continuously check spelling errors in the document as the user types.
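The shared-data/private-context split described above can be demonstrated with Python's threading module: both workers write into the same dictionary (shared address space) yet each records a distinct thread identifier (private execution context). The thread names echo the word-processor scenario:

```python
import threading

results = {}  # shared data: visible to every thread in the process

def worker(name):
    # Each thread records its own thread identifier in the shared dict.
    results[name] = threading.get_ident()

t1 = threading.Thread(target=worker, args=("ui",))
t2 = threading.Thread(target=worker, args=("autosave",))
t1.start(); t2.start()
t1.join(); t2.join()

# Same address space (one dict), different execution contexts (two IDs).
print(results["ui"] != results["autosave"])
```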


Handles Explanation

  1. Managing Resources:

  • Operating systems maintain data structures to manage various resources actively accessed by processes.

  • These resources include processes themselves, threads, files, network sockets, synchronization objects (like mutexes and semaphores), and regions of shared memory.

  2. Tracking and Identification:

  • Each resource managed by the operating system is assigned a unique identifier known as a handle.

  • Handle: It serves as a reference or token that allows processes to access and manipulate system resources.

  • Handles are crucial for the operating system to enforce access control policies and efficiently track the usage of resources across the system.

  3. Access Control and Usage Tracking:

  • Handles are used to enforce security and access control measures. They ensure that processes can only access resources for which they have appropriate permissions.

  • The operating system uses handles to monitor and manage the usage of resources, ensuring efficient allocation and deallocation based on demand.

  4. File Descriptors on Linux and macOS:

  • In Unix-like systems such as Linux and macOS, file descriptors serve a similar role as handles.

  • File Descriptor: A non-negative integer that indexes an entry in the process's table of open files.

  • Like handles, file descriptors are used by processes to perform input/output operations (I/O) on files, sockets, pipes, and other file-like entities.

Example Scenario

  • Scenario: A web server running on Linux.

  • Handles/File Descriptors in Use:

  1. Process Handle: Identifies the server process managing incoming client connections.

  2. Thread Handles: Used to manage concurrent client requests handled by different threads within the server process.

  3. File Descriptors: Assigned to open network sockets, allowing the server to communicate with multiple clients simultaneously.

  4. Shared Memory Handles: Enable efficient data sharing between server processes and threads for caching and session management.
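On a Unix-like system the handle concept is directly visible as file descriptors: the kernel hands back a small non-negative integer as a token, keeps the real bookkeeping on its side, and rejects the token once it is released. A minimal sketch:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()  # the kernel returns a file descriptor (handle)

# The descriptor is just an integer token; the kernel holds the actual state.
print(isinstance(fd, int), fd >= 0)

os.write(fd, b"tracked by the kernel")
os.close(fd)   # releasing the handle lets the kernel free the resource
os.remove(path)

# Using a released descriptor is rejected: the kernel no longer tracks it.
try:
    os.write(fd, b"stale handle")
    stale_rejected = False
except OSError:
    stale_rejected = True
print(stale_rejected)
```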


CPU Scheduling Explanation

  1. Definition:

  • CPU Scheduling refers to the operating system's capability to manage and allocate CPU (Central Processing Unit) time among multiple threads or processes.

  • Its primary goal is to optimize CPU utilization by efficiently switching between threads waiting for CPU time and those performing computations.

  2. Optimization Goal:

  • Optimizing CPU Utilization: The scheduler aims to keep the CPU busy by ensuring that it is executing threads as much as possible.

  • This involves scheduling threads that are ready to execute, minimizing idle CPU time.

  3. Scheduler Policies:

  • Policy Implementation: The operating system's scheduler implements various policies that determine which thread or process gets CPU time and for how long.

  • Examples of scheduling policies include First Come, First Served (FCFS), Round Robin, Priority Scheduling, and Multilevel Queue Scheduling.

  4. Context Switching:

  • Definition: A context switch occurs when the operating system suspends the execution of a thread to allow another thread to execute.

  • Execution Context: It includes the current state of the CPU registers (like program counter, stack pointer, and other register values) and other CPU state information.

  • Process of Context Switching:

  • When a context switch is triggered, the current thread's execution context is saved in main memory.

  • The scheduler then selects another thread from the ready queue and loads its execution context from memory into the CPU registers.

  • Execution resumes from the point where the previously executing thread was suspended.

Example Scenario

  • Scenario: A multitasking operating system running on a server.

  • CPU Scheduling in Action:

  • Multiple processes are competing for CPU time.

  • The scheduler determines which process should execute next based on its scheduling policy.

  • During execution, if a process requires I/O (e.g., reading from disk), it may voluntarily relinquish the CPU.

  • The scheduler then switches to another ready process, ensuring efficient utilization of CPU resources.
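The policy and context-switch ideas above can be illustrated with a small Round Robin simulation (pure Python, no real threads): each task runs for at most one quantum, is "switched out", and rejoins the back of the ready queue until its CPU burst is finished. The task names and burst times are made up for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin scheduling.

    bursts: dict mapping task name -> CPU time still needed.
    quantum: maximum time slice a task may run before being switched out.
    Returns the list of (task, time_run) slices in execution order.
    """
    ready = deque(bursts.items())  # the ready queue
    timeline = []
    while ready:
        name, remaining = ready.popleft()    # "load" the next task's context
        slice_len = min(quantum, remaining)
        timeline.append((name, slice_len))   # task runs for one time slice
        remaining -= slice_len
        if remaining:                        # context switch: save state and
            ready.append((name, remaining))  # rejoin the back of the queue
    return timeline

print(round_robin({"A": 5, "B": 3}, quantum=2))
# → [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```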


 
 
 
