Operating System Dinosaur Book Chapter 1 Summary

This document summarizes content in preparation for the midterm exam on operating systems, referencing class PowerPoint presentations, OSTEP, and the Operating System Concepts book. Additionally, the blog by Seongbeom Park proved to be very helpful.

1. Basic Concepts of Operating Systems

1.1 Role of the Operating System

What happens when a program is executed? The processor executing the program fetches, decodes, and executes instructions until completion.

During this process, various tasks occur, such as executing programs concurrently, sharing memory between programs, and interacting with hardware devices. The operating system manages these tasks.

It allows users to delegate the allocation of system resources to the operating system, so they can run programs without worrying about these complexities. In other words, the operating system mediates between hardware and applications.

1.2 Computer System Structure

What is the structure of a computer system? Modern computers consist of one or more CPUs and multiple device controllers, with CPUs and device controllers accessing shared memory via a common bus. The bus can be understood as a medium for data transfer.

Each device controller controls a specific connected device, and multiple devices may be attached to one controller. Each controller has local buffers and registers, managing data transfer between the device and its local buffer.

The CPU and device controllers can operate in parallel, competing for memory cycles. A separate memory controller determines which device controller gains access to shared memory first.

1.3 Interrupts

Consider a program performing general input/output operations. The device driver loads values into the appropriate registers of the controller. The device controller reads the information from these registers and decides on actions (for example, reading input from the keyboard). The device controller then begins transferring data from the device to its local buffer.

Upon completion, the device controller sends a signal to the device driver indicating that the operation has finished, and the device driver transfers control to another part of the operating system. The notification from the controller to the driver occurs through an interrupt, which is central to the interaction between the operating system and hardware.

When the CPU is interrupted, it stops its current task and immediately transfers control to a fixed location. This location typically holds the starting address of the interrupt service routine. Upon completing the routine, the CPU resumes its original task.

Interrupts can be generated by both hardware and software. Hardware generates interrupts via the system bus, while software can generate interrupts through system calls, often referred to as traps.

The key goal of an interrupt is to transfer program control to the appropriate service routine. A straightforward way to do this is to invoke a generic routine that inspects the interrupt information, determines the cause, and then calls the appropriate service routine.

However, that extra inspection slows down interrupt handling. Instead, an indirect method uses a table of pointers to the interrupt routines. This table, known as the interrupt vector, maps each interrupt number to the address of its service routine; when an interrupt occurs, the CPU looks up the corresponding address in the table and transfers control there.

After the interrupt has been serviced, the CPU must resume the interrupted task, so the current state (such as the registers and program counter) must be saved before jumping to the interrupt routine and restored afterward.
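
The dispatch through an interrupt vector can be pictured as an array of function pointers indexed by interrupt number. The sketch below is only an illustration of the idea, not real kernel code; the vector numbers, handler names, and the NUM_VECTORS size are made up for the example.

```c
#include <stdio.h>

#define NUM_VECTORS 256                 /* hypothetical table size */

typedef void (*isr_t)(void);            /* an interrupt service routine */

static void timer_isr(void)    { puts("timer tick handled"); }
static void keyboard_isr(void) { puts("keyboard input handled"); }
static void default_isr(void)  { puts("unexpected interrupt ignored"); }

static isr_t interrupt_vector[NUM_VECTORS];

/* What the hardware conceptually does when interrupt `num` arrives:
 * save the current state, look up the routine, and jump to it. */
static void dispatch_interrupt(int num)
{
    isr_t handler = interrupt_vector[num];
    handler();                          /* transfer control to the ISR */
    /* ... restore the saved state and resume the interrupted task ... */
}

int main(void)
{
    for (int i = 0; i < NUM_VECTORS; i++)
        interrupt_vector[i] = default_isr;
    interrupt_vector[0x20] = timer_isr;     /* made-up vector numbers */
    interrupt_vector[0x21] = keyboard_isr;

    dispatch_interrupt(0x21);               /* simulate a keyboard interrupt */
    return 0;
}
```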

1.4 DMA

Interrupt-driven I/O processing operates as follows. When an I/O operation begins, the device driver loads a value into the device controller's register. The device controller then checks this register, performs the operation, and generates an interrupt for the CPU. The CPU receives the interrupt, calls the interrupt handler, which identifies the cause of the interrupt and calls the appropriate service routine. Once that service routine is complete, the CPU resumes the interrupted work.

However, this method is expensive for bulk data movement, because the CPU must copy the data between the controller's local buffer and memory one word at a time. An alternative is Direct Memory Access (DMA), in which, after the CPU sets up the transfer, the device controller moves an entire block of data directly to or from main memory without involving the CPU.

Thus only one interrupt is generated per block of data, to tell the CPU that the transfer between the device controller and main memory is complete, rather than one interrupt per byte. In the meantime, the CPU is free to perform other work.
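
Conceptually, the CPU only programs the DMA controller with a source, a destination, and a byte count, then continues with other work until the completion interrupt arrives. The register layout below is entirely hypothetical and exists only to illustrate that handshake.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped DMA controller registers. */
struct dma_controller {
    volatile uint64_t source;       /* device buffer address             */
    volatile uint64_t destination;  /* main-memory address               */
    volatile uint32_t byte_count;   /* size of the block to transfer     */
    volatile uint32_t command;      /* writing START begins the transfer */
};

#define DMA_CMD_START 0x1

/* Driver side: set up one block transfer and return immediately.
 * The controller moves the data without involving the CPU and raises
 * a single interrupt when the whole block has been copied. */
static void dma_start_transfer(struct dma_controller *dma,
                               uint64_t dev_buf, void *mem_buf, size_t len)
{
    dma->source      = dev_buf;
    dma->destination = (uint64_t)(uintptr_t)mem_buf;
    dma->byte_count  = (uint32_t)len;
    dma->command     = DMA_CMD_START;   /* CPU is now free to do other work */
}

/* Called from the interrupt handler when the controller signals completion. */
static void dma_complete_isr(void)
{
    /* wake up the process waiting for this I/O, reuse the buffer, etc. */
}

int main(void)
{
    struct dma_controller fake = {0};   /* stand-in for real hardware */
    char buffer[4096];

    dma_start_transfer(&fake, 0x1000, buffer, sizeof buffer);
    dma_complete_isr();                 /* pretend the completion interrupt arrived */
    return 0;
}
```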

1.5 Computer System Architecture

Single Processor

A single processor system contains only one CPU, and that CPU has a single processing core. The core is a component that executes instructions and includes registers for local data storage.

Multi-Processor

A multi-processor system has two or more processors, each with one or more cores. The processors share the bus, main memory, and peripheral devices. Adding processors increases throughput.

However, using N processors does not increase throughput by a factor of N: there is overhead in keeping all the processors working correctly, and they contend for shared resources.

Multi-Programming

Multi-programming runs multiple programs at (apparently) the same time. Several processes are kept in memory and organized so that the CPU always has something to execute: when one program becomes idle (for example, waiting for I/O), another runs, keeping the CPU continuously utilized.

Multi-tasking, also known as time-sharing, involves switching between processes to execute them. Frequent switching provides fast response times to users. When multiple processes are ready for execution, the system uses scheduling to select the next process to run. Additionally, the use of virtual memory allows the execution of processes that are partially loaded into memory, enabling processes larger than physical memory.

1.6 Ensuring Operating System Safety

The operating system must prevent incorrect or malicious programs from improperly executing other programs. To achieve this, it offers functionalities such as:

  • Dual Mode
  • I/O Protection
  • Memory Protection
  • Timer

Dual Mode

The operating system requires two distinct modes of operation: user mode and kernel mode. This is represented by a mode bit where 0 indicates kernel mode and 1 indicates user mode, allowing differentiation between tasks executed on behalf of the operating system and those executed for users.

Certain instructions are designed to execute only in kernel mode; these are called privileged instructions. If such an instruction is attempted in user mode, the hardware treats it as illegal and raises an exception (a trap to the operating system). Users can request such tasks from the operating system through system calls.
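
A toy way to picture the mode bit: the (entirely hypothetical) kernel sketch below refuses to run a privileged operation unless the mode bit says kernel mode, and a user program has to go through a system-call entry point that switches the mode first.

```c
#include <stdio.h>

enum mode { KERNEL_MODE = 0, USER_MODE = 1 };  /* 0 = kernel, 1 = user */

static enum mode mode_bit = USER_MODE;

/* A privileged operation: legal only in kernel mode. */
static void set_timer(int ticks)
{
    if (mode_bit != KERNEL_MODE) {
        puts("illegal instruction: trap to the operating system");
        return;
    }
    printf("timer set to %d ticks\n", ticks);
}

/* System-call entry: the hardware switches to kernel mode, the kernel does
 * the work on the user's behalf, then control returns in user mode. */
static void syscall_set_timer(int ticks)
{
    mode_bit = KERNEL_MODE;
    set_timer(ticks);
    mode_bit = USER_MODE;
}

int main(void)
{
    set_timer(100);          /* direct attempt from user mode: rejected */
    syscall_set_timer(100);  /* via a system call: allowed */
    return 0;
}
```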

I/O Protection

All I/O instructions are privileged and can be executed only in kernel mode, so user programs must request I/O from the operating system through system calls.

Memory Protection

The interrupt vector (a table storing addresses of tasks for each interrupt) and the memory locations of interrupt service routines must be protected. Consequently, the operating system uses two registers to determine the memory range accessible by a program.

The base register stores the smallest legal physical memory address a program may access, while the limit register stores the size of the accessible range. Any access outside [base, base + limit) is a protection violation.
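
The base/limit check can be written down directly: an address generated by a user program is legal only if it falls in [base, base + limit). This is a minimal sketch with arbitrary example values; real hardware performs this comparison on every memory access and traps to the operating system on a violation.

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Values loaded into the (privileged) base and limit registers. */
static uint64_t base_register  = 300040;  /* smallest legal address  */
static uint64_t limit_register = 120900;  /* size of the legal range */

/* Check every address a user program generates. */
static bool address_is_legal(uint64_t addr)
{
    return addr >= base_register && addr < base_register + limit_register;
}

int main(void)
{
    uint64_t probes[] = { 300040, 420939, 420940, 100 };
    for (size_t i = 0; i < sizeof probes / sizeof probes[0]; i++) {
        if (address_is_legal(probes[i]))
            printf("%llu: access allowed\n", (unsigned long long)probes[i]);
        else
            printf("%llu: protection violation, trap to the OS\n",
                   (unsigned long long)probes[i]);
    }
    return 0;
}
```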

Timer

The timer ensures that the operating system retains control of the CPU: it can be set to interrupt the computer after a specified period, either as a fixed interval or as a programmable delay.

The operating system timer is set to a specific value, which counts down with each clock tick. Once this value reaches zero, an interrupt is raised. The instructions to modify this timer value are privileged and can only be executed in kernel mode.
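
A minimal sketch of the countdown: on every clock tick the counter is decremented, and when it reaches zero a timer interrupt is raised so the operating system regains control (for example, to switch to another process). The tick loop and counter value here are illustrative only.

```c
#include <stdio.h>

static int timer_counter;          /* modifiable only by privileged code */

/* Privileged operation: load the timer before handing the CPU to a user program. */
static void load_timer(int ticks)
{
    timer_counter = ticks;
}

/* Invoked by the hardware clock on every tick. */
static void clock_tick(void)
{
    if (--timer_counter == 0)
        puts("timer interrupt: control returns to the operating system");
}

int main(void)
{
    load_timer(5);                 /* let the user program run for 5 ticks */
    for (int tick = 0; tick < 5; tick++)
        clock_tick();              /* ...the user program runs between ticks... */
    return 0;
}
```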

1.7 System Calls

A system call is the method through which a program requests the operating system's services. Users can utilize operating system functions through system calls.

Each system call is generally associated with a specific number, and a system call table maintains the mapping from numbers to routines. Programmers normally invoke system calls indirectly through an API, without needing to understand their internal implementation.
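
On Linux, for example, the C library wraps each numbered system call behind an ordinary function, and the generic syscall() wrapper makes the numbering visible; both calls below write the same bytes to standard output. (This is a Linux-specific illustration; other systems use different numbers and APIs.)

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void)
{
    const char *msg = "hello via system call\n";
    ssize_t n;

    /* The usual route: the API (here, POSIX write()) hides the number. */
    n = write(STDOUT_FILENO, msg, strlen(msg));

    /* The same request made by its number through the generic wrapper. */
    n = syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

    (void)n;
    return 0;
}
```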

When passing parameters to system calls, there are three common methods (the block method is sketched after this list):

  1. Passing through registers: Parameters are passed directly in registers. This is the simplest approach, but the number of available registers may be insufficient.
  2. Passing through a block: Parameters are stored in a block or table in memory, and the address of that block is passed in a register.
  3. Passing via the stack: Parameters are pushed onto the stack by the program, and the operating system pops them off.
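
A sketch of the block method under made-up names: the parameters are packed into a struct in memory, and only the struct's address is handed to a hypothetical kernel entry point, so one register suffices no matter how many parameters there are.

```c
#include <stdio.h>
#include <stddef.h>

/* Parameters for a hypothetical "read" call, packed into one block. */
struct read_params {
    int    fd;        /* which open file        */
    void  *buffer;    /* where to put the data  */
    size_t length;    /* how many bytes to read */
};

/* Stand-in for the kernel's entry point: it receives only the block's
 * address (which fits in a single register) and unpacks the rest. */
static long kernel_entry_read(const struct read_params *p)
{
    printf("read %zu bytes from fd %d into %p\n", p->length, p->fd, p->buffer);
    return 0;
}

int main(void)
{
    char buf[128];
    struct read_params params = { 0, buf, sizeof buf };  /* fill the block   */
    kernel_entry_read(&params);                          /* pass its address */
    return 0;
}
```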

Types of System Calls

  • Process Control
  • File Management
  • Device Management
  • Information Maintenance
  • Communication
  • Protection (resource authorization management, etc.)

1.8 System Booting

System booting begins with loading the kernel.

When the system power is turned on, a program in non-volatile memory known as the bootstrap program is executed. This program loads the kernel into memory and executes it.

Sometimes, the bootstrap loader may load a more complex boot program from disk, which in turn loads the kernel.