
Top 10 List of Week 04

  1. Memory
    Memory consists of a large array of bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter. Main memory and the registers built into each processing core are the only general-purpose storage that the CPU can access directly. Registers built into each CPU core are generally accessible within one cycle of the CPU clock, which makes them very fast. Main memory, by contrast, is accessed via a transaction on the memory bus, and completing a memory access may take many cycles of the CPU clock.

  2. Address Binding
    Address binding is the process of mapping from one address space to another. Address binding can be done at three different times:
    • Compile time (if it is known at compile time where the process will reside in memory, absolute addresses can be generated)
    • Load time (if it is not known at compile time where the process will reside, relocatable addresses are generated; the loader translates them into absolute addresses at load time)
    • Execution time (binding is delayed until run time, which is required if the process can be moved from one memory segment to another during its execution)
  3. Logical vs Physical Address
    An address generated by the CPU is commonly referred to as a logical address (or virtual address), whereas an address seen by the memory unit, the one loaded into the memory-address register of the memory, is referred to as a physical address. The set of all logical addresses generated by a program is a logical address space; the set of all physical addresses corresponding to these logical addresses is a physical address space. Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU); a minimal translation sketch follows below.
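    The sketch below models the simplest MMU scheme: a relocation (base) register added to every logical address, with a limit register for protection. The struct and function names and the base/limit values are illustrative assumptions, not a real API.

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Simplified MMU model: a relocation (base) register and a limit register. */
    typedef struct {
        uint32_t base;   /* start of the process's partition in physical memory */
        uint32_t limit;  /* size of the process's logical address space */
    } mmu_t;

    /* Translate a logical address; returns false on an addressing error (trap). */
    bool translate(const mmu_t *mmu, uint32_t logical, uint32_t *physical) {
        if (logical >= mmu->limit)
            return false;                 /* outside the logical address space */
        *physical = mmu->base + logical;  /* relocate by the base register */
        return true;
    }

    int main(void) {
        mmu_t mmu = { .base = 0x14000, .limit = 0x4000 };
        uint32_t phys;
        if (translate(&mmu, 0x0346, &phys))
            printf("logical 0x00346 -> physical 0x%05X\n", phys);
        return 0;
    }
    ```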

  4. Dynamic Loading & Linking
    • To obtain better memory-space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. This method is useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines. In such a situation, although the total program size may be large, the portion that is actually used and loaded may be much smaller (see the dlopen sketch after this list).
    • Dynamically linked libraries (DLLs), or shared libraries, are system libraries that are linked to user programs only when the programs are run. Without this facility, each program on a system must include its own copy of its language library, which both increases the size of the executable image and wastes main memory.
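    As a rough illustration of loading a routine only when it is needed, the sketch below uses the POSIX dlopen/dlsym interface to load the math library at run time and look up cos by name. The library file name and build flags are assumptions for a typical Linux system.

    ```c
    /* Build with: cc dynload.c -ldl   (typical Linux; illustrative sketch) */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        /* The math library is loaded only now, not when the program starts. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up the routine by name and call it through a function pointer. */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (cosine)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(handle);
        return 0;
    }
    ```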
  5. Contiguous Memory Allocation
    Main memory has to accommodate both the operating system and the user processes, so we need to allocate it efficiently. In contiguous memory allocation, when a process arrives from the ready queue to be brought into main memory for execution, a contiguous block of memory is allocated to it according to its requirements. To allocate contiguous space to user processes, memory can be divided in either of two ways:
    • Fixed-sized partitions: memory is divided into fixed-sized blocks, and each block contains exactly one process.
    • Variable-sized partitions: the OS maintains a table with information about which parts of memory are occupied and which are available for processes.

  6. A system uses different algorithms to allocate memory from the main memory segment. These algorithms, also known as memory-partitioning algorithms, are broadly categorized as follows:
    • First fit : allocate the first free partition or hole that is large enough to accommodate the process. The search finishes as soon as the first suitable free partition is found.
    • Best fit : allocate the smallest free partition that meets the requirement of the requesting process. This algorithm searches the entire list of free partitions and chooses the smallest hole that is adequate, i.e., the hole closest to the actual size needed.
    • Worst fit : allocate the largest available free portion, so that the leftover part is big enough to be useful. It is the reverse of best fit. Both first fit and best fit are better than worst fit in terms of decreasing time and storage utilization. Neither first fit nor best fit is clearly better than the other in terms of storage utilization, but first fit is generally faster. A small sketch of all three strategies over the variable-partition scheme from item 5 appears below.
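    The sketch below keeps the free holes in a simple array and picks a hole with each strategy. The hole_t structure, the sizes, and the function names are illustrative assumptions; a real allocator would also split the chosen hole and maintain the free list.

    ```c
    #include <stdio.h>
    #include <stddef.h>

    /* A free hole in the variable-partition scheme (sizes in KB, illustrative). */
    typedef struct { size_t start; size_t size; } hole_t;

    /* Return the index of the chosen hole, or -1 if no hole is large enough. */
    int first_fit(const hole_t *h, int n, size_t req) {
        for (int i = 0; i < n; i++)
            if (h[i].size >= req) return i;      /* stop at the first adequate hole */
        return -1;
    }

    int best_fit(const hole_t *h, int n, size_t req) {
        int best = -1;
        for (int i = 0; i < n; i++)              /* scan the entire list */
            if (h[i].size >= req && (best < 0 || h[i].size < h[best].size))
                best = i;                        /* remember the smallest adequate hole */
        return best;
    }

    int worst_fit(const hole_t *h, int n, size_t req) {
        int worst = -1;
        for (int i = 0; i < n; i++)              /* scan the entire list */
            if (h[i].size >= req && (worst < 0 || h[i].size > h[worst].size))
                worst = i;                       /* remember the largest adequate hole */
        return worst;
    }

    int main(void) {
        hole_t holes[] = { {0, 100}, {200, 500}, {800, 200}, {1200, 300} };
        size_t req = 212;
        printf("first fit -> hole %d\n", first_fit(holes, 4, req));
        printf("best  fit -> hole %d\n", best_fit(holes, 4, req));
        printf("worst fit -> hole %d\n", worst_fit(holes, 4, req));
        return 0;
    }
    ```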
  7. Paging
    Paging is a memory-management scheme that eliminates the need for contiguous allocation of physical memory; it permits the physical address space of a process to be non-contiguous. An address generated by the CPU is divided into:
    • Page number (p): the high-order bits that identify the page within the logical address space; used as an index into the page table.
    • Page offset (d): the low-order bits that identify a particular word (byte) within a page; the number of offset bits is determined by the page size.
      A physical address is divided into:
    • Frame number (f): the bits that identify the frame within the physical address space.
    • Frame offset (d): the bits that identify a particular word (byte) within a frame; it equals the page offset, since frames are the same size as pages.
      Steps taken by the MMU to translate a logical address into a physical address (a small sketch follows after this list):
    • Extract p and use it as an index into the page table
    • Read the corresponding frame number f from the page table entry
    • Replace p in the logical address with f; the offset d is unchanged
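    The sketch below carries out those three steps for an assumed 4 KB page size and a small, hypothetical single-level page table held in an array; the table contents and addresses are made up for illustration.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE   4096u          /* assumed page size: 4 KB */
    #define OFFSET_BITS 12u            /* log2(PAGE_SIZE) */

    /* Hypothetical flat page table: page_table[p] holds the frame number f. */
    static uint32_t page_table[16] = { 5, 6, 1, 2 /* remaining entries unused */ };

    uint32_t translate(uint32_t logical) {
        uint32_t p = logical >> OFFSET_BITS;     /* extract page number p */
        uint32_t d = logical & (PAGE_SIZE - 1);  /* extract page offset d */
        uint32_t f = page_table[p];              /* look up frame number f */
        return (f << OFFSET_BITS) | d;           /* replace p with f, keep d */
    }

    int main(void) {
        uint32_t logical = (2u << OFFSET_BITS) | 0x1A4;   /* page 2, offset 0x1A4 */
        printf("logical 0x%05X -> physical 0x%05X\n", logical, translate(logical));
        return 0;
    }
    ```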
  8. Hashed Page Table
    In hashed page tables, the virtual page number in the virtual address is hashed into the hash table. They are used to handle address spaces larger than 32 bits. Each entry in the hash table has a linked list of elements that hash to the same location (to handle collisions, since different page numbers can produce the same hash value). The virtual page number, which serves as the hash key, consists of all the bits of the virtual address that are not part of the page offset. Each element in the hash table has three fields (a small chained-table sketch follows after this list):
    • Virtual page number (the key that was hashed).
    • Value of the mapped page frame.
    • A pointer to the next element in the linked list.
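    The sketch below builds a tiny chained hash table whose entries carry exactly the three fields listed above. The bucket count, the modulo hash function, and the helper names are assumptions for illustration, not how any particular hardware or OS implements it.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define TABLE_SIZE 64u   /* number of hash buckets (illustrative) */

    /* One element in a bucket's chain: the three fields listed above. */
    typedef struct entry {
        uint64_t vpn;          /* virtual page number (the hashed key) */
        uint32_t frame;        /* value of the mapped page frame */
        struct entry *next;    /* next element hashed to the same slot */
    } entry_t;

    static entry_t *table[TABLE_SIZE];

    static unsigned hash_vpn(uint64_t vpn) { return (unsigned)(vpn % TABLE_SIZE); }

    /* Insert a vpn -> frame mapping, chaining on collision. */
    void ht_insert(uint64_t vpn, uint32_t frame) {
        entry_t *e = malloc(sizeof *e);
        e->vpn = vpn;
        e->frame = frame;
        e->next = table[hash_vpn(vpn)];
        table[hash_vpn(vpn)] = e;
    }

    /* Walk the chain until the matching vpn is found; -1 means not mapped. */
    int ht_lookup(uint64_t vpn) {
        for (entry_t *e = table[hash_vpn(vpn)]; e; e = e->next)
            if (e->vpn == vpn) return (int)e->frame;
        return -1;
    }

    int main(void) {
        ht_insert(0x12345, 7);
        ht_insert(0x12345 + TABLE_SIZE, 9);   /* collides with the first entry */
        printf("vpn 0x12345 -> frame %d\n", ht_lookup(0x12345));
        return 0;
    }
    ```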
  9. Inverted Page Table
    An inverted page table contains one entry for every frame of main memory. The number of page-table entries is therefore reduced to the number of frames in physical memory, and a single table represents the paging information of all processes. Each entry contains the following fields (a lookup sketch follows after this list):
    • Page number – the logical page number of the page currently stored in that frame.
    • Process id – An inverted page table contains address-space information for all processes in execution. Since two different processes can use the same virtual addresses, the inverted page table must store the process id of each entry to identify its address space uniquely; a page is identified by the combination of process id and page number. The process id thus acts as an address-space identifier and ensures that a virtual page of a particular process is mapped to the correct physical frame.
    • Control bits – These bits are used to store extra paging-related information. These include the valid bit, dirty bit, reference bits, protection and locking information bits.
    • Chained pointer – Sometimes two or more processes share a part of main memory. In that case two or more logical pages map to the same page-table entry, and a chaining pointer is used to link the details of these logical pages to the entry.
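    The sketch below shows the essential lookup: the table is indexed by frame, each entry records which <process id, page number> occupies that frame, and a match's index is the frame number. The table size, its contents, and the linear search are illustrative assumptions (real systems speed this up with hashing, as in item 8).

    ```c
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_FRAMES 8   /* one entry per physical frame (illustrative) */

    /* One inverted-page-table entry: which process page occupies this frame. */
    typedef struct {
        int      pid;    /* address-space identifier */
        uint32_t page;   /* logical page number within that process */
        int      valid;  /* control bit: is the frame in use? */
    } ipt_entry_t;

    static ipt_entry_t ipt[NUM_FRAMES] = {
        {1, 0, 1}, {2, 0, 1}, {1, 3, 1}, {0, 0, 0},
    };

    /* Search the whole table for <pid, page>; the matching index is the frame. */
    int ipt_lookup(int pid, uint32_t page) {
        for (int frame = 0; frame < NUM_FRAMES; frame++)
            if (ipt[frame].valid && ipt[frame].pid == pid && ipt[frame].page == page)
                return frame;
        return -1;   /* not resident: page fault */
    }

    int main(void) {
        printf("pid 1, page 3 -> frame %d\n", ipt_lookup(1, 3));
        printf("pid 2, page 5 -> frame %d\n", ipt_lookup(2, 5));
        return 0;
    }
    ```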
  10. Memory Swapping
    Memory swapping is a technique that enables an operating system to provide more memory to running applications or processes than is available in physical random access memory (RAM). It works by using virtual memory and secondary storage space to provide additional backing store when required. In short, swapping lets the system keep more (or larger) processes in memory than would otherwise fit in RAM, at the cost of much slower access whenever swapped-out data has to be brought back in.