CPS 356 Lecture notes: Memory Management

Coverage: [OSCJ] Chapter 8, §§8.1-8.3 (pp. 349-364)

Memory Management

Memory, in these notes, almost always means main memory (MM).

Memory is an array of bytes or words.
Main memory is a resource which must be allocated and deallocated.

Memory management techniques determine
  • where and how a process resides in memory
  • how addressing is performed
I/O is the biggest bottleneck of any computer system. Why?

More memory makes your programs run faster. Why?

What is the purpose of a computer system?
To execute programs.

These programs, with the data they access, must reside in memory (at least partially) to execute.

To improve CPU utilization and response time to users, the computer must keep several processes in memory.


There are various schemes to determine how and when to bring processes into and out of MM.

Selection of a particular scheme depends on several factors, but especially on the hardware design.

Each of these schemes requires some hardware support.

We will study various techniques (some are historical, but are presented for comparison purposes).

For each technique, we will study:
  • algorithms
  • advantages and disadvantages
  • speed requirements

Single Contiguous

   while (there is a new job ready)
      if (the job size is <= total memory size)
         allocate memory
         load and execute job
         deallocate memory
Advantage: simplicity

Disadvantages:

  • CPU wasted (no multiprogramming)
  • MM not fully used (internal fragmentation)
  • job size limited to the size of MM
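The allocation loop above can be sketched as a short simulation (a hedged illustration: `MEMORY_SIZE`, the job list, and the return values are assumptions for the sketch, not from the text):

```python
MEMORY_SIZE = 100   # total MM size in blocks (hypothetical unit)

def run_jobs(jobs):
    """Single contiguous allocation: at most one job in memory at a time."""
    results = []
    for name, size in jobs:
        if size <= MEMORY_SIZE:              # job fits: allocate, load, execute
            wasted = MEMORY_SIZE - size      # MM left unused while the job runs
            results.append((name, "ran", wasted))
        else:                                # job size limited to the size of MM
            results.append((name, "rejected", None))
    return results

print(run_jobs([("j1", 60), ("j2", 120)]))
# → [('j1', 'ran', 40), ('j2', 'rejected', None)]
```

Note how both disadvantages show up directly: the 40 unused blocks while j1 runs, and the outright rejection of j2.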
Before we discuss other techniques we must cover some fundamentals of the mechanics of using memory.


  • only the CPU can directly access registers (1 cycle of the CPU clock) and main memory (which can take several cycles of the CPU clock, during which the processor is stalled)
  • remedy to accommodate this speed differential?
Base and limit registers, and protection of memory space
  • we must ensure that a process does not access memory space dedicated to another process
  • example program
  • when kernel is executing, it has unrestricted access to both the operating system and the user memory
  • Fig. 8.1
  • Fig. 8.2
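A minimal sketch of the base/limit protection check performed (in hardware) on every user-mode memory reference; the register values in the usage line are hypothetical:

```python
def check_access(addr, base, limit):
    """Protection check: legal addresses lie in [base, base + limit);
    anything outside that range traps to the operating system."""
    if base <= addr < base + limit:
        return True                # access proceeds to memory
    raise MemoryError(f"trap to OS: address {addr} outside partition")

# hypothetical partition: base = 300040, limit = 120900
print(check_access(300040 + 500, 300040, 120900))   # → True
```

The kernel, by contrast, runs with this check disabled, which is why it has unrestricted access to both OS and user memory.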

Address Binding

Fig. 8.3 (compare to my figure)
  • binding time: when variables or instructions are bound to a physical memory location
  • addresses in a user program are typically symbolic (e.g., i)
  • compiler typically binds these symbolic addresses to relocatable addresses (e.g., 14 bytes from the beginning of this module)
  • the loader binds these relocatable addresses to absolute addresses (e.g., 74014)
  • binding is just a mapping from one address space to another
  • binding of instructions or data to memory addresses can be done at
    • compile time: compiler generates absolute code (e.g., MS-DOS .com format programs); if addresses change, need to re-compile (load just copies)
    • load time: compiler generates relocatable code and the final binding is delayed until load-time; if starting address changes, need to re-load the program
    • run time: if process is moved in memory during execution, then binding must be delayed until run-time; special hardware required; loader just copies, but address must be resolved every time used
Memory management is all about the various ways in which these bindings can happen, and what hardware support is necessary to make them work.

logical address = relative address = virtual address
physical address = absolute address

Logical vs. Physical Memory Space

  • the CPU generates logical addresses
  • the address seen by the memory unit or the address loaded into the MAR is a physical address
  • the addresses produced by the MMU, and thus loaded into the MAR, are physical addresses
  • compile-time and load-time bindings generate identical logical and physical addresses
  • run-time binding results in different logical and physical addresses
  • run-time mapping from virtual to physical address is done by the MMU
  • relocation (or base) register holds the address of the beginning of a partition
  • limit (or bounds) register holds the length of the partition
  • Fig. 8.4 (without protection)
  • Fig. 8.6 (with protection)
  • user program and CPU never see physical addresses
  • the final location of a referenced memory address is not determined until the reference is made
  • logical addresses in the range 0 to max
  • physical addresses in the range R+0 to R+max
The concept of a logical address space which is bound to a separate physical address space is fundamental to proper memory management.
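The relocation-register mapping can be sketched as follows (the values R = 14000 and logical address 346 are illustrative):

```python
def mmu_translate(logical, relocation, limit):
    """Run-time binding: the MMU adds the relocation (base) register to
    every logical address the CPU generates; the limit register bounds
    the legal logical range 0..max."""
    if not (0 <= logical < limit):
        raise MemoryError(f"trap: logical address {logical} out of range")
    return relocation + logical      # physical address in R+0 .. R+max

# with relocation register R = 14000, logical address 346 maps to 14346
print(mmu_translate(346, 14000, 10000))   # → 14346
```

The user program only ever deals in the logical addresses 0..max; the physical address exists only on the memory bus.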

Dynamic Loading

  • contrast to scheme above
    • not controlled by the OS; rather responsibility of the programmer
    • no special hardware support required
  • advantages
    • unused routine, such as an error routine, is never loaded
    • although program image may be large, portion that is actually used may be small
  • disadvantage: not controlled by the OS rather responsibility of the programmer

Dynamic Linking and Shared Libraries

  • concept of a stub (requires OS support, why?)
  • stub checks to see if the needed routine is in memory
  • stub replaces itself with the address of the routine and starts executing it
  • advantages
    • all processes that use a language library execute only one copy of the library code in memory
    • bug fixes
    • updates
    • without this support, such programs would need to be re-linked to gain access to the new library
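One way to sketch the stub mechanism (the `Linkage` class and `_bring_into_memory` helper are hypothetical stand-ins; a real stub patches a linkage-table entry with the routine's machine address):

```python
import math

def _bring_into_memory(name):
    """Stand-in for locating/mapping the shared library routine."""
    return getattr(math, name)

class Linkage:
    """Per-process linkage table: each entry starts out as a stub."""
    def __init__(self, names):
        self.table = {n: self._make_stub(n) for n in names}

    def _make_stub(self, name):
        def stub(*args):
            # first call: check whether the routine is in memory, load it,
            # then replace the stub so later calls go directly to the routine
            routine = _bring_into_memory(name)
            self.table[name] = routine
            return routine(*args)
        return stub

    def call(self, name, *args):
        return self.table[name](*args)

lk = Linkage(["sqrt"])
print(lk.call("sqrt", 25.0))   # → 5.0; the stub runs once and self-replaces
print(lk.call("sqrt", 36.0))   # → 6.0; now a direct call to the routine
```

Because only the linkage table is per-process, all processes can share the single resident copy of the library code.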


Summary

  • overview of hardware and relevant registers
  • address binding
  • logical vs. physical address space
  • dynamic loading
  • dynamic linking and shared libraries


Swapping (self-study): normally once a process is loaded into memory, it remains there until it terminates; with swapping, that is not true. Fig. 8.5.

Contiguous Memory Allocation

  • fixed
  • variable
Fixed (Static) Partitions
  • divide memory up into a fixed number of partitions
  • the partitions need not be the same size, but are fixed (static) at boot time
  • a job is put into a partition which is large enough to hold it
        Main Memory
    |                  |
    |                  |
    |                  |
    +------------------+   ← partition boundary
    |                  |
    |                  |
    |                  |
    +------------------+   ← partition boundary
    |                  |
    |                  |
    |                  |
    +------------------+   ← partition boundary
    |                  |
    |                  |
    |                  |
  • Advantage: first attempt at multiprogramming
  • Disadvantages
    • job size is limited to largest partition size
    • degree of multiprogramming limited by number of partitions
    • memory is wasted in partition (internal fragmentation)
    • a ready job might wait even if there is enough memory available (external fragmentation)
      • wait
      • defragment → compaction (can be expensive; only possible if addresses are relocated dynamically, i.e., physical addresses are bound at run time)
      • break up process so that logical address space is not contiguous
        • paging
        • segmentation
    • must translate relative address to physical address
    • must protect memory space (from other user processes and OS)
    • must determine partition for a job
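A sketch of determining the partition for a job, assuming (one common choice, not mandated by the notes) that the smallest adequate free partition is selected; the leftover space is the internal fragmentation:

```python
def place(job_size, free):
    """Fixed partitions: put the job into the smallest free partition
    that can hold it; the unused remainder is internal fragmentation."""
    fits = [(size, i) for i, size in free if size >= job_size]
    if not fits:
        return None            # job waits (or never runs if > largest partition)
    size, i = min(fits)        # smallest adequate partition
    return i, size - job_size  # (partition index, internal fragmentation)

free = list(enumerate([20, 40, 60]))   # sizes fixed (static) at boot time
print(place(30, free))   # → (1, 10): the 40-block partition, 10 blocks wasted
```

The 10 wasted blocks inside partition 1 are exactly the internal fragmentation the disadvantage list refers to.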
Variable (Dynamic) Partitions
  • no initial partitions are created; memory is one big free partition
  • as a new job is loaded, a partition is created big enough to store that job
  • the new partition is carved out of an existing free partition
  • this leaves a smaller free partition
  • allocating a partition: form a partition from a (the first?) free partition of ample size
  • deallocating a partition:
    • return the partition to the free partition table
    • merge with the other free partitions when possible
  • if no free hole is large enough to satisfy the memory requirements of the next job on the job queue, we either
    • wait, or
    • skip down the queue to find a job for which the hole is large enough
  • if a hole is too big, it can be split into two parts
  • if a new hole is adjacent to other holes, these holes can be merged
  • implementation requires a free block table
        Main Memory
    |                  |
    |    occupied      |
    |                  |
    |       free       |
    |                  |
    +------------------+
    |                  |
    |       free       |
    |                  |
    |                  |
    |    occupied      |
    |                  |
    |    occupied      |
    |                  |
    |                  |
    |                  |
    |       free       |
    |                  |
    |                  |
    |                  |
    |    occupied      |
  • we must have a particular technique (first, best, or worst) for selecting a free partition
  • this procedure is a particular instance of the general dynamic storage allocation problem
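The allocate/deallocate procedure above can be sketched with a free table of `(start, length)` holes kept sorted by location (first fit is assumed here; the job stream replayed at the bottom is the one from the example below):

```python
def allocate(free, size):
    """First-fit allocation from the free-partition table; an
    ample hole is split, leaving a smaller free partition."""
    for i, (start, length) in enumerate(free):
        if length >= size:
            if length == size:
                free.pop(i)                              # exact fit: hole disappears
            else:
                free[i] = (start + size, length - size)  # split the hole
            return start
    return None                                          # no hole big enough: job waits

def deallocate(free, start, size):
    """Return a partition to the free table, merging adjacent holes."""
    free.append((start, size))
    free.sort()                          # keep the table sorted by location
    merged = [free[0]]
    for s, l in free[1:]:
        ps, pl = merged[-1]
        if ps + pl == s:                 # new hole starts where previous ends
            merged[-1] = (ps, pl + l)    # coalesce into one bigger hole
        else:
            merged.append((s, l))
    free[:] = merged

# the example job stream from the notes, with 100 blocks of memory:
free = [(0, 100)]
for size in (22, 24, 30, 10):            # jobs 1-4 arrive
    allocate(free, size)
deallocate(free, 0, 22)                  # job 1 terminates
deallocate(free, 46, 30)                 # job 3 terminates
print(free)                              # → [(0, 22), (46, 30), (86, 14)]
```

The three holes printed at the end are exactly the 22-, 30-, and 14-block free partitions shown in the example diagram below.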

Partition Selection Algorithms

  • sorting the free block table in a particular manner results in a specific selection algorithm:
    • first fit: sort by location
    • best fit: sort by size (ascending); smallest available partition which will hold the job is selected; produces smallest leftover hole
    • worst fit: sort by size (descending); largest available partition which will hold the job is selected;
      • produces largest leftover hole
      • most efficient in terms of programming; takes the least amount of effort (work)
      • some argue that on random job streams this algorithm performs best (but see the simulation results below)
  • simulations have shown that both first fit and best fit perform better than worst fit in terms of both time and storage utilization
  • neither first fit nor best fit clearly outperforms the other in terms of storage utilization, but first fit is generally faster
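The three policies reduce to sorting the free table differently, as the notes say. A sketch (the hole list is taken from the state just before job 5 arrives in the example below):

```python
def select(free, size, policy):
    """Pick a hole for a `size`-block job by sorting the free table of
    (start, length) entries per the policy, then taking the first that fits."""
    order = {
        "first": lambda hole: hole[0],    # by location
        "best":  lambda hole: hole[1],    # by size, ascending
        "worst": lambda hole: -hole[1],   # by size, descending
    }[policy]
    for start, length in sorted(free, key=order):
        if length >= size:
            return start
    return None                           # no hole fits: job waits

holes = [(0, 22), (46, 30), (86, 14)]     # free partitions before job 5
print(select(holes, 12, "first"))   # → 0   (first adequate hole by location)
print(select(holes, 12, "best"))    # → 86  (smallest hole that fits)
print(select(holes, 12, "worst"))   # → 46  (largest hole)
```

The three answers correspond to the three arrows in the example diagram below.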
Example: consider the following job stream with 100 blocks of total memory:
  • job 1 arrives (requiring 22 blocks)
  • job 2 arrives (requiring 24 blocks)
  • job 3 arrives (requiring 30 blocks)
  • job 4 arrives (requiring 10 blocks)
  • job 1 terminates
  • job 3 terminates
  • job 5 arrives (requiring 12 blocks)
No matter which partition selection algorithm is used, the situation is the same until job 5 arrives:

    Main Memory
a +------------------+ 0
  |                  |
  |  22 free blocks  |     ← first fit
  |                  |
b +------------------+ 22
  |                  |
  | job 2 (24 blocks)|
  |                  |
c +------------------+ 46
  |                  |
  |  30 free blocks  |     ← worst fit
  |                  |
d +------------------+ 76
  |                  |
  | job 4 (10 blocks)|
  |                  |
e +------------------+ 86
  |                  |
  |  14 free blocks  |     ← best fit
  |                  |
  +------------------+ 100

Free Partition Tables:

First Fit (sorted by location):

    start  size
      0     22
     46     30
     86     14

Best Fit (sorted by size, ascending):

    start  size
     86     14
      0     22
     46     30

Worst Fit (sorted by size, descending):

    start  size
     46     30
      0     22
     86     14

  • what are the advantages and disadvantages of the selection?
    • disadvantages of first- and best-fit: external fragmentation
    • statistical analysis of first fit shows that roughly one third of memory may be unusable due to external fragmentation (the 50-percent rule)
    • worst fit tends to work out better statistically

Memory hierarchy

(ref. [OSIDP6] Fig. 1.14 on p. 27; image courtesy [OSIDP6] webpage)

As we go down the hierarchy,
  • storage size increases,
  • access time increases,
  • cost per bit decreases,
  • persistence increases, and
  • frequency of access by the processor decreases

Locality of Reference Principle

(ref. [OSIDP6] Fig. 1.15 on p. 28; image courtesy [OSIDP6] webpage)


Overlays

(mostly historical at this point)
  • programs are sectioned into modules
  • not all modules need to be in main memory at the same time
      |             |
      B             E
     / \
    C   D
  • programmer specifies which modules can overlay each other
  • linker inserts commands to invoke the loader when the modules are referenced
  • advantages
    • reduced memory usage
    • a job which is bigger than memory now can run
  • disadvantages
    • overlay map must be specified by the programmer
    • programmer must know memory requirements
    • the programmer must strive for completely disjoint overlapped modules
    • internal fragmentation
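A toy sketch of the overlay idea, using modules C and D from the diagram (the module names, the single overlay region, and the `invoke` glue are illustrative; real overlay support copies machine code over the old module's image):

```python
OVERLAY_REGION = {"resident": None}   # one memory region shared by the overlays

def invoke(name):
    """Glue the linker inserts before a call into an overlaid module:
    if the module is not resident, the loader copies it over whatever
    currently occupies the overlay region, then control transfers to it."""
    if OVERLAY_REGION["resident"] != name:
        OVERLAY_REGION["resident"] = name   # old module's image is overwritten
    return f"running {name}"

invoke("C")        # the region now holds C
invoke("D")        # D overlays C: only one of the two is ever resident
```

This is how a job larger than memory can still run, at the cost of the programmer maintaining the overlay map by hand.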


    [OSCJ] A. Silberschatz, P.B. Galvin, and G. Gagne. Operating System Concepts with Java. John Wiley and Sons, Inc., Seventh edition, 2007.
    [OSIDP6] W. Stallings. Operating Systems: Internals and Design Principles. Prentice Hall, Sixth edition, 2008.
