CPS 356 Lecture notes: Paging and Segmentation
Coverage: [OSCJ] §§8.4-8.8 (pp. 364-387)
Paging
- permits the physical address space of a process to be noncontiguous
- whole process in main memory, but does not have to be contiguous
- split physical memory into fixed-sized blocks called frames
- split logical memory into blocks of same size called pages
- last page of process may not occupy an entire frame (i.e.,
some internal fragmentation)
- frame and page size are an efficiency issue;
page size is between 512 bytes and 16 MB depending on the computer
- paging increases context-switch time, but the
  space overhead of the page table decreases as the page size increases
logical address is a pair: (page number p, page offset d);
p indexes into the page table (which resides in the PCB).
the page table contains the base address in physical memory of the frame holding each page
p must be in range of the pages
d must be less than the page size
physical address = frame_number * page_size + offset
For instance, with a 1,024-byte page size, the logical address (2, 325)
refers to byte 325 of page 2; if page 2 resides in frame 5, the
physical address is 5 * 1024 + 325 = 5445.
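The translation above can be sketched as a small function; the page size and the page-table contents here are made-up example values, not from any particular system.

```python
# A minimal sketch of paged address translation.
# PAGE_SIZE and page_table hold hypothetical example values.
PAGE_SIZE = 1024
page_table = [6, 3, 5, 0]  # page_table[p] = frame number for page p

def translate(p, d):
    """Map logical address (page p, offset d) to a physical address."""
    assert p < len(page_table), "p must be in range of the pages"
    assert d < PAGE_SIZE, "d must be less than the page size"
    return page_table[p] * PAGE_SIZE + d

print(translate(2, 325))  # page 2 is in frame 5 -> 5*1024 + 325 = 5445
```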
Does paging increase context-switch overhead? If so, why?
Overhead of page table decreases as the page size increases.
Use registers if the page table is small (< 256 entries).
However, on most contemporary computers the page table
may be very large (> 1 million entries).
Keep it in main memory and have one register (the page-table base
register) point to it, but then every memory reference requires two
memory accesses: one for the page-table entry and one for the datum.
Solution: cache (translation look-aside buffer or TLB)
memory access = 100 ns
TLB search = 20 ns
hit ratio = 80%
access time on a TLB hit = 20 + 100 = 120 ns
access time on a TLB miss = 20 + 100 + 100 = 220 ns
effective access time = (0.8)(120) + (0.2)(220)
                      = 96 + 44 = 140 ns
a 40% slowdown compared to an unmapped 100 ns memory access.
How does performance improve with a 98% hit ratio?
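The effective-access-time arithmetic can be sketched as a short function using the figures above (100 ns memory access, 20 ns TLB search); evaluating it at 98% answers the question directly.

```python
# Sketch: effective access time (EAT) with a TLB, using the
# 100 ns memory / 20 ns TLB figures from the notes above.
MEM, TLB = 100, 20

def eat(hit_pct):
    """Effective access time given a TLB hit ratio in percent."""
    hit = TLB + MEM          # TLB hit: TLB search + one memory access
    miss = TLB + 2 * MEM     # TLB miss: extra access for the page table
    return (hit_pct * hit + (100 - hit_pct) * miss) / 100

print(eat(80))  # 140.0 ns -> a 40% slowdown vs. a bare 100 ns access
print(eat(98))  # 122.0 ns -> only a 22% slowdown
```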
Page Table Structure
- main idea: page the page table itself
- hierarchical paging, also called a forward-mapped page table
  (generally considered inappropriate for 64-bit address spaces:
  too many levels of indirection)
- hashed page tables
- inverted page tables
- decreases the amount of memory required to store the page table, but
- increases the time needed to search the table on each memory reference
- what about shared memory? (hard to support: each frame maps to
  exactly one virtual page)
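An inverted page table can be sketched as one entry per physical frame holding a (pid, page) pair, with the frame number recovered as the index of the matching entry; all pids, pages, and table contents below are made-up example values.

```python
# Sketch of an inverted page table: one entry per physical frame.
# Table size scales with physical memory, not with the number of
# processes, but every reference requires a search.
inverted = [(1, 0), (1, 4), (2, 0), (2, 1)]  # inverted[frame] = (pid, page)

def lookup(pid, page):
    """Return the frame holding (pid, page), or raise on a page fault."""
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame
    raise KeyError("page fault: no frame holds this page")

print(lookup(2, 1))  # entry at index 3 matches -> frame 3
```

A hashed page table replaces this linear scan with a hash on the page number, trading search time for the hash structure's overhead.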
Segmentation
- paging separates (and even blurs)
  the user's view of memory from physical memory
- segmentation is a memory-management scheme that supports
  the user view of memory
- compiler automatically constructs segments reflecting the
  structure of the input program (e.g., code, global variables, heap, stacks)
- loader assigns these segments segment numbers
- logical address is a pair: (segment number, offset)
- segment table: an array of (segment base, segment limit) pairs
- e.g., if segment 2 has base 4300, then (2, 53) → 4300 + 53 = 4353
- eliminates internal fragmentation
- external fragmentation is possible
- an offset at or beyond the segment limit triggers a trap: a segmentation fault
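Segment-table translation with a limit check can be sketched as follows; segment 2's base of 4300 comes from the example above, while the other bases and all limits are arbitrary example values.

```python
# Sketch of segment-table translation with a limit check.
# Each entry is (base, limit); values are hypothetical except that
# segment 2 has base 4300, matching the example in the notes.
segment_table = [(1400, 1000), (6300, 400), (4300, 400)]

def translate(seg, offset):
    """Map (segment number, offset) to a physical address."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset exceeds limit")
    return base + offset

print(translate(2, 53))  # 4300 + 53 = 4353
```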
Memory Management Techniques:
- single contiguous
- fixed (static) partitions
- relocation (dynamic) partitions
- paged segmentation
- demand paging
- segmentation with demand paging
The Intel Pentium uses pure segmentation or segmentation with paging.
Memory Management Summary
- goal: a high degree of multiprogramming (the most efficient use of
  main memory)
- with a fixed memory size, how can we increase the degree of
  multiprogramming?
- several ways, all under the umbrella of
  packing as many processes as possible into main memory:
- dynamic loading
- dynamic linking
- sharing code
- virtual memory
- memory management schemes range from simple single-user
  approaches to complex multiprogramming schemes such as paged
  segmentation
- the most important factor is the hardware support available. why?
- a simple base or base/limit register pair is sufficient for
single and multiple partition schemes, but paging/segmentation
require mapping tables
- as the memory management scheme becomes more complex, the
  time required to translate a logical address to a physical address
  increases; with paging, each translation adds a page-table lookup
  in main memory (mitigated by a TLB)
- to maximize memory use, we must reduce memory waste, i.e., fragmentation
- fixed-size partitions (and paging, to a small extent)
  suffer from internal fragmentation
- variable-sized partitions and segmentation suffer from external
  fragmentation
- OS must also provide protection so that processes do not access
data outside of their region in memory
- swapping: part of the ready queue can reside in secondary storage;
  this allows more processes to run than can fit into main memory
[OSCJ] A. Silberschatz, P.B. Galvin, and G. Gagne.
Operating System Concepts with Java.
John Wiley and Sons, Inc., Seventh edition, 2007.